Houdini 16 Wishlist



something i've wanted for ages: a context variable node (and accompanying expression functions).

 

the context variable node would allow you to add spare parms that you could use as override values for operations UP the chain (i.e., for the items cooking in the "current context", if you think of op ancestors as being akin to subroutines in a programming sense).

 

those parm overrides could be read using an expression akin to the stamp expression, only you wouldn't have to identify the operation that contains the parm you're looking for since it's just looking at the stack.  you'd just need the parm name and a default value for when it can't find the parm on the stack.  when the context op cooks, it sets those parms prior to cooking its inputs and then restores the values after it's done cooking, so you could have multiple overrides in a single chain that even modify each other without getting weird results.
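
roughly, here's a python (hou) sketch of the cook behaviour i'm describing -- hypothetical pseudocode for the wished-for node, not an existing houdini api, and the node paths and parm names are made up:

import hou

def cook_with_overrides(context_node, overrides):
    # push the override values onto the context node's spare parms,
    # cook the chain above it, then restore the previous values.
    saved = {}
    for name, value in overrides.items():
        parm = context_node.parm(name)  # spare parm acting as the "context variable"
        saved[name] = parm.eval()
        parm.set(value)
    try:
        # cooking pulls on the inputs while the overrides are in effect,
        # much like copy stamping does for stamp() expressions.
        context_node.cook(force=True)
    finally:
        for name, value in saved.items():
            context_node.parm(name).set(value)

# two "exit points" reusing the same upstream network with different settings:
cook_with_overrides(hou.node("/obj/geo1/context_hi"), {"detail": 3})
cook_with_overrides(hou.node("/obj/geo1/context_lo"), {"detail": 1})

upstream, the stamp-like expression would just ask the stack for "detail" with a fallback default instead of naming a specific node.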

 

this would allow quickly making your op networks into sort of "macros" where you could have multiple exit points with different settings all utilizing the same meat of your network.

 

the difficulty in implementation is being smart about how you identify the need to recook, but beyond that it'd be pretty easy to code up i think, and it would provide (imo) a ton of flexibility and power that fits perfectly inside of what going procedural is all about.


RMB node menu "Bubble up Parameter":

- quick interface building on pulldown. Does what it says on the tin. Takes the current parameter and links it to a quickly generated parameter of a reasonable name in the containing node. 

 

It's often that I only need one or two parameters bubbled up from a node or two. Sometimes, too, I want the whole sheaf of available parameters, like the VEX context's RMB "Create Input Parameters".
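
Something like this Python (hou) sketch is the behaviour I'm after -- it copies a single-component parm up to a spare parameter on the containing node and channel-references the original to it. The naming scheme and the example paths are just made up for illustration:

import hou

def bubble_up(parm):
    # clone the parm's template onto the containing node as a spare parameter
    node = parm.node()
    parent = node.parent()
    template = parm.parmTemplate().clone()
    template.setName(node.name() + "_" + parm.name())   # avoid name clashes
    template.setLabel(node.name() + " " + template.label())
    ptg = parent.parmTemplateGroup()
    ptg.append(template)
    parent.setParmTemplateGroup(ptg)

    # seed the new parm with the current value, then link the original to it
    new_parm = parent.parm(template.name())
    new_parm.set(parm.eval())
    parm.setExpression('ch("../' + template.name() + '")')

# e.g. bubble a Transform SOP's uniform scale up to the containing geo node
bubble_up(hou.parm("/obj/geo1/transform1/scale"))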


The most important is a SPARSE FIELD (or VDB field) in DOPs.


 

 

It's simply amazing to see this continuously promoted by so-called TDs, yet I have yet to read the advantages you are expecting; the Bifrost tests have shown some memory improvements and a slight performance gain.

 

Edit. Please read:

http://www.openvdb.org/documentation/doxygen/faq.html#sReplaceDense

Edited by tar

It's often that I only need one or two parameters bubbled up from a node or two. Sometimes, too, I want the whole sheaf of available parameters, like the VEX context's RMB "Create Input Parameters".

But isn't this because you're trying to create an HDA? Otherwise, you can just hit Alt+MMB on the parameter to have it promoted to the currently open Operator Type Properties window.


It's simply amazing to see this continuously promoted by so-called TDs, yet I have yet to read the advantages you are expecting; the Bifrost tests have shown some memory improvements and a slight performance gain.

 

Edit. Please read:

http://www.openvdb.org/documentation/doxygen/faq.html#sReplaceDense

 

Does OpenVDB replace dense grids?

This depends a lot on your application of dense grids and your configuration of OpenVDB. Clearly, if you are storing or processing sparse data, OpenVDB will offer a smaller memory footprint and faster (sparse) data processing. However, even in some cases where both data and computation are dense, OpenVDB can offer benefits like improved CPU cache performance due to its underlying blocking and hierarchical tree structure. Exceptions are of course algorithms that expect dense data to be laid out linearly in memory, or applications that use very small grids. The simple truth is only a benchmark comparison can tell you the preferred data structure, but for what it's worth it is our experience that for the volumetric applications we encounter in production, OpenVDB is almost always superior.

 

Sounds like the perfect use case in DOPs to me. Ocean sims. Large pyro.

Any saving of disk space is a no-brainer.

It may not save on small FLIP sims, but then again those aren't voxels.

If you are comparing to CVEX then maybe they aren't as cheap, but then again CVEX isn't the storage format for large sims.

What's the largest sim you have ever saved out in production? Would VDB allow us to create even larger sims? 4K resolution isn't going to make file sizes any smaller anytime soon.


I hear that if you ping support with your ideas and bugs and workflow there's a greater chance you'll be heard. But I think that's just a rumor ;)

But who knows? Castles in the sky, right?

 

********************************************** START EMAIL **********************************************

 

------------

Side Effects Support Ticket: #28967

Hello Kleer001,

I submitted this as #70576.

Cheers,

XXXXXXX

----------------
In response to:
----------------
on all nodes a RMB node menu "Bubble up Parameter": (right below "Parameters")

- quick interface building on pulldown. Does what it says on the tin. Takes the current parameter and links it to a quickly generated parameter of a reasonable name in the containing node.

It's often that I only need one or two parameters bubbled up from a node or two. Sometimes, too, I want the whole sheaf of available parameters, like the VEX context's RMB "Create Input Parameters".

that is all,

have a great day,

********************************************** END EMAIL **********************************************

 

 

RMB node menu "Bubble up Parameter":

- quick interface building on pulldown. Does what it says on the tin. Takes the current parameter and links it to a quickly generated parameter of a reasonable name in the containing node. 

 

It's often that I only need one or two parameters bubbled up from a node or two. Sometimes, too, I want the whole sheaf of available parameters, like the VEX context's RMB "Create Input Parameters".

Edited by kleer001

But isn't this because you're trying to create an HDA? Otherwise, you can just hit Alt+MMB on the parameter to have it promoted to the currently open Operator Type Properties window.

 

But I don't wanna open an Operator Type Properties window. Waaaa! Seriously though, I don't always want to create an HDA. Sure, if I'm visiting that particular workflow over and over I'll work on weaponizing it.

This is more a request for a quick-and-dirty R&D workflow as opposed to an engineering/tool-building workflow.


But I don't wanna open an Operator Type Properties window. Waaaa! Seriously though, I don't always want to create an HDA. Sure, if I'm visiting that particular workflow over and over I'll work on weaponizing it.

This is more a request for a quick-and-dirty R&D workflow as opposed to an engineering/tool-building workflow.

In any of the VOP contexts you can MMB on an input field, like pos. Is that what you mean?

 

As far as the SOP context with subnets goes, you would need to use the Edit Parameter Interface. So there is no equivalent of the above, besides the previously mentioned shortcut and the Type Properties dialog.

 

[attachment: VopMMB.jpg]


Does OpenVDB replace dense grids?

This depends a lot on your application of dense grids and your configuration of OpenVDB. Clearly, if you are storing or processing sparse data, OpenVDB will offer a smaller memory footprint and faster (sparse) data processing. However, even in some cases where both data and computation are dense, OpenVDB can offer benefits like improved CPU cache performance due to its underlying blocking and hierarchical tree structure. Exceptions are of course algorithms that expect dense data to be laid out linearly in memory, or applications that use very small grids. The simple truth is only a benchmark comparison can tell you the preferred data structure, but for what it's worth it is our experience that for the volumetric applications we encounter in production, OpenVDB is almost always superior.

 

Sounds like the perfect use case in DOPs to me. Ocean sims. Large pyro.

Any saving of disk space is a no-brainer.

It may not save on small FLIP sims, but then again those aren't voxels.

If you are comparing to CVEX then maybe they aren't as cheap, but then again CVEX isn't the storage format for large sims.

What's the largest sim you have ever saved out in production? Would VDB allow us to create even larger sims? 4K resolution isn't going to make file sizes any smaller anytime soon.

 

 

it's trivial to convert your volumes to vdb post-solve/pre-write.  if you're just talking about disk space, anyways.  add in some tricks to limit your velocities to only the areas where there's density and you can get pretty good shrinkage over just dumping everything to disk directly.
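
e.g. something like this python (hou) sketch after the sim -- a volume wrangle to kill velocity outside the density, a convert vdb, and a rop geometry to write it out. the node/parm names, paths and the 0.001 threshold are just guesses for illustration:

import hou

geo = hou.node("/obj/pyro_import")            # hypothetical import network
fields = geo.node("import_pyrofields")        # dop import of density/vel/etc

# zero out vel wherever there's no density so it prunes away in the vdb
mask = geo.createNode("volumewrangle", "mask_vel_by_density")
mask.setFirstInput(fields)
mask.parm("snippet").set(
    'if (volumesample(0, "density", v@P) < 0.001) v@vel = {0,0,0};')

# convert the dense houdini volumes to sparse vdb grids
# (assuming the node's default "convert to" target is vdb)
to_vdb = geo.createNode("convertvdb", "to_vdb")
to_vdb.setFirstInput(mask)

# write the pruned result to disk
rop = geo.createNode("rop_geometry", "write_vdb_cache")
rop.setFirstInput(to_vdb)
rop.parm("sopoutput").set("$HIP/geo/pyro_vdb.$F4.bgeo.sc")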


it's trivial to convert your volumes to vdb post-solve/pre-write.  if you're just talking about disk space, anyways.  add in some tricks to limit your velocities to only the areas where there's density and you can get pretty good shrinkage over just dumping everything to disk directly.

In DOPs the computation is done per voxel, and in Houdini you always have to use a box-based voxel space. That is how simulation currently works.

So empty voxels will always be computed and checked, even with an optimized expanding bounding box.

 

In the world of production, simulations are often defined by rivulets in a splash, waves in a fluid sim, expanding branches in a pyro sim, non-directional magic or fire breath, etc. Consequently, sim space is non-square. Even in The Lego Movie or Minecraft-style cubic worlds the sims are not in a box.

 

Why should I always need to deal in box spaces? Why should I have to predict and contain a non-square sim to find the optimum voxel use? There are plenty of papers on this already. Also, not everyone has dozens of terabytes of disk to waste on sim space, or several dozen gigs of RAM to waste on linear sims.

 

Yes, I can extend my caching system to handle another fringe case to save disk space, but why should I have to? By that logic I might as well just work in CVEX: no disk space required, right? We all know that's bad logic.

So I would get to save my .sim disk space and memory, and post-conversion processing time.

 

This is a production-related request based in reality. I know it's hard to convert to VDB-space volumes; it's not a small request. Then again, didn't SideFX just finish redefining their geometry library? They have the best volumetric renderer on the market, so why not make the default out-of-the-box setup even better? I'm sure even if it was VDB-based we could use Voxel Volumes 2.0, just like RenderMan and Mantra.

 

BAHH sorry for the rant Miles.


 

Sounds like the perfect use case in DOPs to me. Ocean sims. Large pyro.

Any saving of disk space is a no-brainer.

It may not save on small FLIP sims, but then again those aren't voxels.

If you are comparing to CVEX then maybe they aren't as cheap, but then again CVEX isn't the storage format for large sims.

What's the largest sim you have ever saved out in production? Would VDB allow us to create even larger sims? 4K resolution isn't going to make file sizes any smaller anytime soon.

 

 

OpenVDB for storage - yes, but in DOPs it fills the domain for processing, nullifying the benefit - FLIP fills velocity and pyro fills all domains. I see it as trying to do image processing on an RLE-compressed image - you need to uncompress it to do the work.


OpenVDB for storage - yes, but in DOPs it fills the domain for processing, nullifying the benefit - FLIP fills velocity and pyro fills all domains. I see it as trying to do image processing on an RLE-compressed image - you need to uncompress it to do the work.

Yes, we need to be in an uncompressed space, but some uncompressed representations are better than others. Should I save a Targa, TIFF, or PNG when all I need is RGBA? I think the web has decided on that one.

 

If FLIP fills velocity and pyro fills all domains already, there is already an inherent problem of expandability. You will reach a limit where the entire bounding-box space cannot be computed, when all you need is a non-uniform space. This is as good a time as any to work on the problem. OpenVDB opens that door; whatever Volume 2.0 turns out to be in the end needs to adapt to this reality of non-square space. Bifrost (Naiad) has already used this in production.

 

Furthermore, we are entering a world of VR soon enough, where this information will need to be saved and stored in 3D space and in real time. Film people may not think this is important now, but notice that stereo hasn't gone away this time. Look at the crappy cover of Time Magazine. Don't get left in the dust by VR with our old box-based volume system.

 

Sorry for the rant Marty. 


At the moment you could already get some nice savings by having all the advected fields stored as sparse (VDB) grids; there is no reason to have density, temperature, etc. stored as dense grids.

 

For the pressure projection, most solutions currently need a dense velocity grid. I bet some solutions that allow a sparser grid for velocities will come along soon as well. Here is an example:

 

http://gdcvault.com/play/1021767/Advanced-Visual-Effects-With-DirectX

 

I think some people close to VDB were playing with some ideas as well, and there is also some interesting stuff going on at SpaceX:

 

 

And there definitely is some overlap between that SpaceX group and the OpenVDB people.

 

Cheers,

koen


Yes, we need to be in an uncompressed space, but some uncompressed representations are better than others. Should I save a Targa, TIFF, or PNG when all I need is RGBA? I think the web has decided on that one.

 

If FLIP fills velocity and pyro fills all domains already, there is already an inherent problem of expandability. You will reach a limit where the entire bounding-box space cannot be computed, when all you need is a non-uniform space. This is as good a time as any to work on the problem. OpenVDB opens that door; whatever Volume 2.0 turns out to be in the end needs to adapt to this reality of non-square space. Bifrost (Naiad) has already used this in production.

 

Furthermore, we are entering a world of VR soon enough, where this information will need to be saved and stored in 3D space and in real time. Film people may not think this is important now, but notice that stereo hasn't gone away this time. Look at the crappy cover of Time Magazine. Don't get left in the dust by VR with our old box-based volume system.

 

Sorry for the rant Marty. 

 

Appreciate the discussion. The way SIGGRAPH papers are going, you don't escape a domain. You can use optimisation tricks, but you must compute to some type of box. VR is the same - it's all about tricks that don't visually offend, and being able to compute on the GPU! For example, realtime GI is simplified ray tracing. https://research.nvidia.com/publication/interactive-indirect-illumination-using-voxel-cone-tracing

 

Bifrost has shown memory improvements where you don't need detailed info, and small performance improvements. Qualitatively it appears similar to Koen's link above.


In any of the VOP contexts you can MMB on an input field, like pos. Is that what you mean?

 

As far as the SOP context with subnets goes, you would need to use the Edit Parameter Interface. So there is no equivalent of the above, besides the previously mentioned shortcut and the Type Properties dialog.

 

[attachment: VopMMB.jpg]

 

 

>  In any of the VOP contexts you can MMB on an input field, like pos. Is that what you mean?

 

Yes, exactly that, but in the SOP/DOP/COP/etc. node contexts. Take the current parameter, create a sanely named proxy at the top of the containing node's interface, and ref-link it down to control the current parameter. Easy as pie.

 

> So there is no equivalent of the above

 

Yea, that's why I submitted an RFE.

Edited by kleer001

i'd like to see better support for geometry procedurals.  namely, i'd like to be able to specify them internally to my geometry and not at the object level.  there seems to be some support for this growing, with file sops and alembic geo having the ability to do delay-load rendering.  it'd be great to extend this to a more general ability for your geo on disk to invoke all varieties of procedurals.  in fact, it would be great if they could stack -- so that you could delay-load your geo and it would itself be a point instance procedural, or even a point replicate that feeds a point instance.  not sure how you'd stack some of these unless there were some kind of procedural shader system where you could wire together different procedurals.  but even a limited system where you could delay-load via a normal geo procedural at the obj level that loads in geometry carrying a single additional procedural (assigned via a detail attribute on the geo) would be a step in the right direction.

Edited by fathom
