Jason

Houdini 10 Wish List


Can't you just use CHOPs for that, guys?

Yeah, is that what's meant by persistent data? If so, then yes: a Geometry CHOP set to Animated. It can be turned into a digital asset, the parameters for the frame range, sample rate and all that stuff can be put on the top level, and you'd never know it's CHOPs at work. It even corrects the motion blur by inter-frame interpolation when you read in animated geometry ... a simple trick but very useful.

Some might argue that if it's that useful it should be a native SOP, but so far I like to think of SOPs as cooking once at a point in time, and CHOPs as cooking the whole graph (time or otherwise).
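As a standalone illustration of what that inter-frame interpolation amounts to (plain Python, no hou module; the function name and cache layout are hypothetical, not Houdini's actual API):

```python
import math

def interp_positions(samples, frame):
    """Linearly interpolate cached per-frame point positions at a
    fractional frame, roughly the way a Geometry CHOP set to Animated
    resamples channels between stored frames."""
    f0 = int(math.floor(frame))
    f1 = min(f0 + 1, max(samples))      # clamp at the last cached frame
    t = frame - f0
    p0, p1 = samples[f0], samples[f1]
    return [tuple(a + (b - a) * t for a, b in zip(q0, q1))
            for q0, q1 in zip(p0, p1)]

# Two cached frames of a single point moving along +X; a motion-blur
# shutter sample at frame 1.5 lands halfway between them.
cache = {1: [(0.0, 0.0, 0.0)], 2: [(1.0, 0.0, 0.0)]}
print(interp_positions(cache, 1.5))  # [(0.5, 0.0, 0.0)]
```

That one-frame lookup-and-blend is all the "motion blur correction" needs: the renderer's sub-frame shutter times fall between the cached geometry samples, and the CHOP fills them in.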

Edited by andrewlowell


Because I'm coming from other applications like 3ds Max, Maya and Modo, I find some parts of Houdini more complex.

But I'll talk about documentation for every part of Houdini:

Step-by-step tutorials for every part, and finally a document for a project that brings several parts together,

like modeling it, texturing it, preparing it for a simple animation, and finally giving it some effects and rendering it for the best result to enjoy.

It's also important to document Python scripting by giving it some tutorials.

In fact I found modeling and texturing easy in Modo, but it doesn't have effects, dynamics or character animation.

thanx


In a software package like Houdini, one effective way to expose users to the more powerful aspects of the software, and teach them, is to give them a way to drill down to that level.

There is a modular synth called Reaktor. It's bundled with everything you need. You start with whole instruments and sequencers, wiring them together and making fun sounds and music. But then you want to do more, and you start learning to build these things on your own by drilling down into the macros (the Reaktor equivalent of DAs), breaking them apart and so on. Houdini could stand to benefit a lot from this structure. In Reaktor you aren't required to use the higher-level macros, and I think most who have mastered the program have a library of their own.

So to support my comment above that Houdini could be made to support a non-procedural workflow: I meant to say that I want to do some tasks, like bridging edges or faces, with the ease of other packages, but in Houdini I want to work with these concepts procedurally through some higher-level DAs that would be packaged with Houdini.

I'd be really bummed if Houdini started working like the other packages. But I want a way to learn and get new ideas while I'm still actually creating things. I feel like I've used Houdini for two years and have nothing to show for it. :) Maybe I just suck at this though.


I'm going to be echoing a lot of what Jason asked for I think..

* Authoring of realtime shaders via VOPs. I would like to be able to spit out code for GLSL and HLSL, and to implement fragment shaders in COPs. How about making some COPs take advantage of the GPU? Screw it, let me apply fragment shaders to the viewport or camera as well. Maybe this is interesting: a camera that is able to render multiple takes to texture storage on the GPU, with those images then composited in real time by a fragment shader in the viewport. How freaking cool would that be? I think very.

* Proper Vertex normal support. We are running into this at work now. Quite annoying actually as Maya seems to always want to create vertex normals.

* Improve viewport picking and handles. Maybe look at how picking works in Silo as an example.

* Image paint SOP - or perhaps a whole new TOPs context (maybe it could be part of COPs with better COPs integration). I can envision building an image with a network of layers, filters, curves, fonts, paint operations, etc, in the same way you build anything else in Houdini and it makes me smile.

* Threading for DOPs, especially fluids.

* Relighting/Interactive Lighting - I've seen several attempts at using COPs for relighting... maybe we can do better. nvidia demoed some really impressive relighting and interactive lighting tools for Gelato and the last couple of Siggraphs, maybe look to them for inspiration.

* HDK, is there anything that can be done to make it more accessible to mere mortals, and turn it into a real API? Python is great, but there are some things it's never going to do well, such as building geometry.

* A good brush tool, something that can be used to comb longer hair, and behaves the way an artist would expect. The current comb tool rotates normals from their root. Brush needs to drag multi-segment hair from the tip in screen space and solve in an IK like fashion.

* Improve stability

* More filters on the geometry spreadsheet (filter by group name, filter by datatype)

* Add support for tangents and bi-normals to the point SOP.

Edited by Mcronin

That all being said, as soon as I got back from that trip to LA I submitted an RFE to make a few modifications to the Copy Objects DOP that would allow this sort of setup without having to create a subnet for each fracture iteration. There just hasn't been time... But that's what this H10 wish list is all about!

Mark

Ooh sounds good. Thanks Mark

Fuzzy logic? Heh - Python/VEX/HDK - it's all there already. Dig!

Your argument could be used to block the development of almost any kind of new node, because you 'could always roll your own with the HDK'! No doubt you enjoy the benefits of many nodes that have been introduced in recent versions of Houdini that you could have implemented the hard way with VEX or whatever. But you don't, because that would increase your development time and cost, would most likely not work as efficiently, and would be much less easy to use. Any new built-in behaviour for Houdini potentially extends its market reach, especially when we are talking about functionality that is available in Massive at over $25,000 a license.

Replicating the standard functionality that you get out-of-the-box in Massive using Houdini's current toolset together with Python, VEX and the HDK is no mean feat. From what I can see the actual fuzzy logic part is relatively straightforward, but Massive comes bundled with lots of sensory modules that may prove trickier.

For example, Massive has vision modules that can be used for avoidance behaviour (amongst other things). You could replicate that with a ray-marching system in VEX, but it would be pretty difficult to build and probably even harder to maintain.

John.

[Lots of good thoughts about fuzzy logic snipped]

Hey Ed, if you really do want to investigate this a bit further, I for one would be happy to share my thoughts off-line if you want to get in touch. Some points come to mind immediately though....

Your description of a possible implementation in POPs sounds reasonable, but I think you would have difficulty ramping up to anything comparable to Massive without having the fuzzy logic implemented as its own AI context. That's because complex behavior requires very highly interconnected boolean-logic networks. Take your example of "hungry". You could have an attribute measure for hungriness as you suggest, but it's more powerful to encapsulate it as a fuzzy network of logic states. That allows the 'measure' of hungriness to be different in different circumstances for different agents. So an agent might feel pretty hungry even if he's just eaten but happens to be right next to a food source. On the other hand, an agent who hadn't eaten for a long time wouldn't feel so hungry if there is a predator between it and the only food source! That sort of reasoning is expressed by logic interconnections, and you wouldn't necessarily use a quantifiable measure for 'hungriness' at all.

The Massive interface is horrible, but it does have one or two interesting ideas. One of these addresses your issue about debugging the logic. Each node in the network comes with a little bar that indicates the fuzzy logic value being passed through for the currently selected agent. You quickly get into the habit of selecting various agents and watching the logic bars go up and down whilst playing the sim, to check whether the values are working as expected. It makes it easy to trace back where the behaviors are being triggered and to correct logic flaws.

I'll stop hijacking this thread now! Like I said, shoot me a mail if you want to talk more.

John.

Yeah, I think a few clever new POPs like that would be awesome! Something like "sample-attributes", or sample layers on CHOP samples similar to deep-raster images, might also help with the logic stuff, but that would radically change the way CHOPs work; if so, I'd want it done very efficiently and correctly.

Hmm ... in 2D, deep raster images are simply an array of images. So in 1D, an array of channels is exactly what CHOPs are right now. I see your point about having more specialized functionality though.

Replicating the standard functionality that you get out-of-the-box in Massive using Houdini's current toolset together with Python, VEX and the HDK is no mean feat. From what I can see the actual fuzzy logic part is relatively straightforward, but Massive comes bundled with lots of sensory modules that may prove trickier. For example, Massive has vision modules that can be used for avoidance behaviour (amongst other things). You could replicate that with a ray-marching system in VEX, but it would be pretty difficult to build and probably even harder to maintain.

John.

This got me thinking. It seems to me all this comes down to transferring attributes. I've used Massive, and honestly, for all the fuzzy logic and brain talk it seems more complicated than it needs to be. The attributes on an agent, which can be represented 0-1, together make up the state of the agent's mind and determine what course of action the agent will take. To create this state you need the agent to be aware of the state of the objects around it, by proximity. Now, you could set up a reasonable version of this without programming or writing a ray marcher, using something like the Attribute Transfer SOP. It'd be a laborious process, though.

Maybe a suggestion for a new general-purpose tool to aid in doing something like this could be an "attribute aggregator" SOP and/or POP. It'd work like attribute transfer, but have an interface like a Blend Shapes SOP, with a bit of the Merge DOP. I see it like this: you can specify many inputs. Each input has broadcast and receiver areas and functions. The area can be omnidirectional, or directional akin to a spotlight; maybe the user could even specify bounding geometry for the area min/max. The function is a user-selectable falloff function. The user can set which of each input's attributes are broadcasting, and which attributes the input should listen for. As the sim runs, the inputs aggregate each other's attributes based on proximity, falloff, and their broadcast and reception areas.

Anyway, it seems like a simple idea that could go a long way towards getting you Massive-like functionality, not to mention that a tool like this could have many, many other uses.
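As a rough standalone sketch of that idea (plain Python; every name here is hypothetical, and only the omnidirectional case with a linear falloff is shown, not the spotlight or bounding-geometry variants):

```python
import math

def aggregate(agents, radius):
    """Hypothetical 'attribute aggregator': each agent receives the
    broadcast attributes of nearby agents, weighted by a linear
    distance falloff within the broadcast radius."""
    out = []
    for a in agents:
        received = dict.fromkeys(a["listen"], 0.0)
        for b in agents:
            if b is a:
                continue
            d = math.dist(a["P"], b["P"])
            if d >= radius:
                continue
            w = 1.0 - d / radius  # linear falloff: 1 at the source, 0 at the edge
            for name in a["listen"]:
                received[name] += w * b["broadcast"].get(name, 0.0)
        out.append(received)
    return out

agents = [
    {"P": (0, 0, 0), "broadcast": {"fear": 1.0}, "listen": ["fear"]},
    {"P": (1, 0, 0), "broadcast": {"fear": 0.0}, "listen": ["fear"]},
]
print(aggregate(agents, 2.0))  # the second agent picks up fear = 0.5
```

Run once per frame of the sim, this gives each agent an aggregated view of its neighbours' states; the directional areas and user-selectable falloff functions would just swap out the distance test and the weight computation.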

Your description of a possible implementation in POPs sounds reasonable, but I think you would have difficulty ramping up to anything comparable to Massive without having the fuzzy logic implemented as its own AI context.

You have no argument from me about that as I mentioned that UI is an issue. :) I want to present it in a low-level manner to make sure we actually mean the same thing about "fuzzy logic" though. The term by itself is fairly vague. Also, a low-level description might inspire someone to try to write a POP VOP to do it along similar lines. :)

You could have an attribute measure for hungriness as you suggest, but it's more powerful to encapsulate it as a fuzzy network of logic states. That allows the 'measure' of hungriness to be different in different circumstances for different agents. So an agent might feel pretty hungry even if he's just eaten but happens to be right next to a food source. On the other hand, an agent who hadn't eaten for a long time wouldn't feel so hungry if there is a predator between it and the only food source! That sort of reasoning is expressed by logic interconnections, and you wouldn't necessarily use a quantifiable measure for 'hungriness' at all.

I think what you're talking about is a method of expressing the state transition expressions? I think a "fuzzy state machine" can model this type of behaviour. One would just need a transition expression that is a function of distance to food as well as the number of foes on the way to the food. Of course, different systems will model some things better than others. :) In terms of UI, one can also imagine a fuzzy logic context that is a graphical representation of the transition expressions themselves. But that also necessarily narrows/biases your computation model for them, while gaining ease of use.

In my mind, only the transitions of the non-zero weighted states would be evaluated for each agent though. In this example, we're just talking about a very simple case where "hungriness" is a continuous measure so a "state machine" is actually unnecessary. We just need to have a POP that computes the value of the hungriness for every agent, every frame using whatever means it sees fit (eg. dependent on agent, food, foe positions).
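A minimal sketch of that kind of per-agent, per-frame computation (plain Python; the inputs, constants, and the use of max as a fuzzy OR are all illustrative assumptions, not Massive's actual model):

```python
def hungriness(time_since_meal, dist_to_food, predator_in_path):
    """Context-dependent fuzzy 'hungriness': appetite rises with time
    since eating and with proximity to food, but a predator blocking
    the route gates the whole drive down."""
    appetite = min(time_since_meal / 10.0, 1.0)     # saturates at 1
    proximity = max(0.0, 1.0 - dist_to_food / 5.0)  # 1 at the food, 0 beyond 5 units
    drive = max(appetite, proximity * 0.8)          # fuzzy OR via max
    if predator_in_path:
        drive *= 0.2                                # fear suppresses appetite
    return drive

# Just ate, but standing next to food: still fairly hungry.
print(hungriness(0.0, 0.5, False))   # ~0.72
# Starving, but a predator is in the way: not acting hungry.
print(hungriness(20.0, 4.0, True))   # 0.2
```

The point is that "hungriness" is a function of the whole context, evaluated fresh for every agent every frame, rather than a single stored attribute that only ticks up over time.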

I'll stop hijacking this thread now! Like I said shoot me a mail if you want to talk more.

I don't mind discussing it in this thread. :) Maybe start a new thread so others can share their ideas? I figure I'll discuss it now since several people mentioned "fuzzy logic" and I wanted it to be more explicit. Features are not usually easily described, and even easily described features can be much more complicated than one might think.

Hmm ... in 2D, deep raster images are simply an array of images. So in 1D, an array of channels is exactly what CHOPs are right now. I see your point about having more specialized functionality though.

Not quite what I mean exactly. One of the difficult things about working with rigs, scene automation, etc. in CHOPs is the fact that there are all these different unrelated channels in a node that have no relationship except, possibly, a user-defined name. It would be nice to have a grouping or layering system similar to how deep raster images work. So if you just want to view or process the RGB, then just do the function on that, and the other stuff will still get passed through to the next operation.

So, here's a scenario I'm doing right now in CHOPs. A few hundred characters in the scene. All have the same animation, but different animation start/stop/rate. So for this I'm using a Lookup CHOP with different rate/percentage ... the same number of lookup curves as characters. However, this lookup curve needs to be duplicated by the number of parameters in the character rig in order to copy all of the character's parameters. It would be nice to somehow encapsulate all of the channels for a character into a single CHOP/channel group, so it could be processed more or less like a single channel.

Yes, CHOPs work this way already, but the UI and the current channel flow (always outputting the first input's channels) make this particular thing rather difficult. This was a very simple crowd and wasn't really an issue for this example, but something more complex would make this very difficult, and you'd have to go to POPs with attributes, logic, etc.

If something like deep CHOPs were implemented, then it would be great to be able to add and remove layers as well.

Edited by andrewlowell

Yeah, is that what's meant by persistent data? If so, then yes: a Geometry CHOP set to Animated. It can be turned into a digital asset, the parameters for the frame range, sample rate and all that stuff can be put on the top level, and you'd never know it's CHOPs at work. It even corrects the motion blur by inter-frame interpolation when you read in animated geometry ... a simple trick but very useful.

Some might argue that if it's that useful it should be a native SOP, but so far I like to think of SOPs as cooking once at a point in time, and CHOPs as cooking the whole graph (time or otherwise).

My problem with CHOPs as a way of getting persistent data is that you can't create cyclic networks the way you can in POPs. That is, in POPs, say you have a network flow like this:

A->B

A will always be the state of the network as initialized by A, as well as the state of the network at B from the last subframe.

The record CHOP is good in many conditions, but it doesn't allow that same cyclic flow. So if I had this:

Output of Record A -> B -> Record A,

That would result in infinite recursion and wouldn't cook.

At the moment, the only way I'm aware of to get around that in CHOPS is to hack it by exporting files:

Read File A -> Modify Channels -> Write File A

Writing to A doesn't force a cook on the read, so this would be perfectly legal, and it does succeed in creating the cycle. Fine... I just kind of want to avoid having to use files whenever possible. It's not a very nice solution.

In most cases, this kind of setup won't be necessary. Recording is perfect if you just want to add new data into your accumulated data that doesn't depend on the contents of that accumulated data. But it's those cases where you want to record new persistent data, based on some modified form of the older persistent data from the same network, that you get stuck with the recursion problem in CHOPs and have to resort to writing out temp files.

To clarify what I'm driving at here, in pseudo-code it might be like this:

old_data = initialize_data()
foreach frame:
	new_data = modify(old_data)
	old_data = new_data

So, maybe persistent data in SOPs isn't the answer; maybe the better answer would be a more convenient way of doing cycles in CHOPs without having to resort to a file export hack. How about an export CHOP that will export to a named CHOP cache within the CHOP network, which is held in RAM. So you could have:

Read memory cache ("/ch/chopnet1", "cache A") -> Modify Channels -> Write memory cache ("/ch/chopnet1", "cache A")

You can almost get there with the Fetch CHOP, but even that errors out with infinite recursion.
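To make the proposed semantics concrete, here's a tiny standalone model (plain Python; the class and its API are hypothetical, just mirroring the read/modify/write chain above). The named in-RAM cache decouples the read from the write, so the "cycle" becomes a one-frame delay instead of a recursive cook:

```python
class ChopCache:
    """Sketch of the proposed named in-RAM CHOP cache: a write at the
    end of frame N is what the read sees at frame N+1, so a network
    can feed back on itself without temp files or recursion errors."""
    def __init__(self, initial):
        self._data = list(initial)

    def read(self):
        return list(self._data)     # snapshot from the previous frame

    def write(self, samples):
        self._data = list(samples)  # becomes visible on the next read

def modify(samples):
    # stand-in for the "Modify Channels" stage: accumulate +1 per frame
    return [s + 1.0 for s in samples]

cache = ChopCache([0.0, 0.0])
for frame in range(1, 5):
    cache.write(modify(cache.read()))
print(cache.read())  # [4.0, 4.0]
```

This is exactly the `old_data = modify(old_data)` loop from the pseudo-code above, just with the cache standing in for the file-export hack.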

Edited by Vormav


I'm starting to see what you mean ..

Are you describing geometry read in as static or animated? For basic recursive stuff I'd use animated. Then for the recursion you could either shift negative and process, use the oc() expression in the Expression CHOP, or use the Feedback CHOP (although oc() is just as good or better, I think). For recursive stuff, particles are still easier I think, because you can use multiple nodes; with the expression etc. you'd have to stick it all in the same expression.

I haven't tried it, but maybe it would work like this with static geometry: before the Expression CHOP, duplicate whatever channels are needed by the number of recursive operations you are using. Then use the oc() expression like this ... oc($C-1, $I-1) (operation blah blah, + - etc.). It would be ugly, but you should get the final result on the last channel processed, since the first incoming channel is processed first, then the next, and so on.

Or am I still way off? It's not that I don't think they need the feature I'm just trying to get my head around your method. What type of situation are you using this in?
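For what it's worth, that channel-duplication idea can be modeled outside Houdini (plain Python; this only mimics the structure of the oc($C-1, $I-1) trick, not actual CHOP cooking):

```python
def chain_recursion(initial, steps, op):
    """Simulate duplicating a channel once per recursive step, where
    each channel c at sample i reads channel c-1 at sample i-1 (the
    oc($C-1, $I-1) pattern), so the last channel holds the result."""
    channels = [list(initial)]
    for c in range(1, steps + 1):
        prev = channels[c - 1]
        chan = [prev[0]]  # first sample has no previous index to read
        for i in range(1, len(prev)):
            chan.append(op(prev[i - 1]))
        channels.append(chan)
    return channels[-1]

# Doubling per step: each extra duplicated channel applies op once more.
print(chain_recursion([1.0, 1.0, 1.0, 1.0], 3, lambda v: v * 2.0))
# [1.0, 2.0, 4.0, 8.0]
```

It also makes the limitation visible: the recursion depth is capped by how many channels you duplicate, and by how many samples the channel has.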

My problem with CHOPS as a way of getting persistent data is that you can't create cyclic networks the way you can in POPs. I.E., in pops, say you have a network flow like this:

A->B

A will always be the state of the network as initialized by A, as well as the state of the network at B from the last subframe.

The record CHOP is good in many conditions, but it doesn't allow that same cyclic flow. So if I had this:

Output of Record A -> B -> Record A,

That would result in infinite recursion and wouldn't cook.

At the moment, the only way I'm aware of to get around that in CHOPS is to hack it by exporting files:

Read File A -> Modify Channels -> Write File A

Writing to A doesn't force a cook on the read, so this would be perfectly legal, and it does succeed in creating the cycle. Fine... I just kind of want to avoid having to use files whenever possible. It's not a very nice solution.

Edited by andrewlowell


I think someone already mentioned the integration of geometry cache import/export... well, I'll bring this up again.

I think it's been a huge (and very late) addition to Maya, although their implementation is rather basic. I hope to see in Houdini, Maya and other 3D applications something like an industry standard for reading/writing AND (reliably) editing(!!!) geometry caches.


My 5 cents:

I really think that SESI has to find a way to "lock", or change the color of, a parameter field if another attribute is overriding it (as with a key or expression).

For instance:

If you are creating an attribute in POPs, "diff" or "Alpha" or anything that overrides the shader value, it should block the shader parameter.

Edited by onesk8man

My 5 cents:

I really think that SESI has to find a way to "lock", or change the color of, a parameter field if another attribute is overriding it (as with a key or expression).

For instance:

If you are creating an attribute in POPs, "diff" or "Alpha" or anything that overrides the shader value, it should block the shader parameter.

The major problem being that many objects can use the same shader, and also that the UI would have to cook the geometry to interrogate its attributes in order to provide this indication; this could be crippling if you're running sims (very common) or handling large geometry.

Perhaps a compromise might be for Mantra to be allowed to print this information at some verbosity level, like:

VEX Parameter Override: object /obj/geo1 attribute 'diff' overrides shader /shop/vm_plastic1

VEX Parameter Override: object /obj/geo1 attribute 'rough' overrides shader /shop/vm_plastic1

VEX Parameter Override: object /obj/geo2 attribute 'diff' overrides shader /shop/vm_plastic1


a few small ui things:

-I would like to see connections made at the channel level or expressions (a dotted line that connects a node using an expression that references another node)

-visual grouping, like the color labels (hit 'c'), but extending outside the nodes to connect like-labeled nodes. Shake does this.

-I have never been too comfortable with basic curve drawing; the editing after the curve is drawn is not intuitive to me. In Maya I can lay down curves quickly and precisely, especially with pickwalk on the CVs (arrow keys to step between the CVs).

-on that matter, if I want to lay down some rough curves as guides or whatever, I have to keep wiring them to a Merge SOP, or go back and shift-select template to see everything I have drawn. Maybe there is another way to allow multiple node outputs to be seen. Perhaps a flag to template all new SOPs (maybe not that..), or another level of visibility flags for modeling purposes.

-flipbook directly to QuickTime movies, quick and easy, with adjustable things like timecode and notes burn-in, or even showing the state of HUDs

-it would be great to add something else on the level of L-systems (meaning their level of coolness), like the AI solvers that others have mentioned, or a framework for genetic algorithms. Or even something like Maya Paint Effects brushes, but more flexible.

-Rag doll DOP

-GLSL in VOPs, and GLSL in general. I sometimes need quick results for previs or even mograph

- .rat creation button in the UI file browser

I'll think of more

MD

-I would like to see connections made at the channel level or expressions (a dotted line that connects a node using an expression that references another node)

You've actually been able to do this for quite a while now. Hit "d" over a network view and check out the Dependency tab. And, quite coincidentally I think, there was this journal entry just yesterday: "Houdini 9.1.144: Added netviewdep command to save the worksheet's dependency viewing options, which are now saved along with the desktop."

-flipbook directly to quicktime movies, quick and easy, with adjustable things like timecode and notes burn in. or even show the state of HUD's

I was actually wanting to add some menu options to MPlay (using the new-ish XML menu configs) to use mencoder and such to create movies from sequences. The problem is that there is no MPlay hscript command to save images (like a "saveseq" or the like). I'd like MPlay to at least have this command available. How 'bout it, malexander? ;)

-Rag doll DOP

Yes, this would be nice. I tried to build a system myself once that tries to recreate an existing skeleton in DOP land but I ran into time and resource issues.


some great ideas here

I'd love to see a fuzzy logic system in Houdini! Let's just not call it FLOPs :unsure:

What I'd also like to see (most of which has been mentioned already):

Faster rendering. I'm still having trouble rendering on all 8 cores; I've tried light and heavy scenes and changing bucket sizes, yet Mantra still renders twice as fast using 3 cores as it does using 8!

Faster dynamics.

Much faster cloth.

Faster fluids, and better shaders (fluid mapping? a la FumeFX).

A complete fur system/OTL. I'm thinking more like other 3D apps, you know, a pretty GUI with lots of buttons :)

A complete muscle system/OTL, easier to set up, with muscle jiggle, skin sliding, etc. (still thinking about what the etc. would be).

Did I say fuzzy logic?

H9's shaders are looking pretty good; now can we have some more please :)

Those are my main ones for now.

Now, can we maybe get some kind of voting system going? It would be great to show SideFX what the most wanted features/fixes are!

Jason

