Approach to Lighting and Managing Big Scenes


(edit: maybe this is the wrong forum section and some moderator can move this to the general section)

Hi,

I'm currently working on our first project that requires an asset-based approach. Before this, our projects never really exceeded a scope where we could just create everything inside a master scene and then shade and light it all at the end. Now we are creating dedicated HDAs for every object so our Set Dresser can work while Modeling, Sculpting and Look Development take place at the same time. As I'm entering the lighting stage of the project, I'm wondering what the best approach is to handle this workflow.

1. I've planned to have some uber materials in the scene which will be referenced by the HDAs and propagated to the objects inside, which will have material overrides to customize them. Additional specialized shaders will be embedded in the HDAs. How would I go about changing material parameters in my lighting scene, given that I don't want to propagate all the attributes of all objects to HDA level and I don't want to make all HDAs editable? What's your approach to this? How do you handle large scenes with hundreds of different objects and materials (most of which can be handled by changing values on our uber material)? Do you create a new shader for each object in the scene? Or do you not rely on HDAs that much?

2. Do you use categories for light/reflection/refraction inclusion/exclusion, or masks, which have the benefit of a dedicated light linker pane? If I choose the light linking approach I cannot select the HDA as an object in the light linker. Is there a way to build HDAs that Houdini sees as real geo nodes?

My workaround would be to make the nodes inside the asset editable, which I'm not too happy with, as it makes those nodes independent from the original asset, and changes are no longer propagated to all scenes using the asset. Is it possible to make only certain parameters editable? Promoting the light mask parameters to HDA level also doesn't help much, as I still can't access them via the light linker.

I also tried using bundles for this, but they only seem to change the light mask parameter of all included objects, which also doesn't help when the asset is locked.

Ok, that's a lot, but I'm struggling with these problems and would really appreciate some feedback and input from some of you more experienced guys. I'm still trying to figure this workflow stuff out.

cheers,

-dennis

Edited by dennis.weil

Hey dennis!

Do you have access to the H12 beta forum? Coincidentally, I posted a couple of posts there yesterday on this very subject. I thought I'd start a discussion on the scene assembling pipeline, although I somehow feel it won't attract much attention from anyone. Unfortunately, Houdini is rarely used in studios at that pipeline stage, except for smaller ones who have chosen it as a replacement for Maya and who lack custom tools (read: Katana). It looks like Maya is hardly a competitor in that field, which seems to discourage changing the status quo.

To put it shortly, the HDA pipeline is extremely unfriendly for layout/rendering work. Unless you want to manually link various render parms on the top level (see the Fur asset), which I personally think is completely unrealistic, you have two options:

- use takes extensively (with the "Allow Auto-Take for Closed Assets" preference), which does allow you to overwrite closed parms

- stay away from HDAs, and do set dressing with Object nodes (possibly customized) containing pure baked meshes. Keep your shaders in a different network; basically, work as an ordinary human being, not a Houdini user.

As tempting as the first option seems, I think it's in fact an abuse of takes. They weren't meant to record all changes in a scene, which becomes obvious once you consider how poor they are at managing entries. Once you have a take with a few dozen parameter entries, you're lost.

This is a very important topic: I think that without some adjustments, Mantra, the powerhouse of Houdini, loses a lot. There is so much flexibility there, and it's so darn hard to use.

Edited by SYmek

Hi Szymon,

thank you for your insights.

I've actually just read your threads over at sidefx before I came back here, and you are making some very valid points. As flexible and powerful as Houdini and Mantra are, I also think that the lighting workflow needs some love to make it artist-friendly and just WORK for big production scenes.

In all our previous small jobs I didn't really encounter the shortcomings of the workflow as it is right now, but the more our current project grows, the more obvious they get.

I think it might be a little too late to switch from the HDA workflow as nearly all our assets are finished, so I might have to deal with the shortcomings of that workflow and figure something better out for the future.

I didn't know that takes can override closed asset parameters. Thanks for that tip. Though I also think that it's not ideal to use takes for this.

I'm not very good at this "ordinary human being" thing. That's why we all started using Houdini in the first place ;)

You are absolutely right. And btw: I'M VERY MUCH INTERESTED IN A DISCUSSION ABOUT THE SCENE ASSEMBLING AND LIGHTING PIPELINE IN HOUDINI ;)

-dennis

Edited by dennis.weil

Firstly, we've been using a naming convention which has helped organize all the HDAs on a project. That's something you should think about.

asset-[network]-[name].otl

So if we were to create an asset out of the geometry for a car, the file name would be "asset-sop-car.otl", the car material would be "asset-shop-car.otl", and maybe a VEX shader for the special paint would be "asset-vex-car.otl". You could also put the dust effects into "asset-pop-car.otl".

The advantage is that the modellers can update the asset-sop-car.otl file while the shader writer is working on the car paint in "asset-shop-car.otl". It also lets you group all the otl files together with "asset-*.otl", or by network with "asset-sop-*.otl".
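As a small illustration, the convention makes asset files trivially filterable with ordinary glob patterns. A minimal sketch (the file names are hypothetical, following the asset-[network]-[name].otl scheme described above):

```python
from fnmatch import fnmatch

# Hypothetical .otl files following the asset-[network]-[name].otl convention.
files = [
    "asset-sop-car.otl", "asset-shop-car.otl",
    "asset-vex-car.otl", "asset-sop-tree.otl",
]

def select(pattern, names):
    """Return the names matching a shell-style glob pattern."""
    return [n for n in names if fnmatch(n, pattern)]

car_files = select("asset-*-car.otl", files)  # everything belonging to the car
sop_files = select("asset-sop-*.otl", files)  # every SOP-level asset
```

The same patterns work in a file browser or a publish script, which is the real payoff of sticking to one convention.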

1. I've planned to have some uber materials in the scene which will be referenced by the HDAs and propagated to the objects inside, which will have material overrides to customize them.

Be very careful how you make your references. One bad side to working with assets is it's easy to compound things that will have to cook, and this can slow down Houdini greatly. For example, if you had an asset that was a model, and that asset contained a shop network for the materials, then every time you use that model in the scene the shop network will have to be cooked. Houdini will cook each material separately, and this can be a big problem when layering assets within assets.

So it's best to put your material assets in the Shop network, and then have parameters on your SOP assets that reference a single instance of the material asset they should use. Then if you have 10 cars using a car paint shader, there is only one paint shader for Houdini to cook. Whereas if you embed a Shop network within the car asset, there will be 10 copies of the same shader that get cooked.

Try to use "Object Merge" SOP nodes to reference other Assets, rather than embedding an Asset within an Asset. Then promote the object merge parameters to the Asset's parameters. This keeps the assets more independent than having Super Assets which contain lots of other Assets.

What's your approach to this? How do you handle large scenes with hundreds of different objects and materials (most of which can be handled by changing values on our uber material)? Do you create a new shader for each object in the scene? Or do you not rely on HDAs that much?

It has a lot to do with using a naming convention, and then setting up your Light Assets to include/exclude/matte/phantom objects based on a name pattern.

For example, the name pattern "geo-*-noshadow*" could be used for objects that should not cast a shadow. That way, "geo-car-noshadow3" can be a SOP node that is automatically excluded by its name alone.

2. Do you use categories for light/reflection/refraction inclusion/exclusion, or masks, which have the benefit of a dedicated light linker pane? If I choose the light linking approach I cannot select the HDA as an object in the light linker. Is there a way to build HDAs that Houdini sees as real geo nodes?

I don't understand, why can't you select an HDA? Houdini often won't let you select a Subnet in the dialogs, but if you manually type in the Subnet name it will still work (stupid Houdini).

My workaround would be to make the nodes inside the asset editable, which I'm not too happy with, as it makes those nodes independent from the original asset, and changes are no longer propagated to all scenes using the asset.

"Editable" is really a horrible name for the feature. Sidefx should have called this "Template" which is more accurate. When you drop an asset that is open for editing, it's really like dropping a template.

The only time I've used the editable feature in an Asset is when I have an asset that modifies the contents of an inner Subnet. Something like a tool that builds a POP network when you click a button. But even with this approach, I recommend providing the user with a parameter to reference the subnet that the modifications should occur in.

Is it possible to make only certain parameters editable? Promoting the light mask parameters to HDA level also doesn't help much, as I still can't access them via the light linker.

I also tried using bundles for this, but they only seem to change the light mask parameter of all included objects, which also doesn't help when the asset is locked.

Can you post a file demonstrating the problem, and what it is you want to achieve? I don't use much light linking, but I don't do very complicated lighting either.

p.s. Good topic :)


I've actually just read your threads over at sidefx before I came back here, and you are making some very valid points. As flexible and powerful as Houdini and Mantra are, I also think that the lighting workflow needs some love to make it artist-friendly and just WORK for big production scenes.

In all our previous small jobs I didn't really encounter the shortcomings of the workflow as it is right now, but the more our current project grows, the more obvious they get.

I tried using takes as part of the rendering process, but it quickly became a problem when revisions were required. Switching takes can be time-consuming for large scenes, and a list of what parameters a take modifies doesn't really explain the purpose of the take. So you come back to a scene later, and you're scratching your head thinking "why did I create this take again?". It's just not clear: is this take for rendering, or is it an animation revision? Then an animator comes along and asks "which take do I append my changes to?"

So now we only use takes for animation revisions, and it's a tool used by the animators. When I do my render setups, I put multiple mantra nodes into an Asset, and then reference copy a master mantra node. That master node has all the settings for raytracing, while each reference copy changes the overrides, the matte objects, and the include/exclude lists of objects. They all connect to a merge node, and that's how I do my multiple passes. If I need to adjust the ray samples, I only edit one node, and since it's in an asset it propagates to all the other scene files.
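The master/reference-copy idea can be sketched in plain Python (this only models the concept, it is not the hou API, and the parameter names are made up): the master holds the shared settings, and each pass stores only its overrides, so a change to the master shows up in every pass.

```python
# Master mantra node: shared raytracing settings (names are hypothetical).
master = {"ray_samples": 4, "reflect_limit": 2, "matte_objects": ""}

# Each reference copy only stores what it overrides.
pass_overrides = {
    "beauty":    {},
    "occlusion": {"ray_samples": 8},
    "matte_car": {"matte_objects": "geo-car*"},
}

def pass_settings(name):
    # A reference copy inherits everything from the master,
    # then applies its own overrides on top.
    return {**master, **pass_overrides[name]}
```

Editing master["ray_samples"] once changes every pass that does not override it, which is exactly the appeal of the reference-copy setup.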

For lights, I've placed a bunch into an asset, reference copied them, and then used names to define which ones should be included by a mantra node, rather than turning lights on/off with Takes. I exclude them from the Mantra node rendering that pass, and if I need a light to have different colors/settings, I reference copy the light, call the copies "KeyLight_Diffuse" and "KeyLight_Occlude", and then in Mantra I just include "*_Diffuse" lights in one pass, and "*_Occlude" in the other. So I end up with a Light asset called "KeyLight" which actually contains two lights inside it called "KeyLight_Diffuse" and "KeyLight_Occlude". They have shared parameters, but maybe the color/intensity is different.
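The suffix trick can be sketched the same way: each pass includes lights by a name pattern rather than by takes (the light and pass names here are hypothetical):

```python
from fnmatch import fnmatch

# Hypothetical per-pass light copies, distinguished by a suffix.
lights = ["KeyLight_Diffuse", "KeyLight_Occlude",
          "FillLight_Diffuse", "RimLight_Diffuse"]

# Each pass declares which lights it includes, by pattern.
passes = {"diffuse": "*_Diffuse", "occlusion": "*_Occlude"}

def lights_for_pass(pass_name):
    """All lights whose name matches the pass's include pattern."""
    return [l for l in lights if fnmatch(l, passes[pass_name])]
```

Adding a new light copy with the right suffix automatically enrolls it in the right pass; no take editing needed.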

It's a different way of working, but it's WAY easier to manage.


....

To put it shortly, the HDA pipeline is extremely unfriendly for layout/rendering work.

This is a very important topic: I think that without some adjustments, Mantra, the powerhouse of Houdini, loses a lot. There is so much flexibility there, and it's so darn hard to use.

...

Let me disagree with you here. Why has the HDA pipeline become extremely unfriendly for rendering work? As I see it, there are no problems at all. It's all about the flexibility of your assets, and the scripts to manage them.

There is so much flexibility there, and it's so darn hard to use.

Maya comes to mind. Yeeeeah! There's so much power in Maya, and it's extremely easy to use :lol: (sarcasm)

All our assets are created according to a template, and all the parameters we need are promoted to the top level when the asset is created. So there's no need to dive inside and edit anything.

Dennis,

For light linking, we use categories! This is a very powerful technique. My ubershader uses categories a lot. For example: all objects in a character asset have the category do_trace, and all shaders in the asset use this category to trace only the objects in that set. Also take a look at bundles/smart bundles. We use them a lot for light linking.

Edited by Stalkerx777

The advantage is that the modellers can update the asset-sop-car.otl file while the shader writer is working on the car paint in "asset-shop-car.otl". It also lets you group all the otl files together with "asset-*.otl", or by network with "asset-sop-*.otl".

Yes, a good naming convention is definitely needed here. I also like the idea of separating all the different assets for different purposes. Do you assemble those assets only inside your final lighting scene or is the set dresser already responsible for putting all those nodes in place so the lighter can just open the scene and work right away?

So it's best to put your material assets in the Shop network, and then have parameters on your SOP assets that reference a single instance of the material asset they should use. Then if you have 10 cars using a car paint shader, there is only one paint shader for Houdini to cook. Whereas if you embed a Shop network within the car asset, there will be 10 copies of the same shader that get cooked.

That's exactly what I meant, you just explained it in a better way than my non-native-semi-technical-English gibberish. ;)

Right now I'm using a single Master Uber Shader in my scene, and the geo nodes reference that via a parameter on the HDA. Then I'm using local material overrides to change the values for specific geo nodes. But what if I have to change some shader values for a specific shot? Do you promote the most important parameters to asset level?

Another interesting question comes to mind: if you only have one material for an asset, is it more efficient for Mantra to specify it at object level or via a Material SOP?

Try to use "Object Merge" SOP nodes to reference other Assets, rather than embedding an Asset within an Asset. Then promote the object merge parameters to the Asset's parameters. This keeps the assets more independent than having Super Assets which contain lots of other Assets.

Yes, I just realized how convoluted many embedded assets can get.

It has a lot to do with using a naming convention, and then setting up your Light Assets to include/exclude/matte/phantom objects based on a name pattern.

For example, the name pattern "geo-*-noshadow*" could be used for objects that should not cast a shadow. That way, "geo-car-noshadow3" can be a SOP node that is automatically excluded by its name alone.

I see. However, I can imagine that the names can get quite long if you try to create more elaborate lighting rigs. I like categories more for that very reason. They can use simple boolean expressions, so I can just use CAR and NOSHADOW as two separate categories and then decide what to do with them. I could also add SUV or SPORTSCAR as categories to get more specific. Categories just need a nice interface (light linker) to be more user-friendly. I think the documentation has mentioned (since H9) that there is more to come with categories.
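A toy sketch of why boolean category expressions stay short where name patterns grow long. This only mimics the idea; the operators and tag names are invented for illustration, not Houdini's real category syntax:

```python
# Each object carries a set of category tags (hypothetical scene).
objects = {
    "car1":  {"CAR", "SPORTSCAR"},
    "car2":  {"CAR", "NOSHADOW"},
    "tree1": {"PROP"},
}

ALL_TAGS = {"CAR", "SUV", "SPORTSCAR", "NOSHADOW", "PROP"}

def matches(tags, expr):
    # Translate a toy expression ('&' and, '|' or, '!' not) into Python
    # and evaluate it with each known tag bound to True/False.
    py = expr.replace("&", " and ").replace("|", " or ").replace("!", " not ")
    env = {t: (t in tags) for t in ALL_TAGS}
    return eval(py, {"__builtins__": {}}, env)

# A light that hits cars but skips NOSHADOW objects:
shadow_casters = [n for n, t in objects.items() if matches(t, "CAR & !NOSHADOW")]
```

Adding a SUV or SPORTSCAR term means editing one expression, not renaming every object to fit a longer pattern.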

Dennis,

For light linking, we use categories! This is a very powerful technique. My ubershader uses categories a lot. For example: all objects in a character asset have the category do_trace, and all shaders in the asset use this category to trace only the objects in that set. Also take a look at bundles/smart bundles. We use them a lot for light linking.

Yes, I'm using categories and bundles more and more for my lighting. They are great.

For me, however, bundles didn't work very well with closed assets. Geo nodes inside of closed assets don't respond to light linking changes made via bundles. Are they supposed to? Do you have a specific workflow for bundles that you wouldn't mind sharing?

All our assets are created according to a template, and all the parameters we need are promoted to the top level when the asset is created. So there's no need to dive inside and edit anything.

Would you mind being a little more specific about the parms you promote to the top level?

I don't understand, why can't you select an HDA? Houdini often won't let you select a Subnet in the dialogs, but if you manually type in the Subnet name it will still work (stupid Houdini).

What I mean is that I'm unable to select them inside of the light linker. They are not seen as an entity to select. I think this boils down to the problem that mantra's rendering parameters don't work on subnets (like Szymon has posted over at sidefx).

"Editable" is really a horrible name for the feature. Sidefx should have called this "Template" which is more accurate. When you drop an asset that is open for editing, it's really like dropping a template.

The only time I've used the editable feature in an Asset is when I have an asset that modifies the contents of an inner Subnet. Something like a tool that builds a POP network when you click a button. But even with this approach, I recommend providing the user with a parameter to reference the subnet that the modifications should occur in.

Can you post a file demonstrating the problem, and what it is you want to achieve? I don't use much light linking, but I don't do very complicated lighting either.

Template is a good description of what it's doing. I would like to be able to template just specific parameters like "light mask". I know I can promote this to asset level, but then there's the problem of not being able to select it in the light linker. If I were able to make the light mask parameter editable (as well as the other mask parameters), I would be able to do light and reflection linking inside the light linker.

So now we only use takes for animation revisions, and it's a tool used by the animators. When I do my render setups, I put multiple mantra nodes into an Asset, and then reference copy a master mantra node. That master node has all the settings for raytracing, while each reference copy changes the overrides, the matte objects, and the include/exclude lists of objects. They all connect to a merge node, and that's how I do my multiple passes. If I need to adjust the ray samples, I only edit one node, and since it's in an asset it propagates to all the other scene files.

It's a different way of working, but it's WAY easier to manage.

I also use the includes and excludes on the mantra nodes exclusively, as I think it's a much cleaner way to work. It's easier to see what's going on, and you don't have the hassle of managing your visibility flags and phantom/matte parameters. If you want, you can simply put down a mantra node and force-matte some objects without having to create a take and change each object individually (and after that create a new mantra node anyway which renders with that take).

Great input so far, guys. I see that most of you have figured out a lighting workflow in Houdini with the tools that are available. But my hope is that this discussion may also lead to a wealth of RFEs (like Szymon's) to improve the workflow and make it more artist-friendly. :)

cheers,

dennis


Let me disagree with you here. Why has the HDA pipeline become extremely unfriendly for rendering work? As I see it, there are no problems at all. It's all about the flexibility of your assets, and the scripts to manage them.

Because parameters have to be predicted and promoted to the top level and, as you say, during the creation stage by a person who most probably won't render it, at a moment when you don't actually fully know how it will be rendered.

Once you go the HDA route and create nested, multi-HDA assets which *evolve* during production, you are most probably in trouble. Your assets have 30+ channels linked downwards, you have a couple of variations of them in one scene, and you link a lot of string parms, which tend to be picky in evaluation. After all that extensive work of designing, promoting, thinking things through etc., you end up with scenes, most of which have opened assets with broken updates and deleted links in parameters, and which are 10x more complicated than their Object-based equivalents, because your lighting artists and render TDs really had to modify settings inside anyway, your shaders have to do tricks with double light sets for various contributions and such, and many of these things you really couldn't foresee until you try to render the final image.

Is it "did I design my HDA so badly?" or "next time I will do it better!"? No, it's just that closed containers for Houdini nodes, plus parameter linking, is not the right way to assemble scenes for a renderer (in my opinion). It is a great tool for many other things though, including building smart geometry generators from atomic procedural nodes and saving them in scene-independent files which update automatically.

The problem with HDAs, i.e. subnets in fact, is that they are not citizens of Mantra, and SOHO ignores them too, so automation is paid for with manual 1-to-1 setup. You have to link all your features yourself, unlike with Objects, where categories, links, and render properties are recognized and propagated by SOHO.

At some point you may ask yourself whether the time you put into setting this system up, making huge, over-complicated scenes, really pays off, since the effort invested in making your assets render-proof, compared to basic Object nodes containing meshes, may not be justified.

Anyhow, it seems you like your setup, so it must be much better than anything I've come across, which is great! I would love to hear about the scripting and asset management you do!

Another option is that your hard work, which you may not even be aware of, being happy you don't have to work in Maya anymore, is in fact a workaround for something which doesn't work as it was meant to.

IFD, like RIB, is a set of commands for a state machine, which structures the world with attribBegin/End statements. Houdini, at the end of the pipeline, is a tool for preparing such a file. It should (as it partially does with pop/camera/object/primitive) mimic this structure as much as possible, like it does with transformations, simply because subnets were designed to have a local coordinate system. Houdini doesn't support this clean path for generating an IFD/RIB file in the case of user/geometry/shading and Mantra-specific attributes. Instead it provides a set of half-finished, mutually exclusive solutions, broken by assets which are aliens (and poorly supported by takes), for dressing renderer-specific primitives with user-provided or intrinsic attributes.

This is more or less, what I meant saying "extremely unfriendly" :)

Edited by SYmek

  • 2 weeks later...

Hey all,

Is the H12 beta part of this conversation still going on, or are you guys still responding to this thread?

We just finished a whole project with a Houdini HDA pipeline. I have a lot to agree and disagree with you all about, for the sake of conversation/self-learning in this thread. We overcame most of these issues, but were presented with many challenges. Most of these RFEs were sent in a while ago, but maybe it would be better if we could come up with a common vernacular from multiple companies to express the request for these changes, so they actually get done.

Personally, my biggest problem was getting senior TDs to work with the pipeline. I could roll every single FX and model we did into a common asset design, but getting others to comprehend the how and the why created a lot of development and changes in concept. Everybody works differently, and it was hard to allow for the different workflows while defining what tools should and should not do. Also, the difference between a one-off asset and a common asset was hard for them to understand; at a big company like Imageworks, DD, or R&H these things have usually been prepped and designed for them already.

I can respond more, but the amount of discussion you've already had, and time to comprehensibly respond to it has grown large. So I'll wait for a response back.


Hey all,

Is the H12 beta part of this conversation still going on, or are you guys still responding to this thread?

First we have to convince SESI that these are important topics. The H12 beta thread doesn't work well, does it? Perhaps this is my fault. It's not that I felt I knew the answers, but I felt there was an issue there, so together it could be recognized. I gave rough proposals so as not to look like I'm only lamenting, and I'm ready to hear disagreements. The only thing I won't believe is that HDAs are good for rendering... :)

So if you care, please fire it up. For me it's a pretty vital thing.

Personally, my biggest problem was getting senior TDs to work with the pipeline. I could roll every single FX and model we did into a common asset design, but getting others to comprehend the how and the why created a lot of development and changes in concept.

Perhaps this is how it should be? The difference with lighting is that the pipeline is pinned down by what the renderer expects to get from you. It's not an arbitrary "I link lights this way", so it's good to keep the toolset close to that, and it's not good when there are more exceptions than rules. For example, you never know which parameters are properties, which will be picked up by SOHO, which are per-object and which are not, which will be used by shaders and illuminance loops, and which won't. But of course none of this concerns your assets, which are blind to all that technology, and practically takes as well, since 1) they are not good at managing lots of entries, and 2) properties have to be applied beforehand to make them work with takes.

I can respond more, but the amount of discussion you've already had, and time to comprehensibly respond to it has grown large. So I'll wait for a response back.

I think the only way to change things is to make this discussion happen. The SESI forum is the best option, I think.


Currently I don't have access to the H12 Beta thread, so I'll reply here. Below I briefly list how I got here, the system I used that led us to this logic, and some thoughts on the points raised. I don't have the time to clean this up to be very specific in my reply, but this should give some food for thought.

You'll have to come up with a very decisive argument for SESI to modify the fundamental setup. I've only been able to have this conversation with one other person so far who knows what I am talking about, and he was my boss, who drove me to create the system based on his previous ones at other companies.

HDAs are perfectly fine for rendering... if you use Mantra. For any other render package I concede the point, in lieu of a shitload of testing that I can't, or would not want to, do. Also, my mentor showed me a lot of tricks up his sleeve that circumnavigated problems that would derail normal people, plus we had SideFX support for the rest. That is how our setup got through a bunch of the problems. Plus some creative out-of-the-box thinking.

My latest background, for note, is coming off a project with FloqFX. We just finished a project called Wildest Weather in the Solar System for National Geographic: a 22-minute stereoscopic dome planetarium show, with a very minimal render farm. Mostly all in Houdini, with some Maya Mental Ray love from outside vendors for a couple of sequences.

I'm primarily a shading, lighting, and rendering guy, which is one of the reasons I got rolled into this task. So my personal focus is getting the stuff out of the computer. This concept is actually contrary to some people's view, which I directly ran into: "if it looks good, render it, no matter the cost". Most people were reasonable, but when it came down to this attitude, those FX became one-offs.

So one of the biggest distinctions we came across in our pipeline was one-offs versus assets. A one-off is an FX, or a severe customization of an asset, that you do per shot. An asset is an FX or model that is used repeatedly throughout the show, which needs to render and interact the same for everybody. For instance, from our show, a planet and the stars would be two different assets. This is how we framed our logic for the middle part of the pipeline. In my terms I called it Digital Asset Management... so I became the guy stuck managing all the otls for the show. To deal with this we made a common setup: when hit with a repeated task, organize and cull the repetition, that type of thinking.

As for not being able to manage all the properties and tasks the way Katana does: I would not think like that at all. Managing HDAs in Houdini is not like Katana, and if you try to make it that way you're in for a world of pain. You need to take a more simplistic view. We should not be personally responsible for building the IFDs for each frame; let the software do it for you. I am not a "programmer", so I try to make what I have work for me. I think this is one of the reasons I could solve the problems the way I did: I have no "experience" with big studios, so I had to use a different frame of reference, and never once did I think of the problem as trying to control the flow of properties and attributes. Not to be rude, but we just come from different backgrounds, which is why this discussion is interesting. That outlook just seems impossible to solve from my standpoint.

To hit a bunch more points in this thread, I'm going to explain more of how our setup worked and how we got around these issues. By the way, pipeline 2.0 was in development before production got in the way, so there are a few things we did not get around to doing, but I'm going to mix the hypothetical with the tested. There were lots of problems with our pipeline, but it worked, even after I got my hands out of it.

We had a three-point pipeline in Houdini: the camera and the ROP, two distinctly different entities that were tied together due to the nature of the dome stereo setup, and then the middle, third point was the asset manager, which we called a prop manager. We used a lot of theater terms to convey stuff.

The camera and ROP were pretty project-specific, so I'll skip talking about them. Some cool stuff, but different altogether.

For the common assets, the fundamental highest point was an object-level subnet. We never made assets that did not have one level of object-level space stored in them. This causes problems with light linking, but that could be solved in a different manner. Object space is the primary area where animation was supposed to happen. It's also one of the few ways you can place light rigs, character rigs, or anything else in a common, useful area.

So we customized this one subnet node into a common asset shell, which we called a prop manager. Every FX and model could flow from this, whether it was called from disk, like a model from Maya, or an FX native to Houdini. Inside this we had a highly modified prop loader, an FX area, a light linker, and a SHOP area.

We ended up having an OTL for each object to deal with the non-common parameters, for object controls and SOP controls. So we had to manually update each OTL, but you can instead have one OTL for everything that modifies and changes itself based on an XML file, which I strongly recommend. Then you are playing with the big boys. The only thing different on these common nodes was a collection of object-level parameter changes: stuff you can modify and change on a delayed load object, say, and stuff you would need to bake out for each frame, like a very customizable FX.
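To make the XML-driven idea concrete, here is a minimal sketch of what such a spec and its parser could look like. The schema, asset name, and parameter names are all invented for illustration; a real setup would map these values onto the generic OTL's parameters inside Houdini.

```python
# Hypothetical sketch: drive one generic OTL's parameters from an XML spec
# instead of hand-maintaining a separate OTL per object.
import xml.etree.ElementTree as ET

SPEC = """
<asset name="crate_A">
  <parm name="lod" type="int" default="2"/>
  <parm name="shader_path" type="string" default="../shop/crate_uber"/>
</asset>
"""

def load_parm_spec(xml_text):
    """Return {parm_name: default_value} parsed from the XML spec."""
    root = ET.fromstring(xml_text)
    casts = {"int": int, "float": float, "string": str}
    spec = {}
    for parm in root.findall("parm"):
        cast = casts[parm.get("type")]
        spec[parm.get("name")] = cast(parm.get("default"))
    return spec

parms = load_parm_spec(SPEC)
print(parms)  # {'lod': 2, 'shader_path': '../shop/crate_uber'}
```

The point is that adding or changing an asset's controls then becomes an edit to a data file rather than a destructive change to an OTL.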

We had plenty of OTLs embedded within OTLs. This was not a problem after the initial figuring-out-what-we-wanted phase. There were a few rogue issues with updating parameters on, say, the prop loader, but one solution was to drop all the assets of the show, allow editing of them (this is done by a Python script), modify one, and they would all update. Fucking sucky, and there is an inherent bug in there that I could not isolate to SideFX's satisfaction, but it worked, especially towards the end of the show. The initial couple of iterations of pipeline 1.0 sucked mightily while we figured out the nuances and problems with this setup. Some things, like menu promotions from one node to another, are not even worth trying.

Inside the prop manager we had:

The FX assembly area was where all SOP-based effects happened. Depending on your design you can have multiple of these, but I stuck to one.

The shopnet contained all the packets of shaders required for each asset. A packet was a subnet containing all the shaders for that object. This extra layer of network allowed me to call up the shaders, not the assets, to modify and change things. You could put this in the common /shop area for reuse across assets, but with our ubershader and VOP SOP OTLs, stuff went smoothly and packed into very small IFDs, even though all material shopnets had to be called into the scene for IFD generation. Also, even though we did have a couple of common shaders for the show, only in the one-off shots did we use one shader in an area like /shop and reference all the parameter names; this becomes a liability when you want to modify the shader. Most of our artists were comfortable creating their own shaders, so this was done commonly enough that almost all one-offs contained their own, often plucking parts from the show assets to make them.

By the way, all IFDs can and should be limited to under a megabyte, ideally a couple hundred kilobytes.

The light linker was the solution to the light mask issues you guys were having. The light linker should be able to accept subnets, and when an object does not have a light mask it should not be in that list. I also agree categories should have their own linker pane. We used a mask setup rather than a category setup, though I wish we had gone with categories in the first place. We could have switched over, but the artists' mentality wasn't ready for that.

The light linker was a modified geometry node in which all parameters except the masks and categories were excluded. The prop loaders, which were the only renderable objects in the scene, were all linked to this and locked, so that in the light linker you only had to select one object for all the objects in the asset. We made a self-imposed limit that an asset would interact with lights as one object. This is the only node inside the subnet asset that we left editable. Some artists had not worked with the light linker before and were only used to working with the mask at object level, and we tried to prevent artists from diving inside the subnets. I would love to explain to everyone how the system works, but I don't have time in my day to do this, so by keeping artists out of these nodes (outside the FX assembly area) there would be less breakage and easier problem solving. In order to keep the artists out, we made a second set of parameters that were truly linked to all the prop loaders and to a category setup above. The switch was: if masks were enabled at the top of the asset, that person was not using the light linker, and the default light mask parameter would switch to reference the one above. This solved the light linking problem.
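For anyone who hasn't used them, light masks are glob-style patterns with `*` wildcards and `^` exclusions. The following is a minimal standalone sketch of how such a mask resolves to a set of lights; the light names are invented and this only approximates Houdini's own pattern expansion.

```python
# Minimal sketch of glob-style light mask resolution, in the spirit of
# Houdini's light mask parameter ('*' includes, '^pattern' excludes).
from fnmatch import fnmatch

lights = ["/obj/key_light", "/obj/fill_light", "/obj/rim_light", "/obj/env_dome"]

def resolve_mask(mask, lights):
    """Expand a space-separated mask string to an ordered list of light paths."""
    included = []
    for token in mask.split():
        if token.startswith("^"):
            # exclusion token: drop anything already included that matches
            included = [l for l in included if not fnmatch(l, token[1:])]
        else:
            included += [l for l in lights if fnmatch(l, token) and l not in included]
    return included

print(resolve_mask("* ^*_dome", lights))
# ['/obj/key_light', '/obj/fill_light', '/obj/rim_light']
```

Switching the asset's default mask parameter to reference a promoted one, as described above, just changes which string feeds this kind of expansion.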

The prop loader was the renderable node in the asset. You would force output from the top of the subnet, but these were the parts that did the work. This node was common to every asset: it loaded the components of an object as pre-built delayed load objects, and it allowed for caching of the FX assembly area. It would also change relative shader paths into the final full shader paths, so in case you had two dozen copies of the same asset, no matter what the name changed to above, it would fill them in correctly. It also contained the delayed load SHOP inside it. It was very key that this was a locked-off asset common to everything, as it solved problems so much faster, allowing me to work with more assets than everyone else on the show. The outside of this node let you set all the object-level parameters, and the inside did most of the heavy SOP lifting, but with a very limited number of SOPs.
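The relative-to-full shader path fixup is easy to sketch outside Houdini. This is a hedged illustration only (the node paths are invented): each instance resolves a path like `../shop/uber` against its own location, so duplicated assets each end up pointing at their own shopnet regardless of what the asset was renamed to.

```python
# Hypothetical sketch of the prop loader's relative-shader-path fixup:
# resolve a relative shader path against the node that references it.
import posixpath

def resolve_shader(node_path, shader_path):
    """Return the full shader path for a (possibly relative) shader reference."""
    if shader_path.startswith("/"):
        return shader_path  # already absolute, leave untouched
    return posixpath.normpath(posixpath.join(node_path, shader_path))

print(resolve_shader("/obj/crate_A/prop_loader", "../shop/uber"))
# /obj/crate_A/shop/uber
print(resolve_shader("/obj/crate_B/prop_loader", "../shop/uber"))
# /obj/crate_B/shop/uber
```

Because resolution happens per instance, the two copies above resolve to different shopnets from the same relative string, which is the whole point.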

So hopefully the above explains some stuff and wasn't me rambling too much. Below are more specific comments.

I definitely agree with the explicit OTL naming convention. You should also set up a script to hide old OTLs when new ones come online; this should probably be a custom list in the common show Houdini 11 folder. We had a version number on each OTL: whenever you made a destructive change to the OTL, the version was bumped; if the change was only additive, you kept the same version number. Allow for three digits of padding.

[Network]_[assetname]_[version].otl
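A small helper makes the convention self-enforcing. This is an illustrative sketch, not production code; the regex assumes the exact `[Network]_[assetname]_[version].otl` shape with three digits of padding, and `bump_version` models the destructive-change case described above.

```python
# Illustrative helpers for the [Network]_[assetname]_[version].otl convention.
import re

PATTERN = re.compile(r"^(?P<network>\w+?)_(?P<asset>\w+)_(?P<version>\d{3})\.otl$")

def otl_name(network, asset, version):
    """Build a conforming OTL filename with three digits of version padding."""
    return "%s_%s_%03d.otl" % (network, asset, version)

def bump_version(filename):
    """Return the next version's filename (use only for destructive changes)."""
    m = PATTERN.match(filename)
    if not m:
        raise ValueError("does not follow the naming convention: " + filename)
    return otl_name(m.group("network"), m.group("asset"),
                    int(m.group("version")) + 1)

print(otl_name("sop", "crate", 1))        # sop_crate_001.otl
print(bump_version("sop_crate_001.otl"))  # sop_crate_002.otl
```

The hide-old-OTLs script mentioned above could use the same regex to group files by network and asset name and keep only the highest version visible.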

Object Merge in and of itself causes cooking, and sometimes will not cook when something is updated, so I would recommend not using it heavily. Use the performance monitor when in doubt. Suffice it to say, you never want to be waiting for Houdini; if there is a process holding you up, cache it or circumnavigate it if you can. Having your materials all in the SHOP area, as opposed to spread out, should not matter. You are probably having other problems, like live VOP SHOPs that are not OTLs with cached code enabled.

Naming conventions are key. Categories are great for this, but work out the naming convention with other people so that it makes sense to everyone.

It is not possible to make only certain parameters editable inside an asset. Promote them up, or make the whole node editable.

Takes are cool, but I would not use them to control shot-specific changes. I would only use them for render-time modifications to a scene when you break out to layers, or for simulation alternatives to wedges. Then again, I have never thought of using them to control changes per shot in general, so I have not thought out the pros or cons of that. Animation changes are something new to me too; something like that we call an output event, and we version up the scene.

We used forced objects and lights, and I used a naming convention similar to that of categories to simplify the process of making sure which nodes rendered when it came to alternative renders. We did not have the option of using render planes on this show.

We didn't touch bundles or groups; we considered that voodoo land, but that is probably more a lack of using them than anything else. Houdini has a lot of black magic voodoo that I still need to get my hands into and figure out.

Categories are the pet project of a couple of developers at SideFX, and they can be the logical solution to a lot of the problems with light masks. I had it explained to me a while back when I was there, and I hope they work a lot better in H12.

As for the HDA troubles that were had, I understand the issues that were and can be had, but you need to design a better asset that is more commonly controlled and pre-thought-out. I've run into each of these problems of upward and downward linking, and of modifications needed at render time. But that is where a one-off effect and an asset diverge to solve those problems: only once you have repeated the task enough times do you roll it into an asset.

My system sucks, since I know all the problems with it, but I think the biggest problem is the time it takes to solve them. A pipeline based on HDAs just takes time and is constantly evolving. Stuff that works for one shop and show will not work for the next shop or show.
