
why use renderman monolithic shaders?


haggi


Hi,

PRMan, as well as 3Delight, AIR, and others, uses monolithic shaders instead of shader nodes evaluated at render time like mental ray does. To me the mental ray approach looks more flexible, and elements can be reused in many shaders. In RenderMan, if I use e.g. a ramp, the whole ramp procedure is compiled into the shader.

So, does anyone know why RenderMan-compliant renderers use this shader evaluation model?
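To make the question concrete, here is a minimal sketch of what I mean (hypothetical shader, classic RSL): the ramp logic is compiled directly into the surface shader, so every shader that needs a ramp carries its own copy:

```
/* Minimal monolithic RSL surface shader (illustrative only).
 * The "ramp procedure" is baked into the shader body; any
 * other shader wanting the same ramp must inline it again. */
surface rampSurface(
    color top = color(1, 0, 0);
    color bottom = color(0, 0, 1);)
{
    color ramp = mix(bottom, top, t);  /* ramp along the t texture coordinate */
    normal Nn = normalize(N);
    Ci = Os * Cs * ramp * diffuse(Nn);
    Oi = Os;
}
```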


RenderMan's approach is actually more flexible and efficient than the mental ray, VRay, etc. approach for a large-scale production. For arch-viz and other smaller-scale projects, where shading, lighting, rendering, and comp are done by one or a few artists, the mental ray and VRay approach is simpler and more convenient.

In a large-scale production, the off-the-shelf shaders from both RenderMan and mental ray are usually not sufficient for the entire show, and new shaders, or new functionality in existing shaders, will have to be implemented. The only way to write new shaders for mental ray and VRay is in C/C++ via their APIs. Writing shaders in C/C++ is much harder and more troublesome than writing them in RSL, and compiled C/C++ code is platform dependent, which makes it harder to maintain on render farms. In addition, RenderMan offers much more extensibility, via DSOs etc., than the mental ray API. For simple shaders, using Slim is as easy as using Hypershade or the like. For the complex or cutting-edge shaders of a large production, studios have the resources to dedicate shading TDs and programmers to implement them.

Forgot to mention: you can also write mental ray shaders in MetaSL. I don't know if VRay has any equivalent. But MetaSL is not production-proven yet, and not as mature as RSL. Arnold will have its own shading language too once it is out.

Edited by kelvincai

Thanks for participating in this discussion.

I agree that platform dependency is a very important argument, and that API shaders have to be built much more carefully than RSL shaders because they can crash the whole system very easily.

But I don't agree with what you are saying about flexibility. With C++ I have access to a huge number of libraries, whereas in RSL I have to access an existing DSO or write one myself in C++ on top of those libraries.

In mental ray you can simply take a prebuilt shader and connect its inputs and outputs to create a new shader, which is impossible with RSL shaders. And you can share nodes within the network: e.g. if you have an expensive node like an ambient occlusion shader, you can feed its output into any number of shaders, so that calculation is done only once.

In RenderMan, every shader does its own thing and recalculates the whole network again and again.


It's not impossible; have a look at the co-shader feature of RSL, and the many other improvements to the language. The amount of performance and flexibility you can get with the new RSL is awesome.
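For readers who haven't seen them: a co-shader in RSL 2.0 is a shader class whose public methods other shaders can call at render time. A hedged sketch (names invented, and the exact syntax may vary slightly between PRMan releases):

```
/* Sketch of an RSL 2.0 co-shader class. The occlusion
 * computation lives in one place; other shaders call the
 * public method instead of inlining the code. */
class occProvider(float samples = 64;)
{
    public float getOcc(normal Nn)
    {
        /* gather-based occlusion; exact optional args may differ */
        return occlusion(P, Nn, samples);
    }
}
```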

Edited by Pazuzu

In Slim, you can share nodes and reuse networks.

In RfM, Maya shading nodes and networks are translated pretty well.

3delight even translates mental ray shading nodes and networks.

But I agree that RenderMan doesn't ship with as many advanced shaders (e.g. mia_material) off the shelf as mental ray.


Yes, I know that you can use nodes to create RenderMan shaders, and that you can build your own nodes and implement them so that they are used for shader compilation.

But all shading networks are compiled into one shader: e.g. if you have a color ramp and reuse it in 100 shaders, RenderMan builds 100 shaders, each containing the whole ramp, instead of sharing data and nodes at render time.

I just wanted to know whether there is a reason why these renderers don't use shading networks at render time instead of final compiled shaders. I suppose the answer is: it is not necessary. Of course you can save render time and a little memory by sharing nodes, but the overhead of building shader graphs internally would be too expensive. And because shader evaluation is done only on grid vertices, not at every sample point as in mental ray, the impact on render time would be very small.



As mentioned above, co-shaders are the answer to your question. This link should provide some insight, although it is based on a slightly older version of PRMan: http://www.fundza.com/rman_shaders/oop/intro/index.html

A lot has changed in RSL 2.0, but I'm not aware of any public resources for such info or I'd point you in that direction.
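To connect this back to the shared-ramp/occlusion example: a co-shader instance can be declared once in the scene and looked up by handle from many surface shaders, so the expensive node exists only once. A hedged sketch (handle and class names invented; based on my reading of the RSL 2.0 material, so details may be off):

```
/* Surface shader that reuses a co-shader instance by handle.
 * The instance would be declared once in the RIB, e.g.
 *   Shader "occProvider" "occ1" "float samples" [256]
 * and any number of surfaces can fetch and call it. */
class reuseSurface(string occHandle = "occ1";)
{
    public void surface(output color Ci, Oi)
    {
        shader occ = getshader(occHandle);
        normal Nn = normalize(N);
        float o = (occ != null) ? occ->getOcc(Nn) : 0;
        Ci = Os * Cs * (1 - o) * diffuse(Nn);
        Oi = Os;
    }
}
```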


Yeah, I can't agree more that co-shaders are incredible, for so many different reasons.

In my own experience, when changes are made underneath you, which happens a lot here, it used to be really hard to keep all the shaders using the same core code (e.g. stereo shaders, occlusion, etc.).

With co-shaders, you update the core shader and, across the board, the problem is solved or the feature now exists.

:blink:



Right now VRay has an (undocumented) shading language, but only for the RT engine. VRay's plugin interface can, however, be used as a language of sorts, and its architecture is already "node based" under the hood.

