
shading params and BSDFs


stelvis


I've noticed that the supplied VOPs that actually use BSDFs for PBR never seem to get passed any of the usual global variables like P and N, whereas for normal rendering this is common.

Looking at the actual VEX code, it appears that none of the internal BSDF functions get passed these either.

So is there any mechanism for altering this stuff before computing the BSDF?

For example, do they implicitly use the global variables supplied through the shading context (it's just never made explicit)? And if so (I presume they must, somehow), can we actually alter these? That is, if I modify global P inside the shader (rather than creating a new attribute like "newP" or something) and then call a BSDF function, will the BSDF actually use the modified value?

Second question: is there some mechanism within a shader to access the guts of how PBR is sampling the point?

For example: the number of samples fired, some kind of 'id' for the current sample, the total number of samples (though I guess I could get that from the output node), whether the shader is being called from a primary camera ray or from a secondary bounce ray, etc. (I know we can't necessarily get the exact ray type, but seeing how PBR can limit bounces, I assume it somehow knows the difference between primary and secondary sample rays internally.)
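One partial avenue here (an assumption on my part that these behave sensibly under PBR - they are documented for the ordinary ray-traced shading contexts): VEX does expose getraylevel() and getrayweight(), which at least distinguish primary from secondary rays. A minimal probe shader might look like:

```
surface ray_probe()
{
    // getraylevel() reports the bounce depth:
    // 0 for primary (camera) rays, > 0 for secondary rays
    int level = getraylevel();

    // getrayweight() reports this ray's contribution to the pixel
    float weight = getrayweight();

    // Quick diagnostic: tint anything hit by a secondary ray red
    Cf = (level == 0) ? set(1, 1, 1) : set(1, 0, 0);
}
```

Whether these report anything meaningful inside PBR's path tracing, as opposed to micropolygon/raytrace rendering, is part of the open question.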


Okay - in certain respects the previous post is incorrect in its assumptions.

You can pass a normal to BSDFs; it's just that the Lambert VOP and the Shading Model VOP set to diffuse don't do so explicitly. Simply changing one bit of the inline code worked for that:

i.e. in the BSDF part of the code, changing bsdf = Kd * diff * diffuse() to bsdf = Kd * diff * diffuse(Nf) (paraphrasing a little - it's not open in front of me right now) works fine.
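As a minimal sketch of that change (assumptions: this is written as a standalone VEX surface shader rather than the actual inline VOP code, and Kd/diff are just parameter names matching the snippet above):

```
surface diffuse_with_normal(
    float  Kd   = 1;
    vector diff = {1, 1, 1};
)
{
    // Forward-facing shading normal, flipped towards the eye
    vector Nf = frontface(normalize(N), I);

    // Pass Nf explicitly instead of relying on the no-argument
    // diffuse(), which falls back to the global normal
    F = Kd * diff * diffuse(Nf);
}
```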

regarding P:

I guess I had assumed that one can actually 'fool' mantra (of any flavour) into computing shading from a different P than the one derived from the shaded point, by directly altering global P in the shader (I don't mean displacing the surface, just displacing where the shading is computed from).

is that assumption actually true?

So far some experiments seem inconclusive: I can apparently 'break' the shading by setting a parameter called P to something, but I have yet to get anything that looks like I've actually displaced the shading in world space (e.g. by adding 0.1 to each component of global P and passing that to a parameter called P).
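For comparison, here is roughly what that experiment looks like in raw VEX rather than via a P parameter on a VOP (hedged: whether mantra's BSDF evaluation actually honours the reassigned P, instead of sampling from the original shaded point, is exactly the open question):

```
surface shifted_shading_test(float offset = 0.1)
{
    // Attempt to move the shading point before building the BSDF.
    // P is writable in the surface context, but the renderer may
    // well keep sampling lights and bounces from the original point.
    P += set(offset, offset, offset);

    vector Nf = frontface(normalize(N), I);
    F = diffuse(Nf);
}
```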


Good point; it seems as if there are only two signatures for, say, diffuse().

http://www.sidefx.com/docs/houdini10.0/vex/functions/diffuse

bsdf diffuse()

bsdf diffuse(vector nml)

So it would seem that the rest of the bsdf might be closed, yeah? Perhaps there are technical reasons why remapping P might be difficult? E.g. perhaps importance sampling, etc., is calculated strictly at P?



I'm not even sure it's got anything to do with PBR itself, really. Now that I think about it some more, just tweaking P alone wouldn't work - you'd also need some way of altering the associated vectors (e.g. the eye vector, the vectors to lights, etc.) for it to make sense in terms of actually computing lighting from a different point.

What motivated this: I was trying to think of a "PBRish" way to do SSS - by PBRish I mean the whole idea of having only one path-tracing sample per bounce per eye sample.

That way, I was thinking you could (maybe) get SSS effects for 'free', as it were, if you could somehow get mantra to shift where it shades across the surface for a single shading point, since sampling the local surface area in that fashion would have the same expense, samples-vs-noise wise, as calculating an indirect lighting bounce...

PBR appeals to me in that it appears to simplify 'shading optimisation' (in terms of choosing noise vs quality) down to one place - the number of primary eye samples - which scales very linearly. By contrast, any 'normal' shading approach that requires shaders to fire off multiple rays and gather the results in a loop each time the shader runs creates many more places where you may have to tweak that balance, and many more opportunities for exponential increases in render time: indirect bounces get exponentially more expensive, whereas with the PBR sampling approach they are relatively 'cheap', as the cost increase is more or less linear with each extra bounce. The caveat is that each subsequent bounce gets progressively noisier - though that is offset by the fact that each subsequent bounce level also generally has a progressively less significant effect on overall illumination.

It would be great to see some examples of shaders for PBR that weren't simply "pass a BSDF" and that's more or less it. I think that would really help me get my head around what's doable.

