Questions about shader writing


~nature~

Hi everyone,

I have some questions about shader writing and need your help here ^-^

1. How can I implement the perfect mirror reflection BRDF (see the image below) in a shader? I got frustrated when I tried to convert the delta functions in the BRDF into shader code.

My goal is to write a shader that can create a square specular highlight from an area light source.

2. What does "Cl" actually mean in the illuminance loop? Does it mean "irradiance" (watts per square meter)? I just wonder how it works under the hood in mantra. If I use the physically correct attenuation, will mantra calculate "Cl" at the shading point like this: Cl = I*cos(theta)/r^2, where I is the intensity of the light, theta is the angle between the incident light direction and the shading-point normal, and r is the distance between the sample point on the light and the shading point?

Hope you can help me. Thanks very much :D

post-5114-129948908459_thumb.jpg


1. The product of those 2 delta functions is zero everywhere except when the viewing direction is the exact mirror (about the normal) of the incident direction (or, stated in polar coords, when theta_r==theta_i and phi_r is exactly +/-PI radians, or 180 degrees, away from phi_i), at which point the argument to both delta functions is 0 and therefore the functions themselves evaluate to 1 (as does their product). The scaling of 1/cos(theta_i) is there to cancel out a cos(theta_i) factor that would normally appear outside the brdf to convert incident power to irradiance.
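(For reference, and assuming your attached image shows the usual polar-coordinate form, the perfect mirror BRDF is typically written as

f_r(\theta_i,\phi_i;\theta_r,\phi_r) = \frac{\delta(\cos\theta_i - \cos\theta_r)\,\delta\big(\phi_r - (\phi_i \pm \pi)\big)}{\cos\theta_i}

i.e. exactly the product of two delta functions, scaled by 1/cos(theta_i), as described above.)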

All of it essentially boils down to a radiant value of "I" along the exact mirror direction from incidence, and 0 everywhere else -- an effect we all know as a "mirror reflection".

What do you mean by a "square specular"?

2. The kind of analysis you mention in #1 is better suited to a statistical context where the BRDF's can be explicitly sampled (like in PBR). It's not really suitable for "illuminance loops" (you mention Cl) where you have no control over the directions in which to sample incident illumination. In that context, the probability that any one of the samples that the loop iterates over is in the exact mathematical mirror direction to the viewing direction is pretty much zero -- so yeah, not the right context to be thinking in terms of delta distributions. In the traditional old-style shading approach, a perfect mirror reflection would have necessitated a "reflection map", which you can indeed sample in a specific direction. In that method, the illuminance loop is only used to do approximations to broad glossy or diffuse reflections of light sources.

HTH.


Hi Mario, many thanks for your reply. :D

What I meant by "square specular" is the reflection of the light source, in this case the square reflection of a square area light. You can see the image below, which is from the Sony Pictures Imageworks paper:

http://renderwonk.com/publications/s2010-shading-course/martinez/s2010_course_notes.pdf

post-5114-129956911558_thumb.jpg

I am curious how to implement this physically based light-source-shape specular in a shader. I also found that the mantra surface shader in Houdini does a good job with the "square specular" for the Phong, Blinn, and cone models.

Here I was trying to implement my own version, but it obviously suffers from several problems:

1) Is the sampling issue what you mentioned before?

2) The light intensity is 8, but the specular is only about 0.2. I guess it suffers from the "Cl" attenuation at the shading point (I use "Physically correct attenuation" and turn off "Normalize light intensity to area").
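(Just to sanity-check my guess with some made-up numbers: if the light were, say, about 6 units away and roughly facing the shading point, then Cl ≈ 8·cos(theta)/6^2 ≈ 0.22, which is about the level I am seeing. The distance is purely hypothetical; I only want to show why I suspect the attenuation.)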

The image rendered by my shader

post-5114-129956912676_thumb.jpg

post-5114-129956914436_thumb.jpg

The image rendered by mantra surface shader

post-5114-129956934886_thumb.jpg

However, the radiance (W·sr⁻¹·m⁻²) in a perfect mirror reflection is constant, so the expected intensity of the specular should be the same as the light intensity, am I right? This is also why I asked what "Cl" actually means in the shading context, because "Cl" behaves pretty much like "irradiance" rather than "radiance". I have read your wiki post about "ReflectanceFunctions", where you mentioned:

accumulator += Cl * BRDF;   // (the cos term is hidden in the BRDF)

So according to that equation, "Cl" here means "radiance" rather than "irradiance", am I right? This point really confuses me.

3) Did you mean that this light-shape specular cannot practically be implemented in a traditional shader due to sampling issues?

If so, how can I do it using VEX or RSL? Could you enlighten me a bit? Is the mantra surface shader's specular function implemented as something other than a traditional shader?

My code is as follows:


#include <math.h>

vector phone_brdf(vector N, I; float roughness)
{
    vector Rd = 0;
    vector Nn = frontface(normalize(N), I);

    illuminance(P, Nn, M_PI_2)
    {
        shadow(Cl);
        vector R = reflect(-normalize(L), Nn);
        vector V = -normalize(I);
        float rdv = dot(R, V);
        if (roughness == 0)
            // I implement the delta function like this; I am not sure if it is
            // correct -- the 0.02 is there to overcome numerical issues.
            Rd += Cl * floor(abs(rdv) + 0.02);
        else
            Rd += Cl * pow(max(rdv, 0), 1/roughness);
    }
    return Rd;
}

#pragma label Cp "Phone Color"
#pragma hint Cp color
#pragma label Kp "Phone Amplitude"
#pragma label rough "Roughness"
#pragma range rough 0 1

surface
my_brdf(
    vector Cp = 1;
    float Kp = 1.0;
    float rough = 0.1;
)
{
    Cf = Cp * Kp * phone_brdf(N, I, rough);
}

Best regards

Lianyi

square reflection.hipnc

perfect_brdf.otl


The mirror reflection brdf is a bit of a strange animal in that its density distribution integrates to 0, which is why it's modeled as a delta distribution (which is more like a limit than a function). In any case, if you were writing it as a VEX function that computes the fraction of energy leaving the surface in the direction 'wo', after arriving from direction 'wi' at a location on the surface with normal 'wn' (all vectors unit length and pointing away from the surface position 'P' -- and note that here we're using vectors instead of spherical angles), then it might look something like this:

#include <math.h>   // for M_PI_2

// Literal mirror BRDF: returns 1 only when wo is the exact mirror of wi about wn.
float brdf_mirror(vector wi, wn, wo) {
    return (wo == reflect(-wi, wn));
}

// Gather incident light and weight it by the mirror BRDF.
vector illum_mirror(vector p, wn, wo) {
    vector out = 0;
    illuminance(p, wn, M_PI_2)
    {
        shadow(Cl);
        vector wi = normalize(L);
        out += Cl * brdf_mirror(wi, wn, wo);
    }
    return out;
}

This would be a direct interpretation of the delta function you posted above -- a function that returns zero everywhere except for the unique case where wo is in the exact mirror direction (about wn) of the incident vector wi (where it returns 1) -- a situation which, if drawing from a random set of directions wi, would occur with probability 0. That's what I meant when I said that it's not a very useful model in the context of an illuminance loop, where the wi's are chosen for you by Mantra -- that is: inside an illuminance loop, *Mantra* decides where the samples on an area light will go, not you, and the chances that it will pick a sample (with direction 'wi') that just happens to exactly line up with the mirror direction of the viewing vector ('wo' above) are zero.

And, as expected, it looks like this:

post-148-129969964306_thumb.jpg

The only way to work with a delta distribution is to sample it explicitly -- you manually take a sample in the single direction where you know the function will be meaningful. This can be done either using ray tracing (see the functions reflectlight(), trace(), and gather()), or using a reflection map (see the function environment()) -- but *not* inside an illuminance loop. This is not "cheating", it just follows from the kind of statistical animal we're talking about. Even the PBR path tracer handles delta BxDF's this way -- when a BSDF contains a delta lobe, it will, when sampled, return a single direction with probability 1, and be excluded from multiple importance sampling.

Here's a version using trace(). The only catch is that, when using ray tracing (as opposed to a reflection map), you'll need to turn the light geometry into an actual object so that it can be reflected:

vector illum_trace(vector p, dir; float maxcontrib) {
    // Using reflectlight():
    //return reflectlight(p, dir, -1, maxcontrib);

    // Or... using trace() instead of reflectlight():
    vector hitCf = 0;
    trace(p, dir, Time, "raystyle", "reflect", "samplefilter", "opacity", "Cf", hitCf);
    return hitCf;
}

And it looks like this (using the RT engine):

post-148-129969966206_thumb.jpg
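For completeness, a minimal surface shader that wires this up might look something like the sketch below (the shader name and parameter default are made up, and it assumes the functions above are in scope):

surface mirror_demo(float maxcontrib = 10.0)
{
    vector Nn  = frontface(normalize(N), I);
    vector wo  = -normalize(I);          // direction back toward the eye
    vector dir = reflect(-wo, Nn);       // exact mirror direction about Nn

    // Explicitly sample the one meaningful direction via trace()/reflectlight():
    Cf = illum_trace(P, dir, maxcontrib);

    // The illuminance-loop version, illum_mirror(P, Nn, wo), will stay black
    // for the reasons described above.
}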

Here's your hipfile augmented with those two approaches (the otl is embedded in the file).

square reflection_mgm.hipnc

Oh, one more thing: A Phong lobe is not the same as a Delta lobe -- if you want Phong then just use the phongBRDF() function (and note it's "phong", not "phone").

Cheers.


Mario :P Thanks for your invaluable information.

Coincidentally, on this day 135 years ago Bell invented the first "phone"; he should thank me for inviting him into the wonderful world of computer graphics. B)

Mario, sorry for being slow, but there are still several points about mantra rendering that are not clear to me.

1) How do I access the Multiple Importance Sampling (MIS) feature in Houdini? Sorry for the stupid question :(

2) The "Physically Based Specular" VEX node in the mantra surface shader does a perfect job with the square highlights in the Phong model, and can also produce geometry-light-shape highlights much like mia_light_surface in mental ray.

Does it use specularBRDF()?

Does it employ Multiple Importance Sampling, or does it invoke the PBR engine implicitly?

3) What is the difference between phongBRDF(), vector phong(), and bsdf phong()? Will they invoke different rendering engines at render time?

selfill-demo2.jpg

Best regards.

Lianyi


1) How do I access the Multiple Importance Sampling (MIS) feature in Houdini? Sorry for the stupid question :(

The short answer: "Use PBR" :)

MIS is used by the default PBR path tracer.

The path tracer is written in VEX and, if you're interested, you can look at its source code in $HH/vex/include/pbrpathtrace.h. This means you could, in theory, customize pretty much all of PBR except for the BSDFs (bsdf's are not written in VEX).

2) "Physically based specular" vex node in mantra surface does the perfect square highlights job in the phong model and can also produce the geometry light shape highlights pretty much like mia_light_surface in mentalray.

Does it use specularBRDF()?

Does it employ Multiple Importance Sampling or it invokes the PBR engine implicitly?

The PhysicallyBasedSpecular VOP, and all the other "Physically Based xxxx" VOPs, resolve to a BSDF -- notice that its output (F) is not a color (vector type) but a BSDF type (an opaque type that represents a linear combination of scattering distributions, or "lobes"). All the nodes that only output an 'F' (a bsdf) are meant to be used with the PBR engines. You can look at their code by RMB-clicking on the VOP and selecting "Type Properties...", then clicking on the "Code" tab of the Type Properties dialog to see the source code for that VOP. You'll notice that none of these "Physically Based" VOPs use illuminance() or phongBRDF() or any of those functions. PBR samples (or transports) light differently than MP or RT -- for example, you'll see things like sample_light() instead of illuminance(), and sample_bsdf() instead of phongBRDF()... similar ideas but a different approach (in PBR, a BRDF is a probability distribution instead of a weighting function, and things like MIS are used to balance the various importance measures assigned to each sampling strategy).

3) What is the difference between phongBRDF(), vector phong(), and bsdf phong()? Will they invoke different rendering engines at render time?

*/ float phongBRDF() is the standard Phong lobe as a weighting function (in [0,1]) -- note that it returns a float.

*/ vector phong() computes illumination using the Phong lobe as a weight (i.e., using phongBRDF() as the weighting function). That is: it returns the color (notice it returns a vector, not a float) of the incident illumination, as weighted by phongBRDF(), and so is equivalent to using phongBRDF() inside your own illuminance loop.

*/ bsdf phong() is, again, the Phong lobe but this time expressed as a probability distribution. It is normalized in the sense that it integrates to 1 over its domain of incident directions (a hemisphere in this case), meaning that, unlike phongBRDF(), its range is not necessarily in [0,1]. Also note that its return data type is "bsdf", the contents of which are inaccessible to the user (you can only combine bsdf's with other types in certain ways but not manipulate their values directly). Long story short: these "bsdf" animals are meant to be used with the PBR engines -- they can be sampled and evaluated to resolve into a color, yes, but the scaffolding required to make that happen correctly (or in a useful way) is, well, a path tracer, not an illuminance loop.

*/ None of these functions "invoke" anything -- they just compute and return values. But, yes, some shading globals (like F) are only used by certain engines (F -- and the code path that defines it -- is only executed when rendering with PBR, for example). So, any assignment to the global F when rendering using, say, the MP engine, would be ignored, and conversely, any assignment to Cf will be ignored by the PBR engines. But these functions themselves do not "invoke" anything.
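As a quick illustration of that last point, here's a minimal sketch (the shader name is made up, and I'm using a simple diffuse lobe rather than Phong just to keep it short): you can assign both Cf and F in one shader, and each engine simply ignores the one it doesn't read.

surface cf_or_f(vector clr = 1)
{
    vector Nn = frontface(normalize(N), I);

    // The MP and RT engines read Cf (a color) and ignore F.
    Cf = clr * diffuse(Nn);

    // The PBR engines read F (a bsdf) and ignore Cf.
    F = clr * diffuse();
}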

HTH.
