Anisotropic Reflections


Any way to achieve them in VOPs? Mind you, I'm talking strictly of raytracing, not speculars or env maps. Thanks!

Quick answer: Not really. Or rather, not easily... and if you value your sanity: not in VOPs.

Longer answer: generate a number of uniformly-distributed sampling directions over the hemisphere about N, trace along each direction, and add the BRDF-weighted result into an accumulator. Finally, multiply the accumulated value by solid_angle/samples, which, for a hemisphere, would be 2*PI/samples.
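In symbols (just restating that recipe, with f_r the BRDF, L_i the radiance arriving from direction w_i, and N samples):

   \[ L_o(\omega_o) \;\approx\; \frac{2\pi}{N} \sum_{k=1}^{N} f_r(\omega_{i,k},\omega_o)\, L_i(\omega_{i,k}) \cos\theta_{i,k} \]

(In the code below, the cosine term gets folded into the BRDF function.)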

That would be your basic, "raw", Monte Carlo estimator -- and it would be shockingly inefficient (meaning you'd need a ton of samples to get a somewhat passable result... if you squint... really hard).

You can improve the efficiency to the point where it becomes practical, if you can manage to place your samples only "where they matter the most" (i.e: if you can match your sample distribution to the density distribution of your BRDF), but as you can imagine, this complicates the algorithm quite a bit.
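In symbols again, importance sampling just swaps that constant 2*PI/N weight for a per-sample one, where p is the density your directions are actually drawn from (uniform sampling is the special case p = 1/(2π)):

   \[ L_o(\omega_o) \;\approx\; \frac{1}{N} \sum_{k=1}^{N} \frac{f_r(\omega_{i,k},\omega_o)\, L_i(\omega_{i,k}) \cos\theta_{i,k}}{p(\omega_{i,k})} \]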

Actually, this is a pretty deep subject. You can google for "Monte Carlo integration", or "Quasi-Monte Carlo integration", or "importance sampling". Followed by "random variates" and "probability density function" and "cumulative distribution function". Followed by... you get the picture.

Anyway, implementing the vanilla estimator is pretty straightforward. First you need a way to distribute points on a unit hemisphere uniformly (in terms of solid angle). One way to do this is to take two uniformly distributed random variables and map them to a single direction on the hemisphere. Here's one approach:

// Map two uniform variates (u1,u2) to a direction on the +z unit
// hemisphere, uniform in solid angle, then transform to 'space'.
vector rv_UniformHemi(matrix3 space; float u1, u2) {
   float z   = u1;                      // cos(theta), uniform in [0,1]
   float r   = sqrt(max(0.,1.-z*z));    // sin(theta)
   float phi = 2*M_PI*u2;               // azimuth, uniform in [0,2PI)
   return set(r*cos(phi),r*sin(phi),z) * space;
}

That distributes samples over the +z hemisphere and then transforms them into the given space. Note that the mapping preserves area, so samples don't bunch up at the pole, but that has nothing to do with the "quality" of the distribution. If you just feed it the usual output from an RNG, you'll get... crap... but that's a whole other topic.
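(As a teaser for that other topic: one standard fix is to stratify, i.e. jitter, the variates before mapping them. A minimal sketch, assuming a caller that loops i from 0 to nu*nv-1; the function name and layout here are mine, not part of the code above:)

// Hypothetical sketch: stratified (jittered) variates. Splits the unit
// square into an nu x nv grid and jitters one sample inside each cell,
// so the variates can't clump the way raw RNG output can.
vector rv_StratifiedHemi(matrix3 space; int i, nu, nv) {
   float u1 = ((i % nu) + nrandom()) / nu;
   float u2 = ((i / nu) % nv + nrandom()) / nv;
   return rv_UniformHemi(space, u1, u2);
}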

Next, we need a BRDF to sample. If we take Houdini's (Ward's) anisotropic model and split it into two separate functions, one for the BRDF and one for the local illumination, it might go something like this:

// Ward's anisotropic model (half-vector form), scaled by cos(theta_i).
// n = normal, (x,y) = tangent frame, wi = incoming (light) direction,
// wo = outgoing (eye) direction, rough = (alpha_u, alpha_v).
float brdf_anisotropic(vector n,x,y,wi,wo,rough) {
   float rho   = 0,
         cos_r = dot(n,wo), 
         cos_i = dot(wi,n);

   if(cos_r>0. && cos_i>0.) {
      float  norm = 4.*M_PI*rough.x*rough.y;
      vector h    = normalize(wi+wo);    // half-vector
      float  uval = dot(x/rough.x,h);
      float  vval = dot(y/rough.y,h);
      rho = cos_i*exp(-2.*(uval*uval + vval*vval) / (1.+dot(h,n)));
      rho /= norm*sqrt(cos_i*cos_r);
   }

   return rho;
}
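(For reference, that's this form of Ward's model, times the extra cos θ_i factor:

   \[ f_r(\omega_i,\omega_o) \;=\; \frac{\exp\!\left(-2\,\frac{(\mathbf{h}\cdot\mathbf{x}/\alpha_x)^2 + (\mathbf{h}\cdot\mathbf{y}/\alpha_y)^2}{1+\mathbf{h}\cdot\mathbf{n}}\right)}{4\pi\,\alpha_x\alpha_y\,\sqrt{\cos\theta_i\,\cos\theta_r}} \]

with h the normalized half-vector between w_i and w_o.)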

// Local illumination: accumulate each light's contribution, weighted
// by the BRDF. Returns a color, hence the vector return type.
vector illum_anisotropic(vector p,n,x,y,wo,rough) {
   vector C=0;
   illuminance (p, n, M_PI/2, LIGHT_SPECULAR) {
      vector wi = normalize(L);
      C += Cl*brdf_anisotropic(n,x,y,wi,wo,rough);
   }
   return C;
}

We split them because we don't want to be tied to only sampling along light directions. In this case for example, we're going to want to sample reflections over a whole hemisphere of directions. So now that we have a light sampler (the illum_anisotropic() function above), we need an environment sampler:

// Brute-force environment sampler: trace 'samples' uniform directions
// over the hemisphere, weigh each hit by the BRDF, and average.
vector env_anisotropic(matrix3 space; vector p,n,x,y,wo,rough; 
                        float bias; int samples; string scope) 
{
   vector Csamp;
   vector Cr = 0;
   int i;
   for(i=0;i<samples;i++) {
      vector wi = rv_UniformHemi(space,nrandom(),nrandom());
      Csamp = reflectlight(p,wi,bias,1,"scope",scope,"maxdist",-1,"angle",0);
      if(Csamp!={0,0,0}) Csamp *= brdf_anisotropic(n,x,y,wi,wo,rough);
      Cr += Csamp;
   }
   // Monte Carlo estimate: (hemisphere solid angle / N) * weighted sum.
   return 2.0*M_PI*Cr/samples;
}

Add parameters/options to taste. Finally, all we need is a little test shader to set up P's local (tangent) space and make the various calls. Here's a test:

AnisoRefl.zip
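(If you'd rather not open the file, the skeleton of such a test shader might look roughly like this. It's a sketch with made-up parameter names and defaults, not the exact contents of the zip:)

// Hypothetical test shader: build P's tangent space from the surface
// derivatives and call the light and environment samplers above.
surface aniso_test(vector rough = {0.1, 0.01};
                   int    samples = 64;
                   float  bias = 0.01)
{
   vector nn   = normalize(frontface(N, I));
   vector xdir = normalize(dPds - nn*dot(nn, dPds));   // u tangent
   vector ydir = normalize(cross(nn, xdir));
   matrix3 space = set(xdir.x, xdir.y, xdir.z,
                       ydir.x, ydir.y, ydir.z,
                       nn.x,   nn.y,   nn.z);
   vector wo = normalize(-I);

   Cf  = illum_anisotropic(P, nn, xdir, ydir, wo, rough);
   Cf += env_anisotropic(space, P, nn, xdir, ydir, wo, rough,
                         bias, samples, "*");
}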

[Attached: three test renders]

That's at 500 samples... not only does it, well, suck, but it's also veeeery slooooow.

The good news is: this is as bad as it gets :lol: -- any one of a whole bunch of possible improvements will make it faster/better. Woohooo!

Making it better is left as an exercise to the reader :P

At the very least, this should help explain why we've been asking for a "gather" loop and a way to pass PDFs to Mantra (some/all of which may show up in 9.0).

HTH.


Thank you Mario, I was hoping you'd answer this thread, though I have to admit you lost me at "generate a number of". :lol:

I kinda had a feeling when I made this thread that I was treading in deep water. I appreciate you taking the time to explain (just sorry the effort is wasted on a noob like me). Hopefully someone can take the info and make use of it/build on it.

I shall have a looksee at the file.


You're very welcome. I hope it's somewhat useful, even if just as a sandbox to test other ideas.

Just for a hoot, I left a test rendering with 10,000 samples while I went out for a while. And when I came back, I got this:

[Attached: the 10,000-sample render]

10,000 samples took 4 hrs, 16 min! :lol:... well, at least you can see what the poor little thing was so desperately trying to render before... except I think now you can also start to see the renderer's grid boundaries (unfortunately I also used a grid for the walls, so it's hard to tell which is which; if it is the renderer's grids, then I believe it's an issue related to using nrandom())... OK, it's earned a little rest now :).


There's a quick way to get this if you use MR. Take the little blureflect shader from my website and replace mi_reflection_dir_glossy() with mi_reflection_dir_anisglossy() (check the MR docs for the function description).


I'm not familiar with MR, but reading their docs for that function, it certainly sounds like it would work, at least for the built-in "glossy" and anisotropic models.

Actually, I think the latest version of 3Delight supports arbitrary distributions (via env maps) for both gather() and occlusion(), though I haven't tried them yet. I think the ability to pass arbitrary distributions (through an env map like 3Delight, or some other mechanism) is a very useful thing, and Mantra should support it. As an intermediate solution, it would be good if it at least supported a "gather" for all the built-in models (like the MR functions you pointed out do). Then again, for all I know, this could already be in the works for 9.0... let's hope.


I've done a shabby-looking test render. It renders in under 5 seconds with 16 samples/pixel. The number of ray samples per pixel equals the number of pixel samples, and they are filtered in the same way. You have to turn on "compute first derivatives" in the MR ROP, or else the shader won't work.

[Attached: anisotest.jpg]

It's a sphere on the left side that reflects the torus, with shinyu and shinyv set alternately to 1, 100 and 100, 1. I didn't bother to chain in a direct illum shader so the sphere is black.

The shader code:

#include "shader.h"

struct blureflect {miScalar shinyu; miScalar shinyv;};

DLLEXPORT int blureflect_version(void) {return 1;}

DLLEXPORT miBoolean blureflect(miColor *result, miState *state,
                               struct blureflect *param)
{
   /* a plausible completion; check the MR docs for the exact
      arguments of mi_reflection_dir_anisglossy() */
   miVector u = state->derivs[0], v, dir;  /* needs first derivatives */
   mi_vector_normalize(&u);
   mi_vector_prod(&v, &state->normal, &u); /* v = n x u */
   mi_reflection_dir_anisglossy(&dir, state, &u, &v,
      *mi_eval_scalar(&param->shinyu), *mi_eval_scalar(&param->shinyv));
   return mi_trace_reflection(result, state, &dir);
}

Hey Hwee,

That function does work.

I never for a moment doubted that it would. :)

But just to make sure that someone casually reading this thread doesn't get the wrong idea: that timing of 4 hrs, 16 min that I mentioned is *not* due to some deficiency in Mantra's raytracing abilities. It is entirely and emphatically due to the brute-force approach that I described (which was given as an illustration of how a bare-bones Monte Carlo estimator might be built in VEX).

One *can* (and should) refine that method so that, at the very least, its sampling is tuned to the BRDF being used. For example, an optimized (but still in VEX) implementation of Ashikhmin's anisotropic model with the same settings as Hwee's test (shiny=[1,100] and [100,1]) and for a similar scene, with 16 (4x4) pixel samples and 50 reflection samples, renders in roughly 9 seconds -- a heck of a lot better than 4.5 hours. Although it could never beat a renderer's native implementation, of course.

[Attached: two test renders]

Timings on my machine were 8.42 and 11.04 seconds for those two (and you'll just have to take my word for it because I'm not sharing the code, sorry). And that's abusing the reflectlight() call which I'm sure was never built as a replacement for a gather mechanism.

Anyway, just wanted to make that clear before someone starts yelling "OMGBBQBRDF Mantra SuXoRs!!!11!!one!!!"

P.S: Even though I can't share the code, I *can* tell you that 99% of the guts of that implementation came from the absolutely awesome book Physically Based Rendering by Matt Pharr and Greg Humphreys -- highly recommended!
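(For the curious: the half-vector sampling that book derives for Ashikhmin's model translates to VEX roughly as follows. This is my own sketch from the book, emphatically *not* the withheld code above. nu and nv are the Phong-style exponents, and since VEX passes function arguments by reference, pdf is written back to the caller:)

// Sketch (after Pharr & Humphreys): sample a half-vector h with
// pdf(h) = sqrt((nu+1)(nv+1))/(2PI) * (n.h)^(nu*cos^2(phi)+nv*sin^2(phi)).
vector rv_AshikhminHalf(matrix3 space; float nu, nv, u1, u2, pdf)
{
   // Sample phi in the first quadrant, then mirror into the right one.
   float q = 4.*u1;
   float u = frac(q);
   float phi = atan(sqrt((nu+1.)/(nv+1.)) * tan(M_PI*u*0.5));
   if      (q >= 3.) phi = 2.*M_PI - phi;
   else if (q >= 2.) phi = M_PI + phi;
   else if (q >= 1.) phi = M_PI - phi;

   float cp = cos(phi), sp = sin(phi);
   float e  = nu*cp*cp + nv*sp*sp;          // exponent for this phi
   float ct = pow(1.-u2, 1./(e+1.));        // cos(theta) of h
   float st = sqrt(max(0., 1.-ct*ct));

   pdf = sqrt((nu+1.)*(nv+1.)) / (2.*M_PI) * pow(ct, e);
   return set(st*cp, st*sp, ct) * space;
}

The reflected direction is then wi = 2*dot(wo,h)*h - wo, with p(wi) = p(h)/(4*dot(wo,h)), and that p(wi) is what divides each sample in the importance-sampled estimator further up the thread.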

