Leaderboard

Popular Content

Showing content with the highest reputation on 08/09/2009 in all areas

  1. Here is a simple scene that sets up some patchy Volumes (in Houdini 9) and renders them. Note I'm raytracing the shadows; you can probably speed things up by switching to Depth Map Shadows. Raytracing just gives instant results, useful more as a guide. lightsmoke.hipnc
    1 point
  2. The Maxwell people have put a lot of thought into their ball. Why not just use it (with attribution)?
    1 point
  3. OK. First stab. This model is feeling good! It uses the smooth() function (smoothstep() in PRMan) for the shape of the extinction. This falls to zero at a finite distance, which simplifies a bunch of things. For starters, it is very easy to come up with a normalizing factor (very hard to do for Jensen's model) so that the overall luminance is maintained nicely across the range of scattering distances.
     For those of you who don't have access to the paper, the full model is:
         SSS(P) = SUM over i [ Ai * I(Pi,Ni) * T(P,Pi) * B(P,Pi,N,Ni) ]
     where Ai is the area represented by sample i, I(Pi,Ni) is the irradiance per unit area (i.e. the diffuse illumination at Pi), T(P,Pi) is the attenuation through the material from Pi to P, and B(P,Pi,N,Ni) is a so-called "bounce attenuation" factor that attempts to discard contributions where the light had to "jump through space" in order to get from Pi to P.
     The extinction term is T(P,Pi), which is given by:
         T(P,Pi) = (1 - smooth(0, D, length(P - Pi))) / norm
     where D is our "Scattering Distance" parameter (there's a VEX sketch of this after the list).
     Gotta double-check to make sure that VEX's implementation of smooth() is the same as RSL's smoothstep(), but if it is (which it appears to be), then the normalizing factor (norm) is just the integral of 1 - smooth(0, D, sqrt(x^2 + y^2)) over the plane. Assuming VEX's smooth() is defined as -2x^3 + 3x^2 for x (= r/D) in the interval [0,1], converting to polar coordinates gives norm = 3*PI*D^2/10, like this:
         norm = INT[0..2PI] INT[0..D] (1 - 3(r/D)^2 + 2(r/D)^3) r dr dtheta
              = 2*PI * (D^2/2 - 3*D^2/4 + 2*D^2/5)
              = 2*PI * (3/20)*D^2
              = 3*PI*D^2/10
     Here's a test of a few unit-radius spheres with "Scattering Distance" set to (from left to right) 0.03, 1.0, 2.0, and 3.0.
     In doing this test though, I noticed something that was throwing the numbers off. You see, each cloud point's contribution is weighted by that point's representative surface area (Ai). This is calculated by the Scatter SOP and passed to the shader as the attribute ptarea. But this value is just the mean distance from each point to a number of its surrounding neighbors (4 by default). If you were to sum all the ptarea values on a cloud distribution, you'd expect a number that's in some proportion to the actual surface area -- and we do get that, i.e. 100 points over a large surface gives you a proportionally larger total ptarea than the same 100 points over a smaller surface. However, you'd also expect the total ptarea to remain roughly the same regardless of the number of points in the cloud (over the same surface). Due to the way it is calculated, this is not the case: a point cloud with 1000 points will have a significantly lower total ptarea than a distribution with 2000 points over the same surface. That's no good.
     To stabilize this, I ended up modifying the Scatter SOP's calculated ptarea attribute by the factor (TotalArea/TotalPtarea), also sketched after the list. This must obviously be done at the SOP level, meaning that the point-scattering step requires some care. Right now you'll see this in a single network box in the attached hip, but I will naturally turn it into an HDA eventually.
     OK, there are a few other things, but I'll stop for now. I'm attaching this work-in-progress version of the shader (even though it's really just a proof-of-model at the moment) for anyone curious. I'll eventually turn it into a VOP, of course. SSSpixar1.zip
     Next thing is to decide how to treat the surface color. One way is to simply calculate monochrome scattering and then tint it; the other is to do separate scattering per channel (which should give better results but requires 3 samples instead of one). Here's an early test of chromatic sampling and bounce attenuation:
     Cheers!
    1 point
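
For anyone reading along without Houdini handy, here is a minimal VEX sketch of the gather loop implied by the formulas in post 3. It is not the shader in SSSpixar1.zip: the point-cloud channel names ("irrad" for the baked diffuse illumination, "ptarea" for the per-sample area) and the pcopen() settings are assumptions, and the bounce-attenuation term B(P,Pi,N,Ni) is left out.

    surface sss_sketch(
            string pcfile      = "cloud.pc";  // baked point cloud (assumed name)
            float  scatterdist = 1.0)         // the "Scattering Distance" D
    {
        float  pi      = 3.14159265358979;
        float  D       = max(scatterdist, 1e-6);
        float  normfac = 3.0 * pi * D * D / 10.0;  // integral of 1 - smooth() over the plane
        vector sss     = {0, 0, 0};

        // Points farther than D contribute nothing, so D doubles as the search radius.
        int handle = pcopen(pcfile, "P", P, D, 1000);
        while (pciterate(handle))
        {
            vector Pi, Ii;
            float  Ai;
            pcimport(handle, "P", Pi);        // sample position
            pcimport(handle, "irrad", Ii);    // I(Pi,Ni): baked diffuse illumination (assumed channel)
            pcimport(handle, "ptarea", Ai);   // Ai: area represented by the sample (corrected in SOPs)

            // T(P,Pi) = (1 - smooth(0, D, |P - Pi|)) / norm
            float atten = (1.0 - smooth(0.0, D, length(P - Pi))) / normfac;

            sss += Ai * Ii * atten;           // Ai * I(Pi,Ni) * T(P,Pi), with B omitted
        }
        pcclose(handle);

        Cf = sss;
    }

The only point of the sketch is to show where the normalizing factor and the per-sample area enter the sum; everything else (bounce attenuation, chromatic scattering, the baking pass itself) is deliberately missing.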
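
The (TotalArea/TotalPtarea) correction is a SOP-level fix; in the attached hip it lives in a network box of ordinary nodes. Purely as an illustration of the idea, and using a present-day point wrangle rather than whatever the Houdini 9 network actually does, the rescaling amounts to:

    // Point wrangle over the scattered points.
    // Assumes two detail attributes computed upstream (names are made up here):
    //   total_area   - measured area of the source surface (e.g. Measure SOP + Attribute Promote, sum)
    //   total_ptarea - sum of the Scatter SOP's ptarea over all points (Attribute Promote, sum)
    float total_area   = detail(0, "total_area");
    float total_ptarea = detail(0, "total_ptarea");

    // Rescale so the ptarea values sum to the true surface area,
    // independent of how many points were scattered.
    f@ptarea *= total_area / total_ptarea;

This keeps the total represented area stable as the point count changes, which is what the shader's Ai weighting needs.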