The SSS Diaries


Hi all,

I have a few days of "play time" ahead of me, so I thought I'd revisit the various SSS models and see if I can come up with something a little more user-friendly. And since I'm sharing the code, I thought I'd take a cue from Marc's "Cornell Box Diaries" and share the process as well... selfishly hoping to enlist some of the great minds in this forum along the way :P

My initial approach to this SSS thing was a somewhat faithful implementation of the dipole approximation given in Jensen's papers. However, that model is very hard to parameterize in a way that makes it intuitive to use; the result is that, as it stands, it can be very frustrating. Regardless, I'll continue to think about ways to re-parameterize that model; but I must confess it's evaded every one of my attempts so far -- maybe I can convince someone here (TheDunadan? ;) ) to look at the math with me.

As a user, I'd love to have a model that I can control with just two parameters:

1. Surface Color (or "diffuse reflectance").

We need to texture our surfaces (procedurally or otherwise), so we must have this one. In Jensen's model, this gets turned into the "reduced scattering albedo", which in turn gets used to calculate the actual scattering and absorption properties of the material; all of which relate to each other in very non-linear ways, making it hard to control. So the goal here is to come up with a "what you set is what you get" model (or as close to that as possible).

2. Scattering Distance.

This should behave exactly as one would expect; i.e: "I want light to travel 'this far' (in object-space units) inside the medium before it gets completely extinguished". No more and no less. Well... the main problem with an exponential extinction (Jensen) is that, while physically correct, it never quite reaches zero, so again, it is hard to control.
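(To put a number on it: an exponential profile like exp(-d/l) still carries about 5% of the light at d = 3l and about 2% at d = 4l -- exp(-3) ≈ 0.050, exp(-4) ≈ 0.018 -- so "completely extinguished" never actually happens; you just end up picking an arbitrary cutoff.)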

At this point in time, I don't see how any model that satisfies this "two parameter" constraint can ever also be physically correct -- meaning whole swathes of Jensen's model will need to go out the window. And first on the list of things to disappear will likely be the dipole construction... next in line is the exponential falloff... and the list grows...

OK. Looking over a whole bunch of papers, I think I've decided that Pixar's approach from the SIGGRAPH 2003 RenderMan course notes (Chapter 5, "Human Skin for Finding Nemo") is the closest thing to what I'm looking for, so I'll start with that.

I'll post my progress (and the code, natch), in this thread so people can take it for a spin and see what they think.

Cheers!


-- maybe I can convince someone here (TheDunadan? ;) ) to look at the math with me.


Hehe, I actually warned two numerics gurus at my university that I'll visit them with a few siggie papers soon :P

Ron Fedkiw has published quite a few interesting papers on fluid and rigid body dynamics, but some of the underlying math is :cry2: ...

Looking forward to seeing some nice SSS shader code of yours :)

Jens


OK. First stab.

This model is feeling good! :)

It uses the smooth() function (smoothstep() in PRMan) for the shape of the extinction. This falls to zero at a finite distance, which simplifies a bunch of things. For starters, it is very easy to come up with a normalizing factor (very hard to do for Jensen's model) so that the overall luminance is maintained nicely across the range of scattering distances.

For those of you who don't have access to the paper, the full model is:

SSS(P,N) = Sum_i [ Ai * I(Pi,Ni) * T(P,Pi) * B(P,Pi,N,Ni) ]

Where Ai is the area represented by sample i, I(Pi,Ni) is the irradiance per unit area (i.e: diffuse illumination at Pi), T(P,Pi) is the attenuation through the material from Pi to P, and B(P,Pi,N,Ni) is a so-called "bounce attenuation" factor that attempts to discard contributions where the light had to "jump through space" in order to get from Pi to P.

The extinction term is T(P,Pi) which is given by:

T(P,Pi) = (1-smooth(0,D,length(P-Pi))) / norm

Where D is our "Scattering Distance" parameter.

Gotta double-check to make sure that VEX's implementation of smooth() is the same as RSL's smoothstep(), but if it is (which it appears to be), then the normalizing factor (norm) is just the integral of 1-smooth(0,D,sqrt(x^2+y^2)) over the plane. Assuming VEX's smooth() is defined as -2x^3 + 3x^2 for x (= r/D) in the interval [0,1], then after converting to polar coordinates we get norm = 3*PI*D^2/10, like this:

norm = Int[0..2PI] Int[0..D] ( 1 - (3(r/D)^2 - 2(r/D)^3) ) r dr dtheta
     = 2*PI*D^2 * Int[0..1] (u - 3u^3 + 2u^4) du        (substituting u = r/D)
     = 2*PI*D^2 * (1/2 - 3/4 + 2/5)
     = 3*PI*D^2/10
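For anyone who wants to see all the moving parts in one place, here's a minimal VEX sketch of the gather loop. This is *not* the shader from the zip, just the idea: the cloud file name, the "irrad" channel (baked diffuse illumination), and the parameter names are placeholders, and bounce attenuation is left out.

    surface sss_sketch(string pc_file = "cloud.tbf";   // cloud with P, irrad, ptarea
                       float  scatter_dist = 1.0)
    {
        // normalization for the 1-smooth() splat: 3*PI*D^2/10 (see above)
        float norm = 0.3 * 3.14159265 * scatter_dist * scatter_dist;
        vector sss = 0;

        // gather every cloud point within the scattering distance of P
        int handle = pcopen(pc_file, "P", P, scatter_dist, 1000000);
        while (pciterate(handle))
        {
            vector Pi, irrad;
            float  ptarea;
            pcimport(handle, "P", Pi);
            pcimport(handle, "irrad", irrad);    // I(Pi,Ni), baked per point
            pcimport(handle, "ptarea", ptarea);  // Ai, from the (corrected) Scatter SOP

            // extinction T(P,Pi): falls to exactly zero at scatter_dist
            float T = (1.0 - smooth(0.0, scatter_dist, length(P - Pi))) / norm;
            sss += ptarea * irrad * T;
        }
        pcclose(handle);

        Cf = sss;
    }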

Here's a test of a few unit-radius spheres with "Scattering distance" set to (from left to right) 0.03, 1.0, 2.0, and 3.0.

post-148-1097714666.jpg

In doing this test though, I noticed something that was throwing the numbers off.

See, each cloud point's contribution is weighted by that point's representative surface area (Ai). This is calculated by the Scatter SOP and passed to the shader as the attribute ptarea. But this value is just the mean distance from each point to a number of its surrounding neighbors (4 by default). If one were to sum all the ptarea values on a cloud distribution, you'd expect a number that's proportional to the actual surface area -- and we do get that, i.e: 100 points over a large surface gives you a proportionally larger total ptarea than the same 100 points over a smaller surface. However, you'd also expect the total ptarea to remain roughly the same regardless of the number of points in the cloud (over the same surface). But due to the way in which it is calculated, this is not the case. A pointcloud with 1000 points will have a significantly lower total ptarea than a distribution with 2000 points over the same surface. That's no good.

To stabilize this, I ended up modifying the ScatterSOP's calculated ptarea attribute by the factor (TotalArea/TotalPtarea). This must obviously be done at the SOP level, meaning that the point-scattering step requires some care. Right now, you'll see this in a single network box in the attached hip, but I will naturally turn it into an HDA eventually.
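In VEX terms, the fix amounts to this (a sketch only -- it assumes a "surface_area" detail attribute measured upstream and wired in as a second input; the hip does the equivalent with stock SOPs):

    // Detail-mode sketch: rescale ptarea so that its sum matches the
    // measured surface area of the source geometry.
    float total_ptarea = 0;
    int npts = npoints(0);
    for (int i = 0; i < npts; i++)
    {
        float a = point(0, "ptarea", i);
        total_ptarea += a;
    }

    float scale = detail(1, "surface_area", 0) / total_ptarea;
    for (int i = 0; i < npts; i++)
    {
        float a = point(0, "ptarea", i);
        setpointattrib(0, "ptarea", i, a * scale, "set");
    }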

OK. There are a few other things, but I'll stop for now. I'm attaching this work-in-progress version of the shader (even though it's really just a proof-of-model at the moment) for anyone curious. I'll eventually turn it into a VOP, of course.

SSSpixar1.zip

Next thing is to decide how to treat the surface color. One way is to simply calculate monochrome scattering and then tint it; and the other way is to do separate scattering per channel (which should give better results but requires 3 samples instead of one).

Here's an early test of chromatic sampling and bounce attenuation:

post-148-1097714731.jpg

Cheers!


But this value is just the mean distance from each point to a number of its surrounding neighbors (4 by default). [...] To stabilize this, I ended up modifying the ScatterSOP's calculated ptarea attribute by the factor (TotalArea/TotalPtarea).

Looking good.

Perhaps you should square the PtArea for each point? Or even include the constant PI? This could be done in the shader, no?


Hi guys,


There's no need; ptarea is just a weight -- a constant value (per point) which is a function of the cloud's local density. If you wanted to be literal about it, you could interpret it as twice the radius of the "splat disk"; so as an area, you'd calculate it as:

CircleArea = PI*r^2 = (ptarea*.5)^2*PI = ptarea*ptarea*0.25*PI

But as you can see, all that does is give you a squared and constant-scaled version of ptarea, so we might as well use ptarea directly and save ourselves a few cycles.

In my tests so far, I have also found that, after the normalization I mentioned earlier, I don't even need to scale it to get the overall intensity that I'd expect... a bit of good luck I guess :)

Soon, all this will be easy:  http://www.boring3d.com/

Look at the Archive...:)


WOW!

Lots of very, very nice images there!

Yup; I don't see why we couldn't get that kind of result with our new pointcloud bounty. Thanks for the link! :)

Cheers!


This installment mainly deals with how to treat surface color, although I've also added bounce attenuation and a couple of other tidbits.

Chromatic Sampling

In most materials (all?), light scatters differently at each wavelength. Diffuse reflectance and subsurface scattering are separate components whose contributions are governed by the scattering and absorption terms, as well as by the Fresnel function (transmittance and reflectivity). But here we're trying to control all of these things with just one color parameter: "Surface Color". So the question is how to interpret it.

Having just one parameter doesn't leave much room to play with, but here's what I've tried for this update, which I think is a reasonable compromise:

*) You have a choice of sampling chromatically (separate sampling for each RGB channel), or in monochrome. This is a toggle.

*) For chromatic sampling, the scattering distance for each channel is calculated as ScatterDistance * SurfaceColor[chan], which results in separate scattering_distance values for each one of R, G, and B.

*) For monochromatic sampling, the scattering distance becomes ScatterDistance * max(SurfaceColor) which is a scalar, and the final result is "tinted" by the SurfaceColor parameter.
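In shader terms, the two modes boil down to something like this (just a sketch; SurfaceColor and ScatterDistance stand for the shader parameters):

    vector clr = SurfaceColor;

    // chromatic: a separate scattering distance per channel (3 samples)
    vector sd_rgb = ScatterDistance * clr;

    // monochrome: one scalar distance (1 sample); the gathered result
    // gets multiplied by clr ("tinted") at the end
    float sd_mono = ScatterDistance * max(clr.x, max(clr.y, clr.z));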

Obviously, chromatic sampling requires 3 samples per cloud point, whereas monochromatic sampling gets away with just one, making it computationally cheaper. The advantage of sampling each wavelength separately, though, is that you get a more natural-looking hue shift as the light extinguishes.

Here's a side-by-side comparison where all the settings are identical, except for the sampling method. The sphere on the left uses a single sample (faster), and the one on the right uses separate samples. These spheres are not a good timing test, but, for what it's worth, the render times were 8.96s and 9.61s for the left and right respectively -- so about a 7% time increase for the "expensive" one.

post-148-1097790180.jpg

Another way to possibly handle this would be to have separate colors for "diffuse" and "scattering" and fade between them according to the distance traveled (length(Pi-P)). This would allow for the possibility of drastically different hues between mostly scattered light and mostly reflected light (for some funky effects I guess), but it would complicate the interface and.... well, I may give it a try later on just out of curiosity. Please let me know if anyone has some ideas about this.

Diffuse Mix

As stated in the paper, this model is not quite a complete replacement for diffuse. This is due to the fact that the smooth() curve doesn't spike at zero. As a result, there is a need to mix in a tiny amount of good old Lambertian diffuse. However, it is also true that in order to catch high-frequency changes in irradiance (like shadow edges and such) you would need a *lot* of points in the cloud, which is in most cases impractical. So in the end we likely need to add a bit of diffuse anyway -- regardless of the possible shortcomings of this particular model. So I've added this parameter now.
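Conceptually, the mix is nothing more than this (a sketch; diff_mix is the new parameter, and sss_clr stands for the result of the point-cloud gather):

    // blend a small amount of plain Lambert diffuse back in
    vector nf      = normalize(frontface(N, I));
    vector lambert = SurfaceColor * diffuse(nf);
    Cf = lerp(sss_clr, lambert, diff_mix);   // diff_mix is kept small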

Bounce Attenuation

The point cloud carries no connectivity information, so it can't tell whether "that" point is part of the same surface as "this" point. So if you naively gather contributions within a certain radius, you'll likely end up including contributions from cloud points that are not even connected to the same surface -- light will appear to "bounce" from one surface to another (across empty space). Any concave portion of the surface is susceptible to this (as are surfaces that are totally disjoint, natch).

The "Bounce Attenuation" parameter is implemented as a "bias": no attenuation at 0, and full attenuation at 1. It *is* possible to go above 1, but you will (in theory) be suppressing too much. However; while the mechanism is pretty smart, it is just an approximation, so it is possible that you need to pump it above 1 in some cases (although it's been behaving really well in my tests so far).

Here's a closeup of the Utah Teapot's lid. You can see the "ghost" bounced light on the neck of the knob on the left image (attenuation = 0). The one on the right exorcizes the ghost (attenuation = 1).

post-148-1097790161.jpg

OK. That's it for now. I'll do some more tests and build the HDA and VOP next, before I move on to single scattering. Here's the bundle for this test (you have to generate the point clouds yourself for all these tests -- there's no point including them in the post).

SSSpixar2.zip

l8r.


Wow.

Great stuff Mario - clearly separate samples are the way to go. In my mind it's not a subjective question, but a fairly quantitative right or wrong issue.

I've been wanting to dive into this stuff but my day job has other ideas so I'm going to live vicariously through you for a while. :)

Keep it coming!

stu


Thanks for the encouragement Simon and Stu :)

Yeah, I agree that sampling the channels separately is much closer to the "real thing", but the other way *is* cheaper and when the scattering distance is really short it might not matter, so you could go the cheaper way... dunno... I think we all agree that it should be there as an option anyway -- if for no other reason than that it makes for quicker tests :P

This installment is about HDA's, VOP's, and All That Jazz.

OK. 'been scratching my head most of today trying to come up with a way to minimize the trial-and-error part of setting up the point cloud and the ways in which it relates to the shader parameters. I've only really half succeeded :(

Automatic Pointcloud Generation

This is now implemented in a SOP HDA called "SSS Point Cloud".

There are three things that are very inter-related here: 1) the scattering distance dictates the radius of integration; 2) the optimal pointcloud density is a function of the scattering distance (#1) and the object's surface area; and 3) the number of points to filter for the reconstruction ("Points to Filter") is a function of #1 and #2. And to muddle everything up even more, we're dealing with a point distribution that is... well... far from "even", so things get a little murky.

For an object with surface area A, and with a scattering distance r, I try to come up with an "optimal" pointcloud density using the relation:

NumOfPoints = (A * Ns) / (PI * r^2)

where Ns is the number of samples to take over the disk whose radius is the scattering distance. Think of Ns as a "Super Sample" value. I've found that something around 8 samples gives good results across the board. But! It *is* possible that the user might want a scattering radius that is many times the radius of the bounding sphere for the whole object, in which case we end up under-sampling even though the relationship still holds. To counteract this problem, I've added a "Density Threshold" parameter that controls the "floor" of the density -- the minimum number of points in the cloud, regardless of scattering distance.
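Just to put numbers on it: a surface with area A = 10, a scattering distance of r = 0.5, and the default Ns = 8 gives NumOfPoints = (10 * 8) / (PI * 0.25) ≈ 102 points; halve the scattering distance and the count quadruples to ≈ 407.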

If all else fails, I've added a "manual" density mode, which you can use to set the number of points explicitly.

The only awkward aspect of the automatic cloud generation right now is that the user has to remember to match the scattering distance value used for generating the cloud with the value used in the shader. I can't see any solution for this (and it is something you'd have to do mentally even if you were using "manual" mode). Suggestions?

TODO: I haven't yet explored how the surface area and the cloud density relate to the number of points needed for a smooth reconstruction ("Points to Filter") -- so this is currently still a trial-and-error thing. I would just *love* to turn that ugly parameter into something like a "blur" control that defaults to somewhere in the neighborhood of a correct value... I have no doubt that the relationship can be found; it is how to pass the necessary info to the VOP that will be the real challenge here.

To Save Or Not To Save

Sheesh! You'd think that saving files would be simple!

I wanted to add the convenience of saving the pointcloud to a .tbf file from within the HDA. Originally I did it using a geometry ROP with a post-frame script to do the i3dconvert from bgeo to tbf (through the "unix" hscript command). Then I noticed that Apprentice doesn't support the geometry ROP... yet it *does* let you save bgeos from SOPs... so; like... what the heck's up with that!?!

Not wanting to leave Apprentice users out in the cold, I ended up doing my own callback script to do the save/convert and threw away the ROP -- I hope this works for Apprentice, but I'm not sure because I'm using Master right now.

And another thing! :rolleyes: Even though I'm not using the ROP, the callback script still uses the "unix" command to run i3dconvert, which begs the question: will this work for Windoze users? ... (I don't know if the "unix" shell command turns into a "dos" command under Win). If it doesn't work, I started turning the i3dconvert call into a TCL call in hopes that at least *that* may do the trick, but I didn't finish. All suggestions welcome! :)

Bye Bye Shader

The shader became a VOP.

I didn't get to do much more than the conversion because I spent most of my time dealing with the point cloud (and endless testing thereof). But now that we have a VOP, it will make testing textures and such much easier. Which brings me to the next installment: should we sample surface color on the incoming side (currently, surface color is only considered on the outgoing side) or not?

Diffuse scattering "bleeds" colors pretty quickly so that it all pretty much becomes mush at a few "scattering_distance" units into the medium, but it *does* make a difference for very translucent materials. Now that I have this in VOP form, I think I'll explore this little corner next.

Here's the Stanford bunny with a *very* long scattering distance and surprisingly few samples... one of my auto-cloud tests... totally gratuitous and unnecessary, but hey! every post has to have at least one image, no? :)

post-148-1097900250.jpg

And the latest bundle:

SSSpixar3.zip

P.S: I don't think I'll get to touch any of this over the weekend, so I doubt I'll make any updates until next week.

Cheers!


The only awkward aspect of the automatic cloud generation right now is that the user has to remember to match the scattering distance value used for generating the cloud with the value used in the shader. I can't see any solution for this (and it is something you'd have to do mentally even if you were using "manual" mode). Suggestions?


How about adding it as an extra attribute to the cloud and just picking it up in the shader?

Not wanting to leave Apprentice users out in the cold, I ended up doing my own callback script to do the save/convert and threw away the ROP -- I hope this works for Apprentice, but I'm not sure because I'm using Master right now.


This definitely works; I used it in my forloop op.


Even though I'm not using the ROP, the callback script still uses the "unix" command to run i3dconvert, which begs the question: will this work for Windoze users? [...]

Unix commands work fine in hscript on windows... so probably should be ok.


How about adding it as an extra attribute to the cloud and just picking it up in the shader?


Hey Simon,

Yes, you're totally right, of course. When I wrote that, I was thinking "I don't have a shader anymore, now I have a VOP; so... 'attribute shmattribute' (it's all up to the user)". But this attribute belongs to the pointcloud, not the geometry (duh!), so I can still be sneaky and get at it regardless of how the user wires up the VOPs. Thanks. :)

I'm not in front of Houdini right now, but here's what I'm thinking:

*) The VOP (more precisely, the ssMulti() function) will always override the user-specified value for the scattering distance parameter (sd) if (and only if) it is bound to the pointcloud data.

*) The pointcloudSOP (the HDA I posted) will add this attribute to the pointcloud only when using "Automatic" mode for the distribution -- the thinking being that since this is the only time when the user is being asked to think about "scattering distance" in a geometry context (SOPs), then this is the only time when the bridge to the shader is built.

It would be done behind the user's back, which is usually something that comes back to bite you later somehow, but.... what do you think?
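The shader side of the override would be tiny. Something like this (a sketch with made-up names -- the real code lives in ssMulti()):

    float sd = scatter_dist;                  // the user-facing parameter
    int handle = pcopen(pc_file, "P", P, sd, 1);
    while (pciterate(handle))
    {
        float sd_pc;
        // if the cloud carries an "sd" attribute, it wins
        if (pcimport(handle, "sd", sd_pc))
            sd = sd_pc;
    }
    pcclose(handle);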

Unix commands work fine in hscript on windows... so probably should be ok.


I'll confirm with my Apprentice cut at home (dual linux/windows) when I get a chance.

Why no "Min/Average/Max" option? Because it would make no sense?


Hi Hoknamahn,

Good question. It could also be luminance, or any other combination you like. The truth is that once I decided to make the "true" scattering distance dependent on *both* terms (Scattering Distance *and* Surface Colour), I introduced some ambiguity behind the curtains -- depending on the mode (chromatic or mono), some portions of those two terms become redundant.

I'm hoping that this won't become a problem though, because at least conceptually, one thinks of "Surface Colour" as colour, and of "Scattering Distance" as a "dimmer", or a "dial", or an "intensity" control. But of course, we all know that "colour" also encodes "intensity" -- and that's where we have to make a choice.

So no; it's not so much that having min, max, and avg available "doesn't make sense", but that providing all three is unnecessary (and worse; confusing), so we should pick one. And the reason that I picked Max is simply that that would be your longest wavelength if you were sampling chromatically, and you'd always want to derive your normalizing factor (the integral of the smooth() function) from the signal that contains the highest value. Also note that picking Max makes for the least amount of visual difference if one were flipping back and forth between chromatic and monochromatic sampling.

Just a choice.

Cheers, and thanks for the comments! :)


Here's the Stanford bunny with a *very* long scattering distance and surprisingly few samples... one of my auto-cloud tests... totally gratuitous and unnecessary, but hey! every post has to have at least one image, no? :)


Hi Mario, when you say you used surprisingly few samples, do you mean you didn't use your auto-generate method? I'm finding that for large surfaces the generated point clouds are pretty damn big, especially for small scattering distances (naturally). However, the effect I'm after is going to be quite subtle, so I'm wondering whether under-sampling will actually be OK. And what is the effect of under-sampling anyway? Can something as blurry as SSS alias, or is it a question of what is mathematically correct versus the artist tweaking values till he gets what looks right?


Where are you? ;)  I'm transfixed.


Hehe. I decided to leave the small things for later and concentrate on implementing the other big chunk: single scattering. This is not trivial, and I hit a bit of a mathematical snag, which has delayed things a bit. <_< .... I should have a major update soon (either end-of-day today or tomorrow sometime).

Hi Mario, when you say you used surprisingly few samples, do you mean you didn't use your auto-generate method? [...]

By "surprisingly small num of samples" I meant that judging by the pointcloud, I would have expected artifacts, but there were none (that I could see anyway) -- a very sparse cloud.

The auto-generation uses a pretty conservative estimate, and 8 samples per disk (the default) may well be too high, but I didn't want to under-shoot it and end up with splotchy renders by default.

As far as getting a specific effect (like a super-blurred SSS with very few samples), you can override the auto-generation by setting it to "manual" and you're the boss. Then it becomes a game of tweaking the number of point-cloud points and the number of samples in the reconstruction. For a very blurry effect, go low on the pc points and high on the "Points to Filter" param.

The visible artifact when undersampling (and by that I mean so low that main features in the object are completely ignored) is a "splotchy" look, where illumination is uneven, with possible discoloration (or incorrect hue) when doing chromatic sampling. In the case of short scattering distances, undersampling results in a "burn" effect, where the SSS will tend towards white when it should be colored. To see an extreme of this, you can set the "Points to Filter" param to 1 (single sample), and you'll see the individual splats.

(I think it might be a good idea if I post a gallery of artifacts and what they could be caused by at the end of the diary.)

Sorry for the delay... stay tuned ;)

Cheers!


All good. Your explanation tallies nicely with my experimental results... :P

As you say this method is good even for quite low sample numbers as long as the "Points to filter" is high enough. All very stable.

What I'm still a little confused about, though, is that the "Points to sample" seems to affect things more than I thought. For instance, if I set a scattering distance that would seem to include at most X sample points, and I set "points to sample" to well above this value, I still get different results in the render when I turn the value up even more.

So in your test file with the teapot lid, try this.

Make the scattering distance 0.044 and copy a circle of this radius to some of the point cloud points. My guess is about 50 sample points lie in this radius. So I put 50 in the "Num of Filter points".

Do a render and you see splotches.

So turn up the number to 100, much better.

Turn it up to 200 and it is perfect.

Now go to the scatter sop and reduce the number of points to 1000 and re-render. Still looks pretty good, although the shadows aren't as sharp, no bad thing. Reduce the "Num of Filter points" back to 50 and it looks splotchy again.

Seems to me the "Num of filter points" has a much greater effect on the quality than anything else, and isn't very intuitive, since it needs to be much higher than you would think to work.

