
The SSS Diaries



I made an attempt before to do this in SOPs, by measuring the perimeter of each polygon against its perimeter in UV. It's in one of the scenes I uploaded here. Would be cool if it could be done in the shader though... just to save another bit of pre-processing.

S

that sounds pretty close actually but probably not quite the right value you need - the 'stretch' map in the nvidia paper measures u and v stretch per pixel separately (which works nicely for them as they convolve on u and v separately as well)

a difference in perimeter per poly stored on the points is kinda one dimensional - ie although it tells you something about the local distortion it doesn't tell you which direction the stretching occurs in UV space (or by how much in any precise way) and thus how to correctly apply the 'distortion' values to compensate the variance value in u and v

the other (somewhat familiar) issue is that you really need to get at the sample point's surface distortion rather than that at the local shading point (which is of course the reason the irradiance is baked in the first place :)


Sorry for the offtopic (slightly less so than the gamma intermezzo), but don't you guys sit about 10 meters away from each other? :P

nerds.jpg

Go on, interesting conversation.

interesting snapshots andras

is this the kind of thing you get up to these days? >:D

anyhoo sorry for taking up the bandwidth with office chat :)


yah... I've been trying to find some examples of a Gaussian blur implemented in code (i.e. I have a hard time decoding the raw maths) without much luck so far, but translating to a vex method the gist seems to be...

Hi Stu,

I've been reading this dialogue with interest and I thought I might mention the Fast Gaussian technique of using 3 box blurs - you can visualize this by using the Filter CHOP set to Gaussian and then comparing the result to 3 box blurs. The filter width is different but you can get the same profile.

From wikipedia: "Box blurs are frequently used to approximate a Gaussian blur[2]. By the central limit theorem, if applied 3 times on the same image, a box blur approximates the Gaussian kernel to within about 3%, yielding the same result as a quadratic convolution kernel."

And: http://www.gamasutra.com/features/20010209/evans_01.htm
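To see the 3x-box-blur claim numerically, here's a minimal 1D sketch in plain Python (not CHOPs/VEX): blurring an impulse with a box filter three times produces a smooth, symmetric, Gaussian-like bump.

```python
def box_blur(signal, radius):
    """One pass of a 1D box blur (edge samples use a truncated window)."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

# Blur a unit impulse three times: box -> triangle -> quadratic B-spline,
# which is already a close approximation to a Gaussian bell.
signal = [0.0] * 10 + [1.0] + [0.0] * 10
for _ in range(3):
    signal = box_blur(signal, 3)
```

After the three passes the peak stays centred and the profile falls off monotonically on both sides, just like the Gaussian it approximates.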



hi Folks,

Been messin a lot with the gather vop lately so I thought I'd give it a crack with sss (again...)

Basically it works by firing a cone of rays from a small distance inside the surface towards the surface along the normal (fetching the lighting where they hit), and applying the nvidia diffusion profiles to it.
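For reference, the nvidia profile in question is usually taken to be the sum-of-Gaussians fit to skin from d'Eon & Luebke (GPU Gems 3, ch. 14). A sketch in plain Python (not VEX) - the variances and RGB weights below are quoted from that chapter from memory, so double-check them against the paper before relying on exact values:

```python
import math

# Six-Gaussian fit to the skin diffusion profile (variances in mm^2,
# weights per RGB channel) -- quoted from GPU Gems 3 ch. 14, verify.
SKIN_PROFILE = [
    (0.0064, (0.233, 0.455, 0.649)),
    (0.0484, (0.100, 0.336, 0.344)),
    (0.1870, (0.118, 0.198, 0.000)),
    (0.5670, (0.113, 0.007, 0.007)),
    (1.9900, (0.358, 0.004, 0.000)),
    (7.4100, (0.078, 0.000, 0.000)),
]

def diffusion_profile(r):
    """RGB falloff R(r) for a scatter distance r (mm) along the surface."""
    rgb = [0.0, 0.0, 0.0]
    for variance, weights in SKIN_PROFILE:
        # Normalized 2D Gaussian evaluated at radius r.
        g = math.exp(-r * r / (2.0 * variance)) / (2.0 * math.pi * variance)
        for c in range(3):
            rgb[c] += weights[c] * g
    return rgb
```

Each gathered sample's radiance would then be weighted by `diffusion_profile(distance_to_sample)`; the wide red-dominated Gaussians are what give the red glow at large scatter distances.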

I avoid the self-intersections that arise from displacing P by using a rayhit's distance (in the opposite direction) as a scalar on the displacement. This had the undesirable effect of reducing the apparent blurriness in those repaired areas, since the scatter radius is controlled by the distance between P and dispP. But it worked out ok by using the rayhit distance once again to increase the cone angle of the gather vop to compensate.
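The clamping part of that trick can be sketched like this (plain Python rather than VEX, and the names are hypothetical, not the ones in the OTL): the inward offset of P is capped by the distance to whatever the -N ray hits, so dispP can never cross the opposite wall of a thin feature.

```python
def inward_offset(max_offset, backside_dist):
    """Offset to move P inward along -N.

    max_offset:    the user's scatter-depth parameter.
    backside_dist: distance returned by a ray fired along -N
                   (i.e. how far away the opposite surface is).
    Halving the hit distance keeps dispP safely between both walls.
    """
    return min(max_offset, 0.5 * backside_dist)
```

In the thin regions where this clamp kicks in, the shrunken P-to-dispP distance is what reduces the apparent blur, hence the compensating increase of the gather cone angle described above.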

I wasn't expecting the light to transmit from one side to the other; I found by accident that if the cone angle was big enough it turned into a sphere, and thus also shot rays in the opposite direction and picked up light from there too. The light through the ears happens because of this. Which is great, but it has the side effect of darkening the red too much because the rays are spread over a bigger area. I got around it by using the dot product of the gathered hit-N and N as a multiplier on intensity.

Because the samples are being taken from dispP along N, you get distortions when picking up light from the other side. I correct this by randomizing dispP with a nrand (thankfully no mp boundary lines so far).

Another thing left to do is attenuate the intensity with the ray's length... at the moment any falloff is happening due to the Gather's maxdist being blurred.
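If anyone wants to try that, a simple ray-length falloff could be a Beer-Lambert-style exponential (a sketch in plain Python; `mean_free_path` is a hypothetical user parameter, not something in the OTL):

```python
import math

def attenuate(intensity, ray_length, mean_free_path=1.0):
    """Exponential (Beer-Lambert-style) falloff with travelled distance.

    mean_free_path controls how quickly light dies off inside the
    medium: larger values let light travel further before fading.
    """
    return intensity * math.exp(-ray_length / mean_free_path)
```

This would replace the implicit falloff that currently comes from blurring the Gather's maxdist, and gives an artist a single distance control to dial the transmission depth.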

Another way of doing transmission through surfaces is with two gather vops (I will try this later): one that just blurs the surface (the cone isn't large enough to flip over) and another that inverts the ray direction so that it only looks for light coming from the opposite side (for the glowing-ears effect). It might be more efficient to render because the cones can be more focused rather than going all over the place. At the moment a LOT of samples are used to smooth the bigger blurs because of this. I put an oversample control in there to increase the samples as the blur increases, otherwise you have a situation where even the small blurs get a ton of samples and the red stuff not enough.

Interestingly, for the quality it's much much faster than my old shadow blur trick shader, it doesn't screw up the diffusion profiles anywhere near as much, has no need for light masks and doesn't paint wireframes when rendering in mp mode :)

With area lights you still have to render them with one sample (the shader will do the smoothing)... I don't know how to override shadow sampling within the surface shader.

The shader will apply the diffusion to whatever you plug into the "input", so you can use baked textures to save some render time.

Anyway I'm sure there's plenty of scope for improvement! have a look, rip it apart, rebuild it properly if you like :)

Cheers

S

post-1495-1228760542_thumb.jpg

Gamma upped so you can see it better.

post-1495-1228760558_thumb.jpg

AXIS_Skin_SSS.otl


Been messin a lot with the gather vop lately so I thought I'd give it a crack with sss (again...)

Fun with gather! :)

That's pretty cool Serg. I didn't dig too deeply (VOPs defeat me :)) but it looks like the same approach as your "curvature" shader except now you've generalized it to gather any input. Nice.

The only possibly objectionable problem with all of these "scan the surface from a height" methods, is that you have to restrict the scanning to primary rays, else you'd end up with a runaway explosion of rays. As a result, none of the effects based on this method will be visible to secondary rays -- IOW, the sss in this case will not show up in reflections.

post-148-1228774833_thumb.jpg

As an aside, there's one other thing to watch out for in *any* gather-based method, not just this one, and that is that any surface whose shader forces its normals to face forward (a common practice unfortunately) will produce the wrong result when scanned from the "wrong" side. Try your shader with a single point light with no shadows, and feed a LightingVop (at default) set to Lambertian to its "input", and you'll see what I mean.

Lastly, if you feel like experimenting some more along these lines, you might want to try just gathering inside the entire bottom hemisphere (along -N) attenuating contributions by distance. This is basically the original "brute-force" sss method, and I think you can still find it mentioned in PRMan's sss application notes. It works. The only reason it's not used often is because it tends to be expensive. But it has the advantage that it's not restricted to primary rays, so it can show up everywhere. Check it out.
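For anyone wanting to try that suggestion, here's a minimal Monte Carlo sketch of the brute-force gather in plain Python; `trace` is a hypothetical stand-in for the scene ray-trace (in the shader this would be the gather loop itself):

```python
import math
import random

def brute_force_sss(trace, n_samples=64, mean_free_path=1.0):
    """Gather over the entire hemisphere below the surface (along -N,
    here the -Z axis), attenuating each hit's radiance by distance.

    trace(direction) is a stand-in for a scene ray-trace: it returns
    (radiance, hit_distance) for a hit, or None for a miss.
    """
    total = 0.0
    for _ in range(n_samples):
        # Random direction in the lower hemisphere (z <= 0).
        z = -random.random()
        phi = 2.0 * math.pi * random.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        direction = (r * math.cos(phi), r * math.sin(phi), z)
        hit = trace(direction)
        if hit is not None:
            radiance, dist = hit
            # Exponential falloff with distance travelled in the medium.
            total += radiance * math.exp(-dist / mean_free_path)
    return total / n_samples
```

Because nothing here depends on the ray level, this estimator works the same for primary and secondary rays, which is exactly why the effect shows up in reflections - at the cost of a lot of rays per shade point.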

Cheers!


The only possibly objectionable problem with all of these "scan the surface from a height" methods, is that you have to restrict the scanning to primary rays, else you'd end up with a runaway explosion of rays. As a result, none of the effects based on this method will be visible to secondary rays -- IOW, the sss in this case will not show up in reflections.

post-148-1228774833_thumb.jpg

Yeah that runaway explosion is why I have to block the shader bits if ray level isn't 1. I'm hoping someone with a better understanding of the gather vop would look at this and know how to do it properly. Or maybe I can reduce samples to 1 for higher level rays?

As an aside, there's one other thing to watch out for in *any* gather-based method, not just this one, and that is that any surface whose shader forces its normals to face forward (a common practice unfortunately) will produce the wrong result when scanned from the "wrong" side. Try your shader with a single point light with no shadows, and feed a LightingVop (at default) set to Lambertian to its "input", and you'll see what I mean.

Yeah I noticed that sampling an Oren-Nayar or occlusion from the inside out doesn't work with this shader :( and Lambert only works if you turn off "Ensure faces point forward" ...

My first try at this shader sent rays from the outside towards the surface, which would have overcome this issue, but I switched to shooting from the inside out because it rendered something like 10 times faster! I don't understand why that is at all... And using the rayhit distance to prevent self-intersections didn't work nearly as well when displacing outward. I'm hoping someone might know of a much simpler way to ignore self-intersections than limiting the offset position via raytracing.

The other problem with shooting rays from the outside inwards is that you can't get light to transmit through surfaces, since the ray is blocked by the first surface it comes across... it would work though if you somehow got the ray to keep going until it hit the next surface and added the back colour multiplied by the ray length between the two surfaces... dunno if that is even possible with the gather vop, maybe with vex code, but I can't code yet :(

Lastly, if you feel like experimenting some more along these lines, you might want to try just gathering inside the entire bottom hemisphere (along -N) attenuating contributions by distance. This is basically the original "brute-force" sss method, and I think you can still find it mentioned in PRMan's sss application notes. It works. The only reason it's not used often is because it tends to be expensive. But it has the advantage that it's not restricted to primary rays, so it can show up everywhere. Check it out.

Thanks, I'll check that out! Is the doc freely available on the net? We only have some RMfM licenses left here, don't think it's in that.

I think nowadays it isn't that expensive; 10-15 mins for a full frame with an sss shader that can capture displacement detail is pretty cheap! :) That frame is using 512 samples for the most blurred layer (the "Over Sample" control is a multiplier on the main "Samples" parm that ramps through the iteration number). I'm pretty sure I can get that way way down if I can focus the samples more - I see that 10-15 mins becoming 5 mins! Once it's down to that it becomes a non-issue really; it depends on how fast you actually need the frames of course :)
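For what it's worth, the "Over Sample" ramp described above could look something like this (a hypothetical Python reconstruction for illustration, not the actual OTL logic):

```python
def layer_samples(base_samples, oversample, layer, n_layers):
    """Ramp the sample count with the blur-layer index so the wide
    blurs get more rays than the sharp ones.

    base_samples: the main "Samples" parm.
    oversample:   extra multiplier reached on the most blurred layer.
    layer:        0 = sharpest blur, n_layers - 1 = widest blur.
    """
    t = layer / max(n_layers - 1, 1)  # 0.0 on the sharpest layer
    return max(1, int(round(base_samples * (1.0 + oversample * t))))
```

With `base_samples=32` and `oversample=3.0` over six layers, the sharpest layer stays at 32 samples while the widest gets 128, instead of every layer paying the worst-case cost.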


Thanks, I'll check that out! Is the doc freely available on the net? We only have some RMfM licenses left here, don't think it's in that.

I think this link contains a modified version of what you'd find in the prman docs. The gather loop looks unchanged to me and it's pretty much the heart of what's going on.

http://www.sfdm.scad.edu/faculty/mkesson/v...ey/sss/sss.html

I could be mistaken though; I thought you were referring to absorption only (no scattering).

If you have RFM you can check out the Jam shader from the docs as well. I am pretty sure it uses a gather loop (under the hood) to fire rays into an object and attenuate its opacity based on ray length. Not exactly the same as what you're doing, but similar and an equally cool use of gather imho.



hey, as I'm new to Houdini I wonder about this sss/point cloud stuff. How do you set it up, is there a tutorial to get into it? Is there another way, without point clouds, to render good skin/sss in mantra?

Thanks in advance for any hint. (Just downloaded the AXIS sss shader and I'm testing it out, thanks for sharing! Seems to work great out of the box!)

Edited by theflu

