The SSS Diaries



Hey Serg,

I agree with everything you're saying. All the issues you point out become particularly important with materials that have a relatively short scattering distance, like skin... this is where the pc-based approach becomes less and less effective (while still keeping a reasonable number of points). IIRC, the Pixar paper that I adapted the algorithm from used texture atlases to store irradiance and do the transfer (it most certainly did not use pointclouds), so in that sense it was "per-pixel".

At the time I did this, point clouds had just appeared on the scene and seemed like they could be useful in this context, so I thought I'd try them out, and the result is this thread. But to be honest, I never liked working with them in practice (i.e: solving actual shots) -- all kinds of issues that are by now painfully familiar to anyone who's tried to use these vops.

I really like the results you're getting (no matter how much of a cheat it may or may not be :)). I just downloaded your archive and will dig around as soon as I get a chance. I'm also aware of the NVIDIA chapters in GPU Gems 3, and even though I haven't read through them yet, the results are certainly compelling. I also keep thinking that, given the advances since I started this thread, there's just *got* to be a better, equally or more accurate, *and* user-friendly way to do this.

Here's a wild thought: have you tried to bend PBR to the task? (I haven't)... anamous?

Cheers!


I just finished (or almost finished) implementing the GPU Gems 3 method. I'm still working on the color texture and adjusting the parameters.

I'm posting a few test renders here, but don't consider them final -- I'm still working on everything.

post-1475-1216747525_thumb.jpg

post-1475-1216747640_thumb.jpg

post-1475-1216747747_thumb.jpg


I just finished (or almost finished) implementing the GPU Gems 3 method. I'm still working on the color texture and adjusting the parameters.

I'm posting a few test renders here, but don't consider them final -- I'm still working on everything.

:D

Would be cool to see one with shadows, to show off those neat reddish glows :)

btw, re my shader, there are a few gotchas to be aware of. The main one is that due to the fine tolerances involved (especially with small scatter radii) the shading is very sensitive (the look will change) to certain things... e.g. shadowmap/deepshadow/raytrace shadows (and their bias settings), raytrace shading vs micropoly, and any combinations of these things.

So you will need to compensate for it by tweaking the scatter radius and diffuse bias settings. Very annoying, especially if you have to change these things midway through a project to suit the needs of a shot you didn't anticipate... it is unavoidable with this technique. Also, if you are using shadowmaps/deep shadows you might get strange results with displacement (sunken areas will appear darker, giving a dirty look); this is because the shadow map simply isn't accurate enough.

Using raytraced lights with raytrace shading (add the rendering property to the object) is IMO the most reliable method. Just remember the shader iterations can be set way lower than you would need if using micropoly, since the render engine will be doing most of the work in getting rid of the noise (according to your Pixel Samples setting in the render output).

If you want area lights you need to create duplicates that affect only objects that have the shader, and set their samples to 1. The shader will do the noise reduction.

cheers

S

Edited by Serg

This is a render using area lights, with raytrace shading added to the object, 3 pixel samples and 32 shader iterations. 5m to render.

Because the lights' area samples are set to 1, they are essentially free -- we are rendering multiple samples of the shadow in the shader anyway :)
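(Rough back-of-the-envelope, assuming "3 pixel samples" means 3x3 and that raytrace shading runs the full shader once per pixel sample: 9 x 32 is about 288 jittered lighting evaluations per pixel, which is why the per-shade iteration count can be set so much lower than in micropoly mode.)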

There is a sort of faceting effect; it goes away with Sub-Ds. Texturing/displacement is likely to hide it if you don't render as subd.

post-1495-1216753979_thumb.jpg


This is a render using area lights, with raytrace shading added to the object, 3 pixel samples and 32 shader iterations. 5m to render.

Because the lights' area samples are set to 1, they are essentially free -- we are rendering multiple samples of the shadow in the shader anyway :)

There is a sort of faceting effect; it goes away with Sub-Ds. Texturing/displacement is likely to hide it if you don't render as subd.

post-1495-1216753979_thumb.jpg

heh Serge

hope yer feelin better - we're all missin you! :D

yeah that latest test looks nice but still too much like amb occ rather than scattering to me...that technique does work nicely for more translucent effects though

I know the diffuse model (Oren-Nayar in my case) + blurry SSS is a kind of kludge, but I'm getting some nice results with it nonetheless (not perfect yet by a long way, mind you - i.e. still not near enough Jensen's reference stuff - or even the example renders in GPU Gems 3 (I bought that today, funnily enough!), which come very close to Jensen's work using the Gaussian filtering method)

the pc cloud stuff wasn't that bad (and for the "mr V" shots it's well doable) - my latest cloud is 1.5 million points scattered over a level 3 subdivision displaced head - it takes about 2 minutes or so to cook, but the render times for Mario's SSSmulti VOP go up a lot at that density as well - I think I may try using the actual points from the model, at maybe subdivision level 2 displaced, next time

Edited by stelvis

<much good stuff snipped>

Untested...

Maybe I'll get a chance to test it out tonight, but I *think* that should work.

Cheers.

Mario - a huge thank you AGAIN! :)

BTW - I know there have been some slightly jokey references to it already on this thread, but what do you really think about Jensen's more recent attempts to re-parameterize his own model (i.e. the whole melanin/hemoglobin thing), or to accelerate it using lookups for that matter... are they doable in VEX? - even if not 'easily' ;)

Edited by stelvis

Hey Ofer,

Is the implementation you're working on for Mantra?

I'd love to hear your experiences with it so far. Anything to watch out for? Any war wounds? :)

Hey Serg,

I just had a look at the hipfile you posted. I wish I had the energy to dig through all those VOPs, but alas, after a full day at work, my head is starting to explode at layer 1 :) Any chance you have this in a VEX flavour?

A few observations after playing with it a little:

1. There are two kinds of subtle artifacts I'm seeing. One looks like MP boundaries, and they remind me of an issue with using nrandom() in combination with displacing P to either trace or sample lighting (which I believe is your case). Try replacing nrandom() with a high-freq noise to see if that's what's causing it. The other one looks like the stepping you get when raymarching volumes. This one gets worse when I lower the scatter radius. My gut tells me it's related to the pattern of your displaced positions -- could it be that the spatial distribution of your sample positions stays the same for all P? (i.e. the cloud of samples around each P is constant?)

2. Why the fixed list of light sources? Is there something inherent in the algorithm that doesn't let you just loop through all lights?

3. I'm not familiar with what's going on inside, but I can't seem to get a really tight scatter radius. I get the feeling that "Back Scatter Boost", "Normal Bias" and "Diffuse Bias" are all playing a role in "blurring" the lighting and they end up clobbering the concept of an actual measured radius, but I could be wrong of course. Here's what I mean (the following have been gamma corrected (2.2) to linearize the result):

This is vanilla Lambert. This is the reference for raw irradiance (these are the boundaries that should get blurred more or less depending on the scatter radius):

post-148-1216767912_thumb.jpg

Here's a very tiny scatter radius of 0.01 (which in object space for this head is very small) rendered using the pc-based multiple sss (no lambert mixed in, and no single scattering, which is why the ears are not translucent):

post-148-1216767918_thumb.jpg

And here's your algorithm (without occlusion) also set to 0.01.

post-148-1216767924_thumb.jpg

I'm seeing the shadow areas filled in quite a bit, and some shadow boundaries not blurring at all (neck shadow line from left light). Maybe it's getting low-level illumination from the samples that get pushed into the surface? There is also a considerable boost to the overall illumination which seems to be affected by the scattering radius. Could there be some normalization missing somewhere?

The overall result is quite nice, but after you push the iterations to 128 and add the 64 occlusion samples... it gets a lot more expensive than I thought it would be. That last image, without occlusion, took a whopping 50:43.71 on my machine (2 procs), and the pc version took 8:07 with ~1 million points.

Despite all those "tweaky" things I just mentioned though, the main advantage I see with your approach (and it's a big one) is that it doesn't require one to mess about with yucky point clouds... that is without a doubt the bane of the pc approach... and the one thing I'm hoping can be removed from the picture at some point.

Anxiously awaiting ofer's report on the Nvidia approach :)

Cheers!

P.S: While doing these tests I realized that the bundled single scattering VOP is totally busted (at least in 9.1.246), which is why I didn't include it in the above. Some function must have changed along the way... haven't investigated, but it is definitely unusable as is...


i.e. still not near enough Jensen's reference stuff

Whew! That's good news, since it is not Jensen's model to begin with :D (it's Pixar's, from Finding Nemo)

BTW - I know there have been some slightly jokey references to it already on this thread, but what do you really think about Jensen's more recent attempts to re-parameterize his own model (i.e. the whole melanin/hemoglobin thing), or to accelerate it using lookups for that matter... are they doable in VEX? - even if not 'easily' ;)

Yes, I actually do think it can be implemented in VEX. I can't say with 100% certainty of course, since I haven't attempted to do it yet, but I don't see anything there that can't be done. The real question at this point is: what underlying structure do you use (within VEX/Houdini/Mantra) to sample irradiance and do the transfer? The only requirement for implementing the details of the transfer function, or a particular interpretation of the model's original parameters (which is what this skin-specific version is, as far as I can tell), is being able to understand the math.

Nope. The toughie here is: Given that we're all pretty sick of having to babysit pointclouds in order to sample at some distance from P (and be able to do so efficiently), is there a better, more user-friendly way of doing it (without using the HDK)?

At least that's what I think... today... it could all change tomorrow :)


Hi,

My implementation is in mantra. I think it is working pretty well, and it wasn't hard to implement. But I still have a few problems:

1. When turning on depth shadow maps, the mesh-unwrapping first pass (with mantra -u) is scaled in a weird way. So I just tried ray-traced shadows instead, and it works well. They're not even sharp, because everything is blurred anyway.

2. Since the rendering takes several passes, I need several mantra nodes and a COP network for the blurring. Now I have this annoying problem that I can't render them all one after the other, because I have to come in and manually push the 'Reload Sequence' button in the File COP. Maybe I'll write a Python script to automate it.

3. It is not slow compared to the SSS VOPs, but of course nowhere near as fast as a GPU implementation, so playing with the params is not as easy.

Aside from that, I think I have no problems. I solved the seams problem with an Expand COP and by compositing the result under the original unwrapped texture.

The specular model looks good, and all parameters can be either constant across the face or painted as a point attribute (maybe I'll add an option to use a texture, but I don't think it is necessary). I had some problems with using the normal maps in Houdini, but I guess it is only because I don't have much experience in shader writing.


Nope. The toughie here is: Given that we're all pretty sick of having to babysit pointclouds in order to sample at some distance from P (and be able to do so efficiently), is there a better, more user-friendly way of doing it (without using the HDK)?

At least that's what I think... today... it could all change tomorrow :)

totally - with the PC densities involved for good skin (i.e. with sampling radii around or under 1 cm) it would probably be more efficient to just sample the surrounding shading points (i.e. after dicing), and I guess if that was available as an option you could probably get connectivity information as well, which would be useful...

I was thinking there must be a way to raytrace to do it instead, but given that the points you would want to hit with any trace calls on the surrounding surface are probably the worst-case scenario in terms of likely distribution for random sampling (i.e. roughly in a plane tangential to the shading normal in most cases), that sounds very expensive... given a small scattering distance, is there any way to use the surface derivatives (if I understand what these are) to "help" the raytracing along?


Hi Mario

I just had a look at the hipfile you posted. I wish I had the energy to dig through all those VOPs, but alas, after a full day at work, my head is starting to explode at layer 1 :) Any chance you have this in a VEX flavour?

No VEX I'm afraid... my brain can only deal with nodes and wires ;)

A few observations after playing with it a little:

1. There are two kinds of subtle artifacts I'm seeing. One looks like MP boundaries, and they remind me of an issue with using nrandom() in combination with displacing P to either trace or sample lighting (which I believe is your case). Try replacing nrandom() with a high-freq noise to see if that's what's causing it. The other one looks like the stepping you get when raymarching volumes. This one gets worse when I lower the scatter radius. My gut tells me it's related to the pattern of your displaced positions -- could it be that the spatial distribution of your sample positions stays the same for all P? (i.e. the cloud of samples around each P is constant?)

I used to get weird wireframe-ish renders; I think this is what you refer to as MP boundaries. I haven't seen this particular bug in ages -- it was very obvious when I was making attempts to blur normals.

The faceting you see is, I think, a side effect of using shadows to achieve the look. This is because shadows have no concept of interpolated polygon normals (they can only see P). In other words, they see the angles between polys for what they are (thinner, thus brighter, at the edges).

So they are kind of like self-shadowing artifacts -- I suppose for the same reason you sometimes see jagged edges along the terminator in regular shaded stuff.

Also, I tried different ways to generate random per-pixel values, but I found "Non-Deterministic = Quasi-Stratified" to be by far the best. It has a very uniform distribution compared to all the other (very clumpy, thus harder to iterate until smooth) solutions.
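To make the comparison concrete, here's a rough VEX-style sketch of the two kinds of per-sample jitter (purely illustrative -- the function name and constants are made up, and the real network just uses the VOP's Non-Deterministic/Quasi-Stratified option):

```
// Sketch only: build a jittered sample position around 'pos' within 'radius'.
// use_noise = 1 is the diagnostic swap Mario suggested (noise keyed to P),
// use_noise = 0 uses non-deterministic randoms like the VOP option.
vector jittered_pos(vector pos; float radius; int use_noise)
{
    vector offs;
    if (use_noise)
    {
        // high-frequency noise of P: the jitter pattern is stable across
        // micropolygon boundaries, so MP artifacts should disappear
        vector n = noise(pos * 1000.0);
        offs = n - 0.5;
    }
    else
    {
        // non-deterministic randoms: a different value on every call
        offs = set(nrandom(), nrandom(), nrandom()) - 0.5;
    }
    return pos + offs * 2.0 * radius;
}
```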

Re the shadow, that is some kind of shadow map aliasing, I see it sometimes even on regular stuff. In this case I increased it to 2K rez and it virtually disappeared.

2. Why the fixed list of light sources? Is there something inherent in the algorithm that doesn't let you just loop through all lights?

In order to use a lambert function for the "Diffuse Bias" amount, this needs to be computed separately for each light, because we can't change P from within an illumination loop.

3. I'm not familiar with what's going on inside, but I can't seem to get a really tight scatter radius. I get the feeling that "Back Scatter Boost", "Normal Bias" and "Diffuse Bias" are all playing a role in "blurring" the lighting and they end up clobbering the concept of an actual measured radius, but I could be wrong of course. Here's what I mean (the following have been gamma corrected (2.2) to linearize the result):

Normal and Diffuse Bias only contribute to how much light is visible; they just move the scattered samples around.

I'm seeing the shadow areas filled in quite a bit, and some shadow boundaries not blurring at all (neck shadow line from left light). Maybe it's getting low-level illumination from the samples that get pushed into the surface? There is also a considerable boost to the overall illumination which seems to be affected by the scattering radius. Could there be some normalization missing somewhere?

I don't really understand why your render is so bright -- maybe you turned off occlusion but still left the ambient light on. But even then I don't get how it's so bright!

Re the sharp shadows, this is precisely what I was trying to achieve (a sharp edge with a decaying soft glow). The scatter radius is the maximum distance that light can travel. For each iteration the radius ramps up from radius/20 until it reaches the full radius (the minimum value will be a parm in the future).
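In pseudo-VEX the ramp is roughly this (a sketch of the idea only, not the actual VOPs -- 'radius' and 'iterations' stand in for the shader parms):

```
// Sketch: per-iteration scatter radius ramping from radius/20 up to radius.
int   i;
float min_radius = radius / 20.0;   // the minimum will become a parm later
for (i = 0; i < iterations; i++)
{
    float fi = i;                                          // int -> float
    float t  = fi / (iterations > 1 ? iterations - 1 : 1); // 0..1 over the loop
    float r  = lerp(min_radius, radius, t);
    // ...jitter a sample position within r, evaluate the lighting there,
    //    and accumulate as usual...
}
```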

As I mentioned in a previous post, light is continuously and increasingly scattered with depth. Skin is a very good example of this: we know it only reflects ~6% directly, yet it is still capable of receiving very sharp shadows. This is because light can exit at any point after it enters.

Sharp shadows also happen with very translucent materials; it's just harder to see, either because the deeply scattered light looks more intense relative to the surface, or because the material is actually transparent up to a certain depth. As you said, that's why your shader works really well for these types of things.

This is also why NVIDIA use several blurred maps with increasing blur values. Also, IIRC from Jensen's PDF, he is pretty much comping two or three layers together with different scatter distances for his multi-layer skin stuff. This is still kind of discontinuous, but it's happening in the right places, with the right colors etc. (in between dermis/epidermis/flesh).

The overall result is quite nice, but after you push the iterations to 128 and add the 64 occlusion samples... it gets a lot more expensive than I thought it would be. That last image, without occlusion, took a whopping 50:43.71 on my machine (2 procs), and the pc version took 8:07 with ~1 million points.

Sounds like you used the raytrace engine, but with a lot of samples. I find that the RT engine, 64 iterations, 16 occlusion samples, and 3x3 pixel samples is acceptable noise-wise. I get 10m with this (three 2.4GHz cores).

Some more tests. I removed all tinting to make it easier to see what's up, and there's no ambient light/occlusion. 3x3 Mantra pixel sampling.

Raytrace shading, 1K deep-shadows, 16 samples. 2m36s (3 cpus)

Ugly artefacts

post-1495-1216817313_thumb.jpg

Raytrace shading, 2K deep-shadows, 16 samples. 2m36s (3 cpus)

Less ugly artefacts

post-1495-1216817586_thumb.jpg

Raytrace shading, raytraced shadows, 16 samples. 1m58s (3 cpus)

Note: it's quite a bit faster than the deep shadow map (not surprising in this instance). It looks noisier... this is because there is some inherent softness in the shadow maps which helps with the noise reduction. It also looks like the scatter radius has slightly increased, for the same reasons.

post-1495-1216818252_thumb.jpg

Raytrace shading, raytraced shadows, 64 samples. 7m30s (3 cpus)

Noise is at a level I would be happy to use for finals. Some texturing and a bit of noise reduction in comp and it's gone :)

post-1495-1216818469_thumb.jpg

cheers

S


heh Serge

hope yer feelin better - we're all missin you! :D

yeah that latest test looks nice but still too much like amb occ rather than scattering to me...that technique does work nicely for more translucent effects though

I know the diffuse model (Oren-Nayar in my case) + blurry SSS is a kind of kludge, but I'm getting some nice results with it nonetheless (not perfect yet by a long way, mind you - i.e. still not near enough Jensen's reference stuff - or even the example renders in GPU Gems 3 (I bought that today, funnily enough!), which come very close to Jensen's work using the Gaussian filtering method)

the pc cloud stuff wasn't that bad (and for the "mr V" shots it's well doable) - my latest cloud is 1.5 million points scattered over a level 3 subdivision displaced head - it takes about 2 minutes or so to cook, but the render times for Mario's SSSmulti VOP go up a lot at that density as well - I think I may try using the actual points from the model, at maybe subdivision level 2 displaced, next time

I tried using the actual geometry as the pcloud too, with Mario's shader. It worked well. The most obvious benefit is that it won't flicker in animation, because you don't have points coming in and out of existence. I even used PolyReduce on it (small stuff like ears/fingers/nails etc. will have way too many points relative to the rest); the reduction is stable in animation if you also plug the geo into the second input of the PolyReduce node (rest position).

cheers

S

Edit: Oh, and check "Use Original Points" in the reduction SOP.

Edited by Serg

I tried using the actual geometry as the pcloud too, with Mario's shader. It worked well. The most obvious benefit is that it won't flicker in animation, because you don't have points coming in and out of existence.

ahh handy tip, i must give this a try

thx

jason


Hey Ofer,

Aside from that, I think I have no problems. I solved the seams problem with an Expand COP and by compositing the result under the original unwrapped texture.

Cool. Glad to hear it's working out.

I was reading that chapter on my way to work today. I'm guessing you're sticking with their approach of 1xIrradianceMap + 6xProfileConvolutions + 1xStretchTexture + 1xAlbedoMap? Did you go with their TranslucentShadowMap thing as well, or are you doing raytraced single scatter?

I was also reading Jensen's multipole paper, and I think it's very doable in the pointcloud context... maybe I'll give it a try sometime.

Hey Serg. Thanks for the breakdown.

In order to use a lambert function for the "Diffuse Bias" amount, this needs to be computed separately for each light, because we can't change P from within an illumination loop.

Right, I think I see what you're doing in there... well, maybe this restriction could go away with the new array support in 9.5?

Normal and Diffuse Bias only contribute to how much light is visible; they just move the scattered samples around.

I don't really understand why your render is so bright -- maybe you turned off occlusion but still left the ambient light on. But even then I don't get how it's so bright!

Yup, I had turned off occlusion but left the ambient light on. Sorry. That accounted for some of the extra brightness, but I think the main contributor is the smaller scattering radius -- you had it at 0.05 and I tested with 0.01. What I was talking about is a boost in illumination as the radius decreases (a boost well beyond irradiance); that's what made me think that maybe there was some normalization missing somewhere. In theory, as the scattering radius approaches zero, the result approaches raw irradiance (Lambert diffuse).
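To spell out the normalization I have in mind (just my notation, nothing from the shader itself): if each shaded point averages irradiance $E$ over jittered samples $x_i$ with weights $w_i$,

$$C(x) = \frac{\sum_i w_i \, E(x_i)}{\sum_i w_i},$$

then as the scatter radius goes to zero every $x_i \to x$ and $C(x) \to E(x)$, i.e. plain Lambert irradiance. A systematic brightening as the radius shrinks would suggest the weights aren't being renormalized somewhere.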

Here's a test of diminishing radii -- 64 samples, no occlusion, no ambient:

post-148-1216833285_thumb.jpg

No biggie though, it just caught my eye that's all.

Re my crazy slow renders... yep, I was playing around with the rop and left it set to the raytrace engine. MP does much better indeed.

Hey. In case I haven't said it already: Thanks so much for sharing this! It's a really nice model, and a welcome relief from dealing with those nasty point clouds.

ahh handy tip, i must give this a try

Don't forget to calculate "ptarea" when you're doing this though, else the results are going to be weird.
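For anyone following along: ptarea matters because it ends up being the per-point weight in the gather. Roughly like this (a bare-bones sketch with assumed channel names and a toy falloff, not the actual SSS VOP internals):

```
// Sketch: gather irradiance from a point cloud around 'pos', weighted by ptarea.
// "irradiance" and "ptarea" are the channel names assumed to exist in the cloud.
vector pc_gather(string pcfile; vector pos; float radius; int maxpts)
{
    vector sum  = 0;
    float  wsum = 0;
    int handle  = pcopen(pcfile, "P", pos, radius, maxpts);
    while (pciterate(handle))
    {
        vector irr, pp;
        float  area;
        pcimport(handle, "irradiance", irr);
        pcimport(handle, "ptarea", area);
        pcimport(handle, "P", pp);
        // toy distance falloff for illustration; a real shader would use a
        // proper diffusion profile here
        float d = length(pp - pos);
        float w = area * exp(-(d * d) / (radius * radius));
        sum  += w * irr;
        wsum += w;
    }
    pcclose(handle);
    vector result = 0;
    if (wsum > 0)
        result = sum / wsum;
    return result;
}
```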

Cheers.


I didn't implement the TSM part yet, and I'm not sure I will. I didn't implement the stretch correction either, but I will.

I don't do any single scattering. The results I get are only thanks to the texture-space diffusion technique. And I think it is good enough for me right now.

Currently I'm writing the 6 irradiance textures to disk, but now I'm trying to sample directly from the different COPs using the 'pic' expression. I'm just trying to figure out how to do that in VOPs. Maybe using the Inline VOP...

I didn't try reading Jensen's papers; I wouldn't understand them well enough to implement anything anyway.


Currently I'm writing the 6 irradiance textures to disk, but now I'm trying to sample directly from the different COPs using the 'pic' expression. I'm just trying to figure out how to do that in VOPs. Maybe using the Inline VOP...

Oh... you mean pic() to assign point colours in SOPs?

In VOPs I'd suggest you stick to the Texture VOP and yes, let Mantra load from disk. Also, keep in mind that images from COPs cannot be filtered, so I'd stay away from that. Stay with writing your convolved textures to disk and loading them into your shader with the Texture VOP (or texture() function).
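As a bare-bones sketch of what that ends up looking like in VEX (file names and weights are placeholders, and only three maps are shown instead of the paper's six):

```
// Sketch: combine pre-convolved irradiance maps baked to disk, sum-of-Gaussians
// style. s,t are assumed to be the same unwrapped UVs used at bake time.
surface sss_texspace(string map0 = "irr_blur0.rat";
                     string map1 = "irr_blur1.rat";
                     string map2 = "irr_blur2.rat";
                     string albedo = "albedo.rat";
                     vector w0 = {0.45, 0.30, 0.25};
                     vector w1 = {0.35, 0.35, 0.30};
                     vector w2 = {0.20, 0.35, 0.45})
{
    // weighted sum of the blurred irradiance maps (weights sum to 1 per channel)
    vector diff = w0 * texture(map0, s, t)
                + w1 * texture(map1, s, t)
                + w2 * texture(map2, s, t);
    Cf = diff * texture(albedo, s, t);
}
```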


Don't forget you can convolve the texture in VOPs also. Much the same way I'm blurring P, you can also blur UV... it would save you re-rendering potentially large numbers of 4K bakes when you only want to tweak the scatter radius and the like :)

The way it works, you can easily use a point attribute to change the blur radius in U and V... saving you from also having to bake the distortion data to images.

It's pretty darned cheap too, render-time wise :)

I've used it to blur procedural texture displacements etc...

I'm doing a quick little test to illustrate.
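In VEX terms it boils down to something like this (a rough sketch with made-up names, not the actual network):

```
// Sketch: blur a baked map inside the shader by averaging jittered UV lookups,
// instead of pre-convolving the texture on disk.
vector blur_texture(string map; float u, v; float radius; int nsamples)
{
    int    i;
    vector sum = 0;
    for (i = 0; i < nsamples; i++)
    {
        float du = (nrandom() - 0.5) * 2.0 * radius;   // radius is in UV space
        float dv = (nrandom() - 0.5) * 2.0 * radius;
        sum += texture(map, u + du, v + dv);
    }
    return sum / nsamples;
}
```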

Serg


ok here it is.

The bake with the two lights and an environment light took around 2m40s at 2K rez. The render was 42s, including pointlessly re-rendering the lights at 0.06%! :)

The UVs aren't really helping, as this doesn't have the radius compensation. It's not really following NVIDIA's model either.

with 0.02 radius (UV space of course):

post-1495-1216911965_thumb.jpg

With 0.04 radius:

post-1495-1216912425_thumb.jpg

0.15 :)

post-1495-1216912567_thumb.jpg

The Hip: Use the toggle in the shader to switch to the baking shader before baking, then render with the Mantra1 output.

SSS_BAKE_RnD_V1.hip.rar


Thanks, I'll have a look. But since the texture is convolved several times, and each time the previous one is used as the source, I think it would be difficult to achieve the same large kernel by shifting UVs. Unless I don't understand what you mean -- I haven't looked at the HIP yet.

