
The SSS Diaries



Hey there,

Sorry for being a little absent.

Just wanted to say great tutorial. I understood every word, and the concepts.

You should compile your post into a PDF or something.

Here is a lil render

copy0kh.jpg

Multiple scattering, but I'm really looking forward to seeing how displacements work with this. I'll go back through this thread and try to guess how it's done, and see how it goes.

-andy

Link to comment
Share on other sites

Sorry to keep posting, but here is a lil pic with just single scattering (sorry for the horrid colour). I'm really starting to understand this shader, especially with the help from this thread and from the help files.

cheers

ssssingletest0fx.jpg


  • 2 months later...

Hi

here are two renders I did with Houdini sss shader

hope you like them :rolleyes:

thorn.jpg

captain.jpg

I have some questions though

How should I go about texture-mapping the SSS shader to control the intensity and coloration of the SSS effect?

And if I have objects using different shaders alongside SSS, how do I render this? A light setup that works for opaque shaders will not necessarily fit the SSS, and vice versa.

So should I always keep the SSS in its own layer and add it in the composite? And if so, how do I multiply it with the diffuse?

And how do I render an opaque object poking through a jelly mass of SSS and get a nice transparency falloff on it?

Hope this makes sense :huh:

thanks

Z


  • 9 months later...

I really want to add some features to the shader.

As we know, skin needs multiple SSS, but there are a lot of SSS computation methods. I really like point clouds; it's the best way to deal with this type of effect using Mantra or PRMan.

I tend to use this shader for skin, but I need to know how to get a Cook-Torrance model for skin. I do have one written, but it's really old and not wavelength-based.

Also, an Oren-Nayar diffuse BRDF is needed for skin.

So these are the topics I'm trying to get at:

Oren-Nayar diffuse

Cook-Torrance model

and multiple SSS (which is already here). Also, how are diffuse and SSS combined to get the final image? Is there some complex multiplication equation, or just the standard one?


Saber: I really want to add some features to the shader. <...snip...> I need to know how to get a Cook-Torrance model for skin; I have one written, but it's really old and not wavelength-based. Also, an Oren-Nayar diffuse BRDF is needed for skin. <...snip...> How are diffuse and SSS combined to get the final image? Is there some complex multiplication equation, or just the standard one?

The Cook-Torrance specular model is usually associated with metallic surfaces, not skin. If you're looking for an oily "sheen" then you're probably better off using the specular component of Matt Pharr's old skin model -- it's faster to compute than any microfacet model and likely more believable.

If you insist on trying the Cook-Torrance model and, as you say, you already have an "old" implementation kicking around, then by all means use it! (this model hasn't really changed much since its original publication, except maybe for a couple of alternative distributions). If you don't already have an implementation (old or new), then just Google for it and you'll likely end up with no less than five billion hits :)

The Oren-Nayar diffuse model is already available to you: just use VEX's built-in diffuse model and set its roughness parameter to 1.

The multiple scattering VOP in this thread already provides you with a way to mix in Lambert diffuse (I think I called the parameter "Diffuse Mix" or something like that). You could always change it to Oren-Nayar (just set roughness to 1), but keep in mind that the more translucent a surface is, the less diffuse reflectance you'd expect to see (the "oily sheen" comes from a presumed oily layer *on top of* the skin, not intrinsic to it).
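In pseudo-Python, the kind of blend described above might look like this (the parameter name and the linear-blend behavior are assumptions for illustration, not the VOP's actual code):

```python
def shade_multi(multi_sss, lambert, diffuse_mix=0.1):
    # Hypothetical sketch: blend a small amount of local Lambert diffuse into
    # the multiple-scattering result to recover high-frequency detail
    # (shadow edges, etc.) that a sparse point cloud misses.
    return (1.0 - diffuse_mix) * multi_sss + diffuse_mix * lambert
```

With diffuse_mix at 0 you'd get the pure multiple-scattering result; the more translucent the material, the smaller you'd want that mix to be.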

I'm curious as to why you're asking specifically for Cook-Torrance and Oren-Nayar... did you read somewhere that these were "necessary" for "good skin"?

Hope that helps.

Cheers!


Hi friend, and thanks for the info.

Well, I need to know if the SSS model here implements the really physical multiple SSS. What I need is a di-pole shader, the one developed by Jensen, as in "A Spectral BSSRDF for Shading Human Skin" by Craig Donner and Henrik Wann Jensen.

Anyway, I have created my own SSS shader, but it's an approximation and I use just one layer for the SSS; I also use point clouds with RenderMan.

And in general, skin needs a Cook-Torrance model with a low value of turbulence (or wavelength turbulence), but generally artists create that effect using two specular textures to fake it; it can be achieved that way.

Okay, I just want to know if the model is a di-pole one or not.

Also, here are some of my tests done with RenderMan (I used Maya in this case). I have created a MEL script to automate the point cloud generation, with a complete UI.

If you are interested, I will share it.

hand02.jpg

Hand_shoot01.jpg

render passes.

hand_pass.jpg


I hope nobody minds that I'm merging Serg's and Saber's comments from the "Realistic Human Skin Urgent Please" thread into this one -- things were starting to get a little hard to follow when spread over two threads like that, and I thought that if code was going to be shared, it should then come over to this thread as it has more visibility.

Saber: Well, I need to know if the SSS model here implements the really physical multiple SSS. What I need is a di-pole shader, the one developed by Jensen, as in "A Spectral BSSRDF for Shading Human Skin" by Craig Donner and Henrik Wann Jensen.

The model I presented here in this thread is not the Jensen model. It is an adaptation of a model developed by Pixar and published in the SIGGRAPH 2003 Renderman Notes under the title: "Human Skin For Finding Nemo". See the first and second postings in this thread for more details on the model. I did attempt an implementation of Jensen's original paper but found it hard to use/control under the demands of production, so I set out to explore other approximations, finally landing on the Pixar one due to its simplicity. This, however, doesn't mean that I'm putting it out there as the "best" model or anything like that -- it's simply the model I happened to choose when I started the thread, and I think it turned out to be simple to use, fast to compute, and convincing enough for most applications -- but I'm not "married" to it :)

Saber: Thanks for the info again, but really we found that using di-pole SSS is the best way to deal with the effect.

Perhaps. One could make a case that it is "more physically correct", but in my opinion, it is much harder to use/control due to its parameterization being so unintuitive (and I'm talking about really *controlling* the look as per the demands of a typical production -- read: some person art-directing down to the pixel level). In other words, my personal reason for choosing the Pixar model had nothing to do with "physical accuracy" and everything to do with usability (again, the model in this thread is the one by Pixar).

Saber: But my problem is how to use the different SSS layers and how to combine them together. What equation do I need to apply? I have calculated the two SSS passes, but I need to know how to apply them to the diffuse.

Now I'm a little confused (and I blame us stupid humans for not having a single language to communicate with), but it would seem from those images you posted that you have no problem generating and/or combining the different layers. In any case, I can pretty much guarantee that in both the Pixar model (this thread) and Jensen's model, the two sss components are simply added. Period. From Jensen's paper: BSSRDF = Smulti + Ssingle (and optionally: + specular). And in more general terms, components of reflectance are typically constructed such that they can be brought back together again through ADDition.
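In code terms that combination is about as simple as it gets; a sketch (hypothetical names, per-channel values):

```python
def bssrdf(s_multi, s_single, specular=0.0):
    # Components of reflectance recombine by straight ADDition:
    # BSSRDF = S_multi + S_single (+ optional specular), per Jensen's paper.
    # No multiplication, no screening, no "complex equation".
    return s_multi + s_single + specular
```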

Also, the multiple-scattering component (of any sss model -- Pixar, Jensen, whatever) is not meant to be "applied to diffuse" as you say. This is because it itself *is* the diffuse component. IOW: the multiple-scattering component of an sss model would, in a perfect world, completely *replace* a local-illumination diffuse model like Lambert or Oren-Nayar. Unfortunately, in the specific case of the Pixar model (and for reasons that have to do with the shape of the absorption curve, and the fact that the number of points in a typical point cloud would be waaay too few to catch high-frequency detail like shadow edges and such), you need to help the multiple component along just a teeny-tiny bit with a local diffuse model like Lambert. But please note that this is a (truly minor) shortcoming of a particular implementation and not some theoretical feature of subsurface scattering in general.

Serg: The other one is for the very shallow light scattering (it's subtle but very obvious in its absence) that occurs at the skin level before being transmitted to the flesh underneath. <...snip...> Unfortunately I dont think you can do the scattering part of this step in mantra efficiently, the point clouds would have to be very very heavy.

Perhaps you could try single scattering with a phase of 0 and a small scattering distance? (it has the benefit of not requiring a pointcloud and the requirement is localized enough that it could work).
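For context on what "a phase of 0" means here: assuming the phase parameter is the usual Henyey-Greenstein eccentricity g (which is my assumption, not something stated in the shader docs), the phase function collapses to isotropic scattering at g = 0. A sketch:

```python
import math

def henyey_greenstein(g, cos_theta):
    # Henyey-Greenstein phase function. At g = 0 the denominator becomes 1
    # and the result is the isotropic constant 1/(4*pi), regardless of angle;
    # g > 0 biases scattering forward, g < 0 backward.
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * math.pi * denom)
```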

Serg: I found this link a couple years ago about the skin shading used in the film Matrix Reloaded: <...snip...> for animation its very impractical to be baking the diffuse lighting per frame, blur it, re-apply it, re-render, oops the light changed, re-bake... etc, etc... and uvs must have as few seams as possible <...snip...>

This is why I'm hoping that Mario will one day give us a shader that blurs the diffuse lighting a little bit, or maybe a version of his shader that does the scattering without the need for pclouds.

Yeah... I read that leaflet on the Matrix method back when it came out... it made me cringe back then, and it's still having that effect on me while re-reading it now :) It re-casts the problem as a 2-D blur, but generates a fresh new mountain of headaches... also this is only useful for very small scattering distances, like skin, but not as a generic model.

As far as developing a "shader that blurs the diffuse lighting a little bit", well, that's pretty much what's going on in this thread's shader (the multiple-scattering part). You can think of the scattering distance as the object-space blur radius with a kernel that's somewhat reminiscent of a Gaussian.... all of which is happening on the surface (as opposed to an arbitrary sphere away from P along N -- see below).

Serg: There is a shader for C4D called chanlum that seems to do pretty much what I'm after: http://www.happyship.com/lab/chanlum/docum...tion/index.html

Wow. I have never visited that site, but his "random-samples-in-sphere" approach is exactly the method I came up with quite a few years ago (before Jensen and the BSSRDF craze) to do scattering in snow. The only difference was that my sample positions were not truly random (and therefore avoided the noise problem even at low sampling rates). Spooky.

Anyway... the obvious problem with this is that it ignores the surface's topology (within the sphere's volume). So again, only useful for very small scattering distances. But a piece of cake to implement. Really. Give it a try -- every P gets the average irradiance ("blurred diffuse") within a spherical volume. But note that this is different than every P getting the average irradiance over a chunk of the surrounding *surface*. And so you can see that the difference between those two would only be negligible for the cases with relatively small scattering distances ("blur radius").
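A minimal sketch of that random-samples-in-sphere idea (all names hypothetical; `irradiance` stands in for whatever lighting lookup the renderer provides):

```python
import math
import random

def sample_in_sphere(center, radius, rng):
    # Rejection-sample a point uniformly inside a sphere around 'center'.
    while True:
        v = (rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0))
        if v[0] * v[0] + v[1] * v[1] + v[2] * v[2] <= 1.0:
            return tuple(c + radius * x for c, x in zip(center, v))

def blurred_diffuse(P, radius, irradiance, nsamples=64, seed=0):
    # Every P gets the average irradiance within a spherical volume.
    # Note this ignores the surface topology inside the sphere, so it only
    # holds up for small radii ("scattering distances").
    rng = random.Random(seed)
    total = 0.0
    for _ in range(nsamples):
        total += irradiance(sample_in_sphere(P, radius, rng))
    return total / nsamples
```

Using truly random sample positions (as above) brings back the noise problem at low sampling rates; a stratified or fixed pattern avoids that, as mentioned for the snow shader.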

Serg: Another often neglected component of realistic skin is its varying soft reflectivity: very soft at the cheeks, transitioning to quite sharp reflections at the nose, for example. Usually the case is that the render time is too long for this.

Every parameter of the VOPs presented here can be modulated by some texture, or function, or heck, an entire sub-shader if you like (just about anything that outputs the right data type) -- that's why I made them VOPs :). There's nothing stopping you from defining the scattering distance and/or the amount of diffuse mix (or anything else) via a texture map. The render time penalty for using a few texture maps should be close to undetectable on a frame with "typical" shading complexity, so I'm not sure what you mean by "render time is too long for this".

Serg: btw, why haven't developers already put glossiness/roughness multipliers onto the lights themselves? Isn't it a very obvious thing to try to imply the size or proximity of a light by changing the size of its fake reflection?

Absolutely. Controls for specular size and sharpness are standard in all our light shaders.

Saber: <...> and in general skin needs a Cook-Torrance model with a low value of turbulence or wavelength turbulence <...>

Are there some references you could cite where I can read more about this? I'm intrigued.

Saber: Okay, I just want to know if the model is a di-pole one or not.

No. It is not a di-pole model.

Saber: Also, here are some of my tests done with RenderMan (I used Maya in this case). I have created a MEL script to automate the point cloud generation, with a complete UI.

If you are interested, I will share it.

Nice images. And yes, I would very much appreciate it if you would share your method. At the very least I can attempt to convert it to VEX so you can use it with Mantra. It would also make the things you're asking about very clear from looking at the SL code (words can be very confusing sometimes :)).

Cheers!


This is my shader. There are two separate SL files: one for point cloud generation, called bakeptc.sl, which then needs to be filtered with ptfilter; the other is skin.sl, in which you specify the ptc to be used with the skin.

This needs some tweaking: when filtering the SSS you need the correct skin scattering values on the RGB channels, and then you can tweak your unit scale, depending on your model's size, to get the correct value.

Anyway, this works and gives clean SSS, but for skin I always tend to use two SSS layers: one for the epidermis/upper dermis, and one for the blobby dermis, which is optional and generally comes in later.

But whenever I try that, I don't really get what I'm looking for, and I don't understand how it should be built. I use Mental Ray a lot, and I have taken an in-depth look at how miss_fast_skin works in MR; it works, but it's based on approximation and really uses layers that stand in for the skin layers.

One of the Matrix guys who helped create and work with their shader told me that it is a di-pole shader. He doesn't read code, but he was sure the shader is a di-pole using two SSS layers. We were trying to recreate it together, but as you know they keep their shaders proprietary, so we have no access to how it was done. And my real problem wasn't in creating two skin layers but in combining them together, because those layers need to interact with each other, not just via an add, multiply, or screen function; the point clouds, after being merged, need to be combined with the other components. I really hope to get an idea here of how this could be done. Here is my shader, and I'm sure it can be the one we are looking for. I don't care about render time, but I need the most sophisticated look ever seen. I have also talked to Christophe Hery, the ILM guy responsible for developing the Davy Jones shader (you can find the discussion in the Pixar forum if you search for "skin"). He said they use what I use, but with three SSS components, because the Davy Jones face has many parts that appear transparent, where you can nicely see what's behind the epidermis. Anyway, you can find more info there. He also spoke about using:

a Cook-Torrance model for specular based on wavelength, and he shared a basic Cook-Torrance model. In my opinion, you can fake this out using textures and high-res maps.

Any help is appreciated.

BakePtc.rar

Skin.rar


"So again, only useful for very small scattering distances. But a piece of cake to implement. Really. Give it a try -- every P gets the average irradiance ("blurred diffuse") within a spherical volume. But note that this is different than every P getting the average irradiance over a chunk of the surrounding *surface*. And so you can see that the difference between those two would only be negligible for the cases with relatively small scattering distances ("blur radius")."

This is how I imagined it could work; I just don't find it a "piece of cake" to put into practice in a VOP network (can't code) :)

I get as far as blurring P by adding the output of a NonDeterministic Random to it, but I can't see how to get it to affect the lighting model, since it has no P input... this seems to work for blurring procedural noises, as long as you render with -r and lots of samples. I tried it on the occlusion VOP as well, but since it's a P blur (not a surface blur) it starts to darken the polygon boundaries. It's interesting though.

Edited by Serg


Just tried the single scattering with a phase of 0 (dunno why I hadn't tried that before!). It works really nicely; the only problem is that the shadows are sharp.

I was going to try the P-blur thing on it, but I noticed that the shader will croak if I plug P straight from the global var into the shader's P input.

Here's the error:

errors during compilation:

"C:/DOCIME~1/blahblah/LOCAL~1/Temp/surface133840.vfl" line 321 WARNING (2005) Implicit casting failed for set(). This is because there are multiple versions of the function and the compiler cannot determine which version to use based solely on the arguments given. The compiler has chosen "vector set(int)" instead of:

"vector set(vector)"

"vector set(vector4)"

"vector set(float)"

and a few more similar errors on lines 321, 289, 291 and 174, to do with: uninitialized variable "pp" used as argument for "set".

cheers

S


This is my shader. There are two separate SL files: one for point cloud generation, called bakeptc.sl, which then needs to be filtered with ptfilter; the other is skin.sl, in which you specify the ptc to be used with the skin.

Ok. I had a look at the prman shader you posted, Saber.

I don't have prman at home so I can't run tests right now, but looking at the code, the subsurface portion looks like a straight lift from Application Note #37.

Mechanically, the way it works is very similar to the way the shader in this thread works. The algorithm however, is different: PRMan's algorithm is an implementation of this accelerated version of Jensen's original, whereas I used the method in these notes.

Their version is not really possible to reproduce using a shading language (VEX or SL); that's why they did it as a separate program (ptfilter). The "hierarchical" part of the name refers to the way the data (P, N, area, radiance) is stored so that it can be accessed very quickly. These kinds of data structures are just not possible in a shading language (VEX or SL). The only option is to either do it as a stand-alone (like ptfilter) or as a dso.

Note, however, that each tiny parameter tweak in the PRMan system (*after* pointcloud generation) entails running ptfilter and then re-rendering the frame. This is rather cumbersome, but wouldn't be so bad except that the parameters themselves relate to each other in very non-linear (hard to predict) ways, making the whole shader-space exploration part of the process pretty irritating (unless, of course, you happen to want one of the pre-measured materials: apple, chicken1, chicken2, ketchup, marble, potato, skimmilk, wholemilk, cream, skin1, skin2, and spectralon).

However... it *should* be possible to implement Jensen's original paper (i.e: minus the acceleration) in VEX using point clouds. In fact, before I started this thread I had attempted to do just such a thing. I just looked around and managed to dig up (more like "resurrect" -- this is from 2003 ferchrissakes!) that code:

old_di_pole_sss.rar

(the alpha tables were generated by a separate standalone C++ program I wrote for that purpose, but I haven't found that bit yet... anyway, the pre-generated tables are there at least).

You're definitely welcome to read, look at, use, abuse, even point and laugh at it, but please don't expect me to know exactly what every line is doing in there -- it's just been too long and I'd have to sit and spend a couple of days recreating the whole mess back in my head again (something I'm not really looking forward to doing right now, sorry). About all I can tell you is that at the time I was reasonably content that this was a somewhat error-free implementation of Jensen's original di-pole model.
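For anyone comparing implementations, the heart of Jensen's original di-pole is the diffusion profile R_d(r). Here's a plain-Python transcription of the standard formulation from the 2001 paper (this is the textbook version, not lifted from the attached archive, so treat it as a reference sketch):

```python
import math

def dipole_rd(r, sigma_s_prime, sigma_a, eta=1.3):
    # Jensen's di-pole diffusion approximation R_d(r): a real source below
    # the surface and a mirrored virtual source above it.
    sigma_t_prime = sigma_s_prime + sigma_a          # reduced extinction
    alpha_prime = sigma_s_prime / sigma_t_prime      # reduced albedo
    sigma_tr = math.sqrt(3.0 * sigma_a * sigma_t_prime)  # effective transport
    # Diffuse Fresnel reflectance fit used in the paper.
    fdr = -1.440 / (eta * eta) + 0.710 / eta + 0.668 + 0.0636 * eta
    A = (1.0 + fdr) / (1.0 - fdr)
    z_r = 1.0 / sigma_t_prime                # depth of the real source
    z_v = z_r * (1.0 + 4.0 / 3.0 * A)        # height of the virtual source
    d_r = math.sqrt(r * r + z_r * z_r)       # distance to real source
    d_v = math.sqrt(r * r + z_v * z_v)       # distance to virtual source
    return (alpha_prime / (4.0 * math.pi)) * (
        z_r * (sigma_tr * d_r + 1.0) * math.exp(-sigma_tr * d_r) / d_r ** 3
        + z_v * (sigma_tr * d_v + 1.0) * math.exp(-sigma_tr * d_v) / d_v ** 3)
```

The unintuitive parameterization I complained about is right there: you steer the look through sigma_s', sigma_a, and eta, none of which map cleanly to "make the ear glow a bit more".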

So... have fun and tear it to shreds if you like, but you pretty much gotta take it "as is".

@Serg: I'll take a look tomorrow at what could be causing those errors when wiring P (sounds like vcc is saying "pp" was never declared). This happens when you wire the global P (to the vop's P), right?

Cheers!


I was going to try the P-blur thing on it, but I noticed that the shader will croak if I plug P straight from the global var into the shader's P input.

Hi Serg,

Yup. There's a bug in the VOP side of the code -- forgot the silly dollar signs in front of the local vars P_, N_, and I_.

I'll update the submission to the exchange, but in the meantime you can fix this yourself very easily:

Open up the "Operator Type Properties" on the VOP (you'll have to do both the multi- and single-scattering VOPs, one at a time). Go to the "Vex Code" > "Inner Code" tab, and change the first three lines, which currently read like this:

vector $pp = $isconnected_P_ ? P_ : P;

vector $nn = $isconnected_N_ ? N_ : N;

vector $ii = $isconnected_I_ ? I_ : I;

to this:

vector $pp = $isconnected_P_ ? $P_ : P;

vector $nn = $isconnected_N_ ? $N_ : N;

vector $ii = $isconnected_I_ ? $I_ : I;

Cheers!


  • 1 month later...

Hi All,

I have been using Mario's amazing SSS shader (the SSSFull5 version), and I managed to get successful still renders. However, when I tried a sequence test, it came up with a flickering artifact that looks like a point cloud failure. This is my first attempt at SSS, so I am not quite sure about the parameters I need to use for a sequence.

Obviously, I need to find a way to keep the point distribution in the same order, but since I am using metaballs... I am not sure how to do it.

Any suggestions regarding this matter would be appreciated.

The details about the problem can be seen at the following link:

http://forums.odforce.net/index.php?showtopic=5624&st=12

cheers


The details about the problem can be seen at the following link:

http://forums.odforce.net/index.php?showtopic=5624&st=12

This is the bane of any pointcloud-based method... maintaining reliable distributions across animating surfaces or volumes... sigh...

I don't think there's an "easy" solution that fits all cases. With metaballs... hmmm... I guess it depends on how much they're distorting. If they're always attached to each other in roughly the same way, then you could use some smoothly-varying attribute (say, texture UVs) as a space for the point distribution, then map them back to the geometry. If they're attaching to, and breaking apart from each other all the time, then you're out of luck (with respect to using well-behaved pointclouds).

If there's nothing that can be used as a reference (something that stays roughly continuous between frames), then you might have to drop the whole pointcloud idea and use a different sampling method altogether. The brute-force method would be to cast lots of rays from the entry position of the principal ray (P) in the hemisphere about (-N) and run the same algorithm that is there now on each hit (these hit positions would replace what each PC position now represents). Problem with this of course is that you'll be doing it for every shade point and so you'll lose the ability to cache results along the way (which is why the pc method is faster).
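The direction-generation part of that brute-force idea can be sketched like this (uniform sphere sampling flipped into the hemisphere about a given normal; the actual ray casting and per-hit shading are renderer-specific and omitted):

```python
import math
import random

def hemisphere_dirs(n, count, rng):
    # Uniformly distributed unit directions in the hemisphere about the unit
    # vector n (pass -N for scattering "into" the surface), via Gaussian
    # sphere sampling plus a flip into n's half-space.
    dirs = []
    while len(dirs) < count:
        v = [rng.gauss(0.0, 1.0) for _ in range(3)]
        norm = math.sqrt(sum(x * x for x in v))
        if norm < 1e-9:
            continue  # degenerate sample; try again
        v = [x / norm for x in v]
        if sum(a * b for a, b in zip(v, n)) < 0.0:
            v = [-x for x in v]  # reflect into the hemisphere about n
        dirs.append(v)
    return dirs
```

Casting rays from P along these directions and running the existing per-point algorithm at each hit would replace the pointcloud lookup, at the cost (as noted above) of losing all caching between shade points.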

I just saw your video.

I'm glad that more points stabilized things (though I'm sure the render times went up as well), but now you have a different problem. See how when the tentacles get close together they get brighter? That's because as they get closer, there's more likelihood that their points will be included in the SSS calculation (much like metaballs start affecting each other as they get close to each other). Solving this takes some effort though... pointclouds are great, but they can be such a pain in the arse sometimes...

Cheers!


pointclouds are great, but they can be such a pain in the arse sometimes...

Cheers!

Hi,

Thank you for your detailed explanation, Mario. It really helped me understand the logic of the pc method.

And yes, the render times went up after I increased the point count. But I am happy that it pretty much fixed the flicker problem. On the other hand, like you mentioned, the densely pointed areas got brighter. In this case I actually like that effect, but I am going to try to find the minimum point count that avoids both the extra brightness and the expensive render times.

Thanks again for the explanation and the great shader.

Cheers

Selcuk


Hi Mario,

I have enjoyed myself quite thoroughly tinkering with your SSS solution. Thank you for making your OTLs available to us. I have learned a great deal in the process of getting a successful SSS render.

A couple of questions come to mind:

1. Color Mapping

In experimenting with NVIDIA Gelato's SSS shader, mapping a texture into the color channel (Rd inside the VOP network) yields very crisp detail while retaining accurate subsurface scattering effects. (See picture below.)

post-2285-1185309962_thumb.jpg

I have modified the MultiSS VOP to allow for texture mapping, but it seems that the pixels are interpolated as a function of the "number of points" attribute (more points give smoother values; fewer points condense into Voronoi patterns), which leads me to conclude that this is the wrong channel to be messing with. I suppose I can combine the SSMulti output with another color channel and composite the two together, but I was wondering if I am missing something obvious.

2. Blocking Geometry

Pixar's Application Note #37 suggests a method for creating "blocking geometry" (imagine bones in a translucent fish) by baking negative illumination values on the internal object into the point cloud, making the desired regions darker. (See picture below.)

post-2285-1185311546_thumb.jpg

Does your point cloud generator, in its current state, provide the means for a similar mechanism, or would this necessitate a code rewrite?

Many thanks for all your effort on this project!

