The SSS Diaries

Hey guys,

I'm using the SSS Multi shader and loving it. I'm currently creating a pile of metaballs moving around (yep, lava lamp effect...) and the SSS unfortunately flickers very badly. Any tips? I've tried upping the res of the SSS Scatter SOP to quite high levels; do I need to go from high to insane point counts to eliminate the flicker?

Hey Gallen,

I'm assuming you're working with H9 where these VOPs are now part of the distribution, right?

Generally speaking, point clouds and deforming geometry don't play well together. Increasing the point count won't necessarily help.

You might want to try Serg's "AXIS_SSS" method (posted above on this page), which doesn't depend on point clouds, or the bundled "SSS Single" without a pointcloud file and with the "Scattering Phase" set to something close to zero. "SSS Single" shouldn't flicker unless there's intersecting geometry, and Serg's method may or may not flicker -- it samples illum away from the surface so it *may* flicker in the presence of shadows... dunno, but definitely try it.

Coming up with some robust/generic method for keeping pointcloud points coherent on deforming geometry has been on my TODO list for a long time... I haven't had a pressing need to do it yet, so it's still sitting on the shelf, I'm afraid.

HTH.


Coming up with some robust/generic method for keeping pointcloud points coherent on deforming geometry has been on my TODO list for a long time... I haven't had a pressing need to do it yet, so it's still sitting on the shelf, I'm afraid.

Going out on a limb here regarding the flicker issue, as I haven't played with any of this stuff, but has anyone tried using a "rest scatter" and attribute-transferring into it from the original scatter made on the deforming geometry?

Basically, I mean getting a bounding volume that covers the entire 3D space of the deforming object throughout its animation and scattering a dense point cloud over it which stays constant. When you scatter points on the deforming object you attribute transfer an attribute such as colour to the rest scatter. Then remove those points from the rest scatter that aren't close enough to the deforming scatter based on the value of the transferred colour. You are left with a scatter that fills the volume of the deforming object but whose points stay fixed in 3D space consistently from frame to frame in terms of position and density. You could get fancy to blur or fade in or out the areas where the object is growing or shrinking by comparing the scatter of the current frame to the ones around it.
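
A minimal sketch of that culling step (untested; "deform_scatter.pc" is an assumed point-cloud file written from the scatter on the deforming geometry, and this is phrased as later-Houdini wrangle VEX - in H9 the same logic would live in a VEX SOP or the AttribTransfer/Delete combo described above):

// Runs over the dense, static rest scatter; keep a point only if the
// deforming scatter has a neighbour within "maxdist" (hypothetical param).
float maxdist = chf("maxdist");
int handle = pcopen("deform_scatter.pc", "P", @P, maxdist, 1);
if (!pciterate(handle))
    removepoint(0, @ptnum);   // nothing nearby: this rest point is "outside"
pcclose(handle);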


Mario - thanks! The weird thing is that it works great on smaller-scale deforming objects, but on a larger scale - like 2-3 cubic metres' worth of deforming blobbies - it doesn't work.

I will check out the SSS Single though - I haven't had much luck getting that to work (I keep getting black) - but I will try again.

Thanks Mario! You are an inspiration as always!

Alvin


Just installed Serg's SSS Axis shader - and it works great outta the box! I had to set Mantra to raytrace mode though, else I get the polys' outlines showing up in the render. Raytracing seems to be a major speed hit for me.

Thanks Mario and Serg for sharing your stuff!!

Alvin

[attachment: post-1921-1213070896_thumb.jpg]


Just installed Serg's SSS Axis shader - and it works great outta the box! I had to set Mantra to raytrace mode though, else I get the polys' outlines showing up in the render. Raytracing seems to be a major speed hit for me.

Excellent!

About those lines... I think you're seeing the grid edges (not poly lines). I suspect there's probably an nrandom() call somewhere in that shader. If there is, then replacing it with a high-frequency noise() call might make the grid-like artifacts disappear.
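
For illustration, the substitution would look something like this (the variable names are made up - the actual shader code may differ - and the noise frequency is a guess to tune by eye):

// float r = nrandom();        // uncorrelated per sample; can expose grid edges
float r = noise(P * 200.0);    // high frequency, but spatially smooth and stable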

Going out on a limb here regarding the flicker issue, as I haven't played with any of this stuff, but has anyone tried using a "rest scatter" and attribute-transferring into it from the original scatter made on the deforming geometry?

Basically, I mean getting a bounding volume that covers the entire 3D space of the deforming object throughout its animation and scattering a dense point cloud over it which stays constant.

Interesting idea. I've tried something similar for pointcloud-based multiple scattering in clouds, but my gut reaction is that it likely wouldn't work for the SSS case (a hard surface shaded as having zero optical depth, and full occlusion) -- though it's the kind of thing that really has to be tested to know for sure.

The thing that makes me doubtful is that the volume-based rest scatter would never lie exactly on the surface -- or more importantly, its relationship to the surface (the spatial differences) would be constantly changing as the surface evolves/changes within the volume (if I understand your suggestion correctly). In my mind, this is likely to result in low-frequency flicker, but if you also consider that some percentage of these rest points would be constantly fluctuating between being occluded and unoccluded, then, well... I suspect you might end up with high-frequency popping as well.

Worth testing for sure, though.

There are usually a bunch of intuitive solutions for purely deforming surfaces -- i.e: local deformations but constant topology. But metaballs (Alvin's case)... well, that's a different animal I think... probably close to a worst-case scenario.

I agree with the basic idea of working in a "rest space" -- this was my suggestion for the "growing vines" problem (a few posts back in this thread). But I think the rest space needs to be mappable to a surface, not a volume. So, a 2D distribution in a parametric rest space that maps uniquely to the surface. In that context, we can think of the 2D pointcloud distribution as a texture map that projects unambiguously onto the surface.

The actual projection would need some custom treatment because we're not talking about an AttributeTransfer exactly, and the "unambiguous" and "unique" parts of that sentence may cause some headaches...

Actually... I got some play time today, so I'll try to come up with some sketch for what I'm talking about.... it will probably fail miserably, but now you got me thinking about it again :)



Hi Mario :)

thought I'd add my thoughts on the multiSSS VOP here rather than on the displacement SOP thread :)

after many, many tests I think I have finally grasped how to control the various parameters, and am getting some very nice results (displaced subdivision-surface character head, skin shader)

would love to post the final image but unfortunately the project I'm working on will be under NDA for a while so I can't just yet - could maybe pm you with it if you want though :)

anyway - a few issues have raised themselves:

1. the difference between computing the RGB stuff separately and doing it as one 'max value' is pretty big - ie a very large red shift with the former option (which does look much better) - I had to tweak the SSS colour (Rd) a lot to get the output back to a reasonable value again

a scalar value to control how much the RGB channels are 'separated' would be a great addition for controlling this (if I'm correct, these are separated kind of arbitrarily?)

2. I'm mixing the SSS lighting with an Oren-Nayar diffuse layer, but I have to create 'corpse'-like colour maps (ie with 90% of the colour leached out and slightly blue-shifted) for the standard diffuse shading, otherwise the skin gets too pink. I'm finding that post-multiplying the SSS part with this map helps a huge amount as well, before adding in the diffuse lighting (diffuse is currently at around 30% intensity, which I find is about perfect - more than that and the skin looks too hard; much less and the SSS just blurs everything a bit too much). Oren-Nayar is set to about 0.15 roughness, BTW. Not so much an issue, but I guess this is one of those things that breaks the "what you see is what you get" type approach (see the sketch after these points).

3. I'm coming up against a problem that I think has been previously mentioned in this thread - namely using a map to alter the SSS colour input into the VOP ("Rd" I think? - edit: nope - "sssclr" - sorry about that). If I heavily blur my "normal" skin-tone diffuse map (before leaching out the colour) and feed that in, I get great results, but unfortunately this also creates unstable splotchiness (ie it wanders from frame to frame) that I can't get rid of.

If I don't map this then I go back to a "monotone" SSS colour thing, which I feel plagues a lot of the SSS skin stuff I see (ie the SSS just lacks colour variation and looks weird)

I tried multiplying the SSS output with this blurred diffuse map after the SSS stuff had been computed, but it just didn't do the same thing - see attached (think I can get away with showing just the SSS pass!)
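
As a rough sketch of the diffuse/SSS layering described in point 2 (untested; the parameter names are invented, and a plain Lambert diffuse() stands in for the Oren-Nayar layer):

surface skin_layer(vector sss = 0;         // wire the multiSSS output in here
                   string corpse_map = ""; // the leached, blue-shifted colour map
                   float  diff_amt = 0.3)  // ~30% diffuse, as per the post
{
    vector leached = texture(corpse_map, s, t);
    vector nn = normalize(frontface(N, I));
    // post-multiply the SSS by the map, then add the dimmed diffuse layer
    Cf = sss * leached + leached * diffuse(nn) * diff_amt;
}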

anyway - I read the thread again, and there was the suggestion that I could add the colour as an attribute to the actual point cloud (which would be fine, considering that it's quite blurry anyway) and the VOP would pick it up

to do this I take it I can just add a colour attribute called "sssclr" to the points and build a SOP network to transfer the map colours into it? or should I be using Cd?

does the included multiSSS VOP in H9 actually look for these attributes on the point cloud, as you suggested?

on that subject, actually - are there any other attributes we can override in this way?

anyway - many thanks for the VOP - it really does give some great effects for very little render overhead

[attachment: post-3889-1216566480_thumb.jpg]


one other thing I forgot to mention:

in some of Mario's initial posts there was the suggestion that the apparent 'depth' of the scattering effect could somehow be modulated through the incoming SSS colour (ie "sssclr" (note: edited - is this checked for isbound?))

given that this probably needs to be a PC attribute and not a texture map (for the same reasons mentioned in the previous post), how does varying the SSS colour intensity affect scattering 'depth'?

eg should a darker luminosity be used to represent 'deep' and a lighter one to represent 'shallow', etc?

will probably play around with this if I can get it to work (ie the PC attribute thing), but some pointers would be great :)

cheers


and another one :)

- how does the multiSSS VOP node deal with motion blur? (or any other PC-based shading stuff, for that matter)... ie on, say, a DRA I have to tell Houdini to load object_$F+1 as a blur file... it just occurred to me that there may need to be something similar for the PC file as well...

kinda hoping that the VOP magically does this for me, but OTOH guessing it kinda won't :(

if it doesn't how would one go about implementing this?


and another one :)

- how does the multiSSS VOP node deal with motion blur? (or any other PC-based shading stuff, for that matter)... ie on, say, a DRA I have to tell Houdini to load object_$F+1 as a blur file... it just occurred to me that there may need to be something similar for the PC file as well...

kinda hoping that the VOP magically does this for me, but OTOH guessing it kinda won't :(

if it doesn't how would one go about implementing this?

ok, answering my own question - mantra shades first, then transforms to compute MB, so it doesn't matter... correct?


anyway - a few issues have raised themselves:

Wow. They've chosen some interesting defaults for the SSSMultiple vop.

1. The primary parameter for controlling the falloff is "Scattering Distance" (depth) -- this is the object-space distance that light will be allowed to travel before it dies out completely. When "Evaluate RGB Separately" (rgb) is off, the color of the source will be scattered equally for that distance (as though it were white light), then "tinted" by the "Subsurface Color" value (sssclr). When "Evaluate RGB Separately" is on, then "Scattering Distance" is still the maximum scattering distance, but the channels of "Subsurface Color" become the relative scattering distance weights for each of the R, G, and B channels.

So, if "Evaluate RGB Separately" is on, and "Scattering Distance" is set to 1, and "Subsurface Color" is set to [1,0.8,0.78], then red will travel 1 unit, green 0.8 (1*0.8) units, and blue 0.78 (1*0.78) units, producing a pinkish main hit which gradually falls off to a more reddish hue (since red scatters about 20% more than both green and blue). If you keep those same settings but turn "Evaluate RGB Separately" off, then the color of the light source will be scattered equally for 1 unit, and the result will then get multiplied by [1,0.8,0.78], resulting in the entire reflectance looking pinkish, without variation over distance. Hope that makes sense.

Regarding the defaults... for white skin, I'd start exploring with ksss=1, sssclr=[1,0.8,0.78], rgb="on", bounce=1. It's probably too red, but a little closer than those strange factory defaults. (why is ksss defaulted to 0.5? ditto for bounce?... and red for sssclr?... <shrug>)

2. The original SSSMulti did indeed have a Lambert diffuse component (which was mixed in, not added). This was there to counteract a slight deficiency in the model, so I'm not surprised that you find you need to add a little bit of some diffuse model. I've never found I needed to go above ~10%, but yes, that was Lambert not Oren-Nayar, so YMMV. This is ultimately a "whatever looks good" thing.

3. I'd have to look at their code to see if they've added support for overriding shader parameters via PC attributes. IIRC, my original version posted here didn't allow for this. So, if they kept things the same, then your best bet is to come up with a neutral base scattering tone (possibly with a red or blue shift) then simply multiply the output by the texture map (non-blurred). I'll try to have a look at their code tomorrow and see what I find.

4. For the micropolygon engine, yes, I believe it shades first then transforms, so everything should be OK. However, if memory serves, the raytracing engine used to behave differently (probably still does). I can't remember the exact details, but MB PC had to be treated differently with the raytracing engine -- it's been a while, but I think we ended up having to use a rest position (instead of vanilla P) to initiate the PC search, because the transformation happened first... but again, this was a while back and things may have changed now.
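
A hedged sketch of that raytracing-engine workaround (the shader and parameter names are assumptions; the point is only that a rest-position attribute bound from the geometry seeds pcopen() instead of the transformed P):

surface sss_rest_lookup(string pcmap = "";
                        float  maxdist = 0.1;
                        int    npts = 100;
                        vector rest = 0)   // auto-bound from a "rest" point attribute
{
    // open the cloud at the un-transformed position so the raytracer's
    // transform-then-shade order doesn't shift the lookup from frame to frame
    int handle = pcopen(pcmap, "P", rest, maxdist, npts);
    // ... gather/filter as usual ...
    pcclose(handle);
}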

HTH. Cheers.


thanks Mario - that's cleared a few things up for me :)

so when compute RGB separately is ON, the RGB values are *really* scattering distance values for each of the RGB channels, rather than a colour per se?

from my tests, either this means getting that colour in the SSS output as a by-product of scattering the different RGB components at different distances, or it's using it as a colour as WELL as a scattering distance... not sure which :)

the comment about mixing diffuse was more about the need to feed the diffuse part of my shader slightly strange colours (ie dead-skin colours) to mix properly with the SSS - from a few things I've seen on SSS, this seems to be a common approach though...

from tests last night, the VOP definitely isn't picking up any sssclr attributes from the PC, which is a shame

am I correct in thinking that colour variation on the sssclr input needs to be a PC attribute rather than a local surface property (eg as controlled by a texture map)? I believe someone mentioned getting very unstable results (ie flickering splotches) when they attempted to map that via a texture map, which is what I was doing for the first image in the attachment from yesterday - and I had noticed that animation tests were very flickery...

the thing is, if it's not supposed to be mapped, I'm not sure how to vary the SSS over the surface at all (beyond trying to map the depth radius, or simply multiplying the result)

thanks again


sorry to pester you again

I take it if I wanted to add a pcimport to get the sssclr from my point cloud, I'd need to grab the actual VEX code from somewhere and recreate my own VOP from that with any additions?


sorry to pester you again

I take it if I wanted to add a pcimport to get the sssclr from my point cloud, I'd need to grab the actual VEX code from somewhere and recreate my own VOP from that with any additions?

ok think I see where to make that change...


so when compute RGB separately is ON, the RGB values are *really* scattering distance values for each of the RGB channels, rather than a colour per se?

from my tests, either this means getting that colour in the SSS output as a by-product of scattering the different RGB components at different distances, or it's using it as a colour as WELL as a scattering distance... not sure which :)

With "rgb"=1, the "sssclr" parameter stops being treated as a tinting color and is instead used as three separate multipliers to "depth" - one each for R, G, and B. The result is you end up with 3 separate scattering distances: sssclr.r*depth, sssclr.g*depth, and sssclr.b*depth. So yeah, not a color anymore. However, the color variation in the scattering ends up being similar to what you see when those three values are interpreted as though they were a "tinting" color (and hopefully you now see the reason for the symmetry), but strictly speaking, they stop being used as a color and are instead treated as three separate distance multipliers, or weights, with the global control still being "depth".

am I correct in thinking that colour variation on the sssclr input needs to be a PC attribute rather than a local surface property (eg as controlled by a texture map)? I believe someone mentioned getting very unstable results (ie flickering splotches) when they attempted to map that via a texture map, which is what I was doing for the first image in the attachment from yesterday - and I had noticed that animation tests were very flickery...

OK. I just had a look at the code ($HH/vex/include/pcscatter.h, function vop_ssIntegMulti) and the short answer is no, shader attributes attached to the PC (like sssclr, for example) will not get picked up by the scattering function. And no, feeding a varying color (like from a texture map) into the "sssclr" input of the vop will not work as you might think (blotchiness, animation popping, and other artifacts will no doubt ensue) -- that parameter should be a constant color, not varying. The only PC-bound attribute (besides P and N) recognized and used (actually, more like *expected*) by the PC code is "ptarea". Adding the ability for the PC code to use sssclr and anything else you need is pretty straightforward, but it involves changing the code in $HH/vex/include/pcscatter.h::vop_ssIntegMulti().

Translation: any variation in the scattered color must be done after the fact: i.e: the output of sssMulti can be multiplied by some texture. That's your only recourse right now. However, this is not as limiting as it sounds. Given the way in which diffuse scattering works, all high frequency detail would become "mush" pretty much instantly (gets blurred a lot right away). The high frequency detail would instead get picked up by single scattering vop (which doesn't need a point cloud). If you think about it, this is not dissimilar to multiplying standard diffuse by a texture representing the "surface color". Even assuming that the PC code were able to pick up and use "sssclr", you'd need an insanely dense cloud to retain small detail (in terms of illumination, that's exactly the same reason for adding a small amount of diffuse -- to capture high frequency shadow boundaries).

I know it would be better if ssIntegMulti could use the attributes directly, and like I said, this is not hard to add, but... yeah, I'm afraid you need to get in there and tweak the code yourself for that to happen. Sorry.

At one point I was going to post a version that did this and a few other things, but since SESI took over the VOP, I got lazy and decided to let them maintain it. My bad :)

HTH.


brilliant - cheers Mario

not afraid to make some changes to the VEX :)

I would have probably stuck it in the wrong place though - ie added:

vector Rdb; //declare var for bound sssclr values

then

pcimport(handle, "sssclr", Rdb);

if(Rdb){

Rd = Rdb;

}

to the pcunshaded loop directly inside voplib.h::vop_multiSSS, rather than in the pciterate loop in pcscatter.h::vop_ssIntegMulti

mmm - looking at the code again, the only place I can see to put the pcimport call where it will not require changing anything else IS in the pcunshaded loop called directly inside voplib.h::vop_multiSSS...

from what I can make out, the var representing the sssclr in pcscatter.h::vop_ssIntegMulti (Rdo) is already multiplied by the intensity (sd) by the time you get to the pciterate loop - ie

vector ld = Rdo*sd;

where Rdo is the sssclr value passed to vop_ssIntegMulti from vop_multiSSS, and ld is what actually gets used for the calcs in the pciterate loop

so to put it in the pciterate loop in pcscatter.h::vop_ssIntegMulti I'd need to move that multiply inside the loop as well? (or I guess just ignore it, since it's just a multiplier anyway)

BTW - this is NOT to try and get high-frequency detail into the SSS shading - I AM multiplying by a desaturated version of the diffuse map post the SSS calcs (which works well), as you suggest :)

it's just to vary the colour (smoothly, for low-frequency large-scale features) in the SSS stuff - it really looks better if you do this by varying the sssclr, but I need it not to pop when animating :)

(the image I posted was an attempt to show the difference - ie simply multiplying afterwards doesn't give you the same effect as altering the input sssclr when separate RGB is on - the latter method looks a lot better)


I would have probably stuck it in the wrong place though - ie added:

vector Rdb; //declare var for bound sssclr values

then

pcimport(handle, "sssclr", Rdb);

if(Rdb){

Rd = Rdb;

}

to the pcunshaded loop directly inside voplib.h::vop_multiSSS, rather than in the pciterate loop in pcscatter.h::vop_ssIntegMulti

The pcunshaded loop in multiSSS gathers scattered illumination from neighbouring PC points and stashes it into the current PC point. It does this for all points (up to "Pointcloud Samples") involved in the filter, closest to the current shading point P. Think of it as the main loop. The pciterate loop in ssIntegMulti does the actual walk through the neighbours and calculates how much each one contributes to the caller's position. In pseudo-code:

// In multiSSS:
foreach point "p" in PC, up to PointcloudSamples, near shading position P {
   // In ssIntegMulti:
   vector contribution_to_p = 0;
   foreach point "p_neighbour" in PC within radius "sd" of point "p" {
	  contribution_to_p += some_sss_function(p_neighbour,p);
   }
   store "contribution_to_p" for point "p" in PC channel "ch_ssm" (pcexport);
}
vector Final_SSS_at_shading_point_P = pcfilter(handle,"ch_ssm");

So you see, multiSSS() just gathers already-shaded sss values for all the points involved in the filtering and then goes ahead and filters them into a single sss value. It is ssIntegMulti() that actually carries out the sss calculation for each point (and directly uses Rd, sd, etc) -- IOW, ssIntegMulti() does the actual integration over the scattering distance "sd".

Be careful not to pass the varying colors to both the actual surface (by overriding shader parameter "sssclr") and the pointcloud points (by adding the attribute "sssclr" and picking it up in the PC code). Otherwise you'll go from totally ignoring it (the factory situation), to accounting for it in two separate (and possibly conflicting) locations.

I would suggest you just keep "sssclr" and "sd" as constant parameters (as they are now) -- with the current meaning of "sd" as the main "Scattering distance" dial, and "sssclr" as the overall color bias (if any), and give the per-pc-point color variation a separate name (which you can interpret as "the surface color", regardless of the scattering profile given by sd and sssclr). This will allow you to say "No matter what the surface color is, this material will always scatter red wavelengths more than green or blue".

With that in mind, and picking the PC-bound attribute "Cd" (might as well keep it standard) to represent surface color variations, you could change the function "vop_ssIntegMulti()" as follows:

vector vop_ssIntegMulti (
   string pcmap;
   vector Rdo;
   float sd;
   float bounce;
   int t_rgb;
   vector pcP;
   vector pcN;
   )
{
   vector Xi,Ni;
   vector Xo = pcP;
   vector No = normalize(pcN);
   vector ld = Rdo*sd;
   float ld1 = max(ld);
   int handle = pcopen(pcmap, "P", Xo, ld1, (int)1e9);
   vop_pcIllum(handle,"illum");
   float r,ptarea;
   vector ssm=0, ptillum=0;
   while (pciterate(handle)) {
      pcimport(handle, "P", Xi);
      pcimport(handle, "N", Ni);
      pcimport(handle, "point.distance", r);
      pcimport(handle, "ptarea", ptarea);
      vector ptclr = 1;                            // <- THIS IS NEW
      pcimport(handle, "Cd", ptclr);               // <- THIS IS NEW
      pcimport(handle, "illum", ptillum);
      Ni = normalize(Ni);
      vector Li = (Xo-Xi)/ld1;
      float kb = vop_ssBounceAtten(No,Ni,Li);
      kb = lerp(1.0,kb,bounce);
      if(kb>0.0) {
         if(t_rgb)
         {
            int wave;
            for(wave=0;wave<3;wave++) {
               setcomp( ssm,
                        getcomp(ssm,wave) +
                           kb * getcomp(ptillum,wave) * ptarea *
                           getcomp(ptclr,wave) *   // <- THIS IS NEW
                           (1-smooth(0,getcomp(ld,wave),r)),
                        wave
                      );
            }
         }
         else
            ssm += kb * ptillum * ptarea * 
                   ptclr *                         // <- THIS IS NEW
                   (1-smooth(0,ld1,r));
      }
   }
   pcclose(handle);
   if(!t_rgb) ssm*=Rdo;
   float norm = 3.0*ld1*ld1*M_PI / 10.0;
   return ssm / norm;
}

Untested...

Maybe I'll get a chance to test it out tonight, but I *think* that should work.

Cheers.


IMO the scattering that happens during the first 5 mm of penetration is by far the most important contribution to the overall look of skin. Yet because we lack an appropriate tool for the job in Houdini, it's the part that we most often have to cheat on (by adding in some Lambert percentage). If we think about it, the balance of Lambert vs SSS we use is relative to the detail that the SSS shader can capture... this is assuming that an SSS shader would start to look Lambertian if the scatter radius were tiny, which it should. Well, not exactly Lambertian, but you know what I mean.

If we want to achieve realistic skin we can't treat the look as discontinuously as this, i.e. an un-scattered Lambertian layer added to a very blurry SSS; however much we tweak the percentage one way or the other, it's never going to really look like skin. This is because in reality only about 6% of light is reflected at the surface of skin, so if we are ever going to get near the look of skin we should try to build our SSS shaders under this constraint as much as possible.

The other 94% is light increasingly scattered with depth; this means the shader must be very spatially accurate, since most of the visible light is light that has only traveled a small distance through the skin.

The problem right now is, of course, that it's impractical to create and manage the huge number of points we'd need - effectively one per pixel - and of course you need an equally high-res model to scatter the points onto, if you were to use a pcloud-based shader.

This shallow-scattering effect is what I was trying to achieve with my first version of the chanlum-type shader. Of course, this failed miserably at the task, because it was only blurring shadows without considering the surface-to-light direction, so as you reduced the scatter radius it just became a cell shader (worked well at the time for snow, though).

However, the new version tries to account for this by displacing the jittered samples along the surface normal, using a Lambert function for the amount. In other words, as the surface looks away from the light, the sample positions get pushed into the surface, so you get a darkening effect because more samples are shadowed. This is quite an interesting parameter to have, because it seems to control the waxiness of a material: you'd turn it down for wax and up for skin. Unfortunately, this has an unwanted side effect in thin areas like the ears or nose, where the samples facing away from the light get pushed through to the surface facing toward the light, so it's possible to get too much light through these areas. Still thinking about how to avoid this, or control how much of it happens.
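
A sketch of that normal-push, with all names invented (the actual implementation is in the attached hip file):

vector push_sample(vector P, N, jitter, L; float push_depth)
{
    float lamb = clamp(dot(normalize(N), normalize(L)), 0.0, 1.0);
    // lit side (lamb ~ 1): samples stay near the surface; shadow side
    // (lamb ~ 0): samples sink inward by up to push_depth, so more of
    // them land in shadow and the surface darkens
    return P + jitter - normalize(N) * push_depth * (1.0 - lamb);
}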

The other thing is that the Lambert function must be done in an illumination loop, so the shader now has 5 light-mask fields, whereas before it would work with any number of lights. I thought 5 was the max I could bear having to wait for.

I also want to change the way the internal colouring happens; at the moment it's a very, very cheap cheat... :) Ramping the colour through each iteration would look much more realistic, since the scatter radius is also ramped from nothing to the set value.
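
That per-iteration ramp might look something like this (purely a sketch; names invented):

vector ramp_tint(int i, nsamples; vector shallow_clr, deep_clr)
{
    float t = (i + 1.0) / nsamples;   // 0..1 across the iterations
    // small radii pick up the "shallow" tone, larger radii the "deep" one,
    // mirroring the radius ramp from nothing up to the set value
    return lerp(shallow_clr, deep_clr, t);
}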

This is still a big cheat though... I'd expect a proper SSS shader like Mario's to look better if it could be made to work per pixel.

Lately I've also been thinking about setting up a process to generate baked illumination maps and the required UV distortion data, to copy Nvidia's realtime skin-shading method.

It's darned effective... as evidenced by The Matrix 2 and 3. I suspect that rendering these pre-passes might be less of a pita than managing dense pclouds.

While looking for the Human Head link I found this PDF of Nvidia SSS slides - it looks like the precursor to their human head demo... seems to have lots of useful information.

The Human Head Video

A render using my shader (added to 6% Lambert and spec), and the hip file with model and shader. It's about 3-4 mins (sorry, I always forget to look at the render scheduler) using 3 out of 4 CPUs on my 2.4GHz quad Opteron.

[attachment: post-1495-1216740997_thumb.jpg]

SSS_RnD_V14.hip.rar

