The SSS Diaries



Yeah, the shader can't use the previous iteration as the basis for the next blur. I reckon it's probably not a big deal though, if it matters at all.

Unfortunately the shader approach is also a box blur, but you have many more "textures" (the shader iterations being the number of blurs), which might compensate for the boxy ugliness. Fortunately the biggest kernels are nearly blended out of existence.

One thing I noticed is that the nrandom MP borders bug has shown up again with this UV blurring business (very visible if you up the gamma on renders).

cheers

S

Edited by Serg

Reworked the shader to follow NVidia's blur and rgb weights (I was blurring rgb separately before, rather than weighting the colors), looks way way better now although the radius is too big. This is how I'd like my other shader to look, but I don't think it's possible.

Not sure if my gamma correction stuff is correct though.

Here's a render without amb occ:

post-1495-1216990277_thumb.jpg

The scene: I removed the model to save bandwidth... just copy/paste the locked nodes from the last update. I left some form of rudimentary distortion in the /obj/TestGeo... it's not plugged into the shader yet.

Just noticed I left Entry and Exit weight parameters in there... they don't do anything. The Nvidia weights are hardwired inside the ForLoop, using constants with a Spline Vop to ramp them.
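In VEX terms that hardwiring boils down to something like the sketch below (the weight values here are placeholders rather than the actual NVidia numbers - the loop index drives a spline() the same way the constants feed the Spline Vop):

vector blur_weight(int iter; int niter)
{
    // Placeholder weights - substitute the per-Gaussian RGB weights
    // from the NVidia reference here.
    vector w1 = {1.0, 1.0, 1.0};
    vector w2 = {0.8, 0.5, 0.3};
    vector w3 = {0.5, 0.2, 0.1};
    vector w4 = {0.3, 0.1, 0.05};
    vector w5 = {0.2, 0.05, 0.0};
    vector w6 = {0.1, 0.0, 0.0};

    // Map the ForLoop iteration to 0..1 and let spline() ramp the weights,
    // just like feeding constants into a Spline Vop.
    float t = iter / max(niter - 1.0, 1.0);
    return spline("linear", t, w1, w2, w3, w4, w5, w6);
}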

SSS_BAKE_RnD_Shader.hip.rar

cheers

S


You said you have the UVs - could you bake the AO into your map and overlay it on your shader? I guess this will improve it a lot. Also paint a subdermal and epidermal map and apply it.
I have the feeling that no matter how good/bad your shader is, the color texture is making the most difference. So maybe you should paint a texture for your face model, and it will help you get better results.

Uhmmm... guys... I don't mean to speak out of turn here, but I believe Serg is busy trying to get the basics of the scattering algorithm down at this point -- not trying to make pretty pictures (not yet anyway). The inclusion of the diffuse map is among the last steps of the algorithm (and pretty much secondary to the diffusion itself, which is the heart of the thing). And occlusion is taken care of implicitly at the irradiance step.

Anyway... just sayin'... but carry on :)


'Brute force' raytraced SSS?

hey guys,

I was wondering if there's any SSS shader out there that just does straight raytracing on the surfaces.

I'd like to compare speeds with a pointcloud based system... on an effect that I'm working on that has constantly changing topology, so it's hard to get a consistent point cloud.

I'm working on a system that keeps a consistent pointcloud over time, which allows me to use few points, but the system in itself is getting very heavy, takes time to calculate, and doesn't work perfectly yet.

The only other option to my knowledge is scattering a 'whole bunch o' points'.

Anything out there?

Edited by Aearon

Thanks Serg - having a look at this now

one comment springs to mind immediately (though I could just be plain wrong about what you're doing) - do you actually need to create quasi random sample locations in the UV map?

ie instead actually create a 7 * 7 convolution matrix centred on the actual shade point's UV coords using the variances (blur widths) - and then just sample the map at each of the 49 resultant UV locations?

I'm assuming the normal map filtering will do the job I think you're using the random generator to do...

you could then actually do a 'proper' Gaussian convolution on these values rather than simply averaging them - seems that would bring this more in line with the Nvidia reference

on a more general level and just to recap a bit, it seems that the whole approach to SSS can be broken down into two basic steps:

bake surface irradiance - either in a UV map or in a pc - both of these methods involve storing the surface irradiance as samples, but the map is probably always going to have the edge here - ie a 2k map will have 4 million samples (and these will have been subsampled as well, thus its sampling accuracy is even better) whereas it's very difficult to get the same sampling density in a PC (actually this got me thinking - why is a pc file size so much bigger than a correspondingly dense FP image?? - obviously the PC needs to encode more data - P, N, ptarea, etc, but still they seem orders of magnitude larger...)

for each shade point, apply a diffusion profile to this encoded irradiance over a defined area (ie the scattering radius) to get the SSS contribution for that point - for Jensen & Donner that means some seemingly very complicated maths and multipoles, for NVidia it means 6 summed Gaussians designed to fit the same profile produced by Jensen & Donner's work, and in Mario's SSSmulti case this seems to be applying a "smooth" (= roughly a single Gaussian?) over the values, which will give you a falloff but not necessarily one that matches the 'accurate' diffusion profile.

from reading the NVidia paper it would seem that the diffusion profile is quite important though...

I like the idea that Serg has implemented in his latest shader though - ie of using a spline to model the diffusion profile - seems to me that we should be able to use that even on a pc cloud? (ie as opposed to a less accurate smooth)... or even use one of the new ramps to do the same thing

a diffusion profile controlled by a colour ramp plus scattering radius (ie sampling values are scaled to fit the ramp by the scatter radius) sounds like a good 'control' interface as well
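something like this is what I have in mind for the pc version - a rough sketch only (the "irradiance" channel name is illustrative and the spline values just stand in for whatever the colour ramp would hold):

vector pc_diffuse(string pcfile; vector pp; float radius; int maxpts)
{
    vector accum = 0;
    float wsum = 0;
    int handle = pcopen(pcfile, "P", pp, radius, maxpts);

    while (pciterate(handle))
    {
        vector irr = 0;
        float d = 0;
        pcimport(handle, "irradiance", irr);
        pcimport(handle, "point.distance", d);

        // diffusion profile: the spline stands in for the colour ramp,
        // sampled by normalised distance (0 at the shade point, 1 at the radius)
        float w = spline("linear", d / radius, 1.0, 0.6, 0.25, 0.05, 0.0);
        accum += w * irr;
        wsum += w;
    }
    pcclose(handle);

    if (wsum > 0)
        accum /= wsum;
    return accum;
}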


'Brute force' raytraced SSS?

I was wondering if there's any SSS shader out there that just does straight raytracing on the surfaces... anything out there?

Not sure, and I have no doubt that others will give you a far more emphatic answer than I can, but it seems to me that brute force raytracing is problematic in that it's very hard to get enough of the rays to actually hit the surface you're interested in sampling - there may be ways around that though -

eg offset the source of the rays along the surface normal and then use the reverse normal (ie pointing back down towards the surface) as the center of a cone (cone angle defined by the scatter radius) and fire the rays back at the surface point - that sounds like it would get you a lot more hits on the surface surrounding the shading point... might try that actually :)
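a very rough sketch of what I mean in VEX (untested - instead of building the cone explicitly it picks jittered target points on a disc of the scatter radius around the shade point and traces from the offset origin towards them, which amounts to the same cone; names and counts are just illustrative):

vector brute_force_scatter(vector pp, nrml; float time, offset_dist, scatter_radius; int nsamples)
{
    vector nn = normalize(nrml);
    vector origin = pp + nn * offset_dist;

    // a tangent frame around the normal, for the disc of target points
    vector up = (abs(nn.x) < 0.9) ? set(1.0, 0.0, 0.0) : set(0.0, 1.0, 0.0);
    vector tu = normalize(cross(nn, up));
    vector tv = cross(nn, tu);

    vector accum = 0;
    int nhits = 0;
    int i;

    for (i = 0; i < nsamples; i++)
    {
        // jittered target on a disc of the scatter radius around the shade point
        float r = scatter_radius * sqrt(nrandom());
        float phi = 6.2831853 * nrandom();
        vector target = pp + r * (cos(phi) * tu + sin(phi) * tv);
        vector dir = normalize(target - origin);

        // fire the ray back down at the surface and grab the shaded colour
        vector hitclr = 0;
        if (trace(origin, dir, time, "bias", 0.001, "Cf", hitclr))
        {
            accum += hitclr;
            nhits++;
        }
    }
    if (nhits > 0)
        accum /= nhits;
    return accum;
}

called from the surface shader as something like Cf = brute_force_scatter(P, N, Time, offset, radius, 64);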


eg offset the source of the rays along the surface normal and then use the reverse normal (ie pointing back down towards the surface) as the center of a cone (cone angle defined by the scatter radius) and fire the rays back at the surface point - that sounds like it would get you a lot more hits on the surface surrounding the shading point... might try that actually :)

- the attached jpeg shows the idea, but with a flat (relative to the sampling offset and scatter radius) surface.

on further inspection there are other problems - if the area you are trying to sample (given the above idea) has any significant amount of curvature within the scatter radius then it's going to be very hard if not impossible to work out the surface area you are actually sampling

with a significantly curved surface (again relative to offset P and scatter radius) the solid angle represented by the sampling area will no longer correspond to the actual surface area - this may not be a big problem in practice though, as long as the sampling radius and trace offsets are small relative to any surface curvature on the model

if it's larger scale translucency you are after (ie a large scattering radius) I'd recommend trying Serg's original axis_SSS shader (note that's axis, not AXYZ!) - it works pretty well for that stuff, even if it's not strictly "physically correct"

post-3889-1217169703_thumb.jpg

Edited by stelvis

Reworked the shader to follow NVidia's blur and rgb weights... Not sure if my gamma correction stuff is correct though.

Couple of questions - what are the self multiply and sqrt ops doing on the diffuse colour value? Is that the gamma correction you mention?


Couple of questions - what are the self multiply and sqrt ops doing on the diffuse colour value? Is that the gamma correction you mention?

Yep... not sure it's correct. Feel free to rip it apart and implement it properly (see OTL below) :)

My only reference is NVidia's 2007 GDC slides that I linked to a couple of pages back, kinda hard to understand without hearing the guy talk through it! :)

On that note I'm pretty sure the really important stuff, such as the blur and rgb weights, is the same as in Gems 3... I suppose they are based on actual measurements.

Re the method to do the blurring... it's the only way I know how (within a shader) :)

Anyway I cleaned it all up and made an otl... I also exposed the weights as parameters. AXIS_Image_SSS.otl.rar

I think it's important to also scatter the occlusion just the same as the irradiance (becomes obvious when the occ takes bump/displacement into account), so the shader provides for scattering both separately.

Another reason for baking these separately is that it would be really slow to bake 2K of occlusion for every frame, and according to NVidia the gamma correction should not be done to the ambient occlusion.

Also, there could be interesting ways of combining this texture space diffusion with my other diffusion shader - for example using its occlusion SSS, or modifying it to do single scattering.

I also want to see what it would look like if I plug NVidia's blur and RGB weights into it.

btw, the slides say this re distortion correction:

Accurate Distortion Correction

We can easily estimate distortion

Compute a map and inversely stretch our blurs

float3 derivu = ddx( v2f.worldCoord );
float3 derivv = ddy( v2f.worldCoord );
// 0.001 scales the values to map into [0,1]
// this depends on the model
float stretchU = 0.001 * 1.0 / length( derivu );
float stretchV = 0.001 * 1.0 / length( derivv );

Can someone translate this into mantra Vop shading language? It would be good to not have to get SOPs involved in the process.

And a render, with and without diffusion. I'm pretty sure the results would be a lot better with decent UVs (they have been linearly frozen here) and distortion correction.

post-1495-1217174794_thumb.jpg

Link to comment
Share on other sites

Re the method to do the blurring... it's the only way I know how (within a shader) :)

yah... I've been trying to find some examples of a Gaussian blur implemented in code (ie I have a hard time decoding the raw maths) without much luck so far, but translating to a vex method the gist seems to be (a rough VEX sketch follows the steps below):

for each shade point:

sample the UV map on a grid of 7*7 locations (the shade point's UV coord being the centre) with the variance value used to denote exactly how far apart in UV space the samples are (these are referred to as "Taps" in the Nvidia paper)

convolve using the appropriate 2D Gaussian kernel - ie essentially multiplying two arrays - each value in the 'samples' array is multiplied by a suitably 'Gaussian' weight given by an appropriate value in the 'kernel' array

multiply the resultant convolved RGB value for the shade point by the weight for that specific variance (ie using the RGB blur weighting values)

add to the accumulator

repeat for the other 5 variances

apparently the very first variance can just use the shade point tex value on its own as the variance is too small to actually use any of the surrounding values (as per the paper)

this may well be rather slow since it would seem to require at least 49*5 multiplications (ie around 250 mults!!) and quite a few adds per shade point... the NVidia method would speed this up a lot as it can split the job up into u and v 1D convolves done separately and it's not having to constantly recompute values it's already dealt with (ie it convolves the entire irradiance image as a whole first, which would seem to be MUCH more efficient)
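to make the above concrete, here's a rough sketch of a single tap pass in VEX (untested - "irrad_map" and the other names are illustrative, and the kernel weights are plain Gaussian falloffs rather than the exact numbers from the paper):

vector gaussian_taps(string irrad_map; float ss, tt, variance)
{
    vector blurred = 0;
    float wsum = 0;
    int i, j;

    // 7*7 grid of taps centred on the shade point, spaced by the variance in UV
    for (i = -3; i <= 3; i++)
    {
        for (j = -3; j <= 3; j++)
        {
            // 2D Gaussian weight for this tap
            float w = exp(-0.5 * (i*i + j*j));
            vector tap = texture(irrad_map, ss + i * variance, tt + j * variance);
            blurred += w * tap;
            wsum += w;
        }
    }
    // normalise so a flat irradiance map comes through unchanged
    return blurred / wsum;
}

the six blurs would then be combined along the lines of Cf = w1 * gaussian_taps(map, s, t, v1) + w2 * gaussian_taps(map, s, t, v2) + ... with w1..w6 being the per-Gaussian RGB weights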

I think it's important to also scatter the occlusion just the same as the irradiance (becomes obvious when the occ takes bump/displacement into account), so the shader provides for scattering both separately.

Another reason for baking these separately is that it would be really slow to bake 2K of occlusion for every frame, and according to NVidia the gamma correction should not be done to the ambient occlusion.

mmm - surely the whole advantage is that you are baking all the irradiance in one go? (ie including your ambient light multiplied by amb occ) that way it would be scattered the same - or are you trying to maintain the ability to have a separate AOV out for amb occlusion so that ambient light contribution can be tweaked later in post?

I think maybe you're getting confused about the comments on gamma - the chapter on that in GPU Gems is simply stating that all inputs into shaders should be linear and that amb occlusion generally IS linear (as it's computed by a renderer) whereas often photo based or painted texture maps are NOT linear (because they have been created from gamma corrected images or by users not using appropriate gamma correction)

ie the theory is make everything linear going into the renderer then gamma correct the eventual output as the FINAL step in terms of image output (ie after all comping and grading too)
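for reference, the correction being talked about is just the usual power curve (gamma typically around 2.2) - in VEX terms something like:

vector to_display(vector linear_clr; float gamma)
{
    // encode a linear value for display on a monitor with the given gamma
    return pow(linear_clr, 1.0 / gamma);
}

vector to_linear(vector encoded_clr; float gamma)
{
    // undo an existing gamma correction (eg on a painted texture map)
    return pow(encoded_clr, gamma);
}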

the whole gamma correction thing is fraught with problems though - ie in a lot of the image creation stages it's very NON obvious what gamma correction is going on, if any (Adobe's autoinstalled gamma correction stuff just makes this situation worse IMO)

the output stage is easier - just make sure that any image reviewing is done through an appropriate correcting LUT

I think trying to gamma correct inside a shader though is a bad idea. If it's suggested in the Nvidia ref then that's probably because the Cg shader they are creating IS the final output stage :)

btw, the slides say this re distortion correction:

Can someone translate this into mantra Vop shading language? It would be good to not have to get SOPs involved in the process.

As far as I can fathom, the texture map for distortion encodes a measure of the rate of change (ie derivative) in P values relative to UV space - ie you'd need to render baked P to the UV map, then find a way of filtering that so that the final distortion value is given by looking at the difference in P between each pixel and its immediate neighbours, then comparing that to some mean value that represents 'no distortion' (ie there should be some notional 'normal' rate of change that represents an undistorted relationship between UV space and actual surface area)
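alternatively a more direct translation might be possible right in the surface shader using Du()/Dv() - something along these lines (an untested sketch; it assumes the shader is being rendered as a UV bake so the parametric derivatives line up with the texture UVs, and the 0.001 scale is model dependent just like the slides say):

surface uv_stretch(float scale = 0.001)
{
    // rate of change of P with respect to the parametric u/v directions
    vector derivu = Du(P);
    vector derivv = Dv(P);

    // inverse of the rate of change: large where the UVs are compressed,
    // small where they are stretched
    float stretchU = scale / max(length(derivu), 1e-6);
    float stretchV = scale / max(length(derivv), 1e-6);

    // bake the two stretch values out as a map
    Cf = set(stretchU, stretchV, 0.0);
}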


for each shade point:

sample the UV map on a grid of 7*7 locations (the shade point's UV coord being the centre) with the variance value used to denote exactly how far apart in UV space the samples are (these are referred to as "Taps" in the Nvidia paper)

convolve using the appropriate 2D Gaussian kernel - ie essentially multiplying two arrays - each value in the 'samples' array is multiplied by a suitably 'Gaussian' weight given by an appropriate value in the 'kernel' array

multiply the resultant convolved RGB value for the shade point by the weight for that specific variance (ie using the RGB blur weighting values)

add to the accumulator

actually, reading this again, there is one main thing I'm not sure is right

ie what the relationship should be between the size of the convolution kernel (ie how many samples are needed) and the variance (and does variance equate to the standard deviation, which is the term used in mathematical refs on Gaussians?)

part of me wants to believe that it should be a direct one to one ratio - ie if the 'variance' covers a distance of 3 pixels in UV space then the kernel should be 7*7 (ie the centre pixel plus 3 pixels either side) and if it covered 6 pixels then it should be 13*13 - obviously this is easy to grasp but would be rather unfortunate in terms of evaluation if true

the paper does mention using the previous Gaussian as the basis for the next higher variance, and since two smaller Gaussians add up to a bigger one, this may be how they manage to consistently keep to a 7 'tap' kernel even though the larger variances would suggest they need a much bigger kernel than that
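(for reference, the identity behind that is the standard one for Gaussians - convolving two of them gives another Gaussian whose variance is the sum of the two:

\[ G_{\sigma_1} * G_{\sigma_2} = G_{\sqrt{\sigma_1^2 + \sigma_2^2}} \]

so repeatedly blurring the previous result with a modest kernel builds up the wider blurs without ever needing one huge kernel)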

this doesn't bode well for doing it inside a shader though, as unlike the NVidia paper we can't 'keep' our previous results in the same manner by rendering to off-screen targets (and I'm assuming trying to keep values in an array would just be stupid due to the extreme number of elements)

ie pre-computing the Gaussians would seem to make much more sense than trying to do it inside a shader, no matter how attractive that idea is in terms of being 'neat'... unless I have everything seriously wrong of course (very plausible)

however all this has made me think that it would be really COOL if mantra could use images as temporary repositories for computation, in much the same way as modern realtime rendering does...


Re this gamma shenanigans, I thought it might be a bad idea too, and yes I am confused by it ;)

The whole thing seems to be more of a kludge for the inappropriateness of baking a Lambert shader to begin with. In that sense sqrt'ing the shading helps, as skin shading is generally quite flat looking, though not as flat as the renders I just posted. The surface texture is compensated for this by being multiplied with itself before the baking process, so that in the end the color remains the same.

I'd rather use a better shading model. Unfortunately, any shaders you want to bake must have an "ensure faces point forward" toggle, otherwise they don't bake properly. Any shading where normals point away from the camera (the actual scene camera!) will be fecked.

mmm - surely the whole advantage is that you are baking all the irradiance in one go? (ie including your ambient light multiplied by amb occ) that way it would be scattered the same -

They are scattered the same way, just not at the same time. The advantage of having them separate is that you don't have to render the occ at 2K.

I don't think baking everything together is needed for purposes of correct diffusion. In fact I reckon baking and diffusing each light separately is the most correct, since in reality light from multiple sources won't affect each other in any way. Not that I would be baking each light separately :)

or are you trying to maintain the ability to have a separate AOV out for amb occlusion so that ambient light contribution can be tweaked later in post?

And yeah, also because amb occ would still be available separately in comp :)

As far as I can fathom, the texture map for distortion encodes a measure of the rate of change (ie derivative) in P values relative to UV space - ie you'd need to render baked P to the UV map, then find a way of filtering that so that the final distortion value is given by looking at the difference in P between each pixel and its immediate neighbours, then comparing that to some mean value that represents 'no distortion' (ie there should be some notional 'normal' rate of change that represents an undistorted relationship between UV space and actual surface area)

I made an attempt before to do this in SOPs, by measuring the perimeter of each polygon against its perimeter in UV. It's in one of the scenes I uploaded here. Would be cool if it could be done in the shader though... just to save another bit of pre-processing.
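the SOP measurement amounts to something like this in VEX (a sketch written as a modern primitive wrangle; it assumes uv is a point attribute and writes the ratio to a "stretch" primitive attribute - names are illustrative):

int pts[] = primpoints(0, @primnum);
int n = len(pts);
float perim3d = 0.0;
float perimuv = 0.0;

for (int i = 0; i < n; i++)
{
    int a = pts[i];
    int b = pts[(i + 1) % n];

    vector pa = point(0, "P", a);
    vector pb = point(0, "P", b);
    vector uva = point(0, "uv", a);
    vector uvb = point(0, "uv", b);

    perim3d += distance(pa, pb);
    perimuv += distance(uva, uvb);
}

// ratio of world space perimeter to UV perimeter: 1.0 means no distortion
f@stretch = perimuv > 0.0 ? perim3d / perimuv : 1.0;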

S


Re this gamma shenanigans, I thought it might be a bad idea too, and yes I am confused by it ;) ... The whole thing seems to be more of a kludge for the inappropriateness of baking a Lambert shader to begin with.

there's nothing wrong with the lambert in terms of gamma response - if it's too simplistic that's another matter :) (this is mostly why I tend to use Oren-Nayar - ie you can get a more matte look)

the issue is that we are used to seeing linear images gamma corrected - if you don't gamma correct they look too dark on a normal monitor

eg an uncorrected lambert tends to look like it falls off very smoothly from light to dark when in fact, if it's gamma corrected, it doesn't really (and as such looks much more flat)... unfortunately most of us - myself included till I read that chapter in GPU Gems 3 - assume that the incorrect version is actually what we should be seeing

the issue is when and where to gamma correct - if you are inputting some kind of image into a shader it should be 'un-gamma corrected' - ie in a linear colour space (which it probably won't be given that it was made using a monitor that is very nonlinear) - rendered output (eg amb occ pass) IS already linear unless someone has gamma corrected it already

to view linear CG rendered output properly it should be gamma corrected first (or viewed through a LUT which does the same thing just more accurately) or else it will look 'wrong'

the problem is that the gamma correction applied needs to relate to the gamma of the viewing hardware - gamma correcting for your monitor might not look right on mine if it has a different gamma curve (we all know that issue!)

thus baking it into an image is always a bit of a kludge really - ideally we should all be using linear images all the time and just viewing them corrected on the fly through gamma correcting filters applied by our software

this is in fact fairly easy to do in comp (ie Fusion et al will very easily let you apply a gamma correction curve or LUT to any image viewing window) and even in Houdini or MPlay (eg set the MPlay default gamma correction to 2.2 rather than 1)

Photoshop attempts to do the same thing for viewing images on the creation side, but the issue is massively more complex because you never really know whether the image you are starting with is linear or gamma corrected already (in which case additional gamma correction by Photoshop is 'wrong'), unless it comes with an embedded colour profile (eg sRGB) - which is why Photoshop is always complaining about colour profiles - even worse, Adobe Gamma tries to install itself at an OS level, which means you're never sure what kind of gamma correction is going on even if you're not in Photoshop... sigh

it's the same problem we (ie at Axis) have with compiling QTs - QT always seems to try and gamma correct the input material (certainly it tries to gamma correct stuff that's playing), but since we tend to view our output without gamma correction (ie we grade to a non corrected ideal) it always looks 'wrong' gamma corrected (which generally lifts the shadow detail)

the reality is that since the vast majority of people don't run proper gamma correction (including ourselves at Axis, which is bad) the tendency is to alter the gamma on images permanently so they look good uncorrected - but this still shouldn't be done until the final stage. given that everything we do gets comped and graded, I would say that trying to gamma correct in the shader is the wrong place to do it

the whole thing is indeed a) a mess and b) very hard to sort out properly and thoroughly (and even if you manage to get the studio all 'gamma corrected' properly then many images you look at from the outside world will look incorrect, as they are effectively being corrected twice)

