
The SSS Diaries



I have enjoyed myself quite thoroughly tinkering with your SSS solution. Thank you for making your OTLs available to us. I have learned a great deal in the process of getting a successful SSS render.

Thank you! I'm glad you found it useful. :)

Regarding your other two questions...

Internal Occluders:

There are a bunch of ways to do this:

1. Probably the cheapest would be to render the internal blockers as a mask, and then blur and subtract in COPs. It won't look as nice and you'll likely lose some sense of depth, but it's certainly cheap.

2. A little more expensive would be to render the occluders with the same settings as the main SSS object (in a separate pass), then subtract it from the main layer.

3. The most expensive method would be to actually calculate the occlusion/shadow in the shader. This needs a couple of small changes to the code, which I've included in the attachment. Both the single and multi SSS VOPs now have an "Enable Blockers" toggle, and a list of blockers to consider for occlusion.

Texture Maps:

Your best bet with textures is simply to scatter white light and then tint it by the texture color. Anything else is a lot more expensive and doesn't really justify the effort (especially with multiple scattering -- it *might* be a little more realistic with single scattering, but not with multi).
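In shader terms the tinting is literally just a multiply after the scatter. A rough sketch (not the OTL code -- here sssWhite()/diffuse() only stands in for whatever the Multi SSS VOP would return with a white Rd, so the snippet compiles on its own):

vector sssWhite(vector n) {
   return diffuse(n);   // placeholder for the point-cloud multi-scatter result
}

surface tintedSSS(string basemap = "")
{
   vector n   = normalize(frontface(N, I));
   vector sss = sssWhite(n);            // scatter "white light" once (the expensive part)

   vector tint = 1;                     // default: leave it untinted
   if (basemap != "")
      tint = texture(basemap, s, t);    // texture colour at this shade point

   Cf = sss * tint;                     // tint the scattered result afterwards
}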

I'm including a test with both textures and occluders (plus the slightly modified version of the OTLs), here's a sample output:

post-148-1185403686_thumb.jpg

post-148-1185403742_thumb.jpg

SSSoccluders.tar.gz

Cheers!


Hey Mario and other subsurface scatterers,

We recently wandered into a trap where we attempted to plug a varying value into the subsurface colour input (Rd) of the Axyz SSS Multi VOP. It looks tempting to do this since right there on the VOP is an input jack; however, this input is only exposed so you can feed a constant value into it, like one provided by a Parameter VOP.

Why you can't do this: Mario's code in the VOP accumulates all the subsurface illumination for all light measured in the point-cloud within the defined scatter radius and returns the result... there is no way to vary the sss color (Rd) within the loop that gathers the light. So if you attempt to vary the color you'll see splotchy artifacts, especially if you're varying the Rd quickly wrt the scatter radius.

So beware if you're trying to plug something like an occlusion value into Rd. The "correct" way to do occlusion is in the light shader (which is not easy/fun to limit and control the scope of), or else you modify/hardcode Mario's code to do it inside the pciterate loop, or you sigh and post-multiply the occlusion with the resultant SSS value and live with the subtle inaccuracy of what you're doing (which is what we do all the time, every day, in production, so this is OK to me; however, we do strive for As Good As Humanly Possible whenever we can).
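For what it's worth, the post-multiply workaround boils down to something like this (untested sketch, not Mario's VOP code -- diffuse() only stands in for the SSS result, and occlusion() is used in its basic two-argument form):

surface sssTimesOcclusion()
{
   vector n   = normalize(frontface(N, I));

   // stand-in for the value the Multi SSS VOP would return
   vector sss = diffuse(n);

   // occlusion() returns the occluded fraction as seen from P along n
   vector occ = 1 - occlusion(P, n);

   // post-multiply: the occlusion never enters the gather loop, so it's
   // subtly wrong, but it's cheap, stable, and easy to control
   Cf = sss * occ;
}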

There is no way to really guard against this potential problem right now; VOPs (and VEX) have no way of forbidding varying values (enforcing uniform values), either in the UI or in the VEX language. The VOP label itself might stand to sport a warning like "(uniform!)" or something, but that's it.

Cheers,

Jason


Hey Mario and other subsurface scatterers,

We recently wandered into a trap where we attempted to plug a varying value into the subsurface colour input (Rd) of the Axyz SSS Multi VOP. It looks tempting to do this since right there on the VOP is an input jack; however, this input is only exposed so you can feed a constant value into it, like one provided by a Parameter VOP.

Thanks for the tip, Jason.

I gotta admit I'm a little confused as to exactly what you mean (I'm working on it ;)), but the truth is I've never used the Rd parameter with a varying input (like a texture, say), so it's entirely possible I simply haven't come up against the problem yet.

Looked at what the code is doing.... bounced it around with Wolfwood some... and I *think* I know what you might be talking about. The most obvious source of a potential problem is that Rd is attached to the *surface* (shade-points), whereas the SSS calculation is done for the point cloud (pc-points) -- so there's the first disconnect right there: Rd would be varying on the surface but not on the pcpoints, so pcunshaded() would lock on the Rd value of the first caller in the shading grid (whichever shade point in the grid happens to go first, that is... which in SIMD is likely impossible to predict). So... yeah... a little madness going on there fersure...

What the code *should* be grabbing for albedo is a pc-bound attribute, not a shader parameter (the shader parameter then only used as a uniform/constant modifier).
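Something along these lines, I mean (just a sketch of a straight pciterate() gather, not the deferred pcunshaded() scheme the OTLs actually use -- the "irradiance" and "Rd" channel names are only placeholders):

surface sssPCAlbedo(string pcmap  = "cloud.pc";
                    float  sdist  = 0.5;       // scatter radius
                    int    maxpts = 50;
                    vector Rd     = {1,1,1})   // now just a uniform tint
{
   vector sum  = 0;
   float  wsum = 0;

   int handle = pcopen(pcmap, "P", P, sdist, maxpts);
   while (pciterate(handle)) {
      vector irr = 0, pcRd = 1;
      float  d   = 0;
      pcimport(handle, "irradiance", irr);    // baked illumination on the cloud
      pcimport(handle, "Rd", pcRd);           // per-point albedo (the fix)
      pcimport(handle, "point.distance", d);

      float w = max(0.0, 1 - d/sdist);        // crude falloff, for illustration only
      sum  += w * irr * pcRd;
      wsum += w;
   }
   pcclose(handle);

   if (wsum > 0) sum /= wsum;
   Cf = Rd * sum;                             // shader parameter as uniform modifier
}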

/me goes off to do some testing...


The most obvious source of a potential problem is that Rd is attached to the *surface* (shade-points), whereas the SSS calculation is done for the point cloud (pc-points) -- so there's the first disconnect right there: Rd would be varying on the surface but not on the pcpoints, so pcunshaded() would lock on the Rd value of the first caller in the shading grid (whichever shade point in the grid happens to go first, that is... which in SIMD is likely impossible to predict). So... yeah... a little madness going on there fersure...

Yeah, that's exactly it. When I got suspicious of some artifacts we were having with this stuff in Mantra 9, Andrew C replied with a brief-but-incredibly-accurate summary of the issue, and it still took me 10 minutes of grokking to work out what he meant.

What the code *should* be grabbing for albedo is a pc-bound attribute, not a shader parameter (the shader parameter then only used as a uniform/constant modifier).

This is the root of what was meant by allowing the light shader to perform ambient occlusion effects and such. Because the light shader (and shadow shader) does get called within the pciterate() loop, there is already a way to do this. However, as you suggest, being able to specify Rd on the cloud points would allow one to vary the scatter color to greater effect.

I was wondering too how one would effectively vary the scatter radius, and I thought there could be a constant parameter on the VOP to specify a "Constant Scatter Distance", which would be the fixed radius used if the user decided to vary the radius with another parameter dubbed "Optional Varying Scatter Distance". In other words, I don't think the pcopen() VEX suite can even cope with a varying radius, so only a constant value should be fed into the VOP parameters (i.e., same rules as above); but in this case it could possibly support a varying scatter radius field for the purposes of the math itself. You just run the risk of running inefficiently (if the varying radius is much smaller than the constant radius) or developing artifacts (if the varying radius is larger than the constant radius and sampling issues appear).

What do you think?

Jason


...

In other words, I don't think that the pcopen() VEX suite can even cope with a varying radius

...

What do you think?

I think that pcopen() does accept a varying radius. It also accepts a varying number of points :-)

However, I also think that it would be very hard to get continuous shading if you were to have a varying radius or number of points...

Still, in SOP/POP contexts it might be more useful.
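As a sanity check, something like this compiles and runs with a per-shade-point radius -- the only question, as I said, is the continuity of the result (the "irradiance" channel and the texture-driven radius are just placeholders for illustration):

surface varyingRadiusTest(string pcmap  = "cloud.pc";
                          float  maxrad = 1.0;
                          string radmap = "")      // optional map driving the radius
{
   float rad = maxrad;
   if (radmap != "") {
      vector tex = texture(radmap, s, t);
      rad *= luminance(tex);                       // radius now varies per shade point
   }

   vector sum  = 0;
   int    npts = 0;
   int handle = pcopen(pcmap, "P", P, rad, 50);    // varying radius fed straight in
   while (pciterate(handle)) {
      vector irr = 0;
      pcimport(handle, "irradiance", irr);         // placeholder channel name
      sum += irr;
      npts++;
   }
   pcclose(handle);

   Cf = 0;
   if (npts > 0)
      Cf = sum / npts;
}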


I think that pcopen() does accept a varying radius. It also accepts a varying number of points :-)

However, I also think that it would be very hard to get continuous shading if you were to have a varying radius or number of points...

Still, in SOP/POP contexts it might be more useful.

Really? How nice! :) Thank you for the clarification. Onward then!


Yeah, the scatter radius could vary on the pc-points, and we could treat the shader parameter (ScatterDistance or whatever it's called) as a global scaling factor. In fact, if the shader accepted pc-bound Rd, and the calculation was set to do RGB separately, then the scatter radius would vary implicitly.

We've also added some other pc-bound attributes over here that we've found useful over time (e.g: an ID that can be used to separate the cloud into groups, the ability to add unique prefixes to some of the runtime pc attributes when clouds represent multiple objects, etc.). So adding Rd and scatter distance and whatever else people think necessary is really not a big problem -- just needs re-coding since you can't get at it from the outside.

Perhaps going forward I should first have a look at the H9 implementations (I only noticed the SingleSSS VOP thus far) and extend those, then post them to the exchange as "proposals", which then SESI can choose to include, ignore, or whatever.

Sound good?

<edit> I just had a thought as I posted this: we should be able to override *all* shader parameters with pc attributes... as you would shader parameters with geo-bound attributes. Would make for a more intuitive mechanism since everything else works that way... (this is outside of any pc-specific attributes, that is). Yes? </edit>
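The override itself would be trivial -- something like this hypothetical helper (not in the current OTLs): if the channel exists on the cloud it wins, otherwise the shader parameter is used exactly as before.

vector pcoverride(int handle; string channel; vector paramval)
{
   vector v;
   if (pcimport(handle, channel, v))
      return v;            // the cloud carries the attribute: it wins
   return paramval;        // otherwise behave exactly as before
}

// e.g. inside the gather loop:  vector rd = pcoverride(handle, "Rd", Rd);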


Perhaps going forward I should first have a look at the H9 implementations (I only noticed the SingleSSS VOP thus far) and extend those, then post them to the exchange as "proposals", which then SESI can choose to include, ignore, or whatever.

Yeah; the H9 version is subtly different, isn't it? The Light Mask fields now need to have "*" in them instead of blanks, and so on. I'd say forge ahead with H9; only critical backports might make it back to 8.x if it's needed desperately.

Yes, and allowing pc overrides wherever logical sounds like a good idea. I wonder if this shader (or a versioned instance of it) can make it onto the new Downloadable shelf thing? Or is SESI going to do the same kind of thing with the Material Palette and have downloadable materials?

EDIT: In 9.0, a much better idea for the Light Mask thing is to put this in the field by default: object:lightscope - so that it follows the object's light mask rather than overriding it.



Hi

I have been trying to use the sss shader with a new model, but I couldn't manage to create accurate point clouds, because it is flickering again :blink: like the one I posted before, if you remember.

I thought the problem was the number of points, but when I tried to increase it, that didn't work this time. Maybe I need to find a way to keep the point positions attached (not sure).

flicker2.mov

I am also posting the .hip file that includes the model. If you have time, check it out and give me some tips on getting a successful result with this type of model.

flicker2.zip

thanks in advance

Selcuk Ergen


I've just been a lurker to this thread so I've probably got this all wrong. Is the source of the flickering from changing point positions in the point cloud? If it is, and if the points do not need to be on the surface, then you could try generating point cloud files for each frame and then merge them all into a single point cloud file. Then use copies of this merged one for each frame. That way, you'll have the same point cloud positions every frame. Those are big if's though.


I've just been a lurker to this thread so I've probably got this all wrong. Is the source of the flickering from changing point positions in the point cloud? If it is, and if the points do not need to be on the surface, then you could try generating point cloud files for each frame and then merge them all into a single point cloud file. Then use copies of this merged one for each frame. That way, you'll have the same point cloud positions every frame. Those are big if's though.

Actually, this is the first time I have decided to use sss in a project, so I must say I am a lurker to the thread as well. As far as I know, the more you keep the point positions the same, the more successful (non-flickering) your sequence renders will be.

For instance, I got rid of this artifact by increasing the number of points on a different model, but it didn't work with this one.

Your suggestion actually makes sense, but my shot is approximately 1250 frames with a growing model, so I want to know which way is best.

But I will definitely keep your suggestion in mind.

cheers


I have been trying to use the sss shader with a new model, but I couldn't manage to create accurate point clouds, because it is flickering again :blink: like the one I posted before, if you remember.

Hey Selcuk,

Wow. I know it wasn't intentional, but I think you've managed to create one of the most sss-unfriendly setups ever! :)

The key things to remember when creating clouds for sss are that the density (points per area) stays roughly the same, and that the distribution doesn't vary wildly from frame to frame (i.e: that points don't pop in and out of existence all over the place as the thing animates).

With that in mind, let me try to break down your specific case into small steps:

First, you've got to try to get a single one of those strands (or "leaves" or whatever they are) to render without flicker. You can worry about getting the whole plant to behave later.

If you just scatter points with the scatter SOP, you'll notice that both the density and the distribution change dramatically from frame to frame as the thing grows. These two issues, combined, are responsible for 99% of the flicker.

The good news is that your strands are parametric (NURBs), and so there is an implicit 2D space (uv) where you could distribute the points, and then map them back to the surface (with the CreepSOP). This lets you work with the points on a plane, making it much easier to control both the density and the distribution.

As the strand grows, its surface area increases. Therefore, in order to keep the density constant, the number of points in uv-space should increase as well. One quick solution I used was to distribute a static number of points over a uv-area roughly proportional to the surface area of the strand in its final position. Then, as the strand grows, the space is shrunk (scaled) so that more and more points occupy the [0,1] range (in uv-space), which is the range that the CreepSOP will map to the surface. This results in a smooth increase of points throughout the animation and avoids any crazy changes in the distribution -- i.e: it satisfies what the sss cloud needs. The result is a flicker-free growing strand. Here's a visual representation of what's going on:

post-148-1187829670_thumb.jpg (click to see the animation)
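If you wanted to do the shrinking with a wrangle-style VEX snippet instead of the SOP chain I used, the gist is something like this (the "growth" and "uvsize" channels are made up for illustration; the points are assumed to live in the XY plane as uv coordinates, and the Creep SOP only maps whatever ends up inside the unit square):

float g      = max(chf("growth"), 1e-4);  // 0..1, animated along with the strand
float uvsize = chf("uvsize");             // side of the oversized scatter square

// shrink factor: at g = 1 the whole square fits inside [0,1]^2
float f = 1.0 / (uvsize * sqrt(g));

vector uv = @P * f;                       // shrink the uv space around the origin
if (uv.x < 0 || uv.x > 1 || uv.y < 0 || uv.y > 1)
   removepoint(0, @ptnum);                // not yet part of the grown strand
else
   @P = uv;                               // this is what the Creep SOP maps onto the surface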

OK. Now for the plant itself.

Given that we've taken pains to ensure that each strand is flicker-free, then, if you suddenly see flicker when you render the multiple copies of the strand that make up the plant, it's reasonable to assume that the flicker is coming from some source that is *NOT* either the density or the distribution of the pc points. Translation: don't go and start mucking about with any of the stuff we just did, because the problem is coming from somewhere else, and adding more points to the soup will solve nothing.

Well, the fact is that the flicker *will* magically appear again when you render the multiple strands (as opposed to a single one), and the reason has to do with the way your geometry (for the plant, not each strand) is built: it's a crazy bunch of interpenetrating geometry (and therefore, interpenetrating cloud points). This is not ideal for sss (heck, it's even unhealthy for plain-vanilla-diffuse, never mind sss). Nevertheless, there are solutions, albeit not without some extra effort.

The flicker is caused by all those internal points, which are close to the shading surface at the beginning of the animation and get farther away as the plant grows. The shader filters over a number of neighbouring points and alternately picks up and drops some of these points as the geometry changes (meaning that these internal points can, depending on the topology, become part of the sss solution, when you'd rather they not be considered at all since they're not part of the surface). To solve this, I generated an SDF of the surface (IsoOffset SOP), and used it to group and delete these unwanted internal points.

post-148-1187829883_thumb.jpg (Click to see the animation)
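In wrangle-style VEX the deletion step is tiny (sketch only -- in the hip I did it with IsoOffset plus group/delete SOPs; here the plant's SDF volume is assumed to be wired into the second input, and "interior_tolerance" is a made-up channel):

float tol = chf("interior_tolerance");   // how far inside still counts as "surface"

// signed distance from the plant's SDF: negative inside the surface, positive outside
float sdf = volumesample(1, 0, @P);

if (sdf < -tol)
   removepoint(0, @ptnum);               // interior point: drop it from the sss cloud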

It would be better if you trimmed the geometry as well (i.e: removed internal polys in a similar way to how I removed pc points), because then the single scattering wouldn't produce the subtly faceted results you see there -- these come from the single-scattering algorithm tracing into these internal walls -- and you'd get an even smoother result.

Note that the single scattering is raytraced (I don't feed it a point cloud like you did). This gives better/smoother results and is much easier to set up, so in general, I'd recommend you always leave the single scattering without a pointcloud. For some types of geometry (notably NURBs) this may produce some artifacts, so you'll probably want to render polys instead (that's why I ran your plant through a convert SOP before rendering).

Anyhoo... hopefully this gives you some idea of the characteristics to strive for when building an sss cloud. And also please note that a lot of the time it's not necessarily the shader that is at fault, but the stuff you feed it :)

Cheers!

flicker2.tar.gz

P.S: I didn't include the cloud in the attachment because it would be too large. Just re-generate them from the ROP in the Plant object. It will take some time to generate because it's creating an SDF each frame.


Hey Mario,

First of all, I don't know how to thank you. It's very kind of you to solve my problem in this particular example.

The thing is, as I mentioned before, this is my first attempt at sss and I am still trying to understand the key points and the procedure (with your help).


My disadvantage in my current student project is the lack of time, and this geometry is part of the tree that I am working on. Unfortunately, I couldn't find the time to create the perfect geometry for this piece. Accordingly, I thought that point clouds and sss would handle this sort of intersecting geometry, but it is obvious that the model was a bit problematic.

On the other hand, I wouldn't have figured out how to attach the points to this geometry that precisely in such a short time. So "respect" one more time :)

Finally, I now have a better understanding of point clouds and sss, thanks to your explanations.

Thanks again for your time, all the information and the sss shader.

Cheers :thumbsup:

Selcuk Ergen


Can someone show me how to write out the point cloud at displacement level instead of SOP level?

You could displace the scattered points afterwards. The question is whether you can displace them during shader access, or whether we need an extra step to save them out as displaced geometry...



Hi,

I was wondering if it is possible to get this kind of effect:

http://static.highend3d.com/tutorialimages/135/blocker.jpg

I used the sss multi scatter vop and rendered out several versions with different distance settings. Then I comped them back together, screening them over each other.

What I am really looking for is how to get the occluded object inside of the geometry.

post-1666-1195434100_thumb.jpg

sss_test_03.hip


Serg: There is a shader for C4D called chanlum that seems to do pretty much what I'm after: http://www.happyship.com/lab/chanlum/docum...tion/index.html

Wow. I have never visited that site, but his "random-samples-in-sphere" approach is exactly the method I came up with quite a few years ago (before Jensen and the BSSRDF craze) to do scattering in snow. The only difference was that my sample positions were not truly random (and therefore avoided the noise problem even at low sampling rates). Spooky.

Anyway... the obvious problem with this is that it ignores the surface's topology (within the sphere's volume). So again, only useful for very small scattering distances. But a piece of cake to implement. Really. Give it a try -- every P gets the average irradiance ("blurred diffuse") within a spherical volume. But note that this is different than every P getting the average irradiance over a chunk of the surrounding *surface*. And so you can see that the difference between those two would only be negligible for the cases with relatively small scattering distances ("blur radius").

Arghh... it's xmas season and my turn to do snow has now arrived: a 15s continuous shot of a snowy photoreal landscape. At the moment I am using single sss with phase zero in order to catch the fine detail from my displacement shader; the sheer size of the scene puts pclouds and the "baked diffuse & blur" technique out of the question, although I will use multi sss for large features like icy overhangs etc... But single sss is not giving me the results I need overall, since it casts sharp shadows and its illumination doesn't quite wrap the surface as much as regular diffuse, i.e. it still seems quite sensitive to pov even with 0 phase, and is very bright at glancing angles.

I was wondering if you could elaborate a bit on the blurry diffuse technique you used for your own snowy job, specifically how I can (in vops) gather/average the illumination of neighboring pixels. I've stared at my vopnet for hours now and still can't get my head around how you'd do that.

Much appreciated,

Serg


I was wondering if you could elaborate a bit on the blurry diffuse technique you used for your own snowy job, specifically how I can (in vops) gather/average the illumination of neighboring pixels. I've stared at my vopnet for hours now and still can't get my head around how you'd do that.

Keep in mind this is a complete hack from many years ago... :ph34r:

The idea is to distribute a number of samples at the surface of (or in the volume of) a sphere of some user-specified radius, located at, above, or below the current shading point. Each sample's value is simply the diffuse calculation at that point, weighted by some function. This weighting function can depend on angles, distance, whatever. You can make it as complex as you want if, for example, there is particular information you can pass your shader that it could use to make the weighting smarter. In general though, you can start with a cosine-angle weight, and then go from there.

There are many variations on this theme (for example, you could determine the radius of the sphere based on the curvature of the surface... stuff like that), but that's the gist of it. Here's a very quick example which uses the same sampling directions for each shade point -- this avoids noisy results, but has the disadvantage that, with a very low sample count, the illumination will be biased. However, this shouldn't be a problem with 50 or more samples (which is still quite cheap, since you're just computing illumination, not tracing).

#include <math.h>     // for M_PI and M_PI_2
#include <shading.h>  // for LIGHT_DIFFUSE

// same as diffuse() but takes a position as well
vector lambert(vector p, n) {
   vector C = 0;
   illuminance(p, n, M_PI_2, LIGHT_DIFFUSE) {
      vector l = normalize(L);
      shadow(Cl, p, l);
      C += Cl * dot(l, n);
   }
   return C;
}

// this generates uniformly-distributed points on the surface of a sphere
vector rvSpherical(float u0, u1, r) {
   float phi   = u0*M_PI;
   float theta = u1*M_PI*2;

   // Unit sphere with Z-orientation
   float sph = sin(phi);
   return r * set(sph*cos(theta), sph*sin(theta), cos(phi));
}

// this returns the sign of the argument
float sign(float v) {
   return v < 0 ? -1 : 1;
}


// test the algorithm
surface testFakeSSS (
      float polarity = 1;     // <0 = below surf, 0 = at surf, >0 = above surf
      float rad      = 0.1;   // radius of implicit sphere
      int   samps    = 50;    // number of samples
      int   invol    = 0;     // samples on sphere surface or in volume?
      float Kdir     = 0.1;   // amplitude of direct illumination
      float Kind     = 2;     // amplitude of indirect illumination
   )
{
   vector n = normalize(frontface(N, I));
   vector o = P;
   if (polarity != 0) o += n*rad*sign(polarity);

   // Cdir <- direct contribution
   vector Cdir = diffuse(n);

   // Cind <- indirect contribution
   vector Cind = 0;
   int i = 0, j = 0;
   float w, wsum = 0;
   for (i = 0; i < samps; i++) {
      // generate a repeatable random point on surf or in volume of sphere
      float rmod = invol ? pow(random(j+2), 1./3.) : 1.0;
      vector sn = rvSpherical(random(j), random(j+1), rmod);
      vector sp = o + sn*rad;
      sn = normalize(sn);
      // simple weighing function based on cos-angle
      w = dot(n, sn)*.5 + .5;
      Cind += lambert(sp, sn)*w;
      j += 3;
      wsum += w;
   }
   if (wsum > 0) Cind /= wsum;

   // out <- direct + indirect
   Cf = Kdir*Cdir + Kind*Cind;
}

Very bare-bones, but hopefully enough to get you started. Here's the side- and back-lit result on a sphere:

post-148-1197430581_thumb.jpg post-148-1197430592_thumb.jpg

Cheers!
