
The SSS Diaries



Might be very wrong here, but given the speed issues, I wouldn't be too surprised if the "# of points to filter" points aren't the ones that are most relevant (i.e. the closest).

The reason might be the following: if you calculated, for each point, its distance to all other points, your CPU would stay busy for quite some time, and you'd also need to sort all those distances (O(n log n) to O(n^2) with something like quicksort, per point). That could make point-cloud shaders even slower than conventional ones.

--> Likely "50 points to sample" means that 50 more or less random points are chosen for sampling. This would explain to some degree why we get those splotches. With a smart data structure (which surely exists) the chosen points won't be all that random, yet they are probably not the truly closest ones you'd expect within this imaginary radius/volume sphere.

Jens


I guess you may well be right. It would certainly be nice to know for sure. I thought the idea of the tbf format was that you could efficiently find the closest points; it must do more than just randomly pick points though, 'cos if you randomly picked 50 points out of about 3000 I reckon you'd be lucky to get one hit that was in range.


I guess you may well be right. It would certainly be nice to know for sure. I thought the idea of the tbf format was that you could efficiently find the closest points; it must do more than just randomly pick points though, 'cos if you randomly picked 50 points out of about 3000 I reckon you'd be lucky to get one hit that was in range.


Would like some clarification on this as well (+ hints, or even better, technical papers :P). This was really just a simple guess of mine; however, I spoke with a prof today who does graph theory, and he agreed that the data structure itself is likely responsible for finding the closest point, but that if you want something like the 50 closest points it's going to be a bit 'fuzzy'.

Anyhow, let's keep our fingers crossed for a nice update of Mario's SSS diary today, with colorful pictures :D

Jens


Anyhow, let's keep our fingers crossed for a nice update of Mario's SSS diary today, with colorful pictures :D


Tried this, tried that... made a couple of bad choices (and a few good ones too; it's not all grim).... got interrupted a few times.... gotta rework one of the main chunks.... it's close, but it's going to take a little longer :(

In the meantime, I may have a question for some of the math-heads here (Jens? :) ). It has to do with deriving a random variate (PDF -> CDF -> inverse CDF) of the smooth() function (3x^2 - 2x^3, x in [0,1]). It would be a nice thing to have so I can do importance sampling on the outgoing segment and reduce the number of samples needed (converge on a solution faster, and render faster).

I'll formulate the question in more detail when I get home.

[EDIT] No worries; I think I figured it out... but it ain't pretty :cry2: [/EDIT]
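
For what it's worth, here's a rough sketch of that derivation, under the assumption that the density we actually want to sample is proportional to the falloff 1 - (3x^2 - 2x^3) on [0,1] (if it's the raw smooth() curve instead, the same steps apply with different coefficients):

normalize:  the integral of (1 - 3x^2 + 2x^3) over [0,1] is 1/2, so pdf(x) = 2(1 - 3x^2 + 2x^3)
integrate:  cdf(x) = 2x - 2x^3 + x^4
invert:     given a uniform variate u in [0,1], solve 2x - 2x^3 + x^4 = u for x

That last step is a quartic in x, which is presumably why it "ain't pretty": a closed form exists, but a few Newton or bisection steps on the cdf work just as well in practice (a small code sketch of that appears further down, next to the importance-sampling notes).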


OK. I finally have a model that is somewhat stable (as far as I can tell), with as few controlling parameters as possible, and reasonably fast. But this implementation is barely out of the design stage, so I'm sure there are a few bugs lurking around in there.

Lots to cover here, so this will be one heck of a long post... apologies.

I thought I'd start with some motivational images (off the web). These show a red laser beam hitting the underside of a leaf, and the side of a glass of milk. I've marked the portions of the effect that are due to single versus multiple scattering.

post-148-1098390294.jpg

Those are obviously extreme examples, but they serve to give a mental image of the portion of the BSSRDF that we're after here.

Single Scattering

The Model

I tried to stick to the same goals I had for the multiple scattering model: control the whole thing with just a couple of intuitive parameters. In order to do this I had to throw away a large chunk of the full Jensen model. The challenge was coming up with exactly *what* to replace all that stuff with -- if we just threw it all away, we would end up with something cheap to compute, but looking nothing like the "real thing". As you know, this took some time to do :P , and the result is, as always, a compromise.

Here's the geometry of single scattering:

post-148-1098390328.jpg

The code uses the same symbols to refer to each component (in case you go digging ;) ).

1. At the surface point P, we find the refracted direction -Wpo (the primed symbols are the refracted versions of the non-primed ones). The amount of "bend" at the interface is governed by the "index of refraction" (IOR) parameter. Most materials have an IOR of about 1.3 or higher.

2. What we do now is accumulate the amount of light hitting this line (we only care about the portion of the line inside the medium, of course). To do this, we randomly scatter a whole bunch of points (samples) along this line and, for each point (Psamp), we loop through all lights and store the irradiance arriving from each one. The more points we use, the "more accurate" the result will be (more on this later).

3. When looping through the lights, it is impossible to exactly determine the refracted incoming direction of the light ray (Wpi). So what we do is use the direct direction to the light (-L) -- in other words, pretend there is no refraction on the incoming side -- and then cleverly modify it (see Jensen) to approximate the length that the ray would have had if it had refracted on its way in.

4. A lot of materials exhibit "forward scattering" (skin, milk, etc.), and comparatively very few show some "backward scattering" (silk, velvet, etc.) -- also known as "retro-reflectors". The terms "forward" and "backward" here refer to the direction along the light ray (-L) and opposite the light ray (L) respectively. In order to model this aspect of scattering, we use a "phase function" (Henyey-Greenstein in this implementation) that is controlled by the parameter "Scattering Phase" (also referred to as scattering "anisotropy" or "eccentricity"... but I thought "phase" was a little more friendly/intuitive, yes?). This parameter ranges from -1 (full backward scattering) to +1 (full forward scattering). Skin and milk, for example, have a phase somewhere in the range 0.7 to 0.9, just to give you an idea -- so they are strongly "forward scattering" materials. (Some examples of what this means visually appear further down.)
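
For reference, the Henyey-Greenstein phase function itself is compact enough to write down. Here's a minimal VEX-style sketch (the function and variable names are mine, not necessarily what's in the posted shader):

    #include <math.h>

    // Henyey-Greenstein phase function.
    // g in [-1,1]: negative = backward scattering, positive = forward scattering.
    // wi and wo are unit vectors; with both taken to point *away* from the
    // scattering point, cos(theta) = dot(wi, wo) -- mind your sign convention.
    float phase_hg(vector wi, wo; float g)
    {
        float costheta = dot(normalize(wi), normalize(wo));
        float denom    = 1.0 + g*g - 2.0*g*costheta;
        return (1.0 - g*g) / (4.0 * M_PI * pow(denom, 1.5));
    }

At g = 0 this collapses to the isotropic 1/(4*pi); as g approaches +1 or -1 it becomes a sharp forward or backward lobe.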

In symbols then, the full model (in its current incarnation) is:

post-148-1098390376.jpg

Where I(Ps, Wi) is the direct irradiance at the sample point, F is the combined incoming and outgoing Fresnel transmittance factors, P(g,Wi,Wo) is the phase function with scattering asymmetry g, T(Xo,Ps) is the extinction from the sample point to the exit point, T(Xi,Ps) is the extinction from the entry point to the sample point, and the term (1-g) is a normalizing factor for the phase function.
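
Reading that term list literally, the per-sample gather presumably has a structure something like the following. This is *not* the posted shader -- just pseudo-VEX spelling out how the pieces above combine, and every helper name in it (sample_point, direct_irradiance, falloff, and so on) is made up for illustration:

    vector ss = 0;
    int    i;
    for (i = 0; i < nsamples; i++) {
        vector Ps   = sample_point(i);                  // Psamp on the refracted ray (hypothetical helper)
        vector irr  = direct_irradiance(Ps);            // I(Ps, Wi), summed over lights (hypothetical helper)
        float  F    = fresnel_in * fresnel_out;         // combined Fresnel transmittances
        float  ph   = phase_hg(Wpi, Wpo, g);            // phase function P(g, Wi, Wo)
        float  Tout = falloff(distance(Xo, Ps), sd);    // extinction between Ps and the exit point Xo
        float  Tin  = falloff(distance(Xi, Ps), sd);    // extinction between Ps and the entry point Xi
        ss += irr * F * ph * Tout * Tin * (1.0 - g);    // (1 - g) = phase normalization
    }
    ss /= nsamples;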

Some general comments about this model:

*) It is highly directional -- how much or little you see of it is highly dependent on your position and viewing direction.

*) Its directionality is controlled by two things: how much light bends at the surface (IOR), and the scattering eccentricity of the material (Phase).

*) In general, single scattering travels a *lot* less distance inside the material ("Scattering Distance") compared to multiple scattering -- something on the order of 129 times shorter according to one reference. Currently, this implementation makes no effort to relate the two distances, but I will add this in the next iteration.

*) Take the previous comment with a grain of salt though, because some materials (e.g: marble) have almost equal amounts of single and multiple scattering; so caveat emptor.

Implementation Notes

As with the multiple scattering model, I replaced the exponential falloff with the smooth() function 1 - (3x^2 - 2x^3) for the same reasons as before.
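
In code, that falloff is just a clamped cubic. A minimal sketch (names are mine), with sd standing in for the scattering distance:

    // Smooth falloff used in place of the exponential: 1 at d = 0, 0 at d = sd.
    float falloff(float d; float sd)
    {
        float x = clamp(d / sd, 0.0, 1.0);
        return 1.0 - (3.0*x*x - 2.0*x*x*x);
    }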

Scattering points at random is not a very good way of arriving at a result quickly. It helps if you know the expected density of the distribution (the "probability density function", or PDF). Luckily for us, we do! :) It is the smooth function. We can use this to distribute the samples in such a way as to maximize their contribution and converge on the correct result much quicker. This is known as "importance sampling".

The idea is that, instead of generating uniformly-distributed random locations, we generate random locations with a pre-determined distribution. In order to do this we need something called the "inverse cumulative distribution" function, which is a second cousin of our PDF (the smooth function). Finding this cousin isn't necessarily always easy, but here's what I've come up with for this implementation... I'm pretty sure it's correct, but if someone finds fault, *please* speak up.

post-148-1098390456_thumb.png

And here's a plot of the three curves:

post-148-1098390478_thumb.png
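
And in case anyone wants a purely numerical cross-check of the closed form above: assuming the PDF is proportional to 1 - (3x^2 - 2x^3) on [0,1], the normalized pdf is 2(1 - 3x^2 + 2x^3) and the cdf is 2x - 2x^3 + x^4, and a handful of bisection steps on that cdf is plenty at shading precision. A minimal sketch:

    // Importance-sample the falloff 1 - (3x^2 - 2x^3) on [0,1]:
    //   pdf(x) = 2(1 - 3x^2 + 2x^3),  cdf(x) = 2x - 2x^3 + x^4.
    // Given a uniform variate u in [0,1], solve cdf(x) = u by bisection.
    float sample_smooth_falloff(float u)
    {
        float lo = 0.0, hi = 1.0, x = 0.5;
        int   i;
        for (i = 0; i < 16; i++) {        // ~1/65536 resolution; plenty here
            x = 0.5 * (lo + hi);
            float c = 2.0*x - 2.0*x*x*x + x*x*x*x;
            if (c < u) lo = x;
            else       hi = x;
        }
        return x;
    }

Scale the result by the scattering distance to place a sample along the refracted segment; the samples then bunch up near the surface, where the falloff (and hence the contribution) is largest.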

Having this means two things: 1) we need far fewer samples than if we were just using the random function, and 2) we can drop one of the terms in the original model, so that now it becomes:

post-148-1098390417.jpg

To trim needless sampling a little more, I do a trace call from the exit point (Xo) along the refracted viewing direction (I). This is necessary because the scattering distance could well be much longer than the cross section of the object, so if we didn't do this, we would waste a great many samples testing empty space.

However! Wherever there is a call to rayhittest(), there's also the likelihood that it may fail (even though there might well be a legitimate surface to intersect, it just misses it -- collision is a little flaky for some surface types). For the case of the outgoing refracted ray (Wpo), I choose to fall back to the full scattering distance when rayhittest() fails. However, I haven't yet added the logic to compensate for the possible "samples lost in space" that such a failure would result in... it's on my TODO list.

The incoming surface position (Xi) is also determined by casting another ray, except this time it's toward the light source (along L) -- there's just no way around this (that I can think of). Since this is also done using rayhittest(), we again need to deal with the possibility of failure. Currently, failed samples are simply not counted, but I'm sure I can come up with something a little more robust than that. :rolleyes:

In short: if you see "speckles" in some render, now you know where they're coming from: failed ray traces (and an unfinished implementation). About all I can say right now is that in the specific case of a primitive sphere, the intersection code seems to be doing well. But I have a feeling that NURBS may not fare as well. And my TODO list grows... ;)

Results

Here are a few images to test the effect of some of the parameters. Since single scattering is most pronounced when looking directly at the light source, these are all directly back-lit, with a single point-light source very close to the object, and with the object a flattened sphere (where the profile goes to zero depth). Here's the setup:

post-148-1098390507.jpg

And here are the effects of modifying the phase

post-148-1098390524.jpg

And the index of refraction

post-148-1098390540.jpg

To stress the fact that this is still very early going with this implementation (and just to generally annoy people :P ), I have *not* included this model in the VOP. Instead, it is a stand-alone shader which does single scattering *only* (no diffuse scattering). For one thing, I haven't even begun to think about whether there should be some built-in way to relate the two (I think there should, but I haven't thought about how to implement it yet). And for another... I thought I'd go ahead and post the basic model before I get into all those details (otherwise you'd have to wait even longer for an update! :D )

So here's the bundle to test this "first pass" version:

SSSpixar4.zip

Oh; one more thing. I literally added chromatic sampling a few minutes ago, but I'm not entirely sure the way I'm doing it is the best way to go... it might give strange results at short scattering lengths... very much a "use at your own risk" feature at the moment.

Cheers!


OK. Now that that second big chunk is well on its way, I can relax and catch up with some of the posts. First of all, thanks everyone for all the comments and suggestions... much appreciated! :)

Why do you use 1e37 as the search radius for the pcopen function and not the scattering distance?


Because it is the only way I can think of to tell pcopen() to give me "this many closest points" regardless of how far "closest" actually is. The calls that have this number appear in the functions that do the reconstruction (the filtering), ssMulti() and ssSingle(), which is where we put the "Points to Filter" parameter to work. So the intent here is to tell pcopen() "Go and look as far as you like (1e37), but bring me back no more than the closest Points to Filter points".

However, if you look at the ssIntegMulti() function, you'll find another call to pcopen() which is more along the lines of what you were thinking (the opposite of the filter function). Here we're saying "Look no further than 'this' radius (lu1), but bring me *all* the points you find along the way" ((int)1e9).
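
For the curious, the two call patterns being contrasted here look roughly like this (the cloud file, channel names and counts are illustrative; the loop body is where the filtering or integration would happen):

    // "Filter" style (ssMulti / ssSingle): unrestricted radius, restricted count.
    // "Go as far as you need to, but bring back only the N closest points."
    int h_filter = pcopen(pcfile, "P", P, 1e37, npoints_to_filter);

    // "Integration" style (ssIntegMulti): restricted radius, unrestricted count.
    // "Stay within this radius, but bring back everything you find in it."
    int h_integ = pcopen(pcfile, "P", P, lu1, (int)1e9);

    // Either way, the returned points are then walked the same way:
    while (pciterate(h_filter)) {
        vector irr;
        pcimport(h_filter, "irradiance", irr);   // or whatever the cloud stores
        // ... filter / accumulate ...
    }
    pcclose(h_filter);
    pcclose(h_integ);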

But you know what? This is just the way I've been doing it since I started using point clouds, so I don't even pay attention to it anymore. I never actually *tested* whether a call with unrestricted_radius+restricted_npts returns the same points as a call with restricted_radius+restricted_npts (assuming npts points actually exist within the restricted radius). A kd-tree would be partitioned according to distance from the query (P in this case) so, as I understand it, the data would be traversed in sorted order (closest to furthest) regardless of the given radius. The only source of variation I can think of is the number of allowable points per leaf node -- if that were a high number, then there is a possibility that those little clumps could be traversed differently on each call.

If you get a chance to test this and come up with some definitive answer, I'd really appreciate it. (you've gone and made me doubt the whole thing now, thanks <_<:P )

BTW, the "SSS Point Cloud" SOP HDA could use a second input for rest geometry, perhaps?


Good point. Yes. Although... I think the scatter SOP should pick up (interpolate, etc.) all incoming attributes, so you could presumably add any number of attributes before feeding it to the HDA, and they'll be part of the cloud... gotta check this.

Regardless though, the code currently assumes your query is against the pc-stored-attribute "P" (and "N"), not "rest" and "rnml" (or some other arbitrarily-named pair for that matter). Hmmm. That could be a problem; yup.

At the SOP level, one can always easily hack around some deficiency in the HDA, but having the guts of the VOP be rigid about these attributes is a different story altogether. I need to have a look at this. Thanks.

Might be very wrong here, but given the speed issues, I wouldn't be too surprised if the "# of points to filter" points aren't the ones that are most relevant (i.e. the closest).

The reason might be the following: if you calculated, for each point, its distance to all other points, your CPU would stay busy for quite some time, and you'd also need to sort all those distances (O(n log n) to O(n^2) with something like quicksort, per point). That could make point-cloud shaders even slower than conventional ones.

--> Likely "50 points to sample" means that 50 more or less random points are chosen for sampling. This would explain to some degree why we get those splotches. With a smart data structure (which surely exists) the chosen points won't be all that random, yet they are probably not the truly closest ones you'd expect within this imaginary radius/volume sphere.


I'm by no means an expert on kd-trees, but I'm fairly certain that, within some hard-wired (but presumably "reasonable") margin of error, the points visited during a query are indeed the closest to the seed position (in terms of L2 distance). My speculation would be that *if* there is a margin of error, it would come from the built-in maximum allowable population for each leaf node. i.e: if this number is allowed (by the implementation) to be >1, then it is perhaps possible that the traversal *within the leaf node* would be non-deterministic. However, it is customary to make the max population of a leaf node statistically small relative to the cloud dimensions (where the members of any given leaf node would be statistically "very close" to each other -- so much so that it doesn't warrant splitting the node).

Having said that, I still think Simon has brought up a very good point, and definitely something that needs to be put to rest once and for all (I was kind'a trying really hard to ignore this problem and pretend it didn't exist :whistling: <sigh>... guess it didn't work). The final authority on the subject is of course Mark Elendt (the programmer at SESI who wrote these pointcloud tools). Maybe he won't mind if I bug him with this one question :) ... let me see what I can dig up.

Thanks again for all the suggestions everyone!

Cheers!


I'm also wondering about the whole notion of using rest positions and normals with point clouds. Surely this raises the issue that the lighting calculation should only be done on the current position of the points, not referencing back to a rest position which has no relevance to the current situation of light interacting with the geometry? Or am I totally missing the point of how you would use a rest position in this instance. :huh:


I'm also wondering about the whole notion of using rest positions and normals with point clouds. Surely this raises the issue that the lighting calculation...


Yup. Right again.

On the other hand... you *could* still use an arbitrary attribute by passing the difference between it and P (or N) to the sss functions (they'd take an extra parameter, and would have to be modified to be aware of this, of course). So it *could* be implemented in a way that allows for arbitrary attributes... which means that the question becomes "should we?".

So Jason. In what context did you feel it could be advantageous to have pc-side rest positions?


So Jason. In what context did you feel it could be advantageous to have pc-side rest positions?


Doing small deformations, say on a human face, I'd think there'd be a danger in scattering the points again for every frame - I'm pretty sure it'd cause some low-frequency throbbing in the SS component, wouldn't it? It'd be great to lock down the scattered point distribution but use the new Normal. I'd believe that it'd *look* better to have steady luminance even if it's a little incorrect.

What do you think?


Doing small deformations, say on a human face, I'd think there'd be a danger in scattering the points again for every frame - I'm pretty sure it'd cause some low-frequency throbbing in the SS component, wouldn't it?


Right; I see.

Yes, re-scattering every frame could cause frame-to-frame throbbing; especially on a sparse cloud. I think I can implement this without the need for extra parameters.

OK. I'll give it a go. For now, I'll assume that if the surface has "rest" and/or "rnml" bound to it, then so does the cloud. I'll further assume that if that is the case, then the user expects them to be used instead of P and N -- I can refine this behaviour later, but this is just to test it out for now.

Internally then, the VOP will pass "prest" as the surface position for the query, as well as the rest_delta=(P-prest). The sss filter function will then test for proximity based on the passed-in seed (prest) and sample illumination based on ow_space(pc_prest)+rest_delta. This relies on the caller telling the sss functions that prest is bound to the actual geometry so that the sss function can call pcopen() on the appropriate attribute (prest), otherwise the whole thing falls down.
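
To make that a bit more concrete, here's roughly the shape of it in pseudo-VEX (the attribute and variable names are placeholders, and the exact space bookkeeping still needs sorting out):

    // Proximity is tested against the rest positions stored in the cloud,
    // but illumination is sampled at the corresponding current-frame position.
    vector rest_delta = P - prest;               // offset from rest to current, as described above

    int h = pcopen(pcfile, "rest", prest, 1e37, npoints_to_filter);
    while (pciterate(h)) {
        vector pc_prest;
        pcimport(h, "rest", pc_prest);
        // the stored rest point, pushed to the current frame by the same offset
        vector Pcur = ow_space(pc_prest) + rest_delta;
        // ... evaluate irradiance / filter weights at Pcur ...
    }
    pcclose(h);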

I'll try to add this for the next update.


Doing small deformations, say on a human face, I'd think there'd be a danger in scattering the points again for every frame - I'm pretty sure it'd cause some low-frequency throbbing in the SS component, wouldn't it? It'd be great to lock down the scattered point distribution but use the new Normal. I'd believe that it'd *look* better to have steady luminance even if it's a little incorrect.

What do you think?


I see what you are saying, and certainly throbbing is not what you want, but surely this will only work if the head stays still and only the mouth, eyes, etc. move. If the whole character goes for a walk, taking his/her head with him/her, then the mouth isn't going to be anywhere near where the point cloud is. Or are you suggesting you would scatter the points on a version which has bone deforms but not the shape blends used for facial movement?

Would that be any more stable? <_<


Concerning chromatic sampling: I haven't looked too deeply into this, but chromatic aberration seems like a fairly important issue, at least if you look at the physics/optics theory behind it.

There are a few things we can do fairly little about. I assume Mantra uses the somewhat limited RGB color system for its calculations. Now to the problem behind it: how do you derive the wavelength spectrum from the RGB color space?!

If you consider simply the R, G and B components separately, each with its appropriate wavelength, it somehow gets closer to the real thing, but in a way it isn't at all*. Although it wouldn't be an intuitive way to describe colors, we'd need to be able to input the wavelength spectrum via a ckspline.

Well, likely such a model would be too time-consuming, and maybe there is a faster, simpler way to add a little control to the shading model and fake the physics behind it. (Another problem surely is that we lack any way to describe the light sources appropriately in the first place, and this would have to be considered too.)

Your idea, Mario, was an easily/intuitively controllable SSS model, but some extra/optional parameters for those things might be handy. In particular for plants and the like, whose colors/greens heavily depend on the illumination they receive. (We all remember those little biology tests when we separated all the different chlorophyll types etc. Don't we?! ;) )

If I can come up with an idea for this I'll let you know, but maybe you're already ahead of me and solved this problem. And big thanks for sharing all your work with us.

Jens

* For those who haven't had a closer look at what colors have to do with wavelength, here are a few explanations: the RGB color model is only a simple model that is useful for describing colors (in particular for computer monitors and the like). The trouble is that this is really just an attempt to map real-world colors into the RGB color space of our monitors. Most of you will have heard of the wave theory of light. In the range of visible wavelengths we can map a visible color to each wavelength (a wavelength of 400nm is violet, 520nm is green and 700nm is red, for example). White light consists of a continuous range of wavelengths**, and when it hits an object certain frequencies get absorbed and others get reflected. The ones that get reflected determine the color we "see", e.g. a white material reflects all the wavelengths back, while a black one absorbs the light of all frequencies. A yellow object mainly absorbs the wavelengths of "blue" light, and our eye receives the frequencies of the green & red light. Now, depending on the wavelength of the light we get chromatic aberration when it enters a medium. Usually this is hardly noticeable, but if you use a prism it gets fairly obvious.

Anyhow, this happens as well with SSS: depending on the wavelengths of the light and the ones that get absorbed by our material, we'll see a slight color shift due to chromatic aberration. So far everything seems fine, but here is the big problem: the three color components red, green and blue already consist of multiple wavelengths. I couldn't find any nice diagrams, but the trouble is that we can possibly find a combination of RGB components that seems to perfectly match a real-world color to our eyes, while the wavelength spectrum is a different one. As soon as we changed the color of our light source the difference would become evident and the two colors wouldn't match anymore. The same goes for all SSS calculations: we cannot properly describe the materials with the RGB color system. Hopefully this makes some sense; otherwise wikipedia.org or a proper physics book will describe these things more clearly. Anyhow, this whole thing is likely not really dramatic, but it might make the difference between a photoreal/natural look and a slightly artificial CG look.

**This is not quite true if you look at quantum theory.

[edited the original post and hopefully it's a bit clearer now what I was aiming at]


Hey Jens,

Thanks for that nice explanation :)

In reality, I just picked the term "chromatic" to denote a method that samples the RGB channels separately -- it's more compact than saying "separate samples for each channel". It has no relationship to chromatic aberration whatsoever. But all the things you mention are very valid observations, as you can see by the type of research that Edward pointed to (thanks for the link Edward!).

It is also quite clear from looking at that paper that, in order to account for these effects, you need a much more complex model than what I've posted in this thread -- in fact, the model in that paper uses about five times as many parameters as the original Jensen model (which in turn needs a few more parameters than the model posted here).

Jensen's model takes a "proper" description of the material properties, i.e: absorption, scattering, reflectance, IOR, and scattering eccentricity. They are all necessary for a "physically correct" description. You could take these, and trace photons for as many wavelengths as you like, and arrive at a more accurate solution. But since this would take forever to render, he uses a dipole approximation to bypass all that tracing. However, wonderful as that is, it still doesn't change the fact that the relationship between these parameters (esp. absorption and scattering) is very non-linear, and so a real pain in the butt to control.

There is a re-parameterization of the Jensen model that is mentioned in a few papers. This involves using the BRDF (reflectance) representation of the BSSRDF to extract the "scattering albedo" (or "alpha") using a root finder to do the inversion in the [0,1] interval. Having this, one can then derive absorption, scattering, and the rest. This is in fact the parameterization that I used in the first version I posted here (that's what those "alpha tables" are for). Trouble is that even then, things are still very unintuitive for the user -- i.e: how far will light travel within my object if I feed it a reflectance of 0.5 and a "mean scattering distance" of 0.001?

The answer is "hold on, let me get out the calculator" -- which of course, is unlikely to sit well with clients :)
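
For anyone who wants to poke at that inversion, here's the shape of it. Writing Jensen's total diffuse reflectance from memory (so please verify it against the paper before trusting any numbers), Rd(alpha') = (alpha'/2) * (1 + exp(-(4/3) * A * sqrt(3(1 - alpha')))) * exp(-sqrt(3(1 - alpha'))), where A comes from the internal Fresnel reflectance (i.e. from the IOR). Since Rd increases monotonically from 0 to 1 as alpha' goes from 0 to 1, a plain bisection recovers alpha' from a user-supplied reflectance:

    // Diffuse reflectance as a function of the reduced albedo alpha' (ap).
    // Reproduced from memory -- double-check against Jensen et al. before use.
    float rd_from_alpha(float ap; float A)
    {
        float s = sqrt(3.0 * (1.0 - ap));
        return 0.5 * ap * (1.0 + exp(-(4.0/3.0) * A * s)) * exp(-s);
    }

    // Invert it on [0,1] by bisection, given a target reflectance rd.
    float alpha_from_rd(float rd; float A)
    {
        float lo = 0.0, hi = 1.0, mid = 0.5;
        int   i;
        for (i = 0; i < 32; i++) {
            mid = 0.5 * (lo + hi);
            if (rd_from_alpha(mid, A) < rd) lo = mid;
            else                            hi = mid;
        }
        return mid;
    }

That's essentially what the "alpha tables" in the first version bake out ahead of time; with alpha' in hand (plus the mean scattering distance) you can then back out the absorption and scattering coefficients.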

So: to tackle this chromatic aberration that you mention, you need to use the full model (with the dipole approximation for sanity's sake). But to make it "usable" you first need to somehow reparameterize the damn thing; and how exactly one could go about doing that is not very clear (to me at least). If you're looking for a meaty challenge, then let me know, and I'll post a condensed version of how all the quantities relate to each other; I'd love it (we would all love it) if you could come up with something! :)

Cheers!


The explanation wasn't really directed at you, Mario; I suppose you already know about those things in way more detail. I just recently picked up a book and read about color systems and some optics theory, and it made me start thinking :huh: Having read only those bits that sounded interesting, my knowledge on this subject is still fairly sketchy, i.e. it's a bunch of wild assumptions, and based on these I tried to draw a few conclusions :blink:

The paper Edward pointed out is really interesting, but as you said it's somewhat complex. I have a few ideas on how it may be possible to 'fake' its impact. My idea goes somewhat in the direction of reformulating some of the formulas in HSV space, with an extra parameter for the purity of the light color / surface color. I.e. a laser would be a very 'pure' light source since it's only a single wavelength... natural light sources that might appear the same color to our eye would, however, consist of a broader range of wavelengths, and the 'chromatic aberration' would be evident. Anyhow, something along these lines... Let's see if I can come up with something :unsure:

Jens

oh... and we want new colorful pictures :P

