
The SSS Diaries



hey y'all,

so I got this thing working, hooray. Well, I got the multiple scattering working, which is fine by me; it's better than the single scattering and that's all I'm going to need.

I haven't got the single scattering working, so if you want to take a look at the file, please do. Just replace the multiSSS VOP with the axyz single and tell me where I'm going wrong.

cheers, and thank you Mario. I will hopefully have a nice render for you, to show how much you helped me bring characters into Houdini. :)

-andy

PS: when I use a texture map it always comes out washed out. Can't make any of it out in the render.

PPS: do you always have to light it from behind?

workingMulti.hipnc


been scanning the file last night, noticed some little differences, still trying to figure out why yours works. Is it because the light-cloud generator is placed directly after the Sphere SOP? I don't know.

I will try to do more searching, but if you can work on an explanation I would really appreciate it. In return I might do a tutorial, saving you from having to keep repeating yourself :).

cheers

-andy

edit:// messing about with it, I can't get the multiple scattering that you demonstrate; single works fine, I just hit render.


Hey Andy,

I will try to do more searching, but if you can work on an explanation I would really appreciate it. In return I might do a tutorial, saving you from having to keep repeating yourself :).

Sure, that would be great. But I'm in the middle of a large crunch to finish a job, so you'll have to be a little patient -- maybe this weekend.

edit:// messing about with it, I can't get the multiple scattering that you demonstrate; single works fine, I just hit render.

In the file I posted, if you just render frames 1 and 2 (without touching anything else), you don't get the same images I posted??

If that's the case, then something might be busted...

Cheers.

P.S: Remember to save the .tbf file before you hit render.


Ahhh, I see.

Yep, both single and multiple scattering are working on both frames. Searching through the file, I can't find out how you did it :). Can't see any keyframes or expressions except for the scattering distance on the point cloud SOP.

I don't mind waiting for the mini walkthrough/explanation. Also, how would displacement work with this? I tried a few days ago and the displacement came out weird, but I will leave that to you.

Thanks again Mario.

-andy


I don't mind waiting for the mini walkthrough/explanation. Also, how would displacement work with this? I tried a few days ago and the displacement came out weird, but I will leave that to you.

Thanks again Mario.

-andy


If you need to add displacement you'll have to displace the point cloud by the same amount so that the relative positions stay the same. Since you can't add a displacement shader to the point cloud itself you would need to write a sop that is based on the displacement shader code and displace the mesh before scattering the points. Follow?

Bear in mind though that if you have a very dramatic displacement you may need a lot of detail in the mesh and a lot of scattered points to pick up the correct displacement effect.
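
The "displace the cloud by the same amount" idea can be sketched in plain Python (illustration only, not the actual Houdini API; the function and variable names here are mine): apply the *same* displacement function to both the render mesh's points and the scattered cloud points, so their relative positions stay consistent.

```python
# Sketch of the idea (plain Python, not Houdini code): displace a point
# along its normal, mirroring what a displacement shader would do, and
# apply the identical function to mesh points and point-cloud points.

def displace(P, N, amount):
    """Move a point along its normal by 'amount' (object-space units)."""
    return tuple(p + n * amount for p, n in zip(P, N))

# One mesh point and one nearby cloud point, same normal, same amount --
# both move together, so their relative positions are preserved:
mesh_pt  = displace((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), 0.25)
cloud_pt = displace((0.1, 0.0, 0.0), (0.0, 1.0, 0.0), 0.25)

print(mesh_pt)   # -> (0.0, 0.25, 0.0)
print(cloud_pt)  # -> (0.1, 0.25, 0.0)
```

In practice the "amount" would come from evaluating the same map or function the displacement shader uses, which is exactly why a SOP based on the shader code is needed.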


I must say this is a little disheartening to hear. While my knowledge of Houdini is growing more and more by the day, I am still a little incompetent when it comes to VOPs, which is what I want to prevail in, eventually moving on to a code foundation in VEX.

With my current set of programming skills I don't think I would be able to construct such a thing.

I understand the logic of most things, but implementing them can be quite frustrating, as you have all probably experienced.

Right now there are a few options left for me and this shader.

1) Mario helps me get it working (I'm sure he will; he has already shown me more than I expected, I just need to figure out where I'm going wrong).

2) Without being able to implement displacement (which I got working with the help of LEO-oo), there is very little I can do with my characters, as they are low-poly based with a displacement carrying them to the next level. I might have to opt for normal and bump maps to carry me through, but again I don't think I will get the results I desire.

3) Await the mighty SESI's new release, which might have some sort of SSS implemented.

4) Search for a shader that has fake SSS which can implement displacement. I have seen at least one on the Exchange, so that might help.

5) Or finally, the distant idea of someone creating such a SOP that will displace the point cloud, or maybe telling me what to do in general terms, something for me to work on, i.e. gather all the points, subtract, etc.

I guess I need to learn a lot more :).

I still await the almighty Mario (take your time mate, I understand the pressures involved) to explain where I'm going wrong, but until then it's faking and poor, unrealistic shading :). Cheers sibarrick for your response; it's good to hear about this now rather than later :).


The character I have in question is the one in the displacement thread. It's generated from a 2-million-poly object. :)

Same goes for the legs, arms and gloves. I'm using multiple UV spaces.

Is this what you mean, just to generate the point cloud and then swap it out for the low-res version? Either way, I don't know if Houdini can handle it, let alone my machine :P


Why not?

I did a quick test with 1 million polys and it was just fine.

I have a P4 with 1 GB of RAM, and Windows is already eating up 400 MB before running Houdini.

Also note that you don't really need displacement for very fine detail. Maybe use displacement for subdivision levels 0 to 6 and a bump map for levels 6 to 8.

post-602-1146950583_thumb.jpg


That's certainly something I'm gonna do in the future, but the character I am currently working with is a multi-object piece. It's a workflow based on ZBrush, where you separate your object based on UV map, which goes beyond the normal 0-1 range.

Forgive me if I'm confusing you; it's easier to explain with pictures. But the thing is, the character that is recombined together will weigh in at 10 million polys, since there are 5 UV texture spaces I will use (0-1, 1-1, 2-1, 3-1 and so on).

Like I said, I might go down this route in the future, but it depends on what I'm doing: print, animation, etc.

Just wanted to say thanks to all who have helped and are trying to help. I read everything you guys post and have learnt a lot from this thread alone.

-andy


Why not?

I did a quick test with 1 million polys and it was just fine.

I have a P4 with 1 GB of RAM, and Windows is already eating up 400 MB before running Houdini.

Also note that you don't really need displacement for very fine detail. Maybe use displacement for subdivision levels 0 to 6 and a bump map for levels 6 to 8.

post-602-1146950583_thumb.jpg


Just FYI,

You can manage your memory in a more frugal way if you want to; you can take advantage of the "Read From File..." capabilities in Mantra. If you manage this right, you can get Mantra to load in the object directly from disk and you never have to load it in Houdini, save to generate the point-cloud.


If you needed to take it to the ultimate extreme you could even split the object up and load it in bits and generate a point cloud for each bit and clear the memory after each one then combine all the point clouds into one at the end. There's always ways to reduce memory.
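
The chunked approach can be sketched like so (plain Python, illustration only; in a real pipeline each chunk would be loaded from disk and the geometry freed between pieces, which this toy version only gestures at):

```python
# Memory-frugal sketch: scatter points on each chunk of a heavy object
# separately, then merge the per-chunk clouds at the end. 'scatter_chunk'
# is a stand-in for real per-chunk scattering.

def scatter_chunk(chunk, npoints):
    """Toy scatter: just take the first npoints positions of the chunk."""
    return chunk[:npoints]

def build_cloud(chunks, npoints_per_chunk):
    cloud = []
    for chunk in chunks:
        cloud.extend(scatter_chunk(chunk, npoints_per_chunk))
        # In a real pipeline the heavy chunk geometry would be freed here
        # before loading the next piece, keeping peak memory low.
    return cloud

chunks = [[(float(i), 0.0, 0.0) for i in range(100)] for _ in range(3)]
cloud = build_cloud(chunks, 10)
print(len(cloud))  # -> 30
```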

Just realised I've gone past the 1000 posts and become a Grand Master, hoorah!


@King Jason and Grand Master Simon: Thanks for helping out with the explanations, much appreciated... I really didn't know what I was getting into when I posted this SSS stuff, did I :)

Forgive me if I'm confusing you; it's easier to explain with pictures. But the thing is, the character that is recombined together will weigh in at 10 million polys, since there are 5 UV texture spaces I will use (0-1, 1-1, 2-1, 3-1 and so on).

Like I said, I might go down this route in the future, but it depends on what I'm doing: print, animation, etc.

Point clouds are very light-weight in terms of ram -- particularly if saved to, and loaded from, a .tbf (as opposed to a .bgeo) file. "Tiled Block Format" files (.tbf) achieve this by only loading and retaining the bits that are needed for shading instead of keeping the entire cloud around all the time, which is the case with .bgeo files.

All early tests of the SSS code were done on geometry extracted from scanned objects (from the Stanford 3D Scanning Repository), and the hi-res version of these are all >1million polys (except for the bunny I think). BTW, houdini can load .ply files no problem, so go ahead and download a few if you feel like doing some stress testing ("Lucy" is around 28 million triangles). But like Jason says, when working with heavy geometry like this, always set things up to read from file... *always*.

If your base model lives in that rarefied region of massively dense objects... I pity you :P... but, OK, sometimes you *have* to work with them (car models from manufacturers' CAD programs immediately spring to mind, and are fairly common actually... and massive). If this is the case, then chances are good that the number of points (and more importantly, their distribution) is already good enough for all your SSS needs (and likely better than anything the Scatter SOP could generate for you, distribution-wise).

For this reason, it is sometimes better to bypass the scattering step altogether and use the geometry's own points as the point cloud for the SSS calculation... assuming the object is translucent enough, that is -- and this is a key concept: you have to understand how the number of points per unit surface area in the point cloud (where all units are in *object space*) relates to the translucency settings of the SSS shader. But I'll get to that later.

First, here's an HDA that allows you to switch between a scattered point cloud (using the ScatterSOP in much the same way as the one you're using now does), or using the points of the input geometry directly -- still calculating the "ptarea" attribute and whatever else is needed by the SSS shader. This HDA will give you the option to go either way depending on your case.

BUT!... please note that this HDA was created as an internal tool for testing stuff (stuff completely unrelated to SSS, as it happens) and has not been "prettyfied" for general consumption -- this means no help, no tooltips, no hand-holding of any kind. If you want to find out more about what's going on inside, I'm afraid you'll have to dive in and look at the OPs on your own, sorry (it's actually very simple). It does, however, give you two ways to generate point clouds that are compatible with the SSS shader(s), which is useful. Here it is. Use at your own risk:

PointCloud.otl
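
That key concept -- points per unit surface area versus translucency -- can be put in numbers. A small Python sketch (illustration only, not Houdini code; the uniform-scatter assumption and helper names are mine): the "ptarea" each cloud point represents is the total surface area divided by the point count, and its square root approximates the average spacing between neighbouring points, which is what the scattering distance has to be weighed against.

```python
import math

# Rough sketch of the density concept, assuming a uniform scatter:
# "ptarea" is the surface area each cloud point represents; its square
# root approximates the average spacing between neighbouring points.

def ptarea(surface_area, npoints):
    return surface_area / npoints

def avg_spacing(surface_area, npoints):
    return math.sqrt(ptarea(surface_area, npoints))

# A unit cube has 6 square units of surface; with 2000 scattered points
# (the default used in the tutorial below), neighbouring cloud points
# sit roughly 0.055 object-space units apart:
print(round(avg_spacing(6.0, 2000), 3))  # -> 0.055
```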

I'm also going to assume that, until you understand how to put together your own SSS-enabled shaders, you'll be using the VOP SHOP I included in the Exchange submission, called "sssVOP". It's a very simple shader, just SSS+specular, but it's in VOP form at least (no scary VEX code), and shows the basic hooks into the SSS functions (which *are* in VEX, and embedded in the VOPs). You can copy-paste it into new sessions, or save/load it using opsave/opread, or opscript it out and then source it, or convert it to a SHOP and store it in an OTL file for later reuse as a shader only... s'up to you.

OK. Here we go. Let's start with the meaning of some of the main parameters (I'll get to texture maps, displacements, bound parameters, and all that jazz later, in separate installments, as time permits... best I can do):

Scattering Distance:

1. Add a camera ("cam1") at {0,0,5} with "Projection"="Orthographic", "Ortho Width"=1, and "Resolution"={100,100}.

2. Add a light ("light1") at {5,0,0} with y-rotation of 90 deg, and set its "Light Color"={1,1,1}, "Attenuation"="No Attenuation", and "Projection"="Orthographic" (ortho proj. is important for this test).

3. Create a geometry object (let's call it "tut1") and go inside it.

4. Put down a default BoxSOP ("box1") and toggle "Consolidate Corner Points" off. Leave the render flag set on this SOP throughout the tutorial.

post-148-1147035034_thumb.jpg

5. Append a PointCloudSOP ("PointCloud1" -- the HDA I posted above), and set "Point Source" to "Scatter Points" and "Output File" to "tut1.tbf". Click on the "Output Tiled Block File" button to save the point cloud (this is very important!).

post-148-1147035042_thumb.jpg

6. Set the viewport to show you the view from cam1, enable point display on the viewport, and set the display flag on the PointCloud1 SOP. You should see a bunch of points.

post-148-1147035049_thumb.jpg

7. Create an instance of "sssVop" in the "/shop" folder and set "Multiple Scattering > Pointcloud File" to "tut1.tbf", toggle "Single Scattering > Enable Single Scattering" off, and set "Specular > Specular Color" to {0,0,0}. Assign this shader to the "tut1" object.

post-148-1147035057_thumb.jpg

8. Create a Mantra ROP, and set its Camera to "/obj/cam1" and "Super Sample" to {4,4}, then fire off a render from the viewport using this ROP. You should see this:

post-148-1147035063_thumb.jpg

OK. That's the basic "hello world" of an SSS setup, and the steps taken to get there. Here's the hipfile thus far, for convenience, but please do the steps yourself first, so you understand the basic bits that need to be present:

tut1_base.hip

What we have here then, is parallel light rays coming directly from frame right and hitting a unit cube, whose dimensions coincidentally happen to fit our viewing extents exactly, filling frame. The shader is set to only do "multiple scattering", using the pointcloud file "tut1.tbf" that we generated with the "PointCloud1" SOP -- and we note that this is a "tiled block format" file (.tbf), not a .bgeo file.

... now we can start talking about the scattering distance... phew...

Crank the gamma of the mplay window to 20 so that we blow up all the low values:

post-148-1147035070_thumb.jpg

Note that there is actually light going all the way to the left of frame (i.e: all the way through the cube). Also recall that the cube is 1 unit in all dimensions. This means that light has travelled (at least) 1 unit in object space (the space in which both the cube and the point cloud were created -- i.e: "SOP space"). Now look at the "Scattering Distance" parameter of the shader and note that its value is 1. Coincidence?... I think not ;)... but let's confirm:

Change "Scattering Distance" to 0.5 and fire off another render (leave the gamma correction on the mplay window where it is), and you should see this:

post-148-1147035077_thumb.jpg

Note that now the light is traveling about 0.5 units in object space.

What happens if the cube is 2 units big instead of one?

Change the "Size" parameter in the box SOP to {2,2,2} and click on the "Output Tiled Block File" button of the PointCloud1 SOP to save the new point cloud. Then change cam1's "Ortho Width" to 2 so we can see the whole cube again. Leave the scattering distance in the shader alone (at 0.5) and fire off another render:

post-148-1147035083_thumb.jpg

As you can see, the light is *still* traveling 0.5 units in object space, but now 0.5 units represent 1/4 the length of any one of the cube's sides, instead of 1/2 as it was when the cube was one unit in size.

One last test. Set the "Uniform Scale" of the "tut1" object to 0.5, and the camera's "Ortho Width" back to 1. Render another frame.

post-148-1147035083_thumb.jpg

The reason that the result is identical to the previous one is that this last scaling was done in world space and therefore does not affect the cube's dimension in object space -- that remained the same. Also, and more importantly, note that we didn't have to re-save the point cloud file in the last step. Again, this is because nothing changed in object space (where the cube is defined and its point cloud generated).

So. "Scattering Distance" represents the distance that light will travel inside the scattering medium (the cube) before it gets completely extinguished.
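
As a rough numerical illustration of that statement (illustration only: the shader's actual falloff curve may well differ; a simple linear ramp is just the easiest way to show the behaviour seen in the renders above):

```python
# Treat "Scattering Distance" as the object-space depth at which the
# transmitted light reaches zero. A linear ramp (an assumption, not the
# shader's real curve) reproduces the qualitative behaviour above.

def transmitted(depth, scattering_distance):
    return max(0.0, 1.0 - depth / scattering_distance)

# Scattering distance 1.0 on the unit cube: light just dies at the far
# face. At 0.5, it dies halfway through -- regardless of any world-space
# (object-level) scaling, since depth is measured in object space.
print(transmitted(1.0, 1.0))  # -> 0.0
print(transmitted(0.5, 1.0))  # -> 0.5
print(transmitted(0.5, 0.5))  # -> 0.0
```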

Now re-load the tut1_base.hip file to start fresh.

Filter Points and Reconstruction:

Go to the PointCloud1 SOP and change the "Number Of Points" from the default of 2000 to 200, then save the .tbf file (don't forget to save the file!). Now in the shader, change "Points To Filter" to 1. Fire off a render:

post-148-1147035092_thumb.jpg

At the center of each one of those patches lives one of our pointcloud points (PC points henceforth). Subsurface scattering *is* being computed for each one of these PC points (as you can see, there is a clear attenuation of the intensity of each patch as the light travels through the cube), but it's also painfully clear that it is *not* being computed for every single surface position being shaded -- just the PC points... the ones we saved in the .tbf file which the shader loads up.

The shader is being told (through setting the "Number Of Points To Filter" to 1) to grab the SSS value computed for the PC point closest to it and use that as the final shading value at that position. This means the shading is held constant until a new PC point is deemed to be "closest"... giving us a pretty Voronoi diagram :)

It would be better if we could tell the shader to grab, say, the 8 closest points and interpolate between them. We can test what this would look like by setting the "Number Of Points To Filter" (NPF henceforth) to 8, which gives us this:

post-148-1147035099_thumb.jpg

Better.

In general then, the NPF parameter is responsible for how "smooth" the reconstruction looks -- the higher the number of points being used, the smoother the result. But as with any other filter, if you go too high, you'll end up blurring the result too much. The balance between the number of points in the pointcloud and how many of them to filter over is something you need to experiment with to get a gut feel for how they relate.
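
The reconstruction step described above can be sketched as a k-nearest-neighbour blend (plain Python, illustration only; the shader's actual weighting function may differ -- inverse-distance weights are my assumption, and the 1D cloud is just for compactness):

```python
# Sketch of point-cloud filtering: find the k closest cloud points to a
# shaded position and blend their precomputed SSS values by inverse
# distance. k=1 gives the blocky Voronoi look; larger k smooths it out.

def filter_points(shade_pos, pc, k):
    """pc is a list of (position, sss_value) pairs; positions are 1D here."""
    nearest = sorted(pc, key=lambda p: abs(p[0] - shade_pos))[:k]
    weights = [1.0 / (abs(p - shade_pos) + 1e-6) for p, _ in nearest]
    total = sum(weights)
    return sum(w * v for w, (_, v) in zip(weights, nearest)) / total

# A 1D cloud with SSS values falling off along x (light entering at x=0):
cloud = [(0.0, 1.0), (1.0, 0.5), (2.0, 0.25), (3.0, 0.0)]

print(round(filter_points(0.5, cloud, 1), 3))  # -> 1.0  (nearest point only)
print(round(filter_points(0.5, cloud, 2), 3))  # -> 0.75 (blend of two)
```

Pushing k too high pulls in far-away points and over-blurs the result, which is exactly the filtering trade-off described above.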

OK. That's all the time I have right now. Maybe I'll do another installment next weekend, but this covers the two most fundamental concepts and helps you get going somewhat.

I recognize that this is not a plug-and-play SSS solution, and that it takes a lot of explaining to actually "understand" what to tweak in order to achieve some specific result, but it *is* pretty fast, and fairly flexible, once you get what's going on.

Knowing what you know now, maybe you can go back to the beginning of this thread and read about how the other parameters came into being, and hopefully deduce how they were meant to be used (skip all the math, it is not necessary).

Cheers!


I've just got in and seen this post. I haven't even started reading it, but wow. Thank you so much for all the time you spent.

Gonna read it, learn from it, and post my results.

Did you submit this thing to SIGGRAPH? I think people need to know about it. :)

Can't thank you enough.

-andy


Did you submit this thing to SIGGRAPH? I think people need to know about it. :)

Hehe, thanks, but I wouldn't know where to start. Lucky for all of us though, there was a fellow by the name of Henrik Wann Jensen who *did* submit a now famous paper to SIGGRAPH. This, and subsequent papers based on it, are what spawned all the implementations that you see out there, including the one you're using now -- I invented nothing, just implemented a simplified version of it. What you're using is not anywhere as complex as Jensen's original algorithm, and really owes more to Pixar than to Jensen -- see the very first post in this thread.


"If I have seen further, it is because I have stood on the shoulders of giants."

Like Stu said, there is a big difference between reading theory and finding a practical application/outlet.

I'm currently reading this and nodding my head, understanding what you're saying. I will let others post until I can show you my latest render. Again, thank you.

-andy

