
SSS quality like 3Delight's? Possible in Mantra?


kensonuken


Is it possible to get SSS quality like this?

The SSS from 3Delight looks very full-bodied and natural... is this possible in Mantra?

What would the code for such SSS look like? I've checked the SSS Diaries, but nothing there came as close as 3Delight's SSS model... how is it possible in Mantra?

Any code that helps get this look would be appreciated.

(attached image: post-3174-1228415864_thumb.jpg)



No, Mantra does not have a built-in sss calculation (or an equivalent to PRMan's ptfilter).

As far as getting the code goes... I suppose you can always ask Aghiles over at 3Delight, or the fine folks at Pixar, though I wouldn't hold my breath if I were you...


Mario, is that you? ;)

Tongue firmly planted in cheek.

A little sarcasm slipped through during another all-nighter at work...

What? Can't I be shocked and bummed out? ;)

But surely you're not surprised, kensonuken, that these people are a tad reluctant to give away their many hours of work for free, right?


Do you mean something supported internally in C++? (As opposed to the two VEX solutions, Axyz and Axis.)

Hopefully SESI will implement a proper SSS shader in their next release?

I mean, SSS has become pretty much an everyday shader, and a much needed/used one too...

jason


I suppose what we need here is a better way to store data... pclouds are a pita. I personally would like to be able to do something similar to what you can do in real-time hardware, that is, be able to render anything to off-screen textures (to UV or NDC), automatically like shadow maps, without any scene/shader/bake/file wrangling.

Basically, the way I see it, you would stick a "render off-screen" node anywhere in your VOP network/VEX shader, acting as a trigger with options and an AOV at the same time. This would be detected as the IFD is parsed and spawn as many pre-renders as necessary; the beauty render then starts, making use of the data it gathered. Heck, this is probably already possible to set up in SOHO... but that stuff is way over my head.

You could go further to optimize the baking process, like baking to UV but only what is seen from the camera, and accumulating to the texture as the camera moves around, or something.

What I liked about the rman sss shader was that it was fast, and the brickmap stuff is really stable (as in easy to keep flicker-free). It's still hard to get it to work well with shallow, detailed sss though.

That said, I reckon the Axyz sss can mimic that teapot render :)

S


As I understand it, both PRMan and 3Delight use the same approach -- an implementation (proprietary, natch!) of this paper, which uses an octree to store the radiance samples and speed up the calculation. In both cases, the actual light transport is carried out using Jensen's original dipole algorithm (which the Axyz sss vop does not! -- doing so is left as an exercise for the user :)).
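For anyone tempted to take up that exercise: the dipole reflectance profile from Jensen's paper is, going from memory (check the original before trusting the constants),

R_d(r) = \frac{\alpha'}{4\pi}\left[ z_r\,(1+\sigma_{tr} d_r)\,\frac{e^{-\sigma_{tr} d_r}}{d_r^3} + z_v\,(1+\sigma_{tr} d_v)\,\frac{e^{-\sigma_{tr} d_v}}{d_v^3} \right]

with \sigma_t' = \sigma_s' + \sigma_a, \alpha' = \sigma_s'/\sigma_t', \sigma_{tr} = \sqrt{3\,\sigma_a\,\sigma_t'}, z_r = 1/\sigma_t', z_v = z_r\,(1 + 4A/3), d_r = \sqrt{r^2 + z_r^2}, d_v = \sqrt{r^2 + z_v^2}, and A = (1+F_{dr})/(1-F_{dr}) computed from the diffuse Fresnel reflectance of the material.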

In PRMan's case (as of 13.5 anyway), the radiance samples (along with each point's approximate area) are generated in a separate pass, using a "bake radiance" shader which stores them in a point cloud file using the bake3d() function. A few settings are crucial for this step to work: 1) a special mode of the area() function ensures that actual micropolygon ("MP" henceforth) areas are used (instead of smoothed shading areas, which would be too large), 2) the "interpolate" parameter for the bake3d() function has to be set so that a single point at the center of each MP is generated (avoiding duplicate points along shading grid edges), and 3) the RIB has to be told to not cull hidden or backfacing surfaces, and to turn off view-dependent dicing.
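To make that concrete, here's a minimal sketch of what that baking shader and RIB setup tend to look like (loosely after Pixar's application notes; the "_area"/"_radiance_t" channel names and the exact attribute spellings are from memory, so verify them against your PRMan docs):

/* bake_radiance.sl -- sketch of the radiance-baking pass */
surface bake_radiance(string ptcname = "radiance.ptc";)
{
    normal Nn = normalize(N);
    /* true micropolygon area, not the smoothed shading area */
    float a = area(P, "dicing");
    /* whatever illumination you want diffused under the surface */
    color irrad = diffuse(Nn);
    /* "interpolate" 1 -> one point per MP center, no duplicates along grid edges */
    bake3d(ptcname, "_area,_radiance_t", P, Nn,
           "interpolate", 1,
           "_area", a,
           "_radiance_t", irrad);
    Ci = irrad * Cs;
    Oi = Os;
}

...and the RIB for that pass needs roughly:

Attribute "cull" "hidden" [0]        # don't cull hidden surfaces
Attribute "cull" "backfacing" [0]    # don't cull backfacing surfaces
Attribute "dice" "rasterorient" [0]  # view-independent dicing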

Next, this pointcloud map is fed to the "ptfilter" standalone utility, which diffuses the light (clustering samples in the octree according to solid angle, etc. -- see the paper) and spews out a modified (diffused) version of the original pointcloud map.

As an aside, note that there is a disconnect here between the point generation step ("bake radiance" pass) and the diffusion step ("ptfilter"). This disconnect will bite you in the ass if the baked samples are much further apart than what you tell "ptfilter" to use as a "mean free path length" (same problem as using a sparse pointcloud and then telling the Axyz shader to use a short scattering distance) -- and the only remedy is to go back and redo the radiance baking pass at a higher shading rate; and repeat until satisfied.

But we're not done yet. After ptfilter has done its thing, the resulting pointcloud file has to be turned into a "brick map" (a 3d texture similar to i3d) using another standalone utility called "brickmake". Then you can finally feed this brick map to a shader (which in turn passes it to the texture3d() function) to render your sss.
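In command-line and shader terms, the rest of the chain ends up looking roughly like this (the flag names and the "_ssdiffusion" output channel are from memory -- treat them as approximate and check the ptfilter/brickmake docs):

# diffuse the baked radiance with the dipole (octree clustering under the hood)
ptfilter -filter ssdiffusion -material marble radiance.ptc diffusion.ptc

# convert the diffused point cloud into a brick map
brickmake diffusion.ptc diffusion.bkm

...and then in the beauty shader:

color ssdiff = 0;
texture3d("diffusion.bkm", P, normalize(N), "_ssdiffusion", ssdiff);
Ci += ssdiff;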

Finally, all of the above has to be repeated for every frame in the case of deforming geometry (which may be self-evident to some but not to all).

That somewhat tedious description was mostly for the benefit of non-Renderman users who may hear about this "ptfilter" thing and think it's some kind of magical push-button automated sss solution. It's definitely cool, but far from automated. :)

Last I checked, 3Delight did the same internal processing (that is, some hierarchical storage approach to speed up the dipole calculation), but took a much, much, much more user-friendly approach to the whole thing: You pass all the sss light transport parameters as a single Attribute in the RIB (Attribute "subsurface"..., see the user's manual, starting at page 114), then 3Delight does all the preprocessing (pointcloud-to-diffusion-to-"brickmap") for you. Not only that, but you can also use another RIB Attribute to collect different closed surfaces into different sss groups! This is a huge deal by the way. Imagine a bunch of leaves or petals very close to each other; you want each to have sss, but not to "bleed onto" each other; in 3Delight, you just assign them to different sss groups (Attributes in RIB) and you're done. Then, at the shader level, you use the rayinfo() function to distinguish between normal shading and "radiance baking" mode (the ray type will be "subsurface" during sss preprocessing). That's it. Works as advertised. Very clean. Very nice. Kudos to Aghiles!
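If it helps to see it, the 3Delight version of the whole thing collapses to something like the fragment below. I'm quoting the parameter names from memory (the manual is the authority on the exact spellings, including how the grouping attribute is declared), so treat this as a sketch rather than gospel:

# per-object: 3Delight does the baking and diffusion for you at render time
Attribute "subsurface" "color scattering" [2.19 2.62 3.00]    # marble numbers from Jensen's paper
                       "color absorption" [0.0021 0.0041 0.0071]
                       "float scale" [0.1]
                       "string group" ["petal_07"]            # keeps sss from bleeding across groups

...and in the surface shader, the only special handling is the ray-type check:

string rt = "";
rayinfo("type", rt);
if (rt == "subsurface") {
    /* radiance-baking mode: output just the irradiance you want diffused */
    Ci = diffuse(normalize(N));
    Oi = 1;
} else {
    /* normal beauty shading goes here */
    Ci = Cs * diffuse(normalize(N));
    Oi = Os;
}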

Now, some people reading the above will realize that we already have pretty much all the ingredients in Mantra/Houdini, with the exception of a "ptfilter" standalone. So, SESI could either simply provide the missing link (ptfilter) and leave all the file wrangling and sss grouping up to us, or implement it in a similar way to 3Delight: create a new set of per-object properties to direct the sss preprocessing (or better yet, special attributes at the primitive level to support grouping) and do all the preprocessing for us, with an option to cache to file. The latter would be my choice for an RFE.

Sorry for the long post... <_<


SSS is a little different in PRMan 14+. ptfilter can now compute "partially evaluated" point clouds, which don't have any SSS values computed on the points yet, unlike before. The brickmap and texture3d() portions are also discarded in favor of the new subsurface() shadeop: you pass it your partial point cloud and define all of your values for dmfp, albedo, ior, etc. It's a bit easier to manage now, imho.
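For the curious, the call ends up looking roughly like the lines below -- the parameter names mirror the ptfilter options but I'm going from memory, so check the 14.x docs before copying anything. The numbers are purely illustrative:

color dmfp = color(8.51, 5.57, 3.95);   /* illustrative values only */
color alb  = color(0.83, 0.79, 0.75);
color sss  = subsurface(P, normalize(N),
                        "filename", "partial.ptc",
                        "diffusemeanfreepath", dmfp,
                        "albedo", alb,
                        "ior", 1.3);
Ci += sss;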



Ah. Haven't used PRMan in a while now, so I'm not up to speed on the latest. Thanks for the correction!

I do know that the 3Delight method has been around for quite some time (it was this way a couple of years ago when I first tried it).

Out of curiosity: does Rman 14+ provide some means of grouping sub-sections of a cloud?


Now, some people reading the above will realize that we already have pretty much all the ingredients in Mantra/Houdini, with the exception of a "ptfilter" standalone. [...]

An enterprising person could attempt to code something like this up in the i3d context and so emulate a ptfilter-like workflow, possibly accessing baked illumination from an exported point cloud, an unwrapped image, or a deep shadow map. I'm sure there are some neat optimizations you can do if you know you're sampling on a regular grid instead of a point cloud too -- even just taking advantage of the mip-mapping capability of i3d.

Wouldn't it be nice to just be able to select a "subsurface" filter type when sampling a lightfield? (alongside the "gaussian", "box", etc.) -- e.g. shadowmap("light1.rat", nml, normalbias, densitymult, "filter", "subsurface"). I suppose the deep shadow would have to be able to store a "group" index field too, to support exclusive volume chunks.

Jason

PS. Mario, I think I told you that back at DD we got [an older version of] your SSS shader to support groups for the work on "The Golden Compass" - the ice bridge sequence.


Out of curiosity: does Rman 14+ provide some means of grouping sub-sections of a cloud?

I don't think so. Before your post I assumed 3Delight's SSS groups were used to specify different albedo, etc. for groups of objects, but I didn't realize it considered all of the geo in a group as one single closed object. That's pretty neat stuff.

I've never tried, but I think you'd have to bake out different point clouds for each group of geometry in PRMan. The only grouping method I know of is to assign data to different channels, but in this case it's surface illumination, and I don't think you can give ptfilter or subsurface() two different channels for those calculations.

Pixar provides the source for their sss tools in ptfilter (ssdiffusion.cpp), so something like this could probably be developed.


Wouldn't it be nice to just be able to select a "subsurface" filter type when sampling a lightfield? [...]

Interesting. You mean interpreting light diffusion as a shadowmap "blur"?

Pixar provides the source for their sss tools in ptfilter (ssdiffusion.cpp), so something like this could probably be developed.

:) :whistling:

Hehe... I was wondering when someone was going to mention that.

It was brought to my attention recently and yes indeed, the code for ptfilter (or at least the diffusion and storage parts of it) *is* available for licensed users of PRMan, who could, if of a sufficiently devious bent, and with a few minor tweaks, convert it to a VEX shadeop... but then you'd have to keep it to yourself, because it's not freeware, mmkay?

//
// ssdiffusion.cpp
//
// Copyright (c) 2004-2007 Pixar Animation Studios.
//
// The information in this file is provided for the exclusive use of the
// licensees of Pixar.
// <snip>

... so there you go kensonuken... missing link found! :ph34r:


Do you mean something supported internally in C++? (As opposed to the two VEX solutions, Axyz and Axis.)

Not sure, I'd just go with what works. My biggest gripe is just trying to set up a decent SSS shader out of the box.

I don't want to undermine what has been developed by the guys here. I know I couldn't do it, but why should we have to dig through a thousand pages on the forums to get a good-looking shader?

jason


Guest xionmark
It was brought to my attention recently and yes indeed, the code for ptfilter (or at least the diffusion and storage parts of it) *is* available for licensed users of PRMan [...]

... so there you go kensonuken... missing link found! :ph34r:

Oh man that's tempting ...

