
Occlusion Caching



Sorry in advance: this is another one of those "need a bigger box" ideas.

Occlusion: looks good, takes a long time to render. In my case, for our big models, too long.

Point clouds come about as close to a solution as I think I'm going to get, so starting with them...

What would help is if, after doing the shader calculation, you could permanently save the resulting data on the cloud back to disk.

I have a feeling that in another thread someone may have said this is possible in PRMan; is that true?

Is it possible to hack this in Mantra, or can it be added using the HDK?

Does anyone (probably Mario :P ) know if, in the SOP context of the HDK, you can get a point to "see" all visible objects without having to object-merge them all first? This is the only way, outside of doing a render, that I can think of to do this. It would kind of be doing a render, but in SOPs... just so I could then save the result.

The reason this would massively help is this: I would like to be able to store a kind of deep raster of occlusion data for each occlusion sample direction. Here's why.

I have my scene broken up into groups of objects. These groups may or may not appear in any given view, but at some point they will all appear. So rather than doing the normal occlusion calculation, which stops as soon as it hits something, I would turn on all the objects and do an occlusion pass that marches along the ray direction, recording a chunk of data per group of objects. This chunk would store whether any object in that group occluded the point. All this data would be saved to disk. Then, given a scene with a subset of all the groups of objects in it, I could integrate through the data chunks, looking only at the ones that include the current visible set of objects. I think this would be quicker than just redoing the occlusion calc. Not 100% sure, but I can't see why not.
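
In wrangle-style VEX (purely illustrative: the attribute names, the channel, and the bitmask encoding are all made up for the sketch), the integration step I have in mind would look something like this, assuming the bake pass stored one bitmask per sample direction, with bit g set if group g occluded that direction:

    // Combine cached per-group occlusion chunks, consulting only the
    // groups actually present in the current scene.
    int chunks[] = point(0, "chunks", @ptnum); // one bitmask per direction
    int visible  = chi("visible_groups");      // bitmask of groups in this scene

    int occluded = 0;
    foreach (int mask; chunks)
    {
        // A direction counts as occluded if any *visible* group blocked it.
        if (mask & visible)
            occluded++;
    }
    f@occlusion = float(occluded) / max(len(chunks), 1);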

In other words, I would only have to do the occlusion calculation once, albeit taking longer that first time. Furthermore, because the model is static and the camera moves are fixed too, I could pre-cull all the points that were never going to contribute to the calculation.

Any thoughts or suggestions? And if anyone from Sesi is listening, any comments on caching point clouds... H8.5???


Hi Simon,

What would help is if, after doing the shader calculation, you could permanently save the resulting data on the cloud back to disk.

I have a feeling that in another thread someone may have said this is possible in PRMan; is that true?

Yes. In PRMan you can use the bake3d() function to write arbitrary attributes (token-value pairs) to a "pointcloud" file -- this pointcloud file would be equivalent to a .bgeo file containing points with attributes. The destination file can be a shader parameter so you could write each object's self occlusion to a separate pointcloud file for example.
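
For comparison, later Houdini builds have a rough VEX analogue in pcwrite(). A minimal bake-shader sketch (the file path and the "occ" channel name are illustrative, just like PRMan's token-value pairs):

    surface bake_occlusion(string pcfile = "occl.pc")
    {
        vector nn = normalize(frontface(N, I));
        float occ = 0; // stand-in; you'd compute this with occlusion() here
        // One point per shade point, with whatever channels you like.
        pcwrite(pcfile, "P", P, "N", nn, "occ", occ);
        Cf = set(occ, occ, occ);
    }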

This pointcloud file (much like a .bgeo file) isn't very memory efficient, so after the bake pass, you'd normally convert it to what they call a "brick map" file, whose equivalent in Mantra would be a "tiled block format" (.tbf) file.

Subsequent render passes would then read these brick map caches to fetch occlusion, or other data, like incoming radiance, irradiance, or whatever you decided to store there.
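
The lookup side of that, reading the raw pointcloud directly rather than a converted .tbf, might look roughly like this in VEX (channel names and filter settings are illustrative, matching the bake sketch above):

    surface read_occlusion(string pcfile = "occl.pc";
                           float radius = 0.25;
                           int maxpts = 8)
    {
        int handle = pcopen(pcfile, "P", P, radius, maxpts);
        float occ = pcfilter(handle, "occ"); // filtered average of the baked channel
        pcclose(handle);
        Cf = set(1 - occ, 1 - occ, 1 - occ); // shade with accessibility
    }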

Of course, this whole caching machinery only really gives you the most benefit with static objects, though parts of it can be reused for objects that undergo rigid transformations only (see app note #35 if you have access to it).

Is it possible to hack this in Mantra, or can it be added using the HDK?

I could be wrong, but I think your only option at render time would be to implement the caching via a VEX DSO -- the file name would be your unique instance ID (handle), and you'd call it once per shade point to store (into some runtime structure) your custom attributes (not unlike PRMan's bake3d() function). The cleanup function would then save the data to the file (maybe as a bgeo file if you bring the HDK into the picture). Alternatively, you can probably come up with a way to stream it directly (but buffered) to the file as the render takes place (instead of all at once at the end) to save on RAM. If the format you save is .bgeo you could then convert it to .tbf and proceed as usual. If you use your own format, well, then you're on your own ;)
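
On the shader side, calling such a DSO might look like the sketch below; my_bake3d() is entirely hypothetical, standing in for the custom function described above:

    surface bake_via_dso(string cachefile = "occl_cache.bgeo")
    {
        // my_bake3d() does not exist: the DSO's init callback would set up
        // a runtime structure keyed on the file name, each call would
        // buffer one shade point's data, and the cleanup callback would
        // flush the lot to disk.
        float occ = 0; // computed self-occlusion would go here
        my_bake3d(cachefile, P, normalize(frontface(N, I)), occ);
    }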

Does anyone (probably Mario :P ) know if, in the SOP context of the HDK, you can get a point to "see" all visible objects without having to object-merge them all first? This is the only way, outside of doing a render, that I can think of to do this. It would kind of be doing a render, but in SOPs... just so I could then save the result.

I suppose you could build a SOP that takes a varying number of object references (like an Object Merge SOP) and traces against them. I believe the tracing facilities in Mantra are more sophisticated than what you can use in a SOP (like the Ray SOP), but that's just a guess -- you'd need to run some tests to see if it's worth the trouble.
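
A rough sketch of the SOP-side idea in VEX, run per point (the op: path, the sample scheme, and the newer convenience functions are all illustrative):

    int nsamples = chi("samples");
    string occluders = "op:/obj/occluders"; // merged occluder geometry
    float hits = 0;
    for (int i = 0; i < nsamples; i++)
    {
        vector2 u = rand(@ptnum * nsamples + i);
        vector dir = sample_direction_uniform(u);
        if (dot(dir, @N) < 0) // keep samples in the normal's hemisphere
            dir = -dir;
        vector hitP, hitUV;
        if (intersect(occluders, @P + 1e-3 * dir, dir * 1e6, hitP, hitUV) >= 0)
            hits++;
    }
    f@occlusion = hits / nsamples;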

Then, given a scene with a subset of all the groups of objects in it, I could integrate through the data chunks, looking only at the ones that include the current visible set of objects. I think this would be quicker than just redoing the occlusion calc. Not 100% sure, but I can't see why not.

Whether you go with the occlusion() function or somehow manufacture your own gather-type loop with rayhittest(), you'd still need to query the hit objects for either their name or the "occlusion group" they belong to, and I don't think that would be possible, even considering export variables... hmmm.

So... I think you'd have to orchestrate all the combinations using the "scope" parameter (available to most, if not all, of these tracing functions), outputting to multiple layers which you'd then need to combine later. Assuming you approach it as a render solution, that is... if you go the SOP way, well, you might actually have a similar problem... not sure.
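
A sketch of what that orchestration might look like, assuming occlusion() accepts the "scope" keyword the way the other tracing functions do (the group patterns and export names are made up):

    surface group_occlusion(string scopeA = "groupA_*";
                            string scopeB = "groupB_*";
                            export float occA = 0;
                            export float occB = 0)
    {
        vector nn = normalize(frontface(N, I));
        // One evaluation per object group, each restricted via "scope"
        // and landing in its own export plane, to be combined per shot.
        occA = luminance(occlusion(P, nn, "scope", scopeA));
        occB = luminance(occlusion(P, nn, "scope", scopeB));
        Cf = set(1 - occA, 1 - occB, 0); // debug visualisation only
    }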

I'd explore this whole side of the problem first, since I think it has more potential for turning into a dead end than the whole caching issue...

Cheers!


Thanks Mario. Since it's in PRMan already, I can either roll my own or hope it turns up in Mantra soon. I'm guessing you're pressuring Mark on this? I'll add my voice too. ;)

Since 80% of what we do is very heavy but static models, it sure would be worth some effort. I've done some tests, and the renders look 100% better with some occlusion thrown in; it really brings them alive (excuse the pun). But I need it to be quick, as in not much slower than computing shadows, and that is asking a lot.


Hey Simon,

Just wondering if you've had any particular luck with the caching available in Mantra? I've found it to be pretty fast these days.

That, and if your model is static, unwrapped occlusion? That's just a simple texture() call if you spend a little time up front making sure you have a nice high-quality unwrap. In many cases this works muuuch better (less artifacting, smaller files, image-processible) than any 3D data structure.
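
The lookup really is that simple; something like this, with the map name being illustrative:

    surface unwrapped_occlusion(string occmap = "occl_unwrapped.rat")
    {
        // s/t are the surface's texture coordinates; with a clean unwrap
        // they address the baked map directly.
        float occ = luminance(texture(occmap, s, t));
        Cf = set(1 - occ, 1 - occ, 1 - occ);
    }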

But I will 100% back up being able to query/write arbitrary 3D data structures from VEX.

Wiki info on Unwrapping, and particularly on the unpremultiplying trick on fp images.


To be honest I haven't had time to explore this fully, but just thinking about it has put me off the texture map approach.

We have 6,500 objects in our model (and increasing); that's object names, and I don't even know how many actual primitives that means. Just doing the setup on that lot, let alone having to load all the maps to render it, makes my head spin. There are only four of us, and we have zero time to actually spend on doing this.

As for point clouds, sure, they can help, and I really need to do a test to see how much. It is the approach I think I'll take, but I was just musing on how, having done it once, I might be able to really speed things up for all the other renders. Do you have any idea how well Mantra handles the memory implications of this, i.e. if I have 6,500 point clouds, does it need them all in memory at once? That's my biggest worry really. I need to find some time to properly test this out. I should be able to do it on the next project; I'm just trying to figure out what the best options are at the moment, so as not to waste time on it.


We've made an HDA (Object level) which we put into every object; it becomes responsible for generating the unwrapped occlusion maps for its contents. Another mode of its operation (which sounds like it might be right for you) is that it can generate the occlusion for its parent object, writing those maps to a preset location based on the object name. A third mode of operation is having a single node that takes a bundle as its argument and loops across the bundle, generating a map for every object.

There are benefits to each mode. The Object version will handle "Read From File", performing the UVUnwrap, the unpremult (in a COPnet), _and_ applying the resultant shader.

Regenerating maps can be as simple as selecting a bunch of these Objects and clicking the "Generate AO Maps" button.


OK, this is all good, but for each object I need a map that relates to what other objects are in the scene. Although the model is static, the combination of objects that makes up the scene is infinitely variable; I just think managing all this is easier when it's done directly by the renderer.

That's not to say I can't get the unwrapping method to be automated this way; it just "feels" heavy. We'll see. In my mind the problem comes when either the model changes or the scene does: running through all the maps that need changing would be a massive render in itself. Now, if you could do a mantra -u at the same time as the normal mantra, then it wouldn't be such an issue. That's why at the moment I'm leaning towards point clouds.

One way or another I'll get it going, having seen the results I'm keen to find a way forward.

Also, when you do a mantra -u, you have to render loads of parts of the model which are never seen. This might not be a problem in the case of a model of a ship; probably, at some point in some shot, you will see most of the surface. But when your whole scene is made up of hundreds of overlapping shapes, probably less than 50% of it is ever visible, and the more stuff you turn on, the worse the problem gets: you see less and less of the surfaces, and the occlusion render time goes up and up. Not a good situation.


Did a few tests on this today and found that I can tweak pure raytraced occlusion to the point where it is damn near as fast as using point clouds. The main reason seems to be that point clouds spend a lot of time reading off disk, the result being that the CPU never gets much over 30% use during a render. Do you guys find that to be the case too? Is there any way around it: some secret caching setting that means the point clouds stay in memory longer, anything like that?

