rendering 8 billion unique points



I have a very large, static scientific simulation that I need to render and am having trouble getting mantra to unload the geometry from tiles that are no longer being rendered.  So far I've split up the sim into 8,000 cubes, all as bgeos with bounding box info saved, and brought each sub-slice into mantra using the point instance procedural and "instancefile" parameter, and am rendering using micropolygon rendering.  From a Houdini scene standpoint the point instances are the lightest to deal with.
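The chunking step described above can be sketched in plain Python. This is only an illustration of the binning logic (the 20×20×20 grid giving 8,000 cubes, and the function name, are my assumptions); the actual bgeo export with bounding-box info would go through Houdini's API and is omitted:

```python
# Bin a flat point cloud into a divs^3 grid of cubes so each cube can be
# saved as its own slice with a known bounding box for culling.
from collections import defaultdict

def bin_points(points, bounds_min, bounds_max, divs=20):
    """Return {(i, j, k): [points]} for a divs^3 grid over the given bounds."""
    size = [(hi - lo) / divs for lo, hi in zip(bounds_min, bounds_max)]
    bins = defaultdict(list)
    for p in points:
        # clamp so points exactly on the max bound land in the last cell
        idx = tuple(
            min(divs - 1, int((p[a] - bounds_min[a]) / size[a]))
            for a in range(3)
        )
        bins[idx].append(p)
    return bins
```

Each bin would then be written out as one bgeo, with its cell bounds recorded for the point instance procedural's "instancefile" parameter to pick up.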

 

While I can see that mantra does indeed unload some of the data when it is finished with a region, it seems to hang on to far more than it unloads and eventually my RAM fills up.

 

Is there a way to force mantra to more aggressively unload objects?  There is NO raytracing being used.

 

I can eventually render everything by slicing the scene with camera clipping planes and rendering multiple layers, but the sequence is nearly 2000 frames so every layer significantly multiplies the render time.

 

-Jon

 

EDIT: I'm using Houdini 12.5.  Upgrading to 13 may be a possibility if it is worth it.


The last time I tried (it was probably around Houdini 10), the only way to convince Mantra to deallocate memory was to render with a single thread (in a non-raytrace scenario). Sad, but true. I reported it as a bug, but the developers claimed it was expected behaviour, which, to be honest, I didn't understand.


Interesting. Oddly, single-threaded mode isn't rendering efficiently either.

 

Today I set everything up for prman just to see if it does any better.  For a reduced point count of about 1/4, prman did much better than the equivalent mantra render: 8 GB vs. 24 GB.  The motion blur quality was also a lot better.  I think if I were to bin it into even more chunks I could probably get the full dataset through... if we had enough licenses!  But I suspect the load/unload I/O would overtake any time advantage.

 

Probably the best way would have been to just use/write a simple accumulation buffer style renderer.

 

I do have a working solution at the moment, though: render the near foreground particles at full density, and render the background particles at 1/8 density with the remaining particles scaled up by sqrt(8). The FG layer renders in about 8 GB, and the BG renders in under 24 GB.  Visually it's hard to tell the difference.
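The sqrt(8) factor comes from conserving covered screen area: N particles of radius r cover roughly N·π·r² in aggregate, so keeping a fraction f of them means scaling radii by sqrt(1/f). A quick check of that arithmetic in Python (the function name is mine):

```python
import math

def compensating_pscale(keep_fraction, base_pscale=1.0):
    """pscale multiplier that preserves total covered area N * pi * r^2
    when only keep_fraction of the particles are rendered."""
    return base_pscale * math.sqrt(1.0 / keep_fraction)
```

With keep_fraction = 1/8 this gives sqrt(8) ≈ 2.83, matching the BG layer above: 1/8 as many particles, each covering 8× the area, so total coverage stays constant.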


Hey Jon,

 

This sounds very interesting. I've had to deal with high particle counts as well. Would it be possible for you to share your hip file, or to put together a dummy test file showing how you solved the BG/FG tiling?

 

 

thanks!


Here's an example.  Look in the /img/cop network and follow the references.

 

The basic idea is to set up a take with hi-res geometry and the camera clipped to foreground objects only, then a take with lo-res geometry and the camera clipped to background objects. The lo-res points are scaled up to visually make up for the fewer particles. The two takes are then rendered and comped together.

hires_fg_lores_bg.hipnc


The only rapid loading-and-unloading behavior I've seen in mantra was with the point replicate procedural, which loads the generated points on a per-bucket basis.  It's the only way I know of to render hundreds of millions of points in a single render without killing your RAM.

 

Perhaps you could aggressively decimate your cache based on the camera perspective, so that really dense areas have their 'inner', mostly-invisible points deleted, while the pscale of the remaining inner points is increased.
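A distance-based variant of that idea can be sketched in pure Python. The attribute layout, falloff curve, and function name are all assumptions for illustration; in production this would be a SOP or wrangle run over the cached bgeo slices:

```python
# Stochastic decimation: the further a point is from the camera, the more
# likely it is dropped, with pscale boosted on survivors to compensate for
# the lost coverage. A per-point hash stands in for random() so the cull is
# stable from frame to frame.
import math

def decimate(points, cam_pos, near, far, min_keep=0.1):
    """points: list of dicts with 'P' (x, y, z) and 'pscale'. Returns survivors."""
    out = []
    for p in points:
        d = math.dist(p['P'], cam_pos)
        t = min(1.0, max(0.0, (d - near) / (far - near)))
        keep = 1.0 - (1.0 - min_keep) * t      # 1.0 at near, min_keep at far
        h = (hash(p['P']) & 0xFFFF) / 65536.0  # stable hash in [0, 1)
        if h < keep:
            q = dict(p)
            q['pscale'] = p['pscale'] * math.sqrt(1.0 / keep)
            out.append(q)
    return out
```

Points inside the near distance are always kept at their original pscale, so the foreground stays untouched while distant clusters thin out.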


Nice, I actually thought about trying that (loading them from disk using ptreplicate) but didn't expect it to save any memory.  Might give it another shot on Monday.

 

EDIT: It looks like there's an upper limit on the number of ptreplicate points!  After roughly 100,000 per source point, I don't get any more.  Is this a bug?


Looks like Mantra does actually unload geometry under favorable conditions. The trick is to force the "no raytrace" path with the -Qr flag. Otherwise, even in micropolygon rendering with no explicit raytrace materials or shadows, Mantra seems to be silently using raytracing, which would explain why the no-raytrace flag breaks the default materials.


Where can I find the -Qr flag in the manual?  I see the "Rendering as part of a workflow" section with its part on mantra on the command line, but this option isn't specifically listed there.


It's printed by mantra -h . It's part of the old-school render quality control via the command line. I haven't finished my tests, but it seems that Mantra doesn't even call the illuminance() loop with that flag (or I did something wrong while quickly wiring up my own shader).


  • 1 month later...

In my case the particles were shaded with a constant material, so I was able to divide them into layers using the camera's clipping planes and render each layer with a different take.  The FG layer used a full-LOD representation, while each layer further back used an LOD with fewer particles.  In the end I had three layers, and each layer needed less than 16 GB of RAM to render:

FG - 100% LOD, pscale × 1

MG - 25% LOD, pscale × sqrt(4) = 2

BG - 12.5% LOD, pscale × sqrt(8) ≈ 2.83

 

This was in addition to the dataset being sliced into sub-cubes.

 

The preprocessing was done with a command-line Python script, so I didn't need to load every particle into RAM at once.
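The streaming idea behind such a script can be sketched as a generator that classifies particles into the FG/MG/BG layers by camera distance and decimates each layer to its LOD fraction, without ever holding the whole dataset in memory. The distance thresholds and function names here are made up for illustration; the real script would read and write the bgeo slices:

```python
import math

LAYERS = [          # (max_distance, keep_fraction) -- thresholds are illustrative
    (100.0, 1.0),   # FG: full LOD
    (400.0, 0.25),  # MG: 25% LOD, pscale * sqrt(4)
    (1e30, 0.125),  # BG: 12.5% LOD, pscale * sqrt(8)
]

def classify(stream, cam_pos):
    """Yield (layer_index, point, pscale_mult) for each surviving point.

    stream yields (x, y, z) tuples one at a time, so memory use stays flat.
    """
    counters = [0] * len(LAYERS)
    for p in stream:
        d = math.dist(p, cam_pos)
        for i, (max_d, keep) in enumerate(LAYERS):
            if d <= max_d:
                counters[i] += 1
                # simple stratified decimation: keep every (1/keep)-th point
                if counters[i] % round(1.0 / keep) == 0:
                    yield i, p, math.sqrt(1.0 / keep)
                break
```

Because it is a generator over a stream, this composes naturally with a file reader and per-layer writers, matching the "never load everything at once" constraint above.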


Nice stuff, and thanks for the after action report. Can you show the result?

 

I'm probably not allowed to share it directly, but I can reveal the source of the data:

http://www.mpa-garching.mpg.de/mpa/research/current_research/hl2011-9/hl2011-9-en.html

 

The project I worked on was The American Museum of Natural History's Dark Universe:

http://www.amnh.org/exhibitions/space-show/new-space-show-dark-universe

