Boosting Monte Carlo Rendering by Ray Histogram Fusion

Maybe it's possible to use the DCM files with the "Pre-Composite Samples" flag enabled. That way we may have all the information needed to run the method as a post-process.

I will try this as soon as possible.

 

Cheers,

nap

Link to the PDF: http://dev.ipol.im/~mdelbra/rhf/boosting_MC_RHF_09292013.pdf

 

Unless I'm mistaken, this doesn't seem to require any data other than the final color and the histograms. The paper suggests computing the histograms at render time, but I'm not sure why.

 

"This requires two kinds of data from the rendering system: the noisy Monte Carlo image ũ(x) and the associated sample color histograms h(x)."

It would be interesting if one could add more criteria for selecting the pixels to use: say, N dot N_neighbour < 0.5 and length(P - P_neighbour) < 0.5... stuff like that.
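For what it's worth, the pixel-selection criterion in the paper is itself of this flavour: pixels (or patches) are merged when their sample histograms are close under a chi-square distance. Here's a toy, single-scale, single-pixel sketch of that idea; the real method is patch-based and multi-scale, and the threshold `tau` and neighbourhood radius below are illustrative, not the paper's values.

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-8):
    """Symmetric chi-square distance between two histograms."""
    return float((((h1 - h2) ** 2) / (h1 + h2 + eps)).sum())

def rhf_pixel(hist, img, x, y, radius=5, tau=1.0):
    """Average pixel (x, y) with neighbours whose sample histograms
    are chi-square-similar.  hist: (H, W, B) per-pixel histograms,
    img: (H, W, 3) noisy render.  tau is a hypothetical merging
    threshold; (x, y) itself always passes (distance 0), so n >= 1."""
    H, W, _ = img.shape
    acc = np.zeros(3)
    n = 0
    for j in range(max(0, y - radius), min(H, y + radius + 1)):
        for i in range(max(0, x - radius), min(W, x + radius + 1)):
            if chi2_distance(hist[y, x], hist[j, i]) <= tau:
                acc += img[j, i]
                n += 1
    return acc / n
```

Extra geometric criteria like the normal and position tests above could simply be added as further conditions inside the inner loop.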
Edited by Serg


> It would be interesting if one could add more criteria by which it selects the pixels to use, like say, N dot Nneighbour < 0.5, and length(P-Pneighbours) < 0.5... stuff like that.

 

 

That sort of "fat sample" approach is described in the previous-work section of the paper; the authors seem to think it isn't needed here.

 

With Houdini, one wouldn't even need to use deep data; rendering with Sub-Pixel Output enabled would get you the colors of all the samples, from which you could calculate the histograms. As said above, now we just need someone to implement it ;)

 

I'm a bit worried that their test scenes are mostly untextured; I wonder how well the approach will work with high-frequency texture and normal maps.
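Computing the histograms from the subpixel colors as a post-process could be sketched like this. The bin count, range, and the clamping of HDR values into the last bucket are assumptions for illustration, not the paper's exact binning (the paper uses about 60 buckets per pixel):

```python
import numpy as np

def sample_histograms(samples, n_bins=20, max_val=2.5):
    """Per-pixel, per-channel histograms of raw sample colors.

    samples: float array of shape (H, W, S, 3) -- S color samples
    per pixel, e.g. read from a Sub-Pixel Output render.
    Returns an (H, W, 3, n_bins) array of bucket counts.  Values
    above max_val land in the last bucket (a guess at how one might
    handle unbounded HDR radiance)."""
    H, W, S, C = samples.shape
    clipped = np.clip(samples, 0.0, max_val)
    bins = np.linspace(0.0, max_val, n_bins + 1)
    # Bucket index for every sample, then count per pixel/channel.
    idx = np.minimum(np.digitize(clipped, bins) - 1, n_bins - 1)
    hist = np.zeros((H, W, C, n_bins), dtype=np.int32)
    for b in range(n_bins):
        hist[..., b] = (idx == b).sum(axis=2)
    return hist
```

The noisy image ũ(x) the paper also needs is then just `samples.mean(axis=2)`.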


> That sort of "fat sample" approaches are described in the previous work section of the paper - they seem to think that it is not needed in this approach.

> With Houdini, one would not even need to use deep data, rendering with Sub-Pixel Output enabled would get you the colors of all the samples, from where you could calculate the histograms. As said above, now we just need someone to implement it ;)

> I'm a bit worried that their test scenes are mostly untextured, I wonder how well the approach will work with high-frequency texture and normal maps..

Yeah, I thought the same about the textures; there's only one tiny textured leaf pic, though it's still surprisingly good.

 

What we normally do when noise reduction (NR) is called for is decompose the render using the passes, dividing the direct/indirect diffuse by the raw color pass to take the texture out, then run NR on each pass individually as needed. You can get away with murder this way :D
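That pass-decomposition trick can be sketched in a few lines: divide the diffuse pass by the albedo/raw-color pass to get a texture-free lighting signal, denoise that, then multiply the texture back in. A minimal numpy sketch, with any image-to-image denoiser plugged in as a function:

```python
import numpy as np

def denoise_via_albedo_division(diffuse, albedo, denoise, eps=1e-4):
    """Remove texture detail before denoising, then restore it.

    diffuse: (H, W, 3) noisy textured diffuse pass.
    albedo:  (H, W, 3) raw color / texture pass.
    denoise: any (H, W, 3) -> (H, W, 3) noise-reduction function.

    Dividing out the albedo leaves a smooth lighting signal, so a
    generic denoiser can't blur away texture detail; eps guards
    against division by zero in black albedo regions."""
    lighting = diffuse / np.maximum(albedo, eps)   # texture-free pass
    return denoise(lighting) * albedo              # re-texture
```

In a real comp this would be done per pass (direct and indirect diffuse separately) and the results summed back into the beauty.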

We use NeatVideo under Fusion... the frame interpolator is pretty awesome, great for particle-based smoke/dust/sand stuff ;)

 

I'm not too keen on subpixel output though... HUGE EXR files... is it really necessary? I didn't see any mention in the PDF about resampling the renders. Then again it's super verbose, so I might have skimmed :)


Bypassing the need for subpixel output is exactly why I think they suggest computing the histograms at render time - one only seems to need the subpixels in order to calculate the histograms afterwards.

I don't think we can plug into that part of the rendering pipeline in mantra (or can we?), that's why I thought it would be easiest to just output the subpixels.

 

Of course histograms also take some memory, but with larger sample counts it would be a clear win. They used 60 buckets per pixel, and 8 bits per bucket might be enough, so the break-even point against 16-bit color channels would be at 10 samples per pixel.
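The break-even arithmetic can be made explicit (assuming RGB samples stored at half-float precision, as suggested above):

```python
# Histogram storage vs. raw subpixel storage, per pixel.
buckets_per_pixel = 60
bytes_per_bucket = 1                  # 8-bit bucket counts
histogram_bytes = buckets_per_pixel * bytes_per_bucket   # 60 B/pixel

channels = 3
bytes_per_channel = 2                 # 16-bit (half-float) color
sample_bytes = channels * bytes_per_channel              # 6 B/sample

# Above this sample count, storing histograms beats storing samples.
break_even_spp = histogram_bytes / sample_bytes
print(break_even_spp)                 # 10.0 samples per pixel
```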


> I don't think we can plug into that part of the rendering pipeline in mantra (or can we?), that's why I thought it would be easiest to just output the subpixels.

 

I don't think we can; I recall asking SideFX about something similar... for the purpose of outputting a color/texture pass that, when multiplied by an untextured diffuse pass, produces a 100% match with the equivalent beauty render along AA and motion-blurred edges of objects passing in front of each other... It's not possible to get this straight out of the renderer.


Just to keep this thread up to date: this is now implemented in Houdini 14.

 

http://www.sidefx.com/docs/houdini14.0/props/mantra#vm_pfilter

 

It's not in the dropdown, but if you set the pixel filter to 'combine -t <threshold>', where threshold is the merging threshold, it does RHF.
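For reference, setting this from Python in a Houdini session might look like the following; the ROP path `/out/mantra1` and the threshold value are hypothetical, while `vm_pfilter` is the property name from the docs linked above.

```python
# Inside a Houdini session; /out/mantra1 is a hypothetical mantra ROP.
import hou

rop = hou.node("/out/mantra1")
rop.parm("vm_pfilter").set("combine -t 20")  # RHF, merge threshold 20
```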

 

Experimenting with it now....

Edited by mestela


 

> it's not in the dropdown, but if you set the pixel filter to 'combine -t <threshold>' where threshold is the merging threshold, it does RHF.

 

 

It's available in the pull-down as 'Ray Histogram Fusion'


Huh, interesting. It's not in our work build; now that I think about it, a few other H14 defaults don't match up with Apprentice at home. Someone mentioned we might have some scripts floating around that push H13 parameter layouts where they shouldn't; this sounds like one of them.

 

Thanks for the heads up!


> It's available in the pull-down as 'Ray Histogram Fusion'

Excuse me for poking my nose into a thread that's way over my head, but you tend to be a nice guy and answer silly questions, so I decided to ask anyway: what are the benefits of using this particular filtering algorithm? I did some tests with a very simple scene, and while the image does seem slightly less noisy when zoomed in than with the 2x2 Gaussian, it also seems a tiny bit slower - in contrast to the effect I assumed, i.e. "This paper proposes a new multi-scale filter accelerating Monte Carlo renderers".

 

The difference also wasn't nearly as dramatic as the one in the video. Barely noticeable, as a matter of fact :(


There are some tests in the bowels of this thread http://forums.odforce.net/topic/22080-suppress-small-artefact-in-pbr-render/

I found the effect to be almost too heavy; adjusting the threshold value matters, of course.

 

The acceleration comes from not needing to use as many samples to get a noise-free image.


You weren't using the render view window with 'preview' enabled by any chance? It looks like it only works if preview is disabled.

 

My quick tests on a simple scene with the default settings (combine -t 20) show a big reduction in noise, too much in fact: blotchy in some areas, banding in others.

 

Trying to get the adaptive sampler to a similar level of noise is much, much slower, however, and still pretty noisy by comparison. I'll see if I can create a little example set of renders.


 

> The difference too was not nearly as dramatic as the one on the video. Barely noticeable as a matter of fact :(

 

Yup, as the others said. You've stumbled across the anomaly that strikes a lot of papers: they pick and show an example that works! Never the examples that don't :)


Very late reply, but I had this thread open while looking into RHF sampling. I have some additional info from Sesi which I'm going to share, just in case someone else stumbles upon this thread like I did. :)

 

> Hi Support,
>
> Rendering with the combine-20 (histogram) pixel filter seems to be very
> slow even on black / empty space. Is this a bug or feature?
 

Hello Paul,

Our developers tell me that this is a known issue with the "Ray Histogram Fusion" filtering approach. The original paper only presented results with images up to 1280x720 and a single image plane, where all samples can easily fit into memory. However, with a 3840x2160-pixel image, with 8x8 pixel samples, that's 530 million samples, and with 16 image planes that are each 4 components, that'd be over 126 GB (3840*2160*8*8*16*16 bytes) to keep in memory, and the results couldn't be shown until the render was completely finished. With more image planes, more pixel samples, or higher resolution, it'd be even more memory.

Since it's not feasible to reserve that much memory just for keeping sample data around, we have to filter one tile at a time. Because RHF with its default settings can have an output tile depend on rendered sample tiles a few tiles away, even that can become a lot of sample data to keep in memory until it's no longer needed by any output tiles, so it may be explicitly sent out to disk if the sample data cache limit is reached. However, the biggest issue performance-wise is that, because the filtering is done one tile at a time, and each output tile depends on a lot of sample tiles, there's a lot of computation that has to be re-done for every output tile that would have only needed to be done once if the whole image were filtered at the same time.

This isn't a significant issue for the default Gaussian filter, because a pixel can only depend on samples that are one pixel away (both horizontally and vertically), so there's very little overlap in computation, whereas a pixel in the default RHF filter can depend on samples that are 40 pixels away. The performance of both filters is independent of the content of the tiles, apart from the number of image planes, pixel samples, and image resolution, so for more expensive renders, the filter time shouldn't be as significant, but RHF will require that *much* more of the image has been rendered before the first output tile can be displayed.

The other significant issue with the RHF filter is that it will eliminate some noise in an image even if it's supposed to be there, e.g. from a texture map or displacement shading, not just noise that is due to undersampling. For example, if you render the mantra_Combine node in the scene submitted with this bug, you can see that a lot of the detail of the mandril is removed, whereas the Gaussian filter keeps the detail. Most of the example renders presented in the original paper either didn't have noisy surfaces or had noise that was well above the threshold value they chose, so this was only slightly visible in a few of their examples.
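The figures in the support reply above are easy to sanity-check. A small sketch reproducing the sample-memory estimate, plus a rough measure of the tile-filtering redundancy (the 16-pixel square tile size is an assumption; the 40- and 1-pixel dependency radii are from the reply):

```python
# Reproduce the sample-memory estimate from the reply above.
width, height = 3840, 2160
samples_per_pixel = 8 * 8            # 8x8 pixel samples
planes = 16                          # image planes (AOVs)
bytes_per_plane_sample = 4 * 4       # 4 components x 4-byte float

total_samples = width * height * samples_per_pixel
total_bytes = total_samples * planes * bytes_per_plane_sample
print(total_samples)                 # ~530 million samples
print(total_bytes / 2**30)           # ~126.6 GiB

# How much input area each output tile needs, relative to its own
# area, for a given filter dependency radius.  The gap between the
# two ratios is the re-done work the reply describes.
def footprint_ratio(radius, tile=16):
    side = tile + 2 * radius
    return side * side / (tile * tile)

print(footprint_ratio(40))           # RHF: 36x the tile's own area
print(footprint_ratio(1))            # Gaussian: ~1.3x
```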

 

