
SIGGRAPH - Neural networks and the future of ray tracing


Guest tar

I call bogus on this - it's a GPU denoiser. There are no denoisers that take minutes per frame:

Quote

Existing algorithms for high-quality denoising consume seconds to minutes per frame, which makes them impractical for interactive applications.

 

5 hours ago, marty said:

I call bogus on this - it's a GPU denoiser. There are no denoisers that take minutes per frame:

 

Well, it depends on what resolution you have and what amount of data loss you allow (blurring).

I think predicting the location of noise and having data sets for image completion (to replace the noise with) is certainly not a bad idea...
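
To make that idea concrete, here is a minimal sketch of the kind of learned denoiser being discussed, assuming a PyTorch-style setup where a small convolutional net sees the noisy render plus auxiliary buffers (normals, albedo) and is trained against a converged high-sample reference. The architecture, buffer choices, and sizes here are placeholders, not what the SIGGRAPH work actually uses:

```python
# Minimal learned-denoiser sketch (placeholder architecture, not the paper's network).
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    def __init__(self, in_channels=9):  # noisy RGB (3) + normals (3) + albedo (3)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),  # predicted clean RGB
        )

    def forward(self, x):
        return self.net(x)

# One training step against a high-sample "ground truth" render (random stand-in data here).
model = TinyDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

noisy = torch.rand(1, 9, 256, 256)       # low-sample render + feature buffers
reference = torch.rand(1, 3, 256, 256)   # converged render

optimizer.zero_grad()
loss = loss_fn(model(noisy), reference)
loss.backward()
optimizer.step()
```

At inference time only a single forward pass per frame is needed, which is what makes this class of denoiser attractive for interactive use.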

There was also a SIGGRAPH demo of real-time fluids based on a similar principle.

Guest tar

The Redshift devs looked at it and say that it helps for maybe the last 5%. It's just not very good at the moment.

Adding that the Neat denoiser works at approx. 4 fps for 2K images IIRC (1080 Ti, 12 cores @ 3.33 GHz), so a render at 5x5 pixel samples would be roughly 25 times slower to denoise, which works out to 6.25 seconds per frame.
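
Spelling out that back-of-envelope math (just the figures quoted above, assuming the denoise cost scales linearly with the 5x5 = 25 pixel samples):

```python
# Back-of-envelope from the numbers above: 4 fps for a 2K frame,
# scaled by 5x5 = 25 pixel samples (assumes linear scaling).
denoise_fps_2k = 4.0
seconds_per_2k_frame = 1.0 / denoise_fps_2k     # 0.25 s per frame
pixel_samples = 5 * 5                           # 25
print(seconds_per_2k_frame * pixel_samples)     # 6.25 s per frame
```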

Edited by tar

