Irradiance caching


HACK ALERT!!  :D

 

In reply to comments on the BSDF bonanza thread.

 

You can use a very, very simple hack to trick the path tracer into using an irradiance cache, of sorts...

 

In the past I've done a lot of point cloud baking for this purpose, in combination with a lot of other `indirect light` VOP hackery, in order to do stuff like the GI light's distance threshold... which works, but because the indirect light VOP is deprecated, those hacks made rendering much slower whenever more than one bounce was needed and the irradiance cache wasn't in use... and sometimes the pcwriting would randomly skip objects for reasons I never got to the bottom of.

 

In this case I'm using ptex baking (H15), but I suppose it could be anything... Since the GGX thread post was made I've had what currently seems to be a much better/simpler idea than how I did it before, without any real modification to the path tracer. Basically the hack is: plug the ptex bake into Ce and zero F on indirect rays (not the pbrlighting input)... despite F being zeroed for indirect rays, Ce is magically picked up by, erm, indirect rays... :)
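For anyone who'd rather see it as code than VOPs, the whole thing boils down to something like the VEX below. This is only a sketch of how I read the hack, with a made-up cache path and a plain diffuse() standing in for the real shader:

```
// Rough VEX equivalent of the hack, assuming the irradiance has already been
// baked to a texture/ptex file that texture() can read. On camera rays it
// shades normally; on indirect rays it pushes the baked result out through
// Ce and zeroes F, and the path tracer still picks Ce up.
surface irr_cache_hack(string cachemap  = "irrcache.rat";   // placeholder path
                       vector basecolor = {0.7, 0.7, 0.7})
{
    bsdf diff = basecolor * diffuse(normalize(N));

    if (getraylevel() > 0)              // hit by an indirect ray
    {
        Ce = texture(cachemap, s, t);   // baked irradiance as emission
        F  = 0.0 * diff;                // kill further bounces off this hit
    }
    else                                // camera ray: business as usual
    {
        F = diff;
    }
}
```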

 

But of course there are lots of practical implications and associated further hackery...

  • Need to wedge-bake the entire scene (though it could maybe be selective, like just caching the walls)
  • Auto-filling in the shaders' cache file path
  • Just discovered baking doesn't work on packed geo!!  :(
  • Don't want to be killing indirect reflection beyond the first bounce. This leads to needing pre-multiplied separate F components until they arrive in compute lighting, which in turn means making your own shader struct and all the layer blending tools that go with it. OR (I really really hope someone knows how to do this and shares it here), make an is_diffuse/reflection/refraction/etc ray VOP (see the sketch just after this list).
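Keeping the components apart might look roughly like this in VEX. Again just a sketch, with toy diffuse/specular lobes rather than the real layered shader: only the diffuse part gets swapped for the cache on indirect rays, while the specular part stays live in F so reflections keep tracing past the first bounce.

```
// Sketch of the "separate F components" idea: diffuse is replaced by the
// cache on indirect rays, specular is left in F so reflection rays survive.
// diff_clr/spec_clr and the simple lobes are placeholders.
surface irr_cache_split(string cachemap = "irrcache.rat";
                        vector diff_clr = {0.7, 0.7, 0.7};
                        vector spec_clr = {0.05, 0.05, 0.05})
{
    vector nn   = normalize(N);
    bsdf   diff = diff_clr * diffuse(nn);
    bsdf   spec = spec_clr * specular(reflect(normalize(I), nn));

    if (getraylevel() > 0)
    {
        Ce = texture(cachemap, s, t);   // diffuse bounce comes from the cache
        F  = spec;                      // reflection is left alive
    }
    else
    {
        F = diff + spec;
    }
}
```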

I have a hunch that the best way to do irr caching in general might be to voxelize the scene... not only because it would provide a cache for shading as we are doing here, but also because we (meaning SideFX or other brainy people) could then start looking at things like cone tracing (like NVIDIA is doing for realtime GI). But the best thing would be (really dreaming here) that it would remove geometry complexity from the problem of raytracing... Basically the voxels would become a geo LOD, so if the cone angle is big and the distance is greater than, say, 2 or 3 voxels, then it would do all that expensive ray intersection stuff against the level set instead... forests, displacements, a bazillion polys, all reduced down to voxels.

I think this might work because I've been doing this in essence for years, but limited to hair... by hiding hair from all ray scopes and using a volume representation to attenuate/shadow the direct/indirect light reaching the hair (nice soft SSS-looking results). But! I hear some say it will be blurry etc. because the volume lacks definition, so there is also a simple illuminance loop to trace `near` shadows and add some texture/definition back in... It's fast, even compared to today's much faster hair rendering in Mantra, and arguably better looking (the SSS/attenuation effect, especially if the volume shader casts colour-attenuated shadows), but there is the hassle of generating the volumes, even if that's automated as much as it can be without SESI intervention.

 

1m11s for the cached render, 2m26s for regular. This is using the principled shader. Btw, a test on this simple scene suggests the GI light works (here!), but it is way, way brighter than brute-force PBR, and yeah, I also had grief with the GI light sometimes not writing the actual file...


irrcache_v003.hip


More craziness.

 

This time I'm keeping the F components separate so I'm not terminating reflection rays (you'll see what I mean if you look at the hip).

I also set the reflect limit to 10 (previously 1) and rough to zero (to see reflect bounces clearly), and I turned Adaptive sampling Off because it makes this scene more noisy rather than less.

 

Interesting results... in that the beauty render time difference is bigger than before, at 6.4x faster than brute-force PBR, with better noise quality than even photon caching. Photons take a hell of a lot less time to cache than any other way I can think of to bake lighting.

 

Ptex baking looks not to be viable at all for high polygon count objects... it takes an age on something like the Happy Buddha scan (640k polys)... pcwrite from an offset camera would be good, but it's currently saying no to baking anything coming out of pbrlighting.
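If/when pcwrite does cooperate, a bake pass plus lookup could be as dumb as the sketch below. irradiance() is only a stand-in for however the indirect term really gets computed in the bake pass, and the file name, radius and point count are made up:

```
// Matching lookup, usable in place of the texture() calls in the sketches
// above: filtered read of the "irr" channel around a position.
vector lookup_irr(string cachefile; vector pos; float radius)
{
    int    handle = pcopen(cachefile, "P", pos, radius, 16);
    vector irr    = pcfilter(handle, "irr");
    return irr;
}

// Hypothetical point-cloud bake: write position, normal and an irradiance
// estimate per shaded point.
surface irr_bake_pc(string cachefile = "irrcache.pc")
{
    vector nn  = normalize(frontface(N, I));
    vector irr = irradiance(P, nn);
    pcwrite(cachefile, "P", P, "N", nn, "irr", irr);
    F = diffuse(nn);                    // keep the bake pass renderable
}
```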

 

The photon cache is a very practical solution, but it looks much brighter. It would seem either PBR or the photon cache is wrong, but there's probably more to it... With the GI Light in point cloud mode the result is also too bright, the blotchy photons are clearly visible in reflections, and light leaks at corners.

Photon caching is glitchy though: one minute it's working, the next it isn't, and then it works again...

 

irrcache_v005.hip


Cheers!

 

One more test, with Happy Buddhas, yay! :)

The only thing that got cached was the walls. Goes to show you don't have to cache everything... the Buddhas still get the benefit  :)

Indirect contribution from them shows up in the regular AOVs, while light from the room ends up in emission.

 

I also set up a hybrid of a global GI cache + a view-dependent cache.

The view-dependent cache is just a half-res render from the camera where every object but the room is phantom and specular rays are disabled. Fresnel is view dependent, so there is less diffuse energy in the room than there should be (there are ways around it, like a baking mode in the shader where Fresnel is off)...

Everything inside the frame that is not occluded from the camera (a self-shadow test) uses the view-dependent cache; everything outside the frame or occluded uses the global cache. The view-dependent cache has the potential to provide vast coverage, with a cache detail level automatically appropriate for the distance.
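In VEX terms the pick between the two caches is roughly the sketch below, under my own assumptions: the view cache is an image sampled in NDC space, the global cache is a normal uv texture, and the visibility check is a single crude ray fired toward the camera rather than the proper self-shadow test in the hip.

```
// Hedged sketch of the hybrid: a point uses the view-dependent cache when
// its NDC position lands inside the frame and a ray toward the camera is
// unobstructed; otherwise it falls back to the global cache. File names and
// the bare trace() visibility test are placeholders.
surface irr_cache_hybrid(string viewcache   = "viewcache.rat";
                         string globalcache = "irrcache.rat";
                         vector basecolor   = {0.7, 0.7, 0.7})
{
    bsdf diff = basecolor * diffuse(normalize(N));

    if (getraylevel() > 0)
    {
        vector ndc    = toNDC(P);
        vector campos = ptransform("space:camera", "space:current", {0, 0, 0});
        vector tocam  = normalize(campos - P);

        int in_frame = ndc.x >= 0 && ndc.x <= 1 && ndc.y >= 0 && ndc.y <= 1;
        // crude visibility test; a proper version would stop the ray at the
        // camera instead of letting it run on past it
        int visible  = !trace(P + tocam * 0.001, tocam, Time);

        if (in_frame && visible)
            Ce = texture(viewcache, ndc.x, ndc.y);    // screen-space lookup
        else
            Ce = texture(globalcache, s, t);          // baked uv lookup

        F = 0.0 * diff;
    }
    else
        F = diff;
}
```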

 

The point of it is mainly to deal with animated stuff in frame efficiently, with the option of a view-dependent cache per frame and a static frame for the global cache.

 

Gain and gamma were adjusted until it roughly matched the brightness of the photon cache render.

I turned up the settings until quality >= patience limit; pretty good for 8m14s + 4m for both caches (cache time could be much less with blurry reflections). This was 6x6 pixel samples, 2 min / 9 max ray samples, 0.0025 noise level (variance AA still struggles with dark transitions). I think brute-force PBR would still be chewing the first 16 buckets after this finished, and nowhere near these noise levels. Something weird is going on with the reflection on the right wall though.

 

The photon cache does really well: 5m53s with the same Mantra settings. 10 million photons were fired; there were some odd glows/blotches with 1 million, but not that big a difference. Quality-wise it's pretty similar for the time spent, I'd call it about even; the irr cache took longer but it's cleaner over big areas (less variance in the irr cache). More photon caching glitches... I think turning off prefilter photons and then re-rendering the photons breaks it (black) and it seems to stay stuck black; after a while of trying to get it back (on/offs, different file paths, etc.) it somehow comes back.

 


irrcache_v006.hip


That will be an interesting comparison... I suspect any movement on the Buddha catching the light will move those photons around and cause low-frequency strobing on contact points and in dark areas they find hard to reach; let's see.

Probably much less likely to happen with image-based caches and sample lock on. I suppose I could try the ray histogram filter stuff on this... or even clean it up in post, NeatVideo does a wonderful job. Maybe we can even get away with rendering the view-dependent cache at half FPS and re-timing it in post!

There might be some weirdness if I animate the Buddha on the right to come into frame (the global cache to view cache transition).

 

Other things to try... distance threshold. I've got a BSDF occlusion node, which is an indirect lighting VOP hacked to gather occlusion attenuated by distance (normally used for my version of attenuation), that I could use to mix brute-force PBR back in at corners/contacts rather than switching it off entirely for ray level > 0.
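As a rough picture of that mix: assuming near_occ arrives as a 0-1 value from the occlusion hack (0 out in the open, 1 right at a contact), and with made-up 0.2/0.8 thresholds, the indirect branch of the earlier sketches would turn into something like:

```
// Sketch of blending brute-force indirect back in near corners/contacts.
// near_occ is assumed to come from elsewhere (the distance-attenuated
// occlusion VOP) rather than being computed here.
surface irr_cache_blend(string cachemap  = "irrcache.rat";
                        vector basecolor = {0.7, 0.7, 0.7};
                        float  near_occ  = 0.0)
{
    bsdf diff = basecolor * diffuse(normalize(N));

    if (getraylevel() > 0)
    {
        float k = smooth(0.2, 0.8, near_occ);   // 0 = pure cache, 1 = pure PBR
        Ce = (1.0 - k) * texture(cachemap, s, t);
        F  = k * diff;                          // real bounces take over at contacts
    }
    else
        F = diff;
}
```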


Done. I rendered with sample lock On so we can clearly see how the caching methods differ.

 

It's surprising how the photon cache strobe is not as visible in the beauty as the direct photon render would suggest.

It looks like the photons that hit the wall directly stay stable, but of course the ones that hit the statue swim all over. I wonder if it's still forgiving if the light moves...

 

The irr-cached render is rock solid though, and now at 1 minute per view-dependent cache frame (with the RHF filter) it's starting to look really compelling.

Solved the weird reflection on the right wall by setting the min reflect ratio to 0.5, but it's pretty clear that the hack confuses the path tracer: it looks as though the indirect rays are playing Russian roulette with diffuse rays that aren't there, and this is still visible in the floor reflections.

 

Mantra has the same settings as the photon cache render except for the min reflect ratio, but it's still clear that the noise levels are lower, because the irr cache is far more coherent than a photon cache. The funny thing with photons is that as the photon count increases, blotching becomes less of a problem but variance gets worse, because the photons are getting smaller in radius and so are harder to hit coherently ray after ray.

 

I think there is a very good case for irr caching in Houdini... But like I said before, it needs SideFX to make it work properly and hassle-free :)

 

irrcacheVSphotons_001.mov

irrcache_v008.hip


Found Dennis Albus' thread on ray switch labels (awesome! I've been trying to figure that out...), so I'll try to use that to improve the results when I get more time... I hope it allows the BSDF components to remain combined rather than editing all the shaders and layer blending tools to handle separated BSDFs... and I'm also hoping it might fix the noisy reflections.

 

Ideally I want reflections to behave as the GI light does in reflections (and as my old gather-based system did), in that reflections reflect diffuse traced against the cache, whereas currently the irr cache completely replaces all diffuse in reflections of cached objects at all ray levels, which gives the same result in reflections as when "point cloud mode" is on with the GI light.

The difference between point cloud mode and the irr cache is that the irr cache is far cleaner, so these reflections are far less objectionable (see the post #3 renders).

 

Cheers!

S

