Rendering Fur with GI
Posted 12 September 2010 - 08:18 AM
Posted 23 September 2010 - 04:39 AM
Basically it works by converting the fur preview geometry into a fog density volume with IsoOffset (you may need to re-sample the curves to get enough points for an accurate enough representation when using the Point Cloud mode in IsoOffset). I then set this volume to Phantom and exclude the fur from all shadow-casting lights, leaving the volume to cast shadows onto the fur and everything else... and you're basically done!
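The core of the trick is just splatting the resampled curve points into a voxel density grid. Here's a minimal NumPy sketch of what IsoOffset's Point Cloud mode is conceptually doing (the function name, `pad` parameter, and nearest-voxel splatting are my own simplifications, not Houdini's actual algorithm):

```python
import numpy as np

def points_to_density(points, res=32, pad=0.1):
    """Splat curve points into a voxel density grid -- a rough
    stand-in for IsoOffset's Point Cloud mode on resampled fur."""
    grid = np.zeros((res, res, res))
    lo = points.min(axis=0) - pad          # padded bounding box
    hi = points.max(axis=0) + pad
    # Map each point to its nearest voxel and accumulate density.
    idx = ((points - lo) / (hi - lo) * (res - 1)).astype(int)
    for i, j, k in idx:
        grid[i, j, k] += 1.0
    return grid / max(grid.max(), 1e-9)    # normalise to 0..1
```

The denser the resampling, the closer the fog matches the actual fur silhouette, which is exactly why sparse curves need the re-sample step.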
Receiving raytraced shadows from a volume renders surprisingly quickly, and you get the subsurface lighting for free as the light penetrates into the volume (the volume shader need only be the default Volume Cloud).
It casts shadows onto, say, a character's head quite nicely, but it is obviously limited by the resolution of the fur-to-volume conversion, so you won't see individual hair-strand shadows. Maybe tracing against a relatively sparse fur distribution (say, against the actual preview fur) in addition to the volume would cheat around this.
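The "subsurface for free" behaviour falls out of standard Beer-Lambert extinction: a shadow ray marched through the density grid lets some light survive instead of clipping to black. A sketch of that transmittance calculation, assuming a density grid defined over the unit cube (the `sigma` extinction coefficient and step count are illustrative picks, not Mantra's internals):

```python
import numpy as np

def transmittance(grid, p0, p1, sigma=8.0, steps=32):
    """Fraction of light surviving a shadow ray from p0 to p1
    through a unit-cube density grid: T = exp(-sigma * integral)."""
    res = grid.shape[0]
    ts = (np.arange(steps) + 0.5) / steps
    pts = p0 + np.outer(ts, p1 - p0)       # sample points along the ray
    idx = np.clip((pts * res).astype(int), 0, res - 1)
    dens = grid[idx[:, 0], idx[:, 1], idx[:, 2]]
    seg = np.linalg.norm(p1 - p0) / steps  # length of each segment
    return float(np.exp(-sigma * dens.sum() * seg))
```

Light grazing the edge of the volume crosses little density and arrives mostly unattenuated, which is what reads as soft subsurface scattering in the renders.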
Below are two renders, rendered in micropolygon mode with 10x10 pixel samples. The top is lit by two area lights; the second render is lit by an environment light with an HDRI map. They both took approx. 5min20s on 3 of the 4 cores on this i7 940. For some reason, regardless of lighting technique, the render takes about 4 mins before it actually starts, then goes very quickly.
Posted 23 September 2010 - 05:01 AM
Posted 23 September 2010 - 08:03 AM
Nice indeed. I believe I saw a Pixar paper where they were using a volume to shade hairs as well.
That is a very cool paper! I wonder if we could also simulate hair with fluids in Houdini... I think you should have a crack at that one, Peter! (if you haven't already) DOPs give me a headache.
Their rendering technique is very different from what I'm doing, as it still centers around deep shadow map tech; i.e. it makes hair look nice but doesn't attempt to solve the raytracing problem.
Transferring the inherent softness of the volume normals to the hairs is very cool. We could possibly do this in Houdini too by making volume normals and transferring the result to the guide curve points.
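One way to get those soft volume normals is the negated, normalised gradient of the density field, which you could then sample at each guide curve point. A sketch of that idea (function name and conventions are mine, not a Houdini node):

```python
import numpy as np

def volume_normals(grid):
    """Soft 'fur normals' from a density field: the negated,
    normalised gradient points outward from dense regions."""
    gx, gy, gz = np.gradient(grid)
    n = -np.stack([gx, gy, gz], axis=-1)   # one normal per voxel
    length = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.maximum(length, 1e-9)
```

Sampling this per guide point would give the smooth, clump-scale shading the paper gets, instead of the noisy per-strand tangent normals.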
I would love to be able to run shaders on a volume and transfer the result to a surface (in this case the hair) at render time without jumping through hoops, kind of like a voxel-based point cloud shader. I suppose I could convert the volume into a point cloud and then run it as a pcloud shader, but that is jumping through hoops.
I now basically need to generate an occlusion mask (by somehow getting the hair density volume transferred onto the hair) which I would use to shadow raytraced indirect diffuse bounces. In other words: a ray leaves a point on the hair, all other hairs are invisible to it, the ray hits scene geo to get the diffuse bounce, and the result gets somehow multiplied/shadowed by the hair density field. Done. Or maybe enabling opacity in the occlusion VOP and tracing the volume like the shadows would work quickly enough, hmmm...
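The "density transferred onto the hair" part could be as simple as a per-point lookup into the density grid: points buried deep in the fur get their indirect bounce darkened more than points near the surface of the volume. A hedged sketch of that occlusion mask (the `strength` falloff is an arbitrary artistic control, not derived from the thread):

```python
import numpy as np

def density_occlusion(grid, points, strength=2.0):
    """Per-point occlusion mask from local hair density over a
    unit-cube grid: 1.0 = fully open, toward 0.0 = buried in fur."""
    res = grid.shape[0]
    idx = np.clip((points * res).astype(int), 0, res - 1)
    local = grid[idx[:, 0], idx[:, 1], idx[:, 2]]   # density at each point
    return np.exp(-strength * local)
```

Multiplying the raytraced indirect diffuse by this mask approximates the shadowing the invisible hairs would have contributed, without paying for curve-vs-ray intersection.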
Posted 23 September 2010 - 08:47 AM
Can you provide us with a test file?
Posted 24 September 2010 - 10:09 AM
Posted 25 September 2010 - 03:34 AM
Posted 28 September 2010 - 04:54 PM
Thanks for sharing!!!
It's a really cool hack!
I just took the scene and did some tests using one env light with an HDR, and it seems very promising...
Very cool solution and thanks for sharing the file.
Can you please explain exactly what you modified on the hair diffuse VOP?
Edited by Mzigaib, 28 September 2010 - 04:54 PM.