Rendering Fur with GI



  • 2 weeks later...

I'm experimenting with a technique right now that is looking promising. It's aimed at long hair but may work for fur too. The goal is to render hair quickly, without hair-specific deep shadow map light rigs and all that junk (because the hair must sit right with area lights etc.), and without ever tracing against the hairs themselves, because that murders render time.

Basically, it works by converting the fur geo preview into a fog density volume with IsoOffset (you may need to re-sample the curves to get enough points for an accurate enough representation using the point cloud option in IsoOffset). Then I set this volume to Phantom and exclude the fur from any shadow-casting lights, leaving the volume to cast shadows onto the fur and everything else... and you're basically done! :D
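
For anyone who wants the gist of the conversion step without opening Houdini, here is a minimal pure-Python sketch of the idea: resample each curve into a denser point cloud, then splat the points into a voxel grid, roughly what IsoOffset's point cloud mode does. All names and values are illustrative, not Houdini API calls.

```python
def curve_to_points(curve, samples_per_span=4):
    """Linearly resample a polyline (list of (x, y, z)) into more points,
    like the curve re-sampling step mentioned above."""
    pts = []
    for a, b in zip(curve, curve[1:]):
        for i in range(samples_per_span):
            t = i / samples_per_span
            pts.append(tuple(a[k] + t * (b[k] - a[k]) for k in range(3)))
    pts.append(curve[-1])
    return pts

def points_to_density(points, voxel_size=0.1):
    """Mark every voxel that contains at least one sample (a sparse
    occupancy grid keyed by integer voxel coordinates)."""
    grid = {}
    for p in points:
        key = tuple(int(c // voxel_size) for c in p)
        grid[key] = 1.0  # binary occupancy, as IsoOffset effectively gives
    return grid

# One toy hair curve with three control points:
hair = [(0.0, 0.0, 0.0), (0.0, 0.5, 0.0), (0.1, 1.0, 0.0)]
density = points_to_density(curve_to_points(hair))
```

In the real setup the resulting fog volume stands in for thousands of such curves at once, which is why shadowing against it is so much cheaper than tracing the hairs.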

Receiving raytraced shadows from a volume renders surprisingly quickly, and you get the subsurface lighting for free as the light penetrates into the volume (the volume shader need only be the default Volume Cloud).
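
The "free" subsurface look comes from simple Beer-Lambert attenuation: light entering the fog is dimmed in proportion to the density it passes through, so shadows deepen gradually instead of clipping to black. A hedged sketch, with a made-up extinction coefficient `sigma`:

```python
import math

def transmittance(densities, step=0.05, sigma=8.0):
    """Beer-Lambert falloff for a light ray marched through fog density
    samples: T = exp(-sigma * step * sum(density_i))."""
    optical_depth = sigma * step * sum(densities)
    return math.exp(-optical_depth)

# A point near the surface of the fur volume sees only a few thin samples...
shallow = transmittance([0.2, 0.1])
# ...while one deep inside sees many dense ones, so its shadow is much
# darker, yet never a hard black; that gradient is the soft penetration.
deep = transmittance([1.0] * 10)
```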

It casts shadows onto, say, a character's head quite nicely, but it is obviously limited by the resolution of the fur-to-volume conversion, so you won't see individual hair strand shadows. Tracing against a relatively sparse fur distribution (say, the actual preview fur) in addition to the volume might cheat around this, though.

Below are two renders, rendered in MP (micropolygon) mode with 10x10 pixel samples. The top is lit by two area lights; the second is lit by an environment light with an HDRI map. They both took approx. 5 min 20 s on 3 out of 4 cores of this i7 940. For some reason, regardless of lighting technique, the render takes about 4 minutes before it actually starts, then goes very quickly.

post-1495-128524261238_thumb.jpg

post-1495-128524263325_thumb.jpg

post-1495-128524521303_thumb.jpg

Cheers

S


Nice indeed. I believe I saw a Pixar paper where they were using a volume to shade hairs as well.

http://graphics.pixar.com/library/Hair/paper.pdf

That is a very cool paper! I wonder if we could also simulate hair with fluids in Houdini... I think you should have a crack at that one Peter! (if you haven't already) DOPs give me a headache ;)

Their rendering technique is very different from what I'm doing, as it still centers around deep shadow map tech, i.e. it makes hair look nice but doesn't attempt to solve the raytracing problem.

Transferring the inherent softness of the volume normals to the hairs is very cool. We could probably do this in Houdini too, by computing volume normals and transferring the result to the guide curve points.

I would love to be able to run shaders on a volume and transfer the result to a surface (in this case the hair) at render time without jumping through hoops, kind of like a voxel-based point cloud shader. I suppose I could convert the volume into a point cloud and run it as a pcloud shader, but that is jumping through hoops.

I now basically need to generate an occlusion mask (by somehow getting the hair density volume transferred onto the hair) which I would use to shadow raytraced indirect diffuse bounces. In other words: a ray leaves a point on a hair, all other hairs are invisible to it, the ray hits scene geo to get the diffuse bounce, and the result gets multiplied/shadowed by the hair density field. Done. Or maybe enabling opacity in the occlusion VOP and tracing the volume like the shadows would be quick enough, hmmm...
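
The proposed masking scheme can be sketched as a ray march: step along the bounce ray through a (hypothetical) sparse hair-density grid, convert the accumulated density to a transmittance, and multiply the indirect diffuse result by it. The grid layout, `sigma`, and step counts here are illustrative assumptions, not the actual shader:

```python
import math

def sample_density(grid, p, voxel_size=0.1):
    """Look up a sparse voxel density grid (dict keyed by integer voxel
    coords) at world-space point p; empty space returns 0."""
    return grid.get(tuple(int(c // voxel_size) for c in p), 0.0)

def shadowed_bounce(grid, origin, direction, bounce_color,
                    steps=16, step_len=0.05, sigma=6.0):
    """March from the hair point along the bounce ray, accumulate the
    hair-density field, and attenuate the indirect diffuse result by
    the resulting Beer-Lambert transmittance."""
    depth = 0.0
    for i in range(1, steps + 1):
        p = tuple(origin[k] + direction[k] * step_len * i for k in range(3))
        depth += sample_density(grid, p) * step_len
    t = math.exp(-sigma * depth)
    return tuple(c * t for c in bounce_color)

# A slab of fur density above the shading point:
grid = {(0, i, 0): 1.0 for i in range(10)}
lit = shadowed_bounce({}, (0, 0, 0), (0, 1, 0), (1.0, 0.9, 0.8))
dim = shadowed_bounce(grid, (0, 0, 0), (0, 1, 0), (1.0, 0.9, 0.8))
```

With no density along the ray the bounce passes through unchanged; through the slab it is heavily attenuated, which is exactly the shadowing role the mask would play.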

cheers

S


Hi Serg...

Thanks for sharing!!!

It's a really cool hack!

I just took the scene and did some tests using one env light with an HDR, and it seems very promising... :D

cheers!

Very cool solution and thanks for sharing the file.

Can you please explain exactly what you modified on the hair diffuse VOP?

Thanks.

Edited by Mzigaib


Hi, thanks.

I put the function through a fit() instead of using abs(). The point is to be able to raise the minimum of the shading model (before multiplication with the shadow).

I find the hair diffuse effect to be too contrasty, I guess because it is a single-hair lighting model and therefore doesn't account for light scattering through, and reflecting from, hair to hair. Raising the minimum brightens the shading (it can still be dark if the shadow is dark), and I guess it looks better because it cheats back some of that missing energy. imo :)
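
For anyone wanting to try the same tweak, here is a rough Python stand-in for the change described: replace the abs() of the classic hair diffuse term with a VEX-style fit() so the model never falls below a chosen floor. The floor value of 0.3 is a made-up default, not the one used in the scene file:

```python
def fit(x, omin, omax, nmin, nmax):
    """Python stand-in for the VEX fit(): clamp x to [omin, omax] and
    remap it linearly to [nmin, nmax]."""
    t = max(0.0, min(1.0, (x - omin) / (omax - omin)))
    return nmin + t * (nmax - nmin)

def hair_diffuse_abs(cos_theta):
    """abs()-style term: folds back-lighting over, keeps full contrast."""
    return abs(cos_theta)

def hair_diffuse_fit(cos_theta, floor=0.3):
    """The tweak described above: remap [-1, 1] to [floor, 1], so the
    darkest the model can get (before the shadow multiply) is `floor`,
    a stand-in for the missing hair-to-hair scattered energy."""
    return fit(cos_theta, -1.0, 1.0, floor, 1.0)
```

The shadow term is still multiplied on afterwards, so fully shadowed hairs stay dark; only the unshadowed shading is lifted.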



Hi, I did a quick test using fine, short fur with an env light! :D

It still needs a lot of improvement, but it renders in acceptable time:

7 minutes on my poor MacBook Pro, with 11x11 samples!

cheers

Cassio

post-4199-128576464202_thumb.jpg



Cool test. I guess it could do with more hair-to-hair and clump-to-clump shadows. It highlights the problem that you need a very high-res volume to capture the shape of each clump in short fur, and thus be able to cast shadows from clump to clump. We could probably compress the volume to as low as 2 bits to solve the storage issue, probably with no negative side effects; it's the generation phase that could cause issues.
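
The 2-bit idea amounts to quantizing each voxel's density to one of four levels. A toy sketch (encode and decode folded into one function for brevity; level count and sample values are arbitrary):

```python
def quantize_2bit(value):
    """Map a density in [0, 1] to one of 4 levels (2 bits), then back to
    [0, 1] as a decoder would see it."""
    level = min(3, int(value * 4))  # 0..3 fits in 2 bits
    return level / 3.0

field = [0.0, 0.2, 0.49, 0.8, 1.0]
compressed = [quantize_2bit(v) for v in field]
```

For shadow casting, where only rough density matters, the quantization error stays small enough that the shadows should look essentially unchanged while storage drops 16x versus 32-bit floats.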

I think this could benefit from setting a slightly darker color at the base of the hair and/or increasing the volume density.

A change/cheat to the shader that I'm planning is to shadow hairs nearest the clump guide hair as well as nearest the root. Since we only have to assist the volume shadows in creating some detail, this should work quite well to enhance the look of dense, clumpy short stuff.

Perhaps it could be possible to trace against hairs, but only up to a max distance (say 3 cm for leopard fur), at which point it would blend to using the volume for shadows, or even ignore the volume altogether and only receive shadows from the skin.
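
That distance-based blend might look something like this, easing between the expensive traced hair shadow and the cheap volume shadow with a smoothstep. The 3 cm figure from the post is used as the default, and both shadow inputs are assumed to be scalar occlusion values:

```python
def smoothstep(edge0, edge1, x):
    """Standard cubic smoothstep, clamped to [0, 1]."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def blended_shadow(traced, volume, hit_distance, max_dist=0.03):
    """Use the traced hair shadow close to the shading point and fade to
    the volume shadow by max_dist (3 cm, per the leopard fur example)."""
    w = smoothstep(0.0, max_dist, hit_distance)
    return traced * (1.0 - w) + volume * w
```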

cheers

S


  • 1 month later...

serg,

just tested the volume fur method. It works, but I really hope SideFX will improve fur rendering with area shadows/PBR...

Could you maybe tell a little about how you style long hairs? What's the approach of your in-house tool?

ulf



I wish they would too, but I don't think there's any escaping the sheer geometric complexity of raytracing hair, apart from simplifying that complexity of course :), be it by converting the hair to volumes and/or doing something like PDI's geometry simplification schemes, where they shoot rays against very coarsely diced geometry with hardly any quality loss (according to the paper)... 99% of the time a hair on a character's toe doesn't need to know a hair on the character's head exists, and even if it did, the head hair would in its eyes only need to be a very, very simple representation. Such schemes would help general rendering of complex displacement-mapped scenes, not just hair.

I would like the conversion to volume to be an effortless render-time process: just set the desired resolution for the volume representation and the shading density, and you're done. I'd imagine it would basically set off a pre-render i3d convert (it would need to support conversion of lines so we don't have to re-sample curves).

The conversion itself could be a better representation of the hair density. E.g. if ten hair points fall inside a single volume voxel, IsoOffset only counts up to one, whereas it should really be adding them up, so it can represent the density and not just the shape (which just happens to work :) but could be better).
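
The difference being described is just clamped occupancy versus additive splatting. A small sketch contrasting the two (voxel size and point layout are arbitrary):

```python
def splat(points, voxel_size=0.1, additive=True):
    """Bin sample points into voxels. additive=True sums contributions
    (true density); additive=False clamps each voxel to 1 (occupancy,
    which is what the post says IsoOffset effectively gives you)."""
    grid = {}
    for p in points:
        key = tuple(int(c // voxel_size) for c in p)
        if additive:
            grid[key] = grid.get(key, 0.0) + 1.0
        else:
            grid[key] = 1.0
    return grid

# Ten hair samples all landing in the same voxel:
clump = [(0.01 * i, 0.0, 0.0) for i in range(10)]
dense = splat(clump, additive=True)   # voxel value 10: real density
flat = splat(clump, additive=False)   # voxel value 1: shape only
```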

Re the long hair styling tool: it's not a styling tool, it just creates hair guides inside a supplied hair volume container.

Basically, imagine a box with user-defined divisions and maybe even forking topology (it can in fact be any shape/topology). You select which side of the box to grow hair from (the skin patch), dice the skin patch according to the density of guide curves you want, then copy the hair guides onto the skin patch points. The length of the guides is determined by the height of the box (we ray this). At this point the hair guides and hair containers are essentially in their rest position.

Then you shape/sculpt the boxes into the desired style, and use Simon's excellent PGMVC lattice deformer to make the hair guides follow the shape of the deformed box.

So, from a user perspective... the tool needs two inputs, rest-position hair containers and a matching set of deformed/styled hair container geometry, and it outputs skin patches and hair guide curves matching the shape of the deformed containers.

Then you set things like the desired guide hair density, number of curve points (must be the same as is set in the fur procedural), NURBS or poly lines, even segments, randomized lengths, noise, etc...

If you wish, you can take these guides and edit them further with an Edit SOP or something.
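
A toy version of the guide-generation step described above: straight guides copied onto diced patch points, with length taken from the container height and an optional randomize-lengths jitter. All function and parameter names here are invented for illustration; this is not the in-house tool:

```python
import random

def grow_guides(patch_points, container_height, segments=8,
                length_jitter=0.0, seed=0):
    """Copy a straight guide (root toward the container ceiling) onto
    every skin patch point; optionally jitter lengths like the
    'randomize lengths' option mentioned above."""
    rng = random.Random(seed)
    guides = []
    for root in patch_points:
        length = container_height * (1.0 - length_jitter * rng.random())
        guide = [(root[0], root[1] + length * i / segments, root[2])
                 for i in range(segments + 1)]
        guides.append(guide)
    return guides

# Four diced patch points along a line, one guide per point:
patch = [(x * 0.1, 0.0, 0.0) for x in range(4)]
guides = grow_guides(patch, container_height=1.0, segments=8)
```

Deforming these rest-position guides into the styled container (the PGMVC lattice step) would then be a separate pass over each guide's points.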

We then use a modified Fur HDA that supports importing external guide curves. I also made various fixes/features: being able to use noise deformers in world space or rest space in addition to the pretty crap random jitter effort, fixing randomize length so that clumping fur doesn't erase it, curling around guide curves (SESI disappointingly didn't implement their own hair curling tutorial in their tool!!), and generally re-organizing the entire UI so that it actually makes sense, etc.

So we have several options in terms of modelling and animating hair... it can be modelled in any software by anyone, and animating it can be done like anything else, by rigging/animating the container geometry in Maya (which helpfully already has straight rest positions) and/or guide curve dynamics in Houdini. It's pretty open what you do with the stuff. I might even try to convert the containers into volumes instead of the hair curves for BG character hair.

Basically the process is similar to that plugin for Max where you model shapes instead of tweaking guide curves, except here we can still tweak the guides output by the tool.

cheers

Sergio


hey serg!

thanks a lot for the explanation. Sounds like a great tool!

But it also sounds way too technically advanced for me ;)

I just tried to set up long fur styling with Wire Transfer Shape, transferring curves to the guides, which works pretty well so far.

ulf



Hi Serg

Thanks for the tips!

And I will; I've been very busy these last weeks, so...

I've also started tracing rays against the guide geometry instead of the fur procedural or an iso volume, just to see how much slower it can be, and to my surprise PBR handles it, at some levels (not many hairs), much faster than I expected!

Now that I have some spare time, I can play a bit more with it. :D

My Best

Link to comment
Share on other sites

and to my surprise PBR handles it, at some levels (not many hairs), much faster than I expected!

Yeah, I'm pretty pleased with how well PBR manages even pretty full-on fur shots (motion blur, closeups, big res, etc.). Obviously not the fastest renders, but pretty stable, predictable, and able to be planned for.

Link to comment
Share on other sites
