Depth of field problem


qsc99


I followed this tutorial:

https://vimeo.com/6580446

 

(Render an extra image plane with Pz, 32-bit float, Closest Surface, min/max set to min. Then a DOF node combined with a Defocus node set to "per pixel defocus".)

 

The effect works as it should, but only in areas with background geometry. In areas where there is nothing in the background, the foreground objects get blurred, but the blur looks weird. You can see this in my screenshot:

[attached screenshot]

 

I experimented with different values in the DOF and Defocus nodes, but it didn't work. The only thing that improved the image a little was blurring the mask output by the DOF node, but that can't be the right solution, can it?

 

Help is greatly appreciated.

Edited by qsc99

  • 1 month later...

Are you testing the render results in a Houdini COP network or in external software, and if so, which one?

 

edit:

 

Oh sorry, now I see the rest of your post. I am not 100% sure what the cause is, but here is how you can test what the problem is.

 

1. If those problematic geometries are open curves, try using tiny tubes or any polygonal geometry instead, just to see whether that solves the problem (if it does, then the Closest Surface option has some issue with curves). Also, from the posted picture it is hard to tell whether there is real opacity on the geometry or just opacity created by the defocus, so make those curves (and later the tubes) fully opaque.

 

2. In the COP network, when you get pixel info on the depth channel in a region with no geometry (by pressing "i"), what values do you get?

 

3. Give your plane a constant black shader and animate it from its current position away from the camera to a very large distance (scaling it if necessary), so that it always stays partially behind the problematic curves while it moves away; keep the curves static in place. That way you will see whether the error appears even over geometry once the geometry goes much deeper into the scene.

 

4. What is the value of the far (back) clipping plane on your render camera? (I am not sure whether it affects the depth calculation in Mantra.)

 

5. Did you use any pixel filter on Pz extra image plane?

 

6. Can you post the mask generated by the DOF node that you use for blurring in COPs, but without the blur you mentioned?

 

Run those tests and post the results.

Edited by djiki

Several tests of the defocus calculation based on the depth channel in Houdini's compositing module led me to this:

 

1. Mantra renders Z-depth in the positive direction, oriented away from the camera. That means the 32-bit value increases with depth: represented as a grayscale, a closer pixel is darker and a farther pixel is brighter.

From that logic you would expect an infinitely deep pixel to have the highest 32-bit value, but that is not the case: those pixels have a value of zero. Yet no pixel can have a real value of 0 (that would mean some polygon sits exactly at the camera position, and its infinitely small region would cover the whole frame). I suppose zero is a reserved value for that specific case, so it can be used to detect the "blank" parts of the image and discard them. In the defocus function, depth is used as a multiplication coefficient, so values of zero will "blank" that region without any special parsing of the Z-depth channel. Even if you scale the range by some factor, zero stays zero.
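The reserved-zero behaviour can be sketched with a toy depth-to-blur mapping (the formula below is an assumed circle-of-confusion stand-in, not Houdini's actual calculation):

```python
import numpy as np

# Toy 32-bit float Pz values as described above: geometry at depths 5
# and 20, and "empty" pixels reserved as 0.0.
depth = np.array([5.0, 5.0, 20.0, 0.0, 0.0], dtype=np.float32)
focus = 5.0  # focal distance

# Assumed blur-size formula: distance from the focal plane over depth.
coc = np.abs(depth - focus) / np.maximum(depth, 1e-6)

print(coc)
# In-focus pixels get 0 blur, but the reserved zero-depth pixels explode,
# as if geometry sat right at the lens -- so they must be discarded or
# remapped before any defocus uses them.
```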

 

2. The DOF node in Houdini's compositor discards those zero-depth pixels in its calculation, and that produces the error you see.

 

The solution is to manually set those infinitely deep pixels to some real value. The Default Mask Value slider on the DOF node does exactly that: by "bringing" those zero-value pixels closer to the camera with that slider, you will see the error disappear.
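The fix amounts to substituting a real distance for the reserved zeros before the depth is used. A minimal sketch (the 1000.0 "far" value is an arbitrary choice for illustration):

```python
import numpy as np

depth = np.array([5.0, 20.0, 0.0, 0.0], dtype=np.float32)
far = 1000.0  # assumed stand-in for the Default Mask Value remap

# Replace the reserved zero-depth pixels with the chosen far distance.
fixed = np.where(depth == 0.0, far, depth)

print(fixed)  # [   5.   20. 1000. 1000.]
```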

 

In general, calculating DOF is not as simple as blurring the closer pixels and providing an alpha for later compositing!

A proper calculation of the resulting RGB values in such a blur has to take the background RGB values into account, because background pixels are "displaced" by the calculation. So you cannot achieve the same result without the background image by blurring just the foreground and providing an alpha, unless you also provide that pixel transformation in an extra vector channel alongside the RGBA channels.

In simple words: looking through the alpha of a blurred foreground pixel should show you all the background pixels which, transformed by a proper defocus function, exist "under" the pixel you are looking through.

So you cannot do the defocus in Houdini, export just the RGBA values to external compositing software, and expect to get a proper defocus in that blank area "under" the alpha.
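The argument can be illustrated with a tiny 1-D sketch (a box blur as a stand-in for a defocus kernel; all values assumed): blurring the full composited frame lets a bright background pixel smear under the softened foreground edge, which blurring the foreground layer alone and compositing afterwards cannot reproduce.

```python
import numpy as np

# 1-D scene: opaque foreground on the left, background with one bright
# "light source" pixel just past the foreground edge.
fg = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])  # premultiplied fg RGB
a  = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])  # fg alpha
bg = np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.0])  # bright bg pixel at x=3

def blur(x):
    # 3-tap box blur with edge clamp -- a stand-in for a defocus kernel.
    p = np.pad(x, 1, mode="edge")
    return (p[:-2] + p[1:-1] + p[2:]) / 3.0

# Defocus after compositing: the bg light leaks under the fg edge.
full = blur(fg + bg * (1.0 - a))

# Defocus the fg layer alone, then composite over the sharp background.
partial = blur(fg) + bg * (1.0 - blur(a))

print(full[2], partial[2])  # the bg light reaches x=2 only in "full"
```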

 

You have to include that transformation, or export the Z-depth and do the full defocus in the external software, or do the full composite in Houdini.

 

I had never used defocus in Houdini until now, so I decided to test it against some basic things a defocus should do:

 

1. A blur whose intensity is driven by a mask generated from the depth channel is far from realistic. A realistic calculation has to take into account pixel scaling based on depth, for proper highlight accumulation, and a gain calculation able to produce sharp, oversaturated edges on those highlights.

2. It cannot cover the hyperfocal distance.

3. Defocused light sources in the background should cross over the edges of closer objects. That happens when the per-pixel defocus option is turned on, but it is not calculated properly. In the referenced photo you can see that even light sources at a very large distance produce relatively sharp edges when defocused, which is not the case in Houdini, where they are blurred like every other part of the background.

4. The chromatic component of defocused light sources is much more consistent in nature than in Houdini's defocus, which shows a lot of unnatural blurry spread.

 

Here is a reference photo:

 

http://www.gettyimages.com/detail/171147021

 

I am not impressed by the results, but it is fast enough that it could be used for quick previz or animatics all in one piece of software. For final production-quality results I will stick with DOF calculated at render time. That allows setting up the f-stop and shutter time for the DOF, and it gives amazing results when the camera is moving, which is hard to reproduce in compositing software by using the Z-depth channel for the DOF calculation and applying motion blur to it.

Edited by djiki

  • 3 months later...

I had a similar issue, and it was because there was no depth information in the areas with no geometry.

To solve it, in the COP network I added a Color node with a custom plane called Pz (32-bit float, though 16 is probably OK, I think) and connected it to a Pixel node to set the component to a constant value (10 in my case, but this is just a 'fake depth' for the areas of your image with no geometry). This was then combined with an Over node. Defocus seemed to work great after that.
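The effect of that Over composite can be sketched like this (arrays, coverage alpha, and the constant 10 are illustrative, matching the post's choice):

```python
import numpy as np

# Rendered Pz with reserved zeros where there is no geometry, plus an
# assumed coverage alpha marking where geometry actually rendered.
rendered_pz = np.array([5.0, 20.0, 0.0, 0.0], dtype=np.float32)
coverage    = np.array([1.0, 1.0, 0.0, 0.0], dtype=np.float32)

fake_depth = 10.0  # the constant 'fake depth' chosen in the post

# Over composite: real depth where geometry exists, fake depth elsewhere.
pz = rendered_pz * coverage + fake_depth * (1.0 - coverage)

print(pz)  # [ 5. 20. 10. 10.]
```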

