qral

clamping Pz range in mantra


Hi guys,

I'm having trouble getting a good Pz channel render from the camera settings I am using: a 180mm focal length with the camera situated 100 units away from the model. The resulting Pz channel is very poor, with very little detail in the depth, making it difficult to use later in post.

Is there a way of telling Mantra that it has to clamp the depth (just as if the camera were much closer)?

Thanks in advance!

- Rasmus



Hello,

You can create your own shader and connect the surface depth to a "fit" node, then output that to the surface color.

The fit node lets you control the range of your depth.

In the Mantra node, choose closest sample filtering for the pixel filter and closest surface for the sample filter.

have fun,

Thomas
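The fit-node remap Thomas describes can be sketched outside Houdini too. Below is a minimal numpy version of the same operation (the function name `fit` and the 95..105 depth band are illustrative assumptions, not Houdini's API):

```python
import numpy as np

def fit(depth, src_min, src_max, dst_min=0.0, dst_max=1.0):
    """Remap depth from [src_min, src_max] to [dst_min, dst_max],
    clamping values outside the source range (like Houdini's fit VOP)."""
    d = np.clip(depth, src_min, src_max)
    t = (d - src_min) / (src_max - src_min)
    return dst_min + t * (dst_max - dst_min)

# Camera roughly 100 units from the model: spread a narrow band of
# interesting depths (here assumed to be 95..105) over the full 0..1 range.
pz = np.array([90.0, 95.0, 100.0, 105.0, 110.0])
print(fit(pz, 95.0, 105.0))  # → [0.  0.  0.5 1.  1. ]
```

This is exactly why the rendered depth looks "flat" with a long lens: the model only occupies a tiny slice of the raw Pz range, and the fit stretches that slice back out.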



I think that even if one refits in the shader, the number of depth steps per pixel is limited.

I would render the image bigger, perhaps twice the size, to get more information for post work.

Since the floating-point precision is arbitrary (at least for image-manipulation purposes), you could also refit it later in post.

I don't know if that's a good practice.
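The refit-in-post idea can be illustrated the same way. Assuming the Pz plane was written as 32-bit float (e.g. in an EXR), a compositor can normalize whatever depth range is actually present; the small array here is a stand-in for the loaded image:

```python
import numpy as np

# Stand-in for a 32-bit float Pz plane loaded from an EXR render;
# the values are hypothetical depths around 100 units.
pz = np.array([[98.2, 99.7], [101.4, 103.9]], dtype=np.float32)

# Refit in post: map the depth range actually present in the image
# to 0..1. With float precision, this loses essentially nothing
# for compositing purposes.
near, far = pz.min(), pz.max()
normalized = (pz - near) / (far - near)
```

The catch, as noted above, is that this can't recover detail the render never sampled, so better pixel/sample filtering or a larger render still matters.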

Cheers,



Thanks for your input... After fiddling with the shader solution, trying to clamp the values and normalize them with the fit node, I must admit I'm not getting the result I hoped for. I have played around with the compositing side and believe I have found a way to deal with the problem...

- Rasmus


Try it with volumes. A little slow perhaps, but you have a lot of control over the range and detail, and it takes mb and such things into account.


This might not be of any help to you, but I'm actually very happy with the way Houdini outputs depth information in a raw, unaltered way. It's just a 1:1 mapping of distance to luminance.

What do you mean exactly by "poor quality"? Have you tried the sample and pixel filters "papicrunch" has suggested? You probably already know this, but the depth channel is supposed to be aliased.

-dennis


@dennis, I don't believe there is anything wrong with the way Houdini is mapping the depth; maybe the term "poor" was a bit off. I think the problem probably has more to do with the focal length: if you compare a tele and a wide-angle depth render, the wide-angle is much more detailed and the range "seems" greater.

@macha, do you mean using volumes for fog effects or for some depth-information wizardry? And what do you mean by "mb"?

