deep raster zdepth parameter export


Magnus Pettersson


I'm rendering sprites and trying out deep rasters for the first time. I've made a "myZdepth" parameter that I export from the standard dust_puff shader, but what I don't understand is how it's affected by the Of output of the shader. I'm multiplying the zdepth with the sprite's "edge" alpha, but I don't want it to inherit or get infected by the shader's Of alpha.

My own zdepth image plane works as I expect when I have full-intensity alpha, but when I lower it, alpha information gets added into it (see attached picture). What I want is to have my opacity in color and alpha, but it should not affect my zdepth image the way it does now. Surely there's something basic I have missed here? I've attached a simple test scene too.

zdepth_example.hip

post-4519-12482856218_thumb.jpg


First off, you're getting quite far from a legal and correct z-buffer here.

I presume you know that and just want a workable one for some purpose.

After all, there really is no correct AND good z-buffer for a case like this.. (except a deep camera map! :)

The effect you are seeing stems partially from the way Mantra combines its depth samples. If you look closely, your z values in the latter case (Ko=0.1) are actually a lot larger!

You are exporting the same value in both cases, but when the samples along one sampling ray are combined, that same value is associated with a lower opacity. When the samples are combined, more of each sample is counted in, and they add up to a larger number. In other words, Mantra is looking for premultiplied values, and the bit of a counter-intuitive answer is to multiply with the "final" Of and not with the "raw" opacity you're multiplying with now.
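A toy model of that front-to-back combine shows the effect (a simplified sketch in plain Python, not Mantra's actual code; the depth of 10 and the opacity of 0.1 are made-up numbers):

```python
def combine(samples):
    """Front-to-back 'over' combine of (value, alpha) samples along one ray.
    The combine assumes the value is already premultiplied by opacity."""
    acc_v, acc_a = 0.0, 0.0
    for v, a in samples:
        acc_v += (1.0 - acc_a) * v  # each sample weighted by remaining transparency
        acc_a += (1.0 - acc_a) * a
    return acc_v, acc_a

z = 10.0  # made-up depth, the same for every sample along the ray

# Exporting the raw z with low-opacity samples: the sum keeps growing.
raw_v, _ = combine([(z, 0.1)] * 3)            # 10 + 9 + 8.1 = 27.1

# Exporting z premultiplied by Of: dividing by the combined alpha recovers z.
pre_v, pre_a = combine([(z * 0.1, 0.1)] * 3)  # 1 + 0.9 + 0.81 = 2.71
depth = pre_v / pre_a                          # 2.71 / 0.271 = 10.0
```

With raw values, each extra semi-transparent sample inflates the sum well past the true depth; with premultiplied values, unpremultiplying at the end gives the original z back exactly.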

With this fixed, the resulting values are in the same ballpark. You will of course not see the exact same result (as you seemed to want?), as sprites with different opacities will naturally blend together in a different way.

If you don't like this way of combining the samples, you can change the output from Opacity Filtering to Closest Surface. In that case Mantra will just return the closest sample - this is the default behaviour for a "correct" z-buffer. (A correct z-buffer should also use a minmax min pixel reconstruction filter.)

Also, a correct z-buffer would be identical for both your cases.
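The difference between the two modes can be sketched like this (again a simplified model in plain Python, not Mantra's actual filtering code; the sample list is made up):

```python
def opacity_filtered_depth(samples):
    # Simplified model of "Opacity Filtering": opacity-weighted
    # front-to-back combine, unpremultiplied at the end.
    acc_z, acc_a = 0.0, 0.0
    for z, a in samples:
        acc_z += (1.0 - acc_a) * z * a
        acc_a += (1.0 - acc_a) * a
    return acc_z / acc_a if acc_a > 0.0 else 0.0

def closest_surface_depth(samples):
    # "Closest Surface": simply the nearest sample that hit anything.
    return min(z for z, a in samples if a > 0.0)

# Made-up (depth, alpha) samples along one ray, front to back.
samples = [(10.0, 0.1), (12.0, 0.5), (15.0, 1.0)]

closest = closest_surface_depth(samples)    # 10.0, regardless of the opacities
filtered = opacity_filtered_depth(samples)  # opacity-weighted, lands deeper
# A "minmax min" pixel filter would then keep the smallest of these
# per-subpixel depths, rather than averaging them across the pixel.
```

Closest Surface ignores the opacities entirely, which is why it gives the same answer for both the Ko=1.0 and Ko=0.1 cases.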

On a side note, with Mantra your scene will already be in camera coordinates, so you don't need that current->camera space change. In fact, you could just take the length of the I global variable. Or P-Eye :)

(Being anal here now, but calculating depth like this is also a little bit wrong, as the official z-buffer should be the distance to the image plane, not the radial distance to the camera pivot calculated here..)
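The two measures differ like this (plain Python with a made-up point, assuming camera space with the view axis along -z as in Houdini):

```python
import math

def radial_depth(p):
    # length(I) style: straight-line distance from the camera position
    # to the shaded point.
    return math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2)

def planar_depth(p):
    # Canonical z-buffer depth: perpendicular distance to the image plane.
    return -p[2]  # camera looks down -z in Houdini camera space

p = (3.0, 0.0, -4.0)   # made-up point, off to the side of the view axis
r = radial_depth(p)    # 5.0
z = planar_depth(p)    # 4.0
```

The two agree only on the view axis; radial depth grows toward the edges of the frame, which can throw off depth-based compositing operations that expect planar z.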

Apologies for the rant, and someone slap me if I got this wrong ;)

eetu.


First off, you're getting quite far from a legal and correct z-buffer here.

I presume you know that and just want a workable one for some purpose.

After all, there really is no correct AND good z-buffer for a case like this.. (except a deep camera map!

I'm working barely-legal when dealing with shaders :ph34r: I'm used to having my shaders on a silver platter and not worrying about all the under-the-hood stuff, but ever since my switch to Houdini, being the only Houdini license eater at work, it seems I have to face the beast if I want to get my work done :P

But what I'm basically aiming for is a depth pass that the compositors can make some use of in a project where I'm doing some sprite rendering (if that's at all possible?), and the top result felt more useful than the 0.1 Ko one when I was playing around a little in Nuke... Deep camera maps - are they something worth checking out for me?

If you don't like this way of combining the samples, you can change the output from Opacity Filtering to Closest Surface. In that case Mantra will just return the closest sample - this is the default behaviour for a "correct" z-buffer. (A correct z-buffer should also use a minmax min pixel reconstruction filter)

I tested having it on Closest Surface, but the opacity doesn't seem to work (I get black squares with the texture on), so I guess that option is not usable for sprites. Maybe when rendering points, though?

On a side note, with Mantra your scene will already be in camera coordinates, so you don't need that current->camera space change. In fact, you could just take the length of the I global variable. Or P-Eye :)

(being anal here now, but calculating depth like this is also a little bit wrong, as the official z-buffer should be the distance to the image plane, not the radial distance to camera pivot calculated here..)

Apologies for the rant, and someone slap me if I got this wrong

Hehe, I think it's me who needs a slap or two for good learning, so I'm happy you point it out :)

So my best shot at getting the top zdepth pass would be using takes: just add a switch in the shader, plug the zdepth stuff (with 1.0 Ko) into Cf, and be happy, instead of using an extra image plane? :)


Yeah, faking is the name of the game - if it works for you then it's all good =)

Deep camera maps allow you to correctly composite multiple transparent layers or even volumes. They retain all the depth samples without trying to combine them. There is no support for them yet in any public compositing software as far as I know, but some studios have supposedly built their own stuff. But watch for them, they are definitely coming!

http://www.sidefx.com/docs/houdini10.0/rendering/deepshadowmaps

eetu.


Ah cool, I've seen that somewhere (can't remember where), but it was an example with an airplane and a cloud, and there was no need to render the cloud out with the cutout airplane, so I'm guessing that's deep camera maps in action there... I am definitely going to keep an eye out for it. Thanks for the help!
