Magnus Pettersson Posted July 22, 2009

I'm rendering sprites and trying out deep rasters for the first time. I've made a "myZdepth" parameter that I export from the standard dust_puff shader, but what I don't understand is how it is affected by the shader's Of output. I'm multiplying the z-depth by the sprite's "edge" alpha, but I don't want it to inherit or get polluted by the shader's Of alpha. My own z-depth image plane works as I expect when the alpha is at full intensity, but when I lower it, alpha information gets added into it (see the attached picture). What I want is to have my opacity in the color and alpha, but it should not affect my z-depth image the way it does now. Surely there's something basic I've missed here? I've attached a simple test scene too.

zdepth_example.hip
eetu Posted July 22, 2009

First off, you're getting quite far from a legal and correct z-buffer here. I presume you know that and just want a workable one for some purpose. After all, there really is no correct AND good z-buffer for a case like this (except a deep camera map!).

The effect you are seeing stems partially from the way Mantra combines its depth samples. If you look closely, your z values in the latter case (Ko=0.1) are actually a lot larger. You are exporting the same value in both cases, but when the samples along one sampling ray are combined, that same value is associated with a lower opacity. When the samples are combined, more of each sample is counted in, and they add up to a larger number. In other words, Mantra is looking for premultiplied values, and the somewhat counter-intuitive answer is to multiply by the "final" Of and not by the "raw" opacity you're multiplying with now. With this fixed, the resulting values are in the same ballpark. You will of course not see exactly the same result (as you seemed to want?), since sprites with different opacities will naturally blend together in a different way.

If you don't like this way of combining the samples, you can change the output from Opacity Filtering to Closest Surface. In that case Mantra will just return the closest sample - this is the default behaviour for a "correct" z-buffer. (A correct z-buffer should also use a minmax min pixel reconstruction filter.) Also, a correct z-buffer would be identical for both your cases.

On a side note, with Mantra your scene will already be in camera coordinates, so you don't need that current->camera space change. In fact, you could just take the length of the I global variable, or P-Eye. (Being anal here now, but calculating depth like this is also a little bit wrong, as the official z-buffer should be the distance to the image plane, not the radial distance to the camera pivot calculated here.)

Apologies for the rant, and someone slap me if I got this wrong.

eetu.
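To make the premultiplication concrete, here is a minimal VEX sketch of the idea - a hypothetical shader, where the name sprite_zdepth_sketch and the overall structure are just for illustration, not the actual dust_puff code:

surface sprite_zdepth_sketch(export float myZdepth = 0)
{
    // Mantra shades in camera space, so the radial distance to the camera
    // is simply length(P) (equivalently, length(I)). A strictly correct
    // z-buffer would use the distance to the image plane instead.
    float depth = length(P);

    // ... the normal sprite shading would run here and set Cf / Of ...

    // Premultiply the export by the *final* Of so that Mantra's per-ray
    // opacity filtering reconstructs the expected depth, instead of the
    // inflated values seen when multiplying by the raw opacity.
    myZdepth = depth * avg(Of);
}

The premultiplication only matters under Opacity Filtering; with Closest Surface the plane simply takes the value of the nearest sample.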
Magnus Pettersson Posted July 22, 2009 (Author)

First off, you're getting quite far from a legal and correct z-buffer here. I presume you know that and just want a workable one for some purpose. After all, there really is no correct AND good z-buffer for a case like this (except a deep camera map!).

I'm working barely-legal when dealing with shaders. I'm used to having my shaders served on a silver platter and not worrying about all the under-the-hood stuff, but ever since my switch to Houdini, being the only Houdini license eater at work, it seems I have to face the beast if I want to get my work done. What I'm basically aiming for is a depth pass the compositors can get some use out of on a project where I'm doing some sprite rendering (if that's at all possible?), and the top result felt more useful than the 0.1 Ko one when I was playing around a little in Nuke... Deep camera maps - are they something worth looking into for me?

If you don't like this way of combining the samples, you can change the output from Opacity Filtering to Closest Surface. In that case Mantra will just return the closest sample - this is the default behaviour for a "correct" z-buffer. (A correct z-buffer should also use a minmax min pixel reconstruction filter.)

I tested having it on Closest Surface, but the opacity doesn't seem to work (I get black squares with the texture on), so I guess that option isn't usable for sprites - maybe when rendering points, though?

On a side note, with Mantra your scene will already be in camera coordinates, so you don't need that current->camera space change. In fact, you could just take the length of the I global variable, or P-Eye. (Being anal here now, but calculating depth like this is also a little bit wrong, as the official z-buffer should be the distance to the image plane, not the radial distance to the camera pivot calculated here.)

Hehe, I think it's me who needs a slap or two for good learning, so I'm happy you point it out. So my best shot at getting the top z-depth pass would be using takes: just add a switch in the shader and plug the z-depth stuff (with 1.0 Ko) into Cf and be happy, instead of using an extra image plane?
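For what it's worth, a rough sketch of that "switch in the shader" idea, assuming a hypothetical toggle parameter named depth_pass (again just an illustration, not the real shader):

surface sprite_depth_as_color(int depth_pass = 0)
{
    if (depth_pass)
    {
        float depth = length(P);        // camera-space distance, as above
        Cf = set(depth, depth, depth);  // write depth straight into colour
        Of = {1, 1, 1};                 // 1.0 Ko, so nothing dilutes it
    }
    // otherwise the normal sprite shading runs and sets Cf / Of
}

A take would then just flip depth_pass on for the depth render, so no extra image plane is needed.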
eetu Posted July 23, 2009

Yeah, faking is the name of the game - if it works for you, then it's all good =)

Deep camera maps allow you to correctly composite multiple transparent layers or even volumes. They retain all the depth samples without trying to combine them. There is no support for them yet in any public compositing software as far as I know, but some studios have supposedly built their own tools. Watch for them, though - they are definitely coming!

http://www.sidefx.com/docs/houdini10.0/rendering/deepshadowmaps

eetu.
Magnus Pettersson Posted July 23, 2009 (Author)

Ah cool, I've seen that somewhere (can't remember where) - it was an example with an airplane and a cloud, and there was no need to render the cloud out with a cutout of the airplane, so I'm guessing that was deep camera maps in action. I am definitely going to keep an eye out for them. Thanks for the help!