Hello,
This is more a question of curiosity at the moment. Compositing is usually done after rendering, on multiple passes, but how would one go about compositing more procedurally at render time, using shaders? Sort of bringing the compositing phase into the SOP/MAT context. I don't mean grabbing a pre-existing image from disk, or creating something procedurally and then combining the two in a shader with layering or the Composite VOP. I mean doing the same calculations, but on the pixels that would have been rendered if the object weren't there: something like the transmission and ray-bending calculations on glass-like surfaces. Basically I'm talking about getting that "what would be rendered if the object weren't there" information into the shading context, as a layer that can be calculated on, blended with, and so on. Is there a specific VOP for this, something I haven't encountered yet?
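To make the idea concrete, here is a minimal VEX surface-shader sketch of what I imagine, continuing the eye ray through the surface to see what it would have hit. This is only a guess at how it might look; the shader name is made up, and I'm assuming trace() can export the hit point's Cf and that a small bias is enough to avoid self-intersection:

```vex
// Sketch: show "what would be rendered if this object weren't there".
// Continues the camera ray from P along the incoming direction I and
// imports the color of whatever it hits behind this surface.
surface see_through_probe()
{
    vector bg = 0;
    trace(P, normalize(I), Time,
          "bias", 0.001,   // nudge past our own surface
          "Cf", bg);       // import the hit point's shaded color
    Cf = bg;               // for now, just display the "background"
    Of = 1;
}
```

Once that background color is available as a variable, it could presumably be treated like any compositing layer inside the shader.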
This could provide some interesting possibilities, I think. For example, blending modes could be controlled more intuitively at the SOP level with attributes, without having to promote attributes to shader parameters, render them as separate passes, and then try to do procedural operations on them in COPs or some compositing application.
A more specific example: a lot of grids pointed towards the camera, moving around in world or object space, with materials applied to them that dictate, based on geometry attributes, whether their surface color or texture is overlaid over everything in the 3D space behind them, added to it, multiplied with it, used in a difference or subtract calculation, or something else.
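In shader terms, that example might look something like the sketch below. The attribute name `blendmode` and the parameter defaults are mine; I'm relying on the fact that Mantra can override a shader parameter with a geometry attribute of the same name, and again assuming trace() can export the hit color:

```vex
// Sketch: per-grid blend mode driven by a geometry attribute.
// "src" and "blendmode" would be overridden by point/primitive
// attributes of the same name on each grid.
surface attr_blend(vector src = 1; int blendmode = 0)
{
    vector bg = 0;
    // Fetch what is behind this grid along the camera ray.
    trace(P, normalize(I), Time, "bias", 0.001, "Cf", bg);

    if      (blendmode == 0) Cf = src;            // over
    else if (blendmode == 1) Cf = bg + src;       // add
    else if (blendmode == 2) Cf = bg * src;       // multiply
    else if (blendmode == 3) Cf = abs(bg - src);  // difference
    else                     Cf = bg - src;       // subtract
    Of = 1;
}
```

The attractive part would be that the blend mode becomes just another attribute to animate or randomize in SOPs, per grid.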
I hope I communicated that clearly enough. Has anyone tackled this?