breadbox Posted July 2, 2015

Anyone got an idea how to raytrace a UV camera projection? The idea would be similar to a standard "project UVs from camera", BUT using a second geo object to bounce the UV angle (see attached picture). In this example pic the UVs from the standard camera projection would only see the front of the plane; however, the raytraced UVs using the curved sphere would show the sides and back. It seems like something that would be very possible in Houdini. I am just learning some VEX / VOPs, but it seems like some vector math to calculate a (u, v) coordinate based on the normal of the bounce object and then convert that to camera space. I would appreciate any help; scratching my head where to begin.
Robert Posted July 2, 2015

Hi, this seemed like an interesting problem so I gave it a try. The first fun problem to pop up is that when you use the reflected vector, you are using the hemisphere as a lens: you need to move your object somewhere before the focus point, or adjust your lens, to get a non-inverted / non-infinitely-small projection.

Now this solution might be really convoluted, but it seems to get close to your idea. To keep it simple I assumed that the camera shoots rays from its origin (ignoring any and all camera attributes) and calculated a reflected vector for each point. I stored the reflected vector for the sphere and the test object in the "up" attribute, so turn on visualization to see it.

The next problem: to edit the UVs of the test object, a wrangle has to run over the object itself, and so it has no direct access to the reflected vectors stored on the sphere. In my solution I run over all the points of the sphere and, for each test-object point, compare the direction from that point to every sphere point against the sphere's reflected vectors (via the dot product) to find the vector that would have been reflected onto it. This might be way easier than I'm making it, but hey, it works.

Once you have the reflected vector on the test object, I use a raycast/intersect to get the position on the sphere that would have been reflected onto the object. (While writing this I realize I could just use the point() expression, since I stored the best-match point number on the sphere, but I already did it this way, so I'm rolling with it.) When you have the position on the sphere, you can fetch any attribute from it, like the UVs or color.

I also tried the same method but with the sphere's normals as the reflecting vector. In many cases this works better than the actual reflected vector, but it's up to you. I hope I explained it enough for you to take it further, or for you to call me stupid because I misinterpreted the problem.
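Robert's reflect-and-match step can be sketched in Python to make the math concrete. This is an illustration only, not code from the .hiplc: all names are made up, the camera is a point at `cam_origin` shooting rays through each sphere point, and the "best match" is taken as the sphere point whose reflected ray points most directly at the object point (largest dot product).

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    l = math.sqrt(dot(v, v))
    return tuple(x / l for x in v)

def reflect(i, n):
    # reflect incident direction i about unit normal n: r = i - 2(i.n)n
    d = dot(i, n)
    return tuple(x - 2.0 * d * y for x, y in zip(i, n))

def best_sphere_point(obj_p, sphere_points, sphere_normals, cam_origin):
    """Index of the sphere point whose reflected camera ray points most
    directly at obj_p (i.e. the ray that would have hit that point)."""
    best, best_score = -1, -2.0
    for idx, (sp, sn) in enumerate(zip(sphere_points, sphere_normals)):
        i = normalize(tuple(p - c for p, c in zip(sp, cam_origin)))   # eye ray
        r = normalize(reflect(i, normalize(sn)))                      # bounce
        to_obj = normalize(tuple(o - p for o, p in zip(obj_p, sp)))
        score = dot(r, to_obj)
        if score > best_score:
            best, best_score = idx, score
    return best
```

The winning index could then feed a point() lookup or an intersect, as in Robert's description.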
bounceUV.hiplc
breadbox Posted July 2, 2015

This is an interesting solution. I'm not sure the result is quite what I need or expected; it's difficult to explain, so let me back up and give an overview. This problem stems from trying to bake reflections from a view-dependent camera. Most render engines' bake operations allow you to bake color, light, shadow, and diffuse, but never reflection, because from the UV perspective they know nothing of the camera and the raytraced angles. Some render engines (Vray) have a solution, but it looks like they reflect a box environment map; it's NOT actually a reflection from the POV of the render camera, and in some cases this looks wrong when you project that texture back onto the object.

The problem is usually solved by a POV perspective projection of the UVs: render from the same camera and slap the render back onto the UVs. The trouble starts when the camera can't physically see a polygon on the occluded side of the object. Does a reflection occur there, and how can we give it pixel space to render? (If a tree falls in the woods, and no one can hear it...)

My "cheat" was to instead render a "chrome" curved hemisphere behind the object, so that it would bounce a raytraced ray all around the object; from a single POV, all sides of the object will be "seen" by the camera. I need UVs to match this so I can re-project that same texture. I guess another way to describe it is an inverted spherical lens? I'll continue to look at this example, because maybe it is in fact doing that?!...
fathom Posted July 3, 2015

i *think* what you are going for is to raytrace the ndc coordinate of your plane. you just need a simple shader call to trace() to find the intersection of your reflection (or refraction), then transform that intersection to ndc coordinates and you're done.

edit: rereading the thread, i'm actually not sure this is what you're going for.

Edited July 3, 2015 by fathom
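The "transform to ndc" step fathom mentions can be sketched in Python. This is a hedged stand-in for mantra's trace()/toNDC, not their actual implementation: it assumes an unrotated pinhole camera at cam_pos looking down -z, and `scale` is a made-up parameter standing in for the focal-length/aperture ratio.

```python
def world_to_ndc(p_world, cam_pos, scale=1.0):
    """Map a world point to screen-style (u, v) with (0.5, 0.5) at center.
    Assumes the camera sits at cam_pos looking down -z with no rotation;
    a real setup would first apply the full camera transform."""
    x = p_world[0] - cam_pos[0]
    y = p_world[1] - cam_pos[1]
    z = p_world[2] - cam_pos[2]
    # perspective divide; z is negative in front of the camera
    u = 0.5 + scale * x / -z
    v = 0.5 + scale * y / -z
    return (u, v)
```

A point straight down the view axis lands at (0.5, 0.5); points off-axis spread outward with distance from center divided by depth, which is all a perspective projection is.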
zoki Posted July 3, 2015

i think this is done in the cmivfx AI cars system, where they trace traffic in front of them
breadbox Posted July 3, 2015

Actually, I may not be sure what I am asking for either!!! The topic could also have been called "how to bake a reflection from a viewpoint angle that does not see the entire object". There may be several ways or approaches to solve the problem. Robert's solution was close to what I needed, but I think I might actually need the rays to trace to depth 2? The goal would be to map the reflection of the object within the sphere dish back onto the object. I think Robert's solution works by mapping a texture of the dish onto the object, but not a reflection of the object within the dish back onto the object. (I hope someone understands that!) I'll have to prep some images to describe better what I mean.

One of my other ideas was to use an array of cameras (attached pic) and blend the output of each one together somehow. It might be similar to a slit-scan camera that only lets in one line of pixels at a time? I can't quite tell, but in actual practice this could be as simple as a polar or cylindrical UV mapping of the object, plus a spherical/cylindrical render of the scene as seen from the object's centroid. Whatever the method, the UVs need to match the rendered output from the camera, so the render can be mapped back onto the object as a texture.
fathom Posted July 3, 2015

but i'm kind of confused here. you want to generate uv's that match a render from your camera so you can project the texture back onto your object. isn't that a round trip? you trace from the camera, bounce off a surface, then hit the object in question. you sample the color of that object and then figure out some way to generate uv's to tell you where that color came from. but if you have enough information to sample the color of the backside of your object, what's the point of the generated map?
breadbox Posted July 3, 2015

The reason appears a bit esoteric at first, but it is for projection mapping. I need a way to render the scene so that it will play back as a texture in real time. This normally works by rendering the scene from a POV camera and then baking the UVs from that same camera projection. But now I have a unique situation where I cannot see the entire object from the POV.

This is actually fine for diffuse and lighting, because most game-engine-style baking will let you bake to a UV pelt of an unwrapped object; diffuse and light do not change based on viewing angle. Reflections, however, do depend on viewing angle, and that is the problem. Because the object will be viewed from any (and every) possible angle, there is technically no "right" POV, or any single POV. Normally I just cheat this and render the object from the "center" POV, but in this particular case the camera is occluded from the left and right sides of the object. I have tried to stitch "right" and "left" POV renders, but there is a seam in the middle where they meet, because the environment changes greatly depending on viewing angle. I have also tried a three-cam setup (left, right, center), but then I have two seams. That led me to think that if there were a way to create an array with many, many cameras and then blend each view, it might work.

As it happens I have not found much documentation on it, but I believe that when you try to bake a reflection in Vray or Mray, it does a kind of spherical env map, which is not technically from a POV; it more or less blends all possible spherical camera positions. I do have a test image of this; it technically works, but it looks very different and, in my opinion, not as "correct" as a single POV. For all these reasons, it's a losing battle and I must choose the most acceptable way to "fake" it.
fathom Posted July 3, 2015

how are you generating your uvs for your sphere map? you need to calculate a reflection vector -- you can't just grab the normals. env mapping is generally pretty good if you're trying to get reflections for things reasonably far from your object. which, of course, rules out self-reflections. an env map is going to be your best bet for a single texture that can simulate reflections from any arbitrary angle.
breadbox Posted July 3, 2015

yea, I suppose a spherical ENV map is a decent option. I generate the UVs by using a UV Project node set to polar (this is a pretty standard mapping in any software, actually). A matching render can be achieved by rendering a spherical 360-degree camera from the object's centroid (the object is hidden, and only the ENV is rendered). I think there is actually a built-in method in Mray and Vray: when you bake a reflection to UV, because they do not ask you for a camera, I think they do a spherical ENV projection onto the object (I have yet to find it in the docs, but that's based on what I see).

attached are 2 renders using a bake method:
1. ViewpointPOV - this is what the scene "should look" like from the "center" POV of the camera. notice how all the reflections feel proper from the angle of view, nice and warpy around the sphere.
2. SphericalBake - this is a bake-to-UV method. notice how the reflection feels "flat". yes, the sky is up and the ground is down, but it's almost like an "ortho" projection of the environment.

It's unfortunate that the bake method feels "flat", because that really is the only solve for the problem. If the reflection needs to be seen from all angles, it inherently creates a paradox of a problem: once baked, you can rotate around the object and every polygon will be "painted" with reflection; it's just not quite the reflection of the "eye" ray from the POV that you want.
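For reference, the polar (lat-long) mapping described above looks something like this in Python. The direction from the object's centroid to a point becomes a (u, v) pair in 0..1; this is the generic spherical convention (longitude to u, latitude to v), which may differ in orientation or seam placement from Houdini's UV Project polar mode.

```python
import math

def polar_uv(direction):
    """Lat-long UV from a unit direction: longitude -> u, latitude -> v."""
    x, y, z = direction
    u = 0.5 + math.atan2(x, z) / (2.0 * math.pi)
    v = 0.5 + math.asin(max(-1.0, min(1.0, y))) / math.pi   # clamp guards fp error
    return (u, v)
```

Because this uses position direction rather than a reflection vector, it produces exactly the "flat", ortho-feeling bake described above: every point samples the environment straight through itself, with no view-dependent bending.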
fathom Posted July 4, 2015

yeah, the problem is you're using a polar projection. you need to calculate your uvs using the reflection vector.

    // surface position in camera space = the eye-ray direction
    // (ptransform for positions; vtransform is for direction vectors)
    vector I = ptransform("/obj/cam1", v@P);
    vector Nc = ntransform("/obj/cam1", v@N);
    float eta = 1.5; // index of refraction
    float kr, kt;
    vector R, T;
    fresnel(normalize(I), normalize(Nc), eta, kr, kt, R, T);
    // lat-long map the reflection vector (values in radians)
    v@uv.y = asin(R.y);
    v@uv.x = atan2(R.x, R.z);

you'd probably want to add something in there to put the R vector into the space of your env sphere; right now it assumes the env sphere is in camera space. also, i used fresnel so you could take the kr value and adjust the amount of reflection. the "eta" is the refractive index, which controls how much rolloff happens in the kr value. if you don't use kr, then eta doesn't really make any difference for the R vector. the transmission vector (T) could potentially be used as well for a fake refraction (along with kt), and then the eta value is important.

edit: didn't test this thoroughly, but i think that's the math...

Edited July 4, 2015 by fathom
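The reflect-then-lat-long math in this post can be sanity-checked outside Houdini with a small Python sketch. The camera transform and fresnel weighting are omitted (a plain mirror reflection yields the same R direction), and the incident vector is assumed to point from the camera toward the surface, which is the usual shading convention:

```python
import math

def reflect(i, n):
    # mirror incident direction i about unit normal n
    d = sum(a * b for a, b in zip(i, n))
    return tuple(a - 2.0 * d * b for a, b in zip(i, n))

def reflection_uv(i, n):
    """(u, v) in radians from the reflection of i about n,
    using the same atan2/asin lat-long mapping as the VEX sketch."""
    r = reflect(i, n)
    u = math.atan2(r[0], r[2])
    v = math.asin(max(-1.0, min(1.0, r[1])))
    return (u, v)
```

A ray hitting a surface head-on (incident (0, 0, -1), normal (0, 0, 1)) reflects straight back to (0, 0, +1), which maps to (0, 0): the center of the lat-long environment, as you'd expect for a mirror facing the camera.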