sebkaine Posted October 29, 2015
Hi guys,
I am quite sure I have read somewhere that Arnold uses an optimisation for sampling when rendering L / R views. Basically it finds a way to use the sampling of the first eye to speed up the computation of the second eye, so if a frame takes for example 2 hours for the R eye, it will only take 1 hour for the L eye.
I would like to know if this kind of mechanism applies in Mantra when we use the stereoscopic camera. If not, would you have tricks to achieve something like it?
I was thinking of projecting my R eye render onto the whole scene and rendering that as a constant camera map for the L eye, evaluating the areas that don't hold up, and re-rendering only those areas through some sort of alpha mask. But this sounds really old school / clunky to me...
What would be the best option for this kind of problem inside Mantra?
Thanks for your time!
Cheers
E
sebkaine Posted October 30, 2015
It looks like V-Ray 3.1 has exactly the feature I'm talking about. In this video, at 4:04, you can see that the left view comes nearly for free compared to the right view.
I'm going to contact SESI directly for info and come back here.
fathom Posted October 30, 2015
I'm curious if you could pull this off inside of Nuke using deep image renders: render a center camera and then calculate the left/right eyes after the fact from your deep image samples. Seems kind of like what the V-Ray method is doing.
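(Not from the thread, but to make the deep-reprojection idea above concrete, here is a minimal numpy sketch of the geometry involved: back-project one centre-camera sample using its depth, then project it into an eye offset along X. The focal/aperture values and the 3.25 interocular are made-up example numbers in scene units of cm, and real deep data would carry several samples per pixel rather than one.)

```python
# Minimal sketch of reprojecting a centre-camera depth sample into an
# offset eye.  Pinhole camera looking down -Z; focal/aperture follow the
# usual "horizontal filmback" convention.  All numbers are example values.
import numpy as np

def pixel_to_camera(px, py, depth, res, focal=50.0, aperture=41.4214):
    """Back-project a pixel + camera-space depth to a camera-space point."""
    xres, yres = res
    ndc_x = (px + 0.5) / xres - 0.5            # [-0.5, 0.5] across the frame
    ndc_y = 0.5 - (py + 0.5) / yres
    scale = aperture / focal                   # filmback width over focal length
    x = ndc_x * scale * depth
    y = ndc_y * scale * depth * (yres / xres)  # keep pixels square
    return np.array([x, y, -depth])

def camera_to_pixel(p, eye_offset, res, focal=50.0, aperture=41.4214):
    """Project a camera-space point into an eye shifted along +X by eye_offset."""
    xres, yres = res
    q = p - np.array([eye_offset, 0.0, 0.0])
    scale = aperture / focal
    ndc_x = q[0] / (scale * -q[2])
    ndc_y = q[1] / (scale * -q[2] * (yres / xres))
    return ((ndc_x + 0.5) * xres - 0.5, (0.5 - ndc_y) * yres - 0.5)

# A sample 2.5 m in front of the centre camera, pushed into a left eye
# 3.25 cm to the side: the further away the sample, the less it moves.
p = pixel_to_camera(960, 540, depth=250.0, res=(1920, 1080))
print(camera_to_pixel(p, eye_offset=-3.25, res=(1920, 1080)))
```

The failure mode mestela describes in the next post shows up exactly here: pixels that land on disoccluded regions of the new eye have no centre-camera samples to pull from.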
mestela Posted October 30, 2015
We tested that last year. I was sure it'd work, the compers were sure it wouldn't, and to my annoyance the compers were right. There aren't enough samples on the visible edges to get a good reconstruction, let alone the occluded edges. If you could control the dicing so that the samples were generated from the sum of both camera frustums you might be able to make it work, but I suspect you'd need more access to the guts of Mantra than we have currently.
Last I checked Arnold didn't do any clever stereo rendering either, but that was a while ago.
Juraj Posted October 31, 2015
Also interested in this. Oblique has published some good shaders for Arnold; one of them has this feature.
http://s3aws.obliquefx.com/public/shaders/help_files/Obq_Bend4Stereo.html
I was wondering if a similar approach would be applicable to rendering shots with subtle camera movement. It would be a great optimization, wouldn't it?
sebkaine Posted October 31, 2015
Thanks for your feedback guys!
The thing is that I use Fusion now and I would like to avoid adding extra human effort to the food chain, so a workflow with extra work in comp is not a solution for me. I find the idea of computing a prepass of the center camera with all the info that is not camera dependent, and then using that as a basis to compute L/R, quite clever, but doing it at render time is the best option.
Finding a way to solve this is a necessity for VR workflows, where you can easily hit 4096*2048 * 2 resolution... If you can find a way to get 2 images for almost the price of one, that would be stellar...
I have contacted SESI; they will give me an answer soon I guess. This is definitely an eetu question, but I haven't seen him around for some time now!
I will try some old school tests:
- compute L
- project the render as a constant camera map
- isolate the faces where the projection is stretched by comparing I and N
- build a point-based mask where black = projection OK / white = stretched
- build a shader that uses the regular shading networks for the white areas and the projection for the black ones
It sounds like a shitty workflow, but it might give good enough results in certain scenarios?
Cheers
E
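(A hedged sketch of the mask step from that list, not anything from the thread: a Python SOP that writes the black/white mask by comparing point normals against the direction to the projection camera, which stands in for the I vs N test described above. The camera path, attribute name, and threshold are all assumptions to adjust per shot.)

```python
# Rough Python SOP sketch: mark points where a projection from
# /obj/cam_left (assumed path) will be stretched.
#   mask = 0 -> projection OK, mask = 1 -> stretched, re-shade for real.
# Assumes point normals N exist and the object transform is identity
# (otherwise transform cam_pos into the geometry's object space first).
import hou

node = hou.pwd()
geo = node.geometry()

cam = hou.node("/obj/cam_left")                     # projection camera
cam_pos = cam.worldTransform().extractTranslates()  # hou.Vector3

mask = geo.addAttrib(hou.attribType.Point, "mask", 0.0)
threshold = 0.3   # cos of the grazing angle; pure guesswork, tune per shot

for pt in geo.points():
    n = hou.Vector3(pt.attribValue("N")).normalized()
    to_cam = (cam_pos - pt.position()).normalized()
    facing = n.dot(to_cam)
    # Back-facing or grazing points get mask = 1 (white = stretched)
    pt.setAttribValue(mask, 0.0 if facing > threshold else 1.0)
```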
sebkaine Posted November 2, 2015
Well, I've just got the answer from SESI support: there is currently no shading information shared between the left/right eyes. Loading and processing geometry is only done once, but no shading information is shared. So it's not possible at the moment.
eetu, we need your magic, dude...
sebkaine Posted November 10, 2015
The V-Ray doc about this feature: http://docs.chaosgroup.com/pages/viewpage.action?pageId=8356536