
Does Mantra optimize sampling for the stereoscopic camera?



Hi guys,

 

I am quite sure I have read somewhere that Arnold uses an optimisation for sampling when doing the L / R views.

Basically, it finds a way to use the sampling of the first eye to optimise the computation of the second eye.

 

So if a frame takes, for example, 2 hours for the R eye, it will only take 1 hour for the L eye.

 

I would like to know if this kind of mechanism applies in Mantra when we use the stereoscopic camera?

If not, would you have tricks to do something like this?

 

I was thinking of projecting my R eye render onto my whole scene and rendering this as a constant camap for the L eye.

Then evaluate the areas that are not working and re-render, through some sort of alpha, only the areas that need to be cleaned up.
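The projection itself could be something as simple as this in a point wrangle, baking the R-eye render into point colours (just a sketch; the camera path and file name here are made up):

// project each point into the R-eye camera and sample its render
vector ndc = toNDC("/obj/cam_R", @P);                   // position in the R-eye camera's NDC space
@Cd = colormap("$HIP/render/R_eye.exr", ndc.x, ndc.y);  // colour of the R-eye render at that pixel

Rendering that with a constant shader from the L eye would give the cheap first pass of the frame.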

 

But this sounds really old school / clunky to me ...

 

What would be the best options for this kind of problem inside Mantra?

 

Thanks for your time !

 

Cheers

 

E


It looks like V-Ray 3.1 implements exactly the principle I'm talking about.

 

In this video at 4:04 you can see that the left view comes nearly for free compared to the right view.

 

I'm gonna contact SESI directly for info and come back here.

Edited by sebkaine

I'm curious if you could pull this off inside of Nuke using deep image renders. Render a center camera and then calculate the left/right eyes after the fact from your deep image samples. Seems kind of like what the V-Ray method is.
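The geometry of it would just be a per-sample horizontal shift; something like this, assuming plain parallel L/R cameras (all names here are made up):

// pixel offset between the two eyes for a deep sample at a given depth
// (interocular, focal and aperture in the same units, xres in pixels)
float disparity_px(float depth, interocular, focal, aperture, xres)
{
    return (interocular * focal * xres) / (depth * aperture);
}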

Edited by fathom

We tested that last year. I was sure it'd work, the compers were sure it wouldn't, and to my annoyance the compers were right. :)

 

There aren't enough samples on the visible edges to get a good reconstruction, let alone the occluded edges. If you could control the dicing so that the samples were generated from the sum of both camera frustums, you might be able to make it work, but I suspect you'd need more access to the guts of Mantra than we currently have.

 

Last I checked Arnold didn't do any clever stereo rendering either, but that was a while ago.

Edited by mestela

Thanks for your feedback, guys!

 

The thing is that I use Fusion now and I would like to avoid adding extra human effort to the food chain,

so a workflow with extra work in comp is not a solution for me.

 

I find this idea of computing a prepass from the center camera, with all the info that is not camera dependent, and then using this as a basis to compute L/R, quite clever.

But doing this at render time is the best option.

 

Finding a way to solve this is a necessity for VR workflows, where you can easily have a 4096*2048 * 2 resolution ...

If you can find a way to get 2 images for almost the price of one, that would be stellar ...

 

I have contacted SESI; they will give me an answer soon, I guess.

This is definitely an eetu question, but I haven't seen him around for some time now!

 

I will try some old school tests with:

- compute L

- project the render as a constant camap

- isolate the faces where the projection is stretched by comparing I and N

- build a point-based @mask where black = projection OK / white = stretched (see the wrangle sketch after this list)

- build a shader that uses the shading networks for the white areas and the projection for the black ones
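For the stretch detection, something like this in a point wrangle might do it (just a sketch; the parameter names are made up and the R-eye camera position would have to be piped in):

// compare the projection direction (I) with the surface normal (N)
vector campos = chv("proj_cam_pos");            // position of the projecting (R-eye) camera
vector I = normalize(@P - campos);              // direction from that camera to the point
float facing = abs(dot(I, normalize(@N)));      // 1 = facing the camera, 0 = grazing
// grazing faces get a stretched projection, so they go white in the mask
f@mask = 1.0 - smooth(chf("min_facing"), chf("max_facing"), facing);

The @mask could then drive the blend in the shader between the straight projection and the full shading networks.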

It sounds like a shitty workflow, but it might give good enough results in certain scenarios?

 

Cheers 

 

E

Edited by sebkaine

Well, I've just got the answer from SESI support:

 

There is currently no shared shading information between the left/right eyes. Loading and processing geometry is only done once, but no shading information is shared.

 

So it's not possible at the moment.

 

eetu we need your magic dude ... :)

Edited by sebkaine
