
How to generate Stereo Cube Maps in Mantra?



Hi guys!

I am trying to generate stereo cube maps with Mantra for the Gear VR, exactly like V-Ray.

Here are 4 images rendered with the V-Ray stereo camera:

https://labs.chaosgroup.com/wp-content/uploads/2015/07/Construct_Arrival.jpg

https://labs.chaosgroup.com/wp-content/uploads/2015/07/Construct_ImminentCollision.jpg

http://www.v-ray.com/downloads/Steelblue_V-Ray_CubicVR_download.jpg

http://www.v-ray.com/downloads/blk_gear_vr.jpg

 

I am at version 25 and still have no success... so if you guys have any ideas, that would be great...

 

Some reading :

http://paulbourke.net/stereographics/stereorender/

http://paulbourke.net/stereographics/stereopanoramic/

http://www.tokeru.com/mayawiki/index.php?title=HoudiniOculusWip

 

The pain is that you must fulfill 2 constraints:

- keep the frustums of your 6 cameras forming a perfect cube
- keep the interaxial distance consistent across your 6 cameras

 

I can satisfy one of those 2 conditions, but at the moment it looks like an unsolvable problem.

But the V-Ray and OTOY guys have solved it, so there is a solution... I'm just too blind to see it at the moment! :)

 

I have attached version 25 as a starting point. This version gives:

- a correct sense of depth
- a correct cube layout
- but as I use off-axis asymmetric frustums, the frustums don't fit at 100% (gap/offset in the env)

 

I'm starting to think it's impossible to do directly with a cubemap, and that it has to be done in 2 passes:

- compute the hemispherical projection coordinates in a lens shader
- in the same lens shader, convert those hemispherical coordinates to cubemap coordinates
- then render

But that might give the same loss of quality in certain areas?
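To render the cubemap directly with a lens shader, each pixel of the layout has to be turned into a ray direction for its face. Here is a minimal Python sketch of that face-to-direction mapping, under an assumed face/axis convention (this is illustrative math, not Mantra's lens shader API):

```python
import math

def cube_face_to_dir(face, u, v):
    """Map texture coords (u, v) in [0, 1) on one cube face to a unit
    ray direction. The face/axis convention here is an assumption."""
    # Remap [0, 1) texture coords to [-1, 1] on the face plane.
    a = 2.0 * u - 1.0
    b = 2.0 * v - 1.0
    faces = {
        "+x": ( 1.0,    b,   -a),
        "-x": (-1.0,    b,    a),
        "+y": (   a,  1.0,   -b),
        "-y": (   a, -1.0,    b),
        "+z": (   a,    b,  1.0),
        "-z": (  -a,    b, -1.0),
    }
    x, y, z = faces[face]
    n = math.sqrt(x * x + y * y + z * z)
    return (x / n, y / n, z / n)
```

The centre of each face maps to that face's axis, e.g. cube_face_to_dir("+z", 0.5, 0.5) gives (0, 0, 1).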

 

Any additional brain power would be very helpful!

Thanks for your time!

 

Cheers 

 

E

cam_rig.hiplc


I downloaded the scene file. I don't think it will work, because it's not taking into account every possible angle of view for each eye. So if you looked straight down or straight ahead (or perfectly along any axis) it would look right, but looking between those angles wouldn't. See this blog post about rendering constantly offset spherical panoramas, which is what is needed.

 

http://elevr.com/cg-vr-1/

http://www.tokeru.com/mayawiki/index.php?title=HoudiniOculusWip

 

Note this is not the same as left and right eye positions like you'd find in traditional stereoscopic renders. The spherical renders can be converted to cubemaps after they are rendered.
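That post-conversion amounts to, for each cubemap pixel's ray direction, finding which latlong (equirectangular) pixel to sample. Here is a Python sketch of the lookup, assuming Y-up and a latlong image spanning the full sphere (function name and conventions are mine, not any renderer's API):

```python
import math

def dir_to_latlong_uv(d):
    """Return (u, v) in [0, 1] in a latlong image for a unit ray
    direction d = (x, y, z), with Y up and +Z at the image centre."""
    x, y, z = d
    lon = math.atan2(x, z)                    # -pi..pi around the up axis
    lat = math.asin(max(-1.0, min(1.0, y)))   # -pi/2..pi/2 from the equator
    u = lon / (2.0 * math.pi) + 0.5
    v = lat / math.pi + 0.5
    return (u, v)
```

Looking straight down +Z lands at the image centre (0.5, 0.5); looking straight up gives v = 1.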


Thanks a lot for your input, Luke! :)

 

I have studied the first part of the elevr tutorials in depth, which is great, but I can't follow the second part because I only have a Maya LT license at the moment.

The point is that I know many people render a spherical panorama and then convert it to a cubemap.

 

I might be wrong, and I might be struggling for bad reasons, but I have the feeling that outputting the 3D render directly as cubemaps is a better workflow, because:

- you don't apply any distortion to your image
- you get constant quality in every area
- you get views that are easy for compers and average guys to work with
- with an Indie license you can render 6 faces at 1080 that you stitch into a 2880*1440 cubemap
- with a latlong you can't go higher than 1920*960, which is only good enough for streaming and web stuff...

 

So I would like to work in cubemap space through as much of the pipeline as I can, and only apply the latlong distortion at the very end if needed for export.

 

So what the V-Ray guys do is exactly what I need: they output clean stereo cubemaps that look gorgeous in the Gear VR!

I have tried to contact SESI support, but I only have an Indie license so they can't help me with that...

 

The tokeru article is great, but it mainly talks about building stereo spherical projections for the DK2.

 

Well, I'll keep digging tomorrow...

Thanks again for your help!

 

Cheers

 

E


I have found some interesting comments on the UE4 forum from people struggling with the exact same problem and arriving at the exact same conclusion:

 

https://answers.unrealengine.com/questions/79870/360-degree-stereo-pre-render-needed.html

 

The problem with a cube map capture is that we need this in stereo and the stereo will not be correct for the sides and rear of the capture with a cube map capture, so we have to build an array of cameras and export each one. I imagine that if we do a 2D scene capture and export each one it would work. I could use some help with what code to use to make that happen per frame with each one and have all of the effects and procedural animation not get broken or be out of sync.

 

1) The above description would not create a stereoscopic image. There is only one perspective being captured.

2) Capturing a 3D cube in 2D space would not create a panoramic (360 degree) image, unless I am missing something.

And the problem with remapping cubemaps to 2D space in general is that they don't lend themselves to creating a proper stereo image. The math just isn't quite correct. So cube 3D scene captures aren't really even part of the solution, as far as I can see, unless they are altered in some way.

 

 

Two latlong captures are not enough for real 360 3D (you lose stereoscopy at 90 degrees left/right and you have inverted eyes in the back view).

 

It's reassuring to see that those guys have hit the exact same wall:

- left/right loss of stereo if you keep cubemap consistency
- impossibility of keeping image consistency with a cubemap in stereo

=> impossibility of getting both consistency and a stereo cubemap

 

So those V-Ray guys must use a trick, and I am wondering, like you said Luke, if they don't compute a latlong prepass to know where to sample, then convert that info to cubemap space and then render!

 

I have attached a very simple scene with:

- a stereo rig that matches the elevr tutorial, with rotation / asymmetric frustum
- a simple sphere to project the points to render
- a simple cube as a base for the final cubic map

 

start_scene.hiplc


The second link I posted in my first post is the secret sauce you're looking for. The regular stereoscopic camera in Houdini is no good for stereoscopic cubemaps. The vantage point for stereoscopic cubemaps changes for every pixel sample which is unlike any other rendering technique out there. This is a pretty good resource that goes into detail.

 

http://paulbourke.net/stereographics/stereopanoramic/

 

It would be nice to get a cubemap directly out of Mantra but the math to get there is still the same, the vantage point will change for every pixel sample. Figuring that out for a stereoscopic panoramic is pretty straightforward. Figuring it out for directly rendering to cube map space is another story. It can probably be done but I'm not sure where to start with it.
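The per-sample vantage point can be sketched with a simplified omnidirectional-stereo model: for a ray at horizontal angle theta, each eye sits on a circle of radius half the interaxial distance, offset along the tangent perpendicular to the ray. A Python sketch of the idea (my own simplification of the Bourke approach, horizontal plane only; names are not any renderer's API):

```python
import math

def ods_eye_position(theta, interaxial, eye):
    """Eye position for the ray at horizontal angle theta (radians).
    eye is -1 for left, +1 for right; the rig centre is the origin."""
    r = interaxial / 2.0
    dx, dz = math.sin(theta), math.cos(theta)  # ray direction, Y up
    # Offset along the horizontal tangent, perpendicular to the ray.
    return (eye * r * dz, 0.0, -eye * r * dx)
```

Unlike a fixed stereo pair, the offset rotates with the ray, so the baseline stays perpendicular to the view direction for every sample; that is exactly why a static six-camera rig can't reproduce it.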

 

Maybe a reference EXR could be used for the ray direction and position. Make that from the stereoscopic panoramic images and then convert it to cube map, then use that in a lens shader. The disadvantage would be having to create that reference image again after doing things like changing interocular distance but it would give a sane cubemap straight out of the renderer.


I'm the tokeru spherical pano guy. :)

 

It should entirely be possible, but the spherical pano stuff I wrote up is not the guide to follow, unless you wanna do a post conversion (which is a semi-reasonable approach). Instead, look at the ASAD shader again; I suspect the code for a regular perspective projection is what you need.

 

For a cube map, for each square of the final output you determine which square you're in, where the camera position (P) needs to be, and where the aim vector points (+X/-X/+Z/-Z/+Y/-Y), then feed that to a persp projection. Someone else updated the wiki toward the end with a guide to where the cube map regions are defined:

 

http://www.tokeru.com/mayawiki/index.php?title=HoudiniOculusWip#Viewing_the_results_with_Gear_VR

 

So you'd do a big multi-level if statement. The cubemap is a 12x2 format, so it'd be something like

 

float y_div = 1.0/12;  // 12 rows in the cubemap layout
float x_div = 1.0/2;   // 2 columns
float eyesep = 0.5;    // eye separation
vector aim;

// left eye, aim +X, top-left square
if (y < y_div*1 && x < x_div*1) {
  P.x -= eyesep;
  aim = {1,0,0};
  // persp camera code here
}

// right eye, aim +X, top-right square
if (y < y_div*1 && x > x_div*1) {
  P.x += eyesep;
  aim = {1,0,0};
  // persp camera code here
}

// right eye, aim -X, 2nd row, 1st column
if (y > y_div*1 && y < y_div*2 && x < x_div*1) {
  P.x += eyesep;
  aim = {-1,0,0};
  // persp camera code here
}

etc.


The second link I posted in my first post is the secret sauce you're looking for. [...]

 

Thanks a lot for your extremely valuable help, Luke! :)

 

- you have to shoot the full 360 with the same camera; you can't use a static stereo rig (that wasn't so clear in my mind)
- you then have to build a spherical projection to get proper ray directions and positions
- then the cherry on the cake is to convert this directly to a cubemap

 

I'm starting to see some light. My biggest mistake was thinking this was a very easy/simple problem to solve, but it is definitely tricky... at least for me... :)

 

I will study Matt's shader in depth tomorrow and see if there is a way to plug in the spherical-to-cubemap conversion directly!


It should entirely be possible, but the spherical pano stuff I wrote up is not the guide to follow [...]

 

Thanks a lot for your answer, Matt! Things are starting to become clearer.

 

Like I said, I don't want to do hemispherical output; I could only use it as a prepass if needed.

For the cubemap output, I haven't understood 100% what you mean?

 

But I will focus on your lens shader and the ASAD shader tomorrow, and things should be a little clearer!

I hope I will have a proper solve in a day or two! I'll let you know!

Thanks to both of you, this damn stuff was driving me nuts!


I suspect the code for a regular perspective projection is what you need.

 

If I understand the problem correctly, the code for the perspective camera wouldn't work for the second eye. This is essentially the same thing Emmanuel posted in the scene file in the first post. Simply pointing a stereo camera rig in six directions will break down as the view angle moves further from the center of an axis. Maybe I'm missing something about that code, though?


Matt, I will dive into this today, but I am wondering if Luke isn't right after all. I will try to sum up everything I have found during my tests to explain the problem.

 

The goal is to match the V-Ray feature. If you analyse those images you see that:

- the right and left images have negative parallax across all of Z
- the right and left images converge slightly toward a central point at far Z
- so this means they use a ZPS which is located, imo, slightly behind the object at max Z
- basically I think they take their scene, create a bounding box that just encloses everything, and set the ZPS at the radius of that bounding box
- the fact that the images converge slightly shows that they use either toe-in / asymmetric frustum / HIT

 

- I don't think they use toe-in, because you would lose the cube you want by introducing vertical parallax into the equation = bad idea
- so the only good-enough methods are either HIT (horizontal image translation) or an asymmetric frustum
- with HIT you have to offset things manually in post to control them, so I think they don't use that either
- the best option is the asymmetric frustum; it solves most problems and also gives you a perfect cube if you set the ZPS at the bounding box of your scene

 

=> Basically I arrive at the same conclusion as the elevr guy:

- use a rotating stereo rig
- with the ZPS located at the bounding box of the scene
- with an asymmetric frustum

 

So the first scene I posted is, I think, the best you can get before finding the magic trick:

- all cameras converge to a perfect cube
- very clean parallax
- control over where your ZPS is
- the only thing missing is frustums that match perfectly, to avoid any junction problems between cube faces

 

The second scene I posted, while very simple, contains imo all the parameters needed to make things work, but we need to find how to solve this frustum offset.

 

On this subject I share the pov of the guys on the UE4 forum. By default it is just impossible to get a perfect match, so I see 2 options:

- you still use a perspective projection but deform it so that each face matches. I have started testing in this direction by modifying the field of view of my camera according to a ratio, but in the end it's a bad idea, because you are watching your scene through different lenses, and that introduces weird distortion when you watch it in VR

- I'm also afraid the solution you offer, shooting 6 projections, will have the same problem of gaps in certain areas

 

=> The more I look at it, the more I think one working approach is what Luke said: you compute where each point to render will land on a spherical projection, using the rotating rig to account for the pivot offset, then you convert the result to a cubemap projection and start sampling at those coordinates.

 

The shader eetu made for the PBR bake is close to this idea... I would have hoped it would be easier than that... :)

Maybe there is a magic trick I haven't seen, but again, 70% of the puzzle is in the second simple scene.

 

Cheers 

 

E


Here are the V-Ray docs links:

 

http://docs.chaosgroup.com/display/VRAY3MAYA/Stereoscopic+Camera

http://docs.chaosgroup.com/display/VRAY3MAYA/Camera+Settings

 

A friend of mine made a test in V-Ray and gave me an extremely useful piece of info.

As you can see, there are 3 modes:

 

- none = parallel, ZPS at infinity
- rotation = toe-in
- shear = asymmetric frustum

 

But when you try to render a stereo cubemap in V-Ray in shear mode, you get the following warning:

// Warning: Stereo panorama does not support shear focus. Rotation will be used instead.

So I was wrong: they don't use an asymmetric frustum, they use toe-in...
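The difference between the two convergence modes can be put in numbers. A hedged Python sketch (the helper names and the focal/aperture convention are my assumptions, not V-Ray's or Houdini's API):

```python
import math

def toe_in_angle(interaxial, zps):
    """Inward rotation (radians) of each camera so the optical axes
    cross on the zero-parallax plane at distance zps."""
    return math.atan2(interaxial / 2.0, zps)

def shear_offset(interaxial, zps, focal, aperture):
    """Asymmetric-frustum alternative: horizontal film-back shift as a
    fraction of the aperture, assuming a thin-lens style projection."""
    return (interaxial / 2.0) * focal / (zps * aperture)
```

Toe-in keeps the film backs square to the lens axes (hence V-Ray's fallback for panoramas), but it introduces vertical parallax for off-centre points, which is the trade-off discussed above.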


Great Matt, I will study that. I will also have to add the ability to render face by face, to get 1080*1080 per face with the Indie license.

 

You were right about the ASAD shader: pretty simple and extremely helpful for understanding things...

Thanks for your always helpful feedback!

