
motion vector pass for Nuke


Butachan


Hi,

I have been researching for a while how to export motion vectors from Houdini Mantra to Nuke, and it wasn't easy.

For starters, Nuke seems to swap the R and G values for some reason, which did not make it easy to troubleshoot.

And I am still a bit confused about the difference between camera space and NDC space, though I kind of understand why I need to use NDC space.

This is what I have got so far; I hope it helps anyone who needs it. And if by any chance someone knows a better workflow, it would be much appreciated.

 

test_MV - Newandimproved.hip


  • 2 weeks later...

@djiki Are you sure about not needing to transform to screen space?

I really haven't had the time to test it in depth, but the test that I made seems to work fine.

The original setup from the help file did not give me proper results in either Nuke or AE.

Just FYI, I got my setup from here (Japanese, sorry):

http://nomoreretake.net/2016/11/13/houdinivector-blur-pass-to-nuke/

The post is in Japanese, but the author also got help here on the forums:

 

And after testing, I only flipped the channels, as Nuke seems to like it the other way (not really an issue, more of a lack of implementation on Nuke's side).

But my question is sincere: are you sure the change to screen space is not needed? Because, as I said, I haven't had the time to test the setup without the transform, but it does make sense to me: if Pblur is in world or camera space, the x value will differ from the screen-space value, right?

 


Yes, you are right,

the updated documentation states that getblurP() returns the position in camera space and not NDC space, so a conversion is required for proper motion blur.

Differences between camera space and NDC: camera space is a regular 3D space. Just imagine a new coordinate system in your scene whose origin is at the camera position, oriented so that the positive part of the Z axis points in the direction the camera looks and the Y axis is aligned with the up vector of your camera. Now, if you express the point coordinates of all objects in that coordinate system, those readings are the coordinates in camera space.

The NDC coordinate system involves a perspective transformation. That is a "perspective correction" applied to the x and y coordinates from camera space according to the point's distance from the camera. For example, say you have two identical objects moving in parallel along the X axis at a speed of 1 unit per frame. One object is closer to the camera and the other is far away from it. Rendering the motion vectors of those objects in camera space would give vectors of 1 unit length, with values (1,0,0), for both objects. That is because camera space behaves like world space: you can translate a line of some length to any position in such a space and its length will always stay the same. In NDC space that is not the case. A line closer to the camera, when projected onto the camera sensor, will appear longer than an identical line positioned deeper in the scene, far away from the camera. In our case of objects in motion, even though both objects have the same speed of 1 unit/frame expressed in world or camera space, when converted to NDC space the object closer to the camera will have a larger motion vector while the further one will have a smaller one.
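
For reference, here is a minimal VEX sketch of that conversion in a Mantra surface shader. It assumes getblurP() returns the shading position in camera space, as described above, and that the vector is written out through an export parameter bound to an extra image plane; the parameter name "mv" and the shutter sample times are illustrative only, not the exact setup from the attached .hip file.

    // Surface shader sketch: write an NDC-space motion vector to an extra image plane.
    surface motion_vector_export(export vector mv = 0)
    {
        // getblurP() returns the shading position in camera space at the given
        // shutter time (0 = shutter open, 1 = shutter close); toNDC() projects
        // that camera-space position into NDC (0..1 screen space plus depth).
        vector p_open  = toNDC(getblurP(0.0));
        vector p_close = toNDC(getblurP(1.0));

        // Raw NDC-space motion: x and y are resolution independent, and the
        // z component keeps the motion toward or away from the camera.
        mv = p_close - p_open;
    }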

I modified your scene by duplicating the sphere with your animation and positioning it deeper in the scene. Try rendering in camera space and in NDC space and look at the differences in the motion vectors in Nuke.

test_MV_modified.hip

Multiplying by the resolution in the shader, like in your original example, is not a wise choice. That way you kill the Z coordinate. Someone will ask why you need a Z coordinate for 2D motion blur, but some advanced algorithms do use it, especially in situations where the trajectories of two moving objects overlap when seen from the camera position. Also, sometimes you want to distinguish pixels moving toward the camera from those moving away from it, and that Z coordinate (the third component) of the motion vector can be used for that. Houdini camera space is defined so that the camera looks along the positive Z axis, meaning pixels with a positive value in that component are moving away from the camera and vice versa. Nuke's camera space, on the other hand, looks along the negative Z axis, which is why you need to exchange the X and Y components.

But that shouldn't be done in the shader. Suppose you are working in a large company and your render output is used in different compositing packages. Exchanging the coordinates in the shader will make that render "Nuke specific", and you certainly don't want to render extra motion passes for every other compositing package. That's why you should leave the output as it is and do all the "Nuke specific" things in Nuke.

Setup in Nuke:

[Attached screenshot: NukeSetupForMotionVectorsFromHoudini.gif]

 

The Amount parameter is basically your uniform scale for the motion blur; in this example it is oversized so you can see something. The exchange of the x and y coordinates and the multiplication by the resolution are applied in the Expression node.
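
Written out as VEX-style pseudocode, the math the Expression node applies is roughly the following (illustrative only, not literal Nuke expression syntax; the names and the exact swap direction are assumptions based on the description above):

    // ndc_mv is the NDC-space motion vector rendered from Houdini,
    // xres/yres are the image resolution, amount is the uniform scale.
    float motion_u = ndc_mv.y * xres * amount;   // x and y exchanged for Nuke,
    float motion_v = ndc_mv.x * yres * amount;   // then scaled from NDC (0..1) to pixels
    // ndc_mv.z can be passed through unchanged if a depth-aware blur needs it.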

 

Cheers   

