
Motion Vector Shader for Mantra?



hello all

i am wondering if anyone has come across or developed a motion vector shader for mantra. i have used the lm_2DMV mental ray shader (http://www.alamaison.fr/3d/lm_2DMV/lm_2DMV_ref.htm) with maya, and would love to be able to do something similar with houdini/mantra to allow for post-process motion blur with RE:Vision's ReelSmart Motion Blur. thanks in advance for any and all leads!

best,

scott


Off the top of my head, it should just be a matter of importing the velocity attribute into a VOP surface shader and wiring VX, VY, VZ into CR, CG, CB. Just make sure that your velocity vector is normalized.
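A minimal VEX sketch of that idea (assuming a point velocity attribute "v" is bound into the surface context and pre-normalized in SOPs):

   surface vel_preview (
      vector v = 0;   // bound velocity attribute, pre-normalized in SOPs
   )
   {
      // wire the velocity components straight into the color channels
      Cf = v;
   }

(As the posts below point out, a post-process blur tool actually needs the vectors projected to 2D screen space and remapped to positive values first.)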


thank you sum][one for the .hip file! i've learned quite a bit from this choice snippet. thanks for taking some time while the particles cached ;)

++ scott

sum][one wrote (May 4 2007, 08:58 AM):

here's a sample HIP file... caching particles lets you spend time on forum things :D

hope this helps

cheers



thanks sum][one, i was looking for a similar solution too.

thanks broken pixel for posting it too.


  • 3 weeks later...

Regular 3D motion vectors will not work for that.

The alamaison website gives some steps that need to be recreated in mantra:

* Projection of 3D vectors to 2D screen space

* Remap to positive values

* Normalization

I think there is a VOP operator for doing that 3D-to-2D conversion.
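For the 3D-to-2D part, VEX exposes the conversion as the toNDC() function (which, I believe, the "To NDC" VOP wraps):

   // in a shading context, toNDC() maps a position from the current
   // space into normalized device coordinates, where x and y run
   // 0..1 across the image plane
   vector ndc = toNDC(P);

The post below walks through the full encoding.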


I know close to nothing about this post-filter "reel smart" thingie, so what follows is based on a very quick scan of the alamaison page and, well, just "thinking out loud"...

* Projection of 3D vectors to 2D screen space

Assuming you calculate the velocity in SOPs (as sum][one did), then the vectors will be, by definition, in object space. So the first thing to do would be to put them in camera space:

   vector vel = ow_vspace(v); // transform to camera space

(Or you can use the DirectionSpaceChange VOP and set it to "Direction Other than Normal").

Now you can transform it to a "normalized screen" space -- one where both the width and height of the image plane run from 0 to 1, which is what the format requires. This is a directional vector, but we could temporarily treat it as though it were a point and pass it to the toNDC() function -- which should also take care of aspect ratio issues, I think.

   vel = toNDC(vel)*{1,1,0}; // transform to NDC space

* Remap to positive values

* Normalization

For the 2D directional part of the encoding (red and green channels), this is trivial:

   vel = normalize(vel) * 0.5 + 0.5;

However...

The "intensity" part (blue channel) represents the length of the velocity vector normalized to the length of the longest vector in the whole sequence. Note however, that we're talking about a *2D* (screen space) vector here, so that, for example, a point moving super-fast but directly away or toward camera has *zero length* in raster space. Getting the value of the longest v in the sequence is not something you can do in the shader, but you could do it in CHOPs.

One way that comes to mind would be to, in a separate branch in SOPs, assign "v" to every point's position, then do a uv-project from camera. Take the uv attribute to CHOPs and extract the longest one -- the length of this (2D) vector becomes your normalizing factor (let's call it "maxlen"). Finally, scale the (3D) "v" attribute of the original geometry by 1/maxlen. Meaning that the shader will now expect a pre-normalized vector.

That's not the end of it though :) You'll need to pass a further scaling factor to the app to convert to pixel units. For this, you could simply use the number of pixels in the image's largest dimension.
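A sketch of that normalizing pass as a single detail wrangle, for later readers (VEX wrangles and the camera-aware toNDC() overload postdate this thread, and "/obj/cam1" is just a placeholder camera path):

   // detail wrangle: longest screen-space velocity in this frame
   float maxlen = 0;
   for (int pt = 0; pt < npoints(0); pt++)
   {
      vector p = point(0, "P", pt);
      vector v = point(0, "v", pt);
      // project both ends of the motion segment into NDC space
      vector a = toNDC("/obj/cam1", p);
      vector b = toNDC("/obj/cam1", p + v);
      maxlen = max(maxlen, length(set(b.x - a.x, b.y - a.y)));
   }
   f@maxlen = maxlen;

Run that per frame, take the maximum over the sequence, scale "v" by 1/maxlen, and use the image's largest pixel dimension as the scaling factor on the application side.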

With all of the above taken care of, your final shader might look something like this:

surface ReelKewl (
      vector v = 0;   // bound velocity vector (attribute)
   )
{
   // encode the 2D direction (red, green channels)
   vector vel = toNDC(ow_vspace(v))*{1,1,0}; // in NDC space
   vector dir = normalize(vel) * 0.5 + 0.5;  // unit-length and positive

   // encode the *pre-normalized* intensity channel (blue)
   Cf = set(dir.x, dir.y, length(vel));
}

Needless to say: none of the above has been tested!!!

Much of that awkward normalization step required by their spec is (should be) completely unnecessary for floating point images... but "oh well"... they obviously wrote the code with 16-bit int images in mind. Maybe someone here feels like writing a motion blur COP filter (HDK probably) that handles floating point input properly (Mark? ;)).

Also, note that this hack can't handle an object that crosses the screen in one shutter-open/close interval, as there's nothing to render at either end.

Anyhoo... hope that some of the above helps.

Cheers!


very nice work mario and sum

what would be very useful (because I checked sum's file) is to cover all situations:

-if your objects and camera are moving

-if just objects are moving

-and if all is still except the camera

hope this comes out as good as your sss and glass work :blink:

z


Mario wrote: "Maybe someone here feels like writing a motion blur COP filter (HDK probably) that handles floating point input properly (Mark? ;))."

I've never used it but isn't that what the velocity blur cop does already?



Ooohhh yeahhhh.... forgot about that COP.

I just tried it and it doesn't work all that well -- actually, it's closer to the "unusable" side of "not that well".

I haven't looked at the code too closely (it's in VEX), but it would seem that instead of smearing along v, it uses v as a lookup offset -- which makes sense since it's in VEX and so it can't create colors at random locations -- but it also means that the blur can only happen inside the object (it doesn't smear outside the rendered bounds). :(
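To illustrate the limitation, here's a hedged sketch of the gather idea in VEX (not the COP's actual code; "colormap" and "velmap" are hypothetical file paths for a rendered color pass and velocity pass):

   surface gather_blur (
      string colormap = "";   // hypothetical: rendered color pass
      string velmap   = "";   // hypothetical: rendered velocity pass
      int    nsamp    = 16;
   )
   {
      // each output pixel *pulls* color from the source image along
      // its own velocity, so it can never deposit ("smear") color
      // outside the object's rendered bounds
      vector vel = texture(velmap, s, t);
      vector sum = 0;
      for (int i = 0; i < nsamp; i++)
      {
         float k = float(i) / (nsamp - 1) - 0.5;  // -0.5..+0.5 along v
         sum += texture(colormap, s - vel.x*k, t - vel.y*k);
      }
      Cf = sum / nsamp;
   }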

One really really ugly workaround (for a very limited number of cases) is to expand the velocity layer of the rendered image to include the maximum MB displacement. Here are some test pics:

post-148-1180553142_thumb.jpg

Here's the test hip. (it also includes one possible method for embedding all motion into the velocity attribute: point deformations, SOP transformations, plus object and camera transforms).

fakemb1.hip
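(For the "embed all motion" part, the classic SOP-level tool is the Trail SOP set to "Compute Velocity"; in modern Houdini a point wrangle can do the same by differencing frames -- a sketch, assuming input 1 is fed the same geometry time-shifted one frame back, with any object-level transforms already baked into SOPs:

   // velocity from deformation plus baked-in transforms
   vector pprev = point(1, "P", @ptnum);
   v@v = (@P - pprev) / @TimeInc;   // units per second

)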

P.S: In my previous post, where I wrote toNDC(v), I should have written toNDC(P+v)-toNDC(P).


  • 1 year later...
Mario wrote: "Or you can use the DirectionSpaceChange VOP and set it to 'Direction Other than Normal'."

Just in case somebody finds this, tries to replicate it, and gets stuck on finding the DirectionSpaceChange VOP:

It doesn't exist anymore (or is hidden), though it still shows up in the help. The Transform VOP node is the way to go here; don't be fooled by its help file, which seems to refer to what is now the Transform Matrix VOP.
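(If I'm not mistaken, the textual VEX equivalent of that Transform VOP, for a direction vector, is:

   // transform the velocity direction from object into camera space
   vector vel = vtransform("space:object", "space:camera", v);

with vtransform() for directions, as opposed to ptransform() for points and ntransform() for normals.)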


Anyone able to shed light on this?

Perspective based sampling

You can specify (on a per-object basis) to perform motion blur after primitives have been projected to the screen. This is basically equivalent to doing an occlusion corrected 2D motion blur. It is slightly faster than true 3D sampling (which mantra does by default) but it is not as accurate. This is controlled by the Perspective Correct Blur checkbox, which can be added to the Sampling sub-tab of the Properties tab of the render node.

I have been thinking off and on about the use of 2D motion blur and velocity attributes, and the previous post got me thinking about it again.

I can add this parameter to the object moving through the scene, but how do I know it is really working?

Could you write this out to an image file?

need more time to dig in here, but I'm thinking out loud.

-k
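(On writing it out to an image file: exported VEX shader parameters can be picked up as extra image planes on the Mantra ROP, so a minimal sketch for inspecting the raw vectors might be:

   surface show_vel (
      vector v = 0;            // bound velocity attribute
      export vector mv = 0;    // add an extra image plane with
   )                           // VEX variable "mv" on the ROP
   {
      mv = v;          // raw 3D velocity, for inspection
      Cf = {1, 1, 1};  // beauty can be anything here
   }

)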


urk... mayhap storing a per-frame point cloud with velocity attributes on disk... you would still have to avoid the occlusion bit... then you could use it...

just more thinking out loud..

-k


