2D motion vector as collision for fluids?



Recently I watched this breakdown.



I'm not sure I understood how the author did collisions with fluids. It seems to me like he generated motion vectors in a compositing application (Nuke's VectorGenerator?) and then somehow transferred them to Houdini, where he generated a velocity field based on those vectors. But I don't understand how one can transfer 2D motion data from a single camera into 3D. It isn't possible, is it?




I'd say that for such collisions, proxy geometry must be animated, like in this video (1:00).



Do you have experience integrating real-footage collisions with fluid sims? Are there any workflows that can speed up this process?





It's all about money (and time, but since time is money ....)


1. For the best-quality results, preparation should start on set, before filming. You have to provide a stereo camera rig or a depth sensor (or several of them for different points of view). Then appropriate calibration (usually with a rigid proxy model like a cube of known size or similar) is done by filming that rigid object with all cameras fixed in the scene. This allows proper camera positioning later in post-production, with respect to their relative positions.

One way or another, you can reconstruct depth info later in post-production. First you have to do 3D camera tracking (if the filming camera moves at all) and find the lens distortion, if any. This is a necessary step that eliminates pincushioning in the reconstructed 3D scene. If distortion exists, you have to undistort all of your footage before it is used in 3D software. Next, you have to reconstruct motion vectors. You can do that in several ways, but which one you choose depends on the software you used to extract the depth data (from the stereo camera rig or depth sensor). Some packages provide those for you automatically, others don't. If not, you'll have to compute optical flow (in the usual 3D camera tracking software). Those are just 2D motion vectors, but that's OK, since you will project them at the right position in 3D space.

Now, in 3D software, import the camera from the 3D tracking software, generate a point for each pixel of your imaginary camera sensor plane, and apply the RGB data and motion-vector data to those points. Then, by camera projection, project those points into the scene using the depth data, and you have a fully reconstructed set as a point cloud in 3D space. Now you can easily clamp only the necessary points (using depth distance and/or intensity of the optical-flow vectors) to separate the point cloud into two clouds, one for static objects and one for moving ones. Convert both to collision volumes. And finally, create a velocity field from the motion vectors of the moving point cloud.
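As a rough sketch, the projection step above might look like this in a Point Wrangle (assuming the points sit on the sensor plane and already carry a camera-space f@depth and a 2D v@motion attribute sampled from the plate; the camera path is an assumption):

vector ndc = toNDC("/obj/cam1", @P);                  // point in camera screen space
ndc.z = -f@depth;                                     // replace NDC depth with reconstructed depth (negative in front of the camera)
@P = fromNDC("/obj/cam1", ndc);                       // push the point out into the scene
vector ndc2 = ndc + set(v@motion.x, v@motion.y, 0.0); // offset in screen space by the 2D motion vector
v@v = fromNDC("/obj/cam1", ndc2) - @P;                // world-space velocity at that depth

From there the moving cloud can be rasterized into a vel field as described.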


2. In low-budget projects, filmed material comes to post-production without some or all of the data necessary for such a reconstruction (the VFX supervisor on set was busy helping a young actress find her way to becoming a Hollywood star). In that case you have to try to do some of those tasks with approximations.

Do 3D camera tracking and matchmoving of objects, generate low-poly proxy models, and animate them using the tracking data. Project the optical-flow vectors onto that geometry, convert to volumes, etc. If the 3D track is precise enough (filming is done at high resolution, high FPS and a short shutter time, so the material is free of motion blur and every frame is razor sharp), you can reconstruct depth data from it. Tracked points (chosen manually) will be precise, but trying to put every pixel into depth will result in huge depth errors; you'll get, let's say, 10-20% (in the best case) of the precision you can get from any depth sensor. So the better solution is to track only those precise points (markers on actors), later connect them procedurally to some kind of shape/curve to which you apply animation corrections, if any, and finally sweep it into low-poly proxy geometry to generate volumes, etc.
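Projecting the optical-flow vectors onto the animated proxy geometry could be sketched the same way (Point Wrangle on the proxy; the camera path and the flow image name are assumptions, and the flow is assumed to be stored in the red/green channels in NDC units):

vector uv = toNDC("/obj/cam1", @P);                    // proxy point in screen space
vector flow = texture("optical_flow.rat", uv.x, uv.y); // sample the 2D motion vectors
vector ndc2 = set(uv.x + flow.x, uv.y + flow.y, uv.z); // offset in NDC, keep the same depth
v@v = fromNDC("/obj/cam1", ndc2) - @P;                 // world-space velocity on the proxy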


You can also, without any additional information or a properly tracked camera, transfer 2D masks and 2D velocities from your shot camera to your geometry or volume using the toNDC() VEX function.

Then just manually adjust the information in the camera's z direction; that should be a good estimate for simple cases if you don't have a proper matchmove.


Any image works, including COPs.

You simply use toNDC() to get coordinates in camera screen space, then a texture() call to sample the image.


For example, if you want to mask out a volume from the camera view you can use (Volume Wrangle on a scalar non-empty volume named density to be masked):

vector uv = toNDC("/obj/cam1", @P);                  // voxel position in camera screen space
vector4 map = texture("butterfly1.pic", uv.x, uv.y); // sample the image, including alpha
f@density *= map.a;                                  // keep density only where the mask is solid

Or to directly set some densities (Volume Wrangle on a scalar volume named density):

vector uv = toNDC("/obj/cam1", @P);
vector map = texture("Mandril.rat", uv.x, uv.y);
f@density = luminance(map);                          // image brightness drives density

Or a vector field (Volume Wrangle on a vector volume named vel):

vector uv = toNDC("/obj/cam1", @P);
vector map = texture("Mandril.rat", uv.x, uv.y);
v@vel = map;                                         // RGB interpreted directly as a velocity vector


You can of course promote the camera and texture to parameters, and reference a COP image if you need to.
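A parameterized version of the masking wrangle might look like this (assuming you add string parameters named camera and map to the wrangle; the parameter names are arbitrary):

vector uv = toNDC(chs("camera"), @P);                // camera path from a string parameter
vector4 map = texture(chs("map"), uv.x, uv.y);       // texture path from a string parameter (can point at op:/... for a COP)
f@density *= map.a;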

Edited by anim
