# Does anyone know an effective algorithm for computing this?

## Recommended Posts

I am testing some 3D point cloud data captured by Kinect sensors in Houdini.

My question is: is there an efficient way of computing a velocity field (VF) such that a known SDF (A), advected by that VF, results in a known SDF (B)?

For example, Houdini has the node "VDB Advect SDF", which calculates SDF B when the source SDF (A) and the velocity field (VF) are known. I need the opposite calculation: given A and B, the goal is to calculate VF.
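For context, the forward operation that "VDB Advect SDF" performs can be sketched in 1D with a semi-Lagrangian backtrace (a toy illustration on a hypothetical grid, not Houdini's actual implementation):

```python
import numpy as np

# Forward problem (what "VDB Advect SDF" solves): given SDF A and a velocity
# field v, one step of semi-Lagrangian advection gives B(x) = A(x - v(x)*dt).
# The question here asks for the inverse: recover v given only A and B.
n = 64
x = np.linspace(0.0, 1.0, n)
A = np.abs(x - 0.3) - 0.1           # 1D "SDF" of the interval [0.2, 0.4]
v = np.full(n, 0.5)                 # constant velocity field (hypothetical)
dt = 0.2
B = np.interp(x - v * dt, x, A)     # backtrace and sample: interval moves to [0.3, 0.5]
```

Inverting this map is the hard part: many different velocity fields can carry one level set onto another, so A and B alone do not pin down a unique VF.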

The scene in the attachment contains a biped animation used to represent an actor and two "Kinect emulators" built from several Houdini nodes, which generate a point cloud structure similar to what a real Kinect produces. That way, sending large point cloud structures from a real depth camera as an attachment is avoided. The processing node contains volumes A and B from two successive frames.

The human eye (and brain) instantly sees how volume shape A is transformed into shape B, but the math behind that is not trivial.

Does anyone have any ideas?

KinectEmulation_ProcessingTest1.hipnc

Edited by djiki

##### Share on other sites

If A and B have the same number of points, you could simply loop through the points and subtract matching positions to get a velocity vector for every point pair (a length function on that vector then gives the speed). I'm not sure where you would store the results, however.
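For the matched-index case this describes, a minimal numpy sketch (with hypothetical arrays `A` and `B`; point `i` in one frame is assumed to correspond to point `i` in the next):

```python
import numpy as np

# Hypothetical matched clouds: point i in frame t corresponds to point i in t+1.
rng = np.random.default_rng(0)
A = rng.random((100, 3))             # positions at frame t
B = A + np.array([0.1, 0.0, 0.0])    # positions at frame t+1 (shifted along +x)

fps = 24.0
vel = (B - A) * fps                  # per-point velocity in units/second
speed = np.linalg.norm(vel, axis=1)  # the "length function" gives the speed
```

In Houdini the `vel` array would typically be stored in the standard `v` point attribute, which downstream nodes can rasterize into a velocity volume.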

Edited by Atom

##### Share on other sites

As you can see from the attached scene (if you are familiar with Houdini), the 3D point clouds A and B don't have the same number of points. If they did, this would be trivial: extract the position differences point to point, store them in a vector attribute, and create a vector volume field from that attribute.

Even though the point clouds come from successive frames of a 3D camera, the points and their indices cannot be put into any matching relation. In simple words, they are two completely different clouds. Some robust "search 3D pattern" algorithm is probably required for such a thing, and I asked whether someone has experience with that topic.

##### Share on other sites

I am in the same boat; I've been all over the internet and so far no luck. Anyone? VDB Morph only works with SDFs. I am almost at the point of filling up a FLIP container, which would help extract some velocities, but it will be extremely dodgy.

Edited by tricecold

##### Share on other sites

If you stay at the SDF level, I can't imagine any method that would allow matching two different frames.

The scalar field can be pushed in any random direction, so precise voxel-to-voxel matching sounds impossible.

The only thing I can see is to do a divergence-grid computation where:

- you fill a global bbox with empty voxels

- you build a matching grid resolution for your SDF at t and your SDF at t-1

- you fill with 1 the voxels that contain a point in both grids

- you fill with 0.5 when there is no point in the voxel but there is one in a neighbouring voxel (max dist < 2 * voxel size)

- you fill with 0 for the rest

- you compute the divergence grid by checking, for each voxel, its neighbouring voxels
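A minimal numpy sketch of those fill rules (hypothetical point arrays and resolution; only the 6 face neighbours are checked, and the final "divergence" step is reduced here to a per-voxel occupancy difference between the two frames):

```python
import numpy as np

def occupancy(points, origin, voxel_size, dims):
    """Rate voxels: 1 if a point falls inside, 0.5 if only a face-neighbour
    does, 0 otherwise (the three fill levels described above)."""
    grid = np.zeros(dims)
    idx = np.floor((points - origin) / voxel_size).astype(int)
    idx = idx[np.all((idx >= 0) & (idx < np.array(dims)), axis=1)]
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    filled = grid == 1.0
    near = np.zeros(dims, dtype=bool)
    for axis in range(3):
        for shift in (-1, 1):                     # 6 face neighbours
            near |= np.roll(filled, shift, axis)  # np.roll wraps at borders; fine for a sketch
    grid[near & ~filled] = 0.5
    return grid

rng = np.random.default_rng(1)
pts_prev = rng.random((200, 3))                    # cloud at frame t-1
pts_curr = np.clip(pts_prev + 0.05, 0.0, 0.999)    # cloud at frame t (shifted)
dims = (16, 16, 16)
g_prev = occupancy(pts_prev, np.zeros(3), 1 / 16, dims)
g_curr = occupancy(pts_curr, np.zeros(3), 1 / 16, dims)
diff = g_curr - g_prev   # per-voxel occupancy change between the two frames
```

As noted, this only tells you *where* occupancy changed, not which way material moved, so it can't recover a full velocity field on its own.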

But I can't imagine getting much more than this.

There might also be some old-school dirty trick that I haven't figured out yet.

Interesting topic!

Edited by sebkaine

##### Share on other sites

I don't think you need any SDF volumes.
You can use the point cloud directly and find, for each point, the closest point from the previous frame to compute velocity.
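A minimal sketch of that closest-point idea outside Houdini (brute-force nearest neighbour on hypothetical numpy arrays; inside Houdini you would use a point cloud lookup such as `pcopen` in VEX instead):

```python
import numpy as np

def closest_point_velocity(curr, prev, fps=24.0):
    """For each point in the current frame, take the nearest point in the
    previous frame (brute force) and use the offset as a velocity estimate."""
    d2 = ((curr[:, None, :] - prev[None, :, :]) ** 2).sum(axis=2)  # (n_curr, n_prev)
    nearest = d2.argmin(axis=1)
    return (curr - prev[nearest]) * fps

rng = np.random.default_rng(2)
prev = rng.random((150, 3))                       # frame t-1 cloud
curr = prev[:100] + np.array([0.01, 0.0, 0.0])    # frame t: fewer points, small shift
v = closest_point_velocity(curr, prev)
```

Note the counts don't need to match: each current point just queries the previous cloud. The catch, as the thread goes on to discuss, is that under fast motion the nearest previous point can belong to the wrong feature entirely.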

KinectEmulation_ProcessingTest2.hipnc

##### Share on other sites

Thanks Bunker, and sorry this reply comes late.

The solution you provided is trivial and cannot serve as a general solution. Take, for example, frame 18 in your modified scene: you will see that the fast forward motion of the head at that frame makes your algorithm fail, because the "closest distance to points" search finds the wrong points. Points from the back of the head in the current frame find, as their closest points in the previous frame, points from the front of the head, which is wrong and produces wrong velocities. In general, that method works only for small movements of very simple shapes.

The solution I was talking about is finally implemented in H16.5. They named it Volume Optical Flow. It performs a pyramidal search over larger structural features (so the head in the previous example is recognized as a larger structure instead of being handled by a simple closest-point match, and should produce proper velocities), refining over several iterations down to smaller parts until proper velocities of complex structures are matched. That is exactly what I was looking for. SideFX finally realized that heavy artillery is required to keep Maya light years behind.
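The pyramidal solver itself is SideFX's, but the constancy idea it refines can be sketched at a single scale. This is the "normal flow" component on dense numpy volumes (a toy illustration of the underlying principle, not the H16.5 algorithm):

```python
import numpy as np

def normal_flow(A, B, dt=1.0, eps=1e-6):
    """Single-scale building block of volumetric optical flow: assuming the
    field values are conserved (dD/dt = 0), the velocity component along the
    gradient is v = -((B - A)/dt) * grad(A) / |grad(A)|^2 ("normal flow")."""
    g = np.stack(np.gradient(A), axis=-1)          # gradient field, shape (nx, ny, nz, 3)
    g2 = (g ** 2).sum(axis=-1)
    dDdt = (B - A) / dt
    return -dDdt[..., None] * g / (g2 + eps)[..., None]

# toy example: SDF of a sphere translated by one voxel along x between frames
n = 32
ax = np.arange(n, dtype=float)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
A = np.sqrt((X - 15) ** 2 + (Y - 16) ** 2 + (Z - 16) ** 2) - 6.0
B = np.sqrt((X - 16) ** 2 + (Y - 16) ** 2 + (Z - 16) ** 2) - 6.0
v = normal_flow(A, B)   # near the surface along +x, v[..., 0] is close to 1
```

Normal flow only recovers motion along the gradient; a pyramidal coarse-to-fine pass is what lets larger features (like the whole head) constrain the tangential part.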

Thanks, guys.

