
Does anyone know an efficient algorithm for computing this?


djiki


I am testing some 3D point-cloud data captured by Kinect sensors in Houdini.

 

My question is: is there an efficient way of computing a velocity field (VF) such that a known SDF (A), advected by that VF, results in a known SDF (B)?

 

For example, Houdini has the node "VDB Advect SDF", which calculates SDF B when the source SDF (A) and the velocity field (VF) are known. I need the opposite calculation: given A and B, the goal is to calculate VF.
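For context, a short formulation of the two problems: the forward node integrates the level-set advection equation below, while the inverse problem asked here is underdetermined, because any velocity component tangential to the level sets leaves the SDF unchanged (the same "aperture problem" known from 2D optical flow).

```latex
% Forward problem (what an advection node solves): given A and v, integrate
% the level-set equation from phi(., t0) = A to obtain phi(., t1) = B.
% Inverse problem (asked here): given A and B, recover v. Underdetermined,
% since adding any w with w . grad(phi) = 0 produces the same B.
\[
  \frac{\partial \phi}{\partial t} + \mathbf{v} \cdot \nabla \phi = 0,
  \qquad \phi(\mathbf{x}, t_0) = A(\mathbf{x}), \quad \phi(\mathbf{x}, t_1) = B(\mathbf{x}).
\]
```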

 

The scene in the attachment contains a biped animation used to represent an actor, plus two "Kinect emulators" built from several Houdini nodes, which generate a point-cloud structure similar to what a real Kinect produces. That way, sending large point-cloud captures from a real depth camera as an attachment is avoided. The processing node contains volumes A and B from two successive frames.

 

The human eye (brain) instantly sees how volume shape A is transformed into shape B, but the math behind that is not trivial.

 

Does anyone have an idea?

KinectEmulation_ProcessingTest1.hipnc

Edited by djiki

If A and B have the same number of points, you could simply loop through the points and compute the offset between each corresponding pair to get a velocity vector. I'm not sure where you would store the results, however.
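As a minimal sketch of that idea outside Houdini (assuming both clouds are index-matched (N, 3) NumPy arrays, which, as the next reply points out, is not the case here):

```python
# Matched-index case: point i in frame A corresponds to point i in frame B,
# so the velocity is just the per-point displacement divided by the timestep.
import numpy as np

def matched_velocities(A: np.ndarray, B: np.ndarray, dt: float) -> np.ndarray:
    """Per-point velocity v_i = (B_i - A_i) / dt for index-matched (N, 3) clouds."""
    assert A.shape == B.shape, "only valid when point counts and ordering match"
    return (B - A) / dt
```

In Houdini the result would typically be stored in a point vector attribute (e.g. v@v in a wrangle) and then rasterized into a vector volume.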

Edited by Atom

Atom, thanks for replying.

 

As you can see from the attached scene (if you are familiar with Houdini), the 3D point clouds (A and B) don't have the same number of points. If they did, the task would be trivial: extract the point-to-point position differences, store them in a vector attribute, and create a vector volume field from that attribute.

Even though the point clouds come from successive frames of the same 3D camera, the points and their indices cannot be put into any matching relation. In simple words, they are two completely different clouds. Some robust "search 3D pattern" algorithm is probably required for this, which is why I asked whether someone has experience with the topic.
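For reference, the naive closest-point baseline looks roughly like this (a sketch assuming plain (N, 3) NumPy arrays and SciPy; as discussed later in the thread, it picks wrong correspondences under fast motion):

```python
# Naive closest-point baseline for unmatched clouds, for illustration only:
# for every point of the current frame B, take the nearest point of the
# previous frame A as its assumed previous position.
import numpy as np
from scipy.spatial import cKDTree

def closest_point_velocities(A: np.ndarray, B: np.ndarray, dt: float) -> np.ndarray:
    tree = cKDTree(A)          # spatial index over the previous frame
    _, idx = tree.query(B)     # nearest neighbour in A for each point of B
    return (B - A[idx]) / dt   # assumed displacement over the timestep
```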


  • 3 months later...

I am in the same boat; I've been all over the internet, so far no luck. Anyone? VDB Morph only works with SDFs. I am almost at the point of filling up a FLIP container to extract some velocities, but that would be extremely dodgy.

 

Edited by tricecold

  • 2 weeks later...

If you stay at the SDF level, I can't imagine any method that would allow matching two different frames.

The scalar field can be pushed in any random direction, so precise voxel matching sounds impossible.

The only thing I can see is a divergence-grid computation (rough sketch below) where:

- you fill a global bbox with empty voxels

- you make a grid whose resolution matches your SDF(t) / SDF(t-1)

- you set to 1 the voxels that contain a point (in each of the two grids)

- you set to 0.5 the voxels that contain no point themselves but have one in a neighbouring voxel (max dist < 2 * voxel size)

- you set the rest to 0

- you compute the divergence grid by comparing, for each voxel, its neighbouring voxels

But I can't imagine getting more than that out of it.
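A rough sketch of that occupancy filling, assuming plain (N, 3) point arrays and a shared cubic grid (the function and parameter names are made up for illustration, and only the six face neighbours are marked 0.5):

```python
# Sketch of the 1 / 0.5 / 0 occupancy grid described above. Both frames are
# rasterized into the same global grid; subtracting the two grids then gives
# a crude per-voxel "where did matter appear or disappear" field.
import numpy as np

def occupancy(points, bbox_min, bbox_max, res):
    """1.0 where a voxel contains a point, 0.5 in its empty face neighbours, else 0."""
    grid = np.zeros((res, res, res))
    cells = ((points - bbox_min) / (bbox_max - bbox_min) * res).astype(int)
    cells = np.clip(cells, 0, res - 1)
    grid[cells[:, 0], cells[:, 1], cells[:, 2]] = 1.0
    occupied = grid == 1.0
    for axis in range(3):      # mark the six face neighbours
        for shift in (-1, 1):
            neighbour = np.roll(occupied, shift, axis=axis)  # wraps at borders; fine for a sketch
            grid[neighbour & ~occupied] = 0.5
    return grid

def change_grid(prev_pts, curr_pts, bbox_min, bbox_max, res):
    """Per-voxel occupancy change between frames (appeared > 0, vanished < 0)."""
    return (occupancy(curr_pts, bbox_min, bbox_max, res)
            - occupancy(prev_pts, bbox_min, bbox_max, res))
```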

 

There might also be some old-school dirty trick that I haven't figured out yet?

 

Interesting topic! :)

Edited by sebkaine

  • 4 weeks later...
  • 1 year later...

Thanks Bunker. Sorry this reply comes late.

The solution you provided is trivial and cannot serve as a general solution. Take, for example, frame 18 in your modified scene: the fast forward motion of the head at that frame makes your algorithm fail, because the "closest distance to points" search finds the wrong points. Points at the back of the head in the current frame find, as their closest matches in the previous frame, points from the front of the head, which is wrong and produces wrong velocities. In general, that method works only for small movements of very simple shapes.

The solution I was talking about is finally implemented in H16.5. They named it Volume Optical Flow. It performs a pyramidal search over larger structural features (so the head in the previous example would be recognized as a large structure rather than handled by the simple closest-point method, and should produce proper velocities), refining over several iterations down to smaller parts until it has matched the velocities of complex structures. That is exactly what I was looking for. SideFX finally realized that heavy artillery is required to keep Maya light-years behind.
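For anyone curious about the general idea, here is a toy coarse-to-fine sketch (this is not SideFX's implementation, just the principle of a pyramidal search: estimate motion on heavily downsampled volumes first, then refine at finer resolutions):

```python
# Toy pyramidal (coarse-to-fine) matching of two dense volumes with NumPy.
import numpy as np

def downsample(vol):
    # Average 2x2x2 blocks (assumes dimensions divisible by two).
    z, y, x = vol.shape
    return vol.reshape(z // 2, 2, y // 2, 2, x // 2, 2).mean(axis=(1, 3, 5))

def best_shift(a, b, radius):
    # Brute-force search for the integer shift of `b` that best matches `a`.
    best, best_err = np.zeros(3), np.inf
    for dz in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                err = np.mean((a - np.roll(b, (dz, dy, dx), axis=(0, 1, 2))) ** 2)
                if err < best_err:
                    best, best_err = np.array([dz, dy, dx], float), err
    return best

def pyramid_flow(a, b, levels=3, radius=2):
    # Coarse-to-fine estimate of one global displacement. A real solver would
    # estimate this per voxel or per block and smooth the resulting field.
    if levels > 1 and min(a.shape) >= 4:
        coarse = 2.0 * pyramid_flow(downsample(a), downsample(b), levels - 1, radius)
        b = np.roll(b, tuple(int(round(c)) for c in coarse), axis=(0, 1, 2))
    else:
        coarse = np.zeros(3)
    return coarse + best_shift(a, b, radius)
```

Under this convention the result is the shift that maps frame B back onto frame A, so for b = np.roll(a, (3, 1, 0), axis=(0, 1, 2)) the estimate should come out near (-3, -1, 0).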

Thanks, guys.

 

 


  • 1 year later...

Commenting to keep this in recent activity. I have done a bit of Kinect work through Cycling '74 Max and Processing, but I haven't come up with a node graph for Max, or come across a library for Processing, that would let me export the point-cloud sequence for use in Houdini.

//edit// Looks like there is more online about this nowadays than in previous years. (It's been a while.)

Edited by thebrettwalter
