djiki Posted March 7, 2016

I am testing some 3D point cloud data captured by Kinect sensors in Houdini. My question: is there an efficient way of computing a velocity field (VF) such that a known SDF A, advected by VF, results in a known SDF B? For example, Houdini has a "VDB Advect SDF" node which calculates SDF B when the source SDF A and the velocity field VF are known. I need the opposite calculation: given A and B, compute VF.

The scene in the attachment contains a biped animation used to represent an actor, and two "Kinect emulators" built from several Houdini nodes which generate a point cloud structure similar to what a real Kinect produces. This way, attaching large point cloud captures from a real depth camera is avoided. The processing node contains volumes A and B from two successive frames. The human eye (brain) instantly sees how shape A is transformed into shape B, but the math behind that is not trivial. Does anyone have an idea?

KinectEmulation_ProcessingTest1.hipnc

Edited March 7, 2016 by djiki
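To make the forward direction of the problem concrete, here is a minimal 1D sketch of SDF advection (not the VDB Advect SDF node itself, just the standard semi-Lagrangian step such a node is built on; grid spacing and time step values are illustrative assumptions). The question above asks for the inverse: recover v given A and B.

```python
import numpy as np

def advect_sdf_1d(A, v, dx, dt):
    """One semi-Lagrangian advection step of a 1D SDF on a regular grid."""
    n = len(A)
    x = np.arange(n) * dx              # grid sample positions
    src = x - v * dt                   # backtrace along the velocity
    idx = np.clip(src / dx, 0, n - 1)  # clamp to the grid
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    t = idx - lo
    return (1 - t) * A[lo] + t * A[hi]  # linear interpolation at the source

# SDF of a point at x=2, advected right by a uniform velocity of 1 unit/frame
A = np.abs(np.arange(10) * 1.0 - 2.0)
B = advect_sdf_1d(A, v=np.full(10, 1.0), dx=1.0, dt=1.0)
# B is (approximately) the SDF of a point at x=3
```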
Atom Posted March 15, 2016

If A and B have the same number of points, you could simply loop through the points and compute the position difference for every point pair to determine a velocity vector. I'm not sure where you would store the results, however.

Edited March 15, 2016 by Atom
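A sketch of that suggestion, assuming the two clouds have matching point counts and matching point order (the frame time of 1/24 s is an illustrative assumption). In Houdini the result would typically be stored in a `v` point attribute:

```python
import numpy as np

def pointwise_velocity(P_prev, P_curr, dt):
    # per-point velocity = position difference over the frame time
    return (P_curr - P_prev) / dt

P0 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
P1 = np.array([[0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
v = pointwise_velocity(P0, P1, dt=1.0 / 24.0)
# each point moved 1 unit in y over one frame, so v.y is about 24 units/s
```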
djiki Posted March 15, 2016 Author

Atom, thanks for replying. As you can see from the attached scene (if you are familiar with Houdini), the 3D point clouds A and B don't have the same number of points. (If they did, this would be trivial: extract the "point to point" position differences, store them in a vector attribute, and create a vector volume field from that attribute.) Even though the point clouds come from successive frames of a 3D camera, the points and their indices cannot be put into any matching relation. Put simply, they are two completely different clouds. Some robust "3D pattern search" algorithm is probably required for this, which is why I asked whether anyone has experience with the topic.
tricecold Posted June 20, 2016

I am in the same boat; I've been all over the internet, and so far no luck. Anyone? VDB Morph only works with SDFs. I am almost at the point of filling up a FLIP container, which would help extract some velocities, but it will be extremely dodgy.

Edited June 20, 2016 by tricecold
sebkaine Posted July 4, 2016

If you stay at the SDF level, I can't imagine any method that would allow matching two different frames. The scalar field can be pushed in any random direction, so any precise voxel matching sounds impossible. The only thing I can see is a divergence-grid computation where:

- you fill a global bbox with empty voxels
- you build a grid whose resolution matches your SDF at t and your SDF at t-1
- you set voxels that contain a point to 1, in both grids
- you set a voxel to 0.5 when it contains no point but a neighbouring voxel does (max distance < 2 * voxel size)
- you set the rest to 0
- you compute the divergence grid by checking, for each voxel, its neighbouring voxels

But I can't imagine getting more than this. There might also be some old-school dirty trick that I haven't figured out yet. Interesting topic!

Edited July 4, 2016 by sebkaine
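The grid-filling steps above can be sketched as follows (assumptions: a fixed global bounding box, a uniform voxel size, and only face neighbours considered for the 0.5 band; the final per-voxel neighbour comparison between the two frames' grids is left out):

```python
import numpy as np

def occupancy_grid(points, bbox_min, voxel_size, shape):
    grid = np.zeros(shape)
    # which voxel each point falls into
    idx = np.floor((points - bbox_min) / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(shape)), axis=1)
    idx = idx[inside]
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    # mark empty voxels that have an occupied face neighbour with 0.5
    occ = grid == 1.0
    p = np.pad(occ, 1)  # pad with False so borders need no special case
    near = (p[:-2, 1:-1, 1:-1] | p[2:, 1:-1, 1:-1] |
            p[1:-1, :-2, 1:-1] | p[1:-1, 2:, 1:-1] |
            p[1:-1, 1:-1, :-2] | p[1:-1, 1:-1, 2:])
    grid[near & ~occ] = 0.5
    return grid

pts = np.array([[0.5, 0.5, 0.5]])            # one point in voxel (0,0,0)
g = occupancy_grid(pts, np.zeros(3), 1.0, (3, 3, 3))
# g[0,0,0] is 1.0, its face neighbours 0.5, everything else 0.0
```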
bunker Posted July 31, 2016

I don't think you need any SDF volumes. You can use the point cloud directly and find the closest points from the previous frame to compute velocity.

KinectEmulation_ProcessingTest2.hipnc
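A sketch of this closest-point approach (brute-force search here; in Houdini this maps to a nearest-point lookup such as VEX's nearpoint() in a point wrangle; dt of one frame is an illustrative assumption):

```python
import numpy as np

def closest_point_velocity(P_prev, P_curr, dt):
    # pairwise distance matrix, shape (n_curr, n_prev)
    d = np.linalg.norm(P_curr[:, None, :] - P_prev[None, :, :], axis=2)
    nearest = np.argmin(d, axis=1)       # index of closest previous point
    return (P_curr - P_prev[nearest]) / dt

P_prev = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
P_curr = np.array([[0.2, 0.0, 0.0], [5.1, 0.0, 0.0]])
v = closest_point_velocity(P_prev, P_curr, dt=1.0)
# each current point picks up the offset from its nearest previous point
```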
djiki Posted November 18, 2017 Author

Thanks Bunker. Sorry this reply comes late. The solution you provided is trivial and cannot serve as a general solution. Take, for example, frame 18 in your modified scene: the fast forward motion of the head at that frame makes your algorithm fail, because the "closest distance to points" search finds the wrong points. Points on the back of the head in the current frame find, as their closest points from the previous frame, points on the front of the head, which is wrong and produces wrong velocities. In general, that method works only for small movements of very simple shapes.

The solution I was talking about is finally implemented in H16.5. They named it Volume Optical Flow. It performs a pyramidal search over larger structural features (so the head in the previous example would be recognized as a larger structure, rather than handled by a simple closest-point method, and should produce proper velocities), iterating down to smaller parts until it matches the proper velocities of complex structures. That is exactly what I was looking for. SideFX finally realizes that "heavy artillery" is required to keep Maya light years behind. Thanks guys
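A toy 1D illustration of the pyramidal idea behind coarse-to-fine matching (a sketch of the general technique, not SideFX's implementation): estimate a shift at low resolution first, then refine it at full resolution in a small window around the coarse estimate.

```python
import numpy as np

def best_shift(a, b, shifts):
    # shift of b that minimizes the sum of squared differences against a
    errs = [np.sum((a - np.roll(b, s)) ** 2) for s in shifts]
    return shifts[int(np.argmin(errs))]

def coarse_to_fine_shift(a, b):
    ac, bc = a[::2], b[::2]                          # coarse level, half res
    coarse = 2 * best_shift(ac, bc, range(-len(ac) // 2, len(ac) // 2))
    return best_shift(a, b, range(coarse - 2, coarse + 3))  # refine +/- 2

x = np.arange(64)
a = np.exp(-((x - 30) ** 2) / 20.0)   # a bump centred at 30
b = np.exp(-((x - 25) ** 2) / 20.0)   # the same bump centred at 25
shift = coarse_to_fine_shift(a, b)    # rolling b by +5 aligns it with a
```

The coarse pass cheaply narrows the search to the large-scale motion, so the fine pass only has to test a few candidates; in 3D the same scheme matches big structural features first, then smaller details within them.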
thebrettwalter Posted October 24, 2019

Commenting to keep this in recent activity. I have done a bit of Kinect work through Cycling '74 Max and Processing, but I haven't come up with a node graph for Max or come across a library for Processing that allows me to export the point cloud sequence for use in Houdini. //edit// It looks like there is more online about this now than in previous years (it's been a while).

Edited October 24, 2019 by thebrettwalter