
Consistent topology from Kinect pointcloud


tomwyn


Hi all,

 

I'm working with some pointcloud sequences captured from a Kinect, and I'm struggling to get a usable mesh out of them.

 

I know that the standard procedure is to use the isosurface/particlefluid SOPs, but the changing pointcount obviously causes the topology of the mesh to change at every frame, creating that weird 'boiling' effect.

 

I've racked my brain, and trawled these forums, for a direction to go in, but I'm still none the wiser.

 

Does anyone have any suggestions on how I might achieve nice, constant topology from such a pointcloud?

 

As always, any hints/pointers in the right direction will be very much appreciated! :D


One other 'tried and tested' area I'm exploring is scattering points onto the meshed surface, and then deforming them using the animated mesh.

 

However this is obviously pretty tricky because of the changing poly/pointcount of the mesh.

 

(the mesh is of a moving animal, btw)


Have you tried the Point Cloud Iso SOP? It should produce a relatively consistent mesh. Another way would be to use a fixed-size grid and set the points of the grid to a known distance along one axis, based on the point cloud from the Kinect.
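A minimal sketch of that fixed-grid idea, assuming numpy/scipy rather than anything Houdini-specific; the grid extent, resolution and distance threshold are placeholders to tune:

```python
# Keep a W x H grid of points so the topology never changes, and per frame copy each
# grid point's depth from the nearest Kinect sample. Assumes the cloud is roughly
# camera-aligned with depth along Z; load the per-frame points however you like.
import numpy as np
from scipy.spatial import cKDTree

W, H = 160, 120                                  # fixed resolution -> constant point count
xs = np.linspace(-1.0, 1.0, W)                   # grid extent in X/Y (tune to your data)
ys = np.linspace(-1.0, 1.0, H)
gx, gy = np.meshgrid(xs, ys)
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])

def grid_frame(cloud):
    """cloud: (N, 3) Kinect samples for one frame. Returns (W*H, 3) grid points."""
    tree = cKDTree(cloud[:, :2])                 # nearest neighbour in X/Y only
    dist, idx = tree.query(grid_xy, k=1)
    z = cloud[idx, 2].copy()                     # take depth from the nearest sample
    z[dist > 0.05] = np.nan                      # flag grid points with no nearby data
    return np.column_stack([grid_xy, z])
```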

 

Hey!

 

Yes, that was the first thing I tried, alongside the particlefluidsurface SOP. But it still produces a fluctuating mesh, because the point count/positions of the Kinect pointcloud vary from frame to frame.

 

Does anyone know if there's a way to 're-topologize' a surface like this to a set number of points/prims? If not, even just smoothing out the 'boiling' effect from the mesh a little would be a real help.

 

:)


You could generate geometry once and transfer the animation into it frame by frame. This obviously generates a new set of challenges, but at least it gives you a chance to handle topology in a consistent manner. Smart interpolation is crucial, as is a way to get rid of outliers (using point normals, for example). Tricky business.
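A minimal sketch of what that transfer could look like, assuming numpy/scipy and placeholder names (a rest-pose `template` mesh and the current frame's `cloud`), not a drop-in solution:

```python
# Each template point gets pulled toward a distance-weighted average of its k nearest
# cloud samples, with far-away samples rejected as outliers (normals could be used the
# same way if you estimate them).
import numpy as np
from scipy.spatial import cKDTree

def deform_template(template, cloud, k=8, max_dist=0.05, blend=0.5):
    """template: (M, 3) rest-pose mesh points; cloud: (N, 3) current-frame samples."""
    dist, idx = cKDTree(cloud).query(template, k=k)
    neighbours = cloud[idx]                                          # (M, k, 3)
    weights = np.where(dist < max_dist, 1.0 / (dist + 1e-6), 0.0)    # reject outliers
    wsum = weights.sum(axis=1, keepdims=True)
    target = (neighbours * weights[..., None]).sum(axis=1) / np.maximum(wsum, 1e-6)
    out = template.copy()
    moved = wsum[:, 0] > 0.0                                         # only move supported points
    out[moved] = (1.0 - blend) * template[moved] + blend * target[moved]
    return out
```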


Does anyone know if there's a way to 're-topologize' a surface like this to a set number of points/prims? If not, even just smoothing out the 'boiling' effect from the mesh a little would be a real help.

 

Can you share a sample data set that you're working with? Doesn't have to be a lot, just a few frames worth. I have an idea of how to set this up but not sure what the data incoming actually looks like.


Can you share a sample data set that you're working with? Doesn't have to be a lot, just a few frames worth. I have an idea of how to set this up but not sure what the data incoming actually looks like.

Hey Luke,

 

That would be amazing, thanks.

 

I've been banging my head against this in MeshLab and CloudCompare for days now.

 

Unfortunately I can't post the whole pointcloud due to agreements, however here's a small section of it lasting 10 frames.

 

The incoming file is a .ply with no point normals or anything, so I've exported it as a bgeo sequence with no attribs, so it's as close to the original as possible.

 

If you would be so kind as to have a look if you get time, that would be a massive help, thanks!!

noise_test_001.zip

Edited by tomwyn

Relative to the point cloud, where was the Kinect?

 

There were actually two Kinects (front & back) at the following coordinates:

 

Kinect 1 XYZ = 0.041, 1.1701, 2.941
Kinect 2 XYZ = -0.920, 09.51, 1.406
 
Having said that, the section of pointcloud in that file will likely only be from one Kinect source.
 
Thanks!
 
EDIT: also, here's more info on the sensors...

(attached thumbnail: Kinect sensor spec sheet)

Edited by tomwyn

It looks like two sets of samples because there are concave shapes (not possible with one Kinect). What I have in mind would work for only one Kinect at a time but the results could possibly be merged. Can you separate out the two sets of samples? I don't see any data in the samples that could be used to break them apart.
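If you can live with a rough heuristic, one way to attempt the split without extra data would be to estimate per-point normals and assign each point to the sensor it faces. A sketch assuming numpy/scipy, using the sensor positions quoted above; it is only sensible if the subject is roughly convex, since the normals get oriented outward from the centroid:

```python
# Estimate a normal for each point from its local neighbourhood (PCA), orient it
# outward from the cloud centroid, then label the point with the sensor it faces.
import numpy as np
from scipy.spatial import cKDTree

KINECT_1 = np.array([0.041, 1.1701, 2.941])
KINECT_2 = np.array([-0.920, 9.51, 1.406])       # Y taken as written above; worth double-checking

def split_by_sensor(cloud, k=16):
    """cloud: (N, 3). Returns an (N,) label array: 0 -> Kinect 1, 1 -> Kinect 2."""
    _, idx = cKDTree(cloud).query(cloud, k=k)
    centroid = cloud.mean(axis=0)
    labels = np.zeros(len(cloud), dtype=int)
    for i, nbrs in enumerate(idx):
        local = cloud[nbrs] - cloud[nbrs].mean(axis=0)
        n = np.linalg.svd(local, full_matrices=False)[2][-1]   # least-variance direction
        if np.dot(n, cloud[i] - centroid) < 0:                 # flip to point outward
            n = -n
        to_k1 = KINECT_1 - cloud[i]
        to_k2 = KINECT_2 - cloud[i]
        d1 = np.dot(n, to_k1 / np.linalg.norm(to_k1))
        d2 = np.dot(n, to_k2 / np.linalg.norm(to_k2))
        labels[i] = 0 if d1 >= d2 else 1
    return labels
```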


It looks like two sets of samples because there are concave shapes (not possible with one Kinect). What I have in mind would work for only one Kinect at a time but the results could possibly be merged. Can you separate out the two sets of samples? I don't see any data in the samples that could be used to break them apart.

 

Hey Luke,

 

Don't suppose you got any further with this?

 

MeshLab (or similar) is definitely not the way forward. They're very powerful tools, but definitely not built for sequences of meshes; they're much more geared towards 3D printing, etc.

 

I managed to get a consistent number of points/polys on the mesh by using the Ray SOP, casting from a tube around the mesh and transforming the points.

 

However, the result is much like the shrinkwrap tool, so I lose a little detail. That isn't the end of the world, but the main problem of the 'boiling' effect still persists. I'm fresh out of ideas! :/
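One thing that might still be worth trying: since the rayed mesh now has a constant point count, even a plain temporal filter over the point positions should knock the boiling down a lot. A minimal sketch, assuming numpy and a `frames` array of shape (T, N, 3) holding the same N points over T frames:

```python
# Box-filter the point positions over a small window of neighbouring frames.
import numpy as np

def temporal_smooth(frames, radius=2):
    T = frames.shape[0]
    out = np.empty_like(frames)
    for t in range(T):
        lo, hi = max(0, t - radius), min(T, t + radius + 1)
        out[t] = frames[lo:hi].mean(axis=0)      # average over neighbouring frames
    return out
```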

 

Thanks!


What exactly is the end game here?

 

It sounds like you're trying to rebuild the mesh at every step? Why? Unless there's some direct need to do that, it would be far easier to create the mesh on, say, the first frame (or through whatever other means) and then import it.

 

As for the boiling effect, once you have a static mesh I think one approach would be, as others have suggested, to replicate the motion by using the point cloud: take the average position of groups of cloud points, grouped by their distance from points on the static mesh.

 

This may be okay if the target isn't moving too quickly, although creating a sort of velocity field and using that to guide the static mesh's points may be the only way to achieve anything close to an accurate replication; some smoothing substeps probably couldn't hurt either. Then what I would do is use a mush deformer to smooth the mesh out, leaving only the basic motion, so you can then project the detailed mesh onto that motion.
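A minimal sketch of the velocity-field part, assuming numpy/scipy and placeholder names rather than the exact Houdini setup (the mush/smooth pass afterwards would be done back in Houdini):

```python
# Match each point of frame N to its nearest neighbour in frame N+1 to get a per-sample
# velocity, then advect the static mesh's points by the locally averaged velocity over
# a few substeps.
import numpy as np
from scipy.spatial import cKDTree

def cloud_velocity(cloud_a, cloud_b):
    """Approximate per-point motion of cloud_a toward cloud_b over one frame."""
    idx = cKDTree(cloud_b).query(cloud_a, k=1)[1]
    return cloud_b[idx] - cloud_a

def advect_mesh(mesh_pts, cloud_a, vel, k=8, substeps=4):
    """Move the static mesh points (M, 3) along the cloud's velocity field."""
    pts = mesh_pts.copy()
    tree = cKDTree(cloud_a)
    for _ in range(substeps):
        dist, idx = tree.query(pts, k=k)
        w = 1.0 / (dist + 1e-6)
        v = (vel[idx] * w[..., None]).sum(axis=1) / w.sum(axis=1, keepdims=True)
        pts += v / substeps
    return pts
```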

Edited by captain

