slamfunk Posted March 22, 2012 (edited)

Hi guys, I'm looking at ways of using Kinect point cloud information to create 3D scans of things in Houdini. So far I am outputting XYZ .chan files from Processing (which communicates with the Kinect) and importing them into Houdini via CHOPs (advice I saw in an earlier post on odforce). After doing this I get a nice set of points in my viewport which represent the scanned object.

From here, how would I go about skinning the points? I have tried a Fluid Surface object, but the effect is too blobby and, well, fluidy. A Skin SOP doesn't work because there are no rows and columns among the points. Also there is no normal information. Any pointers would be greatly appreciated. Thanks.

P.S. I've attached a picture of an example point cloud.
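For reference, a minimal sketch of pulling one frame of that XYZ dump straight into points with a Python SOP instead of going through CHOPs. The file name and the one-frame-per-line "x y z x y z ..." column layout are assumptions about the Processing export, not something confirmed in the thread:

# Hypothetical Python SOP: read one frame of an XYZ .chan export and build points.
# Assumes one frame per line with flat "x y z x y z ..." columns.
import hou

node = hou.pwd()
geo = node.geometry()

path = hou.expandString("$HIP/kinect_frame.chan")   # hypothetical file name
with open(path) as f:
    values = [float(v) for v in f.readline().split()]

for i in range(0, len(values) - 2, 3):
    pt = geo.createPoint()
    pt.setPosition(hou.Vector3(values[i], values[i + 1], values[i + 2]))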
anim Posted March 22, 2012

You can use the good old trick of flattening the point cloud to the camera plane, using a Triangulate 2D SOP to create a mesh from the points, and then moving the points back to their original positions. There is an earlier thread here where you can find a more detailed description of the process.

However, flattening in Z may or may not work for you, so make sure you choose the correct axis, or some arbitrary axis; the best choice may be the camera's Z if your camera is matched with your Kinect camera.
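To make the flatten step concrete, here is a minimal sketch of the "stash depth, flatten, triangulate, restore" idea, assuming the points are in camera space and a Triangulate 2D SOP sits between two Python SOPs. The node layout and the origz attribute name are illustrative, not from the thread:

# Python SOP placed *before* the Triangulate 2D SOP:
# stash the original depth, then flatten the cloud onto the XY plane.
import hou

node = hou.pwd()
geo = node.geometry()

geo.addAttrib(hou.attribType.Point, "origz", 0.0)   # stores the depth we flatten away
for pt in geo.points():
    p = pt.position()
    pt.setAttribValue("origz", p[2])
    pt.setPosition(hou.Vector3(p[0], p[1], 0.0))

# A second Python SOP placed *after* the Triangulate 2D SOP restores the depth:
#
#   for pt in hou.pwd().geometry().points():
#       p = pt.position()
#       pt.setPosition(hou.Vector3(p[0], p[1], pt.attribValue("origz")))

Flattening in Z like this only makes sense if the points really are in the depth camera's space, which matches anim's note about choosing the right axis.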
vi_rus Posted March 22, 2012

Take a look at the PointCloudIso node.
slamfunk Posted March 22, 2012 (edited)

Hey Tomas, thanks for the heads-up, I'll give it a shot and see what I can create.

Sergei, I did take a look at the PointCloudIso node, but I'm missing the point normal attribute. Is there a workaround for that you might know of?
edward Posted March 22, 2012

Doesn't the Kinect estimate normals as well?
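If the driver does not hand over normals, one common fallback for a single-viewpoint depth scan is simply to aim every normal back at the sensor, which is usually enough to feed something like PointCloudIso. A minimal Python SOP sketch, with the sensor position assumed to be at the origin (an assumption, not from the thread):

# Fake per-point normals for a single-view scan: point N from each point toward
# the sensor. The sensor at the origin is an assumption; adjust to your setup.
import hou

node = hou.pwd()
geo = node.geometry()

sensor = hou.Vector3(0.0, 0.0, 0.0)
geo.addAttrib(hou.attribType.Point, "N", (0.0, 0.0, 0.0))
for pt in geo.points():
    n = sensor - pt.position()
    if n.length() > 1e-8:
        n = n.normalized()
    pt.setAttribValue("N", (n[0], n[1], n[2]))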
ryew Posted March 22, 2012

I don't know the answer, but the idea is really intriguing. Please continue sharing your progress on odforce if you could.
zarti Posted March 23, 2012

Hi slamfunk, your picture shows points from a front-projected scan (depth only). If that's the case, it shouldn't be hard to solve, imho. In my attached file I tried to rebuild a full 3D point cloud. I couldn't find a way to totally fuse and clean the mesh, but at least I transferred its normals to the point cloud. The null1 SOP has a few controls. Hope this helps.

remeshedzcan1.hipnc
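zarti's "transfer the mesh normals back onto the point cloud" step would normally be an AttribTransfer SOP. Purely as an illustration of what that transfer does, the same idea brute-force in a Python SOP, with input 1 assumed to be the remeshed scan carrying N (slow, sketch only):

# Brute-force normal transfer: for every cloud point, copy N from the nearest
# point of the remeshed surface wired into input 1. O(n*m), for illustration only.
import hou

node = hou.pwd()
geo = node.geometry()                      # input 0: the raw point cloud
mesh = node.inputs()[1].geometry()         # input 1: remeshed scan with N (assumed)

if geo.findPointAttrib("N") is None:
    geo.addAttrib(hou.attribType.Point, "N", (0.0, 0.0, 0.0))

mesh_pts = [(p.position(), p.attribValue("N")) for p in mesh.points()]
for pt in geo.points():
    pos = pt.position()
    nearest = min(mesh_pts, key=lambda m: (m[0] - pos).lengthSquared())
    pt.setAttribValue("N", nearest[1])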
slamfunk Posted March 23, 2012

Cool, thanks for the advice guys, I'll have a look at your suggestions for the normals.

As per yesterday's tips I tried the Triangulate 2D SOP, but all that happened was that Houdini crashed. Python started calculating once I activated the SOP, but after half an hour I escaped out and then Houdini crashed. Is it supposed to take that long? Should I just wait a little longer? If it takes that long to cook one frame I must be doing something wrong, right?

Also, going on a little tangent here: is there any way to ignore points that lie at (0, 0, 0) in the point cloud? I don't want to delete the points, as the points are always changing, but I just want to ignore the ones that lie on the origin at every frame. Is there some sort of bounding box or threshold I can create for points?

Cheers
anim Posted March 23, 2012

You can create a group from the points you want, then use just that group for calculating your mesh. To ignore points at (0, 0, 0) you can create a point group with an expression like length($TX, $TY, $TZ) > 0, or you can use a bounding box or bounding object to create the group (Bounding tab).

In my opinion, though, you can simply delete the points you don't want after they come in from CHOPs; it will not affect the point cloud calculation for other frames, because the input to CHOPs will still be the same. You can do it similarly in a Delete SOP with the expression length($TX, $TY, $TZ) < 0.0001.

Triangulate 2D should be quite fast, depending on the number of points you have. For example, here: 10,000 points took 0.6 s, 100,000 points took 20 s, and 1,000,000 points took 12 min. Since your points come from just one camera view, that approach should be fine, and you'll get normals as well.
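The same "ignore the points sitting on the origin" filter can also be done in a Python SOP instead of a Delete SOP expression; shown only as a sketch, equivalent to length($TX, $TY, $TZ) < 0.0001:

# Drop any point sitting (almost) exactly on the origin; these typically
# correspond to Kinect samples with no valid depth reading.
import hou

geo = hou.pwd().geometry()
dead = [pt for pt in geo.points() if pt.position().length() < 0.0001]
geo.deletePoints(dead)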
majikal Posted March 23, 2012 (edited)

Hi, I used Brekel Kinect for capturing. You can get RGB, depth, point cloud and OBJ surface data out of it. Alternatively, you can use the raw depth map output from the Kinect as a displacement texture on a grid and displace in the direction of the camera (it's slow and you need the FOV of the Kinect). Points that I didn't need, like the background, I deleted manually with conditions.
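majikal's depth-map-on-a-grid route boils down to a pinhole back-projection: every depth pixel becomes a 3D point using the sensor's FOV. A small sketch of just that math; the 640x480 resolution and 57/43 degree FOV are the commonly quoted Kinect figures, used here as assumptions:

# Back-project a depth sample (u, v, z) to camera-space XYZ with a pinhole model.
# Resolution and FOV are the usual Kinect v1 figures, assumed rather than measured.
import math

W, H = 640, 480
FOV_X, FOV_Y = math.radians(57.0), math.radians(43.0)
fx = (W / 2.0) / math.tan(FOV_X / 2.0)      # focal length in pixels, horizontal
fy = (H / 2.0) / math.tan(FOV_Y / 2.0)      # focal length in pixels, vertical

def back_project(u, v, z):
    """Pixel (u, v) with depth z (in metres) -> camera-space position."""
    x = (u - W / 2.0) * z / fx
    y = (H / 2.0 - v) * z / fy              # flip v so +Y points up
    return (x, y, -z)                       # camera looks down -Z, Houdini-style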
slamfunk Posted March 23, 2012

Thanks Tomas, that Delete SOP with the expression did the trick! Got Triangulate 2D to work as well, but the mesh is pretty nasty, so I need to figure out a way to clean it up a little.
slamfunk Posted March 23, 2012

Hey majikal, thanks for the pointer. I had seen Brekel Kinect floating around, but my objective was to get the information through Processing, so I never followed it up. Nonetheless I've had a look now and it's very handy!
premini Posted February 26, 2013

Hi, I'm reviving this thread to ask if you finally achieved the results you were looking for... I am about to start down the same path. Thanks in advance!