
Leveraging TRC Mocap Data?


Atom


Hi All,

I am setting BVH mocap data aside for the moment. All the free files I can find on the web seem to have a ton of noise and/or bone-twisting problems, which carry over into the rig when bound.

I came across another format called TRC. This format is not skeleton-based but point-based. Overall this format looks more 'natural' to me and may avoid the twisting-bone issue. However, because it is point-based, there is no rotation information.
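For anyone unfamiliar with it: TRC is plain tab-delimited text, typically a few header rows, a row of marker names, then one row per frame of X/Y/Z triples. A minimal reader sketch in Python, assuming the common layout (exporters vary, so the fixed row indices here are an assumption):

```python
# Minimal TRC reader sketch. Assumes the common layout: marker names on the
# 4th row (after Frame# and Time columns), data starting on the 7th row.
# Exporters differ, so treat the row indices as assumptions to adjust.

def read_trc(path):
    with open(path) as f:
        lines = [ln.rstrip("\n") for ln in f]
    # Marker names start after the Frame# and Time columns; drop the
    # empty cells that pad each name's X/Y/Z sub-columns.
    markers = [m for m in lines[3].split("\t")[2:] if m]
    frames = []
    for ln in lines[6:]:
        cols = ln.split("\t")
        if len(cols) < 2 or not cols[0].strip():
            continue  # skip blank or malformed rows
        vals = [float(c) for c in cols[2:] if c.strip()]
        # Group the flat X,Y,Z values into one 3-tuple per marker.
        frames.append({m: tuple(vals[i * 3:i * 3 + 3])
                       for i, m in enumerate(markers)})
    return markers, frames
```

From there it is easy to drive Houdini nulls or CHOP channels per marker per frame.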

I was able to bind a few of the points to some of the controllers on the simplefemale character that comes with Houdini. But my efforts are kind of clunky, and as time moves forward the controllers seem to drift away from their original target locations. Notice the foot crossing over the wrong way in the image.

Does anyone use this format or have any tips on how to re-create rotation from just points?

Is there any way to do natural based animation, such as dancing, without using mocap data?

Untitled-1.jpg

ap_TRC_mocap_1a.hiplc

Edited by Atom

About re-creating rotation from just points: there is an inverse kinematics chain in every 3d app, doing just that. Another method is a look-at constraint (Track To in Blender, Aim in Maya, Direction constraint in Softimage), as a sort of complement to the IK chain. The math behind a two-element IK chain is pretty simple and well known: it's some method of calculating the sides of a triangle, where you know the angle or length of two and then get what you need for the third. For more than two bones it's a different story, because that solution is always arbitrary, but a two-bone chain should be enough here.
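The look-at idea can be sketched with plain vector math: build an orthonormal frame from the direction toward a target point plus an up hint. This is a minimal sketch of the general technique, not any particular app's constraint node (names are my own):

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def aim_matrix(origin, target, up=(0.0, 1.0, 0.0)):
    """Rotation matrix whose rows are the X (side), Y (up), Z (forward)
    axes of a frame at `origin` aimed at `target`."""
    z = normalize(tuple(t - o for t, o in zip(target, origin)))
    x = normalize(cross(up, z))   # side axis, perpendicular to up and forward
    y = cross(z, x)               # re-derived up, guarantees orthonormality
    return (x, y, z)
```

Note the degenerate case: if the aim direction becomes parallel to the up hint, `cross(up, z)` collapses to zero length, which is exactly the parallel-bones problem mentioned below and needs a fallback up vector.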

In apps like Maya or Softimage, 'manual' motion re-targeting using constraints or IK is a perfectly possible procedure, though some skill is definitely needed to get it to work. For example, if two bones from the mocap are parallel at some frame, there has to be a special solution to get an up vector; otherwise it's enough to use the midpoint between the root and the end of the last bone, and so on. However, it's still only skill: no need to know the math behind it, and no need for (that much) scripting. Blender should be fine, too. The pro solution for re-targeting is a full-body IK solver; these solvers are able to work with the complete skeleton at once, providing the info for re-targeting and filtering the mocap data at the same time.
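The triangle math for the two-bone case is just the law of cosines: given the two bone lengths and the root-to-target distance, the bend angles follow directly. A minimal sketch (function and parameter names are my own, angles in radians):

```python
import math

def two_bone_angles(l1, l2, d):
    """Analytic two-bone IK via the law of cosines.
    l1, l2: bone lengths; d: distance from chain root to target.
    Returns (root_angle, elbow_angle): the interior angle between the
    root->target line and bone 1, and the interior angle between the bones."""
    # Clamp the target distance to the reachable range to avoid
    # math domain errors when the target is too far or too close.
    d = max(min(d, l1 + l2 - 1e-9), abs(l1 - l2) + 1e-9)
    root_angle = math.acos((l1 * l1 + d * d - l2 * l2) / (2.0 * l1 * d))
    elbow_angle = math.acos((l1 * l1 + l2 * l2 - d * d) / (2.0 * l1 * l2))
    return root_angle, elbow_angle
```

With two unit-length bones and a target at distance √2, this gives 45° at the root and a 90° elbow, as expected for a right triangle. Combining these angles with an aim frame toward the target (plus an up vector for the bend plane) recovers full rotations from points alone.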

Now... Houdini has a bit of a different approach compared to Maya, Softimage, or Blender. Things like the Blend or Fetch object, the built-in Look At, I think even the IK CHOP, act as a parent of the 'constrained' object, not as an override of the local transform as in the three apps mentioned, and the rest of the 3d world (I think). There are other differences, too. Most likely it's perfectly possible to build some manual motion re-targeting in H, but with how many steps, and how fast such a setup would be, I don't know.

For doing natural-looking animation without mocap, someone has to be a very, very, very skilled animator to make it believable. For movements like dancing or martial arts, which involve a lot of contacts with other objects, one would want a rig with a bit more than just the common set of IK solvers and constraints, plus a robust IK/FK matching mechanism as well.

 

Edited by amm
