madebygeoff last won the day on June 26 2020

madebygeoff had the most liked content!

Community Reputation

10 Good

About madebygeoff

Personal Information

  • Name
    Geoff Bailey
  • Location
    Brooklyn, NY
  1. Depends on how much you want them to shrink. You didn't include the original geo for the balloon, so it was hard to troubleshoot exactly, but hopefully the tube I used is similar. I think your original remesh of the torus was too detailed: it created a very dense mesh, so even though you were setting the rest length to 0.3, it didn't allow the torus to shrink very much. I got rid of the remesh, made the torus a bit more coarse, reduced the rest length to 0.1, and added a bit of damping to take some of the high-energy bounce out of the shrinking. I also upped the substeps from 2 to 5; it's possible you just didn't have enough substeps for the solver to converge properly. If you want them to shrink even more, I might try a different setup for the rubber bands, where you scale them down into position to get your starting shape (you could do this in a drape node before the solver) before running the full sim. Inflate_V2.hipnc Inflate.hipnc
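    If it's useful, the rest-length tweak can also be done procedurally rather than on the Vellum parameters. This is only a minimal sketch, assuming the standard restlength primitive attribute that Vellum writes on its constraint geometry (the 0.1 factor is just the value used in the file above):

    ```vex
    // Prim wrangle on the Vellum constraints stream:
    // scale every constraint's rest length so the bands pull tighter.
    f@restlength *= 0.1;
    ```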
  2. I think you're getting an error because you are missing the necessary metadata that the Capture Attribute Pack node needs. At the very least you need the list of joint names (the capture path) that the _index point attribute references. Take a look at Matt's file under that procedural weighting section you linked to, specifically the "set_detail_attribs1" wrangle. That's where he sets all the detail attributes that the pack SOP uses as metadata. You should be able to adapt that node to your setup and it should work.
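    For reference, the metadata is just a handful of detail attributes. Treat this as a heavily hedged sketch only: the attribute name and joint paths below are placeholders, so append a Capture Attribute Unpack to a known-good captured mesh and copy the exact names it produces, since they have to match what the pack SOP expects:

    ```vex
    // Detail wrangle (placeholder names -- confirm against a Capture
    // Attribute Unpack of a working mesh): the string array of capture
    // region paths that the per-point *_index values index into.
    string joints[] = { "/obj/rig/hips/cregion", "/obj/rig/spine1/cregion" };
    s[]@capture_paths = joints;
    ```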
  3. I'll take a look at this this afternoon and see why you're getting the out-of-range error, but just out of curiosity, is there any reason you're not using the proximity capture Matt and I suggested earlier? It seems to do what you're trying to do with a single node. It doesn't allow per-point manual selection of the bone, but honestly that seems like overkill for blocking, since you're still going to have to do a round of capture-layer painting to blend the weights. proximity.hipnc
  4. [SOLVED] Max Influence for Biharmonic Capture?

    Try using proximity capture instead of biharmonic. For blocking weights it's faster and, as you ask, it lets you limit the number of bone influences.
  5. Getting IK legs to parent to hips

    Both biped setups seem to work as expected, so I think it's just personal preference there. As for the quad hind legs, it's also personal preference, but the 4-bone IK doesn't give you any control over the angle of the lower leg. Typically (in dogs at least) the angle between the lower leg (the metatarsus, which is actually an extended upper foot) and the upper leg (femur) is 180 degrees: they run parallel to each other with the tibia connecting them. If your model is built that way, the 4-bone IK works pretty well. But if it is modeled without that relationship, it can look a little weird, and as an animator I generally just expect to be able to control that angle if I need to. There are two ways I know of in other programs to get around that problem. One is to use a 3-bone IK on the upper leg and an aim constraint on the lower leg so that you can rotate the foot controller to adjust the angle. The other is a 2-chain IK, which lets you rotate the hip controller to adjust the angle. All three options work; it just depends on what you need. I haven't added a foot-roll setup into any of these setups to see how that all works together, but the 2-chain biped setup above made me think that might be the way to go. dogleg_v05.hipnc
  6. Getting IK legs to parent to hips

    What about this? Keep the reverse foot after the leg IK, but add a second IK to feed the reverse-foot ankle motion into the IK solve? There's probably a cleaner way to add in the reverse-foot motion than the additional blast and skeleton blend, but I didn't have time to really clean that section up. I might have missed something in your setup, but I was getting weird rotational behavior from the foot roll. This setup works the way I'd expect a conventional rig to, and as far as I can tell it is more immune to flipping. Interestingly, it's also pretty close to the setup I'd use for a dog-leg (digitigrade) rig (a double-IK setup, not the usual 4-bone spring IK). Which means you could potentially wrap both as a single "Leg" HDA with options for digitigrade and plantigrade. That'd take a little more experimentation, but... anyway. Hope this helps. I'd still be interested in seeing your setup if you have a sec to upload. KineFX character rigging still feels a little like the wild west, so I'm curious to see other approaches. leg-foot_v02.hipnc
  7. Getting IK legs to parent to hips

    Can you post the revised file? I tried recreating what you're showing, but I'm getting double transforms into the IK solver, so the foot roll does strange things to the leg position. But maybe I missed something in your setup.
  8. Getting IK legs to parent to hips

    In your earlier file, the reason it wasn't working was that you had the reverse-foot controls upstream of the IK. When you blended the foot animation back in using the "skeletonblend_toes" node, you were effectively blending the foot joint positions back in before the IK solver, overriding the positions coming out of the IK solver and resulting in a foot that didn't move.
  9. Getting IK legs to parent to hips

    Oh, I just realized that is all in one VOP, not two separate screenshots. I think that's the problem. You may not be passing the proper info to the reverse foot because it isn't getting the output of the IK solver. Have you tried doing the IK solver in one VOP, then appending a second VOP below it and doing the reverse foot in that one? I think that should work. If not, put a file here and I'll take a look.
  10. Getting IK legs to parent to hips

    I haven't used the reversefoot VOP a lot. I tried it early on and it seemed excruciatingly slow, but it looks like it's wired up properly. If you post a file, we could take a look. You could also try the reversefoot SOP, which works pretty well, although it's still a bit heavy compared to building the rig yourself.

    But there's one other thing to be aware of that matters more when you're doing more complicated IK chains. The IK Chains SOP and Two Bone IK VOP (which both use the Two Bone VOP under the hood) work differently from the IK Solver VOP. With the Two Bone IK, you have to blend the pelvis movement into the skeleton (the pipe that ends up in the 1st input of the IK Chains or Two Bone IK VOP), then animate the IK controls for the leg separately. This is what Matt shows above. It works, but I find it pretty counterintuitive compared to every other program I've ever used, where moving the hip IK control is what drives the hip movement. The IK Solver works the way you would expect if you've rigged in another program (but kind of the opposite of the IK Chains and Two Bone IK): you animate the pelvis, which drives the hip, and blend that in to move the hip IK control around.

    It's all a little confusing, so I attached a project file. I asked SideFX about it, since it seems confusing to me that they work in opposite ways, and they basically said it was a "feature" not a "bug." Well, okay, I guess. But it's worth knowing, because I find the IK Solver a more robust way to set up leg rigs, and it's required if you want to do three- or four-bone IK (a dog leg, for instance).

    Finally, Matt does some cool stuff above splitting out his rig pose controls so that it all looks cleaner in the viewport. I haven't done that here, since once you attach control geo and wrap it all up in an HDA it looks better anyway, but it does make it a little confusing when you're manipulating the rig pose nodes. IK_setup_v01.hipnc
  11. Offset Bone Animation in a KineFX Rig

    Circling back to this because part of it was bugging me. I mentioned before there are a few ways to do this.

    -- You can do it simply with expressions in a rig pose node.

    -- You can do it in a rig VOP, although note that in the file the rig VOP has been switched from running over Detail to running over Points. In Detail mode (the way a rig VOP is supposed to work) you can get all the point transforms and offset the rotation inside a for-each point transform loop, but that doesn't do what you'd think. If you instead set the VOP to run over points, it works the way you expect.

    -- You can do the same thing in a wrangle. EXCEPT you have to make sure to offset the localtransform (the transform relative to the parent), not the world transform, AND you have to use the "prerotate" command, not the "rotate" command. This mirrors the pre-multiply and post-multiply options on the rig pose node. I don't fully get it, honestly, but it's a bit like shifting between local- and world-space transforms.

    -- And lastly I added a simple time offset.

    For simplicity I'm just adding the same rotation and time offset to each joint, not what you show in the example, but it should be easy to adapt this so each joint has its own controls like in the examples above. It's a seemingly simple thing to do, but getting it to work helps you understand how the various transform attributes are used under the hood by KineFX and the rig pose node. Hope it's useful. tail_v01.hipnc
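    The wrangle variant described above can be sketched like this. A minimal point-wrangle sketch (the channel name and rotation axis are arbitrary, and you may need a Compute Transform SOP afterwards so the world-space attributes pick up the modified localtransform):

    ```vex
    // Point wrangle over the skeleton: offset each joint's rotation
    // relative to its parent. prerotate() pre-multiplies the offset,
    // matching the rig pose node's pre-multiply mode; rotate() would
    // post-multiply and behave differently.
    matrix xform = 4@localtransform;
    prerotate(xform, radians(ch("rot_offset")), {1, 0, 0});
    4@localtransform = xform;
    ```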
  12. Simple non-skin mannequin rig (with fingers)

    Or https://www.mixamo.com/
  13. KineFX animate constraint

    Either the Parent Constraint VOP or the Blend Parent VOP should do what you want in a rig VOP.
  14. It'd be easier to see exactly what is going on if you post a scene file, but it sounds like there's a problem with the way you are trying to initialize the rig data on your raw points. Rather than trying to transfer transforms from the existing skeleton to your mocap points, it would probably be better to go the other way: take a skeleton with proper transforms and hierarchy and retarget your mocap data onto that skeleton. There's a quick start below that goes through it; it's a pretty simple setup. If you're looking for something else, post a scene file and I'll take a look. I've been setting up our own mocap pipeline over the last few weeks.
  15. I've been trying to dig down and understand what's going on at a low level in KineFX. I think I have a pretty good understanding of how the transform and localtransform attributes work together to define, for each joint, the world-space transform and the local transform relative to the parent. But what is the "effective local transform" attribute? And what might you use it for? Thanks.