
Everything posted by madebygeoff

  1. Basically I'm trying to calculate facing ratio at SOP level as an attribute to feed into a shading network for a character (for reasons that don't matter here, I need to do this in SOPs and not in the shader itself). I wrote a basic bit of VEX to calculate the facing ratio, but if the object is transformed at the object level or parented, it doesn't work, because it's grabbing the point position prior to parenting and object-level transformations, which are likely to occur in animation. I thought I could use ow_space() but that doesn't seem to work (possibly because parent transforms are applied after SOP-level calculations?). Any way to grab point positions that take object-level transforms into account?
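    For what it's worth, once you have world-space values (in a wrangle you could multiply @P and @N by the object's world transform, e.g. obtained via optransform(), before the dot product), the math itself is simple. A minimal pure-Python sketch of just the facing-ratio math (names here are illustrative, not Houdini API):

```python
import math

def facing_ratio(p_world, n_world, cam_world):
    # facing ratio = dot(normalized N, normalized direction to camera),
    # computed AFTER object-level transforms have been applied to P and N
    def norm(v):
        length = math.sqrt(sum(c * c for c in v))
        return [c / length for c in v]
    to_cam = [c - p for c, p in zip(cam_world, p_world)]
    return max(0.0, sum(a * b for a, b in zip(norm(n_world), norm(to_cam))))

# a normal looking straight at the camera gives 1.0,
# a normal perpendicular to the view direction gives 0.0
print(facing_ratio([0, 0, 0], [0, 0, 1], [0, 0, 5]))  # 1.0
print(facing_ratio([0, 0, 0], [1, 0, 0], [0, 0, 5]))  # 0.0
```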
  2. Houdini Vellum Mesh Flickering / Jittery

    So, for the jittering: you can think of vellum as working by connecting your points with a whole bunch of springs. You can see them (the constraints) by connecting a null to the second output of the cloth or solver SOP. Each of those has: 1) a stretch stiffness for how much it wants to hold its original length (rest length), 2) a bend stiffness for how much it wants to keep its original angle relative to the surrounding points, and 3) a compression stiffness for how easily it can compress its length. There are other attributes, but those are the most important. With each step, the solver moves the points and then goes back and tries to satisfy the requirements of all the constraints. It does this several times (iterations), getting closer and closer.

    Jittering happens when you give the solver settings it can't satisfy. For instance, let's say you crumple a piece of fabric but have a very high bend stiffness. You'll get areas that have to bend to avoid penetration, but the constraint setting won't allow it. So the solver freaks out and you get jittering between multiple bad solves. The main thing is that you had VERY high stiffness settings all around: 1x10^10 for stretch stiffness, 1x10^6 for compression, and a low bend stiffness. So as the cloth stretches over the rock, it can't stretch. I just turned the stiffness settings down (and turned compression stiffness off, since you don't have anything compressing) and it worked fine.

    Lastly, your settings were VERY inefficient. I think in trying to solve the jittering you may have tweaked a lot of other stuff, but it resulted in a very slow sim for how simple it is. Your mesh was crazy dense. I turned the remesh down to a target size of 0.1 or 0.05. You can raise it later if you really need more detail, but vellum is now pretty stable across high- and low-res meshes. Start low, dial things in, then turn the resolution up.

    Your edge length scale in the cloth SOP was also REALLY small. Generally 0.25 - 0.5 is a good place to start (you had 0.01). And your solver settings were very high. Generally, the more substeps, the fewer constraint iterations you need: the smaller each movement, the fewer iterations it takes to satisfy all the constraint requirements. So with 5 substeps (a decent place to start) you can usually get away with 25-50 constraint iterations for cloth. And finally, you had velocity damping turned all the way up. Sometimes you'll hear that turning it up will help smooth sims, but generally that's not true, and it results in behavior that isn't very natural. Use it sparingly. You can always turn on the visualizer in the solver to see what's causing the jittering. Best of luck. fabric_pulling_over_rock2.hiplc
  3. Houdini 19.5 skeleton blend issue

    The new skeleton blend added a couple of features, but it still works the same way. In its default state, the only noticeable difference is that "bias" has been replaced by "weights", which might lead to confusion. Bias blended between input 1 (slider at 0) and input 2 (slider at 1). Now the skeleton blend works more like the blend shapes node, where you can add multiple blends and give each a weight between 0 and 1. If nothing is happening, check that at least one of your weights is > 0. And double-check that your "Attribute to match" exists on both skeletons (which it should, from what you've described). Lastly, you can always downgrade to an earlier version of any SOP: go to Windows > Asset Manager > Configuration and set "asset definition toolbar" to "show always". Now at the top of your SOP parameters you should see the version number of the SOP; change it to 2.0 and it will revert to the version used in the tutorial. If none of this works, post a .hip file.
  4. Well, it all depends on what you want to call "correct". Houdini, like a lot of computer programs, uses floating point numbers to represent its number system. It's a bit complicated, but think of it like scientific notation in binary. The short answer is that in floating point, 0.7 has an endlessly repeating fraction, kind of like how if you tried to write 1/3 in decimal notation, you'd end up with endless .3333333333333s. Same thing. So instead Houdini picks the closest value it can actually represent in floating point. In this case, it's 0.69999999999999996. If you click on the parameter name, it will expand to show you the actual floating point value that Houdini is using internally for its calculations. If you click the parameter again, it collapses down to a simplified value that Houdini chooses to display. So what you are seeing is "correct": it is displaying the actual value that Houdini is using to calculate the scattering phase. But it's also annoying to look at so many decimals. So, depending on what you want, use something like round(), floor(), or ceil() in your expression to simplify the value to something easier to read.
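    You can see the same thing outside Houdini: Python's decimal module will print the exact binary value a double actually stores (a quick illustration, not Houdini-specific):

```python
from decimal import Decimal

# the closest double-precision value to 0.7, written out exactly:
print(Decimal(0.7))   # 0.6999999999999999555910790149937383830547332763671875

# displayed values are just rounded for readability; the stored value is consistent
print(0.7 == 0.7)     # True
print(round(0.7, 3))  # 0.7
```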
  5. transfer Cd with CopyToPoints

    Geo spreadsheet is your friend here. And I admit it's a little confusing. Plus, in your question there's some mixing of metaphors going on. When you create a primitive sphere and apply a color to it, the color is applied to the point representing the primitive, and the viewport does a trick to color the sphere from the point color. When you pack and instance the copy to points, the sphere (primitive or poly) is stored in memory and color IS transferred to the new points that represent each packed sphere. If you look in the geo spreadsheet, you can see that Cd IS transferred to the points regardless of whether the sphere is primitive or poly. However, because the primitive sphere is itself packed, the viewport can't do its little display trick, so they all look grey despite having a Cd attribute. As Atom said, if you want to see the colors on the primitive spheres you have to unpack them (either by unchecking the pack and instance option, or by appending an unpack node and transferring Cd in "Transfer attributes"). Now the primitives are assigned a color and the viewport can display them. Finally, pscale is NOT technically transferred. If you look at the geo spreadsheet after the Copy to Points, you'll see that there is NO pscale attribute on the points. The default is for pscale not to be transferred; instead it (and the orient attribute) is applied as a transform matrix to all the copies. You could change this by unchecking "Transform Using Target Point Orientations" and deleting ^pscale (not pscale) from the list of "Attributes from target". Now pscale IS transferred as an attribute (although if you want to use it to scale your copies you now have to do that manually, but that's another post topic). Digging through the geo spreadsheet is often a good way to figure out how Houdini is moving attributes around.
  6. Sharp spikes growth effect

    There's lots of ways to skin that cat, but if I were doing that for a character (meaning an animated lizard, as opposed to an effect that suggests lizard skin) I'd want to bake it into the model geometry. You could do it as textured displacement, but looking at the way it blends into the smoother areas of the skin, I'd probably do it as a polyextrude with the length driven by a painted attribute and the direction by a different custom vector attribute. To be honest, though, if it were a character that was going to be used in multiple shots, I'd probably hand-model it rather than do it procedurally. But I'll try to mock something up tomorrow.
  7. Orient Point Towards Another

    I basically think of them as rotation around a vector direction. You define a vector in space and you define the amount you rotate around it. That's it. Pretty simple. All the complicated stuff happens under the hood. But if you want to dive into what is actually going on with all the complex numbers and four-dimensional space and everything, this is a really good explanation: https://eater.net/quaternions
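    The "axis plus angle" intuition can be written out directly; a minimal pure-Python sketch (using the (x, y, z, w) component order that Houdini's @orient uses):

```python
import math

def quat_from_axis_angle(axis, angle):
    # unit quaternion (x, y, z, w) for a rotation of `angle` radians around `axis`
    ax, ay, az = axis
    n = math.sqrt(ax*ax + ay*ay + az*az)
    s = math.sin(angle / 2.0) / n
    return (ax*s, ay*s, az*s, math.cos(angle / 2.0))

def quat_rotate(q, v):
    # rotate vector v by quaternion q (v' = q * v * q^-1, expanded)
    x, y, z, w = q
    vx, vy, vz = v
    tx = 2.0 * (y*vz - z*vy)            # t = 2 * cross(q.xyz, v)
    ty = 2.0 * (z*vx - x*vz)
    tz = 2.0 * (x*vy - y*vx)
    return (vx + w*tx + (y*tz - z*ty),  # v' = v + w*t + cross(q.xyz, t)
            vy + w*ty + (z*tx - x*tz),
            vz + w*tz + (x*ty - y*tx))

# 90 degrees around +Y carries +X onto -Z
q = quat_from_axis_angle((0, 1, 0), math.pi / 2)
print(quat_rotate(q, (1, 0, 0)))  # approximately (0, 0, -1)
```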
  8. Orient Point Towards Another

    You can use maketransform in either VEX or VOP to generate a transform matrix from 2 vectors and then convert that matrix to a quaternion. I had to reverse your v@v vector as well. It was pointing the wrong way. orient_toward_specific_point.hiplc
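    The two-vector construction can also be sketched by hand; a pure-Python version of "build a basis from a forward vector and an up hint, then convert to a quaternion" (the axis conventions and (x, y, z, w) order are assumptions chosen to mirror @orient, and the conversion is only valid while the rotation's w component is non-zero):

```python
import math

def make_basis(zaxis, up=(0.0, 1.0, 0.0)):
    # orthonormal basis from a forward (z) vector and an up hint,
    # similar in spirit to VEX maketransform(zaxis, yaxis)
    def norm(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])
    z = norm(zaxis)
    x = norm(cross(up, z))
    y = cross(z, x)
    return (x, y, z)  # rows are the rotated x, y, z axes

def quat_from_basis(m):
    # rotation matrix (rows = axes) -> quaternion (x, y, z, w); assumes w != 0
    xr, yr, zr = m
    w = math.sqrt(max(0.0, 1.0 + xr[0] + yr[1] + zr[2])) / 2.0
    return ((yr[2] - zr[1]) / (4.0 * w),
            (zr[0] - xr[2]) / (4.0 * w),
            (xr[1] - yr[0]) / (4.0 * w),
            w)

# aiming the +z axis down +x is a 90-degree turn around +Y
print(quat_from_basis(make_basis((1.0, 0.0, 0.0))))
```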
  9. Stupid... I was pasting the script action into the "parameter" section of the edit parameters dialog, in the list of tags at the bottom. But it doesn't paste carriage returns when you do it that way, so the script was erroring out. All you have to do is paste the above into the "action button" tab, where it belongs.
  10. Anyone know how to add a joint selection action to a custom interface? I'd like to add something to an HDA I'm working on like what you see on the parent joints SOP, for instance: you click the selection arrow to fire the viewer state, select the joint(s) in the viewport, and the string field populates with the name attribute. I see that there's a script_action attached to the string field. When I copy it, I get:

    from kinefx.ui import rigtreeutils
    rigtreeutils.selectPointGroupParm(kwargs)

    But when I go to use it, I get a syntax error.
  11. When are equal values are NOT equal?

    Could they be floating point rounding errors? Sometimes you can create situations where two numbers that look equal (or, more maddeningly, SHOULD be equal) differ because of how floating point numbers are handled. Click on the label of the parameter value for each to fully expand the number to full decimal places and see whether they really are the same.
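    If you need to compare values like these in a script, compare against a tolerance rather than with ==; a quick Python illustration of the trap:

```python
import math

a = 0.1 + 0.2
b = 0.3
print(a == b)                            # False: the two doubles differ in the last bits
print(a - b)                             # a tiny nonzero remainder
print(math.isclose(a, b, rel_tol=1e-9))  # True: equal within tolerance
```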
  12. Mocap / add locomotion

    For anyone interested in this admittedly niche question, I spent some time earlier this week setting up a solve for this. I haven't fully tested it, but it's a start. Since there's still so little KineFX info online, I figured I'd post it here in case it helps anyone. Personally, I learned some cool new stuff about how to manipulate motion clips that I thought was very handy.

    I tried two different routes, one of which I abandoned midway through, but which still has some useful applications. In both, I used a stabilize joints SOP to set the foot plants manually (you can do this procedurally as well, but most of the time I prefer to do it manually). Then, instead of actually stabilizing the joints, I just wrote out attributes. As a side note, I wish there were a way to set up another stabilize joints SOP downstream that would re-use those attributes, but I couldn't figure out a way to do that, and the goal here was to set the foot plants only once and then have everything calculate from there.

    Direction 1: On each frame where a joint was planted, I calculated the vector it moves between that frame and the previous frame, and I offset the planted joint by the inverse of that vector to stabilize it. I then added that vector to an offset variable that updates each frame, so that I've got a record of how much I'm moving the feet joints to stabilize them, and I add that back into a newly created locomotion joint (you could add this movement directly to the hips, but for testing it was easier to see what was happening on a new, static joint). That transfers the x- and z-axis motion of the feet, while they are planted, to the locomotion joint and moves the hips in space. From there, the plan was to go back and do a second pass to lock down the feet. But I started thinking it was a mistake to start with the hip movement; instead I should lock down each foot completely and then transfer some portion of both offsets to the hip. Plus, after diving inside the stabilize joints SOP to try to answer a different question, I started to realize that maybe there was a better way to approach the whole process of cleaning up the mocap data.

    Direction 2: The same basic idea, except that I convert the mocap into a motion clip and then do the same basic procedure of offsetting the stabilized joints, recording that offset to a variable, and adding that offset to the hip joint so that the hips better follow the planted feet. Converting to a motion clip meant not having to rely on a simulation (although the original solver was very fast). It also meant that, once you'd created the motion clip, there's all kinds of other smoothing and filtering you can do, which was much simpler at that point. Anyway, hope it's useful. locomotion.zip
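    The accumulation in Direction 1 can be reduced to a tiny sketch. This toy version works on one axis with made-up lists (illustrative only, not the actual SOP network):

```python
def extract_locomotion(foot_x, planted):
    # While the foot is planted, remove its frame-to-frame drift and
    # accumulate that drift into a running locomotion offset, which can
    # then be applied to a locomotion joint (or the hips).
    offset = 0.0
    stabilized, locomotion = [], []
    prev = foot_x[0]
    for x, is_planted in zip(foot_x, planted):
        if is_planted:
            offset += x - prev              # drift to transfer to the body
        stabilized.append(x - offset if is_planted else x)
        locomotion.append(offset)
        prev = x
    return stabilized, locomotion

# the planted foot holds still; its motion reappears as locomotion
stab, loco = extract_locomotion([0, 1, 2, 3], [True, True, True, False])
print(stab)  # foot pinned at 0 while planted
print(loco)  # accumulated offset: 0, 1, 2, 2
```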
  13. Mocap / add locomotion

    I'm working on a project that is using data from a Rokoko mocap suit. The raw data I get from the suit has the hip joint static, and all the other joints move relative to that. In their capture app, they provide a way to add locomotion by setting when each foot is planted and then internally calculating the resulting hip motion. But I don't love the way their app solves the motion: I lose some of the finer detail in the captured motion, and everything starts to feel a little loose and float-y. I'm wondering if anyone has ideas about how to build a custom setup in Houdini that does the same thing. I've got some ideas, but this seems like something you might have to do when building crowd sims, so I thought I'd see if this is a problem that already has a solution before I go off and re-invent the wheel. I've attached a project below to show the same clip with and without the post-processed locomotion. Thanks in advance for any ideas. mocap_locomotion.zip
  14. I hope this isn't belaboring the point, but the reason this is hard is that it works against the main benefit of vellum. The whole idea behind vellum and position-based dynamics solvers is that they affect the position of points directly without referencing internal forces. Let's see if I can not botch this explanation: instead of calculating physically realistic forces like pressure, elasticity, volume preservation, etc., vellum represents a material as points and constraints. Those constraints just have stretch and bend limits as well as damping. Over a number of iterations, the solver then updates the position of each point in a way that best satisfies the limits of all the constraints attached to that point. It is not physically accurate, but the advantage is that it is very fast and much more stable UNLESS... the topology changes. As soon as the topology changes, the wrong constraints are matched to the wrong points and all hell breaks loose. For vellum to work, each individual point needs to keep the same point number throughout the simulation, AND the constraints (your constraint geometry) need to keep the same point numbers and primitive numbers.

    So there are basically three ways (that I can think of) of doing this if you want to change the shape or size of a simulated object:

    1. You can keep the same geometry and update the parameters of the constraints so that it appears as if the geometry is changing (like hair growing because the rest length is increasing over time, or a balloon inflating because the rest length of the pressure constraints is increasing, or cloth sagging because, you guessed it, the rest length of the stretch constraints is increasing over time). This is what John set up earlier in the thread.

    2. You can perform the simulation on a stable piece of geometry and either alter it post-sim (for instance, carve the piece of tape post-sim) or use it to drive a different piece of geo that is being altered (say, transferring position based on UVs).

    3. Or, finally, you can get fancy and devise a way to keep the point numbers and constraint geometry from changing as the topology changes. HOW you do this depends on what kind of changes you are introducing to the topology. I'm not aware of a single setup that would work in all cases. But those are the conditions you have to meet if you want it to work...
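    Way 1 in miniature: a toy position-based solve on a single stretch constraint, where all the "growth" comes from animating the rest length while the point count and constraint topology never change (illustrative only, nothing like the real Vellum solver):

```python
def relax(p0, p1, rest_length, iterations=50):
    # one stretch constraint between two points on a line: each iteration
    # nudges both endpoints symmetrically toward the rest length
    for _ in range(iterations):
        err = (p1 - p0) - rest_length
        p0 += 0.5 * err
        p1 -= 0.5 * err
    return p0, p1

# "grow" the segment by animating rest_length; the geometry never changes
p0, p1 = 0.0, 1.0
for rest in (1.0, 1.5, 2.0):
    p0, p1 = relax(p0, p1, rest)
print(p1 - p0)  # 2.0
```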
  15. Fun! My dog would love it... tetherball_v01.hiplc
  16. IK solver mayhem

    It's a bug or something related to your project file and for the life of me I can't figure out what. If I use your project file, even if I rebuild the joints with a skeleton SOP I get the same (lack of) behavior. If I start a new project and do the same thing, it works. Changing the rest angle mode to "Compute from targets" should snap the curve shape to the original shape. For some reason it isn't working in your project. And I have no idea why. Sorry I can't be more help. You might want to reach out to sidefx directly and see if they can locate the bug. It's not directly related but in the past I have had conflicts with parts of KineFX and other 3rd party frameworks. For a month or so the skeleton node didn't work whenever I had Renderman installed. Worth talking with them. There's still some ghosts in the machine with KineFX.
  17. Should work as Atom said. Maybe post a scene file if you're still having trouble.
  18. IK solver mayhem

    What version are you using? I seem to remember something in one of the masterclasses about an update to the IK Solver VOP. When I try it in 19.0.383, I get the behavior from your file. When I try it in .531 or later, it works.
  19. IK solver mayhem

    This isn't a full solution, but I tried it in a new file and the solveIK VOP works. It gets a little unstable as the number of joints increases (I'm getting flipping as the joints compress). But it works. The weird part is that when I try to duplicate the same thing in your file, it doesn't work. I'm not exactly sure why yet. Still looking into it. But try opening up my file and see if you get the behavior you expect. IK_multi_joint_v01.hiplc
  20. Drive bone rot by another bone rot

    And thanks to a little help from @mestela, here's a pass at a VEX version. I was lazy and just looped through each joint and applied the previous joint's transform according to a single bias. If you want each joint to have its own bias, you could code things a little differently. Again, I haven't tested it fully, but hope it helps:

```
#include <kinefx.h>

int start_ptnum;
int pts[];
float bias;
matrix parent_localxform;
matrix parent_rest;
matrix descendant_xforms[];
matrix descendant_localxforms[];
matrix descendant_efflocalxforms[];
matrix offset;

start_ptnum = chi("Start_point");
bias = chf("Inheritance_bias");

parent_localxform = point(0, "localtransform", start_ptnum);
parent_rest = point(0, "rest_transform", start_ptnum);
offset = parent_localxform * invert(parent_rest);

updatedescendanttransforms(0, start_ptnum, -1, matrix(), matrix(),
                           pts, descendant_xforms,
                           descendant_localxforms, descendant_efflocalxforms);

foreach (int i; matrix descendant_localxform; descendant_localxforms) {
    descendant_localxforms[i] *= slerp(ident(), offset, bias);
}

setpointlocaltransforms(0, pts, descendant_localxforms);
```
  21. KineFX translating VOPs to wrangles

    Thank you, Matt! I knew I was missing something stupid like that. So many useful functions buried in there!
  22. So I've been trying to convert a few Rig VOPs I've built to VEX, partly to see if they run faster and partly just to understand what is happening under the hood. But I'm noticing that if you expose the VEX of a certain VOP, there are a bunch of functions that don't show up in a wrangle. For example: blendtransforms(). There's a Blend Transforms VOP, and if you look at its VEX it uses a blendtransforms() call, but if you add a wrangle there's no such function. Is there some library I need to include, or am I missing something?
  23. Drive bone rot by another bone rot

    See if this helps. I've got 2 VOPs in the script. VOP 1 extracts the rotation from the local transform matrix of the parent joint and subtracts it from the extracted rotation of the rest position to get the amount the parent joint has rotated, then scales it and applies it to the child joint. VOP 2 uses the matrices directly. It's a little less obvious, but it's something I did to try to better understand how to manipulate matrices (any suggestions are welcome). It's the same basic procedure, but handling the matrices is a little different: in order to get the difference between the parent joint and its rest position, you multiply the joint's local transform by the inverse of the rest position, then multiply that with the child joint's local transform matrix. The bias is set with the blend transform VOP (I haven't found a way to limit the rotation matrix to something like .5 or .25 directly). Haven't tested it fully, but it should be a start. inherit_rotation.hiplc
  24. How can I reverse rigdoctor?

    KineFX uses vertex order to set hierarchy. A reverse SOP will flip the hierarchy. Make sure it's wired in before the first rig doctor. Once you add a rig doctor, additional attributes are added that define parent and child indices and local transforms are calculated using that hierarchy. Adding a reverse SOP after that won't have any effect.
  25. A little more info on what you are trying to accomplish or a scene file might be helpful. There's quite a few ways to do what you ask depending on the final result. Are you looking to animate lots of bones or orient them for rigging? You could simply set a group and in a rig wrangle use the rotate or prerotate commands. Again it depends on how you want to rotate the joints. A screenshot or scene file would help with your last question. If you rotate a bone with a rig pose it usually shows up in the properties window as a local rotation in degrees.