Showing results for tags 'transform'.

Found 27 results

  1. Hello; in the Transform SOP (or maybe other SOPs), is it possible to do per-element scaling? I mean scaling each piece around its own bbox-center pivot, not around the main pivot defined in the Transform SOP. In 3ds Max you have several options for scaling selected pieces of a geometry, which is awesome ("Use Selection Center", "Use Pivot Point Center", ...). Thanks.
  2. Cut Copies

    Hello all, I'm attempting to make an effect like this and am getting stumped. If anyone has any ideas on how to approach it, that would be super helpful! Thanks.
  3. Hi guys, I'm trying to replicate a COP workflow from Simon Holmedal. In the video below, at about 29:00, he folds UV coordinates with a VOP COP filter... I can't make it work... Any idea? Thanks.
  4. Ran into an issue while learning DOPs. After having set up Vellum in SOPs, I'm trying to import the setup into DOPs. At first everything is fine and dandy, but if a transform is added to the geometry at the object level, then only the geometry that's imported into the DOP network is updated; the constraints are still in their original position. Is it possible to update the Vellum constraint locations when the object node is transformed? Attaching a basic example file with the issue. test.hipnc
  5. moving tile by tile

    I have been searching tutorials, videos, forums, etc. The answer to this should be easy, and be something in the vein of what I assume many people try, but I cannot find an answer. I want a grid of tiles to flip one by one, rotating 180 degrees. I used a Group Range to select them one by one from a Copy to Points (copying tiles to grid points), but it will only rotate the whole thing. This led me to check out local rotations and quaternion conversion, as per the videos. I assume I need a VEX attribute to convert to each local rotation, then use time as the factor to rotate them in order? I tried for-each loops, VEX for local rotations, a Group Range driven by the frame number to rotate them according to frame, etc. I am not posting a test file here because I have tried so many things and can't figure it out. I finally decided I need to come here first, as I am putting the recipe together in the wrong order somehow. I did find things that allowed me to rotate all tiles in local space in a copy cloud, but couldn't figure out the one-at-a-time, in-order part. Anyone have a solution?
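    Not from the thread, but a minimal plain-Python sketch (made-up numbers) of the staggered-timing logic usually behind this kind of one-by-one flip: each tile derives its own start frame from its index, and its rotation happens about its own centre rather than the grid's.

        import numpy as np

        frames_per_flip = 12   # how long one tile takes to flip (assumed)
        stagger = 4            # frames between one tile starting and the next (assumed)

        def tile_angle(tile_index, frame):
            # 0 before this tile's start frame, then ramps to 180 degrees over frames_per_flip frames
            t = np.clip((frame - tile_index * stagger) / frames_per_flip, 0.0, 1.0)
            return t * np.pi

        def flip_about_centre(points, centre, angle):
            # rotate this tile's points about its own centre, around the X axis
            c, s = np.cos(angle), np.sin(angle)
            R = np.array([[1.0, 0.0, 0.0],
                          [0.0,   c,  -s],
                          [0.0,   s,   c]])
            return (points - centre) @ R.T + centre
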
  6. Hi folks, I have searched around, but similar questions are either unresolved or unanswered. Hopefully now in 2019 we have a simple solution to this simple issue. I have used the Extract Transform node in Houdini, which gave me the position and rotation of my object. Prior to that, I had simmed a rigid body object, and used this node to replace it with a higher-resolution version for my render. Now I wish to do the same in Maya. I have a high-res model of my rigid body, and it would be a waste to export each frame as Alembic. It would be great to export a simple null with a position and rotation attribute or something like that, and constrain my object to it in Maya. It sounds simple, but I can't do it. Attaching an FBX or Alembic ROP to the Extract Transform node produces the files, but they're just a locator without any attributes in Maya. How can I get this setup to work, or what are the alternatives? Thank you.
  7. I am working on a big destruction job and I'm trying to optimise our workflow as much as possible. My usual workflow: cache a single frame of my fractured geo as packed fragments, import the RBD sim and create a point per RBD piece (DOP Import); lighting would then import these two caches and use Transform Pieces. This works well since we only cache a single frame of the fractured geo, and the cache for the RBD points is very small. However, the geo is still written into the IFD file every frame, which can be quite large. So, is it possible to specify the geo needed for the whole sequence and then have Mantra transform the geo at render time (the same as Transform Pieces)? This would make the IFDs tiny. An alternative method is to write each fractured piece out as a separate file, then copy empty packed disk primitives to the RBD points and use the unexpandedfilename intrinsic to expand them at render time. This makes tiny and fast IFD files, which is great, but it seems quite slow to render, probably because it has to pull so many files from disk (1000s of pieces). Is it possible to do the render-time Transform Pieces approach, or does anybody have a better method? (The two I've mentioned are fine; I'm just trying to optimise!)
  8. Hi there! I've found a lot of topics somewhat talking about this, but haven't been able to get anything working. I am trying to constrain or parent geometry to an animated FBX bone. In my scene, a character emits particles from its mouth, but that animation is at the SOP level. I could use a rivet, but that only tracks the position of the bone, not the rotation. If I use a Parent Blend constraint at the object level, nothing from the SOP-level animation affects the child. Am I missing something about how parenting works in Houdini? Or perhaps there is a better method to accomplish this kind of constraint relationship? Thanks for any help!
  9. I got really confused with multiple Transform SOPs. I hate to ask such a simplistic question, but the fact that I can't make it work means I don't understand some fundamental concepts of Houdini. If I use them one after another, I thought that the second one would inherit the transforms of the first one. In the example (Houdini 17 non-commercial) the first transform is animated and the second one is static (non-animated) but it is parented to the first one, so I expect it to follow the first one; I mean the center of the second one should always be at the center of the first one, but that is only true if the scale of the second one is set to 1. translate_scale_origin_q.hipnc
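    A quick arithmetic sketch of what is most likely going on here (an assumption, since the hip file isn't reproduced): the second Transform SOP scales about its own pivot, which defaults to the origin, not about wherever the first transform has moved the geometry, so any uniform scale other than 1 also scales the animated translation.

        # First Transform SOP: animated translate has moved the centre to x = 5 (made-up value).
        centre_after_first = 5.0

        # Second Transform SOP: static, uniform scale 2, pivot left at the default origin.
        pivot, scale = 0.0, 2.0
        centre_after_second = pivot + scale * (centre_after_first - pivot)

        print(centre_after_second)  # 10.0 -> the centre no longer follows the first transform

    Setting the second transform's pivot onto the animated centre keeps the two centres together, which is consistent with the scale-of-1 behaviour described above.
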
  10. Hi, It seems that when animating objects at scene/object level, you are generally only able to animate variables globally. I have a scene with a spinning propeller, and this propeller needs to rotate around its own axis while being placed in a certain spot in the scene. How is this achieved? All transforms in the scene are in global space, so if you need to rotate the object 30 degrees on the X axis, the Y axis will not follow the local transforms and the propeller will spin sideways in circles. Follow-up question: I briefly had a teacher from the Lost Boys School of VFX and he showed me how to properly load animation data into a dopnet. Unfortunately I was just starting out with Houdini at the time, so I did not catch the method. I gathered that this should not be done at SOP level ('Use Deforming Geometry'), but rather with motion / 'RBD keyframe active' operators in the dopnet. How do I load in my animation data so the velocities are correct, with proper interpolation? Thank you.
  11. Hey guys, I have an animated Alembic file (there's no deformation) and I have to convert this object to a VDB mesh (VDB from Polygons >> Convert VDB). I did it on a static frame (using a Time Shift) and now I'd like to copy the original animation to this new mesh. A Copy Stamp does copy the position but not the rotation. Can you guys help me with this? Just to make things clearer (this is not the original scene, I'm kind of faking it so I can share it here): this is the animated Alembic character; then I made this new mesh from it; I need to transfer the position and orientation from the first one to the second one. I guess I could do it with a packed object since it has just one point. Thx! RnD_copy.tansformation.v1.01.hiplc
  12. Hello to all! I am currently writing a wrangle in which I'm hoping to generate a random "hair" on every point and make it face the normal of said point. The way I planned it is as follows: 1. create the hair with random params at 0,0,0; 2. rotate the hair to the normal of the point it's "generated on" with the rotate(matrix, angle, axis) function; 3. move the hair to @P. I'm not a hero with vector math, otherwise I'd generate the whole hair thing along a normal. I wonder if I can somehow construct an identity matrix, like one would do with the ident() function, and use that to transform the prim. Is there a way to simply rotate a primitive using something simpler than extreme vector math? sgamhar.hip
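    Not the poster's setup, but a small numpy sketch of the math being described: build the hair along +Y at the origin, compute the rotation that takes +Y onto the point normal, apply it, then move the result to P. The normal, hair shape and point position are made-up values; the 180-degree case is handled separately because the usual Rodrigues construction breaks down there.

        import numpy as np

        def rotation_between(a, b):
            # 3x3 matrix rotating unit vector a onto unit vector b (Rodrigues formula)
            a = a / np.linalg.norm(a)
            b = b / np.linalg.norm(b)
            v = np.cross(a, b)
            c = np.dot(a, b)
            if np.isclose(c, -1.0):
                # a and b are opposite: rotate 180 degrees around any axis perpendicular to a
                perp = np.cross(a, [1.0, 0.0, 0.0])
                if np.linalg.norm(perp) < 1e-6:
                    perp = np.cross(a, [0.0, 1.0, 0.0])
                perp /= np.linalg.norm(perp)
                return 2.0 * np.outer(perp, perp) - np.eye(3)
            vx = np.array([[0.0, -v[2], v[1]],
                           [v[2], 0.0, -v[0]],
                           [-v[1], v[0], 0.0]])
            return np.eye(3) + vx + vx @ vx / (1.0 + c)

        hair = np.array([[0.0, t, 0.0] for t in np.linspace(0.0, 1.0, 5)])  # hair built along +Y
        N = np.array([0.3, 0.8, 0.5])   # the point's normal (made up)
        P = np.array([2.0, 0.0, 1.0])   # the point's position (made up)

        R = rotation_between(np.array([0.0, 1.0, 0.0]), N)
        placed = hair @ R.T + P         # rotate to the normal, then move onto the point
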
  13. How to transform a geo asymmetrically

    Could you help me transform a geometry like this?
  14. Good morning, dear community. Just as we can get the bbox boundaries and centroid using $CEX, $CEY, $CEZ or the centroid expression, is there a possibility to get the "pivot rotation" relative to world space? I tried converting acos to degrees using the dot product with the normal vector, but without successful results. Keep in mind it comes from a point-deformed animation with a constant shape (meaning the bbox remains constant during the whole length of the animation). Thank you very much.
  15. Hi all, After researching for some days, apparently there is no way to export Houdini transform handles into Maya through the digital asset (correct me if I'm wrong). So one way I found to do this would be through the Maya Connection Editor, between a locator and the Houdini asset. In Houdini I have a grid and a sphere, and the sphere is controlling the grid extrusion via its position and scale. It is a very simple setup with an Attribute Transfer to test in Maya, and everything is working fine except one thing. My goal is to control the sphere inside Maya with the traditional transform handle, so I can control where the extrusion on the grid is happening by moving the handle instead of typing numbers. Once I imported the asset into Maya, I could attach the locator transform handle to the whole asset using the Connection Editor; however, I cannot see the sphere's transform node parameters from Houdini in the Connection Editor, only the main translation parameter that controls the whole asset. So far the result I get is the locator translation handle transforming the whole asset and not only the sphere inside the asset, and therefore I cannot move the extrusion around. That said, does anyone know how to connect the sphere's transform parameters to the locator handle in Maya through the Connection Editor, so I could control the extrusion on the grid? Any info would be appreciated. Thanks
  16. Transform falloff

    Hi! I'm struggling to create a Cinema 4D MoGraph-like effect. I have fractured a geo and I want the pieces to rotate based on an effector. I have tried packing it and/or using a for-each loop, but I couldn't get close to it. Can anyone help with it? Thanks!
  17. Hey guys, I am really stuck with this one. I was told it is impossible to change the scale of a piece inside DOPs while it's simulating with the Bullet solver. I have achieved something similar where the piece goes into DOPs, comes out of DOPs, gets its scale transformed, and then goes back into the same DOP sim in a loop, so it shrinks over time. But now I need this to happen inside DOPs. I'm pretty sure it's possible. Does anyone have a clue?
  18. Hello all, I have been racking my brains/this forum on something seemingly quite simple, but am not getting anywhere... Basically, I would like to match the scale/orientation of one set of points based on another, and then be able to extract those 'transform' parameters. For example, if I have a point cloud A with 10 points, which has been re-scaled/oriented to create point cloud B (in a separate process; I don't have access to those transform params), how would I then re-orient A to match? I have a photogrammetry point cloud which I am trying to match a tracked camera to. The issue is that the camera track is coming in at the origin, so I need to transform it to match the original point cloud. I have isolated 10 identically positioned (not identical ptnum) points from the point cloud and the tracking data to use as 'calibrators', but can't figure out the best/most efficient next step for the re-orient/scale. Any help, tips, or pointers in the right direction for threads would be very much appreciated, as always.
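    One standard technique for this (not mentioned in the post) is a Kabsch/Umeyama fit: given the two sets of calibrator points in matched order, it recovers the least-squares uniform scale, rotation and translation between them. A minimal numpy sketch, assuming A and B have already been sorted into point-to-point correspondence:

        import numpy as np

        def fit_similarity(A, B):
            # Least-squares similarity transform (uniform scale s, rotation R, translation t)
            # mapping points A onto matched points B:  B ~ s * A @ R.T + t
            ca, cb = A.mean(axis=0), B.mean(axis=0)
            A0, B0 = A - ca, B - cb
            U, S, Vt = np.linalg.svd(A0.T @ B0)
            d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection solution
            D = np.diag([1.0, 1.0, d])
            R = Vt.T @ D @ U.T
            s = np.trace(np.diag(S) @ D) / (A0 ** 2).sum()
            t = cb - s * R @ ca
            return s, R, t

    The resulting s, R and t can then be baked into a single matrix and applied to the whole tracked camera/point set.
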
  19. EDIT: This may be something different. When parent bones have scales that don't match the currently edited bone, this issue arises. Going to poke around; the issue remains, though, on uniformly scaled bones too. I'm not sure if this should be in the scripting forum or not... But I'm running into some issues with non-uniformly scaled bones becoming uniformly scaled when using the Agent Edit node. I'm specifically trying to edit the bones through VEX at this point. It seems to change a non-uniform scale to a uniform scale after using the maketransform VEX function. I'm noticing this issue with polar decomposition and transposing of 4x4 matrices being scaled by one of the scale's values when only modifying a position or rotation of the matrix. As though, instead of scaling by {1.2, .8, 1.3}, it scales by {1.2, 1.2, 1.2}, or maybe it's just the vector's length of the found eigenvalues. Likewise, when using the Agent Edit node, it will change any edited bone's non-uniform scale to a uniform scale. Has anyone found a simple way to get around this, other than manually editing the matrix position W row and using dihedral math to rotate the bones?
  20. How does the pivot work when using the Make Transform VOP node? I am trying to break down and figure out the function of the Make Transform node using Python (outside of any 3D software). The translate is pretty basic, as I can just place the values into the matrix as they are: x = m[3][0], y = m[3][1] and z = m[3][2]. For rotation I am using the following:

        def EulerToMatrix(Rotation):
            x, y, z = Rotation
            XM = M3([[1, 0, 0],
                     [0, math.cos(x), -math.sin(x)],
                     [0, math.sin(x), math.cos(x)]])
            YM = M3([[math.cos(y), 0, math.sin(y)],
                     [0, 1, 0],
                     [-math.sin(y), 0, math.cos(y)]])
            ZM = M3([[math.cos(z), -math.sin(z), 0],
                     [math.sin(z), math.cos(z), 0],
                     [0, 0, 1]])
            return (ZM * YM * XM)

    I can then just pipe this information into the Matrix4. The problem I have now is that I cannot find any information that I can understand on how to apply the pivot to the Matrix4; it's not as simple as just adding it. I have attached a file with the Make Transform that I am using to test against. The pivot seems to be linked to the rotation, and so I am guessing it is related to the scale as well. Is there any way to break down how this pivot transform works? MakeTransPiv.hip
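    A hedged guess at the missing piece, as a numpy sketch: with row vectors (translation in the last row, matching the m[3][*] indexing above), a scale-rotate-translate transform with a pivot is commonly built by moving the pivot to the origin, scaling and rotating there, moving back, and then applying the translate. Whether this matches Make Transform exactly depends on its transform-order settings, so treat it as a starting point to test against the attached hip file:

        import numpy as np

        def T(v):
            # 4x4 translation matrix, row-vector convention (translation stored in the last row)
            m = np.eye(4)
            m[3, :3] = v
            return m

        def make_transform_guess(translate, R3, scale, pivot):
            # p' = ((p - pivot) scaled and rotated) + pivot + translate
            # R3 is assumed to be a 3x3 rotation in row-vector convention (p_row @ R3);
            # transpose it if yours is built for column vectors.
            S = np.diag(list(scale) + [1.0])
            R = np.eye(4)
            R[:3, :3] = R3
            return T(-np.asarray(pivot)) @ S @ R @ T(pivot) @ T(translate)
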
  21. Is there a way to make the transform parameter on my subnet or HDA appear as a transform handle in the scene viewer?
  22. Hi! I'm trying to replace some non-packed pieces from a bgeo sequence with a hi-res version. I cannot simulate again, so I need to find a workaround for 'Transform Input Geometry' on the DOP Import. I've tried the Transform Pieces SOP without success. Any suggestions? Thanks! transform_pieces.hip
  23. Dynamically Modify Geometry

    I take a sphere and then apply the Mountain SOP to this sphere. Then I add a Copy node which is fed a point cloud from a sphere, IsoOffset, and Scatter combination. Below the Copy I place a Null node. In this Null node I can see the total number of packed primitives. What I would like to do is modify the Frequency parameter on the Mountain SOP for each of the primitives being copied, so that each one of the primitives has a unique shape. I need a variety of moon rocks to blast out; without modifying them, they all look the same. Any thoughts on how I can dynamically create rocks of various shapes? Thanks in advance.
  24. Xform pivot

    Hi, I'm trying to learn, but I sometimes find it hard to get answers from the docs (really good docs, though). Here is what I'm struggling with at the moment: I want the pivot of a Transform node to be placed relative to another object (in the same geo node). I need the pivot to be at XMIN on the X axis, 0 on the Y axis, and ZMIN on the Z axis. Easy. But I need the XMIN and ZMIN of another object. Can someone help me with the syntax? Thanks!
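    Not from the thread, but one common way to drive a pivot from another node's bounding box is the bbox() HScript expression. A minimal Python sketch that sets it up; the node paths, the reference node name and the pivot parameter names px/py/pz are assumptions about this particular setup:

        import hou

        # The Transform SOP being driven (hypothetical path); the reference node sits next to it.
        xform = hou.node("/obj/geo1/transform1")

        # Pivot X/Z follow the other node's bounding-box minimum; pivot Y stays at 0.
        xform.parm("px").setExpression('bbox("../reference_geo", D_XMIN)')
        xform.parm("py").set(0)
        xform.parm("pz").setExpression('bbox("../reference_geo", D_ZMIN)')
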
  25. A while ago I used some very simple matrix multiplications to bring geometry that is positioned anywhere in space back to the origin, and from there back to its original position. Maybe somebody here can help me remember? I think I created a world-space transformation matrix, called it myMatrix, and then multiplied by its inverse to transform the geo to the origin. In order to move the geo back, I just multiplied by myMatrix... is that possible, or am I getting something wrong? (Not in front of a Houdini right now.) Anyway, how do I get the world-space transformation matrix for an object positioned anywhere in world space, for example for some geo that came into my scene as an ABC from Maya? A lookat transform in VOP SOPs? Thanks for your hints and patience.
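    A minimal numpy sketch of that round trip, assuming row vectors with the translation in the last row and a made-up myMatrix. As for where the world-space matrix comes from: for an object-level node it would typically be something like the object's worldTransform() in Python or optransform() in VEX, while Alembic geometry may instead carry its transform on the packed primitive.

        import numpy as np

        # A made-up world-space transform: just a translation of (5, 2, -3).
        myMatrix = np.eye(4)
        myMatrix[3, :3] = [5.0, 2.0, -3.0]

        pts = np.array([[1.0, 0.0, 0.0, 1.0]])    # one homogeneous point of the geo

        at_origin = pts @ np.linalg.inv(myMatrix)  # multiply by the inverse: geo sits at the origin
        restored = at_origin @ myMatrix            # multiply by myMatrix again: back where it started
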