
ch3

Members

  • Content count: 88
  • Joined
  • Last visited
  • Days Won: 2

Community Reputation

21 Excellent

About ch3

  • Rank: Peon
  • Birthday: 03/18/1981

Contact Methods

  • Website URL: http://www.ch3.gr

Personal Information

  • Name: Georgios
  • Location: NYC

Recent Profile Visitors

1,859 profile views
  1. Since I was working with the new H16 height fields, I re-implemented the reaction diffusion using those rather than points. I also did it without DOPs, using just an OpenCL SOP inside a SOP solver. I've attached a new file. reactionDiffusion.hip
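     For reference, the update step boils down to a Gray-Scott iteration; here is a minimal VEX sketch of the same idea in a Volume Wrangle inside the SOP solver (the hip itself uses OpenCL, and the layer names "u"/"v" and the coefficients below are assumptions):

         // Volume Wrangle inside a SOP Solver, set to run over one
         // heightfield layer only; "u" and "v" are hypothetical layers.
         float h = volumevoxeldiameter(0, "u");   // approx. neighbour offset
         float u = @u;
         float v = @v;

         // Lattice Laplacians from the four in-plane neighbours
         // (heightfields live in the XZ plane).
         float lapU = volumesample(0, "u", @P + set( h, 0, 0))
                    + volumesample(0, "u", @P + set(-h, 0, 0))
                    + volumesample(0, "u", @P + set( 0, 0, h))
                    + volumesample(0, "u", @P + set( 0, 0,-h)) - 4.0 * u;
         float lapV = volumesample(0, "v", @P + set( h, 0, 0))
                    + volumesample(0, "v", @P + set(-h, 0, 0))
                    + volumesample(0, "v", @P + set( 0, 0, h))
                    + volumesample(0, "v", @P + set( 0, 0,-h)) - 4.0 * v;

         // Assumed Gray-Scott coefficients; tune to taste.
         float Du = 0.2, Dv = 0.1, F = 0.035, k = 0.062;
         @u = u + Du * lapU - u * v * v + F * (1.0 - u);
         @v = v + Dv * lapV + u * v * v - (F + k) * v;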
  2. Ahh, that's perfect. I wasn't aware of this node. Thanks a lot.
  3. I have a bunch of cameras exported from PhotoScan (a photogrammetry app) and I would like to create a master camera that can animate between the whole set. What's the best way to get the transforms of two cameras, blend between them and drive a third camera? I tried to extract the intrinsics from the Alembic export, but didn't manage to make it work. I ended up keyframing the camera in Maya with a good old MEL script and exporting it as an Alembic. But I was wondering if there is a better way in Houdini, with either an FBX or Alembic scene from the photogrammetry or any other application. thanks
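     Something like this is what I have in mind for the blending itself, as a VEX sketch in a Point Wrangle running on a single point (the camera paths and the "blend" slider are made up, and the result would still need to be exported onto the master camera, e.g. through CHOPs):

         // Fetch two camera world transforms and blend them.
         matrix mA = optransform("/obj/cam_A");    // assumed paths
         matrix mB = optransform("/obj/cam_B");
         float  t  = chf("blend");                 // 0..1

         vector  pA = cracktransform(0, 0, 0, {0,0,0}, mA);
         vector  pB = cracktransform(0, 0, 0, {0,0,0}, mB);
         vector4 qA = quaternion(matrix3(mA));
         vector4 qB = quaternion(matrix3(mB));

         @P    = lerp(pA, pB, t);                  // blended position
         3@rot = qconvert(slerp(qA, qB, t));       // blended rotation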
  4. We have a few projects coming into the NY branch and we are looking for freelancers. We may also consider candidates from outside the country. If you are looking for a permanent position, you should also get in touch about a possible future opening. thank you
  5. Nice one, thank you for this example. I still can't make mine work though. The difference is that you already have a wire object at the simulation's initialization, whereas I want to start with nothing and keep adding points to a single wire piece. I tried keeping a copy of the DOP output, so I'd have the object in its initial state with all the attributes needed by the solver and could gradually add them into the sim. Attached is the closest I've got it working. thanks again springFace_02.hip
  6. I am trying to figure out how to spawn new wire geometry within DOPs as it simulates. I create the full wire shape at SOP level and then animate the deletion of its points backwards, from a couple of points to the full set. Of course, just by importing this changing geometry, the wire solver won't re-import it on every frame. I looked into combining it with a SOP solver and tried various approaches, but none has worked so far. Deleting points in the SOP solver didn't work. Then I tried deleting the primitive and adding, or even copying (to acquire all the wire attributes), one point at a time and connecting them all with a new line primitive. But that didn't work either. Any ideas? thank you
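     For clarity, this is the kind of thing I'm attempting inside the SOP solver, as a sketch (the grow channels are placeholders and the wire attribute names are my assumptions about what the solver expects):

         // Detail Wrangle inside the SOP Solver (merged with the Wire
         // Solver via a Multiple Solver). Input 0 is the wire from the
         // previous frame; append one point and extend the polyline.
         int last = npoints(0) - 1;
         vector tip = point(0, "P", last);

         // New point extends the wire from the previous tip.
         int pt = addpoint(0, tip + chv("dir") * chf("step"));

         // Copy the point attributes the wire solver relies on from the
         // old tip (width, klinear, kangular -- assumed names).
         setpointattrib(0, "width",    pt, point(0, "width",    last));
         setpointattrib(0, "klinear",  pt, point(0, "klinear",  last));
         setpointattrib(0, "kangular", pt, point(0, "kangular", last));

         // Extend the existing wire primitive with the new point.
         addvertex(0, 0, pt);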
  7. Isn't it possible to constrain some arbitrary points of one solid object to the points of another one? The SBD constraint seems to expect a matching set of points between the two objects, which I assume works well when the two objects have tets that perfectly touch each other. Is there another way to attach two solids that slightly intersect? thanks
  8. I have one cloth object which I have pre-cut in a certain way. I want to have it stitched when the simulation starts, but at a given time I want to gradually break the stitches. What kind of constraint can keep two points of the same cloth object together? sbdpinconstraint pins points in space, which I don't want. clothstitchconstraint is meant to work between two different objects, right? It kind of works if I set my object as both the constrained and goal object, but then how do I create the associations between specific points? Is there something similar to constraintnetwork where I can explicitly specify the constraints between points? thank you
  9. I scatter points on a surface and then bring them into a DOP network to do some dynamics. I would like to create clusters for some of the particles, to get bigger chunks in some parts rather than individual points. Initially I tried to pack some of them before bringing them into DOPs, but it seems that the popsource1 node removes any primitive data, so no packed geo comes out of it. I am now trying to figure out if I could use spring constraints to force some particle groups to stay together. Has anyone done this before? How would I go about combining POP forces and SBD or RBD spring constraints? thank you georgios
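     One direction I'm considering is faking it at POP level with a spring toward each cluster's centroid, rather than proper SBD/RBD constraints; a sketch, assuming an i@cluster id created before the sim (e.g. with a Cluster SOP) and made-up stiffness/damping channels:

         // POP Wrangle: pull each particle toward its cluster's
         // centroid with a simple spring force.
         int pts[] = findattribval(0, "point", "cluster", i@cluster);

         vector centroid = 0;
         foreach (int pt; pts)
             centroid += point(0, "P", pt);
         centroid /= max(len(pts), 1);

         // Spring toward the centroid plus simple velocity damping.
         v@force += (centroid - @P) * chf("stiffness") - v@v * chf("damping");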
  10. Is this file still available somewhere? I'd like to take a look.
  11. I have to say though, occasionally I get some strange behavior with VDBs. They update correctly when viewed through the moving camera, but at times they stay still when viewed from the default perspective viewport camera. At this point it may be safer to extract the translate/rotate/scale components from the matrix and apply them with one Transform SOP per primitive.
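     Cracking the matrix would look something like this (a sketch; the v@trans / v@rot / v@scale names are just carriers for the Transform SOP parameters):

         // Primitive Wrangle: split the packed transform into
         // translate / rotate / scale components.
         matrix m = primintrinsic(0, "packedfulltransform", @primnum);
         v@trans = cracktransform(0, 0, 0, {0,0,0}, m);  // translates
         v@rot   = cracktransform(0, 0, 1, {0,0,0}, m);  // rotates (degrees)
         v@scale = cracktransform(0, 0, 2, {0,0,0}, m);  // scales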
  12. Hey, thanks for these pointers. I've noticed that besides the matrix3 intrinsic transform, I also need to set @P to fully copy the Alembic animation onto static packed objects or volumes. Strangely though, if you convert a simple volume to a VDB, the intrinsic:transform attribute becomes a 4x4 matrix, so the code you posted doesn't set the matrix correctly. This works though:

         @P = point(1, "P", @ptnum);
         matrix t = primintrinsic(1, "packedfulltransform", @primnum);
         setprimintrinsic(0, "transform", @primnum, t, "multiply");

     It confuses me a bit why I still need to set @P even though it's a 4x4 matrix. As you say, volumes and VDBs may be a special case; I've noticed their bounding box is also stored as scale values within the matrix, which is why I have to set the setprimintrinsic mode to "multiply" rather than "set". We are currently facing another problem, where the volume jitters after setting the transform matrix. It seems the pivot changes as the cloud patch recalculates on every frame, which changes the placement. Thanks again for your time.
  13. Hi, we are turning some animated geometry (Alembic, non-deforming animation) into clouds, but because we want the noise of the clouds to be in local space (not affected by the animation), I am trying to work out how to apply the animation after creating the clouds, by referencing the intrinsic transform of the Alembic. Using an Attribute VOP, I've managed to copy the intrinsic:packedfulltransform of the animated Alembic onto the intrinsic:transform of the static Alembic model. I am still trying to get my head around the difference between matrix3 and matrix4 for these transforms. Is there a general rule for which matrix type is used in each case? intrinsic:packedfulltransform is matrix4, intrinsic:transform on packed primitives is matrix3, but intrinsic:transform on VDBs is matrix4. So what I put together works for packed geometry and simple volumes, but doesn't work with VDBs unless I pack them beforehand. Also, my understanding is that intrinsic:packedfulltransform and intrinsic:packedlocaltransform are not writable and I can only change intrinsic:transform. Is that right? thank you
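     For reference, the VEX equivalent of what I set up in the Attribute VOP is roughly this (a sketch; input 1 is the animated Alembic, input 0 the static copy):

         // Primitive Wrangle: copy the animated packed transform onto
         // the static packed prims. The matrix3 cast drops the
         // translation, so position goes through the prim's point.
         matrix full = primintrinsic(1, "packedfulltransform", @primnum);
         setprimintrinsic(0, "transform", @primnum, matrix3(full));

         int pt = primpoint(0, @primnum, 0);   // a packed prim has one point
         setpointattrib(0, "P", pt, cracktransform(0, 0, 0, {0,0,0}, full));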
  14. I've made a distant forest out of 50 million points, which I've cached out as a single bgeo file to load at render time with the delayed load procedural shader. Is it possible to add extra functionality to the procedural? I would like to add some subtle movement to the points as an indication of wind. Rather than generating a sequence of files, I was wondering if it's possible, and how, to put together a procedural shader that loads the cache and adds a noise offset at render time. thank you georgios