ch3

Members
  • Content count: 92
  • Joined
  • Last visited
  • Days Won: 2

Community Reputation
  24 Excellent

About ch3
  • Rank: Peon
  • Birthday: 03/18/1981

Contact Methods
  • Website URL: http://www.ch3.gr

Personal Information
  • Name: Georgios
  • Location: NYC

Recent Profile Visitors
  1,991 profile views

  1. I may be wrong about the rest volume, but can't you just manually make three volumes, one for each axis, and use a volume wrangle to populate the values like that? @restX = @P.x; @restY = @P.y; @restZ = @P.z; I believe this makes sense when you advect it together with density, so you have a reference to a "distorted" coordinate to drive noises with. Otherwise, using the above rest fields will be the same as using world-space P in the shader (P transformed from screen space to world space). A fuller sketch is below.
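     Here's a minimal sketch of that idea (the volume names match the wrangle above; the noise lookup at the end is just an assumption about how you'd use it):

        // Volume Wrangle, bound to float volumes restX/restY/restZ:
        // bake the current world position into the rest fields, then
        // advect them along with density so they become a "distorted"
        // coordinate frame.
        @restX = @P.x;
        @restY = @P.y;
        @restZ = @P.z;

        // Later, in a shader or another wrangle, drive noise with the
        // advected rest coordinate instead of raw P:
        // vector rest = set(@restX, @restY, @restZ);
        // @density *= fit01(noise(rest * 2.0), 0.5, 1.0);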
  2. There are many ways you can project onto a volume. The rest field is one of them and, as you mentioned, it can be used as UVs. I tend to skip that step and directly use P fitted to the bounding box of the desired projection. It's easiest to try this in a volumeVOP to begin with. Say you want to project along the Y axis between X and Z values of -10 to 10. All you need to do is fit the X and Z values within that range so you get a 0 to 1 range, and feed that to the UVs (st) of the texture node (see the sketch below). You can even take a second object as input and automatically get its bounds to calculate your fit range. Now if you want the projection to be along an arbitrary axis, you will have to do some extra maths to rotate P, project, and rotate back within VOPs, or, if it's easier, you can do it at the SOP level. What's important to keep in mind is that the volumeVOP operates at the voxel level, so you will never get any sharper detail than the voxel size. But once you have this working, you can easily transfer the same nodes/logic onto a volume shader, which operates on render samples, which means you can go as sharp as your texture. Of course, if you move your camera away from the projection axis, the texture will get blurred along that axis. Then again, that's just one approach, and maybe there are other ways that give you more control and better results.
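     The same logic as a volume wrangle, for reference (the texture path and the -10..10 footprint are assumptions):

        // Volume Wrangle on density: project a texture along +Y,
        // assuming the projection footprint spans -10..10 in X and Z.
        float u = fit(@P.x, -10.0, 10.0, 0.0, 1.0);
        float v = fit(@P.z, -10.0, 10.0, 0.0, 1.0);

        // Sample the map at (u, v) and attenuate density by its luminance.
        vector col = colormap("$HIP/tex/projection.exr", u, v);
        @density *= luminance(col);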
  3. What's the best way to simulate dynamics with just 2D shapes? Is it possible to use any of the existing solvers to simulate rigid bodies, but also flexible curve/polygonal shapes that respect line-to-line collisions? I've tried using the wire and the grain solver (with rest lengths on a network of lines connecting the points, roughly as sketched below), but the collisions only happen at the point level, resulting in penetrations between shapes. Is there anything else I should look into, or a working example I could take a look at? thanks a lot georgios
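     For reference, this is roughly how I set up the rest lengths on the connecting lines for the grain solver (a sketch; it assumes each connection is a two-point polyline):

        // Primitive Wrangle over the two-point connection lines:
        // store the length each grain constraint should maintain.
        int pt0 = primpoint(0, @primnum, 0);
        int pt1 = primpoint(0, @primnum, 1);
        vector p0 = point(0, "P", pt0);
        vector p1 = point(0, "P", pt1);
        f@restlength = distance(p0, p1);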
  4. By default, when changing the framerate of a Houdini scene, any existing keyframes are adjusted to preserve their timing. Is there a way to prevent that, so that, for example, a keyframe at frame 1000 remains at frame 1000? I remember there was a dialog about it, but it no longer pops up when I change the framerate. thanks
  5. Since I was working with the new H16 height fields, I re-implemented the reaction diffusion using those rather than points. I also did it without DOPs, using just an OpenCL node inside a SOP solver. I've attached a new file. reactionDiffusion.hip
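     For anyone who wants the gist without opening the file, a Gray-Scott-style update step looks roughly like this as a volume wrangle inside the SOP solver (the attached file does it in OpenCL; the layer names u/v and the constants here are assumptions):

        // Volume Wrangle over two matching height field layers "u" and "v",
        // run every step inside a SOP Solver. A real setup would
        // double-buffer the fields; this is just the core math.
        float Du = 0.21, Dv = 0.105, F = 0.037, k = 0.06; // diffusion/feed/kill
        float dx = 0.1; // neighbour offset; match it to the grid spacing

        float u = volumesample(0, "u", @P);
        float v = volumesample(0, "v", @P);

        // 5-point Laplacian from the in-plane (XZ) neighbours
        float lapU = volumesample(0, "u", @P + set(dx,0,0))
                   + volumesample(0, "u", @P - set(dx,0,0))
                   + volumesample(0, "u", @P + set(0,0,dx))
                   + volumesample(0, "u", @P - set(0,0,dx)) - 4.0*u;
        float lapV = volumesample(0, "v", @P + set(dx,0,0))
                   + volumesample(0, "v", @P - set(dx,0,0))
                   + volumesample(0, "v", @P + set(0,0,dx))
                   + volumesample(0, "v", @P - set(0,0,dx)) - 4.0*v;

        // Gray-Scott reaction-diffusion update
        f@u = u + (Du*lapU - u*v*v + F*(1.0 - u));
        f@v = v + (Dv*lapV + u*v*v - (F + k)*v);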
  6. Ahh that's perfect. I wasn't aware of this node. thanks a lot
  7. I have a bunch of cameras exported from PhotoScan (a photogrammetry app) and I would like to create a master camera that can animate between the whole set. What's the best way to get the transforms of two cameras, blend between them, and drive a third camera (something like the sketch below)? I tried to extract the intrinsics from the alembic export, but didn't manage to make it work. I ended up keyframing the camera in Maya with a good old MEL script and exporting it as an alembic. But I was wondering if there is a better way in Houdini, with either an FBX or alembic scene from the photogrammetry or any other application. thanks
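     To make the blend concrete, a detail wrangle sketch (the camera paths are made up; the resulting point could then drive the master camera via CHOPs or an expression):

        // Detail Wrangle: blend the world transforms of two cameras,
        // lerping translation and slerping rotation as quaternions.
        matrix xa = optransform("/obj/cam_A");   // hypothetical paths
        matrix xb = optransform("/obj/cam_B");
        float  t  = chf("blend");                // 0 = cam_A, 1 = cam_B

        vector ta = cracktransform(0, 0, 0, {0,0,0}, xa); // translations
        vector tb = cracktransform(0, 0, 0, {0,0,0}, xb);
        vector4 q = slerp(quaternion(matrix3(xa)),
                          quaternion(matrix3(xb)), t);

        int pt = addpoint(0, lerp(ta, tb, t));
        setpointattrib(0, "orient", pt, q);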
  8. We have a few projects coming up in the NY branch and we are looking for freelancers. We may also consider candidates outside the country. If you are looking for a permanent position, you should also get in touch for a possible future opening. thank you
  9. Nice one, thank you for this example. I still can't make mine work though. The difference is that you already have a wire object at the simulation's initialization, whereas I want to start with nothing and keep adding points to a single wire piece. I tried keeping a copy of the DOPs output, so I have the object in its initial state with all the attributes the solver needs and can gradually add them into the sim. Attached is the closest I've got it working. thanks again springFace_02.hip
  10. I am trying to figure out how to spawn new wire geometry within DOPs as it simulates. I create the full wire shape at SOP level and then animate the deletion of its points backwards, from a couple of points to the full set. Of course, just by importing this changing geometry, the wire solver won't re-import it on every frame. I looked into combining it with a SOP solver and tried various approaches (roughly like the sketch below), but none have worked so far. Deleting points in the SOP solver didn't work. Then I tried deleting the primitive, adding or even copying (to acquire all the wire attributes) one point at a time and connecting them all with a new line primitive, but that didn't work either. Any ideas? thank you
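      The kind of per-frame growth step I've been attempting inside the SOP solver, for clarity (growth direction and attribute values are placeholders; as described above, the wire solver doesn't pick the new geometry up):

        // Detail Wrangle inside a SOP Solver attached to the Wire Solver:
        // append one point per step off the current tip and wire it in.
        int n = npoints(0);
        vector tip = point(0, "P", n - 1);
        int pt = addpoint(0, tip + set(0.0, 0.05, 0.0)); // placeholder direction
        setpointattrib(0, "width", pt, 0.01);  // in practice, copy wire attrs from the tip
        addprim(0, "polyline", n - 1, pt);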
  11. Isn't it possible to constrain some arbitrary points of one solid object to the points of another one? The SBD constraint seems to expect a matching set of points between the two objects, which I assume works well when the two objects have tets that perfectly touch each other. Is there another way to attach two solids that slightly intersect? thanks
  12. I have one cloth object which I have pre-cut in a certain way. I want it stitched together when the simulation starts, but at a given time I want to gradually break the stitches. What kind of constraint can keep two points of the same cloth object together? sbdpinconstraint pins points in space, which I don't want. clothstitchconstraint is meant to work between two different objects, right? It kind of works if I set my object as both the constrained and the goal object, but then how do I create the associations between specific points? Is there something similar to constraintnetwork where I can explicitly specify the constraints between points? thank you
  13. I scatter points on a surface and then bring them into a DOP network to do some dynamics. I would like to create clusters for some of the particles, to get bigger chunks in some areas rather than individual points. Initially I tried to pack some of them before bringing them into DOPs, but it seems that the popsource1 node removes any primitive data, so no packed geo comes out of it. I am now trying to figure out if I could use spring constraints to force some particle groups to stay together (see the sketch below). Has anyone done this before? How would I go about combining POP forces and SBD or RBD spring constraints? thank you georgios
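      One way to express the "keep the group together" part is a constraint network built from two-point polylines, sketched here (it assumes an integer cluster point attribute and a spring constraint relationship named "springs" on the DOP side; both names are assumptions):

        // Point Wrangle over the scattered points: connect each point
        // to nearby members of the same cluster as constraint-network prims.
        int near[] = nearpoints(0, @P, chf("radius"));
        foreach (int np; near)
        {
            if (np <= @ptnum) continue;                      // avoid duplicate prims
            if (point(0, "cluster", np) != i@cluster) continue;
            int prim = addprim(0, "polyline", @ptnum, np);
            setprimattrib(0, "constraint_name", prim, "springs"); // DOP data name
            setprimattrib(0, "constraint_type", prim, "all");
        }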
  14. Is this file still available somewhere? I'd like to take a look.