Everything posted by ch3

  1. Since I was working with the new H16 height fields, I re-implemented the reaction diffusion using those rather than points. I also did it without DOPs, using just an OpenCL SOP within a SOP solver. I've attached a new file. reactionDiffusion.hip
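     For anyone comparing with the older point-based version: a Gray-Scott update on points can be sketched in VEX like this (a point wrangle inside a SOP solver; the diffusion, feed and kill values are only illustrative, not necessarily the ones in the file):

         // Point Wrangle inside a SOP Solver, run over a grid's points.
         // f@u and f@v are seeded beforehand (e.g. v = 1 in a few spots).
         float Du = 0.2, Dv = 0.1, F = 0.037, k = 0.06;
         int nbs[] = neighbours(0, @ptnum);
         float lapU = -f@u * len(nbs);    // graph laplacian of u
         float lapV = -f@v * len(nbs);    // graph laplacian of v
         foreach (int pt; nbs) {
             lapU += point(0, "u", pt);
             lapV += point(0, "v", pt);
         }
         float uvv = f@u * f@v * f@v;     // the reaction term
         f@u += Du * lapU - uvv + F * (1 - f@u);
         f@v += Dv * lapV + uvv - (F + k) * f@v;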
  2. I have a bunch of cameras exported from PhotoScan (a photogrammetry app) and I would like to create a master camera which can animate between the whole set. What's the best way to get the transforms of two cameras, blend between them and drive a third camera? I tried to extract the intrinsics from the alembic export, but didn't manage to make it work. I ended up keyframing the camera in Maya with a good old MEL script and exporting it as an alembic. But I was wondering if there is a better way in Houdini, with either an FBX or alembic scene from the photogrammetry or any other application. thanks
  3. Ahh that's perfect. I wasn't aware of this node. thanks a lot
  4. I seem to remember that there used to be a render attribute to choose between continuous or discrete deep image samples. I can't find it in v15. Has it been removed, or am I mistaken that there was such an option? thanks
  5. We have a few projects coming into the NY branch and we are looking for freelancers. We may also consider candidates from outside the country. If you are looking for a permanent position, you should also get in touch about a possible future opening. thank you
  6. I am trying to figure out how to spawn new wire geometry within DOPs as the simulation runs. I create the full wire shape at SOP level and then animate the deletion of its points backwards, from a couple of points to the full set. Of course, just by importing this changing geometry, the wire solver won't re-import it on every frame. I looked into combining it with a SOP solver and tried various approaches, but none has worked so far. Deleting points in the SOP solver didn't work. Then I tried deleting the primitive, adding or even copying (to acquire all the wire attributes) one point at a time and connecting them all with a new line primitive, as in the sketch below. But that didn't work either. Any ideas? thank you
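     To be concrete, this is roughly what that last attempt looked like (a detail-wrangle sketch inside the SOP solver; the width attribute is just an example of what needs carrying over):

         // Detail Wrangle inside the SOP Solver.
         // Input 0: the wire as it was on the previous frame.
         // Input 1: the full-length template wire with all the wire attributes.
         int n = npoints(0);
         if (n < npoints(1)) {
             vector pos = point(1, "P", n);        // next point along the template
             int pt = addpoint(0, pos);
             float w = point(1, "width", n);
             setpointattrib(0, "width", pt, w);    // carry over wire attributes
             if (nprimitives(0) > 0)
                 removeprim(0, 0, 0);              // drop the old polyline, keep its points
             int prim = addprim(0, "polyline");
             for (int i = 0; i <= n; i++)
                 addvertex(0, prim, i);            // rebuild the wire one point longer
         }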
  7. Nice one, thank you for this example. I still can't make mine work though. The difference is that you already have a wire object at the simulation's initialization, whereas I want to start with nothing and keep adding points to a single wire piece. I tried keeping a copy of the DOP output, to have the object in its initial state with all the attributes needed by the solver, so I can gradually add them into the sim. Attached is the closest I've got it working. thanks again springFace_02.hip
  8. Isn't it possible to constrain some arbitrary points of one solid object to the points of another? The SBD constraint seems to expect a matching set of points between the two objects, which I assume works well when the two objects have tets that perfectly touch each other. Is there another way to attach two solids that slightly intersect? thanks
  9. I have one cloth object which I have pre-cut in a certain way. I want it stitched together when the simulation starts, but at a given time I want to gradually break the stitches. What kind of constraint can keep two points of the same cloth object together? sbdpinconstraint pins points in space, which I don't want. clothstitchconstraint is meant to work between two different objects, right? It kind of works if I set my object as both the constrained and the goal object, but then how do I create the associations between specific points? Is there something similar to the constraint network, where I can explicitly specify the constraints between points? thank you
  10. I scatter points on a surface and then bring them into a DOP network to do some dynamics. I would like to create clusters for some of the particles, to get bigger chunks in some parts rather than individual points. Initially I tried to pack some of them before bringing them into DOPs (roughly as sketched below), but it seems that the popsource1 node removes any primitive data, so no packed geo comes out of it. I am now trying to figure out if I could use spring constraints to force some particle groups to stay together. Has anyone done this before? How would I go about combining POP forces and SBD or RBD spring constraints? thank you georgios
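      For reference, the packing I tried was along these lines (a sketch, assuming a Cluster SOP upstream has written an i@cluster point attribute; the chunk name pattern is arbitrary):

          // Point Wrangle after a Cluster SOP: one name per chunk, so that a
          // Pack SOP with its Name Attribute parameter set to "name" creates
          // one packed primitive per cluster.
          s@name = sprintf("chunk_%d", i@cluster);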
  11. Is this file still available somewhere? I'd like to take a look
  12. Hi, we are turning some animated geometry (alembic, non-deforming animation) into clouds, but because we want the noise of the clouds to be in local space (not affected by the animation), I am trying to work out how to apply the animation after creating the clouds, referencing the intrinsic transform of the alembic. Using an AttributeVOP, I've managed to copy the intrinsic:packedfulltransform of the animated alembic onto the intrinsic:transform of the static alembic model. I am still trying to get my head around the difference between matrix3 and matrix4 for these transforms. Is there a general rule for which matrix type is used in each case? intrinsic:packedfulltransform is a matrix4, intrinsic:transform on packed primitives is a matrix3, but intrinsic:transform on VDBs is a matrix4. So what I put together works for packed geometry or simple volumes, but doesn't work with VDBs unless I pack them beforehand. Also, my understanding is that intrinsic:packedfulltransform and intrinsic:packedlocaltransform are not writable and I can only change intrinsic:transform. Is that right? thank you
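      For the packed-geometry case, what I put together is essentially this (a primitive-wrangle sketch; input 1 is the animated alembic with matching primitive order):

          // Primitive Wrangle over the static packed primitives.
          matrix full = primintrinsic(1, "packedfulltransform", @primnum); // matrix4
          setprimintrinsic(0, "transform", @primnum, matrix3(full));       // 3x3 part
          // for packed prims the translation lives on the point, not in the matrix
          int pt = primpoint(0, @primnum, 0);
          vector t = cracktransform(0, 0, 0, {0,0,0}, full);
          setpointattrib(0, "P", pt, t);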
  13. I have to say though, occasionally I get some strange behavior with VDBs. They update correctly when viewed through the moving camera, but at times they stay still when viewed from the default viewport perspective camera. At this point it may be safer to extract the translate/rotate/scale components from the matrix and apply them with one Transform SOP per primitive, as sketched below.
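      Extracting the components would look something like this (a primitive-wrangle sketch; the t/r/s attribute names are arbitrary, to be picked up per primitive by the Transform SOPs):

          // Primitive Wrangle: crack the 4x4 into translate/rotate/scale.
          matrix m = primintrinsic(0, "packedfulltransform", @primnum);
          v@t = cracktransform(0, 0, 0, {0,0,0}, m);   // translates
          v@r = cracktransform(0, 0, 1, {0,0,0}, m);   // rotates, in degrees
          v@s = cracktransform(0, 0, 2, {0,0,0}, m);   // scales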
  14. hey, thanks for these pointers. I have noticed that besides the matrix3 intrinsic transform, I also need to set @P to fully copy the alembic animation onto static packed objects or volumes. Strangely though, if you convert a simple volume to a VDB, the intrinsic:transform attribute becomes a 4x4 matrix, so the code you posted doesn't set the matrix correctly. This works though:

          @P = point(1, "P", @ptnum);
          matrix t = primintrinsic(1, "packedfulltransform", @primnum);
          setprimintrinsic(0, "transform", @primnum, t, "multiply");

      It confuses me a bit why @P still needs setting even though it's a 4x4 matrix. As you say, volumes and VDBs may be a special case; I've noticed their bounding box is also stored as scale values within the matrix, which is why I have to set the setprimintrinsic mode to "multiply" rather than "set". We are currently facing another problem, where the volume jitters after setting the transform matrix. Somehow the pivot seems to change as the cloud patch re-calculates on every frame, which shifts the placement. Thanks again for your time.
  15. I've made a distant forest out of 50 million points, which I've cached out to a single bgeo file to load at render time using the delayed load procedural shader. Is it possible to add extra functionality to the procedural? I would like to add some minimal movement to the points as an indication of wind. Rather than generating a sequence of files, I was wondering whether it's possible, and how, to put together a procedural shader that loads the cache and adds some noise offset at render time. thank you georgios
  16. Apparently in version 15 both are supported in the same deep file. They've also implemented everything described here: http://research.dreamworks.com/papers/Improved_Deep_Compositing_DWA_2015.pdf
  17. yeah, I think you are right about the name; I saw it in the docs of some older version. I wonder why it's missing in the latest one.
  18. Hello, I've used the cloud tool several times with great success, but only for static clouds. I now need to do a few patches of individual clouds with a bit of movement, so I would treat them as hero clouds rather than using the sky rig. Here is a good reference for what I am trying to achieve. Apart from finding a nice balance between the noise and advection settings, I am struggling to find a good way to make the clouds appear out of nothing and disappear again, like in the video. Any ideas? thank you georgios
  19. A small experiment where geometric models are turned into sound. More info about the process at ch3.gr/geophone. I love the geometry CHOP node! =]
  20. Thank you. I've attached the basic setup. Under /ch/mix there is the master audio-out node, where I select which object to hear at any moment and also add volume controls for each object. The rest happens at the object level. The head null node calculates which set of points to read on each frame and feeds other parts of the network with the values of 'firstPoint' and 'lastPoint'. You should only set the frequency and the start frame there.

      The audioGen subnetwork works in two modes, animated and non-animated, meaning that in the first you can have a variable frequency. If the speed is constant, I just use the straight output of the geometry CHOP, resample it to the desired frequency and add some offset at the start. If it's animated, it gets more complicated, because Houdini will only send the wave to the audio card as it stands when you press the play button. To overcome that, I trim just the section of the samples that should be heard on the specific frame, resample that time slice to the desired frequency, place it back at the current frame and feed it to a record node which caches out the wave. I've exposed a couple of controls for that process. Just make sure that when in record mode you play every frame, and when you play the audio back you switch to realtime.

      The rest of the network is about normalizing each channel and then mapping the 3 channels into the 2 stereo channels. I hope all that makes sense. =] odforce_geophone.hip
  21. Hello, this is a short film I finished last year and it's now available on Vimeo. It was mostly done in Houdini; in fact this project, together with a job I had in London during that time, was my introduction to this amazing software. Enjoy =] Alosis on ch3.gr
  22. Hi, I have two file nodes and I am animating a switch between the two at a certain frame. I am then taking the output to the speakers, but it doesn't seem to respect the animation of the switch and just plays the first selection, even though the waveform changes in the viewer. Is there a way to force it to update the audio on every frame? Any animation is also ignored when I save the waveform out to disk. Any ideas? thanks
  23. Actually, a record node set to current time slice solves this.
  24. Hi, has anyone here on the forums used solids and the finite element solver to do large scale, production level destruction? I am doing some tests to determine whether we will be using it in a series of shots, but so far my simulations have been rather slow and unstable. My current test is a sphere moving through a section of a wall that has 4-5 different layers to describe the different materials (plaster, wooden beams, cement). I understand that sandwiching multiple objects together and forcing another object through them is a very demanding task for the solver, so I am wondering if it is possible to simulate something like this with reasonable turnaround times. All the objects together have around 100k tets in total, and I've set the solver to 25 substeps and 10 collision passes in an attempt to keep the simulation stable. Some frames took more than 45 minutes, and even at these calculation times the simulation became unstable and ended up exploding a few frames after the impact. Is there anything obvious I am missing in terms of simulation efficiency/stability, or is it just a matter of increasing the substeps further? As a straight comparison I have Kali, the DMM engine from Pixelux that I was using at MPC, where we were able to simulate a couple of hundred frames overnight with more than a million tets. Do you think there is such a huge difference between the two solvers? thank you georgios