
ch3

Members
  • Content count

    96
  • Joined

  • Last visited

  • Days Won

    2

ch3 last won the day on January 27 2016

ch3 had the most liked content!

Community Reputation

24 Excellent

About ch3

  • Rank
    Peon
  • Birthday 03/18/1981

Contact Methods

  • Website URL
    http://www.ch3.gr

Personal Information

  • Name
    Georgios
  • Location
    NYC

Recent Profile Visitors

2,034 profile views
  1. So even if the shader pulls the image from the /img context, it doesn't seem to update it over time, whether it's an animated noise pattern or a changing heightfield, which is what I am trying to use it for. Whatever frame the scene is on when I kick off the sequence render is used across all frames. Any ideas for that? Thanks again
  2. Ah great, that makes total sense now. I guess it's somewhat similar to the way GLSL/OpenCL shader kernels expect all parameters to be imported a certain way. Thanks a lot for the in-depth explanation.
  3. Is there a general limitation to expressions and connections within a material builder in comparison to promoted parameters? It seems like the op: expression, or even a chs() reference to a path, doesn't work within the material builder and they have to be promoted outside it. Is that normal?
  4. I have a small compositing network which I want to reference as a texture in the shader using the op: expression, i.e. op:/img/trail. Even though I've managed to make it work several times, I always find it a bit flaky and many times mantra doesn't manage to load the image, even though it may be visible in the viewport when referencing the same image operator in a uvquickshade node, or just by loading the shader. It works with a principled shader out of the box, but if I put the same shader within a material builder it breaks. Is there a render attribute or something else I need to add to the shader? I understand it's better to use pre-rendered images the normal way, but I want to use dynamic heightfield SOPs for textures and ideally avoid having to write out thousands of frames in advance.
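
     For comparison, here is a minimal sketch of what that op: reference looks like in raw VEX. This is a standalone surface shader, not the material builder network from the post; the only value taken from the post is the op:/img/trail path.

        // standalone VEX surface shader sketch: read a COP network as a texture
        surface cop_texture_demo()
        {
            // the texture() shading call accepts op: paths to /img networks
            vector clr = texture("op:/img/trail", s, t);
            Cf = clr;
        }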
  5. I may be wrong about the rest volume, but can't you just manually make three volumes, one for each axis, and use a volume wrangle to populate the values like that? @restX = @P.x; @restY = @P.y; @restZ = @P.z; I believe this makes sense when you advect it together with density, so you have a reference to a "distorted" coordinate to drive noises with. Otherwise, using the above rest fields will be the same as using world-space P in the shader (P transformed from screen space to world space).
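
     As a sketch, the volume wrangle mentioned above plus one assumed way the rest coordinate might be used afterwards. The volumes restX, restY and restZ are expected to already exist on the input (e.g. created with Volume SOPs), and the noise lookup at the end is only an illustration, not part of the original post.

        // volume wrangle: stamp the current world position into the rest fields
        @restX = @P.x;
        @restY = @P.y;
        @restZ = @P.z;

        // later, after the fields have been advected along with density, the rest
        // values rebuild a "distorted" coordinate to drive a noise, e.g. in another wrangle:
        vector rest = set(@restX, @restY, @restZ);
        float  n    = noise(rest * chf("freq"));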
  6. There are many ways you can project onto a volume. The rest field is one of them and, as you mentioned, it can be used as UVs. I tend to skip that step and directly use P fitted to the bounding box of the desired projection. It's easier to try that in a volume VOP to begin with. Let's say you want to project along the Y axis between x and z values of -10 to 10. All you need to do is fit the x and z values within that range so you have 0 to 1, and feed that to the UVs (st) of the texture node. You can even have a second object as input and automatically get its bounds to calculate your fit range. Now if you want the projection to be on an arbitrary axis, you will have to do some extra maths to rotate P, project and rotate back within VOPs, or, if it's easier, you can do it at the SOP level. What is important to keep in mind is that the volume VOP operates on a voxel level and you will never get any sharper detail than the voxel size. But once you do this, you can easily transfer the same nodes/logic onto a volume shader, which operates on rendered samples, which means you can go as sharp as your texture. Of course, if you move your camera away from your projection axis, the texture representation will get blurred along that axis. But then again, that's just one approach and maybe there are other ways that may give you more control and better results.
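
     A concrete version of that Y-axis projection, written as a volume wrangle sketch. The -10 to 10 range comes from the example above; the volume name "density", the string parameter holding the map path and the use of colormap() in place of the texture VOP are assumptions.

        // volume wrangle: project an image straight down the Y axis onto @density
        float u = fit(@P.x, -10.0, 10.0, 0.0, 1.0);
        float v = fit(@P.z, -10.0, 10.0, 0.0, 1.0);

        // colormap() stands in for the texture VOP at SOP level
        string map = chs("texture_map");   // path to the image to project
        vector clr = colormap(map, u, v);
        @density   = luminance(clr);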
  7. What's the best way to simulate dynamics with just 2D shapes? Is it possible to use any of the existing solvers to simulate rigid bodies, but also flexible curve/polygonal shapes that respect line-to-line collisions? I've tried using the wire and the grain solver (with rest lengths on a network of lines that connect the points), but the collisions only happen at the point level, resulting in penetrations between shapes. Is there anything else I should look into, or a working example I can take a look at? Thanks a lot, georgios
  8. By default, when changing the framerate of a Houdini scene, any existing keyframes are adjusted to match the existing timing. Is there a way to prevent that, so that for example a keyframe at frame 1000 remains at frame 1000? I remember there was a dialog about it, but that no longer pops up when I change the framerate. Thanks
  9. Since I was working with the new H16 height fields, I re-implemented the reaction diffusion using those rather than points. I also did it without DOPs, using just an OpenCL node within a SOP solver. I've attached a new file. reactionDiffusion.hip
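
     For anyone not opening the hip: one common reaction-diffusion formulation is the Gray-Scott model, and below is a rough VEX sketch of one step of it as a volume wrangle you could drop inside a SOP solver. This is not a transcription of the attached file or its OpenCL kernel; the layer names A and B, the constants, and the choice of in-plane index axes are all assumptions.

        // Gray-Scott reaction-diffusion, one step per solver cook (sketch only)
        // assumes two float heightfield layers/volumes named "A" and "B"
        int x = @ix, y = @iy, z = @iz;

        float a = volumeindex(0, "A", set(x, y, z));
        float b = volumeindex(0, "B", set(x, y, z));

        // discrete Laplacian over the two in-plane index axes of the 2D layer
        // (swap axes if your heightfield layout differs)
        float lapA = volumeindex(0, "A", set(x+1, y, z)) + volumeindex(0, "A", set(x-1, y, z))
                   + volumeindex(0, "A", set(x, y+1, z)) + volumeindex(0, "A", set(x, y-1, z)) - 4.0 * a;
        float lapB = volumeindex(0, "B", set(x+1, y, z)) + volumeindex(0, "B", set(x-1, y, z))
                   + volumeindex(0, "B", set(x, y+1, z)) + volumeindex(0, "B", set(x, y-1, z)) - 4.0 * b;

        // placeholder constants; tune to taste
        float Da = 1.0, Db = 0.5, feed = 0.055, kill = 0.062, dt = 1.0;

        float rxn = a * b * b;
        @A = a + (Da * lapA - rxn + feed * (1.0 - a)) * dt;
        @B = b + (Db * lapB + rxn - (kill + feed) * b) * dt;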
  10. Ahh that's perfect. I wasn't aware of this node. thanks a lot
  11. I have a bunch of cameras exported from PhotoScan (a photogrammetry app) and I would like to create a master one which can animate between the whole set. What's the best way to get the transforms of two cameras, blend between them and drive a different camera? I tried to extract the intrinsics from the alembic export, but didn't manage to make it work. I ended up keyframing the camera in Maya with a good old MEL script and exporting it as an alembic. But I was wondering if there is a better way in Houdini, with either an FBX or alembic scene from the photogrammetry or any other application. Thanks
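
     One possible direction, as an untested sketch rather than an answer from the thread: blend the two world transforms in a detail wrangle and channel-reference the result onto the master camera. The node paths and the blend parameter below are hypothetical.

        // detail wrangle sketch: blend the world transforms of two cameras and
        // store the result as detail attributes for the master camera to reference
        matrix xa = optransform("/obj/cam_001");
        matrix xb = optransform("/obj/cam_002");
        float  t  = chf("blend");          // 0 = first camera, 1 = second camera

        // translation: straight lerp
        vector ta = cracktransform(0, 0, 0, {0,0,0}, xa);
        vector tb = cracktransform(0, 0, 0, {0,0,0}, xb);
        v@blend_t = lerp(ta, tb, t);

        // rotation: quaternion slerp, then back to euler angles in degrees
        vector4 qa = quaternion(matrix3(xa));
        vector4 qb = quaternion(matrix3(xb));
        matrix3 r  = qconvert(slerp(qa, qb, t));
        v@blend_r  = cracktransform(0, 0, 1, {0,0,0}, matrix(r));

     The master camera's translate and rotate channels could then read blend_t and blend_r with detail() expressions, or the same blend could be done without code using Fetch and Blend object nodes.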
  12. We have a few projects coming into the NY branch and we are looking for freelancers. We may also consider candidates outside the country. If you are looking for a permanent position, you should also get in touch for a possible future opening. Thank you
  13. Nice one, thank you for this example. I still can't make mine work though. The difference is that you already have a wire object at the simulation's initialization, whereas I want to start with nothing and keep adding points to a single wire piece. I tried keeping a copy of the DOPs output to have the object in its initial state with all the attributes needed by the solver, so I can gradually add them into the sim. Attached is the closest I've got it working. Thanks again. springFace_02.hip
  14. I am trying to figure out how to spawn new wire geometry within DOPs as it simulates. I create the full wire shape at SOP level and then animate the deletion of its points in reverse, from a couple of points up to the full set. Of course, just by importing this changing geometry, the wire solver won't re-import it on every frame. I looked into combining it with a SOP solver and tried various approaches, but none have worked so far. Deleting points in the SOP solver didn't work. Then I tried deleting the primitive and adding, or even copying (to acquire all the wire attributes), one point at a time and connecting them all with a new line primitive. But that didn't work either. Any ideas? Thank you