Everything posted by ch3

  1. Install additional python libraries

    I think they have since removed the installer, which made things easier. But just for the record, and for my future self who will probably forget again:
    1. Install Python: https://www.python.org/
    2. From cmd go to C:\Python27\Scripts\
    3. pip install reportlab (or any other package you may need)
    4. Copy C:\Python27\Lib\site-packages\reportlab to <userDocuments>\houdini16.0\python2.7libs (or link via env variable)
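    An alternative to copying the package folder (a sketch, not from the original post): append the pip-installed site-packages location to sys.path at runtime, e.g. in a Python SOP, before importing the library. The Windows path below is just the example from the steps above.

```python
# Sketch: make a pip-installed package importable inside Houdini by
# appending its site-packages folder to sys.path at runtime.
# The path below matches the example install above; adjust as needed.
import sys

def add_site_packages(path):
    """Append a site-packages folder to sys.path if it isn't already there."""
    if path not in sys.path:
        sys.path.append(path)
    return path in sys.path

# e.g. in a Python SOP, before `import reportlab`:
add_site_packages(r"C:\Python27\Lib\site-packages")
```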
  2. I want to use an external python library to create and write out a PDF file from Houdini. Maybe it's a simple thing to many, but I've been struggling for the past couple of hours, even though I've done this before!! About a year ago I managed to use reportlab, but since I installed the latest Houdini (or something else changed on my computer) my Python SOP fails to import the library and I am trying to figure out how to do it again. What's the typical process of installing and importing an external library into Houdini? There is also the cairo library, which I would like to try. Any help will be highly appreciated. thanks
  3. hi, I wanted to create something like an audio analyzer with geometry, where the low frequencies pull one side of a plane and the high ones the other. I managed to put something together, but since I am new to Houdini, I was wondering if there is a cleaner/better way to do this. I used the pitch CHOP node to create a bunch of different channels. These are then accessed by a point SOP to deform a plane. The problem is that the resolution of the deformation is determined by the 'pitch divisions' and the number of channels created, which gives a stepped look. I wrote a function to interpolate between the two closest channels, but this just smooths the steps rather than adding more detail. Is there a different way to get the audio spectrum and use it to offset the surface? thanks georgios If you download the file, just plug an audio file into the 'sample' CHOP node and make sure you have the function defined:

    getFreq( string path, float f, float n )
    {
        float chN = chopn(path);
        float newF = f * chN;
        if( floor(newF) == ceil(newF) )
            return chop( path + newF );
        else
        {
            float lo = floor(newF);
            float hi = ceil(newF);
            float t = newF % 1;
            float value = chop(path + lo) * (1 - t) + chop(path + hi) * t;
            value = value + rand(newF * 100 + $T) * value * n;
            return( value );
        }
    }

    audioAnalyze.hipnc
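    The interpolation the function above performs can be sketched in plain Python, with a list of band values standing in for the CHOP channels (names here are hypothetical, not from the hip file):

```python
import math

def get_freq(channels, f, jitter=0.0, rnd=lambda x: 0.0):
    """Linearly interpolate a normalized position f (0..1) across discrete
    frequency-band values, mimicking the getFreq() expression above.
    `channels` stands in for the CHOP channels; `rnd` is an optional
    noise source used the way rand() is used in the expression."""
    new_f = f * (len(channels) - 1)
    lo = int(math.floor(new_f))
    hi = int(math.ceil(new_f))
    t = new_f - lo
    value = channels[lo] * (1.0 - t) + channels[hi] * t
    return value + rnd(new_f) * value * jitter
```

    As the post notes, this only smooths between existing bands; finer spectral detail needs more channels (or a different spectrum source), not more interpolation.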
  4. I have a small compositing network which I want to reference as a texture in the shader using the op: expression, i.e. op:/img/trail. Even though I've managed to make it work several times, I always find it a bit flaky, and many times mantra doesn't manage to load the image, even though it may be visible in the viewport when referencing the same image operator in a uvquickshade node, or just by loading the shader. It works with a principled shader out of the box, but if I put the same shader within a material builder it breaks. Is there a render attribute or something else I need to add to the shader? I understand it's better to use pre-rendered images the normal way, but I want to use dynamic heightfield SOPs as textures and ideally avoid having to write out thousands of frames in advance.
  5. So even if the shader pulls the image from the /img context, it doesn't seem to update it over time, whether it's an animated noise pattern or a changing heightfield, which is what I am trying to use it for. The frame the scene is on when I kick off the sequence render is used across all frames. Any ideas for that? thanks again
  6. Ah great, that makes total sense now. I guess it's somewhat similar to the way GLSL/OpenCL shader kernels expect all parameters to be imported a certain way. thanks a lot for the in-depth explanation.
  7. Is there a general limitation to expressions and connections within a material builder, in comparison to promoted parameters? It seems like the op: expression, or even a path reference via chs(), doesn't work within the material builder, and the parameters have to be promoted outside it. Is that normal?
  8. Volume density from texture?

    I may be wrong about the rest volume, but can't you just manually make 3 volumes, one for each axis, and use a volume wrangle to populate the values like that? @restX = @P.x; @restY = @P.y; @restZ = @P.z; I believe this makes sense when you advect it together with density, so you have a reference to a "distorted" coordinate to drive noises with. Otherwise, using the above rest fields will be the same as using world-space P in the shader (P transformed from screen space to world space).
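    To illustrate the point about rest fields (a toy sketch, not Houdini code): noise sampled at the stored rest position stays stable while P animates, which is exactly why the rest field is advected along with density.

```python
# Toy demonstration of why a rest field helps: noise looked up at the
# stored rest position is constant over time, while the same lookup at the
# animated P changes. noise() is a hash-like stand-in for a real noise
# function; animate() is a stand-in for the sim moving the voxel.
def noise(p):
    x, y, z = p
    return (x * 12.9898 + y * 78.233 + z * 37.719) % 1.0

def animate(p, t):
    """Toy animation: translate the position over time."""
    x, y, z = p
    return (x + t, y, z)

rest = (0.25, 0.5, 0.75)      # @restX/Y/Z captured at the start
p_now = animate(rest, 3.0)    # where the voxel has moved to
# noise(rest) is the same on every frame; noise(p_now) drifts with P.
```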
  9. Volume density from texture?

    There are many ways you can project onto a volume. The rest field is one of them and, as you mentioned, it can be used as UVs. I tend to skip that step and directly use the P which has been fitted to the bounding box of the desired projection. It's easier to try that in a volumeVOP to begin with. Let's say you want to project along the Y axis between x and z values of -10 to 10. All you need to do is fit the x and z values within that range so you get a 0 to 1 range, and feed that to the UVs (st) of the texture node. You can even have a second object as input and automatically get its bounds to calculate your fit range. Now if you want the projection to be on an arbitrary axis, you will have to do some extra maths to rotate the P, project and rotate back within VOPs, or, if it's easier, you can do it at the SOP level. What is important to keep in mind is that a volumeVOP operates on the voxel level, so you will never get any sharper detail than the voxel size. But once you do this, you can easily transfer the same nodes/logic onto a volume shader, which operates on render samples, which means you can go as sharp as your texture. Of course, if you move your camera away from your projection axis, the texture representation will get blurred along that axis. But then again, that's just one approach, and maybe there are other ways that may give you more control and better results.
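    The fit-to-UV step described above can be sketched like this (the Y projection axis and the -10..10 bounds are the example from the post; function names are my own):

```python
def fit01(v, vmin, vmax):
    """Map v from [vmin, vmax] to [0, 1], like the fit VOP."""
    return (v - vmin) / (vmax - vmin)

def project_uv(p, xrange=(-10.0, 10.0), zrange=(-10.0, 10.0)):
    """Project a position along the Y axis: fit its x and z into the
    projection bounds to get texture UVs (s, t)."""
    x, _, z = p
    return (fit01(x, *xrange), fit01(z, *zrange))
```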
  10. 2D dynamics

    What's the best way to simulate dynamics with just 2D shapes? Is it possible to use any of the existing solvers to simulate rigid bodies, but also flexible curve/polygonal shapes that respect line-to-line collisions? I've tried using the wire and the grain solver (with rest length on a network of lines that connect the points), but the collisions only happen at the point level, resulting in penetrations between shapes. Is there anything else I should look into, or a working example I can take a look at? thanks a lot georgios
  11. By default, when changing the framerate of a Houdini scene, any existing keyframes are adjusted to match the existing timing. Is there a way to prevent that, so that for example a keyframe at frame 1000 remains at frame 1000? I remember there was a dialogue about it, but it no longer pops up when I change the framerate. thanks
  12. Converting VEX to OpenCL

    Since I was working with the new H16 height fields, I re-implemented the reaction diffusion using those rather than points. I also did it without DOPs, using just an OpenCL node within a SOP solver. I've attached a new file. reactionDiffusion.hip
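    The OpenCL kernel from the hip file isn't reproduced here, but the Gray-Scott update typically used for reaction diffusion looks roughly like this in plain Python (the parameters are common textbook values, not necessarily the ones in the attached file):

```python
def laplacian(grid, x, y):
    """4-neighbour Laplacian with wrap-around borders."""
    h, w = len(grid), len(grid[0])
    return (grid[(x - 1) % h][y] + grid[(x + 1) % h][y] +
            grid[x][(y - 1) % w] + grid[x][(y + 1) % w] -
            4.0 * grid[x][y])

def gray_scott_step(u, v, du=0.16, dv=0.08, feed=0.035, kill=0.065, dt=1.0):
    """One explicit Euler step of the Gray-Scott reaction-diffusion model;
    returns the new (u, v) grids."""
    h, w = len(u), len(u[0])
    nu = [[0.0] * w for _ in range(h)]
    nv = [[0.0] * w for _ in range(h)]
    for x in range(h):
        for y in range(w):
            uvv = u[x][y] * v[x][y] * v[x][y]
            nu[x][y] = u[x][y] + dt * (du * laplacian(u, x, y) - uvv
                                       + feed * (1.0 - u[x][y]))
            nv[x][y] = v[x][y] + dt * (dv * laplacian(v, x, y) + uvv
                                       - (feed + kill) * v[x][y])
    return nu, nv
```

    On a height field the same update would run per voxel in the OpenCL SOP, with u/v stored as two extra layers.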
  13. I have a bunch of cameras exported from PhotoScan (a photogrammetry app) and I would like to create a master one which can animate between the whole set. What's the best way to get the transforms of two cameras, blend between them and drive a third camera? I tried to extract the intrinsics from the alembic export, but didn't manage to make it work. I ended up keyframing the camera in Maya with a good old MEL script and exporting it as an alembic. But I was wondering if there is a better way in Houdini, with either an FBX or alembic scene from the photogrammetry or any other application. thanks
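    For blending just the transforms (independent of any particular node), the usual recipe is to lerp the translation and slerp the rotation. A minimal sketch with quaternions in (w, x, y, z) order; the function names are my own, not a Houdini API:

```python
import math

def lerp(a, b, t):
    """Component-wise linear interpolation between two tuples."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def slerp(qa, qb, t):
    """Spherical interpolation between two unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(qa, qb))
    if dot < 0.0:                 # flip one side to take the shorter arc
        qb = tuple(-c for c in qb)
        dot = -dot
    if dot > 0.9995:              # nearly parallel: plain lerp is fine
        q = lerp(qa, qb, t)
    else:
        theta = math.acos(dot)
        sa = math.sin((1.0 - t) * theta) / math.sin(theta)
        sb = math.sin(t * theta) / math.sin(theta)
        q = tuple(a * sa + b * sb for a, b in zip(qa, qb))
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

def blend_cameras(pos_a, rot_a, pos_b, rot_b, t):
    """Blend two camera transforms: lerp position, slerp orientation."""
    return lerp(pos_a, pos_b, t), slerp(rot_a, rot_b, t)
```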
  14. Blend between cameras

    Ahh that's perfect. I wasn't aware of this node. thanks a lot
  15. I seem to remember there used to be a render attribute to choose between continuous or discrete deep image samples. I can't find it in v15. Has it been removed, or am I mistaken that there was such an option? thanks
  16. We have a few projects coming into the NY branch and we are looking for freelancers. We may also consider candidates outside the country. If you are looking for a permanent position, you should also get in touch for a possible future opening. thank you
  17. I am trying to figure out how to spawn new wire geometry within DOPs as the simulation runs. I create the full wire shape at SOP level and then animate the deletion of its points backwards, from a couple of points to the full set. Of course, just by importing this changing geometry, the wire solver won't re-import it on every frame. I looked into combining a SOP solver and tried various approaches, but none worked so far. Deleting points in the SOP solver didn't work. Then I tried deleting the primitive, adding or even copying (to acquire all the wire attributes) one point at a time and connecting them all with a new line primitive. But that didn't work either. Any ideas? thank you
  18. spawning new geometry within the solver

    Nice one, thank you for this example. I still can't make mine work though. The difference is that you already have a wire object at the simulation's initialization, whereas I want to start with nothing and keep adding points to a single wire piece. I tried keeping a copy of the DOP's output to have the object in its initial state with all the attributes needed by the solver, so I can gradually add them into the sim. Attached is the closest I've got it working. thanks again springFace_02.hip
  19. FEM constraints

    Isn't it possible to constrain some arbitrary points of one solid object to the points of another one? The SBD constraint seems to expect a matching set of points between the two objects, which I assume works well when the two objects have tets that perfectly touch each other. Is there another way to attach two solids that slightly intersect? thanks
  20. stitching cloth

    I have one cloth object which I have pre-cut in a certain way. I want it stitched when the simulation starts, but at a given time I want to gradually break the stitches. What kind of constraint can keep two points of the same cloth object together? sbdpinconstraint pins points in space, which I don't want. clothstitchconstraint is meant to work between two different objects, right? It kind of works if I set my object as both the constrained and the goal object, but then how do I create the associations between specific points? Is there something similar to a constraintnetwork where I can explicitly specify the constraints between points? thank you
  21. point clustering in DOP

    I scatter points on a surface and then bring them into a DOP network to do some dynamics. I would like to create clusters for some of the particles, to get bigger chunks in some parts rather than individual points. Initially I tried to pack some of them before bringing them into DOPs, but it seems that the popsource1 node removes any primitive data, so no packed geometry comes out of it. I am now trying to figure out if I could use spring constraints to force some particle groups to stay together. Has anyone done this before? How would I go about combining POP forces and SBD or RBD spring constraints? thank you georgios
  22. Maya Paint Effects To Houdini

    Is this file still available somewhere? I'd like to take a look.
  23. Hi, we are turning some animated geometry (alembic, non-deforming animation) into clouds, but because we want the noise of the clouds to be in local space (not affected by the animation), I am trying to work out how to apply the animation after creating the clouds, by referencing the intrinsic transform of the alembic. Using an AttributeVOP, I've managed to copy the intrinsic:packedfulltransform of the animated alembic onto the intrinsic:transform of the static alembic model. I am still trying to get my head around the difference between matrix3 and matrix4 for these transforms. Is there a general rule for which matrix type is used in each case? intrinsic:packedfulltransform is a matrix4, intrinsic:transform on packed primitives is a matrix3, but intrinsic:transform on VDBs is a matrix4. So what I put together works for packed geometry or simple volumes, but doesn't work with VDBs unless I pack them beforehand. Also, my understanding is that intrinsic:packedfulltransform and intrinsic:packedlocaltransform are not writable, and I can only change intrinsic:transform. Is that right? thank you
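    On the matrix3 vs matrix4 question, the relationship is just embedding: a matrix3 holds rotation/scale/shear, while a matrix4 additionally carries translation in its extra row. A sketch with plain nested lists, using the row-vector convention (translation in the bottom row); this is an illustration of the relationship, not the actual intrinsic API:

```python
def m3_to_m4(m3, translate=(0.0, 0.0, 0.0)):
    """Embed a 3x3 rotation/scale matrix into a 4x4 transform
    (row-vector convention: translation in the last row)."""
    m4 = [list(row) + [0.0] for row in m3]
    m4.append(list(translate) + [1.0])
    return m4

def m4_to_m3(m4):
    """Drop the translation: keep the upper-left 3x3 block."""
    return [row[:3] for row in m4[:3]]
```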
  24. I have to say though, occasionally I get some strange behavior with VDBs. They update correctly when viewed through the moving camera, but at times they stay still when viewed from the default viewport perspective camera. At this point it may be safer to extract the translate/rotate/scale components from the matrix and apply them using one transform SOP per primitive.
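    If it does come to extracting components manually, a basic decomposition for a shear-free transform can be sketched as below (row-vector convention, translation in the last row; a simplified illustration, not the actual Houdini API):

```python
import math

def decompose(m4):
    """Extract translate, per-axis scale, and the remaining rotation rows
    from a shear-free row-vector 4x4 transform (translation in last row)."""
    translate = tuple(m4[3][:3])
    scale = tuple(math.sqrt(sum(c * c for c in m4[i][:3])) for i in range(3))
    rotate = [[m4[i][j] / scale[i] for j in range(3)] for i in range(3)]
    return translate, scale, rotate
```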