guache's Achievements

  1. In my scene, I have semi-opaque objects that all use one type of (VEX) shader. This shader sends refraction rays, which in turn hit other objects with the same shader. How can I detect how "deep" ray-wise I am, i.e. count the number of surfaces the ray has hit since the original shader call?
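One way to get at this, sketched below: in a VEX shading context, getraylevel() reports how many bounces deep the current shading call is, which is effectively the surface count along the ray. The shader name and the cutoff of 4 are illustrative assumptions, not from the original post.

```vex
// Sketch of a depth-aware surface shader. getraylevel() is a documented
// VEX shading function: it returns 0 for primary (camera) rays, 1 for
// the first secondary bounce, and so on.
surface depth_aware()
{
    int level = getraylevel();
    // e.g. stop contributing (or stop spawning refraction rays)
    // once the ray has passed through too many surfaces
    if (level >= 4)
        Cf = {0, 0, 0};
}
```

getrayweight() can be used the same way if you would rather terminate by accumulated ray contribution than by bounce count.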
  2. I use Houdini for effects; I don't know much about character rigs. I have a human skeleton (Houdini bone objects) with no geometry. I need to create simple stand-in geometry for the skeleton (put some simple "flesh" on the bones). My plan: go through each bone and assign either a tube or a "capsule" (a soft tube, like the one used for capture geometry) that matches the bone's transform/scale. I can write a Python script, but I thought I'd ask first, in case there's a Houdini tool that can do it, or someone has an existing solution (basically I want the opposite of an "auto-rig", where you start with geometry and create a skeleton for it).
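The per-bone math behind such a script can be sketched in plain Python: given a bone's head and tail world positions (which, inside Houdini, you would read from the bone nodes), compute the center, length, and axis for a stand-in tube/capsule. The function name, dictionary keys, and default radius are illustrative assumptions.

```python
import math

def capsule_transform(head, tail, radius=0.1):
    """Given a bone's head/tail world positions (3-tuples), return the
    center, length, unit direction, and radius for a stand-in
    tube/capsule. Pure-math sketch; in a real Houdini script you would
    read the positions from the bone nodes and copy these values onto a
    Tube SOP's parameters."""
    dx, dy, dz = (t - h for t, h in zip(tail, head))
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    center = tuple((h + t) / 2.0 for h, t in zip(head, tail))
    direction = (dx / length, dy / length, dz / length)
    return {"center": center, "length": length,
            "dir": direction, "radius": radius}
```

Looping this over every bone and instancing a tube per result is essentially the plan described above.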
  3. So far I've only used Houdini for effects, so I don't know anything about character rigs. I need a simple, posable mannequin (with fingers). I don't mean a character with connected skin, just a stick figure with cylinder- or capsule-shaped limbs (head, chest, arm, forearm, thigh, etc.). In Houdini, is there a ready-to-go mannequin of this kind? Or a standard biped bone rig (with fingers) which I could then "auto-rig" relatively quickly with cylinders/capsules? Thanks.
  4. The help for "computenormal" (link below) reads: "However, when the VEX variables are changing with a high frequency (for example, a high frequency displacement map causing high frequency changes to the P variable) ...". Since this mentions displacement, it suggests that "computenormal" (in a displacement shader) gives the normal to the displaced surface. That sounds too simple: to know this normal, the shader would need the partial derivatives of the displaced surface, i.e. for each shaded point it would need to compute or "remember" two other shaded points. Is that really what's happening? https://www.sidefx.com/docs/houdini/vex/functions/computenormal.html
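For context, the usage pattern the linked help page is describing looks like this sketch. The renderer tracks surface derivatives across neighbouring shading points, so after P is moved, computenormal() can rebuild N from the displaced positions without the shader itself re-shading extra points. The shader name, the "amp" parameter, and the noise call are illustrative assumptions.

```vex
// Minimal displacement shader sketch: displace P, then recompute N
// from the post-displacement positions.
displace hf_displace(float amp = 0.1)
{
    P += amp * noise(P * 20) * normalize(N);
    N = computenormal(P, N, Ng);   // normal of the displaced surface
}
```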
  5. I want to import a Houdini mesh into Unity via FBX. In Houdini, the object has vertex normals which define hard/soft edges. I export with the FBX ROP. No matter which FBX options I try, I can't pass this hard/soft edge info on to Unity. I can re-derive normals in Unity by face angle, but I'd prefer to read the Houdini vertex info directly. Unity expects "smoothing groups", which don't exist in Houdini, and I don't see any smoothing-related options in the FBX ROP. Someone else, exporting to Unreal, fixed this by adding a string prim attribute in Houdini to define "smoothing groups", but that prim attribute is Unreal-specific (1st link). I'm not the first to have trouble importing hard/soft edges into Unity (2nd link). https://www.sidefx.com/forum/attachment/f7125ef24d8648354a6623078f76b2015c14066d/ https://polycount.com/discussion/92226/fbx-and-vertex-normals-of-a-skinned-mesh
  6. Thanks. I'll create a thick shell and see how it goes. For Bullet, I'll try to re-do the box by treating the individual walls as separate objects (making sure they're thick and convex).
  7. I don't have experience with sims, but I need to set up a simple RBD sim. I have 15 balls inside a thin, hollow box (a bit thicker than the balls). I want the balls to fall and collect at the bottom of the box. The balls fall down, but the box doesn't hold them in. They collide with features of the box but, in the end, they fall out of the box, as if the box's walls had "holes". The box is a Static Object (Use Surface Collision); its poly normals point to the center, as you'd expect in a hollow object. The balls are an RBD Point Object (Use Volume Based Collision on). I use the RBD Solver; I tried Bullet (same setup), but there the balls just fall straight down. Questions: 1) For a hollow box, is having just the inner walls (with normals pointing inward) OK, or do the walls need "thickness", like a shell? 2) Why is the box detected for collisions by the basic RBD Solver (not very well, but still), while in Bullet the balls fall right through it?
  8. I'd like to make a nice-looking presentation of an object (a personal project, not commercial, no budget). I know that KeyShot, Vectary, etc. are geared towards such product presentations. Is there a site or a set of resources for Houdini with the typical props / lighting setups used in such presentations? I mean things like a glass table, a prop book, a window geo for reflections, a "studio" HDRI / AI setup with a soft backdrop, etc. I know all this can be made, but maybe there are ready-to-go collections.
  9. You need to be specific about what you mean by "reveal". Do you mean the surface is black / not seen, but becomes visible as another object passes by? Then use a light attached to the moving object, with tight attenuation. I don't understand why geo attributes can't be used. Will your employer fire you if you subdivide the surface for more geo detail? Subdivide, then use a Point Wrangle to compute the distance to the passing object and save it as an attribute.
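The Point Wrangle step suggested above can be sketched as follows, with the passing object wired into the wrangle's second input. The attribute names "dist" and "mask" and the 0.5 falloff distance are illustrative assumptions.

```vex
// Point Wrangle sketch, run over the subdivided surface.
// xyzdist() returns the distance from @P to the nearest point on the
// second input's surface.
int prim;
vector uvw;
f@dist = xyzdist(1, @P, prim, uvw);
// e.g. turn the distance into a 0-1 "reveal" mask with a soft falloff
f@mask = 1.0 - smooth(0.0, 0.5, f@dist);
```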
  10. Thanks. I'll try a Clip SOP, with the clipping plane aligned to the fold line but set a tiny, tiny bit "into" the polygon, so that it's guaranteed to always create an edge.
  11. I have a long, rectangular poly which ends up "folded" by 180 deg (not necessarily in half, as in the pic). After folding, this poly is obviously very non-planar. I want to add a new "crease" edge along the fold line to fix it. I can create the edge manually in the viewport, but I want a procedural SOP solution. The Divide SOP seems most logical (Convex poly: ON, Triangulate non-planar: ON), but it doesn't always work. Sometimes it creates a new crease edge, sometimes it doesn't (maybe because folding by 180 deg is so radical). Is there a SOP that would reliably do this? I see a "Crease SOP", but it doesn't create edges, it assigns weights to existing edges. PolySplit SOP could work, but the way you need to specify the split points (% of edge length etc) seems unnecessarily complex. I do have the 3D point positions of the fold line, so I can easily create a line there. But how do I "plug" it into the existing poly as a valid new edge?
  12. I have a comp ROP (launched within the Houdini UI) that takes >1 hr, but I see only 20% use of my CPU. Is there a way to use more of the CPU to speed up the comp? In Preferences -> Cooking, Multithreaded Cooking is set to 14 (out of 16), but I don't see CPU usage above 20%. In the Compositing Settings, I tried dropping the bit depth to 8-bit integer (from 16-bit FP), but it doesn't make any difference. I tried "Batch Cook Frames" set to 2 and up, but it slows the comp down even more. My comp uses only individual frames (no time warps, etc.), so it should be easy to parallelize to the max (one frame per thread).
  13. I'm trying to render a flipbook with motion blur resulting from the camera's motion (not the objects'), but I never get any motion blur. In the flipbook settings, I enabled Motion Blur with 5 (sub-)frames. On the camera, I added/enabled all the motion-blur-related rendering options: Allow Motion Blur, Enable Motion Blur, Xform Time Motion Samples (5), Geo Time Samples (5). Shutter time is 0.5. I have non-integer frames in the playbar. Any ideas how to get blur from camera motion in a flipbook?
  14. Create a polyline defining the path, constrain the camera to it. Use the "position" setting (where on the arc the camera is) to time it any way you want.