f1480187 last won the day on July 12

f1480187 had the most liked content!

Community Reputation

385 Excellent

About f1480187

  • Rank
    Houdini Master

  1. Here is my attempt, with some nice techniques. Not overwhelmingly realistic; assume I've never seen an ice cream. cornetto.hipnc
  2. Try the even-odd rule. You can check whether the next step will bring the point outside the polygon, and change direction to prevent it. Also, there is a Vector2 class in HOM. I would subclass it and define my own methods in case I need extra functionality.
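The even-odd rule itself is easy to sketch outside of Houdini. Here is a minimal pure-Python version (my own illustration, not HOM code), assuming the polygon is given as a list of (x, y) tuples:

```python
def point_in_polygon(pt, poly):
    """Even-odd rule: cast a ray toward +X and count edge crossings.

    An odd number of crossings means the point is inside.
    """
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % n]
        # Does this edge cross the horizontal line through y?
        if (y0 > y) != (y1 > y):
            # X coordinate of the crossing point.
            xc = x0 + (y - y0) * (x1 - x0) / (y1 - y0)
            if xc > x:
                inside = not inside
    return inside
```

For the "change direction" idea, you would test the candidate position of the next step with this function before committing to it.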
  3. My scenes are usually less than one megabyte. I unlocked a couple of rigged characters (the default male and female) with geometry stored in HDAs, and that increased the file size to 50 MB. I think something is bloating your scenes unusually.
  4. Judging by odforce scenes, no. The only common convention is an arbitrary color meaning "attention/problem/fix here". H16 started to colorize many nodes, and now it's a bit confusing, since old nodes kept their colors. When you add an op, the new color differentiates it from similar nodes created earlier. IMO, the better way is automatic coloring and shaping, like syntax highlighting in code editors: divide operators into classes, then let users customize colors and override classes to their own preferences.
  5. Add Hole SOP.
  6. Convert the volume to a mesh and intersect it with many Grids using the Boolean SOP. volume_to_slices.hipnc
  7. 2/3. It's trivial to place split geometry into different groups using a Partition node. You can even transfer them back to the original geometry. 3. Try a Dissolve node instead. 4/5. Polywire it. You can control the width of the Polywire's output geometry; set up a ramp parameter in VOPs or a Wrangle to map the width to any shape you need. edges_to_path_driven_geo.hipnc
  8. You need to get an instance of hou.Geometry. Sometimes there is no easy way to track which set of calls yields a hou.Something. You may try to construct it by calling hou.Something(), but that may throw "AttributeError: No constructor defined", which, I think, will happen with most hou classes. Even if you could construct a hou.Geometry, you obviously need a specific geometry, not an empty one. Fortunately, any SopNode has a Geometry. Here is a Python Shell example:

     >>> node = hou.node('/obj/geo1/platonic1')
     >>> geo = node.geometry()
     >>> bb = geo.boundingBox()
     >>> bb
     <hou.BoundingBox [-0.92793, 0.92793, -0.975684, 0.975684, -0.789344, 0.789344]>
     >>> mv = bb.minvec()
     >>> mv
     <hou.Vector3 [-0.92793, -0.975684, -0.789344]>
     >>> tuple(mv)
     (-0.9279304146766663, -0.9756839275360107, -0.7893444299697876)
  9. Simple control over many kinds of deformation with a custom weight: deform the model at full strength, add a Point Wrangle, input both models, and blend with "lerp(@P, @opinput1_P, @weight)". weighting.hipnc
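For clarity, VEX's lerp() is just a per-component linear interpolation. Here is a minimal Python sketch of the same blend (hypothetical names, nothing Houdini-specific):

```python
def lerp(a, b, t):
    """Linear interpolation: returns a when t == 0, b when t == 1."""
    return a + (b - a) * t

def blend_position(p, p_deformed, weight):
    """Per-component blend, like lerp(@P, @opinput1_P, @weight) in VEX."""
    return tuple(lerp(a, b, weight) for a, b in zip(p, p_deformed))
```

With weight 0 you get the undeformed position, with weight 1 the fully deformed one, and anything in between is a straight mix.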
  10. Variable access uses dollar-sign syntax: $MYVAR. @myattrib is wrangle-style access and uses the actual attribute name. In recent Houdini versions you don't need to create a variable with the addvariablename function. "varmap" is a detail attribute; keep the Geometry Spreadsheet nearby to inspect its contents.
  11. There is also a Solver-based SDF subtraction approach posted in several threads. I didn't find them quickly, but the principle is basically this: vdb_track.hipnc
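For reference, the CSG difference of two signed distance fields is max(a, -b): keep the first shape, carve out the second. Here is a minimal Python sketch with sphere SDFs (my own illustration of the principle, not code from the scene file):

```python
import math

def sphere_sdf(center, radius):
    """Signed distance to a sphere: negative inside, positive outside."""
    def sdf(p):
        return math.dist(p, center) - radius
    return sdf

def subtract(sdf_a, sdf_b):
    """CSG difference a - b: keep a, carve out b."""
    return lambda p: max(sdf_a(p), -sdf_b(p))
```

In a Solver SOP you would apply this subtraction every frame, with the "tool" SDF following the moving object, so the carved region accumulates over time.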
  12. Interesting scene. I played with its RBD part: wall_crack_rbd.zip The UVs are moving because you are transferring them to the moving simulation pieces, so the UVs will be different on every frame. You should transfer onto static geometry. Attribute Transfer matches geometry by proximity; it's not the best way to deal with vertex UVs, but often you can't use anything else. In your case, however, you can transfer UVs with a Convert VDB node using its second input. It will work much better, and you can reduce your network by half. It also makes sense to me to separate the geometry into a low-res simulation proxy and a high-quality mesh for rendering. That way you don't need to restore UVs on decimated simulation meshes; you only need their position data, which can be stored and cached as points for packed RBDs. You could also compute the geometry once, then use the cached version and transform its pieces by those points.
  13. @eistan, the array "a" is a string type, so the member type must be a string too. Foreach also expects a semicolon as the separator:

     foreach (string $string_pt; split($pts)) {
         if ($value_to_check == atoi($string_pt)) {
             $color = {1, 1, 1};
         }
     }
  14. Do you want it to be perpendicular to the plane created by the two edges at the angle? For any non-corner case you can compute a rotation matrix:

     // Point wrangle.
     // Compute matrix aligned with plane created by two edges.
     vector e0 = point(0, "P", @ptnum-1) - @P;
     vector e1 = point(0, "P", @ptnum+1) - @P;
     vector y = normalize(e0 + e1);
     vector z = normalize(cross(e0, e1));
     vector x = normalize(cross(z, y));
     3@r = set(x, y, z);

     After that, you may use it in your point creation loop like this:

     // Create position on circle centered at the origin.
     // Scale it by radius. Rotate it by rotation matrix.
     // Translate it to desired location.
     set(cos(theta), sin(theta), 0) * radius * r + pivot

     It's an example expression, not final code. From your code, the value .05 can be used as the radius, @P can be the pivot, and r is the matrix computed by the snippet above. There are common cases where it will not work, and you should decide which orientation to prefer from the infinitely many possible variants: end points; one or both edges have zero length; the angle between edges is 0 or PI. align_circles_to_curve.hipnc
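The same frame construction can be sketched in plain Python (outside Houdini, with my own hypothetical helper names) to check that the resulting axes really are orthonormal and to place a circle point:

```python
import math

def sub(a, b): return tuple(ai - bi for ai, bi in zip(a, b))
def add(a, b): return tuple(ai + bi for ai, bi in zip(a, b))
def scale(a, s): return tuple(ai * s for ai in a)
def dot(a, b): return sum(ai * bi for ai, bi in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(a):
    l = math.sqrt(dot(a, a))
    return tuple(ai / l for ai in a)

def frame_from_edges(p_prev, p, p_next):
    """Orthonormal frame aligned with the plane of the two edges at p."""
    e0 = sub(p_prev, p)
    e1 = sub(p_next, p)
    y = normalize(add(e0, e1))    # bisector of the corner
    z = normalize(cross(e0, e1))  # normal of the edge plane
    x = normalize(cross(z, y))
    return x, y, z

def circle_point(theta, radius, frame, pivot):
    """Point on a circle in the frame's XY plane, centered at pivot."""
    x, y, z = frame
    lx, ly = math.cos(theta) * radius, math.sin(theta) * radius
    # Local z is always 0, so only x and y axes contribute.
    return add(add(scale(x, lx), scale(y, ly)), pivot)
```

This mirrors the VEX snippet one-to-one; the degenerate cases (end points, zero-length edges, angle 0 or PI) will still divide by zero in normalize() and need explicit handling.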
  15. I think your setup is simply wrong somewhere; it should work as described. Post the scene. Bias should place the ray's starting point above the surface. Try scaling your negated normal by a small value and using that as the bias. A curvature mask produces better results than occlusion masking, but the meshes must have nice curvature, i.e. be smooth. worn_edges.hipnc
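The bias idea can be sketched as a one-liner (hypothetical helper name, my own illustration): offset the ray origin by the scaled, negated normal so the ray doesn't immediately self-intersect the surface it starts on:

```python
def bias_ray_origin(p, n, eps=1e-4):
    """Offset a ray origin along the negated, scaled normal.

    p and n are 3-tuples; eps is the small bias scale.
    """
    return tuple(pi + (-ni) * eps for pi, ni in zip(p, n))
```

The exact sign depends on which way your ray travels; the point is only that the origin must sit a small epsilon off the surface.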