Everything posted by toadstorm

  1. It's something to do with the trails you're generating from the VDB Advect Points SOP... they don't seem to be getting the right normals. If you AttributeTransfer N from your original points onto the trails, it should work fine, even as NURBS curves.
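
    If you'd rather do that transfer in a wrangle instead of an AttributeTransfer SOP, a minimal sketch (assuming the trails come in the first input and the original points, with good normals, in the second) is:

        // Point Wrangle on the trail points; input 1 = original points with correct N
        int npt = nearpoint(1, @P);   // nearest original point
        v@N = point(1, "N", npt);     // copy its normal onto the trail point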
  2. You can avoid the need to resample with some Clip SOP magic. Compute dot(I, N) where I is the normalized vector between your camera and point position, set a rest position, then use a wrangle to move your points into a world position based on that dot product: vector camP = point(1, "P", 0); vector I = normalize(camP - @P); @dot = dot(@N, I); v@rest = @P; @P = set(0,@dot,0); Then use a Clip SOP to cut off any primitives where dot product is less than 0. The defaults work fine here. Afterwards, just move your points back to their original rest positions. smooth_line_clip.hip
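
    Spelled out with comments, that wrangle (here I'm assuming the second input carries a single point sitting at the camera position) is just:

        // Point Wrangle; input 1 = one point at the camera's position
        vector camP = point(1, "P", 0);    // camera position
        vector I = normalize(camP - @P);   // direction from this point to the camera
        @dot = dot(@N, I);                 // facing ratio
        v@rest = @P;                       // remember the original position
        @P = set(0, @dot, 0);              // spread points along Y by facing ratio for the Clip SOP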
  3. There might be an easier way to do this; I would love to see one. The problem is that `v@prev_P` was being set on each substep, including the last one right before the frame is done, so `v@prev_P` is returning the value 9/10ths of the way along. In order to fix this, `v@prev_P` has to only be set on the first substep of each frame. The expression to determine the step number looks like this: `i@stepnum += int( rint( (1/(@TimeInc * steps) * steps) * @TimeInc ) );` where `steps` is the number of substeps on the SOP Solver. Next, I check if we're on the first substep of a frame: `if(((i@stepnum - 1) % steps) == 1) { v@prev_P = @P; }` The `i@stepnum - 1` is necessary because the SOP Solver is set to cook on creation frame. I'm attaching an example file. prev_P_example_substeps.hip
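
    Putting the two snippets together as a single Point Wrangle inside the SOP Solver, with `steps` promoted to a channel (it has to match the solver's substep count):

        // Point Wrangle inside the SOP Solver (cook on creation frame enabled)
        int steps = chi("steps");   // number of substeps on the SOP Solver
        // increment the substep counter once per substep
        i@stepnum += int(rint((1 / (@TimeInc * steps) * steps) * @TimeInc));
        // only store prev_P on the first substep of each frame
        if (((i@stepnum - 1) % steps) == 1) {
            v@prev_P = @P;
        }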
  4. To make random geometry instancing in dopnet

    http://www.tokeru.com/cgwiki/index.php?title=HoudiniDops#Emit_packed_prims_into_RBD_sim
  5. You just need to introduce a little ray bias into your Ray SOP. A value of 0.1 got me pretty close. If all you're trying to do is remove points that are backfacing the camera, there's a simpler way: unique the points on your mesh using a Facet SOP (or however you like), create point normals via a Normal SOP, scatter, then just use a Delete SOP in Normals mode. There's a parameter there that accepts a camera to determine backfaces. As long as the scattered points inherit your mesh's point normals, this will work.
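
    If you'd rather do the backface cull in VEX than in the Delete SOP, a rough sketch would be something like this (the camera position channel is a stand-in; reference your own camera's translate there):

        // Point Wrangle on the scattered points (they need inherited normals)
        vector camP = chv("cam_position");   // point this at your camera's world position
        vector toCam = normalize(camP - @P);
        if (dot(@N, toCam) < 0) {
            removepoint(0, @ptnum);          // facing away from the camera
        }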
  6. Take a look at this file... it's using a Wrangle to create the projection vector to the origin for each point, and a Ray SOP to actually do the projection. ray_project_cull.hip
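
    The file isn't reproduced here, but the wrangle half of it is probably just aiming each point's normal at the world origin (the Ray SOP uses N as its ray direction by default), something like:

        // Point Wrangle: point N at the world origin so the Ray SOP projects toward it
        v@N = normalize(-@P);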
  7. Use v@scale; it's built into the Copy SOP.
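
    For example, something like this on the template points before the copy:

        // Point Wrangle on the template points
        // per-point non-uniform scale that the Copy SOP picks up automatically
        v@scale = set(1.0, fit01(rand(@ptnum), 0.5, 2.0), 1.0);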
  8. Duhhh, forgot to attach the file. prev_P_example.hip
  9. Might be some details lost in your exact setup? Hard to tell without a HIP. Here's a working example.
  10. RBD Fracture get seed from attr

    The expression to get the number of points in the box would be npoints("../box1"). VDBs tend to be quicker to cook in most cases... I'm not sure how much of a difference they make in this particular example but it's worth a benchmark.
  11. RBD Fracture get seed from attr

    Rather than trying to alter the seed for each shatter in a loop, how about you copy the objects you want to shatter onto your template points first, then shatter each copy individually in a for/each loop? Since the volume noise you're using is based on global P, each copy will evaluate differently. Check this example file. rbdTest_toadstorm.hipnc
  12. Volume Displacement on Rest

    Take a look inside the Pyro Shader... they're using Dual Rest in there internally. It looks like you're running an older version of Houdini, but in 16.0 at least, there's a VOP called the Dual Rest Solver, which handles a lot of things for you. It outputs a struct-type channel called "dualrest" which the Unified Noise VOP is capable of reading natively. In previous versions the exact implementation might be a little different, but the Pyro Shader is still the standard model you should be looking at.
  13. Keyframe Maximum/Minimum Values

    You could do this in a CHOPnet. Import your channel into CHOPs using Fetch or Object, depending on where the keyframes are, then use a Shuffle CHOP to split all the samples into separate channels. Then use a Math CHOP with Combine Channels set to "Maximum." The output channel will be the maximum found in the sampled range.
  14. Copy Stamping alternatives?

    For something like rotation / orientation, copy stamping doesn't make sense because transforming instances or packed primitives is something the Copy to Points SOP can do just using template point attributes, no stamping or For/Each loops required. If you check the docs for copying and instancing here, you can see that by providing either an N and up attribute or an orient attribute to the template points, the Copy SOP will automatically apply those transformations to each copy. Setting those attributes is sometimes less intuitive than using the familiar Transform SOP, though, so here's a bit of VEX code to help you out. You could put this in a Point Wrangle to convert user-friendly Euler rotations (like what you see in a Transform) to a Copy SOP-friendly orient quaternion: vector R = chv("rotate"); // you could replace this with a vector point attribute with Euler values! vector4 orient = eulertoquaternion( radians(R), 0 ); p@orient = orient; If you were in a situation in which the shape of each copy needs to be varied, then Sepu's video above is the right choice. You'd want to use a multithreaded For/Each loop to vary each copy. FWIW, I have a big guide to this whole question of instancing here: http://www.toadstorm.com/blog/?p=493
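
    The same snippet with comments, as a Point Wrangle on the template points:

        // Point Wrangle upstream of the Copy to Points SOP
        vector R = chv("rotate");                           // Euler rotation in degrees
                                                            // (or swap in a vector point attribute)
        vector4 orient = eulertoquaternion(radians(R), 0);  // 0 = XYZ rotation order
        p@orient = orient;                                  // the Copy SOP reads this automatically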
  15. Inflate Cloth using JUST Cloth

    I hate to break it to you, dude, but cloth IS FEM. Look inside the solver.
  16. Inflate Cloth using JUST Cloth

    You could try approximating cloth with POP Grains. Check out the granular sheets example for ideas on setting up constraints. Then just apply a force on the points pushing outwards and hope your constraints will deal with it. You could break constraints in a SOP Solver if you need popping action.
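
    A minimal sketch of the outward push in a POP Wrangle, assuming the cloth is roughly centered at the origin (otherwise compute the centroid yourself):

        // POP Wrangle inside the grain sim
        float amp = chf("inflate_amount");
        vector outward = normalize(@P);   // away from the (assumed) center at the origin
        v@force += outward * amp;         // push grains out; the constraints hold the sheet together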
  17. krakatoa style rendering

    Mantra's not the best at rendering lots and lots of points; it has a much better time with volumes. You may want to consider using a CVEX volume procedural to load all of your tiny points and create density at rendertime in your shader, using a point cloud lookup. If you want very fine detail, you'll still need a ton of particles, so look into the Point Replicate SOP or maybe a gap-filling algorithm to create new points where they're needed. Then decrease your shader's point cloud search radius to as small as you can go before you start losing desired density.
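
    I'm not going to write the whole shader here, but the core of the CVEX lookup is just a point cloud search around the sample position. The parameter bindings depend on how you wire up the volume procedural, so treat this as a rough sketch only:

        // rough CVEX sketch, not a drop-in shader
        cvex densityFromPoints(
            vector P = 0;                      // sample position (bound by the procedural)
            export float density = 0;          // density output
            string pcfile = "particles.bgeo";  // hypothetical point cloud file
            float radius = 0.02;               // shrink this as far as you can
            int maxpts = 8)
        {
            int handle = pcopen(pcfile, "P", P, radius, maxpts);
            // crude density estimate: fraction of the search budget actually found
            density = float(pcnumfound(handle)) / float(maxpts);
            pcclose(handle);
        }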
  18. You can possibly use SOuP for Maya to extract your vector attribute and bind it to a color set that can be read by mental ray. The arrayToPointColor node can extract this information. See this link: http://www.toadstorm.com/blog/?p=240 Also if your attribute types are set correctly, Houdini should export Alembics with readable color sets. Check this thread: https://www.sidefx.com/forum/topic/38939/?page=1#post-178403
  19. Thanks for doing this, Alessandro! Great to see you as always.
  20. Fluid sim along a surface?

    If you want to move points along a surface based on texture UVs (not per-primitive parametric UVs), you need to do a little magic with xyzdist() and primuv(), or XYZ Distance VOP and Primitive Attribute VOP if you prefer. Convert your UV attribute to Point type, then use a Point Wrangle to move your point position to v@uv. Don't forget to store your original point position in the Wrangle, or via a Rest SOP. Then give each of your particles a goalUV vector attribute. Then use xyzdist() to find the nearest primitive # and parametric UV to your distorted surface. Store that primitive number and UV coordinate on your particle as point attributes (posprim and posuv are what particles use internally for sliding). Then move your mesh back to world space using a Rest SOP or a Wrangle, and use the primuv() function on your particles to find the "P" attribute based on posprim and posuv. This should return the nearest point on the surface given the goal UV you defined earlier. I'm attaching a .HIP to make it a little easier to read. goal_to_uv.hip
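
    In wrangle form, the two lookups described above are roughly this (goalUV, posprim and posuv are the attribute names from the post; input 1 is the mesh, first flattened into UV space and later restored to world space):

        // Point Wrangle #1 on the particles; input 1 = mesh with @P moved to v@uv (UV space)
        int posprim;
        vector posuv;
        xyzdist(1, set(v@goalUV.x, v@goalUV.y, 0.0), posprim, posuv);
        i@posprim = posprim;   // nearest primitive in UV space
        v@posuv = posuv;       // parametric uv on that primitive

        // Point Wrangle #2 on the particles; input 1 = the same mesh back at its rest position
        @P = primuv(1, "P", i@posprim, v@posuv);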
  21. pop grains accumulation

    Hi, I'm trying to create a sort of sand-in-hourglass effect, where piles of sand accumulate over time from a steady stream. I'm using the POP Grains DOP to try to accomplish this, but the particles don't seem to want to stack. I'm relatively new with this particular solver so I'm not entirely sure what the approach is for accumulating piles like this. Can anyone point me in the right direction? Thanks!
  22. melting a textured object with FLIPs

    Hello, I'm trying to figure out how to melt an object using FLIP fluids and have the UVs melt with the object. Getting the fluid motion down isn't a big problem, but I can't get the UVs to follow along. I played around with creating a vector field based on the source object's UVs in the hopes that I could advect it in DOPs, but I can't even get the UVs to write correctly to a vector field. I also tried using an AttribTransfer node to transfer the UVs from the original geo to the points created by the Fluid Source node, and then transferring the UVs again from the imported fluid particles (in SOPs, via a DOP I/O) onto the particle fluid surface, which kind of works, but the UV seams are always incredibly distorted because I have to convert the UVs from a vertex attribute to a point attribute. Is there a better approach to this? I know I've seen this done in Houdini but I can't find any good information on how it's done. Thanks!
  23. Bullet: RBD sim as collision object for second sim

    I actually just ran into this problem recently, and a colleague of mine came up with a pretty quick solution. Inside your DOPnet, use a File DOP to write out your .sim files. Then you can just read them back into another DOPnet and they will collide as normal with other objects in your scene (one-way of course).