
About bentway23

Personal Information

  • Name
    Mark Reynolds
  1. Instance node loses UVs

    I have an animated alembic loop that I'm copying to points from a POPnet, but I want each copy to be slightly randomly offset time-wise. I would love to do this with my original alembic (I'll post that question separately), but the only method I've found is Tim van Helsdingen's: explicitly call the cached frames, add an offset to the $F part of the path, and use the Instance node to do the instancing. So I recached my .abc as a .bgeo sequence and am doing that. The problem is that the Instance node doesn't retrieve the UVs for the objects, and I haven't found a way to get them to carry through. How can I get the UVs to show up on the instanced objects? [ADDENDUM: I figured out how to do it using a for-loop over all of the emitted points, copying an instance of the .abc through a Time Shift that randomizes a frame offset based on the loop's iteration count. This works, but it is considerably slower for the merely-300 objects I'm instancing, so alternate-method suggestions are definitely welcome!]
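    For reference, the per-point offset half of that Instance-node method can be sketched in a point wrangle. This is only a sketch: the cache path, the 240-frame loop length, and the offset range are all hypothetical stand-ins, not values from the original setup.

    ```vex
    // Point wrangle on the instance points (hypothetical cache path and loop length)
    int looplen = 240;                          // assumed length of the cached .bgeo loop
    int offset  = int(rand(@ptnum) * looplen);  // per-point random frame offset
    int frame   = ((int(@Frame) - 1 + offset) % looplen) + 1;  // wrap back into the loop
    // the Instance OBJ's fast point instancing reads this per-point path
    s@instancefile = sprintf("$HIP/cache/loop.%04d.bgeo.sc", frame);
    ```

    Since s@instancefile is evaluated per point, every copy can sit on its own frame of the sequence without a per-copy Time Shift.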
  2. This was silly. I got tunnel vision and the vellum rest blend had slipped my holey brain.
  3. I have an alembic with a lovely bit of movement happening, lying down on a ground plane. I'd like to make parts of it floppy, which I can do with vellum and a pin-to-target with an attribute that scales the pin stiffness so it's lower on the floppier parts. No problem there. The part I can't figure out is this: I actually want to hang it up by one of the floppy bits, but still use the shivering/shuddering from the target animation. The target animation is apparently evaluated in absolute coordinates, rather than as deltas from a rest mesh, so even with the floppy part hung up, the pinned parts are still lying on the "ground", seeking the actual point locations of the target. Is there a way to evaluate the target so that even if some of the animated bits drift (and move some of the not-quite-100%-pinned areas), the vellum incorporates the source object's animation, but at its new simulated place? Thanks for any help!
  4. Is it possible to animate input point temperature for the pyro source spread node--i.e. have one pocket of points that flares up (@temperature = 1) at frame 100, and another that flares up at frame 200? Or a group that is dynamically created (bound by animated geometry), so that when the animated object flies through it creates a new source for the pyro source spread? It seems like it only accepts the values at frame one, which makes sense for a solver--is there a way, as with vellum, to dig in and animate those things? I'm using this as a growth solver, not generating any actual pyro--I'm just spreading attributes about. Thanks for any help!
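    One way to sketch the kind of frame-keyed flare-up described above is a wrangle on the source points (for example inside a SOP solver feeding the spread). The group names "pocket_a"/"pocket_b" and the frame numbers are invented for illustration; whether the spread node picks the values up each frame depends on how the source is wired in.

    ```vex
    // Point wrangle (hypothetical groups "pocket_a" / "pocket_b")
    // flare up different pockets of source points at different frames
    if (@Frame >= 100 && inpointgroup(0, "pocket_a", @ptnum))
        f@temperature = 1.0;
    if (@Frame >= 200 && inpointgroup(0, "pocket_b", @ptnum))
        f@temperature = 1.0;
    ```

    The dynamically-bounded version would replace the group test with something like a bounding-geometry lookup against the animated object on the second input.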
  5. OpenCL Exception: Could not open OpenCL program

    I do indeed, and it appears to match yours exactly (at least, the OpenCL lines do). I've noticed all along, both with this build and with the previous one (18.5.408), that I'd get an error any time I tried to use OpenCL with a pyro solver, but I didn't think much of it given how lightly I was using pyro. Today was my first time using an OpenCL node. I'm guessing it's just a setting I have to check somewhere.
  6. OpenCL Exception: Could not open OpenCL program

    Forgive my ignorance--where is that?
  7. Hello! I'm trying to use an OpenCL node, and the console pops up with the error in the subject line as soon as I click on it. Googling didn't help me--I updated my GPU drivers to no avail. I also added "HOUDINI_OCL_DEVICETYPE=GPU" to my houdini.env file (per another forum), which didn't help. When I open the Nvidia control panel to turn on OpenCL, it doesn't appear in any of the device manager settings, just OpenGL (and for the program-specific settings, Houdini doesn't appear at all). I'm running Windows 10 on a computer with decent horsepower and an RTX 2070 SUPER graphics card, so computer beefiness or up-to-date-ness shouldn't be a problem. I'm also running Houdini Indie (the latest build--18.5.499). Thanks for any help!
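    For reference, the houdini.env line mentioned above looks like the fragment below. The extra vendor line is an assumption, not part of the original post--it is sometimes suggested for machines where more than one OpenCL device (e.g. an integrated GPU plus the RTX card) is visible:

    ```
    # houdini.env
    HOUDINI_OCL_DEVICETYPE = GPU
    # hypothetical extra hint for multi-device machines:
    HOUDINI_OCL_VENDOR = "NVIDIA Corporation"
    ```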
  8. Alrighty, I'm trying to export a very straightforward system to alembic for use in Maya. A very simplified version is attached--an emitter emits particles, a Trail follows them, an Add turns them into a line, and a PolyWire makes them into geo. This should be easy (haha), but I'm getting inconsistent results (user error, no doubt), none of which are the one I want. For the most part the color sets are inaccessible--Cd shows up in the Color Set Editor but does nothing when used as Cd in an Arnold user data color node for a material. The geo also tends to be flickery: getting close to it in the viewport, the Cd-based colors (visible in the viewport, at least) jump about, and it looks like occasional normals are reversing. If I reimport an .abc into Houdini it seems to work fine. The closest I got to success was writing it out as a .bgeo sequence, using Houdini Engine to bring that into Maya, and writing an .abc from Maya--the geo appeared okay, but the color set didn't carry over (even though I had "write color sets" checked). What is wrong with my settings, or with the way my system is built? Thanks for any help! wires_example.hiplc
  9. This might be more of a Maya question, but I have yet to find a Maya forum where one can find an answer within a year . . . . (Also, I've Googled aplenty and this question doesn't appear to have been asked since 2014.) I'm exporting an alembic of geo for use in Maya. I've made sure the Cd values are promoted to vertices so Maya can read them, but when I import it into Maya, on the first frame (1000.0) the colors are clearly there in the viewport, and in Mesh Display-->Color Set Editor Cd(RGBA) shows. The thing is, the minute I change frames it is gone, not appearing in the viewport, not showing up through Arnold's User Data Color, and not in the color set editor--even if I go back to frame 1000 it is gone. I've tried it both with and without "export vertex colors" checked in the Arnold tab on the shape node in Maya and even if it's just on frame 1000(.0) with the color set showing Arnold's user color node isn't seeing it (or piping it into the render, at least). If I bring the import back into Houdini it works great. What am I doing wrong?
  10. Gentle nudge/push particles away from surface (using SDF?)

    This is great! I'm glad to see I was at least in the ballpark, and I was able to quickly create an inner "core" pushing the particles away from the center, taming that unruly mob by simply adding instead of subtracting the inner gradient. Also, the visualize tree is new to me and way better than the create-a-bunch-of-points-create-an-attribute-and-scroll-through-the-geometry-spreadsheet method I've been using. I also wasn't sure about "fill interior"--my approach was (theoretically) making a hollow VDB from the container walls and using the external-to-the-VDB-but-inside-the-container values for the sampling. Thanks so much!
  11. Alrighty, I have a big wedge of stupid in my head that is keeping me from doing this. I have particles emitting into a container (a super-simplified version is attached). Instead of the usual bounce/slide off a wall, I'd like the particles to slow down and gradually be pushed away from the containing walls. I'm assuming this would be done with an SDF, volume sample/volume gradient, and a POP wrangle modifying the velocity based on the sampled values, but I just can't get it to work. I've seen lots of tutorials on using this approach to get particles to stay NEAR a surface, but my brain isn't adapting that to a nudge-away-from-a-surface instead of a nudge-towards. What am I doing wrong? The POP wrangle code-in-progress (the collider is input 2, the VDB is input 3): float samp = volumesample(2, 0, @P); vector grad = volumegradient(2, 0, @P); @v *= -grad; SOMETHING is happening, just not what I want. (And ideally I could have a second SDF inside the container pushing out, thus gently keeping the particles within a certain radius of the container wall rather than going to the center.) Thanks for any help! example.hiplc
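    A common shape for this kind of wall-repulsion wrangle looks like the sketch below: it adds a gradient-scaled push to the velocity instead of multiplying @v component-wise by the gradient, which is the likely culprit in the code above. The falloff band and strength are guesses, and the sign of the last line depends on whether the SDF is negative inside the container:

    ```vex
    // POP wrangle; third input (VEX index 2) holds the container's SDF VDB
    float dist  = volumesample(2, 0, @P);                // signed distance to the wall
    vector grad = normalize(volumegradient(2, 0, @P));   // points toward increasing distance
    float push  = fit(dist, -0.5, 0.0, 0.0, 1.0);        // ramps up near the wall (assumed band)
    v@v -= grad * push * 2.0 * f@TimeInc;                // nudge away from the wall each step
    ```

    The second, inner SDF the post mentions would be a second sample/gradient pair with the push applied in the opposite direction, which matches the add-instead-of-subtract fix described in the reply above.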
  12. Would there be a way to procedurally set the capture region for a bend deformer, for instance if you're using it in a for loop for the same object but with different sizes/rotations? I guess the main question would be figuring out its angle, as bounding box and centroid expressions could do most of the placing and sizing (except if the object was rotated the bounding box size wouldn't be accurate).
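    A sketch of the bounding-box half of that idea, assuming an unrotated piece: a detail wrangle upstream of the bend can stash a centroid and a size that the capture parameters could then reference. The attribute names are invented, and as the post notes, this breaks down once the piece is rotated, since the axis-aligned bounding box no longer reflects the object's true extent:

    ```vex
    // Detail wrangle, run on the piece before the bend (hypothetical attribute names)
    vector bbmin, bbmax;
    getbbox(0, bbmin, bbmax);                // axis-aligned bounding box of the input
    v@capture_center = (bbmin + bbmax) / 2;  // candidate capture origin
    f@capture_length = bbmax.y - bbmin.y;    // candidate capture length along Y
    ```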
  13. Alrighty, I think I've found it, in case anyone else has this issue: When you create the new nParticles to attach the cache to you must first create the PP attributes. Then when you attach the (pre-existing) nCache it appears to read them in automatically. (I saw this on a separate thread, it just took a little toying about before I could make it happen in my own life.) Just hoping to be of service, because knowing is half the battle.
  14. Using Engine I have been able to import a particle cache into Maya just fine. Scale and color came through and were easily applied in Arnold. However, I need to recache the particles as a Maya nParticle cache to hand it off to someone else not using Houdini Engine. I can get it to cache the general position/lifespan, but it doesn't appear that any color/scale(radiusPP)/alpha(/transparency) information is in the re-cache. How can I make sure those attributes make it through to the nParticles (re)cache? Thanks for any help!
  15. My brain is having an issue here. I have a vellum sim that emits a new piece of geo every x frames. I need to take the final simmed geo (tet softbodies) and point-deform the original hires geo. This wouldn't be hard to do the old brute-force way--by the end only four instances have emitted--but I feel like a for-each loop would be perfect for this. The attached image shows my attempted setup (it's part of a much bigger project, not practical to upload). I bring in the cached sim and blast the elements I don't want for this step. I ran it through an Assemble node (because I thought a for-each over named primitives would do the trick; no success yet, though, so this might be unnecessary). Then there's a wrangle that takes the creation frame (from the vellum path name) and converts it to an integer "num" attribute, which (theoretically) a Time Shift could read to know what frame to freeze the simmed and source geo at so they match up for a point deform inside the for-each loop. (The reason I have to time-shift the source geo is that the geo going into the vellum sim was rotating so that each emitted object would come in at a different angle. I reference-copied those transforms to the hires geo so they match in time and space at the beginning of the point deform.) The first (maybe last?) problem is that I can't get the Time Shift to harmonize with the for-each loop. It reads the attribute correctly and behaves right outside of the loop. Some other Googling indicates that it has to be connected to the node before the loop, but then I'm having trouble isolating which object to freeze. I've seen similar questions in various posts, but none seem to quite answer what I'm looking for (or I'm just not bright enough to translate the fixes). Thanks for any guidance!