Everything posted by MoltenCrazy

  1. SOP Solver/SOPNet inside DOPs

    Working through a prototype in which RBD fragments erode as they sim and are deleted after hitting a volume threshold. Already know I'm probably overdoing it with the VDB approach, but just want to get this working... I'd like the sim to update each frame with the eroded geo, so I have the for-loop embedded in the DOPnet via a SOPSolver/SOPnet combo (SOPSolver references the SOPNet as an external SOP). Everything works fine inside the SOPnet, and the reference via the SOPSolver looks fine, but for the life of me I can't figure out why it's not working...multisolver doesn't seem to help, either. Anyone have any thoughts? VDB Erosion v0001a - OUT.hiplc
  2. SOP Solver/SOPNet inside DOPs

    Hi, Librarian...primitives are added to the erode group based on non-zero acceleration. The motion on the fragments doesn't begin until frame 22 (I think), so the group is technically empty until then. Wanted to have an external attr in the mix, although would probably be 'active' most of the time. And if anyone knows why there is an acceleration 'blip' on a handful of primitives at frame 2, I'd be thrilled. Have seen this with the accel attr before on frame 2 and am totally baffled by it.
  3. With the help of both the Redshift community and resources here, I finally figured out the proper workflow for dealing with Redshift proxies in Houdini. Quick summary: out of the box, Mantra does a fantastic job automagically dealing with instanced packed primitives, carrying all the wonderful Houdini efficiencies right into the render. If you use the same workflow with Redshift, though, RS unpacks all of the primitives, consumes all your VRAM, blows out of core, devours your CPU RAM, and causes a star in a nearby galaxy to supernova, annihilating several inhabited planets in the process. Okay, maybe not that last one, but you can't prove me wrong so it stays. The trick is to use RS proxies that are in turn driven by the Houdini instances. A lot of this was based on Michael Buckley's post. I wanted to share an annotated file with some additional tweaks to make it easier for others to get up to speed quickly with RS proxies. Trust me; it's absolutely worth it. The speed compared to Mantra is just crazy. A few notes:

     - Keep the workflow procedural by flagging Compute Number of Points in the Points Generate SOP instead of hard-coding a number.
     - Use paths that reference the Houdini $HIP and/or $JOB variables. For some reason the RS proxy calls fail if absolute paths are used.
     - Do not use the SOP Instance node in Houdini; instead use the instancefile attribute in a wrangle. This was confusing, as it doesn't match the typical Houdini workflow for instancing.
     - A lot of posts on RS proxies mention that you always need to set the proxy geo at the world origin before caching it. That was not the case here, but I left the bypassed transform nodes in the network in case your mileage varies.
     - The newest version of Redshift for Houdini has an Instance SOP Level Packed Primitives flag on the OBJ node under the Instancing tab, designed to automatically do the same thing that Mantra does. It works for some scenarios but not all; it didn't work for this simple wall-fracturing example. You might want to take that option for a spin before trying this workflow.

     If anyone just needs the Attribute Wrangle VEX code to copy, here it is:

     v@pivot = primintrinsic(1, "pivot", @ptnum);
     3@transform = primintrinsic(1, "transform", @ptnum);
     s@name = point(1, "name_orig", @ptnum);
     v@pos = point(1, "P", @ptnum);
     v@v = point(1, "v", @ptnum);

     Hope someone finds this useful. -- mC Proxy_Example_Final.hiplc
  4. Building a beast Houdini + Redshift mainly

    Jumping in late, but researching X570 chipset boards right now. Although most appear to have shifted to a 4-slot DIMM design (as opposed to the traditional 8-slot), many of them support up to 128GB of RAM. Guess we just have to be a little more careful with our upgrade planning...
  5. One of my first all-Houdini projects was done with the truly, truly stoopid goal of 'set realism'. The narrative is centered around the transformation of a 9" (24cm) toy, and everything was designed around the animation being on a small photo stage that fit the scale of the toy. Initially, this worked great with the lighting look dev, shallow lens choices, etc. I started running into some issues dealing with forces for RBD effects, but those weren't too difficult to work around once I got a feel for what Houdini was expecting at that scale. Now I'm trying to shape particle and pyro effects onto things like fragments coming off a 9" model, and I'm just getting wrecked with memory and other issues as I try to get particle separation and sim division sizes to work at that scale. So, so many memory crashes over the last few days. The post-mortem for this project is basically an illustration of me firing myself... Can anyone help me with a good strategy for rescaling the fx-heavy shots? My scene seems pretty poorly built for this: subnets with Redshift lights at the OBJ level (no choice there), animated cameras, a big mix of DOPnets, POPnets driving geo creation and animation, etc. My first thought was just parenting everything to a null at the OBJ level and scaling it by a factor of 100. Visually, that seems to work for most things, but any geo that has been scaled that way triggers an 'Object level scales detected. A new object has been created to convert to world space' warning. And I've seen suggestions that SOP-level scaling is typically the best way to go for transformations like this. -- mC
  6. OpenGL ROPFetch and 'no attribute UI' error

    Turns out this was actually an issue with the Redshift plugin that was conflicting with OpenGL ROPs in PDG/TOPnets. This has been fixed as of Redshift for Houdini 2.6.43, released today. -- mC
  7. Not sure if this should be in 'General' or 'Tools', so let's start here... Trying to get an OpenGL ROP working consistently with a ROPFetch, but keep getting bunches of this error: AttributeError: 'module' object has no attribute 'ui' The pattern seems to be that the node generates X number of files, hits something that causes this error, then slows to a crawl. If I dirty and recook the node, it recognizes the finished files on disk then does another group quickly, then hits this error again. Wash, rinse, repeat through 4,000 frames. Was hoping to use the OpenGL ROP for speedy iteration, but the errors are slowing things to a crawl. I should add that it looks like all the files are eventually written correctly, but the python script seems to get into a weird state about certain frames as the batch render progresses. Anyone know what's up here? -- mC
  8. PolyBridge in For-Each Loop

    It does indeed, Julian! Didn't realize I could pass entire primitive groups in and that PolyBridge could parse separate matching primitives out of the groups. Figured I would have to iterate over each piece. Thanks! -- mC
  9. PolyBridge in For-Each Loop

    Woo! My first Houdini modeling question! I've got primitives in an inner and outer shell, created via a Facet node from the original surface polys. The matching inner and outer primitives have a matching ID attr, and I also know that after a merge the matching primitive numbers are offset by 642 (the number of primitives in each shell). I want to send the pairs into a For-Each loop and apply a PolyBridge to each pair, getting closed, independent pieces out. Would love to be able to do this with a simple PolyExtrude, but I need them to appear seamless/joined at the edge at rest. First, does this sound like the way to go for this? I feel like it's close, I just can't figure out how to get the proper IDs/group names to the PolyBridge tool for each iteration/pair. MOD_Hex_Offset_01.hiplc
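    In case it helps anyone hitting the same problem, here's a tiny plain-Python sketch of the pairing bookkeeping described in the post. The 642-per-shell count comes from the post itself; the function name is made up for the example, and in Houdini this pairing would feed the group parameter of the PolyBridge inside the loop.

```python
# Sketch of the inner/outer pairing: with 642 primitives per shell merged
# inner-then-outer, primitive i on the inner shell matches primitive
# i + 642 on the outer shell. Each loop iteration bridges one such pair.

SHELL_SIZE = 642  # primitives per shell, from the post

def pair_for_iteration(i, shell_size=SHELL_SIZE):
    """Return the (inner, outer) primitive numbers for loop iteration i."""
    if not 0 <= i < shell_size:
        raise IndexError("iteration out of range")
    return (i, i + shell_size)
```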
  10. PolyBridge in For-Each Loop

    Hey, vicvvsh! You're all over my posts. Thanks! Hadn't considered driving the PolyExtrude extrusion with the normal attr. Got a few other suggestions on the SideFX forums as well, including a Fuse approach. I've attached a version of the file that shows the different suggestions working as well as the final clustering and shattering test, in case someone else can use it. If anyone is game, I'd still love to know how to get the PolyBridge version to work in a For-Each loop… -- mC MOD_Hex_Offset_02.hiplc
  11. @active to PopSolver and/or PyroSolver?

    A big thanks to vicvvsh here and to mkps on the SideFX forum for their help! This file has both approaches/fixes with a switch. Got what I was after: per piece activation driving both the popsolver and subsequent pyrosolver as well as (fixed) control over when each piece gets perceived as active by the solver. Hope this is of help to someone down the road… – mC Active_To_Pop_Pyro_03.hiplc
  13. Hoping someone can help me with this... feels like it should be easy-ish, but I can't seem to figure it out. I've got no problem controlling the @active attr and passing it into a DOPnet/rigidbodysolver. How do you pass @active to a PopSolver or PyroSolver? I want to take the same animated pieces from my RBD sim and have a particle sim and/or a pyro sim triggered by the same timing. For a PopSolver, the best I've come up with is a painful for-each setup based on fragment age and group culling, so basically cheating it all before the PopSolver gets the geo. I could probably modify it to have the for-each loop evaluate @active as the first step, but that's still avoiding the solvers. For pyro, I was working on a similar system with clusters, but when I scaled it from my prototype to my project geo, it basically wrecked my machine. I've attached a simple file with a staggered shattering of a sphere and all the hooks for both a PopSolver and a PyroSolver. Is there a more straightforward way to do this? -- mC Active_To_Pop_Pyro_01.hiplc
  13. @active to PopSolver and/or PyroSolver?

    Thanks, vicvvsh! Dammit, I was so close, but I was so obsessed with stripping the active attr down to point level that it never occurred to me that an active group was the way to go. Am I correct here in understanding that the popsolver is explicitly getting the active attribute but the pyrosolver is only getting it implicitly, because it's just receiving the dynamically-created fuel volumes? I don't think I have a problem with that, just want to make sure I understand. Is the pyrosolver different enough that the concept of active/inactive doesn't apply? And how do you offset the start of particle generation inside a popnet? For instance, when that first fragment/set of points enters the sim, I'd like to wait maybe 10 frames before the particles start emitting. And I'd like that to happen for every point as it enters. Been trying a bunch of if-then variable combos in the impulse activation ($F, $FF, $T, $SF, etc.) but nothing seems to work. Everything seems based on the sim start time as opposed to the point activation time. -- mC
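    For what it's worth, the per-point delay idea above can be sketched in plain Python (the birth_frame and emit attribute names are made up for the example; in Houdini this logic would live in a wrangle rather than in the impulse activation expression): stamp each point with the frame it first appears, then compare the current frame against that per-point birth frame plus a fixed delay, instead of against the sim start time.

```python
# Sketch of per-point delayed activation (plain Python, not VEX).
# Each source point records the frame it entered the sim; emission only
# starts once a fixed delay has elapsed for that particular point.

def should_emit(current_frame, birth_frame, delay=10):
    """True once `delay` frames have passed since this point appeared."""
    return current_frame >= birth_frame + delay

def update_emitters(points, current_frame, delay=10):
    """Stamp a birth frame on newly seen points, then flag active emitters."""
    for pt in points:
        if "birth_frame" not in pt:          # first frame this point exists
            pt["birth_frame"] = current_frame
        pt["emit"] = should_emit(current_frame, pt["birth_frame"], delay)
    return points
```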
  14. Preserving ParticleIDs

    What's a good method for preserving particleIDs? As particles die, Houdini reassigns the IDs, right? I need to preserve the particleIDs so I can calculate acceleration for multi-step particle motion blur. For this to work, the IDs need to stay consistent and not be reused. I've tried a combination of the Trail SOP and a SOPSolver looking up the previous frame ID to carry it forward, but once the particles start dying this falls apart. -- mC
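    If the sim does provide a persistent, non-reused id attribute, the multi-frame acceleration step can be sketched like this in plain Python (representing each cached frame as an id-to-position dict is an assumption for the example, not a Houdini API): match samples by id rather than point order, so dying particles simply drop out of the calculation instead of corrupting their neighbours' history.

```python
# Sketch of id-keyed acceleration from three cached frames (plain Python).
# Central difference: a = (p_next - 2*p_curr + p_prev) / dt^2, computed
# only for ids present in all three frames; dead/newborn particles are skipped.

def acceleration_by_id(prev, curr, nxt, dt):
    """prev/curr/nxt: dicts of id -> (x, y, z). Returns id -> accel tuple."""
    accel = {}
    for pid, p in curr.items():
        if pid in prev and pid in nxt:
            accel[pid] = tuple(
                (nxt[pid][i] - 2.0 * p[i] + prev[pid][i]) / (dt * dt)
                for i in range(3)
            )
    return accel
```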
  15. Particle Systems on Packed Primitives

    Trying to figure out the best approach to creating particle systems that are unique and isolated to packed primitives. Basically, I've got a rigid body shatter effect, and for each packed prim I want to have an energy effect that travels along with the prim geo. I'm after a particle effect that basically 'clings' to the surface, with maybe the occasional wisp or particle that escapes a tightly controlled field around the surface. I don't want a comet trail/firework sort of effect as it travels through space. Think of something like an ether fire engulfing an object: all flame and form around the source without any smoke emission. For instance, the look could be similar to stationary geo with a VDB-driven velocity field manipulated by curl noise. It feels like my base needs to be a per-piece particle system that starts as a scatter-on-surface system and... then everything I've tried just doesn't get me where I need to be. So, I guess my questions are: Is a for-each loop pretty much a given here? Any way I can avoid making the first step in the for-each loop an unpack? Because, you know, that's painful... How do I keep the particle system contained to the volume of the packed primitive/fragment geo? This isn't meant to sound as broad as it might; I've got (I think) a fairly good understanding of forces/velocity/etc in POPs/DOPs, but I just can't seem to find the right combo here. The particles need to inherit the overall velocity/acceleration/transformations of the fragments as they transform, but they need to do it in a contained, volumetric fashion. My prototype tests have been getting me imagery that at one extreme looks like the particles are getting 'stamped' in space and like the comet tail at the other. I've tried a few ways to create a VDB per fragment, and it, uh, wasn't pretty for performance or visuals (willing to take the perf hit for the right look, btw). I literally had not considered pyro for this until typing the ether flame description in the post. Does this sound like something suited for a pyro solver? Haven't done much pyro besides some advection hijacking. I'd be grateful for any advice. Thanks! -- mC
  16. Working on a camera rig that emulates a real-world crane rig. I've created a null to act as a control panel to drive dolly in/out, panning, yaw, pitch, etc. The values on the control panel drive parameters on other nulls via relative references like ch("../CAMERA_CONTROLS/baseTransX"). The rig itself consists of a chain of nulls parented in order, with the Houdini camera at the very bottom and the base null at the top. Is there a way to direct the transform parameters to use the local space of the nulls via a relative reference? As it is, I can only get them to use world space. I need to be able to use the control node to get the rig into place, then, when animating, transform along the local axes as though on a track. Baffled that I can't find anything in the docs on this (which could just be me). Thanks!
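    As a toy illustration of why world-space references behave the way the post describes, here's a minimal 2D parent-chain sketch in plain Python (all names are illustrative, not Houdini APIs): a 'local' move only becomes a track-like world offset after being rotated by the accumulated parent orientation, which is exactly what a raw channel value doesn't do.

```python
import math

# Toy 2D sketch of local vs world translation in a parent chain.
# Each node's translate is expressed in its parent's space, so its world
# offset is the local offset rotated by the accumulated parent rotation.

def rotate(v, deg):
    """Rotate 2D vector v by deg degrees counter-clockwise."""
    r = math.radians(deg)
    c, s = math.cos(r), math.sin(r)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def world_position(chain):
    """chain: list of (rotate_deg, local_translate) pairs from root to leaf."""
    pos, rot = (0.0, 0.0), 0.0
    for node_rot, node_t in chain:
        off = rotate(node_t, rot)      # bring the local offset into world space
        pos = (pos[0] + off[0], pos[1] + off[1])
        rot += node_rot                # children inherit this orientation
    return pos, rot
```

With no rotation anywhere, local and world moves agree; once a parent is rotated 90 degrees, a local dolly of (1, 0) comes out as a world move of roughly (0, 1), which is the track-like behaviour the rig needs.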
  17. Transforming a Subnet

    Annnnndddd I figured it out right after I posted. All of the objects in the subnet need to be wired/parented into the same input that will be in turn parented to the null. So, I made another null inside of the subnet, renamed it 'OUT', parented it to the subnet input #1 node, and in turn parented everything else to that OUT null. And wiring things like that, it appears the 'Child Compensation' flag is unnecessary. Hope this helps someone else. -- mC
  18. Transforming a Subnet

    I have a subnet that is made up of a bunch of RS lights (which have to remain at the OBJ level) and geometry that represents a practical light setup. I need to animate all the lights and geo as a single entity, so I've dropped down a null and parented the subnet to it. 'Child compensation' is turned on in the null to get the subnet to move, but for some reason the objects below move in the opposite direction of the null transform, and it appears to be a double translation as well. Has anyone ever seen this? What can I do to get the subnet to cleanly inherit the transforms of the null? This can be replicated by dropping down a sphere, collapsing it into a subnet, then attaching it to a null. Thanks. -- mC
  19. I've got an R&D file in which it looks like the hair count is changing frame-to-frame. Both Mantra and Redshift identify the topology as changing frame-to-frame, so they both disable deformation motion blur for the object. This occurs whether or not the guides and the hair are cached. Is there a necessary rest setup for hair/fur? I've gone through the nodes in my setup, and the rest cache option under Static Generation on the Hair Generation node is about the only thing I see that might come into play. Attached are a few images showing the issue between frames 30 and 32, as well as the scene. -- mC FurBall_ForRS_01.hiplc
  20. Can someone help me better understand making sim scenes more efficient when dealing with dense geo? An issue with Redshift displacement* means I may be forced to work with the full character geo as opposed to a subdivided base mesh with displacement/normal maps from ZBrush. Running ops on the high-res model is pretty painful, so I'm trying to optimize on a per-shot basis by deleting unseen geo. In the attached file I've subdivided Tommy a bit to simulate a dense model (I couldn't do any more because it made the file too big to post, so let's pretend Tommy is really, really dense). I've added the network I'm testing to speed things up. Is this a setup that makes sense? It seems to be faster, but I'm not sure if there are better ways to do this. Any help would be great. Thanks! -- mC *(The issue with Redshift displacement is that it shows cracks on fractured geo in the rest position before anything has even moved; there's no means of passing the displacement direction like the workaround in Mantra.) ModelEfficiency_02.hiplc
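    A minimal plain-Python sketch of the 'delete unseen geo' idea, for a pinhole camera at the origin looking down -Z (the fov, near, and padding values are illustrative; a real per-shot setup would use the shot camera's actual projection, e.g. via toNDC() in a wrangle): project each point into normalized screen space and keep only points that land inside the padded frame.

```python
import math

# Sketch of camera-frustum culling for a pinhole camera at the origin
# looking down -Z. Points projecting outside the (padded) frame, or
# sitting behind the near plane, are discarded before the heavy ops run.

def visible(p, fov_deg=45.0, near=0.1, pad=0.1):
    """True if point (x, y, z) projects inside the padded unit frame."""
    x, y, z = p
    if -z < near:                      # behind or too close to the camera
        return False
    half = math.tan(math.radians(fov_deg) / 2.0)
    u = x / (-z * half)                # -1..1 across the frame horizontally
    v = y / (-z * half)                # -1..1 vertically (square frame assumed)
    return abs(u) <= 1.0 + pad and abs(v) <= 1.0 + pad

def cull(points, **kw):
    """Keep only the points the camera can (roughly) see."""
    return [p for p in points if visible(p, **kw)]
```

The padding matters in practice: off-screen geo can still cast shadows or show up in reflections and motion blur, so culling too aggressively changes the render.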
  21. RBD Material Fracture for surface/thin shell

    [facepalm] Urghh...no idea how I missed that. Thanks, Dam! -- mC
  22. Loving the new RBD Material Fracture. I'd like to use it to shatter a hollow model shell but I can't find a way to get rid of interior surface chunks. I attached a torus shell to show what I'm after (and the issue). Is there an attribute that can be processed to achieve this? Hoping this is possible as the new fracture looks better than the previous Voronoi approach and is noticeably faster/more stable than the boolean approach. Thanks! H17_RBDMatFrac_Example_01.hiplc
  23. For a lookDev scene, I'm trying to create a more complex control node based off a Null with a number of spare parameters. I've got it working with some automation, but I'd like to increase the functionality with drop-down menus that trigger one or more expression-driven parameter changes based on the value chosen. For example, if 'Area' was chosen from the drop-down menu 'Light Type', I would want to toggle the Area light on while toggling off all other lights in the scene. So, minimally, it's an IF/THEN with multiple triggers in the function. This is something I've done in other packages with MEL, Python, etc., but I'm not even clear on the terminology I should be searching for in Houdini. Can someone point me in the right direction? An example would be awesome. -- mC
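    The drop-down logic itself can be sketched in plain Python, independent of any Houdini calls (the light names are made up for the example; in Houdini this would typically live in a parameter callback script on the control null):

```python
# Sketch of the exclusive-toggle logic behind a 'Light Type' drop-down:
# given the chosen type, produce the on/off state for every light.
# Light names are illustrative, not from any particular scene.

LIGHTS = ["Area", "Dome", "Spot", "Point"]

def light_states(choice):
    """Exclusive toggle: the chosen light on, all others off."""
    if choice not in LIGHTS:
        raise ValueError("unknown light type: %s" % choice)
    return {name: (name == choice) for name in LIGHTS}
```

From a callback, each state would then be pushed onto the corresponding light node's enable or dimmer parameter (node paths and parameter names depend entirely on your scene, so those are left out here).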
  24. Surface as force volume?

    Great example, Henry. I think I'm all set on the VDB approach. And it looks like I can use the approach on either static or deforming geo, which is fantastic. I'm curious about the Field Force example you gave. I gave it a go based on your description but can't seem to make it work. I've tried referencing both the shatter geo and the interior geo in the SOP Geometry node, but in both cases the force attribute isn't coming in with the geo. Does this approach also require a SOP Solver node to wire up the points and forces? Metaball_Test_03a_FieldForce.hiplc
  25. Is it possible to convert a surface into a form that defines a force volume? I have a character that will have pieces popping off it, basically a fracturing shell. I've (mostly) achieved the effect in tests with a sphere, randomly deleted glue constraints, and a metaball to provide the illusion that the force is coming from under the surface itself. While I think I can mash together metaballs to get the form I need, it would be better if I could just use the underlying surface as the repelling force volume, especially if the character is animated/deforming. Can this be done, either by using a surface directly as the volume or maybe as a force mask inside a larger metaball region? Thanks! -- mC