
Shalinar

Members

  • Content count: 39
  • Joined
  • Last visited

Community Reputation: 2 Neutral

About Shalinar

  • Rank: Peon

Personal Information

  • Name: Chris Burgess
  • Location: Los Angeles
  1. Hmm, thanks Luke. I'll look into HQueue instead, as this is an animation. It's mostly reading files from a shared network drive that both machines have access to, so I was hoping that wouldn't slow it down too much (as opposed to having all the heavy sim info inside the IFD). I just checked the frames that finished so far, and they're all messed up. None of them rendered properly, something I've never seen with mantra before. I've uploaded an example of a "completed" frame rendered using "mantra -H computer1,computer2". env.0004.exr
  2. Hi guys and gals, I just recently started learning about distributed rendering with mantra. I've been trying it out and I have a really dumb question. I have two machines; let's call them computer1 and computer2. In Houdini I go to my mantra ROP, and in the Driver tab I switch the Command to Network Render and use the command "mantra -H computer1,computer2", then hit Render To Disk. OK, great, everything launches. Computer1 is my main machine (master), and computer2 is the slave machine I want to distribute rendering to. When I look at computer2, having just logged in and done nothing else, it says mantra is running as a background process, but the CPU is only at around 2-4% usage. I would assume a render would take up way more than that, since the master machine spikes up to 90-100% at times during the rendering. How can I be sure that the rendering is in fact being properly distributed among all the machines designated in the command? Thanks, Chris
  3. Why Do Dop Points Move From Frame 1 to 2?

    Interesting... Thanks for the replies, everyone. I'm still trying to understand this further, and I've got some questions for anyone who might know.

    1. If it's because of interpenetration, how come the geo itself doesn't move when the first frame is solved? I would expect the pieces to move along with the center points, but that doesn't appear to be the case if the packed pieces are viewed in wireframe.
    2. Does this mean voronoi fracturing by its very nature will always create interpenetrating pieces? Why is that? I can understand that boolean creates "perfect" cuts in that sense, but why does the voronoi method create interpenetrating pieces?
    3. If it's due to collision object padding, why doesn't the same thing occur when using boolean pieces? I would assume boolean pieces would still have a little bit of default padding added, leading to the same effect, but that doesn't seem to be the case, as Vitor demonstrated.

    Again, thanks in advance for any help. I'd really like to understand what's going on at a deeper level (and why), so I can make better sims!
  4. Pretty simple: any time I do a destruction setup (just fracturing geo, packing it, and running a simple sim), the points of the packed pieces jump from frame 1 to a slightly different position in frame 2. Why is this? It's incredibly frustrating when trying to map hi-res pieces onto a low-res sim, because on frame 1 everything will look good, but as soon as it gets to frame 2 all the pieces jump, and now the fracture lines show before the destruction happens. Is there a way around this? Thanks, Chris. In the attached example, just switch to wireframe, turn on Display Points, and scrub between frames 1 and 2 to see what I mean. dop_points_jump_example.hipnc
  5. Viewport Changed Python

    If you look at a recent post I made here, Alex Rusev gave me an excellent solution for getting an event trigger when a new node is selected. I strongly suspect you could use the same mechanism for finding out when the viewport changes. Essentially, you just add a new event loop callback (hou.ui.addEventLoopCallback()) that compares the current viewport to a previously stored viewport, and if they differ, do a thing (in my case I needed to emit a Qt signal; in your case, scale the geo). Not sure what your level of Python is, but if you need help writing it, I'll probably have some time to figure it out with you a little later today.
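    The polling mechanism described above can be sketched generically: store the previous value, compare on every tick, and fire a callback on change. In Houdini you would register the `poll` method with hou.ui.addEventLoopCallback(); the getter shown in the usage comment is a hypothetical wiring, so the sketch below uses a plain dictionary so it runs anywhere.

    ```python
    class ChangeWatcher:
        """Calls on_change(old, new) whenever getter() returns a new value."""

        def __init__(self, getter, on_change):
            self._getter = getter
            self._on_change = on_change
            self._previous = getter()  # baseline value at install time

        def poll(self):
            # Compare the current value against the stored one; only a
            # genuine change triggers the callback.
            current = self._getter()
            if current != self._previous:
                self._on_change(self._previous, current)
                self._previous = current

    # In Houdini the getter might be something like (hypothetical wiring):
    #   lambda: hou.ui.curDesktop()
    #       .paneTabOfType(hou.paneTabType.SceneViewer).curViewport()
    # and you would call hou.ui.addEventLoopCallback(watcher.poll).
    # Here, a plain dict stands in for the viewport state:
    state = {"viewport": "persp"}
    events = []
    watcher = ChangeWatcher(lambda: state["viewport"],
                            lambda old, new: events.append((old, new)))
    watcher.poll()                 # no change, nothing recorded
    state["viewport"] = "top"
    watcher.poll()                 # change detected
    print(events)                  # [('persp', 'top')]
    ```

    The same pattern works for selection changes (as in the post referenced above): only the getter differs.
    
    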
  6. hou.pasteNodesFromClipboard()

    The network editor has a local coordinate system (X and Y locations). My guess is that by default the nodes get pasted at (0, 0). If the pasted nodes are going to be wired into existing setups, you can always try hooking up the inputs and using Node.moveToGoodPosition() or Node.layoutChildren() on the pasted nodes. Alternatively, you can query the location of one of the existing nodes (probably the last node in the branch) and use hou.moveNodesTo() to move them close to that location.
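    The "paste near an existing node" idea above is ordinary 2D arithmetic on network-editor coordinates. Node.position() and Node.setPosition() are the real hou.Node methods you would use on either side of this helper; the helper itself and its default offset are an illustrative sketch, not a HOM API.

    ```python
    def paste_position(anchor_pos, dx=0.0, dy=-1.0):
        """Return a network-editor position offset from anchor_pos.

        anchor_pos is an (x, y) pair, e.g. the tuple form of the last
        existing node's Node.position(). The default places the pasted
        nodes one unit below the anchor (an arbitrary choice).
        """
        x, y = anchor_pos
        return (x + dx, y + dy)

    # Usage in Houdini would look roughly like (hypothetical names):
    #   anchor = existing_branch_nodes[-1]
    #   x, y = paste_position(tuple(anchor.position()))
    #   pasted_node.setPosition(hou.Vector2(x, y))
    print(paste_position((3.0, 5.0)))   # (3.0, 4.0)
    ```
    
    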
  7. PySide Emit Node Selected Signal

    This has been working great for me, Alex! The only modification I've made is in the remove_selection_event static method; I ended up having to add the following lines at the end:

    if hasattr(hou.session, '_selection'):
        del hou.session._selection

    Otherwise, I was getting a continuous hou.ObjectWasDeleted error when event_callback tried getattr(hou.session, '_selection', None) after I had deleted the node (this also happened when I tried it first on a locked, then unlocked, SESI HDA -- weird). So I just made sure to clean up the _selection attribute on hou.session, and it's working like a charm! Thanks again for your very detailed example; it was clear, concise, and easy to follow and replicate for my own needs.
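    The cleanup described above is plain hasattr/delattr: hou.session is just a module whose attributes persist for the Houdini session, so deleting the attribute defensively prevents the callback from ever seeing a stale node object. A minimal sketch, using types.SimpleNamespace as a stand-in for hou.session so it runs outside Houdini:

    ```python
    from types import SimpleNamespace

    session = SimpleNamespace()          # stand-in for hou.session

    def install_selection(node):
        # Stash the current selection on the session object.
        session._selection = node

    def remove_selection_event():
        # Defensive cleanup: delete the attribute only if it exists, so a
        # later getattr(session, '_selection', None) can never return a
        # stale (possibly deleted) node object.
        if hasattr(session, '_selection'):
            del session._selection

    install_selection("some_node")
    remove_selection_event()
    print(hasattr(session, '_selection'))   # False
    remove_selection_event()                # safe to call twice
    ```
    
    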
  8. PySide Emit Node Selected Signal

    Thank you very much, Alex! That install_selection_event method is pretty clever. I'll test this today in my program and let you know the results.
  9. PySide Emit Node Selected Signal

    In the current tool I'm building for H16, I have a UI in which I would like different options to be available depending on the user's selection. Ideally, these options could change without the user having to close and relaunch the UI every time. In short, I need a way to capture a signal emitted when the user selects node(s). I looked around the HOM and didn't see anything that gets called whenever a node is selected, just queries for the currently selected nodes. Does anyone know how I could trigger something in my UI whenever the user selects different nodes? Thanks! Chris
  10. Weird Behavior - createOutputNode

    Thanks Graham, I figured out my mistake right as you posted, haha.
  11. Weird Behavior - createOutputNode

    Ah, my mistake: "parent" refers to the obj-level container, not the input node. Printing "parent" in my case was misleading because the parent's name was the same as the input node's name. Code fixed to:

    input = node.inputs()[0]
    input.createOutputNode("xform")
  12. Weird Behavior - createOutputNode

    I'm developing a setup using a Python SOP and came across some behavior that I cannot account for. In my example scene, you'll see that I just have a cube appended with a Python SOP. Enable the Python SOP to see what happens. Inside the Python SOP I'm getting the parent node (the cube), then trying two different things: 1) append a transform node to the Python SOP, and 2) append a transform node to the cube. The first succeeds; the second fails due to "invalid node type name", and it spawns multiple transform nodes. Here's the code:

    node = hou.pwd()
    parent = node.parent()
    # This works just fine:
    node.createOutputNode("xform")
    # This fails due to "Invalid Node Type":
    parent.createOutputNode("xform")

    So the first issue is: why does createOutputNode from a parent node fail? The error raised is "Invalid node type name", but that doesn't make sense, because the same node type name is valid when using createOutputNode from the current node. The second issue is: why do multiple transform nodes get spawned from a single createOutputNode? EDIT: Only a single transform node spawns when the parent.createOutputNode line is removed, so I believe the second issue is due to the Python SOP re-evaluating the failing code multiple times, each time spawning another successful transform node from the Python SOP. createOutputNode_example.hip
  13. Refractions Not Energy Conserving?

    Yeah, sorry, I should have mentioned this is an H15.5 job. I'll look into the conserve VOP, thanks.
  14. Hi everyone, I've noticed an issue when shining a light through a refractive material: the refractions don't seem to be energy conserving. Using a point light with intensity 1, exposure 0, and no attenuation, shining through a grid with a basic mantra surface material with refractions enabled, the brightness values of the resulting pixels in the render are extremely high -- some as high as 4,000. Why is this? In this thread, Symek mentions that the BSDF model is not inherently energy conserving, so conservation would have to be applied as a post-process. It seems odd that this wouldn't already be done in SideFX's shaders... If that's the case, though, how can I make refractions in the mantra surface shader energy conserving? Thanks, Chris. EDIT: Forgot to upload the sample scene; here it is. light_refraction_broken.hip
  15. Python: OnInputChanged Double eval?

    Thanks for getting back to me. Try/except is exactly what I ended up doing.
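    The try/except guard mentioned above can be sketched outside Houdini: when a callback such as OnInputChanged can fire more than once and a spurious run hits a missing input, catch that specific failure and return instead of erroring. The function name and return values here are illustrative, not the actual event-handler signature.

    ```python
    def on_input_changed(node_inputs):
        # node_inputs stands in for something like hou.pwd().inputs().
        try:
            first = node_inputs[0]      # may not exist on the spurious call
        except IndexError:
            return None                 # swallow the extra evaluation
        return "connected to " + first

    print(on_input_changed([]))         # None
    print(on_input_changed(["box1"]))   # connected to box1
    ```

    Catching only the specific expected exception (rather than a bare except) keeps real bugs visible.
    
    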