Posts posted by Shalinar

  1. On 3/13/2019 at 10:03 PM, tmdag said:

    If you're not sure about the first part, here is an example (check before and after the Copy SOP). Also take a look at what objects I am rendering under the Mantra ROP.

    Your geometry will become like small area lights. In the case of polygons, each polygon is a small area light.

    As for the material, same thing: Ce is fine (I've created a new one). Not sure what tutorial you watched or what render engine it used (raytracing? micropoly? PBR?), but I'm using PBR for this


    Thanks mate,
    I should have been clearer: I know that instancing spheres onto particles "works", but I want to avoid that if possible, since instancing geo onto tens or hundreds of millions of particles can get quite heavy. In Steve Knipping's tutorial "Applied Houdini - Particles 1" (https://www.cgcircuit.com/tutorial/applied-houdini---particles-i/video/ah-particles-9b-last-changes-illumination?time=53), around the 55-second mark he adds a geo light to his particles. No instancing spheres; it just "worked". I believe he was using PBR in the video, either that or micropolygon PBR.


    On 3/13/2019 at 11:08 PM, Skybar said:

    The geometry light doesn't need geometry per se, but it does need a primitive. So instead of copying geometry to the points, you can use an Add SOP and turn on "Add Particle System", and the geo light should work fine. The size and color can also be controlled per-particle with pscale and Cd, which is useful too.

    Interesting. I got it working with the Add SOP like you suggested. It's bizarre to me that "Add Particle System" would produce what I need when an actual particle system from a real POPs simulation doesn't. I'll have to rewatch the tutorial I was following to see if there was a primitive with the particles somewhere that I missed, because he definitely did not use this Add SOP method to get the geo light working.

    Thanks for the help guys :)

  2. 2 hours ago, tmdag said:

    Geometry light needs GEOMETRY (not just points). You could potentially instance (Copy SOP) spheres on top of your points. Also don't forget to add a material to your geo light.

    But you do not have to do that in order to have light out of points, you could just crank up Ce values.

    Hey Albert, thanks for the helpful response! I'm not sure about the first part. I was following one of Steve Knipping's particle tutorials; at the end he just adds a geo light, points it to his particle sim, and everything works fine without instancing actual geometry onto the points or changing anything in the geo light. Also, I don't really know what kind of material to add to a geo light... I tried creating a Principled Shader with Ce set to Use Point Color, but it didn't make a difference.

    In the end, controlling the Ce value on the particles' shader did the trick, so thank you for that! I would still like to understand why the geo light doesn't behave like I would expect it to, though.

  3. Hi all,

    I'm getting some unexpected behavior when I try to light my scene. I've got a particle sim, and I want those particles to illuminate the scene, so I assigned a Geometry Light to them. However, it's not acting as I expected.

    For one, cranking up the Intensity of the light does nothing. In the viewport, I can see the scene getting brighter, but in the render there is zero difference between an Intensity value of 1 vs 1000.

    Another odd behavior is that the light in the render seems to depend on where my display flag is set in the particle network, rather than on the node I told it to point to. I'll upload my hipfile to illustrate, but basically I've got two nulls coming after my particle sim. One has a VOP where I'm cranking the Cd way up, and the other is just the vanilla result of the particle sim. Even though I set my Geometry Light to point to the one with the overblown Cd, if I move my display flag to the other null then that is what gets used for the render.

    This illustrates the third strange behavior, which is that cranking up the Cd values makes my light brighter. It's as though Cd is acting as the intensity parameter rather than the actual Intensity. So in order to make my light actually illuminate objects, I have to completely overblow my Cd and ruin the color of my particle sim.

    Are these supposed to work this way? All three seem odd to me. If I've got something set up incorrectly though, please let me know.







  4. I'm using a userSetup.py script to do some setup in Maya during launch. However, some things are not happening as expected, and I can't seem to find the output of Maya's stdout during startup to debug.

    For example, if I put the line

    print "Testing stuff..."

    into userSetup.py, I cannot find where it gets printed. It does not show up in the Script Editor after Maya launches, it does not get written to the shell if I launch Maya via cmd prompt > maya.exe, and it does not get written to the output file if I set MAYA_CMD_FILE_OUTPUT in my Maya.env.

    Where can I find the stdout from Maya's startup process?
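
    When the startup stdout is nowhere to be found, one workaround is to bypass print entirely and append diagnostics to a known log file from userSetup.py. This is just a minimal sketch of that idea; the log path and helper name are my own, not a Maya API, and the same pattern works in any Python interpreter.

```python
# userSetup.py sketch (hypothetical helper): write startup diagnostics to
# a known file instead of relying on where Maya routes stdout.
import datetime
import os
import tempfile

# Illustrative location; in production you'd pick a pipeline-defined path.
LOG_PATH = os.path.join(tempfile.gettempdir(), "maya_userSetup.log")

def startup_log(message):
    """Append a timestamped line to the startup log file."""
    stamp = datetime.datetime.now().isoformat()
    with open(LOG_PATH, "a") as f:
        f.write("%s %s\n" % (stamp, message))

startup_log("Testing stuff...")
```

    After launch you can simply open the log file to see whether (and when) your setup code actually ran.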

  5. 1 minute ago, lukeiamyourfather said:

    Try rendering only with the remote host: instead of "mantra -H localhost,remotehost", just try "mantra -H remotehost" and see what you get. Note that the other machines might take a while to actually start rendering; information gets transferred over the network to start the render, and depending on the types of assets this can take a considerable amount of time. If you plan to render animations, you're better off rendering complete frames with something like HQueue (or any other queue manager) versus distributed bucket rendering, because there's less overhead and wasted computing time.

    Hmm thanks Luke. I'll look into HQueue-ing instead, as this is an animation. It's mostly reading files from disk from a shared network drive that both machines have access to, so I was hoping that wouldn't slow it down too much (as opposed to having all the heavy sim info inside the ifd).

    I just checked the frames that finished so far, and they're all broken. None of them rendered properly, which I've never seen with mantra before. I've uploaded an example of a "completed" frame using "mantra -H computer1,computer2"


  6. Hi guys and gals,

    I just recently started learning about distributed rendering with mantra. I've been trying it out and I have a really dumb question. I have two machines; let's call them computer1 and computer2. In Houdini I go to my Mantra ROP, in the Driver tab switch the Command to Network Render, use the command "mantra -H computer1,computer2", then hit Render To Disk. Great, everything launches. Computer1 is my main machine (master), and computer2 is the slave machine I want to distribute rendering to. When I look at computer2, having just logged in and done nothing else, it says mantra is running as a background process, but the CPU is only at 2-4% usage. I would assume a render would take up way more than that, since the master machine spikes to 90-100% at times during rendering. How can I be sure the rendering is in fact being properly distributed among all the machines designated in the command?



  7. Interesting... Thanks for the replies everyone :)

    I'm still trying to understand this further, and I've got some questions for anyone who might know.

    1. If it's because of interpenetration, how come the geo itself doesn't move when the first frame is solved? I would expect the pieces to move along with the centerpoints, but that doesn't appear to be the case if the packed pieces are viewed in wireframe.

    2. Does this mean voronoi fracturing by its very nature will always create interpenetrating pieces? Why is that? I can understand that boolean creates "perfect" cuts in that sense, but why does the voronoi method create interpenetrating pieces?

    3. If it's due to collision object padding, why doesn't the same thing occur when using boolean pieces? I would assume boolean pieces would still have a little bit of default padding added, leading to the same effect, but that doesn't seem to be the case, as Vitor demonstrated.

    Again thanks in advance for any help, I really would like to understand what's going on at a deeper level (and why), so I can make better sims!

    Pretty simple: any time I do a destruction setup (just fracturing geo, packing it, and running a simple sim), the points of the packed pieces jump from frame 1 to a slightly different position in frame 2. Why is this? It's incredibly frustrating when trying to map high-res pieces onto a low-res sim, because on frame 1 everything looks good, but as soon as it gets to frame 2 all the pieces jump, and now the fracture lines show before the destruction happens. Is there a way around this?



    In the attached example, just switch to wireframe, turn on Display Points, and scrub between frames 1 and 2 to see what I mean


  9. If you look at a recent post I made here, Alex Rusev gave me an excellent solution for getting an event trigger when a new node is selected. I strongly suspect you could use the same mechanism for finding out when the viewport changes. Essentially you just add a new eventLoopCallback() that compares the current viewport to a previously-stored viewport, and if they are different then do a thing (in my case, I needed to emit a Qt signal; in your case, scale geo). I'm not sure what your level of Python is, but if you need help writing it I'll probably have some time to figure it out with you a little later today :)
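
    The polling idea above can be sketched generically: cache the last-seen value and fire a handler when it changes. In Houdini you would register the poll function with hou.ui.addEventLoopCallback(); here the "viewport" getter and the change handler are plain callables (my own hypothetical names) so the pattern runs anywhere.

```python
# Generic change-poller sketch: call poll() on every event-loop tick;
# when the observed value differs from the cached one, invoke on_change.
def make_change_poller(get_value, on_change):
    state = {"last": None}

    def poll():
        current = get_value()
        if current != state["last"]:
            state["last"] = current
            on_change(current)

    return poll

# Usage sketch: simulate three event-loop ticks with one value change.
changes = []
value = {"viewport": "persp1"}
poll = make_change_poller(lambda: value["viewport"], changes.append)
poll()                      # first tick records the initial value
poll()                      # no change, handler not called
value["viewport"] = "top1"
poll()                      # change detected, handler fires again
```

    In Houdini you'd swap the lambda for a query of the current viewport and the handler for your geo-scaling code, then pass poll to hou.ui.addEventLoopCallback().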

  10. The network editor has a local coordinate system (X and Y locations). My guess is that by default the nodes get pasted at (0, 0). If the pasted nodes are going to be wired into existing setups, you can always try hooking up the inputs and using Node.moveToGoodPosition() or Node.layoutChildren() on the pasted nodes. Alternatively, you can query the location of one of the existing nodes (probably the last node in the branch) and use hou.moveNodesTo() to move them close to that location.

  11. This has been working great for me Alex! The only modification I've made is in the remove_selection_event static method, I ended up having to add the following lines at the end:

    if hasattr(hou.session, '_selection'):
        del hou.session._selection

    Otherwise, I was getting a continuous hou.ObjectWasDeleted error when event_callback tried to getattr(hou.session, '_selection', None) after I had deleted the node (this also happened when I first tried it on a locked, then unlocked, SESI HDA -- weird). So I just made sure to clean up the _selection attribute on hou.session, and it's working like a charm!

    Thanks again for your very detailed example, it was clear, concise, and easy to follow/replicate for my own needs.

  12. In the current tool I'm building for H16, I have a UI in which I would like to have different options available to the user depending on their selection. It would be ideal if these options can change without the user having to close and relaunch the UI every time. In short, I need a way to capture a signal emitted when the user selects node(s). I looked around the HOM and didn't see anything that looks like it gets called whenever a node is selected, just queries for currently selected nodes. Does anyone know how I could trigger something in my UI whenever the user selects different nodes?



  13. Just now, graham said:

    The issue is that the parent node is in fact /obj/example. In Houdini, the parent node is the node that contains an operator, not an input node. Since the parent is an Object, it fails because there is no xform node at the object level.

    As you mentioned, the multiple nodes at the SOP level are a result of the node cooking multiple times, most likely when you MMB on it for errors, etc.

    Thanks Graham, figured out my mistake right as you posted haha :)

    Ah, my mistake: "parent" refers to the obj-level container, not the input node. Printing "parent" in my case was misleading because the parent's name was the same as the input node's name. Code fixed to:

    input = node.inputs()[0]


  15. I'm developing a setup using a Python SOP and came across some behavior that I cannot account for.

    In my example scene, you'll see that I just have a cube appended with a Python SOP. Enable the Python SOP to see what happens. Inside the Python SOP I'm just getting the parent node (the cube), then trying two different things: 1) append a transform node to the Python SOP, and 2) append a transform node to the cube. The first succeeds, the second fails due to "invalid node type name", and it spawns multiple transform nodes.

    Here's the code:

    node = hou.pwd()
    parent = node.parent()
    # This works just fine:
    node.createOutputNode("xform")
    # This fails due to "Invalid Node Type":
    parent.createOutputNode("xform")

    So the first issue is why does createOutputNode from a parent node fail? The error raised is "Invalid node type name", but that doesn't make sense because the same node type name is valid when using createOutputNode from the current node.

    The second issue is why do multiple transform nodes get spawned from a single createOutputNode?
    EDIT: Only a single transform node spawns when the parent.createOutputNode line is removed, so I believe the second issue is due to the Python SOP re-evaluating the failing code multiple times, each time spawning another successful transform node from the Python SOP.


  16. Hi everyone,

    I've noticed an issue shining a light through a refractive material: the refractions don't seem to be energy conserving. Using a point light with intensity 1, exposure 0, and no attenuation, shining through a grid with a basic Mantra Surface material with refractions enabled, the brightness values of the resulting pixels in the render are extremely high; some are as high as 4,000. Why is this?

    In this thread, Symek mentions that the BSDF model is not inherently energy conserving, so it would have to be applied post-process. Seems odd that this wouldn't already be done in SideFX's shaders... If that's the case, though, how can I make refractions in the mantra surface shader energy conserving?



    EDIT: Forgot to upload sample scene, here it is


  17. Has anyone else encountered an issue with an HDA python module where OnInputChanged evaluates twice? I'm not sure what's causing it... Even simple code in the module triggers it, e.g.:

    node = kwargs['node']
    print node

    ^ I get two printouts of the node each time I change the input connection.

    I originally noticed this when I was trying to query the connected input. When I did

    node = kwargs['node']
    input = node.inputs()[0]
    print input

    I would get an error saying the index was out of the tuple range, followed by a correct printout of the first input connection. The first eval seems to trigger before the connection is actually made, which is why it errors the first time through. Then the second eval triggers once the connection exists. This is bizarre, and I haven't yet figured out what's causing it. Surely this isn't standard behavior.

    Anyone else seen this or have ideas as to what's causing it?
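
    A defensive pattern for the premature first eval is to treat the inputs tuple as possibly empty rather than indexing it blindly. A tiny generic sketch (the helper name is mine, not a Houdini API):

```python
# Guard against the callback firing before the wire actually exists:
# return None for an empty inputs tuple instead of raising IndexError.
def first_input_or_none(inputs):
    return inputs[0] if inputs else None
```

    In the OnInputChanged module you'd call this with node.inputs() and simply return early when it gives back None, letting the second (post-connection) eval do the real work.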




  18. 2 hours ago, graham said:

    The usual method to prevent modifications would be through the use of file permissions on your repository files:

    A normal tool that artists use should come to them from a read-only file that is created/managed by your tool system. Since the file has no write permissions, artists are not able to make changes to it (think SESI otls that exist inside $HFS; you can't modify these by default because they are read-only). When you hit your "Modify" button, your system would make a local copy of that otl in the user's work directory, and the file would have write permissions so they'd be able to add/update/remove contents and parameters. You'd then "Publish" the tool back to your system, removing their local copy and creating a new, read-only copy that artists would then pick up.

    I forgot all about just doing it through the OS permissions, got my head too deep in HOM :rolleyes: Thanks Graham!
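
    The publish/modify cycle graham describes can be sketched with nothing but the standard library. The function and directory names here are hypothetical; the point is the permission handling: strip the write bits on publish, restore them on checkout.

```python
# Sketch of a permissions-based publish/modify cycle for otl/hda files.
import os
import shutil
import stat

def publish(src_path, repo_dir):
    """Copy the file into the repository and make it read-only for everyone."""
    dst = os.path.join(repo_dir, os.path.basename(src_path))
    shutil.copy2(src_path, dst)
    mode = os.stat(dst).st_mode
    # Remove all write bits, like the SESI otls inside $HFS.
    os.chmod(dst, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))
    return dst

def checkout(published_path, work_dir):
    """'Modify': make a writable local copy in the user's work directory."""
    dst = os.path.join(work_dir, os.path.basename(published_path))
    shutil.copy2(published_path, dst)
    os.chmod(dst, os.stat(dst).st_mode | stat.S_IWUSR)
    return dst
```

    Houdini itself then refuses edits to the read-only definition, and "publishing" is just running publish() again on the edited local copy.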

  19. Hello,

    I'm creating a digital asset publication system, and I want to be able to prevent artists from changing the asset's parameters outside of the system. So basically, I want my assets to behave just like SideFX's digital assets, in the sense that you can't modify the Type Properties, you can only add spare parameters. But then when they use my "Modify" button, the Type Properties dialog unlocks and they can edit the base definition. Then "publishing" the asset re-locks the Type Properties. How can this be achieved? I immediately went to hou.hda.setSafeguardHDAs(), but that makes it so NO asset can be modified; I just want our custom assets to be modifiable through the new system.


  20. Thanks F1! Experimenting with that now. I always wondered how the Tools tab was used. The documentation is pretty basic and doesn't address a lot of the advanced options for power users. Do you have a reference for this tab? My google-fu is weak here, since searching for the "tools" tab doesn't help much considering a lot of people use "tool" interchangeably with "node/asset/hda/otl".

    I believe in the File Merge node you should change your Merge Variable to WEDGENUM to keep the naming consistent; then you can merge based on WEDGENUM in your filepath. The File Merge is going to replace every instance of your Merge Variable with the values in the Merge Range (start to end by increment). So your starting value should be your lowest seed value and the end value should be your highest seed value (not 1 and 1 as yours are now :P ). Then just make sure to set Missing Files to No Geometry so it doesn't error out for the inevitable missing files in the range.


    I just tested this with a simple scene: I had a cube, sphere, and pyramid written out with a switch controlled by the $CLUSTER variable (0=cube, 1=sphere, 2=pyramid). Then I used the File Merge node with Merge Variable set to CLUSTER and read them in with the same variable, setting the Merge Range to (0, 2, 1) and Missing Files to No Geometry. That way, when I deleted the file where $CLUSTER=1, files 0 and 2 still imported just fine :) So you can see here I just have the cube and pyramid being read in without error. You'll definitely need the same for your particles.
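
    What the File Merge node does with its Merge Variable can be illustrated in a few lines of Python: substitute each value of the merge range into the path, and (with Missing Files set to No Geometry) simply skip paths that don't exist on disk. The variable name and paths below are just examples.

```python
# Illustrative sketch of File Merge's variable expansion.
import os

def expand_merge_paths(pattern, var, start, end, inc=1):
    """Replace every $VAR in the pattern with each value in the range."""
    return [pattern.replace("$" + var, str(v)) for v in range(start, end + 1, inc)]

def existing_only(paths):
    """'Missing Files: No Geometry' -- drop paths that aren't on disk."""
    return [p for p in paths if os.path.exists(p)]

paths = expand_merge_paths("geo/cluster_$CLUSTER.bgeo", "CLUSTER", 0, 2)
# With the $CLUSTER=1 file deleted, only files 0 and 2 would survive
# the existing_only() filter, matching the behavior described above.
```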