Search the Community
Showing results for tags 'farm'.
-
Hi Wizards! I'm trying to use the Deadline ROP to send sims/renders to the farm, but it's not using the environment variables from my scene file. Any idea how I can inherit the environment variables? Thanks!
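Not a Deadline-specific answer, but as a stopgap, here is a minimal sketch of re-applying the variables yourself in a pre-render script on the farm, using hou.getenv()/hou.putEnv(). The variable names and paths below are placeholders only.

# Hedged sketch: re-set the variables the hip expects in a pre-render script
# on the farm instead of relying on the submitter to carry them over.
# JOB/CACHE and the paths are placeholder examples only.
import hou

fallbacks = {
    "JOB": "//server/projects/myshow",      # hypothetical project root
    "CACHE": "//server/cache/myshow",       # hypothetical cache root
}
for var, value in fallbacks.items():
    if hou.getenv(var) is None:             # not inherited from the submitting session
        hou.putEnv(var, value)              # make it visible to expressions in the scene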
-
Afanasy version 3.2.1 supports TOPs: https://cgru.readthedocs.io/en/latest/software/houdini.html#afanasy-top-scheduler
-
[I think I'm probably more than a few days from understanding and being able to write the necessary recursive function, but I thought I'd tap the brains here to save time (or see if I'm on the wrong track and it's easier than I assume).] I want to build a list of render nodes that includes the nodes they depend on (see image). Something like (node name, nodes it immediately depends on): (r, as & b), (as, cs), (b, ds & es), (cs, 0), (ds, 0), (es, 0). I know I can run hou.node("out/r").inputDependencies() to get the full list, but that doesn't give me the dependencies of each node in turn. I figure, like I said above, that I'll probably need to write a recursive function to spit these out: something that runs over each subsequent list of dependencies until there are none, then returns the list. Somehow. But maybe not? I imagine there might be an internal-to-Houdini way to do this, no?
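For anyone searching later, here is a minimal sketch of the kind of recursive walk described above, assuming hou.Node.inputs() is enough to get each ROP's immediate inputs (node names as in the image; untested against that exact scene).

# A sketch, not a definitive answer: walk each ROP's immediate inputs
# recursively and record (node name -> names of nodes it directly depends on).
import hou

def dependency_map(node, deps=None):
    if deps is None:
        deps = {}
    if node.name() in deps:                    # already visited (shared inputs)
        return deps
    inputs = [n for n in node.inputs() if n is not None]
    deps[node.name()] = [n.name() for n in inputs]
    for n in inputs:
        dependency_map(n, deps)
    return deps

# dependency_map(hou.node("out/r")) would give something like:
# {'r': ['as', 'b'], 'as': ['cs'], 'b': ['ds', 'es'], 'cs': [], 'ds': [], 'es': []}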
-
Hello, I've set up an HQueue farm across several machines with a shared drive. Almost everything seems to be working now (I had lots of intermittent issues, with jobs not making it to HQueue and 0% progress fails; I think perhaps there was a license conflict). Anyway, the current problem is that I am unable to overwrite dependencies in the shared folder when I resubmit the hqrender node. As far as I can tell there are no permission issues and the files are not in use. I can read/delete/edit these files without problem from any of my client/server machines (they're also fine/not corrupt in the local directory). Any thoughts? Log below:

OUTPUT LOG
.........................
00:00:00 144MB | [rop_operators] Registering operators ...
00:00:00 145MB | [rop_operators] operator registration done.
00:00:00 170MB | [vop_shaders] Registering shaders ...
00:00:00 172MB | [vop_shaders] shader registration done.
00:00:00 172MB | [htoa_op] End registration.
00:00:00 172MB |
00:00:00 172MB | releasing resources
00:00:00 172MB | Arnold shutdown
Loading .hip file \\192.168.0.123\Cache/projects/testSubmit-2_deletenoChace_2.hip.
PROGRESS: 0%
ERROR: The attempted operation failed.
Error: Unable to save geometry for: /obj/sphere1/file1
SOP Error: Unable to read file "//192.168.0.123/Cache/projects/geo/spinning_1.23.bgeo.sc".
GeometryIO[hjson]: Unable to open file '//192.168.0.123/Cache/projects/geo/spinning_1.23.bgeo.sc'
Backend IO

Thanks in advance
-
Hi all, I've read through previous posts on this topic as well as other websites attempting to solve this, but I have yet to find an adequate solution. I am trying to build a workflow for meshing large point clouds (lidar PLY files) that we can offload to our render farm. The issues I'm currently having:
1. The point clouds have no normals, so Point Cloud ISO does not work. I'm not sure what math other point cloud software uses to generate normals on point clouds (see the sketch below).
2. I've tested using both VDB from Points and the Particle Fluid Surfacer. The Particle Fluid Surfacer seems to be faster at meshing the points, but the big problem with both is that they end up giving you thickness, which is ultimately double the number of polygons and completely unnecessary.
3. The point clouds are massive (averaging 100 million points per file), and I haven't found a good way of automatically splitting them up into better chunks. Point Cluster is really slow on that many points.
Any advice would be much appreciated. I'd love to keep this inside Houdini rather than using traditional point cloud software, mainly because I can run it on the farm and because the rest of our workflow is all Houdini based. Thanks!
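On point 1, the math most point-cloud packages use for normals is local plane fitting: PCA of each point's k nearest neighbours, taking the eigenvector with the smallest eigenvalue as the normal. A minimal numpy/scipy sketch of that idea outside Houdini (normal orientation/flipping not handled, and k is a guess you'd tune):

# Sketch of PCA-based normal estimation for an unorganised point cloud.
# Assumes a (N, 3) float array of positions; normal orientation is left
# unresolved (you may need to flip toward a viewpoint afterwards).
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=16):
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)           # indices of k nearest neighbours
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        patch = points[nbrs] - points[nbrs].mean(axis=0)
        cov = patch.T @ patch                  # 3x3 covariance of the local patch
        _, vecs = np.linalg.eigh(cov)          # eigenvectors, eigenvalues ascending
        normals[i] = vecs[:, 0]                # smallest eigenvector = plane normal
    return normals

At 100 million points you'd obviously want this vectorised or chunked rather than a per-point Python loop, but it shows the math involved.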
-
Hi, any suggestions for a Houdini Mantra online render farm? I have contacted GridMarkets. Any others?
-
Hi Guys, I'm trying to get a tool that I created to render on a farm. It's a multi-container sim tool for volumes. All the tool is, is a ROP Geometry node in a subnet that I tell to iterate over a SOP output. I give it a min, a max, and a current iteration value. This is the code:

def render():
    node = hou.pwd()
    min = node.evalParm('from')
    max = node.evalParm('max')
    sop = hou.node(node.evalParm('soppath'))
    for n in range(min, max):
        node.setParms({'current': n})
        print node.evalParm('current')
        sop.cook()
        hou.parm(node.path() + '/sim/execute').pressButton()

It works beautifully locally, but when I submit it to a farm it just sims the same "current" iteration over and over, not iterating as it does locally. I imagine the farm runs the render command and then, once it's finished, runs the entire function again (but I could be wrong), instead of running through the loop; or it's just not iterating the current value. Sorry if this is a bit vague. Does anyone have experience with this stuff? I would love a little help. Thanks, Rob
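Not sure this is the cause, but one hedged guess/workaround: if the farm re-launches the function for every task, the loop state never advances. A minimal hython sketch of running the whole range in a single standalone task instead; the scene path and node path below are placeholders matching the names in the snippet above.

# Hedged sketch: run the whole iteration range inside one standalone hython
# task so the loop isn't reset by the farm re-invoking the function.
# /path/to/scene.hip and /obj/voltool are placeholders.
import hou

hou.hipFile.load("/path/to/scene.hip")
node = hou.node("/obj/voltool")                   # the multi-container tool
for n in range(node.evalParm('from'), node.evalParm('max')):
    node.parm('current').set(n)                   # advance the iteration parm
    hou.node(node.evalParm('soppath')).cook(force=True)
    node.parm('sim/execute').pressButton()        # fire the embedded ROP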
-
Hi everyone, is there any workflow to simulate RBD on multiple machines? Since slicing works only with FLIP, I've tried some things like using HQueue Sim and HQueue Render, with no luck. Thanks in advance.
-
Hi guys, I need to distribute a FLIP fluid sim on the farm. I have Royal Render on the farm, but what really matters to me now is just making it work. So which is the best way to sim it: HQueue or RR? And in either case, how can I do it, please? Thanks in advance.
-
Greetings, I've had some down time and decided I'd attempt to get HQueue to work on the couple of machines we have here. NOPE. Somehow I can get the server set up on a machine, and the client on that same machine will register in the web interface, but none of the other machines will register. All Windows 7 machines, all clean installs of the HQueue client and server. Can someone make an HQueue setup guide for idiots? I've read through the official instructions; they don't really offer anything other than the bare basics of what to install. Example of my setup now:
Server-PC (192.168.0.1): running the server and a client; the client seems to work fine. Port 5000.
Slave01-PC (192.168.0.2): running just the client, installed and pointed to "Server-PC" in the ini.
What am I doing wrong? Is there anything else I HAVE to have installed? Am I missing a step? The guide makes it seem as though that's all I need to do. Both systems have working versions of Houdini on them already.
-
I saw this in the news and thought it could make for an interesting render farm. It would be nice if they had access to more memory, though (8 GB max); hopefully future iterations will allow for more. Each 4U unit hosts 45 tiny machines with dual-core Atom processors at 1.6 GHz or 2.0 GHz, plus some other basic components like a NIC and SSD. http://h17007.www1.hp.com/us/en/enterprise/servers/products/moonshot/index.aspx#tab=TAB1 Supposedly the second or third generation of the product (they call it Moonshot) will use ARM processors instead of the x86-64 based Atom processors. Houdini is in a unique position to take advantage of hardware like this because the Mantra licenses for each render node are included with the Houdini licenses; with Renderman, for example, it would take $90,000 of licenses to run a single 4U unit of these machines. I wonder if SESI has experimented with ARM builds of Houdini or Mantra.