Welcome to od|forum


Search the Community: Showing results for tags 'HQueue'.





Found 24 results

  1. Hello, I'm trying to distribute my wave layer tank FLIP sim to about 8 computers, and the sections that meet go crazy when the slices are brought back in (see image below). (I'll add a file when I get a chance if the image doesn't instantly spark an obvious solution.)
  2. Hey guys, I'm trying to use HQueue for FLIP simulation. I believe I set up the machines correctly, as the HQueue server distributes render jobs successfully and those jobs complete fine. The problem I have is with distributing simulation. I followed the directions from the HQueue help, nothing special. However, the work stops all of a sudden at a certain frame. I tested both volume slicing and particle slicing for different kinds of fluid solutions, and the same problem comes up around frame 10: no more progress. But the render manager shows "running", which would mean HQueue is working fine. I'm guessing this is definitely an error in spite of there being no error messages, because HQueue stops anyway. There's no clue, and I'm totally stuck at this point. These pics are from a sliced pyro sim, but a sliced FLIP sim has the same problem. Has anybody experienced this problem, or can you give me a hint for solving it? This is driving me nuts. OS: Windows 7 Professional x64 SP1. Houdini: 14.0.201.13. The server and clients are all the same. hq_flip_02.hip
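For sliced-sim problems like the one above, it can help to picture how volume slicing carves the domain: the job's slice_divs give per-axis divisions and each machine owns one linear slice number. A minimal Python sketch of that mapping (a hypothetical helper for reasoning about slices, not SESI's actual code):

```python
# Map a linear slice number to its sub-box of the sim bounds, given
# per-axis slice divisions (like the "slice_divs" parm HQueue jobs carry).
# Hypothetical helper for reasoning about volume slices only.

def slice_bounds(slice_num, divs, bbox_min, bbox_max):
    dx, dy, dz = divs
    ix = slice_num % dx                    # x index varies fastest
    iy = (slice_num // dx) % dy
    iz = slice_num // (dx * dy)
    size = [(bbox_max[a] - bbox_min[a]) / d for a, d in enumerate(divs)]
    lo = [bbox_min[0] + ix * size[0],
          bbox_min[1] + iy * size[1],
          bbox_min[2] + iz * size[2]]
    hi = [lo[0] + size[0], lo[1] + size[1], lo[2] + size[2]]
    return lo, hi
```

With divs (2, 1, 2) and a 10-unit box, slice 3 owns the sub-box from (5, 0, 5) to (10, 10, 10); mismatched overlap or padding at exactly these seams is a common source of artifacts where slices meet.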
  3. Hello, I'm looking to find out what off-the-shelf render farm managers people currently prefer for production; likes and dislikes? Short background: we need to use our nodes for Houdini, Max, Maya, Fume, and a few other packages. We currently have about ~100 nodes and no dedicated render farm technician, so the most hands-off, off-the-shelf option is best. I have experience with HQueue, Rush, Deadline, and Qube, so I am also looking for any additional off-the-shelf software beyond those. Thanks -Ben
  4. Hi everyone, I'm facing a weird problem with HQueue. I have 3 client machines, the firewall is disabled on all of them, they have all the necessary authorizations and access, and the file I want to render can be opened on each of them with the exact same path (I've tried opening it on each of them, and it works perfectly), but when I submit the job to HQueue, it fails on all machines with this message:

      .........................
      return function(**kwargs)
      File "C:\Program Files\Side Effects Software\Houdini 15.0.244.16\houdini\scripts\hqueue\hqlib.py", line 1489, in simulateSlice
      controls_dop = _loadHipFileAndGetNode(hip_file, controls_node)
      File "C:\Program Files\Side Effects Software\Houdini 15.0.244.16\houdini\scripts\hqueue\hqlib.py", line 331, in _loadHipFileAndGetNode
      _loadHipFile(hip_file, error_logger)
      File "C:\Program Files\Side Effects Software\Houdini 15.0.244.16\houdini\scripts\hqueue\hqlib.py", line 362, in _loadHipFile
      hou.hipFile.load(hip_file)
      File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.16/houdini/python2.7libs\houpythonportion.py", line 518, in decorator
      return func(*args, **kwargs)
      File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.16/houdini/python2.7libs\hou.py", line 22697, in load
      return _hou.hipFile_load(*args, **kwargs)
      hou.OperationFailed: The attempted operation failed.
      Unable to open file: X:/3d_Projects/MyProject/Houdini/Beach_4.hip

  My file is in a different shared folder than the one specific to HQueue, but I even tried submitting it from the HQueue shared folder and it doesn't work either. Does anyone have a clue what is going on? I am stuck...
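One thing worth checking with errors like the "Unable to open file: X:/..." above: on Windows, drive letters are mapped per user, so an HQueue client service running under a different account may not see a mapping that works fine in an interactive session. A workaround some people use is rewriting drive-letter paths to UNC paths before submitting; a tiny sketch (the server/share mapping below is invented for illustration):

```python
# Rewrite a drive-letter path to its UNC equivalent before job submission.
# DRIVE_MAP is a made-up example mapping; substitute your real shares.
DRIVE_MAP = {"X:": r"\\fileserver\projects"}

def to_unc(path, drive_map=DRIVE_MAP):
    """Return the UNC form of `path` if its drive letter is mapped."""
    for drive, unc in drive_map.items():
        if path.upper().startswith(drive.upper()):
            return unc + path[len(drive):]
    return path
```

UNC paths resolve the same way for every account on the machine, which is why they tend to survive the jump from an interactive session to a service.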
  5. Hey guys, so I got a new workstation this week, and I'm trying to set up my old system as a render/simulation node. I did get Mantra rendering to work using Deadline, but simulations aren't working for some reason; I'm getting this error (see attached image). I also tried HQueue, but all my jobs keep failing there. HQueue does see both my computers, but it fails immediately when I submit a job (either render or sim). I'm on Windows and I'm using Houdini Indie. Any ideas?
  6. Hi, I am wondering if rendering with HtoA through HQueue is supported using the latest versions of Houdini and Arnold. If not, is there any way to do it with scripting, Alembic, etc.? Many thanks
  7. Hey peeps, I know this will sound weird, but has anyone ever run HQueue on CentOS on a NAS, such as a QNAP TS-253A-4G (see Amazon link)? It contains a Celeron processor with 4GB of RAM. I note from the description of the NAS that it runs Ubuntu out of the box, but I have seen CentOS mentioned as being able to run within "Container Station", a QNAP app. My setup so far is two i7-4790s, both dual-booting Windows and CentOS, with local Houdini installs. I would love for them both to become render nodes, with the HQueue server running on the NAS. I know most people say to just run the HQueue server on one of the nodes, but I want the nodes to do nothing but rendering, without any overhead from the HQueue server; also, all of the rendered output will be stored on the NAS, so it makes sense if it can also run the server. I don't know the minimum specs for the HQueue server, but if I upgraded the RAM on the NAS to 8GB it might be enough to pull this off... If anyone has any info on the minimum processor/RAM for the HQueue server, please let me know, or if anyone has a QNAP NAS that can run Linux, please share your experiences of Linux on a NAS... ciao Albin HO
  8. This is a newbie question (kind of)... After setting up the HQueue server and client on a single machine to help me do batch render jobs, I noticed the render times from HQueue are dramatically slower, about 5x. And again, this is the same machine; I just use HQueue to help me organize batch renders. The client takes too long in hython (around 16 min preparing the scene) and is not nearly as fast while rendering each frame (about 5x slower). I find this really sad, because Mantra renders so fast that it's a shame it doesn't seem to be used properly in my HQueue setup. And I don't even know how to start finding the problem; the client and server are running properly, though in my task manager I notice Mantra is a lot slower to start rendering, instead of rendering constantly like when I "render to disk" from my hip file. I am using Houdini Indie on a Windows 10 machine, so I am not exporting IFD files (it took me a while to find out that's incompatible with Indie). Any ideas on how I can go about solving this would be very much appreciated.
  9. I'm trying to run a sim in the background from a File Cache. I set the output and frame range and hit Save to Disk in Background. The scheduler pops up and says that the job is complete after 1 sec, but nothing has been cached. Has anyone experienced this? It doesn't seem to be throwing an error (unless I'm not looking in the right place). I have administrator privileges and I'm rendering on only one machine. Any suggestions would be great. Rob
  10. I tried following the SESI documentation on HQueue setup, but I'm unable to get it working. Could anyone direct me on how to make this work? I am able to see the client, but the render comes back failed:

      ERROR: Cannot open file W:/3D_project/0038_HoudiniProjects/HQTEST/HqTest1.hip

  This drive is on the server. The client is using the Windows login username and password to run the service. What am I doing wrong?

      # The shared network.
      hqserver.sharedNetwork.host = localhost
      hqserver.sharedNetwork.path.linux = %(here)s/shared
      hqserver.sharedNetwork.path.windows = \\render-04\hq
      hqserver.sharedNetwork.path.macosx = %(here)s/HQShared
      hqserver.sharedNetwork.mount.linux = /mnt/hq
      hqserver.sharedNetwork.mount.windows = H:
      hqserver.sharedNetwork.mount.macosx = /Volumes/HQShared

      # Server port number.
      hqserver.port = 5000

      # Where to save job output
      job_logs_dir = %(here)s/job_logs

      # Specify the database for SQLAlchemy to use
      sqlalchemy.default.url = sqlite:///%(here)s/db/hqserver.db

      # This is required if using mysql
      sqlalchemy.default.pool_recycle = 3600

      # This will force a thread to reuse connections.
      sqlalchemy.default.strategy = threadlocal
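Two things stand out in the config above (a hedged reading, since only this fragment is visible): the failing file lives on W:, which is neither under the configured H: mount nor necessarily visible to a client service running as a different account, and sharedNetwork.host is localhost while the Windows path points at \\render-04. If the share really lives on render-04, a consistent block would presumably look more like:

```ini
; Sketch only -- the hostname must match wherever the share actually lives.
hqserver.sharedNetwork.host = render-04
hqserver.sharedNetwork.path.windows = \\render-04\hq
hqserver.sharedNetwork.mount.windows = H:
```

The hip file would then need to be submitted from a path the clients can resolve, e.g. somewhere under the H: mount rather than W:.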
  11. Hi guys, yet another HQueue problem. My team and I expanded our university's render farm, and now we're at about 100 clients. Everything is working great, but the web interface of HQueue was getting slower and slower with each client we added, and it's now nearly unusable... We've tried different browsers (Chrome, Firefox, Internet Explorer...) but nothing changed. Our setup is pretty much the one suggested by SESI in the help section, so the shared folder and the HQueue server are on two different machines. Do you guys know this problem, or maybe even have a solution for it? Thank you, Philipp
  12. Hi guys, I've been trying to render a sequence in HQueue with Houdini 15.0.244.16, but it is not rendering my per-light components out, only an "all" component. HQueue also doesn't show any error while the render is taking place. I've tried rendering the same Mantra node locally, and all the components do appear. I have attached 2 screenshots showing the difference in the render passes. Any help would be greatly appreciated. Cheers, Ronald
  13. Hey guys, I am having a heck of a time getting my farm set up for my school finals. I have three machines I need to run sliced fluid sims on. Right now I am just trying to get the main workstation to complete an HQueue job... So I have the HQueue client and server installed on this machine on the C drive. Both services run fine under another admin account I created called HQueue. I have used the shelf tool for creating a sliced sim (a pyro sim in this case), as recommended per the HQueue documentation. The shared folder with a Houdini install in it is on another disk called F in the workstation; the specific folder is shared as hq to all other machines and is mounted on all of them as H:. I have no problems accessing it from any machine. The server's .ini file has been set up with the server IP of the PC it is running on (the workstation), and these lines have been set:

      hqserver.sharedNetwork.path.windows = \\KYLE-PC\hq
      hqserver.sharedNetwork.mount.windows = H:

  Everything else in there is vanilla. The problem seems to be in writing the slice files to the mounted H: drive, as I get this error when I submit the Houdini file I have attached:

      hqlib.callFunctionWithHQParms(hqlib.simulateSlice)
      File "\\KYLE-PC\hq\houdini_distros\hfs.windows-x86_64\houdini\scripts\hqueue\hqlib.py", line 1864, in callFunctionWithHQParms
      return function(**kwargs)
      File "\\KYLE-PC\hq\houdini_distros\hfs.windows-x86_64\houdini\scripts\hqueue\hqlib.py", line 1532, in simulateSlice
      _renderRop(rop)
      File "\\KYLE-PC\hq\houdini_distros\hfs.windows-x86_64\houdini\scripts\hqueue\hqlib.py", line 1869, in _renderRop
      rop.render(*args, **kwargs)
      File "//KYLE-PC/hq/houdini_distros/hfs.windows-x86_64/houdini/python2.7libs\hou.py", line 32411, in render
      return _hou.RopNode_render(*args, **kwargs)
      hou.OperationFailed: The attempted operation failed.
      Error: Failed to save output to file "H:/projects/geo/untitled.loadslices.1.bgeo.sc".
      Error: Failed to save output to file "H:/projects/geo/untitled.loadslices.2.bgeo.sc".

  I am really not sure why this is happening, as I think I have all the relevant permissions. Any suggestions, peeps? -Kyle Here is the diagnostics output too:

      Diagnostic Information for Job 75:
      ==================================
      Job Name: Simulate -> HIP: untitled.hip ROP: save_slices (Slice 0)
      Submitted By: Kyle
      Job ID: 75
      Parent Job ID(s): 73, 76
      Number of Clients Assigned: 1
      Job Status: failed
      Report Generated On: December 12, 2015 01:52:08 AM

      Job Properties:
      ===============
      Description: None
      Tries Left: 0
      Priority: 5
      Minimum Number of Hosts: 1
      Maximum Number of Hosts: 1
      Tags: single
      Queue Time: December 12, 2015 01:15:04 AM
      Runnable Time: December 12, 2015 01:46:19 AM
      Command Start Time: December 12, 2015 01:50:04 AM
      Command End Time:
      Start Time: December 12, 2015 01:50:04 AM
      End Time: December 12, 2015 01:50:18 AM
      Time to Complete: 13s
      Time in Queue: 35m 00s

      Job Environment Variables:
      ==========================
      HQCOMMANDS: {
        "hythonCommandsLinux": "export HOUDINI_PYTHON_VERSION=2.7 && export HFS=\"$HQROOT/houdini_distros/hfs.$HQCLIENTARCH\" && cd $HFS && source ./houdini_setup && hython -u",
        "pythonCommandsMacosx": "export HFS=\"$HQROOT/houdini_distros/hfs.$HQCLIENTARCH\" && $HFS/Frameworks/Python.framework/Versions/2.7/bin/python",
        "pythonCommandsLinux": "export HFS=\"$HQROOT/houdini_distros/hfs.$HQCLIENTARCH\" && $HFS/python/bin/python2.7",
        "pythonCommandsWindows": "(set HFS=!HQROOT!\\houdini_distros\\hfs.!HQCLIENTARCH!) && \"!HFS!\\python27\\python2.7.exe\"",
        "mantraCommandsLinux": "export HFS=\"$HQROOT/houdini_distros/hfs.$HQCLIENTARCH\" && cd $HFS && source ./houdini_setup && $HFS/python/bin/python2.7 $HFS/houdini/scripts/hqueue/hq_mantra.py",
        "mantraCommandsMacosx": "export HFS=\"$HQROOT/houdini_distros/hfs.$HQCLIENTARCH\" && cd $HFS && source ./houdini_setup && $HFS/Frameworks/Python.framework/Versions/2.7/bin/python $HFS/houdini/scripts/hqueue/hq_mantra.py",
        "hythonCommandsMacosx": "export HOUDINI_PYTHON_VERSION=2.7 && export HFS=\"$HQROOT/houdini_distros/hfs.$HQCLIENTARCH\" && cd $HFS && source ./houdini_setup && hython -u",
        "hythonCommandsWindows": "(set HOUDINI_PYTHON_VERSION=2.7) && (set HFS=!HQROOT!\\houdini_distros\\hfs.!HQCLIENTARCH!) && (set PATH=!HQROOT!\\houdini_distros\\hfs.!HQCLIENTARCH!\\bin;!PATH!) && \"!HFS!\\bin\\hython\" -u",
        "mantraCommandsWindows": "(set HFS=!HQROOT!\\houdini_distros\\hfs.!HQCLIENTARCH!) && \"!HFS!\\python27\\python2.7.exe\" \"!HFS!\\houdini\\scripts\\hqueue\\hq_mantra.py\""
      }
      HQPARMS: {
        "controls_node": "/obj/pyro_sim/DISTRIBUTE_pyro_CONTROLS",
        "dirs_to_create": [ "$HIP/geo" ],
        "tracker_port": 54534,
        "hip_file": "$HQROOT/projects/untitled.hip",
        "output_driver": "/obj/distribute_pyro/save_slices",
        "enable_perf_mon": 0,
        "slice_divs": [ 1, 1, 1 ],
        "tracker_host": "KYLE-PC",
        "slice_num": 0,
        "slice_type": "volume"
      }
      HQHOSTS: KYLE-PC

      Job Conditions and Requirements:
      ================================
      hostname any KYLE-PC

      Executed Client Job Commands:
      =============================
      Windows Command: (set HOUDINI_PYTHON_VERSION=2.7) && (set HFS=!HQROOT!\houdini_distros\hfs.!HQCLIENTARCH!) && (set PATH=!HQROOT!\houdini_distros\hfs.!HQCLIENTARCH!\bin;!PATH!) && "!HFS!\bin\hython" -u "!HFS!\houdini\scripts\hqueue\hq_sim_slice.py"

      Client Machine Specification (KYLE-PC):
      =======================================
      DNS Name: KYLE-PC
      Client ID: 1
      Operating System: windows
      Architecture: x86_64
      Number of CPUs: 24
      CPU Speed: 4000.0
      Memory: 25156780

      Client Machine Configuration File Contents (KYLE-PC):
      =====================================================
      [main]
      server = KYLE-PC
      port = 5000
      sharedNetwork.mount = \\KYLE-PC\hq

      [job_environment]

      HQueue Server Configuration File Contents:
      ==========================================
      #
      # hqserver - Pylons configuration
      #
      # The %(here)s variable will be replaced with the parent directory of this file
      #
      [DEFAULT]
      email_to = you@yourdomain.com
      smtp_server = localhost
      error_email_from = paste@localhost

      [server:main]
      use = egg:Paste#http
      host = 0.0.0.0
      port = 5000

      [app:main]
      # The shared network.
      hqserver.sharedNetwork.host = KYLE-PC
      hqserver.sharedNetwork.path.linux = %(here)s/shared
      hqserver.sharedNetwork.path.windows = \\KYLE-PC\hq
      hqserver.sharedNetwork.path.macosx = %(here)s/HQShared
      hqserver.sharedNetwork.mount.linux = /mnt/hq
      hqserver.sharedNetwork.mount.windows = H:
      hqserver.sharedNetwork.mount.macosx = /Volumes/HQShared

      # Server port number.
      hqserver.port = 5000

      # Where to save job output
      job_logs_dir = %(here)s/job_logs

      # Specify the database for SQLAlchemy to use
      sqlalchemy.default.url = sqlite:///%(here)s/db/hqserver.db

      # This is required if using mysql
      sqlalchemy.default.pool_recycle = 3600

      # This will force a thread to reuse connections.
      sqlalchemy.default.strategy = threadlocal

      #########################################################################
      # Uncomment these configuration values if you are using a MySQL database.
      #########################################################################

      # The maximum number of database connections available in the
      # connection pool. If you see "QueuePool limit of size" messages
      # in the errors.log, then you should increase the value of pool_size.
      # This is typically done for farms with a large number of client machines.
      #sqlalchemy.default.pool_size = 30
      #sqlalchemy.default.max_overflow = 20

      # Where to publish myself in avahi
      # hqnode will use this to connect
      publish_url = http://hostname.domain.com:5000

      # How many minutes before a client is considered inactive
      hqserver.activeTimeout = 3

      # How many days before jobs are deleted
      hqserver.expireJobsDays = 10

      # The maximum number of jobs (under the same root parent job) that can fail on
      # a single client before a condition is dynamically added to that root parent
      # job (and recursively all its children) that excludes the client from ever
      # running this job/these jobs again. This value should be a positive integer
      # greater than zero. To disable this feature, set this value to zero.
      hqserver.maxFailsAllowed = 5

      # The priority that the 'upgrade' job gets.
      hqserver.upgradePriority = 100

      use = egg:hqserver
      full_stack = True
      cache_dir = %(here)s/data
      beaker.session.key = hqserver
      beaker.session.secret = somesecret
      app_instance_uuid = {fa64a6d1-ae3f-43c1-8141-9c29fdd9d418}

      # Logging Setup
      [loggers]
      keys = root

      [handlers]
      keys = console

      [formatters]
      keys = generic

      [logger_root]
      # Change to "level = DEBUG" to see debug messages in the log.
      level = INFO
      handlers = console

      # This handler backs up the log when it reaches 10Mb
      # and keeps at most 5 backup copies.
      [handler_console]
      class = handlers.RotatingFileHandler
      args = ("hqserver.log", "a", 10485760, 5)
      level = NOTSET
      formatter = generic

      [formatter_generic]
      format = %(asctime)s %(levelname)-5.5s [%(name)s] %(message)s
      datefmt = %B %d, %Y %H:%M:%S

      Job Status Log:
      ===============
      December 12, 2015 01:15:04 AM: Assigned to KYLE-PC (master)
      December 12, 2015 01:15:10 AM: setting status to running
      December 12, 2015 01:15:23 AM: setting status to failed
      December 12, 2015 01:18:28 AM: Rescheduling...
      December 12, 2015 01:18:28 AM: setting status to runnable
      December 12, 2015 01:18:28 AM: Assigned to KYLE-PC (master)
      December 12, 2015 01:18:35 AM: setting status to running
      December 12, 2015 01:18:47 AM: setting status to failed
      December 12, 2015 01:23:18 AM: setting status to runnable
      December 12, 2015 01:23:19 AM: Assigned to KYLE-PC (master)
      December 12, 2015 01:23:20 AM: setting status to running
      December 12, 2015 01:23:33 AM: setting status to failed
      December 12, 2015 01:29:44 AM: setting status to runnable
      December 12, 2015 01:29:44 AM: Assigned to KYLE-PC (master)
      December 12, 2015 01:29:44 AM: setting status to running
      December 12, 2015 01:29:57 AM: setting status to failed
      December 12, 2015 01:34:17 AM: setting status to runnable
      December 12, 2015 01:34:17 AM: Assigned to KYLE-PC (master)
      December 12, 2015 01:38:17 AM: setting status to abandoned
      December 12, 2015 01:46:19 AM: setting status to runnable
      December 12, 2015 01:50:04 AM: Assigned to KYLE-PC (master)
      December 12, 2015 01:50:04 AM: setting status to running
      December 12, 2015 01:50:18 AM: setting status to failed

  UPDATE: I just did a system restart to see if it would help, and instead of the regular write error I received this:

      0x00000000577CDE78 (0x000000000000002B 0x000000AD63AEF840 0x000000AD453FEEB0 0x0000000000000000), ?thread_sleep_v3@internal@tbb@@YAXAEBVinterval_t@tick_count@2@@Z() + 0x8C8 bytes(s)
      0x00000000577CDD2B (0x000000AD45381F90 0x000000AD45381F90 0x000000AD453FEEB0 0x0000000000000000), ?thread_sleep_v3@internal@tbb@@YAXAEBVinterval_t@tick_count@2@@Z() + 0x77B bytes(s)
      0x00007FFF29E43FEF (0x00007FFF29EE1DB0 0x0000000000000000 0x0000000000000000 0x0000000000000000), _beginthreadex() + 0x107 bytes(s)
      0x00007FFF29E44196 (0x00007FFF29E44094 0x000000AD453FEEB0 0x0000000000000000 0x0000000000000000), _endthreadex() + 0x192 bytes(s)
      0x00007FFF36582D92 (0x00007FFF36582D70 0x0000000000000000 0x0000000000000000 0x0000000000000000), BaseThreadInitThunk() + 0x22 bytes(s)
      0x00007FFF36C29F64 (0x0000000000000000 0x0000000000000000 0x0000000000000000 0x0000000000000000), RtlUserThreadStart() + 0x34 bytes(s)

  After resubmission, it went back to the usual error mentioned above. untitled.hip
  14. Hi, we're looking into HQueue and noticed that HQueue clients don't inherit the environment or the OTL paths, i.e. OTLs stored in ~/houdini14.0/otl are not made available to the client. (I'm having a look at the HQueue docs but can't find anything regarding this. Also, manually adding the otl_scan_path variable on the HQueue ROP and submitting a job doesn't seem to pass the variable to the client on the farm.) Is there a way to configure HQueue so that clients would launch through a specific wrapper, or simply inherit the current environment including the OTL scan path? Thanks for your help, L
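One avenue worth testing (an assumption on my part, based on the [job_environment] section that appears in HQueue client configuration files; check the docs for your build): set the OTL scan path in each client's config so every job inherits it. Something like:

```ini
; hqnode.ini on each client -- the path here is an example only.
[job_environment]
HOUDINI_OTLSCAN_PATH = /mnt/hq/otls:&
```

The trailing `&` keeps Houdini's default scan path; on Windows clients the separator would be `;` instead of `:`.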
  15. Hi everyone, is there any workflow to simulate RBD on multiple machines? Since slicing works only with FLIP, I've tried some things, like using the HQ Sim and HQ Render nodes, with no luck. Thanks in advance.
  16. Hi everyone, HQueue is set up at my place with 11 workstations in the simulation group. As this one is an RBD sim, I set the partitioning type to None. Everything works fine, but it uses only one worker when simulating; there should be 11 instead. In the properties it says min and max hosts is "1". Attached are the settings of my HQ Simulation node and the properties of the job on HQ. Does anyone know how to get the other 10 workers to take part in the job? Thank you very much!
  17. I know this has been asked a few times already, but I've not found a very conclusive answer. At work we currently have one Houdini FX license (locked to my machine), and we would like to test out using some of the other machines for rendering and simulation in the most cost-efficient manner. Am I right in thinking we can use my machine as a workstation license and also as the HQueue server machine? And then we would need Houdini Engine licenses for the render farm machines for rendering and simulation? But the Houdini Engine plugin that comes with Maya is not the same as a Houdini Engine license? And we already have unlimited Mantra nodes that came with my FX license, so we can just install them on the render farm machines? Is that correct? Have I missed anything? Cheers
  18. Hey guys, I'm trying to use HQueue for FLIP simulation. I believe I set up the machines correctly, as the HQueue server distributes render jobs successfully and those jobs complete fine. The problem I have is with distributing simulation. I followed the directions from the HQueue help, nothing special. However, the work stops all of a sudden at a certain frame. I tested both volume slicing and particle slicing for different kinds of fluid solutions, and the same problem comes up around frame 10: no more progress. But the render manager shows "running", which would mean HQueue is working fine. I'm guessing this is definitely an error in spite of there being no error messages, because HQueue stops anyway. There's no clue, and I'm totally stuck at this point. Has anybody experienced this problem, or can you give me a hint for solving it? This is driving me nuts. OS: Windows 7 Professional x64 SP1. Houdini: 14.0.201.13. The server and clients are all the same. hq_flip_02.hip
  19. Hey, I have a running render farm using HQueue, and so far it's working fine (all Win7 clients), but I'd like to switch to Linux now and can't get the Linux clients to pick up jobs. They all have the needed shares mounted and are listed under clients. They even get the maintenance jobs, but no render jobs submitted from a Win7 workstation. I've tried CentOS 6.5 and Mint 17.1. Any ideas?
  20. Here is a recording of how to get HQueue up and running on two computers. It can be really frustrating trying to set up HQueue, but I've found it's much simpler to set up on Linux (similar distros help, of course), where you only have to install one or two packages and then let the Houdini installer do the rest. It's probably most helpful for artists/students who might have one or two computers they can use as workers to sim/render, but it might be nice for anyone who gets stuck! I think a Windows/Mac version would be awesome, if anyone would care to create one; I haven't had any experience using HQ on anything but Linux so far! Good luck!
  21. Greetings, I've had some downtime and decided I'd attempt to get HQueue to work on the couple of machines we have here. Nope. Somehow I can get the server set up on a machine, and the client on that same machine will register in the web interface, but none of the other machines will register. All Windows 7 machines, all clean installs of the HQueue client and server. Can someone make an HQueue setup guide for idiots? I've read through the official instructions; they don't really offer anything other than the bare basics of what to install. Example of my setup now: Server-PC, 192.168.0.1, running the server and a client; that client seems to work fine. Port 5000. Slave01-PC, 192.168.0.2, running just the client, installed and pointed at "Server-PC" in the .ini. What am I doing wrong? Is there anything else I HAVE to have installed? Am I missing a step? The guide makes it seem as though that's all I need to do. Both systems have working versions of Houdini on them already.
  22. Hi, does anybody know how to submit "A Simple Parent-Child Example": http://www.sidefx.co...ationships.html Or how to submit a dependency chain out of Houdini, like Simulation --> Meshing --> Rendering? It's simple in Houdini on one machine, but how do you take it over to HQueue? Cheers, nap
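From what the HQueue docs describe, jobs are submitted as nested dicts over XML-RPC, and child jobs must finish before their parent runs, so a Sim -> Mesh -> Render chain nests with Render outermost. A hedged sketch (the server address and commands are placeholders; double-check the job-spec keys against your HQueue version's docs):

```python
try:
    import xmlrpc.client as xmlrpclib  # Python 3 name
except ImportError:
    import xmlrpclib  # Python 2, as shipped with hython of this era

def chain_spec():
    """Build a Sim -> Mesh -> Render dependency chain as one job spec.

    HQueue runs children before their parent, so the final stage
    (render) is the outermost job. Commands are placeholders.
    """
    sim = {"name": "Simulate", "shell": "bash",
           "command": "hython $HQROOT/scripts/sim.py"}
    mesh = {"name": "Mesh", "shell": "bash",
            "command": "hython $HQROOT/scripts/mesh.py",
            "children": [sim]}
    render = {"name": "Render", "shell": "bash",
              "command": "hython $HQROOT/scripts/render.py",
              "children": [mesh]}
    return render

# Submission needs a reachable HQueue server (host below is a placeholder):
# hq = xmlrpclib.ServerProxy("http://hqserver:5000")
# job_ids = hq.newjob(chain_spec())
```

Nesting the specs this way means one submission carries the whole chain, rather than polling for one stage to finish before submitting the next.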
  23. I was wondering if any Houdini gurus would be able to help me with a render farm problem we are having. Each farm client (5 total) has HServer installed, access to the HQueue license server, and the shared drive with the Houdini distributables. Additionally, we have 100 Mantra rendering licenses and 5 batch licenses. When I submit a job and leave only one client enabled (it doesn't matter which), the render will be successful. If I include all clients, I receive this error and the job fails:

      mantra: Network Error[RWCWRK7004374] HServer was unable to start mantra
      mantra: Network Error[RWCWRK7004876] HServer was unable to start mantra
      mantra: Network Error[RWCWRK7004560] HServer was unable to start mantra
      mantra: Network Error[RWCWRK7004369] HServer was unable to start mantra
      mantra: No remote hosts are available for rendering

  Does anyone have any ideas as to what might be causing this? Thanks
  24. The new feature in Houdini 12 for simulating pyro clusters is pretty cool and very efficient for the simulation. I'm trying to get the simulation caches back into Houdini (to view and render the results), but I don't see a straightforward way of doing so. I used the "${CLUSTER}" variable in the name on the export ROP that went to HQueue, but that only works for the export, not for reading the simulation back in. Am I missing a straightforward way of reading the simulated pyro clusters back in, or is it a one-way street where you use the "${CLUSTER}" variable and then just make a bunch of File nodes to get them back in?
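I'm not aware of a built-in readback either, but one scriptable approach (a sketch, under the assumption that your caches follow a ${CLUSTER}-numbered naming pattern like the export above) is to expand the pattern yourself and build the File nodes in hython:

```python
# Expand a ${CLUSTER}/${F}-style cache pattern into one path per cluster.
# The pattern tokens are assumptions; match them to your export ROP's naming.

def cluster_paths(pattern, num_clusters, frame):
    return [pattern.replace("${CLUSTER}", str(c)).replace("${F}", str(frame))
            for c in range(num_clusters)]

# Inside hython you might then wire these into a Merge SOP (untested sketch):
# import hou
# geo = hou.node("/obj").createNode("geo", "pyro_clusters")
# merge = geo.createNode("merge")
# for i, path in enumerate(cluster_paths("pyro_${CLUSTER}.${F}.bgeo", 4, 1)):
#     f = geo.createNode("file")
#     f.parm("file").set(path)
#     merge.setInput(i, f)
```

Automating the File-node creation keeps the "bunch of file nodes" approach from becoming hand work when the cluster count changes.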