Showing results for tags 'hqueue'.
Found 44 results

  1. Hi guys, I'm trying to configure an HQueue server and clients on several machines and it's a nightmare. When I submit a simulation the job starts and I see my (for now) two clients, but every time the job fails! I don't even know why, and when I look at the report I can't tell where the error is. I think it's a path problem: we have a NAS for the sims and all the clients can see it (we've used Backburner for ten years and it works), but in hqserver.ini and hqclient.ini I don't know what to put for the shared path, etc. If someone can help, it would be nice. Regards. job_2_diagnostic_information (7).txt
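
    For reference, a minimal sketch of the shared-path block I mean in hqserver.ini (the key names are from memory, so check them against the HQueue docs for your build; the NAS paths are placeholders):

        [main]
        hqserver.sharedNetwork.host = mynas
        hqserver.sharedNetwork.path.linux = /mnt/hq
        hqserver.sharedNetwork.path.windows = //mynas/hq
        hqserver.sharedNetwork.path.macosx = /Volumes/hq

    The idea is that every client has to reach the same share through the path listed for its platform; if a client can't resolve it, jobs tend to fail right after submission.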
  2. Hi all! Has anyone had a Houdini 18 HQueue issue? In my case, after installing Houdini 18 I decided to upgrade HQueue from 17.5 to 18. The issue: every time I submit a job to HQueue, the IFD generation starts normally, but once the last frame's IFD is done the "master IFD generation job" shows a FAILED status. After that the clients assigned to render kick in and the machines do seem to be rendering, yet at the end of each frame its status also turns to FAILED. When I check the render folder in the output directory, though, all the renders are done and look fine. Has anyone encountered this kind of problem? Downgrading back to 17.5 works as it should; upgrading brings the issue back. I'd be glad of any help.
  3. hqueue add client

    I have the HQueue server and client installed and working on one machine, and I'm trying to add a second computer on my home network as a client. When installing the client on the second computer I need to enter the server name and port number. I'm not savvy with networking and don't know how to find this information. The Houdini docs say to set this field to the machine hosting the HQueue server. I didn't know what to enter, so I entered the IPv4 address of the first machine (where I installed HQueue) and used the same port number, 5000. This didn't succeed in adding the client to HQueue. Help would be appreciated; is there a way I can look up the correct server name? Thank you.
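
    One sanity check that doesn't require touching the installer (the address below is a placeholder): the HQueue server's web interface listens on the same host/port the client installer asks for, port 5000 by default, so from the second computer you can test the pair first:

        ping first-machine-hostname
        # then open http://192.168.1.10:5000 in a browser on the second computer;
        # if the HQueue page loads, that host/port is exactly what the client install wants

    If the page loads with the IP but not the hostname, enter the IP (or fix name resolution) in the client's server field.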
  4. Hi! I've got an HQueue farm set up on Linux machines, and I use it only for simulations. Sometimes I get hip files for simulation that contain various operators that aren't needed for the sim (custom ROPs, materials, etc.). But when a client loads such a hip file, it warns that it cannot load a specific OTL and refuses to load the hip. Say I have a scene with a geo node that references some Redshift material in its 'material' parameter; I don't want to render it, I just want to simulate and write geometry to disk. But HQueue refuses because it can't find the Redshift OTLs. Is there a way to force Houdini to ignore things like this? "Can't find the OTL? Never mind, just simulate; you don't need Redshift to simulate a DOP network." Update: when I connect to a client via ssh and start hbatch in a terminal, it works: it loads the file, complains that it can't recognize the node types, but the simulation runs when I start it manually (render -V rop_node). Why can't HQueue do the same instead of failing the job?
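
    A workaround sketch along the lines of what hbatch is already doing for you (the hou calls exist, but treat the approach as an assumption; you'd run this via hython or a command job instead of the stock sim script):

        # sim_only.py -- run as: hython sim_only.py /path/to/scene.hip /obj/dopnet/rop_geometry1
        import sys
        import hou

        hip_path, rop_path = sys.argv[1], sys.argv[2]
        # ignore_load_warnings keeps missing-OTL warnings from aborting the load
        hou.hipFile.load(hip_path, suppress_save_prompt=True, ignore_load_warnings=True)
        hou.node(rop_path).render(verbose=True)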
  5. Hi! I'm trying to render a job with a frame increment of 0.25, but HQueue renders only integer frames. Rendering on the local machine from Houdini works fine. I use the $N variable to number the rendered images. Is it possible to distribute fractional frames to an HQueue farm?
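
    For comparison, this is what works locally and could be wrapped in a command job (a sketch, not a confirmed HQueue fix; the hip and node paths are placeholders): hou.RopNode.render() accepts an increment as the third element of frame_range:

        import hou

        hou.hipFile.load("scene.hip")
        # (start, end, increment) -- 0.25 gives quarter-frame steps
        hou.node("/out/mantra1").render(frame_range=(1, 10, 0.25), verbose=True)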
  6. Hi there everyone. Is there a way to use HQueue with the Apprentice version of Houdini? When I had Indie I used to submit through an output driver, but those aren't supported by the Apprentice version. I'm also having random sims jump straight to 100% after the first frame, a couple of frames in, or sometimes halfway through, and I can't check what is causing this.
  7. Howdy there! Having an issue here. I'm caching geometry to be prepped for custom FLIP sims, and I need sub-frames, since FLIP needs sub-frame samples for the VDB volumes and geometry caches. Caching on my LOCAL system works, but when I send the job to the server through HQueue it only writes whole frames and doesn't include any sub-frames. I've checked all the available options, but still nothing. Screenshots are attached; these are the two things I've tried in HQueue. Any ideas? Thanks.
  8. Hey! I'm in a situation where I have a huge amount of points cached on disk and I want to render those points. What I do in the scene is read the bgeo.sc caches, merge them all together, trail the points to calculate velocity, apply a pscale, assign a shader and render. If I run it on the farm with IFDs it takes a lot of time to generate the IFDs, and I'm trying to reduce that. I noticed that the IFD itself is tiny; the big part is a storage folder under the IFD folder where there's basically a huge bgeo.sc file per frame, which I suppose is the geometry I'm about to render. I wonder if all of this isn't redundant, since those operations could presumably be done at render time. I tried setting the File SOP to "packed disk primitive", but then it seems I cannot apply pscale: all the points render at pscale 1.
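
    One thing that might explain it (an assumption on my part): once the geometry is packed, a point attribute like pscale never reaches the geometry inside the packed prim. A sketch that bakes pscale into each packed prim's transform instead, in a Python SOP after the File SOP (the intrinsic and attribute names are as I remember them; verify before trusting this):

        node = hou.pwd()
        geo = node.geometry()
        for prim in geo.prims():
            pt = prim.vertices()[0].point()        # a packed prim carries a single point
            s = pt.attribValue("pscale")           # assumes pscale was set on these points
            xform = hou.Matrix3(prim.intrinsicValue("packedlocaltransform"))
            # scale the packed prim's local transform by pscale
            prim.setIntrinsicValue("packedlocaltransform", (xform * hou.Matrix3(s)).asTuple())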
  9. HQUEUE COPY PROJECT ISSUE

    Hello, I've set up an HQueue farm across several machines with a shared drive. Almost everything seems to be working now (I had lots of intermittent issues with jobs not making it to HQueue and failing at 0% progress; I think perhaps there was a license conflict). Anyway, the current problem is that I am unable to overwrite dependencies in the shared folder when I resubmit the hqrender node. As far as I can tell there are no permission issues and the files are not in use: I can read/delete/edit these files without a problem from any of my client/server machines (and they are fine/not corrupt in the local directory). Any thoughts? Log below.

        OUTPUT LOG
        00:00:00 144MB | [rop_operators] Registering operators ...
        00:00:00 145MB | [rop_operators] operator registration done.
        00:00:00 170MB | [vop_shaders] Registering shaders ...
        00:00:00 172MB | [vop_shaders] shader registration done.
        00:00:00 172MB | [htoa_op] End registration.
        00:00:00 172MB |
        00:00:00 172MB | releasing resources
        00:00:00 172MB | Arnold shutdown
        Loading .hip file \\192.168.0.123\Cache/projects/testSubmit-2_deletenoChace_2.hip.
        PROGRESS: 0%
        ERROR: The attempted operation failed.
        Error: Unable to save geometry for: /obj/sphere1/file1
        SOP Error: Unable to read file "//192.168.0.123/Cache/projects/geo/spinning_1.23.bgeo.sc".
        GeometryIO[hjson]: Unable to open file '//192.168.0.123/Cache/projects/geo/spinning_1.23.bgeo.sc'
        Backend IO

    Thanks in advance
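
    A quick check that might narrow it down (purely an assumption that this is about what the client's service account can see, not about file corruption): run this in hython on one of the failing clients and see whether the path the log complains about is visible and writable to that account:

        import os

        p = r"\\192.168.0.123\Cache\projects\geo\spinning_1.23.bgeo.sc"
        print(os.path.exists(p), os.access(p, os.W_OK))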
  10. Remove Hqueue

    Hello, I'd like to remove HQueue completely from macOS; deleting hqserver, hqclient and the files in LaunchDaemons isn't enough. When I try to reinstall it I get an internal server error, and I suspect this happens because there are still parts of HQueue remaining on the hard drive. There must be some terminal commands, but I can't find anything in the docs about it and didn't have more luck with Google. Thanks
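
    Not a full answer, but a starting point for hunting down leftovers from the terminal (the plist name is a placeholder; check what the installer actually created in /Library/LaunchDaemons):

        sudo launchctl list | grep -i hq                               # any HQueue daemons still registered?
        sudo launchctl unload /Library/LaunchDaemons/<hqueue>.plist    # unload before deleting the files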
  11. HQUEUE Job Fails

    Hello, I'm stuck. I think I managed to install and configure HQueue correctly. This is what I do: plug an HQrender node after my Mantra node, then Submit Job, then open HQueue. The job is submitted, but after about 3 seconds it fails, and the worst part is I don't know how to check what's wrong. If I click on the job ID number it opens the job details window, but I can't see any "output log" to download. If I go to the Clients page I can see that my client machine is recognized, with availability set to Any @ anytime. The server machine is a Mac Pro (3.7 GHz quad-core Xeon E5); the client is an iMac (3.2 GHz Intel Core i3). Any idea where to start to resolve the problem? Thanks!
  12. Hi! I'm trying to distribute a pyro simulation across several computers using HQueue, and whatever I do I cannot get proper results. I've attached a sample scene; everything is done with shelf tools only (no custom multisolvers, etc.). As you can see, the container divided by 2 along the X axis looks fine, but the container divided the other way looks wrong, and I can't figure out why: I just change the slice divisions and nothing more. distro.hip
  13. Greetings everyone. I'm struggling to set up network rendering with Houdini 16.5. I use the "use existing IFD" option and, unfortunately, cannot use "render current HIP file" because my hqclients run headless (non-GUI). Submitted jobs fail with the following in the log:

        Warning: prepareIFDRender() got an unexpected parameter 'enable_checkpoints'.
        Traceback (most recent call last):
          File "/home/HQShared/houdini_distros/hfs.linux-x86_64/houdini/scripts/hqueue/hq_prepare_ifd_render.py", line 27, in <module>
            hqlib.callFunctionWithHQParms(hqlib.prepareIFDRender)
          File "/home/HQShared/houdini_distros/hfs16.5.405-linux-x86_64/houdini/scripts/hqueue/hqlib.py", line 1482, in callFunctionWithHQParms
            function.__name__, parm_name))
        TypeError: prepareIFDRender() takes the parameter 'cross_platform_ifd' (not given)

    Could somebody please explain how to set 'cross_platform_ifd' correctly? Thanks in advance.
  14. Hello, I can send the project to the HQueue server and it runs, but it cannot render; the attached files contain the error information.
  15. I would like to ask a question about Houdini HQueue. My shared directory is on my C drive, which has only a bit more than 100 GB free. When I run a large simulation the cache is also very large, and the remaining space isn't enough. How can the other computers read my other disk? Do I need to reinstall the HQueue server, or is there another solution besides buying a larger hard drive?
  16. HQueue Problem

    Hi, when I submit my job, it fails on the client every time with the message "The network name cannot be found" or "The system cannot find the path specified". Has anyone had the same problem? Thanks.
  17. Hello, we built an asset to cache and version our sims/caches, and I extended it to be able to use the render farm to sim things out. To do this I added a ROP network inside the asset with an HQueue Simulation node, and I use the topmost interface parameters to press the Submit Job button on that HQueue Simulation node down below. However, unless I do "Allow Editing of Contents" on my HDA, it fails on the farm with the following:

        ImportError: No module named sgtk
        Traceback (most recent call last):
          File "/our-hou-path/scripts/hqueue/hq_run_sim_without_slices.py", line 4, in <module>
            hqlib.callFunctionWithHQParms(hqlib.runSimulationWithoutSlices)
          File "/our-hou-path/linux/houdini/scripts/hqueue/hqlib.py", line 1862, in callFunctionWithHQParms
            return function(**kwargs)
          File "/our-hou-path/linux/houdini/scripts/hqueue/hqlib.py", line 1575, in runSimulationWithoutSlices
            alf_prog_parm.set(1)
          File "/hou-shared-path/shared/houdini_distros/hfs.linux-x86_64/houdini/python2.7libs/hou.py", line 34294, in set
            return _hou.Parm_set(*args)
        hou.PermissionError: Failed to modify node or parameter because of a permission error. Possible causes include locked assets, takes, product permissions or user specified permissions

    It seems that unless I unlock the asset, the farm job can't modify the parameters inside it. Here's how I linked the interface button to the submit button. Thanks for your input.
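
    A workaround I'm considering as a pre-submit step (untested sketch; the node path and submit-parm name below are placeholders for our setup, and allowEditingOfContents() is the scripted equivalent of RMB > Allow Editing of Contents):

        import hou

        asset = hou.node("/obj/cache_asset1")        # placeholder: our HDA instance
        asset.allowEditingOfContents()               # unlock so hqlib can set its progress parm
        asset.node("ropnet1/hqueue_sim1").parm("hq_submit").pressButton()  # parm name assumed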
  18. IFD Workflow Basics

    I put together a simple intro to writing/rendering IFDs, using Packed Disk Prims to make better IFDs, and using HQueue to efficiently generate IFDs for rendering with Mantra. https://vimeo.com/223443000
  19. Hello. Is it possible to set up a distributed sim of a river coming down some kind of slope? Every example I can find takes an already-filled tank and goes from there. The help even has a note saying "this option is not needed for flowing rivers" (which might refer specifically to the distributed pressure setup, but I'm not sure); why is that? From my limited testing with the tank-like setup, distributing the sim seems to speed it up considerably. Thanks
  20. Hi all, I have a couple of small problems tuning an HQueue farm. Setup: workstations are Linux- and Windows-based; the farm is mixed (macOS/Linux/Windows); hq_shared storage is an NFS/SMB NAS. macOS and Linux are configured with the autofs service, so projects on the NFS share look like "/net/servername/raid/nfs_sharename/Projects/XXX" for both macOS and Linux render nodes and workstations. So I can render/simulate/whatever on Linux/macOS nodes with the current project file, no need to copy the hip to common shared storage; I just use "render current hip" in the HQ render node.

    But problems arise when I try to use Windows nodes and a Windows workstation. The shared project for the Windows machines looks like "//servername/raid.sharename/Projects/XXX", where "Projects" is an SMB share mounted as drive "P:/" on the nodes and workstations. When I send a hip to render from a Linux/macOS workstation to Windows nodes using "render current hip", the nodes show the error "cannot read hip file /net/servername/raid/nfs_sharename/Projects/XXX/scenes/file.hip"; it seems the HQueue server does not translate the Linux/macOS path to the Windows path. Same when using "render target hip". When I use "copy to shared storage" and the HQ render node copies the hip and all needed cache/image files to "$HQROOT/projects", the Windows stations render fine, but then distributed simulation fails (an error about saving the cache file to a *nix path). Is it possible to use Windows nodes to render/simulate a hip from macOS/Linux workstations as the "current hip", assuming the hip is on the NAS and accessible from all stations/nodes? And how do I correctly set the HOUDINI_PATHMAP variable? The help/FAQ shows different formats, e.g. (a) HOUDINI_PATHMAP = "/unix/path1" "//win/path1", "/unix/path2" "//win/path2", (b) HOUDINI_PATHMAP = {"/unix/path1":"//win/path1", "/unix/path2":"//win/path2"}, (c) HOUDINI_PATHMAP = "/unix/path1":"//win/path1"; "/unix/path2":"//win/path2", and I've had no luck with any of them in houdini.env; they all give a format/variable error (see the sketch below).

    Second problem: when I render on Linux/macOS nodes and the render time for one frame is quite small (1-2 min), the nodes finish the task, write the image, then sit at 100% complete for 6-8 minutes. The HQueue client page for those stuck stations shows quite high load numbers (0.5 ... 2), so I think HQueue assumes those stations are busy (in fact they're not: 0% CPU usage from user tasks). Are there any timeout/idle-calculation settings or limits in HQueue to cure such delays? Thanks in advance.
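
    For what it's worth, the dictionary form (b) matches what I've seen in the SideFX docs for HOUDINI_PATHMAP; a sketch of the line in houdini.env on the Windows nodes (the share paths are placeholders for mine):

        HOUDINI_PATHMAP = {"/net/servername/raid/nfs_sharename/Projects":"P:/Projects"}

    Note the pairs go inside one set of braces, comma-separated, with no extra quoting around the whole value; if this still errors, it may be a version difference, so treat it as a starting point rather than a confirmed fix.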
  21. Hello, I'm trying to distribute my wave layer tank FLIP sim to about 8 computers, and it seems to cause this (see image below) when the slices are brought back in: the sections where they meet go crazy. (I'll add a file when I get a chance if the image doesn't instantly spark an obvious solution.)
  22. HQueue unable to open file

    Hi everyone, I'm facing a weird problem with HQueue. I have 3 client machines, the firewall is disabled on all of them, they have all the necessary authorizations and access, and the file I want to render can be opened on each of them with the exact same path (I've tried opening it on each of them and it works perfectly), but when I submit the job to HQueue it fails on all machines with this message:

        return function(**kwargs)
          File "C:\Program Files\Side Effects Software\Houdini 15.0.244.16\houdini\scripts\hqueue\hqlib.py", line 1489, in simulateSlice
            controls_dop = _loadHipFileAndGetNode(hip_file, controls_node)
          File "C:\Program Files\Side Effects Software\Houdini 15.0.244.16\houdini\scripts\hqueue\hqlib.py", line 331, in _loadHipFileAndGetNode
            _loadHipFile(hip_file, error_logger)
          File "C:\Program Files\Side Effects Software\Houdini 15.0.244.16\houdini\scripts\hqueue\hqlib.py", line 362, in _loadHipFile
            hou.hipFile.load(hip_file)
          File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.16/houdini/python2.7libs\houpythonportion.py", line 518, in decorator
            return func(*args, **kwargs)
          File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.16/houdini/python2.7libs\hou.py", line 22697, in load
            return _hou.hipFile_load(*args, **kwargs)
        hou.OperationFailed: The attempted operation failed.
        Unable to open file: X:/3d_Projects/MyProject/Houdini/Beach_4.hip

    My file is in a different shared folder than the one dedicated to HQueue, but I even tried submitting it from the HQueue shared folder and it doesn't work either. Does anyone have a clue what is going on? I am stuck.
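
    One thing worth testing (an assumption, not a diagnosis: the HQueue client typically runs as a service, and a service account may not see a per-user mapped drive like X:). From hython on one of the clients, compare the mapped-drive path with its UNC equivalent (the server name below is a placeholder):

        import os

        print(os.path.exists("X:/3d_Projects/MyProject/Houdini/Beach_4.hip"))
        print(os.path.exists(r"\\fileserver\3d_Projects\MyProject\Houdini\Beach_4.hip"))

    If the first prints False on the client while the UNC path works, pointing the hip and job paths at the UNC form may be enough.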
Network sims using Deadline or HQueue

    Hey guys, I got a new workstation this week, and I'm trying to set up my old system as a render/simulation node. I did get Mantra rendering to work using Deadline, but simulations aren't working for some reason; I'm getting this error (see attached image). I also tried HQueue, but all my jobs keep failing there. HQueue does see both my computers, but it fails immediately when I submit a job (either render or sim). I'm on Windows and I'm using Houdini Indie. Any ideas?
  24. Hi, I am wondering whether rendering with HtoA through HQueue is supported with the latest versions of Houdini and Arnold. If not, is there any way to do it with scripting, Alembic, etc.? Many thanks.
  25. HQueue on Linux on NAS!

    Hey peeps, I know this will sound weird, but has anyone ever run HQueue on CentOS on a NAS, such as a QNAP TS-253A-4G (see amazon link)? It contains a Celeron processor with 4 GB of RAM. I note from the description that the NAS runs Ubuntu out of the box, but I've seen CentOS mentioned as able to run within "Container Station", a QNAP app. My setup so far is two i7-4790s, both dual-booting Windows and CentOS with local Houdini installs. I would love for them both to become render nodes, with the HQueue server running on the NAS. I know most people say to just run the HQueue server on one of the nodes, but I want the nodes to do only rendering without any overhead from the server; also, all of the rendered output will be stored on the NAS, so it makes sense for it to run the server too. I don't know the minimum specs for the HQueue server, but if I upgraded the NAS RAM to 8 GB it might be enough to pull this off. If anyone has any info on the minimum processor/RAM for the HQueue server, please let me know, or if anyone has a QNAP NAS that can run Linux, please share your experiences of Linux on a NAS. Ciao, Albin HO