HQueue Path mapping in mixed env and timeout

Hi all,

I have a couple of small problems tuning my HQueue farm:

1. workstations: Linux and Windows based
2. farm: mixed (OSX/Linux/Windows)
3. hq_shared storage: NFS/SMB NAS

OSX and Linux are configured with the autofs service, so a project located on the NFS share looks like "/net/servername/raid/nfs_sharename/Projects/XXX" on both OSX and Linux rendernodes and workstations. So I can render/simulate/whatever on Linux/OSX nodes with the current project file, with no need to copy the hip to "Common Shared Storage"; I just use "render current hip" in the HQ_render node.

But problems arise when I try to use Windows nodes and a Windows workstation: the shared project path on the Windows side looks like "//servername/raid.sharename/Projects/XXX", and "Projects" is an SMB share mounted as drive "P:/" on the nodes and workstations.

So when I try to send a hip to render from a Linux/OSX workstation using "render current hip" to the Windows nodes, the nodes show the error "cannot read hip file /net/servername/raid/nfs_sharename/Projects/XXX/scenes/file.hip". It seems HQueue server does not translate the Linux/OSX path to the Windows path. The same happens when using "render target hip". But when I use "copy to shared storage" and the HQ_render node copies the hip and all needed cache/img files to "$HQROOT/projects", the Windows stations render OK. However, I still cannot do distributed simulation (it errors out saving the cache file to a *nix path).

Is it possible to use Windows nodes to render/simulate a hip from OSX/Linux workstations as "current hip", assuming the hip is located on the NAS and accessible from all stations/nodes?

How do I correctly set the HOUDINI_PATHMAP variable? The help/FAQ shows several different format examples, like:
a. HOUDINI_PATHMAP = "/unix/path1" "//win/path1", "/unix/path2" "//win/path2"
b. HOUDINI_PATHMAP = {"/unix/path1":"//win/path1", "/unix/path2":"//win/path2"}
c. HOUDINI_PATHMAP = "/unix/path1":"//win/path1"; "/unix/path2":"//win/path2"
and I've had no luck using any of them in houdini.env; they all give a format/variable error.
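For reference, variant (b), the dict-style syntax, is the one the SideFX docs describe for HOUDINI_PATHMAP, with forward slashes on both sides. A minimal sketch of what that could look like in houdini.env on the Windows machines, reusing the server/share names from the paths above (untested, adjust to your actual mounts):

```
# houdini.env on the Windows nodes/workstations (sketch, unverified)
# Maps the *nix autofs prefix to the UNC path of the same SMB share,
# so "/net/servername/raid/nfs_sharename/Projects/XXX/..." resolves on Windows.
HOUDINI_PATHMAP = {"/net/servername/raid/nfs_sharename":"//servername/raid.sharename"}
```

Note the whole value is one dict on a single line; splitting it across lines or adding a trailing semicolon is one possible cause of the format error.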

And a second problem:

When I render on Linux/OSX nodes and the render time per frame is quite small (1-2 min), the Linux and OSX nodes finish the task and write the image, but then stay stuck on the 100%-completed task, waiting 6-8 minutes.
The HQueue client page for those stuck stations shows quite high load numbers (0.5 ... 2), so I think HQueue assumes those stations are busy (in fact they are not: 0% CPU usage by the user task). Are there any timeout/idle calculation settings or limits in the HQueue settings to cure such delays?

Thanks in advance.


Edited by uzer_name