Showing results for tags 'hqueue'.

Found 38 results

  1. Howdy there! I'm having an issue. I'm caching geometry to be prepped for custom FLIP sims, and I need sub-frames, since FLIP needs sub-frame samples for the VDB volumes and geometry caches. Caching on my LOCAL machine works, but when I send the jobs to the server using HQueue it only writes the whole frames and does not include any sub-frames. I've checked all the available options but still nothing. Screenshots are attached, showing the two things I've tried in HQueue. Any ideas? Thanks.
  2. Hey! I'm in a situation where I have a huge amount of points cached on disk and I want to render them. What I do in the scene is read the bgeo.sc caches, merge them all together, trail the points to calculate velocity, apply a pscale, assign a shader, and render. If I run it on the farm with IFDs, generating the IFDs for the render takes a long time, and I'm trying to reduce that. I noticed that the IFD itself is tiny; the big part is a storage folder under the IFD folder, which basically contains a huge bgeo.sc file per frame — I suppose it's the geometry I'm about to render. I wonder if all of this isn't redundant, since all those operations could presumably be done at render time... I tried setting the File SOP to "packed disk primitive", but it seems I cannot apply pscale after that: all the points render at pscale 1...

  3. Hello, I've set up an HQueue farm across several machines with a shared drive. Almost everything seems to be working now (I had lots of intermittent issues, with jobs not making it to HQueue and 0% progress failures; I think perhaps there was a license conflict). Anyway, the current problem is that I am unable to overwrite dependencies in the shared folder when I resubmit the hqrender node. As far as I can tell there are no permission issues and the files are not in use. I can read/delete/edit these files without any problem from any of my client/server machines (they are also fine/not corrupt in the local directory). Any thoughts? Log below: OUTPUT LOG ......................... 00:00:00 144MB | [rop_operators] Registering operators ... 00:00:00 145MB | [rop_operators] operator registration done. 00:00:00 170MB | [vop_shaders] Registering shaders ... 00:00:00 172MB | [vop_shaders] shader registration done. 00:00:00 172MB | [htoa_op] End registration. 00:00:00 172MB | 00:00:00 172MB | releasing resources 00:00:00 172MB | Arnold shutdown Loading .hip file \\\Cache/projects/testSubmit-2_deletenoChace_2.hip. PROGRESS: 0%ERROR: The attempted operation failed. Error: Unable to save geometry for: /obj/sphere1/file1 SOP Error: Unable to read file "//". GeometryIO[hjson]: Unable to open file '//' Backend IO Thanks in advance
  4. Remove Hqueue

    Hello, I'd like to remove HQueue completely from macOS. Deleting hqserver, hqclient, and the files in LaunchDaemons isn't enough: when I try to reinstall it I get an internal server error, and I suspect that happens because there are still parts of HQueue left on the hard drive... There must be some terminal commands for this, but I can't find anything in the docs about it and didn't have more luck with Google. Thanks
  5. HQUEUE Job Fails

    Hello, I'm stuck. I think I managed to install and configure HQueue correctly. This is what I do: plug an HQRender node after my Mantra node, hit Submit Job, then open HQueue. The job is submitted, but after about 3 seconds it fails, and the worst part is I don't know how to check what's wrong... If I click on the job ID number, it opens the job details window, but I can't see any "output log" to download. If I go to the Clients window, I can see that my client machine is recognized, with Availability set to Any @ anytime. The server machine is a Mac Pro (3.7 GHz quad-core Xeon E5); the client is an iMac (3.2 GHz Intel Core i3). Any idea where to start to resolve the problem? Thanks!!
  6. Hi! I'm trying to distribute a pyro simulation among several computers using HQueue, and whatever I do I cannot get proper results! I've attached a sample scene; everything is done using shelf tools only (no custom multisolvers, etc.). As you can see, the container divided by 2 along the X axis looks fine, but the container divided another way looks wrong, and I cannot figure out what the problem is! I just change the slice divisions and nothing more... distro.hip
  7. Greetings everyone. I'm struggling with setting up network rendering with Houdini 16.5. I use the "use existing IFD" option and, unfortunately, cannot use "render current HIP file", because my hqclients run headless (non-GUI). Submitted jobs are failing with the following in the log: Warning: prepareIFDRender() got an unexpected parameter 'enable_checkpoints'. Traceback (most recent call last): File "/home/HQShared/houdini_distros/hfs.linux-x86_64/houdini/scripts/hqueue/hq_prepare_ifd_render.py", line 27, in <module> hqlib.callFunctionWithHQParms(hqlib.prepareIFDRender) File "/home/HQShared/houdini_distros/hfs16.5.405-linux-x86_64/houdini/scripts/hqueue/hqlib.py", line 1482, in callFunctionWithHQParms function.__name__, parm_name)) TypeError: prepareIFDRender() takes the parameter 'cross_platform_ifd' (not given) Could somebody please explain how to set 'cross_platform_ifd' correctly? Thanks in advance.
  8. Hello, I can send the project to the HQueue server and it runs, but it cannot render. The attachments contain the error information.
  9. I would like to ask a question about Houdini HQueue. My shared directory is on my C drive, which only has a little over 100 GB of free space. When I run a large simulation, the cache is also very large, and the remaining space isn't enough. How can the other computers read from one of my other disks? Do I need to reinstall the HQueue server, or is there another solution besides switching to a larger hard drive?
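    (Editor's note: repointing the shared folder is normally a configuration change rather than a reinstall. A sketch of the relevant section of hqserver.ini on the server machine — the host name and share path below are placeholders, so adapt them to your own setup and restart the HQueue server service afterwards:)

```
# hqserver.ini — [app:main] section (MY-SERVER and the share path are placeholders)
# Point the shared network at a share on a disk with more free space.
hqserver.sharedNetwork.host = MY-SERVER
hqserver.sharedNetwork.path.windows = \\MY-SERVER\hq_big_disk
hqserver.sharedNetwork.mount.windows = H:
```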
  10. HQueue Problem

    Hi, when I submit my job on the client, it fails every time with the message "The network name cannot be found" or "The system cannot find the path specified". Has anyone had the same problem? Thanks.
  11. Hello, we built an asset to cache and version our sims/caches, and I extended it to be able to use the render farm to sim things out. To do this I added a ROP in the asset, with the HQueue Simulation node. I then use the topmost interface/parameters to 'click' the 'Submit Job' button on that HQueue Simulation node down below. However, unless I do 'Allow Editing of Contents' on my HDA, it fails on the farm with the following: ......................... ImportError: No module named sgtk Traceback (most recent call last): File "/our-hou-path/scripts/hqueue/hq_run_sim_without_slices.py", line 4, in <module> hqlib.callFunctionWithHQParms(hqlib.runSimulationWithoutSlices) File "/our-hou-path/linux/houdini/scripts/hqueue/hqlib.py", line 1862, in callFunctionWithHQParms return function(**kwargs) File "/our-hou-path/linux/houdini/scripts/hqueue/hqlib.py", line 1575, in runSimulationWithoutSlices alf_prog_parm.set(1) File "/hou-shared-path/shared/houdini_distros/hfs.linux-x86_64/houdini/python2.7libs/hou.py", line 34294, in set return _hou.Parm_set(*args) hou.PermissionError: Failed to modify node or parameter because of a permission error. Possible causes include locked assets, takes, product permissions or user specified permissions It seems that unless I unlock the asset, the submitted job can't run. Here's how I linked the interface button to the submit button. Thanks for your input.
  12. IFD Workflow Basics

    I put together a simple intro to writing/rendering IFDs, using Packed Disk Prims to make better IFDs, and using HQueue to efficiently generate IFDs for rendering with Mantra. https://vimeo.com/223443000
  13. Hello. Is it possible to set up a distributed sim of a river that is coming down some kind of slope? Every example I can find takes some kind of already-filled tank and goes from there. In the help there's even a note saying "this option is not needed for flowing rivers" (which might be referring specifically to the distributed-pressure setup, but I'm not sure) — why is that? From my limited testing with the tank-like setup, distributing the sim seems to speed it up considerably. Thanks
  14. Hi all, I have a couple of small problems tuning an HQueue farm. Setup: 1. workstations are Linux- and Windows-based; 2. the farm is mixed (OSX/Linux/Windows), and the hq_shared storage is an NFS/SMB NAS. OSX and Linux are configured with the autofs service, so projects located on the NFS share look like "/net/servername/raid/nfs_sharename/Projects/XXX" for both OSX and Linux render nodes and workstations. So I can render/simulate/whatever on Linux/OSX nodes with the current project file, with no need to copy the hip to the common shared storage — I just use "render current hip" in the HQ_render node. But when I try to use Windows nodes and a Windows workstation, problems arise. The shared project on the Windows side looks like "//servername/raid.sharename/Projects/XXX", and "Projects" is an SMB share mounted as disk "P:/" on nodes and workstations. So when I try to send a hip from a Linux/OSX workstation to Windows nodes using "render current hip", the nodes show the error "cannot read hip file "/net/servername/raid/nfs_sharename/Projects/XXX/scenes/file.hip"" — it seems the HQ server does not translate the Linux/OSX path to a Windows path. Same when using "render target hip". When I use "copy to shared storage" instead, the HQ_render node copies the hip and all needed cache/image files to "$HQROOT/projects" and the Windows stations render fine, but then I cannot do distributed simulation (error saving the cache file to a *nix path). Is it possible to use Windows nodes to render/simulate a hip from OSX/Linux workstations as the "current hip", assuming the hip is located on the NAS and accessible from all stations/nodes? And how do I correctly set the HOUDINI_PATHMAP variable? The help/FAQ shows different format examples, e.g. a. HOUDINI_PATHMAP = "/unix/path1" "//win/path1", "/unix/path2" "//win/path2" b. HOUDINI_PATHMAP = {"/unix/path1":"//win/path1", "/unix/path2":"//win/path2"} c. HOUDINI_PATHMAP = "/unix/path1":"//win/path1"; "/unix/path2":"//win/path2" etc., and I've had no luck using any of them in houdini.env — they all give format/variable errors.
And a 2nd problem: when I render on Linux/OSX nodes and the render time for one frame is quite small (1-2 min), the Linux and OSX nodes finish the task, write the image, then stay stuck at 100% completed and wait 6-8 minutes. The HQueue client page for those stuck stations shows quite high load numbers (0.5 ... 2), so I think HQueue assumes those stations are busy (in fact no — 0% CPU usage by the user task). Are there any timeout/idle-calculation settings or limits in HQueue that would cure such timeouts? Thanks in advance.
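    (Editor's note on the HOUDINI_PATHMAP question: as far as I know, the dictionary form — option b above — is the documented houdini.env syntax, i.e. the whole value is one brace-delimited map of quoted source prefixes to quoted destination prefixes. Conceptually it performs a prefix substitution on file paths. A minimal pure-Python sketch of that behavior; the mapping table, paths, and helper name here are illustrative, not from any real farm:)

```python
# Illustrative stand-in for what a path map does: replace the longest
# matching *nix prefix with its Windows/UNC equivalent. Placeholder paths.
PATH_MAP = {
    "/net/servername/raid/nfs_sharename": "//servername/raid.sharename",
}

def map_path(path, table=PATH_MAP):
    """Return `path` with the longest matching prefix swapped out."""
    # Try longer prefixes first so the most specific mapping wins.
    for prefix in sorted(table, key=len, reverse=True):
        if path.startswith(prefix):
            return table[prefix] + path[len(prefix):]
    return path  # no mapping applies; leave the path unchanged

print(map_path("/net/servername/raid/nfs_sharename/Projects/XXX/scenes/file.hip"))
# -> //servername/raid.sharename/Projects/XXX/scenes/file.hip
```

The point of the sketch is only the direction of the mapping: keys are the paths as written in the hip file, values are what the rendering machine should open.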
  15. Hello, I'm trying to distribute my wave layer tank FLIP sim to about 8 computers, and when the slices are brought back in they seem to cause this (see image below): the sections that meet go crazy. (I'll add a file when I get a chance if the image doesn't instantly spark an obvious solution.)
  16. HQueue unable to open file

    Hi everyone, I'm facing a weird problem with HQueue. I have 3 client machines, the firewall is disabled on all of them, they have all the necessary authorizations and access, and the file I want to render can be opened on each of them with the exact same path (I've tried opening it on each of them, and it works perfectly), but... when I submit the job to HQueue, it fails on all machines with this message: "......................... return function(**kwargs) File "C:\Program Files\Side Effects Software\Houdini\houdini\scripts\hqueue\hqlib.py", line 1489, in simulateSlice controls_dop = _loadHipFileAndGetNode(hip_file, controls_node) File "C:\Program Files\Side Effects Software\Houdini\houdini\scripts\hqueue\hqlib.py", line 331, in _loadHipFileAndGetNode _loadHipFile(hip_file, error_logger) File "C:\Program Files\Side Effects Software\Houdini\houdini\scripts\hqueue\hqlib.py", line 362, in _loadHipFile hou.hipFile.load(hip_file) File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.16/houdini/python2.7libs\houpythonportion.py", line 518, in decorator return func(*args, **kwargs) File "C:/PROGRA~1/SIDEEF~1/HOUDIN~1.16/houdini/python2.7libs\hou.py", line 22697, in load return _hou.hipFile_load(*args, **kwargs) hou.OperationFailed: The attempted operation failed. Unable to open file: X:/3d_Projects/MyProject/Houdini/Beach_4.hip" My file is in a different shared folder than the one dedicated to HQueue, but I even tried submitting it from the HQueue shared folder and it doesn't work either. Does anyone have a clue what is going on? I am stuck...
  17. Network sims using Deadline or HQueue

    Hey guys, so I got a new workstation this week, and I'm trying to set up my old system as a render/simulation node. I did get Mantra rendering to work using Deadline, but simulations aren't working for some reason; I'm getting this error (see attached image). I also tried HQueue, but all my jobs keep failing there. HQueue does see both my computers, but it immediately fails when I submit a job (either render or sim). I'm on Windows and I'm using Houdini Indie. Any ideas?
  18. Hi, I am wondering if rendering with HtoA through HQueue is supported using the latest versions of Houdini and Arnold. If not, is there any way to do it with scripting, Alembic, etc.? Many thanks
  19. HQueue on Linux on NAS!

    Hey peeps, I know this will sound weird, but has anyone ever run HQueue on CentOS on a NAS, such as a QNAP TS-253A-4G (see the Amazon link)? It contains a Celeron processor with 4 GB of RAM. I note from the description of the NAS that it runs Ubuntu out of the box, but I have seen CentOS mentioned as being able to run within "Container Station", a QNAP app. My setup so far is two i7-4790s, both dual-booting Windows and CentOS, with a local Houdini install. I would love for them both to become render nodes, with HQserver running on the NAS. I know most people say to just run HQserver on one of the nodes, but I want the nodes to do just rendering without any overhead from the HQserver; also, all of the rendered output will be stored on the NAS, so it makes sense if it can also run the HQserver. I don't know the minimum specs for the HQserver, but if I upgraded the RAM on the NAS to 8 GB it might be enough to pull this off... If anyone has any info on the minimum processor/RAM for HQserver, please let me know, or if anyone has a QNAP NAS that can run Linux, please share your experiences of Linux on a NAS... ciao Albin HO
  20. This is a newbie question (kind of)... After setting up HQueue and the HQueue client on a single machine to help me do batch render jobs, I noticed the render times from HQueue are dramatically slower — about 5x slower. And again, this is the same machine; I just use HQueue to help me organize batch renders. The client takes too long in hython (around 16 min preparing the scene) and is not nearly as fast while rendering each frame (about 5x slower). I find this really sad, because Mantra renders so fast that it's a shame it doesn't seem to be used properly in my HQueue setup. And I don't even know how to find the problem — the client and server are running properly, though in my task manager I notice Mantra is a lot slower to start rendering, instead of rendering constantly like when I "render to disk" from my hip file. I am using Houdini Indie on a Windows 10 machine, so I am not exporting IFD files (it took me a while to find out that's incompatible with Indie). Any ideas on how I can go about solving this would be very much appreciated.
  21. I'm trying to run a sim in the background from a File Cache. I set the output and frame range and hit "Save to Disk in Background". The scheduler pops up and says that the job is complete after 1 second, but nothing has been cached. Has anyone experienced this? It doesn't seem to be throwing an error (unless I'm not looking in the right place). I have administrator privileges and I'm rendering on only one machine. Any suggestions would be great. Rob
  22. Render Farms

    Hello, I'm looking to find out what off-the-shelf render farm managers people currently prefer for production — likes and dislikes? Short background: we need to use our nodes for Houdini, Max, Maya, Fume, and a few other packages. We currently have about ~100 nodes, and we don't have a dedicated render farm technician, so the most hands-off, off-the-shelf option is best. I have experience with HQueue, Rush, Deadline, and Qube, so I am looking for any additional off-the-shelf software beyond those, too. Thanks -Ben
  23. Hi guys, yet another HQueue problem. My team and I expanded our university's render farm, and now we're at about 100 clients. Everything is working great, but the HQueue web interface was getting slower and slower with each client we added, and now it's nearly unusable... We've tried different browsers (Chrome, Firefox, Internet Explorer...) but nothing changed. Our setup is pretty much the one suggested by SESI in the help section, so the shared folder and the hqserver are on two different machines. Do you guys know this problem, or maybe even have a solution for it? Thank you, Philipp
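    (Editor's note: the stock hqserver.ini ships with commented-out database connection-pool settings whose in-file comments specifically recommend raising them for farms with a large number of client machines — see the configuration file quoted in full in post 25 below is not assumed here; this is the same section. One hedged sketch for a ~100-client farm, with illustrative values; uncomment, tune, and restart the server:)

```
# hqserver.ini — uncomment and raise if "QueuePool limit of size" messages
# appear in errors.log (the values below are illustrative, not canonical)
sqlalchemy.default.pool_size = 40
sqlalchemy.default.max_overflow = 20
```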
  24. Hi guys, I've been trying to render a sequence in HQueue for Houdini, but it is not rendering my per-light components out — only an "all" component. HQueue also doesn't show any error while the render is taking place. I've tried rendering the same Mantra node locally, and all the components do appear. I have attached 2 screenshots showing the difference in the render passes; any help would be greatly appreciated. Cheers Ronald
  25. HQueue Problems

    Hey guys, I am having a heck of a time getting my farm setup for my school finals. I have three machines I need to run sliced fluid sims on. Right now I am just trying to get the main workstation to complete a HQueue job... So I have HQueue client and server installed on this machine on the C drive. Both services run fine under another admin account I created called HQueue. I Have used the shelf tool for creating a sliced sim (pyro sim in this case) as recommended per the HQueue documentation. The shared folder with a houdini install in it is on another disk called F in the workstation, the specific folder is shared as hq to all other machines and is mounted on all of them as H: I have no problems accessing it from any machine. The server's .ini file has been setup with the server ip of the pc it is running on (the workstation), and these lines have been set: hqserver.sharedNetwork.path.windows = \\KYLE-PC\hq hqserver.sharedNetwork.mount.windows = H: Everything else in there is vanilla. The problem seems to be in writing the slice files to the mounted H: drive, as I get this error when I submit the houdini file I have attached: hqlib.callFunctionWithHQParms(hqlib.simulateSlice) File "\\KYLE-PC\hq\houdini_distros\hfs.windows-x86_64\houdini\scripts\hqueue\hqlib.py", line 1864, in callFunctionWithHQParms return function(**kwargs) File "\\KYLE-PC\hq\houdini_distros\hfs.windows-x86_64\houdini\scripts\hqueue\hqlib.py", line 1532, in simulateSlice _renderRop(rop) File "\\KYLE-PC\hq\houdini_distros\hfs.windows-x86_64\houdini\scripts\hqueue\hqlib.py", line 1869, in _renderRop rop.render(*args, **kwargs) File "//KYLE-PC/hq/houdini_distros/hfs.windows-x86_64/houdini/python2.7libs\hou.py", line 32411, in render return _hou.RopNode_render(*args, **kwargs) hou.OperationFailed: The attempted operation failed. Error: Failed to save output to file "H:/projects/geo/untitled.loadslices.1.bgeo.sc". 
Error: Failed to save output to file "H:/projects/geo/untitled.loadslices.2.bgeo.sc". I am really not sure why this is happening, as I think I have all the relevant permissions. Any suggestions, peeps? -Kyle Here is the diagnostics output too: Diagnostic Information for Job 75: ================================== Job Name: Simulate -> HIP: untitled.hip ROP: save_slices (Slice 0) Submitted By: Kyle Job ID: 75 Parent Job ID(s): 73, 76 Number of Clients Assigned: 1 Job Status: failed Report Generated On: December 12, 2015 01:52:08 AM Job Properties: =============== Description: None Tries Left: 0 Priority: 5 Minimum Number of Hosts: 1 Maximum Number of Hosts: 1 Tags: single Queue Time: December 12, 2015 01:15:04 AM Runnable Time: December 12, 2015 01:46:19 AM Command Start Time: December 12, 2015 01:50:04 AM Command End Time: Start Time: December 12, 2015 01:50:04 AM End Time: December 12, 2015 01:50:18 AM Time to Complete: 13s Time in Queue: 35m 00s Job Environment Variables: ========================== HQCOMMANDS: { "hythonCommandsLinux": "export HOUDINI_PYTHON_VERSION=2.7 && export HFS=\"$HQROOT/houdini_distros/hfs.$HQCLIENTARCH\" && cd $HFS && source ./houdini_setup && hython -u", "pythonCommandsMacosx": "export HFS=\"$HQROOT/houdini_distros/hfs.$HQCLIENTARCH\" && $HFS/Frameworks/Python.framework/Versions/2.7/bin/python", "pythonCommandsLinux": "export HFS=\"$HQROOT/houdini_distros/hfs.$HQCLIENTARCH\" && $HFS/python/bin/python2.7", "pythonCommandsWindows": "(set HFS=!HQROOT!\\houdini_distros\\hfs.!HQCLIENTARCH!) 
&& \"!HFS!\\python27\\python2.7.exe\"", "mantraCommandsLinux": "export HFS=\"$HQROOT/houdini_distros/hfs.$HQCLIENTARCH\" && cd $HFS && source ./houdini_setup && $HFS/python/bin/python2.7 $HFS/houdini/scripts/hqueue/hq_mantra.py", "mantraCommandsMacosx": "export HFS=\"$HQROOT/houdini_distros/hfs.$HQCLIENTARCH\" && cd $HFS && source ./houdini_setup && $HFS/Frameworks/Python.framework/Versions/2.7/bin/python $HFS/houdini/scripts/hqueue/hq_mantra.py", "hythonCommandsMacosx": "export HOUDINI_PYTHON_VERSION=2.7 && export HFS=\"$HQROOT/houdini_distros/hfs.$HQCLIENTARCH\" && cd $HFS && source ./houdini_setup && hython -u", "hythonCommandsWindows": "(set HOUDINI_PYTHON_VERSION=2.7) && (set HFS=!HQROOT!\\houdini_distros\\hfs.!HQCLIENTARCH!) && (set PATH=!HQROOT!\\houdini_distros\\hfs.!HQCLIENTARCH!\\bin;!PATH!) && \"!HFS!\\bin\\hython\" -u", "mantraCommandsWindows": "(set HFS=!HQROOT!\\houdini_distros\\hfs.!HQCLIENTARCH!) && \"!HFS!\\python27\\python2.7.exe\" \"!HFS!\\houdini\\scripts\\hqueue\\hq_mantra.py\"" } HQPARMS: { "controls_node": "/obj/pyro_sim/DISTRIBUTE_pyro_CONTROLS", "dirs_to_create": [ "$HIP/geo" ], "tracker_port": 54534, "hip_file": "$HQROOT/projects/untitled.hip", "output_driver": "/obj/distribute_pyro/save_slices", "enable_perf_mon": 0, "slice_divs": [ 1, 1, 1 ], "tracker_host": "KYLE-PC", "slice_num": 0, "slice_type": "volume" } HQHOSTS: KYLE-PC Job Conditions and Requirements: ================================ hostname any KYLE-PC Executed Client Job Commands: ============================= Windows Command: (set HOUDINI_PYTHON_VERSION=2.7) && (set HFS=!HQROOT!\houdini_distros\hfs.!HQCLIENTARCH!) && (set PATH=!HQROOT!\houdini_distros\hfs.!HQCLIENTARCH!\bin;!PATH!) 
&& "!HFS!\bin\hython" -u "!HFS!\houdini\scripts\hqueue\hq_sim_slice.py" Client Machine Specification (KYLE-PC): ======================================= DNS Name: KYLE-PC Client ID: 1 Operating System: windows Architecture: x86_64 Number of CPUs: 24 CPU Speed: 4000.0 Memory: 25156780 Client Machine Configuration File Contents (KYLE-PC): ===================================================== [main] server = KYLE-PC port = 5000 sharedNetwork.mount = \\KYLE-PC\hq [job_environment] HQueue Server Configuration File Contents: ========================================== # # hqserver - Pylons configuration # # The %(here)s variable will be replaced with the parent directory of this file # [DEFAULT] email_to = you@yourdomain.com smtp_server = localhost error_email_from = paste@localhost [server:main] use = egg:Paste#http host = port = 5000 [app:main] # The shared network. hqserver.sharedNetwork.host = KYLE-PC hqserver.sharedNetwork.path.linux = %(here)s/shared hqserver.sharedNetwork.path.windows = \\KYLE-PC\hq hqserver.sharedNetwork.path.macosx = %(here)s/HQShared hqserver.sharedNetwork.mount.linux = /mnt/hq hqserver.sharedNetwork.mount.windows = H: hqserver.sharedNetwork.mount.macosx = /Volumes/HQShared # Server port number. hqserver.port = 5000 # Where to save job output job_logs_dir = %(here)s/job_logs # Specify the database for SQLAlchemy to use sqlalchemy.default.url = sqlite:///%(here)s/db/hqserver.db # This is required if using mysql sqlalchemy.default.pool_recycle = 3600 # This will force a thread to reuse connections. sqlalchemy.default.strategy = threadlocal ######################################################################### # Uncomment these configuration values if you are using a MySQL database. ######################################################################### # The maximum number of database connections available in the # connection pool. 
If you see "QueuePool limit of size" messages # in the errors.log, then you should increase the value of pool_size. # This is typically done for farms with a large number of client machines. #sqlalchemy.default.pool_size = 30 #sqlalchemy.default.max_overflow = 20 # Where to publish myself in avahi # hqnode will use this to connect publish_url = http://hostname.domain.com:5000 # How many minutes before a client is considered inactive hqserver.activeTimeout = 3 # How many days before jobs are deleted hqserver.expireJobsDays = 10 # The maximum number of jobs (under the same root parent job) that can fail on # a single client before a condition is dynamically added to that root parent # job (and recursively all its children) that excludes the client from ever # running this job/these jobs again. This value should be a postive integer # greater than zero. To disable this feature, set this value to zero. hqserver.maxFailsAllowed = 5 # The priority that the 'upgrade' job gets. hqserver.upgradePriority = 100 use = egg:hqserver full_stack = True cache_dir = %(here)s/data beaker.session.key = hqserver beaker.session.secret = somesecret app_instance_uuid = {fa64a6d1-ae3f-43c1-8141-9c29fdd9d418} # Logging Setup [loggers] keys = root [handlers] keys = console [formatters] keys = generic [logger_root] # Change to "level = DEBUG" to see debug messages in the log. level = INFO handlers = console # This handler backs up the log when it reaches 10Mb # and keeps at most 5 backup copies. [handler_console] class = handlers.RotatingFileHandler args = ("hqserver.log", "a", 10485760, 5) level = NOTSET formatter = generic [formatter_generic] format = %(asctime)s %(levelname)-5.5s [%(name)s] %(message)s datefmt = %B %d, %Y %H:%M:%S Job Status Log: =============== December 12, 2015 01:15:04 AM: Assigned to KYLE-PC (master) December 12, 2015 01:15:10 AM: setting status to running December 12, 2015 01:15:23 AM: setting status to failed December 12, 2015 01:18:28 AM: Rescheduling... 
December 12, 2015 01:18:28 AM: setting status to runnable December 12, 2015 01:18:28 AM: Assigned to KYLE-PC (master) December 12, 2015 01:18:35 AM: setting status to running December 12, 2015 01:18:47 AM: setting status to failed December 12, 2015 01:23:18 AM: setting status to runnable December 12, 2015 01:23:19 AM: Assigned to KYLE-PC (master) December 12, 2015 01:23:20 AM: setting status to running December 12, 2015 01:23:33 AM: setting status to failed December 12, 2015 01:29:44 AM: setting status to runnable December 12, 2015 01:29:44 AM: Assigned to KYLE-PC (master) December 12, 2015 01:29:44 AM: setting status to running December 12, 2015 01:29:57 AM: setting status to failed December 12, 2015 01:34:17 AM: setting status to runnable December 12, 2015 01:34:17 AM: Assigned to KYLE-PC (master) December 12, 2015 01:38:17 AM: setting status to abandoned December 12, 2015 01:46:19 AM: setting status to runnable December 12, 2015 01:50:04 AM: Assigned to KYLE-PC (master) December 12, 2015 01:50:04 AM: setting status to running December 12, 2015 01:50:18 AM: setting status to failed UPDATE: I just did a system restart to see if it would help, and instead of the regular write error I received this: 0x00000000577CDE78 (0x000000000000002B 0x000000AD63AEF840 0x000000AD453FEEB0 0x0000000000000000), ?thread_sleep_v3@internal@tbb@@YAXAEBVinterval_t@tick_count@2@@Z() + 0x8C8 bytes(s) 0x00000000577CDD2B (0x000000AD45381F90 0x000000AD45381F90 0x000000AD453FEEB0 0x0000000000000000), ?thread_sleep_v3@internal@tbb@@YAXAEBVinterval_t@tick_count@2@@Z() + 0x77B bytes(s) 0x00007FFF29E43FEF (0x00007FFF29EE1DB0 0x0000000000000000 0x0000000000000000 0x0000000000000000), _beginthreadex() + 0x107 bytes(s) 0x00007FFF29E44196 (0x00007FFF29E44094 0x000000AD453FEEB0 0x0000000000000000 0x0000000000000000), _endthreadex() + 0x192 bytes(s) 0x00007FFF36582D92 (0x00007FFF36582D70 0x0000000000000000 0x0000000000000000 0x0000000000000000), BaseThreadInitThunk() + 0x22 bytes(s) 0x00007FFF36C29F64 
(0x0000000000000000 0x0000000000000000 0x0000000000000000 0x0000000000000000), RtlUserThreadStart() + 0x34 bytes(s) After resubmission, it went back to the usual error mentioned above. untitled.hip