
Hi!
I have a strange issue. When I write many files at the same time, some of them break. It can be 200-300 files of about 10 MB written simultaneously, or 70-80 files of 100 MB - the result is the same: the files exist, but 5-10% of them raise a read error in Houdini. On the storage they have either their normal size or less than normal.

I tried this on the old storage and on the newest storage (with excellent access times, speed, caching, etc.) - the same result. Houdini 16 behaves just like Houdini 15.5.

While I am writing these files, we also get write errors on the storage from other software like Nuke, or even when just writing files from the OS.

The opposite does not happen: when we write hundreds of EXRs from Nuke, no files break on the storage.

This has been happening for many months, and I think the cause is in the way Houdini writes files.

Is there a way to change how Houdini writes geometry files (flags, environment variables)?


Not really. You should submit a bug to SideFX with an example scene in case there is a bug, and include as many details about your system setup as possible. It could be a fringe case with your hardware setup. If you really don't believe it's the hardware, it could be competing software or your farm software grabbing access to your nodes, i.e. Nuke stomping on Houdini, or farm software that isn't sophisticated enough (Qube vs. HQueue). It has been a while since I ran into that kind of issue, but for the last few years I've gotten to work on very corporate data-server hardware.



LaidlawFX, thanks for the answer. I really think it's a software problem. Different storage systems, different blades. Nuke creates the same load on the storage, but we don't see any data loss there. I was hoping there was a low-level setting in Houdini for how it writes geometry files.
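
For reference, this is the kind of stopgap I have in mind on our side - just an untested sketch, and the node path is only a placeholder: save the cache to a temporary file on the same filesystem and rename it to the final name once the write has finished, so a half-written file never sits at the final path.

import os
import hou

def save_geo_atomically(geo, final_path):
    # Write next to the final file so the rename stays on one filesystem;
    # keep the original name at the end so Houdini still picks the
    # bgeo.sc format from the extension.
    dirname, basename = os.path.split(final_path)
    tmp_path = os.path.join(dirname, "_tmp_" + basename)
    geo.saveToFile(tmp_path)
    # os.rename() is atomic on a local POSIX filesystem; over NFS it is
    # only "mostly" atomic, so this is a mitigation, not a guarantee.
    os.rename(tmp_path, final_path)

# e.g. from a Python SOP or post-write script ("/obj/geo1/OUT" is hypothetical):
save_geo_atomically(hou.node("/obj/geo1/OUT").geometry(),
                    "/mnt/storage/cache/test.0001.bgeo.sc")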


Good luck with your bug submission.

Out of curiosity, how are you cloning Houdini to the network? Is Houdini installed per node, on a common software server, or on the workstations? Generally speaking, Houdini as a whole is pretty "low level", i.e. Houdini Engine vs. Houdini (FX, Core...). Also, it could be the format you are writing to. If it uses a third-party writer like FBX, as opposed to .bgeo, there is a limit to what they can change in that type of code. Make sure you give SideFX a good repro, otherwise they'll only be able to help so much.



Houdini is installed per node and on the workstations. We almost always use bgeo.sc.

8 minutes ago, malexander said:

You could be running out of file descriptors ("Too many open files"). On Linux, you can run 'limit' (or ulimit) to see how many files can be open at once. On my system, it's 1024 by default.

Which OS are you running?

It's Debian 8 Jessie on the blades, the workstations, and the old storage. The newest storage runs Scientific Linux with a custom build. We will check the descriptor limit on the storage, but descriptors are not limited on my system.
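
In case it is useful to someone else, this is how I plan to check the limit from inside the Houdini process itself (Python shell), rather than trusting the login shell's ulimit. The setrlimit call is optional and assumes our render wrapper allows raising the soft limit.

import resource

# Current per-process limit on open file descriptors.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open files: soft=%d hard=%d" % (soft, hard))

# Raise the soft limit up to the hard limit for this process only;
# raising the hard limit itself needs root / limits.conf.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))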
