Recommended Posts

Hi!
I have a strange issue. When I write many files at the same time, some of them break. It can be 200-300 files of about 10 MB each written simultaneously, or 70-80 files of 100 MB each; the result is the same: the files exist, but 5-10% of them raise a read error in Houdini. On the storage they have either the normal size or less than the normal size.

I tried this on the old storage and on the newest storage (with excellent access time, speed, caching, etc.) with the same result. Houdini 16 behaves just like Houdini 15.5.

While I am writing these files, other software on the storage (Nuke, or even plain file writes from the OS) also gets write errors.

The reverse does not happen: when we write hundreds of EXRs from Nuke, no files break on the storage.

This has been happening for many months, and I think the cause lies in the way Houdini writes files.

Is there a way to change how Houdini writes geometry files (flags, environment variables)?
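One generic mitigation, independent of any Houdini setting, is to write each file under a temporary name and atomically rename it into place only after the write has completed and been flushed; a partially written file then never appears at the final path. A minimal Python sketch (the atomic_write helper is hypothetical, not a Houdini API; it could be wrapped around any cache-writing step):

```python
import os
import tempfile

def atomic_write(path, data):
    # Write to a temp file in the same directory (rename is only
    # atomic within a single filesystem), then swap it into place.
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dirname, suffix=".tmp")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # push the bytes to the storage layer
        os.replace(tmp_path, path)  # atomic rename on POSIX filesystems
    except BaseException:
        os.unlink(tmp_path)  # never leave a half-written temp behind
        raise
```

With this pattern a reader only ever sees the old file or the complete new one, never a truncated write; it does not fix the underlying storage problem, but it stops broken files from being picked up downstream.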

Edited by elecstorm


Not really. You should submit a bug to SideFX with an example scene in case there is a bug, with as many details about your system setup as possible. It could be a fringe case with your hardware setup. If you really don't believe it's the hardware, it could be competing software or your farm software grabbing your nodes' access, i.e. Nuke stomps on Houdini, or your farm software is not sophisticated enough (Qube vs. HQueue). It has been a while since I ran into that issue, but the last few years I've gotten to work on very corporate data server hardware.


5 hours ago, LaidlawFX said:

Not really. You should submit a bug to SideFX with an example scene in case there is a bug, with as many details about your system setup as possible. It could be a fringe case with your hardware setup. If you really don't believe it's the hardware, it could be competing software or your farm software grabbing your nodes' access, i.e. Nuke stomps on Houdini, or your farm software is not sophisticated enough (Qube vs. HQueue). It has been a while since I ran into that issue, but the last few years I've gotten to work on very corporate data server hardware.

LaidlawFX, thanks for the answer. I really think it's a software problem. Different storages, different blades. Nuke creates the same load on the storage, but we don't see any data loss there. I was hoping there is a low-level setting in Houdini for writing geometry files.


Good luck with your bug submission.

Out of curiosity, how are you cloning Houdini to the network? Is Houdini installed per node, on a common software server, or on the workstations? Generally speaking, Houdini as a whole is pretty "low level", i.e. Houdini Engine vs. Houdini (FX, Core...). It could also be the format you are writing to. If it uses a third-party writer like FBX, as opposed to .bgeo, there is a limit to what they can change in that type of code. Make sure you give SideFX a good repro, otherwise they'll only be able to help so much.


You could be running out of file descriptors ("Too many open files"). On Linux, you can run 'ulimit -n' (or 'limit' in csh) to see how many files can be open at once. On my system, it's 1024 by default.
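The same limits can also be queried from inside a Python session on Linux via the stdlib resource module (Unix-only), which is handy for checking what a render process actually inherits from the farm rather than what an interactive shell reports:

```python
import resource

# The soft limit is what a process actually hits ("Too many open
# files"); the hard limit is the ceiling a non-root process may
# raise its soft limit to with resource.setrlimit().
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft:", soft, "hard:", hard)
```

If the soft limit turns out to be low on the render blades, raising it there (or in the farm wrapper scripts) would be the thing to try first.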

Which OS are you running?

10 minutes ago, LaidlawFX said:

Good luck with your bug submission.

Out of curiosity, how are you cloning Houdini to the network? Is Houdini installed per node, on a common software server, or on the workstations? Generally speaking, Houdini as a whole is pretty "low level", i.e. Houdini Engine vs. Houdini (FX, Core...). It could also be the format you are writing to. If it uses a third-party writer like FBX, as opposed to .bgeo, there is a limit to what they can change in that type of code. Make sure you give SideFX a good repro, otherwise they'll only be able to help so much.

Houdini is installed per node and on the workstations. We almost always use bgeo.sc.

8 minutes ago, malexander said:

You could be running out of file descriptors ("Too many open files"). On Linux, you can run 'ulimit -n' (or 'limit' in csh) to see how many files can be open at once. On my system, it's 1024 by default.

Which OS are you running?

It's Debian 8 Jessie on the blades, the workstations, and the old storage. The newest storage runs Scientific Linux with a custom build. We will check the descriptor limit on the storage, but descriptors are not limited on my system.

