Houdini 16 on Windows vs Linux? And some other config questions


kemijo

Hi all, a few Houdini hardware and other questions. If you have experience with Broadwell, 10-series GPUs, Redshift, etc., please lend any insight you can.

I'm speccing a new desktop to replace my old and finicky Mac Pro 2008. I'm on Linux at work, but I don't care to admin a Linux machine at home, so I'm likely going with Windows despite not having used one in years. I'd like to avoid a dual boot if possible for other reasons (mostly hassle). Considering that, and putting aside any feelings about Windows itself:

1a) Is Houdini 16 performance (RAM efficiency) still much worse on Windows 10 than it is on Linux?

1b) Any other caveats that just make Houdini (and CG in general) a horrible experience on Windows, that aren't just "Windows sucks"?

 

Trying to decide between the i7 Broadwell chips. Leaning mostly toward the 6850K, but considering the 6900K and perhaps even the 6950X (can be had on eBay for $1500).

2a) Would general scene building in Houdini feel much different between the three Broadwell chips? Cores go up, clock speeds come down... is there a clear winner when processing typical node networks or sims?

2b) Is there any point to overclocking with Houdini? Would a minor speed boost be felt much? Is it safe for general use? What about for long uptimes?

 

I plan on Redshift with a 10-series GPU or two, e.g. a 1080 Ti paired with a 1070 eventually. I know VRAM can't be summed and Houdini only supports one OpenCL device.
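From what I've read, you can at least pin which card fills that single OpenCL slot with environment variables in houdini.env; a rough sketch of what I'd try (the exact vendor string depends on your drivers):

# pick which card fills Houdini's single OpenCL slot
HOUDINI_OCL_DEVICETYPE = GPU
HOUDINI_OCL_VENDOR = "NVIDIA Corporation"
# zero-based index among matching devices; 1 would target a second card
HOUDINI_OCL_DEVICENUMBER = 0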

3a) Is there anything Redshift just doesn't do well in Houdini, where Mantra is needed instead?

3b) H16 apparently has 'fully OpenCL supported pyro'. What does this mean IRL? Is pyro on the GPU *much* faster than on the CPU? Are collisions supported/accelerated? Are there any features that aren't yet supported, or that negate the gains from GPU OpenCL?

 

4) Do RAM speed/timings play much of a factor at all in Houdini?

 

Thanks for any help, sorry for yet another set of hardware questions :P

Guest tar

First thing, I would recommend changing the Mac to Windows or Linux; it will become a much better system. Maybe up to 30% faster and smoother, more so if you put a new graphics card in it.

1a) Haven't heard that it's improved.

1b) Nope, Windows 10 is meant to be *almost* at macOS level :)

2a) The 6950X looks great! Turbo speeds up to 4GHz and 10 cores are wickedly good. The whole direction of Houdini, e.g. compiled SOPs, benefits from multiple cores, and the remaining single-core ops are very fast too.
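A rough back-of-envelope, ignoring IPC, turbo and memory differences and assuming a perfectly parallel workload (core counts and base clocks from Intel's specs):

# crude throughput comparison: cores x base clock
# assumes perfect scaling, which sims/renders only approximate
for name, cores, ghz in [("6850K", 6, 3.6), ("6900K", 8, 3.2), ("6950X", 10, 3.0)]:
    print("%s parallel: %.1f  single-core: %.1f" % (name, cores * ghz, ghz))

That gives 21.6 / 25.6 / 30.0, so for well-threaded work the 6950X wins, while single-threaded ops are close once turbo kicks in.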

3a) Particles are still instanced in Redshift, but that should be changing 'very soon'.

3b) Speed increases depend on the ratio of CPU to GPU performance, so YMMV. Can't say whether collisions have been accelerated yet.
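If you want numbers for your own setup, a quick A/B in Houdini's Python shell is enough. A sketch only; the node paths are placeholders, and I'm assuming the solver's Use OpenCL toggle is the parm named 'opencl', so check yours:

import time
import hou

solver = hou.node("/obj/pyro_sim/dopnet/pyrosolver1")  # placeholder path, use your own
rop = hou.node("/out/sim_cache")  # placeholder ROP that cooks/writes the sim

for use_ocl in (0, 1):
    solver.parm("opencl").set(use_ocl)  # assumed name of the Use OpenCL toggle
    t0 = time.time()
    rop.render()  # cook the frame range
    print("OpenCL %d: %.1fs" % (use_ocl, time.time() - t0))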

4) Yes, memory management is Tetris-like; anything that speeds it up will help.

Thanks for the replies rigelbowen and marty :)

I was afraid Windows was still behind, but as long as it's very usable and not crazily inefficient then I'll manage. I have seen reviews describe the 6950X (and in fact all the Broadwells) as 4GHz at turbo, but only one of them is marketed as such. So the turbo speed of all the chips hits 4GHz despite some being labeled at 3.7, 3.6, 3.5 by Intel?

As for RAM, I imagine the timings and speed technically 'help', but the price premium for the performance seems huge. Just wondering if it's worth it. Gaming sites suggest that better timings are only for enthusiasts trying to bench. If there is no discernible real-world difference then I'll skip that.

  • 4 weeks later...

I have the i7-6900K, a Titan X (Pascal) GPU, and 64GB RAM.

Also running Linux (Ubuntu MATE 16.04 LTS).

Informal tests (I'm running a dual-boot setup) show roughly a 15-30% sim & render speed improvement on Linux over Windows, all other things remaining identical. YMMV.

Testing Redshift, and as has been mentioned, particles require instancing.

Also, for any custom shading, rendering volumes can be a pain, as the RS volume shader is somewhat limited.

Redshift has a LONG way to go before it will match the versatility of a CPU renderer like Mantra, but it IS much faster with a good GPU and keeps getting better. Support is fairly decent.

I would suggest looking into alternatives to the i7 CPUs for CPU rendering... AFAIK, even the previous generation of Intel chips was much better at overclocking.

I'm still very much a novice, but looking forward I can see using GPU rendering for personal work and local testing, and cloud-based renderers like Gridmarket for production work down the line...

Edited by art3mis