
Building a Houdini-oriented workstation



I am about to build my first workstation, which will be Houdini-oriented. I did extensive research on hardware and I thought it might be useful to share it for people who may be in the same place. I would also like to ask a couple of questions which I didn't manage to answer myself.

First of all, let's start with my build. Because I am not currently in a position to spend a large amount of money on a super workstation, I have designed two different builds. The first one, the "budget workstation", will reuse some hardware I already own and build on top of it, while keeping the second build, the "ultimate workstation", in mind as an upgrade that may come next year. Why not wait until I have all the money? Because my current PC is so slow that it's impossible to do anything with Houdini.

Here is the hardware for the two builds:

https://docs.google.com/document/d/1ZAVK_vq2cCobbnTsV2ryMokLOaKSBBpK48gX8xX8H74/edit

Questions:

1. What is the best combination of Nvidia cards (one for OpenCL and the other for display)?

2. Can a Tesla C2075 and a GTX 680 be in the same setup?

3. Is it true that it's better to go with top-end air cooling (Noctua) than cheap water cooling (Corsair Hydro Series H60)?

4. Any other suggestions?

Please share your experience with the listed hardware, or your own setups.

cheers


1. What is the best combination of Nvidia cards (one for OpenCL and the other for display)?

2. Can a Tesla C2075 and a GTX 680 be in the same setup?

Unless you plan to be simulating with OpenCL most of the time, the extra graphics card is a waste. The Tesla cards provide additional memory (which means higher-resolution simulations) but will not calculate any faster than their gaming cousins. If you really need that much memory, it might be better to just calculate on the CPU instead. I'm not sure how the driver situation would work between a GeForce card and a Tesla card, but if you can afford a Tesla card, you can afford a Quadro card too.
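
If you want to see exactly what OpenCL would have to work with on a mixed setup like that, a quick device query answers it. Below is a minimal sketch in Python using the third-party pyopencl package (an assumption on my part; a tool like clinfo reports the same information):

    # List every OpenCL device and its memory. Assumes the third-party
    # "pyopencl" package is installed (pip install pyopencl).
    import pyopencl as cl

    for platform in cl.get_platforms():
        for device in platform.get_devices():
            mem_gb = device.global_mem_size / (1024 ** 3)
            print(f"{platform.name} / {device.name}: {mem_gb:.1f} GB global memory")

A GeForce and a Tesla that both show up as devices here should both be usable, driver quirks aside.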

3. Is it true that it's better to go with top-end air cooling (Noctua) than cheap water cooling (Corsair Hydro Series H60)?

I would avoid liquid cooling. Even when implemented in the best possible way, liquid cooling will eventually fail, and when it does the results can be catastrophic (when a fan dies, the machine just runs hotter until you replace it). For example, look at what happened when the liquid-cooled Power Mac G5 workstations started to fail.

[Image: Power Mac G5 coolant leak]

The heat generated by a workstation isn't sufficient to warrant liquid cooling. The point of the liquid is to carry heat away from the source to a larger heatsink than could otherwise fit at the heat source. Liquid cooling like the H60 is mostly just there to say, "Look, my machine is liquid cooled!"


Unless you plan to be simulating with OpenCL most of the time, the extra graphics card is a waste. [...]

Why wouldn't a Tesla card calculate faster than a GTX 680, exactly? Not sure I understand.

I would avoid liquid cooling. Even when implemented in the best possible way, liquid cooling will eventually fail, and when it does the results can be catastrophic. [...]

Have you ever tried liquid cooling? This seems like a pretty strong statement of opinion.


I see your two specs are for Xeons. Their extra on-board cache is nice, but I'd look at substituting 3rd-gen i7s with 6 cores on board. I bet that for the cost of the Xeons you could get a dual-i7 setup, for a total of 12 cores. That certainly helps Mantra sweep through a frame.

Either that, or go for an additional SSD for the OS install and local cache, along with a 2 TB 7200 RPM drive for files.

I'd stay away from liquid cooling even in search of a silent system, which is only one of the two reasons to install it: you want less fan noise, or you are an overclocker. Overclocking a machine that will be doing hard rendering and simulation for long periods of time will generate huge heat, and even with adequate cooling, the thermal stress on today's flimsy components will see your machine rendered dead in no time. Play it safe on production machines.

As far as the GeForce GTX 680 goes, it's a great card for Houdini at this time: a good balance between on-board memory and enough GPU cores to make running the odd OpenCL fluid sim workable. Instancing geometry in the viewport on the GPU will only become more important moving forward.

If you get a job that requires tons of dust and will pay the bills, just add a Tesla to get a very large amount of GPU memory and cores for much larger OpenCL fluid sims. Again, let the job pay for the Tesla; there's no other way to warrant it unless you are independently wealthy and don't need to justify it. As far as Houdini goes, it's an environment variable that you set to point Houdini at the Tesla instead of the GeForce.
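
For what it's worth, here's a hypothetical launcher sketch of that idea in Python. HOUDINI_OCL_DEVICETYPE and HOUDINI_OCL_DEVICENUMBER are the variable names I believe Houdini reads for this; verify them against your install's documentation before relying on them:

    # Hypothetical sketch: point Houdini's OpenCL solvers at a specific GPU
    # before launching. The variable names are my assumption; check the docs.
    import os
    import subprocess

    env = os.environ.copy()
    env["HOUDINI_OCL_DEVICETYPE"] = "GPU"    # prefer a GPU device over the CPU
    env["HOUDINI_OCL_DEVICENUMBER"] = "1"    # second GPU (the Tesla), not the display card

    subprocess.run(["houdini"], env=env)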

Nvidia has their Maximus technology that bridges a Quadro with a Tesla. It's great if you are into medical visualization and can take advantage of this specialized arrangement, but it's more intended for companies like Siemens and Philips who build medical imaging equipment. Houdini doesn't take advantage of Maximus at this time.


Why wouldn't a Tesla card calculate faster than a GTX 680, exactly? Not sure I understand.

Nvidia uses the same chips across all of their product tiers: a high-end Quadro will have the same chip as a high-end gaming card from the same generation (same for the Tesla). Other graphics manufacturers like AMD do the same thing (with FirePro and Radeon). The differences between the workstation, compute, and gaming products are the driver optimizations, the quantity of memory, and the level of support offered.

Have you ever tried liquid cooling? This seems like a pretty strong statement of opinion.

Yes, ranging from a water block I designed and CNC-milled to a factory liquid-cooled workstation from Boxx. There are places where liquid cooling is actually needed: environments with extremely high ambient temperatures; high-density data centers or supercomputers that use a centralized liquid-cooling infrastructure for higher efficiency; or necessarily silent applications with large passive reservoirs, as in recording or research environments. The typical computer graphics workstation is not in any of those situations, so it just doesn't need it, and if anyone tells you otherwise they're probably trying to sell you something. Overclocking is another use, but most of the people using liquid cooling for overclocking could just as easily achieve the same results without it, and if a machine is overclocked to the point where it needs liquid cooling, I wouldn't suggest using it as a workstation anyway (for the same reason as not using liquid cooling in the first place).


Nvidia uses the same chips across all of their product tiers: a high-end Quadro will have the same chip as a high-end gaming card from the same generation (same for the Tesla). [...]

Wouldn't the sheer number of cores on a Tesla make it outperform a GTX 680, even on sims which fit in the memory of a 680?

Yes, ranging from a water block I designed and CNC-milled to a factory liquid-cooled workstation from Boxx. [...]

I disagree. Maybe it's my ambient temperature (I can't always guarantee it to be icy cool), but I've melted my fair share of components. Ever since setting up my liquid-cooled GTX 580 I've never had a problem. We can agree to disagree at this point.

-G

I should also add that I'm not necessarily recommending liquid cooling, because there are many things going against it, but given the results I've had, I probably wouldn't do without it from here on out.


Wouldn't the sheer number of cores on a Tesla make it outperform a GTX 680, even on sims which fit in the memory of a 680?

The Tesla K10 uses the same chips as the GTX 690: they each have two of the higher-end Kepler-architecture GPUs. Something to point out is that Houdini will see those as two devices, so you could run two simulations at once but not one simulation that uses both.

The difference is what was stated before: the drivers, memory, and support. I'm not really sold on the Tesla cards personally because they don't have that much more memory. Sure, 4 GB (per GPU) is twice as much as 2 GB (per GPU), but that's still a relatively small amount compared to the 24 GB or 48 GB a simulation can use if you run it on the processors instead of the graphics card. The performance improvements from using OpenCL are nice, but only if the simulation fits within the card's 2 GB or 4 GB, which in my experience is not that often.
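
To put rough numbers on that, here's a back-of-the-envelope sketch. The field list (three scalar fields plus a three-component float velocity field) is my assumption about a typical smoke sim, and a real solver needs scratch buffers on top of this:

    # Estimate the raw memory footprint of a smoke sim grid.
    # Assumed fields: density, temperature, fuel, plus 3 velocity
    # components, all stored as 4-byte floats (6 floats per voxel).
    def grid_memory_gb(resolution, fields=6, bytes_per_field=4):
        voxels = resolution ** 3
        return voxels * fields * bytes_per_field / (1024 ** 3)

    for res in (128, 256, 512):
        print(f"{res}^3 grid: ~{grid_memory_gb(res):.2f} GB")
    # 128^3: ~0.05 GB, 256^3: ~0.38 GB, 512^3: ~3.00 GB

So a 512 grid already overflows a 2 GB card before the solver's working buffers are counted, which matches the experience above.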

When there are graphics cards with 32 GB of memory I'll probably have a different opinion. :D


Rumour has it that the higher-end graphics cards get the chips from the centre of the large silicon wafers while the lower-end cards get the chips from the perimeter, but yes, cards in the same family tend to share the same chip architecture.

The vid on liquid cooling vs air cooling is, wow. The cooling rad is about the same size as the one in my car, and all those fans probably push more air too. Looks cool too! I wonder if you can put phosphorescent dyes in the liquid so it glows...

At this point in time Houdini can't see the second GPU on the GTX 690, and that's why I recommend the GTX 680 right now, with its single GPU.

When graphics cards have 32 GB of on-board memory, off-the-shelf motherboards might have 64 to 256 primary cores or more with half a terabyte of RAM, and we will still want more. Won't be long.

Too bad you won't be able to run these future systems in the EU. :P



Question: when you have two identical processors, each supporting, let's say, a max memory of 32 GB, can you have the sum of both as max memory (64 GB), or just 32 GB?

Processors used to talk to a system chipset containing a memory controller in order to access memory. These days processors have their own memory controller on the processor die itself, which cuts out the middleman of the chipset and makes access faster, but it complicates things with multiple processors since there's no longer one central place to go for memory.

To get at memory attached to the other processor, they talk to one another directly (Intel calls the link QPI; AMD calls it HyperTransport). This kind of setup is called ccNUMA. Accessing memory through another processor when necessary is a little slower, but you can still use all of the memory for a single task (like 64 GB if each processor has 32 GB attached). I say slower, but no slower than it would have been going through a chipset like it used to.
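
If you're curious how your own box is laid out, on Linux the per-node memory is visible in sysfs. A minimal sketch, assuming the standard /sys/devices/system/node layout (numactl --hardware reports the same thing):

    # Sum memory across NUMA nodes on Linux via sysfs.
    import glob
    import re

    total_kb = 0
    for path in sorted(glob.glob("/sys/devices/system/node/node*/meminfo")):
        with open(path) as f:
            match = re.search(r"MemTotal:\s+(\d+) kB", f.read())
        if match:
            kb = int(match.group(1))
            total_kb += kb
            print(f"{path.split('/')[-2]}: {kb / 1024 ** 2:.1f} GB")
    print(f"total addressable by one process: {total_kb / 1024 ** 2:.1f} GB")

On a single-socket machine you'll see one node; on a dual-socket ccNUMA box, two, with the total still usable by a single process.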


