Going to buy a new PC, recommendations for Houdini?



Hey guys, I'm going to buy a new PC and wanted to ask here for some suggestions.

My budget is USD $3,000 to $5,000 max.

My daily apps are Octane, Cinema 4D, ZBrush, and After Effects, and I'm currently learning Houdini; I want to do some heavy particle/volume-based animations.

I saw a friend's parts list on Amazon, which gave me some ideas:

1) A solid-state drive for the OS.
Samsung 850 EVO - 2TB - 2.5-Inch SATA III Internal SSD (MZ-75E2T0B/AM)

2) GeForce GTX 980 (what's better, 3x GTX 980 6GB or 1x Titan Z 12GB?)

3) Intel Xeon E5-2630 v2 2.60 GHz Processor - Socket FCLGA2011 BX80635E52630V2

4) Motherboard and RAM?

These specs are more for the Octane/C4D side. I'm not sure what Houdini needs, whether lots of RAM or a great video card, so any input will be appreciated.

Thanks!


I wouldn't go for the 980s or Titan Z personally; they're old tech with less memory than current offerings, and especially if you want to run OpenCL-accelerated Houdini sims, you don't want to run out of GPU memory or the sims crash. Get one GPU for now, and nothing aside from Octane is going to use more than one anyway...and make it a 1080. If you find your renders are taking too long, you can always buy a second card later. They don't scale linearly for rendering anyway, so the first GPU is always the best money spent.

You can find used Xeon v2/v3/v4 chips on eBay that have a ton of bang for the buck; I'd highly recommend that. Unless you physically damage the chips, you almost can't abuse a Xeon, thanks to their locked multipliers. I run a >250 core render farm in my home studio, and I have never seen what a Xeon's retail box actually looks like.

You'll want loads of RAM too, so you never have to worry about bottlenecking there either; you can compute whatever your imagination and CPU patience will allow, instead of also having to prune back because of memory.

If you grab a CPU off eBay for $800, a good X99 motherboard for $400, 64GB/128GB RAM for $500/$1,000, a GTX 1080 for $600, and let's call it $800 on hard drives/SSDs...you're at around $3,100-$3,600 depending on whether you go with 64GB or 128GB of RAM.

Case, Win 10 Pro, PSU, CPU cooler, blah blah, you're now between $3,500 and $4,100 no problem.

That should get you a beastly machine with >12 CPU cores, a cutting edge GPU with 8GB, 128GB RAM, and plenty of fast storage to fill up with lovely .bgeo files.
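The tally above can be sketched out quickly. CPU, motherboard, GPU, and storage prices are the poster's own estimates; the $450 line for case/OS/PSU/cooler is my assumed midpoint of his $400-$500 remainder, not a figure from the post:

```python
# Rough build-cost tally for the parts list above.
# The "case/OS/PSU/cooler" figure is an assumption (~midpoint of $400-$500).
parts = {
    "used Xeon (eBay)": 800,
    "X99 motherboard": 400,
    "GTX 1080": 600,
    "HDDs/SSDs": 800,
    "case/OS/PSU/cooler": 450,
}

for ram_gb, ram_price in ((64, 500), (128, 1000)):
    total = sum(parts.values()) + ram_price
    print(f"{ram_gb}GB build: ${total:,}")
```

This lands at $3,550 and $4,050, consistent with the "$3,500 to $4,100" range quoted above.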


Hi all, I have a similar query. My budget is below $2,500, and I am going with a 1070, but I am really confused as to what will be better: a Core i7 6800K or an Intel Xeon processor?

How does Houdini process its DOPs, POPs, fluids, etc.?

Is a Xeon better or a Core i7?

Lastly, I am really skeptical and confused about used processors. Will one give the same performance? Won't it break down sooner?


@Kardonn @marty @cgartashish thanks everybody for the detailed info, really useful tips. I also posted in an Octane forum (Octane is one of the main reasons for the new PC) and they don't recommend the Titan, since Octane divides its RAM and it would be a waste. For the 1080, they say to wait for Octane to get the CUDA 8 toolkit going; most of them recommend a couple of GTX 980 Tis.

On the processor, there's a battle between the i7 and the Xeon.

So I'll think it over these days, but I'll probably go with a solid-state drive for the OS, 128GB RAM, 3x GTX 980 Ti, an ASUS X99 Deluxe, and an i7 or Xeon.

By the way, how's Octane running in Houdini? I'll probably buy it for Houdini too; I'm using it in C4D right now and it's awesome.

Thanks again, really appreciated!


Definitely do Xeon over i7. Like I was saying, if you buy off eBay, for the same price as the 10-core i7 you can get an 18-core Xeon E5 v3. Similar applies down the price range too, with the 8C i7 and even the 6C i7; you're just never going to do better than used Xeons.

They absolutely won't break down any sooner for being used, especially if you're buying Xeon v3/v4 chips. They can't possibly be older than 1-2 years at this point, and Xeons (and CPUs in general) will always outlive their usefulness anyway. Plus, with a Xeon you can't overclock, so the absolute worst-case scenario is that they've been used exactly as Intel designed them to be over the last year or two before being sold.

As far as Octane goes, I've never used it in Houdini, but in regards to waiting for the CUDA 8 toolkit: I don't really understand why you'd get a 980 Ti over a 1080 when you can be 100% sure they're working on optimizing the software around the 1000-series cards right now. I think a triple 980 Ti setup is a lot of money to spend on tech that's already a version behind.

The 1080 is a superior card to the 980 Ti, not just in compute scores but also in power consumption. I'd get one 1080 for now, see how you find your render times in Octane, wait for their software to be refreshed around the new tech, and then add more later.

http://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/28



Hi,

Thanks for your suggestion. I have a few more questions though.

Does a Xeon support water cooling? Will I need it, or are they designed not to heat up too much? If I later upgrade to multiple Xeon processors, will the motherboard support multiple water coolers? Do they support high-end games too? Games are not my priority, but I do play them sometimes. And what is the max RAM supported by a Xeon?

Please suggest some server motherboards. Is there any huge difference between server and gaming motherboards? Will one support multiple GPUs?

Will a Corsair RM 1000 PSU be supported by this system?

Please help, and thanks again. :)


CPU GHz is king in computing simulations; I would not choose a 2.6GHz anything (laptop speed). You need at least a 4GHz CPU, IMHO. There are still a lot of single-core operations in VFX pipelines and the software you mention, so you want a fast CPU to chop through those operations quickly and get you to rendering/simming faster, which is where most multi-core operations occur. When you choose the CPU, you are choosing the motherboard options as well. If the Xeon does not support DDR4, don't get it. You want the ability to have a larger memory footprint, and DDR3 chips/motherboards are old technology at this point, typically bottlenecking at 32GB.
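The point about single-core operations is essentially Amdahl's law: if only a fraction p of a job parallelizes, n cores give a speedup of 1/((1-p) + p/n). A quick sketch (the 80% parallel fraction is an illustrative assumption, not a measured number for any of this software):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's law: p = parallel fraction of the work, n = core count."""
    return 1.0 / ((1.0 - p) + p / n)

# If 80% of a pipeline parallelizes, doubling cores from 8 to 16
# only buys ~20% more speed; the serial 20% dominates.
print(round(amdahl_speedup(0.8, 8), 2))   # 3.33
print(round(amdahl_speedup(0.8, 16), 2))  # 4.0
```

That diminishing return is why a faster clock on the serial portions can matter more than piling on cores.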


@Atom @Kardonn.

I have selected two CPUs.

1) Intel Xeon E5-2620 v4 - LGA2011-v3 socket processor for C612 chipset boards (2.10GHz, turbo up to 3.0GHz, 20MB cache, 85W, 8 cores, 16 threads).

2) Intel Core i7 6800K - LGA2011-v3 Broadwell-E socket processor (15MB cache, 3.4GHz up to 3.6GHz, 6 cores, 12 threads, 140W).

I will pair the processor with a GTX 1070 8GB graphics card and at least 64GB of RAM, and I will add more when I have some more savings.

Which one will be the better option in terms of simulation, modeling, sculpting, rendering, and all other 3D work? Does a Xeon support high-resolution 4K textures? Will IPR and other on-screen realtime displays work fine with the Xeon?

Does a Xeon processor support CPU water cooling? Or won't I require it?

Please suggest a motherboard with two sockets within a $400 budget.

One more thing: I found a used Intel Xeon E5-2670 (8 cores, 16 threads) that has been in use for 2 years, with no warranty. Should I go for it? Will it break down sooner or lag in performance for having been used for 2 years?

Please help.


I use dual Xeons on a machine at work, 2.3GHz, presenting 16 cores to the OS. I have an alternate dual-Xeon machine running at 2.6GHz with only 8 cores. These are both old (non-cylinder) Mac Pro-style machines running DDR3. The 8-core machine is almost as fast as the 16-core under load while rendering from C4D. So twice as many cores does not mean twice as much speed. The megahertz of the CPU is very, very important.

I was considering this machine as a nice VFX starting point, and then I was planning on adding more memory to it. I have not purchased it, but it met most of my requirements and is under $1,200. What I liked about it was that it is built on a third-party motherboard, not an OEM motherboard, which means overclocking is still possible down the road. Also watch out for the different flavors of 6700K chips; make sure to get the 4.0GHz-or-better chip, there is a difference. NOTE: the above machine is kind of weak on the power supply at only 500W, as far as adding more GPUs to it.

I believe you can add water cooling to most systems after the fact; however, make sure your case is set up from the start to accept a water-cooling radiator either on the top or back of the case. Check the dimensions of your case and of the product you decide on. Personally, I have not made the leap to water cooling yet and still use traditional fan cooling.
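A naive cores-times-clock model of those two machines shows how large the gap is between theoretical and observed scaling (clock and core figures are from the post above; the model deliberately ignores IPC, memory bandwidth, and serial stages, which is exactly the point):

```python
# "GHz-cores" as a crude aggregate-throughput proxy for the two
# dual-Xeon machines described above.
sixteen_core = 16 * 2.3  # 36.8 GHz-cores
eight_core = 8 * 2.6     # 20.8 GHz-cores

ratio = sixteen_core / eight_core
print(f"model predicts {ratio:.2f}x speedup; observed: near parity")
```

The model predicts the 16-core box should be roughly 1.77x faster, yet under a real C4D render load they were nearly tied, so most of that headroom is lost to non-parallel work.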



@Atom @Kardonn

I am really confused now. Can you please tell me whether Houdini can really use my 8 Xeon cores? Is Houdini engineered to use multiple cores and multi-threading?

Not only in rendering: will extra cores benefit me in POP solvers, DOP solvers, etc.? Will there be lag in the viewport due to the Xeon's low clock speed?

I also use RealFlow, Maya, Blender, Unreal, and lots of other software, so do all these 3D applications use multiple cores efficiently?

And the final question: what would you choose?

(One Intel Core i7 6800K with 6 cores, 3.4 GHz, 4.0 GHz OC, 15MB cache)

OR

(Two Xeon E5-2620 v4s with 8 cores each, 2.10 GHz, 3.0 GHz max turbo, 20MB cache; that is, 16 cores and 32 threads.)


Quote:

Is Houdini engineered to use multi cores and multi-threading?

Just like other software, parts of Houdini are single-threaded and parts are multi-threaded. Of all the software I have used, Houdini seems to be the most multi-threaded; for instance, VEX is multi-threaded but Python is not. So even within a single Houdini scene, you can author it to be single-threaded or multi-threaded.

If you compare the 6800K with the 6700K, you'll see what I mean about GHz being important. The 6800K's 3.4GHz 6-core is slower than the 6700K's 4GHz 4-core. This website claims only a 1% advantage, but it supports my point that GHz is more important than core count on prosumer machines. The 6700K limits you to 64GB, but the 6800K goes up to 128GB if the motherboard supports it.

I would always choose the fastest CPU.


@Atom 

Thanks a lot for your prompt and great replies. I have one more question though. Just now I learned that multi-threading and multiple cores are quite different things.

I think I have been misusing terms. I ran a few simulations and POP solvers and monitored my hardware usage. Every time the solver ran, all the cores of my CPU went up to full 100% usage. So doesn't this mean that Houdini is using all 6 cores of my CPU? Won't adding more cores increase the speed of the simulations?

More cores = more work done in a given time?

Please let me know. I am really naive about all this, and I don't want to end up wasting my savings on a system that isn't a bang for every buck for my job.


The chips are almost the same. Taking the turbo single-thread speed into account, the 6800K is about 10% slower single-threaded but about 30% faster multi-core; the 6800K would seem to be better for Houdini work.
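Those figures roughly fall out of the spec-sheet clocks. This is a crude clock-times-cores comparison that ignores IPC differences and real turbo behavior; the clock and core numbers are Intel's listed specs:

```python
# Crude single-thread and all-core comparisons of the i7-6800K vs i7-6700K,
# using listed base and max-turbo clocks (ignores IPC and turbo binning).
c6700k = {"cores": 4, "base": 4.0, "turbo": 4.2}
c6800k = {"cores": 6, "base": 3.4, "turbo": 3.6}

single = c6800k["turbo"] / c6700k["turbo"]  # ~0.86, i.e. ~10-14% slower
multi = (c6800k["cores"] * c6800k["base"]) / (
    c6700k["cores"] * c6700k["base"]
)  # 20.4 / 16.0, i.e. ~27-28% faster all-core
print(f"6800K single-thread: {single:.2f}x, all-core: {multi:.2f}x")
```

So the 6800K trades a modest single-thread deficit for a meaningful all-core advantage, which suits threaded sim and render work.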


Xeons, especially v3 and v4, are very fast single-threaded despite their listed GHz. My render/sim systems are all dual Xeon 2699 v3 at 72 threads, and despite the chips being listed as "2.3 GHz", they run at 2.78 GHz when fully loaded and clock up to 3.8 GHz on lightly threaded loads.

My advice: when you're already blowing $3.5K on everything in the system other than the CPU (not even counting license costs here), there's no sense in skimping on the CPU. Buy the best used Xeon v3/v4 you can find within a reasonable price range on eBay and run that. An 8C i7 is over $1,000; a 16C Xeon can be bought used for a similar price.

Xeons will water-cool just like any i7 chip; it's the same socket and mounting technology whether Xeon or i7-E. Like Atom said, you just need to make sure your case will mount a radiator. 99% of new cases will gladly.

And I assure you, Houdini along with any good renderer will use everything you throw at it:

[attached screenshot: gto6hd9.png]


A 2TB SSD for the OS is total overkill in my opinion; that money would be better spent elsewhere. Surely 500GB should be enough?

With Windows, you can install applications on other drives, you know.

In terms of finding a machine, another option is to look for a refurbished HP Z machine on eBay. You can get some good deals on a Z820 or the smaller Z620. Both of these machines will take dual GPUs.

You will have a solid machine, and one that will keep its value when you want to sell it.

One thing I dislike about custom-built PCs is how fast they lose value.

In terms of GPUs, sure, a 1080 will be more future-proof. However, in the UK it is twice the price of a 980 Ti, so it really depends on where you want to spend the money and what the CUDA-per-money ratio is. What do the Octane guys say? What's the best bang for the buck?

@Atom @Kardonn

I am unsure whether to buy a used Xeon processor, say one that is 2 years used. It is way cheaper and more affordable for me; I can have something like 14 cores at an affordable price. Will a used CPU not perform well? Are there any chances of it breaking down sooner under heavy loads?

I am adding a link to eBay.com. Please suggest a Xeon in good condition or a trusted seller, if you can. I am from India, so please check if they deliver to India. I can spend almost $600 on my CPU.

http://www.ebay.com/sch/i.html?_from=R40&_sacat=0&Socket%20Type=LGA%202011%2Dv3&_nkw=Xeon E5&_dcat=164&rt=nc&LH_ItemCondition=4&_trksid=p2045573.m1684

Please help, guys. Thanks :)



I am personally looking at getting a couple of first-gen E5-2670s. You can get two of these for about £130. A couple of years back, they used to cost thousands. This is just for my home machine, so I won't be using it all day long.

I found this link on the net, and this is the type of build I would aim for if I was custom-building a PC.

http://talcikdemovicova.com/32-thread-build-on-budget/

In regards to buying GPUs for Octane, there was an important announcement from OTOY yesterday regarding version 4.

https://home.otoy.com/otoy-and-imagination-unveil-breakthrough-powervr-ray-tracing-platform/

This PowerVR Wizard is a piece of hardware that may replace the GPU for rendering, at least for Octane.

https://imgtec.com/blog/real-time-ray-tracing-on-powervr-gr6500-ces-2016/

I would therefore not spend too much money on GPUs at this point. From their press release:

Quote:

OctaneRender 4 with Imagination’s PowerVR Ray Tracing hardware delivers a 10x increase in ray tracing performance per watt over the best GPGPU ray tracing implementations pioneered by OTOY’s OctaneRender and Brigade Engine. This performance boost combined with the incredibly low energy consumption of the PowerVR GPU means the chip traces rays orders of magnitude more efficiently than a workstation GPU.
