
gridMoth

Upcoming Build- future AMD vs Intel

22 posts in this topic

Okay, so let me start off by saying I know it is way too early to speak hypothetically about this stuff... but I'm just looking for some general knowledge/opinions for a build.

For the past few months I've been targeting a dual Xeon E5-2690 build. Then came news of Naples from AMD.

Is the general consensus that Intel is still going to perform better and be much more stable... say, running Arch Linux for example? All those cores at a presumably friendlier price point are tempting... just curious what someone with more experience thinks about this moving forward, as I'm looking to invest in an expensive workstation.

Like I said, I know we shouldn't really speculate, but if anyone has a bit of insight into how Intel crunches data for simulations and whatnot, or into firmware stability/reliability versus whatever may be attractive about all the cores Naples is expected to bring, I'd love to hear some thoughts on these architectures for running Houdini and for workstations overall.

 

Much appreciated, and apologies if this isn't in the right thread.


Naples base clock rate of 1.4GHz, turbo to 2.8GHz seems pretty slow.

1 hour ago, marty said:

Naples base clock rate of 1.4GHz, turbo to 2.8GHz seems pretty slow.

Oh wow, not sure how I glossed over that. For some reason I was under the impression that information was not out yet. I know higher-core-count server chips typically have slower clock speeds, but that is not at all what I thought Naples was going to be. Might be okay for a headless render box one day, but... back to trusting my gut about Intel being the wise decision, I suppose.

If others read this and can add any useful information about building a machine in 2017, a bit of perspective on holding off for a while until Intel drops prices, OR thoughts on whether the E5-2690 v4 may not be the best choice, please do. I could really use some SideFX opinions, so to speak.

The 2690 seemed to me like a good balance or trade-off of clock speed (an all-core boost of 3.2GHz), core count, and thermal output for the money.

[using this as a partial reference] http://bit.ly/2na7Gks


Haven't looked into the current line-ups, but you want faster single-core speed over slower multi-core, as ops aren't perfectly threaded and fast single-core speed will improve every bit of Houdini, including the GUI interaction; the quick Amdahl's law sketch below shows why. Memory-bound ops can be affected by multi-core transfers, so very large sims may need a different setup to generalized 3D work. Naples may be super-wicked for that.
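
The intuition is Amdahl's law: if only a fraction p of an op is actually threaded, n cores can never push the speedup past 1/((1-p) + p/n), so per-core clock keeps mattering. A quick illustrative Python sketch (the p values are made up purely to show the shape of the curve):

    # Amdahl's law: best-case speedup from n cores when only a
    # fraction p of the work runs in parallel.
    def speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    for p in (0.50, 0.90, 0.99):
        print(f"p={p:.2f}: 16 cores -> {speedup(p, 16):4.1f}x, "
              f"64 cores -> {speedup(p, 64):4.1f}x")
    # p=0.50: 16 cores ->  1.9x, 64 cores ->  2.0x
    # p=0.90: 16 cores ->  6.4x, 64 cores ->  8.8x
    # p=0.99: 16 cores -> 13.9x, 64 cores -> 39.3x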

Rendering is the part that improves most with multi-core, but that should be weighed against using Redshift instead of Mantra, and against OpenCL starting to play more of a role in SOPs and DOPs too, which will run very well on the GPU.

Then there are the newer 512-bit vector ops (AVX-512) in the high-end Intel chips. If Houdini can use those, it will also change the equation.
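
For what it's worth, on Linux you can check which of these instruction sets a chip actually exposes by scanning /proc/cpuinfo; a minimal Python sketch (Linux-only; avx512f is the baseline AVX-512 feature flag):

    # List which SIMD instruction sets the CPU advertises (Linux only).
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
                break  # every core reports the same flags
    for isa in ("sse2", "avx", "avx2", "avx512f"):
        print(f"{isa}: {'yes' if isa in flags else 'no'}")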


While it's great that AMD is competing again, going for a Zen-based chip is more of a cost decision than a performance one. Hopefully it'll put pressure on Intel to drop prices a bit over the long haul.

8 hours ago, marty said:

Then there are the newer 512-bit vector ops (AVX-512) in the high-end Intel chips. If Houdini can use those, it will also change the equation.

Must research this... thanks! Appreciate ya taking the time to respond, truly.

It's all so conflicting in many ways. My goal is to have both GPU and CPU goodness. I have around a 10k budget, and I absolutely can't afford to go wrong here. Ideally I'd like to be able to render in either Arnold OR Redshift, depending on the project, time constraints, etc.

I sorta thought Houdini was edging closer to more ops really taking advantage of multi-threading... but I see your point about OpenCL and the GPU moving forward.

In some respects, I want computationally heavier algorithmic stuff to become fluid in my workflow. I want R&D-type stuff to get results fast and not be bogged down, yet still have most of the generalized perks you mention from higher single-core clock rates.

So, I won't keep buggin' ya... but to clarify, given what I've mentioned: do you think it may be more beneficial to go with a good i7 and load up on GPUs instead of running dual Xeons plus GPUs?

I had planned on running dual E5-2690s, starting off with two 1080 Tis and one 980 Ti... Not that I won't do my own homework, but if you have a good link or two on Houdini's future development and/or OpenCL etc. that you think I should read, please drop me a line. Thanks again.

6 hours ago, malexander said:

While it's great that AMD is competing again, going for a Zen-based chip is more of a cost decision than a performance one. Hopefully it'll put pressure on Intel to drop prices a bit over the long haul.

For sure. I'm afraid Intel will drop prices the day after I purchase from them... but a win in the long run, hopefully. Thanks for the reply.


Those Xeons are very nice - better than the i7s I could find for total GHz across all cores (only ~0.2 GHz slower per core), with more than double the cache, and they can access way more RAM. Two processors will work very well for nicely threaded stuff. Also, the PCIe lanes may be better suited to running lots of GPUs, but I'm not sure on that.

Check out the Compiled Block nodes to see where Houdini is going. Your rig will probably last 2-4 years, so that's H16-H20, and SideFX simply keep optimising their tools. You can't lose.

AVX-512 may become optional, like the current 'Houdini supports MMX and Streaming SIMD (SSE2) where present'. I assume it will first be supported by the Intel OpenCL compiler, so that part will automatically use it. Avatar Ren, above, knows much more about this though :)

Guide to Automatic Vectorization with Intel AVX-512 Instructions in Knights Landing Processors

https://colfaxresearch.com/knl-avx512/

Compiling for the Intel® Xeon Phi™ Processor and the Intel® Advanced Vector Extensions 512 ISA

https://software.intel.com/en-us/articles/compiling-for-the-intel-xeon-phi-processor-and-the-intel-avx-512-isa


I was also looking at those chips; suspiciously cheap for what you get... I would like to know more about this as well. I'm currently running a (somewhat) outdated dual Xeon build; it runs great but has slow core clock speeds. I'm thinking of converting it to a dedicated render/cache slave, building a new faster box, and also getting into the GPU rendering game. My current mobo (Supermicro MBD-X10DRL-I) only has one PCIe x16 slot, and I would want to run a minimum of 2x GTX 1080 Ti cards, maybe more, but still have a dual Xeon system with tons of RAM.

Traditionally I think that is a somewhat contradictory ask because it is a server build, and admittedly I really don't know that much about GPU rendering or how to plan a build for it, but I would like the best of both worlds: a dual Xeon 16-24 core machine with 2+ high-end cards for GPU rendering as well. Anyone have any advice on that type of build? Windows, btw.


Thanks for the links & nuggets @marty. Good stuff. Reading up on the compiled blocks now, and looking forward to diving into AVX-512 more this weekend. Will also look into the PCIe lanes; I think you may be partially right there. Latency-type stuff is another area I really want to look at. Feeling pretty good about the direction I'm headed after your input.

I have so many more questions that I hope you and other smart folks in the forum will help me with in the coming weeks/months. But, I've done enough pestering for one day I think.

Super grateful :D.


Naples is targeting supercomputing and massively parallel applications (Mantra and other renderers are definitely in this category). It will be especially useful for OpenCL and CUDA with 128 PCI Express lanes; that's crazy. The Xeon E5-2600 series has 40 PCI Express lanes.

It may or may not make a good workstation platform, depending on how much of what you do relies on a single processor core. These days that's less and less common, but some applications still lag in various areas. It will likely make a good render node and simulation node if the pricing is as competitive as the Ryzen products.


Looking at the consumer chips, they have a dual-channel DDR4 interface, which is faster than the 4-core Haswell and older Intel CPUs (~43GB/s vs. 25GB/s) but slower than the newer Skylake+ CPUs (50+GB/s). The quad-channel socket 2011 and Xeon chips leave them in the dust at 75+GB/s. That could be a potential bottleneck for very large sims, which need a lot of memory bandwidth.

I think this is probably the weak link in the Ryzen design. A 16-thread CPU needs a lot of memory bandwidth, and it could be starved by a dual-channel interface. The server chip doesn't have this limitation, but it also takes a clock-speed hit.
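
If anyone wants a rough read on their own box, a crude NumPy copy test gives a ballpark figure (a sketch only; a proper STREAM benchmark is the real measure, and interpreter overhead shaves a little off):

    import time
    import numpy as np

    # Copy 512 MB of float64 repeatedly; each pass reads one buffer
    # and writes the other, so count both directions.
    a = np.ones(512 * 1024 * 1024 // 8)
    b = np.empty_like(a)
    reps = 10
    t0 = time.perf_counter()
    for _ in range(reps):
        np.copyto(b, a)
    dt = time.perf_counter() - t0
    print(f"~{reps * 2 * a.nbytes / dt / 1e9:.1f} GB/s effective copy bandwidth")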


Wanted to get back real quick to say thanks for the replies @malexander & @lukeiamyourfather. Have a couple of follow-up questions myself, but will have to revisit this in a couple of days. Might be able to narrow my focus anyhow after marty's question.

18 hours ago, marty said:

@lukeiamyourfather I'm trying to figure out how the 128 PCIe lanes help OpenCL and CUDA. Do you mean you can run more GPU cards at 16x?

Yep, you could run a bunch of them in 16x mode and have some lanes left over for a couple of PCIe-based SSDs holding large datasets. This is particularly important as AMD GPUs now have virtual addressing that could target data on SSDs directly (though I'm unsure if that's currently supported for external SSDs, or just the TB one built into the new Radeon Pro SSG cards). Usually a few lanes are taken up by Ethernet, the chipset, and slow storage as well, so 40 can go really quick; the rough lane budget below shows how.
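
For a sense of scale, a back-of-the-envelope lane budget (the device counts here are made up for illustration):

    # Rough PCIe lane budget: why 40 lanes disappear fast.
    gpus = 3 * 16   # three GPUs at x16
    nvme = 2 * 4    # two NVMe SSDs at x4
    misc = 8        # NIC, chipset uplink, slow storage (rough guess)
    needed = gpus + nvme + misc
    print(f"needed: {needed} lanes vs 40 (Xeon E5) or 128 (Naples)")
    # needed: 64 lanes vs 40 (Xeon E5) or 128 (Naples)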


Exactly what Mark said. The extra PCI Express lanes are also going to be useful for high-performance storage clusters with NVMe arrays, or machines with tons of drives for things like ZFS. All around, it's going to be a very useful platform for CGI production if the pricing is competitive.


Nice! Has anyone tested OpenCL on the CPU on AMD chips yet? Everyone goes silent when we ask them to try it out :o
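
If anyone wants to try, a quick way to see what OpenCL devices a box exposes is pyopencl (this assumes pyopencl and a CPU OpenCL runtime are installed; without a CPU runtime only the GPUs will show up):

    import pyopencl as cl

    # Enumerate every OpenCL platform/device the installed drivers expose.
    for platform in cl.get_platforms():
        for device in platform.get_devices():
            kind = cl.device_type.to_string(device.type)
            print(f"{platform.name}: {device.name} [{kind}]")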

5 hours ago, merlino said:

It's strange that 1700x and 1800x have the exact same performance in this case, isn't it?

The CPU benchmark shows them with varying performance. The OpenCL benchmark shows them as all being similar but this is to be expected as the OpenCL benchmark uses the GPU, not the CPU. So if all three were tested with the same GPU they should all come out roughly the same.

On 4/6/2017 at 6:53 PM, lukeiamyourfather said:

The CPU benchmark shows them with varying performance. The OpenCL benchmark shows them as all being similar but this is to be expected as the OpenCL benchmark uses the GPU, not the CPU. So if all three were tested with the same GPU they should all come out roughly the same.

Shouldn't it be using the built-in OpenCL of the CPU? I used the i7's OpenCL capabilities in some tests and it worked well :)

But now that you say that, I'll go check whether the latest CPUs have OpenCL capabilities (Y)

