Is AMD potentially risky? (Threadripper)


kemijo    1

I've been planning a higher-end home system for a while with an i7, but now we have i9 vs Threadripper. Official benchmarks are out today, and it seems silly to consider Intel for a HEDT machine aimed primarily at content creation (gaming secondary).

For, say, the 10-core i9 vs the 16-core AMD, is this a no-brainer? Are there any reasons why going with AMD at this point is a bad idea (besides Intel's slight edge in single-core speed, etc.)? I'm concerned that AMD may have unforeseen caveats down the road, e.g., software designed to take advantage of an Intel-exclusive instruction set. For a more concrete example, this article about the upcoming RenderMan 22:

http://www.cgchannel.com/2017/08/pixar-unveils-renderman-22-and-renderman-xpu/

...states that "Other features due in RenderMan 22 include 'fast vectorized OSL shader network evaluation on Intel scalable-SIMD CPUs'," referring to the new Xeon Scalable CPUs. This may or may not be relevant since I'm not considering a Xeon, but is it possible that software from common vendors will simply not be supported on AMD?

Thanks for any insights!

marty    574

That's most likely referring to AVX-512, so currently you need a Xeon:

https://en.wikipedia.org/wiki/AVX-512#CPUs_with_AVX-512
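If you're curious whether a given CPU exposes it, here's a minimal sketch that just reads the kernel's feature flags (Linux only; on a 2017 Ryzen/Threadripper the list should come back empty):

```python
# Minimal sketch: list the AVX-512 feature flags the kernel reports (Linux only).
with open("/proc/cpuinfo") as f:
    flags = set()
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
            break  # every core reports the same flags, so one entry is enough

# "avx512f" is the foundation subset; avx512dq, avx512bw, etc. vary by CPU.
avx512 = sorted(flag for flag in flags if flag.startswith("avx512"))
print("AVX-512 subsets:", avx512 if avx512 else "none")
```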

 

Embree uses it straight away, and IIRC V-Ray uses Embree:

https://embree.github.io/

 

There's a good discussion here about AMD missing AVX-512, though it's a bit too general/theoretical for us in VFX:

https://forums.anandtech.com/threads/will-amd-support-avx-512-and-intel-tsx.2508094/

 

Oh, RenderMan uses Embree; I'm guessing that's why Pixar are pushing Intel in their press release, and my second guess is that it's because Animal Logic's Glimpse developer is now working on RenderMan.

https://rmanwiki.pixar.com/display/REN/Legal+Notice

Atom    516

The old AMD chips work fine. The new Ryzen works fine. I haven't encountered any problems with AMD CPUs.

 

marty    574

Reading around about Threadripper is pretty interesting. Because it's essentially two Ryzen 1800 dies glued together, there's an option to turn NUMA off in the BIOS, which may make for minor performance gains/losses in simulations.

On 8/13/2017 at 11:02 PM, marty said:

Reading around about Threadripper is pretty interesting. Because it's essentially two Ryzen 1800 dies glued together, there's an option to turn NUMA off in the BIOS, which may make for minor performance gains/losses in simulations.

Can you expand on the topic?

DaJuice    22

For what it's worth I will be putting together my Threadripper build this weekend. There aren't any benchmark scenes for Houdini, are there?

marty    574

No standard test scenes, but I would bet it's roughly half the speed of a 1080 Ti for OpenCL (read: very good!), so I would test it as a substitute GPU for Houdini, with heaps more RAM and far more flexibility. It should be about twice as fast as the Ryzen 1700 overall.
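If you want to confirm the CPU actually shows up as an OpenCL device before benchmarking it, here's a minimal sketch; it assumes the pyopencl package and a CPU OpenCL runtime are installed:

```python
# Sketch: enumerate OpenCL platforms/devices so the Threadripper CPU can be
# compared against the GPU. Assumes pyopencl plus a CPU OpenCL runtime.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices():
        print(f"{platform.name}: {device.name}")
        print(f"  compute units: {device.max_compute_units}")
        print(f"  global memory: {device.global_mem_size / 2**30:.1f} GB")
```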

malexander    294

The Zen cores are arranged in modules of 4. Intel cores are generally paired together with shared L2, though I think the latest iteration of Skylake-E (the 7xx0 series) removes that. It's pretty common practice to have cores share some resources to keep power requirements down.

Ryzen has two of those modules on a die with a memory controller. Threadripper has 4 modules with 2 memory controllers, and that's where NUMA (non-uniform memory access) and proper OS scheduling come into play. The first 2 modules have access to one bank of memory, and the other 2 modules have access to the other bank. If a core from one module needs memory from the other module's bank, there's an extra hop to access it. That's the "non-uniform" part: RAM latency varies based on its physical location.

Accessing RAM is already pretty slow, which is why CPUs have large L3 caches, and use SMT (aka Hyperthreading(tm)) to hide the RAM access latency. Thread stalled on a memory request? Switch to the other one that's parked on the core and continue crunching numbers. The OS scheduler is also responsible for keeping threads on one module or the other if possible, so these days NUMA doesn't have quite the hit that it used to on the older multi-socket servers. That's why sometimes a software or firmware update is needed for new CPUs.
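If you want to experiment with the scheduling side of this, you can pin a process to one die's cores so its allocations stay in the local memory bank. A rough sketch, Linux only; the core numbering below is an assumption, so check numactl --hardware for your actual layout:

```python
# Sketch: restrict the current process to one NUMA node's cores (Linux only).
# The core IDs are hypothetical; query `numactl --hardware` for your topology.
import os

NODE0_CORES = set(range(0, 16))  # assumed: die 0's cores plus SMT siblings
os.sched_setaffinity(0, NODE0_CORES)  # pid 0 means "this process"
print("running on cores:", sorted(os.sched_getaffinity(0)))
```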

kemijo    1

Forgot to say thanks for the info, Marty and Atom! The AVX-512 point, Marty, is quite interesting. I suppose at worst there'd be a speed hit when running on an AMD chip that doesn't have it.

Victor, stop drinking haterade! :P

DaJuice, please share your findings if you can! I haven't seen any benchmarks of Threadripper with Houdini yet.

malexander or anyone else: any idea what difference RAM speed makes in most DCC apps? Say 2400 MHz vs 3200 MHz? The price difference is usually pretty significant; is it worth it? (I may have asked this in another thread, but I don't remember a clear-cut answer, so apologies for the repeat.)

marty    574
17 hours ago, kemijo said:

any idea what difference RAM speed makes in most DCC apps?

I would say not much. I have a dual Xeon @ 3.33 GHz running Ubuntu and a Ryzen 1700 @ 3.0 GHz; RAM speeds are 1 GHz and 2 GHz respectively, and they work at about the same speed. Although it's 12 Xeon cores vs 8 Ryzen cores...

malexander    294
On 2017/08/19 at 5:01 AM, kemijo said:

malexander or anyone else: any idea what difference RAM speed makes in most DCC apps? Say 2400 MHz vs 3200 MHz? The price difference is usually pretty significant; is it worth it? (I may have asked this in another thread, but I don't remember a clear-cut answer, so apologies for the repeat.)

Higher RAM speeds do increase performance slightly. It's worth getting if the markup is under 10%, but that's rarely the case. It's also harder to get fast RAM in the large DIMMs you'd need to populate all the slots (4-8). If you have to choose, always go for more RAM over faster RAM.

You can see the effect of RAM speed, and NUMA, in the first page of this article: http://www.anandtech.com/show/11726/retesting-amd-ryzen-threadrippers-game-mode-halving-cores-for-more-performance
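For a back-of-envelope sense of the ceiling involved: peak DDR4 bandwidth is just transfer rate times 8 bytes times channel count (quad channel assumed for Threadripper here). Real workloads rarely get near the peak, which is why the application-level gains stay small:

```python
# Back-of-envelope sketch: peak DDR4 bandwidth = MT/s * 8 bytes * channels.
# Quad channel is an assumption (Threadripper); dual-channel Ryzen would halve it.
def peak_bandwidth_gbs(mt_per_s: int, channels: int = 4) -> float:
    return mt_per_s * 1e6 * 8 * channels / 1e9

print(f"DDR4-2400: {peak_bandwidth_gbs(2400):.1f} GB/s")  # ~76.8
print(f"DDR4-3200: {peak_bandwidth_gbs(3200):.1f} GB/s")  # ~102.4, a ~33% higher ceiling
```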

jojoodforce    1

Very interesting topic! I was thinking the same about putting together a Ryzen build. And by the way, @marty, if you want to change machines I'd be happy to buy your old one, which is 3-4 years ahead of mine. :))))

DaJuice    22

Hmm, so I've got the system up and running and everything is mostly peachy, except for Houdini. Messing around, I get a black screen of death fairly quickly: for example, adding a grid and a Mountain SOP, then playing with the height value. I haven't had much luck narrowing it down. I've tried different NVIDIA drivers, and I tried swapping in an old Quadro 4000 and reinstalling the proper drivers for that. Nothing is overclocked atm, and the system seems perfectly stable otherwise running RealBench, Prime95, and FurMark.

There are still some rough spots in the motherboard BIOS (MSI X399 Gaming Pro Carbon AC), but I'm not sure that would account for the hard crashes I'm getting in Houdini. I'll try installing some other 3D apps and see whether this behavior is unique to H.

malexander    294

Someone found that turning off "Core Performance Boost" in the MSI X399 BIOS fixed crashes for them. Having no experience with that motherboard, though, I don't know where that setting is.

DaJuice    22

Mark, thank you, that might have fixed the issue. I only had about 10 minutes to play with it, but I was not able to make Houdini crash with Core Performance Boost disabled in the BIOS, whereas before I could reproduce the crash in about 30 seconds. Do you recall where you read about this? I'd like a bit more info before submitting a ticket to MSI.

 

Marty, I believe it was cooking in general (so not related to display drivers as I initially thought) that was causing the crashes, not a specific node. Like I said, changing parameters on the Mountain SOP would bring the system down, and I don't think that node is OpenCL-accelerated. But yeah, EPYC and TR are pretty closely related, so that might be something to watch out for.

malexander    294
38 minutes ago, DaJuice said:

 Do you recall where you read about this? I'd like to have a bit more info before submitting a ticket to MSI.

Another Houdini user submitted a bug to SideFX and managed to resolve it himself, and the bug happened to catch my eye. Similar setup to yours.

