kemijo Posted August 11, 2017

I've been planning a higher-end home system for a while with an i7 in mind, but now it's i9 vs Threadripper. Official benchmarks are out today, and it seems silly to consider Intel for a HEDT machine aimed primarily at content creation (gaming secondary). Comparing, say, the 10-core i9 to the 16-core AMD, is this a no-brainer? Are there any reasons going with AMD at this point would be a bad idea (besides Intel's slight edge in single-core speed, etc.)? I'm concerned that AMD may have unforeseen caveats down the road, e.g., software designed to take advantage of an Intel-exclusive instruction set. For a concrete example, this article about the upcoming RenderMan 22: http://www.cgchannel.com/2017/08/pixar-unveils-renderman-22-and-renderman-xpu/ states that "Other features due in RenderMan 22 include 'fast vectorized OSL shader network evaluation on Intel scalable-SIMD CPUs'," referring to the new Xeon Scalable CPUs. This may or may not be relevant since I'm not considering a Xeon, but is it possible that software from common vendors simply won't be supported on AMD? Thanks for any insights!
Guest tar Posted August 11, 2017

That's most likely referring to AVX-512, so you currently need a Xeon: https://en.wikipedia.org/wiki/AVX-512#CPUs_with_AVX-512 Embree uses it straight away, and IIRC V-Ray uses Embree: https://embree.github.io/ Good discussion here about AMD missing AVX-512, though a bit too general/theoretical for us in VFX: https://forums.anandtech.com/threads/will-amd-support-avx-512-and-intel-tsx.2508094/ Oh, RenderMan uses Embree too; that's why Pixar is pushing Intel in their press release, I'm guessing, and I'm double-guessing that this is because Animal Logic's Glimpse developer now works on RenderMan. https://rmanwiki.pixar.com/display/REN/Legal+Notice
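If you want to check whether a particular box actually exposes AVX-512, a quick sketch (assuming a Linux-style space-separated flags string, like the `flags` line in `/proc/cpuinfo`; the sample strings below are abridged, not full real outputs):

```python
def has_avx512(flags: str) -> bool:
    """Return True if any AVX-512 feature flag (avx512f, avx512dq, ...) is present."""
    return any(f.startswith("avx512") for f in flags.split())

# Abridged example flag strings for illustration:
xeon_scalable_flags = "fpu sse sse2 avx avx2 avx512f avx512dq avx512cd"
threadripper_flags  = "fpu sse sse2 avx avx2 sha_ni"

print(has_avx512(xeon_scalable_flags))  # True
print(has_avx512(threadripper_flags))   # False
```

On a real Linux machine you'd feed it the contents of `/proc/cpuinfo` instead of the sample strings. Code paths like Embree's usually fall back to AVX2 when AVX-512 is absent, so the practical cost is speed, not compatibility.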
Victor Posted August 12, 2017

"Is AMD potentially risky? (Threadripper)" Yes, this is a terrible idea... -Victor PS: I may have a slight bias
Atom Posted August 12, 2017

The old AMD chips work fine. The new Ryzen works fine. I haven't run into any problems with AMD CPUs.
Guest tar Posted August 13, 2017

Reading around about Threadripper is pretty interesting: because it is two Ryzen 1800 chips glued together, there is an option to turn off NUMA in the BIOS, which may make for some minor performance gains/losses in simulations.
berniebernie Posted August 16, 2017

On 8/13/2017 at 11:02 PM, marty said: "Reading around about Threadripper is pretty interesting: because it is two Ryzen 1800 chips glued together, there is an option to turn off NUMA in the BIOS, which may make for some minor performance gains/losses in simulations."

Can you expand on the topic?
Guest tar Posted August 16, 2017

@berniebernie It gets more interesting: the 1700/1800 are themselves two 4-core modules glued together, so Threadripper is actually four of those modules glued together! https://forum.level1techs.com/t/why-ryzens-performance-is-so-different-cache-and-compute-complexes/113950
DaJuice Posted August 16, 2017

For what it's worth, I will be putting together my Threadripper build this weekend. There aren't any benchmark scenes for Houdini, are there?
Guest tar Posted August 16, 2017

No standard test scenes, but I would bet it's roughly half the speed of a 1080 Ti for OpenCL (read: very good!), so I would be testing it as a substitute GPU for Houdini, with heaps more RAM and way more flexibility. It should be about twice as fast as the Ryzen 1700 overall.
malexander Posted August 16, 2017

The Zen cores are arranged in modules of 4. Intel cores are generally paired together with shared L2, though I think the latest iteration of Skylake-E removes that (7xx0 series). It's pretty common practice to have cores share some resources to keep power requirements down.

Ryzen has two of those modules on a die with a memory controller. Threadripper has 4 modules with 2 memory controllers, and that's where NUMA (non-uniform memory access) and proper OS scheduling come into play. The first 2 modules have access to one bank of memory, and the other 2 modules have access to the other bank. If a core from one module needs memory from the other module's bank, there's an extra hop to access it. That's the "non-uniform" part, since RAM latency can vary based on its physical location.

Accessing RAM is already pretty slow, which is why CPUs have large L3 caches and use SMT (aka Hyperthreading(tm)) to hide the RAM access latency. Thread stalled on a memory request? Switch to the other one that's parked on the core and continue crunching numbers. The OS scheduler is also responsible for keeping threads on one module or the other if possible, so these days NUMA doesn't have quite the hit that it used to on the older multi-socket servers. That's why sometimes a software or firmware update is needed for new CPUs.
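If the scheduler isn't keeping a job local and you want to force the issue, you can pin a process to a subset of cores yourself. A minimal sketch using only the Python standard library (Linux-only; `os.sched_setaffinity` doesn't exist on Windows/macOS, and the core-to-die numbering here is an assumption, so check `lscpu` for the real topology on your box):

```python
import os

# Restrict the current process (pid 0) to a few of the cores it is already
# allowed on, so its threads and their memory allocations stay together.
allowed = sorted(os.sched_getaffinity(0))   # cores we may currently run on
one_module = set(allowed[:4])               # pretend the first 4 form one module
os.sched_setaffinity(0, one_module)         # pin ourselves to that subset
print(sorted(os.sched_getaffinity(0)))
```

Tools like `numactl` do the same thing (plus memory-policy control) from the command line; this is just the in-process version.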
kemijo Posted August 19, 2017

Forgot to say, thanks for the info, Marty and Atom! The AVX-512 stuff, Marty, quite interesting. I suppose at worst there'd be a speed hit when running on an AMD chip that doesn't have it. Victor, stop drinking haterade! DaJuice, please share your findings if you can! I haven't seen any benchmarks with Threadripper and Houdini as yet. malexander or anyone else, any idea what difference RAM speed makes for most DCC apps? Say 2400MHz vs 3200MHz? The price difference is usually pretty significant; is it worth it? (I may have asked this in another thread, but I don't remember a clear-cut answer, so apologies for the repeat.)
Guest tar Posted August 20, 2017

17 hours ago, kemijo said: "any idea what difference RAM speeds would have on most DCC apps?"

I would say not much. I've a dual Xeon @ 3.33 GHz running Ubuntu and a Ryzen 1700 @ 3.0 GHz; RAM speeds are 1 GHz and 2 GHz respectively, and they work at about the same speed. Although it's 12 Xeon cores vs 8 Ryzen cores...
malexander Posted August 21, 2017

On 2017/08/19 at 5:01 AM, kemijo said: "any idea what difference RAM speed makes for most DCC apps? Say 2400MHz vs 3200MHz? The price difference is usually pretty significant; is it worth it?"

Higher RAM speeds do increase performance slightly. It's worth getting if the markup on the price is <10%, but that's rarely the case. It's also harder to get fast RAM in the large DIMM packs you'd need to populate all the slots (4-8). If you have to choose, always go for more RAM over faster RAM. You can see the effect of RAM speed, and NUMA, on the first page of this article: http://www.anandtech.com/show/11726/retesting-amd-ryzen-threadrippers-game-mode-halving-cores-for-more-performance
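If you want a rough feel for what your RAM config delivers, a crude stdlib-only probe is to time a copy of a buffer much larger than the L3 cache, so RAM rather than cache dominates. Absolute numbers from Python are not a serious benchmark, but comparing two RAM configurations on the same box is still informative:

```python
import time

N = 64 * 1024 * 1024          # 64 MB, well past a typical 16-32 MB L3 cache
src = bytearray(N)

t0 = time.perf_counter()
dst = bytes(src)               # one full read pass + one full write pass
dt = time.perf_counter() - t0

# 2*N bytes moved (read + write), reported in GB/s
print(f"~{2 * N / dt / 1e9:.2f} GB/s effective")
```

Run it a few times and take the best result; the first run can be skewed by page faults from the fresh allocation.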
jojoodforce Posted August 21, 2017

Very interesting topic! I was thinking the same about putting together a Ryzen build. And by the way @marty, if you want to change machines I will be happy to buy your old ones that are 3-4 years ahead of mine :))))
DaJuice Posted August 21, 2017

Hmm, so I've got the system up and running and everything is mostly peachy except for Houdini. Messing around, I get a black screen of death fairly quickly; for example, adding a grid and a Mountain SOP and then playing with the height value. I haven't had much luck narrowing it down. Tried different nvidia drivers, tried swapping in an old Quadro 4000 and reinstalling the proper drivers for that. Nothing is overclocked atm, and the system seems perfectly stable otherwise running RealBench, Prime95, and FurMark. There are still some rough spots with the motherboard BIOS (MSI X399 Gaming Pro Carbon AC), but I'm not sure that would account for the hard crashes I'm getting with Houdini. I will try getting some other 3D apps installed and see if this behavior is unique to H.
malexander Posted August 21, 2017

Someone found that turning off "Core Performance Boost" in the MSI X399 BIOS fixed crashes for them. Having no experience with that motherboard, I don't know where that setting is, though.
Guest tar Posted August 21, 2017

@DaJuice can you check which nodes cause the errors? It's a long shot, but it may be something to do with OpenCL, as the EPYC processor did have a few issues with OCL. Not sure how related EPYC is to TR, though. https://community.amd.com/thread/218879
DaJuice Posted August 21, 2017

Mark, thank you, that might have fixed the issue. I only had about 10 minutes to play with it, but I was not able to make Houdini crash with Core Performance Boost disabled in the BIOS, whereas before I was able to reproduce the crash in about 30 seconds. Do you recall where you read about this? I'd like to have a bit more info before submitting a ticket to MSI. Marty, I believe it was cooking in general (so not related to display drivers like I initially thought) that was causing the crashes, not a specific node. Like I said, changing parameters on the Mountain SOP would bring the system down, and I don't think that node is OpenCL-accelerated. But yeah, EPYC and TR are pretty closely related, so that might be something to watch out for.
malexander Posted August 21, 2017

38 minutes ago, DaJuice said: "Do you recall where you read about this? I'd like to have a bit more info before submitting a ticket to MSI."

Another Houdini user submitted a bug to SideFX and managed to resolve it himself, and the bug happened to catch my eye. Similar setup to yours.
DaJuice Posted August 22, 2017

I see. Thank you again, I haven't experienced any more crashes.