shawn_kearney Posted November 14, 2016 (Author)
Is it my imagination, or is OpenCL significantly faster on CentOS than on Windows? On Windows my M4000 seemed only marginally faster than the CPU, but on CentOS the difference is much more dramatic.
dimovfx Posted November 14, 2016
Could you share any test numbers? I remember seeing a similar Windows vs. Linux sim comparison a while ago where Linux was faster, but I can't find it now.
shawn_kearney Posted November 14, 2016 (Author)
In an attempt to fully immerse myself, I didn't want to fall back on Windows when things got tough, so I don't have a Windows partition. I don't know whether the CPU is running slower or the GPU is running faster, but it feels like the latter. If the CPU were running the sim as much slower as the GPU seems to be running faster, I think I'd have concluded that Houdini on Linux is just SUPER slow with FLIP. I've run a few FLIP sims on the CPU that I previously ran on Windows and they didn't feel any slower. Like I said, on Windows my GPU didn't seem a lot faster; only with a huge number of particles did it matter at all, and even then only marginally. With CentOS, my GPU seems almost 50-100% faster than the CPU. I did notice that CPU cores were still active with OpenCL enabled, and that my GPU was also being fully utilized. I don't *think* I had the Intel (Xeon) OpenCL CPU runtime installed on Windows, so if CentOS installs it by default, maybe that's what's going on? Though I thought Houdini could use only one OpenCL device at a time, so I'm assuming those cores are being used somewhere else.
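For what it's worth, here's a minimal sketch of how I'd check which OpenCL platforms and devices the drivers actually expose. It assumes the third-party pyopencl package is installed (it's not part of Houdini):

import pyopencl as cl

# List every OpenCL platform and device the installed drivers expose.
for platform in cl.get_platforms():
    print("Platform: {} ({})".format(platform.name, platform.vendor))
    for device in platform.get_devices():
        print("  Device: {} [{}], {} MB global memory".format(
            device.name,
            cl.device_type.to_string(device.type),
            device.global_mem_size // (1024 ** 2)))

If an Intel CPU platform shows up here on CentOS but not on Windows, that would explain the extra busy cores.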
art3mis Posted November 14, 2016
Isn't it possible that Linux is simply more efficient with memory usage? Some actual benchmarks are needed.
lukeiamyourfather Posted November 14, 2016
3 hours ago, art3mis said: Isn't it possible that Linux is simply more efficient with memory usage?
Small gains, yes. Huge gains like 50-100% are very unlikely unless there's a fundamental problem with the implementation on Windows (like mental ray...). It's more likely that the simulation parameters are the cause of the difference in performance. OpenCL isn't used for the whole simulation, just parts of it.
art3mis Posted November 15, 2016
If this can be confirmed it will hasten my switch to Linux :)
shawn_kearney Posted November 15, 2016 (Author, edited)
6 hours ago, lukeiamyourfather said: Small gains, yes. Huge gains like 50-100% are very unlikely unless there's a fundamental problem with the implementation on Windows (like mental ray...). It's more likely that the simulation parameters are the cause of the difference in performance. OpenCL isn't used for the whole simulation, just parts of it.
I'm thinking this must be the case, but honestly, under Windows I never had much reason to switch on OpenCL, as it consistently performed about the same as the CPU across the board. I agree, better benchmarks are needed. Unfortunately I can't really provide them at the moment. Maybe someone else could benchmark this file on both platforms? It's nothing special, just a FLIP tank, a vector field and a rigid cube. floaty.hiplc
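If it helps, something along these lines run from hython would give rough per-frame timings for the attached file, once with OpenCL on and once off. The DOP network path and frame range below are just placeholders for whatever the scene actually uses:

import time
import hou

# Load the attached scene and time how long each sim frame takes to cook.
hou.hipFile.load("floaty.hiplc")
dopnet = hou.node("/obj/flip_sim/dopnet")  # placeholder path; point at the real DOP network

start = time.time()
for frame in range(1, 68):
    hou.setFrame(frame)
    dopnet.cook(force=True)  # force the sim to cook up to this frame
    print("frame %d done at %.1f s" % (frame, time.time() - start))
print("total: %.1f s" % (time.time() - start))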
shawn_kearney Posted November 22, 2016 (Author)
Another test, and I'm again getting a 2x speed increase that I never, ever saw on Windows 10 Pro. Previously I stopped the sim at around 12 minutes, 30-some frames in; this time, with OpenCL enabled, I was at 67 frames. I've never seen this on Windows, ever. Maybe SideFX optimized OpenCL in the time between my installing on Windows and on Linux? I don't know; I've been running 15.5 Indie on both. Aside from Blender being a bit faster overall on Linux than on Windows (which is expected), I'm not seeing the same kind of CUDA performance relative to CPU in Cycles, so this would appear to be OpenCL- or Houdini-specific.
Kardonn Posted November 25, 2016 (edited)
From the RedShift devs; it very likely applies to everything, including the OpenCL performance gain on Linux you're seeing.
Quote: One important difference between GTX GPUs and Titan/Quadro/Tesla GPUs is TCC driver availability. TCC means "Tesla Compute Cluster". It is a special driver developed by NVidia for Windows. It bypasses the Windows Display Driver Model (WDDM) and allows the GPU to communicate with the CPU at greater speeds. The drawback of TCC is that, when you enable it, the GPU becomes 'invisible' to Windows and 3D apps (such as Maya, Houdini, etc.) and becomes exclusive to CUDA applications, like Redshift. Only Quadros, Teslas and Titan GPUs can enable TCC. The GeForce GTX cards cannot use it. As mentioned above, TCC is only useful for Windows. Linux doesn't need it because the Linux display driver doesn't suffer from the latencies typically associated with WDDM. In other words, CPU-GPU communication on Linux is, by default, faster than on Windows (with WDDM) across all NVidia GPUs, be it GTX cards or Quadro/Tesla/Titan.
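If you want to see which driver model your card is currently using on Windows, nvidia-smi's detailed query prints it; a rough sketch (the exact label in the output can vary by driver version, so treat the string match as an assumption):

import subprocess

# Dump the per-GPU "Driver Model" lines from nvidia-smi's detailed query.
# Only meaningful on Windows; Linux has no WDDM/TCC distinction.
output = subprocess.check_output(["nvidia-smi", "-q"]).decode("utf-8", "replace")
for line in output.splitlines():
    stripped = line.strip()
    if stripped.startswith("Product Name") or "Driver Model" in stripped:
        print(stripped)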
shawn_kearney Posted November 25, 2016 (Author)
@Kardonn Well, that's interesting, and it would make sense why many people don't see it, since most people aren't into wasting money on Quadros like I am. It does, however, kind of throw a giant monkey wrench into the whole GTX vs. Quadro debate.
Guest tar Posted November 25, 2016
39 minutes ago, shawn_kearney said: Well, that's interesting, and it would make sense why many people don't see it, since most people aren't into wasting money on Quadros like I am. It does, however, kind of throw a giant monkey wrench into the whole GTX vs. Quadro debate.
Quadros will give you driver stability - the GeForce ones are mucked around with for each AAA+ game release.
shawn_kearney Posted November 25, 2016 (Author, edited)
I know. That's a big reason I chose the card I did. Also, in my particular machine there's a handle on the inside of the side panel that physically interferes with top-mounted auxiliary power connectors and larger-than-standard PCI cards. But a lot of folks use GTX just fine, too.
Guest tar Posted November 25, 2016
Thinking about it, the OpenCL driver code probably isn't changed at all between releases; only OpenGL needs to be optimised for games, AFAIK. So a powerful GeForce card for OpenCL and a Quadro for the viewport is probably the very best mix.
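If someone tries that mix, Houdini can be pointed at a specific OpenCL device via environment variables; here's a sketch of launching it that way (the HOUDINI_OCL_* names are from the docs as I remember them, and the device number is just an example, so double-check both for your setup):

import os
import subprocess

# Launch Houdini with the OpenCL device pinned to an NVIDIA GPU.
# The variables must be set before Houdini starts (houdini.env works too).
env = dict(os.environ)
env["HOUDINI_OCL_DEVICETYPE"] = "GPU"
env["HOUDINI_OCL_VENDOR"] = "NVIDIA Corporation"   # OpenCL platform vendor string
env["HOUDINI_OCL_DEVICENUMBER"] = "1"              # e.g. the GeForce, if the Quadro is device 0

subprocess.call(["houdini"], env=env)  # assumes "houdini" is on the PATH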
Kardonn Posted November 25, 2016
I can say for sure that both my Titan X and GTX 1080 workstations drive a Houdini viewport at 4K and 5K like no one's business.
Guest tar Posted November 25, 2016
20 minutes ago, Kardonn said: I can say for sure that both my Titan X and GTX 1080 workstations drive a Houdini viewport at 4K and 5K like no one's business.
Bet it does, but does it ever break Houdini when you upgrade the driver?
Kardonn Posted November 25, 2016
Nope, never any viewport issues. BUT for about two weeks I thought RedShift had terrible out-of-core performance on my dual GTX 1080 machine, and that turned out to be completely driver related.
shawn_kearney Posted November 25, 2016 (Author)
3 hours ago, marty said: Thinking about it, the OpenCL driver code probably isn't changed at all between releases; only OpenGL needs to be optimised for games, AFAIK. So a powerful GeForce card for OpenCL and a Quadro for the viewport is probably the very best mix.
Is this possible on Windows? I think there's only one Linux driver, but in the past you had to hack stuff together to get the Quadro driver to see the GTX.
Guest tar Posted November 25, 2016
Not sure - I don't have a Quadro to test with. I'd imagine it should, though.
shawn_kearney Posted November 25, 2016 (Author, edited)
1 hour ago, marty said: Not sure - I don't have a Quadro to test with. I'd imagine it should, though.
It seems the Quadro drivers do in fact support both now, but I'm not about to spend $200 just to find out, especially when I'd need to buy a Dell Precision PSU to support them, and those are about the most expensive desktop power supplies ever made.
Guest tar Posted November 25, 2016 Share Posted November 25, 2016 yeah - building your own machine is best value Quote Link to comment Share on other sites More sharing options...