Madio (March 6, 2013)

Hello! I have an old 1 GB GTX 460 and it doesn't perform very well. I've been searching for answers about which video card is best for Houdini, but I've found very different opinions. My goal is to pick the best card in the $700-1000 range, and here are some options:

- FirePro W7000
- Quadro K4000 (just came out)
- GeForce Titan / GTX 670-690
- Radeon 7970-7990

I also want to mention the 3 GB GTX 580, which is very popular and seems to perform well in every 3D package for its price, but it's difficult to find one in stores now. I've often heard that we should stick with Nvidia cards and ignore ATI because of its crappy drivers, but some Radeon specs beat even high-end Quadros. I'm a bit confused about choosing the right card and really hope for your help!
xmetallicx (March 14, 2013)

http://www.sidefx.com/index.php?option=com_content&task=view&id=415&Itemid=269

Officially, workstation cards are supported. But if I had to pick, I would go with Nvidia cards because of the better driver support. As they say, consumer cards are used at your own risk; however, I'm running a GeForce GTX 660 Ti at home to play with Houdini and haven't had a problem so far, though I may have to use older drivers, which can be more stable than the latest ones. Of course, if you are in production, stability is what you want, and workstation cards will fit your needs. Just my two cents.
malexander (March 18, 2013)

SESI only recommends workstation cards because AMD's and Nvidia's driver support for workstation applications is prioritized in their pro drivers. As a recent example, Houdini freezes up with Windows 8 and the Nvidia 310, 313, and 314 drivers. Nvidia patched this quickly in their Quadro driver (311.09), but the fix for the GeForce driver won't be included until 319 at the earliest. So you take some risks with driver stability and support on a non-pro card.

The best advice I have for Radeon and GeForce users is to find a driver that works and stick with it as long as possible. Don't upgrade without a good reason, and be prepared to roll back if you do.
kurtw (March 24, 2013)

I was thinking of getting a Titan, which concentrates a lot of processing power in one GPU, versus some of the higher-end GeForce cards that are basically two GPUs SLI'ed on one card. I'm not sure whether Houdini's OpenCL can take advantage of multiple cards. I just upgraded my system to a six-core i7 with 64 GB of memory and SSDs dedicated to cache files and system swap, and now my GeForce GTX 470 is a bit dated; it's faster for me to run my simulations on the CPU than with OpenCL. So my big limitation is the GPU, and the first choice seems to be the new Quadro K4000... But the Titan has 6 GB of RAM and is within the price point, and I'm thinking I'd like the extra memory to run a larger simulation grid. It may be worth the tradeoff to go with an ultra-high-end "gamer" card. I haven't had issues running GeForce at home (I've been running Quadro 4000-6000 series cards at work for the last while).
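If you want to see what each card actually exposes to OpenCL before buying, a short script can list every device and its memory. This is a minimal sketch using the third-party pyopencl package (not something Houdini ships with), so treat it as an illustration rather than anything Houdini-specific:

```python
# Enumerate OpenCL platforms/devices and report each device's global memory.
# Requires the third-party pyopencl package (pip install pyopencl).
import pyopencl as cl

for platform in cl.get_platforms():
    print(platform.name)
    for device in platform.get_devices():
        mem_gb = device.global_mem_size / (1024.0 ** 3)
        print("  %s: %.1f GB global memory" % (device.name, mem_gb))
```

The global memory figure is what matters for simulation grid size, since an OpenCL sim has to fit on the one device running it.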
lukeiamyourfather (March 25, 2013)

Houdini 12.5 could be different, but previous versions use an environment variable to define which OpenCL device to use. Different instances can use different devices (but only one device per instance, so no combining devices on a single simulation). I personally wouldn't want a gaming card at work; at home it would be fine for tinkering. The Quadro and the FirePro are both great cards. The FirePro has more memory, which would be handy for OpenCL simulations.
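For what it's worth, here's a sketch of how that per-instance device selection might look. I'm assuming the variable Houdini reads is HOUDINI_OCL_DEVICENUMBER; check the environment variable list in your version's documentation before relying on it:

```python
# Launch two Houdini instances, each pinned to a different OpenCL device.
# HOUDINI_OCL_DEVICENUMBER is an assumption here -- confirm the exact
# variable name in your Houdini version's documentation.
import os
import subprocess

for device_number in ("0", "1"):
    env = os.environ.copy()
    env["HOUDINI_OCL_DEVICENUMBER"] = device_number
    subprocess.Popen(["houdini"], env=env)
```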
Zidhjian (March 29, 2013)

Where do you turn on the option to use GPUs to simulate Pyro effects and so on? Or does the program detect it automatically?
lukeiamyourfather (March 29, 2013)

"Where do you turn on the option to use GPUs to simulate Pyro effects and so on? Or does the program detect it automatically?"

There's an option to enable it on the solver. For example, on the Pyro solver it says "Use OpenCL" on the Advanced tab. Some simulations will see no benefit and others will see a huge benefit, so try it both ways. Very large simulations probably won't work with OpenCL (there isn't enough memory on today's video cards).
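If you'd rather flip it from a script than dig through the parameter pane, the toggle can be set with Houdini's Python module. The node path and the internal parameter name ("opencl") below are assumptions for illustration; middle-click the parameter label on your solver to confirm the real name:

```python
# Toggle "Use OpenCL" on a pyro solver via Houdini's Python shell.
# The node path and the parameter name "opencl" are assumed -- verify
# them against your own scene before using this.
import hou

solver = hou.node("/obj/pyro_sim/pyrosolver1")
parm = solver.parm("opencl") if solver else None
if parm is not None:
    parm.set(1)  # 1 enables OpenCL, 0 disables it
else:
    print("Couldn't find an 'opencl' parameter at that path.")
```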