
laptop gpu - ati 7970m or nvidia 680m


dvfedele


Getting a laptop for Maya and Houdini work, and trying to find the card that offers the best driver support/performance.

I can't afford a Quadro and haven't put any of those cards in the running.

I've been reading that compute on the 600-series Nvidia cards has been crippled compared to the 500 series. Floating-point performance is way down. (Is it even worth it to find a 580M, or should I stick with a 680M?)

I'm struggling now between the ATI 7970M and the Nvidia GTX 680M.

I know the 680M is a significant price premium over the 7970M, but I chose to ignore that, as I need the best possible performance/stability from the GPU.

GTX 680M

pros

driver support

performance - edges out 7970m in most benchmarks

can get a card with 4GB of RAM, allowing for more complex sims/GPU work

cons

low compute

ATI 7970M

pros

compute/OpenGL performance seems to edge out the 680M

price - significantly cheaper than the 680M

cons

driver support?

no CUDA support (not sure if this matters, since there's OpenCL)

I'm an Nvidia fanboy - no experience with ATI in 3D packages

Thanks for your advice!
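As a rough sanity check on the 4GB VRAM point above, here's a back-of-the-envelope sketch in Python. The grid resolutions and channel counts are hypothetical examples, not figures from any particular solver:

```python
# Rough VRAM estimate for a dense fluid-sim voxel grid.
# Resolutions and channel counts below are hypothetical examples.

def grid_bytes(res, channels, bytes_per_value=4):
    """Memory for a cubic res^3 grid of float32 values with the
    given number of channels (e.g. velocity xyz + density = 4)."""
    return res ** 3 * channels * bytes_per_value

for res in (128, 256, 512):
    gib = grid_bytes(res, channels=4) / 2 ** 30
    print(f"{res}^3 grid, 4 channels: {gib:.2f} GiB")
```

At 512^3 with four float32 channels, the grid alone takes 2 GiB, so a 2GB card leaves little headroom once the viewport and other buffers are counted - which is the appeal of the 4GB option.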

Nvidia has been crippling the OpenGL (and CUDA) performance on their gaming cards for a number of generations.

I'm not sure I'd go so far as to call them "crippled", but Nvidia definitely trades off features to stay within the cards' power/thermal limits. Games tend to be fragment-processing heavy, so higher clockspeeds are more important than the Quadro's ability to process two triangles per clock vs. the GeForce's one. But when you increase the clockspeed, something has to give in order to stay within those limits, so the dual-primitive processing was disabled on the GeForce cards.

The GTX Titan is an interesting case in point - it lets the user enable FP64 processing at 1/3 the FP32 rate, but doing so drops the clockspeed. It's off by default, with FP64 running at 1/24 the FP32 rate and a higher base clockspeed.
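To put rough numbers on that trade-off, here's a back-of-the-envelope Python sketch. The core count is the GTX Titan's published 2688; the clockspeeds are illustrative rather than exact, and peak FLOPS assumes 2 FLOPs per core per clock (one fused multiply-add):

```python
# Theoretical peak throughput from core count, clockspeed, and FP64:FP32 ratio.
# Assumes 2 FLOPs per core per clock (one fused multiply-add).
# Clockspeeds below are illustrative, not exact Titan base/boost figures.

def peak_gflops(cores, clock_mhz, fp64_ratio=1.0):
    """Peak GFLOPS at the given FP64:FP32 rate ratio."""
    return cores * 2 * (clock_mhz / 1000) * fp64_ratio

CORES = 2688  # GTX Titan CUDA cores

# Default mode: higher clock, FP64 capped at 1/24 the FP32 rate.
fp64_default = peak_gflops(CORES, 876, fp64_ratio=1 / 24)
# Toggled mode: FP64 at 1/3 the FP32 rate, but a lower clock.
fp64_enabled = peak_gflops(CORES, 837, fp64_ratio=1 / 3)

print(f"FP64 off: {fp64_default:7.1f} GFLOPS")
print(f"FP64 on : {fp64_enabled:7.1f} GFLOPS")
```

Despite the ~4% clock drop in this sketch, the 1/3-rate mode yields roughly 7-8x the FP64 throughput, which is why the toggle matters to compute users and not to gamers.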

Granted, there are other features that are simply not supported by consumer cards, such as quad-buffered stereo and dual copy engines, but at some point there needs to be some differentiation between products beyond the extra VRAM, warranty and support in order to justify the price. There's no point in supporting features in a consumer driver that consumers will rarely, if ever, use. You're also much more likely to get fixes for driver bugs Houdini encounters sooner in the Quadro drivers than in the GeForce drivers.

I can understand why people get miffed that the features differ despite the same GPU silicon being used in both, but taping out two different GPUs tailored to each market would increase the cost of each. I don't think anyone wants that. Really, it's no more nefarious than having multiple versions of software at different price points.


Nvidia intentionally cripples their gaming cards for non-gaming tasks. I'm not talking about the nuances between their gaming and workstation products. Here's an article talking about it, at least from the GPGPU perspective.

http://www.theinquirer.net/inquirer/review/2162193/nvidias-gtx680-thrashed-amds-mid-range-radeon-hd-7870-gpu-compute

If you search around other forums and sites, there are plenty of reports of significant performance degradation after the GTX 200 series, both in GPGPU computing (CUDA and OpenCL) and in viewport performance for professional applications. There are folks saying their GeForce 9600 GT from five years ago is faster in the viewport than their new GTX 560.


The situation between AMD and Nvidia got very interesting around the AMD 6000->7000 series and the Nvidia 500->600 series transitions. AMD ditched the VLIW4/5 architecture that held them back in compute tasks and went to GCN for the 7000 series, which is an excellent balance between graphics and compute. This made AMD's offerings a lot more like Nvidia's 480 and 580 series cards, greatly improving their compute performance.

Ironically, Nvidia went the other way and instead reduced power usage, something they took a lot of flak over in the 400/500 series (they've been pushing performance per watt for the past few years). The Kepler architecture (600 series) is more like the 460/560 architecture than the compute-heavy 480/580, which is why it doesn't do so well in compute tasks. The Titan, on the other hand, is more the evolution of the 580 architecture.

So I suppose it could be said that their change in focus is "crippling", but it does make some sense to de-prioritize compute in gaming cards. I was, however, expecting the new Quadro series to be based on the big Kepler GPU, but only the K6000 will be. It seems they expect GPGPU users to pick up a Tesla card instead. I'm not sure I agree with that, nor with Nvidia's somewhat haphazard support of OpenCL, which remains very much secondary to CUDA.

One big change between the GTX 200 series and the 400 series was a big shuffle of the compute and texturing resources, with the 400 series losing quite a few texture units to make room for more complicated GPGPU logic. Which applications were seeing their performance degraded? (Just curious - my searches seemed to lack the proper keywords.)

On the other hand, AMD's issues have always been in their drivers. They've gotten much, much better, but it'll take a while to win over those burnt by the unstable days (myself included). They've also been much more forthcoming with driver fixes. All in all, it's become a lot more competitive over the past year.

