Yon

Two 1070 or One 1080ti?


They're about the same price.

People on the NVIDIA Discord are convinced SLI is dead and causes stuttering, though they were talking about gaming, not rendering. Does stuttering apply to rendering?

Looking to invest in some cards this season. What do you think is the best decision, given CAD $1,000 and a motherboard with 3-way SLI support?

Since a 1070 has 8 GB and a 1080 Ti has 11 GB, going with two 1070s would put me 5 GB ahead. Can someone vouch for SLI? It's the only reason I wouldn't go with the 1070s at this point.

I'm looking to render large scenes, so I know I'd need at least 8 GB of VRAM. Anything else you can share on the topic would help; I haven't studied GPUs before.

 

thanks


I don't think you can just add up 2 × 8 = 16 and say that's 5 more than 11. As far as I know, with multiple cards you're limited to the memory of a single card, so 2 × 8 is really 8, not 16. I could be wrong, though.

This could be relevant to your decision:

https://www.redshift3d.com/support/faq

When Redshift uses multiple GPUs, is their memory combined?

Unfortunately, no. Say you have an 8GB GPU and a 12GB GPU installed on your computer. The total available memory will not be 20GB, i.e. the 8GB GPU will not be able to use the 12GB GPU's memory. This is a limitation of current GPU technology and not related to Redshift in particular. We, therefore, recommend users combine videocards that are fairly "equal" in terms of memory capacity.

Having said that, Redshift supports "out of core" rendering which helps with the memory usage of videocards that don't have enough VRAM (see below). This means that, in contrast with other GPU renderers, the largest possible scene you'll be able to render in the above scenario won't be limited by the system's weakest GPU.
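The FAQ's point can be sketched as a toy calculation (hypothetical numbers, not a Redshift API): each card only sees its own VRAM, so whether a scene fits entirely in VRAM is checked per card, never against the sum.

```python
# Toy model of per-GPU memory in a multi-GPU renderer (illustrative only).
# Each card can only address its own VRAM; capacities never combine.

def fits_in_vram(scene_gb, card_vram_gb):
    """Return a per-card report: True if the scene fits in that card's VRAM."""
    return {f"GPU{i} ({vram}GB)": scene_gb <= vram
            for i, vram in enumerate(card_vram_gb)}

# Two 8GB 1070s vs one 11GB 1080 Ti, with a 10GB scene working set:
print(fits_in_vram(10, [8, 8]))   # neither 1070 fits -> both go out of core
print(fits_in_vram(10, [11]))     # the 1080 Ti holds the scene in VRAM
```

So two 8 GB cards give you more total rendering speed, but never a single 16 GB pool.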


Is that how it works? How could SLI ever be practical, then?


I see, thanks. I talked to others who confirmed GPU memory doesn't sum, but they said it goes with the highest card. I'm going to ask on the Redshift forum to confirm which it is.


Also take into account that the viewport, with all the bells and whistles turned on, can take an enormous amount of VRAM; e.g. full-screen UHD eats 5+ GB, IIRC.


Yeah, someone else said the same thing; he uses a 1080 for look-dev on his workstation and 1070s for his render slave.


The advantage of SLI would be for multitasking: you could then assign which card runs which session of whatever software is open.


Always go for one higher-end card over two mid-level cards. The compatibility issues and restrictions of SLI, not to mention the VRAM issue, make two cards an option only if you're pairing two top-end graphics cards.


Sorry to revive an old post, but I thought some additional comments might help others looking into GPU rendering.

SLI has nothing to do with GPU rendering; it's a gaming feature. Still, a board with three-way SLI support means your setup can handle three full-size GPUs, so SLI support can be a good gauge of a motherboard's capabilities.

Redshift can use each GPU to its full potential memory-wise. Memory isn't combined, but it also isn't constrained by your lowest card. Other GPU renderers are often less accommodating than Redshift.

Consider that for GPU rendering you want one card to drive your display and one or more other cards exclusively for rendering. I use a modest 750 Ti for my display and a pair of 1080s to render.

Lastly, get as much RAM as you can. You definitely want more system RAM than the total combined VRAM of your GPUs, and make sure your system and CPU have enough PCIe lanes to run all of your GPUs. Some CPUs have limited PCIe lanes, which can force multiple GPUs to run at reduced link widths and throttle performance.
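You can check the lane situation yourself. As a sketch: `nvidia-smi` can report each card's current PCIe link width, and a few lines of Python can flag cards that dropped below a chosen threshold. The query fields are real `nvidia-smi` options, but the sample output below is made up for illustration.

```python
# Sketch: flag GPUs running at a reduced PCIe link width.
# Real data would come from something like:
#   nvidia-smi --query-gpu=name,pcie.link.width.current --format=csv,noheader
# The sample lines below are hypothetical.

def throttled_gpus(csv_lines, min_width=8):
    """Return names of GPUs whose current link width is below min_width lanes."""
    flagged = []
    for line in csv_lines:
        name, width = (field.strip() for field in line.split(","))
        if int(width) < min_width:
            flagged.append(name)
    return flagged

sample = ["GeForce GTX 1080, 16", "GeForce GTX 1080, 4"]  # hypothetical output
print(throttled_gpus(sample))  # -> ['GeForce GTX 1080'] (the x4 card)
```

Whether a reduced width actually hurts depends on the renderer; Redshift mostly transfers data up front, so it matters most for out-of-core scenes.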


I will also revive this post for the sake of clearing up a lot of misinformation here regarding GPU rendering. As Redshift's site points out, several cost-effective GPUs are almost always better than fewer expensive ones.

1. It is true that SLI has nothing to do with GPU rendering. SLI can actually impede GPU rendering with renderers such as Redshift.

2. You ARE actually limited to the memory of your lowest-memory card when loading textures into VRAM for GPU rendering (in Redshift). Not to worry, though, because Redshift's out-of-core tech lets you borrow from system RAM for what doesn't fit in VRAM.

3. Driving your display with a lesser card is an outdated concept that doesn't really work to your advantage in any way (other than the fact that another card is in the mix helping render). Some people do use a Quadro as the display card because they need 10-bit support. Look-dev when GPU rendering is something you want to do on a decent card, especially with the real-time features that can now be taken advantage of.

4. It's not good advice to spend as much money as possible on RAM, unless you plan to use some insanely large maps (and that is unwise anyway). RAM is expensive, as you likely know, and that money is better spent on your GPUs.


It depends which route you plan on going with video cards. I think it's wiser to have a nice balance of RAM and GPU power; that only changes once you get heavy into rendering, especially GPU rendering, and personally I'm not there yet.

On 05/05/2018 at 4:27 AM, Chosen Idea said:

2. You ARE actually limited to the memory of your lowest memory card when loading textures into memory when GPU rendering (Redshift). Not to worry though, because at least Redshift's out of core tech allows you to borrow from your RAM for what you don't have in your VRAM.

That's not completely correct; there's no clamping or limiting to the lowest-memory card. Each card uses what it has, and each card renders its own buckets independently. If one card doesn't have enough VRAM to load textures, for example, its bucket will render out of core while the other buckets continue rendering at full speed.

If one card doesn't have enough VRAM to load the scene at all, then the render will crash and you'll have to disable that card.
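That per-bucket independence can be sketched with a toy scheduler (entirely hypothetical timings, not Redshift internals): cards pull buckets from a shared queue, and a card that has to go out of core just takes longer per bucket without holding the others back.

```python
# Toy model: each GPU pulls buckets from a shared queue and renders them
# independently. A card that must fetch textures out of core takes longer
# per bucket, but it never slows the other cards down.

def render_time(num_buckets, gpus, in_core_cost=1.0, out_of_core_cost=3.0):
    """gpus: list of booleans, True if that card fits the textures in VRAM.
    Greedy simulation: each bucket goes to the card that frees up first."""
    finish = [0.0] * len(gpus)
    for _ in range(num_buckets):
        i = finish.index(min(finish))          # first card to become free
        finish[i] += in_core_cost if gpus[i] else out_of_core_cost
    return max(finish)                          # wall-clock time for the frame

# 12 buckets on one in-core card plus one out-of-core card:
print(render_time(12, [True, False]))   # faster than the in-core card alone
print(render_time(12, [True]))
```

Note the out-of-core card still reduces total render time; it contributes whatever throughput it can rather than dragging the fast card down to its pace.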

 

