
Two 1070 or One 1080ti?


Yon


As they are about the same price. 

People on the Nvidia Discord are convinced SLI is dead and causes stuttering, though they were talking about gaming, not rendering. Does stuttering apply to rendering?

Looking to invest in some cards this season. What is your opinion on the best decision to make, given 1K CAD and a mobo with 3-way SLI?

Since the 1070 has 8 GB and the 1080 Ti has 11 GB, if I went with two 1070s I would be +5 GB. Can someone vouch for SLI? It's the only reason I wouldn't go with the 1070s at this point.

Looking to render large scenes, so I know at least 8 GB of VRAM would be needed. Anything else you can share on the topic would help; I haven't studied GPUs before.

 

thanks


I don't think you can just add up 2x8 = 16 and say that's 5 more than 11. I thought only the biggest card of the lot counts, so the 2x8 is really 8, not 16. I could be wrong though.

This could be relevant to your decision:

https://www.redshift3d.com/support/faq

When Redshift uses multiple GPUs, is their memory combined?

Unfortunately, no. Say you have an 8GB GPU and a 12GB GPU installed on your computer. The total available memory will not be 20GB, i.e. the 8GB GPU will not be able to use the 12GB GPU's memory. This is a limitation of current GPU technology and not related to Redshift in particular. We, therefore, recommend users combine videocards that are fairly "equal" in terms of memory capacity.

Having said that, Redshift supports "out of core" rendering which helps with the memory usage of videocards that don't have enough VRAM (see below). This means that, in contrast with other GPU renderers, the largest possible scene you'll be able to render in the above scenario won't be limited by the system's weakest GPU.



Sorry to revive an old post but thought some additional comment might help others looking into GPU rendering.

SLI has nothing to do with GPU rendering; it's a gaming feature. Still, a board with three-way SLI support means your setup can handle three full-size GPUs, so SLI support can be a good gauge of a mainboard's capabilities.

Redshift can use each GPU to its full potential memory-wise. Memory isn't combined, but it also isn't constrained by your lowest card. Other GPU renderers are often less accommodating than Redshift.
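
If you want to sanity-check this on your own machine, you can query the per-card totals yourself and see that each GPU reports its own separate pool. Below is a minimal sketch, assuming the pynvml Python bindings for NVML are installed (pip install nvidia-ml-py); it only lists what each card exposes and has nothing to do with Redshift itself:

    # List each GPU's own memory pool; note there is no "combined" figure anywhere.
    # Minimal sketch, assumes the pynvml bindings are installed.
    import pynvml

    pynvml.nvmlInit()
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            name = pynvml.nvmlDeviceGetName(handle)
            if isinstance(name, bytes):  # older pynvml versions return bytes
                name = name.decode()
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
            print(f"GPU {i}: {name} - {mem.total / 1024**3:.1f} GB total, "
                  f"{mem.free / 1024**3:.1f} GB free")
    finally:
        pynvml.nvmlShutdown()

On a box with two 1070s this prints two separate 8 GB pools rather than one 16 GB pool, which is exactly the point about memory not being combined.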

Consider that for GPU rendering you want one card to drive your display and one or more other cards exclusively for rendering. I use a modest 750ti for my display and a pair of 1080s to render. 

Lastly, get as much RAM as you can: you definitely need more system RAM than the combined VRAM of all your GPUs. Also make sure your system and CPU have enough PCIe lanes to run all of your GPUs; some CPUs have limited PCIe lanes, which can force multiple GPUs to run at reduced link widths and throttle performance.
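
If you want to check the PCIe side of this, nvidia-smi can report the current versus maximum link generation and width per card. Below is a minimal sketch, assuming nvidia-smi is on the PATH and the psutil package is installed; the query field names are the standard ones, but verify them against nvidia-smi --help-query-gpu for your driver version:

    # Check whether each GPU is running at its full PCIe link width,
    # and compare system RAM against the combined VRAM of all cards.
    import subprocess
    import psutil

    fields = ("name,memory.total,"
              "pcie.link.gen.current,pcie.link.gen.max,"
              "pcie.link.width.current,pcie.link.width.max")
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={fields}", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True)

    total_vram_mib = 0
    for line in out.stdout.strip().splitlines():
        name, mem, gen_cur, gen_max, w_cur, w_max = [p.strip() for p in line.split(",")]
        total_vram_mib += int(mem)
        print(f"{name}: {mem} MiB VRAM, PCIe gen {gen_cur}/{gen_max}, x{w_cur}/x{w_max} lanes")

    ram_gib = psutil.virtual_memory().total / 1024 ** 3
    print(f"System RAM {ram_gib:.0f} GiB vs combined VRAM {total_vram_mib / 1024:.0f} GiB")

If a card shows a lower current width than its maximum (say x8 instead of x16), it is running on fewer lanes, which is the throttling situation described above.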



I will also revive this post for the sake of clearing up a lot of misinformation here regarding GPU rendering. As Redshift's site points out, several cost-effective GPUs are almost always better than fewer expensive ones.

1. It is true that SLI has nothing to do with GPU rendering. SLI can actually impede GPU rendering with renderers such as Redshift.

2. You ARE actually limited by the memory of your lowest-memory card when loading textures for GPU rendering (in Redshift). Not to worry though, because Redshift's out-of-core tech at least lets you borrow from your system RAM for what you don't have in VRAM.

3. Driving your display with a lesser card is an outdated concept that doesn't really work to your advantage in any way (other than the fact that another card is in the mix helping with rendering). Some people do use a Quadro as a display card because they need 10-bit output. Look-dev while GPU rendering is something you want to do on a decent card, especially with the real-time features you can now take advantage of.

4. It's not good advice to spend your money on as much RAM as possible unless you plan to use some insanely large maps (and that's unwise anyway). RAM is expensive, as you likely know, and that money is better spent on your GPUs.


On 05/05/2018 at 4:27 AM, Chosen Idea said:

2. You ARE actually limited by the memory of your lowest-memory card when loading textures for GPU rendering (in Redshift). Not to worry though, because Redshift's out-of-core tech at least lets you borrow from your system RAM for what you don't have in VRAM.

That's not completely correct; there's no clamping or limiting to the lowest available memory card. You use what you have on each card, and each card renders its own bucket independently. If one card doesn't have enough VRAM to load textures, for example, that bucket will render out of core while the other bucket continues rendering at full speed.

If one card doesn't have enough VRAM to load the scene at all, then the render will crash and you'll have to disable that card.

 

