Will RT and tensor cores on the RTX 2080 Ti be utilized by GPU renderers right away?


eebling


Hi, has anyone seen much information about the latest Nvidia cards coming out at the end of September in relation to GPU rendering? I have been waiting for the specs on these cards for months, and they have finally been released, but of course all the articles I have seen so far are still speculation on performance and "leaked" specs that may or may not be real, and all are geared towards gaming.

I must say, some of these leaked tests aren't too impressive, like a 5% performance increase for the new RTX 2080 Ti over the old 1080 Ti, but I would have to assume that's because the software doing the tests isn't taking advantage of the RT and tensor cores. I am disappointed that a $1200 card still only has the same 11GB of RAM as the 1080 Ti. Although it is faster, newer RAM, I was hoping for more!

Have there been any statements from Redshift or OTOY about what speed improvements will come from having a card with RT and tensor cores? Just wondering, because I will need another two graphics cards in the next month, and the 10-series cards have seen great price drops recently; some 1070 Tis are as low as $399. If these new flashy RT cores are going to be a huge performance gain, then I will probably hold out for at least the 2070s.

 

Any info would be great.

 

Thanks.

E


Tensor cores are only good for doing 16-bit floating-point matrix multiplication, which is only useful if you're running some sort of machine-learning algorithm that uses that kind of neural-net processing. So they could be good for noise reduction. The RT cores appear to be accessible only via CUDA, DX12, or Vulkan (or via Nvidia's OptiX library).

As for the VRAM, anything bigger than 11GB appears to be reserved for the Quadros.
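For reference, the operation tensor cores accelerate is a mixed-precision matrix multiply: FP16 inputs with FP32 accumulation. A minimal sketch of the numerics (not the speed) using NumPy; the function name is just for illustration:

```python
# Sketch of the mixed-precision matrix multiply that tensor cores
# accelerate in hardware: FP16 operands, FP32 accumulation.
# NumPy here only illustrates the numerics, not the performance.
import numpy as np

def tensor_core_style_matmul(a, b):
    """Multiply two matrices as FP16 operands, accumulating in FP32."""
    # Inputs are rounded to FP16, as they would be on the card.
    a16 = a.astype(np.float16)
    b16 = b.astype(np.float16)
    # Tensor cores multiply FP16 operands but keep the running sum
    # in FP32, which limits the precision loss from accumulation.
    return a16.astype(np.float32) @ b16.astype(np.float32)

a = np.random.rand(4, 4)
b = np.random.rand(4, 4)
c = tensor_core_style_matmul(a, b)
print(c.dtype)  # float32
```

This is the building block behind the machine-learning denoisers mentioned above, which is why the tensor cores matter for noise reduction rather than for tracing rays.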


2 minutes ago, malexander said:


Ya, I'm sure the RAM limit is something they are doing on purpose to make you go to the pro-level cards if you want to enter the 16GB-and-up playing field. All the info I usually see about tensor cores relates to AI and machine learning like you mentioned, so I wasn't sure if that kind of processing was even a factor in GPU rendering. I have a feeling the RTX 2080 Ti isn't going to be a massive speed improvement over a 1080 Ti for GPU rendering, but I hope time will tell and prove me wrong! I would love to speed up render times without having to spend too much to do so.


The value of RTX in ray tracing is scene-dependent: the RT cores can only do one thing, shoot rays for intersection testing. They can't shade, and they are restricted to certain primitive types [at the moment]. Production shading is a huge part of the rendering process, and it will only marginally improve via the RTX platform specs, not the RT cores.

Couple that with the rumor that vendors tried to send back a literal mountain of 10-series cards, manufactured for the crypto market, that Nvidia is now forcing them to sell. Plus, NVLink is still a no-go for the consumer line of RTX cards. So, personally, there's just a lot more value in a 1080 Ti right now. I actually expect the price to continue to fall.

Any RTX render engine is at least several months away from an alpha stage, because the developers have to rewrite their engines to use the OptiX API, which is what provides access to the RT cores on the RTX platform.

edit: Turns out NVLink is available for the 2080 and 2080 Ti. That's a big deal for larger scenes: 22GB of VRAM for around $2.5k is an attractive option.

Edited by Daryl Dunlap

41 minutes ago, Daryl Dunlap said:


Ah, interesting. 22GB sounds very nice, haha. I'm guessing for the time being I will wait for the 1080 Tis to drop even more in price and get another one or two for my main work/render station. I also need a small-form-factor card so another machine can render with Redshift, so I might just plug a 1060 in that one; better than the no graphics card it has now ;). That NVLink you mention, is that not available on the 1080/Pascal-based cards?

 

thanks 

 

E


Interesting blog post:

https://www.chaosgroup.com/blog/what-does-the-new-nvidia-rtx-hardware-mean-for-ray-tracing-gpu-rendering-v-ray

 

Quote

The RT Cores are only one part of the story though. The RTX graphics cards also support something called NVLink which doubles the memory available to V-Ray GPU for rendering with minimal impact to performance.


NVLink

NVLink is a technology that allows two or more GPUs to be connected with a bridge and share data extremely fast. This means that each GPU can access the memory of the other GPU and programs like V-Ray GPU can take advantage of that to render scenes that are too large to fit on a single card. Traditionally when rendering on multiple graphics cards, V-Ray GPU duplicates the data in the memory of each GPU, but with NVLink the VRAM can be pooled. For example, if we have two GPUs with 11GB of VRAM each and connected with NVLink, V-Ray GPU can use that to render scenes that take up to 22 GB. This is completely transparent to the user — V-Ray GPU automatically detects and uses NVLink when available. So, while in the past doubling your cards only allowed you to double your speed, now with NVLink you can also double your VRAM.


NVLink was introduced in 2016 and V-Ray GPU was the first renderer to support it officially in V-Ray 3.6 and newer versions. Until now, the technology has only been available on professional Quadro and Tesla cards, but with the release of the RTX series, NVLink is also available on gaming GPUs - specifically on the GeForce RTX 2080 and GeForce RTX 2080 Ti. Connecting two cards with NVLink requires a special NVLink connector, which is sold separately.

 

I know this is straight from a V-Ray developer, but somehow I'm still doubtful that it's actually proper NVLink on the GeForce RTX cards. Memory pooling definitely seems like a feature Nvidia would use for up-selling to Quadro.
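The pooling arithmetic from the quoted blog post is simple enough to sketch. A hedged illustration of the difference between the traditional duplicated-memory model and NVLink pooling as V-Ray describes it (the function is hypothetical, for illustration only):

```python
# Sketch: usable scene memory under the two multi-GPU models the
# Chaos Group post describes. Duplicated: every card holds a full
# copy of the scene, so the smallest card is the limit. Pooled
# (NVLink): the renderer can spread the scene across both cards.
def usable_vram(cards_gb, pooled):
    return sum(cards_gb) if pooled else min(cards_gb)

# Two 11GB cards (e.g. a pair of RTX 2080 Tis):
print(usable_vram([11, 11], pooled=False))  # 11 GB without NVLink
print(usable_vram([11, 11], pooled=True))   # 22 GB with NVLink
```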


13 minutes ago, lukeiamyourfather said:

The benchmark results I've seen are underwhelming. I'm sure in time the other features like ray tracing will be better utilized but for now the new cards seem like pretty crap deals. My guess is it'll be a year or more before the ray tracing features start showing up in software in a meaningful way.

Ya, I'm unfortunately not too impressed with the price/performance being reported. Some of the benchmarks I have seen could be completely false, but one overclocker ran a benchmark on what was supposedly a new 2080 Ti and it scored 5% higher than a 1080 Ti. And if I recall, someone said the 1080 Ti was only about a 20% performance increase over a 1070 Ti. If that's true, you could buy three 1070 Tis for the price of one 2080 Ti and get a lot more render power, minus the 3GB difference in RAM...
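That back-of-envelope math checks out. A hedged sketch using the prices and rough performance ratios assumed in this thread (not measured numbers):

```python
# Back-of-envelope price/performance check, using the figures
# quoted in this thread: $399 per 1070 Ti, $1200 per 2080 Ti,
# 1080 Ti ~= +20% over a 1070 Ti, leaked 2080 Ti ~= +5% over that.
PRICE = {"1070ti": 399, "2080ti": 1200}
PERF = {"1070ti": 1.0, "2080ti": 1.2 * 1.05}  # normalized to 1070 Ti

three_1070ti_perf = 3 * PERF["1070ti"]
one_2080ti_perf = PERF["2080ti"]

print(3 * PRICE["1070ti"])                            # $1197, about one 2080 Ti
print(round(three_1070ti_perf / one_2080ti_perf, 2))  # ~2.38x the render power
```

Assuming GPU rendering scales near-linearly across cards (which Redshift and similar renderers largely do), three 1070 Tis would deliver roughly 2.4x the throughput per dollar, at the cost of 8GB instead of 11GB per-card VRAM.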

 

E

