eebling

workflow for a multi GPU system and redshift renders

Recommended Posts

Hey all, I started using Redshift a few months ago and so far I like it a lot! I currently have one 11GB GTX 1080 Ti in the machine I'm using it on. Since GPU prices seem to have dropped back to something more reasonable lately, I'd like to either get another one of those cards or get two 8GB 1070 Ti cards. I have heard the 1080's performance gain over the 1070 is only about 20%, so I might be better off getting two 1070s.

The main question, though: what happens when you hit the render button on your Redshift node for a sequence inside Houdini if you have more than one GPU? If I had three GPUs, would it automatically render frame 1 on the first GPU, frame 2 on the second, frame 3 on the third, and so on? Would it render the first frame across all three cards simultaneously by breaking the frame up and submitting it to the different cards? Is that even possible? Do the extra GPUs only get used if you use distributed rendering software like Deadline, or by launching RS renders from the command line? It seems like you have to launch from the command line to get the other GPUs rendering, but I have never worked with a machine with more than one GPU installed. If I were to submit a 100-frame render from within Houdini by clicking the render-to-disk button on my RS node, would it only use the main GPU even with other GPUs installed?

Any info from artists using multi-GPU systems would be great. I didn't find a lot of info about this on the Redshift site, but I might not have looked deep enough. The end goal would be the ability to kick off, say, two sequences to render on two of the GPUs while leaving my main GPU free to continue working in Houdini, and then, at the end of the work day, let the main GPU render another sequence so all three cards are running at the same time. I will most likely need a bigger PSU for the machine, but that is fine.
I am running Windows 10 Pro on an i7-6850K hex-core 3.6GHz CPU with 64GB of RAM in an ASUS X99-Deluxe II motherboard, if that helps evaluate the potential of the setup.

One last question: SLI bridges. Do you need one when setting up two additional GPUs that will only be used for RS rendering? I don't want the extra GPUs combined to increase power in, say, a video game. I don't know much about SLI or when/why it's needed in multi-GPU setups, but I was under the impression it's there to combine the cards' power for gaming performance.

 

Thanks for any info

 

E



I would go with a single GTX 1080 Ti over two GTX 1070 Ti cards for several reasons. The single GTX 1080 Ti will have more memory (memory from multiple cards doesn't add up). The single GTX 1080 Ti will use less power and generate less heat (250 watts versus 360 watts). The resale value of the single GTX 1080 Ti will be higher.

Redshift can be configured to use one GPU, all of the GPUs, or any combination of GPUs in the machine. The same goes for using Redshift through a queue manager like Deadline. SLI doesn't matter because Redshift talks directly to each GPU. Most of this is covered in their FAQ.

https://www.redshift3d.com/support/faq

18 minutes ago, lukeiamyourfather said:

I would go with a single GTX 1080 Ti over two GTX 1070 Ti cards for several reasons. The single GTX 1080 Ti will have more memory (memory from multiple cards doesn't add up). The single GTX 1080 Ti will use less power and generate less heat (250 watts versus 360 watts). The resale value of the single GTX 1080 Ti will be higher.

Redshift can be configured to use one GPU, all of the GPUs, or any combination of GPUs in the machine. The same goes for using Redshift through a queue manager like Deadline. SLI doesn't matter because Redshift talks directly to each GPU. Most of this is covered in their FAQ.

https://www.redshift3d.com/support/faq

Interesting. A link on the page you linked led to some more info, and it says a single frame can render across multiple GPUs. It doesn't explicitly say whether you can use more than one GPU to render a frame from within Houdini by clicking the render-to-MPlay button on the Redshift node. I'm guessing that's a no, and that rendering only uses multiple GPUs when launched from the command line, Deadline, or a similar program.

I'm still curious about your choice of a single 1080 Ti over two 1070 Tis. In a previous post on this site, a user claimed the 1080s only have about a 20% performance gain over the 1070s. Is this incorrect? I would think that if I have multiple passes to render, two 1070s would outperform a single extra 1080 Ti. It's not an issue to buy a larger PSU to power three cards instead of two, so that isn't a negative factor here, nor is the increase in heat. Resale value isn't much of a factor to me either; I usually use equipment until it is dead, or by the time I upgrade the GPU there won't be much demand for a card of that age.

If the points you mentioned weren't factored in, would you go for the two 1070s, keeping in mind that most of the time I would need to render more than one pass? I have a 1080 Ti, and my assumption is that if I know I have a heavy element to render, I can assign it to the card with the most RAM and use the 1070s for other passes. Thanks for the link.


E


Yo, use some paragraphs my man, that's a difficult wall of text to parse!

By default Redshift uses a bucket per GPU, so yes, if you have multiple cards they will all render a single frame at the same time. If you wanted each card to render a separate frame, you would have to use render management software that supports 'GPU affinity', such as Deadline. Two 1070s will certainly outperform a single 1080 Ti, but they have less RAM each (and use more power overall, obviously).

Each GPU renders its buckets on a frame independently, using whatever memory that card has available. If one card runs out of memory and the other doesn't, that bucket will fail and the render will continue on the other card. It's probably less hassle to have identical cards in your machine, but Redshift will take full advantage of whatever you stick in.
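If you'd rather not run Deadline, a rough sketch of scripting per-card frame affinity yourself: split the frame range round-robin across cards and launch one command-line render worker per GPU. The `redshiftCmdLine` executable name, the `-gpu` flag, and the one-`.rs`-file-per-frame workflow are assumptions here; verify the exact invocation against your Redshift install and its docs.

```python
import shutil
import subprocess
import threading

def assign_frames(frames, num_gpus):
    """Round-robin split: GPU i gets every num_gpus-th frame starting at index i."""
    return {gpu: frames[gpu::num_gpus] for gpu in range(num_gpus)}

def render_on_gpu(gpu, frames, scene_pattern="scene.%04d.rs"):
    """Render this GPU's frames one after another, pinned to a single card.
    NOTE: 'redshiftCmdLine' and '-gpu' are assumed syntax; check your install."""
    for f in frames:
        subprocess.run(["redshiftCmdLine", scene_pattern % f, "-gpu", str(gpu)])

if __name__ == "__main__" and shutil.which("redshiftCmdLine"):
    # One worker thread per GPU so the cards chew through frames concurrently.
    jobs = assign_frames(list(range(1, 101)), num_gpus=2)
    workers = [threading.Thread(target=render_on_gpu, args=(gpu, frames))
               for gpu, frames in jobs.items()]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```

This is roughly what a queue manager's GPU affinity does for you, minus the error handling and requeueing.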

 

1 minute ago, Tex said:

Yo use some paragraphs my man that's a difficult wall-of-text to parse!

By default Redshift uses a bucket per GPU, so yes, if you have multiple cards they will all render a single frame at the same time. If you wanted each card to render a separate frame, you would have to use render management software that supports 'GPU affinity', such as Deadline. Two 1070s will certainly outperform a single 1080 Ti, but they have less RAM each (and use more power overall, obviously).

Each GPU renders its buckets on a frame independently, using whatever memory that card has available. If one card runs out of memory and the other doesn't, that bucket will fail and the render will continue on the other card. It's probably less hassle to have identical cards in your machine, but Redshift will take full advantage of whatever you stick in.

 

It just flows out... haha. So you're saying that if a card runs out of memory, the render bucket fails on that card? I was sure I read that Redshift will switch to system RAM to continue the render if the card's memory is exhausted. Is this incorrect, or am I interpreting your response wrong?


Thanks

 

E


Yep; for some operations it will push to system memory if it runs out of VRAM, but it gets hella slow, so you want to avoid it where possible. Things like volumes can't render out-of-core either.
If the scene itself is too heavy to even load on one card, it will crash the render completely. If you work smart you shouldn't run into too many issues; make sure you're using instances/proxies rather than copies, etc.
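To put rough numbers on the instances-versus-copies advice: copied geometry is duplicated in VRAM for every copy, while an instance stores the mesh once plus a small transform per placement. A back-of-the-envelope estimate (the bytes-per-point and per-transform figures are illustrative assumptions, not Redshift internals):

```python
def geo_bytes(points, bytes_per_point=32):
    """Very rough per-mesh footprint: positions, normals, UVs, etc."""
    return points * bytes_per_point

def copies_footprint(points, n):
    """N full copies: the geometry is duplicated for every copy."""
    return n * geo_bytes(points)

def instances_footprint(points, n, xform_bytes=64):
    """Instancing: one shared mesh plus a 4x4 matrix (or similar) per instance."""
    return geo_bytes(points) + n * xform_bytes

if __name__ == "__main__":
    pts, n = 1_000_000, 500  # e.g. 500 trees of 1M points each
    print("as copies:    %.2f GiB" % (copies_footprint(pts, n) / 2**30))
    print("as instances: %.2f GiB" % (instances_footprint(pts, n) / 2**30))
```

With numbers like these, copies blow well past an 11GB card while instances barely register, which is why the copies-to-instances switch is usually the first thing to try when a scene won't fit.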

Here's a doc the devs made that covers the hardware side of things:

https://docs.google.com/document/d/1rP5nKyPQbPm-5tLvdeLgCt93rJGh4VeXyhzsqHR1xrI/edit?usp=sharing


3 minutes ago, Tex said:

Yep; for some operations it will push to system memory if it runs out of VRAM, but it gets hella slow, so you want to avoid it where possible. Things like volumes can't render out-of-core either.
If the scene itself is too heavy to even load on one card, it will crash the render completely.

Here's a doc the devs made that covers the hardware side of things:

https://docs.google.com/document/d/1rP5nKyPQbPm-5tLvdeLgCt93rJGh4VeXyhzsqHR1xrI/edit?usp=sharing

Ah cool, I wasn't aware that volumes can't render out-of-core at all. That could influence the decision a little. I have rendered some pretty large VDB smoke sims on the 1080 Ti without any problems; I wonder what the VRAM usage was during those renders. I might go back and render a frame and watch the memory usage in the Task Manager performance area in Windows, to get an idea of how close I'm getting to the 11GB limit.
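For watching VRAM during a render, `nvidia-smi` gives a more direct read on dedicated GPU memory than Task Manager. A small polling sketch; the `--query-gpu` flags are standard `nvidia-smi` options, but the parsing assumes its default CSV-style output:

```python
import subprocess
import time

def parse_vram_csv(text):
    """Parse 'memory.used' lines from nvidia-smi, e.g. '10342' or '10342 MiB'.
    Returns one integer value in MiB per GPU."""
    values = []
    for line in text.strip().splitlines():
        line = line.strip()
        if line:
            values.append(int(line.split()[0]))
    return values

def poll_vram(interval=2.0):
    """Print per-GPU VRAM usage every `interval` seconds while a render runs."""
    while True:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=memory.used",
             "--format=csv,noheader,nounits"], text=True)
        # With 'noheader,nounits' each output line is just the number in MiB.
        print("VRAM used (MiB):", parse_vram_csv(out))
        time.sleep(interval)
```

Running `poll_vram()` in a second shell while a heavy VDB frame renders should show how close the card gets to its 11GB ceiling.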

 

Thanks for the info

 

E

