Readicculus

GPU in Houdini


Could someone please explain how exactly Houdini uses the GPU in a few different scenarios? Apart from drawing the viewport, what else is actually going on under the hood? How and when does Houdini use the GPU instead of the CPU, or both at once?


Say you have four 2080 Ti's linked in pairs with NVLink. Does Houdini use just one pair, one card, or all four? Would it be best to set the environment variables so that one pair drives the viewport and the other pair handles OpenCL, or does it not matter?
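For context, Houdini does let you pin its OpenCL work to a particular device through environment variables. A minimal sketch, assuming a machine where the GPU you want for simulation is OpenCL device index 1 (the index is machine-specific; check what your build reports):

```shell
# Sketch: point Houdini's OpenCL solvers at a specific GPU so the
# viewport and simulations don't fight over the same card.
# Device index 1 is an assumption for this example machine.
export HOUDINI_OCL_DEVICETYPE=GPU     # prefer a GPU OpenCL device over the CPU driver
export HOUDINI_OCL_DEVICENUMBER=1     # use the second GPU for OpenCL work
```

These would go in your shell profile or houdini.env before launching Houdini.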


What would be most ideal? If you were doing massive simulations, or were to hypothetically use a Quadro RTX card, is that better overall, or is it more sensible to just have one card? I don't really understand how Houdini utilizes multiple cards, if it does at all, and whether a second card is a bit of a waste. Could a single Titan RTX handle most anything Houdini throws at it, or would someone see a dramatic increase in performance by adding another Titan RTX, and how so? Is linking those two via NVLink a huge advantage over the single card?

I realize that might be great for GPU render engines like Octane or Redshift, but does it give Houdini an incredible amount of extra performance? With two expensive cards linked together like that, what kind of scenario would be the limit? Where might Houdini hit a bottleneck for a studio or professional that could afford a configuration like that?

Does OpenCL use linked cards like that too? Can it take advantage of the large amount of VRAM?


Thanks for helping me understand.
