
About Readicculus

Personal Information

  • Name
    Koida Kashinaki
  1. Has the external editor setup changed in H18? I no longer see the button in the UI. I have tried setting EDITOR and VISUAL in houdini.env, each on its own. Neither seems to do the trick, but I'm guessing the button no longer exists and, if I read correctly, Houdini will now just open the editor on the first edit, unlike past versions. Thanks
  2. GPU in Houdini

    Andy Lomas is amazing
  3. GPU in Houdini

    My fault. I wasn't referring to GPU rendering in Houdini. I meant OpenCL memory sharing via NVLink in Houdini from the Wrangle node, as @schwungsau pointed out. I was just hoping Houdini would recognize two NVLinked cards as one local GPU for use in sims, without having to rewrite some miracle through the API.
  4. GPU in Houdini

    Appreciate the responses and info @Mandrake0 @schwungsau @lukeiamyourfather I was reading up a bit and came across this: https://www.servethehome.com/dual-nvidia-titan-rtx-review-with-nvlink/3/ So NVLink is definitely not being used in Houdini, but it would be for GPU renderers like Octane and Redshift eventually, correct? Sorry for the redundant question; English is not my native language, and I had trouble following some of the sentences above.
  5. GPU in Houdini

    I believe what you are saying is that I could write OpenCL code to use any range of devices, or memory from multiple cards, but only by explicitly allocating and instructing it to do so. Is that correct? To be clear, are you saying that Houdini will not recognize two cards linked with NVLink? Not yet? So two 11 GB graphics cards linked together won't register their VRAM as 22 GB? And since Houdini only uses one card, the other card's memory and throughput is wasted, or at least not usable or advantageous in Houdini as of now? Thank you so much for the response. I think I understand much better in general, minus the follow-ups above.
  6. GPU in Houdini

    Could someone please explain how exactly Houdini uses the GPU in a few different scenarios? Apart from drawing the viewport, what else is actually going on under the hood? How and when does Houdini use the GPU over the CPU, or both?

    Say you have four 2080 Ti's linked in pairs with NVLink. Does Houdini use just one pair, one card, or all four? Or would it be best to set an environment variable so that one pair drives the viewport and the other handles OpenCL; does it matter? What would be most ideal? If you were doing massive simulations, or were to hypothetically use a Quadro RTX card, is that better overall, or would it be more suitable to just have one card? I don't really understand how Houdini utilizes multiple cards, if at all, and whether another card is a bit of a waste.

    Could a single Titan RTX handle most anything Houdini throws at it, or would someone see a dramatic increase in performance, and how so, if they added another Titan RTX? Is that a huge advantage over one card if you linked them via NVLink? I realize that might be great for GPU render engines like Octane or Redshift, but does it give Houdini an incredible amount of extra performance? Linking two expensive cards together like that, what kind of scenario would be the limit in a sense? When might Houdini hit a bottleneck for a studio or professional that could afford a configuration like that? Does OpenCL use linked cards like that too, with their large amount of VRAM? Thanks for helping me understand
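On the external-editor question in item 1: in recent Houdini versions the external editor is typically picked up from houdini.env rather than a UI button, and it launches on the first edit action (e.g. Alt+E in a parameter field). A minimal sketch, assuming gvim is installed and on the PATH; any editor command should work, but GUI editors usually need their "don't fork" flag so Houdini can wait on the process:

```ini
# houdini.env -- hedged example; the editor command is an assumption
EDITOR = "gvim -f"
# VISUAL is consulted as a fallback on some setups
VISUAL = "gvim -f"
```

If the variable seems ignored, it is worth confirming it is set in the houdini.env file actually read at startup (under the user preferences directory) rather than only in the shell environment.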
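On the device-selection question in item 6: Houdini runs an OpenCL sim on a single device, and which device it picks can be steered with the HOUDINI_OCL_* environment variables. A hedged houdini.env sketch (the device index 1 is an assumption; the correct value depends on how your system enumerates devices):

```ini
# houdini.env -- pick which single device OpenCL sims run on
HOUDINI_OCL_DEVICETYPE = GPU
HOUDINI_OCL_DEVICENUMBER = 1
```

With multiple cards you could, for example, leave the viewport on the display GPU and point OpenCL at a different one, but the sim itself still runs on one device at a time.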
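On the VRAM question in item 5: as far as I understand, without explicit pooling each OpenCL device only addresses its own memory, so two 11 GB cards do not behave like one 22 GB card; any "combined" use means splitting the data across devices by hand. A minimal sketch of that manual split in Python (the function and sizes are illustrative assumptions, not a Houdini or OpenCL API):

```python
# Sketch: two 11 GiB cards with no NVLink pooling. Each device only
# sees its own VRAM, so a 16 GiB dataset must be chunked per device.

GIB = 1024 ** 3

def partition(total_bytes, device_vram):
    """Greedily assign chunks so no device exceeds its own VRAM."""
    chunks = []
    remaining = total_bytes
    for vram in device_vram:
        take = min(remaining, vram)
        chunks.append(take)
        remaining -= take
    if remaining > 0:
        raise MemoryError("dataset exceeds combined per-device VRAM")
    return chunks

# A 16 GiB field does not fit on one 11 GiB card, but can be split:
print([c // GIB for c in partition(16 * GIB, [11 * GIB, 11 * GIB])])  # -> [11, 5]
```

The point of the sketch is only that the split, and the transfers between chunks, are the programmer's job; nothing registers the two cards as a single 22 GB device.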