Search the Community
Showing results for tags 'GPU'.
-
Hey all, I just ran into a very strange issue I can't seem to find a solution to. I am using an RTX 2080 Ti with a Ryzen 9 3950X, and when I try to render a volume in Karma, no matter the light source, as soon as I increase the volume limit above 0 my volume turns blue. Even with a basic distant white light, the bounces turn the volume blue. I am using the default Karma cloud material and my scene is as basic as can be. The weird part is that when I render the same scene on my old laptop with a 1070, it works fine, which leads me to believe the issue lies with my GPU. I have just reinstalled Windows and Houdini, left every setting at default, and updated all my drivers; nothing seems to work. Some insight into why this might be happening would be great. Thanks in advance.
Attached: CPU render view on the 2080 Ti workstation, XPU render view on the 2080 Ti workstation, XPU render view on the 1070 laptop.
-
Hi, I have recently created a set of 70+ HDAs called IPOPs, aimed at creating AOVs, mattes and features for rendering. IPOPs are operator workflows designed to streamline shader and AOV development in Houdini. With our toolset, artists can easily standardise their networks and access specially developed nodes for various renderers and presets, increasing efficiency and creativity. Get the IPOPs here! Renders in Karma & Mantra.
1. The Standard Library: a set of HDAs that help artists create quick shaders and AOVs, containing useful utility nodes such as Fresnel for Karma (CPU & XPU), mask falloffs in shaders, etc. The library is constantly updated with new nodes to help artists speed up their workflow in a streamlined system.
2. The Geometry AOVs: a set of premade common geometry AOVs for quickly generating AOVs, mattes and utility passes for compositing.
3. The Particles AOVs: a set of premade particle AOVs for quickly generating AOVs, mattes and utility passes for compositing.
4. The Volumes AOVs: a set of premade volume AOVs for enhancing FX with AOVs, mattes and utility passes for compositing.
We have a wide range of bundles available for various workflows, and we support both Houdini Apprentice and Houdini Indie! For any enquiries please email support@chakshuvfx.com. I will be posting regular updates and developments here.
IPOPs Master bundle: 1. Houdini Apprentice 2. Houdini Indie
IPOPs Geometry AOVs bundle: 1. Houdini Apprentice 2. Houdini Indie
IPOPs Particles AOVs bundle: 1. Houdini Apprentice 2. Houdini Indie
IPOPs Volumes AOVs bundle: 1. Houdini Apprentice 2. Houdini Indie
-
Hi Houdini users, I am working on a new system build and wanted your thoughts on using a gaming card (4090) vs a workstation card (RTX A5500). The main motivator behind this is wattage and thermals: power consumption is not the concern, but raising the temperature of the room by using a gaming card is. Plus it's very hard to get a 4090 that isn't drastically marked up, to the point where I might as well get a workstation card for the same price. I am eyeing an AMD 7950X (CPU), but am waiting to see the reviews of the new 7000X3D chips coming in the next few weeks. Main uses for the machine: motion graphics and, hopefully, future 3D/Houdini work. The 3D side will be Houdini/Redshift. Any advice/input is welcome. Thank you for your input.
-
Is there an existing workflow to run a POP sim on the GPU?
-
I have been running Houdini for the past year on a GeForce GTX 1660. Suddenly, this week, Houdini started crashing every time I place a couple of nodes in the scene. There is no error message or crash log. I have tried to troubleshoot a few things, but after talking with SideFX support they said my graphics card just isn't supported. Looking at which cards are supported, though, I could downgrade to the GTX 10 series and be fine. Is the 16 series really not supported in Houdini, or is there anything else I can do to solve this problem? Would increasing the TDR delay solve it, or is that just a bad idea to mess with?
-
Hi guys! I have the chance to upgrade my workstation, but I don't completely understand what would be the best choice in my case; I'm still learning about hardware and Houdini's performance. I'm in the advertising industry and, as you know, the timetable can get pretty crazy. I'm mostly interested in improving simulation times and performance. These are my specs:
AMD Ryzen Threadripper 1950X 16-core processor, 3700 MHz
32 GB 2666 MHz DDR4 (Corsair Vengeance)
GTX 1080 Ti
What would you do? Upgrade the graphics card, maybe get a second one, or get more RAM? I'm still learning about hardware and I'm a little lost. If you can also recommend a YouTube channel or forum where I can learn more about this stuff, I'd really appreciate it. Thanks a lot for the help! Cheers!
-
Hey, my Render View works absolutely fine, but if I want to render to disk I always get a short freeze followed by an error: "Error opening the tile device." I'm using Redshift v2.6.41 and Houdini 17.5.173. I hope somebody can help me with that.
-
OK, so there's not a whole lot of Houdini love over on the Redshift forums; it's mostly Maya, 3ds Max, C4D (in that order): https://www.redshift3d.com/forums/viewthread/10860/ Hence my posting here. Anyone care to share any Redshift tips, tricks and tutorials specific to Houdini users? Here is one great tip that Adrian from Redshift was kind enough to share for customizing RS spot lights: https://www.redshift3d.com/forums/viewthread/12283/#79060
-
Hi all, is there a way to enable all my GPUs for OpenCL? I have 3x Titan X and a Quadro M4000, four cards in total. Thanks.
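As far as I know, Houdini's OpenCL solvers use a single device per process, and which device they use is chosen with environment variables documented by SideFX. A sketch (the variable names are from the Houdini docs; the index values are assumptions for a 3x Titan X + Quadro M4000 machine):

```shell
# Pick the card Houdini's OpenCL solvers run on (one device per process).
export HOUDINI_OCL_DEVICETYPE=GPU          # GPU rather than CPU OpenCL
export HOUDINI_OCL_VENDOR="NVIDIA Corporation"
export HOUDINI_OCL_DEVICENUMBER=0          # index 0-3 picks one of the four cards
echo "OpenCL -> device $HOUDINI_OCL_DEVICENUMBER ($HOUDINI_OCL_DEVICETYPE)"
```

So all four cards can't accelerate a single sim together, but you can run several hbatch sims at once, each launched with a different `HOUDINI_OCL_DEVICENUMBER`.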
-
Should we rejoice or be scared of new tech like this? What happens when anyone can create a pyro effect with a couple of clicks? That said, it looks quite impressive for real time.
-
Hey :) I have (as you can see in my signature) an old 4790K CPU, but a much newer RTX 2060. I just ran a 90-frame pyro sim with OpenCL and it finished very quickly, at 4 min 30 s. When I use my CPU only (OpenCL off) it takes over 30 minutes. Before I had my 2060, I was running a 970; OpenCL sims were faster than the CPU on that card too, but not by nearly as much. Is this just a sign that I need to replace my CPU? The only reason I am hesitant is that I would also have to buy a new motherboard and RAM to go along with it. Thanks.
-
Hi guys! I need some advice from the hardware magicians. Since our studio is expanding a lot, we are looking to build some powerful new machines to replace our old workstations. Our main software is Houdini, 3ds Max and Modo, and our renderers are Redshift and V-Ray. We also want to use the machines as a farm when no one is working on them, or for overnight sims/renders. This one, though, will be only for Houdini + Redshift, so I'm going to write down the hardware list I have in mind; let's see if you can help me get better specs. We can spend up to 5-6k per machine.
MOBO: Asus Z10PE-D16 WS
CPUs: 2x Intel Xeon E5-2630 v4 (I really think that is a bit overkill; it may be better and cheaper to get an i9 or one of the new AMD ones)
RAM: 2x Crucial DDR4 2666 MHz 32 GB CL19 DRx4 ECC (64 GB total, maybe going to 128, since this will be the main Houdini machine)
SSD: Samsung 970 EVO Plus 1TB M.2 PCIe NVMe
SSD: GoodRAM 960GB 2.5" CX300
LIQUID COOLING: Cooler Master MasterLiquid 120
PSU: Be Quiet! Dark Power Pro 11 1200W 80+ Platinum
CASE: In Win 805C ATX
GPU: 4x Gigabyte Nvidia GeForce RTX 2070 Gaming OC 8GB
Let me know, guys! Thanks in advance.
-
Hi all, I'm new to VFX and in search of a computer to get started with Houdini. My best guess at this point is a mid- to high-range gaming PC. The specs I'm considering are below. Would this machine work at the intro level? Thanks.
Intel Core i9 9900K (8 cores/16 threads, up to 4.7 GHz on all cores, 16 MB cache)
NVIDIA GeForce RTX 2080 OC with 8 GB GDDR6
16 GB dual-channel DDR4 at 2666 MHz
256 GB M.2 PCIe NVMe SSD + 2 TB SATA
Windows 10
-
Could someone please explain how exactly Houdini uses the GPU in a few different scenarios? Apart from drawing the viewport, what else is actually going on under the hood? How and when does Houdini use the GPU over the CPU, or both? Say you have four 2080 Tis linked in pairs with NVLink: does Houdini use just one pair, one card, or all four? Would it be best to set an environment variable so that one pair drives the display and the other handles OpenCL, and does it matter? What would be ideal? If you were doing massive simulations, or hypothetically used a Quadro RTX card, is that better overall, or is it more sensible to have just one card? I don't really understand how Houdini utilizes multiple cards, if at all, and whether a second card is a bit of a waste. Could a single Titan RTX handle most anything Houdini throws at it, or would someone see a dramatic increase in performance, and how, by adding another Titan RTX? Is that a huge advantage over a single card if you link them via NVLink? I realize that might be great for GPU render engines like Octane or Redshift, but does it give Houdini itself a large amount of extra performance? With two expensive cards linked like that, what kind of scenario would be the limit? When might Houdini hit a bottleneck for a studio or professional that could afford a configuration like that? Does OpenCL use linked cards like that too, with their large amount of combined VRAM? Thanks for helping me understand.
-
Hey, I took advantage of some of the recent GPU price drops that came with the release of Nvidia's 20-series cards. I got an EVGA 1070 Ti FTW2 for $429 (with a $20 mail-in rebate dropping the final cost to $409) and put it in the machine that has had an EVGA 1080 Ti FE in it since I built it a year and a half ago. I wanted to share some real-world test results in case anyone else is wondering whether it's worth picking up another card. The PC runs Win10 Pro, 64 GB DDR4 RAM, an Intel i7-6850K at 3.6 GHz, a Samsung 960 EVO M.2 SSD as the primary drive plus a secondary Crucial SSD, a 1000 W EVGA Gold G3 PSU, Houdini 16.5.268 and Redshift 2.5.48 (I think), etc.
I ran a Redshift test on a scene with a decent-sized pyro sim: 60 frames at full HD with fairly high-quality settings. With just the 1080 Ti in the PC, the render took 38 min 17 s. With the 1070 Ti added, the same 60-frame sequence took 25 min 26 s, so the second card took almost 13 minutes off the sequence render time. I would say it's worth the roughly $400. Using the enable/disable GPU options in the Redshift plugin, I also rendered a single frame: with just the 1080 Ti, 26 seconds; with just the 1070 Ti, 34 s; with a small boost to the 1070 Ti's clocks via the EVGA overclocking software, 32 seconds (not enough for me to keep it overclocked beyond how it arrived); with both GPUs enabled, 15 seconds.
I think I would be willing to buy another 1070 Ti while the sale/rebate is on if it would reduce the render time by a further 13 minutes. I'm assuming it would, but maybe I'm not adding something up right: if adding one 1070 Ti cut 13 minutes off the render, wouldn't adding another take off another 13 minutes? It would be incredible to drop the test sequence from 38 min down to 12 min for roughly $800 in hardware upgrades!
I ran all the PC's parts through an online component checker, and even if I add a third card it should still have about 100 watts of headroom on the 1000 W PSU. I would probably want to add some more/better case fans if going from one GPU to three. Anyway, that's what adding an extra card did for me. E
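The "another 13 minutes" question can be checked with a back-of-envelope model: per-card throughputs (frames per second) add, and each frame also pays a fixed CPU-side cost (scene export/translation) that extra GPUs cannot reduce. This is my own simplifying assumption, not how Redshift actually schedules work, but it fits the measured single-frame times (26 s and 34 s) and the 38 min 17 s sequence reasonably well:

```python
# Rough multi-GPU render-time model. Assumptions: combined GPU time per
# frame = 1 / sum(1/t_i) for the cards' solo times, plus a fixed
# per-frame overhead derived from the measured sequence time.

def gpu_frame_time(solo_times):
    """Combined GPU seconds/frame if per-card throughputs simply add."""
    return 1.0 / sum(1.0 / t for t in solo_times)

T_1080, T_1070 = 26.0, 34.0           # measured single-frame times (s)
seq_1080_only = 38 * 60 + 17          # 38m17s for 60 frames, 1080 Ti alone
overhead = seq_1080_only / 60 - T_1080  # ~12 s/frame not spent on the GPU

for label, cards in [("1080Ti alone  ", [T_1080]),
                     ("+ one 1070Ti  ", [T_1080, T_1070]),
                     ("+ two 1070Tis ", [T_1080, T_1070, T_1070])]:
    per_frame = overhead + gpu_frame_time(cards)
    minutes = per_frame * 60 / 60.0   # 60 frames, converted to minutes
    print(f"{label}: ~{minutes:.1f} min for the 60-frame sequence")
```

With these numbers the model predicts roughly 38.3, 27.0 and 22.6 minutes, so a second 1070 Ti should only save around 4-5 more minutes, not another 13: each added card shaves a shrinking slice off the remaining GPU time, and the fixed per-frame overhead is untouched either way.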
-
Hi, has anyone seen much information about the latest Nvidia cards coming out at the end of September, in relation to GPU rendering? I have been waiting for the specs on these cards for months, and they have finally been released, but of course all the articles I have seen so far are still somewhat speculative about performance, with "leaks" of specs that may or may not be real, and all geared towards gaming. I must say some of these leaked tests aren't too impressive, like 5% performance increases for the new RTX 2080 Ti over the old 1080 Ti, but I would have to assume that's because the software doing the tests isn't taking advantage of the RT and tensor cores. I am disappointed that a $1200 card still has only the same 11 GB of RAM as the 1080 Ti; it is faster/newer RAM, but I was hoping for more! Have there been any statements from Redshift or Otoy about what speed improvements will come from a card with RT and tensor cores? Just wondering, because I will need another two graphics cards in the next month, and the 10-series cards have had great price drops recently; some 1070 Tis are as low as $399. If these new flashy RT cores are going to be a huge performance gain, then I will probably hold out for at least the 2070s. Any info would be great. Thanks. E
-
Hi everyone, I am working with our IT department to build a desktop for heavy FX work (mostly water and pyro simulations). Below are the three options we are considering; I'm looking for any comments or recommendations on the different configurations. Thank you.
-
Hey all, I have started using redshift a few months ago and so far i like it a lot.! I currently have one 1080ti 11gb gpu in the machine I am using it on. Would like to get either another one of those cards since the gpu prices seem to have dropped lately back to something more reasonable, or get 2 1070ti 8gb cards. I have heard that the performance diff is only like a 20% gain for the 1080 over the 1070's, so might be better off with getting 2 of the 1070's. The main question is though, what happens when you hit the render button on your redshift node for a sequence inside houdini if you have more than one gpu.? If I had 3 gpu's, would it automatically render frame one on the 1st gpu, frame 2 on the second, frame 3 on the 3rd and so on..? Would it render the first frame across all 3 cards simultaneously by breaking up the frame to submit to the different cards, is that even possible.? Do the extra gpu's only get used if you use a distributed rendering software like deadline or by launching RS renders from command line.? It seems like you have to launch from command line in order to get the other gpu's to render, but I have never worked with a machine with more than one gpu installed. If I were to submit a 100 frame render from within houdini by clicking the render to disk button on my RS node, would it only use the main gpu even with other gpu's installed.? Any info from artists using multi gpu systems would be great. I didn't find a lot of info about this on the redshift site, but might not have looked deep enough. The end goal would be to have the ability to kick off say 2 sequences to render on 2 of the gpu's while leaving my main gpu free to continue working in houdini, and if its time to end the work day, allow the main gpu to then render another sequence so all 3 cards are running at the same time. I will most likely need to get a bigger PSU for the machine, but that is fine. 
I am running windows 10 pro on a I7-6850K hex core 3.6ghz cpu with 64gig ram in an asus x99-Deluxe II MB if that helps evaluate the potential of the setup. One last question, sli bridges.? Do you need to do this if trying to setup 2 additional gpu's that will only be used to render RS on.? I do not wish to have the extra gpu's combined to increase power in say a video game. I don't know much about sli and when/why it's needed in multi gpu setups, but was under the impression that is to combine the power of the cards for gaming performance. Thanks for any info E
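For the end goal described above (sequences rendering on spare cards while the main GPU stays free), one pattern worth testing is launching a separate batch render per card. A dry-run sketch under assumptions: `hrender` ships with Houdini, `CUDA_VISIBLE_DEVICES` is a standard CUDA environment variable, but whether Redshift honors it is something to verify (Redshift also has its own device checkboxes in its preferences), and the ROP paths and .hip filenames are made up. The script only prints the commands it would run:

```shell
# Dry-run sketch: print one batch-render command per spare GPU so the
# main card (device 0) stays free for interactive work. Whether Redshift
# honors CUDA_VISIBLE_DEVICES is an assumption to verify; the ROP paths
# and .hip files are hypothetical.
launch_render() {
  gpu="$1"; rop="$2"; hip="$3"
  # hrender is Houdini's batch-render wrapper: -e renders a frame range,
  # -f gives the range, -d names the output driver (ROP).
  echo "CUDA_VISIBLE_DEVICES=$gpu hrender -e -f 1 100 -d $rop $hip &"
}
launch_render 1 /out/rs_shotA shotA.hip
launch_render 2 /out/rs_shotB shotB.hip
```

No SLI bridge is needed for a setup like this: each render process simply addresses its own card, and SLI only matters for combining cards behind a single display output, e.g. for games.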
-
Hi, is there any way to cache my pyro sim with OpenCL using my four GPUs (4x GTX 1080)? I couldn't find any preference in Houdini or a topic in search. Thank you.
-
I'm looking into graphics cards and I'm finding it hard to nail down exactly what to go for in terms of balancing budget with performance. Obviously the 1080 Ti would be better than the 1070, but there is, of course, the massive difference in price. Even within the 1070 model, limited to EVGA, there are still multiple versions to decide between. Is there much wrong with going for the cheapest one? https://ca.pcpartpicker.com/products/video-card/#c=369&m=14&sort=price Thanks, everyone.
-
I have a piece of geometry with a clay material and a texture, animated over 45 thousand frames. My computer has 2x Xeon E560 at 2.4 GHz, 8x GTX 1080 Ti, and 16x 8 GB of RAM. Mantra does not use the GPU for rendering, of course, but CPU usage is low while rendering; why? What can I do to accelerate rendering? How can I use GPU rendering? Thanks, Martin
-
I know GPU renderers take advantage of multiple GPUs, but does H16 OpenCL performance also benefit?
-
Hi all, I would very much like to try out some GPU rendering, and therefore I am looking for an Nvidia GPU. I had found an Asus 1080 Ti Poseidon second-hand, but now I want to check with you guys: what do you think is the best solution? Are any 1080 Ti models better than others? If so, which one would you recommend? Cheers :-)
-
Hi, I'm looking to build a PC at home and I don't know a huge amount about these things, so I'm looking for some advice. My budget is around £3k. The main uses will be Houdini, Maya and Nuke. I do a fair amount of heavy FX work and I'm planning to do GPU rendering with Redshift. My main questions are about which CPU and GPU to go for:
CPU
- The AMD Threadripper looks very interesting; is it worth waiting to see how much it costs and how it compares to the Intel i7/i9?
- Are the i9 chips worth it?
- Or is it better to go for a dual CPU (e.g. dual Xeons)? Would that fit in my budget? Is it possible to use just one CPU in a dual-socket board (with the option to add another later)?
GPU
- I would ultimately like to have 2 or 3 GPUs for rendering, so I guess this influences which motherboard I can get (with enough PCIe lanes)
- I think the 1070 seems like a really good option to start with for price/performance