Josami Posted April 7, 2020

Hi! I've been trying to push my computer to find out its limits. My setup:

*Threadripper 1950X
*128 GB RAM at 2400 MHz (I know I could have gone for a better choice, but I wasn't aware at the moment)
*GTX 1080 (right now)
*Windows 10
*CPU max temperature 67 °C // GPU max temperature registered is 71 °C according to HWMonitor, but it usually sits between 45 and 71 °C

I'm running a FLIP simulation with 0.075 particle separation and a box container of 75x75x125. It is an 89-frame simple animation of a water emission colliding with two rocks and a terrain. Its final particle count is 10.5M, and the surface and velocity volumes are around 8M each.

While I have rendered the full frame range successfully once, it crashed a lot before that. The crash frame varies, but it is usually somewhere between frames 30 and 70. Whenever this happened, my Task Manager registered a GPU usage spike from 0-3% up to 100%. It doesn't fill the RAM; at crash time it is using somewhere between 28 and 60 GB.

At this point I've already tried: 1) updating my GPU drivers, which did allow the simulation to finish; 2) disabling OpenCL in the FLIP solver, and also keeping it on but running the calculations on the CPU instead of the GPU (this made the simulation slower and I still got the random GPU spike); 3) trying to disable the GPU right from the houdini.env, which was a bad idea because my CPU temperature got really high.

Something interesting: at the point where the sim is doing about two frames per minute, my CPU goes from registering spikes to a constant 100%. When this happens, a single frame can take up to ten minutes to simulate, although the next two or three after it only take a couple of minutes each. The one after those again takes ten minutes or more, up to twenty minutes for the last frames. RAM usage is around 60 GB at this point.

Maybe I've already hit my PC's limits, which is fine if that's the case. However, something feels weird, and maybe someone knows what is going on. Thank you very much for reading my message.
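For context on point 3 above: "disabling the GPU from houdini.env" usually means Houdini's OpenCL environment variables. A minimal sketch of what such an entry could look like, assuming the intent was to point OpenCL at the CPU device instead of the GPU (HOUDINI_OCL_DEVICETYPE is a documented Houdini variable, but exactly what the poster changed is not shown in the thread):

```
# houdini.env -- sketch only: make Houdini's OpenCL run on the CPU device
# rather than the GPU. Whether this matches the poster's edit is an assumption.
HOUDINI_OCL_DEVICETYPE = "CPU"
```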
Atom Posted April 7, 2020 (edited)

Let's see, if we do the math for 75x75x125 @ 0.075, we get about 1.666 billion voxels. With that much memory usage, your 1080 won't be of any use. For large simulations such as the one you have described, you may need to distribute the sim across multiple machines. What I would recommend, though, is choosing reasonable settings that will fit within your physical memory. Now that you know what is too big, just make it smaller and scale it up later if you need to maintain that exact size.

Here is the expression I used to come up with that large number:

ch("../flipsolver1/limit_sizex")/ch("particlesep")*ch("../flipsolver1/limit_sizey")/ch("particlesep")*ch("../flipsolver1/limit_sizez")/ch("particlesep")

Edited April 7, 2020 by Atom
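For anyone following along, here is the same back-of-the-envelope arithmetic as a small Python sketch. The container size and particle separation come from the posts above; this naive version assumes the voxel size equals the particle separation (the grid scale correction comes up in the next reply):

```python
# Naive voxel-count estimate for a 75 x 75 x 125 FLIP container at 0.075 particle separation,
# assuming voxel size == particle separation (grid scale not yet taken into account).
size_x, size_y, size_z = 75.0, 75.0, 125.0
particle_sep = 0.075

voxels = (size_x / particle_sep) * (size_y / particle_sep) * (size_z / particle_sep)
print(f"{voxels:,.0f} voxels")  # ~1,666,666,667, i.e. the ~1.666 billion figure above
```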
Skybar Posted April 7, 2020

31 minutes ago, Atom said: "Let's see, if we do the math for 75x75x125 @ 0.075 = 1.666 billion voxels."

It's actually an eighth of that, around 200 million voxels. Keep in mind the resolution of the volumes is particle separation * grid scale.
Atom Posted April 7, 2020

Yep, you're right, I forgot Grid Scale defaults to 2.0.
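Putting the correction into numbers, a sketch under the same assumptions as before, now folding in the FLIP solver's default grid scale of 2.0, so the voxel size is particle separation * grid scale:

```python
# Corrected estimate: voxel size = particle separation * grid scale.
# Doubling the voxel size halves the resolution on each axis, so the count drops by a factor of 8.
size_x, size_y, size_z = 75.0, 75.0, 125.0
particle_sep = 0.075
grid_scale = 2.0  # FLIP solver default, as noted above

voxel_size = particle_sep * grid_scale
voxels = (size_x / voxel_size) * (size_y / voxel_size) * (size_z / voxel_size)
print(f"{voxels:,.0f} voxels")  # ~208,333,333, i.e. the "around 200 million" figure above
```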
Josami (Author) Posted April 7, 2020

Great! Thank you very much for your observations. So basically it was indeed pushing my workstation quite a bit, although it was still somewhat manageable. That's great news! I will definitely resim with that in mind. Thank you very much again!