
Simulating smoke and fluids takes a long time.


simonj


Hi,

I have a MacBook Pro, 2.8 GHz Intel Core 2 Duo with 4 GB RAM and a decent graphics card (512 MB NVIDIA something something).

It's not the quickest computer for working with Houdini, simulations and renders, but it's alright. Now, I've been trying to learn how smoke behaves in Houdini, as I'm trying to achieve a specific look. But I find it very hard, since my simulation times are pretty high: first of all I need to be at about frame 50 to actually see how the smoke looks, and then for each change I make I have to re-simulate all those frames.

What can I do to make this a bit faster? Accuracy and detail aren't that important, I just want to see if I'm getting closer to the look I'm after. I'd love to keep experimenting with fluid dynamics without having to invest in a new computer. Learning is very slow when the idle time between results is so long.

Thanks for any tips and answers.

Edited by simonj

Simulating in 2D helps you learn lots of the different parameters and fields fairly fast.

Full 3D simulation is simply slow. You can lower the resolution a bit, but apart from that you do need computing power. I have been waiting a while for 8-core machines to drop in price a bit to handle these kinds of simulations, but there doesn't seem to be that much mainstream demand for that kind of CPU power. One of the cheaper options I came across was a Mac Pro. (On the NVIDIA site there are also several links to manufacturers of 8-core workstations.)

A slightly slower machine doesn't mean you can't get good results; you might just have to stick with the approach of using a low-resolution fluid velocity field to advect particles. That way you don't need a high-res container (for either simulating or rendering).
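To make the advect-particles-through-a-coarse-field idea concrete, here is a minimal Houdini-free sketch in plain Python. The swirl field is made up purely for illustration (in practice the velocities would come from your low-res fluid sim), and all names are hypothetical:

```python
import random

N = 8                      # deliberately low grid resolution
CELL = 1.0 / N             # cell size for a unit-square domain

def swirl(i, j):
    # fake velocity field: counter-clockwise swirl around the centre,
    # standing in for a low-res fluid sim's vel field
    x, y = (i + 0.5) * CELL - 0.5, (j + 0.5) * CELL - 0.5
    return (-y, x)

grid = [[swirl(i, j) for i in range(N)] for j in range(N)]

def sample(px, py):
    """Bilinearly interpolate the coarse velocity grid at a point."""
    gx = min(max(px / CELL - 0.5, 0.0), N - 1.001)
    gy = min(max(py / CELL - 0.5, 0.0), N - 1.001)
    i0, j0 = int(gx), int(gy)
    fx, fy = gx - i0, gy - j0
    i1, j1 = min(i0 + 1, N - 1), min(j0 + 1, N - 1)
    lerp = lambda a, b, t: a + (b - a) * t
    vx = lerp(lerp(grid[j0][i0][0], grid[j0][i1][0], fx),
              lerp(grid[j1][i0][0], grid[j1][i1][0], fx), fy)
    vy = lerp(lerp(grid[j0][i0][1], grid[j0][i1][1], fx),
              lerp(grid[j1][i0][1], grid[j1][i1][1], fx), fy)
    return vx, vy

# many more particles than grid cells -- that's the whole point
random.seed(0)
particles = [(random.random(), random.random()) for _ in range(1000)]

dt = 0.04
def advect(p):
    vx, vy = sample(*p)
    return (p[0] + vx * dt, p[1] + vy * dt)

for step in range(25):
    particles = [advect(p) for p in particles]
```

The detail lives in the particle count, not the container resolution, which is why the container can stay cheap for both sim and render.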

Sprites can give you more points than you are actually simulating. You can get them looking quite good, but some things will be impossible (like merging red and blue puffs of smoke together to form purple smoke, for example).

This might be an interesting read in terms of using 2D fluids to help drive particles in 3D:

http://mag.awn.com/?article_no=1797&lt...cial%20Features

http://physbam.stanford.edu/~fedkiw/papers...ford2003-05.pdf

http://graphics.stanford.edu/~fedkiw/paper...ford2003-02.pdf

For rendering: as Jason stated in another thread, too many sprites end up being almost the same as creating a volume. One of the SideFX tutorials on fluids mentions that mantra pretty much treats volumes as a lot of semi-transparent planes stacked on top of each other.

This was done with the advection technique:

http://www.nvidia.com/object/cuda_home.html#state=home -> Real Time 3D Fluid and Particle Simulation and Rendering (the car with the sand cloud behind it)

Good luck with it - but yeah, more cpu power is always a good thing.


Hi there, I was just reading the articles, and I've heard of this method before (referring to the rendering stage of breaking the lighting out into red, green and blue channels), but I'm still kind of hazy on how to go about doing this from beginning to completion. Would anyone be able to expand on this and give me a very simple example? I think it could come in handy for the project I'm currently working on. Breaking everything up into millions of passes still flies by my head sometimes because I'm more interested in the actual dynamics part, but I need to buckle down and really get a solid grounding in these very useful-sounding passes that will let me avoid re-rendering lighting... And if you'd like to throw in an example of common passes that are always rendered out in current production pipelines on top of that, even better. I know this is definitely on a per-shot basis, but just as an example:

I'm doing a shot which is created from electricity, smoke, liquids, particle fluids... basically it's just a whole mess of sims! Thinking about breaking it into "smart" render layers for comp would be so helpful!

thanks,

J

and btw, the 2d to 3d idea is awesome!!!!!!!


Hmmm, I thought about this some more. Would getting the R, G and B lighting for each element just require going into VEX, taking Cd, doing a Vector to Float, making each component a parameter (CdR, CdG and CdB respectively), and then sending those out as layers when rendered?


I'd just like to add that I know Side Effects is very aware of the slowness of simulation and is actively pursuing algorithmic and multithreading enhancements. We can all hope that some of the slowest phases of the simulation (like the Gas Project Non Divergent step) get optimized in the coming weeks.



Hey,

The "lighting into red, green and blue channels" method refers to setting your key light to red, fill to green and rim to blue, for example. When you render a volume that way it will come out multi-coloured, and in compositing you can adjust each colour component separately and balance the lighting in comp. This way you can have a stronger key or less fill without the need to re-render.

I'm just generalising the method; you can set it up however you like.
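A minimal sketch of what the comp side of this trick looks like, in plain Python. The gains, light colours and pixel values are all made up for illustration; the point is just that each channel of the rendered pixel carries one light's contribution, so rebalancing is a per-channel multiply:

```python
# Beauty render has: key light in R, fill in G, rim in B.
# Rebalancing in comp = per-channel gain, then recolour each
# light's contribution and sum into the final pixel.

key_gain, fill_gain, rim_gain = 1.0, 0.4, 0.5   # tweak freely in comp
key_col  = (1.0, 0.95, 0.9)                     # warm key (hypothetical)
fill_col = (0.6, 0.7, 0.9)                      # cool fill
rim_col  = (1.0, 1.0, 1.0)

def relight(pixel):
    r, g, b = pixel                 # per-light intensities from the render
    out = [0.0, 0.0, 0.0]
    for inten, gain, col in ((r, key_gain, key_col),
                             (g, fill_gain, fill_col),
                             (b, rim_gain, rim_col)):
        for c in range(3):
            out[c] += inten * gain * col[c]
    return tuple(out)

# a pixel lit 0.8 by the key, 0.5 by the fill, 0.2 by the rim
final = relight((0.8, 0.5, 0.2))
```

Changing `key_gain` here is exactly the "stronger key without re-rendering" adjustment described above.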

Hope the above helps!

Cheers!

steven


Ah, fantastic! And just to clarify: say I had multiple objects, a glass sphere and a wooden cube. I would set up my lights, giving my key light the colour (1, 0, 0), my fill (0, 1, 0) and my rim (0, 0, 1), and then render everything out as normal. So say I wanted a diffuse and a spec pass: I would render a Cd pass and a spec pass, and in comp the red component of each pass would essentially hold the key light's information. For example, with the wooden cube, I'd render out my diffuse and spec passes, and if I wanted to adjust the key light on that cube, I would just adjust the red component of the diffuse and spec passes in comp?

Oh, and one more thing I just thought of, because I went into Houdini and began to build a very simple scene to test it: how about light intensity? Do I set this up as normal, or keep all intensities at 1?

If I have key, fill and rim at (1,0,0), (0,1,0) and (0,0,1) respectively, then usually I'd have the key at intensity 1, the fill at 0.4 and the rim at 0.5. But for this method of rendering lights into different channels, would it be better to just keep them all at 1? Or I guess it doesn't matter, because at that point you can change it in the comp anyway, since it's already in passes? So would that mean it's fine to set my light intensities to whatever is close to what I want for the final product in 3D, before comp? For example, 1 for key, 0.4 for fill and 0.5 for rim?

Sorry if I sound repetitive, but I just want to make sure I have a good understanding of this method. It seems more than useful to implement.

Thanks,

Jonathan

Edited by itriix

Fantastic! Thanks for the help.

The big reason the R/G/B lighting technique works well with volumetric elements is that they don't vary colour-wise. Smoke is mostly just shades of grey, so you can use "gelled" lights like this on a monochrome element and get something okay.

For general (colored/textured) objects in a scene, you should investigate the "lightexport" functionality in VEX illuminance() loops to break out lighting into entirely separate planes in your deep raster .exr / .pic.


Ahh, very nice, I'll take a look into that for sure. And thank you for clarifying that it's used primarily with volumetric rendering; makes sense. I also was just looking around and noticed there are some abilities in COPs to do 3D lighting after the fact. Have you had any luck with that method? It seems pretty powerful too, I'm just not sure how good it really is...

thanks again!!!

Edited by itriix


IMHO 2D relighting is a bit of a gimmick. It can possibly produce some decent results, but nothing production-ready. There are too many issues (motion blur, transparency, normal sampling) that make it unwieldy.

A very effective and commonplace 2D lighting pipeline involves placing full-intensity white lights ({1,1,1}) in your scene, exporting the shading information per light (using lightexports) and then, in 2D, colour-correcting and adding the illumination of these light layers together. You can break out each component per light too, i.e. diffuse and specular.

So, for a 2-light scene, you'd have something like these layers:

  • RGBA (throwaway)
  • light1_diffuse
  • light1_specular
  • light2_diffuse
  • light2_specular
  • Pz (depth, for DOF effects or fog)
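Here's a minimal sketch of the 2D-side recombine for that layer layout, in plain Python. Each "layer" is a single float instead of an image, and the layer names, gains and tints are hypothetical; a real comp would apply the same math per pixel:

```python
# Hypothetical per-light layers from white-light renders (lightexport);
# one float each stands in for a full image plane.
layers = {
    "light1_diffuse": 0.6, "light1_specular": 0.2,
    "light2_diffuse": 0.3, "light2_specular": 0.4,
}

# per-light colour-correct done in 2D: (gain, tint) per light
grades = {
    "light1": (1.2, (1.0, 0.9, 0.8)),   # brighter, warm key
    "light2": (0.5, (0.7, 0.8, 1.0)),   # dimmer, cool fill
}

# grade each layer by its light's correction, then sum everything
final = [0.0, 0.0, 0.0]
for name, value in layers.items():
    light = name.split("_")[0]          # "light1_diffuse" -> "light1"
    gain, tint = grades[light]
    for c in range(3):
        final[c] += value * gain * tint[c]
```

Because the lights were rendered white at full intensity, all of the colour and balance decisions happen in this loop, entirely in comp.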


Great! Thank you once again for such a concise breakdown. I'll definitely be testing this out ASAP! I'm used to rendering passes for spec, diffuse, occlusion, reflection, etc., but rendering out passes with respect to the lights is new to me, so I can't wait to give it a try!

thanks again,

Jonathan

Edited by itriix

They talk about passing data layers to particles, which I could picture how to do in a more Houdini (i.e. friendlier) way.

Hi Netvudu, all, great thread. Here's one method for using 2D fluid sims to run a 3D particle sim. It's basically just a 2D fluid sim with 2D advection, then randomly rotating the result around Y (in SOPs) to give a 3D result. The result is obviously very symmetrical, so I tried to vary it by simming twice with different parameters and blending between the two sims. Some displacement was also used in SOPs to give a little more asymmetry.

Just one approach, I would be interested in seeing any other methods of making the 2d to 3d conversion. Jason?

brn_advect2d.hip
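The rotate-around-Y trick above can be sketched in a few lines of plain Python. The fake 2D sim data and the point count are made up; the key step is giving every particle its own random angle about the plume's vertical axis:

```python
import math
import random

# Hypothetical 2D sim result: points in the XZ=0 plane, where x is the
# radial distance from the plume axis and y is the height.
random.seed(7)
sim2d = [(random.uniform(0.0, 1.0), random.uniform(0.0, 2.0))
         for _ in range(500)]

def to3d(x, y, seed):
    # rotate the flat point around the Y axis by a per-particle angle
    angle = random.Random(seed).uniform(0.0, 2.0 * math.pi)
    return (x * math.cos(angle), y, x * math.sin(angle))

# seed with a stable per-particle id so the angle stays fixed over frames
cloud = [to3d(x, y, pid) for pid, (x, y) in enumerate(sim2d)]
```

Since every point keeps its radial distance and height, the revolved cloud inherits the 2D sim's motion, which is also why the raw result is so symmetrical and benefits from blending two sims.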



Ah you beat me to it ;)

Nice work with the advection VOP POP, and cool simulation too!

One little thing I noticed is that you're using ptnum for your randomization and rotation. It's probably better to use the particle id, so you avoid popping when particles die.
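To show why id beats ptnum here, a tiny Python sketch (the particle list and seeded-random helper are just stand-ins for POP attributes): ptnum is a position in the list and shifts when an earlier particle dies, while id stays with its particle.

```python
import random

def rand_for(key):
    # stable per-key random value, standing in for rand(seed) in VEX
    return random.Random(key).random()

particles = [{"id": i} for i in range(5)]

by_ptnum_before = [rand_for(ptnum) for ptnum, p in enumerate(particles)]
by_id_before    = [rand_for(p["id"]) for p in particles]

del particles[1]          # the particle with id 1 dies

by_ptnum_after = [rand_for(ptnum) for ptnum, p in enumerate(particles)]
by_id_after    = [rand_for(p["id"]) for p in particles]

# ptnum-keyed values shift onto different particles after the death
# (visible popping); id-keyed values stay put for the survivors.
```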

Edited by pclaes

In terms of volumetric metaball shaders, Gallenwolf recently made a pretty cool metaball shader with 4D noise. It kind of peels inside out. I had a look at the VEX code, but I still don't fully understand what the 4th coordinate of that noise stands for (except that it causes the peeling effect). I understand on a mathematical level how Perlin noise is calculated, but that doesn't have a 4th coordinate :huh:

This is the thread: http://forums.odforce.net/index.php?showto...p;hl=gallenwolf

I knew I saw this somewhere. The 5by5 software preview looks awesome... why isn't this in Houdini yet? Looks like people are finally unlocking the secret to this, though; that's encouraging.

http://www.myrtlesoftware.com/index.php/fivebyfive

