eetu's lab
ikarus    64

The particle work is astonishing.

I'm curious to know what's involved in the GPU kernel you created to get the buckets working, as well as how the new particles are generated. I'm not a coder by any means, but I love analyzing the hard detail in things like this.

eetu    496

After a bit of a pause, here's more horsing around. This time it is an exercise in python+sopsolver - Volume Game of Life.

An extension of the old Conway one into 3D; this one counts a voxel's neighbors in all three dimensions. If the number of live neighbors of a voxel falls between predefined limits, that voxel will be alive on the next step. This one is implemented as a Python SOP inside a SOP Solver. It would work with VEX as well, probably a lot faster too, but I wanted to try Python.

In the hip, you can set the lower and upper limit for number of neighbors on the Python SOP.
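The neighbor-count rule is simple enough to sketch outside Houdini. Here is a minimal numpy version of one step, assuming a boolean voxel grid and wrap-around boundaries; the hip itself does this in a Python SOP on a Houdini volume, so the function name and grid representation below are my own assumptions:

```python
import numpy as np

def life_step(grid, lower, upper):
    """One step of volume Game of Life: a voxel is alive next step
    if its 26-neighbor live count falls within [lower, upper]."""
    neighbors = np.zeros(grid.shape, dtype=int)
    # Accumulate the 26 face/edge/corner neighbors by shifting the grid.
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                if dx == dy == dz == 0:
                    continue
                neighbors += np.roll(grid.astype(int), (dx, dy, dz), axis=(0, 1, 2))
    return (neighbors >= lower) & (neighbors <= upper)
```

With lower/upper exposed as parameters, this mirrors the two limits you can set on the Python SOP in the hip.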

 

edit: added the otl

vol_life_volume.mov

vol_life_isosurf.mov

vol_life_v008.hip

ee_volt.otl

Edited by eetu

eetu    496

i curiously want to know what is involved in the gpu kernel you created to get the buckets working, as well as how the new particles are generated.

That was quite a simple setup; I just created all the new particles on the first frame. I made 40 (or whatever) copies of the original particles, and gave them random offsets. So in the first frame it looks like just your normal "cottonballsy" let's-copy-a-bunch-of-particles-to-each-particle setup, but it soon evens out as the original particles tug the new particles around.
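That seeding step could look something like this (illustrative names, and a uniform offset distribution is my own assumption; the actual setup presumably did this with SOPs):

```python
import numpy as np

def multiply_particles(P, copies=40, radius=0.1, seed=0):
    """Seed `copies` new particles around each source particle
    with a small random offset, as described above.
    P is an (N, 3) array of source positions."""
    rng = np.random.default_rng(seed)
    # Repeat each source point `copies` times, then jitter every copy.
    src = np.repeat(P, copies, axis=0)
    offsets = rng.uniform(-radius, radius, size=src.shape)
    return src + offsets
```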

The kernel just accumulates directions/distances to nearby particles (for normals/occlusion) and their velocities (for the advection).
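A rough CPU-side sketch of that accumulation (the real kernel ran on the GPU and would use a spatial acceleration structure rather than this brute-force loop; names and the search radius are my assumptions):

```python
import numpy as np

def gather(new_pts, src_pts, src_vel, radius=0.5):
    """For each new particle, accumulate directions to nearby source
    particles (normal = reversed average direction, doubling as a rough
    occlusion proxy) and average their velocities for advection."""
    normals = np.zeros_like(new_pts)
    vels = np.zeros_like(new_pts)
    for i, p in enumerate(new_pts):
        d = src_pts - p                          # vectors toward neighbors
        dist = np.linalg.norm(d, axis=1)
        near = (dist < radius) & (dist > 0)
        if near.any():
            normals[i] = -d[near].mean(axis=0)   # point away from the cluster
            vels[i] = src_vel[near].mean(axis=0) # velocity to advect by
    return normals, vels
```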

Klockworks    0

That was quite a simple setup; I just created all the new particles on the first frame. I made 40 (or whatever) copies of the original particles, and gave them random offsets. So in the first frame it looks like just your normal "cottonballsy" let's-copy-a-bunch-of-particles-to-each-particle setup, but it soon evens out as the original particles tug the new particles around.

The kernel just accumulates directions/distances to nearby particles (for normals/occlusion) and their velocities (for the advection).

How exactly are you calculating your normals? I was thinking it could be done by finding the general direction towards its neighbouring particles and reversing the direction of the vector, is this how you went about it?

Does your setup allow for particles to birth/die, or is it a static count?

eetu    496

How exactly are you calculating your normals? I was thinking it could be done by finding the general direction towards its neighbouring particles and reversing the direction of the vector, is this how you went about it?

Does your setup allow for particles to birth/die, or is it a static count?

That is exactly how I did it.

As it stands, it doesn't really handle birth/death. I've changed approaches since; I'll post something in a couple of days :)

Klockworks    0

That is exactly how I did it.

As it stands, it doesn't really handle birth/death. I've changed approaches since, I'll post something in a couple of days :)

Your work inspired me to the point that I'm trying my own implementation in Houdini as we speak, along with a few other people's data-expansion work. Going to see if I can't make an uber millions-of-particles OTL.

eetu    496

Next iteration of particle multiplication.

This is quite a different beast than the earlier one: now it's a mantra procedural, all the new particles are created per-frame inside mantra.

It's pretty much following the Sony Cluster approach from this year's SIGGRAPH volume rendering course.
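The key trick in cluster-style rendertime multiplication is that children are generated deterministically from each source point's id, so the same children reappear every frame without storing anything between frames. A toy sketch of that idea (this is not eetu's HDK code, and every name and parameter below is illustrative):

```python
import numpy as np

def cluster_children(pid, P, v, n_children=32):
    """Generate child particles for one source point. Seeding the RNG
    from the point id makes the children stable across frames, so they
    can be created per-frame inside the renderer with no caching."""
    rng = np.random.default_rng(pid)         # deterministic per point
    offsets = rng.normal(scale=0.05, size=(n_children, 3))
    children = P + offsets
    # Children inherit the parent's velocity (here, simply copied)
    # so motion blur stays coherent with the source simulation.
    vel = np.tile(v, (n_children, 1))
    return children, vel
```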

comp_b_tn.png

Animation of the new particles, with both source and new particles side-by-side, and with them overlaid on top.

It's adaptive, so I didn't really set any particle count, but I think the last frame was something like 130 million.

I first prototyped everything with VEX and Python inside Houdini. It was a lot slower, but things are just so much easier to debug inside Houdini, with the spreadsheet and all. Inside mantra one is pretty much flying blind :)

The Voronoi-like look is kinda nice, but I can't really get rid of it as it is now. Working on it...

At least it'll be great for whitewater splashes, heh.

Pazuzu    175

Nice Procedural!!! :blink:

I hope that H12 comes with a point procedural, that will be great!!!

teresuac    0

That's cool, eetu! I just discovered your lab; some very inspiring stuff.

I was trying to add particles at render time and I see you already did this.

now it's a mantra procedural, all the new particles are created per-frame inside mantra.

Is it a program procedural shader?

It's not well documented; can you tell me how you call a Python script with it, and how you get your initial geometry?

thanks

eetu    496

I was trying to add particles at render time and I see you already did this.

Is it a program procedural shader?

It's not well documented; can you tell me how you call a Python script with it, and how you get your initial geometry?

I'm not sure if there is a way to create geometry at rendertime with Python; the VRAY procedurals are written in C++ and compiled to a .dll/.so.

Peter Claes has posted some help on how to get started, as well as some code for an object-instancing procedural you can study.

Mark Story's clusterThis instancer is also open source and a good reference.

eetu.

eetu    496

An adventure into generative art.

I ran into Multi-Scale Turing Patterns, and thought it would be fun to try and do that for volumes.

It was satisfying to be able to do it with volume SOPs and VOPs without needing to write any code :)
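For reference, the core multi-scale update behind the linked technique can be sketched in a few lines of numpy. This is a 2D toy version with my own scale and step-size choices, not the volume SOP/VOP network from the hip; the idea per cell is to compute activator/inhibitor blurs at several scale pairs, pick the scale with the smallest variation, and nudge the field in that scale's direction:

```python
import numpy as np

def blur(f, r):
    """Cheap box blur of radius r via wrapped shifts (stand-in for the
    Gaussian blurs the volume VOPs would do)."""
    out = np.zeros_like(f, dtype=float)
    for dx in range(-r, r + 1):
        for dy in range(-r, r + 1):
            out += np.roll(f, (dx, dy), axis=(0, 1))
    return out / (2 * r + 1) ** 2

def turing_step(f, scales=((1, 2, 0.05), (2, 4, 0.04), (4, 8, 0.03))):
    """One multi-scale Turing update on field f in [-1, 1].
    Each scale is (activator_radius, inhibitor_radius, step_amount)."""
    best_var = np.full(f.shape, np.inf)
    delta = np.zeros_like(f)
    for act_r, inh_r, amount in scales:
        act = blur(f, act_r)
        inh = blur(f, inh_r)
        var = np.abs(act - inh)
        # Step up where the activator wins, down where the inhibitor wins.
        step = np.where(act > inh, amount, -amount)
        pick = var < best_var            # smallest-variation scale wins
        delta = np.where(pick, step, delta)
        best_var = np.minimum(best_var, var)
    f = f + delta
    # Renormalize back to [-1, 1] so the pattern never saturates.
    return 2 * (f - f.min()) / (f.max() - f.min() + 1e-9) - 1
```

Iterating `turing_step` on a noise field produces the characteristic nested patterns; in the hip, the same blur/compare/normalize chain is built from volume SOPs and VOPs in 3D.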

shot_000.jpg

Viewport flipbooks [mov] [mov]

Renders [mov] [mov]

I also tried putting a point light inside the volume and rendering with scattering, and got this nice happy accident:

shot_001.jpg

Most of the scatter tests looked like crap, but I hope I'll get a cool animation rendered soon.


Klockworks    0

And the ever-prevalent question in this thread: how did you do it? I'm having a hard time understanding the directions at that link.

eetu    496

Something I found in my archives. I saw something similar from someone else - can't remember where or from whom, sorry.

rigidsmoke.jpg

symek    266

Something I found in my archives. I saw something similar from someone else - can't remember where or from whom, sorry.

rigidsmoke.jpg

ha, ha, ha! funny!

Thanks eetu!

