
eetu's lab


eetu


  • 6 months later...

The particle work is astonishing.

I'm curious to know what's involved in the GPU kernel you created to get the buckets working, as well as how the new particles are generated. I'm not a coder by any means, but I love analyzing the hard detail in things like this.


  • 1 year later...

After a bit of a pause, here's more horsing around. This time it is an exercise in Python + SOP Solver: Volume Game of Life.

An extension of the old Conway one into 3D: this one counts a voxel's neighbors, and if the count falls between predefined limits, that voxel will be alive in the next step. It's implemented as a Python SOP inside a SOP Solver. It would work in VEX as well, probably a lot faster too, but I wanted to try Python.

In the hip, you can set the lower and upper limits for the number of neighbors on the Python SOP.
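Outside Houdini, the update rule can be sketched in plain numpy (a minimal sketch, not the Python SOP from the hip; the grid representation, the default lo/hi limits, and the wrap-around boundaries are my assumptions):

```python
import numpy as np

def life_step(grid, lo=4, hi=6):
    """One step of the 3D volume Game of Life: count each voxel's 26
    neighbors, and a voxel is alive next step iff the count falls
    within [lo, hi]. Boundaries wrap around (np.roll)."""
    counts = np.zeros(grid.shape, dtype=np.int32)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                if dx == dy == dz == 0:
                    continue  # skip the voxel itself
                counts += np.roll(grid, (dx, dy, dz), axis=(0, 1, 2))
    return ((counts >= lo) & (counts <= hi)).astype(grid.dtype)
```

In the actual SOP Solver setup, the same logic would read and write the voxel grid through `hou.Volume` instead of a numpy array.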

 

edit: added the otl

vol_life_volume.mov

vol_life_isosurf.mov

vol_life_v008.hip

ee_volt.otl


I'm curious to know what's involved in the GPU kernel you created to get the buckets working, as well as how the new particles are generated.

That was quite a simple setup; I just created all the new particles on the first frame. I made 40 (or whatever) copies of the original particles and gave them random offsets. So on the first frame it looks like your normal "cottonballsy" let's-copy-a-bunch-of-particles-to-each-particle setup, but it soon evens out as the original particles tug the new particles around.
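As a rough illustration of that first-frame seeding (a numpy sketch; the copy count and offset radius here are made-up illustration values, not taken from the actual setup):

```python
import numpy as np

def seed_copies(P, copies=40, radius=0.05, seed=0):
    """Make `copies` jittered duplicates of every source particle.
    P is an (n, 3) array of positions; each copy gets a random offset
    inside a cube of half-width `radius` around its source particle."""
    rng = np.random.default_rng(seed)
    dup = np.repeat(P, copies, axis=0)                # (n * copies, 3)
    dup += rng.uniform(-radius, radius, size=dup.shape)
    return dup
```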

The kernel just accumulates directions/distances to nearby particles (for normals/occlusion) and their velocities (for the advection).
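A CPU-side sketch of what such an accumulation kernel computes per particle (the search radius and the normalize/average choices are my assumptions, not the actual GPU code):

```python
import numpy as np

def accumulate(P, V, i, radius=0.1):
    """For particle i, accumulate directions to nearby particles and
    their velocities. The negated average direction serves as an
    outward normal (and a crude occlusion hint); the average neighbor
    velocity is what advects the new particles."""
    d = P - P[i]
    dist = np.linalg.norm(d, axis=1)
    near = (dist > 0.0) & (dist < radius)
    if not near.any():
        return np.zeros(3), V[i]
    mean_dir = (d[near] / dist[near, None]).mean(axis=0)
    normal = -mean_dir
    length = np.linalg.norm(normal)
    if length > 0.0:
        normal /= length
    return normal, V[near].mean(axis=0)
```

On the GPU this per-particle loop would run in parallel over all particles, typically with a spatial hash so only nearby buckets are searched.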


  • 1 month later...

How exactly are you calculating your normals? I was thinking it could be done by finding the general direction towards the neighbouring particles and reversing that vector; is this how you went about it?

Does your setup allow for particles to birth/die, or is it a static count?


How exactly are you calculating your normals? I was thinking it could be done by finding the general direction towards the neighbouring particles and reversing that vector; is this how you went about it?

Does your setup allow for particles to birth/die, or is it a static count?

That is exactly how I did it.

As it stands, it doesn't really handle birth/death. I've changed approaches since, I'll post something in a couple of days :)


Your work inspired me to the point that I'm trying my own implementation in Houdini as we speak, along with a few other people's data-expansion work. Going to see if I can't make an uber millions-of-particles OTL.


  • 2 weeks later...

Next iteration of particle multiplication.

This is quite a different beast from the earlier one: now it's a mantra procedural, and all the new particles are created per-frame inside mantra.

It's pretty much following the Sony Cluster approach from this year's Siggraph volume rendering course.

comp_b_tn.png

Animation of the new particles (seq_c.mov), with both source and new particles side-by-side (seq_c_sbs.mov), and with them overlaid on top (seq_c_overlay.mov).

It's adaptive, so I didn't really set any particle count, but I think the last frame was something like 130 million.
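The actual procedural is C++ running inside mantra, but the per-source clustering idea can be illustrated outside it (a numpy sketch; the id-seeded RNG, Gaussian offsets, and parameter names are my assumptions, not the real code):

```python
import numpy as np

def cluster_children(P, V, ids, counts, radius=0.02):
    """Spawn counts[k] child particles around each source particle k.
    Seeding the RNG from a stable per-particle id keeps each little
    cloud coherent when the children are regenerated every frame."""
    out_p, out_v = [], []
    for k in range(len(P)):
        rng = np.random.default_rng(int(ids[k]))
        n = int(counts[k])
        offsets = rng.normal(scale=radius, size=(n, 3))
        out_p.append(P[k] + offsets)
        out_v.append(np.repeat(V[k][None, :], n, axis=0))
    return np.concatenate(out_p), np.concatenate(out_v)
```

The adaptive part would then just mean computing `counts` per source particle (from screen coverage, local density, or similar) instead of using a fixed number.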

I first prototyped everything with VEX & Python inside Houdini. It was a lot slower, but things are just so much easier to debug inside Houdini, with the spreadsheet and all. Inside mantra one is pretty much flying blind :)

The voronoi-like look is kind of nice, but I can't really get rid of it as it is now. Working on it...

At least it'll be great for whitewater splashes, heh.


That's cool, eetu! I just discovered your lab; some very inspiring stuff.

I was trying to add particles at render time and I see you already did this.

now it's a mantra procedural, all the new particles are created per-frame inside mantra.

Is it a program procedural shader?

It's not well documented; can you tell me how you call a Python script with it, and how you get your initial geometry?

thanks


I was trying to add particles at render time and I see you already did this.

Is it a program procedural shader?

It's not well documented; can you tell me how you call a Python script with it, and how you get your initial geometry?

I'm not sure there is a way to create geometry at render time with Python; the VRAY Procedurals are written in C++ and compiled to a .dll/.so.

Peter Claes has some help on how to get started, as well as some code for an object-instancing procedural you can study.

Mark Story's clusterThis instancer is also open source and a good reference.

eetu.


An adventure into generative art.

I ran into Multi-Scale Turing Patterns, and thought it would be fun to try and do that for volumes.

It was satisfying to be able to do it with volume SOPs and VOPs without needing to write any code :)
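For reference, the multi-scale Turing update described in the link can be sketched in numpy (the scale radii and step sizes below are arbitrary picks, and a real setup would use proper blurs rather than this wrapping box filter):

```python
import numpy as np

def box_blur(a, r):
    """Separable box blur of radius r, wrapping at the borders.
    Works on arrays of any dimension, so 3D volumes come for free."""
    for axis in range(a.ndim):
        acc = np.zeros_like(a)
        for s in range(-r, r + 1):
            acc += np.roll(a, s, axis=axis)
        a = acc / (2 * r + 1)
    return a

def turing_step(grid, scales=((8, 16, 0.05), (4, 8, 0.04), (1, 2, 0.03))):
    """One multi-scale Turing-pattern step. For each (activator radius,
    inhibitor radius, step) scale, blur the field at both radii; at
    every cell the scale with the smallest |activator - inhibitor|
    wins and nudges the value up or down by its step; finally the
    whole field is renormalized to [-1, 1]."""
    acts = [box_blur(grid, a) for a, i, s in scales]
    inhs = [box_blur(grid, i) for a, i, s in scales]
    variation = np.stack([np.abs(A - I) for A, I in zip(acts, inhs)])
    best = np.argmin(variation, axis=0)       # winning scale per cell
    delta = np.zeros_like(grid)
    for k, (a, i, step) in enumerate(scales):
        nudge = np.where(acts[k] > inhs[k], step, -step)
        delta = np.where(best == k, nudge, delta)
    grid = grid + delta
    lo, hi = grid.min(), grid.max()
    return 2.0 * (grid - lo) / (hi - lo + 1e-12) - 1.0
```

In the Houdini version, the blurs, per-voxel minimum, and nudge would each map onto volume SOPs/VOPs rather than code.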

shot_000.jpg

Viewport flipbooks [turing_07.mov] [turing_09.mov]

Renders [turing_10.mov] [turing_11.mov] [turing_12.mov]

I also tried putting a point light inside the volume and rendering with scattering, and got this nice happy accident:

shot_001.jpg

Most of the scatter tests looked like crap, but I hope I'll get a cool animation rendered soon.



And the ever-prevalent question in this thread: how did you do it? I'm having a hard time understanding the link's directions.

