Voxels/storm/voxel_b



hey, houdini magicians,

This topic is not directly related to the HDK yet, but it eventually will be, so I'm starting it here in advance.

Ok, please don't say "holy Batman". I want this project to become a collective attempt/study, as it was with the Fast Occlusion project.

What I basically want to do is research voxel coding/rendering here and, probably (and highly desirably), engineer a "Voxel_B./Storm_wanna_be" plugin for Houdini.

I do understand that Voxel B. is a tough topic. Nevertheless, I do believe that Voxel B. is not something impossible. If someone has already done it, then somebody else can (unless the original author is a Martian, and I believe Alan Kapler is not one of them).

The first thing we need is a working concept (and I don't think I completely realize yet what I/we need). This attempt will probably be, essentially, a trial-and-error study.

Anyway - here's the first question:

Let's suppose that the voxel volume is a "heap of textures" that face the camera (do they really need to face the camera?). What happens when a light source faces the voxels perpendicularly? How do we treat shading the parts of a "texture layer" that are lit from the side? (See the picture for reference.)

Thanks. Any thoughts/ideas/comments/sources_of_Voxel_B._from_Jason are highly welcome ;)

voxelheap9dy.gif


Voxels are a coarse sampling of 3D space. Each voxel contains one or more attributes which hold the results of one or more 3D functions. Typically these functions need information from neighbouring voxels to arrive at their results; this is because the entire system is typically affected by external/internal forces which are both space- and time-dependent. A voxelized chunk of space, then, is typically used to represent a 4-dimensional system (3 for space, 1 for time) -- a system that represents some physical process (like fluid dynamics) would also be required to conserve energy and be generally "stable", which imposes a set of constraints.
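
As a minimal illustration of "one or more attributes per voxel" (a sketch in C with hypothetical names, not any particular package's layout), a dense grid is usually just a flat array of small per-voxel structs:

typedef struct {
    float density;     /* scalar attribute, e.g. smoke density */
    float vel[3];      /* vector attribute, e.g. fluid velocity */
} Voxel;

typedef struct {
    int nx, ny, nz;    /* grid resolution */
    Voxel *data;       /* nx*ny*nz voxels, x varying fastest */
} VoxelGrid;

/* address of the voxel at integer coordinates (i, j, k) */
static Voxel *voxelAt(VoxelGrid *g, int i, int j, int k)
{
    return &g->data[(k * g->ny + j) * g->nx + i];
}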

The slices (or "heap of textures") that you show in that image represent a sampling of the voxel space, frozen at some moment in time, such as one might get from ray marching an i3d texture with steps of constant length; they are not a good representation of what is actually stored in the voxel structure itself, so it's probably not the best mental model to use when thinking about voxels.

As for the question of how to treat illumination that is perpendicular to the viewing direction, the generic answer would be: calculate attenuation by slicing along the light's direction (not the camera's). But if the material represented inside the voxels is meant to be a solid (as opposed to smoke etc.) then you need to be able to calculate gradients of the density function so as to derive a shading normal.
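
For the solid case, one standard way to get that shading normal (a sketch assuming a hypothetical trilinear density() lookup, not a description of any particular renderer) is the negated, normalized central-difference gradient of the density field:

#include <math.h>

extern float density(float x, float y, float z);  /* assumed volume lookup */

/* shading normal from the density gradient; h is roughly one voxel size */
void shadingNormal(float x, float y, float z, float h, float n[3])
{
    n[0] = density(x + h, y, z) - density(x - h, y, z);
    n[1] = density(x, y + h, z) - density(x, y - h, z);
    n[2] = density(x, y, z + h) - density(x, y, z - h);

    /* the surface normal points from dense to empty: against the gradient */
    float len = sqrtf(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
    if (len > 0.0f) {
        n[0] /= -len;  n[1] /= -len;  n[2] /= -len;
    }
}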

To get a better taste for voxels and how they can be used to model fluids, I highly recommend reading Jos Stam's paper "A Simple Fluid Solver based on the FFT". This is a great, simple introduction to a bare-minimum-working-solver, and it comes complete with an implementation in C which you can download from here.

Good luck!


How to make voxelbitch in 283 easy steps:

I think it will be limiting to think of the voxels as a heap of textures - that's just how they are stored in memory. You really need to think of them as a proper volume, and not limit your renderer to only rendering through the volume from limited directions. Design limitations will make your life much more difficult than if you designed it right from the start, so I'd like to make some suggestions to make life down the road a little easier. First you'll need to become familiar with "integration", which is a fancy way of saying "how much does light get absorbed by the clouds when it travels from point A to point B through the volume". This is the foundation of volumetric rendering, and developing an efficient routine is key. It can't matter which direction you're traveling through the volume.

Work in world space.

It shouldn't take you too long to create these routines for an uncompressed array of voxels. Remember, getting a working volume renderer is much different than creating a production-ready volume renderer. By FAR the greatest challenge in creating a volume renderer is making it efficient, both in terms of speed and memory. I can't stress that enough. Remember that going from a 50x50x50 voxel buffer to a 100x100x100 voxel buffer is an 8-times increase in calculations and memory consumption. This means that a 400x400x400 buffer will take 512 times as long to calc as your 50x50x50. This will very quickly grind your renderer to a halt when you try to get to vid-res voxels, let alone film res. Optimizations are generally invented as they are needed. Every time your software hits a speed bottleneck, an optimization needs to be made to speed it up again.
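
To spell out that scaling: cost grows with the cube of the linear resolution,

100^3 / 50^3 = 2^3 = 8
400^3 / 50^3 = 8^3 = 512

and memory follows suit: at a single 4-byte float per voxel, a 400x400x400 buffer is already 64,000,000 voxels, i.e. 256 MB, before you add a second attribute.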

Here are some other things you should keep in mind when creating the overall design of your program. If you expect this to be used for doing VFX, these (unfortunately) will ALL need to be addressed, guaranteed. This is a prelude to optimizations you will eventually need to make.

You'll eventually want hi-res buffers, and voxels are memory pigs, so you'll want to create a way to store voxels in memory that can compress redundant areas but still be extremely quick to read. In voxelbitch, compressed data really takes no longer to read than uncompressed data. Think about being able to store as bytes or short-floats as well. Unless you get into ridiculous resolutions like in Stealth, where there were skies full of 2K clouds, you don't need to worry about rendering voxels directly from disk. I think this was a serious drawback of i3d - the assumption that rendering off disk is important - and I feel it came at the expense of rendering speed. Memory is cheap, speed is everything, and with decent in-memory compression you shouldn't have to go to disk.
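
As a hedged illustration of "compressed but still quick to read" (a generic sketch, not voxelbitch's actual scheme, which isn't public): tile the grid, and let each tile be either one constant value or a dense block. Uniform and empty regions then cost almost nothing, and a read is one branch plus one or two array lookups:

#define TILE 16   /* 16^3 voxels per tile */

typedef struct {
    float constant;   /* value used when data == NULL */
    float *data;      /* NULL for uniform tiles, else TILE^3 floats */
} Tile;

typedef struct {
    int tx, ty, tz;   /* grid size measured in tiles */
    Tile *tiles;
} TiledGrid;

float readVoxel(const TiledGrid *g, int i, int j, int k)
{
    const Tile *t = &g->tiles[((k / TILE) * g->ty + j / TILE) * g->tx + i / TILE];
    if (!t->data)
        return t->constant;   /* compressed (uniform) tile: a single value */
    return t->data[((k % TILE) * TILE + j % TILE) * TILE + i % TILE];
}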

Remember that you should be able to handle multiple volumes in the same scene. This means that your integration may go through several different voxel buffers as it travels from point A to point B. Don't design your way into a corner by limiting an integration to be within one buffer.

Voxels will be HUGE on disk (when you want to store them), so think about disk compression formats. Wavelet types are best for voxels. Your format needs to pack voxel data much smaller on disk than the in-memory compression, and be quick to read (gzip is not acceptable!).
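
For flavour, the classic building block of such wavelet schemes is the Haar transform; one 1D analysis pass looks like this (a sketch only - a real voxel format would run passes along all three axes, quantize the small detail coefficients, and entropy-code the result):

/* one Haar pass: x[0..n-1] -> n/2 averages followed by n/2 details.
   tmp must hold n floats; n must be even. */
void haar1d(float *x, float *tmp, int n)
{
    for (int i = 0; i < n / 2; ++i) {
        tmp[i]       = 0.5f * (x[2*i] + x[2*i + 1]);   /* smooth half */
        tmp[n/2 + i] = 0.5f * (x[2*i] - x[2*i + 1]);   /* detail half */
    }
    for (int i = 0; i < n; ++i)
        x[i] = tmp[i];
}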

Don't limit your voxels to being cubes.

Frustum buffers will be a necessity, and unfortunately I found them to be a bit of a nightmare when it comes to integration. This took a long time to sort through.

Point-cloud type filling is the standard way of filling the volume, but remember that you will need many other volume-filling tools as well, especially splines. While point clouds will probably always have a place in effects, I find their look somewhat dated and their flexibility limited. I tended to work almost entirely with splines.

As voxel resolutions get higher, it will be essential to implement deep shadows. Plan for this in your initial design.
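
The quantity a deep shadow stores is transmittance from the light; a brute-force version (a sketch reusing the hypothetical density() lookup and an extinction coefficient tau) just marches from the shading point toward the light. A real deep shadow map, as in Lokovic and Veach's paper, caches a compressed transmittance function per light-space pixel so this march isn't repeated for every shading point:

#include <math.h>

extern float density(float x, float y, float z);  /* assumed volume lookup */

/* transmittance at p for a light in direction lightDir (unit, light to p) */
float lightTransmittance(const float p[3], const float lightDir[3],
                         float marchSize, int steps, float tau)
{
    float T = 1.0f;
    for (int s = 1; s <= steps; ++s) {
        float d = density(p[0] - s * marchSize * lightDir[0],
                          p[1] - s * marchSize * lightDir[1],
                          p[2] - s * marchSize * lightDir[2]);
        T *= expf(-tau * d * marchSize);   /* Beer-Lambert, one step */
    }
    return T;
}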

VOLUMETRIC MOTION BLUR. These three words will make your life hell, as well as the lives of the people who use your software. A good, FAST way to handle motion blur is really important.

Noise is a CPU's worst enemy, and you will be needing a lot of it. Keep this in mind.

Color voxels have limited use, but would be necessary for doing things like fire.

Remember that making your software usable is the only way it will ever get used. That sounds silly, but it's true. Make sure people can get started on it fairly quickly.

If I were redesigning voxelbitch, I would seriously think about taking advantage of graphics cards to do some things (like calculating noise).

That may seem like a lot to bite off (it should keep you going for the first 4-5 years), but remember that voxelbitch started off as exactly what you're trying to create now. Only incrementally did it become what it is, so go for it.

Good luck, and stay away from loaded weapons.

I am, in fact, a Martian.

nanoo nanoo

alan


Point-cloud type filling is the standard way of filling the volume, but remember that you will need many other volume-filling tools as well, especially splines. While point clouds will probably always have a place in effects, I find their look somewhat dated and their flexibility limited. I tended to work almost entirely with splines.

I'm wondering what you mean by spline filling. Do you have curves or surfaces with a spline falloff? Very interesting post, btw.

Andrew


I'm wondering what you mean by spline filling.  Do you have curves or surfaces with a spline falloff?


Sorry, by splines I mean a curve, like a Bezier curve, which defines the center of a volumetric noodle. They can be tricky to work with, though. That's what the twisters in The Day After Tomorrow were built from. Also, the more distant splashing water, like the whitewater crashing around the Statue of Liberty, was all splines.
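
A guess at what that looks like in code (a sketch of the general idea, not voxelbitch's implementation): sample a cubic Bezier at many parameter values and splat density around each sample point:

/* evaluate a cubic Bezier with control points p[0..3] at parameter t */
void bezier(const float p[4][3], float t, float out[3])
{
    float u = 1.0f - t;
    for (int a = 0; a < 3; ++a)
        out[a] = u*u*u * p[0][a] + 3*u*u*t * p[1][a]
               + 3*u*t*t * p[2][a] + t*t*t * p[3][a];
}

extern void addDensity(const float pos[3], float amount);  /* assumed splat */

/* deposit density along the curve: the center line of the "noodle" */
void fillNoodle(const float p[4][3], int samples)
{
    for (int s = 0; s <= samples; ++s) {
        float c[3];
        bezier(p, (float)s / (float)samples, c);
        addDensity(c, 1.0f);  /* a real tool splats a radial falloff kernel
                                 (plus noise) of some radius around c */
    }
}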


andrewvk asked me if I could recommend a book on integration.

Unfortunately I can't, since I never read any. Perhaps do some googling on the subject? The form of integration I did was pretty simple, and quite possibly NOT the right way to do it. I should have mentioned the term 'ray marching' in my explanation. You march along your ray taking opacity samples of the volume, correct each sample based on the length of the march (I did this with a powf function), and accumulate the opacity values in a variable.

Here's a pseudo-code (C-style) version of what the raymarch/integration might look like:

float finalOpacity = 0.0f;

/* march along the ray from its start to its end in steps of marchSize */
for (float t = 0.0f; t < rayLength; t += marchSize)
{
    float opacitySample = getVolumeSample(rayOrigin + t * rayDir);

    /* modulate the sample based on marchSize (the powf correction above) */
    opacitySample = 1.0f - powf(1.0f - opacitySample, marchSize);

    /* accumulate: each step occludes a fraction of what's left */
    finalOpacity += (1.0f - finalOpacity) * opacitySample;
}


What? All this talk of integration and raymarching and no mention of tau?  Alan, I'm disappoint(ed|ing).


Yeah, what's tau? I've been reading the Advanced RenderMan book, in the section on raymarched volume clouds, and I see "tau". I know it's a variable and how to use the variable, but what is "tau" in the real world?

Dave


Yeah, what's tau? I've been reading the Advanced RenderMan book, in the section on raymarched volume clouds, and I see "tau". I know it's a variable and how to use the variable, but what is "tau" in the real world?

Dave


Ah, yes the "real world", or at least the illusion of one. Tao, of course, refers to the great paradoxical force which drives the universe, appearing as two opposites but actually being a single thing in a constant dualistic struggle of trying to resolve itself, and driving the universe forward in that pursuit. How did "the real world", existence itself, come to be? The Tao ultimately suggests that beginnings and ends are an illusion. I would surmise that it's the irresolvability of the paradox of reality's existence that keeps reality going in the first place, and that it in fact generates reality itself. The universe (or 'reality', for more scope) is the great physical problem-solving computer, trying to solve the question of its own existence, and we are all sub-processes of that same great question. We too search for the great answer; it drives us forward, spawning new sub-processes (children) that will take over the eternal search for the answer to the eternal unsolvable question. Through the realization of paradox as an indicator of 'profound' truth we accept the unsolvabili... oh shit, Gilligan's Island marathon, gotta go


Ah, yes the "real world", or at least the illusion of one.  Tao, of course, refers to the great paradoxical force ...

ROTFL! Good one! :lol:

Hey Dave, at the risk of grossly oversimplifying things here, tau is, as I understand it, just the Greek letter (equivalent to our 't') that's often used to represent the extinction coefficient -- the rate at which light intensity traveling through some medium will decay until it vanishes altogether (which is naturally closely related to density).

So, in the context of ray-marching, you'll usually see it in an expression of the form:

L_atten = exp(-tau * x);

where x is some (positive) distance, and tau is usually assumed to be constant (i.e. a global optical property of the medium). But if you're modeling something like smoke, then tau is a varying quantity and you need to approximate the integral numerically somehow. That's why you'll often see both quantities tied to the step size (as in Larry Gritz's smoke shader). This is because you're then approximating the definite integral via a summation (of a bunch of trapezoids in Larry's case):

L_out = L_in * exp( -SUM[ 0.5 * (tau[i] + tau[i+1]) * dx ] )

You might want to google around for "trapezoidal integration" or "composite trapezoidal rule".
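
A small sketch of that composite trapezoidal rule applied to a varying tau (hypothetical tau(x) lookup along the ray):

#include <math.h>

extern float tau(float x);   /* assumed: extinction sampled along the ray */

/* attenuation over the interval [a, b], split into 'steps' trapezoids */
float attenuation(float a, float b, int steps)
{
    float dx = (b - a) / (float)steps;
    float integral = 0.0f;
    for (int i = 0; i < steps; ++i) {
        float x0 = a + i * dx;
        integral += 0.5f * (tau(x0) + tau(x0 + dx)) * dx;  /* one trapezoid */
    }
    return expf(-integral);   /* L_out = L_in * attenuation */
}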

Then again, you'll also see 'sigma' used a lot (with the same meaning given to 'tau' above) in the SSS literature, so go figure... but I like 'TAO' a lot better! :D


Thanks, Mario.

While the end of Alan's explanation was all funny and everything, I liked your answer much better, since it actually answered the question I was asking.

That being said, I had to call everyone in my office over to see Alan's little spinning yin-yang logo thing. That's just cool.

Dave


  • 4 weeks later...

Okey-dokey, folks.

Here's what I've got so far (I have been refreshing my school/college math for a while).

I am still at the stage of birthing a concept. I want everybody who is willing to participate to criticize (in a constructive way, if possible) whatever I've got for now, so that we can draw up the best concept in this collaborative effort.

a) Sorry for the probably too many forthcoming 'I's - if you want, just exchange one for a 'we' whenever you see it.

b) For now I would like the system to generate clouds, mainly. Lots of them (probably a sky full of clouds), viewed and lit from any angle. Though I plan to extend it to all sorts of smoke as well.

1. I plan to generate noise using hardware shaders (I haven't researched Houdini's capabilities to use hardware shaders yet, but I believe it can use them - can't it?).

2. I plan to keep the noise on planes.

3. Planes would slice the areas where I plan to create clouds.

4. Areas and planes work like 'cookies': I create a 'cloudy shape' (for example from metaballs or primitives, or even isosurfaces - whatever), then these 'cloudy shapes' are sliced with planes, thus creating a 'sliced version of the cloudy shape'.

5. Then these slices are textured and rendered, or, alternatively, textured and rendered one by one and composited (some automation will be needed here, but I don't think that's going to be a problem). This saves video memory, since only one textured slice is stored in memory at any single moment, so many slices can be processed this way.

6. Slices are perpendicular to the camera, and every time the camera changes its position, the slices are repositioned and reprocessed accordingly.

7. I plan to involve some sort of CFD (fluid solver) as a source of velocity vectors for motion-blurring the parts of the volume represented by the 'cloudy shapes' - as if they were filled with a fluid that moves constantly. This will also help (I suppose) to simulate the interaction of these clouds with other objects (e.g. an airplane rushing through a cloud).
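
A hedged sketch of one common way to use such a velocity field for volumetric motion blur (Eulerian "smearing": jitter each density lookup backward along the local velocity over the shutter interval; density() and velocity() are assumed lookups):

#include <stdlib.h>

extern float density(const float p[3]);
extern void  velocity(const float p[3], float v[3]);

/* density at p, smeared along the motion over the shutter time */
float blurredDensity(const float p[3], float shutter)
{
    float v[3], q[3];
    velocity(p, v);

    /* random offset back in time within the shutter interval */
    float t = shutter * (float)rand() / (float)RAND_MAX;
    for (int a = 0; a < 3; ++a)
        q[a] = p[a] - v[a] * t;

    return density(q);
}

Averaging many such jittered samples per ray step approximates the blur; it is cheap, but it assumes the field doesn't change much over the shutter.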

Ok - I probably missed something, but that's it so far. See the illustration. Your suggestions and comments are highly welcome.

http://img292.imageshack.us/img292/3218/concept000ec.jpg

(use the link if you don't see it)

concept000ec.jpg

Thanks.


Wow, this is a whole lot of feedback! ;)

Does it mean "yes"/"no"/"go for it"/or "man, this is so stupid - you don't even understand how stupid it is!..."? :huh:


Hey MADjestic,

First of all, nothing that helps you understand something could ever possibly be called "stupid".

Now; as far as your plan goes, I don't think there's anything "wrong" with it, but there *does* seem to be a lot of overlap with areas that have already been researched (and published) quite a lot. In other words, it seems you're setting yourself up to reinvent quite a few wheels. But, then again, maybe not. Maybe you end up with a new and efficient way of doing all of this, but my recommendation would be to spend a good week doing nothing but reading all the papers you can get your hands on. And also take stock of the info/tools that you already have at your disposal, for example:

1. The i3d format may not be the best thing since sliced bread, but it *is* a pretty nice, efficient (though disk-bound), robust way to store volumetric information. It also provides an API for doing common volumetric tasks like integrating some attribute along a path (via VEX).

2. Slicing up the bounding box of an object, then interpreting the intersection of each slice with the object's surface as the "envelope" of a volume, is really no different (in terms of sampling the volume) than what i3d already provides. So I would suggest that, at least as far as a method to store and access the data goes, you might want to investigate i3d as a means to do it. And while you're weighing the potential benefits of each method, remember that any slicing mechanism will be, by definition, view dependent, whereas a voxel grid (i3d) isn't (by construction) -- this detail tends to make a big difference.

3. I wouldn't worry about hardware-assisted anything at this point (noise or otherwise). It would, IMHO, be premature since you're still ironing out the overall approach. I think the first step would be to think about how the data needs to flow through all the processing steps, and then choose a storage method that can facilitate that flow as much as possible -- Alan gave you a *lot* of excellent pointers in his post(!).

What I see so far (the slices and such) has mostly to do with how to represent/store the volume data, but not at all to do with a fluid simulation. I thought simulating fluids was the original intent (?)

If it is, then I'd suggest modeling your representation/storage based on what you perceive will be the needs of the simulator. If it isn't, and all you're looking for is a generic method of representing volume data, then my suggestion would be to use i3d (at least to start with).

Last but not least, I'd again recommend combing through all the available papers on voxels and volume rendering. I think you'll find a lot of helpful material out there, and it might save you from reinventing at least some of the steps -- time that you can spend refining or repurposing those methods instead.

This is all "in my limited opinion" of course.

Cheers!


HEY!

I just finished reading this paper and, while writing my previous post, had a crazy idea:

Has anyone out there considered using 3D wavelets for representing volume data??

Besides the potential storage savings it *might* provide, it's also conceivable that computing some complex integrals *might* be reduced to wavelet products... hmmmm?

Is that just crazy talk?.. Alan?, Crunch?, Andrew?...

[edit]

DOH!

Wolfood just pointed out that Alan already mentioned wavelets as a form of compression in his post. But what about that paper's main contribution, which is the ability to represent the integral of two or more functions via wavelet products?

[/edit]


  • 3 months later...
To get a better taste for voxels and how they can be used to model fluids, I highly recommend reading Jos Stam's paper "A Simple Fluid Solver based on the FFT". This is a great, simple introduction to a bare-minimum-working-solver, and it comes complete with an implementation in C which you can download from here.

Good luck!


Thanks Mario,

btw - the code is slightly more than 3 pages, and the theory is laid out fairly straightforwardly - so implementing it in the Houdini environment should not be too hard for anyone willing to try. If somebody is interested, here's a compiled Windows version of the code.

Don't forget to copy glut32.dll to windows/system.

fluids.rar


If somebody is interested, here's a compiled Windows version of the code.


Yup. That's the code I was talking about -- and I believe it's what Jason has been suggesting as a possibly useful thing to have even as a simple 2D (COPs) fluids tool, so...

Go for it! :)

