
eetu's lab


eetu


The light has been on at eetu's lab this weekend; I started figuring out the volumetric spline stuff discussed here earlier.

I'm thinking it's "just" a volumetric tube, but with a deformed coordinate system following along, so that noise/texture will flow along nicely. I built a prototype of that entirely with SOPs, and it works 'ok', although there are some clear accuracy issues. This is the most 'explicit' way of doing it, as I'm creating all the voxel data inside Houdini. The next step would be to do at least the noise, or rather the whole coordinate step, in the shader; the other end of the spectrum would be to do it all inside mantra. Doing it all inside SOPs isn't totally stupid though: that way you could 'rasterize' many curves into one volume, and there might of course be other uses for the voxel data.

For this I built a cylindrical coordinate system around the spline, which does let you do some nice-looking things, but for many uses I think a deformed cartesian coordinate system would be more practical. An additional problem with the cylindrical coordinate system is that the angular coordinate needs to loop over the 0..1 range, which typical noise functions do not do.

Here I'm building actual volumes for the coordinate system: a length-along-spline volume, a distance-from-spline volume and an angle-about-spline volume. (The errors in w are just at the ends.)

volspline_uvw.jpg

After that, I apply noise to the density volume using the above volumes' values as the coordinates. For the angular coordinate I used a twisting cosine with two positive lobes, for looping purposes.
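To make that concrete, here's a minimal Volume Wrangle style sketch of the step above. The actual setup is all VOPSOPs; the volume names "u"/"v"/"w", the second-input wiring and all the frequencies here are assumptions:

// Volume Wrangle on the density volume; coordinate volumes come from input 2.
float u = volumesample(1, "u", @P);   // length along the spline
float v = volumesample(1, "v", @P);   // distance from the spline
float w = volumesample(1, "w", @P);   // angle about the spline, 0..1

// The angular coordinate loops over 0..1, so use a twisting cosine with two
// positive lobes in that direction instead of a regular noise.
float ang = abs(cos(w * 6.2831853 + u * 10.0));   // 2*pi, plus a twist from u
float n   = noise(set(u * 4.0, v * 8.0, 0.0));    // noise in spline space

@density *= n * ang;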

volspline.jpg

volspline.mov

I wonder if I'll take the next step with VEX or the HDK. Or maybe Python, as I've been looking for something to test out numpy with..


I ran into the sneak peek at the next version of HDR Light Studio, and thought that the idea of painting on the mesh and projecting that onto the envmap was a neat one.

I just had to try that in Houdini, here's a live setup that does it with some pointcloud VOPSOPs and Ray'ing.

First I calculate the reflection vectors for the object to be painted on (a torus in this hip), in relation to cam1.

After that I transfer the paint to a sphere representing the envmap, by comparing the current point normal to the reflection vectors in the torus, brought in as a pointcloud. The pointcloud is filtered with respect to how well the directions match, and the point color is imported from the filtered points of the pointcloud.
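Very roughly, those two steps look something like this in wrangle form. The actual setup is VOPSOPs; "campos", the direction-space point cloud on the second input, and the search radius/point count are assumptions:

// 1) On the torus: store the reflection direction per point. A copy of these
//    points with that direction written into P acts as the point cloud, so it
//    can be searched in "direction space".
vector I = normalize(@P - chv("campos"));      // from the camera to the point
v@refl = reflect(I, normalize(@N));

// 2) On the env sphere (second input = that direction-space point cloud):
//    find torus points whose reflection direction matches this sphere point's
//    direction and take a distance-weighted average of their paint color.
vector dir = normalize(@P);
int pc = pcopen(1, "P", dir, 0.2, 20);
@Cd = pcfilter(pc, "Cd");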

For visualization, a third mesh (a teapot here) does simple reflection in a VOPSOP; color is imported from the env sphere by Ray'ing in the reflection direction from the teapot points. For a more abstract visualization, a plane shows the envmap in uv-space.
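For that last step, a sketch of the same lookup as a Point Wrangle on the teapot, with the env sphere on the second input ("campos" is again an assumed camera-position parameter):

vector I = normalize(@P - chv("campos"));
vector R = reflect(I, normalize(@N));                   // reflection direction

vector hitpos, hituv;
int prim = intersect(1, @P, R * 1e6, hitpos, hituv);    // ray against the sphere
if (prim >= 0)
    @Cd = primuv(1, "Cd", prim, hituv);                 // pick up the painted color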

Everything is done with point attributes, and thus at point resolution, so it's not too precise.

Once again, probably not too useful as it is, but these are fun to do :)

post-2678-133941358843_thumb.jpg

6MB .mov

27MB .avi

envpaint_v003.hip


Amazing trick!!!

Thank you from the bottom of my heart for this! Very cool!

After checking out the "classic" Lagoa teaser video, I started wondering whether something like the crumbly stuff at the beginning could be done with variable (high) viscosity FLIP fluids.

Well, this falls short of the Lagoa stuff, but it's an interesting look anyway, I think.

ee_mud1.jpg

(click for anim)

It's quite simple really, I just init the per-particle viscosities with a VOPSOP noise inside a SOP Solver, behind an Intermittent Solve DOP set to run "Only Once".
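In wrangle terms the init is roughly this; the frequency and viscosity range here are guesses, not the values from the hip:

// Point Wrangle over the FLIP particles, run once inside the SOP Solver.
float n = noise(@P * 2.0);                        // roughly 0..1
f@viscosity = fit(n, 0.0, 1.0, 100.0, 20000.0);   // patchy low/high viscosity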

Hip attached for inquiring minds.


Here's a take on a cumulative stress map for deforming geometry. It records the maximum curvature a mesh has had over the life of a sim, in relation to its rest curvature.

stressmap.jpg

stressmap2.mov

One weird thing: the Curvature SOP gives a very different result in SOPs proper than it gives inside a SOP Solver.

(That's why my rest curvature is initialized in a run-once SOP Solver..)
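The accumulation itself is just a running maximum; a Point Wrangle sketch of it ("curvature" is assumed to come from the Curvature SOP upstream, "rest_curv" and "stress" are made-up attribute names):

// Inside the SOP Solver, every frame: keep the largest curvature change seen.
float delta = abs(@curvature - @rest_curv);
@stress = max(@stress, delta);
@Cd = set(@stress, 0.0, 0.0);    // crude visualization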

ee_curvature.hipnc



Hey, cool stuff! May I ask how much time the viscous mud simulation took?


Back to volumes!

Raycasting against a distance field can be pretty fast. Think of it like adaptive-step raymarching, but you get a good estimate of the next step size by just reading the SDF value at the current point.

Here's an illustrative image stolen from Inigo Quilez:

sdf_trace_iq.png

In essence you just keep stepping forward until the returned sdf value is lower than a user-specified tolerance, or until a set maximum step count is reached.
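As a Point Wrangle sketch, with the SDF volume on the second input ("raydir", the tolerance and the step limit here are assumptions):

// Sphere tracing: each point is a ray origin, the ray marches along "raydir".
vector dir = normalize(chv("raydir"));
vector p = @P;
float tol = 0.001;
int maxsteps = 100;

for (int i = 0; i < maxsteps; i++) {
    float d = volumesample(1, 0, p);   // distance to the surface from here
    if (d < tol) break;                // close enough: call it a hit
    p += dir * d;                      // it is always safe to step this far
}
@P = p;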

For a simple case of a 1000x1000 grid and a torus, this takes about 1 second on my machine, while raycasting against the torus polygons takes 4 seconds.

One thing where this could have a qualitative advantage over raycasting is that here you can have a per-point offset for the surface.

I've never actually used this anywhere, but I'm sure there is a use case out there somewhere ;)

sdf_trace_v003.hip


Neat idea, thanks eetu. It would be especially cool to adapt it to VEX at render time (it would be awesome to have a VEX function which generates an SDF of an object at render time, similarly to, say, sample_geometry and point clouds). That would open doors for many interesting solutions like this one (think fast SSS, for example).

Although I suspect these particular measurements look good for the SDF approach mainly because ray tracing is not terribly fast in SOPs. Employing something like the vray library or Embree might revitalize ray casting.

Anyway, this is a super idea!


Inspired by the "genus 6 Kandinsky Surface voronoi volume" post on sidefx forums, I thought those should be doable in Houdini alone.

Nothing advanced, just a couple of SOPs, really. I did spend a little extra effort in getting bendy lines, though.

A bit of a worst-case scenario for raytracing, this :)

katiska1.jpg

katiska4.jpg

katiska5.jpg

PS. If you download the hip, it spends a while subdividing on load..

kandinsky_v007.hip


eetu!

Can you pretty please post a scene file for this?

The regular voronoi-cells are starting to look boring. How to make them more interesting?

One of the things you can do is scale the object, do the voronoi and scale back - that way you can get things like splinters.

splinter1.png

But you can take things further, all you need is a deformation that is reversible and has a defined value everywhere in space.

Now that's not always easy. If you just bend something and then apply the opposite bend operation on the result, it will look very different than what you started with.

One way to get deformations like that to be reversible, is to use lattice deformation.

Deform your lattice, then use the Lattice SOP on your geo, shatter, then apply the Lattice SOP with the source and target lattices interchanged.

There are some inaccuracies with this approach, and the target doesn't match the original geometry exactly.

tubevoro.jpg

For more interesting patterning, let's dive into freeform deformations with a VOP SOP.

One's first idea might be to apply noise to a geo, shatter, and then apply the same noise with a negated amplitude. This does not work: the same noise function will be computed at the displaced positions, resulting in different values, and the reverse operation results in a mesh that doesn't match the original. So, after doing the deformation and the shattering, we need to get the noise value in the coordinate system we had before deformation, and - ta-da - that's what rest coordinates are for. This almost works, but the resulting mesh is still not 100% what we started with.

This is a subtle problem, and it happens because of the new points created in the shatter process, even though they have nicely interpolated rest coordinates (which is very nice). If you have a new point created halfway between existing points, its rest value is in between, but the noise value at that midpoint is not necessarily the midpoint of the noise values at the original points..

So, what finally worked for me was simply storing the actual deformation value on each point; then the reverse operation for each original and new point brings them right back to where the original surface was.
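A sketch of that idea as two Point Wrangles around the shatter (the attribute name "deform" and the noise settings are made up):

// Before the shatter: deform, and remember the exact per-point offset.
float nx = noise(@P) - 0.5;
float ny = noise(@P + {13.7, 5.1, 9.2}) - 0.5;
float nz = noise(@P + {7.3, 11.4, 3.8}) - 0.5;
vector offset = set(nx, ny, nz) * 0.3;
v@deform = offset;
@P += offset;

// After the shatter (and an AttributeTransfer so new interior points get
// interpolated "deform" values), simply undo the stored offset:
//     @P -= v@deform;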

What about the inside geometry, an insightful reader might ask, and rightfully so. By default the Voronoi Shatter SOP seems to propagate the rest/deform values from the outside surface to each piece's newly created inside surface. This means that coincident points, and thus polygons, get different values depending on their piece membership, and this results in an intersecting jumble inside the mesh. AttributeTransfer to the rescue - with that you can get interpolated values inside the mesh. It might not be a perfect interpolation, but that doesn't matter much, as long as coincident points get the same values.

In the end everything pretty much works, but there are still some artifacts with some non-convex pieces that need to be ironed out. I guess this all should be done inside the shatter tool for maximum quality.

noisevoro.jpg

noisevoro_shat.jpg

And then, a tool that is somehow directable would be more useful..


Ohh it's been a year since the last update, gotta start popping the accumulated r&d stack! :)
 
Tonight I ran into vinyvince's thread on Hansmeyer's systems, and remembered I wanted to do those too.
 
Here's the simplest "System 1" - it would've been easier to do it in code, but this time in SOPs:
hansmeyer_s1_ogl2.jpg

hansmeyer_s1_rend.jpg

 

The system is simple and symmetric, so the animation is really quite boring.

 

If the geometry is converted to NURBS curves it looks a bit more interesting as a still; here's just the 8th generation:

hansmeyer_spline2.jpg

 

And this is the above, animated and accumulated.

hansmeyer_accum2.jpg

 

With more interesting movement and some color action these are gonna look a lot better, but this is a nice start.

hansmeyer_subdivision_09.hip


This one is over a year old and half forgotten, but it's such a perverse setup that it needs to be shared.

 

It's an attempt at the "Water inside of air field" technique, but with the added twist that it actually has two separate FLIP objects that affect each other.

I tried to follow the Bridson paper, in spirit at least, so there are some added operators inside the FLIP solver that try to approximate what the paper is doing.

 

The meat of the technique was splitting the FLIP solver in two, so that the volume representations of both fluids for the current frame are available before either fluid actually gets solved. Before the solve, the velocity fields from the "other" fluid are copied over to "this" fluid, with proper masking and boundary conditions.
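The masked copy is roughly this, written as a Volume Wrangle style sketch over this fluid's vel field with the other fluid's fields as the second input. The field names and the hard mask are assumptions; the actual setup does this with operators inside the DOP network:

// If this voxel is inside the other fluid, take its velocity as the boundary
// condition for this fluid's solve.
float other_sdf = volumesample(1, "surface", @P);
if (other_sdf < 0.0)
    v@vel = volumesamplev(1, "vel", @P);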

 

dualflip25a.jpg

[84meg .mov]

 

In the end it sadly doesn't work too well, so it fell by the wayside - but maybe some of you flipsters can get a kick out of it! ;)

 

 

flip_twofluids_v027.hip


A prototype for a job that never was: a scraggly tree gets formed by growing wires:

wiretree.jpg

 

The bit that creates root-to-ends curves from an L-system might be useful. It starts from all the end points and walks down the neighbors, always taking the neighbor with the lowest point number. Seems to work :)
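A Detail Wrangle sketch of that walk, assuming the end points are marked with an i@end attribute (a made-up name, not necessarily what the hip uses):

int npts = npoints(0);
for (int pt = 0; pt < npts; pt++) {
    if (point(0, "end", pt) == 0) continue;    // only start from end points

    int prim = addprim(0, "polyline");
    int cur = pt;
    while (1) {
        addvertex(0, prim, cur);
        int nbs[] = neighbours(0, cur);
        int next = cur;
        foreach (int nb; nbs)
            if (nb < next) next = nb;          // neighbour with the lowest number
        if (next == cur) break;                // no lower neighbour: at the root
        cur = next;
    }
}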

 

 

wiretree_v005.hip

I'll store this here as well - a try at rendertime booleans. It's nice, as any kind of geometry works: here we have a primitive sphere cutting a NURBS torus and polygon boxes.

 

This is a bit of an old-school approach: I count the intersections that happen along the viewing ray inside the cutting sphere. If there is an odd number of them, we are inside the object and we shade the backside of the cutting sphere; if there is an even number, we shoot the ray from the far side of the sphere and it continues as if nothing happened. The surfaces need to be closed (manifold, I guess) for this to work.
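Not the actual shader, but the even/odd test itself can be illustrated in SOP VEX ("raydir" is an assumed direction parameter, input 1 is the cut geometry):

// Count how many times a ray starting at this point crosses the closed cut
// geometry; an odd count means the ray origin is inside it.
vector dir = normalize(chv("raydir"));
vector hitP[], hitUV[];
int hitPrim[];
int n = intersect_all(1, @P, dir * 1e6, hitP, hitPrim, hitUV, 1e-4, 1e-4);
i@inside = n % 2;    // 1 = odd = inside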

 

There are some precision issues if you look closely; you will need to play with the raytrace biases depending on your scene size. Also, you would need to incorporate the shader for your cut surfaces into this one. I wouldn't vouch for this approach to be production-proof, but it just might cut it.

 

post-2678-0-32602500-1399062113_thumb.jp

ee_render_boolean.mov

ee_render_booleans_v003.hip

