eetu Posted June 3, 2012

The light has been on at eetu's lab this weekend: I started figuring out the volumetric spline stuff discussed here earlier. I'm thinking of it as "just" a volumetric tube, but with a deformed coordinate system following along, so that noise/texture will flow along it nicely. I built a prototype of that with all SOPs, and it works OK, although there are some clear accuracy issues.

This is the most "explicit" way of doing it, as I'm creating all the voxel data inside Houdini. The next step would be to do at least the noise, or rather the whole coordinate step, in the shader; the other end of the spectrum would be to do it all inside Mantra. Doing it all inside SOPs isn't totally stupid though: for example, that way you could "rasterize" many curves into one volume, and there might of course be other uses for the voxel data.

For this I built a cylindrical coordinate system around the spline, which does let you do some nice-looking things, but for many things I think a deformed Cartesian coordinate system would be more practical. An additional problem with the cylindrical coordinate system is that the angular coordinate needs to loop over the 0..1 range, which typical noises do not do.

Here I'm building actual volumes for the coordinate system: a length-along-spline coordinate volume, a distance-from-spline volume and an angle-about-spline volume (the errors in w are just at the ends). After that, I apply noise to the density volume using the above volumes' values as the coordinates. For the angular coordinate I used a twisting cosine with two positive lobes, for looping purposes.

volspline.mov

I wonder if I'll take the next step with VEX or the HDK. Or maybe Python, as I've been looking for something to test out numpy with..
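For illustration only, here's a minimal numpy sketch of the coordinate construction: the three coordinates for one sample point against a polyline spline - length along the spline u, distance from the spline r, and angle about the spline theta. This is not the SOP network from the setup above; the frame is a naive up-vector construction and every name here is made up.

```python
import numpy as np

def spline_coords(p, spline, up=np.array([0.0, 1.0, 0.0])):
    """Cylindrical coordinates of point p relative to a polyline spline.

    Returns (u, r, theta): normalized length along the spline, distance
    from the spline, and angle about the spline mapped to 0..1.
    """
    seg_vec = spline[1:] - spline[:-1]
    seg_len = np.linalg.norm(seg_vec, axis=1)
    cum_len = np.concatenate(([0.0], np.cumsum(seg_len)))

    # Closest point on each segment; keep the overall closest one.
    best = (np.inf, 0.0, None, None)
    for i, (a, d, l) in enumerate(zip(spline[:-1], seg_vec, seg_len)):
        t = np.clip(np.dot(p - a, d) / (l * l), 0.0, 1.0)
        q = a + t * d                              # closest point on this segment
        dist = np.linalg.norm(p - q)
        if dist < best[0]:
            best = (dist, cum_len[i] + t * l, q, d / l)

    r, s, q, tangent = best
    u = s / cum_len[-1]                            # 0..1 along the spline

    # Naive (non parallel-transported) frame for the angular coordinate.
    side = np.cross(up, tangent)
    side /= np.linalg.norm(side)
    normal = np.cross(tangent, side)
    v = p - q
    theta = (np.arctan2(np.dot(v, normal), np.dot(v, side)) / (2 * np.pi)) % 1.0
    return u, r, theta
```

Density noise would then be sampled at (u, r, theta) instead of at world position, so it flows along the curve; theta still needs a looping function (hence the twisting cosine above), since ordinary noise isn't periodic in the angle.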
magneto Posted June 3, 2012

Looks amazing eetu
eetu Posted June 11, 2012

I ran into the sneak peek at the next version of HDR Light Studio, and thought that the idea of painting on the mesh and projecting that onto the envmap was a neat one. I just had to try that in Houdini; here's a live setup that does it with some pointcloud VOPSOPs and Ray'ing.

First I calculate the reflection vectors for the object to be painted on (a torus in this hip), with relation to cam1. After that I transfer the paint to a sphere representing the envmap, by comparing the current point normal to the reflection vectors of the torus, brought in as a pointcloud. The pointcloud is filtered with respect to how well the directions match, and the point color is imported from the filtered points of the pointcloud.

For visualization, a third mesh (a teapot here) does simple reflection in a VOPSOP: color is imported from the env sphere by Ray'ing in the reflection direction from the teapot points. For a more abstract visualization, a plane shows the envmap in UV space.

Everything is done with point attributes, and thus at point resolution, so it's not too precise. Once again, probably not too useful as it is, but these are fun to do

6MB .mov 27MB .avi envpaint_v003.hip
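If it helps to see the two key steps outside of VOPs, here's a rough numpy sketch of the same idea: per-point reflection vectors on the painted object, then gathering color onto the environment sphere from points whose reflection direction roughly matches the sphere point's direction. It's not the actual network, and the falloff and names are made up.

```python
import numpy as np

def reflection_vectors(P, N, cam_pos):
    """Reflect the per-point view direction about the point normal."""
    D = P - cam_pos
    D /= np.linalg.norm(D, axis=1, keepdims=True)
    return D - 2.0 * np.sum(D * N, axis=1, keepdims=True) * N

def transfer_to_env(env_dirs, refl, paint, falloff=0.1):
    """For each env-sphere direction, average the paint of source points
    whose reflection direction roughly matches it (a crude stand-in for
    the filtered pointcloud lookup)."""
    out = np.zeros((len(env_dirs), 3))
    for i, n in enumerate(env_dirs):
        w = np.maximum(refl @ n - (1.0 - falloff), 0.0)   # direction-match weight
        if w.sum() > 0.0:
            out[i] = (w[:, None] * paint).sum(axis=0) / w.sum()
    return out
```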
buran13 Posted June 22, 2012

Amazing trick!!! Thank you from the bottom of my heart for this! Very cool!

Quoting eetu:

"After checking out the 'classic' Lagoa teaser video, I started thinking if something like the crumbly stuff in the beginning could be done with variable (high) viscosity FLIP fluids. Well, this falls short of the Lagoa stuff, but it's an interesting look anyway, I think. (click for anim) It's quite simple really, I just init the per-particle viscosities with a VOPSOP noise inside a SOP Solver, behind an Intermittent Solve DOP set to run 'Only Once'. Hip attached for inquiring minds."
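Just to sketch what that quoted setup boils down to, outside of Houdini: seed a per-particle viscosity attribute from a noise function once, then let the solver run with it. The noise below is a throwaway stand-in and every name is made up - this is not the attached hip.

```python
import numpy as np

def cheap_noise(P, freq=2.0, seed=0.0):
    """A throwaway spatial-noise stand-in returning values in 0..1."""
    s = np.sin(P * freq + seed).sum(axis=1)
    return 0.5 + 0.5 * np.sin(s * 12.9898)

def init_viscosity(P, lo=1.0, hi=5000.0):
    """Run once at sim start: map noise over particle positions to a
    per-particle viscosity in [lo, hi]."""
    return lo + cheap_noise(P) * (hi - lo)
```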
eetu Posted June 27, 2012

Here's a take on a cumulative stress map for deforming geometry. It records the maximum curvature a mesh has had over the life of a sim, in relation to its rest curvature.

stressmap2.mov

One weird thing: the Curvature SOP gives a very different result in SOPs proper than it gives inside a SOP Solver. (That's why my rest curvature is initialized in a run-once SOP Solver..)

ee_curvature.hipnc
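The accumulation itself is just a per-point running maximum; here's a small numpy sketch of that step (the curvature measurement is whatever the Curvature SOP gives you - not reproduced here, and the names are made up):

```python
import numpy as np

def update_stress(stress, curvature, rest_curvature):
    """Per point, keep the largest curvature deviation seen so far."""
    return np.maximum(stress, np.abs(curvature - rest_curvature))

# per-frame solver loop (sketch):
# stress = update_stress(stress, current_curvature, rest_curvature)
```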
TomFX Posted August 13, 2012

Very generous with your knowledge, thank you very much mate.. keep it up, I'm fascinated by the work
FromHell Posted August 21, 2012

Quoting eetu:

"After checking out the 'classic' Lagoa teaser video, I started thinking if something like the crumbly stuff in the beginning could be done with variable (high) viscosity FLIP fluids. Well, this falls short of the Lagoa stuff, but it's an interesting look anyway, I think. (click for anim) It's quite simple really, I just init the per-particle viscosities with a VOPSOP noise inside a SOP Solver, behind an Intermittent Solve DOP set to run 'Only Once'. Hip attached for inquiring minds."

Hey, cool stuff! May I ask you, how much time did the simulation take?
poco Posted September 10, 2012

Cool... I wish you'd share the hip! Thanks a lot..

Quoting eetu:

"Playing with SDFs. A font object is converted to an SDF, and some particles are spawned inside. After that, the inverse of the SDF gradient at the point location is copied to the velocity of the point, every frame. animation"
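The quoted setup comes down to "velocity = the SDF gradient, inverted, sampled per particle every frame" (reading "inverse" as the negated gradient). A small self-contained sketch of that, using a central-difference gradient on an arbitrary SDF function; nothing here comes from the original scene:

```python
import numpy as np

def sdf_gradient(sdf, p, eps=1e-3):
    """Central-difference gradient of a scalar distance field sdf(p)."""
    g = np.zeros(3)
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        g[i] = (sdf(p + d) - sdf(p - d)) / (2.0 * eps)
    return g

def advance_particles(P, sdf, dt):
    """Each frame: copy the negated SDF gradient to the particle velocity."""
    V = np.array([-sdf_gradient(sdf, p) for p in P])
    return P + V * dt, V
```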
eetu Posted December 2, 2012

Back to volumes! Raycasting against a distance field can be pretty fast. Think of it like adaptive-step raymarching, but you get a good estimate of the next step size by just reading the SDF value at the current point. Here's an illustrative image stolen from Inigo Quilez:

In essence you just keep stepping forward until the returned SDF value is lower than a user-specified tolerance, or until a set maximum step count is reached. For a simple case of a 1000x1000 grid and a torus, this takes about 1 second on my machine, while raycasting against the torus polygons takes 4 seconds.

One thing where this could have a qualitative advantage over raycasting is that here you can have a per-point offset for the surface. I've never actually used this anywhere, but I'm sure there is a use case out there somewhere

sdf_trace_v003.hip
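For reference, here's what that loop looks like in plain Python - a generic sphere-tracing sketch, not the contents of the hip:

```python
import numpy as np

def sphere_trace(sdf, origin, direction, tol=1e-4, max_steps=128, max_dist=1e3):
    """Step along the ray by the SDF value at the current point.

    Returns the hit distance, or None if nothing was hit.
    """
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)
        if d < tol:
            return t
        t += d
        if t > max_dist:
            break
    return None

def torus_sdf(p, R=1.0, r=0.25):
    """Signed distance to a torus with major radius R and minor radius r."""
    q = np.array([np.hypot(p[0], p[2]) - R, p[1]])
    return np.linalg.norm(q) - r
```

The per-point surface offset mentioned above would just be subtracted from the value returned by sdf() before the tolerance comparison.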
symek Posted December 3, 2012

Neat idea, thanks eetu. Especially it would be cool to adapt it to VEX at render time (it would be awesome to have a VEX function which generates an SDF of an object at render time, similarly to, say, sample_geometry and point clouds). That would open doors for many interesting solutions like this one (think fast SSS, for example). Although I suspect that these particular measurements look good for the SDF because ray tracing is not terribly fast in SOPs; employing something like the vray library or Embree might revitalize ray casting. In any case, this is a super idea!
eetu Posted January 12, 2013

Inspired by the "genus 6 Kandinsky Surface voronoi volume" post on the SideFX forums, I thought those should be doable in Houdini alone. Nothing advanced, just a couple of SOPs, really. I did spend a little extra effort on getting bendy lines, though. A bit of a worst-case scenario for raytracing, this

PS. If you download the hip, it spends a while subdividing on load..

kandinsky_v007.hip
eetu Posted January 26, 2013

A procedural aperture mechanism (leaf shutter).

iris_v004.hip
R_cyph Posted May 30, 2013

eetu! Can you pretty please post a scene file for this?

Quoting eetu:

"The regular voronoi cells are starting to look boring. How to make them more interesting? One of the things you can do is scale the object, do the voronoi and scale back - that way you can get things like splinters. But you can take things further; all you need is a deformation that is reversible and has a defined value everywhere in space. Now that's not always easy. If you just bend something and then apply the opposite bend operation on the result, it will look very different from what you started with.

One way to get deformations like that to be reversible is to use lattice deformation. Deform your lattice, then use the Lattice SOP on your geo, shatter, then apply the Lattice SOP with the source and target lattices interchanged. There are some inaccuracies with this approach, and the target doesn't match the original geometry exactly.

For more interesting patterning, let's dive into freeform deformations with a VOP SOP. One's first idea might be to apply noise to the geo, shatter, and then apply the same noise but with a negated amplitude. This does not work, as the same noise function will be computed at the displaced positions, resulting in different values, and the reverse operation results in a mesh that doesn't match the original.

So, after doing the deformation and the shattering, we need to get the noise value in the coordinate system we had before deformation, and - ta-da - that's what rest coordinates are for. This almost works, but not quite: the resulting mesh is still not 100% what we started with. This is a subtle problem, and it happens because of the new points created in the shatter process, even though they have nicely interpolated rest coordinates (which is very nice). If you have a new point created halfway between existing points, its rest value is in between, but the noise value at that midpoint is not necessarily the midpoint of the noise values at the original points..

So, what finally worked for me was simply storing the actual deformation value on each point; then the reverse operation for each original and new point brings them right back to where the original surface was.

What about the inside geometry, an insightful reader might ask, and rightfully so. By default the Voronoi Shatter SOP seems to propagate the rest/deform values from the outside surface to that piece's newly created inside surface. This means that coincident points, and thus polygons, get different values depending on their piece membership, and this results in an intersecting jumble inside the mesh. AttributeTransfer to the rescue - with that you can get interpolated values inside the mesh. It might not be a perfect interpolation, but it doesn't matter that much, as long as coincident points get the same values.

In the end everything pretty much works, but there are still some artifacts with some non-convex pieces that need to be ironed out. I guess this all should be done inside the shatter tool for maximum quality. And then, a somewhat directable tool would be more useful.."
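Here's the "store the actual deformation value per point" idea from the quote reduced to a few lines, just to make the reversal explicit. The shatter step is a placeholder and every name is made up - this isn't the network being asked for:

```python
import numpy as np

def deform(P, noise):
    """Deform points and record the exact offset applied to each point."""
    offset = np.array([noise(p) for p in P])
    return P + offset, offset

def undeform(P_deformed, offset):
    """Reverse the deformation exactly, using the stored offsets.

    Points created by the shatter get their offset by interpolation or
    attribute transfer; the important thing is that coincident points end
    up with identical offsets, so the pieces still fit together.
    """
    return P_deformed - offset

# workflow sketch (voronoi_shatter is a stand-in for the Voronoi Shatter SOP):
# P_def, offset = deform(P, noise)
# pieces = voronoi_shatter(P_def, offset)
# for P_piece, offset_piece in pieces:
#     P_piece[:] = undeform(P_piece, offset_piece)
```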
eetu Posted April 10, 2014

Ohh, it's been a year since the last update; gotta start popping the accumulated R&D stack! Tonight I ran into vinyvince's thread on Hansmeyer's systems, and remembered I wanted to do those too. Here's the simplest "System 1" - it would've been easier to do in code, but this time in SOPs.

The system is simple and symmetric, so the animation is really quite boring. If the geometry is converted to NURBS curves it looks a bit more interesting as a still; here's only the 8th generation. And this is the above, animated and accumulated. With more interesting movement and some color action these are gonna look a lot better, but this is a nice start.

hansmeyer_subdivision_09.hip
eetu Posted April 10, 2014

This one is over a year old and half forgotten, but it's such a perverse setup that it needs to be shared. It's an attempt at the "water inside of air field" technique, but with the added twist that it actually has two separate FLIP objects that affect each other. I tried to follow the Bridson paper - in spirit at least - so there are some added operators inside the FLIP solver that try to approximate what the paper is doing.

The meat of the technique was splitting the FLIP solve in two, so that the volume representations of both fluids for the current frame are available before either fluid actually gets solved. Before the solve, the velocity fields from the "other" fluid are copied over to "this" fluid, with proper masking and boundary conditions.

[84meg .mov]

In the end it sadly doesn't actually work too well, so it got left by the wayside - but maybe some of you flipsters can get a kick out of it!

flip_twofluids_v027.hip
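Stripped of all the solver plumbing, the velocity hand-off is essentially a masked copy between the two velocity fields. A very rough numpy sketch of just that idea - no boundary conditions, no staggered grids, and not the setup in the hip:

```python
import numpy as np

def exchange_velocities(vel_this, vel_other, mask_other, blend=1.0):
    """Where the other fluid's volume mask says it is present, pull its
    velocity into this fluid's field before this fluid is solved."""
    m = np.clip(mask_other, 0.0, 1.0)[..., None] * blend
    return vel_this * (1.0 - m) + vel_other * m
```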
eetu Posted April 10, 2014

These pretty balls light up at impact. Not with a light, but they have emissive random geometry inside.

impactlights_v005.hip
eetu Posted April 10, 2014

This was a small test of having stickiness on a FLIP collider.

flip_coll_stick_dev_v004.hip
eetu Posted April 10, 2014

A prototype for a job that never was: a scraggly tree gets formed by growing wires.

The bit that creates root-to-ends curves from an L-system might be useful. It starts from all the end points and walks down through the neighbors, always taking the neighbor with the lowest point number. Seems to work

wiretree_v005.hip
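That walk is simple enough to show as a few lines of Python. The data layout here is hypothetical: a dict of point-number adjacency, with point numbers growing away from the root, as an L-system gives you.

```python
def root_to_end_curves(end_points, neighbors):
    """From each end point, walk toward the root by always stepping to the
    lowest-numbered neighbor.

    neighbors: dict mapping point number -> list of adjacent point numbers.
    Returns one list of point numbers per end point, ordered end -> root.
    """
    curves = []
    for pt in end_points:
        curve = [pt]
        while True:
            lowest = min(neighbors[curve[-1]])
            if lowest >= curve[-1]:       # no lower-numbered neighbor: at the root
                break
            curve.append(lowest)
        curves.append(curve)
    return curves
```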
tfreitag Posted April 11, 2014

hey eetu, this is one of the most inspiring threads here at odforce... you share so much useful knowledge. THANKS
eetu Posted May 2, 2014

I'll store this here as well - a try at render-time booleans. It's nice, as any kind of geometry works: here we have a primitive sphere cutting a NURBS torus and polygon boxes.

This is a bit of an old-school approach: I count the intersections that happen along the viewing ray inside the cutting sphere. If there is an odd number of them, we are inside the cut object and we shade the backside of the cutting sphere; if there is an even number of them, we shoot the ray from the far side of the sphere and it continues as if nothing happened. The surfaces need to be closed (manifold, I guess) for this to work.

There are some precision issues if you look closely; you will need to play with the raytrace biases depending on your scene size. Also, you would need to incorporate the shader for your cut surfaces into this one. I wouldn't vouch for this approach being production proof, but it just might cut it.

ee_render_boolean.mov ee_render_booleans_v003.hip
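The parity logic itself is compact; here's a schematic Python version of it. The real thing lives in the shader - this sketch assumes the viewing ray enters the cutting sphere from outside the cut geometry, and the hit distances against that geometry are passed in as data rather than traced here.

```python
import numpy as np

def sphere_entry_exit(center, radius, o, d):
    """Entry/exit distances of a ray (origin o, unit direction d) against
    the cutting sphere, or None if the ray misses it."""
    oc = o - center
    b = np.dot(oc, d)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None
    s = np.sqrt(disc)
    return -b - s, -b + s

def boolean_decision(center, radius, o, d, geo_hits):
    """Decide what one viewing ray should do.

    geo_hits: sorted hit distances of this ray against the closed cut geometry.
    Returns ('pass', None) if the sphere is missed; ('cut_surface', t_exit) if an
    odd number of geometry hits fall inside the sphere (shade the sphere's
    backside as the cut surface); or ('continue_from', t_exit) to restart the
    ray behind the sphere as if nothing happened.
    """
    span = sphere_entry_exit(center, radius, o, d)
    if span is None:
        return 'pass', None
    t_in, t_out = span
    inside = sum(1 for t in geo_hits if t_in < t < t_out)
    if inside % 2 == 1:
        return 'cut_surface', t_out
    return 'continue_from', t_out
```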