ben Posted July 31, 2014

Very inspiring, as usual. Thank you so much for sharing it all, and thanks for the book reference.
eetu Posted August 4, 2014 Author

My darling from the Lightwave days: the Grit accessibility shader. I did upload a version of it in my early days in Houdini-land, although the ur-original is from '98.. This one isn't that much different; I added a few more features and made a proper asset out of it. If it seems to work for you guys, I'll put it on Orbolt.

From the node help:

Grit is an accessibility shader, that is, it tries to figure out how easily accessible a location is. Conceptually it can be thought of as how big a sphere could fit here and still touch this spot. Whereas ambient occlusion tries to figure out how much of the above-horizon hemisphere is occluded by shooting a lot of rays and seeing how many of them hit or miss, Grit only regards the nearest distance any of the rays hit. Also, Grit only shoots a single fan of rays around the normal instead of trying to fill the whole hemisphere with rays; because of this it should always be significantly faster. When used with the negated normal, Grit finds exposed edges instead of hidden crevices.

Some examples of using grit as a mask: weepul, lw iiro, houdini

Attached is the HDA and a very simple hip.

ee_grit_test.hip grit.otl
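For anyone curious how the nearest-hit idea from the node help differs from ambient occlusion in practice, here's a minimal pure-Python sketch of the principle: a single fan of rays tilted around the normal, keeping only the closest hit distance. The SDF scene, tilt angle, ray count and normalization are all made up for illustration and are not taken from the actual OTL:

```python
import math

def scene_sdf(p):
    # toy scene: ground plane y=0 plus a unit sphere resting on it
    x, y, z = p
    plane = y
    sphere = math.sqrt(x*x + (y - 1.0)**2 + z*z) - 1.0
    return min(plane, sphere)

def raymarch(origin, direction, max_dist=10.0):
    # sphere-trace along the ray; return hit distance, or max_dist on a miss
    t = 1e-3
    while t < max_dist:
        p = [origin[i] + direction[i]*t for i in range(3)]
        d = scene_sdf(p)
        if d < 1e-4:
            return t
        t += d
    return max_dist

def grit(p, n, tilt_deg=45.0, nrays=16, max_dist=10.0):
    # fan of rays tilted away from the normal; keep only the nearest hit
    # (n is assumed to be (0,1,0) here, to keep the sketch short)
    tilt = math.radians(tilt_deg)
    nearest = max_dist
    for k in range(nrays):
        a = 2.0 * math.pi * k / nrays
        d = (math.sin(tilt)*math.cos(a), math.cos(tilt), math.sin(tilt)*math.sin(a))
        nearest = min(nearest, raymarch(p, d, max_dist))
    return nearest / max_dist   # 0 = deep in a crevice, 1 = fully open
```

Evaluated on the ground plane, a point right next to the sphere gets a much lower value than a point out in the open, which is the crevice-mask behaviour the examples above show. Note it only needs one fan of rays, versus the hemisphere full of rays AO needs.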
eetu Posted August 13, 2014 Author

(consolidated to this thread as well, for completeness' sake)

Deep ID

If you save out an ID value for each of your samples in a deep EXR file, you should be able to extract perfect masks or remove all samples associated with an object/primitive. Sounds nice, no?

Here's a hip and Nuke file to play with, as well as an example EXR. Example images of removing one motionblurred object in post; in the Nuke file you can pick whichever you want. (If I remember correctly, it's been a while and I don't have Nuke on this machine)

example EXR (too big to attach)

deep_id_v003.hip deep_id_v003.nk.zip
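To make the bookkeeping concrete, here's a toy Python version of what the Nuke setup does with the deep samples — no OpenEXR code, just the logic: treat each deep pixel as a list of (depth, id, premultiplied colour, alpha) samples, get a per-object matte by flattening only that object's samples, and remove an object by dropping its samples before flattening. The sample layout is invented for this sketch:

```python
def flatten(samples):
    # front-to-back "over": sort by depth, accumulate premultiplied colour
    colour, alpha = 0.0, 0.0
    for depth, obj_id, c, a in sorted(samples):
        colour += (1.0 - alpha) * c
        alpha += (1.0 - alpha) * a
    return colour, alpha

def remove_id(samples, bad_id):
    # drop every sample tagged with bad_id; flatten the rest as usual
    return [s for s in samples if s[1] != bad_id]

def id_matte(samples, want_id):
    # a per-object matte is just the flattened alpha of that object's samples
    return flatten([s for s in samples if s[1] == want_id])[1]
```

Because the decision is made per sample rather than per pixel, the extracted matte stays correct even under motion blur and transparency — which is why the motionblurred object in the example images can be removed cleanly in post.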
Skybar Posted August 15, 2014

Cool! Small tip: instead of those Inline VOPs using renderstate(), you have the Render State VOP.
jumper Posted September 9, 2014

Hey Eetu,

Any chance of seeing a hip file for the quaddel/cancer stuff yet?

Thanks! Stephen
eetu Posted October 7, 2014 Author (edited)

This time something that might even be useful: a tonemapping COP. I've recently worked on game tonemapping, and thought why not implement some of the operators in Houdini. This time written in VEX.

The operators I implemented are from Reinhard, Insomniac Games and John Hable. Some of the parameters can be a bit opaque, so I implemented a simple preview of the curve overlaid on the image. A preview video.

I guess I should do this in Nuke too..

tonemap_example.hip ee_tonemap.otl

EDIT: a version patched by @fsimerey to work with newer Houdini versions is at

Edited February 28, 2017 by eetu
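For reference, two of the named curves are easy to write down from their published forms; here's a small Python sketch using the classic Reinhard operator and John Hable's "Uncharted 2" filmic constants. This is not the OTL's code — the Insomniac curve and the asset's actual parameter mapping are not reproduced:

```python
# classic Reinhard: compresses luminance smoothly toward 1
def reinhard(x):
    return x / (1.0 + x)

# John Hable's published filmic ("Uncharted 2") constants
A, B, C, D, E, F = 0.15, 0.50, 0.10, 0.20, 0.02, 0.30

def hable_partial(x):
    return ((x*(A*x + C*B) + D*E) / (x*(A*x + B) + D*F)) - E/F

def hable(x, white=11.2):
    # normalize so the chosen "linear white" value maps exactly to 1.0
    return hable_partial(x) / hable_partial(white)
```

Plotting either function over 0..white gives the kind of shoulder/toe curve the preview overlay in the video visualizes.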
slamfunk Posted October 8, 2014

Awesome, very handy tool! Thanks
eetu Posted November 18, 2014 Author (edited)

Simulating sand/snow as a two-phase variable-viscosity FLIP fluid. Something I've been dabbling in every now and then since last year and that awesome Disney MPM paper. I of course wanted to duplicate their stuff, but soon saw that it would be a huge job to try and build a proper material-point-method system in DOPs. I decided to try and bring some of the behavioural aspects into the more familiar FLIP system, in whatever way they could be expressed in it.

Here the main idea is to think of sand/snow as a two-phase fluid -- it's either free-flowing (viscosity zero) or clumped/solid (viscosity a million), and certain conditions can change its state between those two phases. The main driver for phase change here is strain magnitude, calculated with a Gas Strain Integrate DOP. Another factor is pressure; I've tried using it to both decrease and increase the phase change threshold - thinking of sand and snow respectively. I've also approximated compressibility by mixing in a bit of the pre-nondivergent velocity with the nondivergent one. I also have a material strength field that ends up creating a lot of visually interesting features.

This is of course far from perfect, but does give birth to some interesting behaviours. I've tried to counter some of the peculiarities by e.g. modifying the strain with velocity (matter in free fall is under no stress and thus can easily harden again) and depth from surface (internal pressure values can be nonphysically high with FLIP). Also I'm doing negative vortex confinement and negative surface tension to suppress fluid-like behaviour; I'm not sure whether that does bad things to the energy balance.

Overall this needs a lot of tweaking for any situation and it seems to be hard to create a generic tool out of this, so that's why I'm just giving out the raw r&d hip here..

Some older snow-like tests, which still had a more straightforward variable viscosity approach instead of a variable material strength driving the viscosity phase change: (well, in reality this is all mud-like, but that doesn't sound so cool;)

The hip is not meant to be usable as-is, but rather as an inspiration for inquiring minds on how to build custom stuff into the FLIP solver. As for documentation I'll just say that the main parameter is the Phase Change Threshold in the strain_to_viscosity gasfieldvop inside the solver..

mpm_dev_h64castle.hip

Edited November 29, 2020 by eetu fixed images
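As a reading aid for the hip, here's a tiny Python sketch of the core two-phase idea as described above: per-particle viscosity snapping between the free-flowing and solid phase based on a strain threshold, with the "matter in free fall is under no stress" correction folded in. The thresholds, the speed discount and the hysteresis band are invented stand-ins, not the actual strain_to_viscosity VOP logic:

```python
FLUID_VISC = 0.0      # free-flowing phase
SOLID_VISC = 1.0e6    # clumped/solid phase

def phase_step(particles, melt_thresh=1.0, freeze_thresh=0.2):
    # particles: dicts with 'strain', 'speed', 'viscosity'
    for p in particles:
        # discount strain on fast-moving matter: free fall carries no stress,
        # so it should be allowed to re-harden
        eff = p['strain'] / (1.0 + p['speed'])
        if eff > melt_thresh:
            p['viscosity'] = FLUID_VISC    # break apart and flow
        elif eff < freeze_thresh:
            p['viscosity'] = SOLID_VISC    # re-harden
        # in between the two thresholds: keep the current phase (hysteresis)
    return particles
```

The hysteresis band is one simple way to keep material from flickering between phases every substep; the real setup drives the threshold further with pressure and the material strength field.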
djwarder Posted November 18, 2014

Amazing work as usual Eetu! Can't wait to have a play with this ...
ikarus Posted November 19, 2014

I think this builds on good ideas for wet sand modelling, good effort!
eetu Posted March 25, 2015 Author (edited)

A bit more growth stuff, this time with more control - it tries to grow along a curve. 40MB mp4 render and 28MB mov opengl preview.

I also tested a hard boundary; it is a very explicit way to control the effect, but the result is not too organic, plus it lacks a sense of directionality.

Edited November 29, 2020 by eetu fixed image and links
goldleaf Posted March 25, 2015

Woh, that is way cool eetu! Is this also done w/ Python HDAs? Very, very cool. Cellular automata is something I have a hard time wrapping my head around, but I enjoy looking at examples in Houdini - things tend to make more sense here
br1 Posted March 25, 2015

Hard boundary looks great too!
eetu Posted March 25, 2015 Author

Thanks! This is no cellular automaton, this is a poly surface being abused within a Solver SOP. Trying to follow the quaddel path here..
eetu Posted March 25, 2015 Author

I'll just consolidate the raytrace uv baking shenanigans from this thread into here. The original scene is from Sebkaine.

-------------------------------------------------------------------------------------

How to bake to UV maps with PBR

Ok, I tried this before and failed, but this time I kinda got it. Quite hacky, but here we go.

First, render out a uv-unwrap of P and N using micropolygon mode. Second, and this is optional, dilate P and N in COPs. Thirdly, bake with PBR, using a lens shader that picks up the P and N from the above map, and shoots each ray back at the surface from 1mm height or something. Being Houdini, this can of course be trivially automated. It's not perfect and can need a bit of tinkering, but in a pinch it could be usable.

I have used the lens shader method before with mesh/nurbs primitives to bake some effect things, but by itself the method can only work with geometry that is a single primitive. This time I first thought I'd bake out a map with primitive IDs which the lens shader could read, but soon realized it would be a lot smarter to just bake out the P and N. It took a while to get the map dilation to work - I had all sorts of weird stuff happening before I realized the COPs dilate doesn't like negative numbers..

ee_uvlens_bedroom.hip

-----

How to bake to point colors with PBR

Ok, here we go!

1) In a lens shader, determine the sequential number of the pixel we are in, think of it as a point number, and fetch the P and N of that point in your geo.
2) As before, use those to fire a ray back at the surface from a millimeter up.
3) After rendering, in a vopsop, do the inverse operation of 1) and calculate u and v coordinates for the N'th pixel, where N is the current point number. Fetch the color in that pixel and set the point Cd attribute to that.

Step 2 is prone to fail with polygon edges, as the ray might or might not hit the surface when fired from/at an edge vertex. One could fire 4 rays and pick the shortest one or something like that to fix it.

There is a very weird flipping going on; as you can see, the color of the blue light is on the wrong side. Looks as if z is flipped somewhere, but I can't figure out where. One could probably trace rays "by hand" from the points, but I suppose the only way to get all the sampling goodness out of mantra is to render actual pixels.

bake_pbr_vertices.hip

----

(also check Skybar's quickie fix for the flipping problem in the original thread)
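The pixel↔point bookkeeping in steps 1 and 3 is simple enough to sketch in plain Python; the lens shader and vopsop would do the same arithmetic in VEX. Resolution and point numbers here are arbitrary:

```python
def point_to_pixel_uv(ptnum, xres, yres):
    # step 1: treat the sequential point number as a pixel index,
    # row by row, and return uv at that pixel's centre
    px, py = ptnum % xres, ptnum // xres
    return (px + 0.5) / xres, (py + 0.5) / yres

def pixel_to_point(u, v, xres, yres):
    # step 3: the inverse - which point number does pixel (u, v) belong to?
    px = int(u * xres)
    py = int(v * yres)
    return py * xres + px
```

As long as the render resolution has at least as many pixels as the geometry has points, the roundtrip is exact, so every point picks up the colour mantra computed "for it".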
eetu Posted March 28, 2015 Author

Something completely different for a change. Greenpeace had a video projection show here in Helsinki, footage of the arctic projected onto the local cathedral. I helped a little bit by adding an "ice" effect on top of the footage that made it look as if the front parts of the cathedral were made of ice. Of course there was a bit of trickery involved, so it has a place here in the Lab.

The editing guys wanted to have the freedom to change the edit as late as possible, and rendering refractive stuff can be sloooowww, hence the trickery:

First I took some photos and did a fast model in Agisoft Photoscan. I did a very simple model of the front parts in Houdini, and placed a large quad behind it - about where the wall of the cathedral would be. I set the quad to be emissive, with R going 0..1 according to u and G going 0..1 according to v. The pillars were refractive, so every pixel in the final render pass was of the color signifying the uv-coordinates of the location where the refraction ray hit the quad. In essence it's just a distortion uv-map.

I had a photo from the future location of the projector, so I eyeballed the model and camera to match the real view. In COPs I then performed with VOPs the per-pixel uv-mapping operation needed to comp the ice on top of the footage. It was real fast, about a second a frame for 1080p, quite a bit snappier than it would've been to actually render each frame.. (short mp4)

The problem with this approach, well one of them, is that every pixel only picks up one color sample from the background, so no soft refraction or multiple refractions/reflections on top of each other. And no antialias. To mitigate this I rendered the uv texture in 4k and did the compositing step in 4k as well, so I got at least a bit of antialias going. It was still fast. I also did a simple sss diffuse render that I blended a little into the final frame.

Here's some footage of the final thing I found on Youtube:

ice_ice_cathedral_v011.hip
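The comp step described above is conceptually just a per-pixel uv lookup: the rendered RG image tells each output pixel where in the footage to sample. Here's a nearest-neighbour Python sketch of that VOP operation (no filtering; the data layout is invented for illustration):

```python
def remap(background, uvmap):
    # background: 2D grid of colours (footage frame)
    # uvmap: 2D grid of (r, g) pairs in 0..1, the rendered distortion map
    h, w = len(background), len(background[0])
    out = []
    for row in uvmap:
        out_row = []
        for r, g in row:
            # nearest-neighbour lookup into the footage at uv = (r, g)
            x = min(int(r * w), w - 1)
            y = min(int(g * h), h - 1)
            out_row.append(background[y][x])
        out.append(out_row)
    return out
```

Since each output pixel takes exactly one sample, there's no soft refraction and no antialias from the lookup itself - which is why doing both the uv render and this step at 4k helps.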
zoki Posted March 29, 2015

Very nice eetu! I remember walking beside this church when I went to the market - lovely city, Helsinki. Nice projection mapping tricks, keep it up
musamaster Posted April 17, 2015

Eetu your posts are a godsend for us trying to figure out high end ideas. I fell in love with the unified growth stuff and saw that you were dabbling in it. I would love to hear how you tackled the self intersection testing in the SDF. In my tests, as soon as I try to convert the growing poly into VDBs and back, the surface just blends with itself, creating a smooth blob. How did you get it to stop growing as it is about to intersect?
eetu Posted April 18, 2015 Author

Hey, I do not convert the VDB back to polygons, I just use the VDB distance field value to check for intersections. Inside an Attribute VOP SOP I calculate the new, "grown", position for the current point, check if that new position would be inside the volume, and based on that I use either the old or the new position. See the attached file for details; it has the follow-curve and hard boundary versions of this.

I just held a two-day workshop on growth techniques at Bartlett School of Architecture this week, and quite liked it; maybe I'll make a video tutorial on this stuff. They also graciously 3d-printed one of the growth structures, I think this was frame 800 of the attached:

quaddel_v016.hip
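The accept/reject logic eetu describes fits in a few lines. Here's a hedged Python sketch of one growth step, where each point's candidate position is discarded if the distance field says it's already too close to existing surface - the SDF callable, step size and clearance value are placeholders, not values from the hip:

```python
def grow_points(points, normals, step, sdf, clearance=0.05):
    # move each point along its normal, unless the candidate position is
    # already (nearly) inside occupied space according to the distance field
    out = []
    for p, n in zip(points, normals):
        cand = (p[0] + step*n[0], p[1] + step*n[1], p[2] + step*n[2])
        # sdf(cand) < clearance: too close to (or inside) the surface - keep old
        out.append(cand if sdf(cand) > clearance else p)
    return out
```

This is why the mesh never blends with itself: intersecting positions are simply never accepted, instead of letting a VDB-to-polygons conversion smooth the collision away.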
Serg Posted April 18, 2015

AWESOME stuff Eetu! This is easily one of the best cg threads ever.