
Leaderboard


Popular Content

Showing most liked content since 02/27/2017 in all areas

  1. 56 points
    A few tips and tricks for manipulating gas simulations. 1. Independent resolution grids, e.g. overriding the vel grid size independently of the density grid. 2. Creating additional utility fields, e.g. gradient, speed, vorticity, etc., which can be used to manipulate forces. 3. Forces via VEX, with some example snippets (a rough sketch of the idea follows below). smokesolver_v1.hipnc P.S. Some of these techniques are not OpenCL friendly though.
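To give a flavour of point 3, here is a minimal sketch of a VEX force pumped into vel from a Gas Field Wrangle. This is illustrative only and not taken from the attached hip; the buoyancy direction, the noise scale and the assumption that density is added to the wrangle's field bindings are all mine.

// Gas Field Wrangle sketch, run over the vel field, with density also bound.
// Adds density-weighted buoyancy plus a small curl-noise push, scaled by the timestep.
vector swirl = curlnoise(@P * 0.5 + set(0.0, @Time, 0.0));
v@vel += (@density * {0, 1, 0} + 0.2 * swirl) * @TimeInc;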
  2. 51 points
    There are so many nice example files on this website that I am often searching for. I wanted to use this page as a link page to other posts that I find useful, hopefully you will too. Displaced UV Mapped Tubes Particles Break Fracture Glue Bonds Render Colorized Smoke With OpenGL Rop Moon DEM Data Creates Model Python Script Make A Belly Bounce Helicopter Dust Effect Conform Design To Surface Benjamin Button Intro Sequence UV Style Mapping UV Box and Multiple Projection Styles Ping Pong Frame Expression Instance vs. Copy (Instance Is Faster) Particle Bug Swarm Over Vertical and Horizontal Geometry Rolling Cube Rounded Plexus Style Effect Pyro Smoke UpRes Smoke Trails From Debris Align Object Along Path Fading Trail From Moving Point Swiss Cheese VDB To Polygons Get Rid Of Mushroom Shape In Pyro Sim A Tornado Ball Of Yarn Particles Erode Surface Unroll Paper Burrow Under Brick Road Non Overlapping Copies Build Wall Brick-By-Brick FLIP Fluid Thin Sheets Smoke Colored Like Image Volumetric Spotlight Moving Geometry Using VEX Matt's Galaxy Diego's Vortex Cloud Loopable Flag In Wind Eetu's Lab <--Must See! Wolverine's Claws (Fracture By Impact) Houdini To Clarisse OBJ Exporter Skrinkwrap One Mesh Over Another Differential Growth Over Surface [PYTHON]Post Process OBJ Re-Write Upon Export Rolling Clouds Ramen Noodles Basic Fracture Extrude Match Primitive Number To Point Number Grains Activate In Chunks Fracture Wooden Planks Merge Two Geometry Via Modulus Fill Font With Fluid DNA Over Model Surface VDB Morph From One Shape To Another Bend Font Along Curve Ripple Obstacle Across 3D Surface Arnold Style Light Blocker Sphere Dripping Water (cool) Exploded View Via Name Attribute VEX Get Obj Matrix Parts eetu's inflate cloth Ice Grows Over Fire Flying Bird As Particles DEM Image To Modeled Terrain Pyro Temperature Ignition Extrude Like Blender's Bevel Profile Particles Flock To And Around Obstacles BVH Carnegie Mellon Mocap Tweaker (python script) Rolling FLIP Cube Crowd Agents Follow Paths Keep Particles On Deforming Surface Particle Beam Effect Bendy Mograph Text Font Flay Technique Curly Abstract Geometry Melt Based Upon Temperature Large Ship FLIP Wake (geo driven velocity pumps) Create Holes In Geo At Point Locations Cloth Blown Apart By Wind Cloth Based Paper Confetti Denim Stitching For Fonts Model A Raspberry Crumple Piece Of Paper Instanced Forest Floor Scene FLIP pushes FEM Object Animated Crack Colorize Maya nParticles inside an Alembic Path Grows Inside Shape Steam Train Smoke From Chimney Using Buoyancy Field On RBDs In FLIP Fluid Fracture Along A Path COP Based Comet Trail eetu's Raidal FLIP Pump Drip Down Sides A Simple Tornado Point Cloud Dual Colored Smoke Grenades Particles Generate Pyro Fuel Stick RBDs To Transforming Object Convert Noise To Lines Cloth Weighs Down Wire (with snap back) Create Up Vector For Twisting Curve (i.e. 
loop-d-loop) VDB Gowth Effect Space Colonization Zombie L-System Vine Growth Over Trunk FLIP Fluid Erosion Of GEO Surface Vein Growth And Space Colonization Force Only Affects Particle Inside Masked Area Water Ball External Velocity Field Changes POP particle direction Bullet-Help Small Pieces Come To A Stop Lightning Around Object Effect Lightning Lies Upon Surface Of Object Fracture Reveals Object Inside Nike Triangle Shoe Effect Smoke Upres Example Julien's 2011 Volcano Rolling Pyroclastic FLIP Fluid Shape Morph (with overshoot) Object Moves Through Snow Or Mud Scene As Python Code Ramp Scale Over Time Tiggered By Effector Lattice Deforms Volume Continuous Geometric Trail Gas Enforce Boundary Mantra 2D And 3D Velocity Pass Monte Carlo Scatter Fill A Shape Crowd Seek Goal Then Stop A Bunch Of Worms Potential Field Lines Around Postive and Negative Charges Earthquake Wall Fracture Instance Animated Geometry (multiple techniques) Flip Fluid Attracted To Geometry Shape Wrap Geo Like Wrap3 Polywire or Curve Taper Number Of Points From Second Input (VEX) Bullet Custom Deformable Metal Constraint Torn Paper Edge Deflate Cube Rotate, Orient and Alignment Examples 3D Lines From 2D Image (designy) Make Curves In VEX Avalanche Smoke Effect Instant Meshes (Auto-Retopo) Duplicate Objects With VEX Polywire Lightning VEX Rotate Instances Along Curved Geometry Dual Wind RBD Leaf Blowing Automatic UV Cubic Projection (works on most shapes) RBD Scatter Over Deforming Person Mesh FLIP Through Outer Barrier To Inner Collider (collision weights) [REDSHIFT] Ground Cover Instancing Setup [REDSHIFT] Volumetric Image Based Spotlight [REDSHIFT] VEX/VOP Noise Attribute Planet [REDSHIFT] Blood Cell Blood Vessel Blood Stream [REDSHIFT] Light Volume By Material Emission Only [REDSHIFT] Python Script Images As Planes (works for Mantra Too!) 
[REDSHIFT] MTL To Redshift Material [REDSHIFT] Access CHOPs In Volume Material [REDSHIFT] Mesh Light Inherits Color [REDSHIFT] Color Smoke [REDSHIFT] FBX Import Helper [REDSHIFT] Terrain Instancer Height Field By Feature Dragon Smashes Complex Fractured House (wood, bricks, plaster) Controlling Animated Instances Road Through Height Field Based Terrain Tire Tread Creator For Wheels Make A Cloth Card/Sheet Follow A NULL Eye Veins Material Matt Explains Orientation Along A Curve Mesh Based Maelstrom Vortex Spiral Emit Multiple FEM Objects Over Time Pushing FEM With Pyro Spiral Motion For Wrangle Emit Dynamic Strands Pop Grains Slope, Peak and Flat Groups For Terrains Install Carnegie Mellon University BVH Mocap Into MocapBiped1 Ramp Based Taper Line Fast Velocity Smoke Emitter Flip Fill Cup Ice Cubes Float [PYTHON]Export Houdini Particles To Blender .bphys Cache Format Collision Deform Without Solver or Simulation Mograph Lines Around Geometry Waffle Cornetto Ice Cream Cone Ice Cream Cone Top Unroll Road Or Carpet Burning Fuse Ignites Fuel or Painted Fuel Ignition Painted Fuel Combustion Small Dent Impact Deformation Particle Impact Erosion or Denting Of A Surface Helicopter Landing Smoke And Particles Radial Fracture Pieces Explode Outwards Along Normal Tangent Based Rocket Launch Rolling Smoke Field Tear/Rip FLIP (H12 still works in H16) Rain Flows Over Surface Rains Water Drip Surface Splash Smoke Solver Tips & Tricks Folding Smoke Sim VEX Generated Curve For Curling Hair Copy and Align One Shape Or Object To The Primitives Of Another Object (cool setup) A Better Pop Follow Curve Setup FEM Sea Cucumber Moves Through Barrier Fracture Cloth Smoke Confinement Setup Merge multiple .OBJ directly Into A Python Node Blood In Water Smoke Dissipates When Near Collision Object Whirlpool Mesh Surface Whirlpool Velocity Motion For FLIP Simple Bacteria Single Point Falling Dust Stream Flames Flow Outside Windows Gas Blend Density Example Localized Pyro Drag (smoke comes to a stop) Granular Sheet Ripping Post Process An Export (Post Write ROP Event) Corridor Ice Spread or Growth Set Velocity On Pieces When Glue Bonds Break Water Drops Along Surface Condensation Bottle Grains Snow or Wet Sand Starter Scene A Nice Little Dissolver Turn An Image Into Smoke Fading Ripples Grid Example Stranger Things Wall Effect Face Through Rubber Wall [PYTHON]Create Nurbs Hull Shelf Tool [PYTHON] Ramp Parameter Select Outside Points Of Mesh, Honor Interior Holes Sparks Along Fuse With Smoke Umbrella Rig Melt FLIP UVs Tire Burn Out Smoke Sim Flip or Pyro Voxel Estimate Expression Motorcycle or Dirt Bike Kicks Up Sand Particles Push Points Out Of A Volume [PYTHON]Cellular Automata Cave Generator Punch Dent Impact Ripple Wrinkle VEX Rotate Packed Primitive Via Intrinsic Kohuei Nakama's Effect FLIP Fluid Inside Moving Container Particles Avoid Metaball Forces FLIP Divergence Setup FLIP Transfer Color Through Simulation To Surface Morph Between Two Static Shapes As Pyro Emits Constraint Based Car Suspension Pyro Smoke Gas Disturbs Velocity Wire Solver Random Size Self Colliding Cables Fast Cheap Simple Collision Deform CHOP Based Wobble For Animated Character Slow Motion FLIP Whaitewater Avoid Stepping In Fast Pyro Emission FLIP Fluid Fills Object Epic Share Of Softbody/Grain Setups (Must see!) 
Balloon, Pizza, Sail, Upres Shirt, Paint Brush Create Pop Grain Geometry On-The-Fly In A DOPs Solver Varying Length Trails VEX Based Geometry Transform Determine Volume Minimum and Maximum Values Grain Upres Example Animated pintoanimation For Cloth Sims Batch Render Folder Of OBJ files Vellum Weaving Cloth Fibers Knitting Kaleidoscopic Geometry UV Image Map To Points Or Hair Color Particles Like Trapcode Particular Flat Tank Boat Track With Whitewater Orthographic Angle Font Shadow Select Every Other Primitive or Face? Printer Spits Out Roll Of Paper Unroll Paper, Map, Plans, Scroll Simple Vellum L-System Plant Basic Cancer Cell Use Google To Discover Attached HIP Files Useful Websites: Tokeru Houdini Houdini Vex Houdini Python FX Thinking iHoudini Qiita Ryoji Video Tutorials: Peter Quint Rohan Dalvi Ben Watts Design Yancy Lindquist Contained Liquids Moving Fem Thing Dent By Rigid Bodies Animating Font Profiles Guillaume Fradin's Mocap Crowd Series(no longer available) Swirly Trails Over Surface http://forums.odforce.net/topic/24861-atoms-video-tutorials/ http://forums.odforce.net/topic/17105-short-and-sweet-op-centric-lessons/page-5#entry127846 Entagma SideFX Go Procedural
  3. 34 points
    Filament-like structures; a combination of the Smoke Solver, VDB Advect Points + Volume Rasterize Particles. smokesolver_v3.hipnc
  4. 33 points
    Another one, focused on instancing smoke objects: manipulating points with basic instancing attributes, i@cluster, v@scale and f@sourceframe, and how to activate a smoke object and hold a volume source. This method is ideal for triggering independent gas simulations from impact data (a rough sketch of the attributes is below). Additional examples, e.g. a grid clustering method for trail and non-trail versions, which I'm merging in from a separate thread. smokesolver_v2.hipnc
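As a rough illustration of those instancing attributes, a point wrangle over the impact points might tag each one like this. The attribute names come from the post above; the values and ranges are made-up placeholders, and the attached hip is the real reference.

// Point wrangle sketch: one smoke instance per impact point.
i@cluster = @ptnum;                                       // cluster/instance id
v@scale = set(1, 1, 1) * fit01(rand(@ptnum), 0.5, 1.5);   // per-instance size variation
f@sourceframe = @Frame;                                   // frame this point starts sourcing on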
  5. 29 points
    During the last 3 weeks, I did some R&D and published my results on Vimeo. Some people asked me to share my files here, so here we are. I hope it will help!
  6. 26 points
    I want to share a little tool I made for grooming feathers. It's a set of 6 nodes, one base node and 5 modifiers. Super easy to use. Just connect them and... there you go - you've got yourself a pretty little feather. You can layer as many modifiers as you want. Any feedback is super appreciated. https://www.dropbox.com/sh/8v05sgdlo5erh0b/AADSfadqkxgPOBVeaGr2O49Oa?dl=0
  7. 23 points
    A lot of people asked me to share this fake fire method. If you're interested, you can check this simple hip. After rendering I used ACES for a better look. fake_fire_rnd.hip
  8. 22 points
    Pixelkram / Moritz S. (of Entagma) and I are proud to announce MOPs: an open-source toolkit for creating motion graphics in Houdini! MOPs is both a suite of ready-to-use tools for solving typical motion graphics problems, and a framework for building your own custom operators easily. More information is available from our website: http://www.motionoperators.com Enjoy!
  9. 22 points
  10. 20 points
    Since there's been a lot of talk around the web about graphics APIs this past week with Apple's decision to deprecate OpenGL in MacOS Mojave, I thought I'd take this opportunity to discuss the various graphics APIs and address some misconceptions. I'm doing this as someone who's used all versions of OpenGL from 1.0 to 4.4, and not with my SideFX hat on. So I won't be discussing future plans for Houdini, but instead will be focusing on the APIs themselves.

OpenGL

OpenGL has a very long history dating back to the 90s. There have been many versions of it, but the most notable ones are 1.0, 2.1, 3.2, and 4.x. Because of this, it gets a reputation for being old and inefficient, which is somewhat true but not the entire story. Certainly GL1.0 - 2.1 is old and inefficient, and doesn't map well to modern GPUs. But then in the development of 3.0, a major shift occurred that nearly broke the GL ARB (architecture review board) apart. There was a major move to deprecate much of the "legacy" GL features and replace them with modern GL features - and out of that kerfuffle the OpenGL core and compatibility profiles emerged. The compatibility profile added these new features alongside the old ones, while the core profile completely removed them. The API in the core profile is what people are referring to when they talk about "Modern GL". Houdini adopted modern GL in v12.0 in the 3D Viewport, and more strict core-profile only support in v14.0 (the remaining UI and other viewers).

Modern GL implies a lot of different things, but the key ones are: geometry data and shader data must be backed by VRAM buffers, shaders are required, and all fixed function lighting, transformation, and shading is gone. This is good in a lot of ways. Geometry isn't being streamed to the GPU in tiny bits anymore and is instead kept on the GPU, the GL "big black box" state machine is greatly reduced, and there's a lot more flexibility in the display of geometry from shaders. You can light, transform, and shade the model however you'd like. For example, all the various shading modes in Houdini, primitive picking, visualizers, and markers are all drawn using the same underlying geometry - only the shader changes. OpenGL on Windows was actually deprecated decades ago. Microsoft's implementation still ships with Windows, but it's an ancient OpenGL 1.1 version that no one should use. Instead, Nvidia, AMD and Intel all install their own OpenGL implementations with their drivers (and this extends to CL as well).

Bottlenecks

As GPUs began getting faster, what game developers in particular started running into was a CPU bottleneck, particularly as the number of draw calls increased. OpenGL draw calls are fast (more so than DirectX), but eventually you get to a point where the driver code prepping the draw starts to become significant. More detailed worlds meant not only bigger models and textures, but more of them. So the GPU started to become idle waiting on draws from the CPU, and that draw load began taking away from useful CPU work, like AI. The first big attempt to address this was in the form of direct state access and bindless textures. All resources in OpenGL are given an ID - an integer which you can use to identify a resource for modifying it and binding it to the pipeline. To use a texture, you bind this ID to a slot, and the shader refers to this slot through a sampler. As more textures were used and switched within a frame, mapping the ID to its data structure became a more significant load on the driver.
Bindless does away with the ID and replaces it with a raw pointer. The second was to move more work to the GPU entirely, and GLSL Compute shaders (GL4.4) were added, along with Indirect draw calls. This allows the GPU to do culling (frustum, distance based, LOD, etc) with an OpenCL-like compute shader and populate some buffers with draw data. The indirect draw calls reference this data, and no data is exchanged between GPU and CPU. Finally, developers started batching as much up as possible to reduce the number of draw calls to make up for these limitations. Driver developers kept adding more optimizations to their API implementations, sometimes on a per-application basis. But it became more obvious that for realtime display of heavy scenes, and with VR emerging where a much higher frame rate and resolution is required, current APIs (GL and DX11) were reaching their limit.

Mantle, Vulkan, and DX12

AMD recognized some of these bottlenecks and the bottleneck that the driver itself was posing to GPU rendering, and produced a new graphics API called Mantle. It did away with the notion of a "fat driver" that optimized things for the developer. Instead, it was thin and light - and passed off all the optimization work to the game developer. The theory behind this is that the developer knows exactly what they're trying to do, whereas the driver can only guess. Mantle was eventually passed to Khronos, who develops the OpenGL and CL standards, and from that starting point Vulkan emerged. (DirectX 12 is very similar in theory, so for brevity's sake I'll lump them together here - but note that there are differences).

Vulkan requires that the developer be a lot more up-front and hands on with everything. From allocating large chunks of VRAM and divvying it up among buffers and textures, saying exactly how a resource will be used at creation time, and describing the rendering pipeline in detail, Vulkan places a lot of responsibility on the developer. Error checking and validation can be entirely removed in shipping products. Even draw calls are completely reworked - no more global state and swapping textures and shaders willy-nilly. Shaders must be wrapped in an object which also contains all its resources for a given draw per framebuffer configuration (blending, AA, framebuffer depths, etc), and command buffers built ahead of time in order to dispatch state changes and draws. Setup becomes a lot more complicated, but also is more efficient to thread (though the dev is also completely responsible for synchronization of everything from object creation and deletion to worker and render threads). Vulkan also requires all shaders be precompiled to a binary format, which is better for detecting shader errors before the app gets out the door, but also makes generating them on the fly more challenging. In short, it's a handful and can be rather overwhelming.

Finally, it's worth noting that Vulkan is not intended as a replacement for OpenGL; Khronos has stated that from its release. Vulkan is designed to handle applications where OpenGL falls short. A very large portion of graphics applications out there don't actually need this level of optimization. My intent here isn't to discourage people from using Vulkan, just to say that it's not always needed, and it is not a magic bullet that solves all your performance problems.

Apple and OpenGL

When OSX was released, Apple adopted OpenGL as its graphics API.
OpenGL was behind most of its core foundation libraries, and as such they maintained more control over OpenGL than Windows or Linux. Because of this, graphics developers did not install their own OpenGL implementations as they did for Windows or Linux. Apple created the OpenGL frontend, and driver developers created the back end. This was around the time of the release of Windows Vista and its huge number of driver-related graphics crashes, so in retrospect the decision makes a lot of sense, though that situation has been largely fixed in the years since.

Initially Apple had support for OpenGL 2.1. This had some of the features of Modern GL, such as shaders and buffers, but it also lacked other features like uniform buffers and geometry shaders. While Windows and Linux users enjoyed OpenGL 3.x and eventually 4.0, Mac developers were stuck with a not-quite-there-yet version of OpenGL. Around 2012 they addressed this situation and released their OpenGL 3.2 implementation ...but with a bit of a twist. Nvidia and AMD's OpenGL implementations on Windows and Linux supported the Compatibility profile. When Apple released their GL3.2 implementation it was Core profile only, and that put some developers in a tricky situation - completely purge all deprecated features and adopt GL3.2, or remain with GL2.1. The problem being that some deprecated features were actually still useful in the CAD/DCC universe, such as polygons, wide lines, and stippled lines/faces. So instead of the gradual upgrading devs could do on the other platforms, it became an all-or-nothing affair, and this likely slowed adoption of the GL3.2 profile (pure conjecture on my part). This may have also contributed to the general stability issues with GL3.2 (again, pure conjecture).

Performance was another issue. Perhaps because of the division of responsibility between the driver developer of the GPU maker and the OpenGL devs at Apple, or perhaps because the driver developers added specific optimizations for their products, OpenGL performance on MacOS was never quite as good as on other platforms. Whatever the reason, it became a bit of a sore point over the years, with a few game developers abandoning the platform altogether. These problems likely prompted them to look at an alternate solution - Metal. Eventually Apple added more GL features up to the core GL4.1 level, and that is where it has sat until their announcement of GL deprecation this week. This is unfortunate for a variety of reasons - versions of OpenGL above 4.1 have quite a few features which address performance for modern GPUs and portability, and it's currently the only cross platform API since Apple has not adopted Vulkan (though a third party MoltenVK library exists that layers Vulkan on Metal, it is currently a subset of Vulkan).

Enter Metal

Metal emerged around the time of Mantle, and before Khronos had begun work on Vulkan. It falls somewhere in between OpenGL and Vulkan - more suitable for current GPUs, but without the extremely low-level API. It has compute capability and most of the features that GL does, with some of the philosophy of Vulkan. Its major issues for developers are similar to those of DirectX - it's platform specific, and it has its own shading language. If you're working entirely within the Apple ecosystem, you're probably good to go - convert your GL-ES or GL app, and then continue on. If you're cross platform, you've got a bit of a dilemma.
You can continue on with business as usual with OpenGL, fully expecting that it will remain as-is and might be removed at some point in the future, possibly waiting until a GL-on-top-of-Metal API comes along or Apple allows driver developers to install their own OpenGL like Microsoft does. You can implement a Metal interface specific to MacOS, port all your shaders to Metal SL and maintain them both indefinitely (Houdini has about 1200). Or, you can drop the platform entirely. None of those seem like very satisfactory solutions. I can't say the deprecation comes as much of a surprise, with Metal development ongoing and GL development stalling on the Mac. It seems like GL was deprecated years ago and this is just the formal announcement. One thing missing from the announcement was a timeframe for when OpenGL support would end (or if it will end). It does seem like Apple is herding everyone toward Metal, though how long that might take is anyone's guess.

And there you have it, the state of graphics APIs in 2018 - from a near convergence of DX11 and GL4 a few short years ago, to a small explosion of APIs. Never a dull moment in the graphics world.
  11. 20 points
    Hello, since last week I can play with Houdini again and keep going with my tests... and below are some of my latest hip files from this video: torus+wrinckles+.hiplc stick man rbd+ .hiplc bubbles- rbd+cloth-2.hiplc
  12. 20 points
    I've wanted to tackle mushroom caps in pyro sims for a while. Might as well start here... Three things contribute greatly to the mushroom caps: coarse sub-steps, the temperature field and the divergence field. All of these together will comb your velocity field pretty much straight out and up. Turning on the velocity visualization trails will show this very clearly. If you see vel combed straight out, you are guaranteed to get mushrooms in that area. If you are visualizing the velocity, it's best to adjust the visualization range by going forward a couple of frames and adjusting the max value until you barely see red. That's your approximate max velocity value. An off-the-shelf pyro explosion on a hollow fuel source sphere at frame 6 will be about 16 Houdini units per second, and the max velocity coincides with the leading edge of the divergence field (if you turn it on for display, you'll see that). So Divergence is driving the expansion, which in turn pushes the velocity field and forms a pressure front ahead of the explosion because of the Project Non-Divergent step, which assumes the gas is incompressible across the timestep, that is, where divergence is 0. I'm going to get the resize field thingy out of the way first as that is minor to the issue but necessary to understand.

Resizing Fields

Yes, if you have a huge explosion with massive velocities driven by a rapidly expanding divergence field, you could have velocities of 40 Houdini units per second or higher! Turning off the Gas Resize will force the entire container to evaluate, which is slow but may be necessary in some rare cases, but I don't buy that. What you can do is, while watching your vel and divergence fields in the viewport, adjust the Padding parameter in the Bounds field high enough to keep ahead of the velocity front, as that is where you hope for some nice disturbance, turbulence and confinement to stir around the leading edge of the explosion. Or... Use several fields to help drive the resizing of the containers. Repeat: Use multiple fields to control the resizing of your sim containers. Yep, even though it says "Reference Field" and the docs say "Fluid field..", you can list as many fields in this parameter field as you want to help in the resizing. In case you didn't know.

Diving in to the Resize Container DOP, there is a SOP Solver that contains the resizing logic. It constructs a temporary field called "ResizeField", importing the fields (by expanded string name from the simulation object, which is why vector fields work) with a ForEach SOP, each field in turn, then does a volume bound with the Volume Bounds SOP on all the fields together using the Field Cutoff parameter. Yes there is a bit of an overhead in evaluating these fields for resizing, but it is minor compared to having no resizing at all, at least for the first few frames where all the action and sub-stepping needs to happen. The default is density, and why not, it's good for slower moving sims. Try using density and vel: "density vel". You need both, as density will ensure that the container will at least bound your sources when they are added. Then vel will very quickly take over the resizing logic as it expands far more rapidly than any other field in the sim. Then use the Field Cutoff parameter to control the extent of the container. The default here is 0.005. This works for density as this field is really a glorified mask: either 0 or 1 and not often above 1. Once you bring the velocity field in to the mix, you need to adjust the Field Cutoff.
Now that you have vel defined alongside density, this Field Cutoff reads as 0.005 Houdini units per second wrt the vel field. Adjust Field Cutoff to suit. Start out at 0.01 and then go up or down. Larger values give you smaller, tighter containers. Lower values give you larger padding around the action. It all depends on your sim, scale and velocities present. Just beware that if you start juicing the ambient shredding velocity with no Control Field (it defaults to temperature with its own threshold parameter, so leave it there) to values above the Field Cutoff threshold, your container will zip to full size, and if you have Max Bounds off, you will promptly fill up your memory and after a few minutes of swapping death, Houdini will run out of memory and terminate. Just one of the things to keep in mind if you use vel as a resizing field. Not that I've personally done that... The Resolution Scale is useful to save on memory for very large simulations, which means you will be adjusting this for large simulations. The Gas Resize Field DOP creates a temporary field called ResizeBounds and the resolution scale sets this container's resolution compared to the reference fields. Remember from above that this parameter is driving the Volume Bounds SOP's Bounding Value. Coarser values lead to blurred edges but that is usually a good thing here. Hope that clears things up with the container resizing thing. Try other fields for sims if they make sense, but remember there is an overhead to process. For Pyro explosions, density and vel work ok. For combustion sims like fire, try density and temperature, where buoyancy contributes a lot to the motion.
  13. 18 points
    I thought it fitting to post this here too ;). For better or worse, I'm launching a vfx and animation studio at the end of the week. Some of you may recognize some of the name (if you squint and look at it just right). http://theodstudios.com
  14. 15 points
    Hi everyone, here's a little personal project I did over the last year. No keyframes were used for the animation. Each movement is generated through physical simulation or procedural noise. The bananas and pears are done in H16.5 using CHOPs-controlled bones and then fed into a FEM simulation. All the other fruits are done using H17 and Vellum. ÖBST: "How would fruits move if they could?" Hope you like it.
  15. 15 points
    Hey, thought I'd share this here. Preview of tree and foliage creation and layout tools now available on Gumroad. I've released them as "pay what you want" as my contribution to the community. I plan to keep supporting and improving these tools in future as well as releasing other tools. Let me know if you have any feedback/suggestions and I look forward to seeing what people create with them. Enjoy! https://gumroad.com/l/zWFNX
  16. 15 points
    Hi all, I have been doing an R&D project lately on how to generate knitted garments in Houdini. One of my inspirations was a project done by Psyop using Fabric Engine, and the other one is by my friend Burak Demirci. Here are the links to them. http://fabricengine.com/case-studies/psyop-part-2/ https://www.artstation.com/artist/burakdemirci Some people asked me to share my hip file and I was going to do it sooner, but things were a little busy for me. Here it is; I also put in some sticky notes to explain the process better, hope it helps. This hip file is identical to the one I created this video with, except for the rendering nodes: https://vimeo.com/163676773 . I think there are still some things that can be improved and maybe done in a better way. I would love to see people developing this system further. Cheers! Alican Görgeç knitRnD.zip
  17. 15 points
    A basic smoke solver built within a SOP Solver, utilising OpenVDB nodes. Happy exploring & expanding =) P.S. DOP's smoke solver still solves quicker in many cases though. vdbsmokesolver_v1.hipnc vdbsmokesolver_v2.hipnc
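For a sense of what the core of such a SOP-level solver does, here is a minimal semi-Lagrangian advection sketch (not taken from the attached files; the input layout and volume names are assumptions): a volume wrangle over the density volume, with the vel volume on the second input.

// Volume wrangle over density: trace back along the velocity and fetch the density there.
vector v = volumesamplev(1, "vel", @P);
@density = volumesample(0, "density", @P - v * @TimeInc);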
  18. 14 points
    Congratulations, SideFX -- and a priceless still frame! Best of luck for the future and keep innovating!
  19. 14 points
  20. 14 points
    Hi, I played around with your scene a bit. This is the result. I also ended up changing your extrusion expression to use exp() instead of pow(). The gist of it was to do 2 iteration phases: one is ahead, and the other is behind. Then just blend between those two. The challenging part was to create the attributes needed to specify the iteration level and the blending amount (check the blend_and_iter wrangle node for how I processed it). This does seem to chug when there are already too many polygons to process, though. I wanted to place it inside a compiled block, but PolyExtrudes are not yet compilable. H16.5.268 NC - Subdiv_Test_v2.rar
  21. 14 points
    Coarse Sub-Steps

If you have an expanding gas field front that from frame 1 to 2 or frame 2 to 3 travels one or two Houdini units and substeps are set to 1, you will get combed straight velocity vectors, which means mushroom caps. No matter how much turbulence or confinement you set on your Pyro Solver DOP, there simply isn't enough time to evolve these fields and have an effect on the result. More substeps also means smaller velocities to deal with between substeps, making things more manageable. In an attempt to keep substeps at 1, you can manufacture noise and pump that in to vel, but in the end two things will happen: the Non-Divergent step will take your noise and negate most of it, or you end up pumping in so much noise (because it wasn't working with the smaller values you tried earlier) that it swamps the entire effect and it looks like a fractal hash and not that nice evolving fireball. Oh, and if you really pump in tons of noise in to vel, it too can create many smaller velocity fronts pushing ahead and you end up with smaller mushroom caps! Doh... This is in essence what the Gas Disturbance DOP does. The Pyro Solver has a Gas Disturbance DOP in its logic and those parameters are promoted up to the top asset interface, but we're concerned about substeps right now and allowing enough time for turbulence and confinement to create the nice swirls on the leading edge of the explosion. So it comes down to sub-steps to try and allow for a lot more character around the leading pressure front for fast evolving explosion type simulations. There are two ways to go about this: brute force increase the global substeps for the entire DOP network, or use the Pyro Solver Substeps in the Advanced tab.

Brute Force Global Substeps

For explosions, the huge almost instantaneous velocities happen in the first 5-10 frames. It would be nice to keyframe animate the Sub Steps parameter, but you can't (DOPs is that way). If you set the global sub-steps to get enough detail in the first few frames, you have to carry those sub-steps through the rest of the sim when things are moving a lot slower and those substeps are no longer required. Not that great. No wonder everyone tries to inject their own pumps to affect vel to avoid global substepping.

Pyro Solver Substeps

The Pyro Solver exposes minimum and maximum substepping logic to control when and how the Pyro Solver will substep. This sounds interesting and could be just what we need. But what is the CFL Condition? No, it isn't the Canadian Football League, even though we know that 3 downs rule and 4 downs are for those that can't deal with 3. It's named after a couple of guys who, in the '20s (that's the 1920s), were trying to figure out the frequency of data samples they required in order to map and predict fluid simulations and pressures/resistance to flow with fast moving collision objects (that be ships). The help note on the actual Gas SubStep DOP explains it quite well: with a CFL of 1 the timestep will be reduced so the velocity field moves only 1 voxel in a timestep, and a CFL of 2 will allow it to move 2 voxels in a timestep. Or something like that. You can find it on Wikipedia. You can set your minimum substeps to 1 and your maximum substeps to a high enough value such that if the CFL Condition is exceeded, more substeps will occur when the simulation has large velocities and fewer when the velocity is smaller. Hopefully this gives enough time to let the turbulence and the other methods that stir up the vel field kick in.
Keyframe Timescale

There is a third option for controlling sub-steps, and that is to keyframe animate the Timescale. Yes, it is more than valid to do this to slow down the sim at the start and then speed it up when the huge velocities subside. As a matter of fact, the shelf tools set Timescale to 0.65 as an attempt to get a good looking explosion or fireball without having to resort to substeps. But this is not an automatic method. This requires intervention if you want to animate the timescale. This means you have to run the sim and evaluate. Then you keyframe the timescale and you end up with an entirely different simulation. Then you move your keys, run again. Then you increase the resolution of the simulation and everything changes again. In many ways, it's worth at least giving the min and max substeps a go and seeing if you can dial in the CFL Condition to get a happy balance. As you increase the resolution of the simulation, the CFL condition measured in voxels will allow substeps to run up a bit faster to the max without too much of a change in the final result.
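To put rough numbers on the CFL idea (the voxel size here is an assumption, not from the post): the off-the-shelf explosion mentioned earlier peaks around 16 Houdini units per second, so at 24 fps the front moves about 16 / 24 ≈ 0.67 units per frame. At a voxel size of 0.1 that is roughly 6-7 voxels of travel per frame, so a CFL Condition of 2 would push the solver to around 3-4 substeps on those early frames, then fall back toward the minimum of 1 once the velocities die down.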
  22. 13 points
    I finally got around to cleaning up the hip file and have attached it. If you end up using this, please let me know how it's going and share a pointer to your work. The major challenge with writing this in VEX was that VEX does not offer any of the canonical data structures one would use to efficiently implement this. For me, the simulation ends up running out of memory around 1000 frames. As I am a novice to Houdini, I would also appreciate any feedback and comments you might have if you end up taking a look. Enjoy? :-) MorphogensisInVex.hiplc
  23. 13 points
    Basic:

// Primitive wrangle.
int pts[] = primpoints(0, @primnum);
vector rest = point(0, "P", pts[0]);
vector prev_pos = rest;
matrix3 frame = ident();

for (int i = 0; i < len(pts); i++)
{
    vector pos = point(0, "P", pts[i]);
    rotate(frame, 0.1, {0, 0, 1});
    vector new_pos = (pos - rest) * frame + prev_pos;
    rest = pos;
    prev_pos = new_pos;
    setpointattrib(0, "P", pts[i], new_pos);
}

Advanced:

// Primitive wrangle.
#define TWO_PI 6.2831852

addpointattrib(0, "N", {0, 0, 0});
int pts[] = primpoints(0, @primnum);
int npt = len(pts);

// Loop variables.
vector rest = point(0, "P", pts[0]);
vector prev_pos = rest;
matrix3 frame = ident();

for (int i = 0; i < npt; i++)
{
    vector pos = point(0, "P", pts[i]);
    vector delta = pos - rest;
    rest = pos;

    // Make normal. Point normals could be used instead.
    vector normal = normalize(cross(cross({0, 1, 0}, delta), delta));
    if (length(normal) == 0)
    {
        normal = {0, 0, 1};
    }

    // Drive a shape with ramps and multipliers.
    vector axis;
    float ramp, angle;

    // Twist the bend axis.
    axis = normalize(delta);
    ramp = chramp("twist_profile", (float) i / npt);
    angle = fit01(ramp, -TWO_PI, TWO_PI) * ch("twist") / (npt - 1);
    rotate(frame, angle, axis);

    // Bend the curve.
    axis = normalize(cross(normal, delta));
    ramp = chramp("bend_profile", (float) i / npt);
    angle = fit01(ramp, -TWO_PI, TWO_PI) * ch("bend") / (npt - 1);
    rotate(frame, angle, axis);

    // Compute new position and normal.
    vector new_pos = delta * frame + prev_pos;
    prev_pos = new_pos;
    setpointattrib(0, "P", pts[i], new_pos);
    setpointattrib(0, "N", pts[i], normal * frame);
}

curl.hipnc
  24. 13 points
    Methods to Stir Up the Leading Velocity Pressure Front

We need to disturb that leading velocity pressure front to start the swirls and eddies prior to the fireball. That, and have a noisy, interesting emitter.

Interesting Emitters and Environments

I don't think that a perfect sphere exploding in to a perfect vacuum with no wind or other disturbance exists, except in software. Some things to try are to pump some wind-like swirls in to the container to add some large forces to shape the sim later on as it rises. The source by default already has noise on it by design. This does help break down the effect, but the Explosion and fireball presets have so much divergence that very quickly it turns in to a glowing smooth ball. But it doesn't hurt. It certainly does control the direction of the explosion.

Directly Affecting the Pressure Front - Add Colliders with Particles

One clever way is to surround the exploding object with colliders: points scaled large enough to force the leading velocity field to wind through and cause the nice swirls. There are several clever ways to proceduralize this. The easiest way is to use the Fluid Source SOP, manipulate the Edge Location and Out Feather Length, scatter points in there, then run the Collide With tool on the points. Using colliders to cut up the velocity over the first few frames can work quite well. This will try to kick the leading pressure velocity wave about and hopefully cause nice swirling and eddies as the explosion blows through the colliders. I've seen presentations where smoke dust walls flow along the ground through invisible tube colliders just to encourage the swirling of the smoke. You can also advect points through the leading velocity field and use these as vorticles to swirl the velocity about (a rough sketch follows below). The one nice thing about using geometry to shape and control the look is that, as you increase the resolution of the sim, it has a tendency to keep its look intact, at least in the bulk motion. As an aside, you could add the collision field to the resize container list (density and vel) to make sure the colliders are always there, if it makes sense to do so. Colliders work well when you have vortex confinement enabled. You can use this, but confinement has a tendency to shred the sim as it progresses. You can keyframe confinement and boost it over the first few frames to try and get some swirls and eddies to form.

Pile On The Turbulence

Another attempt to add a lot of character to that initial velocity front is to add heaping loads of turbulence to counter the effect of the disturbance field. You can add as many Gas Turbulence DOPs as you like to the velocity shaping input of the Pyro Solver to do the job. Usually the built-in turbulence is set up to give you nice behaviour as the fireball progresses. Add another net new one and set it up to only affect the velocity for those first few frames. Manufacturing the turbulence in this case. In essence no different than using collision geometry, except that it doesn't have the regulating effect that geometry has in controlling the look of the explosion, fireball or flames, or smoke. As with the shredding, turbulence has its own visualization field so you can see where it is being applied. Again the problem is that you need a control field or the resize container will go to full size, but if it works, great. Or use both colliders and turbulence pumped in for the first few frames and resize on the colliders. Up to you. But you could provide some initial geometry in /obj and resize on that object if you need to.
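On the advected-points idea above: a minimal sketch is a point wrangle inside a SOP Solver, with the sim's vel field brought in as volumes on the second input (the node layout and input numbering are assumptions, not from the post).

// Advect scattered points through the smoke's velocity field each timestep,
// so they can later be used as vorticles to swirl vel around.
vector v = volumesamplev(1, "vel", @P);   // second input: vel imported from the DOP network
@P += v * @TimeInc;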
Hope this helps...
  25. 13 points
    Project Non-Divergent Step and Mushrooms

The Project Non-Divergent DOP is responsible for 99.9% of the simulation's behaviour. Yes, there are hundreds of DOPs inside the Pyro Solver all playing a part, but all funnelling through that single Non-Divergent step. This means that if you don't like the look of your sim and the mushrooms, it's ultimately because of the Non-Divergent step creating a vel field that doesn't do it for you. If you want to see for yourself, unlock the Pyro Solver, dive in, find the Smoke Solver, unlock that, dive in and find the projectmultigrid DOP and bypass it, then play. Nothing. For almost all Pyro sims, this is the Project Non-Divergent Multigrid as it is the fastest of the Non-Divergent micro-solvers. This specific implementation takes only the vel and divergence fields and, assuming across the timestep that the gas is incompressible where divergence is 0, will create a counter field called Pressure and then apply that pressure field to the incoming vel to remove any compression or expansion. That gives you your velocity: nice, turbulent and swirly, or combed straight out. Just tab-add a Project Non-Divergent Multigrid DOP in any dop network and look at the fields: Velocity Field, Goal Divergence Field and Pressure Field (generated every timestep, used, then removed later on). All the other fields in Pyro are there to affect vel and divergence. Period. Nothing else. At this point I don't care about rendering and the additional fields you can use there. It's about vel and divergence used to advect those fields in to interesting shapes, or mushrooms. If you want to create your own Pyro Solver taking in, say, previous and new vel, density and temperature, and then in a single Gas Field VOP network create an interesting vel and divergence field, then pass that straight on to the Project Non-Divergent Multigrid microsolver, then advect density, temperature and divergence afterward, go for it. Knowing that only vel and divergence drive the simulation is very important. All the other fields are there to alter the vel and divergence field. So if you have vel vectors that are combed straight, divergence (the combustion model in Pyro) or buoyancy (the Gas Buoyancy DOP on temperature driving vel) have a lot to do with it. Or a fast moving object affecting vel...
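For reference, the textbook form of that projection step (this is the standard formulation, not something lifted from Houdini's internals): given the current vel and a goal divergence d, solve the Poisson equation ∇²P = ∇·vel − d for the temporary Pressure field, then set vel ← vel − ∇P. With d = 0 the corrected vel is divergence-free (incompressible); Pyro's combustion model simply feeds a non-zero d in to get the expansion.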
  26. 13 points
    There is no mystery as to how Houdini works. Anything that gets done in Houdini can be expressed by a node. Whether that node is a coded C++ operator, an operator written in VEX (or using VOP nodes representing VEX functions), a Python operator or a Houdini Digital Asset (HDA), each node does its own bit and then caches its result. There is no lower level than nodes. The nodes in Houdini are the lowest level atomic routine/function/programme. A SOP node, for example, takes incoming geometry and processes it all in and of itself, then caches its result, which is seen in the viewport, with MMB on the node as its stats, and in the Details View where you can see the specific attribute values. If this is a modifier SOP, it will have a dependency on its input node. If there is an upstream change, the current node will be forced to evaluate. If there is a parameter reference to another node and the other node is marked "dirty" and affects this node, this node will be forced to evaluate. To generalize the cooking structure of a SOP network: for every cook (frame change, parm change, etc), the network starts at the Display/Render node and then walks up the chain looking for nodes with changes, evaluating dependencies for each node and also querying those nodes for changes, until it hits the top nodes. The nodes marked dirty cause the network to evaluate the dirty nodes top down, evaluating the dependencies that were found. You can set a few options in the Performance Monitor to work in the older H11 way and see this evaluation tree order if you wish. Change that: it is "mandatory" that you do this if you want a deeper understanding of Houdini. You definitely need to use the Performance Monitor if you want to see how the networks have evaluated, as it is based on creation order along with the set-up dependencies. Yes, deleting and undeleting an object can and will change this evaluation order and can sometimes get you out of a spot with crashing. If you haven't used the Performance Monitor pane, then there you go. Use it. Just remember to turn it off as it does have an overhead performance-wise. Another key is to use the MiddleMouseButton (MMB) on any and all nodes to see what they have cached from the last cook evaluation: memory usage, attributes currently stored, etc. The MMB wheel on my mouse is as worn in as the LMB as I use it so much. You can see if the node is marked as time dependent or not, which will affect how it evaluates and how it will affect its dependent nodes. You can RMB on the node and open up the Dependency view for that operator, which will list all references and dependencies. You can hit the "d" key in the network and, in the parameter display options in the Dependency tab, enable the various dependency aids (links and halos) in the network to see the dependencies in the network. Houdini is a file system, in memory, and on disk in the .hip "cpio" archive file. If you want, you can use a shell, and given any .hip file, run the hexpand shell command on the file. This will expand the Houdini file in to a directory structure that you can read and edit if you so wish. Then wrap it back up with hcollapse. If you really want to see how Houdini works low level, then this is how it all ends up, and how it all starts. It's just hscript Houdini commands that construct the nodes, including the folder nodes themselves.
Each node is captured as three distinct files: the file that adds the node and wires it up to other nodes, the parameter file that sets the node's parameters, and another file that captures additional info on the node. If you locked a SOP, then that binary information will be captured as a fourth file for that node. It is for this reason that .hip files are very small, that is unless you start locking SOPs, and that is not wise. Better to cache to disk than lock, but nothing stopping you. When you open up a .hip file, all the nodes are added, wired, parameters modified and nodes cooked/evaluated. There are different types of node networks, and nodes of a specific type can only be worked on in specific directory node types. This forces you to bop all over the place, especially if you still willingly choose to use the Build desktop, which I do not prefer. You have to have a tree view up somewhere in the interface to see how the network lays out as you work. It's also very handy for navigating your scene quickly. The Technical Desktop is a good place to start when working on anyone's file as there is a tree view and a few other panes such as the Details View, Render Scheduler and more. If you want to use the Technical desktop and follow a vid done with the Build desktop, simply switch up the Network with the Parameter pane and now the right hand side is the same as Build, but now you can follow the tree view and see where and when other nodes are dropped down. A new Houdini file is an unread book, full of interesting ideas. Using a desktop that exposes a tree view pane, you can quickly see what the user has been up to in a couple of seconds. Again, use the Technical Desktop as a start if you are still using Build (if you know me you will know I will force you to have a tree view up). You can quickly traverse the scene and inspect the networks. If that isn't enough, you can pop open the Performance Monitor and see what nodes are doing the most work. You really don't need any videos, ultimately just the .hip file. It helps if the scene is commented and nodes are named based on intent. Let's stick to SOPs. In Houdini, attributes are an intrinsic part of the geometry that is cached by each SOP. Not some separate entity that needs to be managed. That is what makes SOPs so elegant. That wire between two SOPs is the geometry being piped from one SOP to the next, attributes and all. Not a link per attribute (which in other software can be a geometry attribute, parameter attribute, etc). This makes throwing huge amounts of geometry with lots of attributes a breeze in Houdini. All SOPs will try their best to deal with the attributes accordingly (some better than others, and for those others, please submit RFEs or Bugs to Side Effects to see if there is something that can be done). You can create additional geometry attributes by using specific SOPs:
- Point SOP creates "standard" point attributes
- Vertex SOP creates "standard" vertex attributes
- Primitive SOP creates "standard" Primitive attributes
- Use the Attribute Create SOP to create ad-hoc attributes with varying classes (float, vector, etc) of type point, vertex, primitive or Detail.
- Use VEX/VOPs to create standard and ad-hoc point attributes.
- Use Python SOPs to create any standard or ad-hoc geometry attributes.
One clarification that must be made is the distinction between a "point" and a "vertex" attribute in Houdini. There are other softwares that use the term vertex to mean either point attributes or prim/vertex attributes.
Games have latched on to this, making the confusion even deeper, but alas, they are not the same thing. In Houdini, you need to make the distinction between a point and a vertex attribute very early on. A point attribute is the lowest level attribute any data type can have. For example, vector4 P position (plus weight for NURBs) is a point attribute that locates a point in space. If you want, that is all you need: points. No primitives whatsoever. Then instance stuff to them at render time. You can assign any attribute you want to that point. To construct a Primitive, you need to have a point for the primitive's vertices to reference as a location and weight. In the case of a polygon, the polygon's vertices are indexing points. You can see this in the Details View when inspecting vertex attributes, as the vertex number is indicated as <primitive_number>:<vertex_number> and the first column is the Point Num, which shows you which point each vertex is referencing as its P position and weight. Obviously you can have multiple vertices referencing a single point, and this is what gives you smooth shading by default with no vertex normals (as the point normals will be used and automatically averaged across the vertices sharing this point). In the case of, say, a Primitive sphere, there is a single point in space, then a primitive of type sphere with a single vertex that references that point position to locate the sphere. Then there is intrinsic data on the sphere (soon to be made available in the next major release) where you can see the various properties of that sphere such as its bounds (where you can extrapolate the diameter), area, volume, etc. Other primitive types that have a single point and vertex are volume primitives, metaball primitives, vdb grid primitives, Alembic Archive primitives, etc. How does a Transform SOP, for example, know how to transform a primitive sphere differently from a polygonal sphere? The answer is that it has been programmed to deal with primitive spheres in a way that is consistent with any polygon geometry. Same goes for Volumes. It has been programmed to deal with Volumes to give the end user the desired result. This means that all properly coded SOPs will handle any and all primitive types in a consistent fashion. Some SOPs are meant only for Parametric surfaces (Basis SOP, Refine SOP, Carve SOP, etc.) and others for Polygons (PolySplit, etc.) but for the most part, the majority of SOPs can work with all primitive types. What about attributes? The Carve SOP, for example, can cut any incoming polygon geometry at any given plane. It will properly bi-linearly interpolate all attributes present on the incoming geometry and cache the result. It is this automatic behaviour for any and all point, vertex, primitive and detail Attributes that makes working with SOPs a breeze. How does Houdini know what to do with vertex attributes when position P, velocity v and surface normal N need to be handled differently? When performing, say, a rotate with a Transform SOP and the incoming geometry has surface normals N, velocity vector v, and a position cache "rest", each attribute will be treated correctly (well, N because it is a known default attribute, but for user-defined attributes you can specify a "hint" to the vector that will tell it to be either vector, 3 float position, or of type surface normal). It is this auto-behaviour with attributes, and the fact that you don't need to manage attributes, that makes using SOPs so easy and very powerful without having to resort to code.
Remember that each SOP is a small programme unto itself. It will have its own behaviours, its own local variables if it supports varying attributes in its code logic, its own parameters, its own way of dealing with different primitive types (polygons, NURBs, Beziers, Volumes, VDB grids, Metaballs, etc). If you treat each SOP as its own plug-in programme, you will be on the right path. Each SOP has its own help card which, if it is authored correctly, will explain what this plug-in does, what the parameters do, what local variables are available if at all, some other nodes related to this node, and finally example files that you can load in to the current scene or another scene. Many hard-core Houdini users picked things up by just trolling the help example files, and this is a valid way to learn Houdini, as each node is a node and a node is what does the work. If we were to lock geometry in the help cards, the Houdini download would be in the gigabytes, so nodes are all that is in the help cards, and nodes are what you need to learn. I'm not going to touch DOPs right now as that is a different type of environment purpose built for simulation work. Invariably a DOP network ends up being referenced by a SOP to fetch the geometry, so in the end it is just geometry, which means SOPs. Shelf tools are where it's at, but I hear you. Yes, there is nothing like being able to wire up a bunch of nodes in various networks and reference them all up. Do that for a scratch FLIP simulation once or twice, fine. Do that umpteen times a week, well, that is where the Shelf Tools and HDAs make life quite simple. But don't be dismayed by Shelf Tools. All of those tools are simply executing scripts that place and wire operators together and set up parameter values for you. No different than when you save out a Houdini .hip scene file. If you are uber-hard-core, then you don't even save .hip files and you wire everything from scratch, every time, each time a bit different, evolving, learning. So with the shelf tool logic you find so objectionable, if you open up an existing .hip scene file, you are also cheating. Reminds me of the woodworker argument as to what is hand built and what isn't. I say if you use anything other than your teeth and fingernails to work the wood, you are in essence cheating, but we don't do that. Woodworkers put metal or glass against wood because fingernails take too long to grow back and teeth are damaged forever when chipped. And I digress... Counter that to power users in other apps that clutch to their code with bare white knuckles, always in fear of the next release rendering parts of their routines obsolete. With nodes, you have a type name and parameter names. If they don't change from build to build, they will load just fine. I can load files from before there were .hip files, when they were called .mot (from Sage, for those that care to remember), from 1995. Still load, well, with a few meaningless errors, but they still load. A Point SOP is a Point SOP and a Copy SOP is a Copy SOP. No fear of things becoming obsolete. Just type in the "ophide" command in the Houdini textport and you will still find the Limb and Arm SOPs (wtf?). LOL! First thing I do every morning? Download the latest build(s). Read the build journal changes. If there is something interesting in that build, work up something from scratch. Then read forums time permitting and answer questions from scratch if I can. All in the name of practice.
Remember from above that a .hip file is simply a collection of script files in a folder system saved on disk. A Houdini HDA is the same thing. A shelf tool, again, is the same thing: a script that adds and wires nodes and changes parameters. Not pounding a bunch of geometry and saving the results in a shape node, never to know the recipe that got you there.

To help users sort out what created which node, you can use the "N" hotkey in any network: it will toggle the node names between the default label, the tool that added that node, and finally nothing. Hitting "N" several times while inspecting a network will cycle the names about. That, and turning on the dependency options in the network, will help you see just what each shelf tool did to your scene. Knowing all this, you can now troll through the scene and see what the various shelf tools did.

If you like to dig even deeper, you can use the Houdini textport pane and use opcf (aliased to cd), opls (aliased to ls), and oppwf (aliased to oppwd and pwd) to navigate the Houdini scene via the textport as you would in a unix shell. One command I like to show those more interested in understanding how Houdini works is to cd to, say, /obj, then do an opls -al command to see all the nodes with a long listing. You will see stats very similar to those found when listing files in a shell, or when you RMB on any disk file and inspect its info or state. Remember, Houdini "IS" a file system, with additional elaborate dependencies all sorted out for you. There are user/group/other permissions. Yes, you can use opchmod (not aliased to chmod, but easily done with the hscript alias command) to change the permissions on nodes: opchmod 000 * will remove read/write/execute permissions on all the nodes in the current directory, and guess what? The parameters are no longer available for tweaking. Just remember to either tell your victim or fix it for them, or you may be out of a job yourself. opchmod 777 * gives back the permissions. An opls -al will verify this. Now you know what our licensing does to node states: you can set the state of a node to be read and execute only, but remove the write on any DOP or POP node and you have a Houdini license, while a Houdini FX license enables the write on all nodes in all networks.

Also, knowing this, the .hip file truly is a book with a lot of history, along with various ways of inspecting who created which node and when, what tool was used to create it, what dependencies are on it, whether it is time dependent, and more, all with a quick inspection. After all this, learning Houdini simply becomes learning each node in turn, and practice, practice, practice. Oh, and if you haven't figured it out by now, many nodes have a very rich history (some older than 30 years now) and can do multiple things, so suck it up, read the node help cards, study the example files and move forward. The more nodes you master, the more you can see potential pathways of nodes and possibilities in your mind, the faster you work, the better you are. The more you do this, the more efficient your choices will become. The learning curve is endless and boundless. All visual. All wysiwyg.
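To recap the textport side of this, a short session might look like the following (the commands are the ones mentioned above; geo1 is just whatever node happens to be in your scene):

opcf /obj            # cd to the /obj network
opls -al             # long listing: type, permissions, flags, modify time per node
opcf geo1            # step into a geometry network
opchmod 000 *        # remove read/write/execute on every node here: parameters lock up
opchmod 777 *        # give the permissions back
oppwf                # confirm where you are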
  27. 12 points
    The latest production build of Houdini brings exciting new improvements to the licensing and render restrictions on Houdini Indie while also adding a powerful new UV Flatten tool and important refinements to FBX import and export. The Houdini Indie program has been a successful initiative designed to provide indie animators and gamedevs, whose gross income is less than $100K USD, with access to the features of Houdini FX with very few restrictions. In response to feedback from the Houdini Indie community, SideFX is increasing the render resolution limit for image sequences from 1080p to 4K x 4K in order to facilitate the generation of high-definition imagery, which is especially important for artists creating VR content. SideFX will also be issuing a supplementary license to all Indie users to allow them to install Houdini on a second computer/laptop, or to work in dual-boot mode. The use of these two licenses is restricted to a single artist, who can only use Houdini Indie on one of these computers at a time as per the EULA. With these added features and the recent addition of third-party rendering, SideFX has set a new price for Houdini Indie of $269 USD for a single year or $399 USD for a two-year term. LEARN MORE...
  28. 12 points
    I am not sure if this has been posted before, but I just came across this site: http://wordpress.discretization.de/houdini/ To quote: "This website is here to help you to get started with Houdini in order to complete the Mathematical Visualization course at the Technical University of Berlin. The aim is to enable you to run your own geometry related algorithms while taking advantage of Houdini’s excellent visual graphics while avoiding to dig deep into the theory behind it." There seems to be some great material on here. In particular, a unique way of getting scipy/numpy to work with Houdini - apparently you just copy and paste the entire anaconda 2.7 build into the Houdini python directory?! http://wordpress.discretization.de/houdini/home/advanced-2/installing-and-using-scipy-in-houdini/
  29. 12 points
    Hello Everyone, This is a recording of the presentation I gave at the sidefx booth at GDC. Topic: Creating a custom grooming system in Houdini for VFX and Games. Chapters: --Hair grooming For VFX. --Auto generating cards with texture for Real-time rendering. --Exporting the hair for real-time rendering using Nvidia Hairworks. I hope you guys like it! If you have any questions please feel free to email me at sabervfx@gmail.com Link: https://vimeo.com/sabervfx/hairfx Thanks Saber
  30. 12 points
    Hello, here you can download my files from softbody week-6: you can use my files for your jobs, or just for fun/learning/making tutorials/whatever. It's free, BUT keep in mind I am not a professional, so pay attention: I'm quite sure my way of working is not the best way to work in production/studio. Cheers! Yohann upres-shirt.hiplc balloon-v2.hiplc baloons-grain solver.hiplc upres-attribtransfer.hiplc upres-curvature.hiplc
  31. 12 points
    Hello everybody, I'm finishing coding a small raytracer that runs in SOPs using VEX, one of those things I always wanted to try to do myself. It stores everything on points, so there is no rasterization plane; the idea was to have all the rendering data accessible for later use, as you would with any other attributes. It is some sort of hybrid, in the sense that it is correct enough to try to make things look good. It features many BRDF shading models, photon-mapping global illumination (mathematically done the simple way, but it works) and full recursive ray-tree splitting for reflections and refractions. Here are a few videos showing some of the features, and a big part of them is already available for download as an OTL for the non-commercial edition, for everybody interested, with the hope it can be helpful to anybody who has never coded these things before, like me, as I learned a lot along the way. Here are the videos: This one has been updated recently with lots of new clips showing improvements here and there And this one has the GI part of it with a little demo at the end. Download link in the description area: Hope you enjoy, best alessandro
  32. 12 points
    Hi friends! I just released an article on Medium walking through how I built a "Compute Dual" wrangle in VEX. As a quick summary, I basically wanted to know how the "Compute Dual" feature of the Divide SOP worked, so I slapped together a neat lil wrangle to do just that! CLICK ME TO GO TO THE ARTICLE!!!!!!!! Here's a cute little gif showing off the construction of a dual graph. If you like it, please check out the article, it's free and it'd mean the whole world to me! =) Love you fools, Jake http://jakericedesigns.com/
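    If you just want the gist before reading the article, here is a rough sketch of one way to build a dual graph in a detail wrangle. This is not Jake's wrangle, only an illustration: the mesh is read from the second input, a point is added at each primitive's centroid, and centroids of edge-adjacent primitives are connected with polylines by walking half-edges.

    // Detail wrangle, mesh wired into the second input (first input can be an
    // empty Null, or the mesh again if you don't mind keeping it) -- a hedged
    // sketch, not the article's implementation.
    int nprims = nprimitives(1);
    int centroids[];
    resize(centroids, nprims);

    // one dual point per primitive, at its centroid
    for (int pr = 0; pr < nprims; pr++)
    {
        vector c = {0, 0, 0};
        int pts[] = primpoints(1, pr);
        foreach (int pt; pts)
            c += point(1, "P", pt);
        c /= len(pts);
        centroids[pr] = addpoint(0, c);
    }

    // one dual edge per shared mesh edge, found by walking half-edges
    for (int pr = 0; pr < nprims; pr++)
    {
        int start = primhedge(1, pr);
        int h = start;
        do
        {
            int other = hedge_prim(1, hedge_nextequiv(1, h));
            if (other > pr)   // skip duplicates; boundary edges return pr itself, so they are skipped too
                addprim(0, "polyline", centroids[pr], centroids[other]);
            h = hedge_next(1, h);
        } while (h != start);
    }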
  33. 12 points
    Hey everyone! Here's a demo of some point cloud tools I created to calculate concave and convex curvature as well as gradient and curl direction, plus sharpening. I forgot to mention in the demo that the curvature calculation is a great way to do differential growth, by advection along normal * curve * noise * parm. Also, the curl calculation can be used to make grass patches and groom fur by orienting curves along the direction to add swirly variance. There are a lot of ways these tools can be handy; interested to see what you all come up with! Let me know if you have any questions, enjoy! curveGradientCurlSharpen_v002.hipnc
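    For anyone curious about the general idea behind the curvature part, here is a hedged sketch in a point wrangle (not the code from the hip; the radius and maxpts parameters are illustrative, and the input needs @N): it simply compares where the average neighbour position sits relative to the normal.

    // Point wrangle -- minimal point-cloud curvature sketch, not the tool itself.
    float radius = chf("radius");
    int   maxpts = chi("maxpts");

    int    pc  = pcopen(0, "P", @P, radius, maxpts);
    vector avg = pcfilter(pc, "P");   // weighted average of neighbour positions
    pcclose(pc);

    vector d = avg - @P;
    // positive where the surface caves in (neighbours sit along +N), negative where it bulges out
    f@curvature = length(d) > 1e-6 ? dot(normalize(d), normalize(@N)) : 0.0;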
  34. 12 points
    There was an error in the pop_too_close wrangle. It deleted both intersecting bubbles, not just the smaller one, drastically reducing the bubble count. Normally it should remove only degenerate bubbles that are almost enclosed by neighbours. It also seems that the whole loop can be replaced with a point wrangle. So it cooks instantly now, retains topology and scales better. Scattering and the pscale setup really matter. You need to generate a good foam first, before doing intersections. The current setup should be improved somehow. bubbles2.hipnc
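    The file has the actual fix, but the idea can be sketched roughly like this in a point wrangle (the names and the enclose factor are illustrative, not the code from bubbles2.hipnc): a bubble is dropped only when a bigger neighbour nearly encloses it, so only the smaller of each overlapping pair disappears.

    // Point wrangle -- a hedged sketch of the idea, not the wrangle in the hip.
    // Each point is a bubble with radius @pscale.
    float enclose = chf("enclose");   // e.g. 0.5: how deep inside a bigger bubble we must sit to count as degenerate

    int pc = pcopen(0, "P", @P, @pscale * 4.0, 64);
    while (pciterate(pc))
    {
        int opt;
        pcimport(pc, "point.number", opt);
        if (opt == @ptnum)
            continue;

        float dist, ops;
        pcimport(pc, "point.distance", dist);
        pcimport(pc, "pscale", ops);

        // the neighbour is strictly bigger and this bubble is almost buried inside it
        if (ops > @pscale && dist + @pscale * enclose < ops)
        {
            removepoint(0, @ptnum);
            break;
        }
    }
    pcclose(pc);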
  35. 12 points
    Try this... Put down a Measure SOP and set it to measure the perimeter of your curves. After that, add a Primitive Wrangle and write:

    #include <groom.h>
    // rescale each curve from its current length (@perimeter) to @perimeter * @dist
    adjustPrimLength(0, @primnum, @perimeter, @perimeter * @dist);

    groom.h is an included file containing some functions used in the grooming tools, and one of those functions is...

    void adjustPrimLength(const int geo, prim; const float currentlength, targetlength)
  36. 11 points
    Here is a breakdown of how I made this bear out of duplo blocks: The video isn't super comprehensive; I mainly cover setting up the constraints in SOPs, though I briefly touch on some other parts too. But the hip file is available to pick apart! Thanks! duplo_bear.zip
  37. 11 points
    Hi, Just posting some of my recent art. Most of it is Houdini. Some is a mix of Houdini, Daz and Marvelous Designer. If you see a character, that's definitely from Daz. Everything is rendered in Octane. regards Rohan
  38. 11 points
    Hello I've started with this around the H16 release. Basically I wanted to explore to what level I'd be able to use procedural modeling when it comes to characters. So the "non-procedural" part here belongs to another app, namely Maya, where I've created a base body model, rig and posing, while the Houdini part is hair of all sorts (hair, eyelashes, eyebrows...), plus a lot of the suit. A detailed map of what exactly belongs to which app is here. Let's say the 'harness system' is the part I consider most successful. Later I started with Mantra renders, which turned into a kind of addiction - here are a few of the roughly one hundred renders of this thing I did in Mantra.
  39. 11 points
    HI FRIENDS! So over the past year I've been doing far too much Houdini in my free time, and I noticed that all of the people I look up to in the community have their own cute ODForce threads. So with the release of my latest blog post on Voronoi Diagrams and Remeshing, I thought it best to make one of those threads, to avoid flooding the Education section with tons of new posts... Anyways here's a link to my new blog post: https://medium.com/@jakerice_7202/voronoi-for-the-people-60c0f11b0767 And if the link itself isn't enough, here are a couple of GIFs from the blog post, including one that didn't make the cut. All credit for the post title goes to @mestela <3 Big thanks to @toadstorm for editing my grammar as well, and the whole ThinkProcedural discord for putting up with my insanity
  40. 11 points
    Some other files from the "softbody - week3" video, to understand how to mimic interaction between RBD/cloth and pyro/cloth, and how to play with attributes and a multisolver, like @massdensity. The same approach works for fluid/cloth interaction and the other attributes used in the video. ++ cloth+pyro.hipnc blob+boxes-shellmassdensity.hipnc blob-shellmassdensity.hipnc
  41. 11 points
    hey all, thought I'd just drop a grenade of Houdini work I put together over 2016 & early 2017 - nothing particularly complex - most of this is all stuff I've learnt off this forum and youtube video tutorials by ppl like @mestela @ParticleSkull @Farmfield @rohandalvi the guys at Entagma, Sidefx, and a bundle of other ppl whom I can't seem to tag, like Ben Watts - I just want to say thank you for all your help and tutelage on the forums, for the time you guys take to put all those epic videos together, and for encouraging me to learn. cheers ant hoob (houdini noob geddit?!) this was my first project in Houdini - following tutorials and clearly inspired by Method (who isn't c'mon) and their video.. I then went away and did some more twiddling with SOP-based stuff to try some more fun effects - had a lot of fun with this one - I love balloon boy lol... this was inspired by all the hydraulic press channels and just trying a few fun setups... this one was inspired by Erik Ferguson's FEM stuff... and this one was an attempt to try some fun ragdoll/crowd sim stuff :)
  42. 11 points
    Please take a look at the attached file. It's an example of how you could create Bezier curves with arbitrary degree, and another one relying on Beziers in Hermite form, since you wrote about blending curves... petz curves_vex.hipnc
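    The attached file is the real reference; as a quick taste of the arbitrary-degree part, a de Casteljau evaluation is only a few lines of VEX. This is not petz's setup, just a hedged sketch in a detail wrangle that treats the input points (in point-number order) as control points and appends the evaluated curve as a polyline; the "segments" parameter is illustrative.

    // Detail wrangle -- a hedged de Casteljau sketch, not the attached file.
    int nseg = max(chi("segments"), 1);   // e.g. 32
    int npts = npoints(0);

    if (npts >= 2)
    {
        vector cvs[];
        for (int i = 0; i < npts; i++)
            append(cvs, point(0, "P", i));

        int prev = -1;
        for (int s = 0; s <= nseg; s++)
        {
            float  t     = s / float(nseg);
            vector tmp[] = cvs;
            // repeated linear interpolation collapses the control polygon to the curve point at t
            for (int k = npts - 1; k > 0; k--)
                for (int i = 0; i < k; i++)
                    tmp[i] = lerp(tmp[i], tmp[i + 1], t);

            int pt = addpoint(0, tmp[0]);
            if (prev >= 0)
                addprim(0, "polyline", prev, pt);
            prev = pt;
        }
    }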
  43. 11 points
    Here is a collection of breakdowns for a project I was working on during the last half of the year Vimeo album: https://vimeo.com/album/4471569 Or individual videos:
  44. 11 points
    OK, here is the example file with 4 ways (cache the instance geometry first, both blue nodes):
    1. (Purple) rendering points with the instancefile attribute directly, through fast instancing
    2. (Green) overriding the unexpandedfilename intrinsic on any packed disk primitive copied onto points, without stamping
    3. (Red) just for comparison, the Instance SOP, which uses copy stamping inside, so it will be slower than the previous methods
    4. (Yellow) copying a static Alembic without stamping and overriding abcframe in this case to vary time for each instance independently (if you need various Alembics you can vary abcfilename as well)
    ts_instance_and_packed_examples_without_stamping.hip
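    As a rough idea of what the green method (2) boils down to: a primitive wrangle over the copied packed disk primitives rewrites the unexpandedfilename intrinsic per instance. This is a hedged sketch, not the exact wrangle in the hip; the file pattern and the variant attribute are made up.

    // Primitive wrangle over packed disk primitives -- hedged sketch of method 2.
    // i@variant is an illustrative per-primitive attribute set upstream; the path
    // pattern is made up and deliberately left unexpanded, since the intrinsic
    // stores the raw, unexpanded filename.
    string path = sprintf("$HIP/geo/piece_%d.bgeo.sc", i@variant);
    setprimintrinsic(0, "unexpandedfilename", @primnum, path);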
  45. 10 points
    nature.hipnc just to say hello and share some stuff. /cnc_verkstad/ Tesan Srdjan
  46. 10 points
    I put together a simple script to read the .mtl file, typically associated with a .obj, and create a Redshift material for each entry it finds in the .mtl with a map_Kd token. A Redshift material is created and a texture map is linked to the diffuse color, with the filename for the map populated with what is found in the .mtl entry. This is useful when importing architectural .obj files that typically have a large number of materials associated with them. Expected .mtl format of supported tokens:
    #newmtl _4_2_ <- Material name.
    #Ns 96.078431 <- Specular intensity.
    #Kd 0.640000 0.640000 0.640000 <- Diffuse color.
    #Ks 0.500000 0.500000 0.500000 <- Specular color.
    #Ni 1.000000 <- Index of refraction.
    #d 1.000000 <- Opacity.
    #map_Kd 21_budova/_4_2_.jpg <- Map name.
    #map_Kn 21_budova/_4_2_n.jpg <- Map name.
    The result of one-click texture mapping. Here is the Colosseum auto texture mapped. The path to the .mtl file and the texture path are hard-coded. Place the code in a shelf tool button and adjust the path to point to your .mtl file. mtl_to_redshift_061618.zip
  47. 10 points
    I am working on a way to use one object to dent or impact another object, with a ripple effect around the impact area. The color red is available to create a group out of the impact area. Here is what I have so far. There is a small jump in the result which I have not figured out how to solve yet. ap_matte_vdb_dent_with_ripple.hiplc
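    A minimal way to turn that red color into a group in a point wrangle might look like this (the threshold value and the group name are illustrative, not necessarily what the hip uses):

    // Point wrangle -- hedged sketch: group points wherever the red channel is hot.
    if (@Cd.r > chf("threshold"))
        i@group_impact = 1;   // membership in a point group named "impact"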
  48. 10 points
    A small summary of the Houdini-Carbon workflow for the German movie project "Die kleine Hexe". Hope you like it!
  49. 10 points
    I wanted to see if I could play a video in Houdini using some Python, with this as the result. I don't think it's the first time this has been done, but it is still nice to see. The result:
  50. 10 points
    I was experimenting with ways to add the spring asset I made, post-sim. It was a little tricky to extract the spring constraints and replace them, but I cracked it eventually. I also did a very un-Houdini-like test where I just copied the spring asset a bunch of times and attached it to an old bit of geo I had lying around. Sorry about the colors on this one; I don't really give them much thought before I hit the render button and go to bed. I posted the hips here. As always, most have ROPs that will need to be re-rendered to see anything. I also locked the geo into the scenes, so they are heavier than usual.