
Leaderboard


Popular Content

Showing most liked content since 06/03/2018 in all areas

  1. 25 points
    A lot of people asked me to share this fake fire method. If you are interested, you can check this simple hip. After rendering I used ACES for a better look. fake_fire_rnd.hip
  2. 22 points
    Pixelkram / Moritz S. (of Entagma) and I are proud to announce MOPs: an open-source toolkit for creating motion graphics in Houdini! MOPs is both a suite of ready-to-use tools for solving typical motion graphics problems, and a framework for building your own custom operators easily. More information is available from our website: http://www.motionoperators.com Enjoy!
  3. 20 points
    There are so many nice example files on this website that I am often searching for. I wanted to use this page as a link page to other posts that I find useful, hopefully you will too. Displaced UV Mapped Tubes Particles Break Fracture Glue Bonds Render Colorized Smoke With OpenGL Rop Moon DEM Data Creates Model Python Script Make A Belly Bounce Helicopter Dust Effect Conform Design To Surface Benjamin Button Intro Sequence UV Style Mapping UV Box and Multiple Projection Styles Ping Pong Frame Expression Instance vs. Copy (Instance Is Faster) Particle Bug Swarm Over Vertical and Horizontal Geometry Rolling Cube Rounded Plexus Style Effect Pyro Smoke UpRes Smoke Trails From Debris Align Object Along Path Fading Trail From Moving Point Swiss Cheese VDB To Polygons Get Rid Of Mushroom Shape In Pyro Sim A Tornado Ball Of Yarn Particles Erode Surface Unroll Paper Burrow Under Brick Road Non Overlapping Copies Build Wall Brick-By-Brick FLIP Fluid Thin Sheets Smoke Colored Like Image Volumetric Spotlight Moving Geometry Using VEX Matt's Galaxy Diego's Vortex Cloud Loopable Flag In Wind Eetu's Lab <--Must See! Wolverine's Claws (Fracture By Impact) Houdini To Clarisse OBJ Exporter Skrinkwrap One Mesh Over Another Differential Growth Over Surface [PYTHON]Post Process OBJ Re-Write Upon Export Rolling Clouds Ramen Noodles Basic Fracture Extrude Match Primitive Number To Point Number Grains Activate In Chunks Fracture Wooden Planks Merge Two Geometry Via Modulus Fill Font With Fluid DNA Over Model Surface VDB Morph From One Shape To Another Bend Font Along Curve Ripple Obstacle Across 3D Surface Arnold Style Light Blocker Sphere Dripping Water (cool) Exploded View Via Name Attribute VEX Get Obj Matrix Parts eetu's inflate cloth Ice Grows Over Fire Flying Bird As Particles DEM Image To Modeled Terrain Pyro Temperature Ignition Extrude Like Blender's Bevel Profile Particles Flock To And Around Obstacles BVH Carnegie Mellon Mocap Tweaker (python script) Rolling FLIP Cube Crowd Agents Follow Paths Keep Particles On Deforming Surface Particle Beam Effect Bendy Mograph Text Font Flay Technique Curly Abstract Geometry Melt Based Upon Temperature Large Ship FLIP Wake (geo driven velocity pumps) Create Holes In Geo At Point Locations Cloth Blown Apart By Wind Cloth Based Paper Confetti Denim Stitching For Fonts Model A Raspberry Crumple Piece Of Paper Instanced Forest Floor Scene FLIP pushes FEM Object Animated Crack Colorize Maya nParticles inside an Alembic Path Grows Inside Shape Steam Train Smoke From Chimney Using Buoyancy Field On RBDs In FLIP Fluid Fracture Along A Path COP Based Comet Trail eetu's Raidal FLIP Pump Drip Down Sides A Simple Tornado Point Cloud Dual Colored Smoke Grenades Particles Generate Pyro Fuel Stick RBDs To Transforming Object Convert Noise To Lines Cloth Weighs Down Wire (with snap back) Create Up Vector For Twisting Curve (i.e. 
loop-d-loop) VDB Gowth Effect Space Colonization Zombie L-System Vine Growth Over Trunk FLIP Fluid Erosion Of GEO Surface Vein Growth And Space Colonization Force Only Affects Particle Inside Masked Area Water Ball External Velocity Field Changes POP particle direction Bullet-Help Small Pieces Come To A Stop Lightning Around Object Effect Lightning Lies Upon Surface Of Object Fracture Reveals Object Inside Nike Triangle Shoe Effect Smoke Upres Example Julien's 2011 Volcano Rolling Pyroclastic FLIP Fluid Shape Morph (with overshoot) Object Moves Through Snow Or Mud Scene As Python Code Ramp Scale Over Time Tiggered By Effector Lattice Deforms Volume Continuous Geometric Trail Gas Enforce Boundary Mantra 2D And 3D Velocity Pass Monte Carlo Scatter Fill A Shape Crowd Seek Goal Then Stop A Bunch Of Worms Potential Field Lines Around Postive and Negative Charges Earthquake Wall Fracture Instance Animated Geometry (multiple techniques) Flip Fluid Attracted To Geometry Shape Wrap Geo Like Wrap3 Polywire or Curve Taper Number Of Points From Second Input (VEX) Bullet Custom Deformable Metal Constraint Torn Paper Edge Deflate Cube Rotate, Orient and Alignment Examples 3D Lines From 2D Image (designy) Make Curves In VEX Avalanche Smoke Effect Instant Meshes (Auto-Retopo) Duplicate Objects With VEX Polywire Lightning VEX Rotate Instances Along Curved Geometry Dual Wind RBD Leaf Blowing Automatic UV Cubic Projection (works on most shapes) RBD Scatter Over Deforming Person Mesh FLIP Through Outer Barrier To Inner Collider (collision weights) [REDSHIFT] Ground Cover Instancing Setup [REDSHIFT] Volumetric Image Based Spotlight [REDSHIFT] VEX/VOP Noise Attribute Planet [REDSHIFT] Blood Cell Blood Vessel Blood Stream [REDSHIFT] Light Volume By Material Emission Only [REDSHIFT] Python Script Images As Planes (works for Mantra Too!) 
[REDSHIFT] MTL To Redshift Material [REDSHIFT] Access CHOPs In Volume Material [REDSHIFT] Mesh Light Inherits Color [REDSHIFT] Color Smoke [REDSHIFT] FBX Import Helper [REDSHIFT] Terrain Instancer Height Field By Feature Dragon Smashes Complex Fractured House (wood, bricks, plaster) Controlling Animated Instances Road Through Height Field Based Terrain Tire Tread Creator For Wheels Make A Cloth Card/Sheet Follow A NULL Eye Veins Material Matt Explains Orientation Along A Curve Mesh Based Maelstrom Vortex Spiral Emit Multiple FEM Objects Over Time Pushing FEM With Pyro Spiral Motion For Wrangle Emit Dynamic Strands Pop Grains Slope, Peak and Flat Groups For Terrains Install Carnegie Mellon University BVH Mocap Into MocapBiped1 Ramp Based Taper Line Fast Velocity Smoke Emitter Flip Fill Cup Ice Cubes Float [PYTHON]Export Houdini Particles To Blender .bphys Cache Format Collision Deform Without Solver or Simulation Mograph Lines Around Geometry Waffle Cornetto Ice Cream Cone Ice Cream Cone Top Unroll Road Or Carpet Burning Fuse Ignites Fuel or Painted Fuel Ignition Painted Fuel Combustion Small Dent Impact Deformation Particle Impact Erosion or Denting Of A Surface Helicopter Landing Smoke And Particles Radial Fracture Pieces Explode Outwards Along Normal Tangent Based Rocket Launch Rolling Smoke Field Tear/Rip FLIP (H12 still works in H16) Rain Flows Over Surface Rains Water Drip Surface Splash Smoke Solver Tips & Tricks Folding Smoke Sim VEX Generated Curve For Curling Hair Copy and Align One Shape Or Object To The Primitives Of Another Object (cool setup) A Better Pop Follow Curve Setup FEM Sea Cucumber Moves Through Barrier Fracture Cloth Smoke Confinement Setup Merge multiple .OBJ directly Into A Python Node Blood In Water Smoke Dissipates When Near Collision Object Whirlpool Mesh Surface Whirlpool Velocity Motion For FLIP Simple Bacteria Single Point Falling Dust Stream Flames Flow Outside Windows Gas Blend Density Example Localized Pyro Drag (smoke comes to a stop) Granular Sheet Ripping Post Process An Export (Post Write ROP Event) Corridor Ice Spread or Growth Set Velocity On Pieces When Glue Bonds Break Water Drops Along Surface Condensation Bottle Grains Snow or Wet Sand Starter Scene A Nice Little Dissolver Turn An Image Into Smoke Fading Ripples Grid Example Stranger Things Wall Effect Face Through Rubber Wall [PYTHON]Create Nurbs Hull Shelf Tool [PYTHON] Ramp Parameter Select Outside Points Of Mesh, Honor Interior Holes Sparks Along Fuse With Smoke Umbrella Rig Melt FLIP UVs Tire Burn Out Smoke Sim Flip or Pyro Voxel Estimate Expression Motorcycle or Dirt Bike Kicks Up Sand Particles Push Points Out Of A Volume [PYTHON]Cellular Automata Cave Generator Punch Dent Impact Ripple Wrinkle VEX Rotate Packed Primitive Via Intrinsic Kohuei Nakama's Effect FLIP Fluid Inside Moving Container Particles Avoid Metaball Forces FLIP Divergence Setup FLIP Transfer Color Through Simulation To Surface Morph Between Two Static Shapes As Pyro Emits Constraint Based Car Suspension Pyro Smoke Gas Disturbs Velocity Wire Solver Random Size Self Colliding Cables Fast Cheap Simple Collision Deform CHOP Based Wobble For Animated Character Slow Motion FLIP Whaitewater Avoid Stepping In Fast Pyro Emission FLIP Fluid Fills Object Epic Share Of Softbody/Grain Setups (Must see!) 
Balloon, Pizza, Sail, Upres Shirt, Paint Brush Create Pop Grain Geometry On-The-Fly In A DOPs Solver Varying Length Trails VEX Based Geometry Transform Determine Volume Minimum and Maximum Values Grain Upres Example Animated pintoanimation For Cloth Sims Batch Render Folder Of OBJ files Vellum Weaving Cloth Fibers Knitting Kaleidoscopic Geometry UV Image Map To Points Or Hair Color Particles Like Trapcode Particular Flat Tank Boat Track With Whitewater Orthographic Angle Font Shadow Select Every Other Primitive or Face? Printer Spits Out Roll Of Paper Unroll Paper, Map, Plans, Scroll Simple Vellum L-System Plant Basic Cancer Cell 2D Vellum Solution Vellum Animated Zero Out Stiffness To Emulate Collapse Whitewater On Pre Deformed Wave Use Google To Discover Attached HIP Files Useful Websites: Tokeru Houdini Houdini Vex Houdini Python FX Thinking iHoudini Qiita Ryoji Video Tutorials: Peter Quint Rohan Dalvi Ben Watts Design Yancy Lindquist Contained Liquids Moving Fem Thing Dent By Rigid Bodies Animating Font Profiles Guillaume Fradin's Mocap Crowd Series(no longer available) Swirly Trails Over Surface http://forums.odforce.net/topic/24861-atoms-video-tutorials/ http://forums.odforce.net/topic/17105-short-and-sweet-op-centric-lessons/page-5#entry127846 Entagma SideFX Go Procedural
  4. 20 points
    Since there's been a lot of talk around the web about graphics APIs this past week with Apple's decision to deprecate OpenGL in MacOS Mojave, I thought I'd take this opportunity to discuss the various graphics APIs and address some misconceptions. I'm doing this as someone who's used all versions of OpenGL from 1.0 to 4.4, and not with my SideFX hat on. So I won't be discussing future plans for Houdini, but instead will be focusing on the APIs themselves.

OpenGL

OpenGL has a very long history dating back to the 90s. There have been many versions of it, but the most notable ones are 1.0, 2.1, 3.2, and 4.x. Because of this, it gets a reputation for being old and inefficient, which is somewhat true but not the entire story. Certainly GL1.0 - 2.1 is old and inefficient, and doesn't map well to modern GPUs. But then in the development of 3.0, a major shift occurred that nearly broke the GL ARB (architecture review board) apart. There was a major move to deprecate much of the "legacy" GL feature set and replace it with modern GL features - and out of that kerfuffle the OpenGL core and compatibility profiles emerged. The compatibility profile added these new features alongside the old ones, while the core profile completely removed them. The API in the core profile is what people are referring to when they talk about "Modern GL". Houdini adopted modern GL in v12.0 in the 3D Viewport, and more strict core-profile-only support in v14.0 (the remaining UI and other viewers).

Modern GL implies a lot of different things, but the key ones are: geometry data and shader data must be backed by VRAM buffers, shaders are required, and all fixed-function lighting, transformation, and shading is gone. This is good in a lot of ways. Geometry isn't being streamed to the GPU in tiny bits anymore and is instead kept on the GPU, the GL "big black box" state machine is greatly reduced, and there's a lot more flexibility in the display of geometry from shaders. You can light, transform, and shade the model however you'd like. For example, all the various shading modes in Houdini, primitive picking, visualizers, and markers are all drawn using the same underlying geometry - only the shader changes.

OpenGL on Windows was actually deprecated decades ago. Microsoft's implementation still ships with Windows, but it's an ancient OpenGL 1.1 version that no one should use. Instead, Nvidia, AMD and Intel all install their own OpenGL implementations with their drivers (and this extends to CL as well).

Bottlenecks

As GPUs began getting faster, what game developers in particular started running into was a CPU bottleneck, particularly as the number of draw calls increased. OpenGL draw calls are fast (more so than DirectX), but eventually you get to a point where the driver code prepping the draw starts to become significant. More detailed worlds meant not only bigger models and textures, but more of them. So the GPU started to become idle waiting on draws from the CPU, and that draw load began taking away from useful CPU work, like AI.

The first big attempt to address this was in the form of direct state access and bindless textures. All resources in OpenGL are given an ID - an integer which you can use to identify a resource for modifying it and binding it to the pipeline. To use a texture, you bind this ID to a slot, and the shader refers to this slot through a sampler. As more textures were used and switched within a frame, mapping the ID to its data structure became a more significant load on the driver.
Bindless does away with the ID and replaces it with a raw pointer. The second big attempt was to move more work to the GPU entirely, and GLSL compute shaders (GL4.4) were added, along with indirect draw calls. This allows the GPU to do culling (frustum, distance-based, LOD, etc) with an OpenCL-like compute shader and populate some buffers with draw data. The indirect draw calls reference this data, and no data is exchanged between GPU and CPU. Finally, developers started batching up as much as possible to reduce the number of draw calls and make up for these limitations. Driver developers kept adding more optimizations to their API implementations, sometimes on a per-application basis. But it became more obvious that for realtime display of heavy scenes, and with VR emerging where a much higher frame rate and resolution is required, current APIs (GL and DX11) were reaching their limit.

Mantle, Vulkan, and DX12

AMD recognized these bottlenecks, and the bottleneck that the driver itself was posing to GPU rendering, and produced a new graphics API called Mantle. It did away with the notion of a "fat driver" that optimized things for the developer. Instead, it was thin and light - and passed off all the optimization work to the game developer. The theory behind this is that the developer knows exactly what they're trying to do, whereas the driver can only guess. Mantle was eventually passed to Khronos, who develops the OpenGL and CL standards, and from that starting point Vulkan emerged. (DirectX 12 is very similar in theory, so for brevity's sake I'll lump them together here - but note that there are differences.)

Vulkan requires that the developer be a lot more up-front and hands-on with everything. From allocating large chunks of VRAM and divvying it up among buffers and textures, to saying exactly how a resource will be used at creation time, to describing the rendering pipeline in detail, Vulkan places a lot of responsibility on the developer. Error checking and validation can be entirely removed in shipping products. Even draw calls are completely reworked - no more global state and swapping textures and shaders willy-nilly. Shaders must be wrapped in an object which also contains all their resources for a given draw per framebuffer configuration (blending, AA, framebuffer depths, etc), and command buffers are built ahead of time in order to dispatch state changes and draws. Setup becomes a lot more complicated, but it is also more efficient to thread (though the dev is also completely responsible for synchronization of everything from object creation and deletion to worker and render threads). Vulkan also requires all shaders to be precompiled to a binary format, which is better for detecting shader errors before the app gets out the door, but also makes generating them on the fly more challenging. In short, it's a handful and can be rather overwhelming.

Finally, it's worth noting that Vulkan is not intended as a replacement for OpenGL; Khronos has stated that since its release. Vulkan is designed to handle applications where OpenGL falls short. A very large portion of graphics applications out there don't actually need this level of optimization. My intent here isn't to discourage people from using Vulkan, just to say that it's not always needed, and it is not a magic bullet that solves all your performance problems.

Apple and OpenGL

When OSX was released, Apple adopted OpenGL as its graphics API.
OpenGL was behind most of its core foundation libraries, and as such Apple maintained more control over OpenGL than on Windows or Linux. Because of this, GPU driver developers did not install their own OpenGL implementations as they did for Windows or Linux. Apple created the OpenGL frontend, and the driver developers created the back end. This was around the time of the release of Windows Vista and its huge number of driver-related graphics crashes, so in retrospect the decision makes a lot of sense, though that situation has been largely fixed in the years since. Initially Apple had support for OpenGL 2.1. This had some of the features of Modern GL, such as shaders and buffers, but it also lacked other features like uniform buffers and geometry shaders. While Windows and Linux users enjoyed OpenGL 3.x and eventually 4.0, Mac developers were stuck with a not-quite-there-yet version of OpenGL. Around 2012 Apple addressed this situation and released their OpenGL 3.2 implementation... but with a bit of a twist.

Nvidia and AMD's OpenGL implementations on Windows and Linux supported the Compatibility profile. When Apple released their GL3.2 implementation it was Core profile only, and that put some developers in a tricky situation - completely purge all deprecated features and adopt GL3.2, or remain with GL2.1. The problem being that some deprecated features were actually still useful in the CAD/DCC universe, such as polygons, wide lines, and stippled lines/faces. So instead of the gradual upgrading devs could do on the other platforms, it became an all-or-nothing affair, and this likely slowed adoption of the GL3.2 profile (pure conjecture on my part). This may have also contributed to the general stability issues with GL3.2 (again, pure conjecture).

Performance was another issue. Perhaps because of the division of responsibility between the driver developer of the GPU maker and the OpenGL devs at Apple, or perhaps because the driver developers added specific optimizations for their products on the other platforms, OpenGL performance on MacOS was never quite as good as elsewhere. Whatever the reason, it became a bit of a sore point over the years, with a few game developers abandoning the platform altogether. These problems likely prompted Apple to look at an alternate solution - Metal. Eventually Apple added more GL features up to the core GL4.1 level, and that is where it has sat until their announcement of GL deprecation this week. This is unfortunate for a variety of reasons - versions of OpenGL above 4.1 have quite a few features which address performance for modern GPUs and portability, and OpenGL is currently the only cross-platform API, since Apple has not adopted Vulkan (though a third-party MoltenVK library exists that layers Vulkan on Metal, it is currently a subset of Vulkan).

Enter Metal

Metal emerged around the time of Mantle, and before Khronos had begun work on Vulkan. It falls somewhere in between OpenGL and Vulkan - more suitable for current GPUs, but without the extremely low-level API. It has compute capability and most of the features that GL does, with some of the philosophy of Vulkan. Its major issues for developers are similar to those of DirectX - it's platform specific, and it has its own shading language. If you're working entirely within the Apple ecosystem, you're probably good to go - convert your GL-ES or GL app, and then continue on. If you're cross platform, you've got a bit of a dilemma.
You can continue on with business as usual in OpenGL, fully expecting that it will remain as-is and might be removed at some point in the future, possibly waiting until a GL-on-top-of-Metal API comes along or Apple allows driver developers to install their own OpenGL like Microsoft does. You can implement a Metal interface specific to MacOS, port all your shaders to Metal SL and maintain both indefinitely (Houdini has about 1200). Or, you can drop the platform entirely. None of those seem like very satisfactory solutions.

I can't say the deprecation comes as much of a surprise, with Metal development ongoing and GL development stalling on the Mac. It seems like GL was deprecated years ago and this is just the formal announcement. One thing missing from the announcement was a timeframe for when OpenGL support would end (or if it will end). It does seem like Apple is herding everyone toward Metal, though how long that might take is anyone's guess.

And there you have it, the state of graphics APIs in 2018 - from a near convergence of DX11 and GL4 a few short years ago, to a small explosion of APIs. Never a dull moment in the graphics world.
  5. 19 points
    I thought it fitting to post this here too ;). For better or worse, I'm launching a vfx and animation studio at the end of the week. Some of you may recognize some of the name (if you squint and look at it just right). http://theodstudios.com
  6. 15 points
    Hi everyone, here's a little personal project I did over the last year. No keyframes were used for the animation. Each movement is generated through physical simulation or procedural noise. The bananas and pears are done in H16.5 using CHOPs-controlled bones and then fed into a FEM simulation. All the other fruits are done using H17 and Vellum. ÖBST: "How would fruits move if they could?" Hope you like it.
  7. 13 points
    I've wanted to tackle mushroom caps in pyro sims for a while. Might as well start here... Three things contribute greatly to the mushroom caps: coarse sub-steps, the temperature field and the divergence field. All of these together will comb your velocity field pretty much straight out and up. Turning on the velocity visualization trails will show this very clearly. If you see vel combed straight out, you are guaranteed to get mushrooms in that area. If you are visualizing the velocity, it's best to adjust the visualization range by going forward a couple of frames and adjusting the max value until you barely see red. That's your approximate max velocity value. An off-the-shelf pyro explosion on a hollow fuel source sphere at frame 6 will be about 16 Houdini units per second, and the max velocity coincides with the leading edge of the divergence field (if you turn it on for display, you'll see that). So divergence is driving the expansion, which in turn pushes the velocity field and forms a pressure front ahead of the explosion because of the Project Non-Divergent step that assumes the gas is incompressible across the timestep, that is, wherever divergence is 0. I'm going to get the resize field thingy out of the way first, as that is minor to the issue but necessary to understand.

Resizing Fields

Yes, if you have a huge explosion with massive velocities driven by a rapidly expanding divergence field, you could have velocities of 40 Houdini units per second or higher! Turning off the Gas Resize will force the entire container to evaluate, which is slow but may be necessary in some rare cases - but I don't buy that. What you can do is, while watching your vel and divergence fields in the viewport, adjust the Padding parameter in the Bounds field high enough to keep ahead of the velocity front, as that is where you hope for some nice disturbance, turbulence and confinement to stir around the leading edge of the explosion. Or... use several fields to help drive the resizing of the containers. Repeat: use multiple fields to control the resizing of your sim containers. Yep, even though it says "Reference Field" and the docs say "Fluid field..", you can list as many fields in this parameter as you want to help in the resizing. In case you didn't know.

Diving into the Resize Container DOP, there is a SOP Solver that contains the resizing logic. It constructs a temporary field called "ResizeField", imports the fields (by expanded string name from the simulation object, which is why vector fields work) with a ForEach SOP, each field in turn, then does a volume bound with the Volume Bounds SOP on all the fields together using the Field Cutoff parameter. Yes, there is a bit of an overhead in evaluating these fields for resizing, but it is minor compared to having no resizing at all, at least for the first few frames where all the action and sub-stepping needs to happen. The default is density, and why not - it's good for slower moving sims. Try using density and vel: "density vel". You need both, as density will ensure that the container will at least bound your sources when they are added. Then vel will very quickly take over the resizing logic as it expands far more rapidly than any other field in the sim. Then use the Field Cutoff parameter to control the extent of the container. The default here is 0.005. This works for density as this field is really a glorified mask: either 0 or 1 and not often above 1. Once you bring the velocity field into the mix, you need to adjust the Field Cutoff.
Now that you have vel defined alongside density, this Field Cutoff reads as 0.005 Houdini units per second with respect to the vel field. Adjust Field Cutoff to suit. Start out at 0.01 and then go up or down. Larger values give you smaller, tighter containers. Lower values give you larger padding around the action. It all depends on your sim, scale and the velocities present. Just beware that if you start juicing the ambient shredding velocity with no Control Field (it defaults to temperature with its own threshold parameter, so leave it there) to values above the Field Cutoff threshold, your container will zip to full size, and if you have Max Bounds off, you will promptly fill up your memory and, after a few minutes of swapping death, Houdini will run out of memory and terminate. Just one of the things to keep in mind if you use vel as a resizing field. Not that I've personally done that... The Resolution Scale is useful to save on memory for very large simulations, which means you will be adjusting this for large simulations. The Gas Resize Field DOP creates a temporary field called ResizeBounds, and the resolution scale sets this container's resolution compared to the reference fields. Remember from above that this parameter is driving the Volume Bounds SOP's Bounding Value. Coarser values lead to blurred edges, but that is usually a good thing here. Hope that clears things up with the container resizing thing. Try other fields for sims if they make sense, but remember there is an overhead to process. For Pyro explosions, density and vel work OK. For combustion sims like fire, try density and temperature, where buoyancy contributes a lot to the motion.
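A quick way to check that approximate max velocity value numerically (a rough sketch, not part of the post above): import the vel field into SOPs with a DOP Import Fields node and measure its peak speed there.

// Volume Wrangle over the imported vel field (vel.x/y/z).
// Assumes a scalar volume named "speed" already exists to write into
// (add one with a Volume SOP first, matching the vel resolution).
f@speed = length(v@vel);

A Volume Reduce SOP set to Maximum on the speed volume then reports the peak speed, which is the value to plug into the velocity visualization range instead of eyeballing the red trails.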
  8. 11 points
    nature.hipnc - just to say hello and share some stuff. /cnc_verkstad/ Tesan Srdjan
  9. 11 points
    Hi, just posting some of my recent art. Most of it is Houdini. Some is a mix of Houdini, Daz and Marvelous Designer. If you see a character, that's definitely from Daz. Everything is rendered in Octane. Regards, Rohan
  10. 10 points
    I can't take credit for it, but it needed to be shared. This made me cry with laughter.
  11. 10 points
    During the last 3 weeks, I did some R&D and published my results on Vimeo. Some people asked me to share my files here, so here we are. I hope it will help!
  12. 9 points
    What if the shelf buttons created DOP setups inside one SOP network, instead of having a Geometry node, a DOP network for the simulation and another Geometry node to import the data and save it to disk? It makes much more sense to see the data flow from top to bottom in one network without having to jump to different levels for no reason. Maybe it's just me... grains.hipnc
  13. 9 points
    Hi! Here is something I've been working on for some time, on and off. I was searching for something that I could use for plants colliding near the camera. So I ended up using the Bullet solver with packed geometry. I'm not sure if it's the right way to go for this kind of thing, but at least I learned a lot about the constraints. Thinking about it afterwards, I probably should have used a few more substeps, but I hope you can enjoy it anyway! Suggestions for improvements are welcome.
  14. 9 points
  15. 9 points
    I'm working on something related to art-directing the swirly motion of gases. It's an implementation of a custom buoyancy model that lets you very easily art-direct the general swirly motion of gases, without using masks, vorticles, temperature sourcing to get more swirly motion in specific zones, etc. It also gets rid of the "mushroom effect" for free with a basic turbulence setup. Here are some example previews - some with normal motion, others with extreme parameter values to stress the pipeline. As for the details, it's just simple turbulence plus a bit of disturbance in the vel field, nothing complex; because of this the sims are very fast (for constant sources: average voxel count 1.8 billion, voxel size 0.015, sim time 1h:40min for 160 frames; for burst sources: voxel size 0.015, sim time 0h:28min). I'm working on a Vimeo video to explain this new buoyancy model in more detail. I hope you like it! Cheers, Alejandro constantSource_v004.mp4 constantSource_v002.mp4 burstSource_v004.mp4 constantSource_v001.mp4 burstSource_v002.mp4 burstSource_v003.mp4 burstSource_v001.mp4 constantSource_v003.mp4
  16. 8 points
    Hi everyone! This past week I worked on a personal project to learn something about hair and Vellum. It's my first project ever with hair, so I guess it's nothing special, but several people asked to see the hip file, so here it is. Final result: Hip file (I had to recreate it but it should be pretty much the same): groom_clumping_03.hipnc
  17. 8 points
    A few tips and tricks for manipulating gas simulations. 1. Independent resolution grids, e.g. overriding the vel grid size independently of the density grid. 2. Creating additional utility fields, e.g. gradient, speed, vorticity, etc., which can be used to manipulate forces. 3. Forces via VEX, with some example snippets. smokesolver_v1.hipnc P.S. Some of these techniques are not OpenCL friendly though.
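As a rough illustration of point 3 (a sketch, not one of the snippets in smokesolver_v1.hipnc): a Gas Field Wrangle can modify vel directly. Here vel and temperature are assumed to be bound on the wrangle, and "lift" and "drag" are hypothetical channel parameters added to it:

// Gas Field Wrangle (DOP): simple temperature-driven lift plus a crude global drag.
v@vel += set(0.0, f@temperature * chf("lift"), 0.0) * @TimeInc;
v@vel *= max(0.0, 1.0 - chf("drag") * @TimeInc);

Scaling by @TimeInc keeps the effect roughly consistent when the timestep or substep count changes.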
  18. 7 points
    I also moved all the tumblr example files I've been sharing onto my new website that you can find here. https://www.richlord.com/tools
  19. 7 points
    Article on SideFX.com: https://www.sidefx.com/community/houdini-175-launch-event/
  20. 7 points
  21. 7 points
    "The Tree" Another R&D image from the above VR project: The idea for the VR-experience was triggered by a TV-show on how trees communicate with each other in a forest through their roots, through the air and with the help of fungi in the soil, how they actually "feed" their young and sometimes their elderly brethren, how they warn each other of bugs and other adversaries (for instance acacia trees warn each other of giraffes and then produce stuff giraffes don't like in their leaves...) and how they are actually able to do things like produce substances that attract animals that feed on the bugs that irritate them. They even seem to "scream" when they are thirsty... (I strongly recommend this (german) book: https://www.amazon.de/Das-geheime-Leben-Bäume-kommunizieren/dp/3453280679/ref=sr_1_1?ie=UTF8&qid=1529064057&sr=8-1&keywords=wie+bäume+kommunizieren ) It's really unbelievable how little we know about these beings. So we were looking to create a forest in an abstract style (pseudo-real game-engine stuff somehow doesn't really cut it IMO) that was reminiscent of something like a three dimensional painting through which you could walk. In the centre of the room, there was a real tree trunk that you were able to touch. This trunk was also scanned in and formed the basis of the central tree in the VR forest. Originally the idea was, that you would touch the tree (hands were tracked with a Leap Motion controller) and this would "load up" the touched area and the tree would start to become transparent and alive and you would be able to look inside and see the veins that transport all that information and distribute the minerals, sugar and water the plant needs. From there the energy and information would flow out to the other trees in the forest, "activate" them too and show how the "Wood Wide Web" connected everything. Also, your hands touching the tree would get loaded up as well and you would be able to send that energy through the air (like the pheromones the trees use) and "activate" the trees it touched. For this, I created trees and roots etc. in a style like the above picture where all the "strokes" were lines. This worked really great as an NPR style since the strokes were there in space and not just painted on top of some 3D geometry. Since Unity does not really import lines, Sascha from Invisible Room created a Json exporter for Houdini and a Json Importer for unity to get the lines and their attributes across. In Unity, he then created the polyline geometry on the fly by extrusion, using the Houdini generated attributes for colour, thickness etc. To keep the point count down, I developed an optimiser in Houdini that would reduce the geometry as much as possible, remove very short lines etc. In Unity, one important thing was, to find a way to antialias the lines which initially flickered like crazy - Sascha did a great job there and the image became really calm and stable. I also created plants, hands, rocks etc. in a fitting style. The team at Invisible Room took over from there and did the Unity part. The final result was shown with a Vive Pro with attached Leap Motion Controller fed by a backpack-computer. I was rather adverse to VR before this project, but I now think that it actually is possible to create very calm, beautiful and intimate experiences with it that have the power to really touch people on a personal level. Interesting times :-) Cheers, Tom
  22. 7 points
  23. 7 points
    Basic:

// Primitive wrangle.
int pts[] = primpoints(0, @primnum);
vector rest = point(0, "P", pts[0]);
vector prev_pos = rest;
matrix3 frame = ident();

for (int i = 0; i < len(pts); i++) {
    vector pos = point(0, "P", pts[i]);
    rotate(frame, 0.1, {0, 0, 1});
    vector new_pos = (pos - rest) * frame + prev_pos;
    rest = pos;
    prev_pos = new_pos;
    setpointattrib(0, "P", pts[i], new_pos);
}

Advanced:

// Primitive wrangle.
#define TWO_PI 6.2831852

addpointattrib(0, "N", {0, 0, 0});
int pts[] = primpoints(0, @primnum);
int npt = len(pts);

// Loop variables.
vector rest = point(0, "P", pts[0]);
vector prev_pos = rest;
matrix3 frame = ident();

for (int i = 0; i < npt; i++) {
    vector pos = point(0, "P", pts[i]);
    vector delta = pos - rest;
    rest = pos;

    // Make normal. Point normals could be used instead.
    vector normal = normalize(cross(cross({0, 1, 0}, delta), delta));
    if (length(normal) == 0) {
        normal = {0, 0, 1};
    }

    // Drive a shape with ramps and multipliers.
    vector axis;
    float ramp, angle;

    // Twist the bend axis.
    axis = normalize(delta);
    ramp = chramp("twist_profile", (float) i / npt);
    angle = fit01(ramp, -TWO_PI, TWO_PI) * ch("twist") / (npt - 1);
    rotate(frame, angle, axis);

    // Bend the curve.
    axis = normalize(cross(normal, delta));
    ramp = chramp("bend_profile", (float) i / npt);
    angle = fit01(ramp, -TWO_PI, TWO_PI) * ch("bend") / (npt - 1);
    rotate(frame, angle, axis);

    // Compute new position and normal.
    vector new_pos = delta * frame + prev_pos;
    prev_pos = new_pos;
    setpointattrib(0, "P", pts[i], new_pos);
    setpointattrib(0, "N", pts[i], normal * frame);
}

curl.hipnc
  24. 7 points
    Project Non-Divergent Step and Mushrooms

The Project Non-Divergent DOP is responsible for 99.9% of the simulation's behaviour. Yes, there are hundreds of DOPs inside the Pyro Solver all playing a part, but all funnelling through that single Non-Divergent step. This means that if you don't like the look of your sim and the mushrooms, it's ultimately because the Non-Divergent step is creating a vel field that doesn't do it for you. If you want to see for yourself, unlock the Pyro Solver, dive in, find the Smoke Solver, unlock that, dive in and find the projectmultigrid DOP and bypass it, then play. Nothing. For almost all Pyro sims, this is the Project Non-Divergent Multigrid, as it is the fastest of the Non-Divergent microsolvers. This specific implementation takes only the vel and divergence fields and, assuming across the timestep that the gas is incompressible where divergence is 0, creates a counter field called pressure and then applies that pressure field to the incoming vel to remove any compression or expansion. That gives you your velocity - nice, turbulent and swirly, or combed straight out. Just tab-add a Project Non-Divergent Multigrid DOP in any DOP network and look at the fields: Velocity Field, Goal Divergence Field and Pressure Field (generated every timestep, used, then removed later on).

All the other fields in Pyro are there to affect vel and divergence. Period. Nothing else. At this point I don't care about rendering and the additional fields you can use there. It's about vel and divergence used to advect those fields into interesting shapes, or mushrooms. If you want to create your own Pyro Solver taking in, say, previous and new vel, density and temperature, then in a single Gas Field VOP network create an interesting vel and divergence field, pass that straight on to the Project Non-Divergent Multigrid microsolver, and advect density, temperature and divergence afterward - go for it. Knowing that only vel and divergence drive the simulation is very important. All the other fields are there to alter the vel and divergence fields. So if you have vel vectors that are combed straight, divergence (the combustion model in Pyro) or buoyancy (the Gas Buoyancy DOP on temperature driving vel) have a lot to do with it. Or a fast moving object affecting vel...
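For reference, the projection described above is the standard pressure projection. In simplified form (ignoring density and timestep factors), with v* the pre-projection velocity and d the goal divergence field, the step effectively solves

\nabla^{2} p = \nabla \cdot \mathbf{v}^{*} - d, \qquad \mathbf{v} = \mathbf{v}^{*} - \nabla p

so wherever d is zero the resulting vel field is divergence-free, and a non-zero d (fed by the combustion model) shows up as the local expansion that pushes the velocity front outward.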
  25. 6 points
    More Unlimited Fun nature2 fun.hipnc
  26. 6 points
    Hi. How about computing a local space per primitive instead, and then getting the noise position from the point position in that local space? Some sort of edge-based UV unwrap.

// Primitive wrangle.
int pts[] = primpoints(0, @primnum);

// Compute an averaged primitive normal from point normals computed from their neighbours.
vector normals[];
foreach (int pt; pts) {
    vector normalized_edges[];
    vector pt_pos = point(0, "P", pt);
    foreach (int nb; neighbours(0, pt)) {
        vector nb_pos = point(0, "P", nb);
        append(normalized_edges, normalize(pt_pos - nb_pos));
    }
    append(normals, normalize(avg(normalized_edges)));
}
vector normal = normalize(avg(normals));

// Compute edge tangent.
vector pt0 = point(0, "P", pts[0]);
vector pt1 = point(0, "P", pts[1]);
vector edge = normalize(pt0 - pt1);

// Compute bitangent and orthonormalize the matrix.
vector perp = normalize(cross(normal, edge));
normal = normalize(cross(edge, perp));
3@tangent_space = set(perp, normal, edge);

Final deformation code:

// Point wrangle.
int prim;
vector uv;
xyzdist(1, @P, prim, uv);
matrix3 tangent_space = prim(1, "tangent_space", prim);
vector pos = @P * invert(tangent_space);
float deform = noise(pos * {10, 1, 100}) * 0.05;
v@P += v@N * deform;

Some image sampling could work too: tangent_space_noise.hipnc
  27. 6 points
    This operator allows you to call a collection of nodes on any data or simply no data (generators). It gives you full control over how the lambda function should be run.
  28. 6 points
    Hello everyone! My name is Daniele, I'm new to Odforce. This is my first post, nice to meet you! I've been a character animator for a few years but recently I started learning Houdini. I'm learning using resources I find online, including some super useful posts from this forum. I would love to use this post to keep track of my progress and share my results with you. My first project is a procedural building, hip is attached. Cheers! Daniele procedural_house_01.hipnc
  29. 6 points
    Always wanted to make a cloudscape. Rendered in Octane. About 48 mins on a single GTX 1080. 1600 x 2000 pixels.
  30. 6 points
    It's pretty straightforward out of the box: just use v@N, v@up or p@orient on your instancing points in such a way that the resulting reference frame has Y pointing in the up-down direction of your ocean (so in the normal direction of the ball) and X in the direction you want the wind to blow. In your file, since v@N is pointing outwards and v@N defines the Z axis, your ocean deforms in a tangential direction, and that is why you are seeing the weird deformation. Here is the modified file: ts_ocean_on_ball.hip
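A minimal sketch of building such a frame in a point wrangle on the instancing points (not taken from ts_ocean_on_ball.hip; it assumes the ball is centred at the origin and uses a hypothetical "wind_dir" vector parameter). Depending on which local axis your ocean treats as the wind direction, you may need to swap or flip the tangent axes:

// Point wrangle on the instancing points (ball assumed centred at the origin).
vector up   = normalize(v@P);              // radial ball normal -> Y (up) of the copy frame
vector wind = normalize(chv("wind_dir"));  // hypothetical wind direction parameter
v@up = up;
// Project the wind onto the tangent plane so Z (defined by N) stays tangential;
// assumes the wind is not parallel to the local normal.
v@N = normalize(wind - up * dot(wind, up));

With Y radial, the ocean's vertical displacement now happens along the ball's normal rather than tangentially.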
  31. 6 points
    Hi roberttt! I did that specific fracture before the Houdini 16+ booleans were available, using a custom voronoi cutters technique. Basically, I used boolean-style cutter geometry to guide a voronoi fracture. 1) Scattered lots of points on the cutter geo, point-jitter them for width, and create cluster attributes on those points to create small clumps 2) Create a band of voronoi points a bit further from the cutter geometry, to define the large chunks. These points all get the same cluster value, and make sure that cluster value isn't used in the small-chunks clusters. 3) Run the fracture with clustering.... although the new H17 voronoi fracture doesn't seem to have clustering built in. So I believe you need to do the clustering post-fracture in H17, which unfortunately doesn't have an option to remove the unnecessary internal faces, so the geom can be a bit heavy with the new workflow. (Unless I'm missing something obvious!) I don't think I've used this voronoi fracture workflow at all since the H16+ booleans were released, and I've removed that technique from my CGMA destruction class. Nowadays I would handle this in one of these ways: - Running a primary boolean fracture to define the main chunks, and then running a secondary pass where I generate additional fragments on the edges of the main pieces. There are various ways to generate those secondary boolean edge cuts, and it's always a bit experimental. - Fracture everything at once into lots of small pieces, and use noise or geometry-grouping to define the larger shapes from the smaller fracture. Then once those large chunks are defined, use constraints or the name attribute or double-packing to get them to behave as individual large pieces. Hope this helps! :-)
  32. 6 points
    I pared the scene right back and rebuilt it using stuff I learned at the start of the year; it's behaving as expected now, I think. constrain_to_animated_anchors_matts_soft_rotation.hipnc
  33. 6 points
    anyone interested in the h17 pyro custom velocity.. normal_tools.hiplc
  34. 6 points
    Hey all! The first part of this tutorial has been available for almost a year now, but because of the sad news that hit CMI, I was unable to upload the 2nd "half" there. So instead I just made it available on YouTube for everyone, to make up for that I suppose. I'm considering putting the first part on there as well, if enough people want that. This tutorial covers the following, using Houdini:
* Generating water meshes
* Updating the terrain based on the water
* Generating walking paths on the terrain
* Creating some basic instances
* Building a flexible system using external files to place these instances.
Built on Houdini 16.5, but it should work on 16 too, or all the way back to H14 if you skip the heightfield part. Recommended specs: at least 16GB of RAM; reduce the terrain size if you have less. Disclaimer: the work files are as-is and do not contain the cached geometry, to save on space; this may explain node errors on the various "File Cache" nodes. They do however also contain the work done in the first half of the tutorial, albeit mostly undocumented: https://tinyurl.com/y89egjvq Hopefully it's of some use! Twan
  35. 6 points
    I have been exploring how constraints work and I have put together a basic RBD car rig. The vehicle/car rig supports front and rear wheel drive, a spring suspension, motor speed, adjustable wheel size with front and rear axle offsets and a switchable front/back engine block mass. Dive inside and look for the node named Controls to play with the various settings. I have a 1st draft attempt at steering, but it does not really work yet. If anyone has any ideas on how to link steering to the constraints I'd love to see them. Thanks to Richard Lord and Julian Johnson for posting their constraint systems and Matt Estela's CGWiki . Dissecting their work helped me build this rig. ap_basic_vehicle_090318.hiplc
  36. 6 points
    Last week a guy asked in the Brazilian Houdini group on Facebook how to simulate colored smoke. I believe there are lots of hip files with this kind of effect, but while thinking about it, it occurred to me to use CMYK instead of RGB, since CMYK is more suitable for mixing colored things other than light. I couldn't spend more time testing it or improving the file, but it seems to work, so here's the hip file. colored_smoke_V002.hip
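As a rough illustration of the idea (a sketch, not taken from colored_smoke_V002.hip), mixing in a subtractive space can be as simple as converting to CMY, blending there, and converting back; "color_a"/"color_b" and the two density fields below are hypothetical names:

// Wrangle: naive device-space RGB <-> CMY conversion, no colour management.
vector rgb_to_cmy(vector rgb) { return {1, 1, 1} - rgb; }
vector cmy_to_rgb(vector cmy) { return {1, 1, 1} - cmy; }

// Blend two smoke colours weighted by their (hypothetical) densities.
vector a = rgb_to_cmy(chv("color_a"));
vector b = rgb_to_cmy(chv("color_b"));
float  w = f@density_b / max(f@density_a + f@density_b, 1e-6);
v@Cd = cmy_to_rgb(lerp(a, b, clamp(w, 0.0, 1.0)));

Blending pigments this way darkens where the colours overlap, rather than washing out toward white the way an additive RGB mix does.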
  37. 6 points
    Now I'm guessing you can post this. All the presentations from SIGGRAPH are available - and they talk about H17! https://vimeo.com/goprocedural
  38. 6 points
    Here is another slight variation. Instead of generating a fixed-length line segment that moves through time, generate the full path over the entire time range. Then add a primitive attribute @path_pos to each line primitive and drive the path deformer's offset-along-path value with this attribute. Then you can have some geometry leading others as they each flow along their own path.

float frame_end = 200.0;

// The deform_path node expects input in the range of 0-1,
// so remap the current frame into that range.
f@path_pos = fit(@Frame, 0, frame_end, 0, 1);

// Now offset each path based upon its index.
float delta = 0.05; // per-line delay time can be set here.
f@path_pos -= (delta * @prim);

ap_ps_Cardume_Odforce_v3.04.hiplc
  39. 6 points
    You're losing sight of the bigger picture here, which is to create art. FX TD's are by definition going to be on the technical side of things, but their goal is to facilitate the creation of art. The final image is what matters, 99% of the time. People with engineering mindsets sometimes like to get caught up in the "elegance" or "physical correctness" of their solutions, but that stuff rarely (if ever) matters in this field. Rotating an object is conceptually a simple thing, but it turns out that there's quite a bit of math involved. Is it really insulting one's intelligence to not assume that every artist is willing to study linear algebra to rotate a cube on its local axis? I do know how to do this, and I still don't want to have to write that code out every single time. It's a pain in the ass! Creating a transform matrix, converting to a quaternion, slerping between the two quaternions, remembering the order of multiplication... remembering and executing these steps every time gets in the way of exploration and play. Besides, all of that is only possible because SESI wrote a library of functions to handle this. Should we be expected to also write our own C++ libraries to interpolate quaternions? Should we be using Houdini at all, instead of writing our own visual effects software? Who engineered the processor that you're using to compute all this? This is a rabbit hole you'll never escape from. Anyways, Entagma and MOPs are not officially affiliated at all, so Entagma's core mission of reading white papers so that you don't have to is unlikely to change.
  40. 6 points
    Check out my latest project - creating an open library full of learning resources about various areas of VFX. It has many houdini-related presentations and theses. library: https://github.com/jtomori/vfx_good_night_reading blog post: https://jurajtomori.wordpress.com/2018/06/11/learning-resources-about-vfx-and-cg/
  41. 6 points
    Try this... Put down a Measure SOP and set it to measure the perimeter of your curves. After that, add a primitive wrangle and write:

#include <groom.h>
adjustPrimLength(0, @primnum, @perimeter, @perimeter * @dist);

groom.h is an included file containing some functions used in the grooming tools, and one of those functions is:

void adjustPrimLength(const int geo, prim; const float currentlength, targetlength)
  42. 5 points
    Finally, after two years of full stagnation, I found the energy to fully revamp it using Hugo (which is freaking cool) and update it with the very latest projects. Unfortunately, 2 projects I have been working on I am not allowed to share (yet)… one of them was 8 months of my life, so I am pretty gutted, but hopefully soon it will be OK to say we did it. https://jordibares.com Most of it now is Houdini of course… :-)
  43. 5 points
    For anyone that is interested: here is a tutorial on the best method I found to get nice motion. I figured I would post this since I still get messaged about this thread. https://youtu.be/DLSmz9HOKlE
  44. 5 points
    here you go. scene file in attachment. pattern_example.hiplc
  45. 5 points
    "Circular Paths" Embraced by the universe we try to get our roots, our feelers, our nerve endings into it's substance to feel more connected with it, with each other, with ourselves, with life and love... Created in Houdini Rendered in Redshift Post in Luminar Cheers, Tom
  46. 5 points
    A setup that finds the maximum pscale points can have, so the resulting spheres don't intersect. Scene at: https://jmp.sh/hEctEky
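One simple, conservative scheme that guarantees non-intersection (a sketch; the linked scene may use a different or more optimal method) is to give every point half the distance to its nearest neighbour. For any pair of points, each radius is then at most half their separation, so the spheres can touch but never overlap.

// Point wrangle: pscale = half the distance to the nearest other point.
int npts[] = nearpoints(0, v@P, 1e9, 2);   // this point plus its nearest neighbour
if (len(npts) > 1) {
    f@pscale = 0.5 * distance(v@P, point(0, "P", npts[1]));
}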
  47. 5 points
  48. 5 points
    Hi all, I have been doing an R&D project lately on how to generate knitted garments in Houdini. One of my inspirations was a project done by Psyop using Fabric Engine, and the other is by my friend Burak Demirci. Here are the links to them. http://fabricengine.com/case-studies/psyop-part-2/ https://www.artstation.com/artist/burakdemirci Some people asked me to share my hip file and I was going to do it sooner, but things were a little busy for me. Here it is; I also put in some sticky notes to explain the process better, hope it helps. Also, this hip file is identical to the one I created this video with, except for the rendering nodes: https://vimeo.com/163676773 . I think there are still some things that can be improved and maybe done in a better way. I would love to see people developing this system further. Cheers! Alican Görgeç knitRnD.zip
  49. 5 points
    In case it's useful to anyone, here's an asset I made a little while ago and finally got round to documenting. It takes one or more curves and generates 3 levels of braided curves - the first level coiled around the input, the second coiled around the first and the third around the second. Additionally it can make a final level of 'hairs'. It supports animated inputs and will transfer velocity onto the output, but in many cases it's probably better to use a timeshift to generate the rope on 1 frame and use a point deform or similar. This also avoids texture jumping problems if using the supplied 'pattern' colour method. There's a fairly comprehensive help card with it and below is a demo video. If anyone finds bugs or problems, please let me know and I'll try and fix them when I have time... I'd be very interested to see anything anyone makes with it... TB__RopeMaker_1_0.hda
  50. 5 points
    Basic smoke solver built within a SOP Solver, utilising OpenVDB nodes. Happy exploring & expanding =) P.S. The DOP smoke solver still solves quicker in many cases, though. vdbsmokesolver_v1.hipnc vdbsmokesolver_v2.hipnc