Leaderboard


Popular Content

Showing most liked content since 05/18/2018 in all areas

  1. 20 points
    Pixelkram / Moritz S. (of Entagma) and I are proud to announce MOPs: an open-source toolkit for creating motion graphics in Houdini! MOPs is both a suite of ready-to-use tools for solving typical motion graphics problems, and a framework for building your own custom operators easily. More information is available from our website: http://www.motionoperators.com Enjoy!
  2. 8 points
    Since there's been a lot of talk around the web about graphics APIs this past week, with Apple's decision to deprecate OpenGL in macOS Mojave, I thought I'd take this opportunity to discuss the various graphics APIs and address some misconceptions. I'm doing this as someone who's used all versions of OpenGL from 1.0 to 4.4, and not with my SideFX hat on. So I won't be discussing future plans for Houdini, but will instead focus on the APIs themselves.

    OpenGL

    OpenGL has a very long history dating back to the 90s. There have been many versions of it, but the most notable ones are 1.0, 2.1, 3.2, and 4.x. Because of this, it has a reputation for being old and inefficient, which is somewhat true but not the entire story. Certainly GL 1.0-2.1 is old and inefficient and doesn't map well to modern GPUs. But then, in the development of 3.0, a major shift occurred that nearly broke the GL ARB (architecture review board) apart. There was a major move to deprecate much of the "legacy" GL functionality and replace it with modern features - and out of that kerfuffle the OpenGL core and compatibility profiles emerged. The compatibility profile added the new features alongside the old ones, while the core profile removed the old ones entirely. The API in the core profile is what people are referring to when they talk about "Modern GL".

    Houdini adopted modern GL in v12.0 in the 3D viewport, and more strict core-profile-only support in v14.0 (the remaining UI and other viewers). Modern GL implies a lot of different things, but the key ones are: geometry and shader data must be backed by VRAM buffers, shaders are required, and all fixed-function lighting, transformation, and shading is gone. This is good in a lot of ways. Geometry is no longer streamed to the GPU in tiny bits and is instead kept on the GPU, the GL "big black box" state machine is greatly reduced, and there's a lot more flexibility in the display of geometry from shaders. You can light, transform, and shade the model however you'd like. For example, all the various shading modes in Houdini, primitive picking, visualizers, and markers are drawn using the same underlying geometry - only the shader changes.

    OpenGL on Windows was actually deprecated decades ago. Microsoft's implementation still ships with Windows, but it's an ancient OpenGL 1.1 version that no one should use. Instead, Nvidia, AMD, and Intel all install their own OpenGL implementations with their drivers (and this extends to CL as well).

    Bottlenecks

    As GPUs got faster, game developers in particular started running into a CPU bottleneck, particularly as the number of draw calls increased. OpenGL draw calls are fast (more so than DirectX), but eventually you reach a point where the driver code prepping the draws becomes significant. More detailed worlds meant not only bigger models and textures, but more of them. So the GPU started sitting idle waiting on draws from the CPU, and that draw load began taking away from useful CPU work, like AI.

    The first big attempt to address this came in the form of direct state access and bindless textures. All resources in OpenGL are given an ID - an integer you can use to identify a resource when modifying it or binding it to the pipeline. To use a texture, you bind this ID to a slot, and the shader refers to that slot through a sampler. As more textures were used and switched within a frame, mapping the ID to its data structure became a more significant load on the driver. Bindless does away with the ID and replaces it with a raw pointer.

    The second was to move more work to the GPU entirely. GLSL compute shaders (GL 4.4) were added, along with indirect draw calls. This allows the GPU to do culling (frustum, distance-based, LOD, etc.) with an OpenCL-like compute shader and populate buffers with draw data. The indirect draw calls reference this data, and no data is exchanged between GPU and CPU.

    Finally, developers started batching up as much as possible to reduce the number of draw calls and make up for these limitations. Driver developers kept adding more optimizations to their API implementations, sometimes on a per-application basis. But it became increasingly obvious that for realtime display of heavy scenes - and with VR emerging, where a much higher frame rate and resolution are required - the current APIs (GL and DX11) were reaching their limit.

    Mantle, Vulkan, and DX12

    AMD recognized these bottlenecks, and the bottleneck that the driver itself posed to GPU rendering, and produced a new graphics API called Mantle. It did away with the notion of a "fat driver" that optimizes things for the developer. Instead, it was thin and light, and passed all the optimization work off to the game developer. The theory behind this is that the developer knows exactly what they're trying to do, whereas the driver can only guess. Mantle was eventually handed to Khronos, which develops the OpenGL and CL standards, and from that starting point Vulkan emerged. (DirectX 12 is very similar in theory, so for brevity's sake I'll lump them together here - but note that there are differences.)

    Vulkan requires the developer to be a lot more up-front and hands-on with everything. From allocating large chunks of VRAM and divvying them up among buffers and textures, to declaring exactly how a resource will be used at creation time, to describing the rendering pipeline in detail, Vulkan places a lot of responsibility on the developer. Error checking and validation can be entirely removed in shipping products. Even draw calls are completely reworked - no more global state and swapping textures and shaders willy-nilly. Shaders must be wrapped in an object which also contains all their resources for a given draw per framebuffer configuration (blending, AA, framebuffer depths, etc.), and command buffers must be built ahead of time in order to dispatch state changes and draws. Setup becomes a lot more complicated, but it is also more efficient to thread (though the developer is also completely responsible for synchronizing everything, from object creation and deletion to worker and render threads). Vulkan also requires all shaders to be precompiled to a binary format, which is better for detecting shader errors before the app gets out the door, but also makes generating them on the fly more challenging. In short, it's a handful and can be rather overwhelming.

    Finally, it's worth noting that Vulkan is not intended as a replacement for OpenGL; Khronos has stated that since its release. Vulkan is designed to handle applications where OpenGL falls short. A very large portion of graphics applications out there don't actually need this level of optimization. My intent here isn't to discourage people from using Vulkan, just to say that it's not always needed, and it is not a magic bullet that solves all your performance problems.

    Apple and OpenGL

    When OSX was released, Apple adopted OpenGL as its graphics API. OpenGL was behind most of its core foundation libraries, and as such Apple maintained more control over OpenGL than Windows or Linux. Because of this, graphics vendors did not install their own OpenGL implementations as they did on Windows or Linux. Apple created the OpenGL front end, and driver developers created the back end. This was around the time of the release of Windows Vista and its huge number of driver-related graphics crashes, so in retrospect the decision makes a lot of sense, though that situation has largely been fixed in the years since.

    Initially Apple supported OpenGL 2.1. This had some of the features of modern GL, such as shaders and buffers, but it lacked others, like uniform buffers and geometry shaders. While Windows and Linux users enjoyed OpenGL 3.x and eventually 4.0, Mac developers were stuck with a not-quite-there-yet version of OpenGL. Around 2012 Apple addressed this and released an OpenGL 3.2 implementation... but with a bit of a twist. Nvidia's and AMD's OpenGL implementations on Windows and Linux supported the compatibility profile; Apple's GL 3.2 implementation was core profile only, and that put some developers in a tricky situation: completely purge all deprecated features and adopt GL 3.2, or remain on GL 2.1. The problem is that some deprecated features were still genuinely useful in the CAD/DCC universe, such as polygons, wide lines, and stippled lines/faces. So instead of the gradual upgrade developers could do on the other platforms, it became an all-or-nothing affair, and this likely slowed adoption of the GL 3.2 profile (pure conjecture on my part). This may also have contributed to the general stability issues with GL 3.2 (again, pure conjecture).

    Performance was another issue. Perhaps because of the division of responsibility between the GPU maker's driver developers and the OpenGL developers at Apple, or perhaps because driver developers elsewhere added specific optimizations for their products, OpenGL performance on macOS was never quite as good as on other platforms. Whatever the reason, it became a bit of a sore point over the years, with a few game developers abandoning the platform altogether. These problems likely prompted Apple to look at an alternate solution - Metal. Eventually Apple added more GL features up to the core GL 4.1 level, and that is where things sat until the deprecation announcement this week. This is unfortunate for a variety of reasons: versions of OpenGL above 4.1 have quite a few features that address performance on modern GPUs and portability, and OpenGL is currently the only cross-platform API, since Apple has not adopted Vulkan (a third-party library, MoltenVK, layers Vulkan on Metal, but it currently implements a subset of Vulkan).

    Enter Metal

    Metal emerged around the time of Mantle, and before Khronos had begun work on Vulkan. It falls somewhere in between OpenGL and Vulkan - more suitable for current GPUs, but without the extremely low-level API. It has compute capability and most of the features that GL does, with some of the philosophy of Vulkan. Its major issues for developers are similar to those of DirectX: it's platform specific, and it has its own shading language. If you're working entirely within the Apple ecosystem, you're probably good to go - convert your GL ES or GL app, and carry on. If you're cross-platform, you've got a bit of a dilemma.

    You can continue on business as usual with OpenGL, fully expecting that it will remain as-is and might be removed at some point in the future, possibly waiting until a GL-on-top-of-Metal API comes along or Apple allows driver developers to install their own OpenGL like Microsoft does. You can implement a Metal interface specific to macOS, port all your shaders to the Metal shading language, and maintain both indefinitely (Houdini has about 1200). Or you can drop the platform entirely. None of those seem like very satisfactory solutions. I can't say the deprecation comes as much of a surprise, with Metal development ongoing and GL development stalling on the Mac. It seems like GL was deprecated years ago and this is just the formal announcement. One thing missing from the announcement was a timeframe for when OpenGL support would end (or whether it will end at all). It does seem like Apple is herding everyone toward Metal, though how long that might take is anyone's guess.

    And there you have it, the state of graphics APIs in 2018 - from a near convergence of DX11 and GL4 a few short years ago to a small explosion of APIs. Never a dull moment in the graphics world.
  3. 6 points
  4. 4 points
    A Houdini tool that expands on the functionality of polyexpand2d. It allows you to create detailed bevel profiles from polylines with clean, non-intersecting topology. I built it while trying to create a procedural gothic tracery system. I liked the topology created by polyexpand2d, but the divisions parameter and edgedist attribute just wouldn't let me get detailed enough bevel profiles. It would be great if SideFX could wrap something like this into the polyexpand2d node itself as a third output option (e.g. offset curves | offset surfaces | bevelled). I hope someone finds it useful. mt_polyExpandPlus_1.0.hda
  5. 4 points
    Generating documentation for Houdini Python modules (toolutils and others) https://jurajtomori.wordpress.com/2018/05/28/houdini-tip-houdini-python-modules-documentation/
  6. 3 points
    Get your MOPs here: Font Blower FX: vu_GetOffTheBlower.hipnc
  7. 3 points
    "The Tree" Another R&D image from the above VR project: The idea for the VR-experience was triggered by a TV-show on how trees communicate with each other in a forest through their roots, through the air and with the help of fungi in the soil, how they actually "feed" their young and sometimes their elderly brethren, how they warn each other of bugs and other adversaries (for instance acacia trees warn each other of giraffes and then produce stuff giraffes don't like in their leaves...) and how they are actually able to do things like produce substances that attract animals that feed on the bugs that irritate them. They even seem to "scream" when they are thirsty... (I strongly recommend this (german) book: https://www.amazon.de/Das-geheime-Leben-Bäume-kommunizieren/dp/3453280679/ref=sr_1_1?ie=UTF8&qid=1529064057&sr=8-1&keywords=wie+bäume+kommunizieren ) It's really unbelievable how little we know about these beings. So we were looking to create a forest in an abstract style (pseudo-real game-engine stuff somehow doesn't really cut it IMO) that was reminiscent of something like a three dimensional painting through which you could walk. In the centre of the room, there was a real tree trunk that you were able to touch. This trunk was also scanned in and formed the basis of the central tree in the VR forest. Originally the idea was, that you would touch the tree (hands were tracked with a Leap Motion controller) and this would "load up" the touched area and the tree would start to become transparent and alive and you would be able to look inside and see the veins that transport all that information and distribute the minerals, sugar and water the plant needs. From there the energy and information would flow out to the other trees in the forest, "activate" them too and show how the "Wood Wide Web" connected everything. Also, your hands touching the tree would get loaded up as well and you would be able to send that energy through the air (like the pheromones the trees use) and "activate" the trees it touched. For this, I created trees and roots etc. in a style like the above picture where all the "strokes" were lines. This worked really great as an NPR style since the strokes were there in space and not just painted on top of some 3D geometry. Since Unity does not really import lines, Sascha from Invisible Room created a Json exporter for Houdini and a Json Importer for unity to get the lines and their attributes across. In Unity, he then created the polyline geometry on the fly by extrusion, using the Houdini generated attributes for colour, thickness etc. To keep the point count down, I developed an optimiser in Houdini that would reduce the geometry as much as possible, remove very short lines etc. In Unity, one important thing was, to find a way to antialias the lines which initially flickered like crazy - Sascha did a great job there and the image became really calm and stable. I also created plants, hands, rocks etc. in a fitting style. The team at Invisible Room took over from there and did the Unity part. The final result was shown with a Vive Pro with attached Leap Motion Controller fed by a backpack-computer. I was rather adverse to VR before this project, but I now think that it actually is possible to create very calm, beautiful and intimate experiences with it that have the power to really touch people on a personal level. Interesting times :-) Cheers, Tom
  8. 3 points
    Check out my latest project - creating an open library full of learning resources about various areas of VFX. It has many Houdini-related presentations and theses. library: https://github.com/jtomori/vfx_good_night_reading blog post: https://jurajtomori.wordpress.com/2018/06/11/learning-resources-about-vfx-and-cg/
  9. 3 points
    Tank Tread: PS: you are welcome to add your wacky brain-short-circuitings to this thread... the more the merrier... vu_MOPs_Tank_Tread.hipnc
  10. 3 points
    Houdini tip | Open parameter path in file browser
  11. 3 points
    Linear algebra book online with interactive examples: http://immersivemath.com/ila/ Haven't gone through the lot but looks helpful.
  12. 3 points
    here you go found_overlap_example_toadstorm.hip
  13. 2 points
    Just to make this clear: Manu & I (Entagma) are not planning on changing the content we're creating, nor our format. Rest assured there will be VEX tutorials. Also, if you feel you need an extra dose of VEX - we're currently running a VEX-only course on our Patreon. Cheers, Mo
  14. 2 points
    You're losing sight of the bigger picture here, which is to create art. FX TDs are by definition going to be on the technical side of things, but their goal is to facilitate the creation of art. The final image is what matters, 99% of the time. People with engineering mindsets sometimes like to get caught up in the "elegance" or "physical correctness" of their solutions, but that stuff rarely (if ever) matters in this field. Rotating an object is conceptually a simple thing, but it turns out that there's quite a bit of math involved. Is it really insulting anyone's intelligence not to assume that every artist is willing to study linear algebra just to rotate a cube on its local axis? I do know how to do this, and I still don't want to have to write that code out every single time. It's a pain in the ass! Creating a transform matrix, converting to a quaternion, slerping between the two quaternions, remembering the order of multiplication... remembering and executing these steps every time gets in the way of exploration and play. Besides, all of that is only possible because SESI wrote a library of functions to handle it. Should we be expected to also write our own C++ libraries to interpolate quaternions? Should we be using Houdini at all, instead of writing our own visual effects software? Who engineered the processor that you're using to compute all this? This is a rabbit hole you'll never escape from. Anyways, Entagma and MOPs are not officially affiliated at all, so Entagma's core mission of reading white papers so you don't have to is unlikely to change.
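    (For the curious, here is roughly what that by-hand version looks like - a minimal VEX sketch of the matrix -> quaternion -> slerp chain described above; the spare parameters angle and blend are made up for illustration:)

    // Primitive wrangle over packed prims: rotate each piece around its
    // local Y axis by blending orientations with a quaternion slerp.
    matrix3 xform = primintrinsic(0, "transform", @primnum);
    vector4 rest = quaternion(xform);
    vector4 target = qmultiply(rest, quaternion(radians(chf("angle")), {0, 1, 0}));
    vector4 blended = slerp(rest, target, chf("blend"));
    setprimintrinsic(0, "transform", @primnum, qconvert(blended));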
  15. 2 points
    SplinePush: (screw thread, spiral, helix) While you can use a Boolean, it might get messy... this MOPs method will always give you quads. Live update with height, turns, push amount. You can see this is 'modelling'... nothing to do with mograph... but is that going to stop me from using MOPs? Absolutely NOT... it's just a (great) tool... use it to the best of your ability instead of "oohh, but that's not VEX... not the real Houdini way..." vu_MOPs_spiralPush.hipnc
  16. 2 points
  17. 2 points
    Mountain SOP works with normals. Just create a circle with normals on the ZX plane. mountainCircle.hipnc
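    (If your circle comes in without point normals, a one-line point wrangle before the Mountain SOP can supply them - a minimal sketch assuming the circle lies on the ZX plane:)

    // Point wrangle: give every point an up-facing normal for Mountain to displace along.
    v@N = {0, 1, 0};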
  18. 2 points
    just a handful of nodes with MOPs...(5 nodes...that's one hand) PolywireWidth.hipnc
  19. 2 points
    Longest Axis, Octant, Boxify, Hexplanar Mapping
  20. 2 points
    I found this lying around on my hard drive and wanted to throw it up so it didn't get lost. It's a simple effect with VDBs cutting away a surface, then using those cut points to render concentric circles at render time. I've seen this effect before; this is my attempt at it. Hips can be grabbed over here. Don't forget to re-render the ROPs that generate the point data for mantra, otherwise you'll get a black surface at render time.
  21. 2 points
    My take on the diffusion effect. You can take this as far as you want. Hope my paper and hip files help some people! https://www.artstation.com/artwork/ExrXK 2DAE08-VFX_Sim2_Exam_Declercq_Mante_Effect01.hipnc 2DAE08-VFX_Sim2_Exam_Declercq_Mante_Effect02.hipnc 2DAE08-VFX_Sim2_Exam_Declercq_Mante_Effect03.hipnc 2DAE08-VFX_Sim2_Exam_Declercq_Mante_Effect04.hipnc 2DAE08-VFX_Sim2_Exam_Declercq_Mante_Effect05.hipnc 2DAE08-VFX_ Sim2_Exam_Declercq_Mante.pdf
  22. 2 points
    I think it's because "id" is recognised automatically in VEX as an integer attribute. As the rand() function returns a float, this gets truncated, resulting in 0. If you explicitly cast your "id" attribute as a float, I think you'll get what you're after. Try f@id = rand(@ptnum); The reason it works for @myId is that user-defined variables default to floats if not explicitly cast. If you change it to i@myId = rand(@ptnum) it will return 0 for the same reason as stated above. Hope that helps. Cheers, WD
  23. 2 points
    Although clustering might work, I would suggest splitting the shot into 2 simulations: 1) pyro coming out of the engines, 2) the big pyroclastic area "on" the ground. This way your containers are efficient and you have way more control. You could still use clustering for simulation 1), though. I still might have a look at the file in the evening if I have time.
  24. 2 points
    Working on a procedural cave generator using input curves for the base shape and cellular automata. The goal is for them to be game-engine ready with textures. What do you think?
  25. 2 points
    I found the solution.
    1) Create a number of points equal to the number of total packed prims you have in your sim (not the number of unique pieces; for my building it was around 18,000). I used the Points Generate SOP for this. Each point corresponds to a packed prim. Drop down a wrangle and plug your points into the first input, and attach your simulated packed prims to the second input. For each point, you will create 3 new attributes: v@pivot, 3@transform, and s@name. The pivot and the transform are prim intrinsics, and you can copy them from the corresponding packed prim attached to the second input (corresponding meaning the one indexed at @ptnum). Also copy the packed prim's position and piece string attribute (@name_orig in the tutorial) onto your point. You can use the @name to create a new attribute called s@instancefile that points to wherever that particular piece's .rs proxy file is located on disk (this of course could have been done in one step, but I like to break it up). Now you have all the attributes on your points that Redshift needs to find and instance your proxies.
    2) Make sure to add the RS OBJ parameters to whichever object contains your new instancing points, and BE SURE to untick the box under Instancing that says 'Ignore Pivot Point Attribute'. And you're done!
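    (A minimal sketch of step 1 as a point wrangle - points from the Points Generate SOP in input 0, simulated packed prims in input 1; the proxy path here is hypothetical, so point it at wherever your own .rs files live:)

    // Point wrangle: copy the matching packed prim's transform data onto this point.
    v@pivot = primintrinsic(1, "pivot", @ptnum);
    3@transform = primintrinsic(1, "transform", @ptnum);
    v@P = point(1, "P", @ptnum);
    s@name = prim(1, "name_orig", @ptnum);
    // Build the path to this piece's Redshift proxy on disk (hypothetical location).
    s@instancefile = "$HIP/proxy/" + s@name + ".rs";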
  26. 2 points
    Hello everyone, For the past 2 years I've been learning Houdini with the hope of transitioning from working full-time as an HTML5 developer into 3D interactive visualisation. I'm using Houdini to further my understanding of 3D, realtime, VFX, AR, VR and machine learning. It is an amazing experience learning the software. This is a video showing the creation of a mech: https://vimeo.com/258470468 I then derived a realtime model from the high-resolution version to create a 3D web presentation with options to alter the environment lighting, material and accessories: https://playcanv.as/p/zBVh4yEF Any feedback from the Houdini community would be amazing! Thanks, David.
  27. 2 points
    Solitude posted a nice Mantra setup that shows how to advect Cd through a volume. I went ahead and ported it to Redshift for rendering. Now you can use an image to colorize your smoke. ap_rs_solitude_pyro_cd_060118.hiplc
  28. 2 points
    maybe this link can help you
  29. 2 points
    The backticks remove the ambiguity of whether the string field represents a true string or an expression. There's no ambiguity in float and int fields because all the characters must be numeric (or numeric-related). If you're not a fan of the backticks, you can key the string parameter, then toggle to expression mode by LMB-clicking on the parm label, then enter the expression. Keying and switching to expression mode removes that ambiguity.
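    (A generic illustration, not from the original post: in a string parameter, plain text stays literal while the backticked part is evaluated as an expression.)

    geo_`padzero(4, $F)`.bgeo   ->   geo_0001.bgeo (on frame 1)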
  30. 2 points
    I use Houdini to understand math basics for myself, and VEX is a very handy tool for that: you are able to build 3D elements with math on the fly.
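    (As a small illustration of that idea - a detail wrangle that builds a helix from nothing but math; the constants are arbitrary:)

    // Detail wrangle: generate a parametric helix point by point.
    int n = 200;
    for (int i = 0; i < n; i++) {
        float t = i / float(n) * 8 * PI;  // four full turns
        addpoint(0, set(cos(t), t * 0.05, sin(t)));
    }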
  31. 2 points
    Here are two expressions you can use to estimate how many voxels your current settings generate - one for pyro and one for FLIP. This can help in dialing in RAM usage.

    Pyro voxel estimate (apply to the SmokeObject):
    ch("sizex")/ch("divsize")*ch("sizey")/ch("divsize")*ch("sizez")/ch("divsize")

    FLIP voxel estimate (apply to the FlipFluidObject):
    ch("../flipsolver1/limit_sizex")/ch("particlesep")*ch("../flipsolver1/limit_sizey")/ch("particlesep")*ch("../flipsolver1/limit_sizez")/ch("particlesep")

    The expression divides the box/domain size specified to contain the volume by the division size on each axis. How to install: select the fluid object and click the gear icon to add a new parameter to the node. Add a float value and place it at the top, then paste the expression into the field. Click on the label to lock it in and you should see a number in scientific notation. At the end of the scientific notation is typically a +06, +07, or +08: +06 means x 1 million, +07 means x 10 million and +08 means x 100 million voxels. So a value of 1.95313e+06 is read as roughly 1.95 million voxels. The expressions expect nodes to be named the way the shelf tools generate them, so if you use the shelf tools they should work. If you have custom node names in your network, adjust the expressions as needed.
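    As a quick sanity check of the math: a 2 x 2 x 2 box with a division size of 0.02 gives 100 x 100 x 100 = 1,000,000 voxels, which would show up in the field as 1e+06. Halving the division size to 0.01 gives 200 x 200 x 200 = 8,000,000 voxels (8e+06) - eight times the memory, which is why small changes to division size matter so much.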
  32. 2 points
    Heyya! Over the past couple of days I've been building and extending this - at its core very simple - noise generator tool. It's called, incredibly intuitively: Noiser. I've gotten quite sick of always doing the same simple noise VOP over and over again, so I built this nifty tool that saves me a small but accumulating amount of time (and energy) every time I need some noise. I'm really fond of it, and it rarely takes me more than a couple of minutes into any project before I drop it down. Here's a quick video demonstration: https://vimeo.com/271007816 and a simple demo screen of the defaults: And that's the relatively simple setup. I hope someone else will find it as useful as I do! Cheers, Martin PS: I just found a volume-noise tool in my OTLs folder, so I thought I'd share this as well. Practically the same thing, working for both SDFs and volumes (VDB & primitive). noiser.hda vol_noise.hda
  33. 2 points
    You can also add the point attribute i@found_overlap = 1 to your packed RBDs before the dopnet; this will make the solver try to resolve overlaps.
  34. 2 points
    Phew, that was harder than expected! A few problems arose that, for now, I kinda dodged, or more accurately diverted onto the user to fix. Easy fix, do not worry (see file). Here is working code, but not usable code: working because it actually does the job, not usable because of the time it takes to do so. For 104 pieces and 96 frames, it took 50 seconds. If I recall correctly, for the whole 240 frames it was something like 6 minutes. It works, but it's too long. Wayyy too long.
    EDIT: I must say I haven't tried sending the result to Maya or other packages yet. It's supposed to work, but eh, not tested.
    EDIT2: At the current moment, importing into Maya with FBX works properly. The only drawback is that it floods the Outliner with a bunch of skinClusters and tweakSets. I'm not very used to the Maya way, so if someone could tell me if there is a way to either get rid of them in Maya or at export from Houdini, or tell me that it's normal, I'd be grateful. P.S. Manually deleting the tweakSets doesn't seem to do anything.
    The next step is taking the problem from before, the one I diverted onto the user, and trying to correct it automatically. See code comments and file for reference.

    node = hou.pwd()
    obj = hou.node("/obj")

    def extractEulerRotates(self, rotate_order="zyx", thePivot=(0,0,0)):
        # Thanks to the Houdini help page for that.
        return hou.Matrix4(self.extractRotationMatrix3()).explode(rotate_order=rotate_order, pivot=thePivot)["rotate"]
    # Currently not used, as there is a line that does exactly that (a shorter line, as this is still only one line... whatever xD)

    def createGeomNodes(pieceName, masterSubnet, boneParent, workingNode, geoSubnet):
        # Creates a bone and a geometry node fetching the simulation's geometry,
        # and then skins the geometry to the bone with a Capture Proximity.
        currentBone = masterSubnet.createNode("bone", str("bone_" + pieceName))
        currentBone.setFirstInput(boneParent)
        currentBone.moveToGoodPosition()
        #initialPosition = workingNode.points()[loopValue].attribValue("P") # Keeping that for reference.
        #theFullTransform = workingNode.prims()[loopValue].fullTransform()
        #initialRotation = extractEulerRotates(theFullTransform)
        skinnedGeo = geoSubnet.createNode("geo", pieceName)
        skinnedGeo.moveToGoodPosition()
        skinnedGeo.deleteItems(skinnedGeo.allSubChildren()) # Removes the file node
        objectMergeNode = skinnedGeo.createNode("object_merge")
        objectMergeNode.parm("objpath1").set(str(workingNode.sopNode().path()))
        deleteNode = skinnedGeo.createNode("delete")
        deleteNode.setFirstInput(objectMergeNode)
        deleteNode.moveToGoodPosition()
        deleteNode.parm("group").set("@name=" + pieceName) # Putting the piece's name in the group to keep only this one
        deleteNode.parm("negate").set(1) # Set to Delete Non-Selected
        deleteNode.parm("entity").set(1) # Set to Points
        timeShiftNode = skinnedGeo.createNode("timeshift")
        timeShiftNode.setFirstInput(deleteNode)
        timeShiftNode.moveToGoodPosition()
        timeShiftNode.parm("frame").deleteAllKeyframes() # Remove the expression already present
        timeShiftNode.parm("frame").set(1) # Just to be sure, manually set the frame parameter to 1. Could be useless though
        unpackNode = skinnedGeo.createNode("unpack")
        unpackNode.setFirstInput(timeShiftNode)
        unpackNode.moveToGoodPosition()
        captureProximNode = skinnedGeo.createNode("captureproximity")
        captureProximNode.setFirstInput(unpackNode)
        captureProximNode.moveToGoodPosition()
        captureProximNode.parm("rootpath").set(str(currentBone.path())) # Set the rootpath to only one bone and not the hierarchy. Easier skinning.
        deformNode = skinnedGeo.createNode("deform")
        deformNode.setFirstInput(captureProximNode)
        deformNode.moveToGoodPosition()
        deformNode.setDisplayFlag(True)
        deformNode.setRenderFlag(True)
        '''
        # Applying some color to the skinned geometry. Used for debug
        attribWrangle = skinnedGeo.createNode("attribwrangle", "color")
        attribWrangle.setFirstInput(deformNode)
        attribWrangle.parm("snippet").set("@Cd = {0,0,1};")
        attribWrangle.moveToGoodPosition()
        attribWrangle.setDisplayFlag(True)
        attribWrangle.setRenderFlag(True)
        # '''
        return currentBone # I need the created bone for later reference

    def bakePackedAnim():
        # Saving out some time-related variables
        intialFrame = hou.intFrame()
        startFrame = int(hou.hscriptExpression("$RFSTART")) # Don't know how to do it in Python
        endFrame = int(hou.hscriptExpression("$RFEND")) # Don't know how to do it in Python
        hou.setFrame(startFrame)
        workingNode = hou.node("/obj/simulated_geo/OUT_script").geometry() # Gets the geometry. Change this to point to your geometry. <----
        masterSubnet = obj.createNode("subnet", "baked_animation") # Will contain all the bones and geometry
        masterSubnet.moveToGoodPosition()
        boneParent = masterSubnet.createNode("null", "Parent") # All bones will be parented to this null
        boneParent.moveToGoodPosition()
        geoSubnet = masterSubnet.createNode("subnet", "geometry") # Will contain all the geometry
        geoSubnet.moveToGoodPosition() # I create it here for the moveToGoodPosition to place it where I want
        geoSubnet.setPosition((0,7)) # And I move it slightly up. This is just to have it at a nice place in the node editor :)
        # Another nicety would be to set the visible bounds for the view in the node editor. But I don't know how to, and it's not very important.
        boneList = []
        for fragments in workingNode.points():
            boneList.append(createGeomNodes(fragments.attribValue("name"), masterSubnet, boneParent, workingNode, geoSubnet))
            # There is no clever thinking into the order of the arguments.
            # I just made a function and passed the arguments as the errors showed up. =P

        # Transfers the animation from the specified geometry to the bone
        # I plan to make this next part into a VEX wrangler instead. The previous part is easier done in Python and is acceptably fast,
        # but for the next part, Python is very shitty, speed-wise. Will see.
        for i in xrange(startFrame, endFrame+1): # For some reasons, xrange goes from the correct start value to the end value, minus 1. Strange.
            for j in xrange(0, len(boneList)): # But xrange works here. Hmmmm.
            #for j in xrange(9, 10): # Used for debug
                currentBone = boneList[j]
                hou.setFrame(i)
                initialPosition = hou.Vector3(hou.node(geoSubnet.path() + "/piece" + str(j) + "/timeshift1").geometry().points()[0].attribValue("P"))
                theFullTransform = workingNode.prims()[j].fullTransform()
                #thePosition = workingNode.points()[j].attribValue("P")
                #initialPosition = hou.Vector3(workingNode.points()[j].attribValue("P"))
                #theRotation = extractEulerRotates(theFullTransform, initialPosition)
                #thePivot = hou.Vector3(workingNode.prims()[j].intrinsicValue("pivot"))
                thePosition = theFullTransform.extractTranslates("srt") # Get the transform information from the Identity Matrix
                # thePosition += initialPosition # Move to the right position
                # The previous line is required if the identity matrix hasn't been correctly set. See wall of text below.
                # As of now, the position is correct.
                # BUT THE ROTATION ISN'T
                # The problem is that it still rotates around the old pivot point, the one before the "thePosition += initialPosition" line.
                # I've got to figure out how to make the rotation rotate around another center
                # For now, I'll keep the fix as is and go on with the second option
                # The fix is in simulate_geo, in the input 0 of the switch
                # All it does is correctly populate the identity matrix before the simulation (see comment just before this wall of text)
                # The second option is doing this after the simulation, in a for-each loop nested into the HDA that this will become
                # I'm letting all the commented tests here for future reference if needed.
                # The problem is in Python, this code is utterly slow
                # For 104, 96 frames, it took 50 seconds. That's wayyy too much for wayyy too few pieces.
                # All the commented code is some tests to manually rebuild the identity matrix to be able to have the right rotation.
                # Without success.
                # Next step is described above.
                #print theFullTransform
                #theFullTransform.setAt(3,0,thePosition[0])
                #theFullTransform.setAt(3,1,thePosition[1])
                #theFullTransform.setAt(3,2,thePosition[2])
                #print theFullTransform
                #print "\n"
                #initialPosition = hou.Vector3(workingNode.points()[j].attribValue("P"))
                #initialPosition = hou.Vector3(thePosition)
                #print initialPosition
                #theRotation = theFullTransform.extractRotates(transform_order="srt", rotate_order="zyx", pivot=initialPosition)
                #initialPosMatrix = hou.hmath.buildTranslate(initialPosition)
                #modifiedMatrix = theFullTransform + initialPosMatrix
                #print modifiedMatrix
                theRotation = theFullTransform.extractRotates(transform_order="srt", rotate_order="zyx")
                # Position
                key = hou.Keyframe(thePosition[0])
                currentBone.parm("tx").setKeyframe(key)
                key = hou.Keyframe(thePosition[1])
                currentBone.parm("ty").setKeyframe(key)
                key = hou.Keyframe(thePosition[2])
                currentBone.parm("tz").setKeyframe(key)
                # Rotation
                key = hou.Keyframe(theRotation[0])
                currentBone.parm("rx").setKeyframe(key)
                key = hou.Keyframe(theRotation[1])
                currentBone.parm("ry").setKeyframe(key)
                key = hou.Keyframe(theRotation[2])
                currentBone.parm("rz").setKeyframe(key)

        hou.setFrame(intialFrame)

    bakePackedAnim() # Would need to create a UI or a button for convenience, calling this out.

    packed_anim_baker.hip
    EDIT: For those who don't have access to Houdini FX, I'll explain the "diverted-on-the-user" fix. Just before sending the geometry to the simulation, drop down an attribute wrangle, store the current position in a variable, and set the position to the origin.

    v@oldP = @P;
    @P = {0,0,0};

    Then unpack, then pack (make sure to have the same amount of pieces after repacking). This little ping-pong game is absolutely necessary. I suggest using the probably already-present name attribute in the pack node, and transferring it too, so that you have a name primitive attribute. Then place an attribute promote to move the name attribute from primitives to points. Then place an attribute copy with the attribpromote wired into its left input and the attribute wrangle wired into its right input. Use "name" in Attribute to Match and "oldP" in Attribute Name. It's then just a matter of placing an attribute wrangle under all that, like so:

    v@P = v@oldP;

    Send that to the simulation, and the code will be all too happy to spit out correct positions and rotations when you run it later on! All it really does is correctly populate the identity matrix (see the Geometry Spreadsheet, intrinsic "PackedFullTransform"). Without the fix the position is all zeros, but with it, it has the correct initial positions and will be correctly updated. Yay!
  35. 2 points
    Hey, old thread, but figured I'd chime in. If you are looking for something that reads only a random nearby point, try this in a wrangle node - it's smart enough to detect second-input geometry (pic 1). Just make sure to hit the Create Spare Parameters button to the right of the wrangle node's VEXpression parameter, then set maxLineLength and maxFindCount up to find nearby points using pcfind.

    int read = 0;
    if (npoints(1) > 0) {
        read = 1;
    }
    int pts[] = pcfind(read, "P", v@P, chf("maxLineLength"), chi("maxFindCount"));
    int l = len(pts);
    float fl = float(l);
    int randomConnect = chi("randomConnect");
    int rander = pts[int(random(@ptnum*fl + randomConnect) * fl * fl) % l];
    vector curPos = attrib(read, "point", "P", rander);
    int to = addpoint(0, curPos);
    addprim(0, "polyline", @ptnum, to);

    Since I'm not sure of the end goal, I'll also share this line generator I wrote. It connects any found points and supports second-input detection (pic 2). Same as above, Create Spare Parameters. Use keepPointCount to not create needless amounts of duplicate points.

    int drawLine(vector f; vector t){
        int base = addpoint(0, f);
        int to = addpoint(0, t);
        int line = addprim(0, "polyline", base, to);
        return line;
    }
    //-----------------------------
    int read = 0;
    if (npoints(1) > 0) {
        read = 1;
    }
    int maxLines = chi("maxLineCount");
    float minLen = chf("minLineLength");
    int pts[] = pcfind(read, "P", v@P, chf("maxLineLength"), chi("maxFindCount"));
    int randomConnect = chi("randomConnect");
    int keepPointCount = min(1, max(0, chi("keepPointCount")));
    int runner = 0;
    vector curPos;
    int pt;
    if (randomConnect == 0) {
        for (int x = 0; x < len(pts); ++x) {
            pt = pts[x];
            if (runner > maxLines) {
                break;
            }
            curPos = attrib(read, "point", "P", pt);
            if (length(curPos - v@P) > minLen && (@ptnum < pt || read)) {
                if (keepPointCount) {
                    int to = pt;
                    if (read) {
                        to = addpoint(0, curPos);
                    }
                    addprim(0, "polyline", @ptnum, to);
                } else {
                    drawLine(v@P, curPos);
                }
                runner++;
            }
        }
    } else {
        int l = len(pts);
        float fl = float(l);
        int rander = pts[int(random(@ptnum*fl + randomConnect) * fl * fl) % l];
        curPos = attrib(read, "point", "P", rander);
        if (keepPointCount) {
            int to = rander;
            if (read) {
                to = addpoint(0, curPos);
            }
            addprim(0, "polyline", @ptnum, to);
        } else {
            drawLine(v@P, curPos);
        }
    }
  36. 2 points
    Hey James, I would approach it in a different way, check it out: first of all, packed rigid bodies act like particles, so you can use the POP wind instead of the wind force (and all the other POP forces, which gives us a lot of force variety). Here's how you can set this up:
    To make a statement run multiple times it must be inside the DOP network (use a POP wrangle for this). I don't know how to make the pieces scale up or down using VEX (never thought of that), so I would do it using the Primitive SOP. To use the Primitive SOP you will need a SOP solver, and to use the SOP solver with the Bullet solver you must connect both using a Multi Solver (I know, it looks complex, but it's a very common technique - you will see yourself doing a lot of multisolvers in the future). Here's how it should look:
    Inside the SOP solver you can set things up like this and use the scale parameter to scale individual pieces based on some other attribute (age in this situation). (Inside the SOP solver you can do the stuff you'd do in SOP context, affecting the DOP net's geometry.)
    It looks like it's working fine now. I hope this makes some things clearer for you.
    Cheers, Alvaro
    Plant Debris scale_fix_02.hipnc
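    (If you did want a VEX route instead of the Primitive SOP, a primitive wrangle inside the same SOP solver could scale the packed pieces a little each step - a sketch, not what the posted file does; the decay factor is arbitrary:)

    // Primitive wrangle in the SOP solver: shrink each packed piece per step.
    // Because the solver feeds back on itself, the scaling compounds over time.
    float decay = 0.98;
    matrix3 xform = primintrinsic(0, "transform", @primnum);
    setprimintrinsic(0, "transform", @primnum, xform * decay);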
  37. 2 points
    - hexadecimal color input in the color picker
    - glTF 2.0 import / export
    - better type / font / SVG support for motion design
    - some love for the traditional deformers like twist deform / wire deform to make them more straightforward / user friendly (I still do twist in VOPs in H...)
    - a visual preview of shaders coming from external render engines in the shader palette (Octane / Redshift / Arnold / PRMan)
  38. 2 points
    If you convert your vel volumes to VDB (convertvdb + vdbvectormerge) you can access the volume data by name directly in a gasfieldwrangle inside DOPs, and add them all within a loop:

    // add all VDB velocity volumes to vel
    int clusters = 10;
    for (int i = 0; i < clusters; i++) {
        v@vel += volumesamplev(0, "vel_" + itoa(i), @P);
    }
  39. 2 points
    Particles are just points and are treated as such in the solver. Sourcing geo adds primitives and so on, but the solver doesn't care about them. This is why they don't get oriented according to the changing orient attrib. See the file for 2 ways to solve this. dm_popSpin.hip
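    (For reference, one common fix - without claiming it's what the file does - is to update the orient attribute from a POP wrangle each timestep; spin_speed here is a made-up spare parameter in radians per second:)

    // POP wrangle: spin each particle around its local Y axis over time.
    vector4 spin = quaternion(chf("spin_speed") * f@TimeInc, {0, 1, 0});
    p@orient = qmultiply(p@orient, spin);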
  40. 2 points
    "Contours" Experimenting with contours on a terrain mesh. Rendered in Redshift and post-processed in Lightroom: And with different post in Luminar 2018: Cheers, Tom
  41. 2 points
    "Cubicles" Same structure as above but rendered semi-opaque in Redshift: And a detail view: Cheers, Tom
  42. 2 points
    You can add the SSS Samples property to geometry objects: Now when you render, you'll get more samples on the objects where you need it. Pixel Samples is one property that can only live at the object level; there are probably others, but that's the only one I can remember. Here's some documentation: http://www.sidefx.com/docs/houdini/props/_index per_obj_sss.hipnc
  43. 2 points
    This approach is a little heavy, but it works. I tried cunning things with carves and the adjustPrimLength that's been discussed here, but the problem is the polylines have all sorts of orientations, so growing them in that fashion often makes them grow opposite to the overall flow. The base idea here is the same as the infection/growth stuff that all the kids are doing nowadays. Growing_lines_between_points_me.hipnc
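    (The infection idea in a nutshell - a minimal point wrangle inside a Solver SOP, sketched under the assumption of an i@infected seed attribute and made-up radius/chance parameters; the posted file is more involved:)

    // Point wrangle in a Solver SOP: uninfected points catch the infection
    // from any already-infected point within a search radius.
    if (i@infected == 0) {
        int pts[] = pcfind(0, "P", v@P, chf("radius"), 8);
        foreach (int pt; pts) {
            if (point(0, "infected", pt) == 1 && rand(@ptnum + @Frame * 0.136) < chf("chance")) {
                i@infected = 1;
                break;
            }
        }
    }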
  44. 2 points
    I missed the one without any matrix (angle / axis -> quaternion). All are simple anyway... rotation_matrix_quaternion_ortho_vectors.hip
  45. 1 point
    What a great set of tools. Thank you guys. Love the logo as well!
  46. 1 point
    Hello, check out my latest tool, which I would like to share with you: batch textures conversion. Find out more on GitHub or in my blog post. https://github.com/jtomori/batch_textures_convert https://jurajtomori.wordpress.com/2018/05/25/batch-textures-conversion-tool/
  47. 1 point
    Here is a Python script I wrote, Images As Planes, that reads the images in a folder and creates Redshift materials with texture maps for each one. If you are looking into creating materials for Redshift via Python, consider this example code. Here is a procedural planet rendered in Redshift; it routes VEX/VOP-generated noise attributes to Redshift materials. ap_rs_procedural_planet_050517.hiplc
  48. 1 point
    Here is how to fix the Divide SOP's limitation with open geometry:

    // Point wrangle.
    #define ALMOST_ZERO 1e-6

    vector4 plane = chv("../clip2/dir");
    plane.w = ch("../clip2/dist");
    float dist = abs(@P.x * plane.x + @P.y * plane.y + @P.z * plane.z - plane.w);
    if (dist > ALMOST_ZERO) {
        removepoint(0, @ptnum);
    }

    With it, this should be a really fail-proof solution compared with the Cookie SOP. But in 15.5 there is a new Dissolve SOP which is able to create curves from edges, and it works quite well - much better than the old one. Give it a try as well, if you are up to date. edge_to_curve.hipnc
    @NNois, I agree, many nodes have such "side features", like Facet's removing inline points by threshold or Divide's removing shared edges. I believe it is what you can get from the main algorithms for free. If you do something like shaders or VEX wrangles, you might notice that you can compute a lot of auxiliary and potentially useful data by outputting inner variables.
  49. 1 point
    And now I can convert to VDB and back to get my final surface. ap_ripple_test_across_3D_surface_VDB.hipnc
  50. 1 point
    1. In the animated version's sop_solver SOP, you are transferring the color from the previous frame. This is what allows Houdini to remember the color info from the previous frame.
    2. For position you need the live geometry - the input_1 node in the SOP solver. Take the position data from this node and use it to set the position of the previous-frame geometry. solverSOP_Solution.hipnc
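    (A minimal sketch of point 2 as a point wrangle inside the sop solver - assuming matching point counts, with the live animated geometry wired into input 1:)

    // Point wrangle: keep the colour accumulated on the previous frame,
    // but copy the position from the live animated geometry.
    v@P = point(1, "P", @ptnum);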