
Leaderboard


Popular Content

Showing most liked content since 05/24/2018 in all areas

  1. 20 points
    Pixelkram / Moritz S. (of Entagma) and I are proud to announce MOPs: an open-source toolkit for creating motion graphics in Houdini! MOPs is both a suite of ready-to-use tools for solving typical motion graphics problems, and a framework for building your own custom operators easily. More information is available from our website: http://www.motionoperators.com Enjoy!
  2. 16 points
    Since there's been a lot of talk around the web about graphics APIs this past week with Apple's decision to deprecate OpenGL in MacOS Mojave, I thought I'd take this opportunity to discuss the various graphics APIs and address some misconceptions. I'm doing this as someone who's used all versions of OpenGL from 1.0 to 4.4, and not with my SideFX hat on. So I won't be discussing future plans for Houdini, but instead will be focusing on the APIs themselves.
    OpenGL
    OpenGL has a very long history dating back to the 90s. There have been many versions of it, but the most notable ones are 1.0, 2.1, 3.2, and 4.x. Because of this, it gets a reputation for being old and inefficient, which is somewhat true but not the entire story. Certainly GL1.0 - 2.1 is old and inefficient, and doesn't map well to modern GPUs. But then in the development of 3.0, a major shift occurred that nearly broke the GL ARB (architecture review board) apart. There was a major move to deprecate much of the "legacy" GL functionality and replace it with modern GL features - and out of that kerfuffle the OpenGL core and compatibility profiles emerged. The compatibility profile added these new features alongside the old ones, while the core profile completely removed them. The API in the core profile is what people are referring to when they talk about "Modern GL".
    Houdini adopted modern GL in v12.0 in the 3D Viewport, and more strict core-profile-only support in v14.0 (the remaining UI and other viewers). Modern GL implies a lot of different things, but the key ones are: geometry data and shader data must be backed by VRAM buffers, shaders are required, and all fixed-function lighting, transformation, and shading is gone. This is good in a lot of ways. Geometry isn't being streamed to the GPU in tiny bits anymore and is instead kept on the GPU, the GL "big black box" state machine is greatly reduced, and there's a lot more flexibility in the display of geometry from shaders. You can light, transform, and shade the model however you'd like. For example, all the various shading modes in Houdini, primitive picking, visualizers, and markers are all drawn using the same underlying geometry - only the shader changes.
    OpenGL on Windows was actually deprecated decades ago. Microsoft's implementation still ships with Windows, but it's an ancient OpenGL 1.1 version that no one should use. Instead, Nvidia, AMD and Intel all install their own OpenGL implementations with their drivers (and this extends to CL as well).
    Bottlenecks
    As GPUs began getting faster, what game developers in particular started running into was a CPU bottleneck, particularly as the number of draw calls increased. OpenGL draw calls are fast (more so than DirectX), but eventually you get to a point where the driver code prepping the draw starts to become significant. More detailed worlds meant not only bigger models and textures, but more of them. So the GPU started to sit idle waiting on draws from the CPU, and that draw load began taking away from useful CPU work, like AI.
    The first big attempt to address this came in the form of direct state access and bindless textures. All resources in OpenGL are given an ID - an integer which you can use to identify a resource for modifying it and binding it to the pipeline. To use a texture, you bind this ID to a slot, and the shader refers to this slot through a sampler. As more textures were used and switched within a frame, mapping the ID to its data structure became a more significant load on the driver.
    Bindless does away with the ID and replaces it with a raw pointer. The second approach was to move more work to the GPU entirely, and GLSL compute shaders (GL4.4) were added, along with indirect draw calls. This allows the GPU to do culling (frustum, distance-based, LOD, etc.) with an OpenCL-like compute shader and populate some buffers with draw data. The indirect draw calls reference this data, and no data is exchanged between GPU and CPU. Finally, developers started batching up as much as possible to reduce the number of draw calls and make up for these limitations. Driver developers kept adding more optimizations to their API implementations, sometimes on a per-application basis. But it became more obvious that for realtime display of heavy scenes, and with VR emerging where a much higher frame rate and resolution are required, the current APIs (GL and DX11) were reaching their limit.
    Mantle, Vulkan, and DX12
    AMD recognized some of these bottlenecks, and the bottleneck that the driver itself was posing to GPU rendering, and produced a new graphics API called Mantle. It did away with the notion of a "fat driver" that optimized things for the developer. Instead, it was thin and light - and passed all the optimization work off to the game developer. The theory behind this is that the developer knows exactly what they're trying to do, whereas the driver can only guess. Mantle was eventually passed to Khronos, who develops the OpenGL and CL standards, and from that starting point Vulkan emerged. (DirectX 12 is very similar in theory, so for brevity's sake I'll lump them together here - but note that there are differences.)
    Vulkan requires that the developer be a lot more up-front and hands-on with everything. From allocating large chunks of VRAM and divvying it up among buffers and textures, to saying exactly how a resource will be used at creation time, to describing the rendering pipeline in detail, Vulkan places a lot of responsibility on the developer. Error checking and validation can be entirely removed in shipping products. Even draw calls are completely reworked - no more global state and swapping textures and shaders willy-nilly. Shaders must be wrapped in an object which also contains all their resources for a given draw per framebuffer configuration (blending, AA, framebuffer depths, etc.), and command buffers must be built ahead of time in order to dispatch state changes and draws. Setup becomes a lot more complicated, but it is also more efficient to thread (though the dev is also completely responsible for synchronization of everything, from object creation and deletion to worker and render threads). Vulkan also requires all shaders be precompiled to a binary format, which is better for detecting shader errors before the app gets out the door, but also makes generating them on the fly more challenging. In short, it's a handful and can be rather overwhelming.
    Finally, it's worth noting that Vulkan is not intended as a replacement for OpenGL; Khronos has stated as much since its release. Vulkan is designed to handle applications where OpenGL falls short. A very large portion of graphics applications out there don't actually need this level of optimization. My intent here isn't to discourage people from using Vulkan, just to say that it's not always needed, and it is not a magic bullet that solves all your performance problems.
    Apple and OpenGL
    When OSX was released, Apple adopted OpenGL as its graphics API.
    OpenGL was behind most of its core foundation libraries, and as such Apple maintained more control over OpenGL than on Windows or Linux. Because of this, the GPU vendors did not install their own OpenGL implementations as they did for Windows or Linux. Apple created the OpenGL frontend, and the driver developers created the back end. This was around the time of the release of Windows Vista and its huge number of driver-related graphics crashes, so in retrospect the decision makes a lot of sense, though that situation has been largely fixed in the years since.
    Initially Apple had support for OpenGL 2.1. This had some of the features of Modern GL, such as shaders and buffers, but it also lacked other features like uniform buffers and geometry shaders. While Windows and Linux users enjoyed OpenGL 3.x and eventually 4.0, Mac developers were stuck with a not-quite-there-yet version of OpenGL. Around 2012 Apple addressed this situation and released their OpenGL 3.2 implementation... but with a bit of a twist. Nvidia and AMD's OpenGL implementations on Windows and Linux supported the compatibility profile. When Apple released their GL3.2 implementation it was core profile only, and that put some developers in a tricky situation - completely purge all deprecated features and adopt GL3.2, or remain with GL2.1. The problem being that some deprecated features were actually still useful in the CAD/DCC universe, such as polygons, wide lines, and stippled lines/faces. So instead of the gradual upgrading devs could do on the other platforms, it became an all-or-nothing affair, and this likely slowed adoption of the GL3.2 profile (pure conjecture on my part). This may have also contributed to the general stability issues with GL3.2 (again, pure conjecture).
    Performance was another issue. Perhaps because of the division of responsibility between the driver developers at the GPU makers and the OpenGL devs at Apple, or perhaps because the driver developers added specific optimizations for their products, OpenGL performance on MacOS was never quite as good as on other platforms. Whatever the reason, it became a bit of a sore point over the years, with a few game developers abandoning the platform altogether. These problems likely prompted Apple to look at an alternate solution - Metal.
    Eventually Apple added more GL features up to the core GL4.1 level, and that is where it has sat until their announcement of GL deprecation this week. This is unfortunate for a variety of reasons - versions of OpenGL above 4.1 have quite a few features which address performance for modern GPUs and portability, and it's currently the only cross-platform API, since Apple has not adopted Vulkan (though a third-party MoltenVK library exists that layers Vulkan on Metal, it is currently a subset of Vulkan).
    Enter Metal
    Metal emerged around the time of Mantle, and before Khronos had begun work on Vulkan. It falls somewhere in between OpenGL and Vulkan - more suitable for current GPUs, but without the extremely low-level API. It has compute capability and most of the features that GL does, with some of the philosophy of Vulkan. Its major issues for developers are similar to those of DirectX - it's platform-specific, and it has its own shading language. If you're working entirely within the Apple ecosystem, you're probably good to go - convert your GL-ES or GL app, and then continue on. If you're cross-platform, you've got a bit of a dilemma.
    You can continue on business as usual with OpenGL, fully expecting that it will remain as-is and might be removed at some point in the future, possibly waiting until a GL-on-top-of-Metal API comes along or Apple allows driver developers to install their own OpenGL like Microsoft does. You can implement a Metal interface specific to MacOS, port all your shaders to Metal SL, and maintain them both indefinitely (Houdini has about 1200). Or you can drop the platform entirely. None of those seem like very satisfactory solutions.
    I can't say the deprecation comes as much of a surprise, with Metal development ongoing and GL development stalling on the Mac. It seems like GL was effectively deprecated years ago and this is just the formal announcement. One thing missing from the announcement was a timeframe for when OpenGL support will end (or if it will end at all). It does seem like Apple is herding everyone toward Metal, though how long that might take is anyone's guess.
    And there you have it, the state of graphics APIs in 2018 - from a near convergence of DX11 and GL4 a few short years ago, to a small explosion of APIs. Never a dull moment in the graphics world.
  3. 7 points
    Houdini implementation
  4. 5 points
    "The Tree" Another R&D image from the above VR project: The idea for the VR-experience was triggered by a TV-show on how trees communicate with each other in a forest through their roots, through the air and with the help of fungi in the soil, how they actually "feed" their young and sometimes their elderly brethren, how they warn each other of bugs and other adversaries (for instance acacia trees warn each other of giraffes and then produce stuff giraffes don't like in their leaves...) and how they are actually able to do things like produce substances that attract animals that feed on the bugs that irritate them. They even seem to "scream" when they are thirsty... (I strongly recommend this (german) book: https://www.amazon.de/Das-geheime-Leben-Bäume-kommunizieren/dp/3453280679/ref=sr_1_1?ie=UTF8&qid=1529064057&sr=8-1&keywords=wie+bäume+kommunizieren ) It's really unbelievable how little we know about these beings. So we were looking to create a forest in an abstract style (pseudo-real game-engine stuff somehow doesn't really cut it IMO) that was reminiscent of something like a three dimensional painting through which you could walk. In the centre of the room, there was a real tree trunk that you were able to touch. This trunk was also scanned in and formed the basis of the central tree in the VR forest. Originally the idea was, that you would touch the tree (hands were tracked with a Leap Motion controller) and this would "load up" the touched area and the tree would start to become transparent and alive and you would be able to look inside and see the veins that transport all that information and distribute the minerals, sugar and water the plant needs. From there the energy and information would flow out to the other trees in the forest, "activate" them too and show how the "Wood Wide Web" connected everything. Also, your hands touching the tree would get loaded up as well and you would be able to send that energy through the air (like the pheromones the trees use) and "activate" the trees it touched. For this, I created trees and roots etc. in a style like the above picture where all the "strokes" were lines. This worked really great as an NPR style since the strokes were there in space and not just painted on top of some 3D geometry. Since Unity does not really import lines, Sascha from Invisible Room created a Json exporter for Houdini and a Json Importer for unity to get the lines and their attributes across. In Unity, he then created the polyline geometry on the fly by extrusion, using the Houdini generated attributes for colour, thickness etc. To keep the point count down, I developed an optimiser in Houdini that would reduce the geometry as much as possible, remove very short lines etc. In Unity, one important thing was, to find a way to antialias the lines which initially flickered like crazy - Sascha did a great job there and the image became really calm and stable. I also created plants, hands, rocks etc. in a fitting style. The team at Invisible Room took over from there and did the Unity part. The final result was shown with a Vive Pro with attached Leap Motion Controller fed by a backpack-computer. I was rather adverse to VR before this project, but I now think that it actually is possible to create very calm, beautiful and intimate experiences with it that have the power to really touch people on a personal level. Interesting times :-) Cheers, Tom
  5. 4 points
    Probably because the case study ends up derailing the actual work.
  6. 4 points
    Houdini tool that expands on the functionality of polyexpand2d. Allows you to create detailed bevel profiles from polylines with clean, non-intersecting topology. I built it while trying to create a procedural gothic tracery system. I liked the topology created by polyexpand2d, but the divisions parameter and edgedist attribute just wouldn't let me get detailed enough bevel profiles. It would be great if SideFX could wrap something like this up into the polyexpand2d node itself as a third output option (e.g. offset curves | offset surfaces | bevelled). I hope someone finds it useful. mt_polyExpandPlus_1.0.hda
  7. 4 points
    Generating documentation for Houdini Python modules (toolutils and others) https://jurajtomori.wordpress.com/2018/05/28/houdini-tip-houdini-python-modules-documentation/
  8. 3 points
    Get your MOPs here: Font Blower FX: vu_GetOffTheBlower.hipnc
  9. 3 points
    Check out my latest project - creating an open library full of learning resources about various areas of VFX. It has many Houdini-related presentations and theses. library: https://github.com/jtomori/vfx_good_night_reading blog post: https://jurajtomori.wordpress.com/2018/06/11/learning-resources-about-vfx-and-cg/
  10. 3 points
    Tank Tread: PS: you are welcome to add to this thread of your wacky brainshortcircuitings..more the merrier... vu_MOPs_Tank_Tread.hipnc
  11. 3 points
    The backticks remove the ambiguity of whether the string field represents a true string or an expression. There's no ambiguity in float and int fields because all the characters must be numeric (or numeric-related). For example, typing `npoints("/obj/geo1")` into a string field evaluates the expression, while the same text without backticks is stored as literal characters. If you're not a fan of the backticks, you can key the string parameter, then toggle to expression mode by LMB clicking on the parm label, then enter the expression. Keying and switching to expression mode removes that ambiguity.
  12. 3 points
    Houdini tip | Open parameter path in file browser
  13. 3 points
    "Contours" Experimenting with contours on a terrain mesh. Rendered in Redshift and post-processed in Lightroom: And with different post in Luminar 2018: Cheers, Tom
  14. 2 points
    Working on a procedural cave generator using input curves for the base shape and cellular automata. The goal is for them to be game-engine ready with textures. What do you think?
  15. 2 points
    Okay, I think all of those group masking bugs are fixed. There are a few other fixes and features added as well; details are in our updates thread.
  16. 2 points
    Just to make this clear: Manu & I (Entagma) are not planning on changing the content we're creating, nor our format. Rest assured there will be VEX tutorials. Also, if you feel you need an extra dose of VEX - we're currently running a VEX-only course on our Patreon. Cheers, Mo
  17. 2 points
    You're losing sight of the bigger picture here, which is to create art. FX TD's are by definition going to be on the technical side of things, but their goal is to facilitate the creation of art. The final image is what matters, 99% of the time. People with engineering mindsets sometimes like to get caught up in the "elegance" or "physical correctness" of their solutions, but that stuff rarely (if ever) matters in this field. Rotating an object is conceptually a simple thing, but it turns out that there's quite a bit of math involved. Is it really insulting one's intelligence to not assume that every artist is willing to study linear algebra to rotate a cube on its local axis? I do know how to do this, and I still don't want to have to write that code out every single time. It's a pain in the ass! Creating a transform matrix, converting to a quaternion, slerping between the two quaternions, remembering the order of multiplication... remembering and executing these steps every time gets in the way of exploration and play. Besides, all of that is only possible because SESI wrote a library of functions to handle this. Should we be expected to also write our own C++ libraries to interpolate quaternions? Should we be using Houdini at all, instead of writing our own visual effects software? Who engineered the processor that you're using to compute all this? This is a rabbit hole you'll never escape from. Anyways, Entagma and MOPs are not officially affiliated at all, so Entagma's core mission of reading white papers so that you don't have to is unlikely to change.
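    For what it's worth, here is a minimal point-wrangle sketch of the steps listed above - build a rotation matrix, convert it to a quaternion, slerp, apply - using only stock VEX functions. The channel names (amount, angle, axis, pivot) are purely illustrative, not from any particular setup:

// Rotate points around a local axis/pivot by slerping toward a target rotation.
float  blend = chf("amount");                 // 0..1 blend amount
float  angle = radians(chf("angle"));         // target rotation in degrees -> radians
vector axis  = normalize(chv("axis"));        // local rotation axis
vector pivot = chv("pivot");                  // local pivot to rotate around

// Build a rotation matrix, then convert it to a quaternion.
matrix3 m = ident();
rotate(m, angle, axis);
vector4 q_target = quaternion(m);

// Slerp from the identity quaternion to the target, then apply the result.
vector4 q_ident = {0, 0, 0, 1};
vector4 q = slerp(q_ident, q_target, blend);
@P = qrotate(q, @P - pivot) + pivot;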
  18. 2 points
    SplinePush: (screw thread, spiral, helix) While you can use Boolean, it might get messy...this MOPs method will always give you quads. Live update with height, turns, push amt. You can see this is 'modelling'...nothing to do with mograph....but is that going to stop me from using MOPs ? Absolutely NOT...it's just a (great) tool...use it to the best of your ability instead of "oohh but that's not VEX...not real Houdini way..." vu_MOPs_spiralPush.hipnc
  19. 2 points
  20. 2 points
    Mountain SOP works with normals. Just create a circle with normals on the ZX plane. mountainCircle.hipnc
  21. 2 points
    just a handful of nodes with MOPs...(5 nodes...that's one hand) PolywireWidth.hipnc
  22. 2 points
    Longest Axis, Octant, Boxify, Hexplanar Mapping
  23. 2 points
    I found this lying around on my hard drive and wanted to throw it up so it didn't get lost. It's a simple effect with VDBs cutting away a surface, then using those cut points to render concentric circles at render time. I've seen this effect before; this is my attempt at it. Hips can be grabbed over here. Don't forget to re-render the ROPs that generate the point data for mantra. Otherwise you'll get a black surface at render time.
  24. 2 points
    My take on the Diffusion Effect. You can take this as far as you want. Hope my paper and hip files help some people! https://www.artstation.com/artwork/ExrXK 2DAE08-VFX_Sim2_Exam_Declercq_Mante_Effect01.hipnc 2DAE08-VFX_Sim2_Exam_Declercq_Mante_Effect02.hipnc 2DAE08-VFX_Sim2_Exam_Declercq_Mante_Effect03.hipnc 2DAE08-VFX_Sim2_Exam_Declercq_Mante_Effect04.hipnc 2DAE08-VFX_Sim2_Exam_Declercq_Mante_Effect05.hipnc 2DAE08-VFX_ Sim2_Exam_Declercq_Mante.pdf
  25. 2 points
    I think it's because "id" is recognised automatically in VEX as an integer attribute. As the rand() function returns a float, this gets truncated, resulting in 0. If you explicitly cast your "id" attribute as a float, I think you'll get what you're after. Try f@id = rand(@ptnum); The reason it works for @myId is because user-defined variables default to floats if not explicitly cast. If you change it to i@myId = rand(@ptnum) it will return 0 for the same reason as stated above. Hope that helps. Cheers, WD
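    A quick point-wrangle sketch of that behaviour (the "2" suffixes are only there so the differently typed bindings can coexist in one wrangle; they are not special names):

// "id" is on VEX's list of attribute names that default to integer,
// so the float returned by rand() gets truncated to 0:
@id = rand(@ptnum);      // int attribute -> always 0

// Explicitly binding as a float keeps the random value:
f@id2 = rand(@ptnum);    // float attribute -> values in 0..1

// User-defined names default to float, so this works as expected:
@myId = rand(@ptnum);    // float attribute -> values in 0..1

// ...and forcing an int binding truncates again:
i@myId2 = rand(@ptnum);  // int attribute -> always 0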
  26. 2 points
    Point cloud tools optimised for large datasets (400 million points upwards) - I have a 2.2 billion point dataset, so take a guess at the time for the PointCloudISO.
    - Filtering (outlier removal, feature estimation, registration)
    - Normal estimation
    - Point cloud sampling (RMLS)
    - Region segmentation (detection of planes like floor, walls, etc.)
    - Model fitting, maybe?
  27. 2 points
    Although clustering might work, I would suggest splitting the shot into 2 simulations: 1) pyro coming out of the engines, 2) the big pyroclastic area "on" the ground. This way your containers are efficient and you have way more control. You could still use clustering for the 1) simulation part, though. I still might have a look at the file in the evening if I have time.
  28. 2 points
    I found the solution.
    1) Create a number of points equal to the number of total packed prims you have in your sim (not the number of unique pieces; for my building it was around 18,000). I used the Points Generate SOP for this. Each point corresponds to a packed prim. Drop down a wrangle and plug your points into the first input. Attach your simulated packed prims to the second input. For each point, you will create 3 new attributes: v@pivot, 3@transform, and s@name. The pivot and the transform are prim intrinsics and you can copy them from the current packed prim attached to the second input (current meaning the one indexed at @ptnum). Also copy the packed prim's position and piece string attribute (@name_orig in the tutorial) onto your point. You can use the @name to create a new attribute called s@instancefile that points to wherever that particular piece's .rs proxy file is located on disk (this of course could have been done in one step, but I like to break it up). Now you have all the attributes on your points that Redshift needs to find and instance your proxies.
    2) Make sure to add the RS obj parameters to whichever object contains your new instancing points, and BE SURE to untick the box under instancing that says 'Ignore Pivot Point Attribute'. Anddd you're done!
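    A rough point-wrangle sketch of step 1 (input 0 = the generated points, input 1 = the simulated packed prims; the proxy directory path and the name_orig attribute name are placeholders to adjust to your own setup):

// Assumes the point count of input 0 matches the packed prim count of input 1.
v@pivot     = primintrinsic(1, "pivot", @ptnum);      // packed prim pivot intrinsic
3@transform = primintrinsic(1, "transform", @ptnum);  // packed prim 3x3 transform intrinsic

int pt = primpoint(1, @ptnum, 0);                     // the packed prim's single point
@P     = point(1, "P", pt);                           // copy its position
s@name = prim(1, "name_orig", @ptnum);                // piece name attribute from the tutorial

// Point the instance at that piece's Redshift proxy on disk (placeholder path).
s@instancefile = sprintf("/path/to/proxies/%s.rs", s@name);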
  29. 2 points
    Hi Mark, I have done a similar test a while back. There is a target (parent point) and points (children) coming towards the target (both have the same point count). When a child point acquires a target point, the other child points don't query the same point. I have attached the file; it runs pretty fast. Make sure the active distance is small. Since I have used an array, it is pretty heavy when there are more points to query at that instant (when the active distance is large). Also, you can disable attract in the SOP solver, get the position id (posid), and use POP Attract instead. nearbyattract.hipnc FX TD Dneg
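    Not the posted file, but a rough sketch of the idea in a detail wrangle (run over Detail, or inside a SOP solver if you want claims to accumulate over time). Input 0 holds the child points, input 1 the target points; the active_distance channel and posid attribute are illustrative names. The linear search through the claimed array is also what makes this approach heavy when many points are within the active distance:

// Greedy pass: each child claims its nearest still-unclaimed target within the active distance,
// so no two children end up attracted to the same target point.
float maxdist = chf("active_distance");
int   claimed[];                                    // target point numbers already taken

for (int child = 0; child < npoints(0); child++)
{
    vector p = point(0, "P", child);
    int candidates[] = nearpoints(1, p, maxdist);   // sorted nearest-first

    foreach (int tgt; candidates)
    {
        if (find(claimed, tgt) >= 0)                // already grabbed by another child
            continue;
        append(claimed, tgt);
        setpointattrib(0, "posid", child, tgt, "set");   // remember which target this child owns
        break;
    }
}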
  30. 2 points
    Hello everyone, For the past 2 years, I've been learning Houdini with the hope of transitioning from working full-time as an HTML5 developer into a 3D interactive visualiser. I'm using Houdini to further my understanding of 3D, realtime, VFX, AR, VR and machine learning. It is an amazing experience learning the software. This is a video showing the creation of a mech: https://vimeo.com/258470468 I then derived a realtime model from the high-resolution version to create a 3D web presentation with options to alter the environment lighting, material and accessory: https://playcanv.as/p/zBVh4yEF Any feedback from the Houdini community would be amazing! Thanks, David.
  31. 2 points
    Solitude posted a nice Mantra setup that shows how to advect Cd through a volume. I went ahead and ported it to Redshift for rendering. Now you can use an image to colorize your smoke. ap_rs_solitude_pyro_cd_060118.hiplc
  32. 2 points
    maybe this link can help you
  33. 2 points
    Just wanted to add another research paper to bump up the argument for a weighted straight skeleton implementation. http://peterwonka.net/Publications/pdfs/2011.TOG.Kelly.ProceduralExtrusions.TechreportVersion.final.pdf Covers the process that Tom Kelly and Peter Wonka developed for procedural extrusions using input guide curves:
  34. 2 points
    Hello, most of the .hip files from my video #7 are here. Enjoy! Note: thank you particuleskull, your hip (from November in this thread) helped me a lot with my grain R&D! elephant grain.hiplc grain for each voronoi pieces-v3.hiplc grain-tree.hiplc paint brush-v2+.hiplc sticky projection.hiplc
  35. 2 points
    "Shells of Light" My second piece with extreme DOF in Redshift. The basis is a shell made of wires. There are no lights other than a HDRI environment in the scene and no post effects are used other than color correction in Luminar2018 & Photoshop. All the structures are the result of the very shallow depth of field. Rendered at 10000 pixels square for high res printing. Took about 9 hours with a GTX 980 TI and a GTX 1080 TI at 32768 samples per pixel max. Prints: https://fineartamerica.com/featured/shells-of-light-thomas-helzle.html
  36. 2 points
    "Cubicles" Same structure as above but rendered semi-opaque in Redshift: And a detail view: Cheers, Tom
  37. 2 points
    You can add the SSS Samples property to geometry objects: Now when you render, you'll get more samples on the objects where you need it. Pixel Samples are one property that can only live at the object level; there are probably others, but that's the only one I can remember. Here's some documentation: http://www.sidefx.com/docs/houdini/props/_index per_obj_sss.hipnc
  38. 2 points
    I missed the one without any matrix (angle / axis -> quaternion). All are simple anyway... rotation_matrix_quaternion_ortho_vectors.hip
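    For reference, a matrix-free version in a point wrangle looks something like this (channel names are arbitrary):

// Build the quaternion straight from angle/axis - no rotation matrix needed.
float  angle = radians(chf("angle"));
vector axis  = normalize(chv("axis"));
vector4 q = quaternion(angle, axis);   // angle/axis -> quaternion
@P = qrotate(q, @P);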
  39. 1 point
    For anyone curious about this, I had success by using the old op:fullpath expression. In my case it looked like this: op:`opfullpath("../../cop2net1/Bump_map/OUT/")` I would love to know why I have to use that expression rather than just a relative path like ../../cop2net1/Bump_map/OUT/
  40. 1 point
    Seems like the Group in MOPs_Transform_Modifier is not working? Attached is an illustration: the left branch is the desired result, while on the right, done in a different way using the Group in the MOPs_Transform_Modifier, it doesn't have any effect. So the idea is, even if ALL points have their @mops_falloff established, I thought that if I restrict the effect to my Group selection in the Transform_Modifier, it should work? (And yeah, the gotcha with the group is that it has to be points, since MOPs works on a packed-points principle.) MOPs_Transform_Modifier_Group.hipnc
  41. 1 point
    Hi, fetch feedback should work. Prim_ForEach_FB.hipnc
  42. 1 point
    Mantra GPU rendering is the only feature I wish for.
  43. 1 point
    Hey, thought I'd share this here. Preview of tree and foliage creation and layout tools, now available on Gumroad. I've released them as "pay what you want" as my contribution to the community. I plan to keep supporting and improving these tools in the future, as well as releasing other tools. Let me know if you have any feedback/suggestions and I look forward to seeing what people create with them. Enjoy! https://gumroad.com/l/zWFNX
  44. 1 point
    "Brushed" I'm currently working on my first VR-Art-project with "Invisible Room" in Berlin and was exploring 3D-brushes from - in this case Tiltbrush - and Quill in our R&D. Here the result was exported as Alembic, procedurally coloured in Houdini and rendered with Redshift over a scanned-in rice-paper-texture: Interesting times... :-) Cheers, Tom
  45. 1 point
    I did a little Python toolset to speed up my VEX writing. It replaces "code shortcuts" with "code snippets" and also updates the parm interface (default values, channel ranges from comments, ramps are generated from a custom library). You can build your own libraries of snippets with specific channel ranges and defaults. I am really a Python newbie, so it is "alpha". The code is not prepared for exceptions and it is not exemplary, but I hope it will help somebody. It already helps me, so I will debug it on the fly. The initial idea comes from here, Matt Estela: http://www.tokeru.com/cgwiki/index.php?title=HoudiniUserInterfaceTips#customise_right_click_menus qqReplace.py rampCollect.py updateInterfaceFromComments.py
  46. 1 point
    This has come up quite often for me when working with imported static FBX files. Often you will get every little piece of a model inside its own geo object, but you really just want to work with them all as a single mesh. This script will examine your node selection and create a new /obj-level object that will object merge all the selected nodes into a single merge. Then you can just use that single node to represent your model (i.e. fix up normals, detail materials, prepare for simulation, export etc...).

import hou

THRESHOLD = 0.015

def luminance(pixel):
    return (0.299 * pixel[0] + 0.587 * pixel[1] + 0.114 * pixel[2])

def is_similar(pixel_a, pixel_b, threshold):
    return abs(luminance(pixel_a) - luminance(pixel_b)) < threshold

if len(hou.selectedNodes()):
    # Make a geo node that will ObjectMerge in all the nodes in the selection.
    node_geo = hou.node('/obj').createNode("geo", "geo_merge_result")
    if node_geo:
        node_geo.moveToGoodPosition()
        node_geo.node('file1').destroy()
        node_merge = node_geo.createNode('merge')
        node_merge.moveToGoodPosition()

        # Create a normal to fix up everything after the merge.
        node_normal = node_geo.createNode("normal", "normal1")
        node_normal.setInput(0, node_merge)
        node_normal.setDisplayFlag(True)
        node_normal.setRenderFlag(True)
        node_normal.moveToGoodPosition()

        # Create a NULL for our output placeholder.
        node_null = node_geo.createNode("null", "OUT")
        node_null.setInput(0, node_normal)
        node_null.setDisplayFlag(True)
        node_null.setRenderFlag(True)
        node_null.moveToGoodPosition()

        for (n, node) in enumerate(hou.selectedNodes()):
            # Object Merge each selected node into the new container.
            node_temp = node_geo.createNode("object_merge", node.name())
            node_temp.parm('objpath1').set(node.path())
            node_temp.parm('xformtype').set(1)
            node_temp.moveToGoodPosition()

            assign_by_node_color = True
            if assign_by_node_color:
                # Use the color of nodes to inherit the same @shop_materialpath.
                s = "rs_DEFAULT"
                if is_similar(node.color().rgb(), (0.584, 0.776, 1.0), THRESHOLD):
                    s = "rs_blue3"
                if is_similar(node.color().rgb(), (0.6, 0.6, 0.6), THRESHOLD):
                    s = "rs_grey5"
                if is_similar(node.color().rgb(), (0.145, 0.667, 0.557), THRESHOLD):
                    s = "rs_green5"
                if is_similar(node.color().rgb(), (1.0, 0.725, 0.0), THRESHOLD):
                    s = "rs_yellow6"
                if is_similar(node.color().rgb(), (0.996, 0.933, 0.0), THRESHOLD):
                    s = "rs_yellow5"

                # Create a wrangle to define our shop_materialpath.
                node_wrangle = node_geo.createNode("attribwrangle", "attribwrangle1")
                node_wrangle.parm('snippet').set('s@shop_materialpath = "/shop/%s";' % s)
                node_wrangle.parm('class').set(1)
                node_wrangle.moveToGoodPosition()
                node_wrangle.setInput(0, node_temp)

                # Create a color to match the node color.
                node_color = node_geo.createNode("color", "color1")
                node_color.parm('colorr').set(node.color().rgb()[0])
                node_color.parm('colorg').set(node.color().rgb()[1])
                node_color.parm('colorb').set(node.color().rgb()[2])
                node_color.moveToGoodPosition()
                node_color.setInput(0, node_wrangle)

                node_merge.setInput(n, node_color)
            else:
                node_merge.setInput(n, node_temp)

    In this image the white areas were not part of the selection. The color of the nodes in the selection is forwarded into the object merge as an additional color node. There is also an attribute wrangle added inline to assign the @shop_materialpath based upon the color detected. This can convert objects with the same exact material referencing multiple copies of the same material into a single /shop path material. You can adjust to /mat if needed.
  47. 1 point
    Or you can use the Gallery Manager to store any shader or node tree. Then you can transfer gallery files. Very convenient and easy to use.
  48. 1 point
    There are two animated meshes composited with a soft pixel mask, due to the visible mismatch between them. The mismatch is from the low-frequency noise animation on the spiked mesh, I think. The spiked mesh may be quite coarse and it doesn't need to contain every small detail of the fine mesh; it can contain a reasonable amount of meshing artifacts in the nearby area. So it is easy to create one. You can make the spiked mesh by creating a lowpoly triangle mesh, then extruding and beveling the result. The compositing mask could be rendered with the value used to drive the animation, then clamped, blurred, painted, etc. In my case the @zscale used to extrude primitives should be enough to start with. Getting an isotropic triangle mesh procedurally in Houdini is not trivial if the number of triangles is very low. You will lose volume and get badly shaped triangles (all triangles at the ear tip area are bad, for example). To get better triangles, you need to do manual retopology. It's easy to make a few triangles, anyway. spiked_ear.hipnc
  49. 1 point
    This will give you a list of all the point groups each point belongs to, as a string array attribute called ptingrps:

string ptgrps[] = detailintrinsic(0, "pointgroups");
s[]@ptingrps = {};
foreach (string grp; ptgrps)
{
    if (inpointgroup(0, grp, @ptnum))
        append(s[]@ptingrps, grp);
}
  50. 1 point
    Here is a small example of this technique. Render the sequence to see the effect. instance_offset.zip