
Leaderboard


Popular Content

Showing most liked content since 06/17/2018 in all areas

  1. 8 points
    Since there's been a lot of talk around the web about graphics APIs this past week with Apple's decision to deprecate OpenGL in MacOS Mojave, I thought I'd take this opportunity to discuss the various graphics APIs and address some misconceptions. I'm doing this as someone who's used all versions of OpenGL from 1.0 to 4.4, and not with my SideFX hat on. So I won't be discussing future plans for Houdini, but will instead focus on the APIs themselves.

    OpenGL

    OpenGL has a very long history dating back to the 90s. There have been many versions of it, but the most notable ones are 1.0, 2.1, 3.2, and 4.x. Because of this, it has a reputation for being old and inefficient, which is somewhat true but not the entire story. Certainly GL 1.0 - 2.1 is old and inefficient, and doesn't map well to modern GPUs. But then in the development of 3.0, a major shift occurred that nearly broke the GL ARB (Architecture Review Board) apart. There was a major move to deprecate much of the "legacy" GL functionality and replace it with modern GL features - and out of that kerfuffle the OpenGL core and compatibility profiles emerged. The compatibility profile added these new features alongside the old ones, while the core profile removed the old ones completely. The API in the core profile is what people are referring to when they talk about "Modern GL". Houdini adopted modern GL in v12.0 in the 3D viewport, and more strict core-profile-only support in v14.0 (the remaining UI and other viewers).

    Modern GL implies a lot of different things, but the key ones are: geometry and shader data must be backed by VRAM buffers, shaders are required, and all fixed-function lighting, transformation, and shading is gone. This is good in a lot of ways. Geometry isn't being streamed to the GPU in tiny bits anymore and is instead kept on the GPU, the GL "big black box" state machine is greatly reduced, and there's a lot more flexibility in the display of geometry from shaders. You can light, transform, and shade the model however you'd like. For example, all the various shading modes in Houdini, primitive picking, visualizers, and markers are drawn using the same underlying geometry - only the shader changes.

    OpenGL on Windows was actually deprecated decades ago. Microsoft's implementation still ships with Windows, but it's an ancient OpenGL 1.1 version that no one should use. Instead, Nvidia, AMD and Intel all install their own OpenGL implementations with their drivers (and this extends to CL as well).

    Bottlenecks

    As GPUs got faster, game developers in particular started running into a CPU bottleneck, particularly as the number of draw calls increased. OpenGL draw calls are fast (more so than DirectX), but eventually you get to a point where the driver code prepping the draw starts to become significant. More detailed worlds meant not only bigger models and textures, but more of them. So the GPU started to sit idle waiting on draws from the CPU, and that draw load began taking away from useful CPU work, like AI.

    The first big attempt to address this came in the form of direct state access and bindless textures. All resources in OpenGL are given an ID - an integer which you can use to identify a resource for modifying it and binding it to the pipeline. To use a texture, you bind this ID to a slot, and the shader refers to this slot through a sampler. As more textures were used and switched within a frame, mapping the ID to its data structure became a more significant load on the driver.
    Bindless does away with the ID and replaces it with a raw pointer. The second approach was to move more work to the GPU entirely: GLSL compute shaders (GL4.4) were added, along with indirect draw calls. This allows the GPU to do culling (frustum, distance-based, LOD, etc.) with an OpenCL-like compute shader and populate buffers with draw data. The indirect draw calls reference this data, and no data is exchanged between GPU and CPU. Finally, developers started batching as much as possible to reduce the number of draw calls and make up for these limitations. Driver developers kept adding more optimizations to their API implementations, sometimes on a per-application basis. But it became more obvious that for realtime display of heavy scenes, and with VR emerging where a much higher frame rate and resolution are required, the current APIs (GL and DX11) were reaching their limit.

    Mantle, Vulkan, and DX12

    AMD recognized these bottlenecks, and the bottleneck that the driver itself was posing to GPU rendering, and produced a new graphics API called Mantle. It did away with the notion of a "fat driver" that optimized things for the developer. Instead, it was thin and light - and passed all the optimization work off to the game developer. The theory behind this is that the developer knows exactly what they're trying to do, whereas the driver can only guess. Mantle was eventually passed to Khronos, who develops the OpenGL and CL standards, and from that starting point Vulkan emerged. (DirectX 12 is very similar in theory, so for brevity's sake I'll lump them together here - but note that there are differences.)

    Vulkan requires that the developer be a lot more up-front and hands-on with everything. From allocating large chunks of VRAM and divvying it up among buffers and textures, to saying exactly how a resource will be used at creation time, to describing the rendering pipeline in detail, Vulkan places a lot of responsibility on the developer. Error checking and validation can be entirely removed in shipping products. Even draw calls are completely reworked - no more global state and swapping textures and shaders willy-nilly. Shaders must be wrapped in an object which also contains all their resources for a given draw per framebuffer configuration (blending, AA, framebuffer depths, etc.), and command buffers must be built ahead of time in order to dispatch state changes and draws. Setup becomes a lot more complicated, but is also more efficient to thread (though the dev is also completely responsible for synchronization of everything from object creation and deletion to worker and render threads). Vulkan also requires all shaders to be precompiled to a binary format, which is better for detecting shader errors before the app gets out the door, but also makes generating them on the fly more challenging. In short, it's a handful and can be rather overwhelming.

    Finally, it's worth noting that Vulkan is not intended as a replacement for OpenGL; Khronos has stated that from its release. Vulkan is designed to handle applications where OpenGL falls short. A very large portion of graphics applications out there don't actually need this level of optimization. My intent here isn't to discourage people from using Vulkan, just to say that it's not always needed, and it is not a magic bullet that solves all your performance problems.

    Apple and OpenGL

    When OSX was released, Apple adopted OpenGL as its graphics API.
    OpenGL was behind most of its core foundation libraries, and as such Apple maintained more control over OpenGL than it would have on Windows or Linux. Because of this, driver developers did not install their own OpenGL implementations as they did on Windows or Linux. Apple created the OpenGL frontend, and the driver developers created the back end. This was around the time of the release of Windows Vista and its huge number of driver-related graphics crashes, so in retrospect the decision makes a lot of sense, though that situation has been largely fixed in the years since.

    Initially Apple had support for OpenGL 2.1. This had some of the features of Modern GL, such as shaders and buffers, but it lacked others like uniform buffers and geometry shaders. While Windows and Linux users enjoyed OpenGL 3.x and eventually 4.0, Mac developers were stuck with a not-quite-there-yet version of OpenGL. Around 2012 Apple addressed this situation and released their OpenGL 3.2 implementation... but with a bit of a twist. Nvidia and AMD's OpenGL implementations on Windows and Linux supported the compatibility profile. When Apple released their GL3.2 implementation it was core profile only, and that put some developers in a tricky situation - completely purge all deprecated features and adopt GL3.2, or remain with GL2.1. The problem was that some deprecated features were actually still useful in the CAD/DCC universe, such as polygons, wide lines, and stippled lines/faces. So instead of the gradual upgrading devs could do on the other platforms, it became an all-or-nothing affair, and this likely slowed adoption of the GL3.2 profile (pure conjecture on my part). This may have also contributed to the general stability issues with GL3.2 (again, pure conjecture).

    Performance was another issue. Perhaps because of the division of responsibility between the GPU maker's driver developers and the OpenGL devs at Apple, or perhaps because the driver developers added specific optimizations for their products on other platforms, OpenGL performance on MacOS was never quite as good as on other platforms. Whatever the reason, it became a bit of a sore point over the years, with a few game developers abandoning the platform altogether. These problems likely prompted Apple to look at an alternate solution - Metal. Eventually Apple added more GL features up to the core GL4.1 level, and that is where it has sat until their announcement of GL deprecation this week. This is unfortunate for a variety of reasons - versions of OpenGL above 4.1 have quite a few features which address performance for modern GPUs and portability, and OpenGL is currently the only cross-platform API, since Apple has not adopted Vulkan (and though a third-party MoltenVK library exists that layers Vulkan on Metal, it is currently a subset of Vulkan).

    Enter Metal

    Metal emerged around the time of Mantle, and before Khronos had begun work on Vulkan. It falls somewhere in between OpenGL and Vulkan - more suitable for current GPUs, but without the extremely low-level API. It has compute capability and most of the features that GL does, with some of the philosophy of Vulkan. Its major issues for developers are similar to those of DirectX - it's platform specific, and it has its own shading language. If you're working entirely within the Apple ecosystem, you're probably good to go - convert your GL-ES or GL app, and then continue on. If you're cross-platform, you've got a bit of a dilemma.
    You can continue on business as usual with OpenGL, fully expecting that it will remain as-is and might be removed at some point in the future, possibly waiting until a GL-on-top-of-Metal API comes along or Apple allows driver developers to install their own OpenGL like Microsoft does. You can implement a Metal interface specific to MacOS, port all your shaders to Metal SL and maintain them both indefinitely (Houdini has about 1200). Or, you can drop the platform entirely. None of those seem like very satisfactory solutions.

    I can't say the deprecation comes as much of a surprise, with Metal development ongoing and GL development stalling on the Mac. It seems like GL was deprecated years ago and this is just the formal announcement. One thing missing from the announcement was a timeframe for when OpenGL support would end (or if it will end at all). It does seem like Apple is herding everyone toward Metal, though how long that might take is anyone's guess.

    And there you have it, the state of graphics APIs in 2018 - from a near convergence of DX11 and GL4 a few short years ago to a small explosion of APIs. Never a dull moment in the graphics world.
  2. 4 points
    Probably because the case study ends up derailing the actual work.
  3. 2 points
    Okay, I think all of those group masking bugs are fixed. There are a few other fixes and features added as well; details are in our updates thread.
  4. 2 points
    "The Tree" Another R&D image from the above VR project: The idea for the VR-experience was triggered by a TV-show on how trees communicate with each other in a forest through their roots, through the air and with the help of fungi in the soil, how they actually "feed" their young and sometimes their elderly brethren, how they warn each other of bugs and other adversaries (for instance acacia trees warn each other of giraffes and then produce stuff giraffes don't like in their leaves...) and how they are actually able to do things like produce substances that attract animals that feed on the bugs that irritate them. They even seem to "scream" when they are thirsty... (I strongly recommend this (german) book: https://www.amazon.de/Das-geheime-Leben-Bäume-kommunizieren/dp/3453280679/ref=sr_1_1?ie=UTF8&qid=1529064057&sr=8-1&keywords=wie+bäume+kommunizieren ) It's really unbelievable how little we know about these beings. So we were looking to create a forest in an abstract style (pseudo-real game-engine stuff somehow doesn't really cut it IMO) that was reminiscent of something like a three dimensional painting through which you could walk. In the centre of the room, there was a real tree trunk that you were able to touch. This trunk was also scanned in and formed the basis of the central tree in the VR forest. Originally the idea was, that you would touch the tree (hands were tracked with a Leap Motion controller) and this would "load up" the touched area and the tree would start to become transparent and alive and you would be able to look inside and see the veins that transport all that information and distribute the minerals, sugar and water the plant needs. From there the energy and information would flow out to the other trees in the forest, "activate" them too and show how the "Wood Wide Web" connected everything. Also, your hands touching the tree would get loaded up as well and you would be able to send that energy through the air (like the pheromones the trees use) and "activate" the trees it touched. For this, I created trees and roots etc. in a style like the above picture where all the "strokes" were lines. This worked really great as an NPR style since the strokes were there in space and not just painted on top of some 3D geometry. Since Unity does not really import lines, Sascha from Invisible Room created a Json exporter for Houdini and a Json Importer for unity to get the lines and their attributes across. In Unity, he then created the polyline geometry on the fly by extrusion, using the Houdini generated attributes for colour, thickness etc. To keep the point count down, I developed an optimiser in Houdini that would reduce the geometry as much as possible, remove very short lines etc. In Unity, one important thing was, to find a way to antialias the lines which initially flickered like crazy - Sascha did a great job there and the image became really calm and stable. I also created plants, hands, rocks etc. in a fitting style. The team at Invisible Room took over from there and did the Unity part. The final result was shown with a Vive Pro with attached Leap Motion Controller fed by a backpack-computer. I was rather adverse to VR before this project, but I now think that it actually is possible to create very calm, beautiful and intimate experiences with it that have the power to really touch people on a personal level. Interesting times :-) Cheers, Tom
  5. 1 point
    In case you hadn't already seen it, there was this post recently that mentioned using cone twist constraints for a bending-type effect in splintering wood, perhaps that might help
  6. 1 point
    generally yes ;) - %d will work with most "number" types, same as %f; the difference between them is mostly formatting, I think. For instance, %d will limit the number of decimals in the output string, whereas %f will try to keep as many digits behind the decimal point as possible. There are some other tricks that you could use, such as %02d, which will pad the start of the output with zeros, so you can do an easier alpha-numeric search, for example. Personally, I never had to use anything other than %f, %d or %s.
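    To see these specifiers in action inside Houdini, here's a minimal VEX sketch using sprintf() - the attribute names are just for illustration and aren't from the original post:

    // Point Wrangle: a quick look at the format specifiers discussed above.
    int   frame = int(@Frame);     // current frame as an integer
    float val   = 3.14159;

    s@as_int    = sprintf("%d", frame);      // e.g. "12" - plain integer formatting
    s@as_float  = sprintf("%f", val);        // "3.141590" - keeps digits after the decimal point
    s@padded    = sprintf("%04d", frame);    // e.g. "0012" - zero-padded, handy for sortable names
    s@as_string = sprintf("geo_%s", "ball"); // %s inserts a string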
  7. 1 point
    Contributing answers to this forum is always a good way to share knowledge
  8. 1 point
    looks to me like the effect is called "rotating a bunch of things and then meshing them together". learning how the copy SOP works with template point attributes, and how VDB surfacing works, are probably your main goals for this.
  9. 1 point
    Started working on this again. Added some width variation to the input curve, added vertical shafts and broke it into segments.
  10. 1 point
    Append a PolyFrame node to your resampled lines, set "Tangent name" to N, set N.y to 0 and copy a box onto these points. stairs_pframe.hipnc
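    If you prefer to zero out the tangent in VEX rather than in the PolyFrame's attribute fields, a tiny point wrangle after it does the same thing (a sketch, assuming the tangent was written to N as above):

    // Point Wrangle after the PolyFrame SOP: flatten the tangent so the
    // copied boxes don't pitch up or down along the stairs.
    v@N.y = 0;
    v@N   = normalize(v@N);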
  11. 1 point
    for anyone curious about this, I had success by using the old op:fullpath expression. In my case it looked like this: op:`opfullpath("../../cop2net1/Bump_map/OUT/")` I would love to know why I have to use that expression rather than just a relative path like ../../cop2net1/Bump_map/OUT/
  12. 1 point
    just being a bit silly here... (can someone add 8-bit sound FX?) vu_SpaceInvaders.hipnc
  13. 1 point
    no idea why you do things in such convoluted ways... anyway..here... mod_everyPrim_fix.hipnc
  14. 1 point
    ah right! you'd think i'd know better by now. we'll have that in the next build.
  15. 1 point
    The problem is that you're transforming the coins pre-pack using copy stamping. You want to use template point attributes so that the coins are all identical going into the copy SOP, and the transforms are defined entirely by the packed intrinsics.
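    As a rough illustration of that approach (not the exact setup from this thread - the axis, angle and scale ranges here are made up), the per-copy transform can live entirely on the template points as standard instancing attributes:

    // Point Wrangle on the template points (second input of the Copy SOP).
    float spin = fit01(rand(@ptnum), 0, 360);          // per-point rotation amount in degrees
    p@orient   = quaternion(radians(spin), {0, 1, 0}); // rotate each copy about the Y axis
    f@pscale   = fit01(rand(@ptnum + 1), 0.8, 1.2);    // per-point uniform scale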
  16. 1 point
    @Noobini I found the bug; it's in Apply Attributes. It's incorrectly combining the existing group mask (group1) with `@mops_falloff>0` via a Python script, and it was doing it with bad syntax to boot. If you disable "Ignore zero falloff prims" it will work in the meantime. I'll have this fixed in the next build.
  17. 1 point
    seems like the Group in MOPs_Transform_Modifier is not working? Attached is an illustration: the left branch is the desired result, while on the right, done in a different way... using Group in the MOPs_Transform_Modifier, it doesn't have any effect. So the idea is, even if ALL points have their @mops_falloff established, I thought if I restrict the effect to my Group selection in the Transform_Modifier... it should work? (and yeah, the gotcha with the group is it has to be points, since MOPs works on the packed-points principle) MOPs_Transform_Modifier_Group.hipnc
  18. 1 point
    @toadstorm and @moedeldiho Thanks so much for this. These are very nice tools!
  19. 1 point
    Hi, fetch feedback should work. Prim_ForEach_FB.hipnc
  20. 1 point
  21. 1 point
    You're losing sight of the bigger picture here, which is to create art. FX TD's are by definition going to be on the technical side of things, but their goal is to facilitate the creation of art. The final image is what matters, 99% of the time. People with engineering mindsets sometimes like to get caught up in the "elegance" or "physical correctness" of their solutions, but that stuff rarely (if ever) matters in this field. Rotating an object is conceptually a simple thing, but it turns out that there's quite a bit of math involved. Is it really insulting one's intelligence to not assume that every artist is willing to study linear algebra to rotate a cube on its local axis? I do know how to do this, and I still don't want to have to write that code out every single time. It's a pain in the ass! Creating a transform matrix, converting to a quaternion, slerping between the two quaternions, remembering the order of multiplication... remembering and executing these steps every time gets in the way of exploration and play. Besides, all of that is only possible because SESI wrote a library of functions to handle this. Should we be expected to also write our own C++ libraries to interpolate quaternions? Should we be using Houdini at all, instead of writing our own visual effects software? Who engineered the processor that you're using to compute all this? This is a rabbit hole you'll never escape from. Anyways, Entagma and MOPs are not officially affiliated at all, so Entagma's core mission of reading white papers so that you don't have to is unlikely to change.
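    For context, here's roughly what those steps look like written out by hand in a point wrangle - a sketch only, with made-up channel names for the rotation angle and blend amount:

    // Point Wrangle: build a rotation matrix, convert to quaternions,
    // slerp between them, and apply the result to the point.
    matrix3 rest  = ident();                          // starting orientation
    matrix3 xform = ident();
    rotate(xform, radians(ch("angle")), {0, 1, 0});   // rotation about the local Y axis

    vector4 qfrom = quaternion(rest);
    vector4 qto   = quaternion(xform);
    vector4 q     = slerp(qfrom, qto, ch("blend"));   // interpolate between the two orientations

    @P = qrotate(q, @P);                              // apply the blended rotation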
  22. 1 point
    Houdini implementation
  23. 1 point
    Point Cloud tools optimised for large datasets (400 million points upwards) - I have a 2.2 billion point dataset, so take a guess at the time the PointCloudISO takes.
    - Filtering (outlier removal, feature estimation, registration)
    - Normal estimation
    - Point cloud sampling (RMLS)
    - Region segmentation (detection of planes like floors, walls, etc.)
    - Model fitting, maybe?
  24. 1 point
    Hi Mark, I have done a similar test a while back. There is a target (parent point) and points (children) coming towards the target (both have the same point count). When a child point acquires a target point, the other child points don't query the same point. I have attached the file; it runs pretty fast. Make sure the active distance is small. Since I have used an array, it gets pretty heavy when there are more points to query at that instant (when the active distance is larger). Also, you can disable attract in the SOP solver, get the position id (posid) and use POP attract instead. nearbyattract.hipnc FX TD Dneg
  25. 1 point
    The backticks remove the ambiguity of whether the string field represents a true string or an expression. There's no ambiguity in float and int fields because all the characters must be numeric (or numeric related). If you're not a fan of the backticks, you can key the string parameter, then toggle to expression mode by LMB clicking on the parm label, then entering the expression. Keying and switching to expression mode removes that ambiguity.
  26. 1 point
    i think you SHOULD RFE this to SideFX - they might come up with some clever idea to simplify it or make it more accessible, you never know..
  27. 1 point
    Hi, For almost 2 years I've been making looping GIFs, mostly using Houdini and Octane, under the Spyrogif alias. Most of this work was made during various productions to test some Houdini features, or while waiting during simulation time. :-) Now that I've got a number of them, I thought they might interest you. These tests cover a number of different technical approaches and workflows, from simple keyframe animation and modelling to fully procedural stuff. The only thing all these tests have in common is that almost all of them use modulo expressions with time blending to get perfect cycles. All these GIFs go from Houdini to Octane render via Alembic export. The main reason for that is simply that I like to tweak my renders at home, and not overload various post-production companies' renderfarms with silly and weird tests. :-) If you want to keep track of this "project", feel free to subscribe to my tumblr: http://spyrogif.tumblr.com/ Edit: You can now follow this on Facebook too: https://www.facebook.com/spyrogif/ Hope you like it. PS: I always feel guilty for not participating in these forums more. They're a real gold mine and an awesome community (odforce and the SideFX forum). Thank you to everybody, you are awesome. I know that I can always count on you when I struggle with a problem. Thanks for that. Some of them... More at Spyrogif
  28. 1 point
    "Brushed" I'm currently working on my first VR-Art-project with "Invisible Room" in Berlin and was exploring 3D-brushes from - in this case Tiltbrush - and Quill in our R&D. Here the result was exported as Alembic, procedurally coloured in Houdini and rendered with Redshift over a scanned-in rice-paper-texture: Interesting times... :-) Cheers, Tom
  29. 1 point
    "Shells of Light" My second piece with extreme DOF in Redshift. The basis is a shell made of wires. There are no lights other than a HDRI environment in the scene and no post effects are used other than color correction in Luminar2018 & Photoshop. All the structures are the result of the very shallow depth of field. Rendered at 10000 pixels square for high res printing. Took about 9 hours with a GTX 980 TI and a GTX 1080 TI at 32768 samples per pixel max. Prints: https://fineartamerica.com/featured/shells-of-light-thomas-helzle.html
  30. 1 point
    I did a little Python toolset to speed up my VEX writing. It replaces "code shortcuts" with "code snippets" and also updates the parm interface (default values, channel ranges from comments; ramps are generated from a custom library). You can build your own libraries of snippets with specific channel ranges and defaults. I am really a Python newbie, so it is "alpha". The code is not prepared for exceptions and it is not exemplary, but I hope it will help somebody. It helps me already, so I will debug it on the fly. The initial idea comes from here, Matt Estela: http://www.tokeru.com/cgwiki/index.php?title=HoudiniUserInterfaceTips#customise_right_click_menus qqReplace.py rampCollect.py updateInterfaceFromComments.py
  31. 1 point
    Hey guys, I got it working!! I changed Rebuild SDF from "Adaptive" to "None". The file sizes came out a bit larger, but I no longer had the issue where my ocean disappeared randomly. I tested it on my old scenes with these issues, and I could cache them out with no problem. Thanks for all the help, I really appreciate it. Phiphat
  32. 1 point
    Or you can use the Gallery Manager to store any shader or node tree; then you can transfer the gallery files. Very convenient and easy to use.
  33. 1 point
    "The Shell of Light" An exploration of extreme DOF in Redshift. The basis is a wire-shell, but rendered with massive DOF. This creates the most fascinating and somehow otherworldly structures. Art Prints here: https://fineartamerica.com/profiles/thomas-helzle.html Cheers, Tom
  34. 1 point
    I think it highly depends on what you mean by "simulating". That is, A: "getting a look like the one in the picture", or B: "predicting the actual airflow to an arbitrary degree of accuracy". A is relatively easy. Take a look at "Volume Trails"; they will give a look exactly like that. B is probably much harder and would probably involve some deeper research. Check the hip for a crude version of A: quasi_airflow_odf.hip
  35. 1 point
    Hi all. I made something a few weeks ago that I thought might be useful, so I somewhat tutorialised it here: http://www.pixelninja.design/manhattan-voronoi-approximation/ Basically it's a method of achieving something close to a Manhattan-distance voronoi diagram. I hope someone out there finds this useful. If anyone knows of a better method, or a method of achieving an actual Manhattan voronoi, I'd love to hear it! ManhattanApproximation_01.hiplc
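    This isn't the method from the tutorial above, but for anyone who just wants to see what a Manhattan-distance cell assignment looks like, here's a brute-force point wrangle sketch (grid in the first input, scattered seed points in the second; the attribute names are made up):

    // Point Wrangle: assign each grid point to the seed point that is
    // nearest by Manhattan (L1) distance. Brute force, so keep seed counts small.
    int   best     = -1;
    float bestdist = 1e9;
    for (int i = 0; i < npoints(1); i++)
    {
        vector p2 = point(1, "P", i);
        vector d  = abs(@P - p2);       // per-component absolute difference
        float  m  = d.x + d.y + d.z;    // Manhattan distance
        if (m < bestdist) { bestdist = m; best = i; }
    }
    i@cell = best;           // cell id, usable for partitioning
    v@Cd   = rand(best);     // quick visualisation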
  36. 1 point
    Vector Quantization, Curl Noise, Dot Product I was inspired by entagma and Thomas Dotheij Download my source .hiplc here: http://lab.ikoon.cz/post/153855061629/vector-quantization-curl-noise-dot-product
  37. 1 point
    Here's a slightly different setup that I found worked well to keep the smoke moving. I modified your setup; it's not perfect, but it will show the workflow. You scatter points in the bounds of the tornado, transfer your rotation vel to the points and bring them into the DOP as a gasparticlefield. Tornado_test7_rtep.hiplc
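    If you'd rather build the swirling velocity directly on the scattered points instead of transferring it, something along these lines in a point wrangle would also work (a sketch only - the centre, spin and lift channels are made up):

    // Point Wrangle on the scattered points, before bringing them into DOPs
    // as a gas particle field.
    vector axis   = {0, 1, 0};             // tornado spins about the Y axis
    vector centre = chv("centre");         // centre of the tornado column
    vector radial = @P - centre;
    radial.y = 0;                          // measure the radius in the XZ plane

    v@v  = cross(axis, normalize(radial)) * ch("spin");   // tangential swirl
    v@v += axis * ch("lift");                              // optional upward pull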
  38. 1 point
    My take on an object/point following a curve/path inside SOPs, using the Creep and Primitive SOPs. FollowCurve.hip
  39. 1 point
    Here is how to fix the Divide SOP's limitation with open geometry:

    // Point Wrangle.
    #define ALMOST_ZERO 1e-6

    vector4 plane = chv("../clip2/dir");    // clipping plane normal from the Clip SOP
    plane.w = ch("../clip2/dist");          // plane distance

    float dist = abs(@P.x * plane.x + @P.y * plane.y + @P.z * plane.z - plane.w);
    if (dist > ALMOST_ZERO)
        removepoint(0, @ptnum);             // keep only points lying on the clipping plane

    With it, this should be a really fail-proof solution compared with the Cookie SOP. But in 15.5 there is a new Dissolve SOP which is able to create curves from edges, and it works quite well - much better than the old one. Give it a try as well, if you are up to date. edge_to_curve.hipnc

    @NNois, I agree, many nodes have such "side features", like Facet's removing of inline points by threshold or Divide's removing of shared edges. I believe it is what you get from the main algorithms for free. If you do something like shaders or VEX wrangles, you might notice that you can compute a lot of auxiliary and potentially useful data by outputting inner variables.
  40. 1 point
    I made an attempt at simplifying Matt's setup so I could get a better "seal" between the original object and the trail-generated object. In this setup I use an animated font. I discarded any rest geometry and just fed the font into the cookie. The animation of the source does happen along another line. I removed the FOREACH loop and just skinned directly; it does seem to work. ap_font_continuous_trail.hipnc
  41. 1 point
  42. 1 point
  43. 1 point
  44. 1 point
  45. 1 point
  46. 1 point
  47. 1 point
    If you need galaxy formation over time and you have a lot of computing power, you can use an N-body sim with real physical parameters to generate a galaxy. In the attached scene there are only 65K points. For more realistic results you probably need several million of them. The N-body sim is done inside a loop in a POP VOP (Houdini guys, it would be nice if someone coded it to run on the GPU and included it in a new release of Houdini as an N-body solver). In the attached scene, a period of several billion years is scaled to several hundred frames. A few parameters control the whole sim: time step, cluster scale, velocity scale, drag, etc.

    A nice thing with this method is that you don't have to worry about collisions of two or more galaxies (there is not enough computing power in the world to calc that...). Seriously... take another sphere inside the same SOP and generate a point cloud on it like was done with the first one, move it several units away from the first one with a Transform SOP, merge those two point clouds and connect them to the POP network. After the merge it is actually a single point cloud, but some points have an offset in space. Run the sim and enjoy a spectacular collision of two galaxies. They will merge eventually in time (and space). https://mega.nz/#!3JwhhZoZ!X6WJn9dPvxUAC2BqwbD1KevluzzRa5Nt1Nm3DAguIz8 When you get the desired look, convert that point cloud to a volume and you are done. Cheers
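    The original setup does this in a POP VOP loop; as a rough idea of what the per-particle work looks like, here's an equivalent brute-force sketch in a POP wrangle (equal masses are assumed, and the strength/softening channels are made up):

    // POP Wrangle: brute-force O(n^2) gravity - every particle is pulled
    // toward every other particle with an inverse-square attraction.
    vector accel = 0;
    for (int i = 0; i < npoints(0); i++)
    {
        if (i == @ptnum)
            continue;
        vector d  = point(0, "P", i) - @P;
        float  r2 = max(length2(d), ch("softening"));   // avoid blowing up at r = 0
        accel    += normalize(d) / r2;
    }
    v@force += accel * ch("strength");   // one channel standing in for G * mass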
  48. 1 point
    Thanks for the answer. Is this the asset you are talking about: http://www.orbolt.com/asset/SideFX::pyronuke ? I'll look into it tonight after work and post any progress I make
  49. 1 point
    Here is the scene with the imported Alembic boy. It works like a charm! Thanks again for your help, Petz slicedBoyFromUVs.zip
  50. 1 point
    FLIP fluids are massively powerful, but they're also complicated and require a certain amount of understanding of what's going on beneath the hood to get the most out of them. I've been working with them in various forms for almost 6 years now, and I'm still discovering and learning new tricks daily. For large, high-resolution volumes of water, FLIP is about the only way you can possibly go at the moment. Pure particle-based fluids scale too badly as you increase the particle count. VDB meshing is also far-and-away the most effective way I've come across to convert tens of millions of particles into a decent mesh in a short amount of time, but VDBs also have their fair share of complexity, and unlike everything else in Houdini, their inner workings are hidden away and a little bit black-magic. I'm still in the early days of a serious love/hate relationship with them... they're powerful to the point of having rapidly become absolutely essential to my workflow, but the moment I try to do anything clever with them, I end up shouting at the monitor :-P