Showing results for tags 'instancing'.

Found 36 results

  1. Instancing tree leaves?

    Hello all, I have built a rig that allows for dynamic simulation of trees. I was able to define an instance point which is offset from the tree's branches; this enables copying a packed leaf primitive onto the point. The next step is to update the rotation of the packed primitive. It seems that Copy to Points has defaulted to using the z-axis (the vector pointing in the {0,0,1} direction).

    The tree includes a central wire which captures the deformation as it moves through space. I have been able to construct a normal using the following logic:
    - using a leaf point, find the closest point on the wire
    - compare point positions (@P{leaf point} - @P{wire point})
    - normalize the compared vector and set an @N attribute
    VEX CODE

    However, when using this @N (normal) on the Copy to Points, the leaf geometry picks up this @N (built above in VEX) and discards the desired z-axis vector of {0,0,1}. Here is a diagram demonstrating the results. Figure 1: this is the result of the VEX code written above (visualized left); notice that when using the VEX-code normal, the leaf is unable to correctly orient itself to the point (visualized right). The custom VEX-code normal is required to guide the orientation as the tree deforms, which is shown in the following .gif animation. Figure 2: a .gif animation demonstrating the updating normal as the tree deforms.

    I have also tried adding an @up vector to see if that resolves the issue; however, that route is also producing undesirable results. I have thought about using some form of quaternion or rotation matrix, but I am currently studying these linear algebra concepts and need time and guidance before fully understanding how to apply the knowledge in Houdini. Could someone please offer guidance in this regard? I am working on a supplementary .hip file which should be posted shortly. Warm regards, Kimber
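The orientation problem described above is usually solved by giving Copy to Points a full frame rather than a bare @N: an @orient quaternion built from the computed normal plus a stable up vector. A minimal sketch of that math in plain Python (names and conventions are illustrative; n and up must not be parallel):

```python
import math

def normalize(v):
    l = math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2])
    return (v[0]/l, v[1]/l, v[2]/l)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def orient_from_n_up(n, up):
    """Quaternion (x, y, z, w) whose +z axis points along n and whose
    +y axis stays as close to `up` as possible."""
    z = normalize(n)
    x = normalize(cross(up, z))
    y = cross(z, x)
    m = (x, y, z)  # rotation matrix with the axes as rows
    tr = m[0][0] + m[1][1] + m[2][2]
    if tr > 0:
        s = math.sqrt(tr + 1.0) * 2
        return ((m[1][2] - m[2][1]) / s, (m[2][0] - m[0][2]) / s,
                (m[0][1] - m[1][0]) / s, s / 4)
    if m[0][0] > m[1][1] and m[0][0] > m[2][2]:
        s = math.sqrt(1.0 + m[0][0] - m[1][1] - m[2][2]) * 2
        return (s / 4, (m[1][0] + m[0][1]) / s,
                (m[2][0] + m[0][2]) / s, (m[1][2] - m[2][1]) / s)
    if m[1][1] > m[2][2]:
        s = math.sqrt(1.0 + m[1][1] - m[0][0] - m[2][2]) * 2
        return ((m[1][0] + m[0][1]) / s, s / 4,
                (m[2][1] + m[1][2]) / s, (m[2][0] - m[0][2]) / s)
    s = math.sqrt(1.0 + m[2][2] - m[0][0] - m[1][1]) * 2
    return ((m[2][0] + m[0][2]) / s, (m[2][1] + m[1][2]) / s,
            s / 4, (m[0][1] - m[1][0]) / s)
```

In VEX the same construction is typically a one-liner in a point wrangle, along the lines of p@orient = quaternion(maketransform(@N, v@up)); once orient exists, Copy to Points ignores N/up and uses it directly.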
  2. Hi everyone. I'm trying to create a setup that instances a few models randomly as vellum soft bodies. I want them to spawn four at a time, every twenty or so frames. I'm almost there, I think; however, my current solution uses a switch that picks a random model based on time, so I always end up with sets of four of the same model for every round of instancing. I feel like the solution should be relatively simple, but I don't mind changing my approach significantly if necessary. I've attached a .hip file and an image. As you can see in the picture, there are very obvious groups of objects that formed at the same time: VellumInstancing.hipnc
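One way around the batching problem described above (a sketch, not the poster's file): derive the variant choice from each point's number rather than from time, so four points born on the same frame can still pick different models. The idea in plain Python:

```python
import random

def pick_variant(ptnum, num_variants, seed=0):
    """Deterministic per-point variant pick: seeding an RNG from the point
    number (not the frame) decorrelates points spawned in the same batch,
    while each point keeps the same choice across frames."""
    rng = random.Random(ptnum * 7919 + seed)  # 7919 is an arbitrary prime
    return rng.randrange(num_variants)

# Eight points born together still get a mix of the four models:
picks = [pick_variant(i, 4) for i in range(8)]
```

In the actual setup this would typically be a rand(@ptnum)-style expression driving the Switch inside a for-each loop (or a variant attribute consumed by Copy to Points), instead of a single time-seeded switch evaluated once per batch.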
  3. I would like to input Maya geometry into the HDA, which will then do its magic, like placing that one piece of geometry 100 times, and then instance it when baked.
  4. Hi, I'm trying to instance lights on custom points created from geometry, but the render either doesn't start at all, or rendering and loading are a lot slower than with standard primitive geometry and RAM usage is incredibly high. If anyone can help, I will be very grateful. Custom points: https://prnt.sc/sz0fcl Grid primitive: https://prnt.sc/sz0grm inctance_problem.rar Problem is solved, please delete this topic.
  5. Hi forum! Seeking some help here, because I can't get my head around generating vellum geometry. I am working on a scene where I need to create close-up splashing water droplets on a static surface. Typically this would be a FLIP task, but that seems like overkill since I am only after a couple of droplets, so the idea is to make rain and splashes with particles, copy some spheres onto them, and feed them to vellum to get some nice soft bodies, then VDB them. My problem is that I don't understand how I can "emit" on source points that come into existence arbitrarily (i.e. not on frame 1) and differentiate the newly created vellum patches. I've looked around and watched the masterclasses; I understand it has to be somehow parallel to constraint creation, maintaining point count/id, etc., but I can't figure out the solution. It's going to be a single splash, so continuous emission (as in a particle emitter per se) doesn't work for me. Any pointers are appreciated; check the hip for a rough sketch of the problem! Thanks in advance! od_vellum_splash.hip
  6. Hello, I am currently working on an HDA which allows me to place different objects on points using the "unreal_instance" attribute. Meshes and particle systems are placed in the right place. The problem starts with BP actors: they are placed without the world offset, at the 0/0/0 world position instead. Any ideas on how to fix this issue? Kind regards, Aiden. Edit: it looks like it has an offset on the local X axis of around 1000 units.
  7. Hello everyone! I am trying to do a simple thing, but it seems like I am missing something. I want something similar to this: but instead of controlling it by pscale, I would like to take the bounds of my instanced objects and make the points move away from each other if the bounds of the instanced object are bigger than the distance between the points, or maybe delete them instead. I am trying to do this just to scatter some crowds in a field; if there is an easier way to do that, it would be nice to know, as I am quite new to crowds. I am attaching a simple example file of what I am trying to do. Crowd_RND_01.hip
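A sketch of the bounds-based culling described above, in plain Python: a greedy O(n²) pass that keeps a point only if it doesn't overlap any already-kept point. In Houdini the per-instance radius would come from the instanced model's bounding box (e.g. half its diagonal); the names here are illustrative.

```python
import math

def prune_overlaps(points, radii):
    """Return indices of points to keep: a point survives only if its
    bounding radius doesn't overlap any already-kept point (distance
    between centres >= sum of the two radii)."""
    kept = []
    for i, p in enumerate(points):
        ok = True
        for j in kept:
            if math.dist(p, points[j]) < radii[i] + radii[j]:
                ok = False
                break
        if ok:
            kept.append(i)
    return kept
```

The same logic fits naturally in a detail wrangle or Python SOP that removes the rejected points before the copy; for large counts a spatial lookup (pcfind in VEX) replaces the inner loop.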
  8. Learning Houdini on Twitch.tv

    Hello! I'm starting a Twitch.tv channel and will be streaming Houdini training content. I've worked at several large VFX and advertising studios, and taught Houdini classes at Academy of Art University. I'm hoping I can reach a larger audience through Twitch, and people viewing the stream can participate by asking questions and providing feedback in real time. This is my channel: https://www.twitch.tv/johnkunz My first stream will be starting on Sunday (Jan 26th) @ 1pm PST. If you follow my channel (it's free), you'll get an email notification whenever I start a stream. I'll be going over this project I recently finished, https://www.behance.net/gallery/90705071/Geometric-Landscapes, showing how I built things (VOPs, packed prims, Redshift render) and why I set things up the way I did. Some of the images I made are shown below. Please come by this Sunday with any questions or ideas you might have!
  9. Hi all! There's 1 week left to register for my Mastering Destruction 8-week class on CGMA! In supplemental videos and live sessions we'll be focusing on some of the awesome new features in Houdini 18, and exploring anything else you might be interested in, such as vehicle destruction. For those that haven't seen me post before, I'm an FX Lead at DNEG, and previously worked at ILM / Blue Sky / Tippett. Destruction is my specialty, with some highlights including Pacific Rim 2, Transformers, Jurassic World, and Marvel projects. I've also done presentations for SideFX at FMX/SIGGRAPH/etc, which are available on their Vimeo page. If you have any questions, feel free to ask or drop me a message! https://www.cgmasteracademy.com/courses/25-mastering-destruction-in-houdini https://vimeo.com/keithkamholz/cgma-destruction
  10. Hello, I have a question but I haven't found a way to solve it. I want to instance soft bodies onto a particle system that is constantly generating particles (constantly changing the number of particles). That could be with FEM, grains, or vellum, you name it. Which would be the best approach to do that? So far I have faked the effect with the analytic foam tutorial on Entagma, but it is still missing a lot of the motion I am looking for. Thank you for your help!
  11. I'm trying to instance simple sharp rectangular geo onto scattered points to create a frost effect. I've tried 3 methods: the Copy to Points SOP (with Pack and Instance), adding the s@instance attribute with a path to the geo, and using the Instance node at object level. When rendering with around 1000 points I get a decent preview quickly in the IPR, but a full render at 256 samples still takes far too long. If I up the point count to something like 50k, I struggle to get any feedback from the IPR; at 100k I get no feedback at all, it just freezes, and when I try to render to disk the render doesn't even begin. What am I doing wrong? From what I've seen, I should be able to achieve 100k instances fairly easily if I'm not mistaken. I appreciate any help! Specs: i7-6700k, GTX 1080, 16GB RAM
  12. Hi guys, this is a noob question but I just can't figure it out. I have a pyro cache in the scene. All I want to do is instance that obj node, transform it a bit, and scale it up. I've tried using the Instance object, but it seems I can't rotate or move it. Can anyone help? Thanks! shawn
  13. Rocket Bus - houdini training

    Hello everyone, the primary aim of this training is to provide a project that will take the viewer through as much of Houdini as possible, letting them utilize the various tools that Houdini provides for modeling and VFX to generate a detailed environment. Currently the training is available for Redshift and Octane; next week I'll be recording the Mantra version. There is a 25% introductory discount on the entire training until 18th May 2019. Click on the link below for more information: http://www.rohandalvi.net/rocketbus
  14. Hi, I was just trying to replace my pieces with high-quality geometry using instancing, but I get different geometry in the rigid body simulation. Could you please point out where I went wrong? Many thanks, PREM wall rbd sim v7.hipnc
  15. If you're interested, be sure to check out the course right here: http://bit.ly/2tEhWFN L-systems offer a great way of procedurally modeling geometry, and with these skills you'll be able to create a wide variety of complex objects. If you've ever wanted to create foliage, complex fractal geometry, or spline-based models from scratch, then L-systems offer you a procedural way of doing so. Unlike many other tutorials, this course aims to address the topic in an easy-to-follow, technical, and artistic way. In addition to L-systems, you'll learn more about instancing and how to work efficiently with Houdini. We'll be rendering out millions of polygons with both Mantra and Redshift while aiming to optimize render times as much as possible. These optimizations are applicable across a wide variety of 3D topics: large-scale environment creation, destruction, crowds, and real-time render engines are all examples of other 3D topics that will benefit from this course. Thank you for watching!
  16. Hi, I'm trying to assign a material to some instances, and I'd like the opacity to be driven by the instanceObject UVs (uv) and the base color to be driven by the instanceTemplate UVs (uvgral). Is this possible? What I'm trying is to call uvgral in the material, but it's not working. Overrides I do at SOP level don't work (naturally), and my attempt to make an override at object level was unsuccessful. Thanks!
  17. Arnold Instance Offset Volumes

    Hey there, having a little bit of Arnold trouble; I'm a little inexperienced with rendering limitations and instancing in Arnold. I'm trying to load in instanced VDBs via an instancer obj node, using a per-point instancefile, but they're not renderable. I did the same with the Instance SOP, but they weren't renderable either until I unpacked them, which I believe defeats my purpose. I'm just trying to instance a volume and offset time for variation. It's probably a limitation of packed prims with Arnold, but it does work if I directly Instance Object my Arnold Volume obj node (where I lose per-point offset control). So I'm a little confused. I'm on HtoA 3.03 at the moment; perhaps I should update, because I've been told packed prims were recently further supported. I'm on Houdini Indie, so no access to .ass files. To summarize: if I use @instancefile (Instance SOP), I get access to the frame variable, but I have to unpack; if I use @instancefile within an instance obj, I get no render effect; if I use @instancepath within an instance obj, I can render, but have no access to the frame variables. Cheers, I'll play around with it and post my solution if I find one.
  18. I'm having an issue where some of my instanced objects are not casting shadows when rendering with Arnold. In the example project I've uploaded, box_instance_1 is not casting shadows on the ground plane, whereas box_instance_2 is casting shadows perfectly fine. Even stranger, when I hide box_instance_1 from render, box_instance_2 suddenly does not cast shadows. As far as I know, I've set up both instances in the same way. Clearly something weird is going on, and I'd love to know if anyone else has encountered this issue. I'm on version 16.5.439. rock_instances.hip
  19. With the help of both the Redshift community and resources here, I finally figured out the proper workflow for dealing with Redshift proxies in Houdini. Quick summary: out of the box, Mantra does a fantastic job automagically dealing with instanced packed primitives, carrying all the wonderful Houdini efficiencies right into the render. If you use the same workflow with Redshift, though, RS unpacks all of the primitives, consumes all your VRAM, blows out of core, devours your CPU RAM, and causes a star in a nearby galaxy to supernova, annihilating several inhabited planets in the process. Okay, maybe not that last one, but you can't prove me wrong so it stays. The trick is to use RS proxies instead, in turn driven by the Houdini instances. A lot of this was based on Michael Buckley's post. I wanted to share an annotated file with some additional tweaks to make it easier for others to get up to speed quickly with RS proxies. Trust me; it's absolutely worth it. The speed compared to Mantra is just crazy. A few notes:
     - Keep the workflow procedural by flagging Compute Number of Points in the Points Generate SOP instead of hard-coding a number.
     - Use paths that reference the Houdini $HIP and/or $JOB variables. For some reason the RS proxy calls fail if absolute paths are used.
     - Do not use the SOP Instance node in Houdini; instead use the instancefile attribute in a wrangle. This was confusing, as it doesn't match the typical Houdini workflow for instancing.
     - There are a lot of posts on RS proxies that mention you always need to set the proxy geo at the world origin before caching. That was not the case here, but I left the bypassed transform nodes in the network in case your mileage varies.
     - The newest version of Redshift for Houdini has an Instance SOP Level Packed Primitives flag on the OBJ node under the Instancing tab. This is designed to automatically do the same thing Mantra does. It works for some scenarios but not all; it didn't work for this simple wall-fracturing example. You might want to take that option for a spin before trying this workflow.

     If anyone just needs the Attribute Wrangle VEX code to copy, here it is:

```
v@pivot = primintrinsic(1, "pivot", @ptnum);
3@transform = primintrinsic(1, "transform", @ptnum);
s@name = point(1, "name_orig", @ptnum);
v@pos = point(1, "P", @ptnum);
v@v = point(1, "v", @ptnum);
```

     Hope someone finds this useful. -- mC Proxy_Example_Final.hiplc
  20. Hello all, I've run into a problem that's been sending me in six directions, and I can't seem to get any of them straight in my head. I have one cache that I want to copy to points that show up over time; I'm basically doing a running explosion like this: https://gfycat.com/shinyhardanaconda When I copy the packed, cached sim onto the points, the cache doesn't play with the offset of the points, so how would I go about offsetting each new sim on each new point? I've created a point attribute that marks the frame each point gets created on, but I don't know how to connect that to a time offset or shift that would make it work per point. Am I even doing this the right way? Anything helps, thanks.
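The mapping the poster is after is just an offset between the global frame and each point's birth frame. A minimal sketch (the attribute name birth_frame is illustrative):

```python
def cache_frame(global_frame, birth_frame, cache_start=1):
    """Frame of the cached sim that an instance spawned on birth_frame
    should display: every copy replays the cache from cache_start onward."""
    return cache_start + (global_frame - birth_frame)

# A point born on frame 25, viewed on frame 30, reads cache frame 6:
f = cache_frame(30, 25)
```

In Houdini this expression would typically drive a Time Shift inside a for-each over the birth-frame attribute, or set the frame index of a per-point file path, so each copy starts its sim the moment its point appears.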
  21. Hi, I am looking for ways to replicate in Houdini point instancing done in some other application. I will skip the data-importing part here, because in my situation it is clear how to do that with Python. Let's say I already have a Python dictionary with elements like 'name': 'transform', where 'name' is the name of the object to be instanced and 'transform' is a list of 16 floats representing the world transform matrix of that instance. I have figured out so far how to do it at the object level. Here is my Python code for that:

```
# My existing dictionary containing pairs like 'name': 'transform',
# where each dictionary value is a list of 16 floats
my_dict = {'foo': '....', 'bar':'......', ......}

for key in my_dict.keys():
    node = hou.node('/obj').createNode('null')
    node.setName(key)
    m4 = hou.Matrix4(1)
    m4.setTo(my_dict[key])
    node.setParmTransform(m4)
```

     This gives me a bunch of named nulls with the correct transforms, and I can parent my objects under those nulls. But I need the same at the SOP level: a bunch of points with 'name', vector 'scale', and preferably quaternion 'orient' attributes, which I can pipe into the right input of the Copy SOP for instancing. Any help on that would be much appreciated.
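To get from those 16-float matrices to SOP point attributes, each matrix can be split into the translate / scale / rotation pieces Houdini expects (P, scale, orient). A plain-Python sketch of the decomposition under Houdini's row-vector convention, ignoring shear (assumptions: the flat list is row-major, translation in elements 12-14):

```python
import math

def decompose(m16):
    """Split a row-vector-convention 4x4 (flat list of 16 floats) into
    translate, per-axis scale, and the normalized 3x3 rotation rows
    (from which an orient quaternion can then be built)."""
    rows = [m16[0:3], m16[4:7], m16[8:11]]
    scale = [math.sqrt(sum(c * c for c in r)) for r in rows]
    rot = [[c / s for c in r] for r, s in zip(rows, scale)]
    translate = m16[12:15]
    return translate, scale, rot
```

In a Python SOP each point would then get its P and scale attributes from the first two results, and an orient quaternion derived from the rotation rows (e.g. via hou.Quaternion(hou.Matrix3(...))), which the Copy SOP's right input consumes directly.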
  22. I'm having some trouble making my clustered pyro sim have collisions. It's a custom setup where I make a couple of containers with instancing at the start frame, and I've made a small change in the Gas Resize Fluid Dynamic DOP to resize each container separately based on $OBJID. The problem is that when I add collision objects, my density just doesn't show up or source anymore. When I use the shelf tool to make a static or deforming collision object, I don't get any smoke, or I get it in weird places where the containers overlap. And when I use a Source Volume node set to Collision, all I get is what appears to be the source field, but there is no density coming from it; when I disable the Source Volume node, density sources and moves normally again. I've tested it in small scenes with basic geometry where I use the shelf tools to set up the clustering and collision, and there it seems to work. The Source Volume node gives me the same issue, except when there's just one container (so no clusters); then it works just fine. Since the shelf tool works, my guess is that I have to change some setting somewhere so that it works with instancing. I've been comparing the working test scene with my scene, but I haven't found anything yet; I'm afraid I won't see what I need to change even if it's right under my nose. I've added the .hip file, but beware, as it generates quite a bit of geo, so expect some loading time when you go into the SURFACE node. Thanks for any help! Eckhart TerrainDemat.v013.hip
  23. Hello! I'm currently working on a shot where a rocket takes off. I've gotten the setup working quite well for the start and the look of the spreading of the fluid; I now need the rocket to take off further. This introduced the problem that my container becomes gigantic, as the bottom of the container spreads horizontally but the trail is just narrow. That's why I dug into instancing over the last few days and got quite good results. The issue I am failing at is transferring values between my clusters. I take the smoke object from the current iteration and add 1 and subtract 1 to get the adjacent smoke objects, and merge them. I took the setup from the incredibly generous Florian Bard (http://flooz-vfx.com/ - it's the rabbit_trail.hip file where he does that), but when trying to apply it to my particular case I couldn't get it to work. I tried various ways of importing the other smoke objects, but apparently it's not fetching them. This is an image of my latest version (v67) showing the cut; attached are also some dailies demonstrating the issue. It seemed to be fine before I introduced dissipation in my sim, but v67 (pyro_daily_v67_[1001-1105].mp4) now has a very apparent cut where the smoke objects meet. (The look I'm going for is v38 (pyro_daily_v38_[0980-1160].mp4) - that was before the sim needed to be bigger and thus I worked in instancing.) That's the basic gist of what I do with the clusters: import current, merge with next, merge with previous, then smooth out based on voxel proximity to the border. Here is the HIP as well, if taking a peek could help: GoFullApolloDreizehn_In_v243o.hip [Also, for some reason my smoke object clusters start at 2. In my scene I have only two, and they are named smokeobject1_2 and _3, which I can't quite understand.] Thank you in advance, Martin
  24. Hi guys, I am currently building an environment for which I want to instance a lot of Megascans rocks. I am using the Instance object so that I can instance thousands of the rocks without a huge hit. The problem is that once I turn on the displacement on the rocks, everything comes to a halt. It looks like Mantra is placing all the objects and then dicing and displacing the geometry. Is there a way of dicing and displacing the geo first? Should I just bake the displacement into the geo?
  25. Hello, I recently went through Steven Knipping's tutorial on rigid bodies (Volume 2), and now I have a nice scene with a building blowing up. The simulation runs off low-res geometry, and then Knipping uses instancing to replace these parts with high-res geometry that has detailing in the break areas. The low-res geometry has about 500K points as packed disk primitives, so it runs pretty fast in the editor. It also gets ported to Redshift in a reasonable amount of time (~10 seconds). However, when I switch to the high-res geometry, as you might guess, the 10 seconds turn into around 4 minutes, with a crash rate of about 30% due to memory. When I unpacked it I think it was 40M points, which I can understand are slow to write/read in the Houdini-Redshift bridge, but is there no way to make Redshift use the instancing and packed disk primitives? My theory is that RS unpacks all of that, and that's why it takes forever, because when I unpack it beforehand it works somewhat faster - at least for the low-res; the high-res just crashes. I probably don't understand how Redshift works and have wrong expectations; it would be nice if someone could give me an explanation. Attached you'll find an archive with the scene, with one frame (frame 18) of the simulation included, as well as the folder for the saved instances of low- and high-res geometry. Thanks a lot for your help, Martin (Here's a pic of the low-res geometry; the high-res is basically every piece broken down into ~80 times more polygons.) Martin_RBDdestr_RSproblem_archive.zip