Search the Community
Showing results for tags 'workflow'.
-
Lately I have been learning the FLIP solver. One big difference from the pyro/vellum solvers concerns me, which is that I could not reproduce what the FLIP solver does with a DOP network of my own. The inside of this solver is much more complex. I feel a bit insecure about this; I don't like being unable to reproduce what I learn here in a custom setup (which I might need to do if I have to build a more custom simulation, with interactions with other solvers or whatever), and I don't like having a sim that automatically looks good but that I cannot fully control. How did you implement it in your workflows? I guess you can customize a lot by editing fields before feeding them into the FLIP solver, but would that be enough? If anyone has advice, that would be great, but I would be glad as well just to hear your opinions or your experience with this change.
-
http://www.patreon.com/posts/31506335 Carves out polygons using a point attribute with the ability to define the carve values per primitive using primitive attributes. Pure VEX implementation, 10x faster than the default Carve SOP (compiled). It preserves all available attributes. It supports both open and closed polygons.
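The core of any carve operation is trimming a curve at a parametric value by arc length. This is an illustrative sketch of that idea in Python, not the actual VEX from the post; the function name and 1D-tuple representation are my own:

```python
import math

# Illustrative sketch (not the linked VEX implementation): carve an open
# polyline so only the portion from parameter 0 to u (by arc length) remains.

def carve_polyline(points, u):
    """points: list of (x, y, z) tuples; u: parametric position in [0, 1]."""
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

    # Cumulative arc length per segment.
    seg = [dist(points[i], points[i + 1]) for i in range(len(points) - 1)]
    target = u * sum(seg)

    out = [points[0]]
    acc = 0.0
    for i, s in enumerate(seg):
        if acc + s < target:
            out.append(points[i + 1])   # keep the whole segment
            acc += s
        else:
            # Interpolate the final carve point inside this segment.
            t = (target - acc) / s if s > 0 else 0.0
            a, b = points[i], points[i + 1]
            out.append(tuple(p + t * (q - p) for p, q in zip(a, b)))
            break
    return out
```

Doing this per primitive, driven by a primitive attribute for the carve value, is what makes the per-prim workflow possible.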
- 16 replies
-
- 3
-
- procedural
- fx
- (and 9 more)
-
New Houdini Workflow Tutorial! In this video, I take an in-depth look at: - The Copy to Points node - The Variant Attribute - Packing and Intrinsics - Rest Position and calculating it for moving geometry - Loops - And a lot of tips and tricks for automating and optimizing the process.
- 1 reply
-
- tutorial
- copy to points
-
(and 2 more)
Tagged with:
-
How could I add a SOP to every geometry node inside a subnet with a complex hierarchy (multiple subnets inside subnets inside subnets, etc.; talking about a non-repeating pattern)? Merging everything into a single geometry node to apply my modification is not an option here, as I really need to keep my hierarchy paths intact for multi-software workflows and rendering purposes. Scripting looks like the way to go, but I'm not comfortable enough with it. Any idea how to tackle this specific problem? HScript? Python? In other words: /// For each geometry node inside a subnet: add a specific SOP inside the geometry node, connect this new SOP to the last existing SOP, and set the display flag on this new node. Joy. /// Thanks for your help.
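Python is the natural fit here. Below is a sketch of the recursive traversal only; a minimal stand-in `Node` class replaces `hou` nodes so the logic is self-contained, and the real `hou` calls you would use per geo node are noted in comments:

```python
# Minimal stand-in for hou nodes so the traversal logic runs anywhere.
# In Houdini you would start from hou.node("/obj/my_subnet") and check
# child.type().name() instead of the type_name field used here.

class Node:
    def __init__(self, name, type_name, children=None):
        self.name = name
        self.type_name = type_name   # e.g. "subnet", "geo", "cam"
        self.children = children or []

def geo_nodes(node):
    """Recursively yield every 'geo' node under an arbitrarily nested subnet."""
    for child in node.children:
        if child.type_name == "geo":
            yield child
        elif child.type_name == "subnet":
            yield from geo_nodes(child)

# For each geo node found, the real hou calls would look roughly like:
#   new = geo.createNode("null", "injected_sop")   # add the specific SOP
#   last = geo.displayNode()                       # current display node
#   new.setFirstInput(last)                        # wire it after the last SOP
#   new.setDisplayFlag(True)
#   new.setRenderFlag(True)
```

This keeps the hierarchy and paths untouched, since nothing is merged; the SOP is appended inside each existing geometry node.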
-
Happy Friday! I'm currently working on a project where I need to export alembic assets into C4D. The assets I want to export have embedded prim attributes that I want to convert to prim groups, so the artist has control over the material selections in C4D. Currently I'm splitting all the attributes and converting them to groups manually, which is time-consuming and can break. Since I'll be modifying and making adjustments, I was wondering if there is a more procedural way of doing this through a wrangle or possibly the Partition SOP? Any help would be greatly appreciated, thanks!
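The partition step boils down to bucketing primitives by their attribute value and deriving a legal group name from each value. A sketch of that logic in plain Python (in Houdini, a Python SOP would read `prim.attribValue(...)` and create the groups; the `prefix` and sanitizing rules here are my own assumptions):

```python
# Sketch of the partition logic: bucket primitives by a per-prim string
# attribute value and derive one group per distinct value, so materials
# can be assigned per group downstream. Plain lists stand in for prims.

def attrib_to_groups(prim_values, prefix="mat_"):
    """Return {group_name: [prim_indices]} from per-prim attribute values."""
    groups = {}
    for i, value in enumerate(prim_values):
        # Sanitize the attribute value into a legal group name
        # (last path component, spaces replaced).
        name = prefix + value.split("/")[-1].replace(" ", "_")
        groups.setdefault(name, []).append(i)
    return groups
```

This is essentially what the Partition SOP does with a rule like `mat_`prim("path", "shop_materialpath", 0)`` , so either approach stays procedural as the attribute values change.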
- 5 replies
-
- groups
- attributes
-
(and 3 more)
Tagged with:
-
Hello guys, I am aware of this page, but I just wanted to get an idea of some of your preferences in terms of preparing geometry for render: https://www.sidefx.com/docs/houdini/render/tips.html If I understand correctly, exporting as Alembic is good for flexibility and for using the geo as a proxy in other contexts. What I noticed with the ROP Alembic Output is that, even though I put an * in the attribute section, my Alembic file contains only the P and name attributes. Where have the others gone? How can I recover them? Attribute Transfer? Do you polysoup before using the ROP? Have I missed something important I should be doing to get my sim ready for shading and rendering? All tips super welcome!! Thank you.
-
patreon.com/posts/38913618 Subdivision surfaces are piecewise parametric surfaces defined over meshes of arbitrary topology. Subdivision is an algorithm that maps a surface to another, more refined surface, where a surface is described as a set of points and a set of polygons with vertices at those points. The resulting surface always consists of a mesh of quadrilaterals. The most iconic example is to start with a cube and converge to a spherical surface, but not a sphere: the limit Catmull-Clark surface of a cube can never approach an actual sphere, as it's bicubic interpolation and a sphere would be quadric.

The Catmull-Clark subdivision rules are based on OpenSubdiv, with some improvements. It supports closed surfaces, open surfaces, boundaries defined by open edges or via sub-geometry, open polygons, open polygonal curves, mixed topology and non-manifold geometry. It can handle edge cases where OpenSubdiv fails or produces undesirable results, i.e. creating gaps between the sub-geometry and the rest of the geometry. One of the biggest improvements over OpenSubdiv is that it preserves all boundaries of the sub-geometry, so it doesn't introduce new holes into the input geometry, whereas OpenSubdiv will just break up the geometry: blasting the sub-geometry, subdividing it, and merging both geometries as is. Houdini's Catmull-Clark also produces undesirable results in some cases, i.e. mixed topology, where it will either misplace some points or just crash Houdini due to the use of sub-geometry (bug pending).

Another major improvement is for open polygonal curves, where it produces a smoother curve. The default Subdivide SOP fixes the points of the previous iteration in subsequent iterations, which produces different results if you subdivide an open polygonal curve 2 times in a single node vs 1 time in each of 2 chained nodes. This is not the case for polygonal surfaces. The VEX Subdivide SOP applies the same operation at each iteration regardless of topology.

All numerical point attributes are interpolated using Catmull-Clark interpolation. Vertex attributes are interpolated using bilinear interpolation, like OpenSubdiv. Houdini's Catmull-Clark implicitly fuses vertex attributes so they are interpolated just like point attributes. Primitive attributes are copied. All groups are preserved except edge groups, for performance reasons. The combined VEX code is ~500 lines.
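For the open-curve case specifically, one subdivision step reduces to the familiar cubic B-spline refinement with pinned endpoints: new points at segment midpoints, and interior points smoothed as (prev + 6v + next) / 8. A minimal sketch (1D values for brevity; the exact boundary rules Houdini and OpenSubdiv use may differ in detail):

```python
# One subdivision step on an OPEN polygonal curve, sketched with the
# cubic B-spline rules: edge midpoints inserted, interior vertex points
# re-smoothed as (prev + 6*v + next) / 8, endpoints held fixed. Because
# interior points are RE-smoothed each iteration, fixing the previous
# iteration's points (as the default Subdivide SOP does for curves)
# gives a different, less smooth result than reapplying the rule.

def subdivide_curve(pts):
    """One subdivision step on a list of 1D point values."""
    n = len(pts)
    out = [pts[0]]                                      # endpoint fixed
    for i in range(n - 1):
        out.append((pts[i] + pts[i + 1]) / 2.0)         # edge midpoint
        if i + 1 < n - 1:                               # interior vertex point
            out.append((pts[i] + 6 * pts[i + 1] + pts[i + 2]) / 8.0)
    out.append(pts[-1])                                 # endpoint fixed
    return out
```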
-
- 2
-
- vex
- performance
-
(and 7 more)
Tagged with:
-
I made this HDA to streamline the process of versioning caches. It will automatically produce a file path and file name for your cache, and load it back in once it is exported. You can flip through different versions easily by using the version slider, or using the 'Create File Node for This Version' button and wiring the file nodes up to a switch node. You can write detail attribute strings to store notes about the cache such as simulation parameters - very useful when referring back to old sim caches. At the moment this is a non-commercial HDA. Download link down this page beneath the video. Get at me with thoughts, comments, questions etc. SOP_MI_version_filecache.hda
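The bookkeeping such a versioning HDA automates is mostly string and path logic. A sketch of the two core pieces, building a version-stamped cache path and finding the latest version on disk; the directory layout and naming convention here are my own assumptions, not necessarily the HDA's:

```python
import re

# Sketch of cache-versioning logic: a version-stamped path builder and a
# latest-version lookup. The layout (name/vNNN/name_vNNN.FFFF.bgeo.sc)
# is an assumed convention, not the HDA's actual one.

def cache_path(base_dir, name, version, frame):
    """Build a version-stamped per-frame cache path."""
    v = "v%03d" % version
    return "/".join([base_dir, name, v, "%s_%s.%04d.bgeo.sc" % (name, v, frame)])

def latest_version(existing_dirs):
    """Pick the highest vNNN from a list of version directory names."""
    versions = [int(m.group(1)) for d in existing_dirs
                for m in [re.match(r"v(\d+)$", d)] if m]
    return max(versions) if versions else 0
```

Driving both the write path and the read path from the same version parameter is what lets a version slider flip between caches safely.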
-
Hi! I've been working on lots of rigid body simulation lately. I am happy with what SideFX has done for the RBD workflow, especially since they released Houdini 18. Sample workflow:
- Prep objects. Categorize/separate them (glass, metal, concrete, wood) into groups. You will use these for RBD Configure.
- Clean objects. Check for holes, wrong normals, unused points, etc. (Clean SOP / Facet SOP).
- Check the UVs. (This is super important after the simulation so that you will not have a hard time doing the materials.)
- Prefragment objects (RBD Material Fracture). For the constraints I prefer doing it after the fracturing.
- Make inside UVs for the inside faces (UV Unwrap, selecting the inside group).
- Cache prefragments per object (RBD I/O), 1 frame only.
- Make constraints.
- RBD Configure (this helps the solver a lot because you can set the density/type of material like glass, metal, concrete, wood).
- RBD Solver. Configure the constraints for breaking, forces and collisions.
- RBD I/O to cache the simulation.
It is good for simple shots or scenes, but when it comes to heavy scenes I notice that it is too heavy to load when opening the scene, especially when you have lots of objects to prefragment (RBD Material Fracture). Anyone having the same problem? Even if you cache your objects, is the scene still heavy/too slow?
-
Hi, so I was fiddling with the POP network and finally got a nice stream of particles, but it doesn't render. It's in the viewport, but it does not render. What I have are several lines that are used for creating geometry; all these lines are in one geo SOP. I want to use a single line as the source of a POP network for my particles, and likewise for all the other lines. So I created a subnet for this so it can be copied and pasted. So everything is in one geo SOP. Now for some reason the particles only show up in my viewport, but they do not show up as spheres in the render. Therefore I made a little test and created an empty geo node on obj level with a POP network. This time the render does show the particles as spheres. So, what exactly is the right workflow for particle effects, so that I can render them? And what is wrong with my first approach? Hope you can help out. Thanks in advance, M
-
http://patreon.com/posts/32631275 http://gumroad.com/l/houdinisupercharged In this video I will go through the GUI customizations I did for Houdini 18. So let's get into it.
- 7 replies
-
- 2
-
- workflow
- performance
- (and 8 more)
-
Hello everyone, I am currently setting up an asset pipeline for a project in Houdini. My main question is how to handle the versioning of Digital Assets. I prepared an overview where you can see the 2 ways I came up with.

The first option is the "self contained" one. Whenever you want a new version of your asset you: 1. duplicate the subnet you work in; 2. then you have "subnet_v002", which gets turned into a digital asset "asset_v002.hda" with the definition "asset::2.0". Every new version gets its own asset library, and in every library there is only ONE definition of an asset, as you can see in the picture. This workflow is good if you only want one .hda file on your drive associated with each asset version.

The other option is the "packed" one. As before, whenever you want a new version of your asset you: 1. duplicate the subnet you worked in; 2. but then you don't create a separate library; you have only one library with multiple definitions, one definition per version. That means if the library asset.hda already exists with the definition asset::1.0, you append the latest definition asset::2.0 to the library. In this case you have all the definitions packed into one library. For me this option is a little confusing in the sense that you don't have a file separation of the different versions, and one asset library can potentially become huge in file size.

One big point of assets is being able to update them in a Houdini scene easily without destroying relations to other nodes. Now, when it comes to that, for the "packed" option, as every version is in the same library, you can simply write a script that updates all nodes using that library to the latest definition; is that correct? But if I like the file separation on disk for each new version, HOW would I, in the "self contained" option, update my "asset_v002.hda" with asset::2.0 to "asset_v003.hda" with asset::3.0 as the definition?

All of this should happen in a somewhat automated way, so I am looking for a Python way to do this. I would be really interested if anyone has an update solution for the "self contained" option, and in what you generally think is the better option for handling digital assets. Thanks a lot, Paul
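For the "self contained" option, the scriptable part splits into two steps: deriving the next version's names, and copying the definition (in `hou`, roughly `definition.copyToHDAFile(new_path, new_name)` followed by `hou.hda.installFile(new_path)` and `node.changeNodeType(new_name)`; treat those calls as a sketch to verify against the docs). The string handling, which is the easy part to get wrong, looks like this:

```python
import re

# Sketch of the version-bump naming for the "self contained" HDA option:
# derive the next namespaced definition and the next library file name.
# The zero-padded _vNNN convention matches the example in the post.

def next_definition(name):
    """'asset::2.0' -> 'asset::3.0' (major-version bump)."""
    base, version = name.rsplit("::", 1)
    major = int(version.split(".")[0]) + 1
    return "%s::%d.0" % (base, major)

def next_library(path):
    """'asset_v002.hda' -> 'asset_v003.hda' (padding preserved)."""
    return re.sub(r"_v(\d+)\.hda$",
                  lambda m: "_v%0*d.hda" % (len(m.group(1)), int(m.group(1)) + 1),
                  path)
```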
- 1 reply
-
- hda
- digitalassets
-
(and 3 more)
Tagged with:
-
http://patreon.com/posts/33249763 http://gumroad.com/l/houdinisupercharged In this video I will show you some of the inner workings of the context-sensitive rule-based hotkey system that I implemented and one I have been using for a while to speed up my workflow inside Houdini. It's a very light-weight and flexible system that allows an arbitrary number of actions to be assigned to any key, with extensive modifier key and state support (Ctrl, Shift, Alt, Space, LMB, MMB, RMB, selection state). It's deeply integrated into the overlay network editor workflow.
-
- 5
-
- optimization
- walkthrough
-
(and 11 more)
Tagged with:
-
Hello, newbie here. I'm following an old tutorial, and in order to delete closed/unclosed primitives the author created an attribute called "closed", which is then referenced in a Delete node. He uses $CLOSED in the Delete node, but when I do the same it shows up as an error. Is this a deprecated way of referencing the attribute? I tried @CLOSED instead and that seemed to work, but I have trouble further down the line and I'm trying to figure out if this makes a difference.
- 1 reply
-
- depricated
- old
-
(and 4 more)
Tagged with:
-
How are you managing your caches? If I have 4-5 sims that build on top of each other, each of them has a cache, and I change something in the first sim, I need to go and re-cache all the other caches (which may be located in different places in my network). Is there a way to manage this "chain" of caches, so it's possible to have a better overview of all the caches in the sim and be able to quickly re-cache the sims that need it? What is a good workflow for this?
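One way to get that overview is to record the cache dependencies explicitly and walk them. A small sketch (cache names and the dependency dict are hypothetical; in Houdini you could derive the dict from node inputs or a naming convention):

```python
# Sketch of cache-chain bookkeeping: given which cache reads which
# upstream caches, list every cache downstream of a changed one, in the
# order they would need re-caching. A simple breadth-first walk is
# enough for typical sim chains.

def stale_caches(deps, changed):
    """deps: {cache: [upstream caches]}; returns downstream caches in order."""
    # Invert the dependency map to downstream adjacency.
    down = {}
    for node, ups in deps.items():
        for up in ups:
            down.setdefault(up, []).append(node)
    # Breadth-first walk from the changed cache.
    order, queue, seen = [], [changed], {changed}
    while queue:
        node = queue.pop(0)
        for nxt in sorted(down.get(node, [])):
            if nxt not in seen:
                seen.add(nxt)
                order.append(nxt)
                queue.append(nxt)
    return order
```

Pairing something like this with a TOP network (PDG), which tracks dirty work items for you, is the built-in route to the same idea.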
- 10 replies
-
- simulation
- cache
-
(and 1 more)
Tagged with:
-
I've been researching this topic for some time and cannot find any valuable information. I have a character with a skeleton and blend shapes. This character has multiple animations, either on takes or in multiple files (with HDAs). I cannot find any reasonable way to bring this animation into Unity. Making all animations in one timeline and splitting it in Unity, or exporting multiple FBXes with the same mesh, sounds absurd to me. Can anyone recommend a proper workflow?
-
Hi everyone, I've been tasked with laying down a decent workflow/pipeline plan for my team revolving around creating motion graphics and key visuals for advertising in Houdini, and I was hoping to find some help/suggestions for it. I chose Houdini as the core element just to avoid extra plug-ins, which in my experience create a vast amount of issues in the long run, but I feel I won't be able to do everything I need quickly enough using it alone. My first idea was to get licenses for ZBrush, Substance and Redshift to complement Houdini, so basically it would be like this: start by modeling non-procedural things in ZBrush, texture them in Substance, animate everything in Houdini and finally render in Redshift. When I started testing the workflow, I immediately noticed that the Substance Designer plugin is not compatible with the latest versions of Houdini, and my biggest fear came back, since I was hoping to achieve something at least slightly more seamless. I am now considering making all maps and textures through Photoshop and ZBrush only, skipping Substance altogether and sticking as much as possible to procedural shaders in Houdini, if I can produce them quickly enough. Anyway, what kind of workflows do you use or recommend? Could you give me any tips on how I could make things go smoother and avoid unexpected compatibility issues? How has your experience been using Substance Painter or Designer within your workflow so far? Cheers!
-
This operator allows you to call a collection of nodes on any data or simply no data (generators). It gives you full control over how the lambda function should be run.
- 3 replies
-
- 6
-
- workflow
- productivity
-
(and 5 more)
Tagged with:
-
I want to start working on a short film next month, and my plan is to create as much of it as I can in Houdini. If my film has characters with hair and clothing dynamics, what is the best workflow to handle this? Is it possible to have all the dynamics inside the character asset with a switch that turns them on and off? Like an animation mode, and then a simulation/rendering mode?
-
Hi everyone, I am working with our IT department to build a desktop for heavy FX work (mostly water and pyro simulations). Below are three options that we are considering; I am looking for any comments or recommendations on the different configurations. Thank you.
-
Hey magicians, I'm working on a project with a client that involves some large simulations, around 400 GB of stuff. My internet connection isn't the best in my city (10 Mb down, 5 Mb up), so I wanted to ask what's the best pipeline in these cases. The client gave me the idea of saving the sim to a hard drive and sending it to them via FedEx or something, so they get all the stuff within a few days and can render there. Another solution I think could work is using a service like GridMarkets, where I give them the files and they render them out. Any ideas will be more than welcome. Thank you!
-
Hello magicians, I'm working on a large-scale FLIP scene and need some tips regarding workflow. The main scene has 2 FLIP sims: 1 for a waterfall and 1 for a river. In my current setup I made the river using narrow band and the waterfall using an emitter; not sure if this is the best workflow in terms of speed and direction. The current setup looks like this (blue = waterfall / red = river), and here is a viewport sample. Questions: 1) Is this the best approach for mixing a waterfall with a river: emitter + narrow band? 2) I saw in another post that some people break the geometry into equal modules and then put them together; should I break the river into 3 equal parts to save time and gain quality? 3) I read that when you export particles, it is useful to delete attributes that won't be used. I did a quick test with all attributes and 1 frame was 800 MB; deleting some took it to 200 MB. Is this the right approach? 4) For final meshing, should I create a VDB / polygon soup and export passes in order to make detail stuff like foam? 5) Should I export particles in wedges? 6) Is there a DOP workflow to upres particles/the FLIP mesh, like "gasupres" in pyro? Would love to hear any tips regarding large-scale FLIP; I will keep reading the forum in the meantime. Thanks!
-
Hey all, I started using Redshift a few months ago and so far I like it a lot! I currently have one 1080 Ti 11 GB GPU in the machine I am using. I would like to get either another one of those cards, since GPU prices seem to have dropped lately back to something more reasonable, or two 1070 Ti 8 GB cards. I have heard that the performance difference is only about a 20% gain for the 1080 over the 1070s, so I might be better off getting two 1070s. The main question, though, is: what happens when you hit the render button on your Redshift node for a sequence inside Houdini if you have more than one GPU? If I had 3 GPUs, would it automatically render frame 1 on the 1st GPU, frame 2 on the 2nd, frame 3 on the 3rd and so on? Would it render the first frame across all 3 cards simultaneously by breaking up the frame to submit to the different cards; is that even possible? Do the extra GPUs only get used if you use distributed-rendering software like Deadline, or by launching RS renders from the command line? It seems like you have to launch from the command line in order to get the other GPUs to render, but I have never worked with a machine with more than one GPU installed. If I were to submit a 100-frame render from within Houdini by clicking the render-to-disk button on my RS node, would it only use the main GPU even with other GPUs installed? Any info from artists using multi-GPU systems would be great. I didn't find a lot of info about this on the Redshift site, but I might not have looked deep enough. The end goal would be to have the ability to kick off, say, 2 sequences to render on 2 of the GPUs while leaving my main GPU free to continue working in Houdini, and, at the end of the work day, allow the main GPU to then render another sequence so all 3 cards are running at the same time. I will most likely need to get a bigger PSU for the machine, but that is fine.
I am running Windows 10 Pro on an i7-6850K hex-core 3.6 GHz CPU with 64 GB RAM on an ASUS X99-Deluxe II motherboard, if that helps evaluate the potential of the setup. One last question: SLI bridges? Do you need these when setting up 2 additional GPUs that will only be used to render RS? I do not wish to have the extra GPUs combined to increase power in, say, a video game. I don't know much about SLI and when/why it's needed in multi-GPU setups, but I was under the impression that it is to combine the power of the cards for gaming performance. Thanks for any info, E
-
Hi there! I would like to render my water sim and whitewater in C4D to integrate them with an animated scene. I have two questions. 1. I was wondering what a good workflow is to get the simulation over to C4D for rendering; are Alembics still the best way to do this, or is Houdini Engine faster, or are there other ways? My 7 million points seem to be very, very heavy for C4D as Alembic... 2. How do I render out the whitewater correctly in C4D (Octane render)? Does this need to be geo or particles for rendering? And also, how do I get that nice lifespan on it too? Thank you for taking the time to help me out! Cheers!