skomdra last won the day on October 12 2020


Community Reputation

8 Neutral

About skomdra

Personal Information

  • Name
    Drasko Ivezic
  1. If the topology is changing, the UVs will change as well, so you have to find a way to deform your geometry with your generated growth geo without changing the topology; then the geometry will keep its UVs and stretch accordingly. If you use a triplanar projection, it projects every time from the same distance, so the UVs seem stuck in place while the geometry moves.
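The usual trick for keeping a projected pattern glued to deforming geometry is to freeze the undeformed positions into a rest attribute and project from that instead of the animated P. A minimal sketch as a point wrangle (the same idea as the Rest Position SOP):

```vex
// Point wrangle placed BEFORE the growth/sim deformation.
// Stores the undeformed position; a triplanar or UV projection
// that reads "rest" instead of P will stick to the surface
// while the geometry moves.
v@rest = @P;
```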
  2. Hi, no, I don't know Igor Zanic, the Rebelway guy? He is from Serbia, I guess; our countries are not that close, unfortunately, especially now during COVID. I wish I knew him. Did you try with a soft pin? Can you upload a hip file?
  3. In that case I would build different tools for different kinds of scenarios and make those tools work together, depending on the situation. For example, a separate tool for the whole city, one for the neighborhood, one for the city block, one for a single building, and one for the building interior. Then I would offer PDG scenarios to bake buildings at a small LOD or to have hero buildings, with the ones that have to be destroyed built with different fracturing modes and constraints. If you need to build several different cities, as for a game, it can become a very complex setup, especially for a beginner, so I would suggest the usual approach: split your big problem into smaller ones, split each of those into even smaller ones, then attack them one after another. Step by step, you will figure it out. I haven't worked in a game studio, but I would guess this can easily take weeks or even months to build such a set of tools from scratch. As I said, there is no one-button-rules-them-all solution in Houdini; Houdini is an operating system, a workflow, rather, but you still need to do your planning and engineering based on the scope of your project.
  4. It looks like you have a nice little engineering problem on your shoulders here. I wish I could help you, but this would really take a lot of experimenting and trying different pipelines. I understand what you need; it just takes time to figure out all those problems. It is not something that can be solved quickly, but rather by studying the tools, writing down some possible solutions, and testing, until you make it. At some point the question is whether it is really worth it: for what purpose you are doing it, what the final outcome is, and so on. And I don't know your parameters: budget, scope, calendar, size of your team, etc.
  5. If you need to add floors manually, you would still use some procedural way to do it rather than hand-modeling, and that system can work independently from your scattering and city-generating system. It can use many layered procedural systems; for example, you can have an HDA for the building iteration, then scatter those buildings, and for any wrong result you can just replace it and iterate over results for that specific building. It still takes a lot less time than hand-placing everything and solving the rest. You can also manually paint attributes and use them as additional art direction for the generation. I would combine these based on your preferences, and still leave room for hand-picking assets and replacing them whenever needed. Your requirement seems a bit complex, but it is doable with some planning. In any case, there is no magic button which will solve all the problems; some things have to be project specific. But in building those interfaces you can use a lot of clever ways to keep artists from digging under the hood and let them just play with the tools. It will take time to develop properly, and a lot of testing.
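As a sketch of the painted-attribute idea: assuming you hand-painted a float mask attribute (e.g. with an Attribute Paint SOP) and transferred it to your scattered points, a point wrangle can thin the scatter wherever the mask is low:

```vex
// Point wrangle after the Scatter SOP. Assumes a float point
// attribute "mask" in the 0..1 range, painted by hand and
// transferred to the scattered points. Each point survives
// with a probability equal to its mask value, so painting
// mask = 0 erases buildings in that area.
if (f@mask < rand(@ptnum))
    removepoint(0, @ptnum);
```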
  6. KineFX rig for a tail (line)

    Did you ever work with the Vellum hair sim? You can configure your skeleton so that you use it in a Vellum Configure Hair setup with a pin point on the joint you want to drag around, then sim it, plug the output of the sim back into the Bone Deform, and it should work.
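A hedged sketch of the pinning step: Vellum hair constraints can pin points that sit in a group, so a point wrangle on the skeleton curve can tag the joint you want to hold (the point-number parameter is an assumption; adjust it to your rig):

```vex
// Point wrangle on the skeleton curve, before the Vellum
// constraints. Puts one chosen joint into a "pin" group that
// the hair constraints can use as their pin-to-target points.
if (@ptnum == chi("pin_point"))
    i@group_pin = 1;
```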
  7. Why not edit after the geometry is generated? You can make a random result, pack it based on an attribute, then, when you notice the wrong ones, hand-pick them, split them off, unpack, and edit as much as you wish. If you have an art-direction situation, it would depend on what's seen in the shot anyway, right?
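For the "pack based on attribute" step, one sketch: give each connected piece a name and pack by it. The "class" attribute here is assumed to come from a Connectivity SOP upstream:

```vex
// Primitive wrangle between a Connectivity SOP and a Pack SOP
// set to pack by the "name" attribute. Each connected piece
// becomes one packed prim that you can later hand-pick,
// unpack, and edit.
s@name = sprintf("piece_%d", i@class);
```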
  8. https://github.com/kamilhepner/kinefx_tools These tools might help get you started; he created some HDAs just to bridge traditional rigging and the KineFX tools.
  9. Get normal from viewport pick

    If your points have normals, just select that point in the viewport, create a new Attribute Wrangle, and store the normal value into an attribute which you can later promote to the detail level and read with the detail() function in an expression. If you want to use it for rotation, you should use one of the methods for converting a normalized normal vector into a rotation value, depending on the direction your object is facing (if that is what you are after).
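A minimal sketch of both steps, assuming the picked point is the only one in the wrangle's group field; the attribute name "pick_N" and the +Y "up" axis are assumptions for the example:

```vex
// Point wrangle, run over the picked point only. Store the
// normal, then promote "pick_N" to the detail level with an
// Attribute Promote SOP so an expression elsewhere can read
// it, e.g. detail("../promote_node", "pick_N", 0).
v@pick_N = @N;

// For rotation: build a quaternion that rotates an assumed
// +Y "up" axis onto the normal, usable as an orient attribute.
p@orient = dihedral({0, 1, 0}, normalize(@N));
```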
  10. Did you try a remesh with a bigger value and then test how your struts behave? You can always use that low-poly sim to drive your high-poly model with a Point Deform; do you know how to set that up? It is rather simple. For such a simple movement you don't need so much detail in your cloth, unless you are making a very old and wrinkled shark.
  11. Not sure; my impression is that it stays the same, because they would need to build a completely new system from the ground up. CHOPs is one of the first systems from Hermanovic, who now uses it in his product TouchDesigner. But the new KineFX nodes use VOPs with operations on motion clips, which could be a replacement for CHOPs, so my guess is that everything keyframe-based will gradually move to SOPs and the clip operators; it makes so much more sense, visually and because it is multi-threaded. You are right about mocap and the big focus on that aspect at the moment. It is something to keep in mind: Houdini is still mostly used for live-action special effects. But make no mistake, mocap is just one possibility featured in this version. The new paradigms introduced in Houdini 18, like Python states, and the powerful rig nodes under the hood in 18.5, are just the beginning. They are working very actively on expanding these tools as we speak. If you find time, google KineFX; you will be surprised what people are already pulling off, from using skeletons for modeling to making custom HDAs by utilizing the KineFX VOPs and wrangles. There are still some hiccups with the UI, especially showing keyframes on the timeline, and some not-so-intuitive problems with the animation editor, but they are minor compared to what you can already do by using SOP nodes to update your rig and animation on the fly. Compared to this, Maya's HumanIK is actually very rigid. It opens endless possibilities for experimentation. The hard part is forgetting the traditional workflow, but once you make that step, you can just keep discovering new things and experimenting. The transition is hardest if you are already bound to a specific pipeline, but if you are free to just play for the joy of it, KineFX is your toy.
I can't speak about facial animation and what goes into it (other than clever use of different blend shapes), but you can easily do all kinds of limb bending just by making your joints more procedural, using different SOP nodes and curve IK constraints, to get pretty much whatever you can imagine.
  12. Walls for rooms in procedural house

    Many ways to do that. Start with the floor plan and try to figure out how you would slice it up with an attribute wrangle, then use those pieces as floor plans for the individual rooms. If you are following Opara, you should be able to figure this out by yourself.
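One way to sketch the slicing: with the floor-plan polygons in the first input and one seed point per room in the second, a primitive wrangle can tag each polygon with its nearest room, after which a Split or a For-Each (by attribute) separates the per-room floor plans. The two-input layout and the "room" attribute name are assumptions for the example:

```vex
// Primitive wrangle: floor plan in input 0, room seed points
// in input 1. Each primitive gets the id of the nearest seed,
// partitioning the plan into per-room pieces.
vector centroid = {0, 0, 0};
int pts[] = primpoints(0, @primnum);
foreach (int pt; pts)
    centroid += point(0, "P", pt);
centroid /= len(pts);
i@room = nearpoint(1, centroid);
```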
  13. It is not easy to answer this quickly. I recommend checking the basic rigging series on the SideFX site by Goldfarb, just to understand what is involved and get a basic idea. Later you can learn a lot by playing with the auto-rig, because it is basically a tool which creates nodes for you based on a template; then you can look into it and understand what is under the hood. Other than that, there are a lot of little tricks, such as Python snippets, to make things faster to build. The logic in the end is the same: there are objects organized in a hierarchy, and they modify the geometry based on the weights on your skin. You can use a few methods to capture the weights, and you can paint weights later; this is almost the same as in other DCCs. KineFX is a game changer because it is flexible: it uses the world-space transforms of named points as joints (check the introduction videos to understand it better). KineFX is a surface workflow rather than just a rig; since joints are basically points, you can use any geometry with points as joints. This makes building, manipulating, and changing joints on the fly procedural, and it is a very powerful layering tool for animation and all kinds of constraints. Another advantage is speed, because object-level rigs rely on CHOP networks for the constraints, and CHOPs can become a bit heavy and slow down the process. It will take time to understand how powerful it is, but it is worth digging in and experimenting. Object rigging, on the other hand, you will understand quickly, since it is not that different from what your usual DCC is doing. I hope I made myself clear enough.
  14. How to get pyro to affect an RBD

    You can merge two RBDs into one solver and use an Advect by Volumes force to move the RBD packed prims. They should already have the necessary constraints if you prepared them with the RBD Material Fracture node.
  15. Collecting assets

    The easiest solution: from now on, don't keep your textures and assets everywhere; put them in the project structure Houdini creates for you via File > New Project. Every pipeline has to be designed in advance so you don't run into these problems later. As for gathering the assets, there may be a Python script for that somewhere; I didn't search for it. Try Render > Pre-Flight Scene and see where it takes you.