
skomdra last won the day on October 12 2020

skomdra had the most liked content!

Community Reputation: 10 Good

About skomdra

  • Name: Drasko Ivezic
  • Interests: Animation, film-making, directing, writing, CGI
  1. Yes, that is the best way. If you have something as simple as animated cubes, look into https://www.sidefx.com/docs/houdini/nodes/sop/kinefx--capturepackedgeo.html and use the ROP FBX Character Output node. Alembic is overkill for this case (non-deforming geometry).
  2. Houdini 20 Wishlist

    Very interesting discussion, guys! I was wondering: isn't it possible to make your own HDAs for that (with a LOT of workflow methodology to follow) and avoid repeating tedious tasks over and over again? There are even TOPs which can help you rename your assets, put them in the right folders, and so on, so you never pay that extra cost in time and money. The big-team vs. indie-studio argument seems a bit superficial, since even indie artists will at some point want to take part in a bigger production involving more collaborators, so I would imagine it is in their interest to learn the big-studio methods. And if we are talking about one-person experimental Instagram shops, how can that be a priority for a serious software company like SideFX? In my opinion, SideFX can offer some high-level HDAs, Python scripts, and so on, covering some good scenarios, and that way help people adopt USD technology more quickly, rather than going backwards and trying to make a handful of indie artists happy (and I am saying that as an indie artist myself).
  3. ReverseNormals and Union

    Then reverse them after the loop? Just keep them in the group?
  4. KineFX rig for a tail (line)

    Not a wire sim, a hair sim. Not the same thing.
  5. Modeling part of a vehicle

    Piece by piece. You can use some points, an Add SOP, and a Resample to control the shape, then Sweep, group by range, group by object, extrude, bevel, bend... There are many possibilities: maybe a ramp to control the silhouette, some Copy to Points, shapes, booleans. All kinds of things are possible. Just try to break it down into basic shapes (sphere, tube, box) and think in terms of deformers, patterns, and mirroring. One option would be to build it from many pieces and then merge everything for rendering; if you don't have to deform it, it can have as many pieces as you want. Mastering SOPs would help.
  6. If the topology is changing, the UVs will change as well, so you have to find a way to deform your geometry with your generated growth geo without changing the topology; then the geometry will keep its UVs and stretch accordingly. If you use a triplanar projection, it projects from the same place every time, so the UVs seem stuck in place while the geometry moves through them.
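To see why triplanar UVs stay pinned while the geometry moves, here is a minimal single-plane sketch in plain Python (a hypothetical helper, not a SideFX API): UVs are taken straight from world position along the dominant normal axis, so a point that moves gets different UVs each frame, and the texture appears fixed in space.

```python
def planar_uv(p, n):
    """Crude single-plane pick of a triplanar projection: UVs come
    straight from world position p, choosing the projection plane
    from the dominant axis of the normal n."""
    ax, ay, az = abs(n[0]), abs(n[1]), abs(n[2])
    if ax >= ay and ax >= az:
        return (p[1], p[2])   # project along X
    if ay >= az:
        return (p[0], p[2])   # project along Y
    return (p[0], p[1])       # project along Z

# the same surface point before and after moving: its UVs differ,
# so the texture stays put in world space while the geometry slides
n = (0.0, 0.0, 1.0)
uv_before = planar_uv((0.2, 0.5, 0.0), n)
uv_after = planar_uv((0.7, 0.5, 0.0), n)
```

With deforming-but-topology-stable geometry and authored UVs, the texture would instead stretch with the surface, which is the behaviour the post recommends aiming for.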
  7. Hi, no, I don't know Igor Zanic, the Rebelway guy? He is from Serbia, I guess; our countries are not so close, unfortunately, especially now during COVID. I wish I knew him. Did you try with a soft pin? Can you upload a hip file?
  8. In that case I would build different tools for different kinds of scenarios and make those tools work together, depending on the situation. For example, a separate tool for the whole city, one for the neighborhood, one for the city block, one for a single building, and one for the inside of a building. Then I would offer PDG scenarios to bake buildings down to low LODs or keep them as hero buildings, and then handle the ones which have to be destroyed, with different built-in fracturing modes and constraints. If you need to build many different cities, as for a game, it can become a very complex setup, especially for a beginner, so I would suggest the usual approach: split your big problem into smaller ones, then each of those into even smaller ones, and attack them one after another. Step by step, you will figure it out. I haven't worked in a game studio, but I would guess it can easily take weeks or even months to build such a set of tools from scratch. As I said, there is no one-button-rules-them-all solution in Houdini; Houdini is an operating system, a workflow rather, and you still need to do your planning and engineering based on the scope of your project.
  9. It looks like you have a nice little engineering problem on your shoulders here. I wish I could help you, but this would really take a lot of experimenting and trying different pipelines. I understand what you need; it just takes time to figure out all those problems. It is not something which can be solved quickly, but rather by studying the tools, writing down possible solutions, and testing, until you make it work. At some point the question becomes: is it really worth it, for what purpose are you doing it, what is the final outcome, and so on. And I don't know your parameters: budget, scope, calendar, size of your team, etc.
  10. If you need to add floors manually, you could use a procedural way to do that rather than hand-modeling it, and that system can work independently from your scattering and city-generating system. It can use many layered procedural systems; for example, you can have an HDA for a building iteration, then scatter those buildings, and for any wrong result just replace it and iterate on that specific building. That still takes a lot less time than hand-placing everything and solving the rest. You can also manually paint attributes to use as additional art direction for the generation. I would combine these based on your preferences, and still leave room for hand-picking assets and replacing them whenever needed. Your need seems a bit complex, but it is doable with some planning. In any case, there is no magic button which will solve all the problems; some things have to be project-specific. But in building those interfaces you can find a lot of clever ways to save artists from digging under the hood and let them just play with the tools. It will take time and a lot of testing to develop it properly.
  11. KineFX rig for a tail (line)

    Have you ever worked with the Vellum hair sim? You can configure your skeleton so that you run it through Vellum Configure Hair with a pin point on the joint you want to drag around, then sim it, plug the output of the sim back into the Bone Deform, and it should work.
  12. Why not edit after generating the geometry? You can make a random result, pack it based on an attribute, then, when you notice the wrong pieces, hand-pick them, split them out, unpack, and edit as much as you wish. If you have an art-direction situation, it would depend on what's seen in the shot anyway, right?
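The pack-then-hand-pick workflow can be sketched in plain Python (hypothetical data and helper names, just to illustrate the grouping idea behind a pack-by-attribute step, not a Houdini API):

```python
from collections import defaultdict

def pack_by_attribute(pieces, attr):
    # group generated pieces by an attribute value, mimicking a
    # pack-by-attribute step; "pieces" are plain dicts here
    packs = defaultdict(list)
    for piece in pieces:
        packs[piece[attr]].append(piece)
    return dict(packs)

pieces = [
    {"name": "bldg_a", "block": 0, "bad": False},
    {"name": "bldg_b", "block": 0, "bad": True},
    {"name": "bldg_c", "block": 1, "bad": False},
]
packs = pack_by_attribute(pieces, "block")

# hand-pick the flagged pieces for editing; the rest stay packed
to_fix = [p for pack in packs.values() for p in pack if p["bad"]]
```

In Houdini the same split would typically be a Pack SOP keyed on the attribute, followed by a Split or Blast on the hand-picked pieces and an Unpack for editing.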
  13. These tools might help get you started: https://github.com/kamilhepner/kinefx_tools. He created some HDAs just to bridge traditional rigging and the KineFX tools.
  14. Get normal from viewport pick

    If your points have normals, just select that point in the viewport, create a new Attribute Wrangle, and store the normal value in an attribute; you can later promote it to the detail level and read it with the detail() function in an expression. If you want to use it for rotation, use one of the methods for converting a normalized normal vector to a rotation value, depending on the direction your object is facing (if that is what you are after).
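One common normal-to-rotation method, sketched here in plain Python (inside a wrangle, the VEX dihedral() function builds the equivalent rotation for you): form the quaternion that rotates a chosen reference axis onto the normal.

```python
import math

def quat_from_vectors(a, b):
    """Quaternion (x, y, z, w) rotating unit vector a onto unit
    vector b, via the standard cross/dot construction."""
    cx = a[1] * b[2] - a[2] * b[1]
    cy = a[2] * b[0] - a[0] * b[2]
    cz = a[0] * b[1] - a[1] * b[0]
    d = a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    w = 1.0 + d
    if w < 1e-8:
        # a and b are opposite: rotate 180 degrees about an axis
        # perpendicular to a (X is hard-coded here, fine for a Z or
        # Y reference axis but not for X itself)
        return (1.0, 0.0, 0.0, 0.0)
    n = math.sqrt(cx * cx + cy * cy + cz * cz + w * w)
    return (cx / n, cy / n, cz / n, w / n)

# rotate the +Z reference axis to point along a +X normal
q = quat_from_vectors((0.0, 0.0, 1.0), (1.0, 0.0, 0.0))
```

The resulting quaternion can be stored as a point orient attribute (or converted to Euler angles) to drive a transform.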
  15. Did you try remeshing with a bigger value and then testing how your struts behave? You can always use that low-poly sim to drive your high-poly model with a Point Deform; do you know how to set that up? It is rather simple. For such a simple movement you don't need that much detail in your cloth, unless you are making a very old and wrinkled shark.
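The low-res-drives-high-res idea behind the Point Deform SOP can be sketched as a crude nearest-point capture in plain Python (the real node blends a weighted neighbourhood of capture points, so this single-point version is only the concept):

```python
def capture(hi_pts, lo_pts):
    # for each high-res point, find the nearest rest-pose low-res
    # point and remember the offset to it
    caps = []
    for p in hi_pts:
        best = min(range(len(lo_pts)),
                   key=lambda i: sum((p[k] - lo_pts[i][k]) ** 2
                                     for k in range(3)))
        off = tuple(p[k] - lo_pts[best][k] for k in range(3))
        caps.append((best, off))
    return caps

def deform(caps, lo_pts_animated):
    # move each high-res point rigidly with its captured low-res point
    return [tuple(lo_pts_animated[i][k] + off[k] for k in range(3))
            for i, off in caps]

hi = [(0.0, 0.0, 0.1), (1.0, 0.0, 0.1)]          # high-poly rest points
lo_rest = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]      # low-poly rest points
caps = capture(hi, lo_rest)
lo_anim = [(0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]      # simmed low-poly points
hi_deformed = deform(caps, lo_anim)
```

In Houdini you would feed the high-poly mesh, the low-poly rest geometry, and the low-poly sim output into the three inputs of the Point Deform SOP instead.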