
Milan

Members
  • Content count

    43
  • Donations

    0.00 CAD 
  • Joined

  • Last visited

  • Days Won

    2

Milan last won the day on April 6 2015

Milan had the most liked content!

Community Reputation

12 Good

About Milan

  • Rank
    Peon

Contact Methods

  • Website URL
    http://www.mkolar.com
  • Skype
    mkolar.com

Personal Information

  • Name
    Milan
  • Location
    Prague

Recent Profile Visitors

2,135 profile views
  1. Damn. One of those where you bang your head against the wall once you see the solution. Obvious as it is, frankly, I probably wouldn't have noticed the problem without taking a day away from it (which I didn't). Thanks a lot. I'm putting you on my 'Owe a beer to...' list.
  2. Great example. I used part of it for a setup I have for a current job; however, I'm running into a strange issue. I have 2 sets of constraints exactly as in your example, but I can't get my object to break into clusters. It either breaks into small pieces or stays together as one full rigid piece. I'm attaching a simplified version of the file if anyone fancies having a look. What I'm trying to achieve in the bigger picture is a simple object breaking through a surface, cracking it into pieces of different sizes. However, I'm unable to get even simple clustering to work. Any help would be greatly appreciated. The example has a few nodes deactivated (they are normally responsible only for activating pieces within range of the breaker object, to keep the rest from falling down). cracking_example.hip
  3. Hscript Beautifier

    You should be able to add this to Atom's beautify package ( https://atom.io/packages/atom-beautify ). I had a quick look at the code and it should be fairly straightforward.
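    For anyone attempting the port, the core of a beautifier for a block-structured language like Hscript is just keyword tracking. Here is a minimal, hypothetical Python sketch, not the actual plugin code; real Hscript has more constructs than the handful of keywords listed here:

```python
# Hypothetical sketch of an Hscript-style reindenter (not the plugin's
# actual code). Dedent before printing a closer, indent after an opener.
OPENERS = ("if", "for", "foreach", "while")
CLOSERS = ("endif", "endfor", "endforeach", "endwhile")

def beautify(lines, indent="    "):
    out, depth = [], 0
    for raw in lines:
        line = raw.strip()
        word = line.split(" ", 1)[0] if line else ""
        if word in CLOSERS or word == "else":
            depth = max(depth - 1, 0)      # closer sits one level out
        out.append(indent * depth + line)
        if word in OPENERS or word == "else":
            depth += 1                     # body is nested one level in
    return out
```

    A real atom-beautify language plugin would wrap something like this behind the package's beautifier interface; the indentation logic itself stays this simple.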
  4. Export alembic with Python

    Yeah, I found the alembic extensions, but as Oslo mentioned, they're of no use for exporting. It feels like the guys at SESI left the alembic integration at a halfway point, which is a bit of a shame. Creating a rop network with an alembic ROP should of course work, but it feels like an ugly hack for something that I (apparently incorrectly) consider one of the basic functions of the software. Oh well, it can't be Christmas every day.
  5. Export alembic with Python

    I'm either blind or overworked, but I can't for the life of me find any information about exporting alembic data from Houdini via Python, rather than using the alembic ROP. Does anyone have any pointers on whether this is possible in Houdini, or are the Python bindings missing completely? Cheers
  6. execute in main thread with results

    We were integrating http://www.pyblish.com into Houdini. It runs outside of Houdini as an endpoint service. This approach was used in Maya and Nuke, so we were wondering if it was possible to plug it into Houdini in the same fashion, to keep the same structure of the integration. I was trying to just use join(), however it kept freezing my Houdini (which most likely means I was doing something very wrong). Anyway, we got it working thanks to SESI support, and pyblish for Houdini will be out and usable as soon as the code gets a tiny bit of cleanup and gets integrated into the main package. Turns out Houdini has exactly what we were looking for:

        import hdefereval
        hdefereval.executeInMainThreadWithResult(houdini_command)

    The way we got it working is this:

        import hou
        import hdefereval
        import threading

        def houdini_command():
            hou.node('/obj').createNode('geo')

        def worker():
            n = 0
            while n < 5:
                hdefereval.executeInMainThreadWithResult(houdini_command)
                n += 1

        thread = threading.Thread(target=worker)
        thread.daemon = True
        thread.start()

    And this was the attempt with join() which I gave up on:

        import hou
        import hdefereval
        import threading

        def worker():
            n = 0
            while n < 10:
                houdini_command()
                n += 1

        thread = threading.Thread(target=worker)
        thread.daemon = True
        thread.start()
        thread.join()
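    For readers outside Houdini, the pattern hdefereval implements can be emulated in plain Python: a worker thread posts a callable plus a one-slot result queue, and the main thread drains the task queue and posts results back. All names here (run_in_main, main_loop) are my own for illustration, not a Houdini API. It also shows why a bare join() freezes the UI: if the main thread blocks in join(), it can never service the task queue, and the worker waits forever.

```python
# Plain-Python sketch of "execute in main thread with result".
# run_in_main/main_loop are illustrative names, not a Houdini API.
import queue
import threading

_tasks = queue.Queue()

def run_in_main(func, *args):
    """Called from a worker thread: block until the main thread ran func."""
    done = queue.Queue(maxsize=1)
    _tasks.put((func, args, done))
    result, error = done.get()          # wait for the main thread's answer
    if error is not None:
        raise error
    return result

def main_loop():
    """Main thread: drain pending tasks (Houdini does this on UI idle)."""
    while True:
        try:
            func, args, done = _tasks.get_nowait()
        except queue.Empty:
            return
        try:
            done.put((func(*args), None))
        except Exception as exc:
            done.put((None, exc))

results = []

def worker():
    for n in range(5):
        results.append(run_in_main(lambda x: x * 2, n))

t = threading.Thread(target=worker, daemon=True)
t.start()
while t.is_alive():      # stand-in for the host app's event loop
    main_loop()
t.join()
```

    After the loop finishes, results holds [0, 2, 4, 6, 8]. Replacing the while loop with a plain t.join() before any main_loop() call reproduces the deadlock described in the post.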
  7. execute in main thread with results

    Hmm, I thought so too, but it seems to be running constantly (I assume on every UI update), and I'm unable to stop it.
  8. Hey guys. I'm trying to find an equivalent of Maya's and Nuke's executeInMainThreadWithResult helper function. The point is that I'd like to spawn a child thread using Python in Houdini and have it communicate back to the main thread, ideally waiting for the response. Has anyone tried that, and if so, how successful were you? The best explanation of what I need to do is in these 2 pages of the Nuke and Maya docs, which explain it very clearly:
    http://docs.thefoundry.co.uk/nuke/63/pythondevguide/threading.html
    http://knowledge.autodesk.com/support/maya/learn-explore/caas/CloudHelp/cloudhelp/2015/ENU/Maya/files/Python-Python-and-threading-htm.html
    I'm planning to start integrating pyblish (http://www.pyblish.com/) into Houdini, and this would make it fairly straightforward. Cheers
  9. SESI has just fixed the issues with copying animation between rigs, if anyone is interested. Can't wait to give it a shot. Aside from that, I was hoping I wouldn't have to say this, but the latest Maya release has (at least in our eyes) effectively cancelled any thoughts of future animating in Houdini. Fully threaded and GPU-supported rigs + delta mush support out of the box? Oh yes, thank you.
  10. Well, if I exclude facial bones, which are not needed in simple cases, then it's around 100 (fingers + toes alone are 60-80 bones, plus at least one twist bone for each limb part). For a high-end character we'd be looking at much more than that. Even game characters easily reach 100 bones nowadays, so it's really not that crazy. Point groups are a must, I totally agree with that. However, this is what I was trying to say, number of bones aside. A comparison of an identical mesh and skeleton skinned in Maya and Houdini: no painting, no settings tweaks, just select the mesh and bind it. Geodesic Voxel in Maya with default settings, Proximity in Hou (I only tweaked the drop-off a bit). Maya is 90% there; Houdini has 90% left to do. Capture regions are in this case useless, as they never seem to be created at roughly the right size. In this case, they were so small that they didn't catch any mesh at all apart from the fingers. I know this is an extreme case and no one expects to get one-click final results. Spending 8 hours versus 1 hour on cleanup, though, is a massive difference. Plus I simply can't get proper volume preservation in Houdini without extra tricks.
  11. I'm afraid it's not necessarily due to incompetence. Skinning in Houdini is very much behind current standards. That doesn't mean you can't get good results, but it will take a lot of time, workarounds and custom solutions. For instance, I recently needed to rig a simple human mesh exported from the MakeHuman app. I had time, so I tried both Maya and Houdini to get it done. (I only needed skinning, as it was used as a bit of a glorified proxy.) In Maya I had a usable result within 15 minutes (their geodesic voxel binding with dual quaternions is really amazing); in Houdini I stopped after about 3 hours of painting and fixing, because the default weighting is completely unusable most of the time and tweaking 160 capture regions one by one is beyond my patience. (The bones were imported from MakeHuman, and the capture regions Houdini created for them were humongous.) On top of that, without dual quaternions I wouldn't get the same result without extra bones or deformers anyway (it was quite a muscular man, so I was losing a lot of volume). Of course it's possible that I'm just incompetent too (even though I have done extensive Maya rigging in the past and rigged around 10-12 characters in Houdini over the last 9 months). However, if that's the case, then it would only be a point against Houdini, as finding really skilled people is very hard, and if the software makes it easy to achieve good results with less experienced, hence cheaper, people, our producer and I are all for it. To wrap it up: I totally agree that getting good (let's not even mention fast) deformation in Houdini is currently a bit of an uphill battle.
  12. Well, we had a few animators working in Houdini over the last 6 months. None of them had touched it before, and they were a mix of Maya and Max users previously. To my big surprise, the interface was the least of the worries. Actually, not a single person complained about it, because it's so customisable; it just takes a few minutes of setup to make the UI simple enough for animators. The real issues we are having all come from 3 areas. I think this is a very productive discussion now, so I'll list them here. I'm not having a rant at all, just stating our findings so far.

    1. Viewport speed and stability. I had to radically change the way I like to make rigs for animators for them to be usable in terms of speed. A good example is transparent control objects. Animators love them. Being able to grab forearm geometry instead of a bone or null feels very organic, and it's easy to set up. However, it kills the rig speed instantly in H14 (as does transparency on the bones in general). The same goes for skinning and most of the deformers. I wouldn't have animators work with the full heavy deformation rig in Maya either, but for preview it's great to see what the mesh is doing once in a while. Is Maya slow if you turn on heavier meshes? Of course, but Houdini comes to a standstill. Textures: no chance of good speed. We are getting decent speed right now, when we use the simplest proxies and only animate bones and nulls, but put more characters in and you're in for trouble.

    2. Animation reuse. Almost unthinkable right now. Copying keys between scenes: crash. Exporting and importing .chan/.chn files: impossible to get the import back correctly unless you are super precise with scoping the channels. Presets work most of the time, but then your preset list is really, really long after 20-30 shots. (And we'll be outputting 200 a month come next January.) The only way around that right now would be to write a fully custom anim-library kind of thing, but frankly we simply don't have time to deal with that any time soon.

    3. Digital asset workflow. As much as I really love it for most of the pipeline steps, it's very cumbersome for standard animation shot work. Yes, if you have enough time to do really meticulous layout, then it's fine, but on really fast turnarounds it tends to be a mess, and very often you just need to override one or two things in a rig, environment, etc. while still keeping the asset live to get potential updates. This one I don't know how to solve, mainly because after the last few months I have a phobia of updating rigs in the scene to their new version. It's always 50:50 whether it crashes or not.

    And outside of these, it's simply overall stability. We are having, without any exaggeration, 10-20 crashes a day when animating. Whether it's moving keys, working in the animation editor, updating rigs, scrubbing the timeline, you name it. I'm certain many of them come down to wrong drivers, wrong graphics cards (we're all on GTX 770s and 760s), or wrong rigs, but it seems like an awful lot of them anyway. Quite frankly, I'm getting a bit tired of bug reports, simply because it is starting to be very difficult to narrow a problem down enough to post a clear, reproducible bug.

    Edit: just noticed a few meaning-changing typos. Of course I meant I wouldn't have animators work with the heavy deformation rig in Maya. I'm not a monster.
  13. GRANULAR SOLIDS RnD

    Next time I need rubber, I know exactly what to use. It is quite stable if you don't mind the jumpiness.
  14. I absolutely agree with this. Animators hardly touch nodes at all; that would be mayhem. With well-packaged rigs and a tweaked (simplified) animation layout, all the animators who have tried it so far actually enjoyed it a lot. There are some issues of course (reusing animation, viewport speeds etc.), but overall every Maya/Max animator we had try it was animating within a day, without moaning too much. Quite the opposite: they love fetch and blend nodes; it's very easy for them to switch spaces for anything. Modelling is of course completely outside Hou. I don't need a bunch of modellers on antidepressants.
  15. Which FX Houses have Houdini FX TDs?

    Newly set up studio Kredenc in Prague. Currently just 2 seats, but we're evaluating for full use on a long-term, full-CG project, so over the next couple of months we'll probably go up to 3-4 seats. From Jan 2016, around 10-12 seats, or 5-6, depending on whether we switch rigs to Maya for animators. I'll be hunting after summer.