
Mcronin

Members
  • Content count: 587
  • Donations: 0.00 CAD
  • Joined
  • Last visited

Community Reputation

0 Neutral

About Mcronin

  • Rank: Houdini Master
  • Birthday 03/30/1973
  1. Houdini 10 Wish List

    This got me thinking. It seems to me all this comes down to transferring attributes. I've used Massive, and honestly, for all the fuzzy logic and brain talk, it seems more complicated than it needs to be. The attributes on an agent, each of which can be represented as a value from 0 to 1, together make up the state of the agent's mind and determine what course of action the agent will take. To create this state, the agent needs to be aware of the state of the objects around it by proximity.

    You could set up a reasonable version of this without programming or writing a ray marcher, using something like the Attribute Transfer SOP, but it would be a laborious process. Maybe a suggestion for a new general-purpose tool to aid in doing something like this is an "attribute aggregator" SOP and/or POP. It would work like attribute transfer, but have an interface like the Blend Shapes SOP, with a bit of the Merge DOP.

    I see it like this: you can specify many inputs. Each input has broadcast and receiver areas and functions. The area can be omnidirectional or directional, akin to a spotlight; maybe the user could even specify bounding geometry for the area min/max. The function is a user-selectable falloff. The user can set which of each input's attributes are broadcast, and which attributes the input should listen for. As the sim runs, the inputs aggregate each other's attributes based on proximity, falloff, and their broadcast and reception areas.

    Anyway, it seems like a simple idea that could go a long way towards getting you Massive-like functionality, and a tool like this could have many, many other uses.
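    Roughly what I'm picturing, as a standalone Python sketch (the Agent class, attribute names, and linear falloff are just placeholders for whatever the real SOP/POP would expose):

```python
import math

class Agent:
    """Toy agent with a position and a dict of 0-1 attributes."""
    def __init__(self, pos, attrs, broadcast_radius=5.0):
        self.pos = pos                       # (x, y, z)
        self.attrs = dict(attrs)             # e.g. {"fear": 0.2}
        self.broadcast_radius = broadcast_radius

def linear_falloff(dist, radius):
    """1.0 at the source, fading to 0.0 at the edge of the broadcast area."""
    return max(0.0, 1.0 - dist / radius) if radius > 0 else 0.0

def aggregate(agents, listen_keys):
    """One aggregator pass: every agent accumulates the broadcast
    attributes of every other agent, weighted by falloff."""
    updates = []
    for a in agents:
        new_attrs = dict(a.attrs)
        for b in agents:
            if a is b:
                continue
            w = linear_falloff(math.dist(a.pos, b.pos), b.broadcast_radius)
            if w <= 0.0:
                continue
            for key in listen_keys:
                if key in b.attrs:
                    # blend toward the broadcast value, clamped to 0-1
                    new_attrs[key] = min(1.0, new_attrs.get(key, 0.0) + w * b.attrs[key])
        updates.append(new_attrs)
    # apply after the loop so the result is order-independent
    for a, attrs in zip(agents, updates):
        a.attrs = attrs

# usage: a calm agent picks up fear from a frightened neighbour by proximity
crowd = [Agent((0, 0, 0), {"fear": 1.0}), Agent((2, 0, 0), {"fear": 0.0})]
aggregate(crowd, ["fear"])
print(crowd[1].attrs["fear"])
```

    Directional broadcast/reception areas and bounding geometry would just slot in as extra weights on top of the falloff; the per-frame pass itself stays the same.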
  2. Where Are You Now?

    I'm in Portland, OR. I think we'll have enough Houdini users here soon to form a pretty impressive user group.
  3. Houdini 10 Wish List

    I'm going to be echoing a lot of what Jason asked for, I think.

    * Authoring of realtime shaders via VOPs. I would like to be able to spit out code for GLSL and HLSL, and to implement fragment shaders in COPs. How about making some COPs take advantage of the GPU? Screw it, let me apply fragment shaders to the viewport or camera as well. Maybe this is interesting: a camera that can render multiple takes to texture storage on the GPU, with those images then composited in real time by a fragment shader in the viewport. How freaking cool would that be? I think very.
    * Proper vertex normal support. We are running into this at work now. Quite annoying, actually, as Maya seems to always want to create vertex normals.
    * Improve viewport picking and handles. Maybe look at how picking works in Silo as an example.
    * Image Paint SOP, or perhaps a whole new TOPs context (maybe it could be part of COPs, with better COPs integration). I can envision building an image with a network of layers, filters, curves, fonts, paint operations, etc., the same way you build anything else in Houdini, and it makes me smile.
    * Threading for DOPs, especially fluids.
    * Relighting/interactive lighting. I've seen several attempts at using COPs for relighting; maybe we can do better. NVIDIA demoed some really impressive relighting and interactive lighting tools for Gelato at the last couple of SIGGRAPHs; maybe look to them for inspiration.
    * HDK: is there anything that can be done to make it more accessible to mere mortals, and turn it into a real API? Python is great, but there are some things it's never going to do well, such as building geometry.
    * A good brush tool, something that can be used to comb longer hair and behaves the way an artist would expect. The current comb tool rotates normals from their root. A brush needs to drag multi-segment hair from the tip in screen space and solve in an IK-like fashion (a rough sketch of what I mean is below).
    * Improve stability.
    * More filters on the geometry spreadsheet (filter by group name, filter by datatype).
    * Add support for tangents and bi-normals to the Point SOP.
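    For the hair brush item, this is roughly the tip-drag behaviour I mean, as a standalone Python sketch (the function name and the FABRIK-style relaxation are just one way to illustrate it, not how SESI would have to implement it):

```python
import math

def drag_from_tip(points, target, iterations=10):
    """Drag a multi-segment strand by its tip toward `target`, preserving
    segment lengths; the root stays pinned (FABRIK-style relaxation)."""
    lengths = [math.dist(points[i], points[i + 1]) for i in range(len(points) - 1)]
    root = list(points[0])
    pts = [list(p) for p in points]
    for _ in range(iterations):
        # backward pass: pin the tip to the target and pull the rest after it
        pts[-1] = list(target)
        for i in range(len(pts) - 2, -1, -1):
            d = math.dist(pts[i], pts[i + 1])
            t = lengths[i] / d if d > 0 else 0.0
            pts[i] = [pts[i + 1][k] + (pts[i][k] - pts[i + 1][k]) * t for k in range(3)]
        # forward pass: re-pin the root and push the chain back out
        pts[0] = root[:]
        for i in range(len(pts) - 1):
            d = math.dist(pts[i], pts[i + 1])
            t = lengths[i] / d if d > 0 else 0.0
            pts[i + 1] = [pts[i][k] + (pts[i + 1][k] - pts[i][k]) * t for k in range(3)]
    return pts

# usage: a straight four-point strand combed sideways by its tip
strand = [(0, 0, 0), (0, 1, 0), (0, 2, 0), (0, 3, 0)]
print(drag_from_tip(strand, target=(1.5, 2.5, 0)))
```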
  4. Production Issues

    Are you pulling something directly from COPs into your scene? Maybe you could take advantage of render dependencies and render out the COP network to disk and reference the files instead.
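    If you do go the render-to-disk route, it only takes a couple of lines of HOM. A rough sketch (the Composite ROP's node type name and parm names here are from memory, so double-check them against your build):

```python
# Bake a COP network to disk with a Composite ROP, then reference the
# rendered files instead of pulling the live COP net into the scene.
# "comp", "coppath" and "copoutput" are assumptions -- verify the actual
# node type and parameter names on the Composite ROP in your version.
import hou

rop = hou.node("/out").createNode("comp", "bake_cops")
rop.parm("coppath").set("/img/comp1/OUT")              # COP node to render
rop.parm("copoutput").set("$HIP/render/bake.$F4.exr")  # files to write
rop.render(frame_range=(1, 100))

# downstream, textures can point at $HIP/render/bake.$F4.exr
# rather than op:/img/comp1/OUT
```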
  5. Good work. I really like the video.
  6. Quest 3d

    I do not like Torque either. I use XNA. It's basically the replacement for Managed DirectX, but it eliminates most of the BS you have to deal with when using DirectX and the Windows API directly, while still allowing you to create some really high-performance applications. You have pretty much unrestricted access to the graphics hardware with XNA. It's based around C# and .NET, which I have grown to love. There's a Torque XNA engine, but I find working with XNA easy enough that there's no need for an engine on top of it.
  7. Quest 3d

    I used it years ago. It is great; the problem, at least when I used it, is that it was surprisingly inflexible. Though it uses nodes so users can program visually, I couldn't do anything procedural with it. I couldn't construct geometry or do anything other than simple transformations, and I was limited to its existing features and shaders (there were only two basic shaders at the time). There was no way to extend it. Any complex geometry or animation had to come from outside the software, usually in the form of an X file. This all may be different now. Like I said, this was years ago.
  8. I should've guessed. They've been doing some great stuff lately.
  9. This is great. I can't get this song out of my head. Shame the sound and video don't seem to sync up on my computer. Anyone know who's responsible? http://www.youtube.com/watch?v=kJEacTZmd7I
  10. I tried writing an output driver in older versions of Houdini, and it proved extremely difficult for me given my lack of understanding of the HDK, and I never finished it. What I ended up doing was writing all my geometry out to .geo files, then writing a standalone converter for the geo format. Doing it this way you can avoid using the HDK altogether, and it was surprisingly easy to do. I wrote one version in Perl, one in C++, and one in C#. The C# version was by far the easiest and probably the quickest, because the language has a bunch of nice built-in methods for sorting and searching data, regular expressions, and generics, which I found very useful. For the C++ version I had to rely on the STL and the Boost libraries for certain things, which made it a bit more difficult. The Perl version was the slowest, and on top of that I just don't like Perl. When you do move to 9, you can easily put something together using Python and HOM. I have a working exporter, encapsulated in an HDA for Houdini 9, that writes geometry and some additional scene information from Houdini directly to a couple of different formats. It was really easy to create and it's very fast.
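    If it helps anyone, the HOM traversal side really is tiny. A stripped-down sketch of that kind of exporter (the output format here is made up, and the calls shown are today's HOM names, which may differ slightly in older builds):

```python
# Bare-bones HOM exporter: dump point positions (and normals, if present)
# from a SOP to a simple whitespace text format. The format is invented;
# the point is how little code the geometry traversal takes.
import hou

def export_points(sop_path, out_path):
    geo = hou.node(sop_path).geometry()
    n_attrib = geo.findPointAttrib("N")
    with open(out_path, "w") as f:
        f.write("points %d\n" % len(geo.points()))
        for pt in geo.points():
            x, y, z = pt.position()
            line = "%g %g %g" % (x, y, z)
            if n_attrib is not None:
                nx, ny, nz = pt.attribValue("N")
                line += " %g %g %g" % (nx, ny, nz)
            f.write(line + "\n")

export_points("/obj/geo1/file1", "/tmp/points.txt")
```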
  11. Converting Renderings To Imax

    So, potentially, we are talking well in excess of 1 million dollars for a single finished IMAX print of a 90-minute feature. I'm still not at all surprised.
  12. Houdini La Meeting

    There was an LA user group a while back. Did you try asking on the mailing list or contacting Side FX to see if it was still active?
  13. Converting Renderings To Imax

    When people say IMAX I always think 3D. Yeah, anyone can do stereo 3D, and you can get film printed at any number of places, but if we are speaking specifically about IMAX theaters or IMAX 3D, you are probably going to have to contact the IMAX company. IMAX owns the process and the theaters; they won't let anyone throw just anything in their theaters, and they do all the scheduling themselves. I can tell you this: creating a single 35mm film print is expensive. A friend of mine shot a student film on 35mm. It was about 15 minutes long and it cost him almost $20,000 US to get a single print made. I once destroyed a 35mm print of Dracula by misthreading a projector; that print was worth over $110,000. 70mm is much, much more expensive. Just guessing here, but I would not be at all shocked to hear that a single 90-minute 70mm print costs a quarter of a million dollars US.
  14. Converting Renderings To Imax

    It'll be very, very expensive. The IMAX company can do it, and as far as I know they are the only company you can go through to produce the final film for an IMAX theater. If all you have are images (single camera, no interocular), you will either need to re-render everything with an interocular camera setup, or you will have to pass the images off to the IMAX folks so they can extract layers to fake a dual-camera setup. It's a ridiculously expensive process. The film they use is 70mm, I believe, and they are the only ones who print it; I've heard it takes them a whole day just to create a single print, so they can only make one print a day, and they are the only ones who distribute movies to IMAX theaters (the theaters are scheduled years in advance). So, if you seriously have something that can play in an IMAX theater, you are going to need to talk to the IMAX company.