
pclaes

Members
  • Content count

    807
  • Donations

    0.00 CAD 
  • Joined

  • Last visited

  • Days Won

    48

pclaes last won the day on August 21

pclaes had the most liked content!

Community Reputation

254 Excellent

About pclaes

  • Rank
    Initiate
  • Birthday 07/13/1985

Contact Methods

  • Website URL
    https://www.linkedin.com/in/peter-claes-10a4854

Personal Information

  • Name
    Peter
  • Location
    The Mill, Los Angeles
  • Interests
    houdini, fluids, particles, dynamics, shaders, procedural animation, lighting and rendering, VR.

Recent Profile Visitors

15,873 profile views
  1. Houdini 18 Wishlist

    Small one, but would be nice to have: it would be great if the blackbody node worked on points in an attribute vop rather than only on fields/voxels in a shader (perhaps this is already possible and I'm missing how to do it). With Arnold you can use a blackbody to shade all kinds of things (points, polygons, voxels) and I like the way it ramps the colors. It would be great to have blackbody available in the same way a regular 'ramp' is available.
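    In the meantime, a rough workaround is possible, sketched below under my own assumptions: a Python SOP that writes Cd per point by sampling Planck's law at three wavelengths from a 'temperature' point attribute. The attribute name and sample wavelengths are placeholders, and there is no proper CIE colour matching, so this only approximates the ramp the shader node gives you.

        # Hypothetical Python SOP: approximate a blackbody colour per point from a
        # 'temperature' attribute by sampling Planck's law at three wavelengths.
        # Not physically exact (no CIE colour matching), just a rough ramp.
        import math
        import hou

        node = hou.pwd()
        geo = node.geometry()

        H = 6.626e-34   # Planck constant
        C = 2.998e8     # speed of light
        K = 1.381e-23   # Boltzmann constant
        WAVELENGTHS = (610e-9, 549e-9, 468e-9)  # rough R, G, B sample points

        def planck(wavelength, temp):
            # Spectral radiance of an ideal blackbody at this wavelength and temperature.
            return (2.0 * H * C ** 2 / wavelength ** 5) / (math.exp(H * C / (wavelength * K * temp)) - 1.0)

        cd = geo.findPointAttrib("Cd") or geo.addAttrib(hou.attribType.Point, "Cd", (1.0, 1.0, 1.0))
        temp_attrib = geo.findPointAttrib("temperature")  # assumes this float attribute exists on incoming points

        for pt in geo.points():
            t = max(pt.attribValue(temp_attrib), 100.0)   # clamp to avoid overflow at tiny temperatures
            rgb = [planck(w, t) for w in WAVELENGTHS]
            peak = max(rgb)
            pt.setAttribValue(cd, tuple(c / peak for c in rgb))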
  2. Not that recent, but excellent execution:
  3. Houdini Growth Masterclass

    Whoops, should be June 3rd - I've updated the original post. Basically 7 days from today.
  4. Houdini Growth Masterclass

    Hey,

    On June 3rd I will be teaching an advanced Houdini masterclass at the Effects America conference in Montreal. I will be covering growth systems. Should be good fun: https://www.effects-events.com/en/master-classes/

    Description: During this advanced Houdini masterclass you will learn how to create an art-directable growth system. Digitally constructing things can be as challenging as destruction, if not more so, and this class focuses on the former. The class is split into two main sections. The first section dives into building the growth solver prototype tool. This covers solvers, some vector math, chaos theory, 2d & 3d growth, custom forces and tool development. The second section dives into using the tools to grow a 2d and a 3d pattern that is procedurally animated and prepared for rendering. This covers pathfinding, procedural animation, combining 2d and 3d patterns, custom attributes and aovs/render passes for comp.

    Take away: Understand the algorithm and concepts for building a growth solver. Build a user-friendly and efficient tool that can scale from a small single growth to growing large datasets for entire vfx sequences. Understand and make use of Houdini’s data acceleration structures. Gain insight into the art-direction and approval process for both the grown pattern and the procedural animation.

    The audience: This course is intended for intermediate to advanced Houdini users. Users should have a working understanding of the Houdini interface and overall data flow (contexts, attributes, datatypes). Houdini Apprentice can be used for this class.

    Hope to see you there, or perhaps at the conference,
    Peter
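    As a taste of the kind of loop such a solver runs each step (this is not the class tool, just a self-contained toy under my own assumptions): pick an existing point, step in a random direction, and keep the new point only if it does not crowd the pattern.

        # Toy 2d growth step, illustrative only: a real growth tool adds custom
        # forces, infection/age attributes and acceleration structures on top.
        import math
        import random

        points = [(0.0, 0.0)]   # seed of the pattern
        step = 0.1              # growth distance per iteration
        min_spacing = 0.08      # reject new points that crowd existing ones

        def grow_once():
            # Pick a random existing point, step in a random direction, and keep
            # the new point only if it stays clear of the current pattern.
            px, py = random.choice(points)
            angle = random.uniform(0.0, 2.0 * math.pi)
            nx, ny = px + step * math.cos(angle), py + step * math.sin(angle)
            if any(math.hypot(nx - qx, ny - qy) < min_spacing for qx, qy in points):
                return False
            points.append((nx, ny))
            return True

        for _ in range(2000):
            grow_once()
        print(len(points), "points grown")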
  5. Houdini or Katana

    Since you specifically mention Arnold, I would also recommend going with Houdini. The HtoA implementation is quite good. This became more feasible with the introduction of the alembic rendertime procedural for Arnold over the past couple of months. Combined with Arnold Operators you can make a lot of changes 'at rendertime' without having to load the heavy data into the scene description file. For certain things there are limitations (no pointcloud reading or volumesample support yet for shaders at rendertime), but you can use mantra for that.

    Also, if you are forward looking, USD and the LOPs context will be huge. Already out now is the TOPS context, which can trigger huge dependency trees and spit out final rendered (potentially even slap-comped) frames. The customizability, extensibility and stability of Houdini, combined with a company that truly listens to and supports its clients, is unmatched by other vendors in my opinion.

    In regards to being able to handle huge assets, this mostly comes down to how you optimize and organize your scene assembly, e.g. understanding and making heavy use of instancing, packed disk primitives, rendertime procedurals, level of detail, mipmapping, and rat/tx conversion. The kind of control you have over instancing combined with Arnold Operators is quite powerful. Mantra is not the fastest renderer, but it is robust and can handle lots of custom scenarios. So in regards to handling large amounts of data, Houdini is great at that.
  6. Compress vdbs

    Before going into rendering we tend to compress all volumes to 16-bit. There is no noticeable difference for rendering and it gives almost a 50% space saving. The only thing to really be careful with on vdbs is pruning rest volumes (they really should not be pruned, as 'zero' is a valid rest value). During the simulation you want the full 32-bit, as that accuracy is needed for the fluid solve, but once you are done with the sim (and any post processing) you can go down to 16-bit.
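    A quick back-of-the-envelope check of that saving (the grid size is just an example, and sparse vdbs store far less than a dense grid, but the ratio between the two precisions is the same):

        # Raw voxel storage for a dense 400^3 grid at 32-bit vs 16-bit floats.
        voxels = 400 ** 3
        full_gb = voxels * 4 / 1024 ** 3   # 32-bit float = 4 bytes per voxel
        half_gb = voxels * 2 / 1024 ** 3   # 16-bit float = 2 bytes per voxel
        print(f"32-bit: {full_gb:.2f} GiB, 16-bit: {half_gb:.2f} GiB")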
  7. Arnold Instance Offset Volumes

    Hey Max,

    So without access to .ass files I don't know if what you are trying to do is possible (there are always hacks, but let's get into that later). Instancing in HtoA generally requires .ass files for your instancefile attribute. Your example would then be more like:

    D:/Gnomon/.../geo/my_archive.1001.ass
    D:/Gnomon/.../geo/my_archive.1002.ass
    D:/Gnomon/.../geo/my_archive.1003.ass

    So what goes into the .ass file and how do you make it?

    *) In the object level context, drop down an 'Arnold Volume' object, put the path to your vdb sequence inside of it, and specify the relevant grids. The 'Arnold Volume' is a procedural that allows Arnold to render the vdb at rendertime. The voxel data will not be stored in the .ass file; the Arnold Volume, which points to the vdb files, will be stored in the .ass files. So basically the 'Arnold Volume' object is a wrapper for the .vdb sequence. Also at this point you want to assign an Arnold standard_volume to the 'Arnold Volume' object. Your filename inside of the Arnold Volume would look like: D:/Gnomon/.../geo/my_volume.$F.vdb

    *) Next you have to create the archive that will be loaded onto the instance points by Arnold. You can use an Arnold node in the rop context and put your object level 'Arnold Volume' as a Force Object in your rop. Then under Archive, turn on 'Export ASS File' and for the checkboxes, turn on 'Binary encoding', 'Export Shapes' and 'Export Shaders' - turn off the other checkboxes. Note that the shader is included, this is important. Write out your sequence of .ass files.

    *) Next, at the object level, make use of a Houdini instance object. Inside the instance object, have a single file read node that reads the points that carry the 'instancefile' attribute. Don't use an object_merge - I've had issues with this. Also do not assign a volume material to the object level Houdini instance node - again, I had problems with this. You should now be able to render the various instanced volumes.

    If you don't have access to writing out .ass files... then you will need to hack it:

    *) You can create 100 'Arnold Volume' objects, each pointing to a different frame of your volume sequence (probably best to do this in a subnet, use opdigits).
    *) Then use the instancepath to point to the various 'Arnold Volume' objects.

    Good luck and let me know if you get stuck. I don't have Indie so I don't have those limitations. I'm surprised Indie does not let you write out .ass files, because that is what you would need to render on a farm.

    Peter
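    For building the instancefile attribute itself, here is a minimal Python SOP sketch. The archive location, start frame and the modulo-100 offset are my own placeholders - substitute your own archive naming.

        # Build a per-point 'instancefile' string attribute so each instance
        # point loads one frame of the exported .ass archive sequence.
        import hou

        node = hou.pwd()
        geo = node.geometry()

        archive_dir = hou.expandString("$HIP") + "/geo"          # assumed location
        archive_pattern = archive_dir + "/my_archive.%04d.ass"   # assumed naming
        start_frame = 1001

        attrib = geo.findPointAttrib("instancefile")
        if attrib is None:
            attrib = geo.addAttrib(hou.attribType.Point, "instancefile", "")

        for pt in geo.points():
            # Offset by point number so neighbouring instances show different
            # frames of the volume sequence (wraps after 100 frames).
            frame = start_frame + (pt.number() % 100)
            pt.setAttribValue(attrib, archive_pattern % frame)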
  8. Why no beer tutorial?

    I'm sure if you ask Johnny nicely he can put a file together for you - and/or donate to his patreon: https://www.patreon.com/Farmfield
  9. Random link of interest

    Congrats to Sesi! https://www.oscars.org/news/10-scientific-and-technical-achievements-be-honored-academy-awards
  10. Advecting Pyro with Object (Help)

    Hey Cristina,

    So in regards to your question about masking the turbulence field: I've attached a scene file showing how to mask a turbulence field. In the example I've built there are a few steps to get this working:

    1) In dops you can use the 'gas match field' node to create a new field ('turbmask_custom') based on an existing field. In this case I first want to build the field that is going to be used for masking (this is the initialization of the variable).
    2) Then I need to put a value inside the voxels of the newly created field, so I am using a source volume for this, but in the sop to dop bindings the density field will put its value in the 'turbmask_custom' field instead of adding it to density.
    3) The next step is to add the gasturbulence node to create a noise field, but in the control field tab we can specify the mask. So that is where I put 'turbmask_custom' and also set the control influence to 1 so 100% of my field is used.

    In the case of your skull, ultimately you need a fog volume density field that contains those masking values. In the file I've added a sphere to represent your skull. I first build an SDF from the sphere geo, then dilate the sdf, then turn it into a fog volume, then rename the field so it is not named 'surface' but instead is named 'density', so my sourcing in dops will be able to find the density field. I've also turned on the velocity visualization in the pyro object so you can see what the velocity field is doing (especially visible if you turn the control field on the gas turbulence on/off).

    You can get much more fancy by using dynamic fields that change over time (like temperature or heat, or some other custom field), but this should provide a good base example of masking a field in dops.

    Good luck and definitely post back the results of your project when you're done with it!

    dop_masking_v001.hip
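    The rename step can be a simple Name SOP; as an alternative sketch, a small Python SOP like the one below works too (it assumes the fog volume arrives with a 'name' primitive attribute set to 'surface', as in the file above).

        # Rename the fog volume primitive from 'surface' to 'density' so the
        # source volume binding in dops can find it.
        import hou

        node = hou.pwd()
        geo = node.geometry()

        # The fog volume converted from the SDF typically comes in named 'surface'.
        name_attrib = geo.findPrimAttrib("name")
        if name_attrib is not None:
            for prim in geo.prims():
                if prim.attribValue("name") == "surface":
                    prim.setAttribValue("name", "density")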
  11. Custom PC build advice

    I recently built a $3k workstation with this in it:

    - i7 6850 (6 cores / 12 threads, but high clock speed and potential for overclocking)
    - gtx1080ti (11Gb video ram - evga founders edition)
    - samsung 500Gb M.2 (for fast caching, a bit pricey but this thing is really fast)
    - samsung 1tb solid state (intermediate speed)
    - seagate 4tb (data)
    - quiet base case (very quiet - big case so I have room to expand)
    - noctua NHD15 (cpu cooler - air based)
    - 64 gb ram

    I come from a dual xeon background - and you could get some older xeon chips on ebay as an alternative, but for more single-threaded things the i7 6850 hits a sweet spot in terms of number of procs vs clock speed. I also think that in the future the industry will move towards a gpu rendering approach, if not a real-time (unreal engine) approach.

    Depends a bit on your needs; for me this workstation is aimed mostly at development work, not so much at rendering sequences... If I need to render really heavy final sequences I would do that on a farm (gridmarkets or at work, depending on what I am doing).
  12. TD mentorship program

    I've followed Allan for a long time and he's got good tips and general effects advice, and he has a lot of experience creating high-end effects work. But his main tool is 3dsmax, so if you want to become an expert at that it might be worth it.

    If you want to learn Houdini at the high end, my vote would go to the CG Circuit series as well as Matt Estela's Tokeru, mixed in with a bit of Entagma. Also on the main sidefx website you can go through the masterclasses too: https://sidefx.com/tutorials/?title=&user=&categories=&level=2&version=&paid=

    If you are near LA you could come take my Houdini class at Gnomon. My class is an advanced Houdini class; I used to teach the beginner and then the intermediate levels as well, but now I only teach the advanced level. My class goes into some of the underlying computer graphics principles and data management as well. Over the last 3 years I've taught around 30-40 new junior Houdini artists. The majority of them are working as fx artists in the LA area. The main reason why I started teaching was to create a Houdini talent pool in LA that I could draw from as an fx supervisor, and also to teach the knowledge that I think the students should know and that prepares them for industry. The side effect was that a bunch of other studios also gained access to more junior Houdini talent and that some of the more traditional Maya or Max houses started to adopt Houdini in their fx pipeline. So by the time I need more mid-level talent, the original students have had a bit more experience at a variety of studios and then they can move around. Also, I have directly hired students from my own course as production needs arose. Some of my students also became assistant technical directors or show technical directors because they gained a deep understanding of cg.

    This is the curriculum (10 weeks total, 3 hours per class; I think Gnomon charges students around $2k - so not cheap either):

    Class 1: Lightning bolts setup, tool building, custom mask for comp, case study.
    Class 2: Clustering techniques for efficiently processing large volumetric data sets and breaking down complexity.
    Class 3: Growth systems 1: growing patterns, tool building, chaos theory, feedback systems, growth behaviors, 2d & 3d growth, reconnecting branches.
    Class 4: Growth systems 2: procedural animation, tool building, pathfinding, custom masks for comp.
    Class 5: Art-directed destruction 1: rigid body dynamics, fracture patterns, constraint networks, case study.
    Class 6: Art-directed destruction 2: constraint networks, hero pieces flying at camera, secondary simulations.
    Class 7: Art-directed destruction 3: destroying a production asset and bringing all elements together.
    Class 8: Volumes 1: liquid explosion - pyro sim sourced from a flip simulation, detailing sims and shader, fractals.
    Class 9: Volumes 2: nebula, tornado, cloud puff, portal, and custom volumetric solver.
    Class 10: Controlling data through multiple phase changes - rigid, melting, liquid, evaporation, rigid.

    Since you are in Canada, you might also want to consider Andrew Lowell's program at Lost Boys - I've heard good things about it and Andrew is a long-time Houdini user as well: https://lostboys-studios.com/effects-technical-director-fx-td-program/

    I think overall there are a lot of good resources available (mostly for free or for less than ~$500 online); it will mostly cost you time. There are no short-cuts.

    For me the in-person teaching experience is something I enjoy as well, as you can instantly gauge a student's level of understanding and expand on where they need more clarification, or sometimes expand on a tangent they are interested in. Also, a physical teacher will generally have good ties with industry and can help you build connections. I recommend each of my students to be active on the forums, post to vimeo and share some of their hip files so they can get to know the community and the community gets to know them. I just provide a stepping stone. Good luck!
  13. Bubble burst effect

    This would probably require a two-stage effect: 1) hiding the bubble mesh (in a shader), and 2) emitting liquid particles. Both should be driven by a similar reveal mask.

    You can create the reveal mask by calculating the cost using a find shortest path node. I would probably duplicate the shortest path node so I get two sets of geometry out of it. The first is the original bubble geo with the cost attribute - this can then be remapped to create a black and white mask that animates the hiding in the shader. The second is a set of curves that create a growth pattern similar to a bunch of lightning bolts / branching curves (I think the Entagma guys might have done a tutorial on this). Then you can use those curves as your emission geometry for your particles (probably using a blend between the surface normal and the curve tangent as the emission direction), and use the black and white mask you are using for your shader to trigger the emission of the particles. Good luck!
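    For the remap itself, a minimal Python SOP sketch: the 'cost' attribute comes from the find shortest path output, while the reveal speed and the 'mask' attribute name are my own placeholders.

        # Turn the shortest-path 'cost' into a 0-1 reveal mask: points whose
        # cost is below an animated threshold are considered revealed.
        import hou

        node = hou.pwd()
        geo = node.geometry()

        cost = geo.findPointAttrib("cost")   # assumes the cost attribute is present
        mask = geo.findPointAttrib("mask")
        if mask is None:
            mask = geo.addAttrib(hou.attribType.Point, "mask", 0.0)

        reveal = hou.frame() * 0.1   # how far the reveal has progressed, in cost units

        for pt in geo.points():
            pt.setAttribValue(mask, 1.0 if pt.attribValue(cost) <= reveal else 0.0)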
  14. Car Crash

    Hey Matt, pretty much what Joao said. You are currently smashing a static rbd object into a bunch of active rbd objects. You need to smash active rbd into active rbd. Also there is an old Sidefx masterclass (on cloth I believe) where they smash/deform a car into a pole that you might want to take a look at. That can help show you how you can deform the car during the collision. You could even try a hybrid approach where you fracture your car, link all the pieces with hinge and spring constraints and then use the resulting simmed pieces to deform the original mesh. That might be easier because you are sticking to bullet-bullet interaction instead of trying to do multiple solvers interacting (cloth/rbd). If you do want to go with a more deformation based approach, you can update your car collision geometry with a sopsolver: http://ihoudini.blogspot.com/2011/07/deforming-rbds.html Good luck and welcome to Odforce!
  15. Vray for Houdini Alpha

    Hey Luke,

    As far as I could tell, you have to take care of farm submission by yourself. There are no 'submit to hqueue' vray-specific nodes. So basically your farm should be able to handle running a shell command. That's why I used shell rops. From the shell rop you can call the vray commands: the first creates the vrscene (which is very similar to houdini baking the scene into an ifd), and the second calls the vray command for standalone rendering.

    - You only need houdini (an engine/batch license on the farm) for the first step: the generation of the vrscene.
    - For the second step, you probably don't want to trigger that through houdini, because otherwise it will consume a batch license as well as a vray license, which is inefficient.

    If you have more specific questions or issues I would suggest posting in the github area, as the developers or other users might be able to help you out further. I have been out of that development for the last 6 months, so maybe they have updated things since then.

    I think you might try asking this question over at the Sidefx hqueue forum too, because it is relevant to launching any program through hqueue - not just mantra. And some people are launching nuke fxcomps through hqueue, so launching the vray command should be possible.

    Good luck!