
Thank you for all your feedback! It is much appreciated, and I look forward to continuing to create cool fx and to being part of the Houdini community.

We'll see what happens next. The whole visa/residency/citizenship thing is always a little tricky and takes time to get sorted, but I'm quite optimistic something can be arranged one way or another that allows me to continue doing what I love to do.


  • 2 weeks later...

AhmedSaady, on 15 Apr 2014 - 12:39 AM, said:

Can you please tell us how to achieve the Thern aggregation effect from John Carter?

Hi Ahmed,

The Thern aggregation effect was quite a substantial effect. I can try to break it down a bit, but there were many components to it.

So here goes:

look:

- The entire effect - both the animation and the environment - was built out of curves: tiny curves with tube shaders, which make them look like branches (see the small thickness sketch right after these bullets).

- Other approaches were tried out, but eventually rejected because of lack of detail, lack of control or a huge memory footprint.
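
To give a very rough idea of the tube look (this is not the production setup, just a minimal sketch): mantra reads a "width" point attribute when rendering curves, so after a resample sop that writes a 0-1 curveu attribute, a small point wrangle can taper each branch towards its tip. The base_width channel name is made up.

    // Point wrangle, after a Resample SOP with "Curve U Attribute" enabled.
    // Mantra reads the "width" point attribute when rendering curves as tubes,
    // so tapering it along curveu makes each branch thin out towards the tip.
    float base = chf("base_width");              // made-up channel, e.g. 0.002
    f@width = base * fit01(f@curveu, 1.0, 0.15);

With millions of curves, that one attribute does a lot of the work of making them read as branches rather than flat lines.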

modeling:

*) fractal nature:

- the thern had a fractal quality to it, meaning that the same branching pattern happens at the large scale as well as at the smaller scales.

- the large scale defines the global structure, composition and sense of space.

- the larger scale branches are in turn made up of smaller scale branches.

- the large scale branches are not equally large everywhere; in some areas they are denser and more adaptive (the floor of the staircase vs. the walls - wherever the actors walk, the growth is denser).

*) chaos theory:

- the curves were procedurally grown. There were quite a few parameters to control how it grew, and one of my colleagues at the time (Coen Klosters) created the growth system that ultimately led to the final look.

- to use that system, you need to understand it and think in terms of feedback loops and chaos theory. At the time I was learning a lot about these things and sometimes struggled with it, as I was used to defining and controlling exact shapes exactly how I wanted. With chaos theory you build the rules, and the complexity comes from the feedback and the fuzzy logic within the system. When you understand that, it changes the way you look at effects and is applicable in a huge number of scenarios - and it is desirable to lose some of that control to gain all that rich detail.

- this system was much like a smart particle system: the branches could avoid each other for a while before reconnecting to already grown branches.

- you could grow multiple generations, either one generation at a time, or multiple generations all at once. Both gave different looks.

- the particles had basic intelligence with regard to what they could 'see' (a dot product lookup in a pointcloud vopsop to define the field of view - typical for flocking setups as well; see the sketch right after this list).

- growing in a 2d plane forces a lot more collisions and connections than growing in full 3d. So sometimes a part of the structure was grown in 2d (and creeped/deformed into a different shape) and other times it was grown in full 3d, using volumes to define the area where it was allowed to grow.
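
To make that 'field of view' idea a bit more concrete, here is a minimal sketch in a point wrangle of the kind of dot product lookup the vopsop was doing. The production system was node based and far more involved; the v@dir attribute, the channel names and the second input holding the already grown structure are all assumptions for the example.

    // Point wrangle over the growing tip points; input 1 = the already grown
    // structure. Only points inside a cone in front of the growth direction
    // (assumed to be stored in v@dir) count as 'visible', which is the same
    // dot product trick you find in flocking setups.
    float radius = chf("search_radius");
    float fov    = radians(chf("fov_degrees"));   // half-angle of the vision cone
    int   maxpts = chi("max_points");

    int seen[];                                   // point numbers we can 'see'
    int handle = pcopen(1, "P", v@P, radius, maxpts);
    while (pciterate(handle))
    {
        vector pos;
        pcimport(handle, "P", pos);

        vector to_other = normalize(pos - v@P);
        if (dot(to_other, normalize(v@dir)) > cos(fov))
        {
            int pt;
            pcimport(handle, "point.number", pt);
            append(seen, pt);
        }
    }
    pcclose(handle);

    i[]@visible = seen;                           // the growth rules decide what to do with these

From there the growth rules decide whether to steer away from what was seen (avoidance) or to connect back into it.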

data management:

- Especially for the large scale full cg shots, the data management was quite intense. All those curves were partitioned into boxes, and each box could be processed and grown in parallel (see the partitioning sketch right after this list).

- So first the large scale was grown, then a thickness was defined, then a box was "sliced" out of this - so now things could run in parallel, and that one box setup really represented 5000-10000 unique boxes.

- Each box now contains a subsection of the large scale thern; this subsection gets polywired and turned into a volume. In this volume, the small scale thern is grown. Finally, certain animation attributes are transferred from the large scale curves to help with the timing and guidance of the small scale growth.
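
A minimal sketch of the partitioning idea (not the actual pipeline code, and the box_size channel is made up): assign every point of the grown large scale curves to a grid cell, then split by that id and process each cell as its own task.

    // Point wrangle: bin the large scale curve points into boxes so each box
    // can be grown/processed independently. Downstream you can split by
    // i@box_id (foreach, partition, or separate farm tasks per box).
    float boxsize = chf("box_size");
    vector bmin, bmax;
    getbbox(0, bmin, bmax);

    vector cell = floor((v@P - bmin) / boxsize);
    int nx = int(ceil((bmax.x - bmin.x) / boxsize)) + 1;
    int ny = int(ceil((bmax.y - bmin.y) / boxsize)) + 1;

    // flatten the 3d cell index into a single box id
    i@box_id = int(cell.x) + int(cell.y) * nx + int(cell.z) * nx * ny;

In production there was of course a lot more to it (overlap regions, bookkeeping of which boxes actually contain curves, etc.), but the core idea really is that simple.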

animation:

- modeling and animation were separated. This was crucial, as otherwise each time you slightly changed the animation, the resulting growth pattern would change as well.

- The large flow of the animation was largely based on a Dijkstra pathfinding algorithm. This defined distance-based information on each of the vertices of each line segment. When you map this over a time period, you can trigger certain segments at certain frames based on the distance you calculated with the Dijkstra (I think there are some pathfinding sops in H13 now; see the sketch right after this list).

- We could control the speed at which certain areas grew by manipulating attributes that hindered the growth calculated with the Dijkstra - using noise patterns, metaball weights, gradients, distance to a line,... similar to how you would grow an attribute inside a sopsolver.

- The Dijkstra was first computed on the larger grown curves. This step took a while, as it was not parallel, and it was needed to help drive the growth animation for the small scale thern.

- At the smallest scale, each line segment would basically grow out from one side, but it would not simply be scaled up to its full length; instead there were a few strands that made up each segment:

*** There were strands that grew out whilst rotating around the first pivot, resulting in a flailing effect.

*** There were other strands that would be resampled so there were some points in the middle; these strands would basically rotate twice, the first time around the base pivot and the second time around the middle pivot, which resulted in more of a bending effect.

*** There were yet more strands that were not a single strand but instead 2 or 3 strands, which would perform the bending or flailing animations offset in time by a few frames. This resulted in almost a 'ghosting' effect, where it would take a few frames for a single strand to solidify.

- all of these strand animations, combined and driven by the natural flow of the Dijkstra and the guidance of the artists' controlling gradients, made for some very high detail, finessed growth animations.
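
To make the distance-to-time mapping a bit more concrete, here is a minimal sketch (again, not the production setup): assuming each small two-point segment carries a "dist" point attribute from the pathfinding solve and a "max_dist" detail attribute holding the overall maximum, a primitive wrangle can turn that into a trigger frame and a 0-1 growth amount. All the attribute and channel names are made up for the example.

    // Primitive wrangle over the small two-point segments.
    float d0 = point(0, "dist", primpoint(0, @primnum, 0));
    float d1 = point(0, "dist", primpoint(0, @primnum, 1));
    float maxdist = detail(0, "max_dist");        // assumed precomputed maximum distance

    // the further along the path a segment sits, the later it starts growing
    f@trigger = chf("start_frame") + fit(min(d0, d1), 0.0, maxdist, 0.0, chf("frames_to_grow"));

    // 0-1 growth amount at the current frame, used to drive the strand animation
    f@grow = clamp((@Frame - f@trigger) / chf("grow_frames"), 0.0, 1.0);

That f@grow is then what drives the flailing/bending strand animation per segment, and it is roughly where the hindering attributes (noise patterns, metaball weights, gradients) get mixed in.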

Rendering:

- initially we started rendering with Mantra, which held up quite well, but eventually we switched to prman because the rest of the show was using it, prman has great support for rendertime procedurals, and the custom data formats for feeding it those partitioned boxes were implemented in prman.

- so each box could be delay-loaded into memory and thrown out of memory as soon as a render bucket was done rendering it.

Shading:

- the shader was basically tube based, but the normals were a blend between the normals of the large scale curves and the small scale curves. This really helped to solidify the larger scale look and not get lost in the fizzy detail of the small scale growth. This is the kind of thing Pixar has done for shading hair as well: they would use volumes to shade curves instead of computing the normal of every individual curve, or they would blend the normal of the curve with the gradient of the volume.
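
As a minimal sketch of that normal blend (attribute and channel names are just for illustration, and it assumes both sets of curves already carry an N attribute): in a point wrangle over the small scale curves, with the large scale curves as the second input, you can pull the nearest large scale normal and blend it with the local one.

    // Point wrangle over the small scale curves; input 1 = large scale curves.
    int    near  = nearpoint(1, v@P);             // closest large scale point
    vector big_N = point(1, "N", near);

    float blend = chf("large_scale_blend");        // e.g. 0.7 towards the large scale
    v@N = normalize(lerp(normalize(v@N), normalize(big_N), blend));

The higher the blend, the more the shading reads as one solid large scale form instead of millions of individually lit strands.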

General:

- any really great effect tends to have layers of complexity to it. With destruction, it's not just the main destruction; it's the secondary and tertiary fractures, the debris that triggers at the right time, the dust trails, the inside and outside textures, the procedural noise in the cracks, the natural flow of the cracks, the amount of rotation and bounce in the dynamic simulation, the right sense of weight and timing. In other words, a lot of layers of complexity working together.

- With the thern it was similar: there were many layers (and people) working together to get to the final result. I would argue it is harder than a destruction effect, since it is a 'fantastical effect'. The brief was 'nanofoam' :).

- in terms of reference for this, there were many growth patterns we looked at: snow/ice, branches, roots, chemical reactions, ferrofluids, ground cracking, voronoi. But there was nothing quite like it, so when that look was finally achieved we were quite happy.

- Of course this was a team effort, and myself and others on the team spent a lot of time on this effect.

I learned a huge amount during the almost two years I was on this project - also consider that this was the first project I worked on right after I finished my MSc:

*) vopsops: matrix operations, pointclouds, orient quaternions, up vectors and normals, neighbour/connected-point lookups, prim attributes, layering noises, refitting & normalizing attributes, the vopsop architecture and parallel nature, and the limitations of certain vops.

*) sops: foreach, curve operations like carve, prim sop, resample, facet, divide, voronoi fracture, vertex cusping, weighted subdivision, optimizing animation tests by using vops, deforming using metaball-magnets, warping using cloth-like wrappers, creeping one mesh onto another, lsystems,...

*) dops: sopsolver, data flow, building my own particle system, building my own growth system, building my own sort of Dijkstra in a sopsolver before it was implemented in the HDK by one of my colleagues. Solving over attributes and achieving different beautiful growth patterns (that can drive any effect) - see the sopsolver sketch after this list. Building my own feedback volumetric systems based on energy transfer -- these are really cool, as you can build an entire ecosystem inside a dynamic simulation where one state phase-changes into another and another and another, each phase triggering different effects -- eg: planet earth :P.

*) HDK: I wrote my own rendertime procedural in the HDK to render a huge amount of instanced geometry driven by a pointcloud that would load at rendertime (this is now covered by fast instancing and the Point Instance Procedural shop in Houdini). I am not a great programmer, so the code was a bit hacky, but it worked, and it was great working with others to find ways in which certain attributes on the pointcloud could help control the data that was to be loaded and manipulated from disk. -- You have to realize that this was pretty wild to me, because I was starting to learn that you have access to all of Houdini's functionality in the HDK -- and that SideFX is kind enough to provide an unlimited amount of mantra licenses. So in theory I was having fun thinking I could use mantra to batch process my geometry. Almost like: 'Why would you create things in Houdini if you could program them and create them at rendertime?'. Of course in practice that is not feasible, but again it opened up a world of possibilities in my mind.

The prman procedural was written by a colleague of mine who was much more senior and had more experience doing this - but I learned a lot about how data for render engines works and how you can actually render a ridiculous amount of polygons if you are willing to deal with certain restrictions.

*) general: data management! This was such a heavy project in terms of data requirements that, to be able to process it all efficiently, we had to partition the data and work in parallel. This kind of data management has since been integrated into many other workflows (and also laid some of the foundations of the clustering technology that is now implemented in Houdini).
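
As promised above, a minimal sketch of solving a growth attribute over connected points inside a sopsolver (this is the general technique, not the production Dijkstra; seed a few points with grow = 1 beforehand, and the channel names are made up):

    // Point wrangle inside a sopsolver, running over the previous frame's
    // geometry (the usual sopsolver wiring). A point starts growing once one
    // of its connected neighbours has grown, with a bit of noise so the
    // front stays irregular.
    if (f@grow < 1.0)
    {
        float best = 0;
        foreach (int nb; neighbours(0, @ptnum))
            best = max(best, point(0, "grow", nb));

        if (best > 0.5)
        {
            float n = float(noise(v@P * chf("noise_freq")));
            f@grow = min(f@grow + chf("growth_rate") * fit01(n, 0.3, 1.0), 1.0);
        }
    }

Swap the neighbour lookup for a pointcloud, scale the rate by painted attributes or volumes, and you get very different growth patterns out of the same handful of lines - that is the sort of thing I mean by solving over attributes.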

Richard Pickler, one of the senior technical fx supervisors, did a presentation on the effect at Siggraph (that was a fun summer, as I was presenting with Method for Wrath of the Titans and we met again at Siggraph in 2012).

I have not seen his presentation, but I would think it is worth watching. I personally don't have access to Siggraph content at the moment, and I think you need streaming access in order to see it.

*) presentation (should be cool):

http://siggraphencore.myshopify.com/products/2012-tk156

*) pdf (not quite as cool):

http://dl.acm.org/citation.cfm?id=2343074

I actually gave a (basic) presentation on this at Bournemouth for a class of effects students, and I might still have it. I will have a look for it, though I'm not sure how much of it I can share. At Bournemouth there was a clause that I could show production material if it was for educational purposes.

Hope you found it interesting.


  • 1 month later...

Great work Peter! - hope you're enjoying LA? - certainly looks like the good work on Jam Man has been kept up ever since!

 

ps: oh and yes... I was just about to post my first question (on adding tool handles / manipulators to OTLs) when I saw your featured reel and couldn't resist! :P


Hey Adam,

 

Welcome to the Houdini side :). I have no doubt you will enjoy the flow of working with it - and any of the know-how you acquire in vops might soon come in handy again when Bifrost is rolled out in Maya.

 

Thanks for the comments in regards to my work. It's been a while since the Jam Man, but I have fond memories of my time and work at Bournemouth. I still very much enjoy what I do, and I have been fortunate to work with some very talented artists. The communication, brainstorming and cross-pollination with colleagues has helped tremendously in growing my own artistic and technical skills. Since I posted my showreel I have received several job offers, including one for FX supervisor, which would be an interesting avenue; unfortunately that position was outside of the US. Method has also let me know they want to extend my H1B visa for another 3 years, which is great. They are growing and I am growing with them.

 

I am aiming to remain here a bit longer - trying to get a green card before I perhaps start traveling the world again. I like LA a lot - it is much more in tune with my personality than London, the weather here is great, the projects at Method have been fun and varied, the colleagues are fun and bright, there is diversity in people and food in the city, the cost of living is reasonable (a lot less than London, NY, San Francisco or Vancouver), and having access to the US financial markets is a big plus. Unfortunately the industry as a whole here has been on a rocky path, so who knows how long this will continue.

 

This past year I have learned a lot about vfx, but I have also spent some time learning about finance (there are some great free courses on Coursera) - both personal finance and corporate finance. The corporate finance side is to get a better idea of how Method (and any business, really) operates as a whole and to gain better insight into the bidding/scheduling process. These are also skills that will help me when I eventually grow into a visual effects supervisor.

On a more personal level, in the US you have to manage your own retirement accounts and investments. I think any artist would gain a lot from learning a bit about finance to help plan and navigate their career, no matter where they are in the world. Especially since buying a house is a very tricky proposition if you travel to a different location every few years, whereas a portfolio of stocks can be managed from anywhere in the world and can provide a bit of passive income in case you are in-between shows (although I don't know many houdini artists that are in-between shows for long).

 

For fun I've actually been thinking of doing a 'subliminal' video tutorial on some motion graphics stuff, but using finance data/knowledge at the same time, so I'd be teaching both - I really think it is that important. It has made a big impact on my own life. It made me realize that my second most powerful piece of software (after Houdini) is actually Excel/Google Sheets - especially with Google Sheets you can pull in a lot of data from a variety of websites. I realize I enjoy making sense of and managing large amounts of data, both in fx work and in economic or financial data. The goal of all that data is either to get to a pretty picture, or to gain some insight into an economic trend, the diversification of your portfolio, statistics on a part of the population... so many uses.

 

Looking forward, I want to learn a bit about iOS development, Unity and the Houdini Engine, and see what I can come up with in that area on the side. That way, by the time I finally get that green card, I will have done a lot of learning and will finally be ready (and allowed) to set up my own company and get some apps, video tutorials, consulting, 3d designs, etc. going on the side - that is probably 3-5 years away from now. Exciting times!

