

Helga*(working title)


WIP thread for our animated diploma short film


 


Logline


A rodent knight, his wife and his best pal. Celebrating the return from a bloody crusade, the two brothers in arms are boozing tankard after tankard in a rustic tavern. The night would be good if it weren't for the knight's short-tempered wife. Claiming her husband for herself alone, she antagonizes the two pals, leading them straight into a fast-paced, bloody duel.


 


Genre


A 3-minute black comedy short film. A miniature-built set (eventually digitized with photogrammetry) combined with CGI characters.


 


Characters


 


A rodent knight: Ulfbert
Short-tempered wife: Helga
Best pal: Snorri (he's joining soon)


ulfbert_concept_01_odforce.jpg - helga_concept_01_odforce.jpg


Sculpts


 


ulfbert_sculpt_01_odforce.jpg


 


Moods


 


medieval_feeling_01_odforce.jpg


 


tavern_feeling_01_odforce.jpg


 


Techniques


As the director's goal is a creepy, semi-realistic look, we are trying to carry the handmade feeling of a real set over into CG.


 


Miniature Set:


A real set is currently being built (photos soon).


Photogrammetry:


We are currently experimenting with photogrammetry, scanning the real set to digitize it. Tests are in progress, and we hope we can show images soon.


The goal is to transfer the rich detail and special look of a handcrafted set into CG to harness the power and freedom of full CG animation workflows.


In the optimal case we would end up with geometry and diffuse textures captured from the set, reassembled, lit and rendered in CG. The cameras could then be full CG too.


CG character integration:


The characters will be CG in any case.


 


 


Software


Here's what our CG pipeline looks like.


 


Animation, Rigging:


Maya


Effects, Cloth, Fur, Lighting/Shading:


Houdini/Mantra


Comp:


Nuke


Photogrammetry:


Agisoft Photoscan


 


Why this thread?


We, as the TD team, decided to run our CG pipeline partly with Houdini. Not because we are experienced users, but because we want to learn the software :D


Animation and rigging will be done in Maya and effects, lighting/shading, fur and cloth will be done in Houdini. As beginners with Houdini we are sure to encounter lots of questions


(problems, workflows, best practices, pipeline organization) along the way, and we'd like to use this thread to raise them as they appear.


 


See you soon for some progress :)


Cheers,


Helga Team


Edited by timmwagener


Very nice concepts!

Just one question: why not use Houdini for rigging and animation?

 

Hey MindThrower, at the moment we simply don't have the artists to do that. Also, an Alembic-based workflow should be pretty painless.

 

Photogrammetry Test Shoot

Apart from that, I am just coming from a serious three-day photogrammetry session, where we took thousands of images to test capturing and digitizing the miniature set with Agisoft Photoscan. It was exhausting (I think I was fully awake for three of the last five nights), but also great fun. We have some footage of the whole setup; I will post it here in a few days, along with hopefully a few successful test solves in 3D :)

 

Let me finish with some random WIPs:

ulfbert_head_overpaint_odforce.jpg

-

ulfbert_wams_displace_render_0001_odforc

Edited by timmwagener


Hello guys,

 

I'm the Cloth TD on this project and I have a few questions. 

 

  • When the sleeve of Ulfbert's doublet is bent, it doesn't bend back. How can I influence the simulation so that it does? (https://dl.dropboxusercontent.com/u/23791935/ulfbert_doublet.mp4)
  • I also want the sleeve as a whole to move forward. How would I do that?
  • The cloth has to be at a certain position at the start of the shot. What's the best way to preroll the simulation? I would hold the character in T-pose for one second, then animate two seconds into the pose at the beginning of the shot.
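The preroll scheme in the last bullet can be sketched as a small frame-range helper. This is just an illustration of the timing math; the function and parameter names are hypothetical, not from the project's actual tools:

```python
# Sketch of the preroll idea above: hold a T-pose for one second, then
# blend into the shot's first pose over two seconds, so the cloth settles
# before the shot starts. All names here are hypothetical.

def preroll_range(shot_start, fps=25, tpose_secs=1.0, blend_secs=2.0):
    """Return (sim_start, blend_start): the frame the simulation should
    begin on, and the frame the T-pose starts blending into the shot pose."""
    blend_start = shot_start - int(round(blend_secs * fps))
    sim_start = blend_start - int(round(tpose_secs * fps))
    return sim_start, blend_start

# A shot starting at frame 1001 at 25 fps would simulate from frame 926
# and begin blending out of the T-pose at frame 951.
```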

Feel free to criticize any errors or share any workflow tips for my .hip file. Thank you in advance!

 

Johannes

ulfbert_cloth_rnd.hipnc

Edited by Quazo


Photogrammetry Test

Hey guys let me show you some of the first results from our photogrammetry test session.

 

Photogrammetry....wtf?

In case you have never heard the term, it basically means capturing an environment with photos and turning it into 3D objects. Like camera tracking does for cameras, except that the result is geometry reconstructed from the photos.

Of course it's not foolproof, as always when real-world data is involved, but we wanted to test to what extent we can make use of it in a production scenario.

 

Why Photogrammetry?

Actually the motivation comes from two main directions:

1. We want to preserve that handmade miniature look, which may be hard to match in full CG.

2. We want the freedom of dynamic camera animation and full CG production methods.

 

First results

So here is a quick video showing our Full CG hero character (wip) next to the scanned geometry of a miniature set prop.

Rendered in Mantra:

 

Our test setup

Here are some impressions of our setup:

 

Specs

Our setup consisted of:
* An Arduino turntable wired to a Canon 5D Mark III
* 3 workstations connected through a network
* The first workstation was connected to the camera
* The second workstation ran a Python service to start a live key of the photos in Nuke
* The third workstation could be used for test solves.
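The second workstation's service could look roughly like this: a minimal folder watcher that hands each new photo to a callback (which, in the setup above, would kick off the live key in Nuke). This is a generic sketch, not the project's actual code; the callback and paths are placeholders.

```python
# Hypothetical sketch of a photo-ingest service: poll a folder the camera
# workstation writes JPGs into, and call `on_new_image` once per new file.
import os
import time

def watch_folder(path, on_new_image, poll_secs=1.0, max_polls=None):
    """Poll `path` and call `on_new_image(filepath)` for every new .jpg.

    `max_polls` limits the loop for testing; the real service would run
    until stopped (max_polls=None).
    """
    seen = set()
    polls = 0
    while max_polls is None or polls < max_polls:
        for name in sorted(os.listdir(path)):
            if name.lower().endswith(".jpg") and name not in seen:
                seen.add(name)
                on_new_image(os.path.join(path, name))
        polls += 1
        if max_polls is None or polls < max_polls:
            time.sleep(poll_secs)
    return seen
```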

 

We are not yet sure if the production will use this method, or if we will fall back to traditional VFX methods like camera tracking, rotoscoping, etc.

Edited by timmwagener


Love the photogrammetry setup! Very, very cool and really interesting. I've been doing a few Arduino projects recently and would be very interested in seeing your Arduino code and your Python code to grab the files and do the live key, if you fancied sharing them!

 

What was the mechanism for telling the camera to fire at each Arduino/motor rotation or time increment? Basically, did you somehow script your camera to take photos at specified time increments rather than taking the pictures manually, and how did you sync this with everything else?

 

Great work.

Love the photogrammetry setup! Very, very cool and really interesting. I've been doing a few Arduino projects recently and would be very interested in seeing your Arduino code and your Python code to grab the files and do the live key, if you fancied sharing them!

 

 

Hey Tom, I uploaded it here. I basically coded it in the first (sleepless) night of the shoot, so it contains some ugly stuff (print debugging etc.), but at least I tried to comment it in an understandable fashion before uploading.

I can upload the Arduino scripts once I get back to Filmakademie.

 

What was the mechanism for telling the camera to fire at each Arduino/motor rotation or time increment? Basically, did you somehow script your camera to take photos at specified time increments rather than taking the pictures manually, and how did you sync this with everything else?

 

 

 

Ghetto style: we cut open a normal remote control for the Canon 5D, which works by closing a contact when you push a button. Then, in the Arduino script, we set the number of images we wanted, rotated the servo for the turntable, paused to let the prop rest, and then rotated another servo to close the contact of our ripped-open remote. You can see it in action at about 24 sec into the video. It's a simple mechanical solution rather than a smart one, but it worked.

The good thing was that we never had to worry about synchronization.
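The capture loop described above can be sketched in Python rather than the actual Arduino code (which isn't reproduced here): for each image, step the turntable by an equal angle, let the prop settle, then trigger the shutter. The function and values are illustrative assumptions, not the project's real script.

```python
# Hypothetical sketch of the turntable capture loop: one full 360-degree
# pass, split into equal steps, with a settle pause before each shutter.

def capture_steps(num_images, settle_secs=2.0):
    """Yield (turntable_angle_degrees, settle_secs) for each capture."""
    step = 360.0 / num_images
    for i in range(num_images):
        yield (round(i * step, 2), settle_secs)

# The driving loop would then look something like:
# for angle, pause in capture_steps(36):
#     rotate_turntable(angle)   # servo 1 steps the prop around
#     sleep(pause)              # let the prop rest so the photo is sharp
#     press_shutter()           # servo 2 closes the remote's contact
```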

Edited by timmwagener


Hah, love the DIY aspect of it all. Nice one! You've got me wanting to make one now...
Thanks for the python script and arduino code. Really interesting post - Look forward to hearing more.


Nice project as usual, Tim. It's even more interesting that you have decided to use a photogrammetry technique when 3D printing is getting a lot more accessible and many people go the other way around :) It would be great if you posted some progress and wrote about your experiences during these last two months.

 

Best regards!


Hello guys,

 

I'm the Cloth TD on this project and I have a few questions. 

 

  • When the sleeve of Ulfbert's doublet is bent, it doesn't bend back. How can I influence the simulation so that it does? (https://dl.dropboxusercontent.com/u/23791935/ulfbert_doublet.mp4)
  • I also want the sleeve as a whole to move forward. How would I do that?
  • The cloth has to be at a certain position at the start of the shot. What's the best way to preroll the simulation? I would hold the character in T-pose for one second, then animate two seconds into the pose at the beginning of the shot.

Feel free to criticize any errors or share any workflow tips for my .hip file. Thank you in advance!

 

Johannes

Maybe it's quite late for this now, but just a heads-up: your scene doesn't come with any geometry, since your locked nodes contain packed Alembic, which is still just a reference to the original .abc file. So no luck without that file.


Status Update


 

Yo dudes, here's a quick update on what we have been doing over the past months,

coupled with some questions about the next steps concerning the lighting and shading

pipeline with Houdini :D

 

Real Photogrammetry Session (not testing anymore)


 

We had our huge, week-long photogrammetry session. It was quite an exhausting week with little sleep, but in the end the setup worked really well, the organization was good, and I would call it quite a success.

We returned with approx. 150 GB of photos (JPGs), and we must have stressed the camera (Canon 5D Mark III) with about 70,000 captures ;)

Here are some impressions

 

Solving


 

Right now about 85% of the props are solved with Agisoft Photoscan.

Here are some rough first render tests. No shading or artistry has happened on them at all; we just quickly assembled the solved meshes and textures, or threw them into our lighting-stage HDAs.

 

 

Assetizing/Animation Pipeline


 

While our riggers are building the rigs, I'm currently working on the process of assetizing shots, props and characters for the animation pipeline in Maya.

Meaning I'm trying to set up a metadata system that simplifies the interaction with the objects for the animators, ensuring they export the right thing, with the right settings (implicitly adding simulation preroll on export etc.), in a multithreaded fashion, with one (or two) clicks.

 

Impressions:

post-10542-0-97063400-1409047688_thumb.j - post-10542-0-89337700-1409047699_thumb.j

 

Lighting/Shading Pipeline


 

As soon as the animation export is automated, the lighting and shading pipeline with Houdini/Mantra will be blocked out.

There are a few things we would be happy to get advice on, regarding the Houdini-ish ways of achieving them....

 

Lighting/Shading pipeline thread 



Edited by timmwagener


More test renderings


Hey guys, the first more serious test renderings are dropping in.

There is no comp on these, and the textures are straight out of Photoscan.

The geometry had some postprocessing, though: it is mostly all quads now and fairly high-res.

Details will come from the geometry and bump maps, so for now we have switched to a displacement-less approach. The final scene will have a polycount of about 50 million, and we will rely entirely on Alembic delayed-load archives.

 

These assets will serve as a first start; all further tweaks will be done as the shots demand.

 

 

The renderings right now are made in V-Ray, as I made them while preparing all the assets for the animators in Maya, but we will soon start the lookdev phase with Mantra.

Edited by timmwagener


Hey guys,
 
here are some updates on the Helga project, and some visual stuff.
 
Maya to Houdini Pipeline

 


 
On the Maya side:
-----------------------------------------------------------------------------

We use a PySide tool called Asset Manager to export the needed content for a shot. Its features include:

- Multithreaded Alembic export
- An interface that is not just a UI to trigger commands; it's always in sync with Maya
- A stable metadata system
- An interface to run custom control scripts in the export process (for example, scripts that enforce a rest pose for simulation, re-assign attributes etc.)
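The "multithreaded Alembic export" item above can be sketched, independent of Maya, as a small job dispatcher: one export job per asset, run on a thread pool. This is a generic illustration; `export_fn` is a placeholder for the real per-asset export call (in Maya it would wrap AbcExport), not the actual Asset Manager code.

```python
# Hypothetical sketch of a multithreaded per-asset export dispatcher.
from concurrent.futures import ThreadPoolExecutor

def export_assets(assets, export_fn, max_workers=4):
    """Run `export_fn(asset)` for each asset concurrently.

    Results come back in input order, so failures can be matched to
    the asset that caused them.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(export_fn, assets))
```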
 
 
On the Houdini side:
-----------------------------------------------------------------------------
- Custom Python HDAs for Alembic import
- Building Alembic hierarchies for our specific needs
- Rebuilding based on Alembic attributes within the assets
- Bypassing node types like Alembic Xform, using entirely standard Houdini node types, to be more compatible with standard shading workflows.
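The "build Alembic hierarchies" step above boils down to turning the flat object paths an Alembic archive exposes into a nested tree that a Python HDA could then rebuild from standard Houdini nodes. A minimal, generic sketch of that path-to-tree step (the paths are illustrative, not from the project):

```python
# Hypothetical sketch: group flat Alembic-style object paths into a
# nested dict, one level per path component.

def build_hierarchy(paths):
    """Turn paths like '/char/body' into {'char': {'body': {}}}."""
    tree = {}
    for path in paths:
        node = tree
        for part in path.strip("/").split("/"):
            node = node.setdefault(part, {})
    return tree
```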

 

 

Lookdev with Houdini/Mantra

 

 

Render tests of our photogrammetry assets with Houdini's renderer, Mantra.
 
Engine: Raytracing (in pbr mode with compute lighting nodes)
Diffuse BRDF: Ashikhmin diffuse and pbrdiffuse (for rough surfaces)
Reflectance BRDF: GGX
Additional SSS for wood.
 
Energy-conserving all the way (except for adding the SSS on top).
Rendered as is, no comp on these.
We are using the Ashikhmin diffuse and GGX shaders from the BRDF Bonanza thread at odforce, thanks to the authors!! :D
Thanks also to Dennis Albus from this forum for valuable Houdini/Mantra support!
 
One issue, though: the renders have a lot of noise!! It comes mostly from the indirect diffuse, but also from the reflection component.
Any ideas on how to reduce it? Would it be an option to use another GI method and leave diffuse samples at 0? Is there something like final gathering available in raytracing mode? (We use raytracing anyway and evaluate the BRDFs with compute_lighting nodes.)
Edited by timmwagener


Hi guys,

 

I have been working on the fur setup and shading for our main character Ulfbert.

Here is a WIP turntable of the head (no shading work has been done on the clothes yet):

 

 

The fur setup is an HDA containing multiple Houdini Fur Setups for the head, beard, hands and feet. Fur shading has been done using the Mantra Hair Model. 

 

The grooming of the short fur is attribute-based and completely done in Houdini, whereas the beard is based on guides exported from Blender, as suggested in this great tutorial (which helped me a lot, because I did fur for the first time ever and felt quite lost at the beginning):

However, I had to put quite some effort into this workflow to make it work with an arbitrary animated Alembic cache of our Ulfbert character.

 

The surface shading (skin, teeth, ...) pretty much conforms to the standards that Timm outlined in his previous post.

 

Does anybody have experience with fur collisions (like fur colliding with the clothes)? I was wondering whether it is preferable to set the Wire Solver to SDF or geometric collisions. Any tips or experiences are highly appreciated ;)



Holy wow, you ain't foolin' around. Thank you for all the progress pics! Inspiring to say the least.


You can collide the wire objects against volumes or polygons. Polygons will be faster, since Houdini doesn't have to calculate the SDF for the deforming object each frame, plus you avoid weird volume shapes, since you sometimes get spikes (like tubes that go from one side to the other because the ray didn't hit geo).

If you use the tools from the shelf, you will see that the wire collisions are set to use polygons by default.

Edited by pelos

