Best caching workflows in production


magneto


Hi,

 

I noticed some big studios have half-baked solutions for caching data from Houdini, which forces some of the artists to roll out their own solutions.

 

I saw some flavors of File Cache SOP with versions, auto names from node name and creating a proper folder with a lot of data regarding the cache. But when you cache so many iterations of so many types of data, it gets challenging to find the right data, let alone which scene it came from.

 

How do you track this sort of thing? What are the best workflows and practices for caching in production?

 

 

Thanks :)


Now, I haven't worked with any of the production trackers myself, but as I understand it, most studios today use Ftrack, Shotgun, or similar in-house tracking solutions, together with scripted tools that automate the versioning hierarchy for scenes, shots, and elements/assets. This is just too important to leave up to individual artists; you've got to automate the sh!t out of it just to be sure stuff is where it needs to be, without the risk of any single individual messing it up.

 

But it would be interesting to hear someone at one of these houses do a breakdown of their pipeline with regard to shot progression and versioning. :)


I think these are some valuable insights:

 

Saving geo to disk:

 

1. .hip file is copied as a snapshot and archived elsewhere

2. geo contains metadata as detail attributes saved to the info block; this allows the metadata to be retrieved without loading the entire file or even opening Houdini. The gstat command-line tool can be used to query the metadata. This metadata includes the following (at a minimum):

  • path to current 'live' .hip file
  • path to archived 'snapshot' .hip file
  • houdini operator path to the node generating data i.e. /obj/geo1/rop_geometry1

This is extremely valuable in tracking down files that generated specific geom if ever needed.

With this data you can regenerate the geometry without even opening Houdini: just hbatch the file and render the node responsible.
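To make that concrete, here is a minimal pure-Python sketch of assembling that metadata before it gets stamped onto the geometry. The function name, the attribute keys, and the paths are all hypothetical; inside Houdini you would write each entry as a string detail attribute (for example from a Python SOP or a pre-render script on the geometry ROP) so gstat can report it without loading the data.

```python
import getpass
import time

def build_cache_metadata(live_hip, snapshot_hip, rop_path):
    """Assemble the metadata to stamp onto cached geometry as detail attributes.

    All key names here are made up for illustration; inside Houdini each
    entry would become a string detail attribute so gstat can query it
    without reading the geometry itself.
    """
    return {
        "cache_hip_live": live_hip,          # path to the current 'live' .hip file
        "cache_hip_snapshot": snapshot_hip,  # path to the archived snapshot .hip
        "cache_rop_path": rop_path,          # operator path that generated the data
        "cache_user": getpass.getuser(),     # handy extras beyond the minimum
        "cache_time": time.strftime("%Y-%m-%d %H:%M:%S"),
    }

meta = build_cache_metadata(
    "/jobs/show/shot/houdini/fx_v012.hip",
    "/jobs/show/shot/archive/fx_v012_snapshot.hip",
    "/obj/geo1/rop_geometry1",
)
```

With the snapshot path and ROP path recorded, the regeneration step above is just a matter of reading these attributes back and pointing hbatch at them.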

 

3. it's helpful to have an option to save to disk in the background. The method SideFX added in H14 (I think it was that version) is OK but really lacks options, so it's nice to have one built using hbatch/hscript.
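A background write along those lines can be sketched like this: build an hbatch invocation that opens the file, renders the ROP, and quits. This assumes hbatch's -c flag for passing hscript commands; the helper name and paths are made up, and a real tool would add error handling and logging.

```python
import subprocess

def background_cache(hip_file, rop_path, launch=False):
    """Build (and optionally launch) an hbatch command that renders one ROP.

    Assumes hbatch's -c flag, which runs hscript commands against the file.
    With launch=True this starts a separate process, so the Houdini session
    isn't blocked while the write happens.
    """
    cmd = ["hbatch", "-c", f"render -V {rop_path}; quit", hip_file]
    if launch:
        return subprocess.Popen(cmd)  # returns immediately; write continues in bg
    return cmd

cmd = background_cache("/jobs/show/shot/fx_v012.hip", "/obj/geo1/rop_geometry1")
```

On a farm you would submit `cmd` to the scheduler instead of calling Popen locally.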

4. maybe alternative proxy files are saved as well for quick viewport interaction (useful for things like large volumes, even a pointcloud representation can be useful for lighting and repo-ing quickly)

5. the performance monitor can be scripted to give you .csv or .hperf files which are valuable logs to have so you know not only how long the geom write took, but which parts of your scene are slowing it down the most.  This is useful as it allows you to optimize with each iteration.
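Those .csv logs are easy to mine afterwards. A small sketch, assuming a hypothetical two-column export (node path, cook time in seconds); adjust the column names to match whatever your performance-monitor export actually produces:

```python
import csv
import io

def slowest_nodes(csv_text, top=3):
    """Rank nodes by cook time from a performance-monitor CSV export.

    The 'node'/'cook_time' column layout is an assumption about the export
    format, not a documented schema.
    """
    rows = csv.DictReader(io.StringIO(csv_text))
    timings = [(r["node"], float(r["cook_time"])) for r in rows]
    return sorted(timings, key=lambda t: t[1], reverse=True)[:top]

# toy log for illustration
log = """node,cook_time
/obj/geo1/vdbsmooth1,41.8
/obj/geo1/scatter1,3.2
/obj/geo1/rop_geometry1,12.5
"""
worst = slowest_nodes(log)
```

Running this across every cache iteration gives you a history of where the cook time goes, which is what makes per-iteration optimization practical.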

 

 

These are some basic things that can be set up quite easily and don't add too much bloat to disk space or cook times. I don't care too much for the tracking side of things, as that's usually more political and not worth getting caught up in. Maybe there are some things I've forgotten to mention, but this should be a good starting point.

 

Similar methods can be used for Mantra rendering, especially via .exr metadata. If comp is using a render from 124 versions ago, you should be able to track things down fairly easily (no trial-and-error rifling through files).

Edited by jkunz07

everywhere i've been for the last few years has used rops to handle this stuff.  it takes a bit longer to work with since you've gotta do explicit writes/reads, but it also makes it easier to manage from the tool development side of things.

 

so you have a custom geo rop that exposes specific options, but hard-codes the destination based on your shot environment.  the only thing the artist can do is give the element a name which is stuffed into the mix with all the other identifiers (show, sequence, shot, department, task, version, element, format, maybe even resolution).  the rop is a convenient container to retain all the info so when you version up your file, running the rop again puts it in the same place aside from your version change.  the write in bg is kinda nice, but it's usually better to just launch on the farm (assuming you have a farm).
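The naming scheme behind such a rop can be sketched as a function that assembles the path from the shot environment, with the element name as the only artist-supplied input. The root, directory layout, and token order below are all invented for illustration; real studios each have their own conventions.

```python
import os

def cache_path(show, seq, shot, dept, task, element, version, fmt="bgeo.sc"):
    """Hard-code the cache destination from shot-environment tokens.

    Layout is hypothetical. The artist supplies only `element`; everything
    else comes from the shot environment, so versioning up the file and
    re-running the rop lands the cache in the same place with a new version.
    """
    root = os.environ.get("JOB_ROOT", "/jobs")  # hypothetical env variable
    ver = f"v{version:03d}"
    return os.path.join(
        root, show, seq, shot, dept, task, element, ver,
        f"{shot}_{task}_{element}_{ver}.{fmt}",
    )

path = cache_path("myshow", "ab", "ab010", "fx", "pyro", "smoke", 12)
```

Because the rop owns this logic, no two artists can disagree about where an element lives.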

 

the same goes for mantra ... custom rop that handles the naming for you.  really, an artist shouldn't have to worry about where things go.

 

of course, the other side of this is a means to read in your data.  having a custom node to read in your data without having to specify the exact path is also helpful.  at rhythm, we used a tokenized mechanism that acted as a live-link kind of thing.  you'd specify things using only the show/sequence/shot information and it'd figure out where that was on disk.  using "latest" for the version would pull the most recent version automatically, so you wouldn't have to manually change things.
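That "latest" token can be resolved with a small helper that scans the version directories on disk and picks the highest one. A minimal sketch, assuming a flat vNNN directory layout (the function name and layout are hypothetical, not how the Rhythm tool actually worked):

```python
import re
from pathlib import Path

def resolve_version(element_dir, version="latest"):
    """Resolve a version token against the versions present on disk.

    'latest' picks the highest vNNN directory; an explicit token like
    'v007' passes through untouched. Directory layout is an assumption.
    """
    if version != "latest":
        return Path(element_dir) / version
    pat = re.compile(r"^v(\d+)$")
    versions = [d for d in Path(element_dir).iterdir()
                if d.is_dir() and pat.match(d.name)]
    if not versions:
        raise FileNotFoundError(f"no versions under {element_dir}")
    return max(versions, key=lambda d: int(pat.match(d.name).group(1)))
```

Reading nodes that evaluate this at cook time give you the live-link behavior: bump the version on disk and downstream scenes pick it up without any manual path edits.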

 

i've implemented stuff along these lines the last couple places i've been.  of course, you also need to have more of a back-end system in place to handle your tracking and such.  shotgun is kind of sucky at this, imo, so i've written my own asset manager...

