Showing results for tags 'load'.

Found 5 results

  1. I created a rather complex Digital Asset. It consists of ~2000 nodes, and there are also ~60 instances of another asset inside it. I optimized it by cleaning up nodes with the Performance Monitor, and it now cooks fairly fast after loading, but LOADING takes nearly 10 seconds. This has become a real problem because I need to place it many times in my hip file. Is there any way to speed up loading of the asset? Also, every instance of this asset increases Houdini's RAM usage by ~500 MB, which is also a problem...
  2. I made this HDA to streamline the process of versioning caches. It automatically produces a file path and file name for your cache, and loads it back in once it is exported. You can flip through different versions easily by using the version slider, or by using the 'Create File Node for This Version' button and wiring the file nodes up to a Switch node. You can write detail attribute strings to store notes about the cache, such as simulation parameters - very useful when referring back to old sim caches (a minimal sketch of that note-attribute idea is included after this list). At the moment this is a non-commercial HDA. The download link is further down this page, beneath the video. Get at me with thoughts, comments, questions etc. SOP_MI_version_filecache.hda
  3. Hello, when I am loading VDB volumes with the File SOP it sometimes takes quite a while, especially with big volumes stored on the network. I was wondering whether it is possible to load only specified volumes from a VDB file, for example to load only heat, to keep loading times down. This would save a lot of time in situations where I am blasting away all the other volumes I don't need directly after the File node. I quickly checked the docs and found that it is possible to first read the header to find out what grids are stored and then load only the needed volume (in the "Reading and modifying a grid" section). Any ideas if this is supported right now in the Houdini implementation, or if there are any plans to add it? (A small pyopenvdb sketch of that header-then-grid approach is included after this list.) Juraj
  4. Opening one scene I get a 'Load failed for... Unexpected end of .hip file' message. The hip file does open, but most of the nodes are missing. Does anyone know why this is? Is there a way to fix the hip file? The previous two versions are also corrupt, but the one before that is fine. I can't think of anything that might have caused it. I updated from Arnold 4 to Arnold 5 and it's possible that it happened between those versions, but my other hip files seem to be fine.
  5. I have a question that has been bugging me for some time and I couldn't find much information about it. Which is the best and most efficient way to render many polygons: using delayed load procedurals or using packed disk primitives? Or am I confused, and are they both doing the same thing with no difference between the two workflows? As far as I know, they both create instanced geometry. The documentation doesn't help much either; half of the things I read talk about optimizing a render using delayed load procedurals, and the other half about using packed primitives. I'm wondering whether packed primitives are the new workflow and delayed load procedurals were the old way of doing it and are now obsolete. Here are the two workflows I'm talking about:

     Packed Disk Primitives
     Here I pack all my geometry and write it out to disk. I then load it back and change the load setting to "Packed Disk Primitives". Then I generate my IFDs, which now reference the geometry on disk instead of having to write it out (so the IFDs are only a few KB or MB). I then render using those IFDs. Here is what the documentation says about it: "Packed Primitives express a procedure to generate geometry at render time." "Because Packed Disk Primitives by their nature are geometry streamed from a file, similar to Alembic primitives, we don't have to use a special procedural to get smaller IFDs."

     Delayed Load Procedurals
     Here I write out my geometry (not packed) as bgeo, then make a Delayed Load Procedural shader and select the bgeo files I just wrote to disk. I then go to the Rendering -> Geometry tab of my object and load my procedural shader. I then create my IFDs and render them out. The documentation about delayed load procedurals talks about optimizing geometry this way.

     So I know there are these two ways, but are they both equally good, or is one of them better than the other? Which workflow do you use? (A rough sketch of the packed-disk workflow follows this list.) Also, when using packed disk primitives, if the geometry you want to render is unique and can't be instanced (or there's just no point in doing it), do you still pack it (so it's only one packed prim) and save it out? Or do you use delayed load procedurals? Do you use any other workflow? Any advice on this would be greatly appreciated! Thanks
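
A minimal sketch of the note-storing idea from result 2 (not the HDA's actual code), assuming Houdini's Python hou module. The attribute name "notes", the file paths and the function names are made up for illustration.

    import hou

    def save_cache_with_note(sop_node, path, note):
        # Copy the SOP's geometry into an editable hou.Geometry, stamp the note
        # on it as a detail (global) string attribute, then write it to disk.
        geo = hou.Geometry()
        geo.merge(sop_node.geometry())
        if geo.findGlobalAttrib("notes") is None:
            geo.addAttrib(hou.attribType.Global, "notes", "")
        geo.setGlobalAttribValue("notes", note)
        geo.saveToFile(path)  # e.g. "/tmp/sim_v003.bgeo.sc"

    def read_cache_note(path):
        # Load the cached file back and return the stored note (empty if none).
        geo = hou.Geometry()
        geo.loadFromFile(path)
        return geo.attribValue("notes") if geo.findGlobalAttrib("notes") else ""

    # Example usage (node path is hypothetical):
    # save_cache_with_note(hou.node("/obj/sim/OUT"), "/tmp/sim_v003.bgeo.sc",
    #                      "substeps 2, viscosity 5000")
    # print(read_cache_note("/tmp/sim_v003.bgeo.sc"))

Keeping the note as a detail attribute means it travels with the cached file itself, which is what makes old sim caches self-documenting when you come back to them later.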
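For result 3, the header-first technique mentioned in the OpenVDB docs can be sketched with the pyopenvdb bindings. This is the OpenVDB library level, not the File SOP itself, and the file path and grid name are placeholders.

    import pyopenvdb as vdb

    path = "/net/sims/pyro_v012.vdb"   # placeholder network path

    # Read only the file header / grid metadata: cheap, no voxel trees loaded.
    for grid in vdb.readAllGridMetadata(path):
        print(grid.name, grid.metadata)

    # Then pull just the grid you actually need (e.g. "heat"), leaving
    # density, vel, etc. untouched on disk.
    heat = vdb.read(path, "heat")

Whether the File SOP exposes this per-grid loading is exactly the open question in the post; the sketch only shows what the underlying library allows.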
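For result 5, a rough sketch of the packed-disk-primitive half of the workflow using the hou module. The Pack, ROP Geometry Output and File SOP type names are stock nodes and the output path is a placeholder; the 'loadtype'/'delayed' menu token on the File SOP is an assumption, so setting Load to "Packed Disk Primitive" in the parameter UI is the safer route.

    import hou

    obj = hou.node("/obj").createNode("geo", "packed_export")
    src = obj.createNode("testgeometry_pighead")      # stand-in for real geometry

    # Pack everything into a single packed primitive and write it out.
    pack = obj.createNode("pack")
    pack.setFirstInput(src)
    rop = obj.createNode("rop_geometry")
    rop.setFirstInput(pack)
    rop.parm("sopoutput").set("/tmp/geo/pighead_packed.bgeo.sc")
    rop.parm("execute").pressButton()                 # write the cache to disk

    # Load it back so the geometry stays on disk and the IFD only references it.
    load = obj.createNode("file", "load_packed")
    load.parm("file").set("/tmp/geo/pighead_packed.bgeo.sc")
    # Set Load to "Packed Disk Primitive" on the File SOP; the parameter name
    # 'loadtype' and the menu token 'delayed' are assumptions here.
    load.parm("loadtype").set("delayed")

The IFDs generated from the packed-disk load then reference the bgeo on disk instead of embedding the geometry, which is why they stay small, as described in the post.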