Showing results for tags 'optimisation'.

Found 4 results

  1. Hi, I've got a question about compile blocks. I'm working on a relatively simple process that I need to run on extremely heavy data sets (groom curves), and I've noticed something odd. I need to run a check on each curve, and for reasons specific to my setup it has to run on every curve in isolation, so I'm putting my loop inside a compile block. If I loop over 100,000 curves, I get extremely fast results for the first 10,000 or so, but after about the halfway point my CPU usage drops right off: instead of calculating hundreds of curves per second, it's down to 10–20 per second. The first half of the data set finishes in about 4 minutes, but the second half takes 30! At first I thought the data set was getting progressively heavier, but if that were the cause I'd expect CPU usage to stay high even as progress slowed. I also tried randomising the sort order to see if that changed the result, and it was the same. So my question is this: what factors, apart from the size of the data set, could cause a compiled block process to slow down over time? Any insights are very welcome! Thanks in advance! M
  2. Hi guys, could any of you help me set up frustum culling, or geo/obj culling based on bounding-box intersection with a given cube/box/sphere? Please let me know if you can help me out. Appreciate your time! Cheers!
  3. Hi, I recently decided to recreate an old Maya project in Houdini, which meant exporting animated meshes from Maya to Houdini, so I used Alembic. The 960-frame cache was 119 MB. I made some modifications and tried exporting it as bgeo, but it exploded to 106 GB, so I tried abc instead, which turned out to be 35 GB. Yes, I added some attributes, subdivided a few things, and so on, but that still seemed like far too much for the changes I made, so I decided to investigate. I exported an animated abc from Maya 2019 of just 11 frames (step 0.25) and that was 94 MB. I opened the abc in Houdini with a basic Alembic node (default H18 settings) and exported it straight from the read node with a rop_alembic_output node, using the same frame range and step. I tried different formats (HDF5 and Ogawa, the one Maya uses) and different settings, but all gave pretty much the same result: about 1.5 GB. I didn't find any settings relating to compression or anything like that. Is it the same for you guys? Does anyone have an explanation for this, and perhaps a solution or idea for how to reduce abc size out of Houdini? Thanks, Martin --------- Houdini 18.0.349, Maya 2019
  4. Hi everyone! I'm currently working on a fairly "illuminated" scene, where I use loads of dim instanced point lights. Normal lights provide the option of setting an attenuation radius, which usually speeds up render time a lot, but the light template does not provide that option. As shaders I use the asadlight and rayshadow. Asadlight has something similar: an attenuation ramp that gets multiplied with the attenuation, so you can set it to black after a certain distance. I know this should have the same visual effect, but is it as efficient as the radius option on the normal point light? Is there any other way to "clip" a light? The reason I'm asking is that I set the attenuation ramp to black after a certain distance, set up a render, and it still takes ages, despite the fact that the shading point is only in range of about 4 lights. Once I deactivate the other lights, the render goes super fast. Any advice, experience, wisdom or other info is much appreciated! Thanks a lot, Sustaxata
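On the bbox-culling question in result 2: outside of any Houdini-specific nodes, the underlying intersection tests are simple. Here is a minimal pure-Python sketch (plain tuples, no `hou` module; all names are my own) of AABB-vs-AABB and AABB-vs-sphere overlap tests, which you could run per object against the culling volume's bounding box:

```python
# Axis-aligned bounding-box culling tests (pure Python sketch).
# An AABB is a (min_xyz, max_xyz) pair; a sphere is (center_xyz, radius).

def aabb_overlaps_aabb(a_min, a_max, b_min, b_max):
    """True if two axis-aligned boxes intersect (touching counts)."""
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

def aabb_overlaps_sphere(b_min, b_max, center, radius):
    """True if a box and a sphere intersect: clamp the sphere's center
    to the box, then compare the squared distance against radius^2."""
    d2 = 0.0
    for i in range(3):
        c = min(max(center[i], b_min[i]), b_max[i])  # closest point on box
        d2 += (center[i] - c) ** 2
    return d2 <= radius * radius

# Cull a dict of object bboxes against one culling box:
objects = {
    "geo_a": ((0, 0, 0), (1, 1, 1)),
    "geo_b": ((5, 5, 5), (6, 6, 6)),
}
cull_box = ((-0.5, -0.5, -0.5), (0.5, 0.5, 0.5))
kept = [name for name, (lo, hi) in objects.items()
        if aabb_overlaps_aabb(lo, hi, *cull_box)]
```

In Houdini this would map onto comparing each object's bbox (e.g. from a Bound SOP, or `hou.Geometry.boundingBox()`) against the cull volume, then blasting or switching out the objects whose test fails.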
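A rough sanity check on the numbers in result 3: with static topology, the bulk of a deforming-mesh Alembic cache is point positions at about 12 bytes per point per time sample (3 × float32 for P), so the expected size can be estimated independently of any exporter. A back-of-the-envelope Python sketch (the 200k point count is a made-up illustrative figure, not from the post):

```python
# Back-of-the-envelope Alembic cache size estimate (hypothetical
# point count; the frame range and sizes come from the post).

BYTES_PER_POINT_SAMPLE = 3 * 4  # P as 3 x float32

def est_cache_mb(num_points, num_frames, step=1.0):
    """Estimated raw size in MB of per-sample point positions only."""
    samples = int((num_frames - 1) / step) + 1
    return num_points * samples * BYTES_PER_POINT_SAMPLE / 1e6

# 11 frames sampled at step 0.25 -> 41 samples; e.g. 200k points:
size_mb = est_cache_mb(200_000, 11, step=0.25)

# The observed blow-up from Maya's 94 MB to Houdini's ~1.5 GB:
ratio = 1500 / 94  # roughly 16x
```

If the raw-position estimate lands near the Maya file size but far below the Houdini one, that points at Houdini writing extra per-sample data (e.g. attributes promoted to samples, or topology re-written every sample) rather than at a missing compression setting.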
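On the attenuation question in result 4: a hard attenuation radius and a ramp-to-black can produce the same image, but the radius lets the renderer skip the light entirely for shading points outside it, whereas a ramp is only applied after the light (and its shadows) have been evaluated. A minimal Python sketch of that difference (my own illustrative functions, not Mantra's actual code path):

```python
# Hard cutoff radius vs. ramp-to-black attenuation (illustrative sketch).

def light_contribution(intensity, dist, cutoff_radius=None):
    """Inverse-square falloff; with a cutoff radius the light is
    rejected before any shading work (the cheap early-out that a
    per-light attenuation radius enables)."""
    if cutoff_radius is not None and dist > cutoff_radius:
        return 0.0                       # light skipped entirely
    return intensity / (dist * dist)     # 1/d^2 falloff

def ramp_to_black(dist, cutoff_radius):
    """A step ramp that goes to black beyond the cutoff."""
    return 0.0 if dist > cutoff_radius else 1.0

def light_contribution_ramped(intensity, dist, cutoff_radius):
    """Same image beyond the cutoff, but the falloff (standing in for
    the full light/shadow evaluation) still runs before the multiply."""
    return (intensity / (dist * dist)) * ramp_to_black(dist, cutoff_radius)
```

Both return identical values everywhere, which matches the observation in the post: the ramp looks right but doesn't render faster, because every light is still evaluated before being multiplied to zero.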