symek

Members+
  • Content count: 1,791
  • Days Won: 60

Community Reputation: 245 Excellent

2 Followers

About symek
  • Rank: Grand Master
  • Birthday: 09/26/1975

Contact Methods
  • Skype: szymon.kapeniak

Personal Information
  • Name: Szymon
  • Location: Waw/Pol

Recent Profile Visitors
  • 19,962 profile views
  1. It seems they don't work in 16.0.547 (they don't produce any Alpha, as you discovered). Only LumaMatte works. Bug submitted.
  2. Hmm... in 15.0 they both work as expected. I would assume it's a bug rather than a deliberate change. I will submit a bug. Thanks!
  3. LumaMatte (don't ask me why).
  4. You could use an Occlusion VOP in your surface material. Remember to change its max distance from -1 to something close to the interior's size (otherwise every ray hit will be treated as occlusion == black render). Also, the background color has to be set to 2.0 (see the Occlusion sketch after this list).
  5. Houdini really needs a lot more feedback and pressure from users in lighting / render departments. Its architecture is so powerful in this respect; it's a matter of rounding off corners and/or finishing what's already there.
  6. Sorry if I misled you before, but on second thought I realized Attribute Reorient is perhaps the easier method if you already have transformed geometry (that is, if I understand your problem correctly). Deformation Wrangle is another option, though. In the attached scene I don't touch your attribute; I only rotate the geometry based on the template orientation (like the Copy SOP does), but vector_to_transform magically follows this transformation. Is that the result you were after? transform_attributes_skk.hiplc
  7. Houdini supports adaptive sampling for secondary (indirect) rays (check the ray variance controls on the Sampling tab and the quality multipliers on lights; see the sampling sketch after this list), but not for primary (direct) rays, which was postponed afaik.
  8. There is no THE definitive way of doing passes in Houdini, which might be somewhat frustrating for someone coming from XSI with its particularly nicely designed passes system. There are a number of approaches, each with its own pros and cons, and they are mostly limited, afaik, not by their design or implementation but by the user's ability to manage the complexity they involve. For example, Takes are an excellent concept, very powerful, but some people tend to avoid them because they are very hard to manage in rich production scenes. Since using a number of Mantra ROPs with object-state overrides, pass dependencies, exchangeable sets of shaders, switches in SOPs (or different flags for display and render), smart bundles, and naming expressions (object membership based on name patterns) is such a natural and easy way of working in Houdini, there is hardly any reason to use Takes or anything more advanced. Of course you can mix any of these ideas, find the best angle for taming Takes, or go even deeper with shaders driven by object states and custom properties. From my experience, simple solutions like naming objects with a _NOSHAD postfix and using wildcards in the light linker are the best methods for handling complexity (see the light-mask sketch after this list). You can build a pretty advanced setup from these basic concepts by making your own ROPs, extending object parameters, and using shaders with renderstate() imports.
  9. 1. The main difference is that with IFD you don't have to bother Houdini to do the rendering; only Mantra is involved. An IFD is simply a render scene description, plus possibly a cache of the geometry to be rendered. 2. There are some other considerations that you probably don't have to worry about, like the extra flexibility that comes from the fact that IFD files can be filtered with Python, which effectively lets you reuse them for different passes (for example, replacing all shaders in a scene and rendering masks instead of beauty passes; see the IFD filter sketch after this list). IFDs are also good for debugging when something is wrong in your frames. For most basic cases this is not important. Moreover, IFDs are not as sexy as they seem: HIP files are usually much smaller than the IFDs they generate. This is actually one of the darkest sides of an IFD pipeline; you really have to figure out how to manage them on disk. There used to be other benefits from RIB/IFD-style pipelines, since originally these files were designed to be editable by humans or scripts, includable into one another, etc. Those practices are mostly obsolete now, not to mention technically challenging (practically impossible?) for IFDs. 3. Obviously these opinions come after considering the main issue: Houdini/Engine rendering costs money; Mantra (IFD rendering) is usually free.
  10. Perhaps the easiest way is to use the Deformation Wrangle SOP: http://www.sidefx.com/docs/houdini/nodes/sop/deformationwrangle
  11. http://www.awn.com/news/deluxe-names-craig-zerouni-head-technology-vfx
  12. We've rendered an entire show in Clarisse on our farm. I don't think it's possible to split a single frame between machines. As for comparisons, Redshift is obviously a lot faster in the most common cases, although Clarisse will probably beat it on heavily instanced and textured scenes. Think close-ups versus totals. We haven't pushed Clarisse to its limits, though. Octane is out of the discussion for me, as it lacks too many production features (including support).
  13. I would say this had something to do with scheduler optimization (Compiled SOPs and such). Stripping away problematic things (stamping) allows the core functionality (copying/instancing) to be optimized. If the usage statistics are something like 1:10 in favour of the latter, it's a good reason to split them apart. Hmm, methinks at least...
  14. Nope, but it's a nice idea in fact. You can RFE it on the SESI site: https://www.sidefx.com/bugs/submit/
  15. I really hope they won't rewrite the viewport again... last time it took a couple of years to stabilize. Fortunately OpenGL 4.3+ has all (most?) of the features needed to work at the same speed as Vulkan.
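
For post 4, a minimal sketch of the described parameter changes via Python (hou). The material path and the parm names "maxdist" and "back" are assumptions for illustration, not taken from the post; check the actual parameter names on your Occlusion VOP.

    import hou

    # Hypothetical path and parm names -- adjust to your own material network.
    occ = hou.node("/mat/interior_material/occlusion1")
    if occ is not None:
        occ.parm("maxdist").set(5.0)  # was -1 (unlimited); use roughly the interior size
        occ.parm("back").set(2.0)     # the "background set to 2.0" suggestion from the post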
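For post 7, a small hou sketch of where the secondary-ray variance controls sit on a Mantra ROP. The ROP path is an assumption, and the property names (vm_minraysamples, vm_maxraysamples, vm_variance) should be verified against your Houdini version.

    import hou

    rop = hou.node("/out/mantra1")  # hypothetical ROP path
    if rop is not None:
        # Adaptive sampling applies to secondary (indirect) rays only, per the post.
        rop.parm("vm_minraysamples").set(1)   # Min Ray Samples
        rop.parm("vm_maxraysamples").set(9)   # Max Ray Samples
        rop.parm("vm_variance").set(0.005)    # Noise Level (variance threshold)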
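For post 8, a hedged sketch of the _NOSHAD naming trick. Whether the wildcard goes on a light's shadow mask or an object's light mask depends on how you link; the node path and the choice of parameter here are assumptions.

    import hou

    # Hypothetical light path: stop any object named *_NOSHAD from casting shadows.
    light = hou.node("/obj/key_light")
    if light is not None:
        light.parm("shadowmask").set("* ^*_NOSHAD")

The same "everything except the postfix" pattern is what keeps the bundle and per-ROP overrides described in the post manageable.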
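For post 9, a minimal sketch of the "filter IFDs with Python" idea, in the style of mantra's -P filter scripts. The filterInstance hook and the mantra module come from that filtering mechanism; the shader string is a placeholder, and the exact property names should be checked against the IFD filtering docs.

    # Hypothetical filter script, run as: mantra -P swap_shaders.py -f scene.ifd
    import mantra

    def filterInstance():
        # Called once per object instance as the IFD streams through mantra.
        # Replace every surface shader with a flat placeholder shader to get a mask pass
        # instead of the beauty pass, without touching the HIP file or the IFD on disk.
        mantra.setproperty("object:surface", "constant")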