pbd

Members · 14 posts · pbd last won the day on November 14 2015

Personal Information
  • Name: Paolo
  • Location: Tokyo

Recent Profile Visitors

The recent visitors block is disabled and is not being shown to other users.

pbd's Achievements

Newbie

Newbie (1/14)

4

Reputation

  1. Hello there Houdini masters, I'd like to share some basic videos that illustrate "multiverse for Houdini". In case you don't know what Multiverse is:
     http://multi-verse.io
     http://multi-verse.io/plugins/houdini -- it's free!
     http://multi-verse.io/plugins/maya
     So, back to the videos, here is what I am showing:
     1. mfh_UI.mov -- https://www.dropbox.com/s/0brhabk2u9wzsxp/mfh_UI.mov?dl=0
     Here I am just showing the additional nodes and the UI extension to the Houdini Alembic tools (basically the "revision" parameter).
     2. mfh_example -- https://www.dropbox.com/s/xddvi7sd66x3xuq/mfh_example.mov?dl=0
     Here you can see a simple example of storing multiple revisions in the same ABC file. You can also see that things work as expected when rendering with Mantra (with motion blur), and that you can swap revisions pretty much interactively.
     3. houdini_velocityblur_varying_topology -- https://www.dropbox.com/s/z0k8tpqdfgh49nh/houdini_velocityblur_varying_topology.mov?dl=0
     I think the file name says it all: this is a simple splash sim (output with the regular Alembic ROP, since this was done before multiverse for Houdini existed) with velocity blur. We show that this cache contains velocity information, save it to ABC, and in the next step bring it into Maya and render it with Maya + 3Delight.
     4. mfm_velocity_blur_from_houdini -- https://www.dropbox.com/s/8x7iv4izp77q6y7/mfm_velocity_blur_from_houdini.mov?dl=0
     Here we read the ABC cache generated in the previous video with multiverse for Maya. We render it in Maya with 3Delight and match the velocity motion blur of Mantra; because the units are different, we also scale the velocity via a custom attribute we added to 3Delight for Maya.
     That's it. Any questions, mail me directly or mail support@j-cube.jp
     May the od|force be with you.
     Paolo
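     The unit-driven velocity scaling mentioned in the last video can be sketched in a few lines. This is an illustrative sketch only, not Multiverse or 3Delight code; the helper names are made up, and the unit values are the common defaults (Houdini working in meters, Maya in centimeters):

```python
# Hedged sketch: rescaling the Alembic "v" attribute so velocity motion
# blur matches when a cache moves between packages with different world
# units. Unit values are typical defaults, not read from any scene.
HOUDINI_UNIT_IN_METERS = 1.0   # 1 Houdini unit = 1 m (common convention)
MAYA_UNIT_IN_METERS = 0.01     # 1 Maya unit = 1 cm (Maya default)

def velocity_scale(src_unit_m, dst_unit_m):
    """Scale factor for velocity vectors moving from src to dst units."""
    return src_unit_m / dst_unit_m

def scale_velocity(v, factor):
    """Apply the unit scale to one velocity vector."""
    return [c * factor for c in v]

factor = velocity_scale(HOUDINI_UNIT_IN_METERS, MAYA_UNIT_IN_METERS)
print(factor)  # 100.0
print(scale_velocity([0.5, -1.0, 2.0], factor))
```

     In practice this factor would be exposed as a custom attribute on the shape (as the video describes) rather than hard-coded.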
  2. The future of 3dsmax is definitely outside of the FX domain and towards architecture and design; Maya will (try to) take the role of FX in the AD product line. If you are proficient at all those 3dsmax plug-ins you have absolutely zero excuses: start spending time with Houdini FX asap!
     You will find yourself deeply immersed in an environment where everything is connected, where there is a strong bond with the geometry (SOPs), and where the powerful VEX context will allow you to control everything. But what is ultimately amazing in Houdini is that things make sense: if you can manage to *unlearn* all the nonsense that comes from black-boxed products with abstract UIs, and put your mind to thinking in a logical and procedural way, then the benefits are unlimited. You will also learn how non-destructive procedures can save your ass in production as well as be reused between jobs. Have fun with it!
  3. I encountered this problem too (on a fracture sequence), and my conclusion is that the Alembic ROP in Houdini is not exporting the data correctly, although I do not think it is a bug so much as a "special case" which is not handled. I have not had the time to report it to SESI yet, but inspecting the ABC exported as HDF5 from Houdini did prove that the data was not correct.
     Edit: actually, using the TimeBlend trick worked and I can force all time samples into the ABC archive. Thanks for the tip! I still think the output of the Alembic ROP should be "fixed" in vanilla Houdini, though.
  4. First of all we should clarify which type of noise you are trying to resolve; it looks like you are speaking of indirect specular (reflections). Now, I don't really know about 3dfs/XSI, but in general 3Delight does the same: if the BSDF is not very reflective, or is dark, it will send fewer rays, just like Mantra does. By raising the min reflection ratio you weight up the samples on all specular BSDFs (indiscriminately); in 3Delight you don't have a global control, so you should raise the max samples on your shader (above the total pixel samples amount per pixel).
     It would also be interesting to know at which pixel samples values you were rendering. I usually render with DOF and motion blur and use high pixel samples values, 16x16 or even more (and it's still damn fast); in such cases I don't have to raise samples locally because I already have enough subsamples within the pixel. Usually, once DOF and motion blur quality is locked, there is very little local sampling to touch. Anyway, it's hard to compare different packages/integrations.
     I personally love both 3Delight and Mantra; for me they are in a league of their own and the others are behind. I am just looking now at how to update the 3Delight integration in Houdini, as we'd like to get the same result we get in Maya. But it is my general belief that Mantra (and VEX) in Houdini will always be unbeatable and incomparable, as it is so well integrated in all contexts. This is one of the main reasons why Maya will always, ever and forever, suck lollipops: no shading language applicable to all contexts, and in general a package done without rendering in mind, which is a disgrace, since rendering, being at the end of the pipe, collects everything (garbage included) and can teach one a lot, for example about scene assembling. Certainly insisting on using mental ray won't help them.
     As per the Jedi stuff... pic.twitter.com/CQW49HYovA
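     The pixel-samples arithmetic above can be made concrete with a toy calculation. These numbers and helper names are purely illustrative, not 3Delight's or Mantra's actual defaults: with 16x16 pixel samples every pixel already carries 256 camera subsamples, so a per-shader max-samples override only adds work once it exceeds that budget.

```python
# Illustrative sampling-budget arithmetic (not engine-specific behavior).
def camera_subsamples(pixel_samples_x, pixel_samples_y):
    """Subsamples traced per pixel from the pixel-sample settings alone."""
    return pixel_samples_x * pixel_samples_y

def effective_specular_samples(subsamples, shader_max_samples):
    """A per-shader max-samples override only matters beyond the
    subsamples the pixel already provides."""
    return max(subsamples, shader_max_samples)

budget = camera_subsamples(16, 16)
print(budget)                                    # 256
print(effective_specular_samples(budget, 64))    # 256: the override is moot
print(effective_specular_samples(budget, 1024))  # 1024: now it adds work
```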
  5. Some very cool ideas in here! Kudos.
  6. About the fabio system: no, I did not know about it, but it does not look like it has its own fast dynamics solver, so we are back at square one. With H14 I think the problem of styling is solved; the only remaining problem is fast dynamics, and when I say fast I am comparing with nHair dynamics.
  7. H-experts, is there anything new in Houdini 14 (or any external plug-in/asset) for hair/long-fur dynamics, or is the wire solver the only choice? We currently use Maya Hair in Maya for both styling & dynamics of long hair, but I'd really like to move everything into Houdini. If there is absolutely nothing available to the community, do you use anything in-house to resolve long hair dynamics quickly? If so, can you point out some relevant bibliography? P
  8. Actually, there is no such thing as a "perfect filter for displacement". It really depends on the displacement texture's "contents": the texture can be a procedural or an actual file texture (ultimately the procedural also gets mipmapped prior to rendering, but it should have its own filtering within the procedural code too). Think about a texture as a "signal" (just imagine a texture is a 2D math function): there is no one filtering to rule all signals. And, as has been said, the 'no filtering' option (which is basically point sampling) is definitely not a good choice.
     The best you can do is:
     - use a 16-bit float depth texture (EXR half) or a 32-bit float EXR/TIFF
     - keep resolution reasonably high
     - edit: if you use a procedural, you might want to check that its functions are properly anti-aliased
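     To see why point sampling ("no filtering") is a poor choice, here is a small standalone sketch, unrelated to any renderer's actual filter code: a high-frequency sine stands in for a detailed displacement signal. Sampling it at only 8 positions aliases badly, while averaging sub-samples across the pixel footprint (a crude box filter) tames the amplitude.

```python
import math

# A high-frequency "texture signal": 37 cycles across [0, 1].
def signal(x):
    return math.sin(2 * math.pi * 37 * x)

def box_filter(x, width, taps=16):
    # Average several sub-samples spread across the filter footprint.
    return sum(signal(x + width * (k / (taps - 1) - 0.5))
               for k in range(taps)) / taps

pixels = 8  # far too few samples for 37 cycles: point sampling aliases
point = [abs(signal(p / pixels)) for p in range(pixels)]
boxed = [abs(box_filter(p / pixels, 1 / pixels)) for p in range(pixels)]

# Point samples swing with full amplitude; filtered ones stay near the mean.
print(max(point) > 10 * max(boxed))
```

     The same reasoning is why a properly anti-aliased procedural should roll off frequencies that exceed its own filter width instead of letting the renderer point-sample them.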
  9. In 3Delight there is no dicing of subdivision surfaces when path tracing (which is nowadays the new default rendering mode of 3Delight). The only exception is when the subdivision surfaces are displaced, in which case they are diced. So it is completely normal that you get much faster results. Could you elaborate under which set of conditions? I believe it is quite the opposite.
  10. The pointreplicate is nice, but the best would be point generation at *render time* via a point procedural. Is there anything out of the box in 12.5 for generating points at render time, or does one still have to rely on 3rd-party / in-house solutions? Anyway, this is a great release. Simply light years ahead of anyone else. P
  11. Sorry to hear that. No. The only shaders that will work with mr are the mental ray ones (go to SHOPs, hit tab, anything in the mental ray sections). No, mainly because mental ray does not have a shading language (their attempt to copy and extend RSL, called "MetaSL", is not production-ready and has no VM compiler, nor can you get stable shaders via C++ compilation as of today). You would need to rewrite them by hand in C++ via the mental ray API (we could help you with this, but do you really need to?). Just use VEX with Mantra or RSL shaders with a RenderMan renderer; once you see their strengths you will ditch mental ray without remorse. P.
  12. The second keyframe was the main catch. Thanks a lot. Thanks also to Michael for the example. P.
  13. Hi there, I have a question regarding the 'wirecapture' node. If you look in the docs you will find:

     U Radius and Lookup
     -------------------
     The two parameter components here are multiplied together to determine the effective capturing radius of the primitives. If the second component (Lookup) contains a channel, then it is evaluated using the current u value of the primitive instead of time. This allows you to specify a varying radius along the u length of the primitive. The length of the channel (i.e. from its first to last key) is scaled to the u length of the primitive. Note that if the lookup curve is a channel reference, then the length of the channel must match that of the referenced one.

     I can't manage to get a varying radius. I do have a channel on 'lookup' (actually I am getting a CHOP channel with chopcf()), so that should be fine. But I have no idea how to specify a varying 'radius': I even tried putting a rand(10), but it does not vary, it's always the same across the curve. Obviously I am missing something here. Hints? p