
amm last won the day on August 11 2018

amm had the most liked content!

Community Reputation

84 Excellent

Personal Information

  • Location
    Zagreb, Croatia

  1. Mesh Blend

    Hello Noel, it's here on Orbolt. There is a small description of its limitations as well: it does nothing related to topology, such as welding vertices. I modified it later, but there's nothing significantly new.
  2. I believe (could be wrong) the important word in the docs is "volume": ''Simulate subsurface scattering by path tracing through it as a volume''. In other words, the tricky part of any subsurface shader is getting the distance from the hit point to, let's say, the exit point; that's where the usual ray-traced refraction models are not at their best, especially for small parts of the human body like nostrils or ears. So Mantra's path-traced SSS tries to bypass the problem by treating the object as a volume, a much more realistic medium. Regarding Pixar's solutions, I have to admit I'm not a big fan of that cuisine. In the past I had the chance to try a good number of RenderMan shaders, hair and skin (with 3Delight). My impression was always of really fragile constructions, and of very free interpretations of 'realistic' models. After all, they were courageous enough to put the word 'photorealistic' in the name of the renderer when literally nothing about it was photorealistic. Personally, I'd look for solutions already approved by a wider audience of freelancers and studios, like V-Ray, Cycles or Arnold.
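The "path tracing through it as a volume" idea can be sketched in a few lines: instead of one long refracted ray, the distance between scattering events is sampled from the medium's extinction. This is a minimal illustration of that sampling step, assuming a homogeneous medium with a hypothetical extinction coefficient `sigma_t` (not Mantra's actual implementation):

```python
import math
import random

def sample_free_path(sigma_t, rng=random.random):
    """Sample the distance to the next scattering event in a
    homogeneous medium, following Beer-Lambert transmittance.
    Probability density: p(t) = sigma_t * exp(-sigma_t * t)."""
    return -math.log(1.0 - rng()) / sigma_t

# Denser media (larger sigma_t) scatter sooner; thin features like
# ears and nostrils brighten because light exits after a few short
# steps instead of one long refracted path.
random.seed(0)
steps = [sample_free_path(8.0) for _ in range(10000)]
mean_step = sum(steps) / len(steps)  # expected value is 1 / sigma_t
```

The mean sampled step converges to `1 / sigma_t`, which is the usual sanity check for this kind of distance sampling.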
  3. The speed in my shader mix (well, relative speed) comes from single scattering; that's all it is. Of course that's always faster than multiple scattering. Regarding the ray-tracing vs. path-tracing difference in the Mantra subsurface shader, according to the docs the difference is technical: which method is used to get the scattering effect. About shadows, there's nothing special in that network. Theoretically one could add colored shadows instead of plain opaque ones, but I'm afraid that could create yet another level of imbalance: perhaps you'll get what you want in one area and something undesired in another. Personally, I'd try to go without any diffuse, using only SSS and reflections; however, that (usually) makes it harder to get displacements to read, and (possibly) takes longer to render (to remove the noise).
  4. Hello, sorry for the late answer. The thing in img6 is a slightly modified screen blend, widely used in compositing software such as Photoshop. This one is modified to allow the result to go over RGB 1; here the maximum is the 'pedestal' value. Of course this is a completely arbitrary approach, nothing physically plausible, but as far as I know the default 'energy conservation' in Mantra shaders doesn't look much smarter; I think it clamps everything to the maximum of the diffuse result, or something like that (at least as of H16, I don't know what happened later). Screen blending as a mixing mode is inspired by the old, quick Mental Ray shaders from around 2006, the well-known Fast SSS. What screen blending does in practice is treat the supplied 'pedestal' as a maximum: even with an insanely strong light, say lighting from an extraterrestrial ship, it won't burn into yellow or red, it will stay exactly at the 'pedestal'. That's the only smart part here, I'd say. The color used for SSS (the thing that seems to come from nowhere) is just a variation of the skin color, slightly more saturated, so nothing important there; I just wanted a bit of extra control. The rest, like putting some results into Ce, is a somewhat desperate attempt to skip the 'not so mixable' F. Regarding the SSS functions themselves, phase and so on, what's provided by the Mantra shader (probably pbrsss) is what you have here; all the rest is mixing.
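The screen blend with a pedestal can be written out explicitly. The exact formula in the network isn't given here, so this is a plausible reconstruction: standard screen is `1 - (1 - a)(1 - b)`, which saturates at 1; rescaling by a pedestal `p` makes it saturate at `p` instead.

```python
def screen(a, b):
    """Standard screen blend: the result never exceeds 1."""
    return 1.0 - (1.0 - a) * (1.0 - b)

def screen_pedestal(a, b, pedestal):
    """Screen blend rescaled so the result saturates at `pedestal`
    instead of 1 (a hypothetical reconstruction, not the exact HDA
    math). Inputs are assumed to lie in [0, pedestal]."""
    p = pedestal
    return p - (p - a) * (p - b) / p

# However strong the contributions get, the blend stays at or below
# the pedestal instead of burning out:
assert screen_pedestal(1.4, 1.4, 1.5) <= 1.5
```

With `pedestal = 1` the function reduces to the ordinary screen blend, which is a quick way to check the rescaling.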
  5. First you want to give targetstiffness a positive value, which could go to 1000 or more for something like a rod antenna. Damping is an attribute related to the evaluation of stiffness over time, once (and if) there is some stiffness. A higher damping value gives smoother movement but can take longer to calculate; damping is usually some small value, 0.01 or so. As you probably know, some values act as multipliers when the attribute is created before the DOP network. I'm pretty sure targetstiffness on the Wire Object is a multiplier: if you create a per-point targetstiffness attribute before the DOP network with a value of, say, 500, and the targetstiffness on the Wire Object is 2, the result is 1000. Another, brute-force approach is to combine the Wire Solver with a Geometry VOP in the DOP network, using a Blend Solver. In the Geometry VOP you just import the non-simulated P and blend it with the P in the DOP network. At a full blend of P, and by multiplying the velocity down to zero, your sim will just stick back to the original geometry. Anyway, that's really brute force, as it introduces another solver (the Geometry VOP) into the mix, with the risk of an unstable simulation.
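The brute-force blend described above boils down to a per-point lerp plus a velocity scale. A minimal sketch in plain Python (names are illustrative, not actual Houdini API calls):

```python
def blend_to_rest(sim_P, rest_P, sim_v, blend):
    """Blend a simulated position back toward the rest (non-simulated)
    geometry. At blend = 1 the point sticks to rest_P exactly, and
    scaling velocity toward zero keeps the solver from fighting it.
    Positions and velocity are (x, y, z) tuples; blend is in [0, 1]."""
    new_P = tuple(s + (r - s) * blend for s, r in zip(sim_P, rest_P))
    new_v = tuple(c * (1.0 - blend) for c in sim_v)
    return new_P, new_v

# At full blend the point lands exactly on the rest position,
# with zero velocity.
P, v = blend_to_rest((1.0, 2.0, 0.0), (0.0, 0.0, 0.0), (3.0, 0.0, 0.0), 1.0)
```

In the DOP network the same thing happens per point, each frame, inside the Geometry VOP branch of the Blend Solver.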
  6. No Skin

    Hello. In the last few months I found some time to rewrite this thing. An HDA with a playable model is available for download on Orbolt. What's new:
    - Real-time playback. For around 15K quads on a quad-core i7 machine, I get 40+ FPS. I believe this makes for a competitive speed-to-quality ratio against traditional GPU-powered skin/blend-shape solutions in other apps. This is enabled by reducing the number of serial, one-after-another evaluations; also, there are around four sections (legs, spine, arms, head) evaluated independently. Just to illustrate how sensitive the evaluation speed is: I had to keep all ramp parameters in the static evaluation part, and all operators became more complex as well.
    - A more unified 'capturing' system, mainly based on deformed planes placed between bones, directly in Houdini. The advantages of such a system, in my opinion:
    - Topology independent.
    - Its parametric nature allows precise, exponential (or whatever) falloffs, impossible to get by manual painting of weights.
    - The ability to completely change the behavior according to a certain angle or another condition.
    - It can only be done in Houdini, since it happened what happened with Softimage and Fabric.
    And the bad points:
    - Skeleton dependent. While a simple mocap-style FK rig is enough, it has to be a very exact hierarchy structure, naming, and set of bone local orientations. At least for now.
    - For now, a lot of unintuitive parameters (unintuitive = based on internal structure).
    - It can only be done in Houdini.
    In short, the thing is still a demonstrator. Anyway, if it can be an inspiration for anyone, I'll be glad to help.
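The point about parametric falloffs can be illustrated with a toy example. This is not the HDA's actual math, just what an exact exponential falloff over a parametric distance looks like, versus hand-painted weights:

```python
import math

def exp_falloff(dist, radius, sharpness):
    """Capture weight as an exact function of parametric distance
    from a capture plane: 1 at dist = 0, smoothly reaching 0 at
    `radius`. `sharpness` controls how fast it decays (illustrative
    formula, not the actual HDA implementation)."""
    if dist >= radius:
        return 0.0
    t = dist / radius
    return math.exp(-sharpness * t) * (1.0 - t)

# Weights fall off monotonically and hit exactly zero at the radius,
# something that is hard to guarantee with manual weight painting.
weights = [round(exp_falloff(d, 1.0, 4.0), 3) for d in (0.0, 0.25, 0.5, 1.0)]
```

Because the weight is a pure function of distance, changing `sharpness` re-solves the whole falloff instantly, with no repainting.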
  7. Local space does work here: each pair of boots is a separate object, and the noise pattern follows the transformation. In this example, the ''current'' space is camera space.
  8. Screen space, basically. The name was introduced by OpenGL or something, as the normalized position of the mouse pointer on the screen and such: one corner is (0, 0) while the diagonally opposite corner is (1, 1). If you're doing something with SHOP shaders, you'll want to use the other options as well. P in SHOPs is in camera space, so if you want a noise pattern in world space instead, you convert P with this thing.
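Both conventions can be sketched with plain matrices; this assumes a standard 4x4 camera-to-world transform (the kind of matrix Houdini can give you for the camera), not any specific SHOP node:

```python
import numpy as np

def ndc_from_pixel(px, py, width, height):
    """Normalized screen coordinates: one corner is (0, 0),
    the diagonally opposite corner is (1, 1)."""
    return px / width, py / height

def camera_to_world(P_cam, cam_to_world_4x4):
    """Shading P arrives in camera space; multiplying by the camera's
    4x4 camera-to-world matrix gives world-space P, which is what you
    want for a noise pattern that sticks to the world."""
    p = np.append(np.asarray(P_cam, dtype=float), 1.0)
    return (cam_to_world_4x4 @ p)[:3]

# Camera translated 5 units along world Z, no rotation:
M = np.eye(4)
M[2, 3] = 5.0
world_P = camera_to_world((0.0, 0.0, -2.0), M)
```

The center of a 1920x1080 frame maps to (0.5, 0.5), which is the quick way to remember the screen-space convention.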
  9. Fusion vs Nuke

    Natron has support for OpenFX and a lot of other 'open' things; for example, ffmpeg allows it to write Apple ProRes on Windows. If I remember correctly, there were some issues exactly with Lenscare on OpenFX, but these have already been recognized by the developers. Generally one cannot expect everything to work smoothly when such a cross-platform system is involved, anywhere. Natron is free, they provide some example projects, and the interface looks like a copy of Nuke. The download link is here.
  10. By the way, I tried what I suggested, and... the really problematic part seems to be building the per-point orientation; somehow it is always arbitrary. So at the end of the day, three stages are the minimum for building a curve-based interpolation as a robust method, just like you did, I guess. If I'm correct, there's something for preserving the curve segment's length in your method, like Maya's curve 'lock length'. About ready-to-use solvers: a while ago I played with RBD constraints and Bullet as a replacement for the Houdini Wire Solver. It showed great self-collisions, but the entire sim was a bit too wild, more like ropes; the 'loose' SDF-based collision in the Wire Solver looked more realistic. Houdini's Connect Adjacent Pieces wasn't enough for building the relations (it takes nearby points in 3D space), so I had to build a replacement for it. Other than that, VDB Advect or FEM soft bodies come to mind; however, both involve too long a chain of back-and-forth conversions (for my taste) to use as a rigging tool.
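The "always arbitrary per-point orientation" problem is commonly tackled with parallel transport: pick one reference normal at the first point, then carry it along the curve by the minimal rotation between consecutive tangents, so the frame never flips arbitrarily. A minimal numpy sketch of that idea (not the method from the thread):

```python
import numpy as np

def rotate_about(v, axis, angle):
    """Rodrigues rotation of vector v about a unit axis."""
    c, s = np.cos(angle), np.sin(angle)
    return v * c + np.cross(axis, v) * s + axis * np.dot(axis, v) * (1 - c)

def parallel_transport_normals(points, first_normal):
    """Propagate a start normal along a polyline by the minimal
    rotation between consecutive tangents, avoiding the arbitrary
    flips you get from e.g. a fixed up-vector."""
    pts = np.asarray(points, dtype=float)
    tangents = pts[1:] - pts[:-1]
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    normals = [np.asarray(first_normal, dtype=float)]
    for t0, t1 in zip(tangents[:-1], tangents[1:]):
        axis = np.cross(t0, t1)
        norm = np.linalg.norm(axis)
        if norm < 1e-9:            # straight segment: carry the frame over
            normals.append(normals[-1])
            continue
        angle = np.arccos(np.clip(np.dot(t0, t1), -1.0, 1.0))
        normals.append(rotate_about(normals[-1], axis / norm, angle))
    return normals
```

On a 90-degree bend in the XY plane, a normal starting at +Y is carried to -X, which is exactly the minimal rotation between the two tangents.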
  11. If both stages are given, it should be possible to get the angle just by comparing the normals, together with a line between the two stages. These two elements, the line and the angle, should be enough to calculate the center of rotation using some right-triangle formula. The cross product of the normals (at the first and last stage) should be the rotation axis. The entire thing will rotate inward, though, unless you lower the ambition to less than 180 degrees between stages. I'm not in front of Houdini, but I believe it will work. Multiple stages would be interesting to get with some SLERP, while that's a bit too much for me to visualize right now...
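The axis-and-angle part can be checked with a few lines of numpy; this is just the geometry described above, with illustrative names:

```python
import numpy as np

def rotation_between_stages(n0, n1):
    """Axis = cross product of the two stage normals,
    angle = arccos of their dot product (unit normals assumed).
    This breaks down as the stages approach 180 degrees, where the
    cross product vanishes and the axis becomes ill-defined."""
    n0 = np.asarray(n0, dtype=float)
    n1 = np.asarray(n1, dtype=float)
    axis = np.cross(n0, n1)
    angle = np.arccos(np.clip(np.dot(n0, n1), -1.0, 1.0))
    return axis / np.linalg.norm(axis), angle

# Two stage normals 90 degrees apart:
axis, angle = rotation_between_stages((0, 1, 0), (1, 0, 0))
```

The degenerate case at 180 degrees is exactly why the post suggests keeping the stages under that limit.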
  12. Fusion Studio

    Well, people have mostly gone in the opposite direction over the last ten years or so, toward After Effects or Nuke, depending on what they are doing. It shouldn't be hard to get into the basics; however, in my opinion it's a really old-fashioned app, full of idiosyncratic solutions from the 90s: numerical inputs where sliders are available in some places but not others, a somewhat unusual choice of blending modes, inconsistent copy-paste behavior for nodes, and so on. Say you want to re-create something like a (not built-in) light-wrap effect with nodes; that would be way easier and more straightforward in the Blender compositor. Natron, in general, is much more Nuke-like, and it puts controls where you expect to find them, at least for my taste. Nodal or not, I think After Effects is a far more unified app, which makes it easy to work around the disadvantages of layers in many cases. I'd say: if there is a particular advantage of Fusion, use it just for that. A complete switch is hard to imagine; actually, it sounds impossible.
  13. imported UVs

    Generally, if they are on vertices, keep them there. UV as a vertex attribute makes it possible to have "breaks" between UV islands even on connected polygons (hard edges in the case of normals, ''per polygon'' vertex color in the case of FBX). Once they are promoted to points, the UVs become connected all around, say between the first and last UV edge of a cylindrical UV map, and so on; that's not what you want. In other words, promoting from vertex to point only makes sense if each UV island edge corresponds to a polygon boundary edge. Why that difference happens, I don't know; my wild guess is that Houdini performs some kind of optimization on import: if it is possible to promote UVs or normals from vertices to points harmlessly, it will do so, otherwise it won't (I could be wrong about the 'who is doing that' part).
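The "harmless to promote" condition can be stated precisely: vertex UVs can move to points only when every vertex sharing a point carries the same UV. A small sketch over a toy vertex list (plain Python, not the actual Houdini import code):

```python
def safe_to_promote(vertex_uvs):
    """vertex_uvs is a list of (point_number, (u, v)) pairs, one per
    vertex. Promotion from vertex to point is lossless only if all
    vertices wired to the same point agree on the UV value."""
    seen = {}
    for point, uv in vertex_uvs:
        if point in seen and seen[point] != uv:
            return False          # a UV seam crosses this point
        seen[point] = uv
    return True

# Vertices sharing point 1 with matching UVs: promotable.
ok = safe_to_promote([(0, (0.0, 0.0)), (1, (0.5, 0.0)), (1, (0.5, 0.0))])
# The same point carrying two different UVs (an island break): not promotable.
seam = safe_to_promote([(1, (0.5, 0.0)), (1, (0.6, 0.0))])
```

A cylindrical map fails this check exactly along its seam, which matches the behavior described above.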
  14. OBJ transform to SOP lvl

    Wouldn't hurt. By the way, does anyone have experience with SideFX GoZ for Houdini?
  15. OBJ transform to SOP lvl

    Not all the time. If you're using the exporter from the File > Export menu, then yes, positions are baked into global space before exporting, in the case of an OBJ file. FBX writes both global and local; same with Maya. However, if you write something out using an ICE Cache On File node, it's the local position. If you take the Houdini File SOP as the equivalent of the ICE Cache On File node, it should be clear what happens: SOP/VOP deformation, like an ICE tree, is applied in local (object) space.