Everything posted by amm

  1. First you want to give some positive value to targetstiffness, and for something like a rod antenna that can go to 1000 or more. Damping is an attribute related to how stiffness is evaluated through time, once there is some stiffness; a higher damping value gives smoother movement but can take longer to calculate. Damping is usually some small value, 0.01 or so. As you probably know, some values act as multipliers if the attribute is created before the DOP network, and I'm pretty sure targetstiffness on the wire object is a multiplier. That is, if you create a per-point targetstiffness attribute before the DOP network with a value of, say, 500, and the targetstiffness on the wire object is 2, the result is 1000. Another, brute-force approach is to combine the wire solver with a Geometry VOP in the DOP network, using a Blend Solver. In the Geometry VOP you just import the non-simulated P and blend it with P in the DOP network. At a full blend of P, with the velocity multiplied down to zero, the sim simply sticks back to the original geometry. Anyway, that's really brute force, as it introduces another solver (Geometry VOP) into the mix, with a risk of an unstable simulation.
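    For illustration, a minimal point wrangle before the DOP network could create the per-point attribute; a sketch assuming the multiplier behavior described above (the root-to-tip falloff values are just examples):

      // Point wrangle, run before the DOP network.
      // Per-point target stiffness: stiff at the root, softer at the tip.
      // With the wire object's own targetstiffness set to 2, the effective
      // value would be twice this attribute (assuming multiplier behavior).
      float t = float(@ptnum) / float(npoints(0) - 1);
      f@targetstiffness = fit(t, 0.0, 1.0, 1000.0, 100.0);
      f@targetdamping   = 0.01;   // small damping value, as above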
  2. No Skin

    Hello. In the last few months I found some time to rewrite this thing. An HDA with a playable model is available for download on Orbolt. What's new:
    - Real-time playback. For around 15K quads on a quad-core i7 machine, it runs at 40+ FPS. I believe this makes for a competitive speed-to-quality ratio against traditional GPU-powered skin and blend-shape solutions in other apps. This is enabled by reducing the number of serial, one-after-another evaluations; there are also around four sections (legs, spine, arms, head) evaluated independently. Just to illustrate how sensitive it is to evaluation speed: I had to keep all ramp parameters in the static evaluation part, and all operators became more complex as well.
    - A more unified 'capturing' system, mainly based on deformed planes placed between bones, directly in Houdini. The advantages of such a system, in my opinion, are:
    - It is topology independent.
    - Its parametric nature allows precise, exponential or other falloffs impossible to get by manual painting of weights.
    - The ability to completely change the behavior according to a certain angle or another condition.
    - It can be done only in Houdini, since it happened what happened with Softimage and Fabric.
    And the bad points:
    - It is skeleton dependent. While a simple mocap-style FK rig is enough, it has to be a very exact hierarchy structure, naming, and set of bone local orientations. At least for now.
    - For now, a lot of non-intuitive parameters (non-intuitive = based on internal structure).
    - It can be done only in Houdini.
    In short, the thing is still a demonstrator. Anyway, if it can be an inspiration for anyone, I'll be glad to help.
  3. No Skin

    Hello, just wanted to show my long-time summer project. It is a human body deformation system, completely based on Houdini VOPs. So, none of the usual predefined operators here, like skin, blend shape or lattice. Instead, the idea is to allow anything that fits a particular body part. For now, 2D joints are simple, bone-axis-based blending: positions are first moved into the static inverse matrix of the bone, then there's some deformation, then they are moved back to the animated bone. The spine is a mix of a B-spline for the centers of rotation, while rotations are quaternion SLERP. 3D joints are similar to NURBS blends. I used things like the XYZ Distance VOP in the animated part only for the shoulder area, because I wanted a very exact shape there. And so on. In relation to the 'pose space deformation' story, something here is PSD, something is not.
    VOPs are executed one after another, from children to parents. More or less, it interprets the matrices as the structure of a mocap rig, where only the last matrix, somewhere around the torso, holds a full transformation, while everything else is rotation - but this is not a rule. Influences are the result of distances to skeletons, additional NURBS geometry, and groups derived by the first two methods. For getting the matrices, I used a bunch of inline optransform functions, about 80, saved as a detail attribute on a separate object. The reference pose is one cached frame of that object.
    Now the question: why on earth create this thing instead of using common solutions? Beside the obvious reasons, that I always wanted to create something like this, or just got bored of the same solution for decade(s):
    - It's completely independent of topology. No VOP does anything based on point number.
    - Parametric 'weighting' allows much, much more precise falloffs than manual painting.
    - A lot of parameters rely on distances to a bone, or between bones, so it adapts by itself quite well.
    - A custom, per-body-part method gives really easy ways to avoid the traditional drawbacks of linear blend skinning, like 'candy wrapper' collapsing.
    Of course there are negative points:
    - The thing works only in one specific 3D app; I don't see a reliable way to convert it to skinning for export to a game or such.
    - Speed. Not so fast, though much faster than I expected (whatever that means). For now, at around 12K quads on a quad-core i7 machine, it keeps 25 FPS, but only without the operators related to fingers. Otherwise it is around 10-20 (this of course depends on how heavy the operators are). I have to admit I cared only about very basic optimization, just keeping all static calculations separated from the animated ones. My wild guess is that the many back-and-forth transforms of points to the static pose and back cause the main slowdown (classic skinning doesn't need that). Though it's hard to say - more than half of the time shown by the performance monitor belongs to 'unaccounted'. On the positive side, it's enough to run Houdini's PolyReduce just after the File node to get back to a desirable playback speed; a proxy mesh comes just by design.
    The screenshot shows the Houdini network, also a set of NURBS wrappers I used for various purposes, and perhaps the simplest operator, for the third finger bone. I hope I'll share a few generic examples of deformation methods I found interesting in the next few days. Thank you for reading!
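    As a rough VEX sketch of that 'move to static bone space, deform, move back' step (the attribute names here are hypothetical; in the actual setup the matrices come from the cached detail attributes mentioned above):

      // Point wrangle sketch: deform in the bone's rest space.
      // rest_xform / anim_xform are hypothetical detail attributes holding
      // the static and animated bone matrices (second input: matrix object).
      matrix rest = detail(1, "rest_xform");
      matrix anim = detail(1, "anim_xform");

      vector p = @P * invert(rest);   // into static bone-local space
      p.x *= chf("bulge");            // some deformation, e.g. a simple bulge
      @P = p * anim;                  // back out with the animated bone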
  4. Local space does work here - each pair of boots is a separate object, and the noise pattern follows the transformation. In this example, "current" space is camera space.
  5. Screen space, basically. The name was introduced by OpenGL or something, as the normalized position of the mouse pointer on screen and such. One corner is 0 0 while the diagonal corner is 1 1. If you're doing something with SHOP shaders, you'll want to use the other options as well. P in SHOP is in camera space, so if you want some noise pattern in world space instead, you'll convert P with this thing.
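    In VEX terms, the conversion mentioned above is a one-liner; a sketch for a shading context (the noise call is just an example):

      // SHOP/material sketch: P arrives in "current" (camera) space,
      // so transform it to world space before feeding a noise.
      vector pw = ptransform("space:current", "space:world", P);
      float  n  = noise(pw * 2.0);   // the pattern now sticks in world space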
  6. Fusion vs Nuke

    Natron has support for OpenFX and a lot of other 'open' things; for example, ffmpeg allows it to write Apple ProRes on Windows. If I remember correctly, there were some issues exactly with Lenscare on OpenFX, but these are already recognized by the developers. Generally, one cannot expect everything to work smoothly when such a cross-platform thing is used, anywhere. Natron is free, they provide some example projects, and the interface looks like a copy of Nuke's. Download link is here.
  7. By the way, I tried what I suggested, and... the really problematic part seems to be building the per-point orientation; somehow it is always arbitrary. So at the end of the day, three stages are the minimum to build a curve-based interpolation as a robust method, just like you did, I guess. If I'm correct, there's something for preserving the curve segments' length in your method, like Maya's curve 'lock length'. About ready-to-use solvers: a while ago I played with RBD constraints and Bullet as a replacement for the H wire solver. It showed great self-collisions, but the entire sim was a bit too wild, more like ropes - the 'loose' SDF-based collision in the Wire solver looked more realistic. H 'connect adjacent pieces' wasn't enough for building the relations (it takes nearby points in 3D space), so I had to build a replacement for the 'connect...' thing. Other than that, VDB Advect or FEM soft body come to mind; however, with both there's too long a chain of back-and-forth conversions (for my taste) to use as a rigging tool.
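    For what it's worth, a naive per-point frame in VEX shows where that arbitrariness comes from; the up vector below is a pure guess and flips wherever the tangent approaches it (a sketch of the problem, not a fix):

      // Point wrangle on a curve: naive tangent frame, prone to flipping.
      int n = npoints(0);
      vector tan = normalize(point(0, "P", min(@ptnum + 1, n - 1))
                           - point(0, "P", max(@ptnum - 1, 0)));
      vector up   = {0, 1, 0};                  // arbitrary choice
      vector side = normalize(cross(up, tan));  // degenerates when tan ~ up
      v@N  = tan;
      v@up = cross(tan, side);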
  8. If both stages are given, for something like that it should be possible to get the angle by comparing the normals, together with a line between the two stages. These two, line and angle, should be enough to calculate the center of rotation using some triangle formula (of a right triangle). The cross product of the normals (at the first and last stage) should be the rotation axis. The entire thing will rotate inward, though, unless you lower the ambition to less than 180 degrees between stages. Not in front of Houdini, but I'd believe it will work. Multiple stages would be interesting to get by some SLERP, though that's a bit too much for me to visualize right now...
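    A sketch of that idea in VEX, assuming the two stage normals and the computed center are stored as (hypothetical) point attributes:

      // Rotation axis and angle from the two stage normals (sketch).
      vector n1 = normalize(v@rest_N);   // hypothetical stored normals
      vector n2 = normalize(v@goal_N);
      float  angle = acos(clamp(dot(n1, n2), -1.0, 1.0));
      vector axis  = normalize(cross(n1, n2));
      // Partial rotation, blended by a 0..1 parameter:
      float t = chf("blend");
      vector4 q = quaternion(angle * t, axis);
      // v@pivot = center of rotation, found via the triangle formula above.
      @P = qrotate(q, @P - v@pivot) + v@pivot;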
  9. Fusion Studio

    Well, people have usually been going in the opposite direction in the last ten years or so, toward After Effects or Nuke, depending on what they are doing. It should not be hard to get into the basics; however, in my opinion it's a really old-fashioned app, full of specific solutions from the 90s. Starting from numerical inputs where sliders are available in some places but not others, a somehow unusual choice of blending modes, different behaviors of node copy-paste, and so on. Say you want to re-create something like a (not built-in) light wrap effect with nodes: that would be way easier and more straightforward in the Blender compositor. Generally, Natron is much more Nuke-like, and Natron also provides controls where you expect to find them, at least for my taste. Nodal or not, I think After Effects is a way more unified app, making it easy to just override the disadvantages of layers in many cases. I'd say, if there is a particular advantage of Fusion, use it just for that. However, a complete switch is hard to imagine; actually, it sounds impossible.
  10. imported UVs

    Generally, if they are on vertices, keep them there. UV as a vertex attribute makes it possible to have "breaks" between UV islands even on connected polygons, hard edges in the case of normals, and "per polygon" vertex color (vertex color in the case of FBX). Once they are promoted to points, you get UVs connected all around, say between the first and last UV edge of a cylindrical UV map, and so on. That's not what you want. In other words, promoting from vertex to point makes sense only if each UV island edge corresponds to a polygon boundary edge. Why that difference happens, I don't know; my wild guess is that Houdini performs some kind of optimization on import - if it is possible to promote UVs or normals from vertices to points harmlessly, H will do that, otherwise it won't (I could be wrong about the 'who is doing that' part).
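    One way to see where promotion would not be harmless is to flag points whose vertices disagree on uv; a small wrangle sketch (assuming uv is a vertex attribute):

      // Point wrangle sketch: mark points sitting on a UV seam.
      // If any two vertices of a point carry different uv values,
      // promoting uv to points would weld the seam there.
      int verts[] = pointvertices(0, @ptnum);
      vector uv0 = vertex(0, "uv", verts[0]);
      i@uv_seam = 0;
      foreach (int v; verts)
          if (distance(vertex(0, "uv", v), uv0) > 1e-6)
              i@uv_seam = 1;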
  11. OBJ transform to SOP lvl

    Wouldn't hurt. By the way, does anyone have any experience with SideFX GoZ for Houdini?
  12. OBJ transform to SOP lvl

    Not all the time. If you're using the exporter from the File > Export menu, then yes, positions are baked into global space before exporting, in the case of an OBJ file. In the case of FBX, it writes both global and local. Same with Maya. However, if you write something out using an ICE Cache On File node, it's the local position. If you take the Houdini File SOP as the equivalent of the ICE Cache On File node, it should be clear what happens. SOP VOP deformation, just like an ICE tree, is applied in local (object) space.
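    On the Houdini side, the object-level transform can be baked into P at SOP level with a small point wrangle; a sketch (the object path is an example):

      // Point wrangle sketch: apply an object's world transform to points.
      // "/obj/geo1" is an example path; replace with the actual object.
      matrix m = optransform("/obj/geo1");
      @P *= m;
      v@N *= matrix3(m);   // rotate normals too, if present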
  13. OBJ transform to SOP lvl

    As far as I know, the OBJ format does not store any transforms at the object level; there are point positions in global space and that's it. On the Softimage - Maya - Houdini route the default up axis is the same Y, so the result should be the same, as you already noticed. Most likely, ZBrush did something according to its preferences or something else; these ZBrush arbitrary changes are somehow traditional. Just for info, another app with similar habits is Blender (because of Z up). If adjusting the ZBrush preferences doesn't help, it should be enough to use a Transform SOP - that's a relatively cheap, all-at-once deformation. If Houdini is able to display the mesh, it should be able to evaluate a Transform SOP, too.
  14. Space Suit

    Hello. I started with this around the H16 release. Basically I wanted to explore to what level I could use procedural modeling when it comes to characters. So, the "non-procedural" part here belongs to another app, namely Maya, where I created the base body model, rig and posing - while the Houdini part is hair of all sorts (hair, eyelashes, eyebrows...), and also a lot of the suit. A detailed map of what exactly belongs to which app is here. Let's say that the 'harness system' is what I consider the most successful part. Later I started with Mantra renders, which turned into a kind of addiction - here are a few of the around hundred renders of this thing I did in Mantra.
  15. Distance from center in shops

    Well, then something else went wrong. In my limited experience, it works...
  16. Distance from center in shops

    Should be something like in the pic. That's SHOP; "current" space is camera.
  17. So here's an attempt. First of all, this is not a universal skin shader, an exact replica or anything like that; I just tried to put some typical 'contemporary' methods into a network. It is a mix of PBR Diffuse, PBR Single Scatter (introduced in 16.5), and two PBR Reflections. It should show what is possible to do; feel free to experiment with the nodes inside the shader network. There's just one subsurface scattering shader in the mix. Houdini's PBR Single Scatter has all attributes map-able, like scattering distance or attenuation color (while in old Mental Ray's Fast Skin, the blurring radius is not) - things like exaggerated back scatter around the ears or variances in attenuation color are performed by a simple trick: point color is used as a mask for modulation. One SSS shader should be much faster to render than three, obviously. So at the end of the day, the only typical Mental Ray Fast Skin feature kept is screen blending of layers. I added switches between screen blend and plain additive. Important parameters:
    - Diffuse and SSS tint: that's the 'modern' method of multiplying the diffuse and SSS texture by complementary colors, diffuse in light blue, SSS in orange. Overall it stays nearly the same as the original texture color, while the complementary colors are there to get a stronger diffuse-SSS difference.
    - Diffuse and SSS power: actually a gamma value; 1 is the original, more than 1 gives an exaggerated darker color.
    - Diffuse and SSS weights: for more of an 'old paintings' style, feel free to raise the SSS weight.
    - Screen blending, Diffuse vs SSS, Reflections vs Diffuse and SSS: feel free to disable them just to see what happens. A nice side effect of screen blend is clamping of the maximum direct lighting (inside the shader I set this to 1.25), making it easier for Mantra to sample.
    - Two reflections: the first is wide and acts more like additional diffuse shading; the second is sharp, intentionally disabled at edges. By some convention, the shading model is set to Phong, as Phong does not fade wide reflections (contrary to GGX) - so Phong somehow fits better here. Any reflection parameter is a subject for tweaking, except maybe one: in the case of skin it's always a blueish tone. AFAIK that's a natural effect of the exaggerated complementary reflection color in a scattering medium, something not automatically set by the layered shading used in renderers like Mantra.
    Regarding the wikihuman files, I reduced the resolution of the bitmaps to make a smaller download, and I used only three maps: diffuse color, specular color, and the main displacement. Get the files (around 30 MB) there.
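    For reference, the screen blend and the 'power'/tint tricks described above boil down to a few lines of VEX; a sketch of the math only, with stand-in values, not the actual shader network:

      // Sketch of the mixing tricks described above (values are examples).
      vector screen(vector a; vector b)
      {
          return 1.0 - (1.0 - a) * (1.0 - b);   // screen blend, avoids 'burning'
      }

      vector tex        = {0.60, 0.45, 0.40};   // stand-in for the skin texture
      vector diff_tint  = {0.85, 0.90, 1.00};   // light blue
      vector sss_tint   = {1.00, 0.75, 0.55};   // orange
      float  diff_power = 1.2;                  // gamma-like 'power', 1 = original
      float  sss_power  = 1.0;
      vector diff = pow(tex, diff_power) * diff_tint;
      vector sss  = pow(tex, sss_power)  * sss_tint;
      vector mixc = screen(diff, sss);          // vs plain additive: diff + sss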
  18. I'll try to post a Houdini scene this weekend, based on the wikihuman files. Regarding Lee Perry's head, I'd say, enough is enough :).
  19. Hello McNistor, just a few thoughts: the 'edge factor' in Mental Ray's Fast Skin (and the default reflection falloff in all variants of MIA) could be a kind of very simplified Fresnel effect; precisely, 5 is the exponent of the dot product of the shading normal and the negated eye direction. In H, that could be a 'normal falloff' node with the mentioned exponent of 5 (or a bit more or less), plugged into a 'fit range' node, where 'destination min' and 'destination max' are the facing and edge weights, all of that used to fade the reflection. That's simplified, but it gives cleaner control over reflection weights than 'true' Fresnel. With an unmodified 'fit range', it's zero at facing and one at the edge. For the subsurface shaders introduced in H16 (honestly I don't know what is included in the Principled thing), I'd definitely go to 16.5. The ones in 16.0 utilize indirect rays (while in 16.5 this was switched back to direct) - so, I'm afraid, everything you'd get with the 16.0 versions could be a long forum thread about long render times, fireflies and such. Unfortunately, there is no control over reflection color and facing-edge weighting in the H Skin shader, which makes it close to unusable. Why that is, I have no idea; the PBR story should not be an excuse for such a brutal approach, IMO. IOR/Fresnel is present, but without precise facing/edge weighting control. Regarding Mental Ray's Fast Skin, the 'subsurface' is actually based on wild approximations - it's diffuse shading baked to vertices and blurred later, that's why there's a need for three layers. Anyway, the author had a great artistic talent for getting a good look out of all that. One important 'artistic' element is under the 'advanced' tab: it is screen blending of layers, not plain additive, which helps to avoid 'burning' with strong lighting. That's it, in short. In the last few months I've played a lot with this subject, and basically I was able to get what I wanted. However, the solution probably is not suitable for sharing all around. Anyway, it's possible to re-create a 'Master Zap style' of shader component mixing in H. If anyone wants this, I'll be happy to help.
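    In VEX, the falloff chain described above is roughly the following (a sketch; the exact curve of the 'normal falloff' node may differ, and the weights are example values):

      // Facing/edge reflection weight: zero at facing, one at the edge.
      float facing_weight = 0.05;                 // example value
      float edge_weight   = 1.0;                  // example value
      vector nn = normalize(N);
      vector ii = normalize(-I);                  // negated eye direction
      float  f  = pow(1.0 - abs(dot(nn, ii)), 5.0);
      float  w  = fit(f, 0.0, 1.0, facing_weight, edge_weight);
      // w then scales the reflection contribution.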
  20. Help With Speeding Up Render

    In the case of glass, there's an old trick utilized in some custom Mental Ray shaders: distribute the sampling to reflection *or* refraction, whichever is more prominent for a certain pixel. It worked well ten years ago, on the machines of those times. I don't know what happens there; anyway, the result is really bad and unexpected. I also tried your scene tonight.
  21. Leaf Shader - How to get realistic translucency?

    Perhaps something like this. It is PBR Diffuse in translucency mode, with 'shade both sides as front' disabled. I think it should go as an additive layer on top of some usual shader, as it seems to shade only the side opposite the light. On its own it's not enough.
  22. Smoothing Lighting in Material

    How about some light with soft shadows, that is, anything other than a point or distant light. Another option that comes to mind is a blend based on luminance, instead of a switch: say a 'fit' node with luminance as input and source max set to something like 0.1, with the fit node's output as the bias of a color mix. A third option could be some subsurface shader (could be slower...) - some SSS shaders are based exactly on blurred diffuse shading, though I'm not sure that applies to the ones in Houdini. From my understanding, dealing with normals won't affect the terminator edge (that's the old-school name for the edge between illuminated and non-illuminated areas). In the case of lights with sharp shading and shadows, that's always a sharp edge; shading based on the normal can smoothly fade the shading down to zero at the terminator edge - but not move it.
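    The second option, as a VEX sketch of that fit-into-mix chain (stand-in colors; in the network this is a Fit VOP driving the bias of a Color Mix VOP):

      // Sketch: blend by luminance instead of a hard switch.
      vector shaded   = {0.08, 0.06, 0.05};   // stand-in for the diffuse result
      vector dark_clr = {0.02, 0.02, 0.03};   // color used below the threshold
      vector lit_clr  = shaded;               // color used when lit
      float  lum  = luminance(shaded);
      float  bias = fit(lum, 0.0, 0.1, 0.0, 1.0);   // source max ~0.1, as above
      vector outc = lerp(dark_clr, lit_clr, bias);  // smooth blend, no hard edge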
  23. Space Suit

    Here is a hiplc with the complete ground and the ground shader; I've cleaned the scene a bit. For distributing the rocks, I think there are better methods described in tutorials around; I used a sort of simple repeating by modulo to get multiple instances onto the positions of scattered points. The shader is somewhat special: I tried to get as bright a 'dry' look as possible, together with shadows from the sun as dark as possible, typical of NASA photos of Mars. So finally there's a mix of two PBR Diffuse VOPs, where one has a faded front face to act as a 'dry reflection'. The mix is a bit of a modified screen blend of these two - instead of 'classic screen blending' where colors are subtracted from RGB 1, there's an arbitrary value, I called it 'pedestal'. The effect is exaggerated brightness compared to standard diffuse shading, while the color still can't go over the 'pedestal' value, and bright parts are gradually blended toward that max color, here some bright orange. Normally, someone would do such things in post; doing it directly in the render is a bit of a risky business - as it is for now, I'm not sure whether it responds correctly to possible additional indirect light. Anyway, it seems that Mantra is nicely resistant to such methods.
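    To make the 'pedestal' idea concrete, here's a minimal sketch of the math as I understand it from the description above (my reconstruction, not the shader from the hiplc; input colors are stand-ins):

      // Screen blend against an arbitrary ceiling instead of RGB 1 (sketch).
      vector screen_pedestal(vector a; vector b; vector pedestal)
      {
          // Classic screen is pedestal = {1,1,1}; colors can never exceed
          // the pedestal, and bright parts blend gradually toward it.
          return pedestal - (pedestal - a) * (pedestal - b) / pedestal;
      }

      vector ped    = {1.1, 0.9, 0.7};           // bright orange ceiling, example
      vector diff_a = {0.5, 0.35, 0.25};         // stand-in: standard diffuse
      vector diff_b = {0.3, 0.20, 0.15};         // stand-in: faded 'dry reflection'
      vector outc   = screen_pedestal(diff_a, diff_b, ped);

    And the modulo repeat for the rock instances could be as small as this (a sketch; the attribute name depends on the instancing setup):

      // Point wrangle on the scattered points: cycle through rock variants.
      int nvariants = chi("variants");                   // e.g. 10 instances
      i@variant = @ptnum % nvariants;                    // 0..nvariants-1, repeating
      s@instance = sprintf("/obj/rock_%d", i@variant);   // example path/attribute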
  24. Space Suit

    Thank you. Yeah, the ground is also Houdini, though there's nothing special there, IMO - except the instanced rocks; they are 'manual' work, as it was just easier to get the desired shape while keeping the resolution as low as possible, for around ten initial instances. Perhaps the only special part is the small 'blending area' around each big rock, created by VDB combining, smoothing and deleting unwanted polygons. I believe that 'blending area' helped the big rocks fit the ground plane better, visually. We'll see if there is a way toward a more generic mini-braid generation. For now it's based on (my own) system which generates really too many guides in such a case. It also has probably unnecessary loops for syncing the along-length distribution with arbitrary braid thickness, and so on. In short, while it works, it's too slow and messy to go public.
  25. Bird rig

    Thanks. Yeah, that's something impossible to imagine for my Softimage - Maya mind. It seems they made me a Cylon a long time ago...