
Everything posted by amm

  1. Fusion Studio

    Well, people have usually been going in the opposite direction for the last ten years or so, toward After Effects or Nuke, depending on what they are doing. It should not be hard to get into the basics; however, in my opinion it's a really old-fashioned app, full of specific solutions from the 90s. Starting from numerical inputs where you can use sliders in some places but not others, a somewhat unusual choice of blending modes, different behaviors of copy-paste on nodes, and so on. Let's say you want to re-create something like a (not built-in) light wrap effect with nodes: that would be way easier and more straightforward in the Blender compositor. Natron, on the other hand, is much more Nuke-like; also, Natron provides controls where you expect to find them, at least for my taste. Nodal or not, I think After Effects is a way more unified app, making it easy to just work around the disadvantages of layers in many cases. I'd say, if there is a particular advantage of Fusion, use it just for that. However, a complete switch is hard to imagine; actually, it sounds impossible.
  2. imported UVs

    Generally, if they are on vertices, keep them there. UV as a vertex attribute makes it possible to have "breaks" between UV islands even on connected polygons, hard edges in the case of normals, and ''per polygon'' vertex color (vertex color in the case of FBX). Once they are promoted to points, you'll get UVs connected all around, say between the first and last UV edge of a cylindrical UV map, and so on. That's not what you want. In other words, promoting from vertex to point makes sense only if each UV island boundary corresponds to a polygon boundary edge. Why that difference happens, I don't know; my wild guess is that Houdini performs some kind of optimization on import: if it is possible to promote UVs or normals from vertices to points harmlessly, H will do that, otherwise it won't (I could be wrong about the 'who is doing that' part).
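    As a side note, here's a minimal point wrangle sketch (VEX) that flags points whose vertices disagree on UV, i.e. points where promoting to a point attribute would weld a seam; the attribute name "uv" and the tolerance are assumptions:

        int verts[] = pointvertices(0, @ptnum);
        vector first = vertex(0, "uv", verts[0]);
        i@uv_seam = 0;
        foreach (int v; verts) {
            vector uv = vertex(0, "uv", v);
            if (length(uv - first) > 1e-6) {
                i@uv_seam = 1;  // vertices disagree: promoting would destroy this seam
                break;
            }
        }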
  3. OBJ transform to SOP lvl

    Wouldn't hurt. By the way, does anyone have any experience with SideFX GoZ for Houdini?
  4. OBJ transform to SOP lvl

    Not all the time. If you're using the exporter from the File > Export menu, then yes, positions are baked into global space before exporting, in the case of an obj file. In the case of FBX, it writes both global and local. Same with Maya. However, if you write something out using an ICE Cache On File node, it's the local position. If you take the Houdini File SOP as an equivalent of the ICE Cache On File node, it should be clear what happens. SOP/VOP deformation, just like an ICE tree, is applied in local (object) space.
  5. OBJ transform to SOP lvl

    As far as I know, the obj format does not store any transforms at object level; there are point positions in global space and that's it. On the Softimage - Maya - Houdini route, the default up axis is the same (Y), so the result should be the same, as you already noticed. Most likely zBrush did something according to its preferences or something else; these arbitrary zBrush changes are somehow traditional. Just for info, another app with similar habits is Blender (because of Z up). If adjusting the zB preferences doesn't help, it should be enough to use a Transform SOP; that's a relatively cheap, all-at-once deformation. If Houdini is able to display the mesh, it should be able to evaluate a Transform SOP, too.
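    If you'd rather fix it in a wrangle than a Transform SOP, a minimal point wrangle sketch for the usual Z-up to Y-up conversion (assuming that's what zBrush produced) could be:

        // rotate -90 degrees around X: Z-up (x, y, z) becomes Y-up (x, z, -y)
        @P = set(@P.x, @P.z, -@P.y);
        // if the mesh carries normals, rotate them the same way
        @N = set(@N.x, @N.z, -@N.y);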
  6. Space Suit

    Hello. I've started with this around the H16 release. Basically I wanted to explore to which level I'd be able to use procedural modeling when it comes to characters. So, the "non procedural" part here belongs to another app, namely Maya, where I've created the base body model, rig and posing - while the Houdini part is hair of all sorts (hair, eyelashes, eyebrows...), and also a lot of the suit. A detailed map of what exactly belongs to which app is here. Let's say that the 'harness system' is what I consider the most successful part. Later I started with Mantra renders, which turned into a kind of addiction - here are a few of the roughly hundred renders of this thing I did in Mantra.
  7. Distance from center in shops

    Well, then something else went wrong. From my limited experience, it works...
  8. Distance from center in shops

    It should be something like in the pic. That's a SHOP; the ''current'' space is camera space.
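    For reference, a minimal VEX sketch of the idea inside a shading context, where P arrives in ''current'' space: transform it to object space first, then measure the distance from the object's center:

        // P arrives in ''current'' (camera) space; move it to object space
        vector pos = ptransform("space:current", "space:object", P);
        // distance from the object's center (its local origin)
        float dist = length(pos);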
  9. So here's the attempt. First of all, this is not a universal skin shader, an exact replica or anything like that; I just tried to put some typical 'contemporary' methods into a network. It is a mix of PBR Diffuse, PBR Single Scatter (introduced in 16.5), and two PBR Reflections. It should show what is possible to do; feel free to experiment with the nodes inside the shader network. There's just one subsurface scattering shader in the mix. Houdini's PBR Single Scatter has all attributes mappable, like scattering distance or attenuation color (while in old Mental Ray's Fast Skin, the blurring radius is not) - things like exaggerated back scatter around ears or variances in attenuation color are performed by a simple trick: point color is used as a mask for modulation. One SSS shader should be much faster to render than three, obviously. So at the end of the day, the only typical Mental Ray Fast Skin feature left is screen blending of layers. I added switches between screen blend and plain additive. Important parameters:
     - Diffuse and SSS tint: that's the 'modern' method, multiplying the diffuse and SSS texture by complementary colors, diffuse in light blue, SSS in orange. Overall it is nearly the same as the original color of the texture, while the complementary colors are there to get a stronger diffuse - SSS difference.
     - Diffuse and SSS power: actually a gamma value; 1 is original, more than 1 is an exaggerated darker color.
     - Diffuse and SSS weights: for more of an 'old paintings' style, feel free to raise the SSS weight.
     - Screen blending, Diffuse vs SSS, Reflections vs Diffuse and SSS: feel free to disable them just to see what happens. A nice side effect of screen blend is clamping of maximum direct lighting (inside the shader I set this to 1.25), making it easier to sample by Mantra.
     - Two reflections: the first is wide, it acts more like additional diffuse shading. The second is sharp, intentionally disabled at edges. By some convention, the shading model is set to Phong, as Phong does not fade the wide reflections (contrary to GGX) - so somehow Phong fits better here. Any reflection parameter is a subject for tweaking, except maybe just one: in the case of skin, it's always a blueish tone. AFAIK that's a natural effect of exaggerating the complementary reflection color in the case of scattering media, something not automatically set by the layered shading used in renderers like Mantra.
     Regarding the wikihuman files, I've reduced the resolution of the bitmaps to make a smaller download, and I've used only three maps: diffuse color, specular color, and main displacement. Get the files (around 30 Mb) there.
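     For clarity, a minimal VEX sketch of the screen-vs-additive switch described above (variable names are mine, not the actual network's):

        vector diffuse_layer = {0.8, 0.6, 0.5};  // placeholder layer colors
        vector sss_layer     = {0.9, 0.4, 0.3};
        int    use_screen    = 1;
        // screen blend keeps the sum from 'burning': the result never exceeds 1
        vector screen_blend = 1.0 - (1.0 - diffuse_layer) * (1.0 - sss_layer);
        // plain additive, for comparison
        vector additive = diffuse_layer + sss_layer;
        vector mixed = use_screen ? screen_blend : additive;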
  10. I'll try to post the Houdini scene this weekend, based on the wikihuman files. Regarding Lee Perry's head, I'd say, enough is enough :).
  11. Hello McNistor, just a few thoughts: the 'edge factor' in Mental Ray's Fast Skin (and the default reflection falloff in all variants of MIA) could be seen as a kind of very simplified Fresnel effect; precisely, 5 is the exponent applied to the dot product of the shading normal and the negated eye direction. In H, that could be a 'normal falloff' node with the mentioned exponent of 5 (or a bit more or less), plugged into a 'fit range' node, where 'destination min' and 'destination max' are the facing and edge weights, all of that used to fade the reflection. It's simplified, but it gives cleaner control over reflection weights than 'true' Fresnel. With an unmodified 'fit range', it's zero at facing and one at the edge.
     For the subsurface shaders introduced in H 16 (honestly I don't know what is included in the Principled thing), I'd definitely go to 16.5. The ones in 16 utilize indirect rays (while in 16.5 this was switched back to direct) - so, I'm afraid, everything you'll be able to get with the 16.0 versions could be a long forum thread about long render times, fireflies and such. Unfortunately, there is no control over reflection color and facing/edge weighting in the H Skin shader, which makes it close to unusable. Why is that, I have no idea. The PBR story should not be an excuse for such a brutal approach, IMO. IOR/Fresnel is present, but without precise facing/edge weighting control.
     Regarding Mental Ray's Fast Skin, the 'subsurface' is actually based on wild approximations: it's diffuse shading baked to vertices and blurred later; that's why there's a need for three layers. Anyway, the author had great artistic talent to get a good look out of all that. One important 'artistic' element is under the 'advanced' tab: it is screen blending of layers, not plain additive, which helps to avoid 'burning' with strong lighting.
     That's it in short. Over the last few months I've played a lot with this subject, and basically I was able to get what I wanted. However, the solution probably is not suitable for sharing all around. Anyway, it's possible to re-create a 'Master Zap style' of shader component mixing in H. If anyone wants this, I'll be happy to help.
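     A minimal VEX sketch of that simplified Fresnel, as I read it (facing_weight and edge_weight are hypothetical parameters; N and I are the shading globals):

        float facing_weight = 0.2;  // reflection weight looking straight on
        float edge_weight   = 1.0;  // reflection weight at grazing angle
        // Schlick-style curve: 0 at facing, 1 at the edge, exponent 5
        float falloff = pow(1.0 - abs(dot(normalize(N), normalize(-I))), 5.0);
        // remap into the facing/edge range, like the 'fit range' node
        float refl_weight = fit(falloff, 0.0, 1.0, facing_weight, edge_weight);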
  12. Help With Speeding Up Render

    In the case of glass, there's an old trick utilized in some custom Mental Ray shaders: distribute the sampling between reflection *or* refraction, whichever one is more prominent for a certain pixel. It worked well ten years ago, on the machines of those times. I don't know what happens there; anyway, the result is really bad and unexpected. I've also tried your scene tonight.
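    A minimal VEX sketch of that one-or-the-other idea, as a Russian-roulette style pick (weights and names are mine, not from any actual shader):

        // instead of tracing both lobes per sample, pick one in proportion
        // to its weight and compensate by 1/probability
        float kr = 0.3;        // placeholder: reflection weight, e.g. from Fresnel
        float xi = nrandom();  // a nondeterministic per-sample random number
        int trace_reflection = xi < kr;
        float comp = trace_reflection ? 1.0 / max(kr, 1e-4)
                                      : 1.0 / max(1.0 - kr, 1e-4);
        // ... trace only the chosen lobe, then multiply its contribution by comp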
  13. Leaf Shader - How to get realistic translucency?

    Perhaps something like this. It is PBR Diffuse in translucency mode, with 'shade both sides as front' disabled. I think it should go as an additive layer on top of some usual shader, as it seems it shades only the side opposite the light. Alone it is not enough.
  14. Smoothing Lighting in Material

    How about some light with soft shadows, that is, anything other than a point or distant light? Another thing that comes to mind is a blend based on luminance, instead of a switch: say a 'fit' node with luminance as input, source max set to something like 0.1, and the fit node as the bias input of a color mix. A third option could be some subsurface shader (could be slower...) - some SSS shaders are based exactly on blurred diffuse shading, though I'm not sure whether that applies to the ones in Houdini. From my understanding, dealing with normals won't affect the terminator edge (that's the old-school name for the edge between the illuminated and non-illuminated area). In the case of lights with sharp shading and shadows, that's always a sharp edge; shading based on the normal is able to smoothly fade the shading, down to zero at the terminator edge - but not to move it.
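    A minimal VEX sketch of the luminance-based blend (node names translated to functions; the colors are placeholders):

        vector shaded = {0.2, 0.1, 0.1};  // placeholder: the sharp, lit result
        vector smooth = {0.5, 0.3, 0.3};  // placeholder: the smoothed alternative
        // remap luminance 0..0.1 into a 0..1 bias, like the 'fit' node above
        float bias = fit(luminance(shaded), 0.0, 0.1, 0.0, 1.0);
        vector blended = lerp(smooth, shaded, bias);  // blend instead of a hard switch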
  15. Space Suit

    Here is the hiplc with the complete ground and ground shader; I've cleaned the scene a bit. For the distribution of rocks, I think there are better methods described in tutorials around; I've used a sort of simple repeating by modulo to get multiple instances onto the positions of scattered points. The shader is somewhat special: I've tried to get as bright a 'dry' look as possible, together with shadows from the sun as dark as possible, typical for NASA photos of Mars. So finally there's a mix of two PBR Diffuse VOPs, one with a faded front face to act as a 'dry reflection'. The mix is a bit modified screen blending of these two - instead of the 'classic screen blending' where colors are subtracted from RGB 1, there's an arbitrary value I called 'pedestal'. The effect is exaggerated brightness compared to standard diffuse shading, while the color still can't go over the 'pedestal' value, and bright parts are gradually blended toward that max color, here some bright orange. Normally, someone would do such things in post. Doing that directly in the render is a bit risky business - as it is for now, I'm not sure whether it responds correctly to possible additional indirect light. Anyway, it seems that Mantra is nicely resistant to such methods.
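    One way to write that 'pedestal' variation of screen blending in VEX (my generalization of the classic formula, not necessarily what the hip file does; at pedestal = 1 it reduces to the standard screen blend):

        vector a = {0.5, 0.3, 0.2};         // placeholder: first diffuse layer
        vector b = {0.4, 0.25, 0.15};       // placeholder: 'dry reflection' layer
        vector pedestal = {1.1, 0.7, 0.4};  // placeholder: max color, bright orange
        // classic screen blend is 1 - (1 - a) * (1 - b); replacing 1 with a
        // pedestal keeps the result below it and eases bright parts toward it
        vector blended = pedestal - (pedestal - a) * (pedestal - b) / pedestal;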
  16. Space Suit

    Thank you. Yeah, the ground is also Houdini, though there's nothing special there, IMO. Except the instanced rocks; they are 'manual' work, as it was just easier to get the desired shape while keeping the resolution as low as possible, for the roughly ten initial instances. Perhaps the only special part is the small 'blending area' around each big rock, created by VDB combining, smoothing and deleting unwanted polygons. I believe that 'blending area' helped the big rocks fit the ground plane better, visually. We'll see whether there's a way to make the mini-braid generation more generic. For now it's based on (my own) system which generates far too many guides in such a case. It also has probably unnecessary loops, for syncing the along-length distribution with arbitrary braid thickness, and so on. In short, while it works, it's too slow and messy to go public.
  17. Bird rig

    Thanks. Yeah, that's something impossible to imagine with my Softimage - Maya mind. It seems they made me a Cylon a long time ago...
  18. Bird rig

    Forgive me if I misunderstood - do you mean 'diving' into the bone node, or something else? If so, that gives some info about geometry, more or less in the same way as a Maya Shape node, but not that much about transforms. By the way, in Houdini, things like Blend or Fetch Object, Look At constraint, or the IK solver behave like a parent: the local transform is not affected. In apps like Softimage or Maya, and perhaps Blender too, the "constrained transform" is first converted to the local space of the driven object, and the modification is finally applied as a blend-able override of the local transform. In other words, if someone, for example, wants to read the local orientation of an IK-driven bone, this is immediately possible in the mentioned apps, while in Houdini it has to be calculated first, somehow, simply because the ''local transform'' is not affected by some of the mentioned "constraints". As far as I know, one way is to use some specialized CHOP for that; another is the optransform function. My vote goes to optransform, together with keeping the Houdini rig as simple as possible, because:
    - it allows the use of a SOP/VOP network, imho the only advantage over Maya or Blender when it comes to rigging, for any further deformation, distribution of instances, feathers or so (see the sketch after this list).
    - Houdini kinematics is not so fast to evaluate, especially with a lot of CHOPs all around (to say it politely).
    - it makes it easier to replace the complete Houdini rig with an animated hierarchy imported from another app, even when the hierarchy does not match (because of global matrices and the necessary re-construction of the parent-child relationship).
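    A minimal detail-wrangle sketch (VEX) of reading a constrained bone's world matrix with optransform and deriving a 'local' transform from it; the paths and attribute names are hypothetical:

        // world matrices include IK / constraint effects, unlike the transform parms
        matrix child_world  = optransform("/obj/chain_bone2");
        matrix parent_world = optransform("/obj/chain_bone1");
        // VEX uses row vectors, so world = local * parent, hence:
        matrix child_local = child_world * invert(parent_world);
        4@local_xform = child_local;
        // local Euler rotations in degrees; 0 = SRT order, 5 = ZYX rotation order
        v@local_rot = cracktransform(0, 5, 1, {0,0,0}, child_local);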
  19. Bird rig

    I think the most versatile way is still a bunch of optransform functions; with this you'll get global matrices into the SOP/VOP network. You save the matrices as a detail attribute, and later you'll be able to deal with them in a similar way as in Softimage ICE kinematics - as you know, the inverse of one matrix multiplied by another, and so on. For an equivalent of Static Kine State, you'll probably want to save a 'snapshot' of the matrices into an external bgeo file or something. And 'bone length' is the distance between bones. How to set up the optransform function, I think you'll be able to find somewhere on the forums, or download this thing from there (it could appear a little bit mangled in H 16, because Blend Object works differently now, but the VOP structure should behave the same way). A few tips: a Houdini bone uses a fixed rotation order, I think it's ZYX, and Z is the local rotation axis. Whatever you're doing, take care to keep relative paths, to be able to create an HDA later. HDA creation seems to be allergic to too many linked parameters (or at least it still was in the first public version of H 16), so for a bit longer chain (a parameter linked to an already linked parameter), perhaps you'll want to build the HDA first - while it's generally a good idea to avoid any 'chain' if you can. For best evaluation, you'll want to have the SOP/VOP network under the same hierarchy as your rig.
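    Along the same lines, a small detail-wrangle sketch (VEX) for the 'bone length as distance between bones' part; the paths are hypothetical:

        matrix a = optransform("/obj/chain_bone1");
        matrix b = optransform("/obj/chain_bone2");
        // multiplying the origin by a world matrix gives the object's world position
        vector pa = {0, 0, 0} * a;
        vector pb = {0, 0, 0} * b;
        f@bone_length = distance(pa, pb);
        // for a Static Kine State equivalent, write such attributes out once
        // (e.g. via a File SOP to a bgeo) and read the snapshot back later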
  20. Mesh Blend

    Hello, I just wanted to see how to get something like this in Houdini. Basically it tries to re-create a common blend - fillet method from NURBS modeling, but this time over a highly subdivided mesh. This one could be called a 'distance based fillet with G2 blend' in the NURBS modeling world. There are PolyCut and PolyExtrude SOPs, visible in the viewport; the PolyExtrude is completely reshaped by VOPs and some vector math trickery, usual for such a task. For now, it can handle multiple intersections, but only between two components. One component could be a Merge SOP, anyway. The created parts are just stacked; there is no mesh continuity this time. Also, I'm not optimistic when it comes to getting a smooth, man-made, continuous shape around the blends, at least without running into methods usual for NURBS meshing, like custom normals. But for things like procedurally created trees, I think it should make it easier to get an all-in-one mesh.
  21. Mesh Blend

    Thank you. I've uploaded it to Orbolt.
  22. Mesh Blend

    Hello, I've revamped this thing in H16. Basically it's the same; I've just changed a long-distance geometry query to VDB sampling, which seems to fit best. I've also added some auxiliary functionality, like reversing polygons if needed, UV creation and such. It's created with the H SOP and VOP arsenal. Get the hiplc here.
    How it works: the first step is a volume sample of the VDB representation of the other mesh. The sampled SDF is saved as a float attribute, and the zero SDF is used by a PolyCut SOP to create the intersection curve (see the sketch after this list). The final offset cuts on the meshes are also done by PolyCut SOP, driven by spatial queries, XYZ Distance VOP and such. The intersection curve is re-sampled down and converted to NURBS, to get as smooth a fillet as possible. From that curve, there's a spatial query to the cuts on the meshes, to get the closest points. In the next step, the curve is re-sampled again to the final fillet resolution, and there's a new spatial query to the cuts, this time only to match the final position, while the orientations are derived from the low-res curve. This is to avoid 'bulging' invoked by linear cuts over polygons. The last step is a six-point Bezier curve, well known as a G2 blend in the NURBS world, used to loft the fillets with a Skin SOP.
    More specifically, what it can and can not do:
    - it automatically creates NURBS-style fillets around intersections of two polygon meshes.
    - it wants two closed meshes as inputs, while the second mesh has to be perfectly closed (no boundary edges) - we'll see whether there's a simple way to improve that.
    - it is able to perform fillets over fillets - only in the case of a closed second input.
    - it is able to deal with multiple intersections, or multiple (closed) volumes, let's say created by a Merge SOP.
    - it creates fillets from union, intersection or subtraction. Default is union.
    - it creates UVs on the fillets. If there are existing UVs on the inputs, H will keep them.
    - it aligns the normals (or more exactly, the vertex order) of the created fillet to the first input.
    - each intersection has to be 'closed', that is, resulting in a closed curve, in order to work properly.
    - meshes have to be nicely subdivided before being used as inputs. It just cuts over the supplied inputs; it won't create new, smaller polygons.
    - it does not work well with sharp curvature - we'll see whether there's a way to improve that.
    - fillets should not overlap.
    - the resulting meshes are just stacked; there is no re-meshing at this point.
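    A minimal point-wrangle sketch (VEX) of that first step, assuming the second input carries a VDB SDF of the other mesh:

        // sample the signed distance field of the other mesh (input 1, primitive 0)
        // and store it; a PolyCut SOP cutting at value 0 on this attribute then
        // extracts the intersection curve
        f@sdf = volumesample(1, 0, @P);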
  23. Skin Slide Deformer

    Usually there's an underlying, nicely rounded 'profile' surface; for that all-smoothed example, a plain copy should be enough. Then there's a sort of spatial query, XYZ Distance or Intersect, to store the static offset against the profile surface. The deformation is the stored delta applied against the new location on the profile surface. Depending on the implementation, the same should be calculated for the driver object, null or whatever. The method is simple and well known for ages; there are many implementations around. In Maya, I think these spatial queries won't fit into the OpenCL - VP2 optimization, so the speed is probably comparable to Houdini VOPs or Softimage ICE - with a lot of queries, the thing is not so fast at high resolutions. However, it's possible to make it lighter by performing everything on a grid of locators, where the locators drive the skin deformer. Here is https://www.sidefx.com/forum/topic/34710/ a Houdini equivalent of what I've called 'Deform by UV' for Softimage ICE, which utilizes the UV texture for interpolation. It was popular a while ago; anyway, people used it mainly for modeling, as far as I know. The simplest variant could be a plain Maya Shrink Wrap / H Ray SOP driven by some sort of cluster deformer, but that's not so handy for deformations around higher angles.
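    A minimal sketch in VEX of the 'store the static offset' part (point wrangles; the attribute names are mine, and it ignores the local-frame handling a full implementation would need):

        // step 1, at rest: input 1 is the rest 'profile' surface
        int prim;
        vector uvw;
        xyzdist(1, @P, prim, uvw);
        vector rest_pos = primuv(1, "P", prim, uvw);
        v@rest_offset = @P - rest_pos;
        i@rest_prim = prim;
        v@rest_uv = uvw;
        // step 2, in a second wrangle with the deformed profile as input 1,
        // re-apply the stored delta at the same parametric location:
        // vector new_pos = primuv(1, "P", i@rest_prim, v@rest_uv);
        // @P = new_pos + v@rest_offset;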
  24. Houdini 16 Wishlist

    I think this really depends on what someone is doing. When it comes to rigging, probably the best graph is the skeleton itself: it's a 3d representation with nicely distributed, selectable elements. Maya has nice built-in helpers for isolated selections, like pick walk and highlighting. So navigation starts from the 3d view, that's it. On the other side, even a just slightly ambitious rig has two, three or more parallel structures (rig, deformer carrier, constraints, whatever). It's almost impossible to display them all at once in one 2d graph and keep the graph easy to understand, and it's not easy to decide in advance what should be nested. But, yeah, this is about rigging. By the way, why do 3d apps insist on 2d graphs only? How about a 3d representation of everything - say a typical Houdini SOP - DOP network as two 3d structures, where the networks are connected by a sort of 3d 'cables'. Something similar to molecular visualization. I know this is a lot of work to implement, but we can think about it...
  25. Volumetric Hair?

    Well, the mentioned Kajiya and Kay illumination model really isn't that complex; I think it's a few dot or cross products of the hair tangent, camera and light direction, something like that (see the sketch below). It should be easy to find a RenderMan SL example somewhere and rebuild it in H. However, this model is a really old-fashioned, limited trick: it works only with 'classic' lights, not with environment lighting, and it's very hard to get sharp highlights - decades behind the hair shader in H. The normals used for diffuse shading or something else seem to be derived from the SDF and interpolated back to the curves. It should be easy to utilize built-in functionality in H to get 'normals' from a VDB SDF; how to interpolate this back to curves, I don't know. However, again, the entire story imho has a bit too many 'dirty tricks' for today's criteria; it's not easy to keep it believable unless you're Pixar. A lot of interpolation from low to high resolution will work nicely for "mono volumes", but it can output something weird exactly in a case like yours, a 'multi volume' example. Anyway, if the goal is to utilize volumes for hair shading, there's a simple trick: render the 'real' hair visible only to primary rays, while a VDB volume is used for any other sort of rays. So, a VDB volume, without converting back to polys, having a volume shader with high density, around 100 times more than default, with a color close to a dark hair color. This gives a smooth shadowing look, similar to a shadow map, and it is around 10 times faster to render.
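    For reference, a minimal VEX sketch of the Kajiya-Kay terms; the tangent attribute, directions and exponent are placeholders:

        vector T = normalize(v@tangentu);      // hair tangent (assumed attribute)
        vector L = normalize({0.3, 1.0, 0.2}); // placeholder light direction
        vector V = normalize({0.0, 0.0, 1.0}); // placeholder eye direction
        float tl = dot(T, L);
        float tv = dot(T, V);
        float sin_tl = sqrt(max(0.0, 1.0 - tl * tl));
        float sin_tv = sqrt(max(0.0, 1.0 - tv * tv));
        float kd = sin_tl;  // Kajiya-Kay diffuse term: sine of tangent-light angle
        float ks = pow(max(0.0, tl * tv + sin_tl * sin_tv), 50.0); // specular term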