
amm

Members
  • Content count: 65
  • Days Won: 10

Everything posted by amm

  1. Thanks. Yeah, that's something impossible to imagine with my Softimage - Maya mind. It seems they made me a Cylon a long time ago...
  2. Forgive me if I misunderstood, you mean 'diving' into the bone node, or what? If so, that gives some info about geometry, more or less the same way as a Maya Shape node, but not that much about transforms. By the way, in Houdini, things like Blend or Fetch Object, the Look At constraint, or the IK solver behave like a parent: the local transform is not affected. In apps like Softimage or Maya, perhaps Blender too, a "constrained transform" is first converted to the local space of the driven object, and the modification is finally applied as a blendable override of the local transform. In other words, if someone wants to read the local orientation of an IK-driven bone, this is immediately possible in the mentioned apps, while in Houdini it has to be calculated first, somehow, simply because the ''local transform'' is not affected by the mentioned "constraints". As far as I know, one way is to use a specialized CHOP for that, another is the optransform function. My vote goes to optransform (see the sketch below), together with as simple a Houdini rig as possible, because:
     - it allows the use of a SOP/VOP network, IMHO the only advantage over Maya or Blender when it comes to rigging, for any further deformation, distribution of instances, feathers or so;
     - Houdini kinematics is not so fast to evaluate, especially with a lot of CHOPs all around (to say it politely);
     - it makes it easier to replace the complete Houdini rig with an animated hierarchy imported from another app, even when the hierarchy does not match (because of global matrices and the necessary reconstruction of the parent-child relationship).
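     For illustration, a minimal VEX sketch of the optransform route (a detail wrangle; the bone paths are assumptions), reconstructing a local transform by multiplying with the inverse of the parent's world matrix:

        // world matrices of an IK-driven bone and its parent (paths are assumptions)
        matrix bone_world   = optransform("/obj/chain_bone1");
        matrix parent_world = optransform("/obj/chain_root");
        // local transform = world * inverse(parent world)
        matrix local = bone_world * invert(parent_world);
        // Houdini bones use ZYX rotation order (rotation order index 5)
        v@local_rot = cracktransform(0, 5, 1, {0,0,0}, local);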
  3. I think the most versatile way is still a bunch of optransform functions; with these you'll get the global matrices into the SOP/VOP network. You save the matrices as detail attributes, and later you'll be able to deal with them in a similar way as in Softimage ICE kinematics: as you know, the inverse of one matrix multiplied by another, and so on. For an equivalent of Static Kine State you'll probably want to save a 'snapshot' of the matrices into an external bgeo file or something. So, 'Bone length' is the distance between bones. How to set up the optransform function, I think you'll be able to find somewhere on the forums, or download this thing from there (it could appear a little bit mangled in H 16, because the Blend Object works differently, but the VOP structure should behave the same way). A few tips: a Houdini bone uses a fixed rotation order, I think it's ZYX, and Z is the Local Rotation Axis. Whatever you're doing, take care to keep relative paths, to be able to create an HDA later. HDA creation seems to be allergic to too many linked parameters (or at least it still was in the first public version of H 16), so for a bit longer chain (a parameter linked to an already linked parameter), perhaps you'll want to build the HDA first, while it's generally a good idea to avoid any 'chain' if you can. For best evaluation, you'll want to have the SOP/VOP network under the same hierarchy as your rig.
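     A minimal VEX sketch of that 'snapshot' idea (a detail wrangle; the attribute and object names are assumptions), assuming the rest pose matrix was stored earlier as a detail attribute:

        // current world matrix, fetched live
        matrix cur = optransform("/obj/chain_bone1");
        // rest matrix saved earlier, Static Kine State style
        matrix rest = detail(0, "rest_xform");
        // relative motion since the snapshot: inverse of one matrix times another
        4@delta_xform = invert(rest) * cur;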
  4. Hello, just wanted to show how to get something like this in Houdini. Basically it tries to re-create a common blend-fillet method from NURBS modeling, but this time over a highly subdivided mesh. In the NURBS modeling world this one could be called a 'distance based fillet with G2 blend'. There are PolyCut and PolyExtrude SOPs visible in the viewport; the PolyExtrude is completely reshaped by VOPs and some vector math trickery, usual for such a task. For now it can handle multiple intersections, but only between two components; one component could be a Merge SOP, anyway. The created parts are just stacked, there is no mesh continuity this time. Also I'm not optimistic when it comes to getting a smooth, man-made, continuous shape around blends, at least without running into methods usual for NURBS meshing, like custom normals. But for things like procedurally created trees, I think it should be easier to get it all in one mesh.
  5. Thank you. I've uploaded it to Orbolt.
  6. Hello, I've revamped this thing in H16. Basically it's the same, I just changed a long distance geometry query to VDB sampling, which seems to fit best. Also added some auxiliary functionality, like reversing polygons if needed, UV creation and such. It's created with the H SOP and VOP arsenal. Get the hiplc here.
     How it works: the first step is a volume sample of the VDB representation of the other mesh. The sampled SDF is saved as a float attribute, and the zero SDF is used by a PolyCut SOP to create the intersection curve (a sketch of this first step is below). The final offset cuts on the meshes are also done by PolyCut SOP, driven by a spatial query, XYZ Distance VOP and such. The intersection curve is re-sampled down and converted to NURBS, to get as smooth a fillet as possible. From that curve, there's a spatial query to the cuts on the meshes, to get the closest points. In the next step, the curve is re-sampled again to the final fillet resolution, and there's a new spatial query to the cuts, this time only to match the final position, while orientations are derived from the low-res curve. This is to avoid the 'bulging' invoked by linear cuts over polygons. The last step is a six-point Bezier curve, well known as a G2 blend in the NURBS world, used to loft the fillets with a Skin SOP.
     More specifically, what it can and can not do:
     - it automatically creates NURBS style fillets around intersections of two polygon meshes.
     - it wants two closed meshes as inputs, while the second mesh has to be perfectly closed (no boundary edges) - I'll see if there is a simple way to improve that.
     - it is able to perform fillets over fillets - only in the case of a closed second input.
     - it is able to deal with multiple intersections, or multiple (closed) volumes, let's say created by a Merge SOP.
     - it creates fillets from union, intersection or subtraction. Default is union.
     - it creates UVs on fillets. If there are existing UVs on the inputs, H will keep them.
     - it aligns normals (or exactly, vertex order) of the created fillet to the first input.
     - each intersection has to be 'closed', that is, resulting in a closed curve, in order to work properly.
     - meshes have to be nicely subdivided before input. It's just cutting over the supplied inputs, it won't create new, smaller polygons.
     - it does not work well with sharp curvature - I'll see if there is a way to improve that.
     - fillets should not overlap.
     - the resulting meshes are just stacked, there is no re-meshing at this point.
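     A minimal VEX sketch of that first step (a point wrangle, with the second input being the VDB SDF of the other mesh; the volume name 'surface' is an assumption):

        // sample the other mesh's SDF at each point of this mesh
        f@sdf = volumesample(1, "surface", @P);
        // a PolyCut SOP downstream cuts at @sdf == 0 to extract the intersection curve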
  7. Usually there's an underlying, nicely rounded 'profile' surface, while for that all-smoothed example a plain copy should be enough. Then there's a sort of spatial query, XYZ Distance or Intersect, to store the static offset against the profile surface. The deformation is the stored delta applied against the new location on the profile surface. Depending on the implementation, the same should be calculated for the driver object, a null or whatever. The method is simple and has been well known for ages, there are many implementations around. In Maya, I think these spatial queries won't fit into the OpenCL - VP2 optimization, so speed is probably comparable to Houdini VOPs or Softimage ICE: with a lot of queries, the thing is not so fast at high resolutions. However it's possible to make it lighter by performing everything on a grid of locators, where the locators drive the skin deformer. Here is https://www.sidefx.com/forum/topic/34710/ the Houdini equivalent of what I've called 'Deform by UV' for Softimage ICE, which utilizes a UV texture for interpolation. It was popular a while ago; anyway, people used it mainly for modeling, as far as I know. The simplest variant could be a plain Maya Shrink Wrap / H Ray SOP driven by some sort of cluster deformer, but that's not so handy for deformations around higher angles.
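     A minimal VEX sketch of the bind step (a point wrangle, with the second input being the rest profile surface; the attribute names are assumptions):

        int prim; vector uv;
        xyzdist(1, @P, prim, uv);                // closest location on the profile
        vector rest = primuv(1, "P", prim, uv);  // position at that parametric spot
        i@bind_prim = prim;
        v@bind_uv   = uv;
        v@offset    = @P - rest;                 // static offset, here in world space

     And the deform step, with the deformed profile surface as the second input:

        vector moved = primuv(1, "P", i@bind_prim, v@bind_uv);
        @P = moved + v@offset;

     The offset is kept in world space here for brevity; a real implementation would store it in the local frame of the surface (tangent, bitangent, normal), so it rotates along with the deformation.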
  8. I think this really depends on what someone is doing. When it comes to rigging, probably the best graph is the skeleton itself. It's a 3d representation with nicely distributed, selectable elements. Maya has nice built-in helpers for isolated selections, like pick walk and highlighting. So navigation starts from the 3d view, that's it. On the other side, even a just slightly ambitious rig has two, three or more parallel structures (rig, deformer carrier, constraints, whatever). It's almost impossible to display them all at once in one 2d graph and keep the graph easy to understand, and it's not easy to decide in advance what should be nested. But yeah, this is about rigging. By the way, why do 3d apps insist on 2d graphs only? How about a 3d representation of everything, let's say a typical Houdini SOP - DOP network as two 3d structures, where the networks are connected by a sort of 3d 'cables'. Something similar to molecular visualization. I know this is a lot of work to implement, but we can think about it...
  9. Well, the mentioned Kajiya and Kay's illumination model really isn't that complex; I think there are a few dot or cross products of hair tangent, camera and light direction, something like that (see the sketch below). It should be easy to find a RenderMan SL example somewhere and rebuild it in H. However this model is a really, really old-fashioned, limited trick. It works only with 'classic' lights, not with environment lighting, and it's very hard to get sharp highlights: decades behind the hair shader in H. Normals used for diffuse shading or something else seem to be derived from the SDF and interpolated back to the curves. It should be easy to utilize built-in functionality in H to get 'normals' from a VDB SDF; how to interpolate this back to curves, I don't know. However, again, the entire story IMHO has a bit too many 'dirty tricks' for today's criteria, not easy to keep believable unless you're Pixar. A lot of interpolation from low to high resolution will work nicely for "mono volumes", but it can output something weird in a case like yours, a 'multi volume' example. Anyway, if the goal is to utilize volumes for hair shading, there's a simple trick: render the 'real' hair visible only to primary rays, while a VDB volume is used for every other sort of ray. So, a VDB volume, without converting back to polys, having a volume shader with high density, around 100 times more than default, with a color close to a dark hair color. This gives a smooth shadowing look, similar to a shadow map, and it is around 10 times faster to render.
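     For reference, the Kajiya-Kay terms really are just a few dot products and square roots; a minimal VEX sketch, where the tangent attribute and the channel names are assumptions about the surrounding shading context:

        vector T = normalize(v@tangentu);        // hair tangent (assumed attribute)
        vector L = normalize(chv("light_dir"));  // light direction
        vector V = normalize(chv("view_dir"));   // view direction
        float t_dot_l = dot(T, L);
        float t_dot_v = dot(T, V);
        float sin_tl = sqrt(max(0.0, 1.0 - t_dot_l * t_dot_l));
        float sin_tv = sqrt(max(0.0, 1.0 - t_dot_v * t_dot_v));
        f@diff = sin_tl;                         // diffuse term
        f@spec = pow(max(0.0, t_dot_l * t_dot_v + sin_tl * sin_tv), chf("spec_exp"));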
  10. Honestly I don't know how exactly this works in H; I tried it once a while ago and it worked. Generally, the purpose of an animation layer is to be an offset on top of mocap animation or such; a layer system like that somehow comes together with a mocap system. A zero value is no offset, anything else is additive. I think there is no need to bake it back to the original, as long as you're in H.
  11. Perhaps there's a difference between the mocap actress and your 3d model. A thinner figure allows wider angles of arm movement. The animation layers system is a tool for adjusting these offsets where needed.
  12. About re-creating rotation from just points: there is an inverse kinematics chain in every 3d app, doing just that. Another method is the look-at constraint (Track To in Blender, Aim in Maya, Direction constraint in Softimage), as a sort of complement to the IK chain. The math behind an IK chain of two elements is pretty simple and well known; it's some method of calculating the sides of a triangle, where you know the angle or length of two, and then you get what you need for the third. For more than two bones it's a different story, because that one is always arbitrary, but a two-bone chain should be enough here (see the sketch after this post).
      In apps like Maya or Softimage, 'manual' motion re-targeting using constraints or IK is a perfectly possible procedure, while some skill is definitely needed to get it to work. For example, if two bones from mocap are parallel at some frame, there should be a special solution to get an up vector; otherwise it's enough to use the midpoint between the root and the end of the last bone, and so on. However it's still only skill, no need to know the math behind it, or even (that much) scripting. Blender should be fine, too. The pro solution for re-targeting is a Full Body IK solver; these solvers are able to work with the complete skeleton at once, providing the info for re-targeting and filtering the mocap data at the same time.
      Now... Houdini has a bit different approach compared to Maya or Softimage or Blender. Things like Blend or Fetch Object, or the built-in Look At, I think even the IK CHOP, act as a parent of the ''constrained'' object, not as an override of the local transform like in the mentioned three, and the rest of the 3d world (I think). There are other differences, too. Most likely it's perfectly possible to build some manual motion re-targeting in H, but with how many steps, and how fast such a setup would be, I don't know.
      For doing natural-looking animation without using mocap, someone has to be a very, very, very skilled animator to make it believable. For movements like dancing, martial arts or so, with a lot of contacts with other objects, one would like to have a rig with a bit more than just a common set of IK solvers and constraints, and a robust IK-FK matching mechanism as well.
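      The two-bone triangle math is just the law of cosines; a minimal VEX sketch, where the root and goal positions and the two bone lengths are assumed given as channels:

        vector root = chv("root");
        vector goal = chv("goal");
        float len_a = chf("len_a");   // first bone length
        float len_b = chf("len_b");   // second bone length
        // clamp the root-to-goal distance to the reachable range
        float d = min(length(goal - root), len_a + len_b - 1e-6);
        // interior angle at the middle joint:
        float elbow = acos(clamp((len_a*len_a + len_b*len_b - d*d) / (2.0*len_a*len_b), -1.0, 1.0));
        // angle between the first bone and the root-to-goal line:
        float root_bend = acos(clamp((len_a*len_a + d*d - len_b*len_b) / (2.0*len_a*d), -1.0, 1.0));
        f@elbow = elbow;
        f@root_bend = root_bend;

      The up vector mentioned above is still needed to pick the plane in which these angles are applied.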
  13. I'd say it depends on what is considered support. While it can load floating point EXR and TIFF and display effects typical for a range over 1, alpha is always processed as something that looks like a double multiply, or some clamp. Some built-in effects are still limited to 8 or 16 bit, so the entire comp becomes clamped in such a case, with a small warning icon over the particular effect. Similar with Photoshop: only some effects work, and the maximum allowed range is 20. It's not a mistake to round all that to zero, I think, especially in times of free Fusion.
  14. For this one I raised the diameter (width * 2) to the equivalent of 2 - 3 cm, just to show the effect. It worked before with half of that value. For keeping the layered groom by collisions only, I'm not optimistic, even with much more than the roughly 220 guides there; I think this has a chance to work believably starting with thousands of guides or more. So, I used a strong Target stiffness just to fix the part around the roots. I also used a blend with the plain animated deformation, after the DOP import, to fix the springy movement. This and this is how the final looks.
  15. Yeah, I noticed that with SDF collision, however it still produced a consistent transition between frames. More resolve passes make it more accurate. Here's how one of my tests looks. By the way, the part close to the hair roots does not belong to the collision, it's just a very strong 'target stiffness', gradually faded along the curves (sketched below). For my taste, the main problem with the H wire solver, which I don't know how to avoid effectively, is the unnaturally springy movement.
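      A minimal VEX sketch of that fade (a point wrangle over the guide curves); the 'fade' attribute is an assumption, meant to be multiplied into whatever stiffness channel the solver reads:

        int vtx  = pointvertex(0, @ptnum);   // the vertex attached to this point
        int prim = vertexprim(0, vtx);       // the curve it belongs to
        int n    = primvertexcount(0, prim);
        float u  = (n > 1) ? float(vertexprimindex(0, vtx)) / float(n - 1) : 0.0;
        f@fade = chramp("profile", 1.0 - u); // 1 at the root, 0 at the tip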
  16. Regarding Kristinka hair, the closest operator could be an *adaptation* of 'kh form follow curve', where the input geometry could be just a set of 'spine' curves, like the ones used for deforming the polygonal cylinders in Maya - at least I believe this was the modeling method. However this won't create a rounded cross section; it will be a sort of Voronoi shape defined by the closest first points on curves (see the sketch below). That is, Kristinka strictly creates all hairs 'from the head', usually just one emitter for all, then it deforms the strands later - it does not work directly from grooming geometry. This is to get as even a distribution of hair roots as possible, important for realistic renders. By the way, the 'emit directly from grooming geometry' approach, which I think fits better to the supplied... 'free form'..., in the Softimage environment belongs to the Melena branch, which is not 'translated' to H yet. The supplied Kristinka for Houdini operators based on NURBS surfaces are far away from the forms in the image, even if the polygons are converted to NURBS; they will output something weird. In short, if a Voronoi style of cross sections along a set of 'spine' curves is OK, together with parametrically defined distances to the curve (by spline ramps), I'll take some time this weekend. While I'm pretty sure about the setup and math, I just discovered how completely I forgot how to work with HDAs (didn't have a chance to play with H for some time), so no results right now.
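      A minimal VEX sketch of that 'closest first point on curves' assignment (a point wrangle over strand roots; the second input holds the curve root points with an assumed integer attribute curve_id):

        int near = nearpoint(1, @P);           // nearest spine-curve root point
        i@spine = point(1, "curve_id", near);  // which curve this strand follows

      Strands sharing the same nearest root then form the Voronoi-like cells mentioned above.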
  17. Pcfind returns an array, [], while limits like radius and max point count are still there. I can't say anything about a speed comparison. Anyway it works just fine, let's say for a bind export, calling the attribute later: find the closest points before some deformation, do an average of their positions after the deformation, and so on.
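      A minimal VEX sketch of that bind-then-average pattern (attribute and channel names are assumptions). Bind pass, a point wrangle with the rest target as the second input:

        i[]@bind = pcfind(1, "P", @P, chf("radius"), chi("maxpts"));

      After the deformation, with the deformed target as the second input:

        vector avg = {0, 0, 0};
        foreach (int pt; i[]@bind)
            avg += point(1, "P", pt);
        if (len(i[]@bind) > 0)
            @P = avg / len(i[]@bind);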
  18. If the point number is exported as a custom attribute, it will be resistant to any later deletion. But it will be copied to possible 'clones' (it seems this depends on the method by which points are added), so there should be a 'counter' after each new creation, for the newly created points: something like the maximum value of all saved attributes, plus the number of added points. I don't see anything shorter than that.
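      A minimal VEX sketch of that counter (a detail wrangle, run after the new points are created), assuming existing points carry an int id attribute and newcomers arrive with id == -1:

        // find the current maximum id
        int maxid = -1;
        for (int pt = 0; pt < npoints(0); pt++)
            maxid = max(maxid, point(0, "id", pt));
        // stamp fresh ids onto the newly created points
        int counter = 0;
        for (int pt = 0; pt < npoints(0); pt++) {
            if (point(0, "id", pt) < 0)
                setpointattrib(0, "id", pt, maxid + 1 + counter++);
        }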
  19. About the first one: generally you interpolate the relative, parametric UV location from the first polygon to the extruded ones. These positions are the control points for some kind of curve interpolation. Let's say in Houdini: a Scatter SOP over the initial set of polygons, an XYZ Distance VOP to find the parametric UV, a Primitive Attribute VOP for matching, a Spline VOP to fit the lines to curves, while the lines are copied from points. There should be a mechanism for finding the corresponding polygon - I think this part should be easier in H if some sort of copying is used (PolyExtrude with the 'side' option off, or the like), because the H XYZ Distance VOP returns the primitive number, too (see the sketch below). About Melena 'hair from nurbs', basically it's a 'boosted' H Creep SOP, or what is called 'deform by surface' in SI, or 'wrap' in Maya. However, to get it 'boosted', there's a need to reconstruct the functionality with VOPs or something else. Here in the 'finished work' section there is a sort of translation to H of the SI ICE hair system, which was inspired by one of Melena's predecessors, with a lot of appropriate trickery inside - not the same as Melena, anyway. Note that if something like believable hair is the goal, the work seems to be similar to building a rig for a character: while none of the required knowledge belongs to rocket science, only after a few complete iterations, from scratch to a final and believable result, will you know what the 'appropriate' solution is. So maybe it's best to create a 1:1 copy of some well known, widely accepted solution. Of course, if the related copyrights allow such an approach.
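      A minimal VEX sketch of that matching idea (a point wrangle over the scattered points; inputs 1, 2 and 3 hold the base polygons and two copied layers whose primitive numbering is assumed to line up):

        int prim; vector uv;
        xyzdist(1, @P, prim, uv);                  // where on the base polygon
        vector base   = primuv(1, "P", prim, uv);
        vector layer1 = primuv(2, "P", prim, uv);  // same prim on the next layer
        vector layer2 = primuv(3, "P", prim, uv);
        // fit a curve through the corresponding positions, sampled at u
        // ('linear' for brevity; a smoother basis wants more control values)
        @P = spline("linear", ch("u"), base, layer1, layer2);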
  20. Hello, I'd like to offer the Houdini version of Kristinka Hair. It is a hair styling system created with Softimage ICE. Generally it combines external geometry, used to get the main shape, together with a sequence of procedural modifiers for details. It started as a publicly available experiment a few months after the first release of ICE at the end of 2008. It had the enormous luck to be constantly tested, commented on and used by Softimage artists, to this very day. Perhaps the best known use case is a version of AMD's Ruby, and the Nose Job TV commercial as well - while the Psyop people, as far as I know, created a lot of additional solutions on top of its base structure. In 2011 it was promoted by the Autodesk Area people. What you'll find in this, I hope only initial, Houdini release is a snapshot of the latest version. I think some recognizable good and bad points are:
      1: The mentioned 'from top to bottom' approach. After the initial hedgehog hairdo, there is one or many 'form' nodes for shaping by external geometry, usually NURBS surfaces or curves. The next step is a sequence of procedural 'modifiers'. Somewhere in the 'modifier' stage there is a hair filler.
      2: Only one network for all, a flat context for everything. In Houdini it could be literally everything in one network - except, of course, an optional connection to a DOP network or the like. The first node is a polygonal emitter, the last node is a Subdivide SOP for rendering.
      3: Direct rendering; the shader is applied directly on the hair object.
      4: Each node tries to create an immediate, visually recognizable effect on the final result - or at least, it tries to avoid things like initializers and so on.
      5: It is focused on human hair, or exactly, what some humans have on top of their head. Nothing else, for now.
      6: It's not generic, because of a very specific structure dependent on internally created attributes - generally it's not supposed to work with imported curves from ZBrush, or anything like that. It will work only partially with the Houdini Fur SOP - while I think there's no sense in going that way.
      7: The working version is created exclusively from factory nodes. It has no scripted superstructure and no underlying DLLs. This is to make it easier to manage and available for changes by a wider range of authors. Finally, and most important, it should be independent of the Houdini version as well.
      8: It has its own hair filler, based on predefined attributes and a specific structure of guides (the filler does not rely on a spatial query or another expensive method).
      9: It relies heavily on NURBS surfaces, so basic knowledge about NURBS is really a good idea - NURBS parametrization and so on.
      This initial release is free. I made it with Houdini Indie. There are a few HIPlc samples and HDAlc nodes in the download, one is a setup for a DOP network connection. For now, I added a bunch of comments in each sample, and that's all about docs. However I believe this should be enough to see what to do next. Get it there. Thanks for reading!
  21. Thank you, guys. Just need to say that the main design of this stuff dates to 2009 - 2011, when it wasn't really clear what a typical 'hair enveloper' should look like. Today I'd go with something like Maya GMH2, as it looks like a much more compact and simpler solution, and a better candidate to hold a 'long hair scene' in the Houdini world. Personally I'll stay with the subject of this thread; anyway, I'm pretty sure there are a lot of talented people around, perfectly able to create something like the GMH thing. I think the ground for hair rendering is already there: things like the Subdivide SOP working with curves in H 15, really nice evaluation of thousands of (polygonal) curves. Also, for the pic above, I was able to reduce the Mantra render time by about ten times, by using a VDB volume for all secondary rays - not every renderer can do that.
  22. The usual trick to get it directly in VOPs could be something like the pic. Worley gives four points; I think (not completely sure) some fitting across these four is enough to get all the well known appearances.
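      A minimal VEX sketch of the idea; the wnoise call exports the first two feature distances (the Worley Noise VOP exposes up to four), and the classic cell-border look comes from their difference:

        int seed;
        float f1, f2;
        wnoise(@P * chf("freq"), seed, f1, f2);
        f@cell = clamp(f2 - f1, 0.0, 1.0);  // F2 - F1: bright cells, dark borders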
  23. So, what are you trying to do? By the way, if an Attribute VOP is running over primitives, you could export P, which is the center of the primitive (or the average of its points, not sure). But promoting to points could return something useful in the case of disconnected polygons, curves or so. Otherwise, when each point 'belongs' to many primitives, I don't think so.
  24. A bit off topic, but I can't resist. Fianna, are you working for Side Effects now, or.... did something happen with Side Effects...