
Posts posted by lepetitnono

  1. Thanks a lot DaJuice,

    I knew this, but it's not exactly what I was looking for.
    This is useful if you want to customise one node at a time, but I'd like to have them all tinted a darker grey by default,
    and then, from that point, apply custom colors to each node that needs it (say dopnet, popnet, null and so on).
    Is there no way to apply one color to every node?
    Thank you.

    What I do for now is add the new color to nodes manually, node after node, scene after scene (Ctrl-click on the color and apply it to the nodes),
    then duplicate the default node colors in Themes > Network View Display Options and give it the same name,
    so the new colors are added to the current theme.
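    As a scripted alternative to the manual theme workflow, Houdini runs `$HOUDINI_USER_PREF_DIR/scripts/OnCreated.py` every time a node is created, so a few lines there can tint every new node. This is an untested sketch; the grey value and the guard around `hou`/`kwargs` are my assumptions.

```python
# OnCreated.py sketch: tint every newly created node a dark grey.
# Houdini calls this script with a `kwargs` dict holding the new node.
DEFAULT_GREY = 0.2  # assumed target value, tweak to taste

def default_color():
    """RGB tuple for the default node tint."""
    return (DEFAULT_GREY,) * 3

try:
    import hou  # only available inside a Houdini session
    kwargs["node"].setColor(hou.Color(default_color()))
except (ImportError, NameError, KeyError):
    pass  # outside Houdini (or no kwargs dict): do nothing
```

    Note this only affects nodes created after the script is in place; existing nodes keep their stored color.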

  2. Hello DaJuice,
    Thanks for this super Smoke theme, it rocks! It's cleaner and more serious looking too.

    Now I'd like to change the default color of every new node created in every new hip file.
    By default the node color seems to be set at about GREY(0.7), but I'd like every created node at GREY(0.2) or DARK.

    I've looked inside ST-Smoke.hcs but didn't find the right name for what I'm looking for. Could you please help me by answering with the correct name?

    Or with a technique to change this default base color for every new node?

    Thanks ;)   ++

  3. Hello @amm, thanks for your feedback. So, as you mentioned before, I'm right where you stated: single scatter...
    While waiting for more computational power (like 256 cores on a single CPU), or a killer new algorithm that could handle a volume without actually computing one (who knows!), maybe some sort of nulls to reference on the geo where the nose, eyes, ears and lips boundaries are, and then compute another map from that.
    Okay, that's the scatter map! Well then, there's no other answer than single scatter, or several single scatters.
    Thank you @amm    /   ++   ;)

  4. Hello,

    Thanks for your reply @amm.
    I'll have a look at "no diffuse" with an attribute to control weight in my next spare time.

    I had read the doc about path-traced SSS (thanks for the link ;) ) and still couldn't understand it,
    but I've found this video to be a good explanation of the different tracing options available for subsurface scattering.
    It covers RenderMan 21-22 (see the excellent diagrams from 16:30); RenderMan's options seem to be at the top of the development road,
    and I've submitted a request to SideFX to implement this RenderMan 22 "new algo" in Mantra's SSS arsenal.
    You can add your +1 in reply to support this on the SideFX forum here, if you want: https://www.sidefx.com/forum/topic/62376/

    The PDF about this new development is here: http://graphics.pixar.com/library/PathTracedSubsurface/paper.pdf

    Hopefully SideFX will put this in Mantra in the future ;)


  5. Thanks a lot @amm!   It's clearer now ;)

    About the results I got so far with this shader: under the nose, the shadow up to the upper lip doesn't have enough saturation; in a real shot it would be a bit reddish.
    The shadow looks like a diffuse shadow without SSS.
    Is there a workaround, or did I miss something in the shader params?

    So I tried a new "full SSS" shader, testing both path-traced and ray-traced SSS: that shadow looks natural, but render times get insanely longer than with the shader you provided
    (if I remember correctly, it goes from 15 min with your shader to 6 h 30 min with the one I tried > yes: no comment).

    I guess there is no solution that both looks real and renders fast.

    By the way, do you know the difference between ray-traced and path-traced SSS? I tried to understand the difference (Google) but couldn't find a clear explanation.
    From what I've seen in my tests, ray-traced SSS gives back a uniform, saturated transmission color, while path-traced gives both a more controlled saturated transmission and a more "real-feeling" transmission with depth, or scattering.
    It feels a bit like the difference between a 16-bit and a linear 32-bit image, but for SSS. Sorry for the rough explanation.


  6. Hello Anto,

    Thank you very much for this file; it renders pretty fast, I've never seen such speed for the result.
    I've downloaded your precious reference files and tried to understand what happens inside your shader (img1).

    I've rearranged your nodes (img2) for understanding purposes, but some of the calculations are still hard to follow.
    I've added a texture to control Cd for scattering, some displacement, normals everywhere I can, and parameters to color-correct the diffuse and the scattering texture.

    img3 is the render comped in Fusion with caustics for the eyes; your scattering Cd attribute shows up from nowhere because I rendered without adding the proper scattering texture (my bad).

    I went to the wikihuman website trying to find papers about this "contemporary" approach to skin shading, with no luck. Do you have a link to further information about it?
    I need a global view of the calculations to understand what is divided by what for which purpose, and what is mixed with what for which intentions;
    explaining the complete process would take too much of your time, that's why ;)

    The main calculation area I don't understand is this one: img6.
    It is used in the mix of diffuse with SSS, and in the mix of diffuse+SSS with reflection, but what is it doing, and why?

    I come from this approach in Octane Render (img4), with 3 SSS and 2 reflection components: is that approach no longer relevant? I mean, not the "contemporary" way to do it anymore?


    PHASE:
    How did you arrive at the -0.1 float value for the phase?
    Here is a diagram I found on the Unity forum (img5); it seems the phase needs to be split: positive for the first SSS component and negative for the second (or '+ + -' in the case of 3 SSS components).
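    For what it's worth, the "phase" in most scattering shaders is the g parameter of the Henyey-Greenstein phase function (I'm assuming Mantra's is the same): g > 0 favors forward scattering, g < 0 backward, so -0.1 means slightly back-scattering. A small sketch:

```python
import math

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function for anisotropy g in (-1, 1).

    Integrates to 1 over the sphere; g > 0 scatters forward,
    g < 0 scatters backward, g = 0 is isotropic.
    """
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * math.pi * denom)

# With g = -0.1 (the value in the shader), backward scattering
# (cos_theta = -1) is slightly favored over forward (cos_theta = +1):
p_back = henyey_greenstein(-1.0, -0.1)
p_fwd = henyey_greenstein(1.0, -0.1)
```

    Flipping the sign of g mirrors the lobe, which may be why the diagram splits the components into '+' and '-'.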


    Thank you very much for everything: this precious file and all the information you have provided so far.


  7. Hello Stepbtstepvfx,

    You answered me on the French houdinimatic Discord, linking to here.
    I'd need further explanation, as I'm a newbie user and I'm trying something that part of your code may solve, but I don't know which part is which.

    The main idea is to delete parts of a quad geometry. Let's take a sphere as an example: I want to delete everything but keep the "lines" that follow the geo along the up axis (given by a PolyFrame).

    I've made several tests with no luck; basically, something like adding primitives along the N vector (which is the up direction converted to N) would be cool, but it's not working.
    I have a geometry with only points, and those points have their normals pointing from each point to the next in the Y direction following the geo, which I think is a good starting base. But after that, I cannot create lines that would connect each point to the next with regard to this N vector.
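    One idea, as a pure-Python sketch rather than actual VEX (the 0.9 cosine threshold and the max distance are arbitrary assumptions): since each point's N aims at its successor, pick, for each point, the neighbor whose direction best aligns with N, and emit an edge. In a Python SOP the resulting pairs would become the polyline segments.

```python
import math

def connect_along_normal(points, normals, max_dist=2.0, min_cos=0.9):
    """Link each point to the neighbour best aligned with its normal.

    points, normals: parallel lists of (x, y, z) tuples.
    Returns (i, j) point-index pairs, one polyline segment each.
    """
    edges = []
    for i, (p, n) in enumerate(zip(points, normals)):
        best, best_cos = None, min_cos
        for j, q in enumerate(points):
            if j == i:
                continue
            d = (q[0] - p[0], q[1] - p[1], q[2] - p[2])
            dist = math.sqrt(d[0] ** 2 + d[1] ** 2 + d[2] ** 2)
            if dist == 0.0 or dist > max_dist:
                continue
            cos = (d[0] * n[0] + d[1] * n[1] + d[2] * n[2]) / dist
            if cos > best_cos:  # keep the neighbour most in line with N
                best_cos, best = cos, j
        if best is not None:
            edges.append((i, best))
    return edges
```

    The last point of each line gets no edge (nothing lies ahead of its N), which is what you want for an open curve.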

    Need your magic ;)  ;)   
    Thank you very much /.



  8. Hello there,

    I'm still trying to figure out how to make this work.
    I've tried with two other hair-card styles from DAZ models (some you can buy, some ship with DAZ).
    Results differ and they don't work: hair cards don't share the same topology from hairstyle to hairstyle, so what seemed to work for short hair doesn't work for long hair, and vice versa.
    It worked in that particular case but won't in the others.

    Ribbons:
    Hair cards are a sort of ribbon, with either 4 columns or 2 columns, so I tried to put the 4- and 2-column ones into different groups, and then treat the 2-column and 4-column ribbons in separate loops.
    The group selection started by measuring the polygon area inside a For-Each Connected Pieces loop over prims.
    Group separation based on prim area OR number of prims doesn't give stable results: some 2-column ribbons end up in the 4-column group, and vice versa.

    So what is the process?

    Say you have a geometry object containing only the hair cards; you put it in a For-Each Connected Pieces loop, so you then have each ribbon at your disposal for dissection.
    Now, for each ribbon, you need to end up with a curve: that's the holy grail.

    Solution 1: sort and delete points
    What I did with the ribbon is delete points by range: if you delete 3 points out of 4, you get only one point per prim (thankfully DAZ hair cards contain only quads).
    So that should work, right?
    But it doesn't, because the point numbers are NOT ordered to follow the final curve we want (and which we can see right in front of our eyes, shame!).
    So you place a Sort node, which sorts the point numbers based on an attribute, on the distance from any position in space or from another point, or along an axis; you choose what you need.
    It works in some cases but not all of them. Then you delete points and you get a curve that mimics the initial ribbon.
    As I said, it doesn't work in every case, and you have to adapt to each scenario, so it's not automatic.
    Place a PolyPath after the loop, which builds a path from point to point; that is then recognized as guides for hair.
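    A sketch of what an "automatic" sort could look like (pure Python, e.g. inside a Python SOP; starting from the point farthest from the centroid is my assumption for finding an endpoint, and it can fail on strongly curled ribbons): greedy nearest-neighbour ordering instead of the Sort node.

```python
import math

def order_points(points):
    """Greedy nearest-neighbour ordering of the points of a single curve.

    Starts from the point farthest from the centroid (assumed to be an
    endpoint) and repeatedly walks to the closest remaining point.
    """
    n = len(points)
    centroid = tuple(sum(p[k] for p in points) / n for k in range(3))
    start = max(range(n), key=lambda i: math.dist(points[i], centroid))
    order, remaining = [start], set(range(n)) - {start}
    while remaining:
        last = points[order[-1]]
        nxt = min(remaining, key=lambda i: math.dist(points[i], last))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```

    The returned index list is the point order a PolyPath-style connection would then follow.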

    Solution 2: scale down each prim and fuse points / delete points / sort / then PolyPath
    Each ribbon is made of polygon primitives connected to each other. You split them so each point shared by two polys becomes 2 points. Then you resize each polygon to a very, very (very, very) small poly,
    and fuse the points within a small distance: each polygon is now a single point at the center of the polygon you had at the start of the loop. Put down a Delete node to remove primitives, polygons, everything you don't need,
    keeping only points.
    Now, looking at the result, you can just delete some of the points to end up with unique points following the surface of the ribbon in a straight line.
    If you started from a ribbon with only one column, do nothing, and place a PolyPath after the loop, which builds a path from point to point that is then recognized as guides for hair.
    If you work with 2-column ribbons, delete one point out of two with delete-by-range, and follow the process.
    If you work with 4 columns, delete 3 out of 4 in the Delete node, and so on.

    For this, there is something I'm missing: determining, inside or outside the loop, the number of columns a ribbon has!
    As I wrote, I tried to determine it with a Measure node, measuring area: 4 columns should have a greater area than 2, I thought; but some ribbons have 2 columns and are very long (big area),
    while some 4-column ribbons are small enough to have a smaller area than 2-column ribbons...
    Then I tried the number of prims, and the same scenario happens again: a 2-column, super-long ribbon will have more prims than a 4-column, small ribbon.
    So this is where I stopped for now.
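    One idea (an untested sketch; it assumes the cards are regular C x R quad grids, which DAZ cards seem to be): the column count is independent of ribbon length if you use the point count and the prim count together, since for a C x R grid, points = C*R and quads = (C-1)*(R-1). Brute-forcing the few candidate C values picks it out:

```python
def ribbon_columns(num_points, num_quads, max_cols=8):
    """Infer the column count of a regular quad-strip ribbon.

    Assumes a C x R point grid: num_points = C*R, num_quads = (C-1)*(R-1).
    Returns the smallest consistent C >= 2, or None if the ribbon
    is not a regular grid.
    """
    for c in range(2, max_cols + 1):
        if num_points % c:
            continue
        r = num_points // c
        if r >= 2 and (c - 1) * (r - 1) == num_quads:
            return c
    return None
```

    Per ribbon you'd read `npoints(0)` and `nprimitives(0)` inside the For-Each loop and stamp the result as a prim attribute, then branch on it.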

    If someone could push it further it would be cool ;)


  9. Hey,

    I will post an explanation for those asking for a hip file,
    but I need to be sure this really works in other conditions, and I'm just another new user too, so this will be at your own risk ;)

    As you mentioned Mawi

    3 hours ago, mawi said:

    Looks cool.

    With the long hair, I think the biggest problem is your input. It's so much easier if each "ribbon" is a single connected patch. Then you can run a forEach and copy curves to each patch in uv-space.

    I've made a few of these systems and I always use NURBS. It's really simple to plot out hair if you know the parametric u and v direction.


    Shout if you need some simple scenes.

    UVs are needed just to have them along each curve, and I was already thinking about it, but now you've brought another thing to the table that looks important.
    I'm interested in knowing a bit more about this: I've only been in Houdini for 8 months, and a lot of the concepts Houdini brings to basic 3D principles are still missing for me.

    So you put a second For-Each at the end of the first, and copy curves onto each patch so as to have correct UVs? Am I right?
    I don't understand.
    What if you put UVs directly on the curves created in the first loop? I've seen a tutorial at GameTutor on putting UV coords on curves "manually" (VEX); would that work in this manner?
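    That "manual" UV trick is essentially arc-length parametrisation. A minimal sketch of what it computes, as a pure-Python stand-in for the VEX (u = 0 at the root, 1 at the tip):

```python
import math

def curve_uvs(points):
    """Normalised u along a polyline by arc length (0 at root, 1 at tip)."""
    if len(points) < 2:
        return [0.0] * len(points)
    u = [0.0]
    for a, b in zip(points, points[1:]):
        u.append(u[-1] + math.dist(a, b))  # cumulative length so far
    total = u[-1] or 1.0  # guard against zero-length curves
    return [x / total for x in u]
```

    In VEX the same thing would run per point, dividing the accumulated segment lengths by the curve's total length.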

    I have not tried this with long hair yet, and there might be a problem with those long geos...

    And thank you Mawi for your feedback.

    OK, tomorrow; it's 2:00 here...     Cheers.



  10. Hello Soulssaga, hello everyone,
    I'm a new Houdini user as well, and I haven't made it work either, but anyway, here is my process; it seems close to working, as the basic stuff needed is there (I hope so).

    Process:

    1- Hair cards to curves:
    Extract the hair cards and skin geo into different hierarchies.
    Put the hair-cards geo in a class loop; extract the origin point, then project it on the skin geo; merge, simplify, smooth, add the root, restroot and rest attributes,
    and convert to a NURBS curve.
    At the end of the loop you have curves that more or less match the base hair-cards geo shape.

    2- Hair: generate guides
    Generate guides on the skin with a density attribute.

    3- Advect the generated guides along the curves created in step 1
    This should work, but it doesn't.
    There are nodes to help (Volume Velocity from Curves and Guide Advect), but the drawcurve part is difficult to get past; I think the curves may need to be treated as drawn curves inside this draw node.
    Anyway: I have curves, a VDB skin, skin, hair guides and advection; the material needed for this to work is there, but something isn't done right.

    My file is here if someone is interested in helping. It's 3 MB and purged of everything but the essentials; 3 MB because the base geo is needed for this to work: https://we.tl/t-ivQlV9vAhJ

