
Posts posted by acey195


  1. Yes and no.

    The stuff he teaches is indeed great :)
    and some of it will be transferable (at least the basic coding logic)

    However, the syntax of that language is quite different from VEX,
    and the closest VEX equivalent would be running over "Detail (only once)",
    which, while valid for some purposes, is not the most efficient use of VEX.
    In that case, using a Python SOP will be about the same speed,
    and Python is probably easier to learn (it has more learning resources outside of Houdini).

    VEX is kind of designed to process existing data, rather than generating everything from scratch. (but note that it CAN do it)
    When VEX is run over "points" or "primitives", it acts more like a shader,
    executing exactly the same code for each of them, which allows multi-threading, unlocking its true power.
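
    For example, a one-line point wrangle like this (a minimal sketch, assuming the geometry has normals) runs independently for every point, which is what makes it multi-threadable:

    //vex code (point wrangle):
    v@P += v@N * chf("peak"); // pushes each point along its normal; the same code runs for every point in parallel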

    Hope that helps!


  2. Hey, Welcome to Houdini!

    May I suggest using the convert SOP (converting it to third-order NURBS)?
    You may also find it useful to:

    • use the facet SOP with the "Remove Inline Points" toggle, and then play with the distance to remove some "oversampling"
    • use the fuse SOP to merge nearby points in corners (to avoid having sharp corners when using the convert)
    • resample with a large value for "Maximum Segment Length" to reduce the noise in your samples

    Alternatively, you can also keep your geometry as polygons, with the resample SOP's "Treat Polygon As" dropdown set to "Subdivision Curves".



  3. The easiest way is to use a Python SOP in a for-loop,
    causing the render button to be pressed.
    Note that your node may have to be "unlocked", so having it inside a locked HDA will require some additional work.

    something along the lines of:

    #python code:
    hou.parm("../ropnet1/fbxnode/execute").pressButton()


  4. I thought it was mainly so that it would be able to save animations (using the frame range),
    but yeah, you'll need some kind of looping structure, or shenanigans using the pre/post-render scripts :P


  5. On 27-10-2019 at 11:21 AM, Andr1 said:

    I don't have experience with sims, but I constantly find myself falling into the following traps when developing some tool:

    1- focusing on efficiency in the first stages (wrong)
    2- aiming at a 101% procedural solution (wronger)
    3- fighting against Houdini's poor parameter UI and controls, with the goal in mind to produce a good user experience (lots of Python trickery involved)

    These obsessions are very time-consuming, and suck much energy and motivation from the greater creative goal.

    For me it reaaaally depends on what the end goal is. (Speaking from a gamedev perspective)

    I would say #1 is not a trap once you know what you are doing: for me, building something efficient right off the bat (using a lot of VEX) is way faster
    than building something with a lot of nodes, only to tear it all down later and rebuild it.
    Especially as you lose a lot of time tracking all the attributes and stuff, and getting lost in a node network that is presumably larger than it really needs to be.

    Edit: basically, give yourself some time to think of the general flow of the process beforehand, instead of diving headfirst into anything.
    Especially if you keep building on top of it; then it always turns into a mess, I find.

    For #2, this is only wrong if the system is a step near the very end of the chain. The earlier on in the chain of processes the system sits,
    the more annoying fixing things by hand will be. In my experience, spending an hour of work on the procedure that saves one second of hand editing
    is almost always worth it, if you sum up all those seconds of hand editing (and having to completely redo all the steps after).
    For example, an HDA that turns a triangle mesh into a quad mesh has to work 100% of the time,
    unless you have specific manpower assigned to fix the results every time (and I'm not going to do that myself :P).

    For UI, the Houdini interface can do a lot with the basic stuff. The trick is to not have too many parameters to begin with,
    making as many things as possible relative to each other (while providing overrides for people who think they know better than you :P).
    But yeah, Python-created dropdown menus are very nice to have in a lot of cases.


  6. On 12-8-2019 at 3:59 PM, philpappas said:

    To get some perspective on the amount of geo, I should mention that I'm trying to process an OSM file. If you take a look at the file posted, you can see the basis of what I'm trying to do, but multiply that by tens of thousands of city blocks.

    Ok, I had some time to actually look at the file; what I suggested earlier should work for your case, since you are only checking the center of every primitive, once.
    So for the final code, I would just do this:

    int outPr;
    float maxdist = chf("radius");
    vector outUV; // to use the range overflow of xyzdist() you also have to query the primitive and uv for some reason
    string grp = sprintf("!%d", @primnum);

    f@test = xyzdist(0, grp, @P, outPr, outUV, maxdist + 0.001);

    if (@test < maxdist) {
        s@near = "close";
    }

    With a lot more primitives it's going to be more costly of course, but probably still less than putting it in a loop.
    That said, with a very large amount of geometry, you may have to do the calculation in multiple steps,
    so that not every primitive has to check every other primitive, just the ones that are close.

    Also, if you are really worried about performance, you could add a heuristic: resample all primitives, adding a center point to all edges, and check those first with a nearpoints() lookup,
    then afterwards do the xyzdist() only for the remaining primitives.
    That way you can greatly optimize all the plots that have similarly sized buildings next to each other,
    as those points will in a lot of cases line up nicely with the center of the neighbouring primitive.
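
    A minimal sketch of that cheap first pass, assuming the resampled points are wired into input 1 and carry a (hypothetical) "srcprim" attribute holding their original primitive number:

    //vex code (primitive wrangle), input 1: the resampled points
    float maxdist = chf("radius");
    int pts[] = nearpoints(1, @P, maxdist);
    foreach (int pt; pts) {
        if (point(1, "srcprim", pt) != @primnum) {
            s@near = "close"; // a nearby point belongs to another primitive
            break;
        }
    }
    // primitives not caught here would still get the full xyzdist() check afterwards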


  7. 26 minutes ago, philpappas said:

    I thought this would be a big no-no if I'm already dealing with a huge amount of geo. Resampling would explode my PC, no?

    Well, resampling will indeed increase the usage of RAM, and of the GPU if you are displaying it,
    but in terms of calculation, nearpoints() is a far faster (lighter on the CPU) operation than xyzdist().

    Also, you could set your resample node's "Create Only Points" parameter (destroying the primitives for the calculation),
    which will greatly lower the GPU usage and a bit of the RAM usage.

    It of course matters what kind of fidelity you need for this; if you really need 0.001m accuracy, this method is of course not going to work.
    Though there are certain workarounds, like measuring the distance to the 2 closest points (instead of 1) and using some geometry math
    to find out where along that edge the actual closest position lies.
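
    A rough sketch of that workaround, assuming the resampled points are in input 1 and the two nearest points happen to lie on the same edge:

    //vex code (point wrangle):
    int pts[] = nearpoints(1, v@P, chf("radius"), 2);
    if (len(pts) == 2) {
        vector a = point(1, "P", pts[0]);
        vector b = point(1, "P", pts[1]);
        vector ab = b - a;
        float t = clamp(dot(v@P - a, ab) / max(dot(ab, ab), 1e-9), 0.0, 1.0);
        f@edgeDist = distance(v@P, a + t * ab); // distance to the estimated closest position along the edge
    }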


  8. The meta data block (which you can generate with a button from the input of a for-each block)
    will give you a detail attribute "value" with the current data it's looping over, in case you are running over numbers,
    as well as an "iteration" attribute you can use in other cases.
    You can use those values to make sure your wrangle fetches the right data/does the right thing, depending on the iteration of the loop.
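
    A minimal sketch, with the meta import node wired into input 1 of the wrangle (the color change is just an example):

    //vex code (point wrangle inside the for-each block):
    int iter = detail(1, "iteration", 0);
    float val = detail(1, "value", 0); // only meaningful when looping over numbers
    if (iter == 0)
        v@Cd = {1, 0, 0}; // e.g. treat the first iteration differently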

    Also,
    generally speaking, putting everything in a single wrangle only gives you a very slight performance increase in terms of overhead,
    which is almost always outweighed by the multi-threading benefit you get from using multiple nodes.
    In addition, the overhead can be completely eliminated by using a compile block
    (which only really starts to make sense with larger numbers of nodes, or if you are taking the loop approach).


  9. Yeah, looping over all primitives like this, even using groups, is going to be very expensive.

    One optimization is giving it a maximum search range (which will speed up the function a lot),
    if you have more or less similar expected distances.

    float xyzdist(<geometry>geometry, vector origin, int &prim, vector &uv, float maxdist)

    or

    float xyzdist(<geometry>geometry, string primgroup, vector origin, int &prim, vector &uv, float maxdist)

    https://www.sidefx.com/docs/houdini/vex/functions/xyzdist.html

    generally what I would do is:

    int outPr;
    float range = chf("range");
    vector outUV;

    float dist = xyzdist(1, v@P, outPr, outUV, range + 0.001);
    if (dist > range)
        return; // or continue, if in a loop

    Alternatively, if you are dealing with reaaaally large amounts of geo,
    I would suggest just resampling your primitives, saving the primitive number to those new points,
    and checking the nearest points instead of using xyzdist().
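
    For the saving part, a minimal sketch (the "srcprim" attribute name is just an example):

    //vex code (primitive wrangle, before the resample SOP):
    i@srcprim = @primnum; // the resampled points inherit this, so a nearpoints() lookup tells you which primitive they came from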



  10. Or you could write your own quadrify node, with some wrangles/Python and a dissolve node.

    What we did was find, for every triangle, the longest edge (out of the three).
    Then, if a neighboring triangle has that same edge as its longest edge, group the edge.
    Afterwards you can just dissolve this group, and you have a quite reasonable quadrify process.

    It will of course keep some triangles this way, depending on how you have modeled the "joints".
    Also, I can't share the code unfortunately, but that could give you a start.
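
    To be clear, the following is a from-scratch sketch of that idea, not the original code; it assumes triangulated input, and the "longest" attribute and "dissolve" edge group names are just examples:

    //vex code, primitive wrangle 1: store each triangle's longest edge as a sorted point pair
    int pts[] = primpoints(0, @primnum);
    if (len(pts) == 3) {
        float best = -1;
        int a, b;
        for (int i = 0; i < 3; i++) {
            int p0 = pts[i], p1 = pts[(i + 1) % 3];
            float d = distance(point(0, "P", p0), point(0, "P", p1));
            if (d > best) { best = d; a = min(p0, p1); b = max(p0, p1); }
        }
        i[]@longest = array(a, b);
    }

    //vex code, primitive wrangle 2: group the shared edge if the neighbour also has it as its longest
    int e[] = i[]@longest;
    if (len(e) == 2) {
        int prs[] = pointprims(0, e[0]);
        foreach (int pr; prs) {
            if (pr == @primnum) continue;
            int other[] = prim(0, "longest", pr);
            if (other[0] == e[0] && other[1] == e[1])
                setedgegroup(0, "dissolve", e[0], e[1], 1); // a dissolve SOP can then remove these edges
        }
    }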


  11. In general, yes, if you only really use point or primitive mode and the order of processing is not important
    (or you compensate for it in another way, such as calculating the same data again in other points that need to access it,
    although this may lower the speed by such an amount that running in detail mode may be faster anyway). A case where the order does matter is sketched below.
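
    A minimal sketch of such an order-dependent task, accumulating a running length along a curve, which is why it wants detail mode:

    //vex code (detail wrangle):
    float total = 0;
    for (int pt = 1; pt < npoints(0); pt++) {
        total += distance(point(0, "P", pt - 1), point(0, "P", pt));
        setpointattrib(0, "arclen", pt, total); // each point needs the previous point's result
    }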

    but there are other things you can do, like Skybar mentioned, or simply putting the wrangle in a for loop, potentially using the meta data block

    Also, don't be afraid to mix and match point and detail mode, and divide your code over multiple wrangles, so you can have the best of both worlds


  12. Not really, using the polyExpand2D; you may also want to try the polyWire SOP (and then flatten the result and remove the bottom).

    Another solution is putting a grid under it, making a distance field of the curve on the grid, and then using a clip SOP based on this value (by transforming the height using the distance field),
    but this method creates an unaligned topology (which it sounds like you don't want).
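
    A minimal sketch of that second approach, as a point wrangle on the grid with the curve wired into input 1 (the parameter names are just examples):

    //vex code (point wrangle on the grid):
    float d = xyzdist(1, v@P);
    v@P.y = chf("height") * (1 - smooth(0, chf("falloff"), d)); // raise the grid near the curve, then clip by height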




  13. Well, that error suggests that you are deleting nodes by cooking things :P

    I can understand if you write a shelf tool in Python to do something like that,
    or maybe a post-render script in a ROP node,
    but I'm really curious what kind of thing you want to achieve with this,
    as it indeed sounds like something that is not without risk.


  14. When you have VEX functions that can output different types, it's always a good idea to cast them directly.

    For example, if you want a random greyscale color, based on the position:

    v@Cd = float(rand(v@P)); // the float cast makes rand() return a single value, which is then promoted back to a vector with the same value in all components.

    v@Cd = rand(v@P); // without the cast, rand() resolves to the vector overload, giving different values per component.



  15. That kinda depends on what the input primitives are.
    There are many different SOPs that can do it:

    for triangles:

    *divide SOP, with triangulation disabled and "Remove Shared Edges" enabled
       *or similarly, an edge group with a dissolve SOP

    for polylines:

    *add SOP, removing the primitives and then recreating them in the other tab

    *polyPath

    *join (if the vertices are pre-sorted)
