
Posts posted by acey195


  1. On 12-8-2019 at 3:59 PM, philpappas said:

    To get some perspective on the amount of geo, I should mention that I'm trying to process an OSM file. If you take a look at the file posted you can see the basis of what I'm trying to do, but multiply that by tens of thousands of city blocks.

    OK, I had some time to actually look at the file. What I suggested earlier should work for your case, since you are only checking the center of every primitive, once.
    So for the final code, I would just do this:

    int outPr;
    float maxdist = chf("radius");
    vector outUV; //to use the range overflow of xyzdist() you also have to query the primitive and uv for some reason
    string grp = sprintf("!%d", @primnum);

    f@test = xyzdist(0, grp, @P, outPr, outUV, maxdist + 0.001);

    if(f@test < maxdist){
        s@near = "close";
    }

    With a lot more primitives it's going to be more costly of course, but probably still less than putting it in a loop.
    That said, with a very large amount of geometry you may have to do the calculation in multiple steps,
    so that not every primitive has to check every other primitive, but just the ones that are close.

    Also, if you are really worried about performance, you could add a heuristic: resample all primitives, adding a center point to every edge, and check those first with a nearpoint() expression,
    then afterwards do the xyzdist() for the remaining primitives.
    That way you can greatly optimize all the plots that have similarly sized buildings next to each other,
    as those points will in a lot of cases line up nicely with the center of the neighbouring primitive.


  2. 26 minutes ago, philpappas said:

    I thought this would be a big no-no if I'm already dealing with a huge amount of geo. Resampling would explode my PC, no?

    Well, resampling will indeed increase the RAM usage, and the GPU usage if you are displaying it,
    but in terms of calculation, nearpoints() is a much faster (lighter on the CPU) operation than xyzdist().

    Also, you could enable your Resample node's "Create Only Points" parameter (destroying the primitives for the calculation),
    which will greatly lower the GPU usage and some of the RAM usage.

    It of course matters what kind of fidelity you need for this; if you really need 0.001m accuracy, this method is not going to work.
    Though there are certain workarounds, like measuring the distance to the 2 closest points (instead of 1) and using some geometry math
    to find out where along that edge the actual closest position lies.
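
    A minimal point-wrangle sketch of that two-point refinement, assuming the resampled points come in on input 1, the two nearest samples lie on the same edge, and hypothetical names for the "radius" channel and the f@dist output attribute:

    // find up to 2 nearest resampled points within the search radius
    int pts[] = nearpoints(1, v@P, chf("radius"), 2);
    if (len(pts) == 2)
    {
        vector a = point(1, "P", pts[0]);
        vector b = point(1, "P", pts[1]);
        // project P onto the segment ab to estimate the true closest position
        vector ab = b - a;
        float t = clamp(dot(v@P - a, ab) / max(dot(ab, ab), 1e-9), 0.0, 1.0);
        f@dist = distance(v@P, a + ab * t);
    }
    else if (len(pts) == 1)
        f@dist = distance(v@P, point(1, "P", pts[0])); // only one sample in range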


  3. The meta data block (which you can generate with a button from the input of a for-each block)
    will give you a detail attribute "value", holding the current data it is looping over, in case you are iterating over numbers,
    as well as an "iteration" attribute you can use in other cases.
    You can use those values to make sure your wrangle fetches the right data / does the right thing, depending on the iteration of the loop.
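
    For example, in a wrangle inside the loop (a sketch; "iteration" is the default detail attribute the meta data block creates, and "step" is a hypothetical channel):

    // read the current loop iteration from the meta data block on input 1
    int iter = detail(1, "iteration", 0);
    // e.g. offset each iteration's geometry by a different amount
    v@P.y += float(iter) * chf("step");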

    Also,
    generally speaking, putting everything in a single wrangle only gives you a very slight performance increase in terms of overhead,
    which is almost always outweighed by the multithreading benefit you get from using multiple nodes.
    In addition, the overhead can be completely eliminated by using a compile block
    (which only really starts to make sense with larger numbers of nodes, or if you are taking the loop approach).


  4. Yeah, looping over all primitives like this, even using groups, is going to be very expensive.

    One optimization is giving it a maximum search range (which will speed up the function a lot),
    if you have more or less similar expected distances:

    float xyzdist(<geometry>geometry, vector origin, int &prim, vector &uv, float maxdist)

    or

    float xyzdist(<geometry>geometry, string primgroup, vector origin, int &prim, vector &uv, float maxdist)

    https://www.sidefx.com/docs/houdini/vex/functions/xyzdist.html

    generally what I would do is:

    int outPr;
    float range = chf("range");
    vector outUV;

    float dist = xyzdist(1, v@P, outPr, outUV, range + 0.001);
    if(dist > range)
        return; //or continue, if in a loop

    Alternatively, if you are dealing with really large amounts of geo,
    I would suggest just resampling your primitives, saving the primitive number to those new points,
    and checking the nearest points instead of using xyzdist(), as sketched below.
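
    A rough sketch of that lookup pattern, split over two wrangles (the attribute names i@srcprim and i@nearprim and the "range" channel are hypothetical; the first wrangle runs on the resampled points while they still have their primitives, the second gets those points on input 1):

    // wrangle 1 (point wrangle on the resampled geo, before deleting its prims):
    // remember which source primitive each sample point belongs to
    i@srcprim = pointprims(0, @ptnum)[0];

    // wrangle 2 (point wrangle on the original geo, resampled points on input 1):
    // look up the closest sample instead of calling xyzdist()
    int npt = nearpoint(1, v@P, chf("range"));
    if (npt >= 0)
        i@nearprim = point(1, "srcprim", npt);
    else
        i@nearprim = -1; // nothing within range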



  5. Or you could write your own quadrify node, with some wrangles/Python and a Dissolve node.

    What we did was find, for every triangle, the longest edge (out of 3).
    Then, if a neighboring triangle has the same edge as its longest edge, group that edge.
    Afterwards you can just dissolve this group and you have a quite reasonable quadrify process.

    It will of course keep some triangles this way, depending on how the "joints" have been modeled.
    Also, I can't share the code unfortunately, but that could give you a start.
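
    Since the original code can't be shared, here is a rough independent sketch of the idea (not that code), assuming all-triangle input: a primitive wrangle that marks shared longest edges in an edge group (arbitrarily named "dissolve" here), which a Dissolve SOP can then remove.

    // returns the length of the longest edge of triangle "prim",
    // writing its two point numbers into pa and pb
    float longestEdge(int prim; export int pa; export int pb)
    {
        int pts[] = primpoints(0, prim);
        float best = -1.0;
        for (int i = 0; i < 3; i++)
        {
            int p0 = pts[i];
            int p1 = pts[(i + 1) % 3];
            vector A = point(0, "P", p0);
            vector B = point(0, "P", p1);
            float elen = distance(A, B);
            if (elen > best)
            {
                best = elen;
                pa = p0;
                pb = p1;
            }
        }
        return best;
    }

    int pa, pb;
    longestEdge(@primnum, pa, pb);

    // find the primitive on the other side of that edge
    foreach (int pr; pointprims(0, pa))
    {
        if (pr == @primnum)
            continue;
        int nbPts[] = primpoints(0, pr);
        if (find(nbPts, pb) < 0)
            continue; // this prim does not share the full edge

        // only group the edge if the neighbour also has it as its longest edge
        int na, nb;
        longestEdge(pr, na, nb);
        if ((na == pa && nb == pb) || (na == pb && nb == pa))
            setedgegroup(0, "dissolve", pa, pb, 1);
    }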


  6. In general, yes, if you only really use point or primitive mode, and if the order of processing is not important
    (or you compensate for it in another way, such as calculating the same data again in the other points that need to access it,
    although this may lower the speed by such an amount that running in detail mode may be faster anyway).

    But there are other things you can do, like Skybar mentioned, or simply putting the wrangle in a for loop, potentially using the meta data block.

    Also, don't be afraid to mix and match point and detail mode, and divide your code over multiple wrangles, so you can have the best of both worlds.


  7. Not really, using the PolyExpand2D. You may also want to try the PolyWire SOP (and then flatten the result and remove the bottom).

    Another solution is putting a grid under it, making a distance field of the curve on the grid, and then using a Clip SOP based on this value (by transforming the height using the distance field),
    but this method creates an unaligned topology (which it sounds like you don't want); a sketch of it is below.
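
    A tiny sketch of that distance-field variant (a point wrangle on the grid, with the curve wired into input 1; "height" is a hypothetical channel):

    // distance from this grid point to the curve on input 1
    float d = xyzdist(1, v@P);
    // turn the distance into a height, so the surface drops off away from the curve
    v@P.y = chf("height") - d;
    // follow with a Clip SOP at y == 0 to cut the grid along the offset curve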




  8. Well, that error suggests that you are deleting nodes by cooking things :P

    I can understand if you write a shelf tool in Python to do something like that,
    or maybe a post-render script in a ROP node,
    but I'm really curious what kind of thing you want to achieve with this,
    as it indeed sounds like something that is not without risk.


  9. When you have VEX functions that can output different types, it's always a good idea to cast them explicitly.

    For example, if you want a random greyscale color based on the position:

    v@Cd = float(rand(v@P)); // resolves to the float version, then promotes it back to a vector with the same value in all components

    v@Cd = rand(v@P); // resolves to the vector version, giving different values per component



  10. The overblown color is what you get when you go above 1 with your color values,
    so your addition probably works fine; it's just that multiple points may be adding to the same target point.


  11. That kinda depends on what the input primitives are.
    There are many different nodes that can do it:

    for triangles:

    *Divide SOP, with triangulation disabled and "Remove Shared Edges" enabled
       *or similarly, an edge group with a Dissolve SOP

    for polylines:

    *Add SOP, with "Remove Primitives" enabled, and then recreating the polygons in the other tab

    *PolyPath

    *Join (if the vertices are pre-sorted)



  12. 56 minutes ago, DominikL said:

    They mentioned it being available in Houdini Core and FX; I hope that doesn't mean this will not be available for Indie :mellow:

    Normally there's more available in Indie than in Core (minus the render resolution / profit cap).
    Also, if you are talking about PDG, they show demo videos of it in Unity, which would be an exact use case for Houdini Indie,
    so I would sure hope so :P (disclaimer: I have yet to watch the video later today)


  13. Hey!

    Yup, they are like n-dimensional vector attributes.

    Binding with @ will generate the attribute on the output if it did not already exist, or read it if it does.
    One difference between using a binding and using an attribute import is that with a binding you can update the attribute throughout the code, multiple times.
    On the other hand, using for example the point() expression will always fetch the value of the attribute on the input (and not the intermediate value), as the sketch below shows.
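
    A small illustration of that difference (a sketch; "foo" is a hypothetical float attribute):

    f@foo = 1.0;  // binds/creates foo on the output and sets it
    f@foo += 1.0; // the binding reads back the updated value: foo is now 2.0

    // point() still fetches the value on the INPUT geometry,
    // not the intermediate 2.0 computed above
    float orig = point(0, "foo", @ptnum);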


  14. 29 minutes ago, kleer001 said:

    Group the edge then Peak SOP and transform by normals?

    that would work for the straight bits, but for the corners you would have to increase the magnitude by which you move them, to maintain the angles.

    Off the top of my head, I think it was dividing the offset by either the cosine of the angle or the cosine of half the angle.
    So an offset at the corner, at 45 degrees, would be 1/cos(45°), which is 1/0.70710678118 = 1.41421356237, if my memory serves me right.
    Which makes sense, as that is the square root of 2, and Pythagoras and stuff :P
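
    A point-wrangle sketch of that correction on a polyline, assuming two neighbours per corner point and a hypothetical "offset" channel (note that 1/cos of half the turning angle is the same as 1/sin of half the interior angle):

    int nbs[] = neighbours(0, @ptnum);
    if (len(nbs) == 2)
    {
        vector n0 = point(0, "P", nbs[0]);
        vector n1 = point(0, "P", nbs[1]);
        // edge directions pointing away from this corner
        vector e0 = normalize(n0 - v@P);
        vector e1 = normalize(n1 - v@P);
        float halfInterior = acos(clamp(dot(e0, e1), -1.0, 1.0)) * 0.5;
        // offset / sin(interior/2): for a 90 degree corner this gives
        // offset * sqrt(2), matching the 1/cos(45deg) example above
        float scale = 1.0 / max(sin(halfInterior), 0.001);
        vector bisector = e0 + e1;
        if (length(bisector) > 1e-6) // skip straight sections
            v@P -= normalize(bisector) * chf("offset") * scale; // push away from the corner
    }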
