About Alain2131
  1. Connect opposite points in vex

    Hello, In a Point Wrangle, make a for loop going over every point, and compute the direction it has relative to the "current one". Points that face each other will match the normal; others won't. So, if some point's direction matches the current point's normal, you know you need to connect those two points. As for the connecting, we can't simply connect all points whenever they find a match, because you'd end up with two lines for each pair. What I propose is to only make the connection when the match has an id higher than the current point. So, say the matching pair is 9 and 15. 9 matches with 15, and 9 is lower than 15, so a line is created. 15 also matches with 9, but 15 is higher than 9, so we don't connect. (In the example below, I dodge this issue altogether by not including points with a lower id than the current point in the loop.) Here's the code, to "visually" explain what I mean. This should go in an Attribute Wrangle, running over Points.

    // We loop over the other points.
    // We start at the current point +1,
    // so we know we will never have a "double-match"
    for(int i = @ptnum + 1; i < npoints(0); i++)
    {
        vector P1 = point(0, "P", i);    // Get the position of the other point
        vector dir = normalize(P1 - @P); // Get the direction of the other point relative to the current one
        if(dir == v@N)                   // If the direction matches the normal
        {
            addprim(0, "polyline", @ptnum, i); // Add the polyline
            break; // Since we have a match, we can stop looking for other points
        }
    }

    That's the idea anyway. The computed "dir" may not match the normal perfectly, and you'll need some threshold to compare dir and N (equal within a margin of 0.001 or whatever). A quick way to do that is abs(a - b) <= threshold. (I think that should work for vectors, but it might not be the ideal way to do it. Although it should be fine for your case.)
    EDIT : I quickly recreated the scene, and it seems to work. I had to use the "almost equal" trick, because the exact comparison had connected only 2 out of the 9 pairs. Next time, please share a scene file. connect_by_dir.hipnc
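    The "almost equal within a margin" comparison can be sketched outside VEX as well. Here is a plain-Python version of the component-wise check (the helper name and threshold are just illustrative):

    ```python
    def almost_equal(a, b, threshold=0.001):
        # Component-wise |a - b| <= threshold, same idea as the
        # abs(a - b) <= threshold check suggested above for VEX vectors
        return all(abs(x - y) <= threshold for x, y in zip(a, b))
    ```

    With this, a direction of (0, 0.9995, 0) still matches a normal of (0, 1, 0), which is exactly why the exact `dir == v@N` comparison only caught 2 of the 9 pairs.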
  2. Hey, glad it worked out !
    Okay, here's what I understand, and I have some questions. On different datasets, you've got a string attribute named "type" with different values. Let's say you import once, and the tool auto-populates B F C D. You now have a multiparm (or whatever) with which you can control the color and whatnot as you want. But now you import again, and you get F X A S. F is still there, but X, A and S are new. What do you expect to happen then? The easiest thing to do is, on each new import, wipe the entire UI and start from scratch. But this means that for different datasets sharing the same value, you'll have to re-enter the parameter manually. Here is a mockup where, when you import a new dataset, it resets all parms and populates with default ones. dynamic_HDA_parms_mockup.hipnc Basically, with a button, it reads the unique values of the attribute "type" (this info was fetched from within VEX into the "types" detail attribute) and sets the multiparm to the number of unique values in the attribute.

    node = hou.pwd()
    # After the Attribute Wrangle fetching the unique values of the type attribute
    geo = hou.node("./OUT_script").geometry()
    types = geo.stringListAttribValue("types")

    node.parm("multiparm").set(0)          # Reset multiparm
    node.parm("multiparm").set(len(types))

    # Set Name Parameter
    for i, type in enumerate(types):
        parm = hou.parm("name_" + str(i+1)) # parm numbering starts at 1, not 0
        parm.set(type)

    A few clarifications and details. I input the data by wiring it in, but this can be adapted to an import from a file. You said your data was on the prims; this is doable as well. If you want the values of the parameters from previous datasets to remain, I see two options.
    1 - Instead of wiping the multiparm each time, go through the current parameters, look for any new values, and create only the missing entries.
    2 - Have a file somewhere on disk containing the relevant values for the parameters, and after resetting the multiparm, auto-fill the parameters using those values.
    I think the second option will be harder to do, though.
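    Option 1 above (keeping the existing entries and appending only the new ones) boils down to a simple merge. A minimal hou-free sketch, with a hypothetical helper name:

    ```python
    def merge_types(existing, incoming):
        # Keep the current multiparm entries, append only values not seen yet,
        # preserving the order of first appearance
        return list(existing) + [t for t in incoming if t not in existing]
    ```

    For the example above, merging B F C D with F X A S keeps the four originals (so their parameters survive) and appends X, A and S as new entries.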
  4. One-liner for point(0,"P",8).z in point wrangle

    You indeed need to specify that point() will return a vector, by casting the call with the vector keyword. @P.z = vector(point(0, "P", 8)).z;
  5. how to load .bgo.sc without creating file node?

    I suggest looking into TOPs (aka PDG) for this. It can most likely meet your needs. It's super convenient for working on multiple assets at the same time and executing the same procedure on them all. Each file becomes a work item, on which you can run basically any process you want (HDA, geometry network, etc.).
  6. How to change the tangent direction

    I suggest transferring N from the base mesh to the curves as well; it should give CopyToPoints the missing orientation info.
  7. Splitting Kitbash pack

    Salut Davidt, Let's break down the error: "'method' object is not iterable" at line 7. What is line 7, and what do you do there?

    for prim in geo.prims:

    You iterate over geo.prims, but the error says geo.prims is not iterable. So geo.prims is the culprit. If you go look at the Geometry help page and search for prims, you'll see that it has parentheses after it. Simply put, you need to write geo.prims(), and that should fix your issue. That is because geo.prims is the equivalent of hou.Geometry.prims, which is the method (a quick Google search will explain what a method is) used to return the data you want. But it's not the same as calling it; you need the parentheses to do that.
    EDIT : A good way to test and poke around a bit is to print out the result of the things you try.

    node = hou.pwd()
    geo = node.geometry()
    print(geo.prims)

    Returns :
    <bound method Geometry.prims of <hou.Geometry frozen at 00000000E4000800>>

    But with the parentheses..

    node = hou.pwd()
    geo = node.geometry()
    print(geo.prims())

    Returns :
    (<hou.Polygon prim #0 of geometry frozen at 00000000E4003200>, <hou.Polygon prim #1 of geometry frozen at 00000000E4003200>, <hou.Polygon prim #2 of geometry frozen at 00000000E4003200>, <hou.Polygon prim #3 of geometry frozen at 00000000E4003200>, <hou.Polygon prim #4 of geometry frozen at 00000000E4003200>, <hou.Polygon prim #5 of geometry frozen at 00000000E4003200>)

    This is for a cube. Be aware that printing to the console is very slow, and can take a very long time depending on the amount of stuff to print. The Python Shell (Windows - Python Shell) is much faster to print to.
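    The method-versus-call distinction can be reproduced outside Houdini with a tiny stand-in class (purely illustrative, not the real hou.Geometry):

    ```python
    class Geometry:
        # Minimal stand-in for hou.Geometry, just to show the error
        def prims(self):
            return ("prim0", "prim1")

    geo = Geometry()

    # geo.prims is the bound method object itself; iterating over it raises
    #   TypeError: 'method' object is not iterable
    # geo.prims() calls the method and returns the iterable tuple
    ```

    Running `for prim in geo.prims:` on this class fails with the exact same TypeError as in the original script, while `for prim in geo.prims():` works fine.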
  8. Only allow HDA inputs of a particular node type

    Hey Hristo, That's pretty cool ! I like your Try/Except that sets the switch as a workaround for "Error while cooking." Yet another way to do it would be to make the check in onInputChanged, and if there's a problem, switch to input 0. In input 0, you can put an Error node with your own message. That way, it doesn't show the generic "Error: Error generated by Python node." update_curve_coords.hda P.S. Nice cloud =P
  9. Only allow HDA inputs of a particular node type

    Hi Hristo, yes you can. With a Python node as the first node in your HDA, you can fetch the input node's type, and check whether it is a type you want or not. In the example below, I check against a list to see if it is a type I don't want. The opposite would be easy to implement.

    wrongTypes = ["add", "box", "testgeometry_pighead"]

    node = hou.node(".").inputs()[0]
    nodeType = node.type().name()
    #print(nodeType)

    for type in wrongTypes:
        if nodeType == type:
            raise hou.NodeError("Node \"%s\" not compatible !" % nodeType)

    This method might not be the best, since if the input is animated, the Python node will evaluate at each frame. Since you have an HDA, you could go into the Type Properties, Script tab, add an "On Input Changed" event handler, and run (almost) the same code. That should be cleaner. errorOnInputNodeType.hipnc
    EDIT : Here's the code to do the opposite, keeping only the nodes whose type has been specified.

    goodTypes = ["add", "box"]

    node = hou.node(".").inputs()[0]
    nodeType = node.type().name()
    #print(nodeType)

    found = 0
    for type in goodTypes:
        if nodeType == type:
            found = 1
            break

    if found == 0:
        raise hou.NodeError("Node \"%s\" not compatible !" % nodeType)
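    The allow-list version from the EDIT can also be written with a membership test instead of a flag. A plain-Python sketch, with ValueError standing in for hou.NodeError so it runs outside Houdini:

    ```python
    def check_input_type(node_type, good_types=("add", "box")):
        # Allow only the listed node types; raise for anything else
        # (ValueError here stands in for hou.NodeError)
        if node_type not in good_types:
            raise ValueError('Node "%s" not compatible !' % node_type)
    ```

    `node_type not in good_types` does the same job as the loop-and-flag, in one expression.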
  10. Bullet solver strange behavior

    It's hard to say what's wrong with your setup without seeing it. Here's a similar setup, using the SOP RBD Solver. With 10 iterations, there's some jitter (nothing as bad as yours). 25 is pretty good, and 50 is very good, enough for the Sleeping to kick in. The low FPS is caused by the gif itself. simpleChain_sim_anim.hipnc
  11. Delete Boundary primitives

    ... or, if you're adamant about doing it with VEX, here's my take on it:

    int pts[] = primpoints(0, @primnum);

    int count = 0;
    foreach(int pt; pts)
    {
        count += neighbourcount(0, pt);
    }

    if(count <= 6)
        removeprim(0, @primnum, 1);

    Basically, the idea is to count how many points are connected to the current primitive's points. There should be fewer neighboring points on the contour than in the "middle". In a Wrangle set to Run Over Primitives, get all the current primitive's points. Then, for each point, add up its neighbourcount. Looking at the Geometry Spreadsheet, we can see that the two lowest totals are 5 and 6. So, if the count is 6 or lower, removeprim(0, @primnum, 1). It feels a bit cleaner with only one node, but it's a bit more complicated. It's up to you which you prefer! I'm sure there are plenty of other ways.
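    The same thresholding logic can be sketched in plain Python on toy data. The data structures here are hypothetical stand-ins for primpoints() and neighbourcount(), just to show the decision:

    ```python
    def boundary_prims(prim_points, neighbour_count, threshold=6):
        # prim_points: list of point-number lists, one per primitive
        # neighbour_count: per-point neighbour counts
        # For each prim, sum the neighbour counts of its points and
        # flag the prim when the total is at or below the threshold
        return [i for i, pts in enumerate(prim_points)
                if sum(neighbour_count[p] for p in pts) <= threshold]
    ```

    A contour prim whose points have few neighbours gets flagged; an interior prim with well-connected points does not.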
  12. Delete Boundary primitives

    Hi ! You can put an Edge Group on the grid before the convertline, set to "Include by Edges - Unshared Edges". This will select the contour of the grid. The convertline sadly gets rid of all groups, but with a Group Copy you can copy the group back after the convertline. You'll need to set "Group Name Conflict" to "Overwrite" on the Group Copy node. Then, a Dissolve will delete the edges as you want. Blast or Delete won't work with this method.
  13. matrix decrypting text effect

    Hi bonassus ! You can get around the "input issue" with an expression referencing a string from somewhere else. In practice, you can use the details expression to fetch a string from a specific node. This allows you to do any manipulation on that string, and the Font will happily regenerate every frame. Here's my crude implementation of this idea. scramble_text.hipnc
  14. Turn string values into numbers

    Hey Tim, What I'd do is a For-Each loop with the name attribute as the piece attribute. Create a Meta Import, plug it into a wrangle, and use the iteration value as the unique value. If you just need unique values, that'll work. But if you need the value to be consistent with the names across point clouds, it won't be. A side effect of this method is that it will sort the points based on the name. If you don't want the points to be reordered, you can do this instead; it might be simpler. Note that it is slower with millions of points. Hope that helps ! uniqueValueByName.hipnc
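    One hou-free way to picture the order-preserving variant is mapping each name to an integer in order of first appearance, so the points themselves are never resorted (a sketch of the idea, not the .hipnc's exact implementation):

    ```python
    def name_to_id(names):
        # Assign one integer per distinct name, in first-appearance order,
        # without reordering the points themselves
        lookup = {}
        return [lookup.setdefault(n, len(lookup)) for n in names]
    ```

    Note that the ids are only consistent across point clouds if the names appear in the same first-appearance order, which matches the caveat above.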
  15. Violation of strict nesting of blocks...

    Hi ! So, Compile Blocks.. Super useful, but a handful to work with. I'll try to explain what's going on.

    Part 1 - The foreach_end2 loop.
    You do not actually need a Fetch Input for the loop; the problem is the references made from within the foreach_end2 loop. I could not have figured this out from the error message alone. From working with Compile Blocks a bit, I know that references are much more strict than normal. Knowing that, I tried to strip out anything that references out of the loop, and anything that references into it. Based on the dotted blue lines, the two places that go "in" and "out" are "generate_line_points" (out) and "bend1" (in). For now, we'll just strip out the out reference and come back to it later. As for the Spare Input 1 of bend1, we can replace it with the "grass_height_mult" node. Notice that I remove the expression on the Length parameter; I took note of it and will use it later. The rationale for the bend1 case is that if you reference a forloop_begin, its data will change at each iteration. Without a Compile Block it's no big deal, since the loop will just finish and the reference will hold the data of the last iteration. But a Compile Block, which has to know everything in advance, does not like this at all. In your case, referencing the input works perfectly, so that fixes this issue.

    Part 2 - The "distanceAlongGeometry" nodes.
    At this point, the Compile End should show another message, something about "distancealonggeometry1" and some reference issue. Okay, we'll ignore that for now. Bypass. The Compile now complains about "distancealonggeometry2". Right, it complains for one, it complains for the other, makes sense. Bypass. And now, huh ! The Compile Block actually has no error anymore ! But the result is now nowhere near the original one.

    Part 3 - Bring the result back on par with the original.
    Alright, now that we've got the culprits down, we can start making them Compile-friendly. I will be doing a few tricks to get mostly the same result as you, but I won't go into all the details, as this already-long answer would go on forever. (After finishing, it looks like I gave all the details anyway.)

    Part 3.1 - The foreach_end2.
    I suggest this one not be a loop at all. Instead of iterating over each point individually, making a line and copying the line onto the point, we can get rid of the copytopoints and the loop using one trick with the Add node. We can specify an attribute to tell the Add "hey, connect the points that share this attribute value". So, if we've got an attribute, say "class", that is the same on each point of a line, the Add will only connect those together, ridding us of the loop. With a bit of modification to the wrangle, we can get there. Here's the new wrangle (running over Points) :

    float length = point(-1, 'height', 0) * 0.85 * f@height_mult;
    int npt = 5;
    float increment = length / npt;

    vector pos = @P;
    for (int i = 0; i <= npt; i++)
    {
        int pt = addpoint(0, pos);
        setpointattrib(0, "class", pt, @ptnum);
        pos.y += increment;
    }
    removepoint(0, @ptnum);

    Note that I got rid of all spare parameters and added back only one, referencing the main loop's "foreach_begin1". For the length computation, instead of doing it in a parameter, I do it in the wrangle itself, and I don't need the second spare input, since the height_mult value is already accessible on the wrangle's current point. Doing the removepoint at the end might seem weird, but you'll notice in the gif that once the class attribute is added on the Add node, it connects the bottom points. That's because all the original points are still there, with class being 0. Removing the point is the quick and dirty way to handle it, but there are other ways. No matter, as this works just fine.

    Part 3.2 - The distanceAlongGeometry1 node.
    This is a bit tricky. Actually, both are, in different ways. For the (1), what you do with it is compute the distance from the bottom to the top, map it to the longest strand, and multiply the result with a curve. We'll have to do this manually. First, we need to know the length of the longest strand. A Measure node, set to Perimeter, will give the length of each strand as an attribute named "perimeter". I can then promote it to detail, set to Maximum, to get the largest value. Second, we need to know where each point is on its primitive. This is known as the curveu in Houdini, a value between 0 and 1. It can be computed using the Resample node: untick "Maximum Segment Length" and tick "Curve U Attribute". Third, we need the "multiply the result with a curve" part. A wrangle takes care of this. Run it over Points; the first input is the resample, the second input is the attribpromote.

    float max = detail(1, "maxPerimeter");
    float dist = prim(0, "perimeter", @primnum);

    float sample = fit(f@curveu * dist, 0, max, 0, 1);
    f@peak_mask = chramp("Remap", sample);

    Alright, looking good.

    Part 3.3 - The distanceAlongGeometry2 node.
    Let's break down what you are doing : you are giving a profile to a narrow strip of polygons, forming a single leaf. I would like to suggest another way of doing the same thing. Start from a single line instead, and then leverage the Sweep node to do the profile. We start with a Line node. Add a spare parameter to reference the main loop's foreach_begin1, and copy-paste the expression in sizey from the grid into the line's Length. Set the Points to 5, matching the Grid's Rows. Now, onto the Sweep. Place one under the Line. The Shape is a Ribbon. Its construction plane is different from the Grid's, so in Construction, under Up Vectors, set the Target Up Vector to Z Axis. We need UVs, so in UVs and Attributes, tick Compute UVs. While comparing the UVs with the original version, I found that we need to untick Length-Weighted UVs and untick Flip Computed UVs. Back in Surface, set the Columns to 1, as we don't want subdivisions along the leaf. We then want to scale along the curve, which is exactly the name of the parameter we want : Apply Scale Along Curve. You can copy the original curve's values into this one. At this point it works, but we're missing the width scale. So add a spare parameter, once again pointing to foreach_begin1, and copy the grid's sizex expression into the sweep's Width. While templating "pinch_along_length", I found while tweaking the multiplication at the end that I needed to multiply by 2 (instead of the original 0.4). Which makes sense, since you then multiply by 5 inside pinch_along_length, and 0.4*5 = 2.

    And voilà, the result should now be the same as before, and the Compile Block should work just fine !

    .. Although I said that, you'll notice a difference between the original and the compiled version. This is due to bend1 referencing the first point from "grass_height_mult". Remember what I said about the non-compiled version referencing the data from the last iteration of the nested for loop ? To prove this, we can change the point expression on bend1 that fetches the height_mult to get the data from the last point instead. But this is hard-coded, and the last point depends on the "grass_amount". This is just to prove a point; I don't believe it's actually important to fetch the last point's information here.

    Phew, that was a long-winded one. I hope that made sense ! Here's the scene, but I don't have some of your plugins, so I recommend you make those changes in your own scene instead. Also, my version is Apprentice. Reeds_test_fix.hipnc
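    As an aside, the fit() used in the Part 3.2 wrangle remaps a value from one range to another, clamping outside the old range. A plain-Python sketch of that behavior (a hypothetical helper, mirroring how VEX's fit() is used above):

    ```python
    def fit(value, omin, omax, nmin, nmax):
        # Remap value from [omin, omax] to [nmin, nmax], clamped,
        # the same way the peak_mask wrangle remaps curveu * perimeter
        t = (value - omin) / (omax - omin)
        t = max(0.0, min(1.0, t))  # clamp outside the old range
        return nmin + t * (nmax - nmin)
    ```

    The clamping is what keeps shorter strands from sampling the ramp past 1 when the longest strand defines the range.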