About Alain2131

  1. Violation of strict nesting of blocks...

    Hi ! So, Compile Blocks.. Super useful, but a handful to work with. I'll try to explain what's going on.

    Part 1 - The foreach_end2 loop. You do not actually need a Fetch Input for the loop; the problem is the references that are made from within the foreach_end2 loop. I could not have figured this out from the error message alone, but from working with Compile Blocks a bit, I know that references are much stricter than normal. Knowing that, I tried to strip out anything that references out of the block, and anything that references into it. Based on the dotted blue lines, the two places that go "in" and "out" are "generate_line_points" (out) and "bend1" (in). For now, we'll just strip out the out reference, and come back to it later. For Spare Input 1 of bend1, we can replace it with the "grass_height_mult" node. Notice that I remove the expression on the Length parameter; I took note of it and will use it later. The rationale for the bend1 case is that if you reference a forloop_begin, its data changes at each iteration. Without a Compile Block that's no big deal : the loop just finishes, and the reference simply holds the data of the last iteration. But a Compile Block, which has to know everything in advance, does not like this at all. In your case, referencing the input works perfectly, so that fixes this issue.

    Part 2 - The "distanceAlongGeometry" nodes. At this point, the Compile End should show another message, something about "distancealonggeometry1" and references. Okay, we'll ignore that for now. Bypass. The Compile Block now complains about "distancealonggeometry2". Right, complains for one, complains for the other, makes sense. Bypass. And now, huh ! The Compile Block actually has no error anymore ! But the result is now nowhere near the original one.

    Part 3 - Bring the result back on par with the original.
    Alright, now that we've got the culprits down, we can start making them Compile-friendly. I will be doing a few tricks to get mostly the same result as you, but I won't go into all the details, as this already-long answer would go on forever. (After finishing, it looks like I gave all the details anyway.)

    Part 3.1 - The foreach_end2. I suggest this not be a loop at all. Instead of iterating over each point individually, making a line and copying the line onto the point, we can get rid of the copytopoints and the loop using one trick with the add node. We can specify an attribute to say "hey, I want the add to connect the points that share this attribute value". So if we've got an attribute, say "class", that is the same on each point of a line, the add will only connect those together, ridding us of the loop. With a bit of a modification to the wrangle, we can get there. Here's the new wrangle (Running over Points) :

    ```
    float length = point(-1, "height", 0) * 0.85 * f@height_mult;
    int npt = 5;
    float increment = length / npt;

    vector pos = @P;
    for (int i = 0; i <= npt; i++)
    {
        int pt = addpoint(0, pos);
        setpointattrib(0, "class", pt, @ptnum);
        pos.y += increment;
    }
    removepoint(0, @ptnum);
    ```

    Note that I got rid of all spare parameters, and added back only one, referencing the main loop's "foreach_begin1". For the length computation, instead of doing it in a parameter, I do it in the wrangle itself, and I don't need the second spare input, since the height_mult value is already accessible on the current point of the wrangle. Doing the "removepoint" at the end might seem weird, but you'll notice in the gif that once the class attribute is added in the add node, it connects the bottom points. That's because all the original points are still there, with class being 0. Removing the point is the quick and dirty way to fix that, and there are other ways to do it, but no matter, as this works just fine.

    Part 3.2 - The distanceAlongGeometry1 node.
    This is a bit tricky. Actually, both are, in different ways. For the (1), what you do with it is compute the distance from the bottom to the top, map it to the longest strand and multiply the result by a curve. We'll have to do this manually.

    First, we need to know the length of the longest strand. A measure node, set to Perimeter, will give the length of each strand as an attribute named "perimeter". I can then promote it to detail, set to Maximum, to get the largest value. Second, we need to know where each point is on its primitive. This is known as the curveu in Houdini, a value between 0 and 1. It can be computed using the resample node : untick "Maximum Segment Length", and tick "Curve U Attribute". Third, we need the "multiply the result by a curve" part. A wrangle will take care of this. Wrangle over Points, first input is the resample, second input is the attribpromote.

    ```
    float max = detail(1, "maxPerimeter");
    float dist = prim(0, "perimeter", @primnum);

    float sample = fit(f@curveu * dist, 0, max, 0, 1);
    f@peak_mask = chramp("Remap", sample);
    ```

    Alright, looking good.

    Part 3.3 - The distanceAlongGeometry2 node. Let's break down what you are doing : you are giving a profile to a narrow strip of polygons, forming a single leaf. I would like to suggest another way of doing the same thing : start from a single line instead, and then leverage the sweep node to do the profile. So we start with a line node. Add a spare parameter to reference the main loop's foreach_begin1, and then copy-paste the expression in sizey from the grid into line1's Length. Set the Points to 5, matching the Grid's Rows. Now, onto the sweep. Place one under the Line. The Shape is a Ribbon. Its Construction Plane is different from the Grid's, so in Construction, under Up Vectors, set the Target Up Vector to Z Axis. We need UVs, so in UVs and Attributes, tick Compute UVs.
    While comparing the UVs with the original version, I found that we need to untick Length-Weighted UVs, and untick Flip Computed UVs. Back in Surface, set the Columns to 1, as we don't want subdivisions along the leaf. We then want to scale along the curve, which happens to be the name of the parameter we want : Apply Scale Along Curve. You can copy the original curve's values into this one. At this point it works, but we're missing the width scale. So add a spare parameter, once again pointing to foreach_begin1, and copy the grid's sizex expression into the sweep's Width. While templating "pinch_along_length", I found while tweaking the multiplication at the end that I needed to do *2 (instead of the original *0.4). Which makes sense, since you then multiply by 5 inside pinch_along_length, and 0.4*5 = 2.

    And voilà, the result should now be the same as before, and the Compile Block should work just fine ! .. Although I said that, you'll notice a difference between the original and the Compiled version. This is due to bend1 referencing the first point from "grass_height_mult". Remember what I said about the non-compiled version referencing the data from the last iteration of the nested for loop ? To prove this, we can change the point expression on bend1 that fetches the height_mult to get the data from the last point instead. But this is hard-coded, and the last point depends on the "grass_amount". This is just to prove a point; I don't believe it's actually important to fetch the last point's information here.

    Phew, that was a long-winded one. I hope that made sense ! Here's the scene - but I don't have some of your plugins, so I recommend doing those changes in your own scene instead. Also, my version is Apprentice. Reeds_test_fix.hipnc
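    The remap in Part 3.2 boils down to a clamped fit of "distance along this strand" against the longest strand. Here's a plain-Python sketch of that math (the function name and sample values are mine, not from the scene file), just to make the mapping concrete :

    ```python
    def fit(value, omin, omax, nmin, nmax):
        """Clamped remap, mirroring VEX's fit()."""
        if omax == omin:
            return nmin
        t = (value - omin) / (omax - omin)
        t = max(0.0, min(1.0, t))  # fit() clamps, unlike fit01/efit variants
        return nmin + t * (nmax - nmin)

    # Example: a point halfway up a strand of length 4, longest strand is 8.
    curveu = 0.5          # position along its own primitive, 0..1
    perimeter = 4.0       # this strand's length
    max_perimeter = 8.0   # longest strand in the geometry

    sample = fit(curveu * perimeter, 0.0, max_perimeter, 0.0, 1.0)
    print(sample)  # 0.25 - distance along this strand, normalized to the longest one
    ```

    That `sample` is then what gets fed into the ramp (chramp) to produce the mask.
    
    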
  2. generate to fill up wire curves in tube meshes

    You're right, this requires a clean tube (with the ends uncapped, and a constant point count in the rows). If your geometry isn't as clean as that (like you said, remeshed), then you could maybe use the Labs Straight Skeleton to generate the curve. It would be a bit harder to generate the pscale and Normal though, but not impossible. Glad you like it !
  3. generate to fill up wire curves in tube meshes

    Hey Tagosaku ! What I would do is first convert that tube into a polyline, extracting the pscale and the Normal as well. Then I'd copy circles onto these points to create the cross-sections, scatter points on those cross-sections, and finally connect them. Like so. Hope that helps ! findShortestPathInMesh_v004_del.hiplc
  4. By providing an example scene it'd be easier for me (and anyone) to understand and help. So you have an array attribute on each point, each containing every point to be deleted ? Or is it a string/int/float attribute (like toDelete = "yes", toDelete = 1, toDelete = 1.0) ? But there is a possibility that every point would be deleted, so you want to keep at least one in that case, even if it's in the list you have ? I really don't get the part about blasting all points but one, but also including all points from the array. Do you mean to also delete all points in the array ? But you already delete all points but one, so that doesn't make much sense to me... Or do you mean to delete all points except the ones in the array ? And if the array is empty, then just keep one ? It might be possible to do with just nodes : blast the points, then have a switch check if there are 0 points left ( if(npoints(0)==0, 1, 0) ) and, in that case, switch to a second input which has a blast keeping only one point. But you still need to blast the points first. I would really use a wrangle for any of that; it'll be easy to do anything you want. I still feel like I didn't understand exactly what you want, so do post a scene.
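    To illustrate the "blast everything in the list, but always keep at least one point" interpretation, here's a plain-Python sketch of the logic (the function and its names are hypothetical - in Houdini this would be the wrangle or the blast-plus-switch described above, not Python) :

    ```python
    def points_to_keep(num_points, to_delete):
        """Return the point numbers that survive deleting `to_delete`,
        keeping at least one point even if every point is listed."""
        doomed = set(to_delete)
        kept = [pt for pt in range(num_points) if pt not in doomed]
        if not kept and num_points > 0:
            kept = [0]  # arbitrary survivor, mirroring the "keep one" switch branch
        return kept

    print(points_to_keep(5, [1, 3]))     # [0, 2, 4]
    print(points_to_keep(3, [0, 1, 2]))  # [0] - all listed, but one survives
    ```
    
    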
  5. You have an array attribute (presumably on the detail), and you want to blast these points ? I don't know about a VEXpression directly, but what I often do is have a wrangle that sets an attribute or a group on the points to delete, and then use a blast with @toDelete=1 or whatever. You can also just do a removepoint(0, @ptnum) inside the wrangle, but it's actually slower than a blast (use the Performance Monitor to get the real speed reading). So the wrangle, running over Detail, would look like :

    ```
    int ptArray[] = detail(0, "pointsToDelete");
    foreach(int pt; ptArray){
        setpointattrib(0, "toDelete", pt, 1);
    }
    ```

    And then the blast node (set to Points) with @toDelete=1, as I mentioned before. But yeah, this might not be as elegant as a VEXpression that would work directly in the Group parameter of the blast node. Although I don't know how to do it with an array directly, if your attribute is a string with a space separating each point number (something like toDelete = "0 5 6 8 12 13 24 35 86"), then you could use details(0, "toDelete") and that's it. All you really need is the point numbers one after the other with a space in-between. If you're adamant about using an array attribute specifically, I'd look into converting that array to a string, maybe with a Python Expression. I never used those, so I don't know how.
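    The array-to-string conversion suggested at the end is a one-liner in Python. A hedged sketch (the list literal stands in for the array attribute - in an actual Python Expression you'd read it through hou's geometry API rather than hard-coding it) :

    ```python
    # Convert an int array into the space-separated string that a
    # blast's Group parameter understands (e.g. "0 5 6 8").
    points_to_delete = [0, 5, 6, 8, 12, 13, 24, 35, 86]  # stand-in for the array attribute
    group_string = " ".join(str(pt) for pt in points_to_delete)
    print(group_string)  # "0 5 6 8 12 13 24 35 86"
    ```
    
    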
  6. Wrangle Cook Time

    Hey Skybar, sorry for not updating this post, but here's an update : I filed an RFE, and just received a mail confirming that this is a bug, and that it has been logged in SESI's database: Bug (ID#95752). She said she'll update me when the developers have produced a fix, so I'll update this post then. Thank you for your answer !
  7. Sort points

    Hey Philip ! That's an interesting one. I took a slightly different approach than you did.

    So basically, in a wrangle over Detail, I set some general variables. Mainly, two arrays : one to store the centers, and another to store the ymin and ymax (as a vector2). The idea is then to run over each point and compare its position to the centers (with a threshold, like you did). If the position matches one of the centers, we get the relevant ymin and ymax, and update them if the current position is higher or lower. If there isn't a match to the centers, that means we have a point from a new line, so we add the point as a new center, along with its ymin and ymax (initialized to the same value). And that's it ! There is a second loop to create the polylines from the data collected by the main loop.

    ```
    int almostEqual(float a, b, threshold){
        return b + threshold >= a && b - threshold <= a;
    }

    /* Those two arrays will always have the same length.
       The same index corresponds to the same center */
    vector centers[]; // contains x and z positions for each center
    vector2 yvals[];  // contains ymin and ymax for each center

    /* The idea is, for each point, compare its position to the centers.
       When it matches, update the yvals if it is higher or lower.
       If there is no match, add a new entry to both centers and yvals */
    float threshold = 1; // this is the same as your tolerance
    for(int i=0; i<npoints(1); i++){
        vector curP = point(1, "P", i);

        int found = 0;
        foreach(int j; vector center; centers){
            int x = almostEqual(curP.x, center.x, threshold);
            int z = almostEqual(curP.z, center.z, threshold);
            if(x==1 && z==1){
                found++;

                // update yvals if the current height is higher or lower
                float ymin = yvals[j][0];
                float ymax = yvals[j][1];
                if(curP.y < ymin)
                    yvals[j][0] = curP.y;
                else if(curP.y > ymax)
                    yvals[j][1] = curP.y;

                // We already matched, so we know that no other center interests us.
                break;
            }
        }
        if(found == 0){
            // add to centers and yvals
            append(centers, set(curP.x, 0, curP.z));
            append(yvals, set(curP.y, curP.y));
        }
    }

    // Create Lines
    for(int i=0; i<len(centers); i++){
        vector center = centers[i];
        vector2 yval = yvals[i];

        vector p0 = center;
        p0.y = yval[0];
        vector p1 = center;
        p1.y = yval[1];

        int newP0 = addpoint(0, p0);
        int newP1 = addpoint(0, p1);
        addprim(0, "polyline", newP0, newP1);
    }
    ```

    Here's a quick gif overview. Hope that helps ! sort.hipnc

    P.S. In retrospect, I could have used an array of vector4, using x and w to store the x and z of the center, and y and z to store the ymin and ymax. That would leave us with only one array. Would it be cleaner ? Or simply harder to understand ? You can be the judge of that.
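    The main loop above can be sketched in plain Python to see the data flow (the function name and sample points are mine; the real thing runs as VEX in a Detail wrangle) :

    ```python
    def cluster_columns(points, threshold=1.0):
        """Group points that share (x, z) within `threshold`, tracking the
        y-extent of each group. Returns one ((x, z), [ymin, ymax]) pair per
        vertical line - the data the second VEX loop turns into polylines."""
        centers = []  # (x, z) per cluster
        yvals = []    # [ymin, ymax] per cluster
        for x, y, z in points:
            for j, (cx, cz) in enumerate(centers):
                if abs(x - cx) <= threshold and abs(z - cz) <= threshold:
                    yvals[j][0] = min(yvals[j][0], y)
                    yvals[j][1] = max(yvals[j][1], y)
                    break
            else:  # no center matched: this point starts a new cluster
                centers.append((x, z))
                yvals.append([y, y])
        return list(zip(centers, yvals))

    # Two noisy vertical lines: one near x=0, one near x=10.
    pts = [(0, 0, 0), (0.2, 5, 0.1), (10, 2, 0), (10.3, 7, 0.2)]
    for center, (ymin, ymax) in cluster_columns(pts):
        print(center, ymin, ymax)
    ```
    
    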
  8. Python Callback and Multiparm

    I do agree that a Python node isn't super ideal. Another issue is that if you have an animation, it'll be executed at each frame, which you definitely don't want. I think you'll have to have a button to kind of "stash" the input. This will make your life easier, and potentially the user's as well. (I think you should literally stash the input; this might save you some headaches when referencing the input geometry inside of your subnet.) That way, you don't have to worry about automatically detecting a change, and it gives the user more control. I don't think that's sad, I think it's simply better that way. A few of SideFX's tools are made that way. Look at the Attribute Paint node : if the input changes, the paint becomes all mangled until the user manually presses "Recache Strokes".

    What I'd do is make a button that calls a script on the HDA (instead of an OnInputChanged, it'd be a simple Python Script). That function would handle the creation of the UI stuff, as well as the creation of the subnets inside the HDA, making sure that the UI stuff is referenced in the relevant places in the subnet. So instead of trying to make the tool detect a change automatically, changing the UI, and then, when the UI changes, calling another script to create the subnet, with scripts all over the place, all your scripts would be in one location.

    I'm guessing that you want that dynamic callback to make it automatic when the user adds or removes items from the multiparameter list ? My opinion is that the user shouldn't even be able to do that, since this should always be relative to the number of pieces. What if there are 10 pieces, the multiparameter has them all, and the user adds one ? The piece name can't be any of them; all the pieces are already accounted for. And the user can't even change the piece name. (I would even go as far as to not make it a multiparameter at all, and straight up automatically add spare parameters. And if the user doesn't want a piece, I'd add a tickbox with a name like "Ignore Piece".) Just my two cents.

    I know I dodged the "trigger callback script when modifying a multiparameter by script" question. I don't know about that, I just think you can go without doing it. Sorry I couldn't be of help for that (which was the question in and of itself). If you still want to stick with the multiparameter and automatic creation along with the callback, then yeah, a button to force the update should work.
  9. Hey Ronald ! The easiest way I found to handle rotation like you want is to create an initial orient attribute on your points before any transformation. And that's it ! The awesome thing is that the transform node not only transforms P, but also any other attributes it can, based on their type. So if there is, for instance, an existing orient attribute, the transform node will modify it accordingly, without you having to do any matrix math ! And you can daisy-chain multiple transforms without having to re-compute the orient each time. In this gif, I show exactly this. The extract_smth_smth wrangle is superfluous; it's only there to extract the x y z vectors from the orient, to be able to visualize them with the visualize node. (Also - I just found out that I could have used orient directly in the visualize node instead of the x y z like I did. So that wrangle really is superfluous.)
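    Under the hood, "rotating the orient along with P" is just a quaternion multiplication, which is why daisy-chaining transforms composes cleanly. A hedged plain-Python sketch (assumptions: Houdini-style (x, y, z, w) quaternion layout, rotation applied as R * q; this is an illustration of the math, not the transform node's actual implementation) :

    ```python
    import math

    def qmul(a, b):
        """Hamilton product of two quaternions stored as (x, y, z, w)."""
        ax, ay, az, aw = a
        bx, by, bz, bw = b
        return (
            aw*bx + bw*ax + ay*bz - az*by,
            aw*by + bw*ay + az*bx - ax*bz,
            aw*bz + bw*az + ax*by - ay*bx,
            aw*bw - (ax*bx + ay*by + az*bz),
        )

    def rot_y(degrees):
        """Quaternion for a rotation of `degrees` around the Y axis."""
        h = math.radians(degrees) / 2.0
        return (0.0, math.sin(h), 0.0, math.cos(h))

    # "Daisy-chaining transforms": two 45-degree Y rotations applied to an
    # identity orient give the same quaternion as one 90-degree rotation.
    orient = (0.0, 0.0, 0.0, 1.0)
    orient = qmul(rot_y(45), orient)
    orient = qmul(rot_y(45), orient)
    print(orient)  # approximately (0.0, 0.7071, 0.0, 0.7071)
    ```
    
    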
  10. Stroke node input geometry force recook

    Sorry for the double post, but you can see this one as a TL;DR. The reason why you cannot isolate a part to draw on is that the stroke state (what allows you to draw on the geometry, and which doesn't know about the stroke node inside your HDA) uses the input of the HDA to gather the projection geometry. It's easy to prove - plug a sphere into the stroke node, go back up to the HDA, and draw on the geometry. To fix that, maybe you could figure out how to call the stroke state manually from a custom state, and modify the kwargs parameter to point to the node you want. But I don't know if this is easily doable, or doable at all.
  11. Stroke node input geometry force recook

    Hey Sebastian (or salut, fellow Quebecer), I'm sorry I don't have a solution, just findings, thoughts and assumptions. It looks like the issue is with the handle of the stroke. I think that behind the scenes, whatever does the projection onto the geometry uses the input of the current node for the projection. So what I assume (this is only speculation) is that since the handle is on your HDA, it uses the input of the HDA instead of the input of the stroke node. When I go inside the HDA and change the Isolate group toggle, drawing on the isolated part works correctly, same when not isolated. And when changing the input of the node itself (removing a cube from there), the drawing works (which means the projection geometry is being updated, it just might not be what we expect). That's the reason I made that assumption - why else would it work when we're inside the HDA (using the stroke's "real" handle) versus outside the HDA (using the HDA's intermediary handle) ? So, erhm, yeah, I don't know if that helps at all, or if this is anything close to the real issue.

    EDIT: It sounds like, from SideFX's documentation on the stroke state (what I meant by handle), that the stroke indeed uses the HDA's input instead of the actual stroke node's for geometry projection. Otherwise they wouldn't have an input selection option for the actual stroke node. Sssooooo, the only solution I can come up with is to create a custom state, to handle the projection yourself, where you'll be able to project onto any geo you want. Maybe copy the stroke stuff, and modify only the part for the projection geometry input. But I don't know how to create a custom state thingy. I'd start from here.
  12. Python Callback and Multiparm

    Hey Zhaie ! Or bonjour, as you're in Montréal. So, I'm having a hard time understanding your intentions. First and foremost, you want to update a multiparm to match the number of pieces in your input geometry, right ? As I understand it, you got this part working. And now, you want to fill the individual parameters inside the multiparm ? Like, naming the Piece Name automatically ? I'm not sure about the two multiparms, one has a Python script, the other doesn't. Do you want to update one of them, or both of them dynamically ?

    What I would do is have a wrangle in the node to gather the info you need in a clean fashion. So instead of passing on the input geo, I'd do something like get the uniquevals of the name attribute, and create a point for each of them with said name. I'd end up with X points, X being the number of unique names in the input. Then, I'd place a Python node, with the first input in the main execution chain, and the second input from that wrangle. I do that instead of On Input Changed on the HDA itself, since that doesn't take into account when the input geometry changes. I'd handle the UI updating stuff in there : gather the number of points, set the multiparameter to that, and then for each point, get the name attribute value and set the individual parameter values. Kinda like this :

    I'm really not sure if that's what you need, but hopefully that helps ! In the first part, you can see that with the On Input Changed, it works when changing the input node, but when modifying an upstream node, it is not updated. That's the only reason I chose a Python node inside the HDA to update the UI over using the On Input Changed. And do keep in mind that if the user can change any of the parameters updated by the script, they'll get overwritten by the script. I see that you have disabled the parameters for the name; that's a good solution. But again, if you have some non-script-updated parm that the user needs to edit himself, and somehow the input ends up with 1 piece, then all of the custom-entered data will be wiped away. You might want an "Update UI" button to work around that, if that is a potential issue for you.

    Here's the Wrangle script (plug the geometry in the second input, and Run Over Detail) :

    ```
    string names[] = uniquevals(1, "point", "name");
    for(int i=0; i<len(names); i++){
        int pt = addpoint(0, 0);
        setpointattrib(0, "name", pt, names[i]);
    }
    ```

    Here's the Python script (plug the wrangle in the second input) :

    ```python
    node = hou.pwd().parent()
    geo = hou.pwd().inputs()[1].geometry()

    # Set the multiparameter count
    count = len(geo.points())
    node.parm("numberOfPieces").set(count)

    # For each point, get the name, then set the pieceName parameter
    for i, point in enumerate(geo.points()):
        name = point.attribValue("name")
        node.parm("pieceName" + str(i+1)).set(name)
    ```
  13. Hey Masoud, a quick Google search would have found you a few places with the answer; an odForce search would have answered you as well. Real quick : the data is stored in a detail attribute on the node, so you can retrieve it using the detail function.
  14. Extract rotation from packed geometry to a bone (Python)

    Hey guys ! Just thought I'd come back to this for a bit to add some functionality. I had completely forgotten about this until I received some notifications of activity on the thread. I'm glad it's useful to some people, at least !

    1) There is now a toggle giving you the choice between separating each packed piece into its own geometry node, or having only one geo with all of the skinning. The previous default was a separate geo for each packed piece, which is heavier in general.
    2) I added some UI to be able to name the subnet and the geometry nodes, for convenience.
    3) There is also a progress bar showing the progress of the baking process.
    4) And finally, I applied a Euler Filter to the rotation.

    I did some other inconsequential small tweaks. I checked, and having scaled packed pieces works perfectly, no matter if the scale is applied before the simulation, after it, or both. Same for pre/post translation/rotation. The extracttransform node really works like a charm. rigidBody_baker_tests.hip RigidBody_Baker.hda

    P.S. This has now turned more into a Tools (HDA) topic than a scripting one. If an admin would prefer that I create a new topic in the Tools section, do tell me.
  15. Pop source

    Hi there ! Let's break it down a bit, and start by just emitting particles from frame 30. To do that, we can use a comparison with the current frame : "Is the current frame at or above 30 ?". So this is really simple :

    @Frame >= 30 // The "equal" makes sure it is true as soon as frame 30 is reached

    This will evaluate as 0 before frame 30, then 1 at frame 30 and after. For emitting below frame 40, the logic is the same, with the comparison sign changing direction :

    @Frame <= 40

    But you want above frame 30 and below frame 40. There is an "and" for this, the double ampersand → &&. So those two separate expressions become :

    @Frame >= 30 && @Frame <= 40

    Hope that helps !
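    The frame-range logic above can be sanity-checked with a tiny Python sketch (emit_active is a made-up name mirroring the POP activation expression) :

    ```python
    def emit_active(frame, start=30, end=40):
        """Mirror of the activation expression @Frame >= 30 && @Frame <= 40.
        Returns 1 while emission should be on, 0 otherwise."""
        return 1 if start <= frame <= end else 0

    # Both endpoints are inclusive, thanks to the >= and <=.
    print([emit_active(f) for f in (29, 30, 35, 40, 41)])  # [0, 1, 1, 1, 0]
    ```
    
    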