Everything posted by Feather

  1. Fluid not colliding with collider

    Thin-walled objects and FLIP are not friends XD Unless you intend to render these objects as transparent and hope to see the fluid inside, the easiest way is to just start your sim on the frame just before the two objects collide and make sure the fluid has initial velocity in the direction of travel.

    Otherwise, there are a couple of things you can do. Rather than using the same collision volume for every frame, make a much, much thicker version of the container and use it on every frame up until just before the collision, then, using a switch, transition to the fractured volume. This means pointing your RBD object's collision volume proxy to the VDB and using "Volume Sample" as the method.

    Alternatively, and this one is for when you have to see the fluid moving inside while you're shaking it: first run sim A. This is your object and the fluid held static at the origin, with an animated force that represents the inverse velocity of your object's motion. You run this simulation at 0,0,0 and the forces slosh the fluid around inside the container up until the contact frame. Next, add the animation of your closed object to that simulation. This becomes the rest position. Inside sim B you would have a POP VOP that, up until the frame of collision, overrides point position with rest and velocity with, of course, the velocity of the previous sim. Then, on the collision frame, a switch flips over to using standard P and v, allowing it to simulate normally.
  2. whitewater of emit particle fluid error

    Your whitewater source node is not producing source volumes at all. The emission you're getting is an error in general; it should not exist in the first place. This is usually just a matter of tweaking your curvature/acceleration and vorticity settings. Regardless of how you do it, when you display the "whitewatersource" node you should see a volume of some kind where the emission takes place. It should not be blank. The velocity of the fluid you're shooting across is high, so the velocity volume created is sending the particles in random directions.
  3. river sim - curvy shape

    It's difficult to figure out what scale this simulation is supposed to represent. Is it a creek and those are stones, or a river and those are boulders? Do you have reference for what you're trying to achieve?
  4. Delta relative to tangent space

    By tangent space do you mean along the curved surface of the geometry or literally like a tangent plane of each point on the mesh?
  5. similar effect like look up constraint

    By default a Copy to Points node will orient the bees to the normal direction of the point onto which they're copied. If instead you're using VEX to first define the location of the bee (as in, it doesn't exist otherwise) and the bee is then generated in that position, orienting it towards the target is mathematically a rotation of each bee by a 3x3 matrix. If you generate them in that position, you would need a for loop to rotate each bee individually after the fact, which is probably too slow depending on how many bees you have. Instead, generate the bee at the origin, then position it afterwards by multiplying it by a 4x4 matrix, where the translation row of the matrix is the position you generated before and the rotation vectors come from the angles between each respective x, y, z orientation vector. Watching this video series will help immensely if you're unfamiliar with vector math.
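    To make the matrix part concrete, here's a minimal Python sketch (outside Houdini, plain lists instead of VEX matrix types, all function names mine) of building that 4x4 transform: the first three rows are the rotation aiming local +z at the target, and the last row carries the translation.

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return [c / length for c in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def look_at_matrix(position, target, up=(0.0, 1.0, 0.0)):
    """Row-major 4x4: rotates local +z toward target, then translates.

    Assumes the look direction is not parallel to the up vector.
    """
    z = normalize([t - p for t, p in zip(target, position)])  # forward axis
    x = normalize(cross(up, z))                               # side axis
    y = cross(z, x)                                           # recomputed up
    return [x + [0.0], y + [0.0], z + [0.0], list(position) + [1.0]]

def transform(point, m):
    """Apply the 4x4 (row-vector convention) to a 3D point."""
    p = list(point) + [1.0]
    return [sum(p[i] * m[i][j] for i in range(4)) for j in range(3)]
```

    Multiplying each bee's points by this one matrix does the rotate-and-place in a single step, which is what makes it cheaper than rotating the bees one by one after copying.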
  6. To fix this, your assemble node needs to add a prefix to the names that is aware of the iteration of the loop, so the resulting names are board0_piece0, board0_piece1, then board1_piece0, board1_piece1, etc. Then just copy this loop and do the same thing with your other assemble node that's spitting out the actual geo, so the names match.
  7. An assemble node is responsible for both packing AND naming your geometry. That means the result will be different when you perform the operation inside a for-each versus outside one. For example, inside the loop, on iteration class 0 (the first board) it's going to name each of its pieces, 1 through 20 or whatever, as piece0, piece1, etc. Then it makes constraints gluing together piece0, piece1, etc. Then it moves on to the next board and repeats. So while you have 262 points, the names of those pieces overlap. You have 6-7 piece0's, so yes, the glue constraint shows up in your simulation, but only one of the piece0's is actually glued. The other 5 or so piece0's are glued to nothing and thus fall to the ground.
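    To illustrate the name collision described above, here's a tiny Python sketch (board and piece counts are made up) of what the per-iteration naming amounts to with and without an iteration-aware prefix:

```python
def piece_names(num_boards, pieces_per_board, prefix_with_board=True):
    """Simulate the name attribute each assemble pass would write."""
    names = []
    for board in range(num_boards):            # one for-each iteration per board
        for piece in range(pieces_per_board):
            if prefix_with_board:
                names.append(f"board{board}_piece{piece}")  # unique per board
            else:
                names.append(f"piece{piece}")  # same names every iteration!
    return names
```

    Without the prefix, every board reuses piece0, piece1, ..., so constraints referencing piece0 can only ever bind one of them; the prefixed version gives every piece a unique name that the constraint network can match.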
  8. Light Sampling Issue (Mantra)

    SOLVED: It's not a light, it's not a sample. IT'S FOAM. When writing out the spectra and you're not using the foam, turn it off. Turns out those circles are gigantic foam particles, so big they're impossible to recognize as such.
  9. Quick question. This is the diffuse AOV from a render using nothing but a distant light. To give you an idea, this scene is rather large. The geometry in this scene is an ocean volume and a plane. I have removed the displacement map from the material to isolate this as much as possible. I cannot for the life of me find what sampling quality I'm missing here. Does anyone recognize this kind of circle pattern coming from the light?
  10. Scripted access to Bundle panel

    I wasn't familiar with bundles until you mentioned them, so for anyone reading this: a bundle is a group of nodes, not necessarily connected, that follows some pattern. So, for example, grab all transform nodes inside of geometry node X. @cloud68 I don't know the answer to your question, but it may help others to know whether you're using smart bundles, and whether you're trying to answer the question of which nodes this smart bundle's rule set grabbed, or if it's something else.
  11. Conditional output switch?

    No worries, that little buffer of "needs to be approved by admin", right? Just a heads up: your current solution may set you up with a rather frustrating workflow in the future. As written, the switch functioning properly is entirely dependent on whether or not the detail attribute exists. While the switch is usually correctly set to 1, on a weird frame where the geometry disappears or changes it defaults to 0 rather than holding its correct value, as it would if set globally at render time.
  12. Conditional output switch?

    You can achieve the same result in reverse if the switch is not being driven by an expression and you are just setting the value. A Geometry ROP can activate the switch for you before rendering to disk using prerender scripts and a global variable. Go to Edit > Aliases & Variables, then click the second tab at the top for global variables. There are boxes at the bottom for creating a new one, so add something like "SWITCH" with a value of 0. Then in your network you can use $SWITCH to call that variable in your switch node. On the Geometry ROP you can add a prerender script: set -g SWITCH = 0. You can add a prerender script to a second Geometry ROP as set -g SWITCH = 1 and output the result to a different location while the switch is set to 1. Keep in mind, if you're using an expression to drive the switch, a prerender script may not be able to evaluate the expression to determine the answer.
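    Putting the steps above in one place, the setup might look like this (a sketch of the pieces as described in the post, not exact UI labels):

```
# Global variable (Edit > Aliases & Variables, Variables tab)
SWITCH = 0

# Switch SOP, Select Input parameter
$SWITCH

# Prerender script on the first Geometry ROP (Hscript)
set -g SWITCH = 0

# Prerender script on the second Geometry ROP,
# which writes to a different output path
set -g SWITCH = 1
```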
  13. @Librarian The real-time thing is awesome; that guy also did some work in Houdini with evolution networks and posted it to his Vimeo. A convolutional network is certainly to follow once I have this working properly.
  14. I didn't see much implementation of machine learning in Houdini, so I wanted to give it a shot. Still just starting down this rabbit hole, but I figured I'd post the progress. Maybe someone else out there is working on this too. First of all, I know most of this is super inefficient and there are faster ways to achieve the results, but that's not the point. The goal is to get as many machine learning basics functioning in Houdini as possible without python libraries just glossing over the math. I want to create visual explanations of how this stuff works. It helps me ensure I understand what's going on, and maybe it will help someone else who learns visually.

    So... from the very bottom up, the first thing to understand is gradient descent, because that's the basic underlying function of a neural network. Can we create that in SOPs without python? Sure we can, and it's crazy slow. On the left is just normal gradient descent. Once you start to iterate over more than 30 data points this starts to chug. So on the right is a stochastic gradient descent hybrid which, using small random batches, fits the line using over 500 data points. It's a little jittery because my step size is too big, but hey, it works, so... small victories.

    Okay, so gradient descent works, awesome, let's use it for some actual machine learning stuff, right? The hello world of machine learning is image recognition of handwritten digits using the MNIST dataset. MNIST is a collection of 60 thousand 28-by-28-pixel images of handwritten digits. Each one has a label of what it's supposed to be, so we can use it to train a network. The data is stored as a binary file, so I had to use a bit of python to interpret the files, but here it is. Now that I can access the data, next is actually getting this thing to a trainable state. Still figuring this stuff out as I go, so I'll probably post updates over the holiday weekend. In the meantime, anyone else out there playing with this stuff?
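    The two flavors of gradient descent described above can be sketched in a few lines of plain Python (no Houdini, no libraries; the function name and learning rate are mine), fitting a line y = m*x + b, with an optional mini-batch mode for the stochastic hybrid:

```python
import random

def fit_line(points, lr=0.05, epochs=2000, batch_size=None, seed=0):
    """Fit y = m*x + b by gradient descent on squared error.

    batch_size=None -> full-batch gradient descent;
    a small batch_size -> the stochastic mini-batch hybrid.
    """
    rng = random.Random(seed)
    m, b = 0.0, 0.0
    for _ in range(epochs):
        batch = points if batch_size is None else rng.sample(points, batch_size)
        grad_m = grad_b = 0.0
        for x, y in batch:
            err = (m * x + b) - y        # prediction error at this point
            grad_m += 2.0 * err * x      # d/dm of err^2
            grad_b += 2.0 * err          # d/db of err^2
        n = len(batch)
        m -= lr * grad_m / n             # step against the gradient
        b -= lr * grad_b / n
    return m, b
```

    The jitter mentioned above comes from the step size: a large lr makes each mini-batch step overshoot, so the fitted line wiggles around the optimum instead of settling.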
  15. As I'm going through the maths: because each of the inputs is actually a neuron with its own bias and weight to consider, the following image is a better representation of what's actually happening. The boxes above are part of the mini-batch process and difficult to show at this scale.
  16. Glad I'm not alone in enjoying this stuff. Thanks for the videos, guys! This update took a lot longer than I thought, so I wanted to give a slight preview. I had to go back and learn some of the maths behind this stuff to really break down what a lot of the python scripts were doing, so I can rebuild this network in a visual way. In case anyone wants to understand the math going on behind these networks, a really good resource is the YouTube channel 3Blue1Brown. He has an entire series on calculus and another short series on neural networks. If you're missing the foundations in linear algebra, you can watch another series by a YouTuber named George Soilis. At first I thought I could get away with something similar to the videos I had been watching, which used aggregate sums to define the value of each neuron. Unfortunately that doesn't give quite the intuitive result I was looking for, so... introducing neural net 2.0 below. It's not 100% done, but once it's finished you'll be able to watch literally every single vector change as each neuron learns.
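    For anyone following along with the math, here's a minimal Python sketch (a single sigmoid neuron with squared-error loss, all names mine) of the per-neuron weight and bias updates that the visual network is animating:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(samples, lr=1.0, epochs=2000):
    """One sigmoid neuron: per-input weights w, bias b, loss (a - t)^2.

    samples: list of (inputs, target) pairs with target in {0, 1}.
    """
    n_in = len(samples[0][0])
    w = [0.0] * n_in
    b = 0.0
    for _ in range(epochs):
        for x, t in samples:
            # forward pass: weighted sum plus bias, squashed by sigmoid
            a = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            # backward pass: dL/da * da/dz, the "delta" for this neuron
            delta = 2.0 * (a - t) * a * (1.0 - a)
            # each weight moves by delta scaled by its own input
            w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
            b -= lr * delta
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
```

    Each neuron in a full network repeats exactly this forward pass and delta-scaled update; the only difference is that its inputs come from the previous layer instead of the raw data.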
  17. Hard constraints stretching beyond rest length?

    This method seems to be far more controllable. Thanks again Pavel!
  18. I can't figure out why the hard constraints are stretching before the glue constraints break. The hard constraints have a rest length and, as far as I'm aware, it's not being updated. I've set the constraint iterations to 30, 60, and even 100, which does help, but they continue to stretch. Upping this iteration number isn't viable time-wise at the scale of the simulations I'm looking to use this for, so I'm hoping someone can help me. Maybe I just shouldn't be using glue here at all? I've put together a quick and dirty example file of this problem and attached it below. Snap_Constraints_Example.hip
  19. alembic objects into copy to points

    Have you tried using alembic archive?
  20. Hard constraints stretching beyond rest length?

    Thanks Pavel, I've been able to get it to remove prims based on exceeding a certain force limit as you suggested. They don't stretch much at all in this case, but it does seem to apply forces evenly across the entire object: once it reaches the threshold, all of the constraints break at the same time. I'm sure I could set variable thresholds so the break occurs where I want it to, but without anything impacting the object I would basically be painting certain weak points, so I doubt it would produce a very natural result. The object I'm breaking is made of long beam structures that pull and rotate each other as they fall. I want to make sure that as they are getting torqued I can split and splinter those beams naturally under their own weight. Without an impact, determining where torsion would occur would take a bunch of simulations and repainting. Is there a more natural way to find stress points between pieces and weaken them dynamically?
  21. solver not working on animated mesh

    Sorry if I was unclear. Plugging your wrangle into the Object_Merge is not all of what I was suggesting; you cannot simply switch the input of your wrangle. Important information is coming from each of those nodes. (Object_Merge) Input_1 is the geometry being fed into input 1 of your solver. It updates each frame and pulls whatever is plugged into input 1 into your solver, meaning this is your animated geometry. From this object merge you need to take the point position and use it to update the position of the points you're solving the infection on, which is Previous_Frame.
  22. Hard constraints stretching beyond rest length?

    Thanks for taking a look, and no worries on the experience thing, this is like... exactly what the forums are for. I don't know either haha. I guess it very well may be an unsolvable thing, like unstoppable force vs immovable object. BUT this is a pretty common situation in destruction where you have a base simulation and want to do secondary fractures. For what it's worth, I've just tried glue constraints with different strengths. For example, the would-be hard constraints are -1 and the middle is set to some random strength. THESE STRETCH TOO. They don't snap back to their rest length like the hard constraints, though.
  23. Hard constraints stretching beyond rest length?

    Continuing this in hopes someone can help me out. I had a suspicion that the hard overwrite of the "Animated Static Object" may not play nicely with the constraints, so I tried using a constant force to see what happens. Turns out even with a constant force of like 10^10 you get... this. It does seem to respect the hard constraints more, but it's not really all that spectacular a failure. >.<
  24. Hard constraints stretching beyond rest length?

    Still trying to find a viable balance between the glue and hard constraints, but they simply don't behave as I'd expect. I would expect hard constraints to be respected explicitly and never break or stretch. I would also expect that no matter how strong the glue constraints are, when a piece with a hard constraint is under force, the glue would always break. This seems not to be the case: different levels of glue strength and RBD object density result in longer and shorter stretch lengths before the glue decides to snap in a catastrophic-failure sort of way. From what I've gathered, a glue strength of 1 per 1000 units of density is the balance point where the glue will stretch the hard constraint only a slight bit, but I can't seem to drive this otherwise and it's not very intuitive. Attached is a simpler animation example to really make this obvious. Snap_Constraints_Example_2.hip snap.mp4
  25. solver not working on animated mesh

    First of all, if you can help it, use Alembics. It will save you a lot of headache. Second, you're running your simulation using the previous frame as the input to your wrangle. If the previous frame was the first frame of the anim... that's what your solver is going to use every frame, because it's never looking at the new anim. You need to use the object merge Input_1 to update the position of all your points to match the new position of your animated FBX. Previous frame is where you can pull the last frame's infection data from and apply it to the points of the new anim frame before solving the next frame.