Feather

Members
  • Content count: 43
  • Donations: 0.00 CAD
  • Joined
  • Last visited
  • Days Won: 1

Feather last won the day on October 17 2019

Feather had the most liked content!

Community Reputation

16 Good

About Feather

  • Rank: Peon

Personal Information

  • Name: Vlad
  • Location: Los Angeles
  1. Scripted access to Bundle panel

    I wasn't familiar with bundles until you mentioned them, so for anyone reading this: a bundle is a group of nodes, not necessarily connected, that follows some pattern. For example, grab all transform nodes inside geometry node X. @cloud68 I don't know the answer to your question, but it may help others to know whether you're using smart bundles and whether the question you're trying to answer is "which nodes did this smart bundle's rule set grab?" or something else.
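    For anyone who wants to poke at bundles from a script, here's a minimal sketch using HOM's node-bundle functions (the bundle name "my_bundle" is a placeholder, not something from the original question):

        # List every bundle and the nodes its pattern currently grabs.
        for bundle in hou.nodeBundles():
            print(bundle.name(), [n.path() for n in bundle.nodes()])

        # Or inspect one specific bundle to see which nodes it picked up.
        bundle = hou.nodeBundle("my_bundle")
        if bundle is not None:
            for node in bundle.nodes():
                print(node.path())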
  2. Conditional output switch?

    No worries, that little buffer of "needs to be approved by admin", right? Just a heads up: your current solution may set you up for a rather frustrating workflow down the road. Set up that way, the switch only functions properly if the detail attribute exists. The switch is usually correctly set to 1, but on a weird frame where the geometry disappears or changes, it defaults to 0 instead of holding its correct value the way it would if it were set globally at render time.
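    To illustrate the failure mode, here's a purely hypothetical sketch (the node name FLAG_GEO and attribute name do_render are placeholders, not your actual setup) of a Python expression on a switch's Select Input parameter that reads a detail attribute:

        # Hypothetical: switch input driven by a detail attribute on another SOP.
        geo = hou.node("../FLAG_GEO").geometry()
        attrib = geo.findGlobalAttrib("do_render")
        # If the geometry vanishes on a frame, the attribute is missing and the
        # switch silently falls back to input 0.
        return geo.attribValue(attrib) if attrib is not None else 0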
  3. Conditional output switch?

    You can achieve the same result in reverse if the switch is not being driven by an expression and you are just setting the value. A Geometry ROP can activate the switch for you before rendering to disk, using prerender scripts and a global variable.

    Go to Edit > Aliases & Variables, then click the second tab at the top for global variables. There are boxes at the bottom for creating a new one, so add something like "SWITCH" | 0. Then in your network you can use $SWITCH to reference that variable in your switch node.

    On the Geometry ROP you can add a prerender script:

        set -g SWITCH = 0

    You can add this to a second Geometry ROP as:

        set -g SWITCH = 1

    and output the result to a different location while the switch is set to 1. Keep in mind that if you're using an expression to drive the switch, a prerender script may not be able to evaluate that expression to determine the answer.
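    For what it's worth, if the ROP's prerender script is set to Python rather than Hscript, the same toggle can be flipped from there too; a minimal sketch, assuming the SWITCH variable created above:

        # Python prerender script: flip the global variable before this ROP writes to disk.
        hou.hscript("set -g SWITCH = 1")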
  4. @Librarian The real-time thing is awesome; that guy also did some work in Houdini with evolution networks and posted it to his Vimeo. A convolutional network will certainly follow once I have this working properly.
  5. As I'm going through the maths: because each of the inputs is actually a neuron with its own bias and weight to consider, the following image is a better representation of what's actually happening. The boxes above are part of the mini-batch process and difficult to show at this scale.
  6. Glad I'm not alone in enjoying this stuff. Thanks for the videos, guys! This update took a lot longer than I thought, so I wanted to give a slight preview. I had to go back and learn some of the maths behind this stuff to really break down what a lot of the Python scripts were doing, so I could rebuild this network in a visual way.

    In case anyone wants to understand the math going on behind these networks, a really good resource is the YouTube channel 3Blue1Brown. He has an entire series on calculus and another short series on neural networks. If you're missing the foundations in linear algebra, you can watch another series by a youtuber named george soilis.

    At first I thought I could get away with something similar to the videos I had been watching, which used aggregate sums to define the value of each neuron. Unfortunately that doesn't give quite the intuitive result I was looking for, so... introducing neural net 2.0 below. It's not 100% done, but once it's finished you'll be able to watch literally every single vector change as each neuron learns.
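    For reference, the per-neuron computation those videos walk through is just a weighted sum of the inputs plus a bias, pushed through an activation; a minimal plain-Python sketch (the sigmoid activation here is an assumption, not necessarily what the network above uses):

        import math

        def neuron(inputs, weights, bias):
            # Weighted sum of the incoming activations plus this neuron's bias...
            z = sum(w * x for w, x in zip(weights, inputs)) + bias
            # ...squashed by an activation function (sigmoid assumed here).
            return 1.0 / (1.0 + math.exp(-z))

        # Tiny usage example: one neuron with three inputs.
        print(neuron([0.2, 0.8, -0.5], [0.4, -0.1, 0.9], bias=0.05))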
  7. I didn't see much implementation of machine learning in Houdini, so I wanted to give it a shot. I'm still just starting down this rabbit hole, but I figured I'd post the progress. Maybe someone else out there is working on this too.

    First of all, I know most of this is super inefficient and there are faster ways to achieve the results, but that's not the point. The goal is to get as many machine learning basics functioning in Houdini as possible without Python libraries just glossing over the math. I want to create visual explanations of how this stuff works. It helps me make sure I understand what's going on, and maybe it will help someone else who learns visually.

    So... from the very bottom up, the first thing to understand is gradient descent, because that's the basic underlying function of a neural network. Can we create that in SOPs without Python? Sure we can, and it's crazy slow. On the left is just normal gradient descent; once you start to iterate over more than 30 data points this starts to chug. So on the right is a stochastic gradient descent hybrid which, using small random batches, fits the line using over 500 data points. It's a little jittery because my step size is too big, but hey, it works, so... small victories.

    Okay, so gradient descent works, awesome, let's use it for some actual machine learning, right? The hello world of machine learning is image recognition of handwritten digits using the MNIST dataset. MNIST is a collection of 60 thousand 28x28-pixel images of handwritten digits. Each one has a label of what it's supposed to be, so we can use it to train a network. The data is stored as a binary file, so I had to use a bit of Python to interpret the files, but here it is.

    Now that I can access the data, the next step is actually getting this thing to a trainable state. I'm still figuring this stuff out as I go, so I'll probably post updates over the holiday weekend. In the meantime, anyone else out there playing with this stuff?
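    For anyone who wants the math in a compact form before translating it to SOPs, here's a minimal plain-Python sketch of the line fit described above, in both full-batch and mini-batch (stochastic) form; the toy data, learning rate, batch size, and iteration counts are all placeholder choices:

        import random

        # Toy data: points scattered around y = 2x + 1 (made up for illustration).
        data = [(x, 2.0 * x + 1.0 + random.uniform(-0.5, 0.5)) for x in range(50)]

        def step(points, m, b, rate):
            # One gradient descent step minimizing mean squared error of y = m*x + b.
            dm = sum(2.0 * (m * x + b - y) * x for x, y in points) / len(points)
            db = sum(2.0 * (m * x + b - y) for x, y in points) / len(points)
            return m - rate * dm, b - rate * db

        # Full-batch gradient descent: every point, every iteration (steady but slow).
        m, b = 0.0, 0.0
        for _ in range(2000):
            m, b = step(data, m, b, rate=0.0005)

        # Stochastic / mini-batch variant: a small random batch per iteration (jittery but fast).
        ms, bs = 0.0, 0.0
        for _ in range(2000):
            ms, bs = step(random.sample(data, 8), ms, bs, rate=0.0005)

        print("full batch:", m, b)
        print("mini batch:", ms, bs)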
  8. Hard constraints stretching beyond rest length?

    This method seems to be far more controllable. Thanks again Pavel!
  9. alembic objects into copy to points

    Have you tried using an Alembic Archive?
  10. Hard constraints stretching beyond rest length?

    Thanks Pavel, I've been able to get it to remove prims based on exceeding a certain force limit, as you suggested. They don't stretch much at all in this case, but it does seem to apply forces evenly across the entire object: once it reaches the threshold, all of the constraints break at the same time. I'm sure I could set variable thresholds so the break occurs where I want it to, but without anything impacting the object I would basically be painting certain weak points, so I doubt it would produce a very natural result. The object I'm breaking is made of long beam structures that pull and rotate each other as they fall. I want to make sure that as they are getting torqued I can split and splinter those beams naturally under their own weight. Without an impact, determining where torsion would occur would take a bunch of simulations and repainting. Is there a more natural way to find stress points between pieces and weaken them dynamically?
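    For anyone following along, the force-limit idea can be sketched as a Python SOP inside a SOP Solver running on the constraint geometry; this is a rough sketch rather than the exact setup above, and it assumes the solver has recorded a "force" primitive attribute on the constraint prims and uses a made-up threshold:

        # Rough sketch: delete constraint prims whose recorded force exceeds a limit.
        node = hou.pwd()
        geo = node.geometry()
        limit = 1e6  # hypothetical threshold, tune per setup

        doomed = [prim for prim in geo.prims()
                  if abs(prim.attribValue("force")) > limit]
        geo.deletePrims(doomed)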
  11. solver not working on animated mesh

    Sorry if I was unclear. Plugging your wrangle into the Object_Merge is not all of what I was suggesting; you cannot simply switch the input of your wrangle. Important information is coming from each of those nodes. (Object_Merge) Input_1 is the geometry being fed into input 1 of your solver. It updates each frame and pulls whatever is plugged into input 1 into your solver, meaning this is your animated geometry. From this object merge you need to take the point positions and use them to update the positions of the points you're solving the infection on, which is Previous_frame.
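    As a rough illustration of that step only (assuming Previous_frame is wired into the first input, the Input_1 object merge into the second, and that point counts and order match between them), copying the animated positions onto the solved points could look like this in a Python SOP; a point wrangle doing the same thing is just as valid:

        # Sketch: overwrite the solved points' positions with the animated geometry's
        # positions while keeping solved attributes (like the infection value) intact.
        node = hou.pwd()
        geo = node.geometry()                # Previous_frame: last frame's solve
        anim = node.inputs()[1].geometry()   # Input_1: this frame's animated geometry

        for solved_pt, anim_pt in zip(geo.points(), anim.points()):
            solved_pt.setPosition(anim_pt.position())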
  12. Hard constraints stretching beyond rest length?

    Thanks for taking a look, and no worries on the experience thing; this is exactly what the forums are for. I don't know either, haha. I guess it very well may be an unsolvable thing, like unstoppable force vs. immovable object. BUT this is a pretty common situation in destruction, where you have a base simulation and want to do secondary fractures. For what it's worth, I've just tried glue constraints with different strengths: for example, the would-be hard constraints are set to -1 and the middle is set to some random strength. THESE STRETCH TOO. They don't snap back to their rest length like the hard constraints do, though.
  13. Hard constraints stretching beyond rest length?

    Continuing this in hopes someone can help me out. I had a suspicion that the hard overwrite of the "Animated Static Object" may not play nicely with the constraints, so I tried using a constant force to see what happens. Turns out even with a constant force of something like 10^10 you get... this. It does seem to respect the hard constraints more, but it's not really all that spectacular a failure. >.<
  14. Hard constraints stretching beyond rest length?

    Still trying to find a viable balance between the glue and hard constraints, but they simply don't behave as I'd expect. I would expect hard constraints to be respected explicitly and to never break or stretch. I would also expect glue constraints, no matter how strong, to always break when a piece with a hard constraint is under force. This seems not to be the case: different levels of glue strength and RBD object density result in longer and shorter stretch lengths before the glue decides to snap in a catastrophic-failure sort of way. From what I've gathered, a glue strength of 1 per 1000 units of density is the balance point for glue that will stretch the hard constraint only a slight bit, but I can't seem to drive this otherwise, and it's not very intuitive. Attached is a simpler animation example to really make this obvious. Snap_Constraints_Example_2.hip snap.mp4
  15. solver not working on animated mesh

    First of all, if you can help it, use Alembics. It will save you a lot of headache. Second, you're running your simulation using previous frame as the input to your wrangle. If the previous frame was the first frame of the anim... that's what your solver is going to use every frame, because it's never looking at the new anim. You need to use the object merge Input_1 to update the positions of all your points to match the new positions of your animated FBX. Previous frame is where you can pull the last frame's infection data from and apply it to the points of the new anim frame before solving the next frame.