
Feather last won the day on June 8

Feather had the most liked content!

Community Reputation

20 Excellent

About Feather

Personal Information

  • Location
    Los Angeles
  1. Fluid not colliding with collider

    Thin-walled objects and FLIP are not friends XD Unless you intend to render these objects as transparent and hope to see the fluid inside, the easiest approach is to start your sim on the frame just before the two objects collide and give the fluid an initial velocity in the direction of travel. Otherwise, one thing you can do is, rather than use the same collision volume for every frame, make a much, much thicker version of the container that is used on every frame up until the collision, then use a switch to transition to the fractured volume. This means pointing your RBD object's collision volume proxy to the VDB and using "Volume Sample" as the method. Alternatively, and this one's for when you have to see the fluid moving inside while you're shaking it: first, run sim A. This is your object and the fluid held static at the origin, with an animated force that represents the inverse velocity of your object's motion. You run this simulation at 0,0,0 and the forces slosh the fluid around inside the container up until the contact frame. Next, add the animation of your closed object to that simulation; this becomes the rest position. Inside sim B you would have a POP VOP that, up until the frame of collision, overrides point position with rest and velocity with, of course, the velocity of the previous sim. Then, on the collision frame, a switch flips over to using the standard P and v, allowing it to simulate normally.
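The sim-B override boils down to a per-frame branch. A minimal sketch in Python (the attribute names `rest`/`rest_v` and the collision frame number are placeholders, not the actual setup):

```python
# Sketch of the sim-B POP VOP logic: before the collision frame, pin the
# point to the rest position and the previous sim's velocity; from the
# collision frame on, hand control back to the solver's own P and v.

COLLISION_FRAME = 48  # placeholder: whatever frame contact happens on

def override_point(frame, P, v, rest, rest_v):
    """Return the (P, v) the solver should use on this frame."""
    if frame < COLLISION_FRAME:
        return rest, rest_v   # follow the animated container
    return P, v               # simulate normally from here on
```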
  2. whitewater of emit particle fluid error

    Your Whitewater Source node is not producing source volumes at all. The emission you're getting is an error in general; it should not exist in the first place. This is usually just a matter of tweaking your curvature, acceleration, and vorticity settings. Regardless of how you do it, when you display the "whitewatersource" node you should see a volume of some kind where the emission takes place. It should not be blank. The velocity of the fluid you're shooting across is high, so the velocity volume created is sending the particles off in random directions.
  3. river sim - curvy shape

    It's difficult to figure out what scale this simulation is supposed to represent. Is it a creek and those are stones, or a river and those are boulders? Do you have reference for what you're trying to achieve?
  4. Delta relative to tangent space

    By tangent space, do you mean along the curved surface of the geometry, or literally a tangent plane at each point on the mesh?
  5. similar effect like look up constraint

    By default, a Copy to Points node will orient the bees to the normal direction of the point onto which they're copied. If instead you're using VEX to first define the location of each bee (as in, it doesn't exist otherwise) and the bee is then generated in that position, orienting it towards the target means mathematically rotating each bee by a 3x3 matrix. If you generate them in that position, you would need a for loop to rotate each bee individually after the fact, which is probably too slow depending on how many bees you have. Instead, generate the bee at the origin, then position it afterwards by multiplying it by a 4x4 matrix, where the translate of the matrix is the position you generated before and the rotation vectors come from the angles between each respective X, Y, Z orientation vector. If you're unfamiliar with vector math, watching this video series will help immensely.
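To make the 4x4 idea concrete, here's a sketch in Python/NumPy: build a matrix whose 3x3 block aims a local axis at the target and whose translate column is the bee's position, then transform the origin-modeled bee in one multiply. (This uses a column-vector convention and aims +Z at the target; that convention and the `aim_transform` name are my own, not a Houdini API.)

```python
import numpy as np

def aim_transform(position, target, up=(0.0, 1.0, 0.0)):
    """4x4 matrix placing an object at `position` with +Z aimed at `target`."""
    z = np.asarray(target, float) - np.asarray(position, float)
    z /= np.linalg.norm(z)          # forward axis
    x = np.cross(np.asarray(up, float), z)
    x /= np.linalg.norm(x)          # side axis, orthogonal to up and forward
    y = np.cross(z, x)              # recomputed up, completing the 3x3
    m = np.eye(4)
    m[:3, 0], m[:3, 1], m[:3, 2] = x, y, z   # rotation part
    m[:3, 3] = position                       # translate column
    return m

# A bee modeled at the origin lands at its generated position and aims at
# the target with a single matrix multiply, no per-bee rotation loop:
bee_local = np.array([0.0, 0.0, 0.0, 1.0])   # homogeneous point on the bee
world = aim_transform((1, 2, 3), (4, 2, 3)) @ bee_local
```

In Houdini itself you'd typically let the instancing attributes (N/up or orient) do this for you; the sketch is just the underlying math.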
  6. To fix this, your assemble node needs to add a prefix to the names that is aware of the iteration of the loop, so the resulting names are board0_piece0, board0_piece1 and board1_piece0, board1_piece1, etc. Then just copy this loop and do the same thing with your other assemble node that's spitting out the actual geo so the names match.
  7. When an assemble node is responsible for both packing AND naming your geometry, the result will be different when you perform the operation inside a for-each versus outside of one. For example, inside the loop, on iteration class 0 (the first board), it's going to name each of that board's pieces, 1 through 20 or whatever, as piece0, piece1, etc. Then it makes constraints glueing together piece0, piece1, etc. Then it moves on to the next board and repeats. So while you have 262 points, the names of those pieces overlap: you have 6-7 piece0's, so yes, the glue constraint shows up in your simulation, but only one of the piece0's is actually glued. The other 5 or so piece0's are glued to nothing and thus fall to the ground.
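A toy illustration of the collision and the prefix fix (piece and board counts are made up; the real numbers come from your fracture):

```python
# Each board's pieces get identical names inside the loop, so a constraint
# that targets "piece0" can only ever bind one of them. Prefixing the name
# with the loop iteration makes every piece name unique.

def assemble_names(boards, pieces, prefix_with_board):
    names = []
    for b in range(boards):                 # for-each iteration (one board)
        for p in range(pieces):             # pieces inside that board
            name = f"piece{p}"
            if prefix_with_board:
                name = f"board{b}_" + name  # iteration-aware prefix
            names.append(name)
    return names

clashing = assemble_names(3, 2, prefix_with_board=False)  # duplicates
unique = assemble_names(3, 2, prefix_with_board=True)     # all distinct
```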
  8. Light Sampling Issue (Mantra)

    SOLVED: It's not a light, it's not a sample. IT'S FOAM. When writing out the spectra and you're not using the foam, turn it off. Turns out those circles are gigantic foam particles, I mean impossible to recognize as such.
  9. Quick question. This is the diffuse AOV from a render using nothing but a distant light. To give you an idea, this scene is rather large. The geometry in this scene is an ocean volume and a plane. I have removed the displacement map from the material to isolate this as much as possible. I cannot for the life of me find what sampling quality I'm missing here. Does anyone recognize this kind of circle pattern coming from the light?
  10. Scripted access to Bundle panel

    I wasn't familiar with bundles until you mentioned them, so for anyone reading this: it's a group of nodes, not necessarily connected, that follows some pattern. So, like, grab all transform nodes inside of geometry node X. @cloud68, I don't know the answer to your question, but it may help others to know whether you're using smart bundles, and whether you're trying to answer the question of which nodes this smart bundle's rule set grabbed, or if it's something else.
  11. Conditional output switch?

    No worries, that little buffer of "needs to be approved by admin", right? Just a heads up: your current solution may set you up with a rather frustrating workflow in the future. That way, the switch functioning properly is entirely dependent on whether or not the detail attribute exists. The switch, while usually correctly set to 1, will on a weird frame where the geometry disappears or changes default to 0, rather than stay on its correct value as it would if set globally at render time.
  12. Conditional output switch?

    You can achieve the same result in reverse if the switch is not being driven by an expression and you are just setting the value. A Geometry ROP can activate the switch for you before rendering to disk, using pre-render scripts and a global variable. Go to Edit > Aliases & Variables, then click the second tab at the top for global variables. There are boxes at the bottom for creating a new one, so do something like SWITCH | 0. Then in your network you can use $SWITCH to call that variable in your switch node. On the Geometry ROP you can add a pre-render script: set -g SWITCH = 0. You can add this to a second Geometry ROP as set -g SWITCH = 1 and output the result to a different location while the switch is set to 1. Keep in mind, if you're using an expression to drive the switch, a pre-render script may not be able to evaluate the expression to determine the answer.
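Laid out in one place, the setup looks roughly like this (the variable name SWITCH is just the example from above; substitute your own):

```
Global variable (Edit > Aliases & Variables, Variables tab):
    SWITCH = 0

Switch SOP, "Select Input" parameter:
    $SWITCH

Geometry ROP #1, Pre-Render Script (Hscript):
    set -g SWITCH = 0

Geometry ROP #2, Pre-Render Script (Hscript):
    set -g SWITCH = 1
```

Each ROP then writes out a different branch of the switch without you touching the node by hand.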
  13. @Librarian The real-time thing is awesome; that guy also did some work in Houdini with evolution networks and posted it to his Vimeo. A convolution network is certainly to follow once I have this working properly.
  14. As I'm going through the maths, because each of the inputs is actually a neuron with its own bias and weight to consider, the following image is a better representation of what's actually happening. The boxes above are part of the mini-batch process and are difficult to show at this scale.
  15. Glad I'm not alone in enjoying this stuff. Thanks for the videos, guys! This update took a lot longer than I thought, so I wanted to give a slight preview. I had to go back and learn some of the maths behind this stuff to really break down what a lot of the Python scripts were doing, so I can rebuild this network in a visual way. In case anyone wants to understand the math going on behind these networks, a really good resource is the YouTube channel 3Blue1Brown. He has an entire series on calculus and another short series on neural networks. If you're missing the foundations in linear algebra, you can watch another series by a YouTuber named george soilis. At first I thought I could get away with something similar to the videos I had been watching, which used aggregate sums to define the value of each neuron. Unfortunately that doesn't give quite the intuitive result I was looking for, so... introducing neural net 2.0 below. It's not 100% done, but once it's finished you'll be able to watch literally every single vector change as each neuron learns.
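For anyone following along, the per-neuron math being visualized reduces to a weighted sum plus a bias pushed through an activation. A minimal sketch (sigmoid activation assumed, as in the 3Blue1Brown series; the example weights are arbitrary):

```python
import math

def sigmoid(x):
    """Squash the weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """One neuron: activation(w . x + b)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

# Two inputs feeding a single neuron; z = 0.5*2.0 + (-1.0)*1.0 + 0.0 = 0,
# and sigmoid(0) is exactly 0.5:
out = neuron([0.5, -1.0], [2.0, 1.0], bias=0.0)
```

Training is then just nudging each weight and bias by the gradient of a loss, which is exactly the per-vector change the visualization is meant to show.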