All Activity

  1. Today
  2. multiple solver node execution order

    Yes, this is exactly what I was confused about. Thank you!
  3. add noise to points but only in x and y axis

    The Point Jitter SOP was perfect. I'm in the middle of a VEX "course" (Joy of Vex on cgwiki, really awesome), so I'm playing with that code, trying to make the effect bigger/smoother, etc. Thanks, really helpful!
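    For what it's worth, a point-wrangle version of the same idea might look like the sketch below; the channel names and the use of noise() are assumptions, not something from this thread.

        // Point wrangle (run over Points): jitter only in X and Y.
        // "amp" and "freq" are assumed channel names; create them with the
        // wrangle's "Create spare parameters" button after pasting.
        float amp  = chf("amp");             // jitter amplitude
        float freq = chf("freq");            // noise frequency
        vector n = noise(v@P * freq) - 0.5;  // recenter the noise around 0
        v@P.x += n.x * amp;
        v@P.y += n.y * amp;
        // Z is left untouched so the jitter stays in the XY plane.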
  4. volume displacement based on UV

    Yes, toNDC is a good way to achieve this displacement effect, BUT... it ONLY works when the camera doesn't move and the object is static. In other words, the displacement will move when the camera moves. That's not what I need.
  5. volume displacement based on UV

    I tried, but it doesn't work.
  6. There is something else wrong in that file as well. When I supply a simple black-and-white checkerboard image, the UVs only work for the top half. Using toNDC as a replacement for UVs kind of works; it clearly illustrates that the bind of the UVs doesn't seem to work. You can get camera-facing displacement that way, however.
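    For reference, a rough SOP-level sketch of the toNDC idea is below; the camera path, channel names and the texture() lookup are assumptions, and as noted above the pattern is locked to the camera, so it slides as soon as the camera or the object moves.

        // Point wrangle sketch: displace along N using the point's NDC
        // position as the texture lookup. "/obj/cam1", "texmap" and "scale"
        // are assumed names.
        vector ndc = toNDC("/obj/cam1", v@P);              // x,y in 0-1 screen space
        float  d   = texture(chs("texmap"), ndc.x, ndc.y); // greyscale displacement map
        v@P += normalize(v@N) * d * chf("scale");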
  7. multiple solver node execution order

    Assuming I've understood your explanation correctly, solver1 will always be computed before solver2, simply because when solver2 tries to getattrib from solver1, solver1 will be computed for the current frame. Everything in Houdini cooks (computes) from top to bottom and left to right in a node tree, and when you pull from nodes with getattrib or through a node like Object Merge, those nodes are placed above the current node in the node tree/execution order.
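    As a sketch of that dependency (the node path and attribute name here are assumptions, not from the thread): reading from solver1 inside a wrangle in solver2 is enough to force solver1 to cook first.

        // Inside a point wrangle in solver2: pulling an attribute from
        // solver1's output makes solver1 cook first for the current frame.
        int ok;
        f@height = getattrib("op:/obj/geo1/solver1", "point", "height", i@ptnum, ok);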
  8. volume displacement based on UV

    It happens because the displacement is hitting the maximum bounds of your volume in that area. You'll need to add more exterior bound voxels to the volume and might have to add a "VDB activate" node after the "VDB From Polygons" node and increase the "Expand" option.
  9. @Librarian No, the motion from the second simulation is correct. Only the Transform Pieces at the end doesn't work.
  10. I stumbled over a problem with the Transform Pieces SOP when I use a timeshift between two simulations. In my setup I simulate the pieces with a rigidbodysolver, extract the points with a dopimport, transfer the pieces onto those points and simulate the pieces again. The final step is a Transform Pieces. This works fine as long as there is no timeshift between the two simulations, but when I introduce a timeshift, the Transform Pieces SOP places the pieces nowhere near the extracted points. nested-sim-timeshift-2.hipnc
  11. Hey, glad it worked out!
  12. Yesterday
  13. @Alain2131 You just made my month! To clarify, I do not need the values from previous datasets to remain; the file you shared answers all my questions!! I'm going to need to make some changes to implement it for my use case, but it looks amazing from what I've seen so far. Thanks so much! Edit: Just finished implementing it, and it works like a charm, but now I'm trying to use a material instead of color, which I've been having trouble implementing. Do you have any suggestions? So far I tried assigning a material to each value by reversing the blast selection so it only leaves one value per iteration, and then adding a Material node after that which references the material specified by the parameter. The problem is I keep getting a "bad parameter reference" error even though I'm using the same conventions. I attached a file showing what I mean. Edit 2: Just got the material setup working! I'll attach a file for anyone who looks at this in the future. dyanmic UI with materials on prims.hipnc
  14. Hello everyone, Let's say in SOPs I make a VDB density volume from a sphere which has a uv attribute, and in the "VDB from Polygons" node I transfer that uv to a VDB field. Then, in the material, I want to read that uv field and do volume displacement using a JPG texture based on that uv. When I hit render, the result looks strange (bottom-right corner) regardless of how I rotate the uv position. Here is the file and texture: untitled.hip shu.zip
  15. How do I make destruction with color like in the gif? A sample scene would be great. - constraint limit map
  16. Okay, here's what I understand, and I have some questions. On different datasets you have a string attribute named "type" with different values. Let's say you import once; the tool auto-populates B F C D, and you now have a multiparm (or whatever) that lets you control the color and so on as you want. But now you import again and you get F X A S: F is still there, but X, A and S are new. What do you expect to happen then? The easiest thing to do is, on each new import, wipe the entire UI and start from scratch, but this means that if different datasets share a value, you'll have to re-enter that parameter manually.
    Here is a mockup where importing a new dataset resets all parms and populates them with default ones: dynamic_HDA_parms_mockup.hipnc
    Basically, with a button, it reads the unique values of the attribute "type" (fetched from within VEX into a "types" detail attribute) and sets the multiparm to the number of unique values in the attribute:

        node = hou.pwd()
        # After the attribute wrangle that gathers the unique values of "type"
        geo = hou.node("./OUT_script").geometry()
        types = geo.stringListAttribValue("types")

        node.parm("multiparm").set(0)           # reset the multiparm
        node.parm("multiparm").set(len(types))  # one entry per unique value

        # Set the Name parameter of each multiparm instance
        for i, type_name in enumerate(types):
            parm = node.parm("name_" + str(i + 1))  # instances start at 1, not 0
            parm.set(type_name)

    A few clarifications and details: I input the data by wiring it in, but this can be adapted to reading from a file. You said your data is on the prims; that is doable as well. If you want the values of the parameters from previous datasets to remain, I see two options. 1 - Instead of wiping the multiparm each time, go through the current parameters, check for new values, and create only the missing ones. 2 - Keep a file on disk containing the relevant values for the parameters, and after resetting the multiparm, auto-fill the parameters from that file. I think this will be harder to do, though.
  17. Procedural Drips

    Procedurally generated drips in Houdini SOPs using poly lines.
    Random acceleration, speed and start motion is achieved by: 1. Placing poly lines along a single axis from zero to one, e.g. v@P = set(f@curveu, 0, 0) (a small wrangle sketch of steps 1 and 2 follows below). 2. Shifting the points on the poly lines to create ease-in, ease-out and staggered noisy motion. 3. Stretching and squashing the poly lines to create slow and fast motion. 4. Offsetting the poly lines in a single axis to create random start frames. 5. Clipping the poly lines in a single axis and restoring them to their original position.
    SDF Modeling: 1. Standard VDB from Particles. 2. Look up the SDF above with a noise. 3. Stretch the SDF to fake the pull of gravity. 4. Squash the SDF against the surface normal.
    Velocity Blur: 1. Only the tip of each animated poly line is kept to calculate the velocity. 2. On the frame where a drip disappears from frame, the velocity is explicitly set to set(0.0, -9.80665*@Timeinc, 0.0) to fake the gravity fall.
    Rendered with 3Delight. https://953u6015t.gumroad.com/l/mOABG
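    A minimal point-wrangle sketch of steps 1 and 2, under a couple of assumptions: curveu comes from a Resample SOP with "Curve U Attribute" enabled, and the "ease" ramp channel name is my own.

        // Lay the line along one axis using curveu, then reshape the spacing
        // with a ramp to get ease-in/ease-out once the line is animated.
        float u = chramp("ease", f@curveu);  // remap the 0-1 parameterization
        v@P = set(u, 0, 0);                  // place along a single axis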
  18. Flip volume gain

    Hello Gangland, I have done a similar simulation with champagne being poured into a glass. For the volume issue, I would first suggest activating Particle Separation in the FLIP Solver node before even experimenting with divergence.
    A common problem we encounter with FLIP fluids is that fluids with higher velocities tend to compress the particles. The same is also true for fluids that build up in a certain section. This effect is not very evident when the fluid is flowing from one section to another, simply because it's difficult for our brains to keep track of minor volume losses in a liquid that is in relatively fast motion. So for the vast majority of fluid simulations, particle separation is turned off by default to speed up calculations. But when the fluid starts to build up, the solver initially tries to retain the volume, and as the simulation goes on and more particles are introduced, it has a hard time keeping the separation between the particles. Especially at low velocities, the effect becomes more apparent.
    When particle separation is turned on, the FLIP solver does additional passes to ensure that the particles keep their spacing. However, like all solver algorithms, this method is not 100% accurate. As we keep increasing the Separation Iterations, the simulation gets more and more accurate and we observe less volume loss. As a starting point, I would suggest a Separation Iteration of 4. As for the Separation Rate and Separation Scale, the default values should be fine and rarely need changing.
    If particle separation does not produce the desired results, then we can think about introducing divergence into the sim. The combined use of divergence and particle separation can sometimes yield better output. The error you made in your sim is that you turned on the divergence calculation but didn't initialize the particles with a divergence attribute. To fix this, put down a POP wrangle in your DOPNET, connect it to the second input (Particle Velocity) of the FLIP Solver, and type in the following code: f@divergence = 1.0; Then change the float value in the POP wrangle to adjust the divergence attribute and get the desired fluid expansion (a minimal sketch of that wrangle follows below). I highly suggest experimenting with the particle separation method first before you add any divergence to the simulation. I hope this helps you out.
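    A minimal sketch of that POP wrangle, with the value exposed as a channel for tuning (the "divergence_amount" channel name is an assumption, not from the original post):

        // POP wrangle wired into the FLIP Solver's second input (Particle Velocity).
        // Initializes the divergence attribute that the solver's divergence option reads.
        f@divergence = chf("divergence_amount");  // e.g. start around 1.0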
  19. Resources for Art Fundamentals

    Yes, color theory... good to know, but it doesn't really help on its own. It's like music: you can find lessons to learn an instrument, orchestration rules, solfège, but no one can teach you how to find a nice melody or write a nice song. Color is a bit like melody. The rule here is personal culture. Look at as many artists as you can, digital or classical. Find the ones you like and try to reproduce some pieces you like. Build a sort of "internal" personal library; the bigger it is, the more you can get out of it. Another analogy is Quentin Tarantino: he never went to film school, but he had a huge, huge film culture. Look at the result. There's no one way to learn.
  20. Resources for Art Fundamentals

    You've been a great help, and I will examine all the resources you've referred me to. Exactly what I was looking for, so thank you. I suppose I'll have to at least feign competence in 2D, as it turns out. I went into the wrong type of post-graduate studies (not art), and now I feel inadequate. I have actually been studying some color theory, but it's quite abstract, so the references you've provided will be invaluable to me, thank you! There's quite a bit for me to study; hopefully it's something I can do on my own.
  21. Resources for Art Fundamentals

    Thank you! I'll check it out for certain.
  22. Resources for Art Fundamentals

    Maybe I didn't understand what you were looking for, but you talked about artists and composition, so maybe you need to be more specific about what you're after. If you want to do explosions, liquid simulations or destruction (basically, be an FX artist), then no, you don't really need to learn drawing and painting. If you want to make great visuals, then yes, drawing or painting helps a lot, at least to acquire the notions you lack and eventually sketch out what you want. Look at these Houdini artists: Lukas Vojir and Alexa Sirbu https://www.instagram.com/alexa_sirbu/ https://www.instagram.com/lukasvojir/ Mikhail Sedov - https://www.behance.net/msedov Joey Camacho - https://www.behance.net/rawandrendered Josh Childers - https://www.joshchilders.com/ Christoph Bader - https://deskriptiv.com/ They do things that are really well-known Houdini effects, so why do they make great visuals that make you forget what they were made with? Because they have a strong artistic background. Composition and colour are two things that are easier to learn in 2D, simply because you have to experiment a lot. Composition is about the lines and masses of your final image, i.e. a 2D plane, and you can make drastic changes in a few seconds. In 3D you are, well... in a 3D space! Each modification has to remain coherent in space, and that takes time. By the time you've done one test in 3D, you'll have done ten in 2D, and that's why you learn faster. I'm not saying it isn't possible, just that it's better to have a good grounding before working directly in 3D. The same goes for colour. You don't need to be a good drawer; just be able (even if you don't do it) to sketch what you want, in order to work comfortably in 3D.
  23. Last week
  24. @carthick Sorry, it was hard to explain. I know how to color my existing attributes; what I'm looking for is a way to turn them into parameters for an HDA. I need a procedural way so that when the dataset changes, it generates new parameters.
  25. Procedural Lightning Tool | Gumroad

    Yeah probably me lol. This is from 2018
  26. Resources for Art Fundamentals

    Thank you! What was your approach to utilizing his lessons? Did you limit yourself to just one section? I only skimmed the page (I just logged onto my home computer for the first time today) and it seems immense!
  27. Resources for Art Fundamentals

    Thank you for such an exhaustive list of recommendations! I am beginning to delve into them, and they seem to be phenomenal recs so far. Drawing has always been hard for me; I've had more luck sculpting than doing anything in 2D. Do you really think I ought to take the time to develop some digital painting/drawing skills? I once thought I should, but it seemed like the time commitment would take away from developing my Houdini and filmmaking skills, to the point that I'd be overburdened at the others' expense.