Leaderboard


Popular Content

Showing most liked content since 06/23/2021 in all areas

  1. 5 points
    Here is another implementation of reaction-diffusion running across a mesh using OpenCL. reaction_diffusion_ocl.hiplc
  2. 4 points
    Hi Ezequiel, you could design these shapes with Voronoi cells, too. shoe.hiplc
  3. 4 points
    I think I would use particles to drive the entire effect. ap_smoke_dissolve_along_one_axis.hipnc
  4. 3 points
    Hi, thought I'd share this in this section too: I wrote an article for the German “Digital Production” magazine about my free LYNX VFX toolset. For the article I made a couple of renderings using the LYNX fabric tools. Luckily it even made the cover! Here are my personal favorites; the rest of the images can be found on Artstation. You can also find the complete scene on GitHub under the Demo Files. So now anyone can design an ugly Christmas sweater ;) Looking forward to seeing what you guys come up with, enjoy!
    Links:
    LYNX VFX Toolset Odforce Thread: https://forums.odforce.net/topic/42741-lynx-free-opensource-vfx-pipeline-tools/
    LYNX VFX Toolset (Sweater Scene File included): https://github.com/LucaScheller/VFX-LYNX
    Artstation (HighRes Renderings): https://www.artstation.com/artwork/OyeY6g
    Digital Production Magazin: https://www.digitalproduction.com/ausgabe/digital-production-01-2020/
    Alternatively, view the article in my latest blog post: https://www.lucascheller.de/vfx/2019/12/15/ynybp7wpiqtshoy/
  5. 3 points
    It is. Just transfer a bunch of curves from tubes, grids and lines with pscale and velocity attributes into a volume. Reshape VDB will blend all shapes eventually. dinosaur.hiplc
  6. 3 points
    Feature Request: Can we have Houdini remember the last window size and location? I open Redshift and resize the Render View window, and when I close the window and open it again a few minutes later, it is back in the default location. Also, pop-up dialogs seem to open in the most inconvenient places, like the top right, and minimised; I have to resize them, make a selection, etc., and the next time they are opened they are back in the default position. This would be a great little time saver.
  7. 3 points
    Partition... here's a mock-up file: in Group, it has groups based on an attribute, so you can pick them from the dropdown. group from attrib.hipnc
  8. 3 points
    Visible voxels mean you have pushed density to the maximum. It's like clipping audio: after passing the red line on the meter, all you hear is distortion. Try cutting your density in half inside a volume wrangle: @density *= 0.5; Alternatively, add more dissipation to remove density. Also run some test renders; sometimes the viewport is misleading.
  9. 2 points
    Pragmatic VEX is listed as part of the official VEX learning path! Thanks SideFX! https://www.sidefx.com/learn/vex
  10. 2 points
    Okay then just make a cht() function, SideFX :-) I'm lazy
  11. 2 points
    A basic ray marching / sphere tracing renderer written in VEX inside COPs. Entirely based on a Shadertoy tutorial by Martijn Steinrucken / @The_ArtOfCode: https://odysee.com/@TheArtOfCode:b/ray-marching-for-dummies:f

    // SCENE
    function float get_dist(vector pos){
        vector pos_sphere = set(sin(TIME), 1.55, 6.0);
        float radius_sphere = 0.75;
        float dist_sphere = length(pos - pos_sphere) - radius_sphere;
        float height_plane = noise(pos * 2.75) * 0.2;
        float dist_plane = pos.y - height_plane;
        float dist_min = min(dist_sphere, dist_plane);
        return dist_min;
    }

    // TRACING
    function float raymarch(vector pos_cam, dir_cam){
        float dist_orig = 0.0;
        float dist_srf = 1e-3;
        float dist_max = 1e1;
        int steps_max = 200;
        for(int i = 0; i < steps_max; i++){
            vector pos = pos_cam + dir_cam * dist_orig;
            float dist_scene = get_dist(pos);
            dist_orig += dist_scene;
            if(dist_scene < dist_srf || dist_scene > dist_max){
                break;
            }
        }
        return dist_orig;
    }

    // NORMALS
    function vector get_normal(vector pos){
        vector offset = {0.01, 0.0, 0.0};
        float dist = get_dist(pos);
        float dx = get_dist(pos - offset.xyy);
        float dy = get_dist(pos - offset.yxy);
        float dz = get_dist(pos - offset.yyx);
        vector nml = normalize(dist - set(dx, dy, dz));
        return nml;
    }

    // LIGHTING
    function float get_light(vector pos){
        vector pos_light = set(1.0, 4.0, 3.0);
        pos_light.x += sin(TIME*8);
        pos_light.z += cos(TIME*8);
        vector dir_light = normalize(pos_light - pos);
        vector nml_srf = get_normal(pos);
        float amount = max(dot(nml_srf, dir_light), 0.0);
        float dist = raymarch(pos + nml_srf * 1e-1, dir_light);
        if(dist < length(pos_light - pos)){
            amount *= 0.2;
        }
        return amount;
    }

    // CANVAS
    float aspect = XRES / float(YRES);
    vector uvw = set(X - 0.5, (Y - 0.5) / aspect, 0.0);

    // CAMERA
    vector pos_cam = {0.0, 1.5, 0.0};
    vector dir_cam = normalize(set(uvw.x, uvw.y, 1.0));

    // PROCESS
    float dist_field = raymarch(pos_cam, dir_cam);
    vector pos_world = pos_cam + dir_cam * dist_field;
    float diffuse = get_light(pos_world);
    float mask_clip = dist_field < 55.0;

    // OUTPUT
    vector color = diffuse * mask_clip;
    assign(R, G, B, color);

    ray_marching.hipnc
  12. 2 points
    Another similar approach using deltamush, distances and polycuts. helmet2.hiplc
  13. 2 points
  14. 2 points
    Hi emesse92, I have a different approach to this problem, so I made some changes to your file:
    1. Changed the scale in the gas wrangle node inside your DOP network to 1, so that we are not deleting any density.
    2. Added a Gas Diffuse microsolver to your DOP network. Like Atom said, the density voxels are getting clamped, so basically it's an aliasing problem. I am assuming this is partly due to your velocity fields. One way of dealing with aliasing is to add a very small amount of diffusion; it's like adding a slight blur in Photoshop to fix aliasing in images, only here it's done in all three dimensions. This is exactly what the Gas Diffuse microsolver does: it pushes a small fraction of density onto the adjacent voxels. The diffusion has to be very small, say 2.5%, otherwise the simulation will look mushy. I did some tests on the file, and even at half your original resolution I was able to get rid of most of the sharp voxels; at the original resolution I expect the results to be a bit better. Try experimenting with the diffusion rate to find out what works for you. The only downside is that you might lose some detail in your simulation.
    3. Changed the mode under Volume Collision in the Static Object to Volume Sample and linked a precomputed SDF volume of the collision geometry into the Proxy Volume slot. Also changed the division method to Size. For me, the previous setup was taking up a lot of processing time before the actual simulation, so your sim should be even faster now. In my opinion, with the division method set to Size, it's easier to set the voxel size of the SDF collision geometry, and it also gives you the option to link it with your existing voxel size, essentially giving you more precise control.
    Please refer back to your obj/COMPRESSOR_COLLIDER to see all the changes made. For some odd reason, the 'attribnoise' nodes don't work in my version of Houdini; I believe this might be because I'm using a newer version. So if some nodes don't work for you, try to recreate the changes within your version of the file. The setup is pretty simple and you should have all the other nodes that I used. Hope this helps you out. Turbine7_Modified.hip
  15. 2 points
    I think it's called "screen window X/Y" in Houdini: https://www.sidefx.com/docs/houdini/nodes/obj/cam.html
  16. 2 points
    For cigarette smoke you have to look at this nice tutorial by Alessandro Pepe, and possibly also take a look at the filament solver.
  17. 2 points
  18. 2 points
    Hi guys, I found this helpful document on the SideFX help page: https://www.sidefx.com/docs/houdini/grains/stablepile.html
  19. 2 points
    No worries, of course I did NOT know it was of string type... so I simply put down a collisionignore node (not a wrangle), which worked fine, then looked in the spreadsheet, and it says "*". No prizes for guessing that it's not an integer but a string. Just reverse engineer something you know works.
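    For reference, a minimal sketch of what that node effectively writes, assuming you wanted to set it in a POP wrangle instead (the "*" pattern means ignore every collider):

    // Sketch only: the collisionignore node creates this string point attribute;
    // setting it directly in a POP wrangle should behave the same way.
    s@collisionignore = "*";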
  20. 2 points
    In addition to Atom's excellent suggestions, double-check your viewport display options: in the Texturing tab, make sure you're not limiting the 3D texture display resolution, which can lead to blocky, low-res viewport visualization.
  21. 2 points
    Convert your Alembic into a single piece of deforming geometry. The easiest way might be to re-write s@name so all parts share the same value. You may have to unpack it first. Inside a primitive wrangle: s@name = "single_mesh"; Here is a simple example showing rotating font meshes twirling and interacting with the FLIP simulation while floating on top of it. ap_deforming_hero_object_floating_on_flip.hiplc
  22. 2 points
    Thank you @animatrix. Currently expanding it into a constructive solid geometry (CSG) stack tool. It can even write to a 3d volume while it creates a really crisp rendering.
  23. 2 points
    Yes, it's true, but that's exactly what I like. I hate tutorials that take an hour for the same thing. In fact, I think it's hard to find a tutorial that both goes in depth and covers all aspects of a setup's construction, simply because it would be too big or too long. The Entagma tutorials focus on specific points and assume that you have the required knowledge. And indeed Rohan Dalvi is also a good source, which both goes deep and remains accessible to the beginner. But even though I'm far from being an expert, they are too long for me. I'm not a patient guy, haha. Don't worry about it. It's true that learning Houdini can be frustrating at times, but there is so much to learn. With a bit of time, those parts that seem obscure to you will no longer be a problem. And you can always ask here; as you can see, Houdini has a great community. By the way, another great source is the Procedural Go Masterclass. This is the source of all the other tutorials. They are quite long and very detailed. The problem is that you often have to start from an old version: if you look at a FLIP fluid masterclass for Houdini 18, for example, only the changes or novelties will be detailed, so you will have to go back to the H15 or H14 masterclass, or earlier, to find the starting point.
  24. 2 points
    This is called casting in C/C++, more specifically a static cast. This form is an old C-style cast; C++ also adds the static_cast syntax, which does the same: parent = static_cast<OP_Network*>(OPgetDirector()->findNode("/obj")); Static means it's a compile-time procedure, and here the cast converts the generic OP_Node pointer returned by findNode() into the OP_Network type it actually points to. This is very common in C++.
  25. 2 points
    Hi Shantanu, drop this into the wrangle: @pscale = fit01(@nage, chf("start_size"), 0); Then click the little icon under the arrow dropdown on the right of the code, and you can set your start size; the rest will work as you expect. Particles start with that size and then scale down to 0 at the end of their life.
  26. 2 points
    Simply put, n-gons and tris disrupt edge flow and can be a hassle to work with if you are applying subdivisions. Tris are somewhat more forgivable because they resolve triangulation ambiguity (which sometimes causes normal-related issues with non-planar quads), but n-gons usually subdivide into a weird quad pole due to the nature of the algorithm. There are methods for resolving such issues, such as adding supporting edge loops via insetting or bevels, but it's entirely feasible not to introduce the problems in the first place.
  27. 2 points
    I created this as a camera lock indicator:
  28. 1 point
    After more than 5 months of unimaginable amount of work, I am proud to release my first in-depth Houdini course on VEX More details in the video description and the website. Active Patreon members will receive additional discounts proportional to their lifetime support (25% of their lifetime support). Message me on Patreon for your discount coupon. Enjoy! Table of Contents 01 - Introduction [Point Clouds] 02 - Introduction [pcopen() vs pcfind() vs nearpoints()] 03 - Introduction 04 - Implementation 05 - pcfilter() Implementation for pcfind() 06 - pgfind() 07 - pcfind_radius() 08 - Excluding the Current Point & Ad-Hoc Groups 09 - Finding Min & Max Neighbour Points [Unique Pair Matching] 10 - Concept 11 - Implementation [Camera Based Occlusion with Variable Pscale] 12 - Concept 13 - Implementation [Uniform Point Distribution Over Polygonal Surfaces [Point Relaxation]] 14 - Concept 15 - Implementation 16 - Decoupling Operators [Convolution Kernels] 17 - Introduction 18 - Border Handling [Connectivity & k-Depth Point Neighbours Using Edges] 19 - Introduction 20 - Concept 21 - Implementation [Connectivity & k-Depth Point Neighbours Using Primitives] 22 - Concept 23 - Implementation [Extending k-Depth Point Neighbours Using Edges] 24 - Introduction 25 - Concept 26 - Implementation [Extending k-Depth Point Neighbours Using Primitives] 27 - Concept 28 - Implementation [smoothstep() [Cubic Hermite Interpolation]] 29 - Concept 30 - Implementation [Shaping Functions] 31 - Introduction 32 - Implementation 33 - Blurring Attributes [Sharpening Attributes Using Unsharp Mask] 34 - Concept 35 - Implementation [Generalizing the Kernel Code to Handle All Attribute Types] 36 - Concept 37 - Implementation [Attribute Gradient] 38 - Introduction 39 - Concept 40 - Implementation [Gradient Ascent & Descent] 41 - Planar Geometry - Introduction 42 - Planar Geometry - Concept 43 - Planar Geometry - Implementation 44 - 3D Geometry [Contour Lines] 45 - Introduction 46 - Concept 47 - Implementation 48 - Heightfields [Geometric Advection - Orthogonalization & Flowlines] 49 - Introduction 50 - Concept 51 - Implementation [Clustering & Quadtrees] 52 - Concept 53 - Implementation [Adaptive Subdivision] 54 - Introduction 55 - Implementation 56 - Hashing [Adaptive Subdivision] 57 - Improving OpenSubdiv Catmull-Clark Subdivision Surfaces Algorithm 58 - Half-Edges [Adaptive Subdivision] [Aggressive Performance Optimizations] 59 - Eliminating Groups 60 - Custom Fusing In VEX 61 - Recreating Proximity Structures In VEX 62 - Get Unshared Edges In VEX 63 - Final Optimizations [Limit Surface Sampling] 64 - Introduction 65 - OpenSubdiv Patches 66 - Moving Points to the Subdivision Limit Surface 67 - Scattering Points on the Subdivision Limit Surface 68 - Generating a Point Cloud on the Subdivision Limit Surface 69 - Pre-Generating a Point Cloud on the Subdivision Limit Surface 70 - Creating Isolines on the Subdivision Limit Surface [Adaptive Subdivision] 71 - Computing Surface Normals from the Subdivision Limit Surface [Custom Subdivision Surfaces] [Splitting Edges [Edge Divide]] 72 - Concept 73 - Converting Edges to Primitives 74 - Creating New Edge Points [Rebuilding Polygons] 75 - Concept 76 - Implementation 77 - Preserving & Interpolating Attributes 78 - Multithreading by Connectivity 79 - C++ vs VEX 80 - Preserving Groups 81 - Final Optimizations [Implementing Bilinear Subdivision] 82 - Introduction 83 - Concept 84 - Modeling Test Geometry 85 - Starting from Edge Divide 86 - Creating New Face Points 87 - Creating New Edge 
Points [Creating New Closed Polygons] 88 - Concept 89 - Implementation [Creating New Open Polygons] 90 - Concept 91 - Implementation 92 - Preserving Primitive Groups & Interpolating Primitive Attributes [Preserving Vertex Groups & Interpolating Vertex Attributes for Closed Polygons] 93 - Concept 94 - Implementation 95 - Preserving Vertex Groups & Interpolating Vertex Attributes for Open Polygons 96 - Implementing Iterations 97 - Preserving Literal Groups 98 - Creating Neighbour Primitives 99 - Final Changes 100 - Testing On Complex Geometry [Implementing Catmull-Clark Subdivision] 101 - Introduction [Closed Surfaces] 102 - Rules [Gathering Edge & Face Points] 103 - Concept 104 - Implementation [Computing Weights for New Edge Points] 105 - Concept 106 - Implementation [Computing Weights for Original Points] 107 - Concept 108 - Implementation [Attribute Interpolation] 109 - Concept 110 - Implementation [Boundary Interpolation Rules for New Edge Points] 111 - Concept 112 - Implementation [Boundary Interpolation Rules for Original Points] 113 - Concept 114 - Implementation 115 - Open Surfaces - Handling Corner Points 116 - Handling Non-Manifold Topology [Open Polygons] [Computing Weights for Original Points] 117 - Reverse Engineering OpenSubdiv 118 - Implementation [Computing Weights for New Edge Points] 119 - Reverse Engineering OpenSubdiv 120 - Implementation 121 - Handling Open Polygonal Curves [Handling Mixed Topology] 122 - Full Geometry 123 - Sub-Geometry 124 - Testing On Complex Geometry [Performance] 125 - Profiling [Grouping Boundary Edges from Primitive Group] 126 - Concept 127 - Implementation 128 - VEX vs C++ [Caustics] 129 - Introduction 130 - Sea Caustics 131 - Pool Caustics 132 - Conclusion
  29. 1 point
    This is easily possible by packing the object, initialising a normal if necessary (not required if your object was already aligned to the world axes to begin with), and then using the Path attribute in the ROP FBX. Here's a preview of the exported tube: https://imgur.com/8xoYEYM Scene attached here. 210706_ROTATE_path_attr.hiplc
  30. 1 point
    It's something like this:

    # Houdini ships PySide2; the Qt namespace enums live in QtCore.
    from PySide2 import QtWidgets, QtGui, QtCore
    import hou

    class SimpleWidget(QtWidgets.QWidget):
        def __init__(self, parent, x, y, w, h):
            QtWidgets.QWidget.__init__(self, parent, QtCore.Qt.WindowStaysOnTopHint)
            self.initialize(x, y, w, h)

        def initialize(self, x, y, w, h):
            self.setGeometry(x, y, w, h)
            self.setMask(QtGui.QRegion(0, 0, w, h))
            p = self.palette()
            p.setColor(QtGui.QPalette.Window, QtCore.Qt.red)
            self.setPalette(p)

    def createViewportOutline(thickness):
        # Close any previously created outline windows.
        if hasattr(hou.session, "viewportOutlineWindows"):
            for w in hou.session.viewportOutlineWindows:
                w.close()
            hou.session.viewportOutlineWindows = []

        # overlayviewpos / overlayviewsize are expected to be stored on
        # hou.session elsewhere (screen-space position and size of the viewport).
        p = hou.session.overlayviewpos
        s = hou.session.overlayviewsize
        w = s.width() - thickness
        h = s.height() - thickness

        outlineWindows = []

        w1 = SimpleWidget(hou.ui.mainQtWindow(), p.x(), p.y(), thickness, h)
        w1.setParent(hou.qt.floatingPanelWindow(None), QtCore.Qt.Window)
        outlineWindows.append(w1)
        w1.show()

        w2 = SimpleWidget(hou.ui.mainQtWindow(), p.x(), p.y(), w, thickness)
        w2.setParent(hou.qt.floatingPanelWindow(None), QtCore.Qt.Window)
        outlineWindows.append(w2)
        w2.show()

        w3 = SimpleWidget(hou.ui.mainQtWindow(), p.x() + w, p.y(), thickness, h)
        w3.setParent(hou.qt.floatingPanelWindow(None), QtCore.Qt.Window)
        outlineWindows.append(w3)
        w3.show()

        w4 = SimpleWidget(hou.ui.mainQtWindow(), p.x(), p.y() + h, w, thickness)
        w4.setParent(hou.qt.floatingPanelWindow(None), QtCore.Qt.Window)
        outlineWindows.append(w4)
        w4.show()

        hou.session.viewportOutlineWindows = outlineWindows

    def drawViewportOutline():
        # Draw a red outline if the viewport is locked to a camera.
        sceneviewer = hou.ui.paneTabOfType(hou.paneTabType.SceneViewer)
        currentViewport = sceneviewer.curViewport()
        if not hasattr(hou.session, "viewportOutlineWindows"):
            hou.session.viewportOutlineWindows = []
        if currentViewport.isCameraLockedToView() and currentViewport.camera():
            if len(hou.session.viewportOutlineWindows) == 0:
                createViewportOutline(2)
        else:
            if len(hou.session.viewportOutlineWindows) > 0:
                for w in hou.session.viewportOutlineWindows:
                    w.close()
                hou.session.viewportOutlineWindows = []

    hou.ui.addEventLoopCallback(drawViewportOutline)

    The callback function is described here: https://www.sidefx.com/docs/houdini/hom/hou/ui.html "Register a Python callback to be called whenever Houdini’s event loop is idle. This callback is called approximately every 50ms, unless Houdini is busy processing events." Basically, you can do anything in this function, which means that if you can query whether the viewport is set to a camera and locked, you can toggle the viewport outline.
  31. 1 point
    In a PDG network, how can I access the Outputs (see image) of a previous node within the Python Processor context? In other words, how can I access @pdg_output or workItem.output from a Python Processor node?
    Update Feb 23, 2:13 pm
    I tried the following, but it prints an empty array.

    for upstream_item in upstream_items:
        new_item = item_holder.addWorkItem(parent=upstream_item)
        print(new_item.inputResultData)

    Update Feb 23, 2:24 pm (Solved)
    After struggling with this for 4 hours, I finally figured it out by reading the HDA Processor code located at C:\Program Files\Side Effects Software\Houdini 18.5.408\houdini\pdg\types\houdini\hda.py. It seems like most of the default nodes store outputs with an item.addExpectedResultData(...) call. So to get the output values of a previous PDG node:

    for upstream_item in upstream_items:
        new_item = item_holder.addWorkItem(parent=upstream_item)
        parent_outputs = new_item.expectedInputResultData

    I hope it helps someone.
  32. 1 point
    https://www.tokeru.com/cgwiki/index.php?title=ConstraintNetworks
  33. 1 point
    There were a few little things wrong in your approach. First, don't copy stamp; copy stamping is dead. Second, you need to create a string attribute for the Material SOP to override. Redshift is dumb and their Texture Sampler doesn't have an actual parameter for the texture path that you can override, so you instead have to manually create a spare parameter on the Material Builder and then channel-reference the RS Texture Sampler to that attribute. Then you can tell the Material SOP to override that value after the copies are made. The override is meant to apply to the finished objects; you don't need to do anything before the copy operation other than create your range of random texture values. Here's your file back with the changes made. Wall_paper_02_toadstorm.hip
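    As an illustration of that last step, here is a minimal hedged sketch (not taken from the attached file) of building a random texture path per point before the copy, so the Material SOP can override the spare parameter afterwards; the attribute name and file paths are made up:

    // Point wrangle before the Copy SOP (attribute name and paths are hypothetical).
    string maps[] = { "$HIP/tex/wallpaper_A.jpg",
                      "$HIP/tex/wallpaper_B.jpg",
                      "$HIP/tex/wallpaper_C.jpg" };
    // Pick one path per point; the Material SOP later overrides the spare
    // parameter on the Material Builder with this value.
    s@texture_path = maps[int(rand(@ptnum) * len(maps)) % len(maps)];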
  34. 1 point
    You forgot to name the bind output in the volume VOP, vel in this case. And also the ramp (outside the VOP) is all set to zero. Like I said, I'm not an expert, but I've made these kinds of mistakes plenty of times. I don't get fooled by this kind of stuff anymore, haha. Well... almost.
  35. 1 point
    I managed to work around this. My fix was to use vertex normals for the generated geometry and convert the @N to @orient before merging the instancing point with the geo. The VEX for the wrangle looks like: matrix3 m = maketransform(@N, @up); @orient = quaternion(m); Which came from the article on here: Convert_N_and_Up_to_Orient
  36. 1 point
    Hi, I released my first tool for Houdini. Asset Handler is a Python Panel for SideFX Houdini that allows easy creation of and access to HDAs from a library. 00:07 HDAs General Handling 01:57 HDRI 02:30 RS-Proxies 03:30 Aixterior Assets & Scattering 09:15 Megascans Assets 11:23 Examples https://www.enoni.de/wp/asset-handler/
  37. 1 point
    Why do magicians pull rabbits out of hats? Full environment for making any living microorganisms (Microbes and Environment).."Message". https://gumroad.com/houd
  38. 1 point
    There is a nice method using a Ray SOP to detect and avoid intersections too; it's explained in this tutorial:
  39. 1 point
    Learning more about noise: polar coordinates (how to make those Haeckel figures and attribute mirror), using @petz's examples. I found something nice, and because it's in VOPs I like it: a file from Raphaël Gadot, the Mestela approach, a nice talk (Hypercubes), and combining it with qLib... I need to learn more about for-each and how to isolate those primitives/curves. More to learn. File from Gadot: hypercube_rotation_simple.hipnc
  40. 1 point
    Lesson 9. Now I must learn more about triangle carving and, generally, how to find other shapes within triangles. I found this, dude. @sweetdude09 and @petz, thanks for the lessons and examples. I always wanted to have this in Houdini (procedural setups for tattoos, UI, FUI, GUI, UNx, mechanical shapes, etc.), plus the roof tutorial by @konstantin magnus. Basically a bunch of triangles in copy stamping... endless possibilities... then a group by edge with various angles, and just finding those shapes with some luck.
  41. 1 point
    Lesson 4. I learned more about colors, for-each and CHOPs from 簡單黎講 C Plus Plus [廣東話, Cantonese] and Jaroslava Chalásová (Vimeo).
  42. 1 point
    Hi, Quick setup attached to illustrate the idea. Give it a go. fastGas.hipnc Cheers!
  43. 1 point
    hip file if anyone wants to take a look! planksv9.hipnc
  44. 1 point
    Thanks a bunch for the detailed setup! I'm genuinely curious: where does one start if they want to learn these things?
  45. 1 point
    @marty @Infinite Rave Here's an attempt at creating convincing white hair. Fun trivia: polar bears are "actually" black - they have black skin and their fur strands are transparent. Multiple passes of diffuse light and refraction as it passes through each strand make the polar bear's fur look white. Inspired by this fact, that white hair IRL is actually transparent hair without pigment, I went and set the transparency (opacity tab of the hair shader) to a very low value. I also enabled "transparent shadows". Keep in mind that these will increase render times. A lot. The scene is lit with one direct light and an environment light. As soon as you add an HDR map, it will tint the color of the white hair. Ideally one would keep the original env. map for the entire scene and use a desaturated version of it (B&W is unrealistic) just for the hair. This could be achieved in Houdini with light exclusions, but ideally one should be able to plug an env. map into just the hair material instead of using multiple lights. If this were Mental Ray I'd tell you how to do that; alas, it's Mantra and I don't know how, or if it's at all possible. If Houdini wouldn't keep me away with its poor viewport and traditional modeling tools, I'd probably be a lot more competent with Mantra too. In the image, to the right is the default hair shader with colors set to white only, and to the left with all the other adjustments; the lighting and all other settings are identical. Make sure you don't miss the attached .rar file - I've included the .hdr map so you'll get the exact same results, and if after your own experimentation you get better-looking results and render times, don't forget to post them here. white_hairball.rar
  46. 1 point
    Here is my attempt at the effect. The particle separation is fairly high, so the results are coarse. Feel free to lower the particle separation and re-sim. I have not included any whitewater; you can add that on top once you settle on a final particle separation. I have installed the pump, to the best of my ability, following the instructions in the Jeff Wagner "flip_pump_bow_curl" hip file from the above link. I have enabled OpenCL on the solver to increase calculation speed; if you experience crashing, try turning that off in the solver. I have linked the Division Size of the ship hull volume directly to the Particle Separation of the FLIP tank to keep things balanced, but as you drop the Particle Separation you may want to de-couple these two parameters, as you may not need that much resolution on the hull volume. You could also just clip off the top half of the boat, unless you expect waves to roll over the top. I think eetu is right about the resolution: you really have to lower the particle separation to get high-quality results. ap_large_ship_wake_with_pump_1a.hipnc
  47. 1 point
    You need to modify the intrinsic:transform of each primitive with the primintrinsic() and setprimintrinsic() VEX functions:

    vector scale = fit01(vector(rand(@primnum)), 0.2, 1);
    matrix3 trn = primintrinsic(0, "transform", @primnum);
    matrix scalem = maketransform(0, 0, {0,0,0}, {0,0,0}, scale, @P);
    trn *= matrix3(scalem);
    setprimintrinsic(0, "transform", @primnum, trn);

    Attached file: ts_scale_packed_VEX.hip
  48. 1 point
    The Constraint Network DOP errors out because the 'constraint_name' primitive attribute is missing on the constraint network geometry.
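    A minimal sketch of one way to supply that attribute, assuming a primitive wrangle on the constraint network geometry; "glue_constraint" is a placeholder for whatever the constraint relationship data is actually named in the DOP network:

    // Sketch only: name the constraint data each constraint primitive should use.
    s@constraint_name = "glue_constraint";   // placeholder data name
    s@constraint_type = "all";               // constrain both position and rotation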
  49. 1 point
    Heat is a product of the burn field. Burn is generated each frame with something like: if temperature > ignition temperature, then burn = fuel * burnrate. Heat is then created with max(burn, heat) and advected. In other words, heat is created at places with fuel and high temperature; temperature may exist anywhere.
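    The same logic, written out as a hedged sketch of a gas field wrangle running over the smoke object's fields (the field names and the exact placement inside the pyro solver are assumptions):

    // Sketch only: per-voxel combustion logic as described above.
    if (@temperature > chf("ignition_temp"))
        @burn = @fuel * chf("burn_rate");
    else
        @burn = 0;
    // Heat only appears where burning happens; the solver advects it afterwards.
    @heat = max(@burn, @heat);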
  50. 1 point
    There are a lot of scenarios where you can emulate the behaviour of the ForEach SOP in wrangles. The downside is that you have to run over Detail, so I'm not sure if SIMD acceleration works, but it is still faster than ForEach. There are 4 VEX functions that help you do that: nuniqueval, uniqueval, findattribvalcount and findattribval. First, find how many unique values you have; this gives you the number of pieces. Then create a loop with the same number of iterations; inside this loop you can operate on each piece individually. Next, get how many elements carry that value and create another loop; now you can traverse each individual component of that piece (see the sketch below). I always use this instead of ForEach if I can and find it a much better alternative. That's exactly what I'm doing while controlling Bullet pieces with SOP animation.
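    A minimal sketch of that pattern, assuming a detail wrangle (Run Over: Detail) and a string point attribute called "name" that defines the pieces:

    // Sketch only: emulate a for-each over pieces inside a single detail wrangle.
    int npieces = nuniqueval(0, "point", "name");
    for (int piece = 0; piece < npieces; piece++)
    {
        string piecename = uniqueval(0, "point", "name", piece);
        int count = findattribvalcount(0, "point", "name", piecename);
        for (int i = 0; i < count; i++)
        {
            int pt = findattribval(0, "point", "name", piecename, i);
            // Operate on each point of this piece, e.g. tag it with its piece index.
            setpointattrib(0, "piece", pt, piece, "set");
        }
    }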