
Leaderboard


Popular Content

Showing most liked content since 04/20/2019 in all areas

  1. 5 points
    Hi all, I've made a Color Sampler asset that samples colors from an image. Colors can be sampled with points on a line, on a grid, with randomly scattered points, or with input points. It's also possible to sample the most frequently occurring colors. The sampled colors are stored in a color ramp which can be randomized. You can download it from https://techie.se/?p=gallery&type=houdini_assets&item_nr=6.
  2. 4 points
    Because the three values define an imaginary line going from 0,0,0 to the position in space described by the 3 values. That point can be far away or it can be close to 0,0,0. This way the 3 values are able to define both a direction and a length. -b
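    The idea above can be sketched in a few lines of plain Python (illustrative only, not from the original post): the magnitude of the vector is the length, and dividing by it leaves a pure direction.

```python
import math

def length(v):
    # Distance from the origin (0,0,0) to the point the three values describe.
    return math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)

def direction(v):
    # Normalize: same heading as v, but length 1.
    l = length(v)
    return (v[0] / l, v[1] / l, v[2] / l)

v = (3.0, 0.0, 4.0)
print(length(v))     # 5.0
print(direction(v))  # (0.6, 0.0, 0.8)
```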
  3. 3 points
    Hi, set the point attribute i@found_overlap to 1 on your packed primitives in SOPs. Then in the DOP network, on rbdpackedobject1, turn on the parameter Solve on Creation Frame.
  4. 3 points
    what's up bb this is kind of annoying to do because the vellum constraints sop doesn't allow for many options when using pin to target constraints... at least not in my current build. i'm creating the constraint and then using a sop solver in the simulation to modify the ConstraintGeometry so that any primitives named "pin" or "pinorient" have their stiffness attribute set to zero based on an animation inside the solver. check it out: disable_pins_toadstorm.hip
  5. 3 points
    I've uploaded a tutorial video on generating a maze on any given polygonal mesh.
  6. 3 points
    Hey guys! I used to be an FX artist with Houdini but made the transition to concept art. However, I still have a love for a procedural workflow and am constantly looking for ways to implement Houdini into my art. I won't talk about scattering the snow because I am sure everyone knows how simple and awesome Houdini is for a thing like that! I will just say I created CSV files for Octane to read in and render, and the control is awesome. So I will just briefly talk about the cloth. For this project I wanted to create a MarvelousDesigner-in-Houdini type of thing, and I have to say Houdini for clothing is amazing. The system that I built auto-stitches itself as long as I draw the curves in the same order, which is very easy, and the biggest takeaway is that I don't have to struggle to get what I want at all - I can just grab a few points and say these are going to stick here, and that's it. In testing this workflow I created clothing for a couple of different characters in a matter of minutes. I'm really interested in using Houdini like this for my concepts, which need to be very quick... I think there is a lot of potential in a procedural workflow like this where you don't tweak it to make it perfect, you hack it to just make it good enough to paint over. I am just scratching the surface, but clothing is one thing I'll definitely be doing in Houdini from now on. I'm also using VR a lot and have some interesting tests with creating geometry in Houdini out of the curves from Gravity Sketch, but that's for another time if people are interested. Thanks for reading! Feel free to check out the full project here: https://www.artstation.com/artwork/2xYwrK
  7. 3 points
  8. 3 points
    Try dropping down a pointvelocity node after the create surface. It allows you to add curl noise or bring in your own velocity attribute. ap_fluid_velocity_042119.hiplc
  9. 3 points
    I also moved all the tumblr example files I've been sharing onto my new website that you can find here. https://www.richlord.com/tools
  10. 2 points
    I know that I posted this video in another post as an mp4. So here is a vimeo video that shows all the stages for this simulation result! I hope you like it! Thank you! Alejandro
  11. 2 points
    @f1480187 the master has spoken! Thanks once more man, really appreciate it. Just finished analyzing your setup; I'm always learning some new nodes from you and will read more about these: vertexsplit, falloff, polypath. That wrangle is gold to me, I will study that one too. Beautiful setup as always. I'll tag you once I finish this study, but basically I'm making some hard-surface helmets in movement. I asked that corner stuff to add some detail; now I'll dig a little into COPs. Thanks again, legend. Cheers!
  12. 2 points
    Ok, a bit off topic. I should have tried to do it with Vellum, but I had this idea: what about using CHOPs as a post-process for filtering the noise/jittering? Usually the jittering starts, or at least is visible, when the object is almost static. In this case you can use the length of velocity to blend the cloth/vellum object between the filtered (non-jittering) simulation and the non-filtered object. This way you get the best of the two. This trick has saved me a lot of times (if you consider three a lot) and the supervisor was happy about the "clean sim" vellum_basics__jitter_02_chop.hip
  13. 2 points
    Hey, I have been struggling to get a nice swirling motion in slower-moving pyro smoke simulations. In Maya fluids I had an option called swirl which worked beautifully. I found an old post addressing this issue, but it seems to have died before arriving at a good solution: http://forums.odforce.net/topic/23132-smoke-swirl-vorticity-with-smoke-solver/ Here is a simulation I did in Maya using the swirl attribute. https://youtu.be/Ifd6FJ2oHIc In Houdini I have been using disturbance, but I don't seem to get the results I want. Disturbance doesn't really seem to add swirl, just more turbulent detail. For example, look at this sim I found on Vimeo. You can see the disturbance start to overpower the simulation towards the end; it doesn't really add a swirling motion. https://vimeo.com/220668349 What is a good idea to get nice swirls? Thanks
  14. 2 points
    Hello! Here are some tests that I did using my Implicit Buoyancy Model! I hope you like it!! Thanks!!
  15. 2 points
    Hi guys. Sorry for the late answer. It's a known issue with H17.5. DM 1.5, which works only with H17.5, is in the last beta stage. Please wait a bit.
  16. 2 points
    Here is the intro sequence for a new video I am working on: Simple FLIP simulation with some smoke and whitewater sparks. Rendered in Redshift. Lens flares, DoF and Glow added in Nuke. The heat distortion from the smoke is the part I am least happy with. I took the volume tint AOV and created the difference from one frame to the next to drive STMap distortion. Would love to hear a better idea of doing that. Let me know what you think.
  17. 2 points
    There are many ways to do it. I included two very simple ways, just using some micro-solvers. I just have that running a few frames before sourcing my main sim. break.hip
  18. 2 points
    The syntax inside the VEX field is windvelocity, not @windvelocity.
  19. 2 points
    Before going into rendering we tend to compress all volumes to 16 bit. There is no noticeable difference for rendering and almost a 50% space savings. The only thing to really be careful with in VDB is pruning rest volumes (they really should not be pruned, as 'zero' is a valid rest value). During the simulation you want the full 32 bit, as that accuracy is needed for the fluid solve, but once you are done with the sim (and any post-processing) you can go down to 16 bit.
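    As a rough illustration of the trade-off above (plain Python with the standard struct module, not a VDB tool): round-tripping a density value through half precision loses only on the order of 1e-4 relative accuracy while halving the storage per voxel.

```python
import struct

def to_half_and_back(x):
    # Store a float at 16-bit (half) precision, then read it back,
    # like writing a volume out at 16 bit instead of 32 bit.
    return struct.unpack('e', struct.pack('e', x))[0]

density = 0.73519
stored = to_half_and_back(density)
rel_err = abs(stored - density) / density
print(rel_err < 1e-3)  # True: far below anything visible in a render

# Half floats take 2 bytes instead of 4: the ~50% space savings.
print(struct.calcsize('e'), struct.calcsize('f'))  # 2 4
```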
  20. 2 points
    If you use a SOP solver to explicitly set the Y point values of the cloth geo and an additional SOP solver to do the same to the constraint geo, you can very effectively constrain your sim to just two axes. See attached for an example. BTW, it may seem that you only need to manipulate the "Geometry" data, and in the simple attached demo that seems to work fine; however, I have found that with complex Vellum sims you really need to run over the "ConstraintGeometry" data as well. vallum_x_constraint.hip
  21. 2 points
    Sounds like a Room / Window / Parallax shader; sorry, I couldn't find it for Houdini.
  22. 2 points
    If you want to create a DNA strand, you can use two helices (with different offsets). You can transform these two curves along a guide path using a path deformer. After this you can create attributes for orientation / length / positioning etc. With these attributes you can copy/transform a geometry (a box, for example) to each of these positions. Here is a setup. Double_helix.hipnc
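    The two-offset-helices idea can be sketched in plain Python (hypothetical helper, not the node network in Double_helix.hipnc): the second strand is the first with its phase shifted by half a turn, so paired points sit on opposite sides of the axis at the same height.

```python
import math

def helix_points(n, radius=1.0, height=4.0, turns=3.0, phase=0.0):
    # Sample n points along a helix around the Y axis.
    pts = []
    for i in range(n):
        t = i / float(n - 1)
        a = t * turns * 2.0 * math.pi + phase
        pts.append((radius * math.cos(a), t * height, radius * math.sin(a)))
    return pts

# Two strands: the second is the first offset by half a turn.
strand_a = helix_points(100)
strand_b = helix_points(100, phase=math.pi)

# Corresponding points are diametrically opposite at the same height.
x0, y0, z0 = strand_a[0]
x1, y1, z1 = strand_b[0]
print(round(x0 + x1, 6), round(z0 + z1, 6), y0 == y1)
```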
  23. 2 points
    I created a short Python script to help out with the excessive nodes that are presented when importing detailed FBX files. My example is a vehicle with 278 object nodes referencing 37 materials inside the materials subnet generated by the import. This script scans the materials subnet and creates a new object-level geo for each material it finds. Inside this new geo node it creates an ObjectMerge node and populates it with all object references to the FBX material. It assigns the new material to this new geo node, pointing to /shop instead of the FBX materials subnet. Then it reviews the FBX materials and creates a new Redshift material for each FBX material detected. It scans the FBX Surface node and extracts a few parameters like diffuse color, specular etc. The net result is that I only have 37 nodes to manage instead of 278 after running the script. Also, my nodes have Redshift placeholder materials assigned, so I can get right to rendering. Add this code to a new shelf button and adjust the paths at the bottom of the script to point to your FBX subnet. The texture path is not really used at this time.

    # Scan a FBX subnet for materials.
    # Create a geo object with an object merge for each object that references the material.
    # Create a place holder Redshift material by reviewing the FBX materials in the subnet.
    # Atom 08-22-2018
    # 10-14-2018

    import hou, os, re

    def returnValidHoudiniNodeName(passedItem):
        # Thanks to Graham on OdForce for this function!
        # Replace any illegal characters for node names here.
        return re.sub("[^0-9a-zA-Z\.]+", "_", passedItem)

    def createRedshiftImageMapMaterial(passedSHOP, passedImageFilePath, passedName, passedDiffuse=[0,0,0], passedSpecular=[0,0,0], passedWeight=0.1, passedRoughness=0.23, passedIOR=1.0, passedOpacity=1.0):
        #print("->%s [%s] [%s]" % (passedSHOP, passedImageFilePath, passedName))
        rs_vop = hou.node(passedSHOP).createNode("redshift_vopnet", passedName)
        if rs_vop != None:
            # Detect the default closure node that should be created by the redshift_vopnet.
            rs_output = hou.node("%s/%s/redshift_material1" % (passedSHOP, passedName))
            if rs_output != None:
                # Create.
                rs_mat = rs_vop.createNode("redshift::Material", "rs_Mat")
                if rs_mat != None:
                    # Set passed values.
                    rs_mat.parm("diffuse_colorr").set(passedDiffuse[0])
                    rs_mat.parm("diffuse_colorg").set(passedDiffuse[1])
                    rs_mat.parm("diffuse_colorb").set(passedDiffuse[2])
                    rs_mat.parm("refl_colorr").set(passedSpecular[0])
                    rs_mat.parm("refl_colorg").set(passedSpecular[1])
                    rs_mat.parm("refl_colorb").set(passedSpecular[2])
                    rs_mat.parm("refl_weight").set(passedWeight)
                    rs_mat.parm("refl_roughness").set(passedRoughness)
                    if passedIOR == 0:
                        # A zero based IOR means activate mirror mode for the reflection section.
                        rs_mat.parm("refl_fresnel_mode").set("1")
                        rs_mat.parm("refl_brdf").set("1")
                        rs_mat.parm("refl_reflectivityr").set(0.961998)
                        rs_mat.parm("refl_reflectivityg").set(0.949468)
                        rs_mat.parm("refl_reflectivityb").set(0.91724)
                        rs_mat.parm("refl_edge_tintr").set(0.998643)
                        rs_mat.parm("refl_edge_tintg").set(0.998454)
                        rs_mat.parm("refl_edge_tintb").set(0.998008)
                        rs_mat.parm("refl_samples").set(128)
                        rs_mat.parm("diffuse_weight").set(0)
                    else:
                        rs_mat.parm("refl_ior").set(passedIOR)
                    rs_mat.parm("opacity_colorr").set(passedOpacity)
                    rs_mat.parm("opacity_colorg").set(passedOpacity)
                    rs_mat.parm("opacity_colorb").set(passedOpacity)
                    rs_tex = rs_vop.createNode("redshift::TextureSampler", returnValidHoudiniNodeName("rs_Tex_%s" % passedName))
                    if rs_tex != None:
                        # Wire
                        try:
                            rs_output.setInput(0, rs_mat)
                            can_continue = True
                        except:
                            can_continue = False
                        if can_continue:
                            if passedImageFilePath.find("NOT_DETECTED") == -1:
                                # Only plug in texture if the texture map was specified.
                                rs_mat.setInput(0, rs_tex)  # input #0 is diffuse color.
                                extension = os.path.splitext(passedImageFilePath)[1]
                                files_with_alphas = [".png",".PNG",".tga",".TGA",".tif",".TIF",".tiff",".TIFF",".exr",".EXR"]
                                if extension in files_with_alphas:
                                    # Place a sprite after the rsMaterial to implement opacity support.
                                    rs_sprite = rs_vop.createNode("redshift::Sprite", returnValidHoudiniNodeName("rs_Sprite_%s" % passedName))
                                    if rs_sprite != None:
                                        rs_sprite.parm("tex0").set(passedImageFilePath)  # set the filename to the texture.
                                        rs_sprite.parm("mode").set("1")
                                        rs_sprite.setInput(0, rs_mat)
                                        rs_output.setInput(0, rs_sprite)
                                #rs_mat.setInput(46, rs_tex)  # input #46 is opacity color (i.e. alpha).
                                rs_tex.parm("tex0").set(passedImageFilePath)  # set the filename to the texture.
                            # Remove luminosity from texture using a color corrector.
                            rs_cc = rs_vop.createNode("redshift::RSColorCorrection", returnValidHoudiniNodeName("rs_CC_%s" % passedName))
                            if rs_cc != None:
                                rs_cc.setInput(0, rs_tex)
                                rs_cc.parm("saturation").set(0)
                            # Add a slight bump using the greyscale value of the diffuse texture.
                            rs_bump = rs_vop.createNode("redshift::BumpMap", returnValidHoudiniNodeName("rs_Bump_%s" % passedName))
                            if rs_bump != None:
                                rs_bump.setInput(0, rs_cc)
                                rs_bump.parm("scale").set(0.25)  # Hard coded, feel free to adjust.
                                rs_output.setInput(2, rs_bump)
                            # Layout.
                            rs_vop.moveToGoodPosition()
                            rs_tex.moveToGoodPosition()
                            rs_cc.moveToGoodPosition()
                            rs_bump.moveToGoodPosition()
                            rs_mat.moveToGoodPosition()
                            rs_output.moveToGoodPosition()
                    else:
                        print("problem creating redshift::TextureSampler node.")
                else:
                    print("problem creating redshift::Material node.")
            else:
                print("problem detecting redshift_material1 automatic closure.")
        else:
            print("problem creating redshift vop net?")

    def childrenOfNode(node, filter):
        # Return nodes of type matching the filter (i.e. geo etc...).
        result = []
        if node != None:
            for n in node.children():
                t = str(n.type())
                if t != None:
                    for filter_item in filter:
                        if (t.find(filter_item) != -1):
                            # Filter nodes based upon passed list of strings.
                            result.append((n.name(), t))
                    result += childrenOfNode(n, filter)
        return result

    def groupByFBXMaterials(node_path, rewrite_original=False):
        lst_geo_objs = []
        lst_fbx_mats = []
        material_nodes = childrenOfNode(hou.node("%s/materials" % node_path), ["Shop material"])  # Other valid filters are Sop, Object, cam.
        for (name, type) in material_nodes:
            node_candidate = "%s/%s" % ("%s/materials" % node_path, name)
            n = hou.node(node_candidate)
            if n != None:
                lst_fbx_mats.append(node_candidate)
        object_nodes = childrenOfNode(hou.node(node_path), ["Object geo"])  # Other valid filters are Sop, Object, cam.
        for (name, type) in object_nodes:
            node_candidate = "%s/%s" % (node_path, name)
            n = hou.node(node_candidate)
            if n != None:
                lst_geo_objs.append(node_candidate)
        # Make an object geo node for each material detected.
        # Inside the object will reside an object merge to fetch in each object that references the material.
        root = hou.node("/obj")
        if root != None:
            for mat in lst_fbx_mats:
                mat_name = os.path.basename(mat)
                shader_name = "rs_%s" % mat_name
                geo_name = "geo_%s" % mat_name
                '''
                node_geo = root.createNode("geo", geo_name)
                if node_geo:
                    # Delete the default File node that is automatically created as well.
                    if (len(node_geo.children())) > 0:
                        n = node_geo.children()[0]
                        if n:
                            n.destroy()
                    node_geo.parm("shop_materialpath").set("/shop/%s" % shader_name)
                    node_obm = node_geo.createNode("object_merge", "object_merge1")
                    if node_obm != None:
                        p = node_obm.parm("objpath1")
                        all_obj = ""
                        for obj in lst_geo_objs:
                            temp_node = hou.node(obj)
                            if temp_node != None:
                                smp = temp_node.parm("shop_materialpath").eval()
                                if smp.find(mat_name) != -1:
                                    all_obj += "%s " % obj
                        p.set(all_obj)
                        node_obm.parm("xformtype").set(1)
                '''
                # Make a place holder Redshift material by reviewing the FBX material.
                opacity = 1.0
                ior = 1.025
                reflection_weight = 0.1
                reflection_roughness = 0.23
                diffuse_color = [0,0,0]
                specular_color = [0,0,0]
                # Typically the FBX Surface Shader is the second node created in the FBX materials subnet.
                n = hou.node(mat).children()[1]
                if n != None:
                    r = n.parm("Cdr").eval()
                    g = n.parm("Cdg").eval()
                    b = n.parm("Cdb").eval()
                    diffuse_color = [r,g,b]
                    sm = n.parm("specular_mult").eval()
                    if sm > 1.0:
                        sm = 1.0
                    reflection_weight = 1.0-sm
                    if (sm == 0) and (n.parm("Car").eval()+n.parm("Cdr").eval() == 2):
                        # Mirrors should use another Fresnel type.
                        ior = 0
                    r = n.parm("Csr").eval()
                    g = n.parm("Csg").eval()
                    b = n.parm("Csb").eval()
                    specular_color = [r,g,b]
                    opacity = n.parm("opacity_mult").eval()
                    reflection_roughness = n.parm("shininess").eval()*0.01
                    em = n.parm("emission_mult").eval()
                    if em > 0:
                        # We should create an rsIncandescent shader, using this color, instead.
                        r = n.parm("Cer").eval()
                        g = n.parm("Ceg").eval()
                        b = n.parm("Ceb").eval()
                    # Try to fetch the diffuse image map, if any.
                    tex_map = n.parm("map1").rawValue()
                    if len(tex_map) > 0:
                        pass
                    else:
                        tex_map = "%s/%s" % (texture_path, "NOT_DETECTED")
                    createRedshiftImageMapMaterial("/shop", tex_map, shader_name, diffuse_color, specular_color, reflection_weight, reflection_roughness, ior, opacity)
        if rewrite_original:
            # Re-write the original object node's material reference to point to the Redshift material.
            for obj in lst_geo_objs:
                node_geo = hou.node(obj)
                if node_geo:
                    m = node_geo.parm("shop_materialpath").eval()
                    if len(m):
                        mat_name = os.path.basename(m)
                        shader_name = "/shop/rs_%s" % mat_name
                        # To do this right, we need to add a material node to the end of the network
                        # and populate it with the shop_materialpath value.
                        node_display = node_geo.displayNode()
                        if node_display != None:
                            node_mat = node_geo.createNode("material", "material1")  # Create new node.
                            if node_mat != None:
                                node_mat.parm("shop_materialpath1").set(shader_name)
                                node_mat.setInput(0, node_display)  # Wire it into the network.
                                node_mat.setDisplayFlag(True)  # Move the display flag to the new node.
                                node_mat.setRenderFlag(True)  # Move the render flag to the new node.
                                node_mat.moveToGoodPosition()

    # Program starts here.
    texture_path = '/media/banedesh/Storage/Documents/Models/Ford/Ford_F-150_Raptor_2017_crewcab_fbx'  # Not really used yet.
    fbx_subnet_path = "/obj/Container_Ship_Generic_FBX"
    groupByFBXMaterials(fbx_subnet_path, True)
  24. 2 points
    Wanted to try a pure vex solution. curvegap_01.hiplc
  25. 2 points
  26. 2 points
    This will split at the length along the curve you specify with a given doorway width. I think you could easily refactor to use percentage of length... splitbylength.hip
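    The "split at a length along the curve" idea can be sketched in plain Python (a hypothetical helper, not the node network in splitbylength.hip): walk the polyline segment by segment until the accumulated arc length reaches the target, then interpolate on that segment. Converting to percentage-of-length is just dist = fraction * total_length.

```python
import math

def point_at_length(pts, dist):
    # Walk a 2D polyline and return the point at arc length `dist`,
    # interpolating on the segment where that length falls.
    walked = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if seg > 0.0 and walked + seg >= dist:
            t = (dist - walked) / seg
            return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
        walked += seg
    return pts[-1]  # past the end: clamp to the last point.

corner = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0)]  # an L-shaped curve, total length 8
print(point_at_length(corner, 6.0))  # (4.0, 2.0)
```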
  27. 2 points
    Hi, I am working on an airplane destruction with a skeleton underneath the fuselage. At first this worked quite well. ( SimV15_V5_goodImpact ) But after importing a newer version of the model with wings, it completely ruins the simulation. ( SimV15_V7_Glitch ) Once I removed the breaks in the fuselage and skeleton mesh it works as it should again, but the skeleton glitches through the fuselage. ( SimV15_V7_WingFixSkeletonBug2 ) So I know why this happens, but I don't know how to fix it. I also don't really understand why the skeleton glitches through the fuselage mesh all of a sudden. Can two deformable surfaces work together properly? In the attachments you can find the file ( Simtest_15_V10WingFix_SkeletonIssue ) and the ABC ( B25_V30_Turbo_AllWingsV2 ). Btw, I used Steven Knipping's Rigids 3 tutorial to get the metal deformation. Thanks in advance. SimV15_V5_goodImpact.mp4 SimV15_V7_Glitch.mp4 SimV15_V7_WingFixSkeletonBug2.mp4 Simtest_15_V10WingFix_SkeletonIssue.hipnc B25_V30_Turbo_AllWingsV2.ABC
  28. 2 points
    I'm using a classic Julia fractal to drive my spline: cool spline (Mantra) and black-and-white
  29. 2 points
    I see a couple of things happening. You have based your grain generation off of the proxy mesh for the ground plane. It is better to dedicate a unique object for this. Also, I noticed that the points coming into the point deform have a count mismatch. This is due to the fact that when OpenCL is enabled, it generates a different number of particles compared to the CPU rest version. Turn off OpenCL to reestablish the match. Basically you don't need the point deform version after making these modifications, but it could be used as a secondary material element. ap_LetterSetup_001_collProbs.hiplc
  30. 2 points
  31. 2 points
    Organics: "Seed" Organics: "Bud" Organics: "Leaf" These were all rendered in Redshift at very high resolutions (10800 square/14400x9600) for printing: https://www.artstation.com/thomashelzle/prints Love it! :-) Cheers, Tom
  32. 2 points
    1. +1 for hiring Alex Wanzhula 2. +1 for some kind of Mantra GPU
  33. 2 points
    Alternatively, you could also just have put $FF into your foreach-end single-pass condition, and then put your ROP after the foreach. Write out 10 frames, get 10 variations.
  34. 2 points
  35. 2 points
    A lot of people asked me to share this fake fire method. If you're interested, you can check this simple hip. After render I used ACES for a better look. fake_fire_rnd.hip
  36. 2 points
    sticky pillows....ewl.... vu_vellumstickyballs2.hiplc
  37. 2 points
    You can fit-range an attribute with setdetailattrib() set to "min" and "max".

    1st pointwrangle:

    setdetailattrib(0, 'height_min', @height, "min");
    setdetailattrib(0, 'height_max', @height, "max");

    2nd pointwrangle:

    float min = detail(0, "height_min", 0);
    float max = detail(0, "height_max", 0);
    @Cd.g = fit(@height, min, max, 0.0, 1.0);

    fit_range_VEX.hiplc
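    For reference, here is the same min/max remap the two wrangles perform, sketched in plain Python (the clamping mirrors VEX fit(); the helper name is my own):

```python
def fit(value, omin, omax, nmin=0.0, nmax=1.0):
    # Remap value from [omin, omax] into [nmin, nmax], clamped like VEX fit().
    if omax == omin:
        return nmin
    t = (value - omin) / (omax - omin)
    t = max(0.0, min(1.0, t))
    return nmin + t * (nmax - nmin)

heights = [2.0, 5.0, 11.0]
lo, hi = min(heights), max(heights)  # the "min"/"max" detail attributes
print([fit(h, lo, hi) for h in heights])  # lowest maps to 0.0, highest to 1.0
```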
  38. 2 points
    I'm working on something related to art-directing the swirly motion of gases. It's an implementation of a custom buoyancy model that lets you art direct very easily the general swirly motion of gases without using masks, vorticles, or temperature sourcing to get more swirly motion in specific zones, etc. It also gets rid of the "mushroom effect" for free with a basic turbulence setup. Here are some example previews, some with normal motion, others with extreme parameter values to stress the pipeline. For the details it's just a simple turbulence + a bit of disturbance in the vel field, nothing complex; because of this the sims are very fast (for constant sources: average voxel count 1.8 billion, voxel size 0.015, sim time 1h:40min (160 frames); for burst sources: voxel size 0.015, sim time 0h:28min). I'm working on a Vimeo video to explain more about this new buoyancy model. I hope you like it! Cheers, Alejandro constantSource_v004.mp4 constantSource_v002.mp4 burstSource_v004.mp4 constantSource_v001.mp4 burstSource_v002.mp4 burstSource_v003.mp4 burstSource_v001.mp4 constantSource_v003.mp4
  39. 2 points
    Gifstorm! First I've used a visualizer sop to show @v coming out of the trail sop: That makes sense so far. To make the next step easier to understand, I've shrunk the face that sits along +Z, and coloured the +Y face green, +X red, +Z blue. So, that done, here's that cube copied onto the points, with the v arrows overlaid too: The copied shapes are following the velocity arrows, but they're a bit poppy and unstable. So why are they following, and why are they unstable? The copy sop looks for various attributes to control the copied shapes, @v is one of them. If found, it will align the +Z of the shape down the @v vector. Unfortunately what it does if it has only @v is a little undefined; the shapes can spin on the @v axis when they get near certain critical angles, which is what causes the popping and spinning. To help the copy sop know where it should aim the +Y axis, you can add another attribute, @up. I've added a point wrangle before the trail, with the code @up = {0,1,0}; ie, along the worldspace Y axis: you can see all the green faces now try and stay facing up as much as they can (note the view axis in the lower left corner), but there's still some popping when the velocity scales to 0, then heads in the other direction. Not much you can do about that really, apart from try some other values for @up, see if they hide the problem a little better. What if we set @up to always point away from the origin? Because the circle is modelled at the origin, we can be lazy and set @up from @P (ie, draw a line from {0,0,0} to @P for each point, that's a vector that points away from the origin): Yep, all the green faces point away from the center, but there's still popping when @v scales down to 0 when the points change direction. Oh well. Maybe we can venture into silly territory? How about we measure the speed of v, and use it to blend to the @up direction when @v gets close to 0? Better! Still a bit poppy, but an improvement. 
    Here's the scene with that last setup: vel_align_example.hipnc To answer the other key words in your topic title, I mentioned earlier that the copy sop looks for attributes, obviously @v and @up as we've used here, but if it finds others, they'll take priority. Eg, @N overrides @v. @N is still just a single vector like @v, so it too doesn't totally describe how to orient the shapes. You could bypass the trail and the wrangle so that there's no @v or @up, set @N to {0,1,0}, and all the shapes will point their blue face towards the top. Without any other guidance, it will point the red side of the shapes down +X. If you give it @N and @up, then it knows where to point the green side, and you get a well defined orientation. While using 2 attributes to define rotation is perfectly valid, there are other options. The one that trumps all others is @orient. It's a single attribute, which is nice, and its party trick is that it defines orientation without ambiguity, using a 4 value vector. The downside is quaternions aren't easy to understand, but you don't really need to understand the maths behind it per se, just understand what it represents. The simplest way is to think of it as @N and @up, but glommed into a single attribute. Another way is to think of it as a 3x3 matrix (which can be used to store rotation and scale), but isolated to just the rotation bits, so it only needs 4 values rather than 9 values. In houdini, you rarely, if ever, pluck quaternion values out of thin air. You normally generate what you need via other means, then at the last minute convert to quaternion. Lots of different ways to do this; coming up with ever funkier smug ways to generate them in 1 or 2 lines of vex is something I'm still learning from funkier smug-ier co-workers. Eg, we could take our fiddled @v, and convert it to a quaternion: @orient = dihedral({0,0,1}, @v); What that's doing is taking the +Z axis of our shape-to-be-copied, and working out the quaternion to make it align to @v.
You could then insert an attrib delete before the copy, remove @N, @v, @up, and now just with the single @orient, all the shapes rotate as you'd expect. vel_align_example_orient.hipnc
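    To illustrate what dihedral() computes, here is a plain-Python sketch (my own construction, assuming normalized and non-opposite inputs, not Houdini's actual implementation): the quaternion's vector part comes from the cross product of the two directions and its scalar part from 1 + their dot product, normalized; applying it to {0,0,1} then lands on the target direction.

```python
import math

def dihedral(a, b):
    # Quaternion (x, y, z, w) rotating unit vector a onto unit vector b.
    # Assumes a and b are normalized and not exactly opposite.
    cx = a[1]*b[2] - a[2]*b[1]   # cross(a, b): the rotation axis (unnormalized)
    cy = a[2]*b[0] - a[0]*b[2]
    cz = a[0]*b[1] - a[1]*b[0]
    d = a[0]*b[0] + a[1]*b[1] + a[2]*b[2]  # dot(a, b): cosine of the angle
    w = 1.0 + d
    n = math.sqrt(cx*cx + cy*cy + cz*cz + w*w)
    return (cx/n, cy/n, cz/n, w/n)

def rotate(q, v):
    # Apply quaternion q to vector v: v + 2w(q_vec x v) + 2 q_vec x (q_vec x v).
    x, y, z, w = q
    tx = 2.0 * (y*v[2] - z*v[1])
    ty = 2.0 * (z*v[0] - x*v[2])
    tz = 2.0 * (x*v[1] - y*v[0])
    return (v[0] + w*tx + (y*tz - z*ty),
            v[1] + w*ty + (z*tx - x*tz),
            v[2] + w*tz + (x*ty - y*tx))

q = dihedral((0.0, 0.0, 1.0), (1.0, 0.0, 0.0))  # aim +Z down +X
print([round(c, 6) for c in rotate(q, (0.0, 0.0, 1.0))])  # [1.0, 0.0, 0.0]
```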
  40. 1 point
    @DoxiaStudio Peace & Love
  41. 1 point
    Here is my attempt. For sure you need unique names and ids. RBDs_SOP_solver_testing_01_fix.hip Hope this helps.
  42. 1 point
    Adding some curl noise to the vel field before DOPs is the right way. Usually I check the range of the speed and apply disturbance to specific values: for speeds in the 2-4 range, a disturbance of 4-6 usually adds more breakup to the high-speed parts.
  43. 1 point
  44. 1 point
    You could use the Set Vector Component VOP to pick a specific component of a vector and set its "fval" to be the output of your ramp. The geometry VOP outputs, however, expect P to be a vector, so you will need to give it a vector. I don't think you can export just a specific component directly.
  45. 1 point
    FMX is back in town and we'll have our Stuttgart HUG meetup + od-lunch on Wed May 1 during the FMX lunch break at 1pm. Location: Restaurant “Logo” inside the “Haus der Wirtschaft” at the FMX - we booked a table for 20 people. ( You don't need an FMX ticket for the restaurant area ) Meetup event: https://www.meetup.com/Stuttgart-Houdini-User-Group-STHUG/events/260937509/ Big thanks to @Oliver Markowski for helping set this up! See you there!
  46. 1 point
    I have never tried, so don't quote me on this, but you should be able to do it. You can put down a "render node network" inside of your HDA and then create the wedge node inside of it. This way you will carry the wedge setup with your digital asset.
  47. 1 point
    DPX is a nightmare format to support, and as far as I know, no one actually supports the entire spec. It seems like most implementations only support the Cineon container portion. The rest is madness - like someone went spec crazy at Kodak trying to support every possible image use case under the sun. Anyway, we looked into it many years ago and passed on it in favour of EXR.
  48. 1 point
    I have been using mostly a Wacom for many years, but I started to have terrible wrist pain about 6 months ago too. While I was searching for solutions, I tried many of the things you can see in the photo, and some more, like a mouse pad with gel and other soft stuff, but they didn't help me. After a few terrible months, I noticed that while I was using the Wacom or mouse, I was pressing my elbow down hard on the desk, and that was causing my wrist pain. Then I started to use my daughter's volleyball knee pads on my elbow. Each day I started to feel better, and about 2 weeks later there was no more pain. After that I tried many knee pads, but the best one was my daughter's cheap knee pads. I also attached its photo. I have been using it for 3-4 months now and am used to it. Maybe it's a little different from your problem, but I thought it might help someone else too.
  49. 1 point
    I am playing more with VEX right now. VEX code:

    float resX = 600;
    float resY = 600;
    float deltaD = 0.5;
    int numVert = 3;
    float pi = 3.1415926535;
    float rotOffset = pi / 6.0;
    vector pos = {0,0,0};

    float cx = (resX/2.0)*cos((2*pi/numVert) + rotOffset) + (resX/2);
    float cy = (resY/2.0)*sin((2*pi/numVert) + rotOffset) + (resY/2);

    int iter = chf("iteration");
    for(int i=0; i<(iter+15); i++)
    {
        int num = int(((rand(i)*32767) % numVert) + 1);
        float xVert = (resX/2) * cos((2*pi*num / numVert) + rotOffset) + (resX/2);
        float yVert = (resY/2) * sin((2*pi*num / numVert) + rotOffset) + (resY/2);
        cx += deltaD * (xVert - cx);
        cy += deltaD * (yVert - cy);
        if(i > 15)
        {
            pos.x = cx;
            pos.y = cy;
            addpoint(0, pos);
        }
    }
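    The same chaos-game logic, translated to plain Python for anyone who wants to poke at it outside Houdini (function name and seeding are my own): jump halfway toward a random polygon vertex each step, skipping the first few warm-up iterations, and the surviving points settle onto a Sierpinski-style fractal.

```python
import math, random

def chaos_game(iterations, num_vert=3, delta=0.5, res=600.0, seed=1):
    # Chaos game: repeatedly move a fraction `delta` toward a random
    # vertex of a regular polygon inscribed in a res x res square.
    rng = random.Random(seed)
    rot = math.pi / 6.0
    verts = [((res/2) * math.cos(2*math.pi*k/num_vert + rot) + res/2,
              (res/2) * math.sin(2*math.pi*k/num_vert + rot) + res/2)
             for k in range(num_vert)]
    cx, cy = verts[0]
    pts = []
    for i in range(iterations + 15):
        vx, vy = verts[rng.randrange(num_vert)]
        cx += delta * (vx - cx)
        cy += delta * (vy - cy)
        if i > 15:  # skip warm-up steps, like the VEX version
            pts.append((cx, cy))
    return pts

pts = chaos_game(1000)
# Every point stays inside the polygon's bounding square.
print(all(0.0 <= x <= 600.0 and 0.0 <= y <= 600.0 for x, y in pts))  # True
```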
  50. 1 point
    Lots of ideas. Lots.
    - Old school: fake it by copying grids to the particles, with rotations built properly. See Peter Quint's Vimeo movie on how to rotate particles that have leaf geometry copied to them. Based on the velocity of the particles, apply noise to simulate deforming blowing paper. Use curl noise as your deformer.
    - Take your particle simulation and shove it into a FLIP solver. Grab the resultant velocity fields from the FLIP simulation and use these in a custom VOP SOP to add field noise to your copied grids to bend and fold them. Use some sort of deform weight map so the center of the paper doesn't bend, but the further you go out, the more the points are affected by the velocity field.
    - Take the particles and pass them through a smoke container to modify the velocity field. Then use this velocity field to advect the grids in a POP network as softbodies using the Advect By Volume POP. The grids should be sucked along by the velocity field if the particles were boxes a bit smaller than your grids. You could also use these velocity fields as a way to add deformations to your grids.
    - Literally copy the geometry to your particles, then, using the wire object tool, turn them into a wire simulation. Use either gluetoanimation or pintoanimation applied to the geometry that you copy to the points to tug the copies along in the simulation.
    That's just putting a few seconds into it. Attached is a quick 2-minute test of the last option. wires_bound_to_particles_v001.hip