Leaderboard


Popular Content

Showing most liked content since 04/26/2019 in all areas

  1. 5 points
    Hi all, I've made a Color Sampler asset that samples colors from an image. Colors can be sampled with points on a line, on a grid, with randomly scattered points, or with input points. It's also possible to sample the most frequently occurring colors. The sampled colors are stored in a color ramp which can be randomized. You can download it from https://techie.se/?p=gallery&type=houdini_assets&item_nr=6.
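    As a rough sketch of the "most frequently occurring colors" idea (my own illustration, not the asset's actual implementation), counting colors in plain Python could look like this:

    ```python
    from collections import Counter

    def most_frequent_colors(pixels, n=3):
        """Return the n most common colors in an iterable of RGB tuples."""
        return [color for color, count in Counter(pixels).most_common(n)]

    # red occurs 3x, green 2x, blue 1x:
    palette = most_frequent_colors(
        [(1, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0)]
    )
    # palette == [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
    ```

    In the asset itself the sampled colors then feed a ramp parameter, but the counting step is just a frequency sort like this.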
  2. 3 points
    Hi Antoine, to pixelate a curve, put a grid on top of it and remove primitives by their distance to the curve's surface, before removing shared edges with the Divide SOP. if(xyzdist(1, v@P) > 0.01) removeprim(0, @primnum, 1); gridify_curve.hipnc
  3. 3 points
    OK, a friend on the French Houdini Discord gave me the answer. The HScript command opextern -RM mat returns a list of all missing external references AND the names of the nodes that use them. -M Check each file to see if it exists, and only show the missing files. -R Recurse into networks and subnets.
  4. 3 points
    Hello, here is a short video showing how volumes can be used to shade and texture semi-translucent organic things: Volumes For Organic Assets. This approach allows you to achieve a close-up, realistic organic look for semi-translucent assets where SSS or colored refraction is not enough. An object with UVs is converted to a signed distance volume with a UVW grid. The volume is then used to set density and perform a UV lookup at render time. This way density can be adjusted by depth, i.e. not so dense near the surface and very dense at the core; or not very dense, then a dense patch like an island of fat, then not dense again. The UVW grid is used to texture the volume. Different textures can be used at different depths, which gives a very flexible yet powerful way to place any texture at a given depth: texture X at depth Y, e.g. big veins 5 mm under the surface. This approach is best for close-up organic hero renders where SSS or refraction don't look good enough. Attaching example file volumetric_textures_example.hip
  5. 3 points
    Hi, set the point attribute i@found_overlap to 1 on your packed primitives in SOPs. Then, in the DOP network, on the rbdpackedobject1 node, turn on the Solve on Creation Frame parameter.
  6. 3 points
    Here is the intro sequence for a new video I am working on: a simple FLIP simulation with some smoke and whitewater sparks. Rendered in Redshift; lens flares, DoF and glow added in Nuke. The heat distortion from the smoke is the part I am least happy with: I took the volume tint AOV and created the difference from one frame to the next to drive an STMap distortion. I would love to hear a better idea for doing that. Let me know what you think.
  7. 3 points
    what's up bb this is kind of annoying to do because the vellum constraints sop doesn't allow for many options when using pin to target constraints... at least not in my current build. i'm creating the constraint and then using a sop solver in the simulation to modify the ConstraintGeometry so that any primitives named "pin" or "pinorient" have their stiffness attribute set to zero based on an animation inside the solver. check it out: disable_pins_toadstorm.hip
  8. 3 points
    I've uploaded a tutorial video on generating a maze on any given polygonal mesh.
  9. 3 points
    Hey guys! I used to be an FX artist with Houdini but made the transition to concept art. However, I still have a love for procedural workflows and am constantly looking for ways to implement Houdini into my art. I won't talk about scattering the snow because I am sure everyone knows how simple and awesome Houdini is for a thing like that! I will just say I created CSV files for Octane to read in and render, and the control is awesome. So I will just briefly talk about the cloth. For this project I wanted to create a MarvelousDesigner-in-Houdini type of thing, and I have to say Houdini for clothing is amazing. The system that I built will auto-stitch itself as long as I draw the curves in the same order, which is very easy, and the biggest takeaway is that I don't have to struggle to get what I want at all: I can just grab a few points and say these are going to stick here, and that's it. In testing this workflow I created clothing for a couple of different characters in a matter of minutes. I'm really interested in using Houdini like this for my concepts, which need to be very quick... I think there is a lot of potential in a procedural workflow like this where you don't tweak it to make it perfect, you hack it to just make it good enough to paint over. I am just scratching the surface, but clothing is one thing I'll definitely be doing in Houdini from now on. I'm also using VR a lot and have some interesting tests with creating geometry in Houdini out of the curves from Gravity Sketch, but that's for another time if people are interested. Thanks for reading. Feel free to check out the full project here: https://www.artstation.com/artwork/2xYwrK
  10. 3 points
  11. 2 points
    That looks like it should work. But... I find VEX a little tricky with multiplying vectors by floats; it doesn't like to do that right after a function call. I suggest assigning the noise to a variable and then, on another line, multiplying the vector by your float amp. Like: float amp = 4; vector noise1 = anoise(v@P); noise1 *= amp;
  12. 2 points
    Just handed in my last university piece, and I've created a showreel with all of my work from the past year. Feedback would be really nice, thank you. Showreel: VJ loops created for my dissertation: https://vimeo.com/338014461
  13. 2 points
    I know that I posted this video in another post as an mp4. So here is a vimeo video that shows all the stages for this simulation result! I hope you like it! Thank you! Alejandro
  14. 2 points
    @f1480187 the master has spoken! thanks once more man, really appreciate it. Just finished analyzing your setup, always learning some new nodes from you, will read more about these: vertexsplit, falloff, polypath. That wrangle is gold to me, will study that one too. Beautiful setup as always. I'll tag you once I finish this study, but basically I'm making some hard surface helmets in movement. I asked that corner stuff to add some detail, now I'll dig a little into COPs. Thanks again, legend. Cheers!
  15. 2 points
    Ok, a bit off topic, I should have tried to do it with Vellum, but I had this idea: what about using CHOPs as a post-process for filtering the noise/jittering? Usually the jittering starts, or at least becomes visible, when the object is almost static. In this case you can use the length of velocity to blend the cloth/vellum object between the filtered (non-jittering) simulation and the unfiltered object. This way you get the best of the two. This trick saved me a lot of times (if you consider three a lot) and the supervisor was happy about the "clean sim". vellum_basics__jitter_02_chop.hip
  16. 2 points
    Hello! Here are some tests that I did using my Implicit Buoyancy Model. I hope you like them!! Thanks!!
  17. 2 points
    Hey, I have been struggling to get a nice swirling motion in slower-moving pyro smoke simulations. In Maya fluids I had an option called swirl which worked beautifully. I found an old post addressing this issue, but it seems to have died before arriving at a good solution: http://forums.odforce.net/topic/23132-smoke-swirl-vorticity-with-smoke-solver/ Here is a simulation I did in Maya using the swirl attribute: https://youtu.be/Ifd6FJ2oHIc In Houdini I have been using disturbance, but I don't seem to get the results I want. Disturbance doesn't really add swirl so much as turbulent detail. For example, look at this sim I found on Vimeo; you can see the disturbance start to overpower the simulation towards the end, and it doesn't really add a swirling motion: https://vimeo.com/220668349 What is a good idea to get nice swirls? Thanks
  18. 2 points
    Hi guys, sorry for the late answer. It's a known issue with H17.5. DM 1.5, which works only with H17.5, is in its last beta stage. Please wait a bit.
  19. 2 points
    There are many ways to do it. I included two very simple ways, just using some microsolvers. I just have that running a few frames before sourcing my main sim. break.hip
  20. 2 points
    The syntax inside the VEX field is windvelocity, not @windvelocity.
  21. 2 points
    Before going into rendering we tend to compress all volumes to 16 bit. There is no noticeable difference for rendering and almost a 50% space saving. The only thing to really be careful with in VDB is pruning rest volumes (they really should not be pruned, as zero is a valid rest value). During the simulation you want the full 32 bits, as that accuracy is needed for the fluid solve, but once you are done with the sim (and any post-processing) you can go down to 16 bit.
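    To get a feel for what 16-bit storage preserves (an illustration of half-float precision, not the VDB codec itself), Python's struct module can round-trip a value through half precision:

    ```python
    import struct

    def to_half(x):
        """Round-trip a float through 16-bit (half) precision storage."""
        return struct.unpack('<e', struct.pack('<e', x))[0]

    # Half floats keep roughly 3 significant decimal digits, typically plenty
    # for density/color lookups at render time, but too coarse for solving.
    density = 0.3371
    error = abs(to_half(density) - density)  # on the order of 1e-4
    ```

    The 10-bit mantissa is why the sim itself wants 32-bit: errors like this accumulate over hundreds of solve substeps, but a one-time lookup at render time never notices.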
  22. 2 points
    If you use a sop solver to explicitly set the y point values of the cloth geo and an additional sop solver to do the same to the constraint geo, you can very effectively constrain your sim to just two axes. See attached for an example. BTW, it may seem that you only need to manipulate the "Geometry" data, and in the simple attached demo that seems to work fine; however, I have found that with complex Vellum sims you really need to run over the "ConstraintGeometry" data as well. vallum_x_constraint.hip
  23. 2 points
    42 million seems a little low for final quality, especially for up-close camera work. Try getting your voxel count into the 65-165 million range (physical memory is a factor here). I use an estimator expression on my pyro object: just create a new blank float and paste the expression into the field. This expression reviews your box size and particle separation to generate the approximate value. Here is the expression I use: ch("sizex")/ch("divsize")*ch("sizey")/ch("divsize")*ch("sizez")/ch("divsize") NOTE: The result is in scientific notation, so when you see +06 on the end that means millions, +07 means tens of millions and +08 means hundreds of millions. The final value shown here, 4.28345e+07, is ~42 million voxels.
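    The same estimate in Python, for checking the arithmetic of the expression (the box dimensions below are made-up example values; the parameter names mirror the HScript version):

    ```python
    def voxel_count(sizex, sizey, sizez, divsize):
        """Approximate pyro voxel count: divisions per axis multiplied together."""
        return (sizex / divsize) * (sizey / divsize) * (sizez / divsize)

    # e.g. a 7 x 3.5 x 3.5 box at a division size of 0.02:
    n = voxel_count(7.0, 3.5, 3.5, 0.02)
    # n == 10718750.0, i.e. ~1.07e+07, about 10 million voxels
    ```

    Since the count scales with the cube of 1/divsize, halving the division size costs roughly 8x the voxels.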
  24. 2 points
    Sounds like a Room / Window / Parallax shader; sorry, I couldn't find one for Houdini.
  25. 2 points
  26. 2 points
  27. 2 points
    If you want to create a DNA strand, you can use two helices (with different offsets). You can transform these two curves along a guide path using a path deformer. After this you can create attributes for orientation/length/positioning etc. With these attributes you can copy/transform a geometry (a box, for example) to each of these positions. Here is a setup. Double_helix.hipnc
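    The two offset helices can be sketched parametrically outside Houdini as well; a minimal version (my own parameter choices, not what the attached .hipnc uses):

    ```python
    import math

    def double_helix(n, radius=1.0, pitch=0.5, turns=3.0, phase=math.pi):
        """Sample two helices around the Y axis; the second strand is the
        first rotated by `phase` radians (pi gives the classic DNA offset)."""
        strand_a, strand_b = [], []
        for i in range(n):
            t = turns * 2.0 * math.pi * i / (n - 1)
            y = pitch * t / (2.0 * math.pi)  # rise of `pitch` units per turn
            strand_a.append((radius * math.cos(t), y, radius * math.sin(t)))
            strand_b.append((radius * math.cos(t + phase), y,
                             radius * math.sin(t + phase)))
        return strand_a, strand_b

    # with phase = pi the strands sit on opposite sides at every height,
    # so a rung (the copied box) simply connects point i of each strand
    a, b = double_helix(50)
    ```

    In Houdini the equivalent offset is just a different phase on the second helix before both curves go through the path deformer.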
  28. 2 points
    I created a short python script to help out with the excessive nodes that are presented when importing detailed FBX files. My example is a vehicle with 278 object nodes referencing 37 materials inside the materials subnet generated by the import. This script scans the materials subnet and creates a new object-level geo for each material it finds. Inside this new geo node it creates an ObjectMerge node and populates it with all object references to the FBX material. It assigns the new material to this new geo node, pointing to /shop instead of the FBX materials subnet. Then it reviews the FBX materials and creates a new Redshift material for each FBX material detected. It scans the FBX Surface node and extracts a few parameters like diffuse color, specular etc. The net result is that I only have 37 nodes to manage instead of 278 after running the script. Also my nodes have Redshift placeholder materials assigned so I can get right to rendering. Add this code to a new shelf button and adjust the paths, at the bottom of the script, to point to your FBX subnet. The texture path is not really used at this time.

    # Scan a FBX subnet for materials.
    # Create a geo object with an object merge for each object that references the material.
    # Create a place holder Redshift material by reviewing the FBX materials in the subnet.
    # Atom 08-22-2018
    # 10-14-2018
    import hou, os, re

    def returnValidHoudiniNodeName(passedItem):
        # Thanks to Graham on OdForce for this function!
        # Replace any illegal characters for node names here.
        return re.sub("[^0-9a-zA-Z\.]+", "_", passedItem)

    def createRedshiftImageMapMaterial(passedSHOP, passedImageFilePath, passedName, passedDiffuse=[0,0,0], passedSpecular=[0,0,0], passedWeight=0.1, passedRoughness=0.23, passedIOR=1.0, passedOpacity=1.0):
        #print "->%s [%s] [%s]" % (passedSHOP, passedImageFilePath, passedName)
        rs_vop = hou.node(passedSHOP).createNode("redshift_vopnet", passedName)
        if rs_vop != None:
            # Detect the default closure node that should be created by the redshift_vopnet.
            rs_output = hou.node("%s/%s/redshift_material1" % (passedSHOP, passedName))
            if rs_output != None:
                # Create.
                rs_mat = rs_vop.createNode("redshift::Material", "rs_Mat")
                if rs_mat != None:
                    # Set passed values.
                    rs_mat.parm("diffuse_colorr").set(passedDiffuse[0])
                    rs_mat.parm("diffuse_colorg").set(passedDiffuse[1])
                    rs_mat.parm("diffuse_colorb").set(passedDiffuse[2])
                    rs_mat.parm("refl_colorr").set(passedSpecular[0])
                    rs_mat.parm("refl_colorg").set(passedSpecular[1])
                    rs_mat.parm("refl_colorb").set(passedSpecular[2])
                    rs_mat.parm("refl_weight").set(passedWeight)
                    rs_mat.parm("refl_roughness").set(passedRoughness)
                    if passedIOR == 0:
                        # A zero-based IOR means activate mirror mode for the reflection section.
                        rs_mat.parm("refl_fresnel_mode").set("1")
                        rs_mat.parm("refl_brdf").set("1")
                        rs_mat.parm("refl_reflectivityr").set(0.961998)
                        rs_mat.parm("refl_reflectivityg").set(0.949468)
                        rs_mat.parm("refl_reflectivityb").set(0.91724)
                        rs_mat.parm("refl_edge_tintr").set(0.998643)
                        rs_mat.parm("refl_edge_tintg").set(0.998454)
                        rs_mat.parm("refl_edge_tintb").set(0.998008)
                        rs_mat.parm("refl_samples").set(128)
                        rs_mat.parm("diffuse_weight").set(0)
                    else:
                        rs_mat.parm("refl_ior").set(passedIOR)
                    rs_mat.parm("opacity_colorr").set(passedOpacity)
                    rs_mat.parm("opacity_colorg").set(passedOpacity)
                    rs_mat.parm("opacity_colorb").set(passedOpacity)
                    rs_tex = rs_vop.createNode("redshift::TextureSampler", returnValidHoudiniNodeName("rs_Tex_%s" % passedName))
                    if rs_tex != None:
                        # Wire.
                        try:
                            rs_output.setInput(0, rs_mat)
                            can_continue = True
                        except:
                            can_continue = False
                        if can_continue:
                            if passedImageFilePath.find("NOT_DETECTED") == -1:
                                # Only plug in texture if the texture map was specified.
                                rs_mat.setInput(0, rs_tex)  # input #0 is diffuse color.
                                extension = os.path.splitext(passedImageFilePath)[1]
                                files_with_alphas = [".png",".PNG",".tga",".TGA",".tif",".TIF",".tiff",".TIFF",".exr",".EXR"]
                                if extension in files_with_alphas:
                                    # Place a sprite after the rsMaterial to implement opacity support.
                                    rs_sprite = rs_vop.createNode("redshift::Sprite", returnValidHoudiniNodeName("rs_Sprite_%s" % passedName))
                                    if rs_sprite != None:
                                        rs_sprite.parm("tex0").set(passedImageFilePath)  # set the filename to the texture.
                                        rs_sprite.parm("mode").set("1")
                                        rs_sprite.setInput(0, rs_mat)
                                        rs_output.setInput(0, rs_sprite)
                                #rs_mat.setInput(46, rs_tex)  # input #46 is opacity color (i.e. alpha).
                                rs_tex.parm("tex0").set(passedImageFilePath)  # set the filename to the texture.
                                # Remove luminosity from texture using a color corrector.
                                rs_cc = rs_vop.createNode("redshift::RSColorCorrection", returnValidHoudiniNodeName("rs_CC_%s" % passedName))
                                if rs_cc != None:
                                    rs_cc.setInput(0, rs_tex)
                                    rs_cc.parm("saturation").set(0)
                                    # Add a slight bump using the greyscale value of the diffuse texture.
                                    rs_bump = rs_vop.createNode("redshift::BumpMap", returnValidHoudiniNodeName("rs_Bump_%s" % passedName))
                                    if rs_bump != None:
                                        rs_bump.setInput(0, rs_cc)
                                        rs_bump.parm("scale").set(0.25)  # Hard coded, feel free to adjust.
                                        rs_output.setInput(2, rs_bump)
                                        # Layout.
                                        rs_vop.moveToGoodPosition()
                                        rs_tex.moveToGoodPosition()
                                        rs_cc.moveToGoodPosition()
                                        rs_bump.moveToGoodPosition()
                                        rs_mat.moveToGoodPosition()
                                        rs_output.moveToGoodPosition()
                    else:
                        print "problem creating redshift::TextureSampler node."
                else:
                    print "problem creating redshift::Material node."
            else:
                print "problem detecting redshift_material1 automatic closure."
        else:
            print "problem creating redshift vop net?"

    def childrenOfNode(node, filter):
        # Return nodes of type matching the filter (i.e. geo etc...).
        result = []
        if node != None:
            for n in node.children():
                t = str(n.type())
                if t != None:
                    for filter_item in filter:
                        if (t.find(filter_item) != -1):
                            # Filter nodes based upon passed list of strings.
                            result.append((n.name(), t))
                    result += childrenOfNode(n, filter)
        return result

    def groupByFBXMaterials(node_path, rewrite_original=False):
        lst_geo_objs = []
        lst_fbx_mats = []
        material_nodes = childrenOfNode(hou.node("%s/materials" % node_path), ["Shop material"])  # Other valid filters are Sop, Object, cam.
        for (name, type) in material_nodes:
            node_candidate = "%s/%s" % ("%s/materials" % node_path, name)
            n = hou.node(node_candidate)
            if n != None:
                lst_fbx_mats.append(node_candidate)
        object_nodes = childrenOfNode(hou.node(node_path), ["Object geo"])  # Other valid filters are Sop, Object, cam.
        for (name, type) in object_nodes:
            node_candidate = "%s/%s" % (node_path, name)
            n = hou.node(node_candidate)
            if n != None:
                lst_geo_objs.append(node_candidate)
        # Make an object geo node for each material detected.
        # Inside the object will reside an object merge to fetch in each object that references the material.
        root = hou.node("/obj")
        if root != None:
            for mat in lst_fbx_mats:
                mat_name = os.path.basename(mat)
                shader_name = "rs_%s" % mat_name
                geo_name = "geo_%s" % mat_name
                '''
                node_geo = root.createNode("geo", geo_name)
                if node_geo:
                    # Delete the default File node that is automatically created as well.
                    if (len(node_geo.children())) > 0:
                        n = node_geo.children()[0]
                        if n:
                            n.destroy()
                    node_geo.parm("shop_materialpath").set("/shop/%s" % shader_name)
                    node_obm = node_geo.createNode("object_merge", "object_merge1")
                    if node_obm != None:
                        p = node_obm.parm("objpath1")
                        all_obj = ""
                        for obj in lst_geo_objs:
                            temp_node = hou.node(obj)
                            if temp_node != None:
                                smp = temp_node.parm("shop_materialpath").eval()
                                if smp.find(mat_name) != -1:
                                    all_obj += "%s " % obj
                        p.set(all_obj)
                        node_obm.parm("xformtype").set(1)
                '''
                # Make a place holder Redshift material by reviewing the FBX material.
                opacity = 1.0
                ior = 1.025
                reflection_weight = 0.1
                reflection_roughness = 0.23
                diffuse_color = [0,0,0]
                specular_color = [0,0,0]
                # Typically the FBX Surface Shader is the second node created in the FBX materials subnet.
                n = hou.node(mat).children()[1]
                if n != None:
                    r = n.parm("Cdr").eval()
                    g = n.parm("Cdg").eval()
                    b = n.parm("Cdb").eval()
                    diffuse_color = [r, g, b]
                    sm = n.parm("specular_mult").eval()
                    if sm > 1.0:
                        sm = 1.0
                    reflection_weight = 1.0 - sm
                    if (sm == 0) and (n.parm("Car").eval() + n.parm("Cdr").eval() == 2):
                        # Mirrors should use another Fresnel type.
                        ior = 0
                    r = n.parm("Csr").eval()
                    g = n.parm("Csg").eval()
                    b = n.parm("Csb").eval()
                    specular_color = [r, g, b]
                    opacity = n.parm("opacity_mult").eval()
                    reflection_roughness = n.parm("shininess").eval() * 0.01
                    em = n.parm("emission_mult").eval()
                    if em > 0:
                        # We should create an rsIncandescent shader, using this color, instead.
                        r = n.parm("Cer").eval()
                        g = n.parm("Ceg").eval()
                        b = n.parm("Ceb").eval()
                    # Try to fetch the diffuse image map, if any.
                    tex_map = n.parm("map1").rawValue()
                    if len(tex_map) > 0:
                        pass
                    else:
                        tex_map = "%s/%s" % (texture_path, "NOT_DETECTED")
                    createRedshiftImageMapMaterial("/shop", tex_map, shader_name, diffuse_color, specular_color, reflection_weight, reflection_roughness, ior, opacity)
        if rewrite_original:
            # Re-write the original object node's material reference to point to the Redshift material.
            for obj in lst_geo_objs:
                node_geo = hou.node(obj)
                if node_geo:
                    m = node_geo.parm("shop_materialpath").eval()
                    if len(m):
                        mat_name = os.path.basename(m)
                        shader_name = "/shop/rs_%s" % mat_name
                        # To do this right, we need to add a material node to the end of the network and populate it with the shop_materialpath value.
                        node_display = node_geo.displayNode()
                        if node_display != None:
                            node_mat = node_geo.createNode("material", "material1")  # Create new node.
                            if node_mat != None:
                                node_mat.parm("shop_materialpath1").set(shader_name)
                                node_mat.setInput(0, node_display)  # Wire it into the network.
                                node_mat.setDisplayFlag(True)  # Move the display flag to the new node.
                                node_mat.setRenderFlag(True)  # Move the render flag to the new node.
                                node_mat.moveToGoodPosition()

    # Program starts here.
    texture_path = '/media/banedesh/Storage/Documents/Models/Ford/Ford_F-150_Raptor_2017_crewcab_fbx'  # Not really used yet.
    fbx_subnet_path = "/obj/Container_Ship_Generic_FBX"
    groupByFBXMaterials(fbx_subnet_path, True)
  29. 2 points
    Wanted to try a pure vex solution. curvegap_01.hiplc
  30. 2 points
  31. 2 points
    This will split at the length along the curve you specify with a given doorway width. I think you could easily refactor to use percentage of length... splitbylength.hip
  32. 2 points
    Alternatively, you could also just put $FF into your foreach-end single-pass condition and then place your ROP after the foreach. Write out 10 frames, get 10 variations.
  33. 2 points
    sticky pillows....ewl.... vu_vellumstickyballs2.hiplc
  34. 2 points
    You can fit-range an attribute with setdetailattrib() set to "min" and "max". 1st pointwrangle: setdetailattrib(0, 'height_min', @height, "min"); setdetailattrib(0, 'height_max', @height, "max"); 2nd pointwrangle: float min = detail(0, "height_min", 0); float max = detail(0, "height_max", 0); @Cd.g = fit(@height, min, max, 0.0, 1.0); fit_range_VEX.hiplc
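    The same min/max fit in plain Python, for reference (the VEX above does this per point via the two detail attributes):

    ```python
    def fit_range(values, new_min=0.0, new_max=1.0):
        """Remap values so their min/max span [new_min, new_max], like VEX fit()."""
        lo, hi = min(values), max(values)
        return [new_min + (v - lo) / (hi - lo) * (new_max - new_min) for v in values]

    heights = [2.0, 3.5, 5.0]
    normalized = fit_range(heights)
    # normalized == [0.0, 0.5, 1.0]
    ```

    The two-wrangle split in the VEX version exists because the min/max must be gathered over all points before any single point can be remapped.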
  35. 2 points
    I'm working on something related to art-directing the swirly motion of gases. It's an implementation of a custom buoyancy model that lets you very easily art-direct the general swirly motion of gases without using masks, vorticles, or temperature sourcing to get more swirly motion in specific zones, etc. It also gets rid of the "mushroom effect" for free with a basic turbulence setup. Here are some example previews, some with normal motion, others with extreme parameter values to stress the pipeline. For the details it's just simple turbulence plus a bit of disturbance in the vel field, nothing complex; because of this the sims are very fast (for constant sources: average voxel count 1.8 billion, voxel size 0.015, sim time 1h:40min (160 frames); for burst sources: voxel size 0.015, sim time 0h:28min). I'm working on a Vimeo video to explain this new buoyancy model further. I hope you like it! Cheers, Alejandro constantSource_v004.mp4 constantSource_v002.mp4 burstSource_v004.mp4 constantSource_v001.mp4 burstSource_v002.mp4 burstSource_v003.mp4 burstSource_v001.mp4 constantSource_v003.mp4
  36. 2 points
    Gifstorm! First I've used a visualizer sop to show @v coming out of the trail sop: That makes sense so far. To make the next step easier to understand, I've shrunk the face that sits along +Z, and coloured the +Y face green, +X red, +Z blue. So, that done, here's that cube copied onto the points, with the v arrows overlaid too: The copied shapes are following the velocity arrows, but they're a bit poppy and unstable. So why are they following, and why are they unstable? The copy sop looks for various attributes to control the copied shapes, @v is one of them. If found, it will align the +Z of the shape down the @v vector. Unfortunately what it does if it has only @v is a little undefined; the shapes can spin on the @v axis when they get near certain critical angles, which is what causes the popping and spinning. To help the copy sop know where it should aim the +Y axis, you can add another attribute, @up. I've added a point wrangle before the trail, with the code @up = {0,1,0}; ie, along the worldspace Y axis: you can see all the green faces now try and stay facing up as much as they can (note the view axis in the lower left corner), but there's still some popping when the velocity scales to 0, then heads in the other direction. Not much you can do about that really, apart from try some other values for @up, see if they hide the problem a little better. What if we set @up to always point away from the origin? Because the circle is modelled at the origin, we can be lazy and set @up from @P (ie, draw a line from {0,0,0} to @P for each point, that's a vector that points away from the origin): Yep, all the green faces point away from the center, but there's still popping when @v scales down to 0 when the points change direction. Oh well. Maybe we can venture into silly territory? How about we measure the speed of v, and use it to blend to the @up direction when @v gets close to 0? Better! Still a bit poppy, but an improvement. 
    Here's the scene with that last setup: vel_align_example.hipnc To answer the other key words in your topic title, I mentioned earlier that the copy sop looks for attributes, obviously @v and @up as we've used here, but if it finds others, they'll take priority. Eg, @N overrides @v. @N is still just a single vector like @v, so it too doesn't totally describe how to orient the shapes. You could bypass the trail and the wrangle so that there's no @v or @up, set @N to {0,1,0}, and all the shapes will point their blue face towards the top. Without any other guidance, it will point the red side of the shapes down +X. If you give it @N and @up, then it knows where to point the green side, and you get a well defined orientation. While using 2 attributes to define rotation is perfectly valid, there are other options. The one that trumps all others is @orient. It's a single attribute, which is nice, and its party trick is that it defines orientation without ambiguity, using a 4 value vector. The downside is quaternions aren't easy to understand, but you don't really need to understand the maths behind it per se, just understand what it represents. The simplest way is to think of it as @N and @up, but glommed into a single attribute. Another way is to think of it as a 3x3 matrix (which can be used to store rotation and scale), but isolated to just the rotation bits, so it only needs 4 values rather than 9 values. In houdini, you rarely, if ever, pluck quaternion values out of thin air. You normally generate what you need via other means, then at the last minute convert to quaternion. Lots of different ways to do this; coming up with ever funkier smug ways to generate them in 1 or 2 lines of vex is something I'm still learning from funkier smug-ier co-workers. Eg, we could take our fiddled @v, and convert it to a quaternion: @orient = dihedral({0,0,1}, @v); What that's doing is taking the +Z axis of our shape-to-be-copied, and working out the quaternion to make it align to @v.
You could then insert an attrib delete before the copy, remove @N, @v, @up, and now just with the single @orient, all the shapes rotate as you'd expect. vel_align_example_orient.hipnc
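    For the curious, the dihedral() idea can be reproduced in a few lines. A sketch of the quaternion that rotates one unit vector onto another, in (x, y, z, w) component order as Houdini stores @orient (this is my own illustration, not Houdini's source):

    ```python
    import math

    def dihedral(a, b):
        """Quaternion (x, y, z, w) rotating unit vector a onto unit vector b.
        Note: exactly opposite vectors (a == -b) would need special handling."""
        ax, ay, az = a
        bx, by, bz = b
        # rotation axis comes from the cross product, the angle from the dot product
        cx, cy, cz = ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx
        w = 1.0 + (ax * bx + ay * by + az * bz)
        n = math.sqrt(cx * cx + cy * cy + cz * cz + w * w)
        return (cx / n, cy / n, cz / n, w / n)

    # aligning +Z to +X is a 90-degree rotation about +Y:
    q = dihedral((0.0, 0.0, 1.0), (1.0, 0.0, 0.0))
    # q ~= (0, 0.7071, 0, 0.7071), i.e. axis Y, half-angle 45 degrees
    ```

    That half-angle encoding (sin and cos of angle/2) is why the raw quaternion numbers look unintuitive, and why it's easier to generate them from vectors as above than to pluck them out of thin air.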
  37. 1 point
    Threadripper is a great option if you don't need a ton of memory (128 gigabytes or less).
  38. 1 point
    You're going to have a very very hard time with this technical exercise of wrapping a rotating cloth around and around a small tube. This is a non trivial problem. You may need to crank up certain values like the density of points on the geometry or crank up the overall substeps to such a point that it either blows up or just barely works. In other words Vellum, IMHO, isn't designed for wrapping and sticking cloth. Instead I think it's designed more for draping cloth style with one or two wrinkles, not eight layers. But I'd be grateful if you proved me wrong and were able to find the right settings. To that end you'll want to change your polygon group selections to a more procedural nature and use a bounding box instead of an explicit manual selection of certain numbered polys. That way you can change your geometry more easily. Is this for a specific effect in a shot that you're working on? Is it a personal challenge? Is it for a class? There might be another way to solve this as a visual problem instead of a naive simulation. I would suggest a more manual animation style or blend shape style solution if you need a ring of cloth wrapped around a pole.
  39. 1 point
    RMB > Actions > Create Reference Copy
  40. 1 point
    I already have hired an artist and am paying a reasonable rate. I do have a scene file that I made from scratch but for high security reasons am unable to share it.
  41. 1 point
    "Hybrid" Another one in the Organics series, this time from a layered structure with strong DOF in Redshift and "Power" at 100 for that special look... Cheers, Tom
  42. 1 point
    I think there are a couple of things going on that affect your sim results. Because your thin shape is so much thinner than the thick shape, your point thickness changes, which will affect the sim; if you visualize thickness in the sim you will see it changing between sims because of overlap. Then the bigger difference between the width and height of the primitives of the thin tube will make it bend more easily, and that has more to do with the cloth settings than the struts. Increasing the stiffness and bend stiffness a lot on the cloth constraints helps as much, if not more, than the struts. I also upped the strut constraints' stiffness a bit. Making those changes gives you good results while only having to set the solver substeps to 2 and the constraint iterations to 200, so you can avoid 4000 iterations. odforce_example_zj.hiplc
  43. 1 point
    Thanks Andrea, that did the trick. The two things I am having trouble with now are: 1) Vellum object collision is not registering. I thought it might have to do with the thickness of the cloth (since there is some space between the objects), but changing it didn't affect collisions. I also tried tweaking the "Search Scale" in the "Advanced" tab of the Vellum Solver, but that didn't do too much either. I feel like I am missing something simple? 2) The collisions look "flickery" when I visualize them. To get around it I have increased the Substep Iterations on the vellum solver, which helped a little, but it's still a little jittery. Do I need to just increase the substeps or is there a workaround for this? Lastly, I noticed you had increased the "Max Acceleration" on the Vellum Solver and I thought that might help me with my balloons inflating smoothly, but they are still pretty jerky when they inflate. Any tips? Thanks again!! inflate-test-002.hipnc
  44. 1 point
    Not sure if I understand correctly, but if you want to bake a point attribute into UV space in COPs, then VEX works as usual: vector uv = set(X, Y, 0); vector clr = uvsample("op:/obj/geo/geometry", "Cd", "uv", uv); R = clr.x; G = clr.y; B = clr.z; Copy this (replacing the path and attribute name) into a Snippet VOP inside a vopcop2filter and you should have the attribute baked into a texture.
  45. 1 point
    This seems to be a recurrent problem for pretty much everybody. The FBX export doesn't have this capability in and of itself. If you want, you could look into rbd_to_fbx from the GameDev toolset; they actually did it. Their method is to create a subnet with a geo node for each packed fragment, and export that subnet. I believe this could be replicated inside of SOPs.
  46. 1 point
    The images you attached were probably generated by mathematical software like Wolfram Mathematica or MATLAB, so if you have the exact math functions used to generate them, you can use them in Houdini too. For such a math approach (not really procedural in the sense of combining the full potential of Houdini), you can use the ISO Surface node. Basically, that node accepts any function of the X, Y, Z coordinates in implicit form.
    For example, say you want to define the surface of a sphere with radius 1. You are actually thinking of a function which gives you all points that are exactly 1 unit from, say, the center of the scene. The function that covers that in 3D space is sqrt(x^2 + y^2 + z^2) = 1. If x, y, z represent the coordinates of a point, then any point whose x, y, z satisfy the equation lies on the surface of that unit sphere. Squaring both sides of the unit sphere equation gives x^2 + y^2 + z^2 = 1. That is the explicit form of the function. Moving the right part to the left gives x^2 + y^2 + z^2 - 1 = 0. Now that the right part is 0, it can be dropped (but think of it as still existing and being equal to zero), which leaves you with the implicit form x^2 + y^2 + z^2 - 1, and that example is the default expression value in the ISO Surface node: a unit sphere.
    The node samples 3D space in the ranges you set and generates a surface (iso surface) wherever a point's coordinates ($X, $Y, $Z) satisfy the implicit equation you entered. The equation of a simple torus in implicit form is (R - sqrt(X^2 + Z^2))^2 + Y^2 - r^2, where R and r are the large and small radii of the torus. ISO_torus.hip
    Without the proper formulas for the exact definition of your surfaces, everything else is just guessing. If guessing is good enough, you can try modifying the equation. Btw, any kind of function can be processed, even something like noise($X,$Y,$Z). Doing the proper math for repetition over many radius levels involves repetition functions like modulus, or trigonometric sin or cos.
    Rearranging the torus equation to solve for the small radius r gives r = sqrt((R - sqrt(X^2 + Z^2))^2 + Y^2). Substituting that back in place of r in the original function always gives 0, because every point, regardless of its coordinates, then satisfies the equation. But if you quantize the expression like this: r = int(N * sqrt((R - sqrt(X^2 + Z^2))^2 + Y^2)) / N, then only those points whose quantized radius matches actually satisfy the equation. ISO_torus_repetition.hip
    As you can see in the example, you can also use a logical function to clamp the calculation to some segment. The expression length($X,0,$Z)<R clamps the calculation to the inside of a tube of radius R. That example reproduces your image 1. On images 2 and 3 you can see that the change over the Y axis bends the toruses, so you have to put that into the equation too, etc. This is NOT a procedural approach; it is just the pure math of a surface representation for some equation. Since you have images from math software, I suppose you also have the exact functions, so you can use them the same way in Houdini.
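    To sanity-check the torus equation above outside Houdini, here is a minimal Python sketch of the same implicit function and the int(N * d) / N quantization trick (plain Python, just illustrating the math; the ISO Surface node evaluates the same expression over $X, $Y, $Z):

```python
import math

def torus(x, y, z, R=1.0, r=0.25):
    # Implicit torus: zero on the surface, negative inside the tube, positive outside
    return (R - math.sqrt(x*x + z*z))**2 + y*y - r*r

def quantize(d, N=8):
    # Snap a radius value to steps of 1/N, as in r = int(N * d) / N
    return int(N * d) / N

# The point (R + r, 0, 0) lies exactly on the torus surface
on_surface = torus(1.25, 0.0, 0.0)   # ~0
```

Evaluating the function for a point tells you which side of the surface it is on, which is exactly how the ISO Surface node decides where to generate geometry.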
  47. 1 point
    You could make a relative height gradient (in the 0..1 range) like this.
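    The idea behind a relative height gradient, as a minimal Python sketch (in Houdini you would typically do this in a wrangle with the bounding box of the geometry): remap each height into the 0..1 range of the overall bounds.

```python
def relative_height(y, ymin, ymax):
    # Remap a height value into the 0..1 range of the geometry's vertical bounds
    return (y - ymin) / (ymax - ymin)

heights = [0.0, 2.5, 5.0]
grad = [relative_height(h, min(heights), max(heights)) for h in heights]
# lowest point -> 0.0, highest point -> 1.0
```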
  48. 1 point
  49. 1 point
    Here's one way; no doubt someone else will show an easier method.
    To fit a colour ramp to something, you need an input value that goes between 0 and 1. In this case, we want a 0-1 value within the bounding box of your volume. The relbbox() function does this: one of the lower corners is treated as {0,0,0}, the opposite top corner as {1,1,1}, and if you give it a position within the box, it tells you where that position sits within the 0-1 range of the box. In a volume wrangle, you can query the position of each voxel relative to the overall bounding box with vector bb = relbbox(0, @P); For the vertical gradient, we only want the y component of that, ie bb.y. You can feed that directly to a channel ramp, set the colours you want, and drive @Cd that way: @Cd = chramp('my_colour', bb.y); vol_y_ramp.hipnc
    A few things to watch for:
    - The default mode for a channel ramp is a float ramp, not a colour ramp. You need to edit the parameter interface to swap it from float to colour.
    - You need a volume visualisation node to see it in the viewport.
    - I'm sure it's possible to do this in a shader and not directly in the volume, but I couldn't be bothered to work that out.
    - relbbox() might be deprecated in future in favour of the more descriptive relpointbbox(), but that seems to have a bug with volumes. Instead you need to put down a Bounds SOP to get an actual box shape, wire that to the second input of the wrangle, and call relpointbbox(1,@P) instead.
    - The viewport renderer shows washed-out colours compared to the render; you can see I used the colour correction toolbar on both the viewport and render view to make them roughly match.
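    What relbbox() and the colour ramp do can be sketched in plain Python (just an illustration of the math; the VEX above is what actually runs in the wrangle). The ramp here is a hypothetical two-colour linear blend standing in for the chramp() parameter:

```python
def relbbox(bmin, bmax, p):
    # Position of p inside the box, remapped so bmin -> (0,0,0) and bmax -> (1,1,1)
    return tuple((pc - lo) / (hi - lo) for pc, lo, hi in zip(p, bmin, bmax))

def ramp(t, c0=(0, 0, 1), c1=(1, 0, 0)):
    # Two-point colour ramp: linear blend from c0 at t=0 to c1 at t=1, clamped
    t = max(0.0, min(1.0, t))
    return tuple(a + t * (b - a) for a, b in zip(c0, c1))

bb = relbbox((-1, -1, -1), (1, 1, 1), (0, 0, 0))   # box centre -> (0.5, 0.5, 0.5)
cd = ramp(bb[1])                                    # drive colour by the y component
```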
  50. 1 point
    ok, here is the example file with 4 ways (cache the instance geometry first, both blue nodes):
    1. (Purple) rendering points with an instancefile attribute directly through fast instancing
    2. (Green) overriding the unexpandedfilename intrinsic for any packed disk primitive copied onto points, without stamping
    3. (Red) just for comparison, the Instance SOP, which uses copy stamping inside, so it will be slower than the previous methods
    4. (Yellow) copying a static Alembic without stamping and overriding abcframe, in this case to vary time for each instance independently (if you need various Alembics, you can vary abcfilename as well)
    ts_instance_and_packed_examples_without_stamping.hip
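    The core idea behind methods 1, 2 and 4 is that each instance carries an unexpanded file path (or frame value) that gets expanded per instance at render time, instead of cooking a copy per instance with stamping. A toy Python sketch of that expansion idea (the function name and template syntax here are illustrative only; Houdini does this internally for instancefile, unexpandedfilename and abcframe):

```python
def expand(template, frame):
    # Expand a per-instance path template, mimicking how an unexpanded
    # file path with $F resolves to a different file for each instance
    return template.replace("$F", str(frame))

# Three instances of the same template, each resolving to its own frame on disk
paths = [expand("geo/flag.$F.bgeo", f) for f in (10, 11, 12)]
```

Because only the string varies per instance, the heavy geometry is loaded lazily at render time and nothing has to be stamped or unpacked in SOPs.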