Popular Content

Showing most liked content since 04/19/2019 in all areas

  1. 5 points
    Hi all, I've made a Color Sampler asset that samples colors from an image. Colors can be sampled with points on a line, on a grid, with randomly scattered points, or with input points. It's also possible to sample the most frequently occurring colors. The sampled colors are stored in a color ramp which can be randomized. You can download it from https://techie.se/?p=gallery&type=houdini_assets&item_nr=6.
  2. 4 points
    Because the three values define an imaginary line going from 0,0,0 to the position in space described by the 3 values. That point can be far away or it can be close to 0,0,0. This way the 3 values are able to define both a direction and a length. -b
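The idea above can be sketched in plain Python (no Houdini required): the same three values yield both a direction and a length.

```python
import math

def decompose(v):
    """Split a 3-value vector into (direction, length)."""
    length = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    if length == 0.0:
        return (0.0, 0.0, 0.0), 0.0  # the zero vector has no direction
    direction = (v[0] / length, v[1] / length, v[2] / length)
    return direction, length

# A point at (3, 4, 0) is 5 units from the origin, in direction (0.6, 0.8, 0).
d, l = decompose((3.0, 4.0, 0.0))
```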
  3. 3 points
    Hi, set the point attribute i@found_overlap to 1 on your packed primitives in SOPs. Then, in the DOP network, on the rbdpackedobject1 node, turn on the Solve on Creation Frame parameter.
  4. 3 points
    what's up bb this is kind of annoying to do because the vellum constraints sop doesn't allow for many options when using pin to target constraints... at least not in my current build. i'm creating the constraint and then using a sop solver in the simulation to modify the ConstraintGeometry so that any primitives named "pin" or "pinorient" have their stiffness attribute set to zero based on an animation inside the solver. check it out: disable_pins_toadstorm.hip
  5. 3 points
    I've uploaded a tutorial video to generate maze on any given polygonal mesh.
  6. 3 points
    Hey guys! I used to be an FX artist with Houdini but made the transition to concept art, however I still have a love for a procedural workflow and am constantly looking for ways to implement Houdini into my art. I won't talk about scattering the snow because I am sure everyone knows how simple and awesome Houdini is for a thing like that! I will just say I created CSV files for Octane to read in and render, and the control is awesome. So I will just briefly talk about the cloth. For this project I wanted to create a MarvelousDesigner-in-H type of thing, and I have to say Houdini for clothing is amazing. The system that I built means it will auto-stitch itself as long as I draw the curves in the same order, which is very easy, and the biggest takeaway is that I don't have to struggle to get what I want at all - I can just grab a few points and say these are going to stick here and that's it. In testing this workflow I created clothing for a couple different characters in a matter of minutes. I'm really interested in using Houdini like this for my concepts, which need to be very quick... I think there is a lot of potential in a procedural workflow like this where you don't tweak it to make it perfect, you hack it to just make it good enough to paint over. I am just scratching the surface, but clothing is one thing I'll definitely be doing in Houdini from now on. I'm also using VR a lot and have some interesting tests with creating geometry in Houdini out of the curves from Gravity Sketch, but that's for another time if people are interested. Thanks for reading. Feel free to check out the full project here: https://www.artstation.com/artwork/2xYwrK
  7. 3 points
  8. 3 points
    Try dropping down a pointvelocity node, after the create surface. It allows you to add curl noise or bring in your own velocity attribute. ap_fluid_velocity_042119.hiplc
  9. 3 points
    I also moved all the tumblr example files I've been sharing onto my new website that you can find here. https://www.richlord.com/tools
  10. 2 points
    I know that I posted this video in another post as an mp4. So here is a vimeo video that shows all the stages for this simulation result! I hope you like it! Thank you! Alejandro
  11. 2 points
    @f1480187 the master has spoken! thanks once more man, really appreciate it. Just finished analyzing your setup, always learning some new nodes from you, will read more about these: vertexsplit, falloff, polypath. That wrangle is gold to me, will study that one too. Beautiful setup as always. I'll tag you once I finish this study, but basically I'm making some hard surface helmets in movement, I asked that corner stuff to add some detail, now I'll dig a little into COPs. Thanks again, legend. Cheers!
  12. 2 points
    Ok, a bit off topic. I should have tried to do it with vellum, but I had this idea: what about using CHOPs as a post-process for filtering the noise/jittering? Usually the jittering starts, or at least becomes visible, when the object is almost static. In this case you can use the length of velocity to blend the cloth/vellum object between the filtered (non-jittering) simulation and the unfiltered object. This way you get the best of the two. This trick has saved me a lot of times (if you consider three a lot) and the supervisor was happy about the "clean sim". vellum_basics__jitter_02_chop.hip
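The core of the trick, blending between the filtered and raw result by the length of velocity, can be sketched in plain Python. The threshold values here are made-up placeholders, not taken from the hip file:

```python
def blend_by_speed(p_filtered, p_raw, speed, v_min=0.01, v_max=0.1):
    """Nearly static points (low speed) take the filtered, jitter-free
    position; fast-moving points take the raw simulated position."""
    t = (speed - v_min) / (v_max - v_min)
    t = max(0.0, min(1.0, t))  # clamp blend factor to 0..1
    return tuple(f * (1.0 - t) + r * t for f, r in zip(p_filtered, p_raw))
```

At speeds below v_min the filtered result wins outright; above v_max the raw sim passes through untouched, and in between the two are crossfaded.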
  13. 2 points
    Hello! Here are some tests that I did using my Implicit Buoyancy Model! I hope you like them!! Thanks!!
  14. 2 points
    Hey, I have been struggling to get a nice swirling motion in slower-moving pyro smoke simulations. In Maya fluids I had an option called swirl which worked beautifully. I found an old post addressing this issue, but it seems to have died before arriving at a good solution: http://forums.odforce.net/topic/23132-smoke-swirl-vorticity-with-smoke-solver/ Here is a simulation I did in Maya using the swirl attribute. https://youtu.be/Ifd6FJ2oHIc In Houdini I have been using disturbance but I don't seem to get the results I want. Disturbance doesn't really seem to add swirl, just more turbulent detail. For example, look at this sim I found on Vimeo. You can see the disturbance start to overpower the simulation towards the end; it doesn't really add a swirling motion. https://vimeo.com/220668349 What is a good idea to get nice swirls? Thanks
  15. 2 points
    Hi guys. Sorry for the late answer. It's a known issue with H17.5. DM 1.5, which works only with H17.5, is in its last beta stage. Please wait a bit.
  16. 2 points
    Here is the intro sequence for a new video I am working on: Simple FLIP simulation with some smoke and whitewater sparks. Rendered in Redshift. Lens flares, DoF and Glow added in Nuke. The heat distortion from the smoke is the part I am least happy with. I took the volume tint AOV and created the difference from one frame to the next to drive STMap distortion. Would love to hear a better idea of doing that. Let me know what you think.
  17. 2 points
    There are many ways to do it. I included two very simple ones, just using some microsolvers. I just have that running a few frames before sourcing my main sim. break.hip
  18. 2 points
    The syntax inside the VEX field is windvelocity, not @windvelocity.
  19. 2 points
    Before going into rendering we tend to compress all volumes to 16 bit. There is no noticeable difference for rendering and almost a 50% space savings. Only thing to really be careful with vdb is pruning with rest volumes (they really should not be pruned as 'zero' is a valid rest value.) During the simulation you want the full 32-bit as that accuracy is needed for the fluid solve, but once you are done with the sim (and any post processing) you can go down to 16-bit.
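The 32-bit to 16-bit step can be illustrated with Python's struct module, using a single made-up voxel value: half the bytes per value, at the cost of a tiny rounding error.

```python
import struct

value = 0.734                    # a hypothetical voxel density value
f32 = struct.pack('<f', value)   # 4 bytes: full precision, needed during the sim
f16 = struct.pack('<e', value)   # 2 bytes: half precision, fine for rendering

# Exactly 50% space savings per value...
savings = 1.0 - len(f16) / len(f32)

# ...with a rounding error far below anything visible in a render.
roundtrip = struct.unpack('<e', f16)[0]
error = abs(roundtrip - value)
```

Half floats carry about 3 decimal digits of precision, which is why they are fine for final density/temperature grids but not for the solver's incremental updates.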
  20. 2 points
    If you use a sop solver to explicitly set the y point values of the cloth geo and an additional sop solver to do the same to the constraint geo, you can very effectively constrain your sim to just two axes. See attached for an example. BTW, it may seem that you only need to manipulate the "Geometry" data, and in the simple attached demo that seems to work fine, however I have found that with complex vellum sims you really need to run over the "ConstraintGeometry" data as well. vallum_x_constraint.hip
  21. 2 points
    Sounds like a Room / Window / Parallax shader. Sorry, I couldn't find it for Houdini.
  22. 2 points
    If you want to create a DNA strand, you can use two helices (with different offsets). You can transform these two curves to a guide path using a path deformer. After this you can create attributes for orientation / length / positioning etc. With these attributes you can copy/transform a geometry (a box for example) to each of these positions. Here is a setup. Double_helix.hipnc
  23. 2 points
    I created a short Python script to help out with the excessive number of nodes that are generated when importing detailed FBX files. My example is a vehicle with 278 object nodes referencing 37 materials inside the materials subnet generated by the import. This script scans the materials subnet and creates a new object-level geo for each material it finds. Inside this new geo node it creates an ObjectMerge node and populates the node with all object references to the FBX material. It assigns the new material to this new geo node that points to /shop instead of the FBX materials subnet. Then it reviews the FBX materials and creates a new Redshift material for each FBX material detected. It scans the FBX Surface node and extracts a few parameters like diffuse color, specular etc. The net result is that I only have 37 nodes to manage instead of 278 after running the script. Also my nodes have Redshift placeholder materials assigned so I can get right to rendering. Add this code to a new shelf button and adjust the paths, at the bottom of the script, to point to your FBX subnet. The texture path is not really used at this time.

    # Scan a FBX subnet for materials.
    # Create a geo object with an object merge for each object that references the material.
    # Create a place holder Redshift material by reviewing the FBX materials in the subnet.
    # Atom 08-22-2018
    # 10-14-2018
    import hou, os, re

    def returnValidHoudiniNodeName(passedItem):
        # Thanks to Graham on OdForce for this function!
        # Replace any illegal characters for node names here.
        return re.sub("[^0-9a-zA-Z\.]+", "_", passedItem)

    def createRedshiftImageMapMaterial(passedSHOP, passedImageFilePath, passedName, passedDiffuse=[0,0,0], passedSpecular=[0,0,0], passedWeight=0.1, passedRoughness=0.23, passedIOR=1.0, passedOpacity=1.0):
        #print "->%s [%s] [%s]" % (passedSHOP, passedImageFilePath, passedName)
        rs_vop = hou.node(passedSHOP).createNode("redshift_vopnet", passedName)
        if rs_vop != None:
            # Detect the default closure node that should be created by the redshift_vopnet.
            rs_output = hou.node("%s/%s/redshift_material1" % (passedSHOP, passedName))
            if rs_output != None:
                # Create.
                rs_mat = rs_vop.createNode("redshift::Material", "rs_Mat")
                if rs_mat != None:
                    # Set passed values.
                    rs_mat.parm("diffuse_colorr").set(passedDiffuse[0])
                    rs_mat.parm("diffuse_colorg").set(passedDiffuse[1])
                    rs_mat.parm("diffuse_colorb").set(passedDiffuse[2])
                    rs_mat.parm("refl_colorr").set(passedSpecular[0])
                    rs_mat.parm("refl_colorg").set(passedSpecular[1])
                    rs_mat.parm("refl_colorb").set(passedSpecular[2])
                    rs_mat.parm("refl_weight").set(passedWeight)
                    rs_mat.parm("refl_roughness").set(passedRoughness)
                    if passedIOR == 0:
                        # A zero based IOR means activate mirror mode for the reflection section.
                        rs_mat.parm("refl_fresnel_mode").set("1")
                        rs_mat.parm("refl_brdf").set("1")
                        rs_mat.parm("refl_reflectivityr").set(0.961998)
                        rs_mat.parm("refl_reflectivityg").set(0.949468)
                        rs_mat.parm("refl_reflectivityb").set(0.91724)
                        rs_mat.parm("refl_edge_tintr").set(0.998643)
                        rs_mat.parm("refl_edge_tintg").set(0.998454)
                        rs_mat.parm("refl_edge_tintb").set(0.998008)
                        rs_mat.parm("refl_samples").set(128)
                        rs_mat.parm("diffuse_weight").set(0)
                    else:
                        rs_mat.parm("refl_ior").set(passedIOR)
                    rs_mat.parm("opacity_colorr").set(passedOpacity)
                    rs_mat.parm("opacity_colorg").set(passedOpacity)
                    rs_mat.parm("opacity_colorb").set(passedOpacity)
                    rs_tex = rs_vop.createNode("redshift::TextureSampler", returnValidHoudiniNodeName("rs_Tex_%s" % passedName))
                    if rs_tex != None:
                        # Wire
                        try:
                            rs_output.setInput(0, rs_mat)
                            can_continue = True
                        except:
                            can_continue = False
                        if can_continue:
                            if passedImageFilePath.find("NOT_DETECTED") == -1:
                                # Only plug in texture if the texture map was specified.
                                rs_mat.setInput(0, rs_tex)  # input #0 is diffuse color.
                                extension = os.path.splitext(passedImageFilePath)[1]
                                files_with_alphas = [".png",".PNG",".tga",".TGA",".tif",".TIF",".tiff",".TIFF",".exr",".EXR"]
                                if extension in files_with_alphas:
                                    # Place a sprite after the rsMaterial to implement opacity support.
                                    rs_sprite = rs_vop.createNode("redshift::Sprite", returnValidHoudiniNodeName("rs_Sprite_%s" % passedName))
                                    if rs_sprite != None:
                                        rs_sprite.parm("tex0").set(passedImageFilePath)  # set the filename to the texture.
                                        rs_sprite.parm("mode").set("1")
                                        rs_sprite.setInput(0, rs_mat)
                                        rs_output.setInput(0, rs_sprite)
                                #rs_mat.setInput(46,rs_tex)  # input #46 is opacity color (i.e. alpha).
                            rs_tex.parm("tex0").set(passedImageFilePath)  # set the filename to the texture.
                            # Remove luminosity from texture using a color corrector.
                            rs_cc = rs_vop.createNode("redshift::RSColorCorrection", returnValidHoudiniNodeName("rs_CC_%s" % passedName))
                            if rs_cc != None:
                                rs_cc.setInput(0, rs_tex)
                                rs_cc.parm("saturation").set(0)
                                # Add a slight bump using the greyscale value of the diffuse texture.
                                rs_bump = rs_vop.createNode("redshift::BumpMap", returnValidHoudiniNodeName("rs_Bump_%s" % passedName))
                                if rs_bump != None:
                                    rs_bump.setInput(0, rs_cc)
                                    rs_bump.parm("scale").set(0.25)  # Hard coded, feel free to adjust.
                                    rs_output.setInput(2, rs_bump)
                            # Layout.
                            rs_vop.moveToGoodPosition()
                            rs_tex.moveToGoodPosition()
                            rs_cc.moveToGoodPosition()
                            rs_bump.moveToGoodPosition()
                            rs_mat.moveToGoodPosition()
                            rs_output.moveToGoodPosition()
                    else:
                        print "problem creating redshift::TextureSampler node."
                else:
                    print "problem creating redshift::Material node."
            else:
                print "problem detecting redshift_material1 automatic closure."
        else:
            print "problem creating redshift vop net?"

    def childrenOfNode(node, filter):
        # Return nodes of type matching the filter (i.e. geo etc...).
        result = []
        if node != None:
            for n in node.children():
                t = str(n.type())
                if t != None:
                    for filter_item in filter:
                        if (t.find(filter_item) != -1):
                            # Filter nodes based upon passed list of strings.
                            result.append((n.name(), t))
                result += childrenOfNode(n, filter)
        return result

    def groupByFBXMaterials(node_path, rewrite_original=False):
        lst_geo_objs = []
        lst_fbx_mats = []
        s = ""
        material_nodes = childrenOfNode(hou.node("%s/materials" % node_path), ["Shop material"])  # Other valid filters are Sop, Object, cam.
        for (name, type) in material_nodes:
            node_candidate = "%s/%s" % ("%s/materials" % node_path, name)
            n = hou.node(node_candidate)
            if n != None:
                lst_fbx_mats.append(node_candidate)
        object_nodes = childrenOfNode(hou.node(node_path), ["Object geo"])  # Other valid filters are Sop, Object, cam.
        for (name, type) in object_nodes:
            node_candidate = "%s/%s" % (node_path, name)
            n = hou.node(node_candidate)
            if n != None:
                lst_geo_objs.append(node_candidate)
        # Make an object geo node for each material detected.
        # Inside the object will reside an object merge to fetch in each object that references the material.
        root = hou.node("/obj")
        if root != None:
            for mat in lst_fbx_mats:
                mat_name = os.path.basename(mat)
                shader_name = "rs_%s" % mat_name
                geo_name = "geo_%s" % mat_name
                '''
                node_geo = root.createNode("geo", geo_name)
                if node_geo:
                    # Delete the default File node that is automatically created as well.
                    if (len(node_geo.children())) > 0:
                        n = node_geo.children()[0]
                        if n:
                            n.destroy()
                    node_geo.parm("shop_materialpath").set("/shop/%s" % shader_name)
                    node_obm = node_geo.createNode("object_merge", "object_merge1")
                    if node_obm != None:
                        p = node_obm.parm("objpath1")
                        all_obj = ""
                        for obj in lst_geo_objs:
                            temp_node = hou.node(obj)
                            if temp_node != None:
                                smp = temp_node.parm("shop_materialpath").eval()
                                if smp.find(mat_name) != -1:
                                    all_obj += "%s " % obj
                        p.set(all_obj)
                        node_obm.parm("xformtype").set(1)
                '''
                # Make a place holder Redshift material by reviewing the FBX material.
                opacity = 1.0
                ior = 1.025
                reflection_weight = 0.1
                reflection_roughness = 0.23
                diffuse_color = [0,0,0]
                specular_color = [0,0,0]
                # Typically the FBX Surface Shader is the second node created in the FBX materials subnet.
                n = hou.node(mat).children()[1]
                if n != None:
                    r = n.parm("Cdr").eval()
                    g = n.parm("Cdg").eval()
                    b = n.parm("Cdb").eval()
                    diffuse_color = [r,g,b]
                    sm = n.parm("specular_mult").eval()
                    if sm > 1.0:
                        sm = 1.0
                    reflection_weight = 1.0 - sm
                    if (sm == 0) and (n.parm("Car").eval() + n.parm("Cdr").eval() == 2):
                        # Mirrors should use another Fresnel type.
                        ior = 0
                    r = n.parm("Csr").eval()
                    g = n.parm("Csg").eval()
                    b = n.parm("Csb").eval()
                    specular_color = [r,g,b]
                    opacity = n.parm("opacity_mult").eval()
                    reflection_roughness = n.parm("shininess").eval() * 0.01
                    em = n.parm("emission_mult").eval()
                    if em > 0:
                        # We should create an rsIncandescent shader, using this color, instead.
                        r = n.parm("Cer").eval()
                        g = n.parm("Ceg").eval()
                        b = n.parm("Ceb").eval()
                    # Try to fetch the diffuse image map, if any.
                    tex_map = n.parm("map1").rawValue()
                    if len(tex_map) > 0:
                        pass
                    else:
                        tex_map = "%s/%s" % (texture_path, "NOT_DETECTED")
                    createRedshiftImageMapMaterial("/shop", tex_map, shader_name, diffuse_color, specular_color, reflection_weight, reflection_roughness, ior, opacity)
        if rewrite_original:
            # Re-write the original object node's material reference to point to the Redshift material.
            for obj in lst_geo_objs:
                node_geo = hou.node(obj)
                if node_geo:
                    m = node_geo.parm("shop_materialpath").eval()
                    if len(m):
                        mat_name = os.path.basename(m)
                        shader_name = "/shop/rs_%s" % mat_name
                        # To do this right, we need to add a material node to the end of the network and populate it with the shop_materialpath value.
                        node_display = node_geo.displayNode()
                        if node_display != None:
                            node_mat = node_geo.createNode("material", "material1")  # Create new node.
                            if node_mat != None:
                                node_mat.parm("shop_materialpath1").set(shader_name)
                                node_mat.setInput(0, node_display)  # Wire it into the network.
                                node_mat.setDisplayFlag(True)  # Move the display flag to the new node.
                                node_mat.setRenderFlag(True)  # Move the render flag to the new node.
                                node_mat.moveToGoodPosition()

    # Program starts here.
    texture_path = '/media/banedesh/Storage/Documents/Models/Ford/Ford_F-150_Raptor_2017_crewcab_fbx'  # Not really used yet.
    fbx_subnet_path = "/obj/Container_Ship_Generic_FBX"
    groupByFBXMaterials(fbx_subnet_path, True)
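The name-sanitizing helper in the script can be exercised on its own, outside Houdini; it collapses every run of characters that are not letters, digits, or dots into a single underscore:

```python
import re

def returnValidHoudiniNodeName(passedItem):
    # Same regex as in the script above: anything that is not a
    # letter, digit, or dot is collapsed into a single underscore.
    return re.sub(r"[^0-9a-zA-Z\.]+", "_", passedItem)

cleaned = returnValidHoudiniNodeName("body panel #3 (left)")
# "body panel #3 (left)" becomes "body_panel_3_left_"
```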
  24. 2 points
    Wanted to try a pure vex solution. curvegap_01.hiplc
  25. 2 points
  26. 2 points
    This will split at the length along the curve you specify with a given doorway width. I think you could easily refactor to use percentage of length... splitbylength.hip
  27. 2 points
    Hi, I am working on an airplane destruction with a skeleton underneath the fuselage. At first this worked quite well ( SimV15_V5_goodImpact ), but after importing a newer version of the model with wings it completely ruins the simulation ( SimV15_V7_Glitch ). Once I removed the breaks in the fuselage and skeleton mesh it works as it should again, but the skeleton glitches through the fuselage ( SimV15_V7_WingFixSkeletonBug2 ). So I know why this happens but I don't know how to fix it. I also don't really understand why the skeleton glitches through the fuselage mesh all of a sudden. Can two deformable surfaces work together properly? In the attachments you can find the file ( Simtest_15_V10WingFix_SkeletonIssue ) and the ABC ( B25_V30_Turbo_AllWingsV2 ). Btw, I used Steven Knipping's Rigids 3 tutorial to get the metal deformation. Thanks in advance SimV15_V5_goodImpact.mp4 SimV15_V7_Glitch.mp4 SimV15_V7_WingFixSkeletonBug2.mp4 Simtest_15_V10WingFix_SkeletonIssue.hipnc B25_V30_Turbo_AllWingsV2.ABC
  28. 2 points
    I'm using a classic Julia fractal to drive my spline: cool spline (Mantra) and black-and-white
  29. 2 points
    I see a couple of things happening. You have based your grain generation off of the proxy mesh for the ground plane. It is better to dedicate a unique object for this. Also I noticed that the points coming into the point deform have a count mismatch. This is due to the fact that when OpenCL is enabled, it generates a different amount of particles compared to the CPU rest version. Turn off OpenCL to reestablish the match. Basically you don't need the point deform version after making these modifications, but it could be used as a secondary material element. ap_LetterSetup_001_collProbs.hiplc
  30. 2 points
  31. 2 points
    Organics: "Seed" Organics: "Bud" Organics: "Leaf" These were all rendered in Redshift at very high resolutions (10800 square/14400x9600) for printing: https://www.artstation.com/thomashelzle/prints Love it! :-) Cheers, Tom
  32. 2 points
    L-system fun
  33. 2 points
    1. +1 for hiring Alex Wanzhula 2. +1 for some kind of Mantra GPU
  34. 2 points
    alternatively you could also just put $FF into your foreach-end single-pass condition and then put your ROP after the foreach. Write out 10 frames, get 10 variations.
  35. 2 points
  36. 2 points
    A lot of people asked me to share this fake fire method. If you're interested, you can check this simple hip. After rendering I used ACES for a better look. fake_fire_rnd.hip
  37. 2 points
    sticky pillows....ewl.... vu_vellumstickyballs2.hiplc
  38. 2 points
    You can fit-range an attribute with setdetailattrib() set to "min" and "max".
    1st pointwrangle:
    setdetailattrib(0, 'height_min', @height, "min");
    setdetailattrib(0, 'height_max', @height, "max");
    2nd pointwrangle:
    float min = detail(0, "height_min", 0);
    float max = detail(0, "height_max", 0);
    @Cd.g = fit(@height, min, max, 0.0, 1.0);
    fit_range_VEX.hiplc
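The same min/max fit, sketched as plain Python over a list of hypothetical height values (the VEX version accumulates the min/max across points first, then fits each value, which is exactly what this does):

```python
def fit_range(values, new_min=0.0, new_max=1.0):
    """Find the min/max of an attribute across all points,
    then fit each value into new_min..new_max."""
    lo, hi = min(values), max(values)
    if lo == hi:
        return [new_min] * len(values)  # avoid division by zero on flat data
    return [new_min + (v - lo) / (hi - lo) * (new_max - new_min) for v in values]

heights = [2.0, 4.0, 6.0]
normalized = fit_range(heights)  # lowest point -> 0.0, tallest -> 1.0
```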
  39. 2 points
    I'm working on something related to art directing the swirly motion of gases. It's an implementation of a custom buoyancy model that lets you art direct very easily the general swirly motion of gases without using masks, vorticles, temperature sourcing to have more swirly motion in specific zones, etc. Also it gets rid of the "mushroom effect" for free with a basic turbulence setup. Here are some example previews. Some with normal motion, others with extreme parameter values to stress the pipeline. For the details it's just a simple turbulence + a bit of disturbance in the vel field, nothing complex; because of this the sims are very fast (for constant sources: average voxel count 1.8 billion, vxl size 0.015, sim time 1h:40min (160 frames); for burst sources, vxl size 0.015, sim time 0h:28min). I'm working on a Vimeo video to explain more about this new buoyancy model that I'm working on. I hope you like it! Cheers, Alejandro constantSource_v004.mp4 constantSource_v002.mp4 burstSource_v004.mp4 constantSource_v001.mp4 burstSource_v002.mp4 burstSource_v003.mp4 burstSource_v001.mp4 constantSource_v003.mp4
  40. 2 points
    Gifstorm! First I've used a visualizer sop to show @v coming out of the trail sop: That makes sense so far. To make the next step easier to understand, I've shrunk the face that sits along +Z, and coloured the +Y face green, +X red, +Z blue. So, that done, here's that cube copied onto the points, with the v arrows overlaid too: The copied shapes are following the velocity arrows, but they're a bit poppy and unstable. So why are they following, and why are they unstable? The copy sop looks for various attributes to control the copied shapes, @v is one of them. If found, it will align the +Z of the shape down the @v vector. Unfortunately what it does if it has only @v is a little undefined; the shapes can spin on the @v axis when they get near certain critical angles, which is what causes the popping and spinning. To help the copy sop know where it should aim the +Y axis, you can add another attribute, @up. I've added a point wrangle before the trail, with the code @up = {0,1,0}; ie, along the worldspace Y axis: you can see all the green faces now try and stay facing up as much as they can (note the view axis in the lower left corner), but there's still some popping when the velocity scales to 0, then heads in the other direction. Not much you can do about that really, apart from try some other values for @up, see if they hide the problem a little better. What if we set @up to always point away from the origin? Because the circle is modelled at the origin, we can be lazy and set @up from @P (ie, draw a line from {0,0,0} to @P for each point, that's a vector that points away from the origin): Yep, all the green faces point away from the center, but there's still popping when @v scales down to 0 when the points change direction. Oh well. Maybe we can venture into silly territory? How about we measure the speed of v, and use it to blend to the @up direction when @v gets close to 0? Better! Still a bit poppy, but an improvement. 
Here's the scene with that last setup: vel_align_example.hipnc To answer the other key words in your topic title, I mentioned earlier that the copy sop looks for attributes, obviously @v and @up as we've used here, but if it finds others, they'll take priority. Eg, @N overrides @v. @N is still just a single vector like @v, so it too doesn't totally describe how to orient the shapes. You could bypass the trail and the wrangle so that there's no @v or @up, set @N to {0,1,0}, and all the shapes will point their blue face towards the top. Without any other guidance, it will point the red side of the shapes down +X. If you give it @N and @up, then it knows where to point the green side, and you get a well-defined orientation. While using 2 attributes to define rotation is perfectly valid, there are other options. The one that trumps all others is @orient. It's a single attribute, which is nice, and its party trick is that it defines orientation without ambiguity, using a 4 value vector. The downside is quaternions aren't easy to understand, but you don't really need to understand the maths behind it per se, just understand what it represents. The simplest way is to think of it as @N and @up, but glommed into a single attribute. Another way is to think of it as a 3x3 matrix (which can be used to store rotation and scale), but isolated to just the rotation bits, so it only needs 4 values rather than 9 values. In houdini, you rarely, if ever, pluck quaternion values out of thin air. You normally generate what you need via other means, then at the last minute convert to quaternion. Lots of different ways to do this, coming up with ever funkier smug ways to generate them in 1 or 2 lines of vex is something I'm still learning from funkier smug-ier co-workers. Eg, we could take our fiddled @v, and convert it to a quaternion: @orient = dihedral({0,0,1} ,@v); What that's doing is taking the +Z axis of our shape-to-be-copied, and working out the quaternion to make it align to @v.
You could then insert an attrib delete before the copy, remove @N, @v, @up, and now just with the single @orient, all the shapes rotate as you'd expect. vel_align_example_orient.hipnc
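For the curious, here is a rough pure-Python sketch of the math that dihedral() computes: a quaternion (x, y, z, w) rotating one unit vector onto another, plus a helper that applies it. This illustrates the idea only; it is not Houdini's actual implementation, and it assumes the two vectors are not exactly opposite.

```python
import math

def dihedral(a, b):
    """Quaternion (x, y, z, w) rotating unit vector a onto unit vector b."""
    cross = (a[1]*b[2] - a[2]*b[1],
             a[2]*b[0] - a[0]*b[2],
             a[0]*b[1] - a[1]*b[0])
    dot = a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    w = 1.0 + dot  # undefined when a == -b (dot == -1); pick any perpendicular axis then
    n = math.sqrt(cross[0]**2 + cross[1]**2 + cross[2]**2 + w*w)
    return (cross[0]/n, cross[1]/n, cross[2]/n, w/n)

def qrotate(q, v):
    """Rotate vector v by quaternion q = (x, y, z, w):
    v' = v + 2*w*(u x v) + 2*(u x (u x v)), with u = (x, y, z)."""
    x, y, z, w = q
    ux = (y*v[2] - z*v[1], z*v[0] - x*v[2], x*v[1] - y*v[0])
    uux = (y*ux[2] - z*ux[1], z*ux[0] - x*ux[2], x*ux[1] - y*ux[0])
    return tuple(v[i] + 2.0*w*ux[i] + 2.0*uux[i] for i in range(3))
```

Rotating the shape's +Z axis by dihedral((0,0,1), v) lands it exactly on v, which is what the copy sop does with @orient.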
  41. 1 point
    It all depends on whether your actual data processing (the one you perform, not SOPs/DOPs in general) is I/O bound or CPU bound. Compression just trades off one for the other. Decompressing takes time; the question is whether it's less than the time difference between reads of compressed and uncompressed files. Depending on your hardware AND processing, the answer may be different. In case you can afford storing the cache on an SSD disk, most probably compression doesn't make sense (albeit the cost of *.sc isn't big anyway, and reducing storage by say 20% is tempting). The slower I/O is (like network storage over ethernet), the better compression will perform - given the sizeof(files) is high enough and timeof(processing) is low enough. Same thing. It's a matter of tradeoffs. *.sc compresses/decompresses faster for the price of bigger files. *.gz/*.bzip2 are smaller but also way slower for writing and reading. If you save the cache for later reuse and don't mind decompressing it beforehand, *.gz files might be fine. Practically speaking, though, disk space is so cheap these days that no one cares. Don't bother unless you really have to.
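A quick Python illustration of the tradeoff on a highly compressible toy payload; the exact ratio depends entirely on the data, this just shows the shape of the bargain (fewer bytes to move, more CPU spent packing and unpacking them):

```python
import gzip

# Highly compressible "cache" payload: repeated float-like records.
raw = b"0.000 0.000 0.000\n" * 50000

compressed = gzip.compress(raw)     # CPU cost now, I/O savings later
restored = gzip.decompress(compressed)  # CPU cost again on every read

# On repetitive data like this the file shrinks dramatically;
# on already-dense simulation data the ratio is far less flattering.
ratio = len(compressed) / len(raw)
```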
  42. 1 point
    RMB > Actions > Create Reference Copy
  43. 1 point
    42 million seems a little low for final quality, especially for up-close camera work. Try getting your voxel count into the 65-165 million range (physical memory is a factor here). I use an estimator expression on my pyro object: just create a new blank float and paste the expression into the field. This expression reviews your box size and particle separation to generate the approximation value. Here is the expression I use.
    ch("sizex")/ch("divsize")*ch("sizey")/ch("divsize")*ch("sizez")/ch("divsize")
    NOTE: The result is in scientific notation. So when you see +06 on the end that means millions, +07 means 10 million and +08 means 100 million. The final value shown here 4.28345e+07 is ~42 million voxels.
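The same estimate as a plain Python function, with hypothetical container values; it's just the container size divided by the division size along each axis, multiplied together:

```python
def estimate_voxels(sizex, sizey, sizez, divsize):
    """Python equivalent of the HScript expression:
    ch("sizex")/ch("divsize") * ch("sizey")/ch("divsize") * ch("sizez")/ch("divsize")"""
    return (sizex / divsize) * (sizey / divsize) * (sizez / divsize)

# A 10 x 10 x 10 container at 0.1 division size holds ~1e6 (one million) voxels.
count = estimate_voxels(10.0, 10.0, 10.0, 0.1)
```

Halving divsize multiplies the voxel count by 8, which is why memory climbs so fast as you refine a pyro sim.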
  44. 1 point
    There are several ways to decompose/describe a vector:
    - 3 coordinates (xyz) that scale the coordinate system unit vectors (ijk) and add them together: P = i*x + j*y + k*z.
    - direction (vector scaled to length 1) and length/magnitude (distance from tail to head of the vector)
    - direction (two angles) and length
    And more... It's all about using the representation that is most sensible for the operation you are doing. For angular-related calcs direction is handy, while for position calcs you want to deal with xyz, etc.
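As a sketch, here is a conversion between the xyz and the two-angles-plus-length representations in Python. This uses one possible angle convention (azimuth in the XZ plane, elevation toward Y) among several equally valid ones:

```python
import math

def to_spherical(v):
    """xyz -> (azimuth, elevation, length): direction as two angles plus magnitude."""
    length = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    azimuth = math.atan2(v[2], v[0])
    elevation = math.asin(v[1] / length) if length else 0.0
    return azimuth, elevation, length

def from_spherical(azimuth, elevation, length):
    """(azimuth, elevation, length) -> xyz."""
    return (length * math.cos(elevation) * math.cos(azimuth),
            length * math.sin(elevation),
            length * math.cos(elevation) * math.sin(azimuth))
```

Round-tripping any vector through both functions returns the original xyz, showing the two representations carry the same information.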
  45. 1 point
  46. 1 point
    With the help of both the Redshift community and resources here, I finally figured out the proper workflow for dealing with Redshift proxies in Houdini. Quick summary: Out of the box, Mantra does a fantastic job automagically dealing with instanced packed primitives, carrying all the wonderful Houdini efficiencies right into the render. If you use the same workflow with Redshift, though, RS unpacks all of the primitives, consumes all your VRAM, blows out of core, devours your CPU RAM, and causes a star in a nearby galaxy to supernova, annihilating several inhabited planets in the process. Okay, maybe not that last one, but you can't prove me wrong so it stays. The trick is to use RS proxies that are in turn driven by the Houdini instances. A lot of this was based on Michael Buckley's post. I wanted to share an annotated file with some additional tweaks to make it easier for others to get up to speed quickly with RS proxies. Trust me; it's absolutely worth it. The speed compared to Mantra is just crazy. A few notes:
    - Keep the workflow procedural by flagging Compute Number of Points in the Points Generate SOP instead of hard-coding a number.
    - Use paths that reference the Houdini $HIP and/or $JOB variables. For some reason the RS proxy calls fail if absolute paths are used.
    - Do not use the SOP Instance node in Houdini; instead use the instancefile attribute in a wrangle. This was confusing, as it doesn't match the typical Houdini workflow for instancing.
    - There are a lot of posts on RS proxies that mention you always need to set the proxy geo at the world origin before caching them. That was not the case here, but I left the bypassed transform nodes in the network in case your mileage varies.
    - The newest version of Redshift for Houdini has an Instance SOP Level Packed Primitives flag on the OBJ node under the Instancing tab. This is designed to basically do automatically the same thing that Mantra does.
It works for some scenarios but not all; it didn't work for this simple wall fracturing example. You might want to take that option for a spin before trying this workflow. If anyone just needs the Attribute Wrangle VEX code to copy, here it is:
    v@pivot = primintrinsic(1, "pivot", @ptnum);
    3@transform = primintrinsic(1, "transform", @ptnum);
    s@name = point(1, "name_orig", @ptnum);
    v@pos = point(1, "P", @ptnum);
    v@v = point(1, "v", @ptnum);
    Hope someone finds this useful. -- mC Proxy_Example_Final.hiplc
  47. 1 point
    Which version are you using? It's working correctly in 16.0.742.
  48. 1 point
    here's one way of doing it...the advantage of this way is if you can't have access to the divisions as Konstantin suggested above...say you are using an obj imported from Whatever. This way you just use the Select slider to select how many in a row...then the field after that is always 'doubled'. What if you want to start the selection from 2nd row ? Easy...just set Offset number to equal the Select number... EDIT: might as well do Cols...if you want to select from 2nd col onwards...use (@ptnum+1)%2==0
  49. 1 point
    Quick answer while waiting for render at work. Grab the global var P, transform it from current-space to world-space and grab the Y component. My example here is hard set to a certain range in world-space, the const1-node. That would mean your trees in your scene would all have the same ramp, no matter the individual height of the tree. You could probably plug the fit1 directly in to the multiply2, I just have a ramp-parameter cause it was easier to read what's going on with the colors.
  50. 1 point
    This is how I usually do it: UVtexture in row/column mode to create a uv value (on points for this usecase) along the curve and then a pointwrangle to create a Base_Width value and a Width_Ramp for the tapering. I also put in a colour gradient using the same technique: EDIT 2019: Since somebody recently found this useful: These days I use UV-Texture in "Arc Length Spline" mode on points instead of row/column mode. Seems to give more precise positions in some cases. Also, if you write the code as: // Thickness along Curve using @uv.x: f@pscale = f@width = chramp('Width_Curve', @uv.x) * chf('Base_Width'); // Colour along Curve. The "vector" creates a colour ramp automatically! @Cd = vector(chramp('Gradient', @uv.x)); This way, a.) "pscale" and "width" are both set in one go. For the viewport (Geo Node -> Misc -> "Shade Open Curves in Viewport") Houdini uses "width", while everything else mostly needs "pscale" and b.) if you enclose a chramp() with a "vector()", you will get a colour ramp right away without having to go to the parameter interface editor... :-) Cheers, Tom FIND_POINTS_CURVE_Tom.hiplc
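A tiny Python stand-in for chramp() shows the idea of sampling a ramp by the normalized uv.x position along the curve. The ramp keys and width value here are made up for illustration; chramp() itself is evaluated from the parameter interface in VEX:

```python
def chramp(keys, t):
    """Minimal linear ramp lookup: keys is a sorted list of
    (position, value) pairs, t is the normalized position in 0..1."""
    if t <= keys[0][0]:
        return keys[0][1]
    for (p0, v0), (p1, v1) in zip(keys, keys[1:]):
        if t <= p1:
            f = (t - p0) / (p1 - p0)
            return v0 + f * (v1 - v0)
    return keys[-1][1]

# Taper: full width at the root of the curve, zero at the tip.
width_curve = [(0.0, 1.0), (1.0, 0.0)]
base_width = 0.2
# pscale at the curve's midpoint (uv.x == 0.5):
pscale = chramp(width_curve, 0.5) * base_width
```

Each point's uv.x acts as the lookup position, so reshaping the ramp keys reshapes the taper along the whole curve at once.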