

Popular Content

Showing most liked content since 07/05/2020 in all areas

  1. 7 points
    I have a Houdini GitHub repo where (in addition to the code section, which is the Houdini pipeline for my personal projects) I store all my R&D notes on pipeline development and programming, organized as one wiki. The valuable part of this wiki is the VEX for Artists tutorial, where I record everything I was able to understand about VEX in the form of tutorials, so it might be useful not only for me but for anybody else taking the same route of learning programming from scratch. It was built by a guy with an artistic background and no technical education or skills, so it should suit the same type of people: easy and clean, with a lot of simplification and a lot of explanation of the basics. This VEX tutorial was just updated with a new section: Solving problems with VEX. Here, using the basic blocks studied earlier, we create something meaningful for production. The first example we look into is the creation of a hanging wire between 2 points. For those who tried to learn VEX (or were even afraid to begin) but stopped because it was too hard: enjoy!
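    A hanging wire between two fixed points follows a catenary, y = a*cosh(x/a). As a rough illustration of the curve that example builds, here is a minimal Python sketch; the function name and parameter values are mine, not from the tutorial:

```python
import math

# sketch: sample a hanging-wire (catenary) curve y = a * cosh(x / a)
def catenary_points(x0, x1, a, n):
    pts = []
    for i in range(n + 1):
        x = x0 + (x1 - x0) * i / float(n)
        pts.append((x, a * math.cosh(x / a)))
    return pts

pts = catenary_points(-1.0, 1.0, 0.5, 10)
assert len(pts) == 11
assert abs(pts[0][1] - pts[-1][1]) < 1e-9    # symmetric endpoints hang at equal height
assert min(p[1] for p in pts) >= 0.5 - 1e-9  # lowest point is a * cosh(0) = a
```

    In a real setup the sag parameter a would be solved from the wire length, but for a sketch it can just be dialed by hand.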
  2. 7 points
    Yes I do! I just created one. Here you go!
  3. 6 points
    My latest reel, 2020. Collection of shows 2016-2019.
  4. 5 points
    Hello magicians. I want to share with you the result of studying L-systems. Behold the incredible moss, aka Sphagnum! Redshift shading. And yes, it is growing. Available on https://gum.co/fZROZ https://gum.co/qmDmg Thanks! akdbra_moss_grow.mp4 akdbra_moss_var.mp4
  5. 5 points
    Just sharing my 2020 reel. I'm available. https://vimeo.com/429294957 Thanks! Daniel Moreno http://www.danmoreno.com https://vimeo.com/danmoreno https://www.linkedin.com/in/danmoreno https://www.imdb.com/name/nm1625127/
  6. 4 points
    Hello! So I created a few tools for a recent project for creating trees. I thought I'd share them with the community. This is the first toolset I've ever created, so if you like it, consider donating a few bucks on Gumroad. I currently have it as a "pay what you want" product. You are more than welcome to try it out and come with suggestions for potential future updates. Hope you like it! https://gum.co/nEGYe
  7. 3 points
    Finished my first tutorial, hopefully you find it helpful learning Houdini!
  8. 3 points
    yes there is. (you refuse to upload your file, I refuse to upload my file)
  9. 3 points
    To fill quad polygons with copies, just take the square root of the intrinsic primitive area as point scale:

    int pt_add = addpoint(0, v@P);
    float area = primintrinsic(0, 'measuredarea', i@primnum);
    float scale = sqrt(area);
    setpointattrib(0, 'pscale', pt_add, scale, 'set');
    removeprim(0, i@primnum, 1);

    KM_recursive_subd_001.hipnc
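    The reasoning behind the snippet: a quad of area A has an edge length of about sqrt(A), so scaling a unit-sized copy by sqrt(A) lets it fill the quad. A tiny Python sketch of the same relation (the function name is mine):

```python
import math

def pscale_from_area(area):
    # uniform scale that lets a unit-sized copy cover a quad of this area
    return math.sqrt(area)

# a square with side length 3 has area 9; scaling a unit square by 3 fills it
assert math.isclose(pscale_from_area(9.0), 3.0)
```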
  10. 2 points
    here is a simple example with Python (file attached):

    parm = hou.parm('/obj/geo1/sphere1/ty')

    # store all values in a list
    values = []
    for i in range(100):
        val = parm.evalAsFloatAtFrame(i)
        values.append(val)

    # remove the parm expression
    parm.deleteAllKeyframes()

    # set values from the list as keyframes
    for i in range(100):
        myKey = hou.Keyframe()
        myKey.setFrame(i)
        myKey.setValue(values[i])
        parm.setKeyframe(myKey)

    bake_keyframes.hipnc
  11. 2 points
    you CAN write the VEX yourself every time if you want to, or, if you feel lazy, just use a Point SOP with the Morph to 2nd Input preset; then you have a controlled lerp amount.
  12. 2 points
    This belongs to everyone in the Houdini community (for those that didn't know). Have fun with colors; make absolutely everything in any area (biology, particles, modeling, plotting, CNC, etc.). Sorry for eventual errors; I use Houdini 16.5. Tips: save channel data, download qLib. HaveFunfinal2.hipnc
  13. 2 points
    Ok, so I have found a solution, which I think could be improved upon. This setup keeps a heartbeat running for 30 seconds, and each second you can generate new work items. You can obviously change the number of heartbeats and the wait time. This allows you to fetch live data from a database or folder, for example, and act accordingly. There's no logic for tracking what was processed; you need to build that yourself. But it's at least a way to keep Houdini polling. In the file I also have 2 Python generators, which I think would be the proper solution, but I can't get them to work. I'd love to know if there's anything better or more robust. Hope it helps someone, and if you have feedback please let me know! PDG_while_loop_tests.hiplc
  14. 2 points
    We use 3DEqualizer at work. Although it doesn't have the best UI, it apparently is the best tracking software. I've done some 3D tracks using Blender: the 3D solves have been pretty good and the tracking is pretty darn quick!
  15. 2 points
    Here's a non-VEX approach that doesn't even use any rotations. I'm applying a linear gradient across the top face, and then using a Soft Transform to move the points up based on that mask. You do need Labs for the Gradient SOP. columns01_soft_transform.hiplc
  16. 2 points
    not quite sure if that’s what you are after but please take a look at the attached file: rotate_prim.hiplc
  17. 2 points
    Hey all! I just updated Simple Tree Tools to version 1.5.2. There is a lot of goodies in here! Download ----> https://gum.co/nEGYe Oh, and don't forget to read the "read me" to see what's new! Cheers!
  18. 2 points
    I feel like Get Vector Component needs an update to be useful. Currently it's sort of useful if you want to give the option to statically change the component as a promoted parameter, but in reality it has a few issues: 1. there is no 4th component in the menu, even though it allows the vector4 signature; 2. it doesn't allow dynamic choice of the component, since it doesn't have a component index input, so a better choice for that case is Vector to Float plus a Switch. If updated, however, it could be useful for those cases.
  19. 2 points
    Hi, it's more of a workflow thing. I personally use Vector to Float myself. I think Vector to Float might be faster, as it's modifying 3 parameters by reference vs. calling vop_getcomp 3 times to get 3 values (if you need 3 values, that is).
  20. 2 points
    You are welcome. Submitting RFEs doesn't hurt. A C++ implementation is still much faster than laspy, but it's possible to do a quick and easy specific modification for LAS import until the LIDAR Import SOP gets some improvements, without going the C++ route. I attached an example that adds classification as an attribute (and closes the las file to avoid memory issues). I added an updated file, pz_load_las_with_python_classification_attribute_memory_fix.hipnc, as it looks like the context management protocol is not implemented in laspy (a with File(): block will not close the file at exit), so I switched to try/except/finally instead. It will not error on the node, so watch the Python console for exception logging.
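    The try/finally pattern described above can be sketched like this; DummyFile is a made-up stand-in, not laspy's actual API:

```python
# DummyFile stands in for laspy's File, which (per the post) lacks
# context-manager support, so "with" can't guarantee closing it
class DummyFile:
    def __init__(self):
        self.closed = False
    def read_points(self):
        return [1, 2, 3]
    def close(self):
        self.closed = True

f = DummyFile()
try:
    points = f.read_points()
finally:
    f.close()   # runs even if read_points() raises

assert f.closed
assert points == [1, 2, 3]
```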
  21. 2 points
    The classification works fine. But with inFile = inFile.points[I] you are overwriting the inFile object with multi-dimensional arrays of all attributes, so you no longer get them via .x/.y/.z or other named properties. I uploaded a modified scene where you can set a classification and it then returns only the subset of points that match it. inFile.points[I] returns the subset of points where I is True; inFile.Classification == 2 returns an array of True/False values determining which points are classified with id 2. Another approach would be adding Classification as an attribute on all points and then using VEX expressions, groups, or other partitioning mechanisms to separate points. pz_load_las_with_python_classification.hipnc
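    The boolean-mask indexing described above can be sketched with plain NumPy; the arrays below are made-up stand-ins for inFile.Classification and the point data:

```python
import numpy as np

# made-up stand-ins for inFile.Classification and one point coordinate array
classification = np.array([1, 2, 2, 5, 2])
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])

mask = classification == 2   # array of True/False, one per point
ground_xs = xs[mask]         # subset of points classified with id 2

assert mask.tolist() == [False, True, True, False, True]
assert ground_xs.tolist() == [1.0, 2.0, 4.0]
```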
  22. 2 points
    Hi, pretty neat library! Thank you for the tip. There is no need for csv; you can do a lot with laspy and numpy themselves. Attached is an example scene that loads data from a las file. It seems that the Lidar Import SOP ignores scale and offset. To make it work (18.0.499, Python 2.7 branch) I cloned the https://github.com/laspy/laspy repository, then copied the content of the laspy folder to $HOME/houdini18.0/python2.7libs/laspy so I have $HOME/houdini18.0/python2.7libs/laspy/__init__.py (and the rest of the library), and it's possible to load it into Houdini with import laspy in the Python shell. (Numpy is already included with Houdini.) I used the example file from the repository: https://github.com/laspy/laspy/blob/master/laspytest/data/simple.las

    import logging
    from laspy.file import File
    import numpy as np

    node = hou.pwd()
    geo = node.geometry()
    file_path = geo.attribValue("file_path")

    inFile = File(file_path, mode='r')
    try:
        # --- load point positions
        coords = np.vstack((inFile.X, inFile.Y, inFile.Z)).transpose()
        scale = np.array(inFile.header.scale)
        offset = np.array(inFile.header.offset)
        # there is no offset in the simple.las example from the laspy library
        # offset = np.array([1000, 20000, 100000])  # just for testing that offset works

        # geo.setPointFloatAttribValues("P", np.concatenate(coords))  # same as Lidar Import SOP - seems that it ignores scale (and offset?)
        geo.setPointFloatAttribValues("P", np.concatenate(coords * scale + offset))

        # --- load color
        color = np.vstack((inFile.red, inFile.green, inFile.blue)).transpose()
        geo.addAttrib(hou.attribType.Point, "Cd", (1.0, 1.0, 1.0), False, False)  # add color attribute
        geo.setPointFloatAttribValues("Cd", np.concatenate(color / 255.0))  # transform from 1-255 to 0.0-1.0 range
    except Exception:
        logging.exception("Processing lidar file failed")
    finally:
        inFile.close()

    pz_load_las_with_python.hipnc
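    The scale/offset handling in the script boils down to one line of arithmetic: LAS files store integer coordinate records, and real-world positions are record * scale + offset, with scale and offset taken from the header. A minimal sketch with made-up numbers (real values come from the LAS header):

```python
import numpy as np

# integer coordinate records, as stored in a LAS file (made-up values)
records = np.array([[100, 200, 300],
                    [110, 210, 310]])
scale = np.array([0.01, 0.01, 0.01])
offset = np.array([1000.0, 2000.0, 100.0])

# real-world positions: record * scale + offset, broadcast per axis
coords = records * scale + offset
assert np.allclose(coords[0], [1001.0, 2002.0, 103.0])
assert np.allclose(coords[1], [1001.1, 2002.1, 103.1])
```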
  23. 2 points
    Here is a basic setup. It uses a second loop to give each primitive a unique name. Inside the loop, the area for each primitive is stored as an attribute. After the loop, pscale is derived from the area. Use the ramp and the multiplier to dial in the sizes. ap_recursive_subd_001.hiplc
  24. 2 points
    To import a specific script in Python you need to append the folder where it lives to the Python path (the path contains all the paths to the different modules Python loads). You can do it using the sys module; note that you append the folder, not the .py file itself:

    import sys
    sys.path.append("/path/to/the/folder/where/my/script/is")
    import myscript

    And if you want to import just a specific function from your file and not the whole file:

    import sys
    sys.path.append("/path/to/the/folder/where/my/script/is")
    from myscript import myfunction

    Cheers,
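    A self-contained sketch of the same workflow, writing a throwaway myscript.py to a temp folder first (the module and function names mirror the post's placeholders):

```python
import os
import sys
import tempfile

# write a throwaway module to a temp folder; myscript/myfunction are
# illustrative names matching the post's placeholders
folder = tempfile.mkdtemp()
with open(os.path.join(folder, "myscript.py"), "w") as f:
    f.write("def myfunction():\n    return 42\n")

sys.path.append(folder)   # append the folder, not the .py file itself
from myscript import myfunction

assert myfunction() == 42
```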
  25. 2 points
    I love this, but I still can't understand how to get that with CHOPs. I see a formula in your pic, but it's not for a Point VOP, Tesan; it's more of a regular expression, hmm... I'm not very comfortable with CHOPs yet. Maybe I'll learn to know that beast better... The only way I've used them was to stabilize some motion from a sim like Vellum: if you measure velocity, filter the anim, and blend between the original anim and the slightly smoothed anim wherever the velocity is above your input threshold. For now I have been playing with different procedural concepts for a work, and some silliness like this... Doughnut with Covid creatureA.mp4
  26. 2 points
    Ok... several things: First of all, get out of your head the idea that one renderer is all you need to focus on. In today's production world, especially as a newcomer, you'd better be familiar with several render engines. Secondly, if your goal is to work for a high-end VFX studio, then GPU renderers aren't really a thing. For feature films most of the larger studios use Arnold or Renderman... so learn those! With ever-changing and ever-evolving technology, when it comes to render engines particularly, you're better off with a subscription service. I would go with the $20/month Octane subscription over purchasing Redshift. In addition, Octane arguably looks better than Redshift with less effort (but ultimately they're both limited, as explained in previous posts).

    I think GPU render speed is overly hyped for the most part. People will compare Redshift on a machine with four RTX 2080 Tis vs. Arnold running on a 6-core Intel and declare that Redshift is the winner, not taking into account the fact that they're comparing $6k worth of GPUs (and one hell of an electric bill) vs. a $400 CPU. If we level the cost playing field and compare 3Delight running on a 64-core Threadripper vs. Redshift running on two 2080 Tis, then 3Delight will be faster... particularly on scenes with volumes and huge amounts of polys. On a recent test that I did between Redshift running on two 1080 Tis vs. 3Delight running on an 8-core 9900k, the difference in render times was only 1 minute. If I swapped CPUs to a 16-core Ryzen, then 3Delight would have come out on top, and with a better looking render.

    And finally... don't get hung up on technology too much. Ultimately, if you're good at what you do, you'll find work regardless of what render engine you use.
  27. 2 points
    I built a non-linear editor/clip mixer for Houdini. On pre-sale right now (PC only, while I continue to work on the Mac version). It's great for bringing in a bunch of different FBX mocap files and mixing and blending them together with a graphical interface. https://gum.co/houdiniClipMixer
  28. 2 points
    This is the official release of the Houdini Music Toolset (HMT)! Here's a tour and demonstration. Download and installation instructions, as well as documentation, can be found on Github. I'm also releasing two tutorials: 00 Installation and Sound Check, 01 How to make a Simple Note. For the last 5 years I've been doing progressively more advanced music composition in Houdini. The merger of music and visuals has been a life-long passion for me. In addition to teaching dynamics and FX in Houdini, I've also given selective talks and demonstrations on my personal music developments to groups like the Vancouver Houdini User Group, the Los Angeles Houdini User Group, and the Procedural Conference in Breda. I always experience an overwhelming amount of enthusiasm and a supportive community. Here's my way of both saying thank you and furthering anyone who would also like to combine musical and visual art. The Houdini Music Toolset turns Houdini into a powerful music making suite (a MIDI sequencer). Be sure to keep a look out for free weekly tutorials covering the toolset and workflows. Enjoy!
  29. 2 points
    @vinyvince Progress between lessons, checking out more about modeling and UVs. Learning from Len White and Node Flow.
  30. 2 points
    Great stuff, Nicolas. This is starting to look Giger-like already! You don't necessarily need to create UVs in SOPs, though. To project textures on those VDB meshes it's arguably more efficient to do it in a shader:
    1) Transform position to world space.
    2) Curve IDs shown as random colors.
    3) U value from curves in HSV colors.
    4) Direction to nearest curve position.
    5) Tangents from curves set to absolute.
    6) Direction to curve oriented along tangents.
    7) V coordinate enclosing each wire.
    8) UV coordinates in red and green.
    9) UV mapping an image texture.
    10) Texture based displacement along curves, or at least what happens when mandrills do the job ; )

    The material snippet:

    string geo = 'op:/obj/curves/OUT';
    P = ptransform('space:current', 'space:world', P);

    int prim = -1;
    vector uvw = vector(0.0);
    float dist = xyzdist(geo, P, prim, uvw);
    vector pos = primuv(geo, 'P', prim, uvw);
    float u = primuv(geo, 'u', prim, uvw);
    vector tangent = primuv(geo, 'tangentu', prim, uvw);

    matrix3 rot = dihedral(tangent, {0,1,0});
    vector dir = normalize(P - pos);
    vector dir_mod = dir * rot;
    float v = fit(atan2(dir_mod.z, dir_mod.x), -M_PI, M_PI, 0.0, 1.0);

    P = set(u, v, 0.0);

    curly_curves_shader.hipnc
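    The V coordinate in the snippet comes from remapping the angle of the direction vector around the wire from [-pi, pi] to [0, 1]. A small Python sketch of just that remap (fit() here re-implements the VEX function of the same name):

```python
import math

def fit(val, omin, omax, nmin, nmax):
    # remap val from [omin, omax] to [nmin, nmax], like VEX fit()
    t = (val - omin) / (omax - omin)
    return nmin + t * (nmax - nmin)

def v_coord(dx, dz):
    # angle of the direction around the wire axis, remapped to 0..1
    return fit(math.atan2(dz, dx), -math.pi, math.pi, 0.0, 1.0)

assert math.isclose(v_coord(1.0, 0.0), 0.5)                   # +X lands mid-range
assert math.isclose(v_coord(-1.0, 1e-9), 1.0, abs_tol=1e-6)   # wraps toward 1
```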
  31. 1 point
    Crag Splits Face................crudely.
  32. 1 point
    I really appreciate the time you spent on helping me, thank you a lot
  33. 1 point
    @_Styliz3D_ use this, and On Growth and Form (just find the file havefun.hipnc), and Tokeru. You have the file; CHOPs/DOPs dynamics, just use color on any shape. If you need exactly the file that you like, I'm gonna need more time to find the hipnc... daily I make 100 (and find 1000 more on the internet).
  34. 1 point
    c'mon...level up !!! select by normal...try it.
  35. 1 point
    Individual HDA (version: BETA v1.00). This is the current beta version; versions with more features will be released soon. ••••••••••••••••••••••• Finally, version 1.7.1 was released. All files were released as open source. I hope it helps you with your work. https://github.com/seongcheoljeon/IndividualHDA
  36. 1 point
    sorry, can't find the link... it was on GitHub from some user (not the SideFX one).
  37. 1 point
    You can instance VDB clouds. It'll take less memory and you'll be able to render more of them. Make a few different models and instance them with a point cloud, and then you're good to go! With an RTX 2080 I've been able to render billions of voxels with instancing in Redshift, so I assume if you render with Mantra it'll be able to handle many voxels if they're instanced!
  38. 1 point
  39. 1 point
    If I understand the question correctly: how to import into the PythonModule a custom script stored in the HDA? As in your screenshot, to import all functions stored in "import_that_script" inside the PythonModule. There is an answer in the docs: https://www.sidefx.com/docs/houdini/hom/hou/HDAModule.html

    So in your case, in the PythonModule section:

    import toolutils
    myscript = toolutils.createModuleFromSection("myscript", kwargs["type"], "import_that_script")

    Then you can call a function as you would on an imported module:

    myscript.myfunction()
  40. 1 point
    thanks, I was able to get it working. My setup was very specific: I had to attach branches to an already animated "core".
  41. 1 point
    patreon.com/posts/38913618 Subdivision surfaces are piecewise parametric surfaces defined over meshes of arbitrary topology. It's an algorithm that maps from a surface to another, more refined surface, where the surface is described as a set of points and a set of polygons with vertices at those points. The resulting surface will always consist of a mesh of quadrilaterals. The most iconic example is to start with a cube and converge to a spherical surface, but not a sphere. The limit Catmull-Clark surface of a cube can never approach an actual sphere, as it's bicubic interpolation and a sphere would be quadric.

    The Catmull-Clark subdivision rules are based on OpenSubdiv, with some improvements. It supports closed surfaces, open surfaces, boundaries by open edges or via sub-geometry, open polygons, open polygonal curves, mixed topology and non-manifold geometry. It can handle edge cases where OpenSubdiv fails or produces undesirable results, i.e. creating gaps between the sub-geometry and the rest of the geometry. One of the biggest improvements over OpenSubdiv is that it preserves all boundaries of sub-geometry, so it doesn't introduce new holes into the input geometry, whereas OpenSubdiv will just break up the geometry, like blasting the sub-geometry, subdividing it and merging both geometries as is. Houdini Catmull-Clark also produces undesirable results in some cases, i.e. mixed topology, where it will either have some points misplaced or just crash Houdini due to the use of sub-geometry (bug pending).

    Another major improvement is for open polygonal curves, where it will produce a smoother curve, because the default Subdivide SOP fixes the points of the previous iteration in subsequent iterations, which produces different results if you subdivide an open polygonal curve 2 times in a single node vs. 1 time in 2 chained nodes. This is not the case for polygonal surfaces.
VEX Subdivide SOP will apply the same operation at each iteration regardless of topology. All numerical point attributes are interpolated using Catmull-Clark interpolation. Vertex attributes are interpolated using bilinear interpolation like OpenSubdiv. Houdini Catmull-Clark implicitly fuses vertex attributes to be interpolated just like point attributes. Primitive attributes are copied. All groups are preserved except edge groups for performance reasons. Combined VEX code is ~500 lines of code.
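    For intuition, one round of curve subdivision can be sketched in a few lines of Python. This is my own simplified illustration of the cubic B-spline rules for an open polyline (interior points use the 1/8, 3/4, 1/8 mask, edge midpoints are inserted between them, endpoints stay fixed), not the actual VEX implementation described above:

```python
# one round of Catmull-Clark-style subdivision of an open polyline
def subdivide_curve(pts):
    # midpoints of every edge
    mids = [((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
            for a, b in zip(pts, pts[1:])]
    # reposition interior points with the 1/8, 3/4, 1/8 mask
    moved = []
    for i, p in enumerate(pts):
        if 0 < i < len(pts) - 1:
            a, b = pts[i - 1], pts[i + 1]
            p = (a[0] / 8 + 3 * p[0] / 4 + b[0] / 8,
                 a[1] / 8 + 3 * p[1] / 4 + b[1] / 8)
        moved.append(p)
    # interleave repositioned points with edge midpoints
    out = []
    for i, m in enumerate(mids):
        out.extend([moved[i], m])
    out.append(moved[-1])
    return out

curve = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
once = subdivide_curve(curve)
assert len(once) == 5
assert once[0] == (0.0, 0.0) and once[-1] == (2.0, 0.0)  # endpoints preserved
assert once[2] == (1.0, 0.75)                            # interior point smoothed
```

    Applying this repeatedly to the already-refined output (rather than re-fixing the original points each iteration) is what produces the smoother limit curve the post describes.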
  42. 1 point
    Thanks for sharing your setup! One thing it made me realize was that, in addition to having a sufficiently small particle separation, turning *off* "Solve Pressure With Adaptivity" can help prevent air bubbles from collapsing. Btw, what was your motivation for using the Gas Vortex Confinement DOP? Was it to help break up the bubbles even more?
  43. 1 point
    We did a take on differential growth for a short-film festival. I don't have the complete video, but here's a cut-down Instagram version of it. (Not pleased with the mushroom animation, but it is what it is.) And an Instagram breakdown.
  44. 1 point
    the sorting by Y actually does stuff all; test it on a single line and you'll see. You have to actually reverse the line to make a difference. I've also added an Axis Align after your transform8 to make things quicker to line up... most likely you'd play around with the Z axis. vu_Roof_CurveNormalsFix.hiplc
  45. 1 point
    So many great workflows! Here's my take - using scatter and VDBs. A few optional switches inside to do deletion as well as accretion, or to pipe in a custom vector field for bias (I just have a simple curlnoise for default). differentialgrowth.hiplc
  46. 1 point
    I didn't see much implementation of machine learning in Houdini, so I wanted to give it a shot. Still just starting down this rabbit hole, but I figured I'd post the progress. Maybe someone else out there is working on this too.

    First of all, I know most of this is super inefficient and there are faster ways to achieve the results, but that's not the point. The goal is to get as many machine learning basics functioning in Houdini as possible without Python libraries just glossing over the math. I want to create visual explanations of how this stuff works. It helps me ensure I understand what's going on, and maybe it will help someone else who learns visually.

    So... from the very bottom up, the first thing to understand is gradient descent, because that's the basic underlying function of a neural network. So can we create that in SOPs without Python? Sure we can, and it's crazy slow. On the left is just normal gradient descent. Once you start to iterate over more than 30 data points, this starts to chug. So on the right is a stochastic gradient descent hybrid which, using small random batches, fits the line using over 500 data points. It's a little jittery because my step size is too big, but hey, it works, so... small victories.

    Okay, so gradient descent works, awesome; let's use it for some actual machine learning stuff, right? The hello world of machine learning is image recognition of handwritten digits using the MNIST dataset. MNIST is a collection of 60 thousand 28 by 28 pixel images of handwritten digits. Each one has a label of what it's supposed to be, so we can use it to train a network. The data is stored as a binary file, so I had to use a bit of Python to interpret the files, but here it is.

    Now that I can access the data, next is actually getting this thing to a trainable state. Still figuring this stuff out as I go, so I'll probably post updates over the holiday weekend. In the meantime, anyone else out there playing with this stuff?
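    For anyone following along, plain gradient descent on a line fit is only a few lines outside Houdini too. A minimal Python sketch (my own toy example, not the poster's SOP setup):

```python
# toy gradient descent fitting y = w*x + b to data sampled from y = 2x + 1,
# minimizing mean squared error
data = [(i / 10.0, 2.0 * (i / 10.0) + 1.0) for i in range(-10, 11)]
w, b, lr = 0.0, 0.0, 0.05

for step in range(2000):
    # gradients of MSE with respect to w and b
    gw = sum(2.0 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2.0 * (w * x + b - y) for x, y in data) / len(data)
    # step downhill
    w -= lr * gw
    b -= lr * gb

assert abs(w - 2.0) < 1e-3   # recovered slope
assert abs(b - 1.0) < 1e-3   # recovered intercept
```

    The stochastic variant the post mentions just computes gw and gb on a small random batch of data each step instead of the whole dataset.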
  47. 1 point
    Yes, unfortunately, drag-and-drop in Python panels is very buggy; over the course of several years I've seen all sorts of issues, including this one. First, make sure in your dragEnterEvent you store the mime data somewhere, and then the next time you enter the widget, check if such data exists and reject the event if it does. See if that works for you; you might need to do the same trick with the dropEvent.
  48. 1 point
    I just created a tutorial on how to convert a sequence of slice images to a volume using VEX. slices_to_volume.hiplc
  49. 1 point
    @char the thread you shared has a lot of examples in it. Which example are you trying to replicate? Most of them seem to use multiple constraint networks. I'd suggest starting with a single constraint network, then adding on after that. Start with a glue constraint and think about what you can use to make them break (hint: impacts, or use removeprim like in my example). I need to set up another sim with multiple networks later today, so I might post an example here (don't hold your breath). @vicvvsh thanks for checking it out. I knew it was something stupid like that; I was watching the angle in the spreadsheet but I must have misread my numbers. Here is an update (still want to work on the bendiness: bend first, then break). For kicks I tried the same system with a metaball force but still used the angle to removeprim. Works pretty well, I think.
  50. 1 point
    Yep, kinda like this, but the devil's always in the details and you have to account for oversampling (timestep) of the sim, especially with age accumulation. You only need the Gas Field VOP to increment age by the timestep using density as a mask, btw. Everything else can be done with Gas Calculate (tends to be faster than Gas Field VOP for simple stuff).
    - initialize the age field at sim start, channel matching fields from the Smoke Object DOP
    - increment age by the timestep using density as a mask (unless you want to add time to all voxels, up to you)
    - keep the source field around (need to remove "source" from the clear fields in the Source Volume DOP)
    - create a temporary agetemp field and add the source field reset to a constant timestep (you can't count on source always being set to 1)
    - add agetemp to age
    - advect age
    I haven't really thought about the best order of things: when to add agetemp to age, before or after advection. Open to suggestions/advice. See the attached hip file for an attempt at this: age_field.hip
    This temp field is pretty hard to read in the viewport when visualized, as it is kind of like density but counters velocity (to be expected, as faster areas should be darker when visualization is set to greyscale for age). It looks to be kinda correct. I hope...
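    The "increment age by the timestep using density as a mask" step can be sketched in a few lines; this is a toy 1D list of voxels, not actual DOP fields:

```python
# only voxels that contain smoke (density > 0) accumulate age
timestep = 1.0 / 24.0
density = [0.0, 0.5, 1.0, 0.2]
age = [0.0, 0.1, 0.2, 0.3]

age = [a + timestep if d > 0.0 else a for a, d in zip(age, density)]

assert age[0] == 0.0                             # empty voxel: unchanged
assert abs(age[2] - (0.2 + 1.0 / 24.0)) < 1e-9   # smoke voxel: aged by one timestep
```

    Using the simulation timestep rather than a constant per-frame value is what keeps the accumulation correct under oversampling.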