Leaderboard


Popular Content

Showing most liked content since 04/22/2022 in all areas

  1. 6 points
    Hi @bentraje, you could map the surface curvature to the height field. The erosion of the landscape can then be transferred back for displacement. https://procegen.konstantinmagnus.de/height-field-erosion-on-meshes
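    A rough sketch of just the curvature-mapping step (the "curvature" point attribute would come from a Measure SOP; the erosion transfer itself is covered on the linked page):

        // Volume Wrangle on the height field; second input: the mesh with a
        // per-point curvature attribute. The nearest-point lookup is a crude
        // stand-in for a proper projection.
        int pt = nearpoint(1, @P);
        f@height += point(1, "curvature", pt) * chf("scale");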
  2. 4 points
    You can achieve this by writing this sort of code inside a Primitive Wrangle:

        string cam = chs("cam");
        float near = 0;
        float far = -0.01;
        vector pnear = fromNDC(cam, set(0.5, 0.5, near));
        vector pfar = fromNDC(cam, set(0.5, 0.5, far));
        vector dir = -normalize(pfar - pnear);
        vector n = normalize(primuv(0, "N", @primnum, 0.5));
        if (dot(n, dir) < ch("threshold"))
            i@group_fresnel = 1;
  3. 3 points
    Iteratively resampling a voxel field and visualizing the VDB tree as cubes might also be an option: https://procegen.konstantinmagnus.de/cubify-meshes
  4. 3 points
    Ok, I think I finished this. If anyone is interested, the hip file is available below. I hope it can be instructive for beginners.

    I added leaves; they can have a general motion (breeze), and you can add another wind based on the tree motion (length of averaged velocity). I discovered that Bone Deform is way faster than Point Deform (it was realtime for this tree), but Point Deform is more accurate. My setup includes both; I use Bone Deform for previewing, and you can just switch. The wire sim is realtime as well, and cooking everything with leaves and little branches ran at around 3 fps on my shy laptop without any pre-caching. (I think that's not bad.) The leaves part could actually be better; for example, right now the leaves don't orient themselves according to the direction of the wind... so some controlling features like that would still be nice to have. Feel free to improve it, I would say. Theoretically this setup should work with any tree, bush, plant, etc. created by the Labs Tree tools. The hip file includes some notes as well; I tried to explain why I'm doing what I'm doing.

    Also, I started an attempt to create a non-simulated tree motion, like what I saw in nirfse3D's videos about his Simple Tree Tools. It's not really impressive right now... I have no idea how he did it; this approach gets really slow with more branches. Anyway. Best wishes! tree_16.hiplc
  5. 2 points
    And yet another Mikael Pettersen setup demonstrating guided RBD fracture simulations. In the original tutorial, Voronoi fracture was used; in this one I added rbdmaterialfracture with chipping. You can switch between the two techniques. ap_mp_zombie_crumble_guided_RBD_collapse_052122.hiplc
  6. 2 points
    I worked through a tutorial where Mikael Pettersen demonstrates the emission of vellum sticky balls. ap_mp_emit_vellum_sticky_balls_052022.hiplc
  7. 2 points
    I never thought I would say this, but what about using metaballs?
  8. 2 points
    ejr32123 is correct. You want a constant shader on the top of the sidewalk, but you want proper lighting on the fractured interior faces. A larger issue you'll have is that when the pieces break and move, the lighting on the top surface should change: the shadows should change, and the diffuse and specular lighting should change. But how do you account for this if the lighting is baked into the projected image? No easy answer as far as I know. People often make a copy of the original image and try to remove the specular and shadows in the hope of creating a diffuse texture map that can be used with a principled shader. Photoshop or GIMP seem the most obvious tools for this, but I believe some photogrammetry solutions have automated mechanisms for it. Once this is done, you'll need to reproduce the lighting in CG so that it matches the photo. You'll probably need to render a shadow pass for areas where your CG debris is supposed to cast shadows on the constant surface. Then it'll probably take some love in compositing to make it all work. Hope that helps.
  9. 2 points
    Hi, You can create a point group to contain all points before the Clip SOP and then invert this group:
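    A minimal sketch of that idea (the group name is mine): tag the existing points in a Point Wrangle before the Clip SOP, then invert the group afterwards to select whatever the clip created.

        // Point Wrangle before the Clip SOP:
        i@group_pre_clip = 1;

        // after the Clip SOP, select the new points with the group
        // pattern !pre_clip (e.g. in a Group or Blast SOP's group field)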
  10. 2 points
    @bentraje just use a Group from Attribute Boundary SOP on the primitive 'Cd' attribute; endless control.
  11. 2 points
    Thanks for the responses.

    @Librarian I couldn't quite use it directly, since I guess the code is for the mat context. That said, I think I get the overall logic; I think it's doing Method 2 in the code below.

    @animatrix The code works as expected.

    Also, just for reference: people from the Slack group helped me as well. Here are the results. Same goal, just different execution.

    Method 1 (from bcarmeli):

        vector @cdir;
        vector @raydir;
        matrix camMatrix = optransform(chs("camera")); // get a matrix with the camera's transforms
        @cdir = cracktransform(0, 0, 0, {0,0,0}, camMatrix); // extract the camera position as a vector
        @raydir = normalize(@P - @cdir); // direction from the camera to the point
        f@facing_ratio = fit11(dot(@N, @raydir), 0, 1);
        if (@facing_ratio > chf("threshold")) {
            @group_mygroup = 1;
        }

    Method 2 (from David Torno):

        vector cam = getpointbbox_center(1);
        // Option 1: get dir per point
        vector dir = normalize(@P - cam);
        // Option 2: get single point centroid
        //vector c = getpointbbox_center(0);
        //vector dir = normalize(@P - c);
        float d = fit11(dot(@N, dir), 0, 1);
        if (d > chf("threshold")) i@group_mygroup = 1;

    Will close this thread now. Thanks for the help!
  12. 2 points
    Take a look at the shapematch constraint. It's suitable for rigid bodies. That way you don't have to leave vellum. Vellum_RBD_interaction_v0001_yader.hiplc Vellum_RBD_interaction_v0001_yader.mp4
  13. 2 points
    After months of hard work, the new bonus content for Pragmatic VEX: Volume 1 is finally out! Existing users immediately received access to the updated content as soon as it went live. The new trailer also features the new content: Enjoy!
  14. 2 points
    KineFX uses vertex order to set hierarchy. A reverse SOP will flip the hierarchy. Make sure it's wired in before the first rig doctor. Once you add a rig doctor, additional attributes are added that define parent and child indices and local transforms are calculated using that hierarchy. Adding a reverse SOP after that won't have any effect.
  15. 2 points
    Hi Naim, you could displace grids and stick them to a swept curve: hair_microscopic.hip
  16. 1 point
    Custom VEX-based carve with randomization, as described by the UPP advertising team at the FMX 2022 Hive. https://www.sidefx.com/houdini-hive/fmx-2022/#virtual ap_upp_fmx_2022_hive_custom_carve_052122.hiplc
  17. 1 point
    Very close to getting it working, but in my test I'm getting a weird wedge of high density. Maybe somebody smarter can tell me what I missed? volume_fall_off_test_v1.hip
  18. 1 point
    Might be related: https://www.sidefx.com/forum/topic/82674/. Also, including your Houdini version would help.
  19. 1 point
  20. 1 point
    Should work as Atom said. Maybe post a scene file if you're still having trouble.
  21. 1 point
    Hi everyone! I have been looking at the work of Tobias Gremmler, and I wondered if anyone knows how he makes these liquid mutations/transformations, and how he makes the texture also seem to stretch and mutate together with the animation?
  22. 1 point
    Not sure how to do it in mantra, but in redshift I can make a material that receives GI and shadows (and I can also disable one or both of those if I want), but not normal lighting. That way my texture matches the scene but can also pick up shadows in case a shadow is cast over it. If you can't do that in mantra it would be possible to render the geometry top faces again but with a shadow catcher material, then in compositing you comp the shadow back on the top faces as you said.
  23. 1 point
    @Librarian What do you mean when you say from Asia? You often mention that, but rarely provide a link... Asia is big.
  24. 1 point
    Hmmm... I just gave it a try; it seems to be working with my example, 50 lights. Although there does seem to be a limit to what can be displayed in OpenGL. lops_crowd_lights_doc_v2.hip
  25. 1 point
    Or LOPs? I wager it's all a bit bleeding edge, so you'd want to test a lot before pushing it into production, but this is all the kind of stuff LOPs is meant to be good at. lops_crowd_lights.hip
  26. 1 point
    You can run them via the command line: https://www.sidefx.com/docs/houdini/tops/cooking.html#cookcommandline
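    For reference, the documented route is cooking a TOP node headlessly with hython and the topcook helper script; the scene and node paths below are placeholders (check the linked cookcommandline docs for your build):

        # $HHP is Houdini's python library directory; hython ships with Houdini
        hython "$HHP/pdgjob/topcook.py" \
            --hip /path/to/scene.hip \
            --toppath /obj/topnet1/output1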
  27. 1 point
    Have a look at this post: https://www.sidefx.com/forum/topic/57112/ It's possible to get the position of an agent's bone with VEX. Then I believe you can use the Instance LOP to instance your lights into place.
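    A minimal sketch of the VEX part, pieced together from the functions the linked post discusses (the joint name "head" and the one-point-per-agent idea are my assumptions):

        // Primitive Wrangle over agent prims: add one point per agent at a
        // named joint, in scene space, ready to instance lights onto.
        int idx = agentrigfind(0, @primnum, "head");           // joint index in the rig
        matrix joint = agentworldtransform(0, @primnum, idx);  // joint in agent space
        matrix agent = primintrinsic(0, "packedfulltransform", @primnum);
        vector pos = cracktransform(0, 0, 0, {0,0,0}, joint * agent);
        addpoint(0, pos);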
  28. 1 point
    @unshmettr On GitHub you can find FeELib-for-Houdini.
  29. 1 point
    @Librarian Thank you, I found
  30. 1 point
    On the smoke object's Guides tab there is a Temperature checkbox. You'll get a temperature plane colored from red (high temp) to green (low temp). In the Temperature tab you can adjust the min/max and find your range; you'll need to sim a few frames at low res.
  31. 1 point
    It's easier to use a String-type parameter (with a menu) for this instead of an Ordered Menu. This way, eval() will return a string.
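    A quick sketch of the difference in Python (the node and parm names here are hypothetical):

        # String parm with a menu: eval() returns the selected token as a string
        mode = hou.pwd().parm("mode").eval()       # e.g. "bake"
        # Ordered Menu (integer) parm: eval() returns the item index instead
        idx = hou.pwd().parm("mode_int").eval()    # e.g. 1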
  32. 1 point
    A suggestion: don't use hard-coded numbers; eventually they will break. Perhaps something like:

        file_name = hou.hipFile.basename()
        return file_name.split("_")[-1]

    This way, as long as you keep using the same template separated by underscores, it will work independently of the length of the string or even the number of underscores.
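    For reference, what that returns for a hypothetical file name (note that basename() includes the extension, so you may want to strip it first):

        # e.g. for a scene saved as shot010_fx_v003.hip:
        hou.hipFile.basename()                # -> "shot010_fx_v003.hip"
        "shot010_fx_v003.hip".split("_")[-1]  # -> "v003.hip"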
  33. 1 point
    My first instinct would be to project the 2D font onto the box surface before extruding it. Also keep in mind that not all fonts are created equal. In this example I switched to Arial, because the default font produced jagged artifacts. ap_carve_letter_equal_extrusion.hipnc
  34. 1 point
  35. 1 point
    You could try the technique in this example: https://www.tokeru.com/cgwiki/index.php?title=HoudiniDops#RBD_coin_follow_path
  36. 1 point
  37. 1 point
    @Librarian Wow. These are fantastic. Thanks for uploading the files because I can’t wait to check these out.
  38. 1 point
    Here are a few bot examples the process could give birth to... Some details can actually only be seen in 4K. The screenshot will give you an idea of how the built geometry looks. It was another experimental research piece; I'm not saying I'm happy with everything, and I will adapt accordingly for the next try. But this is mostly a non-technical question. This kind of network, driven by DG, would be a good candidate to feed a machine learning model, like I showed on LinkedIn before with the skull generation. Click on each image to see the HD version.
  39. 1 point
    Another variant using some more recent nodes. 00_patchwork.hiplc
  40. 1 point
    It's this file. When there was the challenge on Think Procedural, I found it on GitHub... Or are you using something else? https://github.com/chloesun/wfc_houdini
  41. 1 point
    Hello everyone! In this introduction, I have created an asset, ES voronoi chunk. The workflow is also included. I thank all the odforce users, given that it's on this forum that I came to understand Houdini. I have not contributed as much as them, but I managed to learn from the users, and I thank you again! I hope that with my work I can contribute to the knowledge sharing of this forum. Watch > https://youtu.be/x6CaCquqeYg
  42. 1 point
    Hey Nicolas, Thanks for the kind words! This was a labor of love for sure. Worked on it, off and on, for about 2 years. Kept finding and making roadblocks and distractions. I want to revisit it shortly and make a series of unique prints. Hopefully sooner than later. -r And thanks to you for teaching me the word "peregrinations".
  43. 1 point
    Of course! Thank you Joe, that worked.
  44. 1 point
    Added support for heterogeneous media. Decoupled ray marching & equiangular sampling: Default Mantra render (same time): Example file included in the repository.
  45. 1 point
    Filament-like structures: a combination of the Smoke Solver, VDB Advect Points and Volume Rasterize Particles. smokesolver_v3.hipnc
  46. 1 point
    If you explicitly cast chramp to a vector first, the 'Create spare parameters' button will create a colour ramp. E.g. @Cd = vector(chramp('myramp', @P.x));
  47. 1 point
    I have no idea if this is any use to you, but here's a scene that shows condir in action (with condof set to 2). The first .gif (condir.gif) has both position and rotation constrained, whilst the second has just rotation. The constraint axis is Y. In the first gif you can see how movement is only allowed along Y and rotation is only allowed around Y. If you go into the point wrangle and change the constraint to rotation only, you get the second .gif, where movement is allowed anywhere but rotation is still only allowed around Y. This scene does not work properly prior to version 16.0.642, as a bug with condir was fixed from that point onwards (i.e. rotation about X was the same as rotation about Z). hard_hinge_daryl.hip
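    For reference, the attributes being driven in that scene, as a Point Wrangle sketch on the constrained object's points (values taken from the post: axis Y, condof 2):

        // hard-constraint degrees of freedom (sketch)
        v@condir = {0, 1, 0};  // constraint axis: Y
        i@condof = 2;          // as in the attached scene; switch the wrangle to
                               // rotation-only to reproduce the second .gif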
  48. 1 point
    Hi. To do this, you should set an initial N; I assume you will set N to (0,0,1). See file. N_to_angle.hipnc
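    A minimal sketch of what that initial N buys you (my wording, not from the attached file): with a known rest normal of {0,0,1}, the rotation to the current N falls out of dihedral().

        // Point Wrangle sketch, assuming the initial N was {0,0,1}:
        vector4 q = dihedral({0, 0, 1}, normalize(@N));
        v@angles = degrees(quaterniontoeuler(q, 0));  // 0 = XYZ rotate order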
  49. 1 point
    There is no mystery as to how Houdini works. Anything that gets done in Houdini can be expressed by a node. Whether that node is a coded C++ operator, an operator written in VEX (or using VOP nodes representing VEX functions), a Python operator or a Houdini Digital Asset (HDA), each node does its own bit and then caches its result. There is no lower level than nodes. The nodes in Houdini are the lowest-level atomic routine/function/programme.

    A SOP node, for example, takes incoming geometry and processes it all in and of itself, then caches its result, which is seen in the viewport, in the node's stats via MMB, and in the Details View where you can see the specific attribute values. If this is a modifier SOP, it will have a dependency on its input node. If there is an upstream change, the current node will be forced to evaluate. If there is a parameter reference to another node, and that other node is marked "dirty" and affects this node, this node will also be forced to evaluate.

    To generalize the cooking structure of a SOP network: for every cook (frame change, parm change, etc.), the network starts at the Display/Render node and then walks up the chain looking for nodes with changes, evaluating dependencies for each node and querying those nodes for changes until it hits the top nodes. The nodes marked dirty cause the network to evaluate the dirty nodes top down, evaluating the dependencies that were found. You can set a few options in the Performance Monitor to work in the older H11 way and see this evaluation tree order if you wish. If you haven't explored that, change that: it is "mandatory" if you want a deeper understanding of Houdini. You definitely need the Performance Monitor to see how the networks have evaluated, as evaluation is based on creation order along with the set-up dependencies. Yes, deleting and undeleting an object can and will change this evaluation order, and can sometimes get you out of a spot with crashing. If you haven't used the Performance Monitor pane, then there you go. Use it. Just remember to turn it off, as it does have an overhead performance-wise.

    Another key is to use the middle mouse button (MMB) on any and all nodes to see what they have cached from the last cook evaluation: memory usage, attributes currently stored, etc. The MMB wheel on my mouse is as worn in as the LMB because I use it so much. You can see whether the node is marked as time dependent or not, which affects how it evaluates and how it affects its dependent nodes. You can RMB on a node and open up the Dependency view for that operator, which lists all references and dependencies. You can hit the "d" key in the network and, in the parameter display options under the Dependency tab, enable the various dependency aids (links and halos) to see the dependencies in the network.

    Houdini is a file system: in memory, and on disk in the .hip "cpio" archive file. If you want, you can use a shell and, given any .hip file, run the hexpand shell command on it. This will expand the Houdini file into a directory structure that you can read and edit if you so wish. Then wrap it back up with hcollapse. If you really want to see how Houdini works at a low level, this is how it all ends up, and how it all starts: it's just hscript Houdini commands that construct the nodes, including the folder nodes themselves.
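    A rough sketch of that round trip in a shell (the file name is hypothetical, and exact usage can differ by build; run hexpand with no arguments to check):

        # unpack the .hip cpio archive into editable per-node script files
        hexpand myscene.hip
        # ...browse and edit the expanded files...
        # pack them back up into a .hip
        hcollapse myscene.hip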
    Each node is captured as three distinct files: the file that adds the node and wires it up to other nodes, the parameter file that sets the node's parameters, and another file that captures additional info on the node. If you locked a SOP, that binary information will be captured as a fourth file for that node. It is for this reason that .hip files are very small, unless you start locking SOPs, and that is not wise. Better to cache to disk than to lock, but there's nothing stopping you. When you open up a .hip file, all the nodes are added, wired, parameters modified and nodes cooked/evaluated.

    There are different types of node networks, and nodes of a specific type can only be worked on in specific directory node types. This forces you to bop all over the place, especially if you still willingly choose to use the Build desktop, which I do not prefer. You have to have a tree view up somewhere in the interface to see how the network lays out as you work. It's also very handy for navigating your scene quickly. The Technical Desktop is a good place to start when working on anyone's file, as there is a tree view and a few other panes such as the Details View, Render Scheduler and more. If you want to use the Technical Desktop and follow a vid done with the Build desktop, simply switch the Network with the Parameter pane and the right-hand side is now the same as Build, but you can follow the tree view and see where and when other nodes are dropped down.

    A new Houdini file is an unread book, full of interesting ideas. Using a desktop that exposes a tree view pane, you can quickly see what the user has been up to in a couple of seconds. Again, use the Technical Desktop as a start if you are still using Build (if you know me, you will know I will force you to have a tree view up). You can quickly traverse the scene and inspect the networks. If that isn't enough, you can pop open the Performance Monitor and see which nodes are doing the most work. You really don't need any videos, ultimately just the .hip file. It helps if the scene is commented and nodes are named based on intent.

    Let's stick to SOPs. In Houdini, attributes are an intrinsic part of the geometry that is cached by each SOP, not some separate entity that needs to be managed. That is what makes SOPs so elegant. The wire between two SOPs is the geometry being piped from one SOP to the next, attributes and all. Not a link per attribute (which in other software can be a geometry attribute, parameter attribute, etc.). This makes throwing huge amounts of geometry with lots of attributes around a breeze in Houdini. All SOPs will try their best to deal with the attributes accordingly (some better than others, and for those others, please submit RFEs or bugs to Side Effects to see if there is something that can be done).

    You can create additional geometry attributes by using specific SOPs:
    - The Point SOP creates "standard" point attributes
    - The Vertex SOP creates "standard" vertex attributes
    - The Primitive SOP creates "standard" primitive attributes
    - Use the Attribute Create SOP to create ad-hoc attributes of varying classes (float, vector, etc.) of type point, vertex, primitive or detail
    - Use VEX/VOPs to create standard and ad-hoc point attributes
    - Use Python SOPs to create any standard or ad-hoc geometry attributes

    One clarification that must be made is the distinction between a "point" and a "vertex" attribute in Houdini. Other software packages use the term vertex to mean either point attributes or prim/vertex attributes.
    Games have latched on to this, making the confusion even deeper, but alas, in Houdini it isn't so. You need to make the distinction between a point and a vertex attribute very early on. A point attribute is the lowest-level attribute any data type can have. For example, the vector4 P position (plus weight for NURBs) is a point attribute that locates a point in space. If you want, that is all you need: points. No primitives whatsoever. Then instance stuff to them at render time. You can assign any attribute you want to that point.

    To construct a primitive, you need points for the primitive's vertices to reference as a location and weight. In the case of a polygon, the polygon's vertices index points. You can see this in the Details View when inspecting vertex attributes: the vertex number is indicated as <primitive_number>:<vertex_number>, and the first column is the Point Num, which shows you which point each vertex is referencing for its P position and weight. Obviously you can have multiple vertices referencing a single point, and this is what gives you smooth shading by default with no vertex normals (as the point normals will be used and automatically averaged across the vertices sharing the point).

    In the case of, say, a primitive sphere, there is a single point in space, then a primitive of type sphere with a single vertex that references that point position to locate the sphere. Then there is intrinsic data on the sphere (soon to be made available in the next major release) where you can see the various properties of that sphere, such as its bounds (from which you can extrapolate the diameter), area, volume, etc. Other primitive types that have a single point and vertex are volume primitives, metaball primitives, VDB grid primitives, Alembic Archive primitives, etc.

    How does a Transform SOP, for example, know how to transform a primitive sphere versus a polygonal sphere? The answer is that it has been programmed to deal with primitive spheres in a way that is consistent with any polygon geometry. The same goes for volumes. It has been programmed to deal with volumes to give the end user the desired result. This means that all properly coded SOPs will handle any and all primitive types in a consistent fashion. Some SOPs are meant only for parametric surfaces (Basis SOP, Refine SOP, Carve SOP, etc.) and others for polygons (PolySplit, etc.), but for the most part, the majority of SOPs can work with all primitive types.

    What about attributes? The Carve SOP, for example, can cut any incoming polygon geometry at any given plane. It will properly bilinearly interpolate all attributes present on the incoming geometry and cache the result. It is this automatic behaviour for any and all point, vertex, primitive and detail attributes that makes working with SOPs a breeze.

    How does Houdini know what to do with vertex attributes when position P, velocity v and surface normal N need to be handled differently? When performing, say, a rotate with a Transform SOP, and the incoming geometry has surface normals N, velocity vector v, and a position cache "rest", each attribute will be treated correctly (well, N because it is a known default attribute; for user-defined attributes you can specify a "hint" on the vector telling it to be either a plain vector, a 3-float position, or of type surface normal). It is this auto-behaviour with attributes, and the fact that you don't need to manage attributes, that makes using SOPs so easy and very powerful without having to resort to code.
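    To make the point-versus-vertex distinction concrete, a small wrangle sketch (the attribute names are mine):

        // Point Wrangle (Run Over: Points): one value per point, shared by
        // every vertex that references that point.
        f@pt_height = @P.y;

        // Vertex Wrangle (Run Over: Vertices): one value per vertex, so
        // vertices sharing a point can disagree (hard edges, UV seams).
        i@vtx_id = @vtxnum;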
    Remember that each SOP is a small programme unto itself. It will have its own behaviours, its own local variables if it supports varying attributes in its code logic, its own parameters, and its own way of dealing with different primitive types (polygons, NURBs, Beziers, volumes, VDB grids, metaballs, etc.). If you treat each SOP as its own plug-in programme, you will be on the right path.

    Each SOP has its own help card which, if it is authored correctly, will explain what this plug-in does, what the parameters do, what local variables are available if any, some other nodes related to this node, and finally example files that you can load into the current scene or another scene. Many hard-core Houdini users picked things up by just trawling the help example files, and this is a valid way to learn Houdini: each node is a node, and a node is what does the work. If we were to lock geometry into the help cards, the Houdini download would be in the gigabytes, so nodes are all that is in the help cards, and nodes are what you need to learn.

    I'm not going to touch DOPs right now, as that is a different type of environment, purpose-built for simulation work. Invariably a DOP network ends up being referenced by a SOP to fetch the geometry, so in the end it is just geometry, which means SOPs.

    Shelf tools are where it's at, but I hear you. Yes, there is nothing like being able to wire up a bunch of nodes in various networks and reference them all up. Do that for a scratch FLIP simulation once or twice, fine. Do that umpteen times a week, and that is where the shelf tools and HDAs make life quite simple. But don't be dismayed by shelf tools. All of those tools are simply executing scripts that place and wire operators together and set up parameter values for you. No different than when you save out a Houdini .hip scene file. If you are uber-hard-core, then you don't even save .hip files; you wire everything from scratch, every time, each time a bit different, evolving, learning. So by the shelf-tool logic you find so objectionable, if you open up an existing .hip scene file, you are also cheating. It reminds me of the woodworker argument as to what is hand-built and what isn't. I say if you use anything other than your teeth and fingernails to work the wood, you are in essence cheating, but we don't do that. Woodworkers put metal or glass against wood because fingernails take too long to grow back and teeth are damaged forever when chipped. And I digress...

    Contrast that with power users in other apps who clutch their code with bare white knuckles, always in fear of the next release rendering parts of their routines obsolete. With nodes, you have a type name and parameter names. If they don't change from build to build, they will load just fine. I can load files from before there were .hip files, when they were called .mot (from Sage, for those that care to remember), from 1995. They still load, well, with a few meaningless errors, but they still load. A Point SOP is a Point SOP and a Copy SOP is a Copy SOP. No fear of things becoming obsolete. Just type the "ophide" command in the Houdini textport and you will still find the Limb and Arm SOPs (wtf?). LOL!

    First thing I do every morning? Download the latest build(s). Read the build journal changes. If there is something interesting in that build, work up something from scratch. Then read the forums time permitting and answer questions from scratch if I can. All in the name of practice.
    Remember from above that a .hip file is simply a collection of script files in a folder system saved on disk. A Houdini HDA is the same thing. A shelf tool, again, is the same thing: a script that adds and wires nodes and changes parameters. Not pounding out a bunch of geometry and saving the result in a shape node, never to know the recipe that got you there. To help users sort out what created which node, you can use the "N" hotkey in any network: it toggles the node names between the default label, the tool that added that node, and finally nothing. Hitting "N" several times while inspecting a network will toggle the names about. That, and turning on the dependency options in the network, will help you see just what each shelf tool did to your scene.

    Knowing all this, you can now trawl through the scene and see what the various shelf tools did to it. If you like to dig even deeper, you can use the Houdini textport pane and use opcf (aliased to cd), opls (aliased to ls), and oppwf (aliased to oppwd and pwd) to navigate the Houdini scene via the textport as you would in a Unix shell. One command I like to show those more interested in understanding how Houdini works is to cd to, say, /obj, then do an opls -al command to see all the nodes with a long listing (a short session is sketched at the end of this post). You will see stats very similar to those found when a shell lists files, or when you RMB on any disk file and inspect its info or state. Remember, Houdini IS a file system, with additional elaborate dependencies all sorted out for you.

    There are user/group/other permissions. Yes, you can use opchmod (not aliased to chmod, but easily done with the hscript alias command) to change the permissions on nodes: opchmod 000 * will remove read/write/execute permissions on all the nodes in the current directory, and guess what? The parameters are no longer available for tweaking. Just remember to either tell your victim or fix it for them, or you may be out of a job yourself. opchmod 777 * gives the permissions back. An opls -al will verify this. Now you know what our licensing does to node states: a node can be set to read and execute only, and removing the write permission from any DOP or POP node gives you a Houdini license, while a Houdini FX license enables the write on all nodes in all networks.

    Also knowing this, the .hip file truly is a book with a lot of history, along with various ways of inspecting who created which node and when, what tool was used to create it, what dependencies are on it, whether it is time dependent, and more, all with a quick inspection. After all this, learning Houdini simply becomes learning each node in turn, and practice, practice, practice. Oh, and if you haven't figured it out by now, many nodes have a very rich history (some older than 30 years now) and can do multiple things, so suck it up, read the node help cards, study the example files and move forward. The more nodes you master, the more you can see potential pathways of nodes and possibilities in your mind, the faster you work, the better you are. The more you do this, the more efficient your choices will become. The learning curve is endless and boundless. All visual. All wysiwyg.
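    And the textport session mentioned above, as a minimal hscript sketch (the listing shows whatever your scene happens to contain):

        # in the Houdini textport
        opcf /obj        # aliased to cd
        opls -al         # long listing: permissions, flags, stats
        opchmod 000 *    # lock every node's parameters (tell your victim!)
        opchmod 777 *    # give them back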
  50. 1 point
    Wait, I remembered. If you use "\\" it will use an ASCII code instead of what you write, so you have access to all characters. The 33 thing that anim wrote above is because the first 31 non-printing characters are skipped, so you actually have 93 instead of 127 characters. If you write \\`int(fit01(rand($F),1,94))` in the Font SOP it will give you every possible character, or use 65-90 for lower-case letters. EDIT: anim beat me to it. He