
Leaderboard


Popular Content

Showing most liked content since 02/13/2019 in all areas

  1. 4 points
    Unwrap the mesh (right) before deformation (left): Assign zigzag UVs to the curve (right) and transfer positions from the mesh based on both UVs (left): spiral_mesh.hipnc
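    The transfer step itself can be done with uvsample() in a point wrangle. A minimal sketch, assuming the curve is in the first input, the mesh in the second, and both carry matching uv point attributes:

        // point wrangle on the curve: fetch the mesh position at this point's UVs
        v@P = uvsample(1, 'P', 'uv', v@uv);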
  2. 3 points
    DM 1.5 first screenshot: https://i.postimg.cc/rcy3KY3v/dm1-5-screen.png
  3. 3 points
    Hi everyone! The past week I worked on a personal project to learn something about hair and Vellum. It's my first project ever with hair, so I guess it's nothing special, but several people asked to see the hip file, so here it is. Final result: Hip file (I had to recreate it, but it should be pretty much the same): groom_clumping_03.hipnc
  4. 2 points
    You can set values with variables in them by escaping the $:

        node_rop.parm('sopoutput').set("\$JOB/render/\$HIPNAME.\$F.bgeo.sc")
  5. 2 points
    here is slightly modified code to Konstantin's answer, to account for pivots and real bounds in case your packed pieces are already transformed:

        float space = chf('space');
        vector offset = 0.0;
        for (int i = 0; i < npoints(0); i++) {
            // intrinsic bounds of the packed prim: [xmin, xmax, ymin, ymax, zmin, zmax]
            float bounds[] = primintrinsic(0, 'bounds', i);
            // move the piece so its minimum x lands on the running offset
            vector pivot = set(bounds[0], 0, 0);
            setpointattrib(0, 'P', i, offset - pivot, 'add');
            float width = bounds[1] - bounds[0];
            offset.x += width + space;
        }
  6. 2 points
    as I said, stylesheets or vex are both good methods. vex allows very straightforward assignments by matching @path names. stylesheets allow assigning materials to packed geometry and more complicated assignment patterns. with simple vex assignment, you'd go like this (in a primitive wrangle SOP):

        if (match('*brick*', s@path)) s@shop_materialpath = '/mat/brickMaterial';
        if (match('*wood*', s@path))  s@shop_materialpath = '/mat/woodMaterial';
        if (match('*metal*', s@path) || match('*METAL*', s@path)) s@shop_materialpath = '/mat/metalMaterial';
        ...

    this expects a meaningful naming convention, obviously. if you have good naming, you can specify just a few patterns to batch assign materials to many objects at once, all in a single wrangle node. this is also probably the fastest way to assign many materials (in terms of processing). this method also works well with the attributeStringEdit SOP - if you have slightly messy naming of alembic paths, you can fix/unify them first with that SOP and then batch assign materials with vex as shown above.
  7. 2 points
    1) return => function int sum(int x, int y) { return x+y; } takes 2 integers and returns their sum .. so yeah, it gives you back something
    2) yes
    3) a surface node is any SOP. those "cells/windows" are parameters. they can have 1 or more channels (= "cells" per row)
    4) probably Dimension. Don't care, I'm lazy and use 0, 1, 2
    5) centroid() is a function, $CEX etc. are global variables. the difference is that centroid() can grab the centroid of any SOP, whereas $CEX always returns the centroid of the SOP you use it on
    6) yes
  8. 1 point
    You could embed a python node that looks at the date and sets an attribute to 0 or 1, depending on whether that date has passed. Then use that attribute to switch the output from the intended result to a null. A better approach, IMHO, is to publish your code with a Creative Commons license, where it is clear that the code/process is yours and that you are granting them a license. By publishing the code publicly, you can also take that code to a new studio when you move on. If it is a big concern, you had better have the conversation up front, rather than stick the studio with some gotcha expiring code that may put you in violation of agreements you may sign.
  9. 1 point
    substr - returns the characters of s between the start position and the start + length position, i.e. characters 20 through 20+4 here: `substr($HIPNAME, 20, 4)`
  10. 1 point
    I'd post a hip if I were you, but the first thing I can suggest just from your pics is to change the burning rate to .999 or 1. If you have a value less than that, the fuel will remain in space, and give you a trail you may not like.
  11. 1 point
    Did you add a trail sop before sourcing to compute your velocity? Also turning on advect fuel would help.
  12. 1 point
    Hello Howard, you can create a different oncurve attribute per POP curve: allow editing of contents on the popcurve node, and in the geometry VOP inside there is a Bind Export node set to "oncurve". Rename it "oncurve1", for example, and do the same for the other popcurves ("oncurve2", etc.). You will then have a separate attribute per popcurve.
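    Downstream, a POP wrangle could then weight behaviour per curve. A minimal hypothetical sketch - the attribute and channel names here are assumptions:

        // hypothetical pop wrangle: push particles differently per source curve
        if (f@oncurve1 > 0)
            v@force += chv('force_curve1') * f@oncurve1;
        if (f@oncurve2 > 0)
            v@force += chv('force_curve2') * f@oncurve2;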
  13. 1 point
    Here's the file, just load the animation in it... and cook the rest. Have a nice day. Animation.7z GrainFirstEverTest_v001_Publish.hip
  14. 1 point
    Hi, I crop out some of the particles using a group node (by a geo) and a delete node to reduce cache size (that's why some of the particles seem to disappear when they reach the bound). The cache costs 16 GB of space, so let me upload the source setup HIP and you can cook all the nodes. Sorry for the confusion. Cheers
  15. 1 point
    Base your simulation scale off the size of the characters that ship with Houdini. Once the simulation is complete, you can scale it up to the project scale size. I think I would use multiple emitter points, one at each little shelf along the waterfall to get things started. ap_waterfall_021819.hiplc
  16. 1 point
    Thanks for the feedback @acey195 , I'm suspecting I'll have to RFE. Cheers
  17. 1 point
    the attached file should do the trick. hth. petz blerp_1.hiplc
  18. 1 point
    Merge all objects, pack them after connectivity, and lay them out with a for loop which sums up all widths plus the spacing. This detail wrangle code is not sufficient but should give you an idea:

        float space = chf('space');
        float shift = 0.0;
        for (int i = 0; i < npoints(0); i++) {
            // untransformed bounds of the packed prim: [xmin, xmax, ymin, ymax, zmin, zmax]
            float bounds[] = primintrinsic(0, 'packedbounds', i);
            float width = bounds[1] + abs(bounds[0]);
            // move this piece to the running shift along X
            vector pos = set(shift, 0.0, 0.0);
            setpointattrib(0, 'P', i, pos, 'add');
            shift += width + space;
        }

    distribute_objects.hiplc
  19. 1 point
    Virtually any IDE will have support for those, especially HOM. You can just add the path to the python modules and the IDE will do everything else. Here's Juraj's blog post about setting HOM up in Visual Studio Code. VSC also has a VEX extension; I know Sublime has one too. Guillaume's tool is also pretty good for in-DCC coding.
  20. 1 point
    right. no and no. For every parameter (no matter if one or more channels) there is a function defined somewhere in the code that takes the values you enter in the channel(s) as function arguments. Keyframes are a different story. you will memorize the syntax of the most common functions over time. If you're not sure, open the help, or when you type a function name and then the first open bracket, Houdini should pop up help for that function where you can see how it's being used: centroid("/path/to/sop", 0) instead of centroid("/path/to/sop", D_X), and so on. it should all be in the docs: http://www.sidefx.com/docs/houdini/ref/expression_cookbook.html they are referred to as expressions, which makes sense as they are not variables that represent constant values!
  21. 1 point
    Hi, I recently discovered your website cgwiki and I wanted to thank you for taking the time to write out such awesome material, especially about Houdini. It's very valuable and I am learning a lot from it every day. I will surely be happy to support you once I get a job. Keep it up!
  22. 1 point
    hi, you've got a few options here (see the sketch after this list).
    1. store a primitive integer attribute for each page number, then use that in the shader (bind VOP) to switch between different textures (probably the most straightforward solution)
    2. store a string attribute containing the desired texture name (or full path - that would be simpler to implement in the shader) per page, and feed that into the texture path entry on the texture VOP in the shader (again with a bind VOP)
    3. use material stylesheets to override a texture path per page (this would be slight overkill for this kind of thing though)
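    A minimal primitive wrangle sketch of option 2; the page attribute, texture location and naming are all assumptions:

        // hypothetical: build a per-page texture path for the shader to read via a bind VOP
        s@texturepath = sprintf('/path/to/tex/page_%03d.rat', i@page);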
  23. 1 point
    Ray works well. You want your point normals to point to the center in X and Z. SpiralConform.hip
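    A minimal point wrangle sketch for that, assuming the spiral is centered on the Y axis:

        // aim the normals inward at the Y axis, keeping them flat in Y
        v@N = normalize(set(-v@P.x, 0.0, -v@P.z));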
  24. 1 point
    Looks like you want more gravity, stronger advection on the particles, and faster fire with more disturbance, a higher burn rate (and less fuel inefficiency), and larger-scale, stronger turbulence features. It's a hard thing to match something like this with nearly 100 different parameters. I'd try to match the fire first with an at least 4x coarser sim, you know...
  25. 1 point
    Can't you just use one particle fluid surface node and use point attributes on the different meshes to get different settings? Also, what skybar said: you can just output VDBs from the particle fluid surface node and combine them together. The VDB it outputs would be identical to the mesh it would output.
  26. 1 point
    To import a camera through Alembic, a good solution is not to use Build Hierarchy every time, but to read the parameters directly from the archive using the module:

        >>> import _alembic_hom_extensions as abc

    http://www.sidefx.com/docs/houdini/hom/abc_extensions.html From it you can pull out all the transformations and some data on the camera. For export, it is better to have an HDA in which you indicate the path to the camera; all its parameters would be read from there, then the camera would be baked and exported to Alembic.
  27. 1 point
    If you want to be an FX artist you should look only for FX artist positions. Otherwise, you just lose time working in other roles, and it will not make much difference in getting the desired position in the future because you will not gain relevant experience. And relevant experience, usually called "fit", is the number one thing a recruiter (or robot) looks for in your resume.
  28. 1 point
    Vellum + Bullet do not work together yet as far as I know, unless they added support for it in one of the latest builds.
  29. 1 point
    it depends... material names in shape names would obviously be a good thing to have, but from my experience, modelers just don't name things as you'd like them to, and it seems to be a waste of everybody's time to force them to. so I typically just want to have some meaningful names saying what things are, and most importantly, not have objects that are supposed to be made of different materials fused into a single shape (that's obviously an issue that can only be solved by cherry-picking polygons by hand). Then I typically do an attributeStringEdit SOP to add material names to groups of objects. For instance I might want everything called "wood" to have the same material as objects called "planks" and "beams", and also "Beams", "Beam", etc... So I take all of these and add "woodMat" at the end of their path names. And then in the wrangle I just filter "woodMat" and assign the material to all these objects.

    And then the model is likely updated several times, with some new shapes, or some shapes named differently than before, etc. To tackle that, I usually have a bright green emissive material (just something really obvious), and the last line in my vex will say:

        if (s@shop_materialpath == "") s@shop_materialpath = '/mat/green';

    so that any shapes that were not picked up by my pattern matching, for any reason, will render in this stand-out green color and be noticed immediately. so that's generally my usual material assignment workflow for assets made of many objects. this, or stylesheets if objects are packed and I can't unpack them - but mostly I prefer the wrangle workflow as I find it more straightforward and it can typically do all I need.
  30. 1 point
    That's super strange. I don't have any issues there, I can see the cone constraint as usual placed at the origin. Can you try with this hip? iii_2.hip
  31. 1 point
    Hi Ryuji, you can inset a polygon or polyline with these steps:

        // directions to the next and from the previous point along the polygon
        // (+ @numpt keeps the previous index positive for point 0)
        vector next = normalize(point(0, 'P', (@ptnum + 1) % @numpt) - v@P);
        vector prev = normalize(v@P - point(0, 'P', (@ptnum - 1 + @numpt) % @numpt));
        // average tangent, a stable up vector, and the inward direction
        vector avg = normalize(next + prev);
        vector up = normalize(abs(cross(prev, avg)));
        vector in = cross(avg, up);
        // scale by the corner angle so edges stay parallel while insetting
        float dist = dot(next, avg);
        vector dir = in / dist;
        v@P += dir * chf('extrude');

    Kim Goossens discusses this here: https://www.youtube.com/watch?v=tnDqwcNG20Y
  32. 1 point
  33. 1 point
    So in this example, press the reload button and it will populate the number of tabs. This keeps the count from being auto-set every time your values upstream change; you want to avoid the system inadvertently wiping previously set values in the multiparms. This code goes in the callback script. It looks for the name attribute on the geometry of the node iterPython, finds the length of that string list, and sets it on the multiparm:

        node = hou.node('iterPython')
        geo = node.geometry()
        # unique values of the 'name' primitive attribute
        attr = geo.findPrimAttrib('name')
        names = attr.strings()
        # set the multiparm instance count to the number of names
        hou.parm('extrude').set(str(len(names)))

    ForLoopNameMultiparm_02.hip
  34. 1 point
    sup bb I'm not sure how you could do this without a solver, since it's accumulating rotation over time based on the speed and direction of motion each frame. I could be all kinds of wrong, though. Here's my method... I'm using the distance traveled and a vector orthogonal to the direction of travel to build an angle/axis quaternion, and adding (qmultiplying) that quaternion onto the existing orient each timestep. It's not perfect but it seems to work pretty well. Since it's basing the rotation axis on cross(dir, up), it might freak out if the ball rolls straight down. Curious if any smart people have a better answer for that. AutoRoll01_toadstorm.hip
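    For reference, a minimal sketch of that accumulation step as a point wrangle inside a solver SOP; the pprev and radius attributes are assumptions, and the cross-product sign may need flipping depending on your conventions:

        // accumulate roll: arc length / radius gives the rotation angle this step
        vector delta = v@P - v@pprev;
        float dist = length(delta);
        if (dist > 1e-6) {
            vector dir = normalize(delta);
            vector axis = normalize(cross(dir, {0, 1, 0}));  // orthogonal to travel
            float angle = dist / f@radius;
            // build the incremental rotation and multiply it onto the current orient
            vector4 q = quaternion(angle, axis);
            p@orient = qmultiply(q, p@orient);
        }
        v@pprev = v@P;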
  35. 1 point
    Here is a draft of the idea. It seems to work correctly on static meshes (haven't tested much, so there might be some problems). It is painfully slow for animated sequences though: 20 seconds for a standard FBX export turns into 20 minutes or more. Sooo yeah, some work is needed in that area. Also, if some points or prims have the attribute and some don't, those will be left out. Again, this is a draft; if some interest is shown, I might work on it a bit more, and I encourage anyone that feels like it to enhance the tool. The tool currently works with attributes, not groups. fbx_export_v1.1.hip FBX_export_v1_1.hda Please let me know if the tool is useful. EDIT: I updated the tool a bit. See the tool's help for some (I hope) useful info. I found out what's taking so long for animated export - it's the unpacking. Using the Fetch Unpacked Geometry option on the dopimport was actually slower than an unpack under it, and an unpack was slower than using none before the export. But hey, it's not made for animated geometry; I've had no problems using it for static geometry yet. If the point or prim string split attribute is empty on some points or prims, the tool now exports them correctly.
  36. 1 point
    Here is a breakdown of how I made this bear made out of duplo blocks: The video isn't super comprehensive, I mainly cover setting up the constraints in SOPs, though I briefly touch on some other parts too. But the hip file is available to pick apart! Thanks! duplo_bear.zip
  37. 1 point
    Hey all, here's a shot I've been working on! Speed Tree, Houdini and Redshift. Have a good one
  38. 1 point
    Hi there good folks, So I've been asked to make that famous effect of various colored paints layered upon each other, dripping to the ground. The dripping? No biggie. Getting the colors to NOT mix together? Urgh. Or rather, as the flip sim goes, particles spread under / over the other layers, resulting in the end in a spotted mesh with no real foreseeable tricks to get all those to look like sharp separations. Here is a good reference: https://www.youtube.com/watch?v=P6TnO_Rr3G4 And another one would be: https://vimeo.com/225438054 How my work looks so far, you can really notice the spots. Aaaand attaching a scene. So how could I proceed to prevent the colours from mixing together too much? I've thought of getting a particle force to repel/attract based on color, but it will just make weird things happen. I've thought of getting a texture in there, but I don't see how I could make it look right or apply it correctly to the mesh. I've thought of manipulating various fields, adding some kind of divergence, etc, but really I'm not sure what to do. Maybe it's just a post-sim trick, too... Paint_drips.hip
  39. 1 point
    I got a chance to improve the tornado project a few months ago and here is the result, although it's a fire whirl now:
  40. 1 point
    We had a runaway sim on RIPD in 2012 that would eat up to the max 128 gigs of RAM on the server farms in Taiwan at the time. Generally this should be an extreme outlier, like it was. The Shape of Water team in Toronto had a few sims that did this too. Every film generally has one or two shots that, depending on the artist, will use up this much RAM for a sim. Another case is if you are doing some heavy-handed photogrammetry where you toss hundreds of photos at it, as opposed to a proper plate set of a dozen or so high-resolution photos. More practical math for evaluation: for farm nodes we usually do 1 CPU core to 2-4 gigs of RAM for rendering. So if you get one of the new Threadrippers and your machine is part of the farm, it can be used to run dozens of small jobs or a massive sim - i.e. serve as a comp render node or an FX render node. Most studios on average have workstations at 64 gigs of RAM. It's generally cheaper for a studio to give an artist a second box than to overdo the RAM.
  41. 1 point
    No. Linux and Houdini (assuming x86-64) can both handle more memory than any computer available for purchase right now. Memory isn't a problem until you run out of it. If you don't ever run out of memory, then upgrading it won't help anything. If you run out of memory sometimes, then upgrading it might be a good idea. Look at a resource monitor while you work to get an idea of how much memory is being used for various tasks - simulations especially.
  42. 1 point
    calculating the edge vector is not a problem, but you can't store it on edges since Houdini doesn't support edges as regular geometry elements. what you could do instead is use a convertLine SOP and do all the computations on prims, which you can then look up by points from the actual mesh. or you could just write all the edge vectors into a detail array. please take a look at the attached file. hth. petz edge_vec.hipnc
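    A minimal sketch of the prim-based variant, as a primitive wrangle running after a convertLine SOP (where every primitive is a two-point line):

        // after convertLine every prim is one edge: store its vector on the prim
        int pt0 = primpoint(0, @primnum, 0);
        int pt1 = primpoint(0, @primnum, 1);
        v@edgevec = point(0, 'P', pt1) - point(0, 'P', pt0);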
  43. 1 point
    wire_constraints_16.hip Thanks Gorrod. I have just rebuilt this in 16.0.557 and noticed an 8% speed increase, not that it was very slow to begin with.
  44. 1 point
    Forgot to update this thread, came up with a better non solver based setup: http://www.tokeru.com/cgwiki/?title=Houdini#Folding_objects_.28the_transformers_cube_effect.29
  45. 1 point
    https://www.dropbox.com/s/l0p1ay80lctdn45/Cloth_Finite_Field.hipnc?dl=0 Ok, so maybe this could help you.
  46. 1 point
    Ok! First - the most important part of the method. Check this diagram and attached file - they are the core algorithm I came up with.
    1. Let's say we have a simple 2d point cloud. What we want is to add some points between them.
    2. We can just scatter some random points (yellow). The tricky part here is to isolate only the ones that lie between the original point cloud and remove the rest.
    3. Now we will focus on just one of the points and check if it is valid to stay. Let's open a point cloud with a certain radius (green border) and isolate only a tiny part of the original points.
    4. What we want now is to find the center of the isolated point cloud (blue dot) and create a vector from our point to the center (purple vector).
    5. The next step is to go through all points of the point cloud and create a vector from the yellow point to each of them (dark red). Then check the dot product between the [normalized] center vector (purple) and each one of them, and keep only the smallest dot product. Why the smallest - well, that's the trick here. To determine if our point is inside or outside the point cloud we need only the minimum result. If all the points lie to one side of ours - that is, if we are outside the point cloud - the resulting minimum dot will always be above zero, since all the vectors tend toward the center vector. On the border it will be closer to 0, and inside - below. So we are isolating the dot product corresponding to the brightest red vector.
    6. In this case the minimum dot product is above 0, so we should delete our point. Then we go to another point and do the same check.
    That's basically all you need. I know - probably not the most accurate solution, but still a good approximation. Check the attachment for a simpler example (and see the sketch after this post for the core test).
    In the original example this is done using the pointCloudDot function. First, to speed things up, I'm deleting most of the original points and trying to isolate only the boundary ones (as I assume they are closer to gaps), and trying not to use the ones that are very close together (as we don't need more points in dense areas). Then I scatter some random points around them using a simple spherical distribution. Then I try to flatten them and keep them closer to the original sheets - this step is not essential, but it may produce more valid points than just relying on the original distribution. I'm using 2 different methods. The first one (projectToPcPlane) just searches for the closest 3 points and creates a plane from them. Our scattered points are then projected to these closest planes, and in some cases this may produce very thin sheets (when colliding with the ground for example). There is a parameter that controls the projection. The second one is just an approximation to the closest points from the original point cloud. Unfortunately this may produce more overlapping points, so I add a Fuse SOP after this step when using it. The balance between these 2 projections may produce very different distributions, but I like the first one more, so when I did the tests the second one was almost always 0.
    Then there is THE MAIN CHECK! The same thing that I did with the original points I'm doing here again, in 2 steps with a smaller and a bigger radius - to ensure that there won't be any points left outside, or some of them scattered lonely deep inside some hole. I'm also checking for other criteria that I found may give better control.
    There may be some checks left in that I'm not using - I think I forgot a point count check, but instead of removing it I just added +1 to ensure that it won't do anything - I was just trying to see what works and what doesn't. Oh, and there are also some unused vex functions - I made them for fun but eventually didn't use them. So there it is. If you need to know anything else, just ask. Cheers EDIT: just edited some mistakes... EDIT2: file attached pointCloudDotCheck.hiplc
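    A minimal standalone sketch of the core inside/outside test, as a point wrangle over the scattered candidates with the original cloud in the second input; the channel names and threshold are assumptions:

        // open the original point cloud around the candidate point
        int handle = pcopen(1, 'P', v@P, chf('radius'), chi('maxpoints'));
        // centroid of the nearby originals, and the direction toward it
        vector center = pcfilter(handle, 'P');
        vector to_center = normalize(center - v@P);
        // keep the smallest dot product against all neighbor directions
        float mindot = 1.0;
        while (pciterate(handle)) {
            vector p;
            pcimport(handle, 'P', p);
            mindot = min(mindot, dot(to_center, normalize(p - v@P)));
        }
        // if even the most opposing neighbor lies toward the center, we are outside
        if (mindot > 0)
            removepoint(0, @ptnum);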
  47. 1 point
    Hi guys, I wanna share an RBD RnD I did recently. I made this after watching the second one of the Siggraph 2014 talks - Art Directing Rigid Body Dynamics as Post-process, Fangwei Lee / DreamWorks Animation (http://vimeo.com/100947043). Really appreciate Fangwei Lee helping me. https://vimeo.com/106664159 Because I got some messages asking me to share the hip file, here is my scene file: wrap_deform_cleanup_ver.hipnc Hope you guys like it. Cheers!
  48. 1 point
    I've wanted to tackle mushroom caps in pyro sims for a while. Might as well start here...

    Three things contribute greatly to the mushroom caps: coarse sub-steps, the temperature field and the divergence field. All of these together will comb your velocity field pretty much straight out and up. Turning on the velocity visualization trails will show this very clearly. If you see vel combed straight out, you are guaranteed to get mushrooms in that area. If you are visualizing the velocity, it's best to adjust the visualization range by going forward a couple frames and adjusting the max value until you barely see red. That's your approximate max velocity value. An off-the-shelf pyro explosion on a hollow fuel source sphere at frame 6 will be around 16 Houdini units per second, and the max velocity coincides with the leading edge of the divergence field (if you turn it on for display, you'll see that). So divergence is driving the expansion, which in turn pushes the velocity field and forms a pressure front ahead of the explosion because of the Project Non-Divergent step, which assumes the gas is incompressible across the timestep, that is, where divergence is 0.

    I'm going to get the resize field thingy out of the way first, as that is minor to the issue but necessary to understand.

    Resizing Fields
    Yes, if you have a huge explosion with massive velocities driven by a rapidly expanding divergence field, you could have velocities of 40 Houdini units per second or higher! Turning off the Gas Resize will force the entire container to evaluate, which is slow but may be necessary in some rare cases - but I don't buy that. What you can do is, while watching your vel and divergence fields in the viewport, adjust the Padding parameter in the Bounds tab high enough to keep ahead of the velocity front, as that is where you hope for some nice disturbance, turbulence and confinement to stir around the leading edge of the explosion. Or...

    Use several fields to help drive the resizing of the containers. Repeat: use multiple fields to control the resizing of your sim containers. Yep, even though it says "Reference Field" and the docs say "Fluid field...", you can list as many fields in this parameter as you want to help in the resizing, in case you didn't know. Diving in to the Resize Container DOP, there is a SOP Solver that contains the resizing logic. It constructs a temporary field called "ResizeField", importing the fields (by expanded string name from the simulation object, which is why vector fields work) with a ForEach SOP, each field in turn, then does a volume bound with the Volume Bounds SOP on all the fields together using the Field Cutoff parameter. Yes, there is a bit of an overhead in evaluating these fields for resizing, but it is minor compared to having no resizing at all, at least for the first few frames where all the action and sub-stepping needs to happen.

    The default is density, and why not, it's good for slower moving sims. Try using density and vel: "density vel". You need both, as density will ensure that the container at least bounds your sources when they are added. Then vel will very quickly take over the resizing logic, as it expands far more rapidly than any other field in the sim. Then use the Field Cutoff parameter to control the extent of the container. The default here is 0.005. This works for density, as that field is really a glorified mask: either 0 or 1 and not often above 1. Once you bring the velocity field into the mix, you need to adjust the Field Cutoff. Now that you have vel defined alongside density, this Field Cutoff reads as 0.005 Houdini units per second wrt the vel field. Adjust Field Cutoff to suit. Start out at 0.01 and then go up or down. Larger values give you smaller, tighter containers; lower values give you larger padding around the action. It all depends on your sim, scale and velocities present.

    Just beware that if you start juicing the ambient shredding velocity to values above the Field Cutoff threshold with no Control Field (it defaults to temperature with its own threshold parameter, so leave that there), your container will zip to full size, and if you have Max Bounds off, you will promptly fill up your memory and, after a few minutes of swapping death, Houdini will run out of memory and terminate. Just one of the things to keep in mind if you use vel as a resizing field. Not that I've personally done that...

    The Resolution Scale is useful to save on memory for very large simulations, which means you will be adjusting this for large simulations. The Gas Resize Field DOP creates a temporary field called ResizeBounds, and the resolution scale sets this container's resolution compared to the reference fields. Remember from above that this parameter is driving the Volume Bound SOP's Bounding Value. Coarser values lead to blurred edges, but that is usually a good thing here.

    Hope that clears things up with the container resizing thing. Try other fields for sims if they make sense, but remember there is an overhead to process. For pyro explosions, density and vel work ok. For combustion sims like fire, try density and temperature, where buoyancy contributes a lot to the motion.
  49. 1 point
    First, the problem: you are pushing the smoke a huge distance with large velocities. How do you expect the sim to deal with pushing smoke a few meters in a frame? Not very well, of course. You need to supply more substeps. The best way to do this imho is to increase the Max Substeps parameter in the Advanced tab of the Pyro Solver. In my test case with an initial velocity of 100 m/sec, just increasing the max substeps to 2 helped a lot. The solver uses the CFL predictor to decide when to start substepping beyond the Min Substeps parameter. The help is lifted from the RBD help and is not correct here, as it speaks of voxel collision penetration, but substitute collision with density and you get a rough idea of what is happening. The value for the CFL predictor is the voxel distance that the velocity pushes the density across (for example, 100 m/sec with 0.1 m voxels at 1/24 sec per frame pushes density across roughly 40 voxels in a single un-substepped frame). If that voxel distance exceeds the CFL value, the substepping starts increasing. With a max substep value of, say, 5 and a CFL predictor of, say, 10, the substepping will start later but provide more substeps for the parts of the simulation that are moving very fast.
  50. 1 point
    Here is a simple example of positive and negative divergence using the standard smoke solver: divergence.hip have fun