Popular Content

Showing most liked content since 04/07/2021 in Posts

  1. 3 points
    Hand painted looking textures for low-poly models: paint_bake.hipnc
  2. 2 points
    I don't know how to utilize such functionality via copy-paste... But as an option, you can build a Python script for Houdini which (with a Mantra node selected) will run Nuke and create a Read node for the EXR:

        import subprocess
        import hou

        NUKE = "C:/Nuke12.1v2/Nuke12.1.exe"

        mantra = hou.selectedNodes()[0]
        exr_path = mantra.parm('vm_picture').eval()
        command = 'C:/temp/nuke_create.py'

        subprocess.Popen([NUKE, '--nukex', command, exr_path])

    and the content of nuke_create.py would be:

        import sys
        import nuke

        exr_path = sys.argv[1]
        nuke.nodes.Read(name="My_EXR", file=exr_path)
  3. 2 points
    Good art work! You can also make curves follow the mesh topology by choosing directions towards neighbour polygons. topo_flow.hipnc topo_flow_only_curvature.hipnc
  4. 1 point
    I'm not leaving anytime soon. It was 2002 when I found this place; I was a lurker for a few years before making an account and saying a word. I think it was the Apprentice edition combined with 3DBuzz free training + 3D World magazine that got me here. Such an exciting feeling I had, knowing absolutely nothing. Nowadays, I know what I don't know...... which is still a lot! But I use Houdini every day for work while many of my colleagues are still forced to use Maya, and I think this place is partly responsible for that, in creating a truly creative community that embraces the sharing of knowledge, one that has never devolved into petty drama (which often happens with old-school forums etc). Thanks Marc for making this place; I don't think I would have a career using Houdini if I didn't have help from the amazing people here. SESI have their official forums (as they should), but this is THE Houdini community for me and many others. Here's to another 20 years.
  5. 1 point
    Howdy! So.......... just curious, how many Houdini users from 20 years ago (I feel old!!!) are still here today...? I got hit by nostalgia lately... Cheers!
  6. 1 point
    There's a Post FX tab on the render ROP that has tick boxes for baking in certain settings from the RV (like LUTs, Bloom, or CM). The RV shows you what your linear picture will look like when baked to sRGB (the screenshot you posted). It's just a view transform; your picture is still linear, you're just viewing it in sRGB. The color picker can be changed; either sRGB or Linear will be correct, I don't know which though!
  7. 1 point
    Never encountered this myself, but it might be negative values in your volume fields? You could try clamping the fields that you use for shading to 0. In a volume wrangle that would look like:

        f@myvolume = clamp(f@myvolume, 0, 10000);

    I'd try to clamp everything besides the "vel" volume.
  8. 1 point
    You can offset the camera using channels. I tried doing that with an Object Merge long back, but it didn't help; as I remember, I offset the camera animation using channels. There is a thread on the SideFX forum which might help you: https://www.sidefx.com/forum/topic/71882/
  9. 1 point
    @Alain2131 Thank you kindly for your help. I should have clarified that this is a bit of a contrived setup as part of a learning exercise for compile blocks - I specifically want to learn how to reliably get them to work with complicated nested blocks. The generate_line_points wrangle was only created because I wanted to try plugging Block Begin into it, and the Line SOP doesn't have an input. That whole for-each loop block is not required, and can be sorted with something simple like pscale, which will drive the line length as I need it. The Distance Along Geometry SOP just needs an internal named reference replaced with a spare input and it will compile. It's a minor oversight on SideFX's part; I've put in an RFE and they should fix it soon, hopefully. Once again, thanks for your time, this forum is amazing.
  10. 1 point
    This is nice, thanks Konstantin. I quickly did a test combining it with a gradient, interesting... I guess I have to make a custom "triplanar" network to blend each gradient result for each component of my input vector to measure. ________________________________________________________________ Vincent Thomas (VFX and Art since 1998) Senior Env and Lighting artist & Houdini generalist & Creative Concepts http://fr.linkedin.com/in/vincentthomas
  11. 1 point
    Haha, merci! There are far better Houdini teachers here, I'm afraid; I have been a student for 23 years. By the way, a simpler approach would be to use Connect Adjacent Pieces, with "Adjacent Pieces from Points" and a piece attribute. So if you assign a name attribute to your separate pieces, it will do something similar and be simpler to understand. But there is no guarantee the connection is unique per point...
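    A minimal sketch of setting up that name attribute, assuming a Connectivity SOP (writing i@class on primitives) runs just before it; the "piece_%d" naming is only illustrative:

        // primitive wrangle after a Connectivity SOP
        s@name = sprintf("piece_%d", i@class);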
  12. 1 point
    Search for advect particles houdini on youtube, there are tons of tutorials.
  13. 1 point
    Welcome Adam! There are many ways, here's one... ________________________________________________________________ Vincent Thomas (VFX and Art since 1998) Senior Env and Lighting artist & Houdini generalist & Creative Concepts http://fr.linkedin.com/in/vincentthomas connect_points_unique.hipnc
  14. 1 point
    Advect particles with a curl noise velocity field?
  15. 1 point
    However, you can get island IDs using the Connectivity SOP in UV connectivity mode.
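    As a small hedged example of using those island IDs (assuming the Connectivity SOP is set to write a primitive i@class attribute), a primitive wrangle can give every UV island its own random color:

        // primitive wrangle: one random color per UV island
        v@Cd = rand(i@class);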
  16. 1 point
    Unless I'm misunderstanding, it sounds like you'd like to do a type of layered "texture bombing" - if that's the case, the tutorial by @konstantin magnus may be helpful (and this is a topic that has come up repeatedly, so a Google search should give you a good amount of other references if needed).
  17. 1 point
    @yujiyuji Or you can use this setup and slightly adjust some parameters, I think. SrleParticles.hiplc
  18. 1 point
    @younglegend If you play with the settings you can adjust it... I like to have rest and init outside, separate... Maybe you need v, or lower Conservation. RipplesXC.hipnc
  19. 1 point
    @Mark01 If someone has time, could they make this work in a solver for 18.5? I don't understand it clearly.

        #pragma label damp "Damp"
        #pragma label springIdx "Spring Index"
        #pragma label magExp "Magnet Exponent"
        #pragma label magnetScale "Magnet Scale"
        #pragma label maxDist "Max Distance"

        sop SpringMagnet(float damp = 0.9;
                         float springIdx = 0.5;
                         float magExp = 1;
                         float magnetScale = 0.5;
                         float maxDist = 1;)
        {
            vector target = point(1, "P", 0);
            vector rest = point(geoself(), "rest", ptnum);

            // get the spring force based on the rest position
            vector displace = rest - P;
            vector springForce = displace * springIdx;

            // get the magnet force from the target position
            vector magDist = P - target;
            float distance = length(magDist);
            vector magnetForce = set(0,0,0);
            if(distance < maxDist){
                distance = pow(fit(distance, 0, maxDist, 1, 0), magExp);
                magnetForce = normalize(magDist) * distance;
            }

            vector mainAcc = magnetForce * magnetScale + springForce;
            v += mainAcc;
            v = v * damp;
            P += v;
        }
  20. 1 point
    You can do this, for example, in a point wrangle after your Connectivity SOP:

        int class = prim(0, "class", @primnum);   // get class of point's prim
        string grp = sprintf("@class=%d", class); // ad-hoc group string for all prims with that class value
        vector rbb = relbbox(0, grp, @P);         // rel bbox of point's P within bbox of all prims in that group
        v@Cd = chramp('color', rbb.y);            // profit

    how_to_foreach_prim_with_vex_fix.hip
  21. 1 point
    Edge Cusp SOP should be what you need
  22. 1 point
    Drop an AOV node or use the AOVs from the ROP, then choose the AOVs that you want, or use the custom AOV to grab stuff from RSUserColorData. Make sure that the names match in the ROP and the custom AOV.
  23. 1 point
    @Mark01 Open in Houdini 16.5 CHOKOLADA.hipnc
  24. 1 point
    You need to create a smoke sim first, then advect your particles with AdvectByVolume. And there's no need to post the same question twice, thanks.
  25. 1 point
    You have 2 different edges; make 2 groups and use PolyBridge.
  26. 1 point
    The Sweep SOP can do the shape along with remapping the UV coordinates, and Vellum can handle additional deformation. dry_fruit.hiplc
  27. 1 point
    Heyo, had a look and came up with the setup below. There are still some improvements that could be made, like having the shrinking of each constraint happen over the course of a few frames instead of instantly to get a smoother transition, etc.

    One of the key nodes when working with Vellum and wanting to really art-direct your cloth like this is the Vellum Constraint Property node. As long as you have that, and either an initial group or the vexpression tab to fetch attributes or groups from SOPs, you can do A LOT of stuff with Vellum.

    The setup below mainly works by creating an animated group in SOPs to control a rest length multiplier attribute on the Vellum constraints. This is done by using a Solver SOP and merging the current frame's group with the previous frame's group and then adding to it, causing the group to grow over time. Then a rest length multiplier attribute, "rest_mult", is set based on the animated group: if you are in the group, you get a lower rest length multiplier; if not, you just stay normal with a value of 1. Lastly, the vexpression in the Vellum Constraint Property node gets this animated attribute from SOPs and brings it into DOPs to multiply with our rest length scale.

    Hope that can be of help! animated_restlength_mnb.hiplc
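    For reference, here is a minimal sketch of the SOP-side idea described above (not the exact contents of the attached file): a point wrangle inside a SOP Solver grows a group over time and writes the rest_mult attribute. The group name and the channels are assumptions for illustration:

        // point wrangle inside a SOP Solver (sketch)
        // grow a "shrink" group over time, e.g. from the bottom of the cloth upwards
        float t = fit(@Frame, chf("start_frame"), chf("end_frame"), 0, 1);
        if (@P.y < chf("height_threshold") * t)
            setpointgroup(0, "shrink", @ptnum, 1);

        // points in the group get a lower rest length multiplier, everything else stays at 1
        f@rest_mult = inpointgroup(0, "shrink", @ptnum) ? chf("shrink_mult") : 1.0;

    The vexpression on the Vellum Constraint Property node would then fetch rest_mult from SOPs and multiply it into the rest length scale, as the post describes.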
  28. 1 point
    Hello Odforce! I recently re-visited Houdini For the New Artist which is a perfect course for Houdini Beginners. You can check it out for free on Youtube or Vimeo, and if you're interested in downloading extra scene files + resources, then be sure to visit www.cgforge.com. Cheers!
  29. 1 point
    if this with this, And I'm happy with This
  30. 1 point
    Here is a method to carve mesh surfaces with extrusions, gaps, holes, profiles based on their distance from curves or polygons. Projecting curves or polygons on a primitive. Measuring distance with xyzdist(). Distance rings done with polygon cutting. Carving in various profiles with ramps. profiles_2.hipnc
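    A minimal sketch of the distance-measuring step, assuming the surface is in the first input and the curves (or polygons) in the second; the "radius" channel, ramp name and output attribute are illustrative, not taken from the file:

        // point wrangle: distance from each surface point to the nearest spot on the curves
        float dist = xyzdist(1, @P);
        float u = clamp(dist / chf("radius"), 0.0, 1.0);  // normalize by a falloff radius
        f@profile = chramp("profile", u);                 // remap through a profile ramp

    The remapped attribute could then drive, for instance, an extrusion distance or a point displacement along the normal.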
  31. 1 point
    Yes, simple pyro: save your vel field to disk, create a separate DOP net with a POP sim, and add an AdvectByVolume DOP node. There is an example in the docs: https://www.sidefx.com/docs/houdini/examples/nodes/dop/popadvectbyvolumes/AdvectByVolume.html
  32. 1 point
    The basic concept is to use a RaySwitch. Plug the Incandescent into the giColor of the switch. This way you can still use the emission color on the rsMaterial and then another emission color from the rsIncandescent. In this mode, you can really overcrank the rsIncandescent Intensity. Think of the Alpha value as a noise falloff: as Intensity goes up, try turning Alpha down to tame fringe noise.
  33. 1 point
    convert your groups to an attribute first, then it's pretty straightforward to assign colors. attribute_from_groups.hip
  34. 1 point
    You can also just use the UV Transform SOP:
    Transform Order: Trans Scale Rot
    Translate X: 0.5-$CEX
    Translate Y: 0.5-$CEY
    Scale X: 1/$SIZEX
    Scale Y: 1/$SIZEY
    Pivot XY: 0.5
  35. 1 point
    For the target, polyexpand produces a straight skeleton, triangulate2d converts it to convex triangles, and divide (remove shared edges) plus resample (subdivision curves) turn them into a smooth outline curve. As you suggested, I used minpos() to get the directions from the circle points towards the target:

        vector dir = minpos(1, @P) - v@P;

    and intersect_all() would be for shooting rays from the outer circle to the original input shape:

        int ray = intersect_all(2, @P, dir, pos, prim, uvw, tol, -1);

    Hit play to see how it performs on different inputs. dir_to_srf.hiplc
  36. 1 point
    You can fit-range an attribute with setdetailattrib() set to "min" and "max".

    1st point wrangle:

        setdetailattrib(0, 'height_min', @height, "min");
        setdetailattrib(0, 'height_max', @height, "max");

    2nd point wrangle:

        float min = detail(0, "height_min", 0);
        float max = detail(0, "height_max", 0);
        @Cd.g = fit(@height, min, max, 0.0, 1.0);

    fit_range_VEX.hiplc
  37. 1 point
  38. 1 point
    Thanks!! This worked:

        int iteration = detail("op:/obj/geo1/Layered_generations/repeat_begin_attributes/", "iteration");

    Seems like it needs to be the full path for some reason; the relative path still didn't work.
  39. 1 point
    Try:

        int iteration = detail("op:../repeat_begin_attributes", "iteration");

    You need to put op: before your path, full or relative.
  40. 1 point
    The Intersection Analysis SOP can get good results when used with the Intersection Stitch SOP. DissolveInside_sy.hipnc
  41. 1 point
    For multiple RBD packed objects, you need to make sure all pieces have a unique name inside the sim. Say I want to use a fractured box and a fractured sphere in the same sim: I fracture a box with Voronoi and pack with an Assemble node, and the name attribute will be piece0, piece1... etc. I do the same with a sphere, but that also gives me piece0, piece1... etc. in the name attribute. Having duplicate name attribute values in the RBD sim will not work as we expect. You'll have to rename the pieces using the sprintf function like so:

        s@name = sprintf("box_piece%d", @ptnum);

    From there you can make your constraint networks, and give each object's constraints a unique name, e.g. 'sphere_constraints' and 'box_constraints'. You can merge the constraint networks in SOPs for a neater DOP network, but it's not necessary. I have attached a file for you: multiple_rbds_with_constraints.hip
  42. 1 point
    Well guys, thanks a lot for your feedback! There is nothing really serious for the moment, but I was curious to know how we could connect Houdini with a WebGL canvas: trigger a write SOP / read a geo or POP cache directly in a WebGL canvas... stuff like that. Basically, at the core of this, I guess the best bet is to rely on pyrpc and use Python for the backend and JS on the front end. I have heard good things about this one: https://github.com/pallets/flask or maybe node.js could also be an option? https://nodejs.org/api/child_process.html EDIT: Actually Francis, I've just realised that you are the guy who already made it work
  43. 1 point
    I would keep it simple. You can definitely achieve this look starting with the wispy smoke shelf tool. Make sure to increase your display 3D texture resolution to see what you are doing. (In the viewport press d for display options, go to Texture, set the 3D textures parameters.)

    I would start with a simple box and use the wispy smoke tool. The look you are going for is very much a natural advection, i.e. temperature-based motion (as opposed to velocity-based motion, like rockets or fans, etc.). Buoyancy and cooling rate are going to be important to control the amount of lift of the density. Turbulence is going to have large features.

    If you get to something you like but the scale is off for your scene, hey, just transform the results up instead of adjusting the sim over and over. (Sorry, this is pretty obvious, but it does happen that we get caught up in the scale of the environment and insist on working in physically correct scales when it would be so easy to just scale everything up and down.)

    Work with part of the domain first. Either work with a small box and copy more boxes later, or just control the domain size directly. There really is no reason to have iterations longer than 5 minutes, at the most.

    When it comes to resolution, I feel like resolution needs to be earned. There is no point in cranking up the resolution if your shape controls don't provide variation at that level. You will just get more of the same. (Obviously you need to start with a resolution that is meaningful to begin with; needless to say a 10x10 box will never be indicative of anything. But cranking up the resolution to solve the shapes never did me any good... Think like you are mixing colors, work with swatches. Once you have a nice swatch, then sure, increase the resolution and adjust what needs adjusting; pretty soon you will see you are efficiently dialing in the sim at a resolution that you can't even afford when you run the sim for the complete domain.)

    Last thing: I suggest sparse and small sources. The look you're going for is defined by the negative spaces just as much as by the animation of the density itself. When I say use a box, I mean you can start with a box and use noise (in the Fluid Source SOP) to turn its complete volume into small pockets of density for sources. Or, if it is easier to control, scatter a few boxes on a grid and you'll have to adjust the noise less, whatever is more intuitive for you. Think about the size of a cigarette tip in relation to the shapes its smoke creates. Now think what you would need in the real world to source the fog you want. Keep in mind the look you want is basically made of several layers of smoke that you are seeing through. You'll see things progress fast. Good luck.
  44. 1 point
    No need for loops and stuff. Take one VDB and put it in the first input, merge all the others and put it in the second input. On the VDB Combine choose "Flatten All B into A" and choose the appropriate operation, like SDF Union for sdfs.
  45. 1 point
    Gifstorm! First I've used a visualizer SOP to show @v coming out of the Trail SOP: that makes sense so far. To make the next step easier to understand, I've shrunk the face that sits along +Z, and coloured the +Y face green, +X red, +Z blue. So, that done, here's that cube copied onto the points, with the v arrows overlaid too: the copied shapes are following the velocity arrows, but they're a bit poppy and unstable. So why are they following, and why are they unstable?

    The Copy SOP looks for various attributes to control the copied shapes; @v is one of them. If found, it will align the +Z of the shape down the @v vector. Unfortunately what it does if it has only @v is a little undefined; the shapes can spin on the @v axis when they get near certain critical angles, which is what causes the popping and spinning.

    To help the Copy SOP know where it should aim the +Y axis, you can add another attribute, @up. I've added a point wrangle before the trail, with the code @up = {0,1,0}; i.e., along the worldspace Y axis. You can see all the green faces now try and stay facing up as much as they can (note the view axis in the lower left corner), but there's still some popping when the velocity scales to 0, then heads in the other direction. Not much you can do about that really, apart from trying some other values for @up to see if they hide the problem a little better.

    What if we set @up to always point away from the origin? Because the circle is modelled at the origin, we can be lazy and set @up from @P (i.e., draw a line from {0,0,0} to @P for each point; that's a vector that points away from the origin). Yep, all the green faces point away from the center, but there's still popping when @v scales down to 0 when the points change direction. Oh well. Maybe we can venture into silly territory? How about we measure the speed of v, and use it to blend to the @up direction when @v gets close to 0? Better! Still a bit poppy, but an improvement. Here's the scene with that last setup: vel_align_example.hipnc

    To answer the other keywords in your topic title: I mentioned earlier that the Copy SOP looks for attributes, obviously @v and @up as we've used here, but if it finds others, they'll take priority. E.g., @N overrides @v. @N is still just a single vector like @v, so it too doesn't totally describe how to orient the shapes. You could bypass the trail and the wrangle so that there's no @v or @up, set @N to {0,1,0}, and all the shapes will point their blue face towards the top. Without any other guidance, it will point the red side of the shapes down +X. If you give it @N and @up, then it knows where to point the green side, and you get a well-defined orientation.

    While using 2 attributes to define rotation is perfectly valid, there are other options. The one that trumps all others is @orient. It's a single attribute, which is nice, and its party trick is that it defines orientation without ambiguity, using a 4-value vector. The downside is quaternions aren't easy to understand, but you don't really need to understand the maths behind it per se, just understand what it represents. The simplest way is to think of it as @N and @up, but glommed into a single attribute. Another way is to think of it as a 3x3 matrix (which can be used to store rotation and scale), but isolated to just the rotation bits, so it only needs 4 values rather than 9 values. In Houdini, you rarely, if ever, pluck quaternion values out of thin air.
    You normally generate what you need via other means, then at the last minute convert to quaternion. There are lots of different ways to do this; coming up with ever funkier, smugger ways to generate them in 1 or 2 lines of VEX is something I'm still learning from funkier, smugger co-workers. E.g., we could take our fiddled @v, and convert it to a quaternion:

        @orient = dihedral({0,0,1}, @v);

    What that's doing is taking the +Z axis of our shape-to-be-copied, and working out the quaternion to make it align to @v. You could then insert an attrib delete before the copy, remove @N, @v, @up, and now, just with the single @orient, all the shapes rotate as you'd expect. vel_align_example_orient.hipnc
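    As a hedged illustration of that last point (a sketch, not from the original post): if you already have v@v and v@up on your points, one way to build @orient from them is with maketransform(), which builds a rotation frame from a Z axis and a Y axis:

        // point wrangle before a Copy to Points SOP (sketch; assumes v@v and v@up exist)
        vector zaxis = normalize(v@v);                // +Z of the copy follows velocity
        vector yaxis = normalize(v@up);               // +Y aims toward the up vector
        matrix3 frame = maketransform(zaxis, yaxis);  // rotation frame from the two axes
        p@orient = quaternion(frame);                 // convert to a quaternion for the copy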
  46. 1 point
    A friend asked SESI this just yesterday. They said it's missing from the docs (and they will add it). It's done with hou.HDADefinition.save():

        hou.HDADefinition.save("blackbox.hda", some_node, compile_contents=True, black_box=True)
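    A slightly fuller hedged sketch of the same call, since save() is an instance method on the definition; the node path used here is only a placeholder:

        import hou

        # grab the definition from an existing HDA instance and save it compiled + black-boxed
        node = hou.node("/obj/my_hda")  # placeholder path
        definition = node.type().definition()
        definition.save("blackbox.hda", template_node=node,
                        compile_contents=True, black_box=True)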
  47. 1 point
    Sticking behavior is supported in the new POPs; you just need to provide pospath, posprim, posuv and stuck attributes, which is quite straightforward using Scatter and AttribCreate. ts_pop_stick_release.hip
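    A minimal sketch of writing those attributes by hand in a point wrangle instead (the stick distance channel and the pospath value are assumptions):

        // point wrangle on the particles, with the collision geometry in the second input
        int prim;
        vector uv;
        float d = xyzdist(1, @P, prim, uv);  // nearest primitive and parametric uv
        if (d < chf("stick_dist"))
        {
            i@stuck   = 1;                   // flag the particle as stuck
            s@pospath = chs("pospath");      // SOP path to the geometry, e.g. "op:/obj/collider/OUT" (placeholder)
            i@posprim = prim;                // primitive to stick to
            v@posuv   = uv;                  // parametric position on that primitive
        }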
  48. 1 point
    I think the first thing to do when reducing noise in volume renders is to start increasing transparent samples / stochastic transparency.
  49. 1 point
    After a little thought: you can strip the digits out in HScript like this... pretty ugly, but it works:

        `substr(${OS},0,strlen(${OS})-strlen(opdigits(".")))`
  50. 1 point
    A natural place for the .NET platform is HOM, which is an API serving Houdini's internals to aliens (like the current Python interpreter), so C# would behave in Houdini pretty much like Python, with all the pros and cons (like read-only access to the frozen gdp outside a node, a limited number of callbacks, single-threaded UI etc.). Houdini doesn't have any other API that .NET/Mono could be wrapped around, so afaik introducing it to Houdini is more a problem of API than of the platform itself. Wrapping the whole HDK seems unrealistic, if not impossible, in the first place.

    The platform itself, however, is another subject of consideration. Is it really multi-platform, how about its maintenance, does it have any future? I doubt anyone would bet on it if it were a pure Mono project (which is supported by MS anyway) without MS supporting the majority of people on .NET. In other words, my opinion is that Mono is an artificial project and it wouldn't exist if MS didn't want to prove .NET is multi-platform. Knowing things a bit, I presume SESI would have to package up its own Mono compilation and ship it with Houdini. A dependency with a fig leaf, in essence. Additionally, anyone who has tried to pair a .NET application with the right Mono version on Linux knows how great an idea it is to bring an "OS independent" library designed by MS into your pipeline. Surely there would be some pros, as C# plugins wouldn't be so dependent on build versions, plus free and good windowing, threading etc., but this is valid under the assumption that HOM or any other API would allow C# to coexist inside Houdini, which I suppose is not a trivial task. (And in fact it looks like the main advantage of .NET in Max was how it changed Max to make it happen.)

    Finally, Adsk seems to manage business differently for a purpose - suited to its multi-market clients (with games in focus). But having many languages at your disposal doesn't make your life any better by itself. In fact those 5 or 6 different languages in XSI talk to the same API, deriving the same bugs from it and adding their own to the bin, along with a number of inconsistencies in the API's calls and documentation. I definitely don't see SESI going this way, especially as Houdini already has its own Babel tower, but at least the 5 languages supported in Houdini serve different purposes (and some are a legacy thing) - and afaik C#, without changing Houdini itself, would inherit the same limitations as Python.

    I think SESI stays focused on making the tools already provided better. Things like Python everywhere are happening by extending HOM: more Python object types, more serial routines handling heavy data well. The real pita compared to .NET is the Houdini interface, single-threaded and in fact hardly scriptable. My secret dream is to have at least a Qt pane which serves as a canvas for Qt widgets, if not just Qt everywhere - which I believe won't ever happen, for philosophical reasons. In terms of performance, perhaps making Python and VEX more productive is a better option, isn't it? Note that you can already call VEX from Python and read/write serialized attributes, but the thing totally absent is a primitive context for VEX: a Primitive VOP allowing you to change topology and deal with bones, plus VEX extended to support structures, so you have a very productive geometry engine suitable for things where Python is the wrong bet.