Leaderboard


Popular Content

Showing most liked content since 09/22/2020 in Posts

  1. 10 points
    Here is my take on the schizophyllum commune: Project a distorted grid on a displaced torus. Iterate over remeshing and relaxing the grid. Scatter random points along the outer edges. Find their shortest paths across the mesh. Convert the curves into a signed distance field. Offset the SDF contour based on curve orientation. The gills can be flipped by negating the orientation along the path curves. mushroom.hipnc
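    A minimal sketch of the signed-distance step (a point wrangle on the mesh with the path curves in the second input; the tangentu attribute and the sign convention are assumptions, not the posted file's exact code):

    // distance to the nearest path curve
    int prim; vector uvw;
    float d = xyzdist(1, @P, prim, uvw);
    vector curve_pos = primuv(1, "P", prim, uvw);
    vector tu = primuv(1, "tangentu", prim, uvw);   // e.g. from a PolyFrame SOP on the curves
    vector tangent = normalize(tu);
    // sign by which side of the curve we are on, relative to the surface normal
    float side = sign(dot(cross(tangent, normalize(@P - curve_pos)), v@N));
    f@sdf = d * side;
    // flipping a gill amounts to negating 'side' along that path curve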
  2. 8 points
    Midnight recording on how to do a basic texture bombing shader: https://www.youtube.com/watch?v=sUkyHbSocUE texture_bombing_shader.hipnc
  3. 6 points
    Just sharing files and links for people who want to learn, myself included. Here are snippets (file included) useful for various tricks, plus links to video tutorials: https://vimeo.com/454127040 https://vimeo.com/207724703 https://vimeo.com/305429043 uiHud.hiplc
  4. 6 points
    Here's another plant generator this time growing from crevices / occluded areas. It's essentially blending the volume gradient with some curl noise based on distance. mushroom_grow_out.hipnc
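    A minimal sketch of that blend in a point wrangle (an SDF volume named "surface" in the second input; the volume name and falloff parameter are assumptions, not the posted file's exact setup):

    float d     = volumesample(1, "surface", @P);                // distance to the surface
    vector grad = normalize(volumegradient(1, "surface", @P));   // direction out of crevices
    vector curl = normalize(curlnoise(@P * chf("freq")));
    // hug the gradient near the surface, let curl noise take over further away
    float blend = fit(abs(d), 0, chf("falloff"), 0, 1);
    v@dir = normalize(lerp(grad, curl, blend));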
  5. 5 points
    Hello everyone! Every Monday at 12am for the last 3 weeks I have been uploading VEX snippets as mini-tutorials on my website: https://aaronsmith.tv/1-Minute-VEX Here, through '1 Minute VEX', I'll try to walk you through some of the more obscure and advanced functions that exist, and add as much explanation as possible to accompany the images in text-based form. These are not for Houdini beginners! I also intend all of my website's educational content to be free - permanently. No donations, no subscriptions, no coupons. Below is 1 Minute VEX III as an example. Let me know if anyone has any suggestions for improvement! - 1 Minute VEX III - Ray-Cast Ambient Occlusion
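    Not the tutorial's code, but the gist of ray-cast occlusion in a point wrangle looks roughly like this (sample count, ray length and the crude random directions are assumptions):

    int   samples = chi("samples");
    float maxdist = chf("maxdist");
    int   hits = 0;
    for (int i = 0; i < samples; i++)
    {
        // crude random direction, flipped into the hemisphere around the normal
        vector r   = rand(@ptnum * 417 + i);
        vector dir = normalize(r - 0.5);
        if (dot(dir, v@N) < 0)
            dir = -dir;
        vector hitpos, hituv;
        if (intersect(0, @P + v@N * 0.001, dir * maxdist, hitpos, hituv) >= 0)
            hits++;
    }
    f@occlusion = 1.0 - float(hits) / samples;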
  6. 5 points
    I tried. I couldn't think of a clever way to do it, but I hope it helps. Mushroom.hiplc
  7. 4 points
    Point cloud based / smoothed occlusion texture going the 2D voxel field / SOP import route. This time featuring a butter squab ; ) texture_occlusion_VOL.hipnc
  8. 3 points
  9. 3 points
    Here is the VEX version of the streaking texture procedure. It's pretty flexible now: Starting curves from uncovered areas to any angle, jumping over gaps, gradually changing target direction, measuring curve length for any point and of course texture mapping. Model taken from threedscans.com streaks_VEX_2.hipnc
  10. 3 points
    Currently working on a from-scratch texturing procedure that simulates water running down surfaces. Models shamelessly taken from @animatrix course files. Starting with projected points, the curves run along a volume until they drop onto the next underlying surfaces using nested while loops. The watery effect is achieved in COPs where the texture is drawn based on measuring distance from texture positions to these curves. Alright, enough art, here comes the proof of dripping :
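    The distance measurement itself boils down to something like this (shown here as a point wrangle on the surface with the drip curves in the second input; the real setup draws this per texture position in COPs, and the width parameter is an assumption):

    int prim; vector uvw;
    float d = xyzdist(1, @P, prim, uvw);           // distance to the nearest drip curve
    f@wet   = 1.0 - smooth(0.0, chf("width"), d);  // 1 on the streaks, fading out with distance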
  11. 3 points
    Hello, I recently started Houdini and I realized it can be used as a powerful visualization tool for learning mathematics. I visualized the slope field of the Lotka-Volterra equations, commonly known as the predator-prey model. I also posted a more detailed explanation of my work on my website. Now I'm very interested in learning lighting/post-processing skills, so I'd like to know if there are great tutorials on it. Thank you!
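    For anyone curious, the slope field itself is only a few lines in a point wrangle run over a grid of samples (x = prey, y = predators; the coefficient channels are illustrative):

    float a = chf("alpha");   // prey growth rate
    float b = chf("beta");    // predation rate
    float d = chf("delta");   // predator reproduction rate
    float g = chf("gamma");   // predator death rate

    float x = @P.x;           // prey population at this sample
    float y = @P.y;           // predator population at this sample

    // Lotka-Volterra: dx/dt = ax - bxy,  dy/dt = dxy - gy
    v@slope = normalize(set(a * x - b * x * y, d * x * y - g * y, 0.0));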
  12. 3 points
    Just found the link and the names, sharing them here. LINK
  13. 3 points
    Particles, L-systems and more snippets: CHOPs, growth. UI31OD.hiplc
  14. 3 points
    Distance from Target SOP -> set the Target parameter to the plane. fo.hiplc
  15. 2 points
    Instead of using @tan, you should use v@tan. Houdini only infers the types of the default attributes automatically, so if you use @tan it is bound as one single float rather than a vector. Ah... just a few minutes too late
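    A quick illustration of the difference:

    // writing: the 'v' prefix tells the wrangle that tan is a vector attribute
    v@tan = set(1.0, 2.0, 3.0);
    // reading in a later wrangle: '@tan' alone would be bound as a single float,
    // so keep the 'v' prefix for any non-default vector attribute
    vector t = v@tan;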
  16. 2 points
    Hello once again! Since the last time I posted here I've added three new tutorials, all of which briefly cover textures, mapping and colour in VEX. Thanks to everyone once again for all of the support and useful suggestions! 1 Minute VEX VII - OCIO transformed attribute from image - https://aaronsmith.tv/1-Minute-VEX-VII Using colormap() and ocio_transform() to read an sRGB image and convert it to an ACES-compliant attribute. 1 Minute VEX VIII - sampling nearest texture with UDIM filename expansion - https://aaronsmith.tv/1-Minute-VEX-VIII Using xyzdist(), primuv() and colormap() to sample the nearest UDIM-friendly texture on a surface. 1 Minute VEX IX - triplanar mapping & projection - https://aaronsmith.tv/1-Minute-VEX-IX Using simple vector math, for loops and colormap() to create a triplanar projection. Please feel free to PM me with any questions or suggestions.
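    As a flavour of what the triplanar one covers, a bare-bones projection in a point wrangle can look like this (the texture path and scale are assumptions, not the tutorial's exact code):

    string tex = chs("texture");
    vector p   = @P * chf("scale");

    // blend weights from the normal: the more a face points along an axis,
    // the more that axis' planar projection contributes
    vector w = abs(v@N);
    w /= w.x + w.y + w.z;

    vector cx = colormap(tex, p.z, p.y);   // projection along X
    vector cy = colormap(tex, p.x, p.z);   // projection along Y
    vector cz = colormap(tex, p.x, p.y);   // projection along Z

    v@Cd = cx * w.x + cy * w.y + cz * w.z;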
  17. 2 points
    You can add vertices to any edge by converting a polygon to a polyline and then sorting it using Intrinsic UV. addvertex.hiplc
  18. 2 points
    Hi, here is another version. With VEX you can insert your point into the point array of a primitive and rebuild the primitive with the new point:
    - choose a primitive, a point and an index
    - get the point array of the primitive and insert the new point at the position given by the index
    - rebuild the primitive with the new point array and remove the old primitive
    insert_point.hipnc
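    A minimal sketch of those steps in a detail wrangle (the primitive, point and index come from channels here; the posted file may wire them up differently):

    int prim  = chi("prim");
    int newpt = chi("pt");
    int index = chi("index");

    int pts[] = primpoints(0, prim);
    insert(pts, index, newpt);          // put the new point at the given index

    int newprim = addprim(0, "poly");   // rebuild the primitive from the new point array
    foreach (int pt; pts)
        addvertex(0, newprim, pt);
    removeprim(0, prim, 0);             // remove the old primitive, keep its points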
  19. 2 points
    I assume that a triangle always has the same point order, so if you add a point between point 0 and 1, just offset the point numbers after point 1. Say you have a triangle; this should work:

    int pts[] = primpoints(0, @primnum);
    removeprim(0, @primnum, 0);
    vector pos1 = point(0, "P", pts[0]);
    vector pos2 = point(0, "P", pts[1]);
    vector new_pos = avg(pos1, pos2);
    int new_pt = addpoint(geoself(), new_pos);
    addprim(geoself(), "poly", pts[0], new_pt, pts[1], pts[2]);

    Cheers,
  20. 2 points
    Hello @kiryha, first of all the torus is a NURBS surface, meaning it's treated as one single primitive which is already unwrapped in UV space. This makes it an easy target for the primuv() function, which returns the position (or any other attribute) on a primitive at that primitive's intrinsic UV location. Here is what primuv() does to a grid based on a mesh (left) as opposed to a NURBS surface (right):
    Now when you want to apply geometry (a mesh, points, curves) onto a primitive, it needs to have UV coordinates first. In this case the UVs are based on the grid points' positions relative to the bounding box:
    v@uvw = relbbox(0, v@P);
    Because the grid lies on the XZ plane (for no reason in this case) I had to exchange the Z with the Y coordinate:
    @uvw.y = fit01(@uvw.z, 0.2, 0.5);
    The Z coordinate should then be set to 0 for correctness:
    @uvw.z = 0.0;
    So in other words: if the grid is set up on the XY plane right away, it's sufficient to transfer the position attribute from the NURBS torus like this:
    vector UV_grid = relbbox(0, v@P);
    v@P = primuv(1, "P", 0, UV_grid);
    primuv_example.hipnc
  21. 2 points
    Testing Lee Griggs' tricks with a textured volume inside a glass object (Arnold rendering). This one is rendered with Cycles in Gaffer; next I will try the Hydra version of Cycles inside Houdini.
  22. 2 points
    The `transform` 3x3 matrix intrinsic controls both rotation and scale. If you're assigning rotations based on quaternions, you're going to run into scale issues because quaternions can't contain scale information. The `w` attribute is for angular velocity, not rotation, so it won't help you here. What you can do is use cracktransform() in VEX to extract the scale from a matrix as a vector, then use the `scale()` VEX function on your rotated matrix to scale it back to the original value, either before or after your rotation (depending on whether you want your scaling to happen in world or local space). You could also consider using MOPs Apply Attributes to handle this for you.
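    A minimal sketch of that repair in a primitive wrangle, assuming packed primitives and a p@orient quaternion (the attribute name is an illustration, not something from the original question):

    matrix3 xform = primintrinsic(0, "transform", @primnum);
    // cracktransform() wants a 4x4 matrix; cast the 3x3 up and pull out the scale
    // (trs = 0: SRT order, xyz = 0: XYZ rotation order, c = 2: scales)
    vector s = cracktransform(0, 0, 2, {0, 0, 0}, matrix(xform));

    matrix3 rot = qconvert(p@orient);   // rotation only: quaternions carry no scale
    scale(rot, s);                      // re-apply the original scale
    setprimintrinsic(0, "transform", @primnum, rot);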
  23. 2 points
    Started to get some cool things myself for a project, using my own approaches, though I still feel a bit limited coding my own subdivision rules. Obviously, how could one not mention master Michael Hansmeyer here?
    _______________________________________________________________
    Vincent Thomas (VFX and Art since 1998) Senior Env and Lighting artist & Houdini generalist & Creative Concepts http://fr.linkedin.com/in/vincentthomas (Available soon, feel free to contact for reel and resume)
  24. 2 points
    HA! Never mind... I found the problem myself. In case it helps somebody else: I had my spawn points in the same GEO node as my terrain, which messes up the point attributes. Actually I was aware of that before, so I added another GEO node for the spawn points and used an Object Merge to collect the points from my terrain geo. Nothing showed up though. The problem was that you need to use the relative path in the Object Merge, not "obj/..."
  25. 2 points
    Pre-solve means before the timestep has been solved; post-solve means after the timestep has been solved. In very simple terms you can think of it like this: before the point has moved, and after the point has moved, for the given timestep.
  26. 2 points
    When you have complex ocean spectra layered together using masks with varying speeds, and even animated timescales, retiming them while keeping the same approved look becomes a technical challenge. In this lesson you will see how to achieve this as a procedural post-retime operation, without having to modify any ocean spectra individually.
  27. 2 points
  28. 2 points
    Hello everyone! I'd like to thank you all for the interest the last time around, so I thought I'd celebrate by releasing the next three minutes of VEX: exploring all things camera. If you missed it the first time, every Monday at 12pm for the last month-ish I have been posting VEX mini-tutorial snippets on my website, the most recent of which can be found below. Enjoy! 1 Minute VEX IV - https://aaronsmith.tv/1-Minute-VEX-IV-V An introduction to NDC space, and using to/fromNDC() to scale objects without changing position relative to the camera. 1 Minute VEX V - https://aaronsmith.tv/1-Minute-VEX-IV-V#1mv_v Using toNDC() and removepoint() to delete points that are not displayed on camera. 1 Minute VEX VI - https://aaronsmith.tv/1-Minute-VEX-VI Using intersect() and optransform() to delete points hidden by camera-occluding geometry.
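    For a taste of the frustum-culling one, the core idea fits in a few lines of a point wrangle (the camera path here is an assumption, not the tutorial's exact code):

    vector ndc = toNDC("/obj/cam1", @P);
    // x and y run 0-1 across the frame; z is negative in front of the camera
    if (ndc.x < 0 || ndc.x > 1 || ndc.y < 0 || ndc.y > 1 || ndc.z > 0)
        removepoint(0, @ptnum);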
  29. 2 points
    Here are the single passes for now:
  30. 2 points
    I wrote a custom render engine in COPs today. While 'engine' is probably a bit far-fetched, it's a little ray tracer experimentally supporting:
    - meshes with UV coordinates
    - shading on diffuse textures
    - multiple point lights (including color, intensity, size)
    - area shadows and light attenuation
    - ambient occlusion
    - specular highlights
    - reflections with varying roughness
    The snippet basically transforms the pixel canvas to the camera position and shoots rays around using VEX functions like intersect() and primuv(). The rendering process only takes a few seconds. I still have to figure out the licensing fees, though. COP_render.hipnc
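    Not the posted file's code, but the core of such a tracer boils down to something like this (a wrangle run over "pixel" points with the scene geometry in a second input; the camera position, texture path and ray length are assumptions):

    vector cam = chv("campos");               // camera position
    vector dir = normalize(@P - cam);         // one primary ray per pixel point
    vector hitpos, hituv;
    int hitprim = intersect(1, cam, dir * 1e6, hitpos, hituv);
    v@Cd = 0;
    if (hitprim >= 0)
    {
        // look up the UVs at the hit and shade with the diffuse texture
        vector uv = primuv(1, "uv", hitprim, hituv);
        v@Cd = colormap(chs("texture"), uv.x, uv.y);
    }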
  31. 2 points
    Inside the Resize Container node there is a Reference field, which defaults to density. However, the bounding box of density is often smaller than the heat or temperature around it. You can extend the Reference field by typing temperature after density; then the bounding boxes of both volume types are evaluated for the resize operation. I often increase the padding from the 0.2 default to 0.5 as well. Worst case, you can disconnect the resize container and simply animate the Size fields of the smoke object itself by hand.
  32. 2 points
    Recreated all three subdivision types in VEX. It's now fast and takes some input parameters. I didn't achieve any worthy results animating this stuff, however. subdivide_triangle.hipnc
  33. 2 points
    // Point wrangle.
    #define PI 3.1415926535897932384

    float u = fit01(@u, -PI, PI) * ch("revolutions");
    vector pos = set(sin(u), cos(u), 0) * ch("radius");
    matrix3 t = set(v@tangentu, v@N, v@tangentv);
    @P += pos * t;

    Tangents and normal were computed by a PolyFrame SOP, and @u is the point's 0..1 position along the curve. spiralize.hipnc
  34. 1 point
    @Librarian HIP attached, the swirly lines are from one of Entagma's tutorials: a SOP Solver with curl noise. Play the timeline from the resample to get a nice frame, then you can click down to the nearpoint() wrangle, which is quite heavy. @animatrix Yep, I just had a lockup when Houdini went over 64 GB with a load more points... Would you have any pointers on how to avoid the ad hoc grouping? I had a thought of using a For-Each Piece SOP, then doing the nearpoint() against a second input which has the current piece removed, so there is no need for any grouping. trailSwirl_v07_clean.hiplc
  35. 1 point
    Are you already starting to design your own coffin? Stereographic projection inversion and the Dirichlet problem, hmm...? I haven't met these guys before, but I like your poetry
  36. 1 point
    @Librarian Thank you for sharing! The resources you shared are really helpful to me!
  37. 1 point
    Sure. I don't have the latest version here, but here is the hip from a few versions ago. It doesn't work with multiple points as a source; that needs some adjustments. vine_gen_test_06.hip
  38. 1 point
    Hey @Librarian thanks for your input! That's a really neat approach! I was able to produce a "fix" for my problem. Since I have the base primitives that are going to be packed (non-rotated) with an id, and the packed primitives (rotated but without changed normal / orientation), I used an Extract Transform SOP to get the orientation change. This seems to work. Maybe not the most elegant solution, but that is what I got after banging my head against this problem for a day. Asking Pluralsight (or the teacher) wasn't really an option, as I changed from their approach (building clusters where only the buildings fitting completely onto the city block get cut out, which resulted in not really packed blocks for me) to using the UV Layout SOP to get a nice packing before the simulation they use later on. Cheers and thanks a lot! Daniel
  39. 1 point
    Hi @Masoud, hip attached, ask away. Hope it helps! PathDeformFUN.hiplc
  40. 1 point
    @csp Indeed. I've made it even more robust. But this may not be perfect yet. addvertex_v2.hiplc
  41. 1 point
    It seems to work exactly that way; what exactly is not working for you? Attached is a file with the same code. Specific_pscale_fix.hip
  42. 1 point
    Amin, you can also dive inside and see what exactly happens before pre-solve, after post-solve, and between them. I have attached a picture. For example, you can see when the @pprevious attribute gets written: it is in between the Pre-Solve and Post-Solve inputs (in the node named "stash_current_pos"). So, if a new particle is sourced in the current timestep (usually in the post-solve input), its "has_pprevious" attribute is 0.
  43. 1 point
    Found something that might help you in your process, who knows: VEX argsort(). sort.hiplc
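    For reference, argsort() returns the ordering of an array rather than the sorted values, which is handy for sorting one thing by another. A tiny sketch in a detail wrangle (the "rank" attribute is just an illustration):

    float heights[];
    for (int pt = 0; pt < npoints(0); pt++)
    {
        vector p = point(0, "P", pt);
        append(heights, p.y);
    }
    int order[] = argsort(heights);            // point numbers, lowest to highest
    foreach (int rank; int pt; order)
        setpointattrib(0, "rank", pt, rank, "set");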
  44. 1 point
    It seems to work in all 18.0.x versions before 18.0.499. Up until then, somehow, the Transform Pieces node could apply non-named template attributes from a single point to all four wheel points. From 18.0.499 the points have to be matched by a name attribute for it to work. Possibly SideFX have fixed a long-standing bug, or broken a normal piece of functionality. No idea which :-) car_rig_bullet_julian_18.0.499.hip
  45. 1 point
    In case anyone else needs to do this, I made a free HDA for it. You can add noise, rotate, scale and transform randomly, and it is all done in /mat. Make sure to lay out UVs before using it. The HDA is included here, or you can get it from the link in the video on YouTube. example.hiplc uv_randomize_b.hdalc
  46. 1 point
    Thanks a lot for your answers, I will dig into the two options you propose tonight. Konstantin, your solution would work for pure procedural shading, or with triplanar maps, but you could not generate UVs out of it, right? Meanwhile I was playing with a super simple polar UV projection and got nice results; here is a test render, but it will not do the trick for more detailed image-based displacement.
  47. 1 point
    It is now possible to get surface distance from the soft selection with the new function in H16.5! geodesic_distance_softsel.hiplc
  48. 1 point
    I usually do it like this: vdb_maskvel_dv.hip
  49. 1 point
    Hi odForce! Haven't posted on here for years! I had to solve this exact problem on one of the Spider-Man movies where we needed a way for Spidey's webs to hit surfaces without getting twisted. We would project points representing the hit position of each thread in the web's terminating 'Eiffel tower' shape. You get a cloud of points scattered on the surface that isn't necessarily planar. The task is to fit a polygon to the points without getting twists. So I solved this by automatically calculating the best fit plane through the points and the centroid to give me an axis. Then I sorted the points by polar coordinates around this axis. If you then join the points you get an untwisted polygon. Pretty sure it worked every time.
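    The sorting trick, roughly, in a detail wrangle (here the plane axis comes from a channel rather than a computed best fit; treat it as a sketch of the idea, not the production code):

    // centroid of the projected hit points
    int npts = npoints(0);
    vector centroid = 0;
    for (int pt = 0; pt < npts; pt++)
    {
        vector p = point(0, "P", pt);
        centroid += p;
    }
    centroid /= npts;

    // build a 2D frame on the fitted plane
    vector axis = normalize(chv("axis"));
    vector up = {0, 1, 0};
    if (abs(dot(axis, up)) > 0.99)
        up = {1, 0, 0};
    vector xdir = normalize(cross(axis, up));
    vector ydir = cross(axis, xdir);

    // polar angle of every point around the axis
    float angles[];
    for (int pt = 0; pt < npts; pt++)
    {
        vector p = point(0, "P", pt);
        vector rel = p - centroid;
        append(angles, atan2(dot(rel, ydir), dot(rel, xdir)));
    }

    // connect the points in angular order: an untwisted polygon
    int order[] = argsort(angles);
    int prim = addprim(0, "poly");
    foreach (int pt; order)
        addvertex(0, prim, pt);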
  50. 1 point
    The best way to learn Houdini is by playing around. When you open up a VOP SOP you have to have an idea of what you're trying to achieve. If you want to randomize points, maybe you can find a node for that by searching for "random" in the tab menu. Then you can try what happens when you add point position and a random vector and put the result into the position output, or maybe the color output. Perhaps there are even other ways to randomize points. Noises are a very popular way to randomize points; try some noises out and play with them. For a beginner they might seem confusing, but really most DCC apps use them in one way or another. As for adding color based on point position, that's really just throwing the position vector into color. But because position can go from -infinity to +infinity and color is between 0 and 1, the points will most likely be very brightly colored. To be honest, learning how to use VOP SOPs is just playing around and putting down nodes. Open up the help card (press F1) for nodes you do not understand. really_basic_vopsop_example.zip