

Popular Content

Showing most liked content on 06/25/2016 in all areas

  1. 1 point
    Haha! You're lucky I didn't use a teapot and tubes ;D. But yeah, absolutely agree. Some proper animation and geometry will make a huge difference. Ain't nobody got time for that on an example file!
  2. 1 point
    Here is an example file I found on this forum a while back that shows how to use a greyscale DEM image for terrain. The image is brought into a COP network, and there is an example of drawing a new plateau on top of the original DEM, which is then fed to SOPs for final generation. The red scattering is another feature of this file: you can scatter objects based on the DEM slope value. I also wrote a Python-based DEM reader for Moon data that can be found by clicking the link in my signature. ap_DEM_based_terrain.hipnc
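The slope-based scattering idea above can be sketched in plain Python (no Houdini required): compute a slope value per cell of a greyscale DEM with finite differences, then keep only the cells steep enough to receive scattered objects. The names (`dem`, `slope_threshold`) are illustrative stand-ins, not taken from the .hipnc file.

```python
def slope_map(dem, cell_size=1.0):
    """Approximate slope magnitude per interior cell via central differences."""
    rows, cols = len(dem), len(dem[0])
    slopes = [[0.0] * cols for _ in range(rows)]
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            dzdx = (dem[y][x + 1] - dem[y][x - 1]) / (2.0 * cell_size)
            dzdy = (dem[y + 1][x] - dem[y - 1][x]) / (2.0 * cell_size)
            slopes[y][x] = (dzdx * dzdx + dzdy * dzdy) ** 0.5
    return slopes

def scatter_cells(dem, slope_threshold=0.5):
    """Return (x, y) cells steep enough to scatter on (the 'red' points)."""
    slopes = slope_map(dem)
    return [(x, y)
            for y, row in enumerate(slopes)
            for x, s in enumerate(row)
            if s >= slope_threshold]
```

In the actual file this selection would be driven by a slope attribute feeding a Scatter SOP's density; the sketch only shows the thresholding logic.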
  3. 1 point
    Not sure which particle tutorials will point you in the exact direction for this effect, but if I were to do it I would look at advecting particles through the velocity field of a smoke sim. I've attached a quick, basic setup which should get you going. Lots of improvements can be made to the behaviour of the smoke itself, as well as adding further complexity inside the POP sim. Step 1: Take some animated geo and create a smoke sim from it (I used the billowy smoke shelf tool). Step 2: Create a POP sim from points scattered on the animation. Step 3: Reference the velocity of the smoke sim inside the POP sim to drive the particle motion. I've used this basic technique countless times. It's a great little trick to have in your toolbox. Cheers, WD PS - Feel free to use my award-winning animation ;D velocityAdvectParticles.hip
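Step 3 boils down to: sample the smoke's velocity field at each particle position and integrate. A minimal plain-Python sketch of that advection loop (inside Houdini, a POP Advect by Volumes or a volume-sample VEX wrangle does this for you); `vel_field`, `dt` and `substeps` are illustrative stand-ins.

```python
def advect(points, vel_field, dt=1.0 / 24.0, substeps=4):
    """Forward-Euler advect points through a velocity field callable."""
    h = dt / substeps
    out = []
    for p in points:
        x, y, z = p
        for _ in range(substeps):
            vx, vy, vz = vel_field(x, y, z)
            x, y, z = x + vx * h, y + vy * h, z + vz * h
        out.append((x, y, z))
    return out

# Example field: a constant updraft, like buoyant smoke.
updraft = lambda x, y, z: (0.0, 1.0, 0.0)
```

Substepping matters for the same reason it does in DOPs: one big Euler step through a swirling field overshoots, several small ones track the curl.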
  4. 1 point
    Thanks for your help!! It helped me understand the meaning of the x, y, z outputs. It works now! Finally, I increased the rows and columns of the grid from 10 to 500, so the pattern looks much clearer. Thanks!!
  5. 1 point
    Hello everyone, I just wanted to share my 2 and half years of my FX work done with Houdini. Hope you like it!
  6. 1 point
    Attached is a file with all sorts of curvature computations for VDBs ... hth. petz vdb_curvature.hipnc
  7. 1 point
    Is there a reason you wouldn't use the hand as the particle emitter itself? What sort of effect are you trying to achieve? One quick way would be to blast out just the hand and use that as your particle emitter. Drop down a Trail SOP set to Compute Velocity if you want the particles to inherit some velocity from the animation of the model (POP Source node, Attributes tab, Use Inherited Velocity). Another way, if you wanted something in the palm of her hand for example, would be to use a point expression to match the transformation of a sphere to a point in the palm of her hand. In this case point 2109 seemed like a good candidate. In the translate fields of a Transform SOP you write one expression for each component, X, Y and Z. The tooltip will pop up when you start typing to tell you what it expects, but in the case of your model I wrote the following: point("../normal1/",2109,"P",0) for the X component; for Y just make the last number a 1, and 2 for Z. http://archive.sidefx.com/docs/houdini15.0/expressions/point That won't do anything fancy like matching orientation or transferring normals etc., but it could be good enough for what you need.
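What the point() expression above does can be shown with a tiny plain-Python stand-in: look up one component of a named point attribute, exactly the lookup point("../normal1/", 2109, "P", 0) performs in the Transform SOP's translate field. The dict-based geometry layout and the sample position are illustrative, not a Houdini API.

```python
def point(geo, ptnum, attrib, index):
    """Return component `index` of attribute `attrib` on point `ptnum`."""
    return geo[ptnum][attrib][index]

# Hypothetical palm point 2109 with a position attribute "P":
geo = {2109: {"P": (0.2, 1.5, -0.3)}}

# One lookup per translate field: index 0 for X, 1 for Y, 2 for Z.
translate = tuple(point(geo, 2109, "P", i) for i in range(3))
```

The three translate fields are just three such lookups differing only in the final index, which is why the advice is "make the last number a 1 for Y, and 2 for Z."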
  8. 1 point
    @mestela and @MENOZ Thanks for posting your files! Both really interesting to learn from; I got nowhere close on this one. Will attempt the next one.
  9. 1 point
    Geometry is geometry, whether modelled or displaced; it's the same thing to Mantra when it needs to shade it. Whether you model the geometry to high detail or use displacement maps to capture that detail and apply it at render time, Mantra is rendering geometry. Geometry displaced by Mantra has an overhead in time and memory while the additional geometry is constructed for every frame it renders. For hero objects or those near the camera this is more than acceptable, and Mantra needs no additional babysitting like other render engines. Shading Quality <shadingquality> determines how finely to dice the geometry during displacement geometry generation. That's it. There's also displace space, but that is to handle the classic displaced ocean surface to the horizon.
    You can use Houdini 15's Bake ROP to procedurally generate several texture maps from high-res to low-res geometry, so add that to the list from the earlier post. Only the curvature map is missing from the bake; everything else is there. Modelled high-res geometry carries a persistent overhead on disk and when loading into Mantra. These days, with gigabytes or terabytes of geometry on disk for shots, many are opting for baking displacements into high-res 4k^2 up to 12k^2 or higher maps instead of modelling super-high-res geometry (directly rendering ZBrush sculpts, for example). Houdini 15 has the Bake SOP, so you again choose how you want things set up. Look dev has never had so many pipeline-friendly options in Houdini.
    Use the render property True Displace set to 0 to switch from displacement to bumping for efficiency. When objects are further from the camera, you can still have your shader set displacements on, but add a render property "True Displace" <truedisplace> set to 0 in the Material Style Sheet (H15) on the objects/primitives further away, via a shop_material override, without touching any of the geometry directly. This will automatically disable displacements and use bumping instead. Since style sheets can target groups, intrinsic data and any attributes (detail, primitive, point, vertex), you can set these up on the packed primitives or on the geometry directly and have it happen automatically. You could even create a CVEX shader that takes the position from camera and sets truedisplace based on P.z (distance from camera), and bind that as a material override from SHOPs in the Material Style Sheet (H15). H15 has native support for tangent-space (local/world) and normal-space displacement/bump maps, along with support for UDIMs. You choose. Houdini 15's new shaders support displacements beyond a push along normals now.
    Polygon size in the camera view does matter. Ultimately, to get things as efficient as possible, a rough guide is to have evenly sized polygons from camera to horizon that roughly approximate the size of the buckets set in the Mantra ROP. If you can get Mantra to render your polygons without any refinement, and remove/disable subdivisions and displacements for mid- to background geometry, then you will be rendering as efficiently as possible. This goes for most of the main render engines, actually. Geometry refinement in Mantra is critical: if the faces are too large, Mantra needs to cut them up to a renderable size; if the polygons are too small, Mantra just has to deal with more geometry to sample against, arguably for little effect, especially if much of that detail can be captured in MIP-mapped texture maps. Again, almost all render engines benefit from this sort of optimized workflow. Or crap in, crap out is fine too, if you can live with the memory overhead. Jason is quite knowledgeable on this subject: optimize Mantra for rendering large scenes.
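The distance-based CVEX idea in the post reduces to a simple decision rule: keep true displacement for near objects, fall back to bump mapping past a cutoff. A hedged plain-Python sketch of just that rule (the cutoff value is illustrative; in production this logic would live in a CVEX shader or a Material Style Sheet override, not Python):

```python
def true_displace(p_cam_z, cutoff=50.0):
    """Return 1 (true displacement) for objects within `cutoff` units of
    the camera, else 0 (bump mapping only), mimicking a truedisplace
    override driven by camera-space P.z."""
    return 1 if abs(p_cam_z) < cutoff else 0
```

Binding something like this per object or per packed primitive is what lets mid- and background geometry render with cheap bumping while hero geometry keeps full displacement.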
  10. 1 point
    Hi guys :) Check out my new tool for smoothing geometry. The main problem with the Smooth SOP is that after a number of iterations it almost stops smoothing the geometry. And the speed, of course.
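For context on what's being improved upon, the classic uniform Laplacian relaxation that basic smoothing performs: move each point toward the average of its neighbours, repeatedly. A minimal plain-Python sketch (positions plus per-point neighbour lists); this is the textbook baseline, not the poster's tool.

```python
def laplacian_smooth(points, neighbours, iterations=10, strength=0.5):
    """Uniform Laplacian smoothing: each iteration nudges every point
    toward the centroid of its neighbours by `strength` (0..1)."""
    pts = [list(p) for p in points]
    for _ in range(iterations):
        new = []
        for i, p in enumerate(pts):
            nbrs = neighbours[i]
            if not nbrs:
                new.append(p)  # isolated point: leave it alone
                continue
            avg = [sum(pts[j][k] for j in nbrs) / len(nbrs) for k in range(3)]
            new.append([p[k] + strength * (avg[k] - p[k]) for k in range(3)])
        pts = new
    return pts
```

The well-known weaknesses of this scheme, shrinking the mesh and diminishing returns as iterations pile up, are presumably what the new tool addresses.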