Everything posted by johner

  1. Vellum Constraint Attributes, what do they all do?

    You shouldn't need to create any of the constraint primitive attributes outside of the ones created by a VellumConstraints SOP, which are all either internal to the solver, or output attributes (like stress). The VellumSolver detects changes to the constraint geometry topology (i.e. when new constraints are added), and performs a series of operations to update the constraints, including creating those attributes. There's an example of creating custom constraint primitives here, which shows the attributes you generally would need to create: https://vimeo.com/340544320#t=2h8m30s
  2. Yes, you want the Vellum Rest Blend DOP, which you can point at an external SOP that animates your rest geometry. It will update the rest state of a specific set of constraints based on the input Rest geometry. With 17.5 there is a help example for Vellum Rest Blend SOP that shows some uses of it. Attached is an example of lengthening hair curves over time. Note the rest topology has to match the original geometry, i.e. you can't add points, but you can transform them in various ways. grow_hair.hip
  3. Yes, Houdini 16 ships with a built-in Intel OpenCL CPU driver on Linux and Windows, and should fall back to it on any machine where a GPU is not present (e.g. a render farm machine). We have seen a few cases where library conflicts prevent it from loading, but you should see "Unable to load HFS OpenCL driver" if that's the case. Usually you can do:

      $ hgpuinfo -l
      [*HFS OpenCL Platform*]
      Intel(R) OpenCL
      Platform Vendor       Intel(R) Corporation
      Platform Version      OpenCL 1.2 LINUX
      OpenCL Device         Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
      OpenCL Type           CPU
      Device Version        OpenCL 1.2 (Build 57)
      Frequency             2670 MHz
      Compute Units         24
      Device Address Bits   64
      Global Memory         24102 MB
      Max Allocation        6025 MB
      Global Cache          256 KB
      Max Constant Args     480
      Max Constant Size     128 KB
      Local Mem Size        32 KB
      2D Image Support      16384x16384
      3D Image Support      2048x2048x2048
      .....

    That HFS driver should always be first, so that if you do HOUDINI_OCL_DEVICETYPE=CPU it will be the device chosen. Also:

      $ hconfig -h HOUDINI_USE_HFS_OCL
      HOUDINI_USE_HFS_OCL
          Set to its default value of 1, this variable tells Houdini to load the
          built-in CPU OpenCL driver that is shipped in $HFS (64-bit Windows and
          Linux only). This built-in CPU device can be selected using the regular
          OpenCL device specifications, e.g. HOUDINI_OCL_DEVICETYPE=CPU. Houdini
          will also fall back to using this driver if the usual OpenCL device
          selection process fails, making it safer to submit OpenCL jobs to a
          renderfarm that has no GPUs. Set this variable to 2 to disable this
          fallback mechanism, or 0 to disable the built-in device completely.
          On OSX this variable has no effect.
  4. Volume FFT Usage?

    IMO the most impressive use of Volume FFT is here: If you download the code you'll find a bunch of .hip files that do smoke simulation via FFT as per the paper.
  5. Volume Convolution: VEX & C++ vs OpenCL

    Hi Yunus, just a minor point: for your VolumeWrangle approach you can just do:

      sum += volumeindex(0, "density", set(@ix-1, @iy, @iz));

    which will skip the position calc and linear interpolation and be more similar to the OpenCL code (though still way slower!)
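    To illustrate the difference in plain Python/numpy (not Houdini code): a volumeindex-style lookup is a single array read at integer voxel coordinates, while a volumesample-style lookup has to locate the voxel containing the position and blend the eight surrounding values:

```python
import numpy as np

# A tiny dense "volume": a linear ramp, so trilinear interpolation is exact.
density = np.arange(4 * 4 * 4, dtype=float).reshape(4, 4, 4)

def sample_by_index(vol, ix, iy, iz):
    """volumeindex-style lookup: one direct array read, no interpolation."""
    return vol[ix, iy, iz]

def sample_by_position(vol, pos):
    """volumesample-style lookup: trilinear blend of 8 voxel values
    (voxel size 1, values at integer coordinates, for simplicity)."""
    i = np.floor(pos).astype(int)
    f = pos - i
    v = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((1 - f[0]) if dx == 0 else f[0]) * \
                    ((1 - f[1]) if dy == 0 else f[1]) * \
                    ((1 - f[2]) if dz == 0 else f[2])
                v += w * vol[i[0] + dx, i[1] + dy, i[2] + dz]
    return v
```

    The interpolated version does eight reads and seven multiplies per sample, which is why skipping it helps when you know you want exact voxel values.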
  6. FLIP Mesh Artifacts - Need some tips

    Try decreasing the Erosion Scale on the ParticleFluidSurface node:
  7. There's an existing RFE to add this functionality to ParticleFluidSurface::2.0. Skybar's suggestion is good, but if you want extra control you can create the extra "stretched" particles yourself before sending them into the asset. More info: The following VEX code is very close to what the original speed stretching does. You can paste this into an AttribWrangle, then press the little button to the right of the text box to auto-create the spare parameters. I found the following values worked fairly well:

      Stretch Scale: 1/96 (or 1/48)
      Max Stretch: 1
      Min Size: 0.4
      Particle Density: 2

      float stretchscale = chf("stretch_scale");
      float maxstretch = chf("max_stretch");
      float minsize = chf("min_size");
      float particledensity = chf("particle_density");

      vector stretchvec = -v@v * stretchscale;
      float len = length(stretchvec);
      if (len > maxstretch)
          stretchvec *= (maxstretch / len);

      float radius = @pscale;
      vector origpos = @P;
      int nparticles = int(particledensity * len / radius) - 1;
      if (!nparticles)
          return;

      for (int i = 1; i <= nparticles; i++)
      {
          float duplicatepos = float(i) / float(nparticles);
          float duplicatepscale = lerp(radius, radius * minsize, duplicatepos);
          int newpt = addpoint(geoself(), @ptnum);
          setpointattrib(geoself(), "P", newpt, origpos + duplicatepos * stretchvec);
          setpointattrib(geoself(), "pscale", newpt, duplicatepscale);
      }

    If you're using the Spherical surfacing Method, you'll need to unlock the ParticleFluidSurface asset and change the Minimum Radius in Voxels on the vdbfromparticles1 node to something smaller like 0.25 to handle the smaller radius sizes for the stretched particles. For Average Position this is not required.
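    As a cross-check of the logic, here is the same duplication scheme in Python (an illustrative translation of the VEX above, not part of the asset): given one particle, it returns the trailing duplicates laid out along the negated velocity, with radii shrinking toward min_size.

```python
import math

def stretch_particles(p, v, pscale,
                      stretch_scale=1.0 / 96, max_stretch=1.0,
                      min_size=0.4, particle_density=2.0):
    """Return a list of (position, pscale) duplicates trailing one particle
    along -v, mirroring the VEX speed-stretch wrangle."""
    stretch = [-c * stretch_scale for c in v]
    length = math.sqrt(sum(c * c for c in stretch))
    if length > max_stretch:
        stretch = [c * max_stretch / length for c in stretch]
        length = max_stretch
    n = int(particle_density * length / pscale) - 1
    dups = []
    for i in range(1, n + 1):
        t = i / n
        dups.append((
            [pc + t * sc for pc, sc in zip(p, stretch)],
            pscale + t * (pscale * min_size - pscale),  # lerp(r, r*min_size, t)
        ))
    return dups
```

    Note how the duplicate count scales with speed and inversely with radius, which is why slow or large particles get no duplicates at all.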
  8. Velocity Flickering in Viscous Fluid

    Sadly it's almost certainly due to your GasSurfaceTension node. I only tried the .zip file above, but with surface tension disabled it's completely stable. The GasSurfaceTension node adds an explicit force on the fluid and is fairly unstable at high Surface Tension values and low substeps. The workarounds are generally to use a lower Surface Tension setting (0.1 was still stable in your test file) or use more FLIP Min Substeps if you really need the high Surface Tension.
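    The substep sensitivity of explicit forces can be seen in a toy example: integrating a stiff spring force explicitly is only stable below a stiffness-dependent timestep, so increasing substeps can turn a blow-up into a stable sim. This Python sketch is my own illustration of that general behavior, not Houdini's surface tension model:

```python
def simulate(stiffness, substeps, frames=100, fps=24.0):
    """Symplectic Euler integration of a unit mass on a stiff spring,
    x'' = -stiffness * x, starting at x = 1. Returns |x| after the run:
    stable settings stay near the initial amplitude of 1, unstable
    ones grow exponentially."""
    dt = 1.0 / (fps * substeps)
    x, v = 1.0, 0.0
    for _ in range(frames * substeps):
        v += -stiffness * x * dt   # explicit force evaluation
        x += v * dt
    return abs(x)
```

    For example, with a stiffness of 20000 this diverges at 2 substeps per 1/24 s frame but stays bounded at 8 substeps — the same qualitative trade-off as Surface Tension vs. FLIP Min Substeps.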
  9. FLIP - Awaken by Geometry

    Are you using the Scatter option on the Particle Fluid Tank SOP? Those extra surface particles can help create a smoother surface to start. Another option is to use APIC (Swirly Kernel) which cuts down on surface noise. Or use APIC and run 20-30 frames until the fluid settles down, write those particles to disk, then use them as the initial state for a regular (possibly Splashy Kernel) FLIP sim.
  10. Ocean Evaluate Mask

    The usual way is to handle the displacement yourself using the "restdisplace" output of OceanEvaluate. So don't feed your geometry into the OceanEvaluate; only leave the spectrum connected to the second input. Then on the Volumes tab enable Rest Displacement. That will output three 2D volumes containing the displacement vectors. Finally use an AttribWrangle to do the displacement: plug your geometry to be deformed into the first input and the OceanEvaluate into the second. The displacement is a one-liner:

      @P += volumesamplev(1, "restdisplace", @P);

    In that code you can scale the amount of displacement however you like.
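    Conceptually, that wrangle is just a gridded vector-field lookup plus an add. A plain-Python sketch of the pattern (nearest-cell sampling for brevity — volumesamplev interpolates — and the grid layout here is my own assumption, not OceanEvaluate's internals):

```python
def displace_points(points, disp_grid, origin, voxel_size):
    """For each point, look up the displacement vector stored in a regular
    2D grid over the XZ plane (nearest cell, clamped at the borders) and
    add it to the point position. disp_grid[ix][iz] is a 3-vector."""
    out = []
    nx, nz = len(disp_grid), len(disp_grid[0])
    for p in points:
        ix = min(max(int(round((p[0] - origin[0]) / voxel_size)), 0), nx - 1)
        iz = min(max(int(round((p[2] - origin[1]) / voxel_size)), 0), nz - 1)
        d = disp_grid[ix][iz]
        out.append([p[0] + d[0], p[1] + d[1], p[2] + d[2]])
    return out
```

    Scaling the displacement is then just multiplying d by a factor before the add, which is exactly the control the one-liner gives you.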
  11. vdb from particles error

    Most likely it's the Resample Input option on VDB From ParticleFluidSurface. The issue also goes away if you use a higher Renorm Accuracy on the Dilate node, which maybe should be an option on the ParticleFluidSurface node, or possibly we should just default to it internally. Would you mind submitting a bug? This issue has been fixed recently:

      Wednesday, February 3, 2016
      Houdini 15.0.376: Fix refresh problems when interrupting the cooking of various VDB filtering nodes. This also fixes the refresh problems of the Particle Fluid Surface SOP, which internally used these VDB nodes.
  12. RBD Stop Problem (FLIP)

    Assuming you're using the Bullet RBD solver, you likely need to turn off Enable Sleeping on the RBD Object | Collisions | Bullet Data tab. (Once the object slows down enough to float, it goes to sleep!)
  13. The new surfacing node is an asset named ParticleFluidSurface::2.0, which means it's allowed to break backwards compatibility and only share a name with the previous node, the new one being fully VDB based. Any old files will still pull in ParticleFluidSurface::1.0. But it also implies that the old version of the node is deprecated, since you can't create one from the Tab menu (you have to use the opadd command). This is how node versioning works in general; just in this case we wanted to deprecate the old version and reuse that descriptive name for the new one. I can't really comment on the new functionality besides what you see in Scott's video, but there will be additional info covering it in detail after the release.
  14. Titan X and 64bit OpenCl computation

    You should be in great shape with a Titan X. Recent NVIDIA OpenCL drivers give access to the full 12GB for good size pyro sims, and you don't need fast 64-bit support. As Mark said Houdini mostly uses 32-bit floats internally. You might need to switch the FLIP Viscosity solver to 32-bit if accelerating with OpenCL (a really good idea), but otherwise it's all 32-bit by default.
  15. Houdini 15 Sneak Peek

    And here's the PowerPoint file with videos (minus the new Siggraph demo videos which are replaced with stills since they're embargoed until after gold release): https://s3.amazonaws.com/vfx/OpenVDB_in_Houdini_15_stills.pptx All the 2015 (and 2013) OpenVDB course slides are pretty interesting (and often Houdini-centric): http://www.openvdb.org/documentation/
  16. Grain Sim with Collision Geo that has changing topology

    You might try assigning the point number to an id attribute on each point of your collision geometry before any simulation. Then unlock the Collision Source SOP ("Allow Editing of Contents") and look for the compute_velocity node and turn on Match by Attribute. It's possible that parameter should be promoted to the top-level of Collision Source.
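    The idea behind Match by Attribute can be sketched in a few lines of Python (an illustration of the concept, not the Collision Source internals): velocities are computed between frames by pairing points on their id value instead of their point number, so points that appear or disappear with the changing topology don't corrupt the result.

```python
def velocities_by_id(prev, cur, dt):
    """prev/cur map id -> position tuple for two consecutive frames.
    Returns id -> velocity. Ids missing from the previous frame
    (newly created topology) get zero velocity instead of garbage."""
    vel = {}
    for pid, p in cur.items():
        if pid in prev:
            q = prev[pid]
            vel[pid] = tuple((a - b) / dt for a, b in zip(p, q))
        else:
            vel[pid] = (0.0, 0.0, 0.0)
    return vel
```

    Matching by point number instead would pair unrelated points whenever the point order changes, producing the huge spurious collision velocities that kick grains around.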
  17. Gas Particle Neighbour Update?

    That microsolver is used in the old SPH fluids to maintain a list of neighbor particles within a search radius. These days you're better off just using pcfind and storing the results in array attributes. So using a POP or Geometry Wrangle:

      // next two variables could be parameters
      int maxn = 100;
      float scale = 2.5;

      float searchrad = f@pscale * scale;
      // find nearby particles (maxn + 1 because pcfind also returns this point)
      int n[] = pcfind(0, "P", @P, searchrad, maxn + 1);
      // remove this particle from the list
      removevalue(n, @ptnum);
      i[]@neighbors = n;

    On the Inputs tab set Input 1 to Myself or Myself (no reads of Outputs), the latter being faster. You can then loop through that array to do various neighbor-lookup type things. The POP Grains solver does lots of this if you want to look inside.
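    For anyone who wants to see the logic outside of VEX, here is a brute-force Python analogue of the wrangle above (illustrative only — pcfind uses an accelerated point-cloud lookup, not a linear scan):

```python
import math

def find_neighbors(points, pscale, ptnum, scale=2.5, maxn=100):
    """Return up to maxn point indices within pscale[ptnum] * scale of
    point ptnum, nearest first, excluding the point itself (which a
    real pcfind would return, hence searching for maxn + 1)."""
    px = points[ptnum]
    searchrad = pscale[ptnum] * scale
    hits = []
    for i, p in enumerate(points):
        d = math.dist(p, px)
        if d <= searchrad:
            hits.append((d, i))
    hits.sort()
    return [i for _, i in hits if i != ptnum][:maxn]
```

    The stored-array approach amortizes this lookup: build the neighbor list once per step, then every later wrangle just iterates the array.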
  18. FLIP avoid splash

    How many substeps are you using? Unfortunately really high viscosity is affected by number of substeps, i.e. you can set viscosity to 1,000,000 but you'll only get so much stiffness at 2 substeps.
  19. Try using the Deforming Object shelf tool instead of Static Object. That should create a VDB collision volume and point velocities for the collision object.
  20. Is Nvidia Titan Z a worthwhile card for Houdini?

    I posted in this thread. Briefly we've submitted a bug to Nvidia, just waiting to hear back.
  21. PYRO on GPU OpenCL crash

    Good eye, Marty. If anyone has a chance to try these out successfully, please let us know. I've tried 349.16 under Linux, which also reports being 64-bit, and so far still can't successfully allocate above 4GB in any process, even on a K6000. We've got a bug report in with Nvidia to hopefully figure this out. (I've also got a simple non-Houdini test app I could post that shows the same problem, if anyone has experience compiling and running such things.)
  22. SLI for openCL

    The memory limitations on GPU have definitely persisted longer than we expected. And unfortunately even if you can get a 12GB NVIDIA card, their OpenCL driver is still 32-bit at the moment, so you're still limited to 4GB per process.

    The silver lining here is that there are some production-level sims that can fit in 4GB, and we still get a very nice speedup for Pyro / Smoke using OpenCL on the CPU without the memory limitations (particularly with some of the more accurate advection schemes introduced in H14). And the newer uses of OpenCL in H14 for the grain solver and FLIP solver only accelerate smaller-but-expensive iterative parts of the sim and are less memory hungry. For example, I think production-scale sims are absolutely possible on the GPU with the grain solver.

    If you're in a big studio where almost all sims are done on the farm, the lack of GPUs on most render farms is obviously an issue. The OpenCL CPU driver can help there, but there's a bit of a chicken-and-egg issue in getting more GPUs on the farm. But these days (especially with Indie) a lot of production/commercial-quality work is being done by small studios or individuals; for them, running a big grain sim overnight on a GTX 980 is a really nice option.
  23. Flip fluid weird streaking pattern

    Please submit a bug report if you can easily reproduce this, especially if you're seeing a downgrade in quality from H14 compared to H13. Or just post the file here and I'll take a look (but generally a bug report is better)
  24. I put a little description about the OpenCL additions in H14 in this post, but briefly: Only the constraint solving part of the grain solver is done on the GPU; fortunately it's generally the most expensive part. The other expensive part is finding all the neighbor points for each particle, which is done on the CPU for the moment. Collision detection takes some processing as well and is also purely CPU. So you'll generally see a big speedup from the GPU since constraint solving is really expensive. Also, increasing the Constraint Iterations (often necessary for production-quality results) isn't as expensive as you might think, since all the data has already been transferred to the GPU by the time of the first iteration.

    For best results with OpenCL and the Pyro solver you generally need to disable caching of the Pyro object (turn off Enable Caching on the Pyro object's Creation tab). Most of the Pyro pipeline is OpenCL-optimized these days; the big remaining holdout is turbulence, which will force a transfer of the velocity field back to the CPU for VEX-based turbulence calculations. But on a good card you should still see a big speedup with caching off, at least while your simulation fits in GPU memory. The memory limitation, by the way, is the big argument for an additional GPU, not speed. The display can easily take a gigabyte or more of GPU memory, leaving less for simulation.

    P.S. On a different note, I happened to find this while trolling Vimeo, I think it's yours? The main problem you're running into is that you don't have a mass attribute on your grain particles, so the solver is treating them as if they have a mass of 1. If you enable Compute Mass on the Grain Source and set the density to 100 to match the rigid body, you'll see each particle has a mass around 0.05. So your grain particles are about 20 times too heavy and are causing instabilities.
    (I'm making a bug database entry to get this into the documentation - another user ran into it recently) Also:

    - The RBD Solver is more stable than Bullet for inter-solver coupled interactions like this.
    - You had Rotational Stiffness really low (0.3); if anything you want that really high (e.g. 4), which will dampen spurious rotations from particle collisions.
    - Consider substepping at the DOPNet level as well, since that makes the RBD / grain coupled interactions solve at higher frequency. So for example decrease the grain POP Solver substeps to 2, but increase the DOPNet substeps to 5 (note this will take more DOPNet cache memory).
    - You might need to increase grain Constraint Iterations even higher than the 100 I set here to avoid stretching on the first ball / net interaction.
    - If you really just want sphere / grain interactions, you might even use really big grains instead and make it an entirely grain simulation. See the Variable Radius grains helpcard example.

    I attached a more stable version of your test. stable.net.hiplc
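    The arithmetic behind that ~0.05 mass is just density times sphere volume (assuming Compute Mass treats the particle's pscale as its radius, which is consistent with the numbers quoted above):

```python
import math

def grain_mass(pscale, density):
    """Per-particle mass as density * spherical volume (radius = pscale)."""
    return density * (4.0 / 3.0) * math.pi * pscale ** 3

# With density 100, a mass near 0.05 implies a particle radius of roughly
# 0.05 units -- so a default mass of 1 is about 20x too heavy:
mass = grain_mass(0.05, 100.0)
```

    This is why leaving mass at the default of 1 on small grains makes them behave like lead shot against a correctly-massed rigid body.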
  25. Hi Dan, I'm afraid I can't speak for the OpenGL side of things, but I do know you can set up a "headless" Linux box for OpenCL GPU computing. Partly because I just (re-)tested it, but also because HPC computing is a big part of NVIDIA's GPU strategy, and they make things like the Tesla cards that are designed mainly for server-side computing.