fathom

Members

  • Posts: 447
  • Joined
  • Last visited
  • Days Won: 7

fathom last won the day on February 16 2020

Personal Information

  • Name: miles vignol
  • Location: method - la


fathom's Achievements

Newbie (1/14)

Reputation: 82

  1. the simplest approach is to use a packed disk primitive. create a packed disk prim for each file you wish to read, then stuff the path into it with a prim intrinsic. http://www.sidefx.com/docs/houdini/hom/hou/PackedPrim.html you can unpack later to turn them into geo, or if you don't need to process anything, just leave them as-is for better rendering and display performance. if you need a format unsupported by houdini, then you've got a different problem on your hands. rough sketch below.
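     a minimal detail-wrangle sketch of the idea -- the file paths are made-up placeholders, and "unexpandedfilename" is my assumption for the intrinsic that holds the path on a packed disk prim:

         // detail wrangle: one packed disk prim per file.
         string files[] = { "/path/to/chunk_01.bgeo.sc", "/path/to/chunk_02.bgeo.sc" };
         foreach (string path; files)
         {
             int pt = addpoint(0, {0,0,0});            // packed prims hang off a point
             int prim = addprim(0, "packeddisk", pt);  // empty packed disk primitive
             setprimintrinsic(0, "unexpandedfilename", prim, path);  // point it at the file
         }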
  2. distance to an infinite line is really a 2d distance that's been rotated out of alignment with a cardinal axis. so imagine that rather than an arbitrary line, you were trying to find the point closest to the y axis. it would be the point with the least value of (x*x+z*z). trivial. so now you just need to transform everything such that your infinite line is the y-axis. because we only care about distance, we can do this very easily with a shear transform -- offsetting each of x and z by a proportion of the y value. the proportion for each is based on the slope of your original line. the math would be something like this (drycoding here... don't have houdini available atm):

         // p0 and p1 are the points of your line
         float x = @P.x - p0.x - (@P.y - p0.y) * (p1.x - p0.x) / (p1.y - p0.y);
         float z = @P.z - p0.z - (@P.y - p0.y) * (p1.z - p0.z) / (p1.y - p0.y);
         f@distanceSquared = x*x + z*z;

     if p1.y - p0.y is too small (like abs() less than 0.0001 or something), you should pick a different cardinal axis.
  3. if you start playing with @v in wrangles or vops, you should also consider @TimeInc to help normalize your velocities. @TimeInc is the length of the current step, which is normally 1 frame length (1/$FPS), but will adjust for subframes. if you add @v to @P, multiplying @v by @TimeInc will provide a consistent result no matter what frame rate or sub frame stepping you use.
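     for example, in a point wrangle:

         // frame-rate independent advection: @TimeInc is the current step length,
         // so the point moves the same distance per second no matter the fps or substeps.
         @P += @v * @TimeInc;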
  4. probably just need to put everything into the same object. use packed prims instead of instances (same efficiency) and keep in the same geo. see if that gives a better result.
  5. yeah, i think i'm pretty much the exact same way, cept i use vex for volume sample/gradient as well. vops are also nicer if you're going to create any parms you want to expose. the "ch()" function is a weak alternative, i find. but even then, it's vops for noise and point transforms (i can never remember the to/from order in code) feeding into an inline vop to do the work.
  6. yeah, don't use pcimport unless you're manually looping over your point cloud. from your shader code, it looks like you're really just using the point cloud to isolate areas, so a pcfilter vop instead of the pcimport is what you're looking for. that will do the loop for you and put out a single filtered (averaged) value based on what it finds in the point cloud. in this case, the point position coming out will be an average of those points found in your file. also, you need to transform the incoming search P to whatever space the point cloud is in. shaders operate in camera space. depending on how your point cloud was written, it's probably in world space or more likely in some object's space. you should make sure it's exported in world space (ie, from an object that has no rotate/translate/scale), then transform your shader P to world space (from "current").
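     in shader code, the equivalent would be something like this (the file name, radius, and max point count are placeholders; assumes the cloud was written in world space):

         // shader P is in camera ("current") space, the cloud is in world space.
         vector pw = ptransform("space:current", "space:world", P);
         int handle = pcopen("cloud.pc", "P", pw, 0.5, 50);
         vector avgP = pcfilter(handle, "P");   // filtered average of the found points
         pcclose(handle);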
  7. sim scale has more to do with the value of the forces involved than it does with the amount of data. you can have a microscopic sim that has as much data as a battleship plowing thru the ocean. simming at 1m just means that your units are in meters. gravity is 9.8 m/s², for example. or a particle separation of .2 means each particle is 20cm in radius and your flip voxels will be around the same size. sticking with real units then makes it easy to understand your sim. is 20cm separation enough to capture the detail you need? how many particles does that end up making? how big of an area do you need to sim? how deep? that all drives your resolution, which is different and really independent of the scene scale.
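     as a made-up back-of-envelope example: a 10m x 10m pool, 1m deep, at .2 particle separation is (10/0.2) x (10/0.2) x (1/0.2) = 50 x 50 x 5 = 12,500 voxels, and flip typically seeds around 8 particles per voxel, so roughly 100k particles. halving the separation to .1 multiplies that by 8.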
  8. you can control the scene scale in houdini. default is 1m, but you can change to other scales (preferences -> hip file options). if you do anything where you want to change what 1 unit represents, you should do it there BEFORE you place any dops. when dops are placed, they check the scene scale and adjust their default values based on the scale (a gravity dop will default to 9.8 in meters or 980 in centimeters, for example). now here's the major caveat: it doesn't always work correctly. simple things will adjust, but more complex setups won't always (like shelf rigs). your best bet really is to stay at meter scale and run at 1:1 or you're gonna have to chase down all sorts of random settings.
  9. the point cloud only has a single point. that's why it's acting like a toggle instead of a count. vex code is iterating on each point in the source geo. the pcopen/pciterate is iterating over each point it finds in your point cloud (second input in your sample file) for every iteration of your vex code -- ie, on each source point. your green dots are colored based on each individual dot finding the single point in the point cloud since it's within the search radius for that dot.
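     to actually count what pcopen finds per source point, something like this in a point wrangle works (point cloud wired into the second input; the radius/maxpoints channel names are placeholders):

         int handle = pcopen(1, "P", @P, chf("radius"), chi("maxpoints"));
         int count = 0;
         while (pciterate(handle))
             count++;
         i@found = count;   // with a one-point cloud this will only ever be 0 or 1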
  10. here's my understanding: volume quality relates to the number of samples created along the path of the ray. at ray quality 1, the volume is sampled once for each voxel along the camera ray. at .25 it's sampled roughly once every 4 voxels. since it's along the camera ray, it's not very noticeable really, until you get very low. the result is that you have fewer transparency calculations along your ray since you have fewer intersections with your volume. stochastic sampling is a modified screen-door technique applied to all transparency (not just volumes). the idea with stochastic transparency isn't to reduce the number of ray intersections, the idea is to stop the path tracer early for some rays. so what it does is treat some portion of your shading samples as opaque, even if they're partially transparent. i'm a bit vague on how the actual samples number is utilized other than more gives better results. i tend to not mess too much with volume quality. usually you try the lowest stochastic samples you can get away with. if you're ray tracing shadows in your volume, i would suggest you increase ray samples and turn off ray variance.
  11. my general opinion is to not use cookie. ever.
  12. yeah, i think the alembic import file menu has a lot of people instantly going down the absolutely wrong route for utilizing alembic files in houdini. it's 1000% better to use a camera rig than to import a camera into your file. same with geometry -- the alembic sop is generally more useful than importing. the alembic xform node can pull matrices right from the abc file. wrapping up all these in a better suite of tools would make alembics way more useful than they might appear to people using the import menu. also, they should expose some means to unload an abc file. right now, you can crash houdini by overwriting the abc (or it'll be locked if on windows). you have to quit houdini or jump thru some cache clearing hoops to release it.
  13. be careful. some nodes will have opencl on by default in an "advanced" tab (i'm looking at you gas shred).
  14. yeah, but you need to make sure your sim doesn't change point count. no idea how well this will work, but it's relatively easy to set up and try.
  15. you can use the volume viz sop (or even just a volume mix) to drop the opacity way down (try .1, .01, .001, etc). from there you can either adjust your sources or your sim to generate less density (if that's the problem) or just reduce the density in the shader.
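     if you go the sop route, a volume wrangle bound to your density volume is one quick way to scale it down (the channel name here is a made-up example):

         // volume wrangle: scale density down uniformly.
         @density *= chf("density_scale");   // try .1, .01, .001...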