Welcome to od|forum


fathom

Members
  • Content count: 445
  • Days Won: 6
  • Community Reputation: 72 Excellent

About fathom

  • Rank
    Illusionist

Personal Information

  • Name miles vignol
  • Location method - la
  1. if you start playing with @v in wrangles or vops, you should also consider @TimeInc to help normalize your velocities. @TimeInc is the length of the current step, which is normally 1 frame length (1/$FPS), but will adjust for subframes. if you add @v to @P, multiplying @v by @TimeInc will provide a consistent result no matter what frame rate or sub frame stepping you use.
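    a minimal wrangle sketch of the idea (point wrangle, assuming @v is already set upstream):

    ```vex
    // scale velocity by the current timestep so the result is
    // frame-rate and substep independent
    v@P += v@v * f@TimeInc;   // not v@P += v@v;
    ```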
  2. probably just need to put everything into the same object. use packed prims instead of instances (same efficiency) and keep in the same geo. see if that gives a better result.
  3. yeah, i think i'm pretty much the exact same way, cept i use vex for volume sample/gradient as well. vops are also nicer if you're going to create any parms you want to expose. the "ch()" function is a weak alternative, i find. but even then, it's vops for noise and point transforms (i can never remember the to/from order in code) feeding into an inline vop to do the work.
  4. yeah, don't use pcimport unless you're manually looping over your point cloud. from your shader code, it looks like you're really just using the point cloud to isolate areas so a pcfilter vop instead of the pcimport is what you're looking for. that will do the loop for you and put out a single filtered(averaged) value based on what it finds in the point cloud. in this case, the point position coming out will be an average of those points found in your file. also, you need to transform the incoming search P to whatever space the point cloud is in. shaders operate in camera space. depending on how your point cloud was written, it's probably in world space or more likely in some object's space. you should make sure it's exported in world space (ie, it's in an object that has no rotate/translate/scale). then transform your shader P to world space (from "current").
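    a shader-side sketch of that (file name, radius and max points are placeholders, not from the thread):

    ```vex
    string pcfile = "worldspace_cloud.bgeo";   // hypothetical point cloud file
    // shaders run in camera space, so move P to the cloud's (world) space first
    vector wP = ptransform("space:current", "space:world", P);
    int handle = pcopen(pcfile, "P", wP, 0.5, 20);
    vector avgP = pcfilter(handle, "P");       // filtered/averaged position of found points
    pcclose(handle);
    ```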
  5. sim scale has more to do with the value of the forces involved than it does with the amount of data. you can have a microscopic sim that has as much data as a battleship plowing thru the ocean. simming at 1m just means that your units are in meters. gravity is 9.8 m/s^2, for example. or a particle separation of .2 means your particles are spaced about 20cm apart and your flip voxels will be around the same size. sticking with real units then makes it easy to understand your sim. is 20cm separation enough to capture the detail you need? how many particles does that end up making? how big of an area do you need to sim? how deep? all of that drives your resolution, which is different from and really independent of the scene scale.
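    a rough back-of-envelope for those resolution questions (all numbers assumed, just to show the arithmetic):

    ```vex
    // flip tank resolution estimate
    float size_x = 10, size_z = 10, depth = 2;   // say, 10m x 10m x 2m of water
    float sep = 0.2;                             // 20cm particle separation
    float voxels = (size_x * size_z * depth) / (sep * sep * sep);  // 200 / 0.008 = 25000
    // flip seeds several particles per voxel, so expect a small
    // multiple of that count -- and halving sep multiplies it by 8
    ```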
  6. you can control the scene scale in houdini. default is 1m, but you can change to other scales (preferences->hip file options). if you do anything where you want to change what 1 unit represents, you should do it there BEFORE you place any dops. when dops are placed, they check the scene scale and make adjustments to the default values based on the scale (like a gravity dop will be 9.8 meters or 980 cm). now here's the major caveat: it doesn't always work correctly. simple things will adjust, but more complex setups won't always (like shelf rigs). your best bet really is to stay at meter scale and run at 1:1 or you're gonna have to chase down all sorts of random settings.
  7. the point cloud only has a single point. that's why it's acting like a toggle instead of a count. vex code is iterating on each point in the source geo. the pcopen/pciterate is iterating over each point it finds in your point cloud (second input in your sample file) for every iteration of your vex code -- ie, on each source point. your green dots are colored based on each individual dot finding the single point in the point cloud since it's within the search radius for that dot.
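    the loop in question looks roughly like this (channel names assumed; the cloud is input 2 in the UI, index 1 in code):

    ```vex
    // runs once per source point; pciterate then walks every
    // point the search finds in the cloud for that source point
    int handle = pcopen(1, "P", v@P, chf("radius"), chi("maxpoints"));
    i@found = 0;
    while (pciterate(handle))
        i@found++;             // with a one-point cloud this never exceeds 1
    pcclose(handle);
    ```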
  8. here's my understanding: volume quality relates to the number of samples created along the path of the ray. at ray quality 1, the volume is sampled once for each voxel along the camera ray. at .25 it's sampled roughly once every 4 voxels. since it's along the camera ray, it's not very noticeable really, until you get very low. the result is that you have fewer transparency calculations along your ray since you have fewer intersections with your volume. stochastic sampling is a modified screen-door technique applied to all transparency (not just volumes). the idea with stochastic transparency isn't to reduce the number of ray intersections, the idea is to stop the path tracer early for some rays. so what it does is treat some portion of your shading samples as opaque, even if they're partially transparent. i'm a bit vague on how the actual samples number is utilized other than more gives better results. i tend to not mess too much with volume quality. usually you try the lowest stochastic samples you can get away with. if you're ray tracing shadows in your volume, i would suggest you increase ray samples and turn off ray variance.
  9. my general opinion is to not use cookie. ever.
  10. yeah, i think the alembic import file menu has a lot of people instantly going down the absolutely wrong route for utilizing alembic files in houdini. it's 1000% better to use a camera rig than to import a camera into your file. same with geometry -- the alembic sop is generally more useful than importing. the alembic xform node can pull matrices right from the abc file. wrapping up all these in a better suite of tools would make alembics way more useful than they might appear to people using the import menu. also, they should expose some means to unload an abc file. right now, you can crash houdini by overwriting the abc (or it'll be locked if on windows). you have to quit houdini or jump thru some cache clearing hoops to release it.
  11. be careful. some nodes will have opencl on by default in an "advanced" tab (i'm looking at you gas shred).
  12. yeah, but you need to make sure your sim doesn't change point count. no idea how well this will work, but it's relatively easy to set up and try.
  13. you can use the volume viz sop (or even just a volume mix) to drop the opacity way down (try .1, .01, .001, etc). from there you can either adjust your sources or your sim to generate less density (if that's the problem) or just reduce the density in the shader.
  14. that could produce negative values, no? i would do a volume wrangle: f@density *= 1 - volumesample(1, 0, v@P); convert your box into a houdini volume (NOT A VDB) and plug it into input 2. plug your smoke volume into input 1. (note that volumesample's first argument is the 0-based input index, so input 2 in the UI is index 1 in the code.)
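    spelled out as a volume wrangle (assuming the box is fogged as a 0-1 density volume on input 2):

    ```vex
    // clamp guards against the negative-values problem when the
    // mask density goes above 1
    float mask = clamp(volumesample(1, 0, v@P), 0, 1);
    f@density *= 1 - mask;
    ```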
  15. yeah, pyro shader 3.0 is different than pyro 2.0. you might be able to "opunhide" a pyro v2 shader, tho. in hscript textport type "opunhide" and it'll list everything hidden. you can see if there's a pyro v2 in there... not at a workstation at the moment or i'd look myself. that said, pyro 3.0 is way better. generally speaking, the idea in the pyro shader is to take 2 or 3 volume fields and use them to make your explosion look nice. the density field is pretty much the "smoke" portion of your explosion. the fire is usually a combo of heat and temperature, but i've often found heat by itself suffices. the underlying theory is to have one field drive the color of the flames and another field drive the intensity. like i said, i usually use heat for both. then it's just a matter of finding good multipliers for each. you can use the "physically accurate" color mapping or use the "artistic" color ramp (you'll have to generate the ramp yourself). the trick is that your shader has to be tuned for the sim that's generating it. there are not really any units on density or temperature or heat... they could be 0-1 or 0-100000 depending on how you set things up in your sim. there's also a "fireball" material you can drop down that will fill in some default values... edit: oh and the volume viz sop is really a simplified version of the same idea. it just affects the display, not rendering.