Everything posted by fathom

  1. How to import obj with python?

    the simplest approach is to use a packed disk primitive. create a packed prim for each file you wish to read, then stuff the path into it using a primintrinsic. http://www.sidefx.com/docs/houdini/hom/hou/PackedPrim.html you can unpack later to turn them into geo, or if you don't need to process anything, just leave them as-is for better rendering and display performance. if you need a format unsupported by houdini, then you've got a different problem on your hands.
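    if you'd rather drive it from vex than python, something like this in a detail wrangle should do the same thing -- untested drycode, and it assumes your build's addprim() accepts the "packeddisk" type name along with the "unexpandedfilename" intrinsic (the obj path is just a placeholder):

        int pt = addpoint(0, {0,0,0});               // anchor point for the packed prim
        int prim = addprim(0, "packeddisk", pt);     // empty packed disk primitive
        setprimintrinsic(0, "unexpandedfilename", prim, "/path/to/model.obj");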
  2. Closest point to line

    distance to an infinite line is really a 2d distance that's been rotated out of alignment with a cardinal axis. so imagine that rather than an arbitrary line, you were trying to find the point closest to the y axis. it would be the point with the least value of (x*x+z*z). trivial. so now you just need to transform everything such that your infinite line is the y-axis. because we only care about distance, we can do this very easily with a shear transform -- adding a proportion of the y value to each of x and z. the proportion for each is based on the slope of your original line. the math would be something like this (drycoding here... don't have houdini available atm):

        // p0 and p1 are the points of your line
        float x = P.x - p0.x - (P.y - p0.y)*(p1.x - p0.x)/(p1.y - p0.y);
        float z = P.z - p0.z - (P.y - p0.y)*(p1.z - p0.z)/(p1.y - p0.y);
        f@distanceSquared = x*x + z*z;

    if p1.y-p0.y is too small (like abs() less than 0.0001 or something), you should select a different cardinal axis.
  3. better understanding of velocity

    if you start playing with @v in wrangles or vops, you should also consider @TimeInc to help normalize your velocities. @TimeInc is the length of the current step, which is normally one frame (1/$FPS), but will adjust for subframes. if you add @v to @P, multiplying @v by @TimeInc will give a consistent result no matter what frame rate or substepping you use.
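    as a one-line wrangle sketch of that idea:

        @P += @v * @TimeInc;    // one step of motion, consistent across frame rates and substeps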
  4. Subsurface Scattering on Instances

    probably just need to put everything into the same object. use packed prims instead of instances (same efficiency) and keep them in the same geo. see if that gives a better result.
  5. wrangle vs vops

    yeah, i think i'm pretty much the exact same way, cept i use vex for volume sample/gradient as well. vops are also nicer if you're going to create any parms you want to expose. the "ch()" function is a weak alternative, i find. but even then, it's vops for noise and point transforms (i can never remember the to/from order in code) feeding into an inline vop to do the work.
  6. Using a Point Cloud in a Shader?

    yeah, don't use pcimport unless you're manually looping over your point cloud. from your shader code, it looks like you're really just using the point cloud to isolate areas, so a pcfilter vop instead of the pcimport is what you're looking for. that will do the loop for you and put out a single filtered (averaged) value based on what it finds in the point cloud. in this case, the point position coming out will be an average of those points found in your file. also, you need to transform the incoming search P to whatever space the point cloud is in. shaders operate in camera space. depending on how your point cloud was written, it's probably in world space or more likely in some object's space. you should make sure it's exported in world space (ie, it's in an object that has no rotate/translate/scale). then transform your shader P to world space (from "current").
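    a rough vex version of that flow, in case you want it in code rather than vops -- the file name, search radius, and max point count here are placeholders:

        vector wP = ptransform("space:current", "space:world", P);   // shader P is camera space
        int handle = pcopen("cloud.pc", "P", wP, 0.5, 20);
        vector avgP = pcfilter(handle, "P");                         // filtered average of the found points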
  7. Simulation scale

    sim scale has more to do with the value of the forces involved than it does with the amount of data. you can have a microscopic sim with as much data as a battleship plowing thru the ocean. simming at 1m just means that your units are in meters. gravity is 9.8 m/s², for example. or a particle separation of .2 means your particles are spaced 20cm apart and your flip voxels will be around the same size. sticking with real units then makes it easy to understand your sim. is 20cm separation enough to capture the detail you need? how many particles does that end up making? how big of an area do you need to sim? how deep? that all drives your resolution, which is a different thing and really independent of the scene scale.
  8. Simulation scale

    you can control the scene scale in houdini. default is 1m, but you can change to other scales (preferences->hip file options). if you do anything where you want to change what 1 unit represents, you should do it there BEFORE you place any dops. when dops are placed, they check the scene scale and adjust their default values based on it (like a gravity dop defaulting to 9.8 in meters or 980 in centimeters). now here's the major caveat: it doesn't always work correctly. simple things will adjust, but more complex setups won't always (like shelf rigs). your best bet really is to stay at meter scale and run at 1:1 or you're gonna have to chase down all sorts of random settings.
  9. a quick Point Cloud query...

    the point cloud only has a single point. that's why it's acting like a toggle instead of a count. vex code is iterating on each point in the source geo. the pcopen/pciterate is iterating over each point it finds in your point cloud (second input in your sample file) for every iteration of your vex code -- ie, on each source point. your green dots are colored based on each individual dot finding the single point in the point cloud since it's within the search radius for that dot.
  10. here's my understanding: volume quality relates to the number of samples created along the path of the ray. at ray quality 1, the volume is sampled once for each voxel along the camera ray. at .25 it's sampled roughly once every 4 voxels. since it's along the camera ray, it's not very noticeable really, until you get very low. the result is that you have fewer transparency calculations along your ray since you have fewer intersections with your volume. stochastic sampling is a modified screen-door technique applied to all transparency (not just volumes). the idea with stochastic transparency isn't to reduce the number of ray intersections, the idea is to stop the path tracer early for some rays. so what it does is treat some portion of your shading samples as opaque, even if they're partially transparent. i'm a bit vague on how the actual samples number is utilized, other than that more gives better results. i tend to not mess too much with volume quality. usually you try the lowest stochastic samples you can get away with. if you're ray tracing shadows in your volume, i would suggest you increase ray samples and turn off ray variance.
  11. Cookie/Bool Help

    my general opinion is to not use cookie. ever.
  12. yeah, i think the alembic import file menu has a lot of people instantly going down the absolutely wrong route for utilizing alembic files in houdini. it's 1000% better to use a camera rig than to import a camera into your file. same with geometry -- the alembic sop is generally more useful than importing. the alembic xform node can pull matrices right from the abc file. wrapping up all these in a better suite of tools would make alembics way more useful than they might appear to people using the import menu. also, they should expose some means to unload an abc file. right now, you can crash houdini by overwriting the abc (or it'll be locked if on windows). you have to quit houdini or jump thru some cache clearing hoops to release it.
  13. [Houdini 15] PyroFX explosion render question

    be careful. some nodes will have opencl on by default in an "advanced" tab (i'm looking at you gas shred).
  14. yeah, but you need to make sure your sim doesn't change point count. no idea how well this will work, but it's relatively easy to set up and try.
  15. [Houdini 15] PyroFX explosion render question

    you can use the volume viz sop (or even just a volume mix) to drop the opacity way down (try .1, .01, .001, etc). from there you can either adjust your sources or your sim to generate less density (if that's the problem) or just reduce the density in the shader.
  16. Delete part of Volume Cache

    that could produce negative values, no? i would do a volume wrangle:

        f@density *= 1 - volumesample(1, 0, v@P);

    convert your box into a houdini volume (NOT A VDB) and plug it into input 2. plug your smoke volume into input 1.
  17. [Houdini 15] PyroFX explosion render question

    yeah, pyro shader 3.0 is different than pyro 2.0. you might be able to "opunhide" a pyro v2 shader, tho. in hscript textport type "opunhide" and it'll list everything hidden. you can see if there's a pyro v2 in there... not at a workstation at the moment or i'd look myself.

    that said, pyro 3.0 is way better. generally speaking, the idea in the pyro shader is to take 2 or 3 volume fields and use them to make your explosion look nice. the density field is pretty much the "smoke" portion of your explosion. the fire is usually a combo of heat and temperature, but i've often found heat by itself suffices. the underlying theory is to have one field drive the color of the flames and another field drive the intensity. like i said, i usually use heat for both. then it's just a matter of finding good multipliers for each. you can use the "physically accurate" color mapping or use the "artistic" color ramp (you'll have to generate the ramp yourself). the trick is that your shader has to be tuned for the sim that's generating it. there are not really any units on density or temperature or heat... they could be 0-1 or 0-100000 depending on how you set things up in your sim.

    there's also a "fireball" material you can drop down that will fill in some default values...

    edit: oh and the volume viz sop is really a simplified version of the same idea. it just affects the display, not rendering.
  18. Maya is morphing into Houdini

    when maya was first being developed, the sales rep kept trying to demo it for me and i kept asking about a procedural workflow (coming from prisms). they kept talking it up, but of course, it never was really there (even tho they kind of have nodes for everything...) i guess this is finally it? like... 20 years later?
  19. Time Shift Node

    max($F,1) -- that'll evaluate to $F or 1, whichever is higher.
  20. you could try a point deform sop.... you'd have to have a constant number of points (no reseeding). plug in your high rez mesh, your rest frame sim points, and then your sim points animating. won't work if your sim breaks up a lot...
  21. i haven't used hqueue, but mantra itself uses the -H flag to distribute renders so the issue may or may not be hqueue related. you could try manually launching a mantra task with -H to help isolate the issue. regarding ifd's... my guess is that the ifd is fed to a single mantra task and then it handles telling the other mantra instances what to do (probably by means of forwarding the ifd content for the most part). if you're on linux, it should be relatively easy to generate an ifd and verify it works and then push that same ifd to a mantra -H call to see how it responds.
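    for example, something like this from a shell -- the host-list syntax here is from memory, so double-check mantra's help output:

        mantra -H hostA,hostB < frame_0001.ifd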
  22. Mantra error loading Geo issue

    you'll have better luck not merging and just having multiple objects set up as packed disk primitives (from the file sop). the merge really just adds memory overhead to your task.
  23. you're probably going to have more luck on a muster board asking about houdini...
  24. Point Cloud Import by Index

    import by index lets you "skip" point cloud indices and grab, say, the 5th point found in your search. it's probably not something you'd ever use unless you had a really specific reason. you can find the point number by doing a pcimport(handle, "point.number", pcNum) inside a pciterate() loop. not sure how you're searching for your point to grab attributes from, but that'll give you the current point number in your point cloud.
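    something like this inside a point wrangle -- the file name, search radius, and max point count here are placeholders:

        int handle = pcopen("cloud.pc", "P", @P, 1.0, 10);
        while (pciterate(handle)) {
            int num;
            pcimport(handle, "point.number", num);   // point number of the current cloud point
        }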
  25. deadline is a render farm manager. you need some kind of system in place to handle all your tasks. houdini does not provide this (well, unless you're talking about hqueue).