danw

Members
  • Posts: 181
  • Days Won: 4

danw last won the day on December 9, 2015

Personal Information
  • Name: Daniel
  • Location: London

  1. Found it. Needed to use the get_global_size() function:

         size_t x_res = get_global_size(0);
         size_t y_res = get_global_size(1);
         size_t z_res = get_global_size(2);
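     For posterity, here's how that slots into the kernel from my post below. This is a minimal, untested sketch of the fix (not verified against my final code), and it assumes the kernel runs one work-item per voxel, so the global size matches the field resolution:

         kernel void windsandwich( int stride_x, int stride_y, int stride_z, int stride_offset,
                                   global float *vel_x, global float *vel_y, global float *vel_z )
         {
             size_t x = get_global_id(0);
             size_t y = get_global_id(1);
             size_t z = get_global_id(2);
             size_t z_res = get_global_size(2);   // z resolution of the field
             size_t idx = stride_offset + x * stride_x + y * stride_y + z * stride_z;

             float3 v = (float3)(0.0f);
             if (z < 4) {
                 v = (float3)(0.0f, 0.0f, 0.125f);                 // low-z border: constant +z
             } else if (z > z_res - 5) {
                 v = (float3)(0.0f, 0.0f, -0.125f);                // high-z border: constant -z
             } else {
                 v = (float3)(vel_x[idx], vel_y[idx], vel_z[idx]); // interior: unchanged
             }

             vel_x[idx] = v.x;
             vel_y[idx] = v.y;
             vel_z[idx] = v.z;
         }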
  2. I'm currently building "baby's first OpenCL kernel", to try and speed up setting border voxels in a field to a constant value. I'm plenty familiar with VEX and such, but this is my first time poking around with OpenCL code.

     I've got it working, except that I can't work out how best to pass in the resolution of the DOPs field I'm operating on. I thought the stride_x variables might be it... but I've worked out I can only derive the x and y resolutions from them (dividing stride_z by stride_y, and so on), not the z resolution. I can calculate it using "Include Size" and "Include Voxel Size" - but it seems odd to be doing floating-point maths to work back to such a basic bit of info, and I don't know how much that'd slow down an OpenCL kernel. Is there any way to just pass the info straight in?

     (Edit: I should probably chuck up the code I've got so far, in case anyone spots any other glaring mistakes while I'm at it :-P)

         kernel void windsandwich( int stride_x, int stride_y, int stride_z, int stride_offset,
                                   global float *vel_x, global float *vel_y, global float *vel_z )
         {
             size_t x = get_global_id(0);
             size_t y = get_global_id(1);
             size_t z = get_global_id(2);
             size_t idx = stride_offset + x * stride_x + y * stride_y + z * stride_z;

             float3 v = 0;
             if (z < 4) {
                 v = (float3)(0.0f, 0.0f, 0.125f);
             } else if (z > ( /*x_res_attribute*/ -5)) {
                 v = (float3)(0.0f, 0.0f, -0.125f);
             } else {
                 v = (float3)(vel_x[idx], vel_y[idx], vel_z[idx]);
             }

             vel_x[idx] = v.x;
             vel_y[idx] = v.y;
             vel_z[idx] = v.z;
         }
  3. This is the bit that I wonder at... I think it might be more accurate if they said "our online licensing system will see a dual-boot machine as 2 machines". If it's possible for the hostid and the resulting license server code to be identical, I can't see how it could tell the two OSes apart if the key was only checked out once and entered manually.
  4. Well, I'm not sure why, or if there's a proper solution, but for now a brute-force workaround worked fine: run the output through a Timeshift node with its frame set to $FF/5 and Integer Frames unticked, then set the ROP to 5x the frame range, with an $FF/5 expression in place of $FF in the filenames (so, for example, ROP frame 7 pulls sim frame 1.4 and writes the .1.4. file).
  5. I presume that, provided your hostid does remain the same between both OSes, you could install the license to one, and then manually enter the same keys on the other without using the online service. Or just manually check out the license on license.sidefx.com and manually install it to both. I'm not sure exactly how the hostid is derived, but most license managers of this kind derive it from the MAC address of your primary network adapter, which could very well mean it remains the same on a dual-boot system. Alas, I've not tried it, so I can't confirm anything.
  6. I've got a Geometry ROP outputting bgeo files from a fluid sim - the sim has 5 substeps, the Geometry ROP is set to a Step Size of 0.2, and the filename uses "something.$FF.bgeo.sc". That works fine locally... I get files numbered {.1. .1.2. .1.4. .1.6. .1.8. .2.} etc. When I throw in an HQueue Simulation node and submit the same thing to the farm, it will only output whole-frame files - the subframe files never appear. Why would there be a difference?
  7. Well, in case anyone else wonders at the answer to this - it turned out to be slightly simpler than I'd hoped. You can open and edit .pmap files in SOPs just the same as bgeo files... and it seems combining them is as simple as a straight merge and ROP out as a .pmap file.

     The only caveat is that it will proportionally increase the brightness of the map - so two similar maps combined will be double brightness. You can compensate for that by setting an exposure of -1 stop on the caustic light... -2 stops if you combine four maps together, -3 for eight maps, etc. (Or, you can set that exposure compensation on the caustic light before generating each component map, and the combined results will then add up to the correct brightness without adjustment.)

     I figured there'd be a way to multiply the brightness levels directly on the point cloud's attributes, but as far as I can tell, none of the per-point attributes actually increase/decrease with photon count... somehow the map just seems to implicitly *know* the brightness :-P Perhaps some kind of metadata gets stored somewhere? (But then I'd assume it'd lose whatever that metadata was when you bake out the combined photon map, and it doesn't.) No matter anyway - the way I'm doing it seems like a perfectly stable workaround.

     For the photon map seed stuff, it seems Mantra's primary "seed" under the sampling tab affects photon generation sample patterns as well, so that was thankfully straightforward too :-)
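     (The pattern above is just powers of two: combining N equally-bright maps multiplies the brightness by N, so the compensation is -log2(N) stops. A trivial C sketch of that arithmetic, with a function name made up purely for illustration:)

         #include <math.h>

         /* Stops of exposure compensation after merging n equally-bright
            photon maps, assuming brightness scales linearly with map count
            as described above: 2 maps -> -1, 4 -> -2, 8 -> -3. */
         double exposure_compensation_stops(int n)
         {
             return -log2((double)n);
         }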
  8. Also, on a simpler note - is there any way to randomize/seed the caustic photon map's sampling pattern? I know at some point it was tweaked to maintain a fixed sampling pattern to reduce photon jitter between frames, but I'd like to experiment with random patterns if the ability still exists.
  9. Does anyone happen to know about the anatomy of Houdini's caustic photon maps? I'm using a bunch of subframe renders to combine into multi-segment motion blur for some fast-moving fluid, and as I'm generating a caustic map for each subframe already, I was wondering if there's a way to pre-generate them, combine them into a single map, and then pipe that to each subframe render within the same frame. I imagine that might either add more detail (4x 5-million-photon maps giving a correspondingly crisper 20-million-photon map), or possibly just reduce aliasing. ...or of course I might be barking up entirely the wrong tree, and two photon maps combined would just conflict or be meaningless. If it *is* possible... is there also a way to separately run the photon-map prefilter operation after I've combined them?
  10. VDB to Maya

      I've never used V-Ray, but could it be something as simple as it expecting the velocity to be named something other than "vel"? Maybe try renaming it "v" or "velocity"?
  11. This is my understanding of it all... I'm no programmer or maths expert, so some of this may be a tad naive and/or wrong :-P

      Incompressible fluid is how most-if-not-all CG industry fluid sim solutions model fluid - they presume that in fluids like water and air, compressibility is negligible, and leave it out because, as I understand it, simulating it is mathematically far more complicated (for no real benefit unless you're doing it for a scientific application).

      Divergence is what you (usually) don't want in a fluid simulation... the pressure solve is the process of taking a divergent velocity field and iterating over it until the divergence is removed - basically, so that fluid can't move *through* other fluid... it always has to go around, so you end up with swirling motion. With divergence, the fluid can grow/compress in volume. That can actually be used as an artistic tool - most commonly in explosions - you can inject intentional divergence into the sim, and it'll expand rapidly.

      Fluid Implicit Particles is the hybrid technique of using particles to represent the fluid (as in classic SPH fluid), but splatting their velocities onto a voxel grid, running a pressure solve on that voxel grid (which at high particle counts is massively more efficient than attempting to pressure-solve the particles themselves), then using the resulting divergence-free velocity field to advect the original particles to their new positions. The particles don't have any "awareness" of each other as they do in an SPH fluid, but should maintain distance from each other automatically thanks to the divergence-free advection. In general terms, it takes the performance benefits of grid-based fluids and the detail benefits of particles, and combines them.
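      If it helps to see that loop written down, here's a bare-bones C sketch of a single FLIP step. Every type and helper in it (splat_to_grid, pressure_project, sample_grid) is a hypothetical stand-in for a real solver's internals - it's only meant to show the order of operations, not any actual implementation:

          #include <stddef.h>

          /* Hypothetical minimal types, purely for illustration. */
          typedef struct { float x, y, z; } Vec3;
          typedef struct { size_t count; Vec3 *pos; Vec3 *vel; } Particles;
          typedef struct Grid Grid;                  /* opaque voxel grid */

          /* Hypothetical stand-ins for a real solver's internals. */
          void splat_to_grid(const Particles *p, Grid *g);
          void pressure_project(Grid *g);            /* iterate until divergence-free */
          Vec3 sample_grid(const Grid *g, Vec3 at);

          /* One conceptual FLIP step, as described above. */
          void flip_step(Particles *p, Grid *g, float dt)
          {
              splat_to_grid(p, g);     /* splat particle velocities onto the voxel grid */
              pressure_project(g);     /* pressure solve removes divergence on the grid */
              for (size_t i = 0; i < p->count; i++) {
                  Vec3 v = sample_grid(g, p->pos[i]);  /* divergence-free velocity */
                  p->vel[i] = v;
                  p->pos[i].x += v.x * dt;             /* advect particle to new position */
                  p->pos[i].y += v.y * dt;
                  p->pos[i].z += v.z * dt;
              }
          }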
  12. You could splat the velocity of the character into a field and use that as an influencing force in the two sims, but from experience I'd say that approach would probably be a bit of a pain. You can't really use custom velocity as a basis for a fluid sim - as your custom velocity will almost certainly not obey the laws of fluid dynamics, it'll just cancel out or give entirely unpredictable effects when the pressure solve removes the divergence from it... the more you try to shove and pull fluid around, the more frustration it'll cause - it has a very strong will of its own :-)

      If you're going for the artistic approach of layering multiple discrete sims for effect, I'd suggest it might be better to bake the blobby one first, and then filter its surface SDF or meshed surface into a fluid source for the detailed sim... that way you'd sort-of inherit the behaviour of the blobby sim directly into the detail sim, and you'd only have one thing to render at the end.
  13. I think it may depend on how much you're removing from what... I just found that removing primitives in a primitive wrangle was notably faster *if* I was removing a small number of prims from an object with a large number of them (deleting ~20,000 from ~1,000,000) If I was deleting the majority of them, it was notably faster using Blast (deleting ~980,000 of ~1,000,000) No idea why!
  14. Argh! Anyone know what I'm missing? I've got three Geometry ROPs for three different output geometries per frame, and they all branch from the same root fluid meshing. As it stands, no matter whether I merge the ROPs or chain them in series, when it comes to generating HQueue jobs it will either create 3 separate parallel jobs per frame, or 3 separate nested jobs per frame. That results in each frame effectively being meshed three times over, rather than keeping the cached data in memory and outputting three pieces of geometry at a time.

      I could just pack those three pieces of geometry, merge them, output them to a single geometry file per frame, and split them out later on - but that seems pointlessly awkward when I ultimately want three geometry sequences on disk. I can't seem to work out which utility ROPs would set this up the right way. Batch would presumably just batch the entire sequence into a single job... and I can't quite work out what Frame Container is for, but it doesn't appear to apply. ...any pearls of wisdom? :-)
  15. Hehe, fair point. Reach far, just don't reach *too* far :-P The thing I've found with pretty much all FX throughout my career is that by the time you're half way through building a setup for a simulation, especially when using a procedural tool like Houdini, you'll have uncovered 10 other problems that you'd just love to have the time to study and develop entire other setups for... one month-long project can fill up your "must-get-around-to" list for the next 2 years easily :-P If you plan ahead, you can allow yourself some tangent-time, and that'll ultimately feed back into making the original project better. I also find I tend to have the most fun when I dig deep into one small area.