
pockets

Members
  • Content count: 39
  • Joined

  • Last visited

Community Reputation

1 Neutral

About pockets

  • Rank
    Peon
  • Birthday 08/10/1971

Contact Methods

  • Website URL
    http://www.youtube.com/perfectseanie

Personal Information

  • Name
    Sean
  • Location
    Austin, Texas
  • Interests
    anime, cars, filmmaking, horror...oh yeah, and Houdini
  1. Dark volume in Redshift

    I got it figured out. It was actually a cascade of problems: updating RS, making the necessary updates in Shotgun, and some old settings were partly to blame. This morning things magically worked doing exactly the same steps that had me leave the office in frustration last night. I like Shotgun, but I really don't like how it dictates the Houdini environment and takes much of the troubleshooting out of my hands. I really hate "reboot and things work" stuff.
  2. Dark volume in Redshift

    Density is mapped to scatter. Nothing needs to be mapped to emission for smoke to render as smoke. The default value of an RS light is 100, and it illuminates any other object nearby very brightly. This isn't a case of no lights, or no material. It's simply (until now, apparently) not that difficult, or that many more steps, to get a volume to render in Redshift as a volume. I even followed the video that SESI is hosting with the examples for rendering pyro in Redshift, the one that starts with fog lights and moves on to the shelf tool explosion on the float toy, just in case I'd forgotten something in the last month or so. No dice. Where his first render shows the volume all blown out and bright from the default setting, before playing with the mapping, I get black, doing the exact same thing, and only black. Even mapping "heat" to emission in that case: black. If I place a grid underneath it, the surface is bright, with a shadow from the volume, but the volume is black like ink. They should really think about updating that video, though, because it wasn't made in 16, with the changes deprecating /shop.
  3. I may be missing some option, but I cannot, for the life of me, get anything but very dark, basically black volume renders with Redshift recently. I'm running 16.0.600 and Redshift 2.0.93, where the last time, when I ran into no problems at all, I was on 15.5.673 and RS 2.0.75, I believe. For simple smoke there weren't any steps, that I recall, besides applying the parameters from the shelf, setting the object to volume, using the volume primitive, applying the RS volume shader, and having an RS area light. The volume showed up in RS renders in a similar-looking neutral gray, as I'd expect: like the visualization, like it would look in Mantra. I can make it thicker or thinner by tweaking the volume shader parameters, but it's always black. I know I must be missing something, but I cannot, for the life of me, find it. Unless something's broken.
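    Here's the kind of sanity check I've been running from the Python shell while hunting this down (a minimal sketch; 'pyro_import' is a placeholder for whatever your import object is actually called):

```python
import hou

obj = hou.node('/obj')

# Any lights at all? Redshift lights are their own OBJ types, so just
# look for 'light' anywhere in the type name.
lights = [n for n in obj.children() if 'light' in n.type().name().lower()]
print('lights found:', [n.path() for n in lights])

# Does the smoke object actually point at a material, and is it visible?
smoke = obj.node('pyro_import')  # hypothetical name; use your import object
if smoke is not None:
    print('material:', smoke.evalParm('shop_materialpath'))
    print('display flag:', smoke.isDisplayFlagSet())
```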
  4. Multi UV Sets to Maya

    I was really hoping there was more discussion here. I've output geometry with a second UV set attached as a vertex attribute (uv2), and though it doesn't show up as a second UV set in Maya, it does appear under "Extra Attributes". In Houdini it would be trivial to shuffle "uv2" into uv or reference it in a material. In the Hypershade it looks like you should be able to pipe an alternate attribute into the UV parameter of the texture placement node, but our Maya guy hasn't figured out how to access "Extra Attributes" with any node in the Hypershade to do so, and I know too little of Maya's workings to be of much help. I was kinda hoping more folks would have had cause to export geometry with multiple UV sets to Maya.
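    In case it's useful, this is how trivially it shuffles on the Houdini side (a minimal sketch; '/obj/export_geo' is a hypothetical container, and 'uv2' is the vertex attribute from my export):

```python
import hou

geo = hou.node('/obj/export_geo')      # hypothetical SOP container
wrangle = geo.createNode('attribwrangle', 'uv2_to_uv')
wrangle.parm('class').set(3)           # 3 = Vertices in the Run Over menu
wrangle.parm('snippet').set('v@uv = v@uv2;')  # promote the second set to uv
```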
  5. Better Surface Debris Option?

    I like to more explicitly control the points that get birthed, by emitting from all points in the source. That way, if I have 1000 points feeding the POP/DOP, I get those 1000 points per time step, and then I use SOPs to both shape and control the number.

    One of the ways I like to control the shape is by doing a fairly dense scatter, adding turbulence to Cd so I can visualize the noise, and then deleting the darker points based on a threshold (see the sketch below). This can create some nice, organic, cloud-like volumes of particles that feed nicely into advection, either by a velocity field or by curl noise. Generating the turbulence from rest position means you get some coherence across frames, and you can then do things like slowly shift the pattern over time or animate the culling threshold so that the clumps of points generating particles "erode" in (or out).

    I had to do some exploding blocks for a Minecraft-like spot recently, and I added an additional density of noisy points near the edges of each surface and did two different passes of simulation inputs based on a dot product against the velocity of the block. Points with normals more or less in line with the travel were used to source point-stamped density for a pyro sim, while points with normals that more or less trail the direction of travel birthed particles for trailing dust. Rather than being purely random noise, the bit of overhead in processing and visualizing the particle source inputs has been worth it to achieve results I can influence in ways other than jittering seeds and scaling density up and down.
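    The culling step itself is tiny; a minimal sketch as a point wrangle created from Python ('/obj/debris_source' and the channel names are just placeholders, and it assumes a rest attribute from a Rest SOP upstream):

```python
import hou

geo = hou.node('/obj/debris_source')   # hypothetical SOP container
wrangle = geo.createNode('attribwrangle', 'noise_cull')
wrangle.parm('snippet').set('''
// Noise on rest position keeps the pattern coherent across frames;
// animate 'offset' to drift it, or 'threshold' to erode the clumps.
float n = noise(v@rest * chf('freq') + chf('offset'));
@Cd = set(n, n, n);                 // visualize the noise first
if (n < chf('threshold'))
    removepoint(0, @ptnum);         // then cull the darker points
''')
# Click the wrangle's spare-parameter button to expose
# freq/offset/threshold as channels.
```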
  6. Houdini 15 Point SOP

    I noticed this, and it threw me off, but then I remembered I stopped using the Point SOP entirely after getting comfortable with VOPs and the Wrangle SOPs. There isn't anything I used to do with the Point SOP, that I can remember, that doesn't work faster and better in a Wrangle, a VOP, etc., as mestela suggests.
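    For example, the things I used to reach for the Point SOP for all collapse into a single wrangle snippet (a sketch; assumes normals exist upstream):

```python
import hou

geo = hou.node('/obj/geo1')            # any SOP container
w = geo.createNode('attribwrangle', 'point_sop_replacement')
w.parm('snippet').set('''
@Cd = abs(@N);                             // color by normal
@P += @N * chf('peak');                    // push along normals, Peak-style
@pscale = fit01(rand(@ptnum), 0.5, 1.0);   // random per-point scale
''')
```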
  7. Alembic with Zooming Camera

    If it's still borked in the next release version I install, I will. They're already up to 15.0.347, so, since it wasn't broke before, I'm hoping I'm just stuck at an unfortunate version. It was an oversight on my part not to recognize that the camera back parameters were without embedded expressions; all of the other shots with fixed focals worked and were aligned just fine, so I wasn't thinking something was quite that broken. Not to mention, this version of Soft has some issues with its Alembic export. Even Maya misinterprets some of the parameters, but it's one of those "not always, and not in the same way" sort of things. The kind of problem I hate. We've been able to get over it by comparing the cameras in both packages side by side.
  8. Alembic with Zooming Camera

    No, there are no expressions on the camera node at all. It has static values in all parameters. The only expressions of any kind present in the entire Alembic Archive hierarchy are in the Frame and Frames Per Second parameters on the Alembic Xform node containing the Houdini camera SOP. No Python anywhere. However, that was H15.0.313, and I just read the Alembic into H14.0.474, and not only does the camera successfully zoom but all the pertinent camera back parameters have expressions. Aperture and pixel aspect ratio are completely wrong, but I have a fix for that. So, I'll look at versioning up to the next production build soon and perhaps in the meantime "launder" the files through H14.
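    For anyone comparing builds, this is roughly the check I ran to see which camera parms actually carry channels (a sketch from the Python shell; '/obj/alembicarchive1' is whatever your archive node is called):

```python
import hou

archive = hou.node('/obj/alembicarchive1')
for node in archive.allSubChildren():
    if node.type().name() != 'cam':
        continue
    for parm in node.parms():
        if parm.keyframes():             # expressions live in channels, so
            print(node.path(), parm.name())  # this catches keys and exprs
```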
  9. Alembic with Zooming Camera

    The camera animates, and a static focal and (bogus) aperture are read in. On shots with a static focal length it's always been correct, to the best of my recollection, but here there's a single held value that is, I'm assuming, one value from the source camera's zoom range. Also, it's not being animated in Maya; it's being animated in Softimage, and I read the Alembic into Maya as a control, to see what it did there. Maya correctly interprets the animated focal length; Houdini does not, and I only get a single value. And from the Alembic scene that Maya was able to read correctly, I wrote that camera out, and Houdini treated it exactly as it did the camera coming from Soft: no animation in the focal length, only a single value. edit: and actually, no, there are no expressions if I dive down into the hierarchy within the Alembic Archive to the Houdini camera object. Those are all static values, and the only expressions are on the various Alembic Xform SOPs, which has been my experience since H13. The quality of the parameters in this camera has varied depending on where the Alembic was written from: Softimage writes bogus pixel aspect ratios into the file, which Houdini interprets a little differently than Maya does, and the aperture is completely wrong, but that's an easy override. An animated focal, however, is a little more fiddly.
  10. Hey there, I've recently run into an issue on two shots where the standard Alembic Archive doesn't seem to import an animated focal length on any camera objects. It's never come up before, since zooming is pretty rare. I thought it was an issue with the file coming from Soft, since aperture and pixel aspect ratio aren't interpreted correctly, but those are an easy fix. But I've loaded the Alembic into Maya and I get a camera with the zoom during the shot intact, where Houdini just has a fixed focal and no animation. I exported the camera from Maya to a separate Alembic, and this too reads into Houdini with a fixed focal length. Before I put the bug report in and try to figure out a workaround, I wanted to see if anyone else had run into this issue. There aren't many parameters on the Alembic Archive, so I feel safe assuming this is a bug and not simply operator error. cheers
  11. Turns out the main source of my crashing is likely specifying Volume Collisions. Switching to the mesh surface, I was able to get a sim in another scene, with far greater velocity, to run without crashing. Success! The values I'm having to use to get loose, drooping cloth that doesn't look like it's loaded up with spray starch or made of flexible cardboard are much different from any of the examples offered in the docs, and a couple of orders of magnitude different from the mouse-over suggestion that the defaults are somehow correct for cloth measured in meters. But I actually look forward to doing more cloth now rather than cringing at the thought of it.
  12. I'm resurrecting this old topic because there seems to be so little information on the new FEM cloth, and the default settings are pretty much rubbish. This has been a helpful topic in setting up my first bit of cloth with the system. At DD, SPI and R+H it wasn't something that fell within the scope of my FX work; I'm at a small boutique for now, and I'm it.

    Anyway, I've got a character that moves rather briskly out of frame wearing a hooded cloak, and it just wants to break the solver at the first big-velocity frame, badly enough to crash Houdini altogether. I was able to narrow it down to the constraints as the likely cause of the crashing. If I disabled the Cloth Attach Constraints, it was able to push on, with the collision object (volume collision) ripping through the cloth (no fracturing). The problem is that the documentation is sparse (most parameters have no more information than what appears in the mouse-over), incomplete (parameters in the interface aren't mentioned in the docs at all) or old (parameters from an older version appear in the docs but are no longer represented in the interface). I was eventually able to get "something" out for a complete range by doing three sims, each subsequent sim started from the time before the last one broke, with a new set of constraints. But I envision all manner of characters with capes or flowing cloaks and dresses where this simply wouldn't be viable, and the whole process can't realistically be this horrible, can it?

    The settings I'm at are also really, really slow. Some frames, before the crash, topped 2 hrs for a bad solve on an i7-5820K @ 3.3GHz, while the solve barely ever pushes much over 20% CPU and I have over 20 GB of memory still available to do other things while the FEM eventually has an aneurysm and dies. Where I ended up:

    • 4x oversampling on the DOP network: 2x not only didn't stop a crash, but what did solve wasn't as good.
    • 20 Min Substep Rate: lower killed the sim sooner and gave worse results.
    • 0.0612 Substep Tolerance: its interaction with the substep rate and the DOP network oversampling is unclear; the docs just say lower is better.
    • 4 Max Global Collision.
    • 16 Max Local Collision: it seems like I'm more interested in local collision; I don't think I ever tried running the Global setting at 1.
    • No self-collision.

    ...and that's pretty much it. The two settings that seemed to most affect the speed were the DOP network oversampling and the Min Substep Rate, in combination. Going higher might have gotten me a few more frames, but I have zero confidence the entire thing would have completed. Meanwhile, several other shots were done by a different artist with the same exact geometry in nCloth, and while not great, it always managed to keep the spring constraints attached to the animated figure driving the animation, with some very wild, flying animation and the cloak flowing believably, and could be cached out over a similar range of frames in a manageable time.
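    When the solver merely errors out rather than taking the whole session down, something like this crude sweep could at least automate finding a survivable oversampling (untested sketch; '/obj/cloth_sim' is the DOP network and '/out/cloth_dop' a Dynamics ROP pointed at it, both hypothetical names):

```python
import hou

dopnet = hou.node('/obj/cloth_sim')
rop = hou.node('/out/cloth_dop')

for oversample in (1, 2, 4, 8):
    dopnet.parm('substep').set(oversample)   # the DOP network oversampling
    try:
        rop.render(frame_range=(1, 120))     # cook the full shot range
        print('completed at oversampling', oversample)
        break
    except hou.OperationFailed as err:
        print('failed at oversampling', oversample, err)
```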
  13. This is something I'm running into as well for PBR. I recently did a shot lighting 23 dinosaurs in a copter plate. The shadow pass actually took longer than the beauty. Even with no texture on the ground, shading it just to get the cast shadows into an AOV was a real drag, on top of having to tell the compositor to just ignore the RGB of the shadow pass and pull the AOVs. The old shadowmatte technique doesn't seem to have an analog with PBR. The closest viable solution I can think of so far might be rendering with the ground/BG projections in the beauty and then having a channel that ends up being a matte to separate the object you really want from the main beauty render, so that you get the cast shadows in the AOVs of the main render (see the sketch below). At the very least it would seem less wasteful.
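    Something like this is what I have in mind: the hero shader writes a constant export (e.g. a Bind Export VOP set to 1), and the beauty ROP picks it up as an extra image plane (untested sketch; the node name and 'hero_matte' are hypothetical, the parm names are the stock Mantra ROP ones):

```python
import hou

mantra = hou.node('/out/mantra_beauty')

# Append one extra image plane exporting the shader's 'hero_matte' float.
n = mantra.evalParm('vm_numaux') + 1
mantra.parm('vm_numaux').set(n)
mantra.parm('vm_variable_plane%d' % n).set('hero_matte')
mantra.parm('vm_vextype_plane%d' % n).set('float')
```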
  14. Voronoi - dynamic - location based fracture (WIP)

    Ah, that does make a difference. Very nice. The dynamic fracturing indeed holds up well, as well as any Rayfire examples I've seen to date. This is fun stuff!
  15. Voronoi - dynamic - location based fracture (WIP)

    Interesting. I'll take a look at that, thanks. I wonder how this would end up working on a practical model, though. It's easy enough to get the Fracture SOP to fail using only primitive shape inputs, which led to our searching for a more reliable, non-Cookie method last year. A solution for a single squared torus like this example is one thing, though still computationally non-trivial to solve, but I wonder about geometry like an apartment building with twenty or fifty windows on a side. Even for small numbers of shots and a finite number of buildings, I've already seen that it's next to impossible to impose rigid rules for FX concerns, even if you could guarantee total reliability by building things a certain way. I know there's no panacea for this stuff right now. It's interesting, though.