
Community Reputation

3 Neutral

About meeotch

  • Rank
    Peon

Personal Information

  • Name
    mitch
  1. displacement coving artifacts

    Thanks for the reply - unfortunately, I don't think that solution will give visually similar results. If you look at the geo (output of a voronoi shatter), you can see that blending off the displacement toward the edges will result in really simple, straight-line silhouettes. But I did hear back from SideFX with a good explanation of why the geo breaks in the first place, and why the renders are non-deterministic. Short version: it's a combination of the displacement being along the normal, the normals being discontinuous at the edges, and multithreading. The fix is either to define a continuous "displacement normal" to use, or just use 3D noise for the displacement, instead of 1D noise.

    From SideFX:

    The issue is that all the fragments have shared points, but they have different vertex normals for each of the faces that share the points (due to cusping). Because of threading, the order that faces get diced is indeterminate (one face might get diced first on one run, but later on a 2nd run). Mantra assumes that the displacement shader will displace the same point in the same direction on every run. But since the shared points have different vertex normals, the direction to move the point (along the normal) is determined by which face dices the polygon first.

    For example, if you had a cube and you wanted to displace one of the corners along the normal, what direction would you expect the point to be moved? There's no really good answer to that. You might say that it should move along the average of all the normals. And that's what mantra would do if you had point normals.

    There are a few solutions to this problem. 1) Use 3D noise displacement (create a noise vector based on the shading point P) and don't displace along the normal. 2) Create a per-point attribute that represents the shared normal and displace along that normal instead. Both of these would involve diving into the displacement VOP net. The 2nd would involve setting up another attribute on the geometry as well. You can also change the geometry to have point normals instead of vertex normals. This would change the look quite drastically, though. Using bump mapping instead of displacement would probably work, but there still might be subtle issues because of displacing along the vertex normals.
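
    For anyone who lands here later: option 1 boils down to building the offset from the shading position instead of from N. A minimal sketch as a bare VEX displacement shader (the shader name and the amp/freq parameters are just placeholders - in practice the equivalent change would go inside the principled shader's displacement VOP net):

      // Displace with 3D vector noise driven by the object-space position.
      // Shared points always get the same offset, regardless of which face
      // dices them first, so the render becomes deterministic again.
      displacement
      stable_noise_disp(float amp = 0.1; float freq = 2.0)
      {
          // P is in camera ("current") space in mantra; use a stable space instead
          vector pos = ptransform("space:current", "space:object", P);
          vector nval = noise(pos * freq);   // 3D value noise, roughly 0..1 per component
          P += amp * (nval - 0.5);           // NOT along N
          N = computenormal(P);              // rebuild the shading normal after moving P
      }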
  2. Here's a scene with some pieces that are the result of a voronoi shatter. I've got a principled shader on them, with noise displacement turned on. If you render 10 frames of the "close" ROP, you'll see geometry flickering at the edges of the pieces. Weirder still, if you do a single frame render (in Render View, or to mplay), then save the frame and render it again, you'll see that the geo artifacts are *not deterministic*. The same frame renders differently each time.

    Now dive into the RENDER_test node, and instead render it with one of the pieces blasted. Suddenly, the frame renders pretty much the same each time. Doesn't seem to matter which piece you delete. Now go back to the original file node with all the pieces, and render it with coving disabled on the Geometry tab of the RENDER_test object. Again, the frame renders the same way each time. So it appears that coving is acting nondeterministically, depending on how much geo is in the object - WTF?

    Sure - I expect a little geo weirdness when displacing geo with sharp edges. But I at least expect each frame to render the same every time - and I'm pretty sure that's where the flickering is coming from. If you look at it closely over many frames, it appears that it's happening in certain spots, always on the edges, and that it's flipping back and forth between two different coving solutions at each spot - though each spot flips independently of the others.

    I've tried everything to eliminate the flickering: all the various dicing parameters, ray predicing, shading quality, flatness, re-dicing, uniform measuring, sample coving, sample lock. I've also tried pre-dividing the pieces to get more polys in the geo. No love. Any bright ideas?

    (The actual shot that this comes from involves a large number of these pieces that move slowly and then come to a stop. Even when they are completely static, and the camera is static, they flicker from frame to frame.)

    testgeo_v2.bgeo.sc MRS_001_0160_fx_test_v060.hip
  3. flip collisions & substepping

    I'm away from my workstation, but I feel like I tried changing the collision velocity scale, as you suggested. I believe the results were that the particles just ended up going through the geometry, but I'd have to confirm. My instinct says that it's not a "birth frame" problem, though - all of the particles seem to collect in a shell at the same distance from the collision field. If it were a problem affecting only just-born particles, I'd think you'd see a mix of particles between the "shell" (just born) and the proper collision (older particles).
  4. flip collisions & substepping

    The attached file demonstrates the following problem: there's a translating static object, emitting flip particles from the object & also colliding with it. Run the file up to about frame 20, and you see that there's a gap in the direction of the translation, even though the collision field is correct. (Note that the gap is wide in the direction of travel, but nonexistent on the sides of the object.)

    Increasing substeps reduces the problem. (But is also heavier, and changes the look of the sim significantly.) Which, o.k., that makes sense - the object is moving some amount of distance between substeps... But why are all the particles clustered in a shell that seems to be halfway between frames? In fact, if you set "max substeps" on the flip solver to 1, the problem remains, and the particle shell seems to be one frame *ahead*. Huh?

    Interestingly, changing the collision mode to "Surface Collisions" on the static object produces "correct" results - though again, the look of the sim is different. Digging through the guts of the flip solver, it seems to be all volume-based collisions - so I'm surprised this works at all.

    Assuming that I've got a sim that I'm already happy with, and I just want to force the timestep that the "shell" forms around to be the integer frame rather than the 0.5 frame, is there a fix? Timeshifting by half a frame after the fact seems to break the collisions altogether.

    collisiontest_v001.hip
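
    (If it helps anyone reproduce the measurement: a quick way to see the shell is to bring the particles and an SDF of the collider into SOPs and sample the distance per particle - e.g. a point wrangle like the sketch below, with the particles in input 0 and the collision VDB in input 1. The attribute name and the color ramp are just illustrative.)

      // Store each particle's signed distance to the collision surface,
      // so you can visualize where the "shell" sits and how wide the gap is.
      f@colldist = volumesample(1, 0, @P);
      v@Cd = set(fit(f@colldist, 0.0, 0.5, 0.0, 1.0), 0.0, 0.0);  // red ramp for quick viz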
  5. Just started fooling around with H17, and there's one thing that's driving me nuts. If I have my viewport set to "hide other objects", and I dive in/out of a geo node from the /obj level, it unhooks the viewer from whatever camera it was looking through. Camera is preserved on "show other objs" or "ghost other objs" settings. Is this a preference somewhere, or a bug? No previous version seemed to have this behavior.
  6. Thanks for the replies.

    @jason - An interesting point about printf possibly preventing optimization. Would be interesting to test, though I'm not sure how off the top of my head. But I'm unclear what you were referring to w.r.t. texprintf - it seems to just be a sprintf that does UDIM substitution automatically?

    @oldschool - point taken about the shader being evaluated for every ray. It still feels like this wouldn't prevent optimization of non-changing values at various levels. But I'm not sure how the parameter-override mechanism is implemented. With packed geo, presumably you can't know if a value is overridden until render time. But mantra also knows which object / prim is currently being shaded, so once it's determined that parameter X isn't overridden at the point level, it presumably doesn't need to make that determination again.

    I guess it comes down to how clever the compiler is about arranging things to execute the minimum number of times, as jason mentioned. In the absence of an "optimization hinting" mechanism, it would be great to at least have an explanation in the docs re how the compiler / mantra work together to optimize stuff. Then you could attempt to arrange your input in a way that was efficient to render, and avoid pathological cases like doing string mangling on every sample.
  7. Do shader VOPs have a sense of "class", in the way that old-school RenderMan SL parameters do? Specifically, is there a way to limit some block of shader stuff to not calculate on every sample?

    The context is that I've got packed alembics coming in. Inside those packed objects is a detail ID attribute (int, but saved as a float) that specifies a group of texture maps. In the shader, I need to construct the texture paths based on this integer, which I'm currently doing in an Inline VOP. This works, but when I insert a printf() into the Inline as a test, it appears that it's getting evaluated many thousands of times. What I'd like is evaluation once-per-alembic, or once-per-packed-prim, or something.

    I know that this could be done with style sheets, or by baking the texture paths into the alembics, or by unpacking the alembics and adding the texture paths, or by having many dozens of individual materials. But for reasons unspecified, we're not doing any of that.
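
    (For reference, the gist of the Inline VOP is something like the sketch below, written as plain VEX rather than with the Inline VOP's port bindings - the attribute name, path layout, and set numbering are simplified/made up here. It's the string construction, not the texture lookup, that I'd like to evaluate once per packed prim rather than per sample:)

      // 'id' arrives from the geometry as a float, so round and cast before formatting.
      int map_id = (int)rint(id);

      // Build the texture path; "%%" keeps a literal %(UDIM)d token in the result
      // so the UDIM substitution can happen at lookup time.
      string basecolor = sprintf("/prod/tex/set_%04d/basecolor.%%(UDIM)d.rat", map_id);

      // Sampling per shading sample is fine; it's the path-building above that isn't.
      vector clr = texture(basecolor, s, t);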
  8. renderstate vs packed fragments

    Nice - thanks for the info.
  9. So am I correct in understanding that the Render State VOP can be used to read attributes on packed *geos* in a shader? (Where "attributes" means attributes you've added to the top-level packed points themselves, not the underlying packed geometry.) But that the same doesn't work with packed fragments? Or is this a bug?

    See attached file. Render and inspect the "noiseColor" image plane. If you turn on "create packed fragments" on the Pack node, and re-render, you'll notice that the shader is no longer rendering colors into this plane.

    I understand that packed fragments are references into a larger geo, whereas packed geos are entire geos. But I don't see why this would prevent you from using Render State with both of them. If a technical explanation exists, I'd be glad to hear it.

    packedRenderState.hip
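
    (For anyone trying this in raw VEX rather than with the VOP: as far as I can tell, the Render State VOP is a wrapper around the renderstate() function, so the test amounts to something like the snippet below inside the surface shader. The fallback color is just there to make a failed lookup obvious in the image plane.)

      // Try to read "noiseColor" from the packed point being shaded;
      // renderstate() returns 0 if the query fails, 1 on success.
      vector nc = {0, 0, 0};
      if (renderstate("noiseColor", nc) == 0)
          nc = {1, 0, 0};   // lookup failed - flag it in red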
  10. Is it possible to make certain hotkeys dependent on the pane mode? Specifically, the new quickmarks (finally! and there was much rejoicing...) hotkeys 1/2/3/etc. I'd prefer that they were only active in "view mode", so that they become space+1, space+2, etc. - after which I can re-re-bind 1/2/etc. back to the display/render flag commands that now have a decade+ of muscle memory behind them. (C'mon SESI - changing something that basic with no thought toward a "classic mode" is a total Autodesk move, yo.) Quickmarking seems like it's more properly a viewing operation, anyway. In the same way that space+g in a 3D view is a viewing operation. But I'm not trying to start a hotkey flame war - just wondering if users have access to the panel mode logic.
  11. reading from arbitrary geo in a shader

    D'oh! Dude - nice one. You're like a psychic for boneheaded mistakes. Reading the curve geo from a file seems to work, with that fix. op: syntax still doesn't work, which is not entirely surprising. Anyone know whether that's something that's been implemented, or whether it's another "PEBKAC" issue on my end?
  12. Is it possible to access an arbitrary piece of geo (i.e., not the one being shaded) from a shader? I know that you can read data from pointclouds on disk. I also read somewhere that the "op:" syntax doesn't work for shaders (although it does work for referencing COPs from a shader, yes?). Presumably, you'd have to alert houdini somehow to the fact that there was non-rendered geo that needed to be pushed to the ifd.

    The context is that I'm trying to write a shader that does coloring based on distance from a set of guide curves. The color transitions need to be sharp, not blurry, therefore baking the colors into the shaded surface's points in SOPs isn't an option, unless I dice it up into a zillion polys. (I suppose a fallback plan would be cooking the color info down into a very large texture. But all of this is animated, so I'd prefer not to render a few hundred frames of textures.)

    I can confirm that using the xyzdistance VOP from within the shader doesn't seem to work - neither with op: syntax, nor reading the curves from a file on disk.
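
    (For clarity, the guts of what I'm after looks roughly like the sketch below - the path, threshold, and colors are placeholders. The question is really just how to make the curve geo visible to mantra at all:)

      // Surface shader sketch: hard two-tone coloring by distance to guide curves
      // read from a file on disk.
      surface guide_color(string guides = "/prod/guides/curves.bgeo.sc";
                          float threshold = 0.1)
      {
          // P is camera-space in mantra shading; measure in world space instead
          vector pw = ptransform("space:current", "space:world", P);

          // distance to the nearest point on the guide curves
          float d = xyzdist(guides, pw);

          // sharp transition - no baked point colors, no blurry interpolation
          vector near_clr = {1, 0, 0};
          vector far_clr  = {0, 0, 1};
          Cf = (d < threshold) ? near_clr : far_clr;
      }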
  13. shadows and normals

    So here's a weird one. While going through the H15 Rendering Masterclass, I noticed some weird artifacts in the example scene renders. Eventually tracked it down to the background geo, which was just a box with most of the faces removed. Check out the attached hip file...

    If you render the mantra node (which is set to all defaults, I believe), everything looks fine. Run the render in the Render View, for reasons I'll get to in a minute. Shadows from the grid and the ball are casting on the box. Now go into the box object, and turn off Cusp Normals on the Facet node. (Or alternatively, put the render flag on the Blast node.) Now the shadows are totally wrong!

    o.k., so at first I was thinking - maybe it has something to do with the fact that, in PBR, everything is driven by BSDFs, and lighting effects come "free" with the global physical simulation of light. Maybe it's the fact that the normals on the box are smoothly interpolated by default, so that along the edges, you get a normal which isn't actually perpendicular to the sides of the box - and presumably, the normals are affecting the BSDF. (Problems with this theory below.)

    So, o.k. - switch to Micropolygon rendering. Shadows are still wrong - wtf? But wait - now turn off Preview mode, and in fact Micropolygon rendering displays the correct shadows. Again, wtf?

    So, a decent theory - but here's my problem with it: no matter which direction the BSDF was "pointing" (based on the normal), it's not going to change the set of overall directions on the surface that can "see" the light. The geometry hasn't moved, so the shadow should be falling in the same place. So can someone explain what I'm missing? (Also, what's up with Preview mode, that causes Micropoly to give PBR-like results?)

    shadows_and_normals.hip
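
    (Side note, in case the terminology trips anyone up: "Cusp Normals" on the Facet SOP effectively gives each vertex the normal of its own face instead of a normal smoothed across the shared point. A vertex wrangle equivalent is roughly the sketch below - with it, the box's sides shade as true flat planes; without it, the interpolated normals near the edges lean away from the face plane, which is where my confusion about the shadows starts:)

      // Vertex wrangle: hard-cusp the normals by giving every vertex
      // the normal of the primitive it belongs to.
      int prim = vertexprim(0, @vtxnum);
      v@N = prim_normal(0, prim, 0.5, 0.5);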
  14. Yes - I have the same experience, w.r.t. environment. Unfortunately, I'm a freelancer & don't have any control over that in this case. I think the thing that confused me is that $JOB shows up in the Aliases+Variables window in houdini even when it's set as a system environment var, whereas the other sys env vars don't show up there (but are still usable in the file dialog). I take it $JOB is a special case?

    p.s. - I'm almost 100% sure that on my work machine, $JOB wasn't inheriting its value (in a blank, new houdini session) from the system env var that I set up for it - it seemed to default to my windows profile dir. But I just tested it on my home workstation, and it does inherit here.