
anamous

Members
  • Posts

    178
  • Joined

  • Last visited

  • Days Won

    4

anamous last won the day on February 19 2022

anamous had the most liked content!

Contact Methods

  • Skype
    anamous

Personal Information

  • Name
    Abdelkareem
  • Location
    Leythum


anamous's Achievements

Newbie (1/14)

14 Reputation

  1. You're experiencing a combination of different issues: one of them is caustics, the other is ray bias. Since the drops completely cover the underlying surface, you need refractive caustics for light to reach that surface - otherwise it will shade as black, since no light reaches it. To get caustics, you can either add a caustic light to your scene to precompute a caustic photon map, or set the Mantra ROP to "All Paths" so that Mantra traces caustic paths; I did the latter for simplicity. The ray bias determines how far a ray origin is displaced before a trace ray is sent out, to avoid having a microfacet or polygon trace itself (see the first sketch after this list). It should be adapted to the scene scale and amount of detail - in this case I set it to something ridiculously low, just to illustrate what it does. cheers, Abdelkareem kapky_help.hip
  2. Here you go, a simple sample scene (H11). What this does is basically render polygon geometry as a volume without needing to IsoOffset it first. It renders slower than a plain volume would, but the point here is to avoid the IsoOffset step, as that can sometimes be very time-consuming or outright impossible. In this case the torus is rendered as a volume, and the cube is just there to illustrate shadowing; the "IsoOffsetting" basically happens at render time. Now, in the context of the original question in this thread, we only need this for the following reason: to use the Particle Fluid Surface SOP on a million particles, generate a mesh, and then use that mesh to calculate shadow maps which we can use at render time to create shadows on the particles themselves. And the shadow map generation is pretty fast. That being said, if you're interested in rendering objects as volumes, you might want to look up the vm_uniformvolume rendering parameter, which is new in H11. It basically allows for the same thing but is obviously more optimized. cheers, Abdel hackedVolumes.hip
  3. I don't have Houdini on me right now, but the procedure is simply to use a low res volume and assign a shader that does a ray hit test against a supplied geometry (which you can read in at render time as a .bgeo) in +x, -x, +y, -y, +z, -z (usually a subset of these is enough) and determines whether we're inside or outside the geo, plus a falloff to feather the volume edges as desired. The resolution of this shader is then driven by the volume step size - for shadow maps it can be very coarse (as in large steps) and they render very quickly; for beauty, the step size you need depends on your scene scale. And to keep things optimized, keep an eye on the Opacity Limit parameter and experiment with various values - it can accelerate renders a lot.
     Back then, the use case I had was a mass of foam which would have needed ages to IsoOffset at sufficient resolution, and even the LOD volumes I experimented with weren't nearly quick enough. So I took the polygon geo and saved it to a sequence of BGEOs. I then piped the geo into a Volume SOP (to make the volume cover the polygon geo's bounds), and the shader I assigned to the volume was just a modified "Simple Smoke" material. The Smoke material normally looks up a density attribute stored in the volume; I replaced that lookup with a few nested ray hit tests that told me whether I was inside geometry and how far away the closest ray hit was (see the second sketch after this list). Keep in mind that these ray hits are actually pretty fast, since I'm not raytracing (as in evaluating a shader at the ray hit position and possibly branching into more ray hits from there), but just intersecting a vector with an array of polygons. So I ended up rendering the low res volume with the ray hit shader, and I handed the original polygon geometry BGEO to the shader as a file parameter.
     One further optimization I tried was to create a fairly low res, quick IsoOffset and use it as a hint for the ray hit tests. In other words, I used the resulting volume to look up density and the density gradient, which gave the shader a quick hint as to whether it should bother testing for ray hits at all (if density is 1 you know you're inside the volume, and so on), but I ran into some aliasing issues and gave up. Another thing to keep in mind is that Mantra implicitly optimizes volume rendering by testing the density attribute - if it's 0, Mantra doesn't even bother evaluating the shader assigned to the volume. For the little hack I just described, that obviously means the low res "empty" volume you assign the ray hit test shader to has to have its density attribute set to something above zero.
     In my case, rendering at 720p with the volume filling 50% of the frame took 3 to 6 minutes (quad core), and the shadow maps took 10 to 30 seconds. Reference: http://www.sidefx.com/docs/houdini10.0/vex/functions/rayhittest cheers, Abdel
  4. I don't feel that the FLIP/PIC demo scene is really representative, both rendering-wise (the amount of motion blur on the geo) and simulation-wise - 270k particles isn't anywhere near enough for the implied scene scale, and the FLIP solver is made for much higher particle counts (think millions). The demo video is supposed to demonstrate POP control over FLIP-simulated particles, not so much the advantages of FLIP itself.
  5. It is an actual light source, which means that it contributes to direct lighting and gets sampled much better than the old emission approach. It also evaluates the shader on the source geometry to determine the light amount/tint. In other words, it's pretty neat.
  6. By volumes, do you mean voxel based simulations or collision SDFs? Does that mean that voxel grids are flexible in XYZ dimensions, and voxels are added and removed on demand? What do you set up then, the size of a single voxel? That's my point. In the long run, if I have to choose between two options, I'd rather have more flexibility and less speed than the other way around, simply because algorithm optimizations can always happen as we go, whereas switching to a completely new architecture for more flexibility takes a lot of time (see Nucleus for reference...). And trust me, optimizations are coming. cheers, Abdelkareem
  7. Based on that article and the GUI screenshots ("BOPs", "FOPs"), I fail to see the extreme difference between this and Houdini's solvers. Yes, it's probably much faster etc., but how is its concept any different from DOPs? I know we all love to rant about DOPs (and often rightly so, especially when it comes to speed and elegance), but this looks like a similar framework with better performance, a slicker GUI, and minus the interaction with SOPs etc. I know next to nothing about Naiad so far (there's a very sparse information base out there, and lots of marketing talk), but what I do know so far doesn't warrant a "revolutionary" tag. cheers, Abdelkareem
  8. Yes, the particle fluid surface SOP has an option to promote attributes found on the particles onto the mesh and interpolate between them. If you have a rest or uv attribute on your particles and promote it, displacing and texturing the resulting mesh is trivial. cheers, Abdelkareem
  9. What do you mean by "swapping specular and reflection"? A specular function in PBR is a reflection function with a reflection diffusion angle of 0 (a sharp reflection) and should show up in the "PBR combined specular" AOV. cheers, Abdelkareem
  10. Is your geometry rendered as subd? cheers, Abdelkareem
  11. That's probably due to the missing caustic bounces - the internal glass can't pick up the specular contribution through the external shell. Try turning off fake caustics on both the glass material and the Mantra ROP, add the rendering parameter "Allowable Paths" to the Mantra ROP, and set it to "All". If the specular highlights show up, you can either render like this with fairly high samples, or generate caustic maps and use those in the rendering. cheers, Abdelkareem
  12. The "usual" IOR value is the inside IOR divided by the outside IOR (or the other way round, depending on convention). That means that as you usually use an IOR of 1.5 for glass, you are basically using 1.5 / 1.0, where 1.0 is the approximate IOR of air. So denoting the common IOR as two values is just another, slightly more flexible way of expressing the same thing that a single IOR expresses. As for the tail light glass, it definitely shouldn't be one sided, otherwise you'll get a distorted refraction of the inside of the light. It needs to have two sides with correct (outwards pointing) normals on them. The thickness between these two layers has to be larger than the "Raytracing Bias" parameter on the PBR tab of the Mantra ROP. My approach to car lights in general is to put a phonglobe() based shader on the internal reflective surface (basically a normalized blurry reflection) and use glass around it. That reflects light that the bulb emits around the internal structure before sending it out through the glass shell. cheers, Abdelkareem
  13. Does your scene reference any large textures (or maybe delay loads)? Are the textures in RAT format? cheers, Abdelkareem
  14. This is driving me insane: keyboard shortcuts are mouse-context-sensitive, which means that if you're hovering the cursor over a network pane and press "w", you'll add a tree view to it, but if you press the same key over a scene view, you'll toggle between shaded and wireframe. That's all fine and dandy; however, there are some special cases where this doesn't make as much sense, mostly when you have a dialog box open and need to type something into it. Something that happens way too often to me: I'm editing an inline VOP in the built-in editor, and the mouse cursor is blocking a word. I move the cursor off the word and type something, only to realize that behind the editor window I just unleashed a barrage of mostly painful activity, because my keystrokes were sent to wherever the mouse was hovering. This leads to an endless game of hide and seek (especially in smaller fields) where I have to chase the mouse away to see what I'm typing, but be careful not to push the cursor too far off or I won't be typing in the field anymore. I'm actually not sure how to improve this behavior, but it's definitely far from perfect. cheers, Abdelkareem
  15. Oh wow! It's the... announcement... of a... path tracing engine... with all of two demo images. The sheer pizazz is overwhelming. I mean, there are only approximately 7 open source and 4 commercial engines that have been providing the same thing for years, but this one is from the brand that maintains mental ray! cheers, Abdelkareem
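
The sketch referenced in item 1 - a conceptual, VEX-flavoured illustration of what the ray bias does, not how Mantra implements it internally. The shader name, geometry path, and trace distance are placeholders; the only point is that the ray origin gets pushed off the surface by the bias before a secondary ray is traced, which is why the value has to suit the scene scale.

    // Conceptual sketch only: Mantra applies the bias internally when it
    // launches secondary rays; this shader just visualizes the idea.
    surface show_ray_bias(
        float  bias    = 0.001;                  // too small: self-intersection acne
                                                 // too large: light leaks, lost detail
        string geofile = "$HIP/geo/scene.bgeo")  // placeholder geometry on disk
    {
        vector nn   = normalize(N);
        vector rdir = reflect(normalize(I), nn); // an example secondary (reflection) ray
        vector org  = P + nn * bias;             // push the ray origin off the surface

        vector hitP;                             // position of the first hit, if any
        float  u, v;                             // parametric coordinates of the hit
        // the direction vector also bounds the trace distance (here 100 units)
        int hitprim = intersect(geofile, org, rdir * 100.0, hitP, u, v);

        // white where the biased ray hits the supplied geometry, black otherwise
        Cf = (hitprim >= 0) ? {1, 1, 1} : {0, 0, 0};
    }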
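
The sketch referenced in item 3 - a minimal VEX take on the inside/outside density lookup, assuming the polygon geometry has been saved to disk and is handed to the shader as a string parameter. intersect() stands in for the rayhittest() call linked in the post; the function name, the all-six-directions heuristic, and the falloff are illustrative, not the exact shader used back then.

    // Fire rays along +/- x, y and z from the shaded volume sample and count
    // how many hit the supplied polygon geometry. A sample well inside a
    // closed surface hits in every direction; one outside usually misses at
    // least one. The nearest hit distance drives a soft falloff at the edges.
    float insideDensity(string geofile; vector pos; float maxdist)
    {
        vector dirs[] = array({ 1, 0, 0}, {-1, 0, 0},
                              { 0, 1, 0}, { 0,-1, 0},
                              { 0, 0, 1}, { 0, 0,-1});
        int   hits    = 0;
        float nearest = maxdist;

        foreach (vector d; dirs)
        {
            vector hitP;
            float  u, v;
            if (intersect(geofile, pos, d * maxdist, hitP, u, v) >= 0)
            {
                hits++;
                nearest = min(nearest, length(hitP - pos));
            }
        }

        // Crude heuristic: inside only if all six rays hit, feathered by how
        // close the nearest surface is (the original shader used a falloff too).
        return (hits == 6) ? smooth(0.0, maxdist * 0.1, nearest) : 0.0;
    }

Inside a modified Simple Smoke material, something like density *= insideDensity(geofile, P, maxdist) would stand in for the usual density volume lookup.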
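
The sketch referenced in item 12 - a few lines of VEX showing the IOR ratio, assuming the convention of dividing the IOR of the medium the ray leaves by the IOR of the medium it enters (as the post notes, conventions differ, so flip the ratio if your renderer expects the opposite). I and N are the usual surface shader globals.

    // The "single" IOR of 1.5 for glass is really a ratio of the two media:
    float n_glass = 1.5;           // inside the shell
    float n_air   = 1.0;           // outside (approximately)

    // Entering the glass from air; use the reciprocal when exiting again.
    float  eta  = n_air / n_glass; // 1.0 / 1.5 ~= 0.667
    vector refr = refract(normalize(I), normalize(N), eta);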