Everything posted by Serg

  1. Hey guys, too busy to reply with details now, but I will do so later! Thanks, S
  2. HACK ALERT!! In reply to comments on the BSDF bonanza thread. You can use a very, very simple hack to trick the pathtracer into using an irradiance cache, of sorts...

    In the past I've done a lot of point cloud baking for this purpose, in combination with a lot of other `indirect light` VOP hackery, in order to do stuff like the GI light distance threshold... which works, but the indirect light VOP (deprecated stuff) hacks made rendering much slower when more than 1 bounce was needed and the irr cache was not in use... and sometimes the pcwriting would randomly skip objects for reasons I never got to the bottom of. In this case I'm using ptex baking (H15), but I suppose it could be anything...

    Since the ggx thread post was made I had what currently seems to be a much better/simpler idea than how I did it before, without any real modification to the pathtracer. Basically the hack is: plug the ptex bake into Ce and zero F on indirect rays (not the pbrlighting input)... despite zeroing F on indirect rays, Ce is magically looked upon by, erm, indirect rays (see the sketch at the end of this post)... But of course there are lots of practical implications and associated further hackery:
    - Need to wedge bake the entire scene (though it could maybe be selective, like just caching the walls).
    - Auto filling in the shaders' cache file path.
    - Just discovered baking doesn't work on packed geo!!
    - Don't want to be killing indirect reflection beyond the first bounce. This leads to needing pre-multiplied separate F components until they arrive in compute lighting, which in turn means making your own shader struct and all the layer blending tools that go with that. OR (I really, really hope someone knows how to do this and shares it here), make an is_diffuse/reflection/refraction/etc ray VOP.

    I have a hunch that the best way to do irr caching in general might be to voxelize the scene... not only because it would provide a cache for shading as we are doing here, but also because we (meaning SideFX or other brainy people) could then start looking at things like cone tracing (like Nvidia is doing for realtime GI). But the best thing would be (really dreaming here) that it would remove geometry complexity from the problem of raytracing... Basically the voxels would become a geo LOD, so if the cone angle is big and the distance is bigger than say 2 or 3 voxels, then it would do all that expensive ray intersection stuff against the level set instead... forests, displacements, bazillion polys, reduced down to voxels.

    I think this might work because I've been doing this in essence for years, but limited to hair... by hiding hair from all ray scopes and using a volume representation to attenuate/shadow direct/indirect light reaching the hair (nice soft SSS-looking results). But! I hear some say it will be blurry etc., because the volume lacks definition, so there is also a simple illum loop to trace `near` shadows and so add some texture/definition back in... fast... even compared to today's much faster hair rendering in Mantra, and arguably better looking (the SSS/attenuation effect, esp. if the volume shader casts colour-attenuated shadows), but there is the hassle of generating the volumes, even if automated as much as it can be without SESI intervention.

    1m11s for cached, 2m26s for regular. This is using the Principled Shader.

    Btw a test on this simple scene looks like the GI light works (here!), but it is way, way, way brighter than brute force PBR, and yeah, I also had grief with the GI light sometimes not writing the actual file... irrcache_v003.hip
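    A minimal VEX sketch of the hack, just to make it concrete. The shader and parameter names here are made up, and texture() stands in for whatever baked-map lookup (ptex/pcloud/UDIM) you actually use:

        surface irr_cache_hack(export vector Ce = 0;   // emission export mantra picks up
                               string cache_map = "")
        {
            vector cached = texture(cache_map);    // baked irradiance from the wedge bake
            bsdf f = diffuse(normalize(N));        // stand-in for the real layered BSDF

            if (getraylevel() > 0)    // 0 = camera ray, > 0 = indirect ray
            {
                F = f * 0.0;          // zero F so indirect rays stop bouncing...
                Ce = cached;          // ...yet indirect rays still pick up Ce
            }
            else
            {
                F = f;                // camera/direct rays shade normally
                Ce = 0;
            }
        }

    The surprising part is exactly as described above: even with F zeroed for secondary rays, the emission still gets picked up along those rays, which is what makes the cache visible to indirect bounces.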
  3. Irradiance caching

    Found Dennis Albus' thread on ray switch labels (awesome! been trying to figure that out...), so I'll try to use that to improve the results when I get more time... I hope it allows the BSDF components to remain combined rather than editing all shaders and layer blending tools to handle separated BSDFs... and I'm also hoping it might fix the noisy reflections. Ideally I want reflections to behave as the GI light does in reflections (and what my old gather-based system did), in that reflections trace diffuse against the cache, whereas currently the irr cache completely replaces all diffuse in reflections of cached objects at all ray levels, which is the same result in reflections as when "point cloud mode" is On with the GI light. The difference between pcm and the irr cache is that the irr cache is far cleaner, so these reflections are far less objectionable (see post #3 renders). Cheers! S
  4. Irradiance caching

    Done. I rendered with Sample Lock On so we can clearly see how the caching methods differ. It's surprising how the photon cache strobe is not as visible in the beauty as the direct photon render would suggest. It looks like the photons that hit the wall directly stay stable, but of course the ones that hit the statue swim all over. Wonder if it's still forgiving if the light moves... The irr cached render is rock solid though, and now at 1 minute per view-dependent cache frame (with RHF filter) it's starting to look really compelling. Solved the weird reflection on the right wall by setting min reflect ratio to 0.5, but it's pretty clear that the hack confuses the path tracer; it looks as though the indirect rays are playing Russian roulette with diffuse rays that aren't there, and this is still visible in the floor reflections. Mantra is using the same settings as the photon cache render except for min reflect ratio, but it's still clear to see that the noise levels are lower, because the irr cache is far more coherent than a photon cache. The funny thing with photons is that as the photon count increases, blotching becomes less of a problem but variance gets worse, because the photons get smaller in radius and so harder to hit coherently ray after ray. I think there is a very good case for irr caching in Houdini... but like I said before, it needs SideFX to make it work properly and hassle-free. irrcacheVSphotons_001.mov irrcache_v008.hip
  5. Irradiance caching

    That will be an interesting comparison... I suspect any movement on the Buddha catching the light will move those photons around and cause low-frequency strobing on contact points and in dark areas they find hard to reach; let's see. Probably much less likely to happen with image-based caches and Sample Lock On. I suppose I could try the ray histogram filter stuff on this... or even clean it up in post; NeatVideo does a wonderful job. Maybe we can even get away with rendering the view-dependent cache at half FPS and re-timing it in post! There might be some weirdness if I animate the Buddha on the right to come into frame (global cache to view cache transition). Other things to try... distance threshold. I've got a BSDF occlusion node, which is an indirect lighting VOP hacked to gather occlusion attenuated by distance (normally used for my version of attenuation), that I could use to mix brute force PBR back into corners/contacts, rather than switch it off entirely for ray level > 0.
  6. Irradiance caching

    Cheers! One more test, with Happy Buddhas, yay! The only thing that got cached was the walls. Goes to show you don't have to cache everything... the Buddhas still get the benefit. Indirect contribution from them shows up in the regular AOVs, while light from the room ends up in emission.

    I also set up a hybrid of global GI cache + view-dependent cache. The view-dependent cache is just a half-res render from camera where every object but the room is phantom and specular rays are disabled. Fresnel is view dependent, so there is less diffuse energy in the room than there should be (there are ways around it, like a baking mode in the shader where Fresnel is Off)... Everything inside the frame that is not occluded from camera (a self shadow test) uses the view-dependent cache; everything outside the frame or occluded uses the global cache. The view-dependent cache has the potential to provide vast coverage with a cache detail level automatically appropriate for the distance. The point of it is mainly to deal with animated stuff in frame efficiently, with the option of a view-dependent cache per frame, and a static frame for global.

    Gain and Gamma were adjusted until it roughly matches the brightness of the photon cache render. I turned up the settings until quality >= patience limit; pretty good for 8m14s + 4m for both caches (cache time could be much less if using blurry reflections). This was 6x6, 2min 9max, 0.0025 noise level (var AA still struggles with dark transitions). Think PBR would still be chewing the first 16 buckets after this finished, and nowhere near these noise levels. Something weird with the reflection on the right wall though.

    Photon cache does really well: 5m53s with the same mantra settings. 10 million photons were fired; there were some odd glows/blotches with 1 million, but not that big a difference. Quality-wise pretty similar for the time spent, I call it about even; the irr cache took longer but it's cleaner over big areas (less variance in the irr cache).

    More photon caching glitches... I think turning off Prefilter Photons and then re-rendering photons breaks it (black), and it seems to stay stuck black; after a while of trying to get it back (on/offs, different file paths, etc.) it somehow comes back. irrcache_v006.hip
  7. Irradiance caching

    More craziness. This time I'm keeping the F components separate so I'm not terminating reflection rays (you'll see what I mean if you look at the hip). I also set the reflect limit to 10 (previously 1) and rough to zero (to see reflect bounces clearly), and I turned Adaptive Sampling Off because it makes this scene more noisy rather than less. Interesting results... in that the beauty render time difference is bigger than before, at 6.4x faster than brute force PBR, and better noise quality than even photon caching. Photons take a hell of a lot less time to cache than any other way I can think of to bake lighting. ptex baking looks to not be viable at all for high-polygon-count objects... takes an age on something like the Happy Buddha scan (640k polys)... pcwrite from an offset camera would be good, but it's currently saying no to baking anything coming out of pbrlighting. Photon cache is a very practical solution, but it looks much brighter. It would seem either PBR or the photon cache is wrong, but there's probably more to it... With the GI Light in point cloud mode the brightness is also too high, the blotchy photons are clearly visible in reflections, and light leaks at corners. Photon caching is glitchy though; one minute it's working, the next it isn't, and then it works again... irrcache_v005.hip
  8. Irradiance caching

    btw one of the numerous gotchas is that all indirect light comes out in the emission AOV. And yeah, I think it's probably a bug that emission is getting indirect bounces... this may not work for long.
  9. I replied here with something that may or may not help you: http://forums.odforce.net/topic/24137-irradiance-caching/
  10. eetu's lab

    AWESOME stuff, Eetu. This is easily one of the best CG threads ever.
  11. Not using Stylesheets here (have something else), but the point is that unless you set the property "Declare Materials" in Mantra > Rendering > Render tab to Save All SHOPs, Mantra can't see any packed prim materials.
  12. Actually 1 min ray in the above instance looks OK in shadows, as long as the pbr.h hack is in place. Saves another 2 min, for 16m20s.

    Doing some more testing with the pbr.h hack. I think I'm convincing myself it's always better than not, but more so when the scene has dark areas... would be good to get a confirmation/second opinion. I also upped the light's intensity by 2 stops, to see if a simple overall brightness increase increases render times. It does. Looking at the indirect samples map (without the hack), it is obvious that most of the image becomes clamped to max samples. Without the hack and 2 stops less bright like before, it's the opposite: a lot of the image is clamped to min samples.

    So, at half res for speed and 2 stops up. With the hack (1min 10max 0.0025 direct, 1min 40max 0.005 indirect), it takes 5m11s. To get ~comparable quality without the hack, it has to have the noise level halved (1min 10max 0.00125 direct, 1min 40max 0.0025 indirect). It takes 5m34s.

    Very interesting to look at the indirect_samples AOV comparison. Some areas pop up in sample count that were previously dark, while large expanses that have less noise got fewer samples. In other words I'm getting more samples where they're needed and fewer where they aren't. Variance AA out of the box looks more like a remapped luminance map, whereas with the hack it looks more variance related.
  13. No variance AA. Decouple Indirect On, 10 min rays on direct lighting, 40 min rays on indirect lighting. Took 25m22s. Shadows are much better; some areas are over/under sampled compared to the var AA render. But IMO I got more than the 4 minute difference back... + no time spent fiddling with noise levels... Decouple Indirect is great.

    IMO the best balance of time/quality is to not rely too much on var AA, so: var AA back On, 5min 10max direct, 20min 40max indirect. The result took 18m28s, and it's less noisy than the first 21m21s render with 1min 100max. I find more often than not that var AA is best for adding polish... 1 min ray at any meaningful noise level is hopeful... any time I try that I get chewed shadows, esp. without the hack. 1 min is only the best approach if your perception of the noise matches what var AA is catching. Perhaps an easier general strategy is to set decoupled min/max rays to ~40 until the quality is good, then reduce min rays until it becomes unbearable...
  14. But yeah, just set 40 min rays to get rid of the crunchy shadows altogether. It doesn't appear to be possible to get rid of them while using var AA efficiently. Looks like odforce is re-compressing and scaling JPGs?... trying PNG...
  15. At 1080p, the hack + settings produced this result in 21m21s, on an i7 4930K @ 4.3GHz.
  16. You could try the following hack to help bias the samples toward the shadows. Open the pbr.h file in houdini/vex/include and find the line below (should be line# 169):

        lum = sqrt(lum);

    Change it to:

        lum = sqrt(sqrt(lum));

    It seems to help the crunchy shadow artefacts better/more efficiently than reducing the noise level. The original line is what sets the color space from linear to gamma 2.2, so this is just doubling the gamma... I got a good/quick result with that hack plus the following settings:
    - Min Rays 1
    - Max Rays 10
    - Noise Level 0.0025
    - Enable Indirect Sample Limits: On
    - Min Indirect 1
    - Max Indirect 100
    - Indirect Noise Level 0.005

    I wouldn't worry too much about high max ray samples. As far as I can tell, if you set 100 and it takes 10 to reach the noise level, then 10 is all it used. Isn't such a low noise level setting effectively setting every pixel to 40 samples?
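    As an aside on why the extra sqrt biases samples toward the shadows (assuming this remapped luminance is what gets compared against the noise level, which is what the hack implies): a second sqrt stretches dark values apart and squeezes bright ones together. E.g. linear values 0.01 vs 0.02 map to ~0.10 vs ~0.14 after one sqrt (a ~0.04 difference), but to ~0.32 vs ~0.38 after sqrt(sqrt()) (a ~0.06 difference), while 0.81 vs 0.90 go from a ~0.05 difference down to ~0.025. So shadow variance more easily exceeds the threshold and attracts rays, and bright areas less so.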
  17. Hi, I don't see why you should have to go anywhere near those functions to replicate the results of those shaders. IMO you are much better off staying with the BSDF workflow, in just about every way. Especially the env light lookup: this will be nicely importance sampled if you use BSDFs. As far as I can tell, that shader amounts to layering a metallic reflection BSDF over a diffuse BSDF (the mix value represents metal particle vs paint substrate), and a sharp clear coat mixed over this using Fresnel as a mask. If you use Mix nodes as opposed to Adds etc. you should retain energy conservation. The tricky thing is the flake; there are various ways. One I have used (you can find examples of it here somewhere) is to use voronoise to generate random vector patches... then multiply this by some factor (cell ID plugged into a random + power and/or fit range is a neat way to control the sparsity of the flake), add this vector to global N, normalize, and plug it into nN of the metal layer; there's a sketch of this at the end of this post. The apparent roughness will increase depending on the magnitude of the flake vector. iirc I also used Is Front Face to negate the vector if it pointed into the surface. This stuff will buzz like hell as you move away from the surface because it's not anti-aliased noise (and not antialiasable afaik), but you can get around this by using Shading Area fitted to some range that allows you to blend between the `flaked` layer and another that has perceptually the same roughness, so that at a distance the flake disappears and is replaced with regular soft reflections.
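    A rough VEX sketch of that flake idea (purely illustrative: the quantized-cell random below stands in for the voronoise cell ID trick, and all names and magic numbers are made up):

        // perturb the shading normal per flake cell
        vector flake_normal(vector p, n, i; float cells_per_unit, flake_amount, sparsity)
        {
            vector cell = floor(p * cells_per_unit);   // fake cell ID from quantized position

            // per-cell random vector in [-1, 1]
            float rx = random(cell.x * 17.0 + cell.y * 31.0 + cell.z * 47.0);
            float ry = random(cell.x * 29.0 + cell.y * 11.0 + cell.z * 53.0);
            float rz = random(cell.x * 41.0 + cell.y * 23.0 + cell.z * 13.0);
            vector offset = 2.0 * set(rx, ry, rz) - 1.0;

            // random + power controls flake sparsity; higher sparsity kills most cells
            float amount = flake_amount * pow(random(cell.x + cell.y + cell.z), sparsity);

            vector nn = normalize(n + offset * amount);
            return frontface(nn, i);   // flip flakes that point into the surface
        }

    Feed it P (or a rest position), N and I, and wire the result into the metal layer's nN; flake_amount sets how far the flake normals deviate (and hence the apparent roughness bump), while sparsity thins the flakes out.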
  18. Hi ajz, I only did what I did to get the ggx shader working. I'm not using the Disney stuff, but that stuff should not be name clashing with H14 stuff... Probably just needs a Wolfwood compatibility sweep.
  19. I found while migrating to H14 that the SideFX implementation name clashes with Wolfwood's otl, so they can't both be used at the same time. To fix that I made a copy of the ggx cvex shaders, appending a W to the name (this must also be done on the Node tab in Type Properties, or cvex_bsdf() won't find it), and changed the cvex_bsdf() path to reflect this. The H14 implementation does not appear to use microfacet Fresnel shadowing as Wolfwood's does, i.e. as roughness goes up the result should darken a lot. That feature lets you shade specular the physical/easy/good-looking way, with just a roughness map, no need to mess around with reflectivity maps/sliders/scaling factors. Presumably it was implemented this way because the correct albedo still can't be computed, and so the mantra surface still has to post-multiply the specular BSDF with unrealistic (perfect mirror) Fresnel. I'm still using the fake ggx albedo hack as per a few pages back to multiply diffuse with; although obviously not `scientific`, it works better than using the basic Fresnel complement to mult diffuse with. The Disney shading model uses basic Fresnel for this, which causes dark edges when spec rough is high. IMO it's much more wrong to ignore microfacet Fresnel for the sake of mathematically perfect energy conservation with unrealistic Fresnel.
  20. Axis Animation

    Thought I'd post a link to our Vimeo page, so you can see what we have been up to, ask questions, diss us or whatever: http://vimeo.com/axisanimation 99% of it is rendered in Mantra, going back to H8 days. Animation is in Maya. Alien Isolation, Crackdown, Dead Island 2, Fable Legends, Halo One, the list goes on long enough to make me feel old. I'll pop new releases in here as and when we are allowed to. Cheers S
  21. Had a crack at faking a GGX albedo output. It takes IOR and Roughness inputs. Here's a pair of balls: the one on the left is a 1.2 IOR GGX microfacet reflecting a white env ball, whereas the one on the right is my crazy fudgetastic hack. Matches pretty closely with varying IOR and Roughness settings. Well... I expect it's a lot more right than complementing diffuse lighting with the microfacetless Fresnel as I'm currently doing. OTL and hip, hardly any testing was done... GGX_Fake_Albedo.otl GGX_Fake_Albedo.hip And I've just realized I matched against a BSDF that has a shadowing term, which I don't think was a good idea... Well, I guess I can now find out whether it is or not by rendering something with reflections!
  22. "Calculation of albedo needs some thought. Currently the albedo returned is the normalization factor for the distribution function. While this matches how phong() and blinn() are setup, it should instead return the full reflectivity over the hemisphere taking into account frensnel (and masking?)" My guess is masking shouldn't be part of it. My reasoning is that it represents shadowed light, so the absence of light wouldn't be weighting whether there should be more or less diffuse? In the mean time I suppose we could do a white furnace render of a sphere for reference and come up with some approximation to fit the separate microfacetless fresnel based on roughness, by raising/lowering it to some power + fit range or some crazy hack like that I've been using this scene as part of my testing process and following the rule of `everything is 100% reflective at glancing angle", you can see how the diffuse goes over dark at glancing angle due to lack of microfacet fresnel to weight it. I'll try the hack this weekend.
  23. This pint of Hoegaarden has just been re-toasted in your honour
  24. There is an issue with this, in that the camera velocity is not taken into account... This is a problem if the camera is moving with the object, and in numerous other situations... on my list of things to look at properly, but I think it will probably involve a SideFX support/fix. It's a problem whether you are exporting vectors or even rendering 3D blur.
  25. Hi Jim, do you already have a VOP for that? I suppose I can hack one out of the Disney BRDF VOP, but it would be good to have it in the package for consistency/tidiness.

    For various reasons I can't currently just use the Disney BRDF. Our parametrization was already almost identical to what they came up with, but I need the BSDFs to remain separate until the last moment, to allow our indirect lighting system to persist (necessarily), and to allow them to be conditional on things like: are we in pcloud irradiance caching mode, is the cache in use, is SSS in use (requires that Fresnel attenuation output), and a whole bunch of other practical things like thin translucency that works properly with indirect lighting, separate diffuse/spec bump, etc.

    Basically our parameter mixer node is separate from the BSDFs; it just spits out energy-conserving fresnel'ed intensities for diff, spec, coat, sss, opc. Although I know the microfacet goodness is necessarily going to blow that up into a sort of web over the whole thing, I still need things separate, and so fresnel'ed albedos are critical.

    All that stuff is wrapped up into a neat VOP UI, similar to Surface Model but with nice UI/behaviour like Disney. This VOP outputs a struct containing the BSDFs (see the sketch below), which we can either plug into a shader blender or into our custom indirect lighting system that works in raytrace mode (or is bypassed when in PBR). The unpacked BSDFs go into corresponding customized indirect lighting VOPs (for per-component sample/scope/maxdist control and irr cache lookup compatibility), speaking of which, I have been getting results more or less twice as fast as PBR that are visually identical apart from less noise, but that's a future thread where I hope to convince more people to get on the bandwagon and push for per-component sample control (indirect decoupling is not enough), like Arnold.
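    For what it's worth, a hypothetical VEX sketch of the kind of struct I mean (names illustrative, not our actual package):

        // keep per-component BSDFs separate until compute lighting /
        // the custom indirect lighting system gets to them
        struct SeparatedBSDF
        {
            bsdf diff;
            bsdf spec;
            bsdf coat;
            bsdf sss;
            vector opc;   // opacity
        }

    The layer blending tools then operate on the struct members, and only at the very end does everything get summed into F or routed into the per-component indirect lighting VOPs.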