
stelvis

Members
  • Content count

    49
  • Donations

    0.00 CAD 
  • Joined

  • Last visited

Community Reputation

0 Neutral

About stelvis

  • Rank
    Peon
  • Birthday 01/17/1971

Contact Methods

  • Website URL
    http://www.axisanimation.com

Personal Information

  • Name
    stuart
  • Location
    Glasvegas, jockoland
  1. Halo 4 Spartan Ops - Behind the Scenes 1

    All rendered in Mantra, of course.
  2. mia_material VOP OP

     Ward does seem like a good choice (though, judging from the included BSDFs, Ashikhmin might be a good choice as well, since it's physically plausible and seems to fit measured data relatively well compared to some of the other analytical models).

     There's a paper on implementing the Ward BSDF here, including some discussion of the associated function for importance sampling: http://www.graphics.cornell.edu/~bjw/wardnotes.pdf - I've copied the model itself out below for reference.
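     The anisotropic Ward model from those notes looks like this (my own transcription, so double-check the conventions against the paper):

         f_r(\omega_i,\omega_o) = \frac{\rho_d}{\pi} + \frac{\rho_s}{4\pi\,\alpha_x\alpha_y\sqrt{\cos\theta_i\cos\theta_o}}\exp\!\left[-\tan^2\theta_h\left(\frac{\cos^2\phi_h}{\alpha_x^2}+\frac{\sin^2\phi_h}{\alpha_y^2}\right)\right]

     where \theta_h and \phi_h are the elevation and azimuth of the half vector, and \alpha_x, \alpha_y are the two roughness controls.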
  3. mia_material VOP OP

     You could be right, but from what I have read the PDF needed for importance sampling is NOT exactly the same as the BSDF required to compute irradiance. The math is over my head, but that's what I gleaned from reading various papers, e.g.: http://www.cs.princeton.edu/gfx/proj/brdf/brdf.pdf

     From that paper: "Effective importance sampling strategies are known only for the simplest Lambertian and Phong models, and generalizations such as Lafortune's cosine lobes [1997]. More complex BRDFs, including both measured data and physically-based analytic models (such as Cook-Torrance [1982], which has been used for over 20 years) have no corresponding importance sampling strategies."

     The ompf forum is great for this stuff (if you didn't know about it already) - some very clever people on there: http://ompf.org/forum/index.php

     Anyway, take what I say with a pinch of salt, calculus isn't my strong suit - but the sketch below is the distinction I mean.
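     As far as I understand it, a path tracer estimates reflected radiance as

         L_o(\omega_o) \approx \frac{1}{N}\sum_{k=1}^{N}\frac{f_r(\omega_k,\omega_o)\,L_i(\omega_k)\cos\theta_k}{p(\omega_k)}

     The BSDF f_r sits in the integrand, but you separately need a density p that you can both draw directions from and evaluate (ideally one shaped like f_r cos\theta), and for most analytic models no such closed-form p exists - which is what the quote above is getting at.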
  4. mia_material VOP OP

     From my (very recently acquired and still limited) understanding of importance sampling and path tracing, I believe you would ALSO need to supply a PDF (probability density function) to go with the BSDF. This presumably handles the weighting of the path-tracing samples used to compute indirect irradiance. I'm guessing here - I haven't seen any documentation that claims this is what is happening in PBR, I am just assuming that it is.

     I'm also assuming that's the reason that only certain 'popular' BSDFs have been included with PBR, i.e. these are the ones where the PDF is 'obvious' (apparently there is no exact PDF for, say, a Cook-Torrance BSDF, whereas for an Ashikhmin-Shirley BSDF there is a corresponding PDF, so the latter is included but the former isn't - see the worked diffuse case below).

     As Symek says, the inner workings of how these are defined in PBR mode don't seem to have been exposed yet, at least in H10.x.
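     The diffuse case shows why some PDFs are 'obvious': for a Lambertian BSDF f_r = \rho/\pi, the natural choice is cosine-weighted hemisphere sampling,

         p(\omega) = \frac{\cos\theta}{\pi}

     so the per-sample weight collapses to

         \frac{f_r\cos\theta}{p(\omega)} = \rho

     i.e. every sample just gets multiplied by the albedo, with no extra variance from the weighting. That kind of clean cancellation is exactly what models like Cook-Torrance don't have.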
  5. shading params and BSDF's

     I'm not even sure it's got anything to do with PBR itself, really. Now I think about it some more, just tweaking P alone wouldn't work - you'd also need some way of altering some of the associated vectors as well (e.g. the eye vector, vectors to lights, etc.) for that to make sense in terms of actually computing lighting from a different point.

     What motivated this was that I was trying to think of a "PBRish" way to do SSS - by PBRish I mean the whole idea of having only one path-tracing 'sample' per bounce per eye sample. That way, I was thinking, you could (maybe) get SSS effects for 'free', as it were, if you could somehow get mantra to shift where it was shading across the surface for a single shading point, since sampling the local surface area in that fashion would have the same 'expense', samples vs noise wise, as calculating an indirect lighting bounce...

     PBR appeals to me in that it would appear to simplify 'shading optimization' (in terms of choosing noise vs quality) down to one place - the number of primary eye samples - which scales very linearly. Any 'normal' shading approach that requires shaders to fire off multiple rays and gather the results within a loop, called each time the shader is run, creates many more places where you may have to tweak that balance, and also many more opportunities for exponential increases in render time. That is, indirect bounces get exponentially more expensive, whereas with the PBR sampling approach they are relatively 'cheap', as the cost increase is more or less linear with each extra bounce (with the caveat that each subsequent bounce is going to get progressively noisier, though that is offset by the fact that each subsequent bounce level also generally has a progressively less significant effect on overall illumination). A rough comparison of the ray counts is sketched below.

     It would be great to see some examples of shaders for PBR that weren't simply "pass a BSDF" and that's more or less it. I think that would really help get my head around what's doable.
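     To put rough numbers on that (assuming a fixed s secondary rays per gather call): a gather-loop shader recurses, so b bounces cost on the order of s^b rays per eye sample, whereas a path tracer extends each eye sample by one ray per bounce, costing on the order of b rays. E.g.

         s = 16, b = 3: 16^3 = 4096 rays vs 3 rays per eye sample

     You then pay for the path tracer's variance with more eye samples, but that knob lives in one place and scales linearly.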
  6. shading params and BSDF's

     Okay - in certain aspects the previous post is incorrect in its assumptions: you CAN pass a normal to BSDFs, it's just that the VOP for lambert, and the shading model VOP set to diffuse, don't do that explicitly. Simply changing one bit of the inline code worked for that, i.e. in the BSDF part of the code, changing

         bsdf = Kd * diff * diffuse()

     to

         bsdf = Kd * diff * diffuse(Nf)

     (paraphrasing a little - it's not open in front of me right now) works fine. There's a stripped-down version of the idea below.

     Regarding P: I guess I had assumed that one can actually 'fool' mantra (of any flavour) into computing shading from a different P than the one that is actually derived from the shaded point, by directly altering global P in the shader (I don't mean displace the surface, just displace where the shading is being computed from). Is that assumption actually true? So far some experiments seem inconclusive - i.e. I can apparently 'break' the shading by setting a param called P to something, but I have yet to get it to do anything that looks like I have actually displaced the shading in world space (e.g. by adding 0.1 to each component of global P and then passing that to a param called P).
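     A minimal sketch of the normal-passing version as a standalone VEX surface shader (parameter names are mine; the stock VOP code wraps this in a lot more boilerplate):

         surface nf_diffuse(float Kd = 1.0)
         {
             // front-facing shading normal, computed the way the standard VOPs do it
             vector Nf = normalize(frontface(N, I));

             // pass the normal to the BSDF explicitly instead of relying on the default
             F = Kd * diffuse(Nf);
         }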
  7. shading params and BSDF's

     I've noticed that in the supplied VOPs that actually use BSDFs for PBR, they never seem to get passed any of the usual global vars like P and N, whereas for normal rendering this is common. Looking at the actual VEX code, it appears that none of the internal BSDF functions themselves get passed these anyway. So is there any mechanism for altering this stuff before computing the BSDF? E.g. do they implicitly use the global variables supplied through the shading context (and it's just never made explicit)? And if so (I presume they must, somehow), can we actually alter these? I.e. if I modify 'global P' inside the shader (rather than creating a new attribute like "newP" or something) and then call a BSDF function, will the BSDF actually use the modified value?

     Second question: is there access to some mechanism within a shader to get at the guts of how PBR is sampling the point? E.g. the number of samples fired, some kind of 'id' for the current sample, the total number of samples (though I guess I could get that from the output node), whether the shader is being called from a primary camera ray or from a secondary bounce ray, etc. I know we can't necessarily get what type of ray it is, but seeing how PBR can limit bounces, I assume it somehow knows the difference between primary and secondary sample rays internally.
  8. PBR experiments

     See below for the test shader.

     The lambert and specular nodes create a BSDF of diffuse or phong respectively. The specular and lighting model nodes can also create Ashikhmin (set it to anisotropic), matchvex_blinn (set it to blinn), and matchvex_specular (set it to glossy or VEX specular) BSDFs. To access Lafortune and other models you need to write some code rather than just use VOPs right now. The available BSDFs (not all of them, though) are described in the docs here: http://www.sidefx.com/docs/houdini10.0/vex/pbr

     The two "exp" parameters are just for me to quickly adjust the contribution of either BSDF in the shader.
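     In VEX terms the shader boils down to something like this (a hand-written approximation of the VOP network, not the generated code; when no normal is passed, the BSDF presumably picks up the shading globals implicitly):

         surface bsdf_test(
             float diff_exp = 1.0;   // weight on the diffuse lobe
             float spec_exp = 1.0;   // weight on the phong lobe
             float rough    = 0.1)   // phong roughness (exponent taken as 1/rough here)
         {
             vector Nf = normalize(frontface(N, I));

             // sum of two lobes; each "exp" parameter scales one contribution
             F = diff_exp * diffuse(Nf)
               + spec_exp * phong(1.0 / rough);
         }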
  9. PBR experiments

     No, this was in H10 (.643). You just pump a value into an exported attribute called Ce in the shader for anything that needs to have its own luminosity and it works - you don't need to actually connect anything to either Cf or F in the shader output (unless you want to see the actual object, of course). I think Ce has always been the default for emission?

     The shader for the environment sphere is sketched below. Also attached is the hip file for the test scene (character geo removed).

     engineTest3.zip
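     In essence the emissive shader is just this (a minimal sketch, assuming Ce is declared as an export parameter, which is how I have it set up):

         surface emit(
             vector emit_clr  = {1, 1, 1};
             float  intensity = 1.0;
             export vector Ce = 0)
         {
             // PBR treats anything that writes to Ce as an emitter;
             // Cf and F are deliberately left untouched here
             Ce = emit_clr * intensity;
         }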
  10. PBR experiments

     Finally (yes, I promise): I found this link really interesting. It's a short paper comparing the 'accuracy' of various popular BSDFs (most of them bundled with Houdini's PBR, if only accessible by code in some cases) in terms of replicating measured values from real-world samples: BSDF comparison

     Download the presentation (it's a PDF as well, despite the description) rather than the paper.
  11. PBR experiments

     Next up was looking at whether an environment light is equivalent to an inverted sphere (entirely encompassing the test objects, again with an emissive shader applied). The quick answer appears to be yes.

     There was a big difference on this test, though, in that both the sphere and the light were mapped with the same HDRI image and had exactly the same intensity (i.e. both were set to 1.0). The small difference in colour (they seem to be equivalent in terms of luminosity) is probably down to the env light using some kind of fancy box mapping and my emissive sphere being simply spherically mapped, so different bits of the image show up in the reflected light across either set of tests. (Indirect tests with the sphere on the left, direct tests with the env light on the right; diffuse tests top row, phong tests (roughness at 0.01) bottom; bounces constrained to remove additional secondary illumination.)

     So it seems that where the emissive object covers the entire sampling hemisphere, its luminosity is exactly equivalent to an env light set to the same intensity value, in relative terms.

     Next up are 3 tests with bounces increased to 4 for all 3 terms in the PBR ROP options, and a test shader set to (diffuse BSDF * 0.65) + (phong BSDF * 0.05). The first image is the indirectly sampled emissive sphere, the second image the equivalent environment light, and the third image the original area light at 9 units size.

     These images help throw into relief the original issue I was struggling with: namely, trying to balance overall global illumination (given that an env light and an emissive sphere are pretty much equivalent, it doesn't matter which is used for that) with a more direct light source - a very common approach to general shot lighting, here at least. To be more exact, the relative difference in brightness between specular and diffuse illumination with a direct light source seems to be much greater than with a 'global' type light source used for ambient GI (including ambient specular reflections), and there's no way to separate out this behaviour, given that a surface can only have a single ultimate BSDF and lights can't be set to spec-only or diffuse-only under PBR.

     What this (long) set of experiments has shown, though, is that I would get pretty much the same results with an equivalently bright emissive object instead of the area light doing the "direct lighting" (but with far more noise), and that there isn't really the big difference between indirect path tracing and direct lighting under PBR that I had originally assumed to be the issue. It's obviously more 'physically correct' than I had thought, and I guess I have relied so much on 'fix it in post with independent AOV control' type methods that actually addressing how one should properly balance global and direct illumination within one shader, without resorting to "cheating" at the light level, is something I've not had to deal with recently.

     This series of tests also threw up what looks to be a bug with PBR, on H10 at least: the image on the left is the emissive sphere plus the area light visible at the same time, whereas the one on the right is the env light and the area light both turned on. The latter image should look much like the former, but there's something very strange going on in terms of how the PBR engine is attempting to sum both light sources in the image on the right. Given that PBR only ever evaluates one light and one indirect ray per sample, one would expect the test using the sphere to be better resolved noise-wise (since it's making better use of the samples), but the test with the two lights still looks f*!ked up to me.
  12. PBR experiments

     Next set of tests: same thing, but with a phong BSDF on the test objects rather than a diffuse. This time the sequence shows results with the same size of sphere/area light (i.e. 3 units) but with varying roughness values of 1.0, 0.1 and 0.01. Indirect tests are on the left (sampling was significantly reduced to speed things up), direct lighting on the right.

     I was somewhat surprised to see that both sets of tests showed quite similar results (given matched brightnesses of source illumination). Interestingly (and this is actually what inspired all of this), given that the area light has a 'real world' brightness of 230 for its size and position compared to an equivalent emissive object, neither set of tests shows specular reflections that are brighter than the respective light sources.

     I did the same set of tests at a sphere/area light size of 9 units but have omitted the results here for brevity. The only significant thing to note was that the results matched again, with the caveat that the indirect tests were 3 times brighter than before whereas the direct light tests showed no change in luminosity - again, presumably because area lights are 'fixed' at a constant brightness regardless of area, whereas the effect of the emissive sphere is directly proportional to its size relative to the shaded point. I'm not sure this behaviour of area lights is really desirable in a PBR context, as it doesn't match how anything affects indirect lighting (i.e. returned luminosity is a function of the proportion of 'light' across the entire sampled environment, and thus the relative size of a luminous object is important - this seems more physically correct at any rate).

     ...Anyway, this had cleared up a fair few things in my understanding of what's going on under the hood of PBR - or it had, until I cranked roughness way down to 0.0001 (indirect test first on the left, direct lighting on the right). Two things puzzled me here:

     1) at very small roughness values the indirect sampling starts to diverge significantly from the direct lighting in terms of reflected luminosity - maybe the similarities are only approximate at non-extreme values of roughness? (Next time I'll try different BSDF functions to see if they behave similarly.)

     2) the indirect test shows reflected luminosity roughly twice as bright as the actual light source (the info box is picking out the brightest spec hit I could find on the torus - a value of 427 is almost twice as bright as the actual emission value of 230).

     The latter result is really weird, as I had assumed that all the indirect rays were simply averaged, and that the BSDF function in an indirect lighting context was only determining which direction they were sent off in (i.e. it biases the samples along the direction of incidence). If they were simply averaged, that would mean the reflection could NEVER be brighter than the thing it's reflecting. This result suggests that the BSDF is also modulating the intensity of the sampled values, and in extreme cases magnifying it somehow? See the guess below.
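     My best guess at puzzle 2, for what it's worth: the samples aren't simply averaged - each one is weighted by f_r cos\theta / p before averaging, so the converged result is

         L_o = \int_\Omega f_r(\omega_i,\omega_o)\,L_i(\omega_i)\cos\theta_i\,d\omega_i

     If, at extreme exponents, the phong lobe isn't normalised so that

         \int_\Omega f_r\cos\theta_i\,d\omega_i \le 1

     then a reflection can legitimately come out brighter than the emitter - it would be the BSDF doing the magnifying, not the sampling.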
  13. PBR experiments

     Did some testing to see whether a BSDF-based shader using the PBR rendering engine is "equivalent" in terms of how it computes illumination for both direct lights and indirect lighting. The latter was tested by using emissive objects, with a shader set to put some value into the Ce output variable, which is apparently all PBR needs to treat something as emissive.

     One of my initial assumptions when setting this up was that there really wasn't any equivalence, and that specular BSDFs in particular would give quite different results - seems that I was wrong about that, in some respects at least. Given an emissive object the same size, shape and position as an area light, if you modulate its brightness (i.e. multiply Ce in the shader) so that it gives roughly the same diffuse illumination as the area light using one-bounce PBR, then both this object and the area light actually give very similar results across a broad range of BSDF values (I tested diffuse and various roughness settings on a phong) - i.e. the results from indirect path tracing seem to approximately match those from direct lighting. I had kinda reckoned they were divergent, but apparently not.

     In the following images I set up a sphere with an emissive shader so that it matched precisely the size and position of a spherical area light. I then rendered EITHER the sphere OR the area light to see how the results differed. For the diffuse test, bounces were limited to 1 for the sphere (otherwise it doesn't illuminate at all) and 0 for the area light - i.e. there are no additional bounces to give secondary illumination amongst the test objects themselves - and the shader used for the test objects was literally just a diffuse BSDF.

     1. Sphere only, at sizes of 3, 9 and 20 units (it was matched in terms of brightness to the area light at 3 units - note that the white clamping point is different in all 3 images to account for the increased amount of illumination as it gets bigger, and that the difference in brightness is roughly linearly coupled to the size of the sphere - see the note on solid angle below). We can also see here that the sampling required to effectively eliminate noise gets progressively more expensive as size decreases, which was to be expected, given that a diffuse BSDF should be sampling the entire hemisphere above each shading point equally and only a few rays are going to hit the emissive sphere itself. The images above used about 16,000 samples per pixel and were quite slow; only on the 20 unit sphere was that number of samples really enough.

     Same set of tests for the area light at the same range of sizes: the illumination stays at the same intensity regardless of size. Direct lighting is a lot more efficient noise-wise, again expected - though I think I like the shadows in the indirect test. And just to prove that sampling quality doesn't really have any effect on how the illumination is actually being computed for direct lights, the following image used only 1 sample per pixel. There's no way to do something similar on the indirect tests without just ending up with total noise (so indirect sampling is definitely different from calculating direct lighting).
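     For the record, my understanding of why the sphere's contribution scales with its size while the area light's doesn't: what matters for the indirect estimate is the solid angle the emitter subtends from the shaded point. For a sphere of radius r seen from distance d that's

         \Omega = 2\pi\left(1 - \sqrt{1 - (r/d)^2}\right)

     which grows with the sphere, whereas the area lights appear to normalise their output to a constant total regardless of area (the solid-angle formula is standard geometry; the normalisation part is my inference from the renders above).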
  14. Did a few experiments today comparing indirect lighting from emissive objects with direct light sources (both area lights and environment lights) - rather than hijack the OP's thread further, I've started a new one.
  15. Didn't mean to sound like I was moaning - I'm just trying to get my head around what it actually does.

     IMHO part of the problem is that it's called "physically based rendering", which sort of sets up some expectations that it's fundamentally more physically correct in how it treats illumination. I don't think it really is, though. As far as I can tell it's really a combination of a different, more 'efficient' sampling framework and a clever idea about how to use BSDFs to shape that sampling; but apart from that it doesn't fundamentally treat lights, or indeed indirect reflections, any differently from non-PBR. It just samples them differently, and allows one to use the same function (or combination of functions) to shape both direct and indirect lighting without actually changing anything about how the illumination from either is computed at a low level.

     There is definitely some mileage in being able to work with BSDF functions directly, and I'm quite interested in seeing what advantages that might bring. The frustration is more to do with the lack of documentation about what it's actually doing, and the lack of VOP-level support for stuff that doesn't hide the PBR side of things away from the user, which means a lot of guessing and making assumptions that may be incorrect.

     Anyway, I guess I need to take a closer look at H11 at this point, if it's changing radically.