Everything posted by Mario Marengo

  1. Maya to Mantra

    Hello OdForce! Circumstances have recently forced me to explore the possibility of rendering directly to Mantra from Maya -- that is: generating an IFD directly from Maya. This is in contrast to the more typical exporting of scene elements to Houdini (via some intermediate format like Alembic/FBX, say) and then rendering from Houdini, which I realize is still a valid avenue open to us. Instead, I'm looking into the possibility of a method that will allow Maya users to simply "press render" and have Mantra transparently service said renders behind the curtains. My uncertainty with the whole thing lies primarily on the Maya side, because while I'm quite comfortable with Mantra and IFDs, I'm very much *not* a Maya power user. I realize this is not at all a trivial task (and perhaps not even worth the effort in the end), and I'm also conversant with some of the individual components that may be put to use in a hypothetical solution:

    - Maya & Houdini C++/Python toolkits
    - IFDs
    - the reference SOHO implementation
    - Houdini Engine for Maya
    - etc...

    But I'm curious, so I thought I'd tap into the vast Houdini brain-store here to see if anyone has had experience with this, or can point me to existing partial/complete solutions (I'm aware of at least one partial attempt), or simply has travelled down the road enough to say "don't even think about it, you fool!" TIA!
  2. Glass

    Stu already posted the ultimate glass shader, but I thought it might be fun to try it the old-fashioned way, just for giggles. I'll spread this over several posts so I can add bits and pieces as I find the time to work on them. I have a bunch of code lying around that deals with glass, but since I'm sharing, I thought I'd take the opportunity to rethink the whole thing from scratch, and post as I build it. That way we can also discuss each step separately and critique the choices made along the way.

    Requirements: A complete glass shader should support the following optical properties: reflection, refraction, transmission, absorption, and dispersion (did I leave anything "visible" out?). All these effects are wavelength-dependent, so there's a big choice to be made along the way regarding the potential need for a spectral color model. This hypothetical "complete" model would be relatively expensive to compute (in *any* renderer) and clearly "overkill" in many situations (e.g: glass windows), so we'll need the ability to turn features on/off as needed (or end up with several specialized shaders if we can't keep the full version both flexible *and* efficient).

    Space: It is customary to do all reflectance calculations in tangent space, where the x and y axes are two orthonormal vectors lying on the plane tangent to the shade point "P", and z is the unit normal to that plane. This simplifies some of the math and can therefore lead to more efficient code. However, since both VEX and RSL provide globals and runtime functions in some space other than tangent ("camera" space for those two), working in tangent space will inevitably mean some transformations. Whether working in tangent space is still advantageous after that remains to be seen. As a result, we'll need to look at the costs involved in writing our functions in either space, and base our final choice on what we find.
    Naming Conventions: I'll adopt the following naming conventions for the main variables:

    - vector n - unit normal to the surface
    - vector wi - unit incidence vector, points *toward* the source
    - vector wo - unit exitance vector, points *toward* the observer
    - vector wt - unit transmission direction
    - [float|spectrum] eta_i - index of refraction for the incident medium
    - [float|spectrum] eta_t - index of refraction for the transmissive medium (glass)
    - [float|spectrum] kr - fraction of incident light/wavelength that gets reflected
    - [float|spectrum] kt - fraction of incident light/wavelength that gets transmitted

    All angles, unless otherwise stated, are in *radians*! All vector parameters, unless otherwise stated, are expected to be normalized!

    Fresnel: This is the workhorse for glass; it alone is responsible for 90% of the visual cues that say "glass". The Fresnel functions determine the fraction of light that reflects off a surface (and also the fraction that gets transmitted, after refraction, *into* the surface). Glass is a "dielectric" material (does not conduct electricity), so we'll use that form of the function. We'll also ignore light polarization (we're doing glass, not gems... a full model for gem stones would need to take polarization into account). But wait! Both RSL and VEX already *have* this kind of fresnel function, so why re-invent the wheel?!? Implementations are all slightly different among shading languages. Having our own will hopefully provide a homogeneous (actually, we're shooting for "identical") look and API across renderers -- if we find the renderer's native fresnel is identical to ours, we could always choose to switch to the native version (which is usually faster). The following is, to the best of my knowledge, an accurate Fresnel implementation in VEX for dielectrics (unpolarized). In case we find it useful at some point, I give it for both "current" space (camera space for VEX and RSL) and tangent space.
    Here's the fragment for world space:

    // Full Fresnel for dielectrics (unpolarized)
    //-------------------------------------------------------------------------------
    // world space
    void wsFresnelDiel(vector wo, n; float eta_i, eta_t;
                       export vector wr, wt; export float kr, kt;
                       export int entering)
    {
        if (eta_i == eta_t) {
            kr = 0.0; wr = 0.0;
            kt = 1.0; wt = -wo;
            entering = -1;
        } else {
            float ei, et;
            // determine which eta is incident and which transmitted
            float cosi = wsCosTheta(wo, n);
            if (cosi > 0.0) { entering = 1; ei = eta_i; et = eta_t; }
            else            { entering = 0; ei = eta_t; et = eta_i; }

            // compute sine of the transmitted angle
            float sini2 = sin2FromCos(cosi);
            float eta   = ei / et;
            float sint2 = eta * eta * sini2;

            // handle total internal reflection
            if (sint2 > 1.0) {
                kr = 1.0; wr = 2.0*cosi*n - wo;
                kt = 0.0; wt = -wo; // TODO: this should be zero, but...
            } else {
                float cost  = cosFromSin2(sint2);
                float acosi = abs(cosi);

                // reflection
                float etci = et*acosi, etct = et*cost,
                      eici = ei*acosi, eict = ei*cost;
                float para = (etci - eict) / (etci + eict);
                float perp = (eici - etct) / (eici + etct);
                wr = 2.0*cosi*n - wo;
                kr = (para*para + perp*perp) / 2.0;

                // transmission
                if (entering != 0) cost = -cost;
                kt = ((ei*ei)/(et*et)) * (1.0 - kr);
                wt = (eta*cosi + cost)*n - eta*wo;
            }
        }
    }

    The support functions like cosFromSin2() and so on are there just for convenience and to help with readability. These are included in the header. After some testing, it looks like VEX's current version of fresnel() (when used as it is in the v_glass shader) and the custom one given above are identical. Here's an initial test (no illumination, no shadows... this is just a test of the function). Yes, you'd expect the ground to be inverted in a solid glass sphere. The one on the right is a thin shell. I ran a lot of tests beyond that image, and I'm fairly confident it's working correctly. The first thing that jumps out from that image, though, is the crappy antialiasing of the ground's procedural texture function for the secondary rays.
In micro-polygon mode, you'd need to raise the shading quality to 4 or more to start getting a decent result... at a huge speed hit. It looks like there's no area estimation for secondary rays in micropolygon mode. In ray-tracing mode however, things are good -- whether it uses ray differentials or some other method, the shader gets called with a valid estimate and can AA itself properly. The micropolygon problem needs to get looked into though. You would expect the built-in version to run faster than the custom one, and it does (~20% faster)... as long as you keep the reflection bounces low (1 or 2). As the number of bounces increases, our custom version starts out-performing the built-in one. Yes, this is weird and I very much suspect a bug. By the time you get to around 10 bounces, the custom code runs around 7 times faster (!!!) -- something's busted in there. OK. That's it for now. It's getting late here, so I'll post a test hipfile and the code sometime soon (Monday-ish). Next up: Absorption. (glass cubes with a sphere of "air" inside and increasing absorption -- no illumination, no shadows, no caustics, etc)
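If you want to sanity-check the Fresnel math outside of VEX, here's a rough Python transliteration of the reflectance portion of the function above (the names and structure are my own; it returns only kr, not the reflection/transmission vectors):

```python
import math

def fresnel_dielectric(cosi, eta_i, eta_t):
    """Unpolarized Fresnel reflectance for a dielectric interface.
    cosi: signed cosine between wo and n (positive means entering).
    Returns (kr, entering); kr == 1.0 signals total internal reflection."""
    entering = cosi > 0.0
    ei, et = (eta_i, eta_t) if entering else (eta_t, eta_i)
    sini2 = max(0.0, 1.0 - cosi * cosi)     # sin^2 of the incident angle
    eta = ei / et
    sint2 = eta * eta * sini2               # Snell's law, squared
    if sint2 > 1.0:                         # total internal reflection
        return 1.0, entering
    cost = math.sqrt(1.0 - sint2)
    acosi = abs(cosi)
    # parallel and perpendicular polarization terms, averaged
    para = (et * acosi - ei * cost) / (et * acosi + ei * cost)
    perp = (ei * acosi - et * cost) / (ei * acosi + et * cost)
    return 0.5 * (para * para + perp * perp), entering

# Normal incidence on glass (eta 1.5): kr = ((1.5-1)/(1.5+1))^2 = 0.04
kr, _ = fresnel_dielectric(1.0, 1.0, 1.5)
print(round(kr, 4))   # -> 0.04
```

The familiar "4% at normal incidence" result for glass falls straight out, which is a handy smoke test for any Fresnel implementation.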
  3. The SSS Diaries

    Hi all, I have a few days of "play time" ahead of me, so I thought I'd revisit the various SSS models and see if I can come up with something a little more user friendly. And since I'm sharing the code, I thought I'd take a cue from Marc's "Cornell Box Diaries" and share the process as well... selfishly hoping to enlist some of the great minds in this forum along the way. My initial approach to this SSS thing was a somewhat faithful implementation of the di-pole approximation given in Jensen's papers. However, that model is very hard to parameterize in a way that makes it intuitive to use; the result is that, as it stands, it can be very frustrating. Regardless, I'll continue to think about ways to re-parameterize that model; but I must confess it's evaded every one of my attempts so far -- maybe I can convince someone here (TheDunadan?) to look at the math with me. As a user, I'd love to have a model that I can control with just two parameters:

    1. Surface Color (or "diffuse reflectance"). We need to texture our surfaces (procedurally or otherwise), so we must have this one. In Jensen's model, this gets turned into the "reduced scattering albedo", which in turn gets used to calculate the actual scattering and absorption properties of the material; all of which relate to each other in very non-linear ways, making it hard to control. So the goal here is to come up with a "what you set is what you get" model (or as close to that as possible).

    2. Scattering Distance. This should behave exactly as one would expect; i.e: "I want light to travel 'this far' (in object-space units) inside the medium before it gets completely extinguished". No more and no less. Well... the main problem with an exponential extinction (Jensen) is that, while physically correct, it never quite reaches zero, so again, it is hard to control.
    At this point in time, I don't see how any model that satisfies this "two parameter" constraint can ever also be physically correct -- meaning whole swathes of Jensen's model will need to go out the window. And first in the list of things to disappear will likely be the di-pole construction... next in line is the exponential falloff... and the list grows... OK. Looking over a whole bunch of papers, I think I've decided that Pixar's approach from the SIGGRAPH 2003 RenderMan course notes (chapter 5, "Human Skin for Finding Nemo") is the closest thing to what I'm looking for, so I'll start with that. I'll post my progress (and the code, natch) in this thread so people can take it for a spin and see what they think. Cheers!
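To make the "never quite reaches zero" complaint concrete, here's a quick Python illustration (my own sketch, not code from Jensen's or Pixar's models) comparing exponential extinction with a simple falloff that is exactly zero at a chosen distance:

```python
import math

def extinction_exponential(d, sigma_t):
    """Beer-Lambert style extinction: physically based, but never reaches 0."""
    return math.exp(-sigma_t * d)

def extinction_finite(d, max_dist):
    """Artist-friendly falloff: a smoothstep that is exactly 0 at d >= max_dist,
    matching the 'light travels this far and no farther' expectation."""
    t = min(max(d / max_dist, 0.0), 1.0)
    return 1.0 - t * t * (3.0 - 2.0 * t)

# At the nominal "scattering distance" the exponential still transmits ~0.7%...
print(extinction_exponential(5.0, 1.0))   # small, but nonzero
# ...while the finite version hits exactly zero, as a user would expect.
print(extinction_finite(5.0, 5.0))        # -> 0.0
```

That residual tail is exactly what makes the exponential model hard to drive from a single "scattering distance" parameter.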
  4. Maya to Mantra

    Thanks for your thoughts^H^H^H^H^H^Hwrenches, Jim! :-) Yup, I agree with all of it. My new mission: to convert all these Maya heathens to Houdini! Cheers.
  5. fresnel vs refract

    The only difference between the two is the way in which they deal with total internal reflection (TIR). If you look at their output for the cases where the transmission is valid (kt>0), you'll see that they're identical. When writing this kind of function, there's always the question of what to do with the transmission vector (T) when there is no valid transmission (which happens under TIR, since you end up with a div-by-zero or root-of-a-negative and T is undefined). You could return a zero vector (possibly dangerous), or assume the user will always inspect kt and deal with the issue (not very reliable), or pick something arbitrary that's "incorrect" but not zero. Both functions choose the last approach, but they make different choices (a matter of legacy behaviour, I think). Under TIR, fresnel() returns T=I (direct transmission), and refract() returns T=R (mirror reflection) -- and that's why you see a difference (notice that the difference is only happening in the black portions of the image above; i.e: under TIR). Knowing this, you can modify refract() to match fresnel() (or vice versa) by inspecting kt. Though of course, the moral of the story is not so much "here's how you make them match", but rather "here's where you have no business refracting at all". I've added a few parms to your shader so you can explore all of the combinations I mentioned above. HTH. mgm_refraction.hipnc
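For the curious, here's a Python sketch of those two TIR conventions (my own transliteration with made-up helper names; in the actual shader you'd simply branch on kt in VEX):

```python
import math

def dot(a, b):     return sum(x * y for x, y in zip(a, b))
def scale(a, s):   return tuple(x * s for x in a)
def sub(a, b):     return tuple(x - y for x, y in zip(a, b))
def reflect(i, n): return sub(i, scale(n, 2.0 * dot(n, i)))

def refract_dir(i, n, eta):
    """GLSL-style refraction (i points toward the surface, eta = ei/et).
    Returns None under TIR, where T is mathematically undefined."""
    ndoti = dot(n, i)
    k = 1.0 - eta * eta * (1.0 - ndoti * ndoti)
    if k < 0.0:
        return None                      # no valid transmission
    return sub(scale(i, eta), scale(n, eta * ndoti + math.sqrt(k)))

# The two legacy policies for "what do we hand back under TIR":
def t_refract_style(i, n, eta):
    t = refract_dir(i, n, eta)
    return t if t is not None else reflect(i, n)   # T = R (mirror reflection)

def t_fresnel_style(i, n, eta):
    t = refract_dir(i, n, eta)
    return t if t is not None else i               # T = I (direct transmission)

# A ray straight down the normal passes through undeflected under either policy.
```

The two functions agree everywhere transmission is valid and only diverge in the TIR region, which is exactly the difference visible in the black portions of the comparison image.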
  6. ramp problem

    I think this might be what you're after: 1. Move the Pos null to move the center of influence 2. Switch "Noise Space" from "Fixed" to "Relative" to make the noise space "stick" to the center of influence. HTH. mgm_moving_ramp.hip
  7. Transfering attributes one by one...

    Thanks, Eetu! Nice to be back.
  8. Transfering attributes one by one...

    Here's one possible approach using vex (PointWrangle SOP). Comments in the code. cummulative_transfer.hip
  9. Maya to Mantra

    Thanks, Francis. Yes, I've just recently started looking into the guts of HouEngineForMaya and am in touch with SESI... so we'll see. I'm fairly certain now, however, that at least to start with, we'll tackle things in the traditional way (Maya->Fbx/Alembic/etc->Houdini->Mantra), just for practical reasons. But as I familiarize myself better with some of the components, I may decide to tackle a direct Maya->IFD solution (in which case I'd make it open source). The purpose behind this initial exploration was to A) find out if any such thing exists out there, to which the answer is clearly "no" -- at least not publicly -- and B) failing a ready-made tool, figure out the scope of a roll-your-own solution, to which the answer seems to be "rather large". ...but I'll continue to pull the string, of course :).
  10. Unified Noise problem

    Yes, 4d->perlin->1d and 4d->perlin->3d included. Again, if you need to patch by hand right now, these additional two changes are: At lines 395-398, where it currently reads:

    #define ns_fperlin4 \
        nsdata ( "perlin" , 0.0168713 , 0.998413 , 0.324666 , 1 ) // +/- 0.0073
    #define ns_vperlin4 \
        nsdata ( "perlin" , 0.00576016 , 1.025 , 0.32356 , 1 ) // +/- 0.0037

    it should instead read:

    #define ns_fperlin4 \
        nsdata ( "perlin" , 0.0168713 , 0.998413 , 0.507642 , 1 ) // +/- 0.0073
    #define ns_vperlin4 \
        nsdata ( "perlin" , 0.00576016 , 1.025 , 0.518260 , 1 ) // +/- 0.0037

    The rest of the 4d stats look OK -- and it was the 4D batch that was compromised for some reason (maybe because I ran it in Windows?). As an aside, I'd recommend using simplex over perlin from now on, if you can. Thanks for catching these!
  11. Unified Noise problem

    Bug and fix submitted (ID:49911).
  12. Unified Noise problem

    Huh. Seems like there was a little hiccup when auto-generating the stats tables. Not sure what happened there, but the "running mean" value (which is not necessarily the same as the average of the minimum and maximum values encountered -- it is the mean over approx 5 million samples) for the ns_fsimplex4 wrapper didn't get calculated properly, or got corrupted somehow (it got assigned a value of 0.294652, which is clearly wrong). These tables were generated automatically and are used by the algorithm to "normalize" the output to [0,1] (while spanning as much of that range as possible). The biasing you're seeing in that particular flavor of noise (simplex, 4d-IN, 1d-OUT) is due to this flawed entry in the tables. I haven't checked lately so this may have been corrected already by SESI, but I'll post the BUG just in case anyway.

    In the meantime, if you need a fix "right now", you can change one value in the file $HH/vex/include/pyro_noise.h as follows (you may need to change permissions on that file to be able to edit it): At lines 403-404, where it currently reads:

    #define ns_fsimplex4 \
        nsdata ( "simplex" , 0.0943673 , 0.912882 , 0.294652 , 1 ) // +/- 0.0064

    it should instead read:

    #define ns_fsimplex4 \
        nsdata ( "simplex" , 0.0943673 , 0.912882 , 0.503625 , 1 ) // +/- 0.0064

    This new mean value of 0.503625 may not be super accurate (it's just the average of the min and max), but all simplex means hover around 0.5 anyway, so it couldn't be too far wrong either. Hope that helps.
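For context, the remapping that those nsdata() entries drive might look something like this in Python (this is a guess at the scheme from the description above -- min/max span the range and the running mean pins the midpoint via a bias curve; the real pyro_noise.h implementation may differ in detail):

```python
def normalize_noise(x, nmin, nmax, nmean):
    """Remap raw noise output to [0,1]: a linear fit from the measured
    [nmin, nmax] span, then a Schlick bias so the measured mean lands at 0.5.
    A bad nmean entry (like the corrupted 0.294652) skews every output value,
    which is exactly the biasing described above."""
    t = (x - nmin) / (nmax - nmin)          # span -> [0,1]
    m = (nmean - nmin) / (nmax - nmin)      # where the measured mean landed
    # Schlick bias with b = 1-m maps t=m to 0.5 while fixing 0->0 and 1->1
    b = 1.0 - m
    return t / ((1.0 / b - 2.0) * (1.0 - t) + 1.0)

# With the corrected simplex-4D stats, the mean maps to mid-grey:
print(round(normalize_noise(0.503625, 0.0943673, 0.912882, 0.503625), 3))  # -> 0.5
```

The point of the sketch is just that the tables feed a fixed remap, so one corrupted mean entry biases every sample of that noise flavor.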
  13. pointcloud vopsop issue

    A somewhat related old thread that may be useful.
  14. material id render

    All good suggestions, but I think the problem is not so much how to generate a "unique color" (however that's defined) per material, but how to decompose the result after the fact. Here's some interesting reading[1] on this topic (and much more). He uses principal component analysis (PCA) in various color spaces (finally landing in Lab/Luv IIRC), along with some "fuzzy" algebra to get very impressive results. But, you know, not exactly "simple" this. The obvious problem scenario that springs to mind is: What happens when one of your materials is semi-transparent/translucent? And what if it *is* translucent and 15 other materials are partially showing through it? Can you recover any one of them from the resulting color soup? [1] Arash Abadpour, Master Thesis, COLOR IMAGE PROCESSING USING PRINCIPAL COMPONENT ANALYSIS, 2005
  15. H12 Pyro Shader & Scattered Emission

    Hi Alexey, I'm hoping to get to it this weekend... A couple of VEX functions had issues, and those have been fixed, which is good. The problem is that my original treatment of PC-based emission unwittingly relied on one of these flaws (and what was solving "fast" was actually fast, yes, but also wrong). Long story short: even though PC-based emission is now "fixed" (i.e: it doesn't crash or generate garbage point clouds), it is no longer "fast"... which is what remains to be fixed. Thanks for all your testing. I'll let you know when a fix is submitted. Cheers.
  16. H12 Pyro Shader & Scattered Emission

    There was a change introduced in VEX right after the release candidate was frozen. This had the unfortunate side effect (no pun intended) of breaking the PC-based scattering portion of the Pyro2 shader. Now that we again have access to daily builds, I'll try to address this problem. Unfortunately, PC-based scattering will remain pretty much unusable (for daily builds) until then. Sorry about that. Just unfortunate timing. I'll try to remember to update this thread once a fix is submitted.
  17. writing help for shader

    #pragma help is for writing help for the overall operator. To add a "tooltip"-style popup help for a single parameter, use #pragma parmhelp
  18. Questions about shader writing

    The short answer: "Use PBR". MIS is used by the default PBR path tracer. The path tracer is written in VEX and, if you're interested, you can look at its source code in $HH/vex/include/pbrpathtrace.h. This means you could, in theory, customize pretty much all of PBR except for the BSDFs (bsdf's are not written in VEX). The PhysicallyBasedSpecular VOP, and all other "Physically Based xxxx" VOPs, resolve to a BSDF -- notice that its output (F) is not a color (vector type) but a BSDF type (which is an opaque type that represents a linear combination of scattering distributions, or "lobes"). All these nodes that only output an 'F' (a bsdf) are meant to be used with the PBR engines. You can look at their code by RMB on the VOP and selecting "Type Properties...", then clicking on the "Code" tab of the Type Properties dialog to see the source code for that VOP. You'll notice that none of these "Physically Based" VOPs use illuminance() or phongBRDF() or any of those functions. PBR samples (or transports) light differently than MP or RT -- for example, you'll see things like "sample_light()" instead of "illuminance()", and "sample_bsdf()" instead of "phongBRDF()"... similar ideas but a different approach (in PBR, a BRDF is a probability distribution instead of a weighting function, and things like MIS are used to balance the various importance measures assigned to each sampling strategy).

    * float phongBRDF() is the standard Phong lobe as a weighting function (in [0,1]) -- note that it returns a float.

    * vector phong() computes illumination using the Phong lobe as a weight (i.e: using phongBRDF() as the weighting function). That is: it returns the color (notice it returns a vector, not a float) of the incident illumination, as weighted by phongBRDF(), and so is equivalent to using phongBRDF() inside your own illuminance loop.

    * bsdf phong() is, again, the Phong lobe but this time expressed as a probability distribution. It is normalized in the sense that it integrates to 1 over its domain of incident directions (a hemisphere in this case), meaning that, unlike phongBRDF(), its range is not necessarily in [0,1]. Also note that its return data type is "bsdf", the contents of which are inaccessible to the user (you can only combine bsdf's with other types in certain ways but not manipulate their values directly). Long story short: these "bsdf" animals are meant to be used with the PBR engines -- they can be sampled and evaluated to resolve into a color, yes, but the scaffolding required to make that happen correctly (or in a useful way) is, well, a path tracer, not an illuminance loop.

    * None of these functions "invoke" anything -- they just compute and return values. But, yes, some shading globals (like F) are only used by certain engines (F -- and the code path that defines it -- is only executed when rendering with PBR, for example). So, any assignment to the global F when rendering using, say, the MP engine, would be ignored, and conversely, any assignment to Cf will be ignored by the PBR engines. But these functions themselves do not "invoke" anything. HTH.
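To see the "integrates to 1 but isn't confined to [0,1]" point numerically, here's a small Python check (illustrative only; phong_pdf here is my own expression for a normalized Phong lobe, not the opaque VEX bsdf):

```python
import math

def phong_pdf(cos_alpha, n):
    """Normalized Phong lobe as a pdf over the hemisphere around the lobe
    axis: (n+1)/(2*pi) * cos(alpha)^n.  Unlike a [0,1] weighting function,
    its values can greatly exceed 1 for large exponents n."""
    return (n + 1.0) / (2.0 * math.pi) * max(0.0, cos_alpha) ** n

def integrate_hemisphere(pdf, n, steps=2000):
    """Midpoint-rule integration of pdf over the hemisphere (solid angle)."""
    total = 0.0
    dtheta = (math.pi / 2.0) / steps
    for i in range(steps):
        theta = (i + 0.5) * dtheta
        # the 2*pi factor is the (symmetric) phi integral
        total += pdf(math.cos(theta), n) * math.sin(theta) * dtheta * 2.0 * math.pi
    return total

print(round(integrate_hemisphere(phong_pdf, 50.0), 3))   # -> 1.0
print(phong_pdf(1.0, 50.0) > 1.0)                        # peak is well above 1
```

The integral stays at 1 for any exponent, while the peak value grows with n -- which is exactly why a pdf-style lobe can't be dropped into an illuminance loop as if it were a [0,1] weight.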
  19. Questions about shader writing

    The mirror reflection brdf is a bit of a strange animal in that its density distribution integrates to 0, which is why it's modeled as a delta distribution (which is more like a limit than a function). In any case, if you were writing it as a VEX function that computes the fraction of energy leaving the surface in the direction 'wo', after arriving from direction 'wi' at a location on the surface with normal 'wn' (all vectors unit length and pointing away from the surface position 'P' -- and note that here we're using vectors instead of spherical angles), then it might look something like this:

    float brdf_mirror(vector wi, wn, wo)
    {
        return (wo == reflect(-wi, wn));
    }

    vector illum_mirror(vector p, wn, wo)
    {
        vector out = 0;
        illuminance(p, wn, M_PI_2) {
            shadow(Cl);
            vector wi = normalize(L);
            out += Cl * brdf_mirror(wi, wn, wo);
        }
        return out;
    }

    This would be a direct interpretation of the delta function you posted above -- a function that returns zero everywhere except for the unique case where wo is in the exact mirror direction (about wn) of the incident vector wi (where it returns 1) -- a situation which, if drawing from a random set of directions wi, would occur with probability 0. That's what I meant when I said that it's not a very useful model in the context of an illuminance loop, where the wi's are chosen for you by Mantra -- that is: inside an illuminance loop, *Mantra* decides where the samples on an area light will go, not you, and the chances that it will pick a sample (with direction 'wi') that just happens to exactly line up with the mirror direction of the viewing vector ('wo' above) are zero. And, as expected, it looks like this: The only way to work with a delta distribution is to sample it explicitly -- you manually take a sample in the single direction where you know the function will be meaningful.
    This can be done either using ray tracing (see the functions reflectlight(), trace(), and gather()), or using a reflection map (see the function environment()) -- but *not* inside an illuminance loop. This is not "cheating", it just follows from the kind of statistical animal we're talking about. Even the PBR path tracer handles delta BxDF's this way -- when a BSDF contains a delta lobe, it will, when sampled, return a single direction with probability 1, and be excluded from multiple importance sampling. Here's a version using trace(). The only catch is that, when using ray tracing (as opposed to a reflection map), you'll need to turn the light geometry into an actual object so that it can be reflected:

    vector illum_trace(vector p, dir; float maxcontrib)
    {
        // Using reflectlight():
        //return reflectlight(p, dir, -1, maxcontrib);

        // Or... using trace() instead of reflectlight():
        vector hitCf = 0;
        trace(p, dir, Time, "raystyle", "reflect",
              "samplefilter", "opacity", "Cf", hitCf);
        return hitCf;
    }

    And it looks like this (using the RT engine): Here's your hipfile augmented with those two approaches (the otl is embedded in the file). square reflection_mgm.hipnc

    Oh, one more thing: A Phong lobe is not the same as a Delta lobe -- if you want Phong then just use the phongBRDF() function (and note it's "phong", not "phone"). Cheers.
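To drive home the "probability zero" point, here's a toy Monte Carlo experiment in Python (illustrative only, obviously not VEX):

```python
import math
import random

def random_unit_vector(rng):
    """Uniform direction on the unit sphere via rejection sampling."""
    while True:
        v = (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
        l2 = sum(x * x for x in v)
        if 1e-6 < l2 <= 1.0:
            l = math.sqrt(l2)
            return tuple(x / l for x in v)

def delta_brdf(wi, wr):
    """Mirror 'BRDF' taken literally as a delta: nonzero only on an exact match."""
    return 1.0 if wi == wr else 0.0

rng = random.Random(0)
wr = (0.0, 0.0, 1.0)            # the one direction that carries all the energy
hits = sum(delta_brdf(random_unit_vector(rng), wr) for _ in range(100000))
print(hits)   # stays at zero -- blind sampling never lands on the delta
```

A hundred thousand "illuminance-style" samples and not one of them hits the mirror direction, which is the whole argument for sampling the delta explicitly via trace() or a reflection map.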
  20. Questions about shader writing

    1. The product of those 2 delta functions is zero everywhere except when the viewing direction is the exact mirror (about the normal) of the incident direction (or, stated in polar coords, when theta_r==theta_i and phi_r is exactly +/-PI radians, or 180 degrees, away from phi_i), at which point the argument to both delta functions is 0 and therefore the functions themselves evaluate to 1 (as does their product). The scaling of 1/cos(theta_i) is there to cancel out a cos(theta_i) factor that would normally appear outside the brdf to convert incident power to irradiance. All of it essentially boiling down to a radiant value of "I" along the exact mirror direction from incidence and 0 everywhere else -- an effect we all know as a "mirror reflection". What do you mean by a "square specular"? 2. The kind of analysis you mention in #1 is better suited to a statistical context where the BRDF's can be explicitly sampled (like in PBR). It's not really suitable for "illuminance loops" (you mention Cl) where you have no control over the directions in which to sample incident illumination. In that context, the probability that any one of the samples that the loop iterates over is in the exact mathematical mirror direction to the viewing direction is pretty much zero -- so yeah, not the right context to be thinking in terms of delta distributions. In the traditional old-style shading approach, a perfect mirror reflection would have necessitated a "reflection map", which you can indeed sample in a specific direction. In that method, the illuminance loop is only used to do approximations to broad glossy or diffuse reflections of light sources. HTH.
  21. Axyz Animation Now Hiring

    Axyz Animation, in beautiful Toronto, Canada, is now looking for a Houdini person experienced with lighting and shading. The position does not require technical knowledge of VEX and writing shaders per se (though some knowledge of VOPs is a plus), but rather an intimate knowledge of Mantra, shading concepts, and preparing scenes for efficient rendering, as well as an excellent eye for lighting, texturing, tone mapping, and generally integrating CG with live elements. This is a full-time position, starting now. Candidates should have 2 years experience or more. Please contact: John Stollar, General Manager, js@axyzfx.com Thank you.
  22. How to use VEX variables in shader

    At the end of your 2 VOPSops I see 3 point attributes: "Cd" (vector), "Alpha" (float), and "topp" (float). Over in the shader, AFAICT, you're only picking up one of these: "Alpha", and piping it directly to the Of and Af outputs. Finally, over in the "mantra1" ROP, you're adding 2 AOVs (or "deep rasters"): "Alpha" (float) and "MapDisintegration" (vector). So...

    1. I don't see any attribute or shader parameter called "DestrMatte" anywhere.

    2. Even though the shader is picking up "Alpha" (and using it), it is not exporting it, and so Mantra can't itself pick it up and pipe it to an AOV. To export it so that Mantra can use it, set "Export" to "Always" in the Parameter VOP (see attached).

    3. If you intended either of the other two attributes ("Cd" and "topp") to stand for "MapDisintegration" or "DestrMatte" or whatever, then you have to pick them up in your shader and export them as well. I've done this to both in the attached. Once exported by the shader, the ROP can pick them up (and rename them to whatever you like).

    Anyway, that's how the mechanism works. Having said that, keep in mind that Af, Of, Cf, N, P, and Pz are all automatically available for AOV output (look at the pull-down menu for the "VEX Variable" parameter of each AOV and you'll see them). This means that, in your case, since all you're doing with the attribute "Alpha" is assigning it directly to Af/Of, you don't strictly need to manually export it, as you could still get at it via the automatic "Af" AOV -- but only because you're currently doing nothing with it inside the shader, and so Alpha==Of==Af (which is not usually the case with most attributes, so it's still good to learn how the export business works). HTH Head_Creepv3_1_mgm.hip
  23. volume density falloff

    Have a look at the "Contour" controls in the field modifiers of the pyro shader: Pyro Docs: Contour
  24. Fast Gi Anyone?

    Hi all, I recently came across a cool paper by Michael Bunnell from nVidia (one of the chapters from the GPU Gems 2 book) that deals with speeding up GI calculations using surface elements. Complementary to that paper, and along similar lines, but broader, is this one by Arikan et al., which will appear in the SIGGRAPH 2005 proceedings. A lot of yummy stuff in there! Here are some early results from an HDK implementation I'm working on for the ambient occlusion portion of the nVidia paper (sorry, won't be able to share code on this one, but I thought you'd be interested in the viability of the method nonetheless):

    Test Geometry: Car wheel, 45,384 points, 44,457 polys, rendered as a subdiv surface.

    Reference: Here's the reference image, with 200 samples per shade point. Calculation time: 2 hrs. 48 mins. 12.26 secs. (The little black artifacts are probably due to the wrong tracing bias, but I didn't feel like waiting another 3 hours for a re-render.)

    Solid Angle: And this is a test using two passes of the "solid angle" element-to-point idea described in the paper. Calculation time: 6.20609 secs. (!!!) A little softer than the reference, but... not bad! The implementation is in its infancy right now, and I know there's room for more optimization, so it should go faster.

    Point Cloud Method: More comparison fun... Here is the "occlusion using point clouds" method with the point cloud matching the geometry points, and 200 samples for each computed point (note that about half of all points were probably actually visited since, at render time, we only touch the ones visible to camera plus a small neighborhood). Calculation time: 38 mins. 19.482 secs. The artifacts are due to a too-small search radius, but if I were to increase it, then everything would go too soft, and since it's a global setting I left it at a small enough size to catch the high-frequency stuff. In any case, I was more interested in the timing than the look of it in this case.
Methinks the days of sloooow GI are numbered.... yay! Cheers!
  25. Fast Gi Anyone?

    You could always bake onto a point cloud (from a Scatter SOP) and then do a filtered lookup (of the pre-baked point cloud) at render time. You're not limited to working strictly with the points in the geometry you're piping to the IFD. So, for example, you'd only need to subdivide for the purpose of scattering the points, but you can then render the un-subdivided surface thereafter. But no, this is not something that can be implemented at the shader level, because at that point you don't have access to the entire scene geometry (though some limited functionality with file-bound geometry exists) -- and even if you did, you wouldn't want to be calculating this thing for every shade point (it would be many orders of magnitude slower than tracing). The whole idea here is that you compute it once on a much coarser scale than what normally results from a renderer's dicing step (or the even larger sampling density while raytracing), and then interpolate -- the assumption being that AO is sufficiently low-frequency an effect to survive that treatment (though that's not always a safe assumption, as the later updates to the original paper confirm). Also keep in mind that this method, even when working properly, has its own set of limitations: displacements and motion-blurred or transparent shadows come to mind. And... well, AO is just *one* aspect of reflectance -- a noisy AO may be noisy on its own, but not after you add 15 other aspects of light transport. So, yes, tracing may be slower, but you're likely not just tracing AO for a real material in a real scene... just be careful how you judge those speeds.