
Fast Gi Anyone?



sibarrick

Are you doing both passes or just the first? Things always look rather wrong after the first pass and get corrected in the second.

Only the first pass. In any case, the picture doesn't look the way it should even in the first pass.

Also instead of max(1.0f, 4 * rTheta) try clamping it between 0 and 1.

I have tried that, but the result is even worse. I'll play with this algorithm some more, change the way the areas are calculated, etc.

Mario Marengo

Yes, but that's only because of the way I chose to do things.

In my case, the design choice of using a kd tree as a heuristic for building the hierarchical tree structure was born of a desire to have the system work with point clouds (the most basic "currency" in this case), which lack connectivity (or any other type of structural information). However, this is neither the only, nor necessarily the best way to do things for all cases. It all depends on how you envision the system being used in the end. In their sample implementation, for example, nVidia chose to use texture UVs as an indicator of the spatial relationship between elements. But that is *also* not necessarily the best choice for all cases... intuitively, the best would probably be a hybrid which could handle everything from the most basic (a point) to the most enriched (say, a parametric patch)... the toughest to handle being a raw point.

This is where the whole "design" thing comes into play... and where things really start to get interesting :)
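
(If it helps make the kd idea concrete: below is a toy C++ sketch of a median-split hierarchy over raw points with areas. This is *not* my actual implementation -- every name in it is made up -- just the general shape of the thing. Each node caches its cluster's total area and area-weighted centroid, which is what a gather would use for distant groups of elements.)

#include <algorithm>
#include <memory>
#include <vector>

struct Pt   { float p[3]; float area; };

struct Node {
    float centroid[3] = {0, 0, 0};  // area-weighted average position
    float area = 0;                 // total area of everything below
    std::unique_ptr<Node> left, right;
};

// Median-split the range [begin,end) along its widest axis, recursively.
std::unique_ptr<Node> build(std::vector<Pt> &pts, size_t begin, size_t end)
{
    auto node = std::make_unique<Node>();
    for (size_t i = begin; i < end; ++i) {
        node->area += pts[i].area;
        for (int a = 0; a < 3; ++a)
            node->centroid[a] += pts[i].p[a] * pts[i].area;
    }
    if (node->area > 0)
        for (int a = 0; a < 3; ++a)
            node->centroid[a] /= node->area;

    if (end - begin > 1) {
        // find the widest axis of the cluster's bounding box
        float lo[3] = { 1e30f, 1e30f, 1e30f }, hi[3] = { -1e30f, -1e30f, -1e30f };
        for (size_t i = begin; i < end; ++i)
            for (int a = 0; a < 3; ++a) {
                lo[a] = std::min(lo[a], pts[i].p[a]);
                hi[a] = std::max(hi[a], pts[i].p[a]);
            }
        int axis = (hi[1] - lo[1] > hi[0] - lo[0]) ? 1 : 0;
        if (hi[2] - lo[2] > hi[axis] - lo[axis]) axis = 2;

        // partition around the median point along that axis
        size_t mid = (begin + end) / 2;
        std::nth_element(pts.begin() + begin, pts.begin() + mid, pts.begin() + end,
            [axis](const Pt &x, const Pt &y) { return x.p[axis] < y.p[axis]; });
        node->left  = build(pts, begin, mid);
        node->right = build(pts, mid, end);
    }
    return node;
}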

You are right. I was looking narrowly at the kd-tree as a readily available technique with an existing implementation in the HDK. No need to reinvent the wheel. :) But the algorithm does need to work with surfaces of any type. And it would be good to relieve the user of the manual work (I mean creating the attributes with the areas) so that using it is even more transparent.


IOW - does it imply a texture map where the shadow (occlusion) is painted, or is it some "space between the elements" according to UV space?.. It's just that the sense of the statement eludes me. Could you clarify, please, Mario?


I thought they were just using texture maps to store the calculation of the first pass.


Hi guys,

Download and have a look at the code they include on the CD that accompanies the book (which they've kindly made available here), and you'll notice that they use texture UVs to group the elements. This is done as a pre-processing step and is not at all related to their use of texture maps to store intermediate calculations.

P.S: the file to look at is DynamicAO.cpp, where they load polygonal geometry and prep it for processing.


Thanks, guys.

Anyway, I'd like to clear up a few points, though I suspect that the answers have already been given.

1. Do back faces contribute to occlusion, or does the algorithm take into account only the visible faces (i.e. visible to the camera)?

2. In the nVidia paper we see that the elements (emitters/receivers) are circles (placed at the shared points of neighboring polygons - see that illustration with the foo-blue-looking geometry).

The question is - why necessarily use a circle as an occluder/emitter? Why not a polygon? Why place it at that "neighboring point"? If we need a circle because of its somehow magical area properties - then why not simplify it to, say, 78% of the square's (polygon's) area?

I mean, this algorithm is meant to simplify and speed up the calculations - so why are we dealing with circles, then?


1. Yes, all occluders are considered. So if you have no animation it's like a normal GI solution: do the calculation once and then you are flying.

You can't really cull anything because, as with shadow casting, something invisible to the camera is most likely still occluding something that is visible.

2. Why circles? Because the basis of the idea is that you calculate the solid angle of the "polygon" projected onto the viewing hemisphere, which gives you the coverage. If you do that with circles, it all reduces down to a very quick approximation that is accurate enough for our purposes. Check out Mario's great explanation of this a few replies back. If you did it with the actual polys you wouldn't get this simplification and it would all take longer. It might be a smidge more accurate, but why bother when circles work and circles are quickest?
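
To see why that reduction is so cheap, here's a sketch of the per-element term in C++ (names are made up, and conventions for v and the area differ between write-ups; the point is that with area passed in as disc_area/pi, the falloff term is exactly the disc's on-axis solid angle over 2*pi):

#include <algorithm>
#include <cmath>

// Approximate occlusion of a receiver element by a disc-shaped emitter.
// v: unit vector from receiver to emitter; d2: squared distance between them.
// area: the emitter disc's area divided by pi (i.e. its radius squared), so
// that 1 - 1/sqrt(1 + area/d2) equals (disc solid angle)/(2*pi) on-axis.
float discOcclusion(const float v[3], float d2,
                    const float recvN[3], const float emitN[3], float area)
{
    float rTheta =   recvN[0]*v[0] + recvN[1]*v[1] + recvN[2]*v[2];   // receiver faces emitter?
    float eTheta = -(emitN[0]*v[0] + emitN[1]*v[1] + emitN[2]*v[2]);  // emitter faces receiver?
    float e = std::max(eTheta, 0.0f);
    float r = std::min(std::max(4.0f * rTheta, 0.0f), 1.0f);          // the saturate(4*rTheta) trick
    return (1.0f - 1.0f / std::sqrt(1.0f + area / d2)) * e * r;       // one sqrt, one divide
}

No trig, no projection of arbitrary polygon edges - just dot products, a divide and a square root per emitter.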


You can't really cull anything because, as with shadow casting, something invisible to the camera is most likely still occluding something that is visible.

That's true in general, but incorrect in specific cases. If a wall is placed between an occluder and the object, the occluder can be culled, because it doesn't affect the shading of the object. For the same reason the second pass in nVidia's algorithm is required (at a cost in time).
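
As I understand the paper, the second pass handles exactly this by weighting each emitter by its own accessibility from the first pass, so a fully shadowed occluder stops darkening the receiver. A sketch only (Element and occlusionTerm are illustrative stand-ins, not nVidia's actual code):

#include <algorithm>
#include <vector>

struct Element { float pass1Accessibility; /* plus position, normal, area */ };

// Hypothetical per-pair term, e.g. the disc formula sketched above.
float occlusionTerm(const Element &receiver, const Element &emitter);

float passTwoAccessibility(const Element &receiver, const std::vector<Element> &emitters)
{
    float total = 0.0f;
    for (const Element &e : emitters)
        total += e.pass1Accessibility * occlusionTerm(receiver, e);  // occluded occluders count less
    return std::max(0.0f, 1.0f - total);
}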


Ok, thanks. It's just that I thought a circle is not the simplest solution (what with all those imaginary numbers like Pi, you know)..

Ok,  I guess I'll have to take that on trust, anyway ;)


There's nothing imaginary about PI (I think the word you're thinking of is "transcendental"... i.e: non-algebraic). But in the end, it's just a constant, and things don't really get much simpler than that. The most expensive thing in there is a square root (and a few dot products, which boil down to a few multiply-adds).

Yup, take his word for it: it's simpler than projecting an arbitrary poly ;)

Cheers!


Hi folks,

I am trying to figure out something.

[image: solid1la.th.jpg]

Here I enclose two images, with variants A and B. "A" represents the nVidia approach to "elements" (well... sort of), i.e. an element is placed at the "common point" where the edges cross or polygons meet. I am trying to understand why this approach should be more beneficial than variant B. In variant B we place the elements at the center point of a poly.

So basically it's almost the same - but we take not the "common points" but points at the center of a poly.

I mean it's all a sort of approximation - and does it really matter if we place our elements slightly differently? What could be the results of such a change?


Hey MADjestic,

No difference (outside of the obvious: fewer points with larger areas, a different distribution, etc). In practical terms, you could take either approach. In fact, you could also scatter a bunch'o'points with the Scatter SOP and use those instead.

But. This is different from what you were asking about earlier, which is whether it would be simpler to compute solid angle from the actual polygons. In this case, you're still using circles except you're placing them in different positions (prim centers instead of prim points). It's all good :)

Cheers!


Actually there are some differences, but they are subtle. For example, one could argue that circles on tangent planes to shared points (with shared normals) are a better approximation to the implied surface than the plane of the polygon's face (in the case of polys only). On the other hand, if the surface is ultimately used as a hull for subdivision, then the prim centers may be a better choice.

Also, if the calculated occlusion values are intended as point attributes which then get interpolated at rendertime or through a subdivision, then using the prim centers wouldn't be very useful...

There are probably more little things like that, so think about how you want to use it and then base your approach on that, I guess...
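
For illustration, here's how the two flavours might be built from a triangle soup -- variant A gathers a third of each incident triangle's area onto its corner points (averaging the face normals), variant B just takes face centers. A rough sketch with made-up names, assuming precomputed face normals and areas:

#include <cmath>
#include <vector>

struct V3      { float x, y, z; };
struct Tri     { int a, b, c; };
struct Element { V3 pos; V3 nrm; float area; };

// Variant A: one disc per point. Each triangle donates a third of its area
// to each corner; normals accumulate and get normalized at the end.
std::vector<Element> pointElements(const std::vector<V3> &P, const std::vector<Tri> &T,
                                   const std::vector<V3> &faceN, const std::vector<float> &faceA)
{
    std::vector<Element> out(P.size());
    for (size_t i = 0; i < P.size(); ++i) out[i] = { P[i], {0, 0, 0}, 0.0f };
    for (size_t t = 0; t < T.size(); ++t) {
        const int idx[3] = { T[t].a, T[t].b, T[t].c };
        for (int k = 0; k < 3; ++k) {
            Element &e = out[idx[k]];
            e.area += faceA[t] / 3.0f;
            e.nrm.x += faceN[t].x; e.nrm.y += faceN[t].y; e.nrm.z += faceN[t].z;
        }
    }
    for (Element &e : out) {
        float len = std::sqrt(e.nrm.x*e.nrm.x + e.nrm.y*e.nrm.y + e.nrm.z*e.nrm.z);
        if (len > 0) { e.nrm.x /= len; e.nrm.y /= len; e.nrm.z /= len; }
    }
    return out;
}

// Variant B: one disc per face -- centroid, face normal, face area.
std::vector<Element> primElements(const std::vector<V3> &P, const std::vector<Tri> &T,
                                  const std::vector<V3> &faceN, const std::vector<float> &faceA)
{
    std::vector<Element> out;
    out.reserve(T.size());
    for (size_t t = 0; t < T.size(); ++t) {
        const V3 &a = P[T[t].a], &b = P[T[t].b], &c = P[T[t].c];
        V3 ctr = { (a.x+b.x+c.x)/3, (a.y+b.y+c.y)/3, (a.z+b.z+c.z)/3 };
        out.push_back({ ctr, faceN[t], faceA[t] });
    }
    return out;
}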


ok, folks.

I *think* I am starting to get preliminary results. What I am not happy with yet is that the bunny looks too soft, and I have failed to change that so far, no matter how I varied the number and position of the occluding grids. That worked for the ball - I could make the shadow softer or more pronounced - but not for the bunny. I think it'll take some time to figure that out anyway.

No optimization was used so far. The octree (or whatever it is called) is the next step (I am afraid Mario already foresees an avalanche of incoming questions ;) ). Only one pass was used - the amount of shadow was calculated and subtracted from 1 (the "full brightness").

And yes, it's hard to tell how grateful I am to you all, guys, for helping me get this far (not too far, but whatever :) ).

[image: second6hp.th.jpg]

rendering time ~5 sec.

[image: bunny24ix.th.jpg]

rendering time ~9 sec.

I'll play a little bit more with the code and then I'll upload what I've got for any other masochist who wouldn't mind ripping it up.

ta ta


These are my latest results. Almost ideal. But I don't like the shadow on the floor. In the raytraced variant the shadow looks sharper - even the shadow from the T-Rex's head is visible.

I use this formula

value = max(eTheta, 0.0f) * max(rTheta, 0.0f) * (1.0f - (1.0f / SYSsqrt(1.0f + *eArea / d2)));

Since I only use emitters that satisfy the condition

if(rTheta > 0.0f && eTheta > 0.0f) // the emitter is in front of the receiver and its normal faces back toward the receiver

then I don't need max(), so

value = eTheta * rTheta * (1.0f - (1.0f / SYSsqrt(1.0f + *eArea / d2)));
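
In context, the single-pass loop then looks something like this (plain C++ with std::sqrt in place of SYSsqrt; Disc is a made-up struct, the rest follows the formula above):

#include <algorithm>
#include <cmath>

struct Disc { float p[3], n[3]; float area; };  // unit normal assumed

// Single-pass accessibility of one receiver against all emitter discs.
float accessibility(const Disc &r, const Disc *em, int n)
{
    float total = 0.0f;
    for (int i = 0; i < n; ++i) {
        float v[3] = { em[i].p[0]-r.p[0], em[i].p[1]-r.p[1], em[i].p[2]-r.p[2] };
        float d2 = v[0]*v[0] + v[1]*v[1] + v[2]*v[2];
        if (d2 <= 0.0f) continue;             // skip the receiver itself
        float d = std::sqrt(d2);
        float rTheta =  (r.n[0]*v[0] + r.n[1]*v[1] + r.n[2]*v[2]) / d;
        float eTheta = -(em[i].n[0]*v[0] + em[i].n[1]*v[1] + em[i].n[2]*v[2]) / d;
        if (rTheta > 0.0f && eTheta > 0.0f)   // guard: the max() calls become redundant
            total += eTheta * rTheta * (1.0f - 1.0f / std::sqrt(1.0f + em[i].area / d2));
    }
    return std::max(0.0f, 1.0f - total);      // shadow subtracted from full brightness
}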

Are there any ideas on how to make the picture more correct?

There is one more thing that isn't clear to me. In its own shader, nVidia doesn't simply multiply the value of the second pass by the value of the first pass, but uses this artful formula

if (PASS == 1)  // only need bent normal for last pass
   result = saturate(1 - total);   // return accessibility only
else
   result = saturate(1 - total) * 0.6 + texRECT(lastResultMap, receiverIndex.xy).x * 0.4;

What sense is incorporated in this formula, Mario?

[image: post-312-1122280727_thumb.jpg]


What sense is incorporated in this formula, Mario?

That's just an average of the two passes, except that instead of using a straight average (which would be 0.5*pass1 + 0.5*pass2, or simply (pass1+pass2)/2), they use a weighted average where pass2 is given slightly more importance than the first pass -- 10 points above an even split, to be precise. So their expression is: 0.6*pass2 + 0.4*pass1, i.e: "60% weight to the current pass, and 40% weight to the previous pass" (with the plain accessibility returned on the branch where PASS == 1).

That's looking good, Hoknamahn! :)

