Mario Marengo

Fast GI Anyone?


Are there some artifacts in shadow above the foot? My common complaint is that these pictures should be posted as .png, or else it's hard to tell whether the artifacts are just caused by the .jpg compression.

> Are there some artifacts in shadow above the foot? My common complaint is that these pictures should be posted as .png, or else it's hard to tell whether the artifacts are just caused by the .jpg compression.

I think that's actually the other front foot you are seeing there. As far as I can tell there are no really bad artifacts from this method now.

Hey Mario, I've been thinking it might be interesting/useful to extend/modify this method to do soft shadows from directional lighting. The simple way being just to include some dummy emitters that cover the hemisphere except where the light source is, then just add in a multiplier to stop the unoccluded parts being too dark, but this seems a bit like cracking an egg with a spade.

Maybe it's a case of having a special emitter that is treated as the light source and then checking the other emitters against it first... <_<

Can you think of some way that is more cunning?


Very impressive results; under a minute on a P3 is astounding with that number of samples. I'd be looking at 5-10 minutes at least at home, more like 15+ with 128 or 256 samples.

If this can find its way into the Exchange that would be great ;)

Even if there are artifacts and it was only used for testing, and you needed to use "real" GI for the final render, it would be a huge time saver.


> Looking good Simon!! :D
>
> One thing I did notice was that if I removed the 4*rTheta and made it just rTheta as in Mario's proof, then the solution turns out washed out... <_<
>
> So that term must be doing something clever; is it something to do with the way they remove double shadowing?

It's minimizing the effect of the cosine weighting that accounts for irradiance (since there is less intensity at grazing angles, there is less to occlude at those angles). If you were using the VEX occlusion() function, the equivalent of this term (minus the factor of 4) would be to set the parameter "distribution" to "cosine" (the default), and removing this term altogether would be analogous to setting "distribution" to "uniform". The effect of multiplying it by 4 is to give elements at the periphery more importance than they would normally have, resulting in a "darker" image. Removing the term outright would give you an even darker result, since all directions would have a constant, equal weight. Here's a graphical representation of the effect of multiplying by 4 (and clamping):
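To make the three weighting schemes concrete, here is a small illustrative sketch (my own, not the thread's shader code): "uniform" ignores the angle, "cosine" is the plain irradiance weight, and "boosted" is the 4x term clamped to [0,1], which ramps up to full weight much sooner and so gives peripheral (grazing-angle) elements more importance than the plain cosine would.

```cpp
#include <algorithm>

// Weight of an emitter as a function of the receiver-side cosine.
inline double weightUniform(double /*cosTheta*/) { return 1.0; }

inline double weightCosine(double cosTheta)
{
    return std::max(0.0, cosTheta);
}

// The paper's variant: multiply by 4 and clamp to [0,1], so the weight
// saturates at cosTheta = 0.25 instead of only at cosTheta = 1.
inline double weightBoosted(double cosTheta)
{
    return std::min(1.0, std::max(0.0, 4.0 * cosTheta));
}
```

For example, at cosTheta = 0.2 the plain cosine weight is 0.2 while the boosted weight is 0.8, which is why dropping the factor of 4 washes the result out.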

[attachment: post-148-1119841207_thumb.jpg]

Of course, the question is "Why do this?". My guess is it might have to do with the fact that at the edges, things get a little murky. For example, even though some element's center may be below the horizon and therefore get discarded, its area may extend above the plane and should be considered as an emitter nonetheless, but isn't. Similarly, an element's center which is just riding the edge has its whole area considered when only about half of it should really be taken into account. I think a more accurate weight could be derived for the irradiance term -- one which attempts to take into account not just the center of the emitter, but also its implicit extent. It would have to be cheap to compute though, otherwise it would defeat the whole purpose of the thing. This fudge factor removes some of the importance of the cosine weighting and so it would be a good candidate for a cheap fix :P

> Hey Mario, I've been thinking it might be interesting/useful to extend/modify this method to do soft shadows from directional lighting. The simple way being just to include some dummy emitters that cover the hemisphere except where the light source is, then just add in a multiplier to stop the unoccluded parts being too dark, but this seems a bit like cracking an egg with a spade.

Yeah, that's what the whole second half of the paper is about. For point lights you'd compute the amount arriving at each element in the standard illuminance loop way, which gives you the amount to occlude during the occlusion pass, except now you'd use the disk-to-disk transfer for occlusion. And for area lights you interpret the light's elements as reflectors with an initial intensity (whereas indirect reflectors acquire their intensity from either direct, ambient, or area lighting). The solid angle version of the occlusion calculation is only used for ambient occlusion, not directional shadows. All my testing on this side of things has been very preliminary though since I've had to jump into production before finishing it, but maybe you can test it out and let us know how it goes :)

> Yeah, that's what the whole second half of the paper is about. [...]

Doh, should have read the paper all the way down..... :D

I'll try and continue my experiments as and when I get time, watch this space.

> if this can find its way into the Exchange that would be great

I'll try and finish everything up first and wrap it all up into an otl, then for sure, if Mario doesn't mind, I'll post it up.

> I'll try and finish everything up first and wrap it all up into an otl, then for sure, if Mario doesn't mind, I'll post it up.

Of course I don't mind. Why should I!?

Please, go for it! :D

Cheers!


Here is my little buggy HDKed alpha version of the point-to-element occlusion, without any optimizations in the code or in the algorithm. Just NVidia's formula without the max(1, 4 * rTheta) term.

[attachment: post-312-1120653202_thumb.jpg]


Sorry for the confusion; I've just been wanting to move this topic to the rendering section for a while...

> Here is my little buggy HDKed alpha version of the point-to-element occlusion.

HeHey! Starting to look promising there, hoknamahn! :)

Keep at it!

... it's "HDK Fever" over at od[force], people! :D


It had to happen with all those free HDK licenses floating around. Perhaps it will prompt someone to write some proper help. Soon SESI will be flooded with HDK help emails; it would be quicker to write proper help. ;)

> It had to happen with all those free HDK licenses floating around. Perhaps it will prompt someone to write some proper help. [...]

The only ones who can write the HDK help at SESI are the programmers, and writing good in-depth help takes a lot of time: time they couldn't spend working on Houdini. I suspect half the reason the HDK cost a good bit of money before was to help pay for the programmers' lost time. Basically you weren't really paying to use the HDK, you were paying for the support of it. Now that it's free I don't expect SESI to go out of their way supporting it. (I do expect them to keep the samples up to date and maybe add one or two from time to time, but that's about it.) What I think SESI should do is offer an HDK support package. For example, paying X amount of money allows you to take the HDK training courses they offer from time to time, plus you get the training material from the courses... or something like that.

(This isn't directed at you, Simon, but it is meant for anyone who is wondering why there isn't any HDK help.)

And I'd like to thank George and Ed for taking their time to help us HDK n00bs tackle it. :)


Okay, it's time to ask questions.

I have tried some variants of the point occlusion calculation, but all of them give very bad results. The formula that gives me a more or less good result is:

...
result = 0.0f;
// For all points
result += 1.0f - (r * eTheta * max(1.0f, 4 * rTheta)) / sqrt(*eArea + r * r);
// End of loop
...
result /= num_of_points; // How much the current point is occluded
...

But even in this case the picture is not what it should be.

Points storing an area attribute are placed at the centers of the polygons; the value of the attribute equals the area of the polygon.

area = prim->calcArea();
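For reference, here is a plain-C++ stand-in for that setup step (the names here are illustrative, not the HDK API): an emitter point goes at the centroid of each triangle, carrying the triangle's area as its attribute value.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(const Vec3 &a, const Vec3 &b)
{
    return {a.x - b.x, a.y - b.y, a.z - b.z};
}

static Vec3 cross(const Vec3 &a, const Vec3 &b)
{
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

static double length(const Vec3 &v)
{
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

// Area of triangle (p0,p1,p2): half the magnitude of the edge cross product.
double triArea(const Vec3 &p0, const Vec3 &p1, const Vec3 &p2)
{
    return 0.5 * length(cross(sub(p1, p0), sub(p2, p0)));
}

// Centroid: average of the three corners; this is where the emitter point goes.
Vec3 triCentroid(const Vec3 &p0, const Vec3 &p1, const Vec3 &p2)
{
    return {(p0.x + p1.x + p2.x) / 3.0,
            (p0.y + p1.y + p2.y) / 3.0,
            (p0.z + p1.z + p2.z) / 3.0};
}
```

An inaccurate area here skews the sqrt(eArea + r*r) denominator, which is consistent with Simon's later remark that results were mixed until the prim area was computed accurately.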

Everything else is similar to what is described in NVidia's document. I have re-read this thread, but no useful ideas have arisen. Where am I mistaken? :unsure:

The most noticeable differences are indicated in the picture.

P.S. Do you use a KD-tree as an optimization method, Mario?

[attachment: post-312-1120922322_thumb.jpg]


I think the mistake was in this place:

rV = rPosition - ePosition;   // NB: this actually points from the emitter to the receiver
rV.normalize();
r = distance3d(rPosition, ePosition); // Distance between receiver and emitter
eTheta = dot(-1.0f * rV, *eNormal);
rTheta = dot(rV, *rNormal);

Now

rV = ePosition - rPosition;   // Vector to emitter
rV.normalize();
r = distance3d(rPosition, ePosition); // Distance between receiver and emitter
eTheta = dot(rV, *eNormal);
rTheta = dot(rV, *rNormal);

Looks better. But the shadow on the floor is terrible :D

[attachment: post-312-1120926460_thumb.jpg]


Are you doing both passes or just the first? Things always look rather wrong after the first pass and get corrected in the second.
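To illustrate the two-pass idea (this is my reading of the NVidia paper, not code from this thread): in pass 1 every emitter occludes at full strength, which double-counts occluders that are themselves occluded; in pass 2 each emitter's contribution is scaled by its own accessibility from the previous pass. Here `shadow[r][e]` is assumed to hold the precomputed disk-to-disk shadowing of receiver r by emitter e.

```cpp
#include <algorithm>
#include <vector>

// One occlusion pass: compute each receiver's accessibility, weighting every
// emitter's shadowing term by that emitter's accessibility from the previous
// pass (all 1.0 on the first pass).
void occlusionPass(const std::vector<std::vector<double>> &shadow,
                   const std::vector<double> &prevAccess,
                   std::vector<double> &access)
{
    const size_t n = shadow.size();
    access.assign(n, 1.0);
    for (size_t r = 0; r < n; ++r) {
        double occ = 0.0;
        for (size_t e = 0; e < n; ++e)
            if (e != r)
                occ += shadow[r][e] * prevAccess[e]; // occluded occluders count less
        access[r] = std::max(0.0, 1.0 - occ);
    }
}
```

Running it once with prevAccess set to all 1.0 gives the (too dark) first-pass result; feeding that result back in as prevAccess gives the corrected second pass.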

I also found that until I got a really accurate way of calculating the prim area I got very mixed results.

Also instead of max(1.0f, 4 * rTheta) try clamping it between 0 and 1.
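Putting those suggestions together, here is a self-contained transcription of the shadowing term being debugged, with the corrected receiver-to-emitter vector and the clamp in place of max(1.0f, 4 * rTheta). The parenthesization follows the snippet posted earlier in the thread, which may differ slightly from NVidia's paper.

```cpp
#include <algorithm>
#include <cmath>

struct V3 { double x, y, z; };

static double dot(const V3 &a, const V3 &b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static V3 sub(const V3 &a, const V3 &b)
{
    return {a.x - b.x, a.y - b.y, a.z - b.z};
}

static double len(const V3 &v) { return std::sqrt(dot(v, v)); }

// Contribution of a single emitter disk to the occlusion of a receiver point,
// using the thread's formula with clamp(4 * rTheta, 0, 1).
double elementShadow(const V3 &rPos, const V3 &rNormal,
                     const V3 &ePos, const V3 &eNormal, double eArea)
{
    V3 d = sub(ePos, rPos);               // receiver -> emitter, as corrected
    double r = len(d);
    V3 rV = {d.x / r, d.y / r, d.z / r};  // normalized direction
    double eTheta = dot(rV, eNormal);
    double rTheta = dot(rV, rNormal);
    double w = std::min(1.0, std::max(0.0, 4.0 * rTheta)); // clamp, not max
    return 1.0 - (r * eTheta * w) / std::sqrt(eArea + r * r);
}
```

The per-receiver result would then be accumulated over all emitters and divided by the point count, as in the earlier pseudocode.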

> The only ones who can write the HDK help at SESI are the programmers... and writing good in-depth help takes a lot of time. [...]

I totally agree; I wouldn't want the programmers to stop doing the important stuff, I was just being cheeky :P . But I would love to see more examples. Just a big list of code snippets that do useful things would be fine; it doesn't even need to be whole SOPs or whatever. If you can handle enough C and C++ to get a basic SOP done, all you need is to see how some of the functions in the HDK should actually be called. The existing help pretty much got me started, and it gives you a fair idea of what you need, but going forward would be so much quicker with more examples. Plus it would save bugging support all the time.


We should rename this thread "Fast GI diaries" once any of us gets a decent result ;)


BTW, Sibarrick,

Could you post that rabbit geo here, so that we can use it in the tests?

(that "...bunny.tar/bunny/bun_zipper.ply")

> P.S. Do you use a KD-tree as an optimization method, Mario?

Yes, but that's only because of the way I chose to do things.

In my case, the design choice of using a kd-tree as a heuristic for building the hierarchical tree structure was borne of a desire to have the system work with point clouds (the most basic "currency" in this case), which lack connectivity (or any other type of structural information). However, this is neither the only, nor necessarily the best way to do things for all cases. It all depends on how you envision the system being used in the end. In their sample implementation, for example, nVidia chose to use texture UVs as an indicator of the spatial relationship between elements. But that is *also* not necessarily the best choice for all cases... intuitively, the best would probably be a hybrid which could handle everything from the most basic (a point) to the most enriched (say, a parametric patch), the toughest to handle being a raw point.
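The kd-tree clustering idea can be sketched in a few lines (a toy illustration of the general approach, not Mario's implementation): recursively median-split the point cloud along its widest axis until the clusters are small; each cluster would then become an aggregate emitter node in the hierarchy.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

struct Pt { double p[3]; };

// Recursively partition pts[lo, hi) into clusters of at most leafSize points,
// splitting at the median along the axis of largest extent each time.
void split(std::vector<Pt> &pts, size_t lo, size_t hi, size_t leafSize,
           std::vector<std::pair<size_t, size_t>> &clusters)
{
    if (hi - lo <= leafSize) { clusters.push_back({lo, hi}); return; }

    // Pick the axis with the largest extent over this range.
    int axis = 0;
    double best = -1.0;
    for (int a = 0; a < 3; ++a) {
        double mn = pts[lo].p[a], mx = mn;
        for (size_t i = lo; i < hi; ++i) {
            mn = std::min(mn, pts[i].p[a]);
            mx = std::max(mx, pts[i].p[a]);
        }
        if (mx - mn > best) { best = mx - mn; axis = a; }
    }

    // Partition around the median along that axis, then recurse.
    size_t mid = (lo + hi) / 2;
    std::nth_element(pts.begin() + lo, pts.begin() + mid, pts.begin() + hi,
                     [axis](const Pt &a, const Pt &b) {
                         return a.p[axis] < b.p[axis];
                     });
    split(pts, lo, mid, leafSize, clusters);
    split(pts, mid, hi, leafSize, clusters);
}
```

Since the split only needs positions, this works on raw point clouds with no connectivity, which is the property the kd-tree heuristic was chosen for.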

This is where the whole "design" thing comes into play... and where things really start to get interesting :)

It's all up to you.

Cheers!

> ...chose to use texture UVs as an indicator of the spatial relationship between elements...

IOW, does it imply a texture map where the shadow (occlusion) is painted, or is it some "space between the elements" measured in UV space?.. It's just that the sense of the statement eludes me. Could you clarify, please, Mario?

Thanks

