Search Of The Neighbour Points


hoknamahn


I'm trying to find the points neighbouring a given point. But getneighbourcount() is not what I'm looking for, because it depends on connectivity: if two points are connected by an edge they are considered neighbours, and if they are not connected they are not, even when they belong to the same polygon. How can I search for neighbouring points in some other way?


Is this related to the fast GI thread? If so, you really need point clouds; they are the only way in VEX to get neighbourhood points efficiently. Otherwise you have to import the position of every point in the geometry in a loop and check whether it falls within the neighbourhood by calculating its distance from your source point. Not the way to go.

Point clouds just do this by default: you specify your input point and the size of the neighbourhood, and you get back a handle so you can loop directly through those points only.
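
To illustrate the idea (a sketch in plain Python rather than VEX, so the function below is hypothetical and only mimics what pcopen()/pciterate() give you, not their actual API), a point-cloud lookup is just a radius query around a position:

```python
# Hypothetical sketch of a point-cloud neighbourhood lookup: return all
# points within 'radius' of 'center', closest first, capped at 'maxpoints'.
# This is what a PC handle lets you iterate over, instead of looping
# over every point in the geometry yourself.

def pc_open(points, center, radius, maxpoints):
    """Return indices of the points nearest to 'center' within 'radius'."""
    r2 = radius * radius
    hits = []
    for i, p in enumerate(points):
        d2 = sum((a - b) ** 2 for a, b in zip(p, center))
        if d2 <= r2:
            hits.append((d2, i))
    hits.sort()                       # closest first, like a PC handle
    return [i for _, i in hits[:maxpoints]]

points = [(0, 0, 0), (0.5, 0, 0), (2, 0, 0), (0, 0.25, 0)]
handle = pc_open(points, center=(0, 0, 0), radius=1.0, maxpoints=10)
# 'handle' now holds only the neighbourhood points, ready to loop over
```
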


Yes, this is related to that thread. For some reason I had decided that neighbouring points necessarily had to belong to adjoining faces, that is, to depend on topology. That was my mistake.

Point clouds just do this by default: you specify your input point and the size of the neighbourhood, and you get back a handle so you can loop directly through those points only.

Yes, I understand this. But I don't understand what a *GOOD* neighbourhood size is. Or does the "size" parameter rest entirely on the shoulders of the user of the shader? What does the NVidia paper say about it?


Yes, I understand this. But I don't understand what a *GOOD* neighbourhood size is. Or does the "size" parameter rest entirely on the shoulders of the user of the shader? What does the NVidia paper say about it?

You mean what's a good number of neighbors? Neighbors to a shade point? If that's the case, then you're better off stating it as a filter radius (in object-space units) than as a "number of points", but you only need to worry about that when assigning the final (filtered) occlusion result to a shade point. It is not related to the occlusion calculation itself. (... and the nVidia paper is understandably silent on this issue since they don't use point clouds like these)

In terms of how many point cloud (PC) points need to be visited in order to compute the occlusion value for each PC point, the answer is simple: all of them. That is to say, for each PC point that falls within your filter radius (user-defined), you will evaluate occlusion by visiting (for each pass) *all* the PC points. The cloud is occluding itself, so for each point in the cloud you need to check all the points in the cloud (no, that's not a typo :P). That's what I meant by "N^2 comparisons" in the other thread. So for 2 passes you will have 2*N*N evaluations of that infamous formula -- not very efficient, no, but you'll quickly find out if the formula is OK or not, and it *is* faster than 200 samples per shade point (up to some upper density where they'd probably start taking the same amount of time since typical occlusion() would only run for every *visible* shade point, whereas this algo is running over all PC points in the entire scene).
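
A minimal sketch of that N^2 self-occlusion pass (plain Python, not VEX, and using A*cosE*cosR / (pi*r^2 + A) as a stand-in disc-to-receiver form factor; note the r^2 in the denominator, though the paper's exact formula may differ):

```python
import math

# Hypothetical N^2 sketch of the cloud occluding itself: for every PC
# point (the receiver), accumulate occlusion from every other PC point
# treated as an emitter disc. One pass; for the paper's scheme you'd
# run this more than once, re-weighting emitters by their own occlusion.

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))

def occlusion_pass(P, N, A):
    """P: positions, N: unit normals, A: disc areas. Per-point occlusion."""
    occ = []
    for r in range(len(P)):                 # every receiver...
        total = 0.0
        for e in range(len(P)):             # ...against every emitter: N^2
            if e == r:
                continue
            v = sub(P[e], P[r])
            r2 = dot(v, v)
            if r2 == 0.0:                   # coincident points: skip
                continue
            d = math.sqrt(r2)
            cosR = max(0.0, dot(N[r], v) / d)   # receiver faces emitter?
            cosE = max(0.0, dot(N[e], tuple(-x for x in v)) / d)
            total += A[e] * cosE * cosR / (math.pi * r2 + A[e])
        occ.append(min(1.0, total))
    return occ
```

Two discs of area 0.1 facing each other one unit apart each receive 0.1/(pi + 0.1) of occlusion from the other; flip a receiver's normal away and its occlusion drops to zero.
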

Clear as mud? :)


The cloud is occluding itself, so for each point in the cloud you need to check all the points in the cloud (no, that's not a typo :P).

Clear as mud? :)

nope :P sorry. Why (just for occlusion) do you care if a point in the PC is also occluded? Sorry, I haven't spent any time trying this out yet, so maybe the answer is very obvious and as soon as I try it I will find out, but I just can't visualise why you need to know if the PC points themselves are occluded. Either the point you are shading is occluded or not, and if you test it against every point in the PC then you are done, no?

In my mind I'm thinking it goes like this:

I'm a point that needs shading: am I occluded? Look at every emitter disc around me: how much do they block the light from the whole hemisphere? If I'm looking at an emitter disc that is also in shadow, what do I care?

What am I missing? :unsure:

I was also thinking: rather than doing it for every shade point, I would store the result in the PC itself and then just filter that result for all nearby points. That way, even if I need N^2 tests, it will not be N^2 for every sample, just for the number of PC points. Maybe I'll need two point clouds, but that should improve efficiency, no?

(Sorry for hijacking your thread hoknamahn)


Hey Simon,

Why (just for occlusion) do you care if a point in the PC is also occluded?

Because the calculation is done for (and stored in) the PC... then transferred to the shade points via PC filtering.

I just can't visualise why you need to know if the PC points themselves are occluded. Either the point you are shading is occluded or not, and if you test it against every point in the PC then you are done, no?

Think of the PC as a "proxy" for your "real" shade points. It's a much sparser version of the shade points, so the idea is that if your expensive effect (SSS, AO, whatever) doesn't change too rapidly along the surface, then one could calculate it *only* on the PC and then transfer the result to the much denser shade points (which is done through filtering based on proximity of the target shade point to some small neighborhood of PC points -- that's what that whole "filter radius" or "number of points to filter" nonsense is all about: a way for the user to define just how big that neighborhood should be).
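
That transfer step can be sketched like so (plain Python, and the linear-falloff weight is my own assumption; in VEX this is roughly what pcfilter() does for you):

```python
import math

# Hypothetical sketch of transferring a value stored on sparse PC points
# to a dense shade point: a distance-weighted average over the PC points
# inside the user-defined filter radius.

def filter_to_shade_point(shade_P, pc_P, pc_occ, radius):
    """Weighted average of pc_occ over PC points within 'radius' of shade_P."""
    wsum, vsum = 0.0, 0.0
    for p, occ in zip(pc_P, pc_occ):
        d = math.dist(shade_P, p)
        if d < radius:
            w = 1.0 - d / radius      # simple linear falloff weight
            wsum += w
            vsum += w * occ
    return vsum / wsum if wsum > 0.0 else 0.0
```

A shade point halfway between a fully occluded and a fully open PC point comes out at 0.5, and a shade point sitting on a PC point is pulled toward that point's value.
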

In my mind I'm thinking it goes like this:

I'm a point that needs shading: am I occluded? Look at every emitter disc around me: how much do they block the light from the whole hemisphere? If I'm looking at an emitter disc that is also in shadow, what do I care?

Now change "I'm a point that needs shading" to "I'm a PC point that needs shading" and you've got it.

See, you don't want to be visiting the whole cloud for every shade point. In fact, that's exactly what you're trying to avoid.

I was also thinking: rather than doing it for every shade point, I would store the result in the PC itself and then just filter that result for all nearby points. That way, even if I need N^2 tests, it will not be N^2 for every sample, just for the number of PC points. Maybe I'll need two point clouds, but that should improve efficiency, no?

Bingo! :)

Except you don't need two PCs -- just the one will do.

Cheers!


Ok cool :D, that's exactly the way I've always used PCs. I just didn't follow the N^2 tests bit, since that's how they always work, so I thought you meant something else. N^2 never seemed that bad to me compared with hundreds of samples per shading sample; that still seems pretty rapid. I was actually therefore thinking you meant N^3!!

I misunderstood what you had written; we were both thinking the same thing. But how can you ever avoid that even with the HDK?

Clang!! The sound of a penny dropping... Of course, normally you don't need to visit every point, only the ones within the filter radius. Ok, so this is going to be much slower. Still, it doesn't matter too much as long as it's quicker and more stable than what is available at the moment.

The only reason for considering two PCs was to (perhaps) optimise the calculation by using one PC that contains all the points in the geometry (no over-sampling) and one that stores the result at a finer grain using the Scatter SOP; it would be the finer-grained one that gets filtered to provide the final shading. But perhaps that's just a crazy idea that won't work. In which case, yes, just the one PC that is tested against and stores the result to be filtered.


But how can you ever avoid that even with the HDK?

It's avoided (or sped up, rather) by taking advantage of the fact that, the further the emitter is from the receiver, the less it occludes (in inverse proportion to the distance between them squared -- incidentally, that's what the "r^2" term in the denominator of the formula is doing). Consequently, the emitters that represent surfaces that are far away ("far field") don't need anywhere as much resolution (or "granularity", or "emitters per volume") as the ones that are close to the receiver ("near field"). This is the same principle as the geometric Level Of Detail that we all know about -- a very detailed human head can be approximated convincingly with a single sphere, provided it's "far away enough" from the viewer (everything being relative).

So what they do in the paper is build a "pyramid-like" structure where the bottom level has all the emitters, and each subsequent higher level has 1/4th as many elements as the level immediately below it, each having an area equal to that of all 4 of its "children", and so on. In other words, each "parent" element is a composite of all 4 of its children.

Now with this kind of structure in place, instead of checking every single emitter, you can just scan the upper levels of the pyramid first, and only descend into the lower "higher definition" levels if the emitter is "close" to the receiver. Meaning that far away surfaces with thousands of emitters could (depending on how far and how big they are) be reduced to a single lookup. Hence the speed.
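
The pyramid idea can be sketched like this (plain Python, and both the grouping-by-fours and the "far enough" threshold of four times the element's linear size are my own assumptions, not the paper's exact construction):

```python
import math

# Hypothetical sketch of the "pyramid" of emitters: each parent element
# merges 4 children (position = area-weighted average, area = sum of the
# children's areas). At lookup time, a coarse element is used whenever it
# is far from the receiver relative to its size; otherwise we descend
# into its 4 children for more detail.

def build_pyramid(elements):
    """elements: list of (position, area). Returns levels, finest first."""
    levels = [elements]
    while len(levels[-1]) > 1:
        prev, merged = levels[-1], []
        for i in range(0, len(prev), 4):
            group = prev[i:i + 4]
            area = sum(a for _, a in group)
            pos = tuple(sum(p[k] * a for p, a in group) / area
                        for k in range(3))
            merged.append((pos, area))
        levels.append(merged)
    return levels

def gather(levels, receiver, ratio=4.0, level=None, idx=0, out=None):
    """Collect the (level, index) elements actually visited for 'receiver'."""
    if level is None:
        level, out = len(levels) - 1, []
    pos, area = levels[level][idx]
    d = math.dist(receiver, pos)
    far = d > ratio * math.sqrt(area)    # far field: composite is enough
    if far or level == 0:
        out.append((level, idx))
    else:                                # near field: refine into children
        for c in range(4 * idx, min(4 * idx + 4, len(levels[level - 1]))):
            gather(levels, receiver, ratio, level - 1, c, out)
    return out
```

With 16 unit-area emitters in a row, a receiver far from the whole cloud resolves it with a single top-level lookup, while a receiver sitting at one end descends into fine elements nearby and keeps the far groups as composites.
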

It is to build this kind of structure (and other niceties, like access to the kd-tree class to do your own space partitioning) that you need to resort to the HDK. But the basics of the thing can be done with point clouds, no problem -- and it's the fastest way to play with variations on the formulae for occlusion, emission (area lights and indirect illum), transmission (sss), etc.

Cheers!


Thanks for all the explanations Mario. :)

I wonder if it will be possible to search the PC in a pseudo-pyramid manner by varying the sample radius and number of points requested... hmmm, worth a try. Since I still haven't had any time to learn the HDK, and this sounds like something fairly advanced anyway, I'm going to try every hack I can think of to see what I can milk out of point clouds. They've been pretty good to me so far: greatest invention since sliced bread :D

