
storing light in points.


freaq


Hello guys,

I would like to use (pre-baked?) light information to determine the scattering of plants and possibly other objects.

I have seen examples of people baking ambient occlusion maps and such, but the approach was often complicated and required generating proper UVs and a place to store the maps.

I would prefer baking into points/vertices, as it saves me from using textures and is a bit more flexible and easier to use overall.

Also, I do not require the detail that maps provide, nor the sophisticated rendering; for my purposes I would prefer (near-)realtime simple shadows (as displayed in the viewport) for the scattering.

Ideally I'd like to store the lighting as a color value.

Any tips would be greatly appreciated.


Depending on the complexity you're looking for, it might be possible with the Ray SOP. Set the normals towards the light, put something on the light (like a sphere), and then transfer an attribute from the sphere. You can then use that new attribute on the original points to scatter new points. If the results are too sharp, you can blur them or add noise; whatever you can do with any other point attribute works here.
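As a rough sketch, the "normals towards the light" step could be a one-liner in an inline VEX node ($lightPos here is a hypothetical parameter holding the light's position):

$N = normalize($lightPos - $P);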


I'm unsure about what you are suggesting.

If I understand correctly, I have, for example, a grid and a sphere (as a light).

Using a Point SOP I "point" my normals towards the location of the light (the light's position minus the point's position),

then use a Ray SOP and import the color.

However, if I do this I would always get 100% transferred color (as the normals line up perfectly and thus always transfer the color),

therefore I am assuming I am misinterpreting what you mean.

I could dot the "original normal" against the normalized light-direction vector created as you proposed

and multiply it with the incoming value, essentially writing my own Lambertian-style lighting...

but that still would not provide me with shadows or basic occlusion of any kind...
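Something like this, I suppose (a minimal sketch; $lightPos is a hypothetical parameter for the light's position):

vector $L = normalize($lightPos - $P);
float $diff = max(dot(normalize($N), $L), 0.0); // Lambert term, clamped at zero
$Cd = set($diff, $diff, $diff); // store the lighting as point color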

edit-

OK, the shading seems to work. Does anyone have any clues for the shadows?
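One guess on my side: a single intersect() ray per point towards the light might give hard shadows. A sketch, with $lightPos and $hitGeom as hypothetical inputs:

vector $toLight = $lightPos - $P;
float $hu, $hv;
vector $hp;
// offset the origin slightly along the normal to avoid self-intersection
int $blocked = intersect($hitGeom, $P + 0.001*$N, $toLight, $hp, $hu, $hv);
if ($blocked != -1) $Cd *= 0.2; // darken points whose view of the light is blocked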

(screenshots attached)

Edited by freaq

Wow, that occlusion is beautiful!

I have tried to understand the inner workings of the VOP SOP, but am having some big difficulties with it... (as I have never used VEX before).

I know I am asking much, but could you explain what it is doing? I'm trying my best to understand, and it seems to be a "standard" raytracing algorithm,

but I do find it hard to read.

In the VOP SOP, the first inline (code) VEX node contains:

if ($nvec.y > -0.6 && $nvec.y < 0.6) $N = set(0, 1, 0);
else if ($nvec.z > -0.6 && $nvec.z < 0.6) $N = set(0, 0, 1);
else $N = set(1, 0, 0);

So if I understand correctly, it sets the normal to line up with the closest axis (X, Y or Z)

and proceeds to cross it with the original (normalized) vector.

This provides the scene with two vectors forming an arbitrary plane.

The original vector and the cross product (forming the plane) are then crossed again to create a third vector,

creating a local coordinate system. So far so good.

The second inline node has this code:

$M = set($nvec.x, $nvec.y, $nvec.z, $nvec_1.x, $nvec_1.y, $nvec_1.z, $nvec_2.x, $nvec_2.y, $nvec_2.z);

So I do see that the inline node is set to create a matrix3, but I find the code above hard to read,

nor can I find anything about a VEX function called set() generating a matrix3, only about it creating vector3s.

I see it is taking all the input vectors, but the set() function stays unclear...

Anyway, if I understand correctly, it is creating a matrix for local coordinates based on the point normal.

But then the real code...

If I understand correctly, it uses a Monte Carlo-like system to scatter rays in a random direction from every point.

If a ray hits, a value is added to $ao.

int $i, $j;

$ao = 0;
vector $org = $P + $offs*$N;

for ($i = 0; $i < $nbSmp; $i++) {
    for ($j = 0; $j < $nbSmp; $j++) {
        // the nested loops fire $nbSmp * $nbSmp sample rays per point

        float $sin_theta = sqrt(nrandom());
        float $phi = 2.0 * M_PI * nrandom();
        // two random variables: $phi is 2*PI (which relates to 360 degrees)
        // multiplied by a random 0-1 value to get a 0-360 degree spread;
        // the sqrt on the other random value seems to "flatten" out the rays,
        // providing a nicer spread? Can anyone shed some additional light on this?
        // I did see a difference with and without it when comparing screenshots
        // in Photoshop, but some extra explanation would be welcome.

        vector $dir = set($sin_theta*cos($phi), $sin_theta*sin($phi), sqrt(1 - $sin_theta*$sin_theta)) * $M;
        // creates a vector in a random direction based on the variables above,
        // oriented by the matrix?

        float $hitU, $hitV;
        vector $hitPos;
        // variables for the intersect function to store its output data in

        int $hitRes = intersect($hitGeom, $org, $dir*$maxDist, $hitPos, $hitU, $hitV);
        // intersect is the actual raycasting, storing its results in $hitPos,
        // $hitU and $hitV, which seem to remain unused afterwards

        if (-1 != $hitRes) $ao += 1;
        // if an intersection occurs, $hitRes will not be -1, so 1 is added to $ao
    }
}

$ao /= $nbSmp*$nbSmp;
// divide $ao by the number of samples to normalize the output

// finally, in the Complement VOP the values are inverted

I think I am deducing this correctly, but if I made any wrong assumptions please correct me (as I have never used VEX before, I might have made some errors...).

Also, if someone could explain the matrix to me and how it ties in, that would be greatly appreciated.

Freek hoekstra


There is a small visualization in the initial scene, but here I made a better one.

The goal is to generate a set of points on a hemisphere around the normal.

Let's suppose that we know how to generate random points with desirable characteristics for some default case: a hemisphere around the world's Z axis. Then we need to build a transform that has its Z axis aligned with the normal.

In the attached scene the blue (Z) axis is the normal; use geo1/xform_me to change its direction and you can see how the little spheres that represent the sampling points stay centered around the axis.

So we already know the Z axis: it's the normal. The axis selection in the first inline finds a suitable initial upvector (the $N variable name is probably misleading here...), the new X axis is found by taking the cross product of the upvector and Z, and finally we compute the new Y axis, perpendicular to the other two, as the cross product of Z and X.
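In code that is roughly (a sketch using this naming; $up is the axis picked by the first inline, $Z the normalized normal):

vector $X = normalize(cross($up, $Z)); // $up and $Z are not orthogonal, so renormalize
vector $Y = cross($Z, $X); // already unit length, since $Z and $X are orthogonal unit vectors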

The set() function with 9 arguments is a matrix3 constructor; basically, it's a way to assign the axes computed above to the matrix rows.
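With the axes named as in the sketch above, the construction is equivalent to:

$M = set($X.x, $X.y, $X.z, $Y.x, $Y.y, $Y.z, $Z.x, $Z.y, $Z.z);
// row 3 is the normal, so set(0, 0, 1) * $M == $Z:
// the default hemisphere axis lands exactly on the normal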

The calculations of the $sin_theta and $phi variables are a way to map a set of 2D random points (provided by a pair of nrandom() calls) from the unit square to a set of points on the hemisphere. The theory behind this is rather complicated and I can't point to any online resource right now (although they undoubtedly exist), but it is nicely explained in "Realistic Ray Tracing" by P. Shirley & R. Morley, in the chapter on Monte Carlo integration.
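For reference, with u1 and u2 standing for the two nrandom() values, the mapping is:

sin_theta = sqrt(u1)
phi = 2*PI*u2
dir = (sin_theta*cos(phi), sin_theta*sin(phi), sqrt(1 - u1))

This is the standard cosine-weighted hemisphere sampling: the sqrt makes the sample density proportional to cos(theta), so more rays are fired near the normal, matching the cosine falloff of the occlusion integrand; that is why it gives a "nicer spread" than uniform sampling for the same sample count.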

The $dir expression simply converts from spherical coordinates to a 3D Cartesian vector.

For AO we only need the boolean result of the intersection test, so the hitPos/u/v results are indeed simply ignored.

N_basis.zip

(visualization image attached)

