
Dumb question: Fastgi



Hi!

I'm still a beginner myself, but here's how I did it.

grid SOP -> Facet SOP (post-compute normals checked) -> Scatter SOP

(without the Facet SOP, the points of the cloud seem to have no normals)

Turn on the render-flag on the grid.

Save the Scatter SOP geo to disk.

Add a fastgi SHOP, and point it to the geo file.

Assign the SHOP to the grid.

Should work now.

However, I have some questions myself:

If I understand it right, the purpose of the point-cloud based approach is to somehow 'intelligently' scatter the points over the object, based for example on curvature, and then use these as GI sample points.

Points in low frequency areas will be spread farther apart, while points in high-freq areas will be denser.

So far so good, but wouldn't I need to increase the filter radius in these low-freq areas?

The fastgi shader seems to have a fixed filter radius for all points.

Maybe one could use the average distance of the points within the shader's filter radius to scale it, but being a newbie I'm not ready to implement that myself.
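Just to pin down the idea, something like this, maybe (totally untested; the point-cloud calls, the cloud file, and the baked "occ" channel are all just my assumptions, not anything the fastgi shader actually does):

```
surface
adaptive_pc_occ(string cloud = "gi_points.bgeo";  // hypothetical cloud file
                float  baseradius = 0.25)
{
    // First lookup: grab the nearest few cloud points and measure
    // their average distance, as an estimate of the local point spacing.
    int   h1 = pcopen(cloud, "P", P, 1e6, 8);
    float avgdist = 0;
    int   npts = 0;
    while (pciterate(h1))
    {
        float d;
        pcimport(h1, "point.distance", d);
        avgdist += d;
        npts++;
    }
    pcclose(h1);
    if (npts > 0)
        avgdist /= npts;

    // Second lookup: filter the baked occlusion over a radius scaled
    // by the local spacing, so sparse (low-frequency) areas get a
    // wider filter than dense (high-frequency) ones.
    int   h2  = pcopen(cloud, "P", P, max(baseradius, 2 * avgdist), 16);
    float occ = pcfilter(h2, "occ");
    pcclose(h2);

    Cf = 1 - occ;   // white = unoccluded
}
```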

Finally, has anyone got good results on highly detailed objects using point-cloud based GI?

Or is it more meant to be used in rather low-frequency scenarios?

Any insight would be greatly appreciated, as I'm desperately trying to get faster ambient occlusion out of Mantra.

(Compare with mental ray + dirtmap shader, for example)

Thanks


OK, I got it working now. Thank you very much.

But as you said, it really doesn't work all that well on complex objects :(

A shader that uses this approach (http://www.andrew-whitehurst.net/amb_occlude.html) would be great for Houdini, but I'm still light-years away from being able to write it :(

I mean, the shader as it is doesn't seem too complex. The one thing is that RenderMan __inShadow function, which returns whether the shaded point is in the shadow of a spotlight. If there's a lightweight (render-time wise) way to do that in Houdini, the rest wouldn't be very hard, I guess. Maybe one of the cracks in here could take a look at it :rolleyes:

Sven


Great link there.

Ok, I'm obviously just pulling this out of thin air.

Here is Larry Gritz's uberlight shader:

uberlight.sl

Has the __inShadow variable defined.

Wolfwood ported this shader to VEX:

http://odforce.net/forum/index.php?showtopic=681

Doesn't have __inShadow defined.

Maybe just add it to Wolfwood's shader?

I'm totally new to shader-writing, but if I understand this right, the magic is done in the BlockerContribution part of the shader.

So maybe one wouldn't need the whole uberlight shader.

Maybe this helps someone(Sven?) to cook something up until someone with a deeper understanding can take a look.

I think Mantra would really benefit from a shadow-map based ambient occlusion solution, also with regard to displacement mapping.

edit:

I think I understood the shader wrong.

BlockerContribution is nothing we need (I think)

This happens when newbies try to be smart :(

-b


Hi guys,

I can't get into this right now, but I just wanted to mention that you already have everything you need to do this trick in VEX. :)

I haven't looked at the code in the web page, but here are a few thoughts ...

1. The "mysterious" __inShadow exported parameter (variable, MTOR function, whatever -- pick one, it doesn't matter) is simply the result of evaluating 1-shadowmap() (see file://$HH/vex/html/functions.html#zdepth for details). IOW; how much "in shadow" a surface point is, based on a shadowmap lookup.

2. The "bent normal" is simply the average direction of unoccluded light. Now; you could choose to calculate this yourself if all your occlusion info is coming exclusively from shadow maps (not trivial), or you could just use the occlusion(float,vector,[options]) function to do the work for you (if you don't mind using ray tracing) -- in fact, this "bent normal" thingie is exactly what that version of the function is designed to do :)(see file://$HH/vex/html/shading.html#fn_occlusion for details)

3. In general, an evenly distributed light dome is a pretty piss-poor way to sample an HDR env map. It will do in the beginning while you're testing the rest of the solution, but... let's put it this way: you'll need a heck of a lot of lights to get a good representation of the environment; this means *lots* of shadow maps... which will likely mean you're better off using ray tracing via the occlusion() function...

4. Shadow maps have problems of their own -- they're not the panacea for speed in all cases. A camera that starts far away and then pushes in tight to an object will highlight a lot of the deficiencies with shadow maps (and when using a light dome, you need to multiply these problems by the number of lights in your dome). A solution that can switch between ray-traced and mapped occlusion might be a wise choice...

5. Last but most certainly not least, you have point clouds. If running occlusion() on every shade point is too slow for you, and you don't mind a soft occlusion pass, then you could average a few points in a cloud... see Mark Elendt's bundle for an example of this type of usage. (Third sketch after the list.)
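Off the top of my head, a tiny sketch of (1) -- untested, and the shadowmap() arguments here are from memory, so check the zdepth docs page above:

```
light
inshadow_sketch(string shadmap = "spot.rat";    // hypothetical map file
                export float __inShadow = 0)
{
    // shadowmap() tells you how *lit* the point is according to the
    // depth map (1 = fully lit, 0 = fully blocked), so the amount
    // "in shadow" is just one minus that. Passing the shaded surface
    // position here is an assumption -- see the docs.
    __inShadow = 1 - shadowmap(shadmap, P);

    Cl = (1 - __inShadow) * {1, 1, 1};
}
```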
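And (2) would look something like this -- again untested; the exact occlusion() arguments and the environment() lookup are assumptions, so check fn_occlusion before trusting any of it:

```
surface
bentnormal_sketch(string envmap = "env.rat")    // hypothetical HDR map
{
    float  occ  = 0;    // how occluded this point is (0..1)
    vector bent = 0;    // "bent normal": average unoccluded direction

    // The occlusion(float, vector, [options]) version fills in both.
    occlusion(occ, bent, "samples", 64);

    // Sample the environment along the bent normal and dim it by how
    // much of the hemisphere is actually open.
    Cf = (1 - occ) * environment(envmap, normalize(bent));
}
```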
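For (5), the point-cloud version boils down to something like this (the "occ" channel baked into the cloud file is hypothetical):

```
surface
pc_occlusion_sketch(string cloud = "occ_cloud.bgeo";
                    float  radius = 0.5)
{
    // Find up to 16 cloud points around the shade point...
    int handle = pcopen(cloud, "P", P, radius, 16);

    // ...and take a distance-weighted average of their baked occlusion.
    float occ = pcfilter(handle, "occ");
    pcclose(handle);

    Cf = 1 - occ;   // white = open, dark = occluded
}
```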

Just thinking out loud; hope that helps.

Cheers!


First of all, thanks for the replies :)

My main problem is that fully raytraced ambient occlusion takes way too much time to render, and I can't get decent results on complex objects with the fastgi shader. On his site, Andrew Whitehurst says that his depth-map based approach is 8-20 times faster than raytraced AO, so I think it's well worth trying out. And even if it doesn't work out to be better, I'll still have learned a lot and have something to do for the weekend :P

Sven


Maybe I should first grab a book about RenderMan before trying to convert a shader.

Does OcclSum /= dotSum mean OcclSum = OcclSum / dotSum ???

(I searched around to find an answer by myself, but /= is not the best thing to google for :blink:)

Thanks for the help

Sven



Yes :)

In general,

<variable> <op>= <expression>

is the same as:

<variable> = <variable> <op> <expression>

Where <op> can be one of +, -, *, /, %, &, ^, |

So:

a += b; is equivalent to a = a+b;

a -= b; is equivalent to a = a-b;

a *= b; is equivalent to a = a*b;

a /= b; is equivalent to a = a/b;

etc...

Cheers!


Hey SvenP, so why is fastgi not good enough for complex objects? Is fastgi the thing with point clouds? If it's point clouds, could it be just not enough points?


I tried clouds between 1,000 and 30,000 points, and all I get are odd-looking results. With 30,000 points it's not that much faster than raytraced AO anymore, so I'm trying to find a solution that is fast and looks good.

Sven


This is with 30,000 points, and that looks quite dense to me.


Could you post a couple of the renderings, even if they're not as good as you'd like, so we'd be able to comment?

Thinking about it, I'm not terribly sure that a highly detailed surface is the best-case scenario for point-cloud AO or GI. I'm pretty sure it's suited for typical interior scenes or simpler outdoor scenes. A model with complex fine nurnies might not get a fantastic representation in a point cloud - maybe to the point where the point-cloud density needed to catch all the detail is higher than the number of micropoly vertices when you come down to render the surface.


On the speed issue of depth-map based AO: the actual shadow-map generation usually makes the initial frame much slower. You only get speed gains when reusing shadow maps that have already been generated.

In some cases this works out really well, but if objects move etc., you run the risk of having to increase the sampling quality of the shadow maps / make them high-res and re-render the shadow maps for each frame separately. If you ever used gi_joe by Emmanuell Campin for Maya, you'll know what I'm talking about; his Maya script did a very similar job.

ZJ AO Page

This is a decent explanation of how to get depth-mapped AO working with PRMan and Maya.

I know AIR allows the actual baking of light information into textures. Since Houdini's AO shader is so far an actual light shader, I don't know if there is a way to bake this into a texture. If raytracing is supported when baking textures, it sounds fairly straightforward to write an AO material shader for Mantra.

Jens


As long as the geometry has legitimate UVs, it would be easy enough to render the object during an occlusion pass using the "mantra -u" (render the object in texture space) option. For this pass, I'd make the object(s) white and only contribute light from the GI light. The Expand COP is useful for taking these maps and extending them a few pixels beyond the flattened geometry in order to ensure that there aren't any cracks along the texture seams. These maps, multiplied by the world-space occlusion colour/image and the texture-space diffuse colour/image, should result in what is essentially pre-baked occlusion. (A rough sketch of the idea follows.)
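At render time, combining the baked maps could be as simple as this sketch (the file names, and doing plain texture() lookups on the surface's own UVs, are my assumptions):

```
surface
baked_occ_sketch(string occmap  = "occ_uv.rat";     // from the "mantra -u" pass
                 string diffmap = "diffuse.rat")    // texture-space diffuse
{
    // Both maps live in the object's UV space, so a straight texture
    // lookup at (s, t) lines them up.
    vector occ  = texture(occmap, s, t);
    vector diff = texture(diffmap, s, t);

    Cf = occ * diff;    // pre-baked occlusion darkening the diffuse
}
```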

I could be totally wrong, though - it is Monday morning. :)


3 weeks later...

I haven't used Houdini's point cloud GI caching but I do have a lot of experience with PRMan's, which I believe works similarly.

I have successfully used point clouds to cache ambient occlusion and bent normals on my Mars Rover object (~250,000 polygons). The occlusion is a tad noisier and blurrier compared to direct ray-tracing, but the render time savings are enormous (30 seconds vs 1 hour per frame).

Point clouds probably work better on smooth, nicely-parameterized surfaces (patches) compared to complex polygon objects. I had to spend a lot of time tweaking the sample rate to get rid of artifacts. I found that a sample rate corresponding to ShadingRate of 10 or less is necessary to capture all the details. Also, there tended to be artifacts where polygons intersected at places other than the vertices. (e.g. if you have two quads that cross each other in the middle, you might get one sample point on one side of the intersection and one sample point on the other side, so the final interpolated rendering will have a dark splotch at the crossing).

The final point clouds added up to about 3 million points. (that sounds like a lot, but it's for the whole model at a rather high level of detail)


PRMan's point clouds are just dumps of arbitrary shader variables at every micropolygon vertex. They do include surface normals for disambiguating lookups (so you don't mix in points on the opposite side of the surface you want) but no other structure is included. They don't really support adaptive sampling. (you can adaptively shoot rays, but the results are still stored at the uniform shading rate)

In version 12 they added a volumetric texture feature that compresses and filters point clouds into sparse octrees. These have the advantages of on-demand tile loading (like texture maps) and anti-aliased lookups. But there are still some bugs with this feature so I can't take advantage of it yet.

