Candide

world coordinates from illuminance loop


Hi there,

This is kind of a complicated question but hopefully someone can help.

I am trying to project onto a surface using some vector maths, but I am completely confused about how to get world coordinates out of the illuminance loop.

The attached scene has two different shaders that illustrate the point; switch the material on texturedGeo to compare them.

The instance point geo creates a single point that acts as the 'projector', gives it a normal that will be its projecting direction, and writes it out to a point cloud ("scatterCloud.pc", which odforce won't let me upload, so you'll have to make an empty one yourself if you're kind enough to help).

when shader = PCpreview:

The shader iterates through the points in scatterCloud and colours the geometry if the direction of projection and the vector connecting the projector to the shaded point are within a certain search angle (in this case 45 degrees). This produces a render as shown in PCpreview.png from cam1. This is all fine so far.
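The cone test described above could be sketched in VEX roughly like this. The shader and parameter names are made up (this is not the shader from the attached otl), and it assumes the point cloud positions and P are in the same space:

```vex
// Hypothetical sketch of the PCpreview idea: read the projector point
// and its normal from the point cloud, and colour the surface red if
// the shaded point falls inside a 45-degree cone around that normal.
surface pc_preview_sketch(string pcfile = "scatterCloud.pc";
                          float maxangle = 45.0)
{
    vector projP, projN;
    // One projector point, so a huge search radius and maxpoints of 1.
    int handle = pcopen(pcfile, "P", P, 1e9, 1);
    Cf = {0, 0, 0};
    while (pciterate(handle)) {
        pcimport(handle, "P", projP);
        pcimport(handle, "N", projN);
        // Angle between the projection direction and the vector
        // from the projector to the shaded point.
        vector toPt = normalize(P - projP);
        if (dot(normalize(projN), toPt) > cos(radians(maxangle)))
            Cf = {1, 0, 0};   // inside the cone
    }
    pcclose(handle);
}
```

Note that in a shading context P is typically in camera space, so if the point cloud was written in world space the two would need to be brought into the same space (e.g. with ptransform()) before the dot product test.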

when shader = mjw_surface1:

Things start to get weird. This should be a very simple shader. It runs an illuminance loop and colours the surface blue if it determines the shaded point should be 'lit' by the illuminance loop (in other words, if the vector N from the shaded point and L from the light shader are within a certain angle, in this case 45 degrees). The light shader (mjw_light1) is where all the confusion really lies. I do not understand how to determine the light position (P in a light shader) or the surface position (Ps in a light shader) in world space in order to return the correct L vector. The closest I can get is to just use the P and Ps values as they are (which I believe are in camera space) and do P - Ps, which creates a render as shown in mjw_surface.png from cam1.
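For reference, the surface-shader side of that description might look something like this minimal VEX sketch (names are hypothetical, not taken from mjw_surface1):

```vex
// Minimal sketch of the mjw_surface1 idea: colour the surface blue
// when a light is within a 45-degree cone of N.
surface angle_test_sketch(float maxangle = 45.0)
{
    Cf = {0, 0, 0};
    // The third argument is the cone half-angle: only lights within
    // maxangle of N enter the loop body at all.
    illuminance(P, N, radians(maxangle)) {
        // Inside the loop, L is the direction from the shaded point
        // to the light, in the current (camera) space.
        if (dot(normalize(N), normalize(L)) > cos(radians(maxangle)))
            Cf = {0, 0, 1};
    }
}
```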

The reason I need to use the illuminance loop is because I need to detect intersection and only shade the surface points that are the first to be intersected.

If anyone can explain how I could get world-space coordinates from the light shader (wo_space(P) obviously doesn't work), OR if anyone could explain how I could detect intersection in a surface shader, I would be HUGELY appreciative, as I have been working on this for ages.
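On the intersection question, one possibility worth testing (not taken from the attached scene) is VEX's intersect() function, which casts a ray against named geometry and returns the primitive hit. A rough sketch, with the geometry path, projector position, and tolerance all assumptions:

```vex
// Hypothetical sketch: cast a ray from the projector toward the
// shaded point and check whether this point is the first thing hit.
surface first_hit_sketch(string geo = "op:/obj/texturedGeo";
                         vector projP = {0, 5, 0})
{
    vector hitP;
    float u, v;
    vector dir = normalize(P - projP);
    // The direction's length sets the maximum ray distance.
    int prim = intersect(geo, projP, dir * 1e6, hitP, u, v);
    // If the ray's first hit lands (numerically) on the shaded
    // point, nothing occludes it from the projector.
    if (prim >= 0 && length(hitP - P) < 0.001)
        Cf = {0, 0, 1};
    else
        Cf = {0, 0, 0};
}
```

As above, P in a shading context is usually in camera space, so the ray origin and the geometry would need to be brought into a common space (e.g. via ptransform()) for the comparison to be meaningful.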

illuminanceLoopQuestion.hip

post-8332-134815962209_thumb.png

post-8332-134815962922_thumb.png

mjw_otl.otl


I had a very glancing look at the scene. There is no illuminance loop? Have you tried putting one down and diving inside? If you set down a Global Variables node in there you get a bunch of stuff that is not available in the other SHOP contexts. I would have thought you could extract the necessary information from there.

Using a pad and two fingers on a tiny laptop is a bit cumbersome, but I'm doing a very similar thing at work with volumes and I think it should be possible, especially if you transform to world space.

Edited by Macha


Thanks for your reply.

The illuminance loop is in the surface shader vex shop which is defined in the otl.

The problem I have is basically understanding how to extract coordinates from the light shader (called by the illuminance loop) which will be in the same coordinate system as the surface position defined in the surface shader.

I want to append the world position of the light that is being called, within the surface shader illuminance loop that is calling it.

Seems simple enough, but everything I try is failing. Perhaps you can't post stuff from work, but if you have any examples that would be great. Another solution to my problem would be to attach a unique identifier to an instanced light and be able to retrieve THAT from within the illuminance loop.
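If that identifier route is viable, one sketch of it in VEX would be an exported light parameter read back with limport() inside the loop. All the names here are made up, and this is a sketch under the assumption that exported light parameters are visible to limport():

```vex
// Hypothetical light shader that exports an identifier.
light tagged_light_sketch(export int lightid = 0)
{
    Cl = {1, 1, 1};   // plain white light
}

// Hypothetical surface shader that reads the id back per light.
surface id_reader_sketch()
{
    Cf = {0, 0, 0};
    illuminance(P, N, M_PI / 2) {
        int id = -1;
        // limport() pulls an exported variable from the light shader
        // currently being evaluated by the loop.
        if (limport("lightid", id) && id == 1)
            Cf = {1, 0, 0};   // only the light tagged 1 shades red
    }
}
```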


So, calculating light positions with the direction from P to the light doesn't work?

Ok, hm, well I think you can attach info to a light, like an attribute, and get it from the light shader, but I'm not sure exactly how anymore. There is an import node, I think.

You could also store it on the points themselves.

Edited by Macha


I tried a setup in an illuminance loop. Adding the surface position to the direction from the surface, then transforming the result from current space to object space, gives me the correct light position.


Sorry, transform to world, not to object, otherwise it won't work when translating. Mantra showed me something weird. The principle is the same though. Obviously it will add on top if you have multiple lights.
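Putting that recipe into VEX, a minimal sketch might look like the following (assuming L points from the shaded point to the light inside a surface-context illuminance loop, so P + L lands on the light):

```vex
// Sketch of the recipe above: P + L is the light position in the
// current (camera) space, so transforming it from "current" to
// "world" recovers the world-space light position.
surface lightpos_sketch()
{
    illuminance(P, N, M_PI / 2) {
        vector lightpos = ptransform("space:current", "space:world",
                                     P + L);
        Cf = lightpos;   // debug: visualize the world-space position
    }
}
```

With multiple lights the loop body runs once per light, so each iteration overwrites (or, if accumulated, adds to) the result, which matches the caveat above.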

As a side note: recently I have noticed some strange behaviour with the illuminance loop where it does not always seem to evaluate properly. For example the renderview will work, but only if you forward a frame, or it will work once and then not again, or the other way around. Or it will work, but not if you launch renders, either in micropoly or raytrace. This problem only happens with illuminance loops, and only with that part of the shader that the loop deals with.

It's weird, but, perhaps you're doing the right thing and mantra doesn't show it properly.

Edited by Macha

