
2.5D Mattepainting - Image Integration With 3D



Not really a Houdini-specific question, but I am using Houdini to do this - so if there are any people with experience doing 2.5D mattepainting in production, with Houdini or general 3D, I hope someone here can help me...

-----------------------------

I intend to create a 2.5D mattepainting using mainly combined photographs (which I will take myself) and some creative manipulation and combination in Photoshop...

The main intention for this is to create a stylised but photo-realistic landscape environment in which I can composite someone shot against a bluescreen.

The mattepainting(s) will then be used to build a simple 2.5D setup, so I can introduce some movement as well.

So theoretically, it is compositing live action into a combined live-action/CG environment - if you are still following me.

My question is: how do I go about perspective?

Should I take all the photos at the same focal length (in this case they are mainly going to be mountains, rocks, etc.)? Let's say, for example, 50mm.

If they are then combined to make a mattepainting, would this still be the same focal length - 50mm - even if it is extended a bit or objects are placed in all different places?

And what about shooting the live action? The studio is quite small, so if I wanted someone to look like they are in the distance, I couldn't use a really long lens; I would have to shrink them in post. So, using the example, would I have to maintain shooting at 50mm?

I understand that it is not the lens that distorts the image, it is the distance the subjects are from the lens - so this is what I am concerned about.

Also - and importantly - in the 3D CG environment, if I project the painting onto geometry, would my projection camera also have to be set up at the example 50mm? Because... if I have extended the picture so it is really wide for panning purposes... wouldn't this mean I have a different field of view?

OR should I design everything orthographically in 3D, so that I can change lenses with the recording cam in 3D, with minimal distortion?

Sorry for such a long post

Hope this is relevant and that I haven't confused anyone.

... any help would be greatly appreciated

J


Hey Johnny,

There are a lot of separate issues here...

The only real constraint, in terms of producing the matte painting, is that the settings of your projector(s) must match those of your capture camera(s): relative position, orientation, lens, etc.

Once you've recreated these capture points with suitable projectors, you need to build simple "dummy" geometry for them to project onto. At this point you'll have part (but not necessarily *all*) of the environment, so you need to fill in or extend with painting. To do the extension, you can pick some arbitrary point from which to create a cube environment map. Then you paint the map and re-project this painted version through 6 new projectors, each with a 90 degree FOV (in Houdini-camera terms, a 90 degree FOV just means aperture = 2*focal, since tan(45) = 1), centered at the same spot where you took the initial env map. Your dummy geo can remain the same.

Alternatively, you can "fill in" only some of the missing bits of environment by recapturing/painting/projecting from just the few points (lensing doesn't matter here) that need it to fully cover your CG move properly... and repeat to taste...

Normally, you would shoot a live background with the move as you intend it to be in the final, and then match-move your CG "recording" camera to that. You'd also shoot your keyable subject after the fact so you can match to your already shot BG... but I assume there are reasons why this is not possible in your case (?)

Hope that helps :unsure:

Cheers!

P.S: I have a vague memory of a program (from Adobe?) that tried to reconstruct the capture locations and let you set up dummy canvas geometry to project onto... I think it was called "Canoma" -- but this was some time ago... ring any bells for anybody?

<edit>

I just found it. It *is* called "Canoma" and it's by MetaCreations... go here.

</edit>


Hey Mario

Thanks so much for your detailed advice - you don't know how hard it has been for me to get information on this subject.

I'm sure I will have some further questions about your advice soon - but the first one that I want to get my head round a bit is... if I am making a panorama or stitching together photos to make an imaginary scene, what effect does this have on the field of view and those kinds of details?

In Houdini, you also have to put the aperture setting into the projector (which is the size of the film/CCD?) - so how would this be affected with a panorama/stitched pictures/matte painting? I assume you would have to take everything at the same height/focal length etc?

However, normally with mattepainting this doesn't have to be the case.

Would it be possible to project orthographically - thus allowing any kind of perspective for the recording camera? Would this work?

Anyway - thanks again. Canoma looks very cool - I've also come across ImageModeler by RealViz, which seems similar :D


I'm sure I will have some further questions about your advice soon - but the first one that I want to get my head round a bit is... if I am making a panorama or stitching together photos to make an imaginary scene, what effect does this have on the field of view and those kinds of details?

Again, the important thing is that your projector matches the captured image. In this case the captured image is a stitched-up panorama, but that doesn't change the process. IOW, if the panorama spans 180 degrees horizontally and 90 degrees vertically, then that is exactly what your projector should do. (BTW, a cube or lat-long map is *also* a "panorama").

Also note that when I say "projector", I mean a light that puts out coloured light. Not "projecting texture coordinates". Texture coordinates are interpolated and so will likely distort your texture under some circumstances -- so these are *lights* whose FOV matches the panorama. Also note that the "canvas" geometry will need to give you the exact values being projected, so don't use a "Lighting Model" VOP; just accumulate the raw Cl from the projector lights.

In Houdini, you also have to put the aperture setting into the projector (which is the size of the film/CCD?)

Have a look at the "Matching Houdini Cam to The Real World" document (online help; search for "FOV focal aperture", it should be the first result or near the top). This gives you the relationships between all these parameters. As you can see there, aperture (x and y) *is* somewhat related to the filmback size, but the relationship is not that intuitive. Basically, aperture is the number of units spanned by the field of view (in x or y) at a distance of focal_length units away... pass the Aspirin :P
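For a quick sanity check with Houdini's defaults (focal = 50, aperture = 41.4214), the relationship works out to:

fov = 2*atan(aperture/(2*focal)) = 2*atan(41.4214/100) = 45 degrees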

so how would this be affected with a panorama/stitched pictures/matte painting? I assume you would have to take everything at the same height/focal length etc?

However, normally with mattepainting this doesn't have to be the case.

It would make things easier, yes, but no, you don't *have* to (as long as you have a reasonable chance of duplicating the captures' position/orientation after the fact, that is).

Would it be possible to project orthographically - thus allowing any kind of perspective for the recording camera? Would this work?

No. Unless the projection in your panorama is itself orthographic - which would be impossible if it covered more than a 90 degree half-angle... no; not orthographic.

Just think of your stitched-up panorama as simply an image that has an X-by-Y angular coverage and duplicate this in your projection. After you've matched your projection to your panorama, you can then re-capture (in CG) and re-project as many times as you like to add definition or modifications that your final shot may need.

I'll try to set up a demo when I get a chance (not this week) as this stuff sounds a lot more complicated than it really is... (maybe someone else can whip up a demo?)

Cheers!


Mario - you are THE man! B)

I'm so grateful that you have taken the time to answer my questions.

Also note that when I say "projector", I mean a light that puts out coloured light. Not "projecting texture coordinates". Texture coordinates are interpolated and so will likely distort your texture under some circumstances -- so these are *lights* whose FOV matches the panorama. Also note that the "canvas" geometry will need to give you the exact values being projected, so don't use a "Lighting Model" VOP; just accumulate the raw Cl from the projector lights.

The way I have been doing it in my tests is to project the image through a camera, using my painting as a texture in a Constant lighting model and setting a UVTexture to 'Camera Projection'. It does distort the image at times; however, the general look is right once I have the dummy geometry right.

You are mentioning lights as a mode of projection - I take it you are using this as a way of describing how the process works in principle? I guess by doing it my cam-mapping/constant-shader way it is not far off, right?

Have a look at the "Matching Houdini Cam to The Real World" document
I hate looking at that page without aspirin!

The amazing thing is, if you stare at the page long enough, it starts to make sense. It does say, however:

Scanned Film Images

For scanned film images, you can simply divide the pixel width of the scanned image by the pixels/mm for the scanner, and plug this number into the aperture channel. Then set the focal length to the live-action lens's focal length.

Hmmm, seems easy enough, considering I am going to be scanning all my negatives.

So if I shoot all at 50mm, patch 'em together a bit, do the painting thing. Ultimately I divide the final image's pixel width by the scanner's pixels/mm and... I'm left with the aperture!

Last thing left to do is shove 50mm into Houdini's focal length. (deep breath) :blink:

The reason I'm checking is I haven't seen any way of manually inputting the FOV into Houdini - neither the cameras nor the lights have a place to put this... so I guess aperture is Houdini's only way of working that out?


You are mentioning lights as a mode of projection - I take it you are using this as a way of describing how the process works in principle? I guess by doing it my cam-mapping/constant-shader way it is not far off, right?

No, not far off at all. If you don't see any objectionable "stretching" artifacts (or blurring), and if you are able to set the camera projection to match, then go ahead and use a UV projection from camera. The "projecting from lights" approach removes interpolation artifacts (if any) and allows you to easily project an entire sphere (or distorted projections), which may be hard to do using the UV-from-cam approach. It is also better when you need a 1-to-1 match of pixels, have deforming canvas geometry, etc., etc. But there's nothing wrong with UV-from-cam if it works for you.

The drawback of projecting from lights is that you need to write the projection logic into custom shaders, both for the projector lights and the receiver (canvas) objects...

So if I shoot all at 50mm, patch 'em together a bit, do the painting thing. Ultimately I divide the final image's pixel width by the scanner's pixels/mm and... I'm left with the aperture!

Last thing left to do is shove 50mm into Houdini's focal length. (deep breath) :blink:

Hmmmm.... not quite. See below.

The reason I'm checking is I haven't seen any way of manually inputting the FOV into Houdini - neither the cameras nor the lights have a place to put this... so I guess aperture is Houdini's only way of working that out?

Looking at the geometry (which is just solving right-angled triangles), we can derive a way to set these parameters based on FOV angles (and x-resolution). One of the possible ways would be to modify aperture and resy, like this:

Assuming we know the two field of view angles fovx and fovy (in degrees), and given that:

tan(fovx/2) = (aperture/2) / focal

we can solve for "aperture" (given as "apx" in the docs), which gives us:

aperture = 2*focal*tan(fovx/2)

Meaning that we would insert the following expression in the Aperture parameter of the camera:

2*ch(focal)*tan(fovx/2)

replacing "fovx" with the desired angular spread for the horizontal field of view (start with 45 degrees to ensure that the numbers match the default...they should <_< )

Next, we can choose to enforce the vertical field of view "fovy" through the y-resolution parameter. We do this by relating it to our previous result for fovx, and expressing it as y-aperture first. So; given that:

   apy = (resy*apx) / (resx*aspect)
and
   fovy = 2*atan(apy/(2*focal))

we can solve for resy to get:

resy = 2*focal*aspect*resx*tan(fovy/2) / aperture

So, in the y-resolution parameter, we can insert the expression:

2*ch(focal)*ch(aspect)*ch(resx)*tan(fovy/2) / ch(aperture)

And again, replace fovy in that expression with your desired vertical field of view spread (in degrees). You can start with the value 34.9213 to ensure that it matches the default resy, which is 243 pixels.
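So, as a concrete pair of overrides (just the two expressions above with the default-matching angles of 45 and 34.9213 degrees substituted in; with the default focal, aspect and resx they should evaluate back to the default aperture and the 243-pixel resy):

Aperture:     2*ch(focal)*tan(45/2)
Resolution Y: 2*ch(focal)*ch(aspect)*ch(resx)*tan(34.9213/2) / ch(aperture)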

Note that at the end of all this, the "Focal Length" parameter becomes a placebo parameter... since the aperture value depends on it, it will adjust to the focal length you enter (whatever that value may be). But this is intentional, since we've decided to define the camera through two FOV values, not focal and aperture... so we only want those two parameters (fovx, and fovy) to modify what our camera sees. And the x-resolution controls the "pixel density" -- the overall resolution...

There are many other ways to slice this cat, this is just one of them... but they *all* require some Aspirin :P

But at least this will give you a way to control the camera through FOV values only.

Cheers!

P.S: @Marc: I think this topic belongs more in Rendering or General or... something that is not Modeling anyhow ;)


This sounds like it's crying out for a special camera in an OTL with all the expressions predefined.


Yup. And it wouldn't be that hard to add a bunch of presets, including a few standard prime lenses and such. And perhaps a few different modes: "Define with aperture/focal", "Define with FOV", "Define with Maya Params", "... Max Params"... the basic geometry is simple so I don't see why not.

I'll try to get one started when this job is over.


Yeah, that would be great to make an OTL - I'm screaming for one!!

-----------

Hey guys

Take a long deep breath before reading on - and pop a few aspirin.

Been thinking through and trying out some things.

I really, really appreciate the breakdown, Mario. It's helped me so much, and for that I am very grateful.

A few queries though :huh: (sorry - I promise I'll stop soon :rolleyes:)

It would probably be best for me to give the figures I am currently working with as an example, so that things are kept clearer.

I've shot my photos with a 50mm lens and scanned the negatives in. The scanner isn't super-accurate, but my final image is pretty much all of it - ending up at a resolution of 3600x2400.

As I shot the photos on 35mm stills film, the field of view at 50mm is... in x = 39.6 degrees,

and my field of view in y = 26.99 degrees.

I've successfully managed to project through the camera, setting the focal to 50

and the aperture to 2*ch(focal)*tan(39.6/2)... based on the x FOV.

I set the x and y resolution to 3600 x 2400 as this was a known amount.

(I am taking it that the further examples you gave, Mario, were ways of letting Houdini work out the resx and resy if I didn't know them - or was that for when I make a panorama out of the photos?)

So far it seems to work really well - the photo projects well.

As for my recording camera: the format I will be shooting the live action on is DVCPro50 (which is a 2/3-inch CCD chip - and records at 16x9!!!!), so the CCD size is 16.93333mm (2/3 inch converted to mm) in the horizontal.

So if I am right - I put whatever my focal is in the focal section, and in the aperture setting put 2*ch(focal)*tan(16.93333/2)?

Can I use this as a recording camera, considering the projection settings are completely different? I will have to, really, because the footage will be shot this way. It doesn't look bad, and I think I should be able to composite someone into the scene relatively OK with these settings. The resolution (PAL - 720x576 at a pixel aspect of 14.42 for being anamorphic) surely won't make a difference, as it is a setting for the recording cam, and the projecting cam is set up fine... or will it?

I guess it will take a compositing test for the theory to be realised.

-------------------------------------------------------------------------

My second query is about setting up the projection through a light rather than a camera, as you were mentioning, Mario.

At the moment, as mentioned before, I am projecting as a texture through a camera, with the shader set to constant.

This works well, but I would really like to try it through a light, as I would like to understand the difference and benefits. Setting up the viewing parameters is obviously the same as the camera settings - but how do I go about getting the texture to project onto the geometry?

What settings do the geos and the lights have to have?

Thanks a lot guys

You all rock, and I am so glad you are able to answer my headbanging questions.

Cheers

J

:notworthy:


Hey Johnny,

I'll take on each part of your post separately, that way I can respond as I find time to sit and think about it.

BTW, I'm glad you got me thinking about this stuff, since I've made up my mind that we need a nice solid cam model here at Axyz (instead of rediscovering the formulas every time we need to do this type of stuff, which is what we've been doing so far). I want something meaty that can reliably handle anamorphic formats, different film croppings, and all that other nonsense.

So this post is about matching your stills camera and the numbers you've come up with so far.

I'm by no means an expert on camera optics and film formats. Far from it. But I *can* look at the numbers and, from what I can see, a lot of what you've done so far seems to be correct, but just to confirm...

Matching the Stills Camera

According to the specs I've seen, a 35mm stills camera has a picture size (filmback) of 36mm horizontal and 24mm vertical. This means that a scanned resolution of 3600x2400 seems correct. You also know that you're shooting with a 50mm focal length. All the units are the same (mm), so let's see if the numbers jibe...

The horizontal filmback size (36mm) is the equivalent of Houdini's aperture parameter, and using our formula to calculate the horizontal FOV, (and assuming all trig functions are in degrees) we get:

fovx = 2*atan(aperture/(2*focal))

= 2*atan(36/(2*50))

= 2*atan(0.36)

= 39.5978 (degrees)

Which approximately matches your choice of 39.6 degrees.

To double check, we can put our result through our aperture formula, which gives us

aperture = 2*focal*tan( fovx / 2 )

= 2*50*tan( 39.5978 / 2 )

= 36 (mm)

So it checks out: for a 36mm hor. picture size and a 50mm lens, the horizontal FOV is exactly 39.5978 degrees, and choosing a resx of 3600 means your horizontal sampling density is 100 pixels/mm.

We can repeat the check for the vertical FOV, but I'll skip the details. The result is fovy = 26.9915 degrees, which your 26.99 matches close enough. All that's left to check is whether your scanning choice of 2400 for the vertical scanning density makes sense. Well, using the "resy" formula I posted earlier, we can see that:

resy = 2*focal*aspect*resx*tan(fovy/2) / aperture

= 2*50*1*3600*tan(26.9915/2) / 36

= 2400 (pixels)

(BTW, that's what I had meant when I said that the resx value becomes the sampling density: it's because using that formula for resy makes it dependent on fovy, leaving resx as the only "scanning density" or "resolution" control. But whether you put a literal value for resy (because you know resy beforehand) or use the formula (because you know fovy beforehand), they should both arrive at the same number, which in this case is 2400 pixels.)

So everything checks out for your stills camera. Life is good :)

...to be continued...



So everything checks out for your stills camera. Life is good :)

...to be continued...


That's a relief - it looks really good too. I'm starting to work out how to break it down into layers now so that I can do some nifty pull-focus stuff in compositing. Very exciting.

I've posted a question about a VOPs shader I've been trying to make (to get my head round VOPs mainly) that allows me to set up passes and sections of my painting quickly... I made a bit of a shambles of my first effort... the post can be seen in the WIP section. Any advice you have there and I would be very grateful.

Because the dummy geometry is so 'rough', I've found it works brilliantly if I separate the layers of my painting in Photoshop, extract an alpha, and then project each layer onto its own part of the scene that way, so that I have some transparency on the edges. I can also paint extensions for when the camera pokes around corners etc.

Apart from that, I would be interested in the light setup you mentioned... so far I am using cameras, and it would be great to see all the options and nail the best technique before I go full steam on producing all the matte paintings.

And yes Mario - your idea of a Cam model would be very cool indeed.

By the end of this, hopefully I'll be a 2.5D mattepainting machine... in Houdini.

Cheers

J


As for my recording camera: the format I will be shooting the live action on is DVCPro50 (which is a 2/3-inch CCD chip - and records at 16x9!!!!), so the CCD size is 16.93333mm (2/3 inch converted to mm) in the horizontal.

So if I am right - I put whatever my focal is in the focal section, and in the aperture setting put 2*ch(focal)*tan(16.93333/2)?

Can I use this as a recording camera, considering the projection settings are completely different? I will have to, really, because the footage will be shot this way. It doesn't look bad, and I think I should be able to composite someone into the scene relatively OK with these settings. The resolution (PAL - 720x576 at a pixel aspect of 14.42 for being anamorphic) surely won't make a difference, as it is a setting for the recording cam, and the projecting cam is set up fine... or will it?


If the live footage captured with your DVCPro50 is intended as a compositing element only, then there's no need to worry about matching a CG camera to it -- it just becomes a matter of undoing the anamorphic squish and comping away. However, if for some reason you need to project this footage into your CG scene through a Houdini camera, then yes, setting the aperture from the CCD width and setting focal to match your lens' focal length would be the way to go... except... that it doesn't seem to take into account this anamorphic business you mention.

I don't know anything about that camera, but if the CCD dimensions are natively in a 16:9 aspect (the CCD height would need to be 3/8"), which then gets squished into a 4:3 image when recording to tape (768x576 = 4:3, which you wrote as 720x576 = 5:4, but I'm guessing that was a typo), then you'd have to also stretch the images horizontally by 4/3 = 1.33333 to expand the 4:3 anamorphic to its native 16:9, and also change the CG camera's resolution from standard PAL (768x576) to its 16:9 non-anamorphic version which, according to my numbers, should be 1024x576 (= 16:9).

So, as far as I can tell, the steps to match the live camera would be:

1. set focal to match your lens' focal length (mm).

2. set aperture to 25.4*(2/3) = 16.9333 (mm) - the full horizontal filmback size. (Since aperture = 2*focal*tan(fovx/2) and fovx = 2*atan(filmback/(2*focal)), the focal terms cancel and the aperture is simply the filmback width.)

3. set resx to 1024 and resy to 576

4. stretch all your images horizontally by a factor of 4/3 = 1.33333..., or simply scale to fit a 1024x576 resolution and use these stretched images as the projection plates.
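Summarized as camera settings (a sketch, still assuming the 2/3-inch = 16.9333mm horizontal filmback figure from above):

focal:    your lens' focal length (mm)
aperture: 25.4*2/3   (= 16.9333 mm)
resx:     1024
resy:     576
aspect:   1          (square pixels at 1024x576)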

The only thing that really puzzles me from what you wrote is that crazy 14.42 pixel aspect you mention... it doesn't seem to fit with anything else... that's one *monster* of a factor. Can you explain where you got that number from?

I'll do the projecting from lights stuff in a separate post.

Cheers!


The only thing that really puzzles me from what you wrote is that crazy 14.42 pixel aspect you mention

My bad ... typo

The input is not 14.42 for the pixel aspect ratio - but rather 1.442. I should also have mentioned it was the pixel aspect.

The SDX-900 camera I'm using, which is a DVCPro50 format, records 16x9 through an anamorphic lens. The image size is 720x576, but squished into 4:3 by using a different pixel aspect ratio of 1.442.

(576/720)*(16/9) = 1.4222222222 - which is also why 720 stretched by that factor gives the 1024-pixel width Mario mentioned. (For some reason, in Houdini, when you set the camera to Abekas PAL anamorphic, it is set to 1.442, and not 1.4222222222.)

This information is then 'unsquished' (for want of a more technical term)... either when broadcast (in which case whatever you are viewing the image on has to be set to the same pixel aspect in order to view the correct widescreen 16x9), OR the image is made to fit into a 4:3 frame by squishing it vertically and allowing room for black bars at the top and bottom.

I don't know any other methods for achieving the same aim.

If I'm going to match my live action with the 2.5D environment, theoretically I'm going to have to render the environment at the same resolution and pixel aspect before compositing.

It seems like a challenge, but I don't think anything will go wrong... the main thing is to maintain perspective... so the camera in Houdini will have to be set to the same settings as the SDX-900, and the focal lengths matched.

So if I shoot on the SDX-900 with a 120mm lens, the Houdini cam, as you mention in your breakdown, will have to be set to the equivalent aperture and focal.

I'm wondering - and I guess I will only be able to confirm this at the time of compositing - whether the environment will maintain the illusion of being filmed at the SDX-900 settings.

If it were a fully CG environment, then I would be absolutely sure that the perspective would match, as long as the cameras had the same settings. However, with the 2.5D, the issue I'm concerned about is that it is a projected image originally from a completely different camera... in this case a 50mm stills camera.

I'm just hoping that it will look right, but a lot can be done with compositing in terms of perspective... I will be doing some live-action tests in the coming weeks, and I will start to post some results up for you to see.

Can't wait to hear about the light setup.

J ;)


The SDX-900 camera I'm using, which is a DVCPro50 format, records 16x9 through an anamorphic lens. The image size is 720x576, but squished into 4:3 by using a different pixel aspect ratio of 1.442.

(576/720)*(16/9) = 1.4222222222. (For some reason, in Houdini, when you set the camera to Abekas PAL anamorphic, it is set to 1.442, and not 1.4222222222.)

Yeah. Hard to tell where they got this 1.442 number from (as opposed to 1.422...), unless the Abekas has its own weird pixel aspect, which is entirely possible given that it *does* for NTSC, which it records at 720x486 with an aspect of 0.9 -- so there could be something similar happening for PAL. Are you using an Abekas disk recorder?

Either that or SESI made its own little typo :o

I don't know any other methods for achieving the same aim.

If I'm going to match my live action with the 2.5D environment, theoretically I'm going to have to render the environment at the same resolution and pixel aspect before compositing.

...unless you want to work (render/comp) with square pixels, in which case you'd do your focal/aperture matching (for the CG recording camera) and set the resolution to 1024x576 and an aspect of 1, leaving the anamorphic squeeze for final comp-out. This may be beneficial if you have to add text for example... might be worth considering. But either way is "correct" and the choice is a matter of convenience really.

I'm wondering - and I guess I will only be able to confirm this at the time of compositing - whether the environment will maintain the illusion of being filmed at the SDX-900 settings.

If it were a fully CG environment, then I would be absolutely sure that the perspective would match, as long as the cameras had the same settings. However, with the 2.5D, the issue I'm concerned about is that it is a projected image originally from a completely different camera... in this case a 50mm stills camera.

It should be fine (but don't quote me on that :D). The device matching for the projectors is independent of any device-matching done for the capture. Much as you can capture a real scene using any lens you like, you can capture your CG scene with any camera setting you like (as long as the CG projectors match your 50mm stills camera).

Can't wait to hear about the light setup.

I'll probably get to it tonight sometime... but don't quote me on that either :P

Cheers!


OK. Here goes the "light projector" thing:

All you need is a light shader that can look up a texture map based on how the lens-based perspective projection maps onto the image plane -- in shader-speak, this image plane is known as "NDC" space (Normalized Device Coordinates).

So... two main ways to go about this: 1) transform the surface position (P in a surface shader, and Ps from a light shader) to NDC space, and use the x,y coordinates in this space as the uv's used in the texture lookup. And 2) A little old-school perhaps, but you could duplicate the projection geometry inside the shader for potentially "funky" projections instead of relying on the NDC transform.

Using NDC Space

Projecting From a Light

Inside a light shader, the surface position is given by the global Ps (P is the position on the light geometry itself which, in the case of point-sources is, you guessed it, {0,0,0}). But Ps is given (as everything else inside a shader) in "world" space which, in VEX, means "camera" space. Fine. Then we have the function toNDC() which transforms a point from the perspective-projected space to the 2D image plane, such that its coordinates are normalized, running 0-1 from the bottom left corner to the top right corner. Problem is that the perspective projection in the case of a light is being done in the light's own space, which in VEX is "object" space, because, in the case of a light, the "object" we're shading is the light itself ... got those aspirins handy? :P

In VEX then, a bare-bones light projector using NDC would look something like this:

light ProjectorNDC (
      string map  = "";
   )
{
   Cl = 0;
   if(map!="") {
      // Ps (the surface position) arrives in "world" (camera) space;
      // move it to the light's own "object" space, then to NDC, and
      // use the x,y of the result as the uv's for the texture lookup.
      vector Pndc = toNDC(wo_space(Ps));
      Cl = texture(map,Pndc.x,Pndc.y,"filter","point");
   }
}

WARNING: There is a bug in VEX where the aspect ratio seems to be ignored by the toNDC() function (IN A LIGHT SHADER ONLY). I don't usually use the NDC method, so I just noticed this now. If your aspect is not 1, then the only workaround that comes to mind is to use aspect as an x-resolution modifier: resx = resx*aspect, and aspect = 1.

Projecting From the Render Camera

The case of looking up a texture inside a surface shader as though the texture were being projected by the render camera is very similar. The only difference is that we don't need to transform the surface position (which in this case is P instead of Ps) from world-to-object before transforming it to NDC. There is also no NDC bug with respect to aspect in this case.

Here's a basic camera projection in VEX. I've also added weighting by the alpha channel just to show how that could be done, but it's not something specific to surface shaders (i.e.: alpha could be used like this in the light shader above in the same way):

surface CamTexture (
      string map  = "";
   )
{
   vector4 Ctx = 0;
   if(map!="") {
      // P is already in "world" (camera) space, so no world-to-object
      // transform is needed before going to NDC.
      vector Pndc = toNDC(P);
      Ctx = vector4(texture(map,Pndc.x,Pndc.y,"filter","point"));
   }
   // weight the colour by the texture's alpha channel
   Cf = (vector)Ctx * Ctx.w;
}

Using Hand-Rolled Projections

We can also roll our own projections. As an example, here's how we could duplicate a perspective-from-camera type of projection, using the standard Houdini camera parameters. Not very useful, since we can already do this using NDC space, but it hopefully shows that we're not limited to standard projections inside our shaders. So this shader has the same functionality as the light shader above, but it does its own calculation of the projection geometry (simple vector projection to form the frustum).

light Projector (
      string map        = "";
      int    resx       = 720,
             resy       = 486;
      float  aspect     = 0.9;
      float  aperture   = 41.4214;
      float  focal      = 50;
   )
{
   Cl = 0;
   if(map!="") {
      float  spread  = 2 * focal / aperture;             // maps the frustum edge to s = 0..1
      float  tmod    = (float)resx*aspect / (float)resy; // frame aspect, to scale t to match

      vector l    = wo_space(Ps);   // surface position in the light's space
      float  norm = 2.*l.z;         // perspective divide
      float  sloc = spread*l.x / norm + 0.5;
      float  tloc = spread*l.y*tmod / norm + 0.5;
      Cl = vector(texture(map,sloc,tloc,"filter","point","wrap","clamp"));
   }
}

The Receiver "Canvas" Objects

Lastly, the objects receiving the "coloured light" need to reflect back exactly what they receive. So it's a "constant" shader, but not in the sense that the LightingModelVOP uses for its "constant" model. This one can be thought of as "constant reflectance", whereas the built-in one can be interpreted as "constant surface color".

No rocket science here: what comes in goes out -- a "pass-through" BRDF.

#include <shading.h>
#include <math.h>

surface Canvas (
      string   lmask     = "";
   )
{
   Cf = 0;
   vector Nf = normalize(frontface(N,I));
   // a "pass-through" BRDF: accumulate the raw (shadowed) Cl from
   // the projector lights and reflect back exactly what comes in
   illuminance(P,Nf,M_PI_2,LIGHT_DIFFSPEC,"lightmask",lmask) {
      shadow(Cl);
      Cf+=Cl;
   }
}

That's the basics of it anyway...

Here's a hip file that has some of the projectors shown above, but done in VOPs, which may be easier to follow.

Projectors.zip

DISCLAIMER: I haven't tested any of the VEX code above, so it's very likely that I made a mistak^H^H^H^H^H^Htypo somewhere... :P

Hope that makes sense.

Cheers!


Have been playing around with the light projections.

Very cool

Mario - I'm interested that you mentioned at one point that using this technique means you can also project an entire sphere.

I assume this means that if I photograph a full 360-degree, spherical panorama, which can be put together in a program such as Stitcher, I should be able to project it from within a sphere to recreate the environment?

Sounds like a lot of fun and very interesting.

I've managed to project a picture inside a sphere by reversing the normals; I'm just wondering if this full-360 technique is possible - and how. I guess it has a lot to do with working out the aperture/focal/FOV relationship again, or maybe there is another, far simpler way of doing it.

I'm sure you could just apply a polar UV map - but the interesting thing about projecting it is being able to manipulate it with all the benefits of projection... which could be awesome.

Any suggestions?

J


Mario - I'm interested that you mentioned at one point that using this technique means you can also project an entire sphere.


Sure, in this case you'd use the mechanisms available for looking up environment maps, except that instead of using a reflected incidence vector as the lookup direction, you would use the direction from the light's origin to the point on the surface (Ps-P)... and taking into account that this is in "camera" space and we need to express it in the light's space, it becomes:

vector D = normalize(wo_vspace(Ps-P));

The vector "D" then becomes your evironment lookup direction, which you can feed to the environment function in VEX, or the Environment VOP if you like VOPs instead. This would give you the effect of a point light spewing a lat-long env map in all directions.

Cheers!


I don't know VEX at all yet, I'm afraid... and I'm sorry to say that I've only been doing VOPs for about a month :unsure:

So... based on the examples you gave, I went into the "project image" subnet of the projection light shader.

From there, you took the surface position and changed it to NDC space etc... for the original light.

I'm guessing I don't need the texture map at all anymore, and instead I need to do as you said and somehow use a vector operation to direct the light from the surface position, through an environment map (and thus onto the surface position of the canvas), and back up through the sub output and into the light shader output.

I have no definite idea about how to do this, but I'm learning from this, so I really appreciate your (or anyone else's) direction.

Also - where does this leave the camera parameters? Does this render them useless, or do they still need to be exactly the same as the photo's to get a spherical projection? ...or will the environment map project spherically by default?

Thanks again

J :huh:

