
vex_101


_suz_


Haai all,

this is a post documenting my journey thru learning vex with the help of the Advanced Renderman book (by Apodaca & Gritz),

skipping the scene description parts and braving the shader section.

:alien1:

mr mario is correcting my misconceptions as i proceed and we are currently at the environment map generation part....

---but ANYONE is free to shed some light along the way---

#1.

this is an excerpt from the book, the environment map section:

"note that the environment() is indexed by direction only, NOT POSITION. Thus not only is the environment map created from the point of view of a SINGLE LOCATION but ALL LOOKUPS are also made from THAT PT... two points with identical mirror directions will look up the same direction in the environment map"

"this is most noticeable on flat surfaces , which tend to have ALL their points index the SAME SPOT on the environment map"

mario said : "picture grabbing all the bounced rays (little arrows) and moving them so they all come off a single point instead of being spread out over the surface: the "heads" of all those arrows will be pointing in pretty much a single direction."
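to make the "direction only" part concrete, here's a minimal VEX sketch of a plain environment lookup (the shader, its parameter name, and the single-direction form of environment() are my assumptions here - check the docs for the exact signature):

surface env_mirror(string envmap = "")
{
    vector nI = normalize(I);
    vector nN = normalize(N);
    vector R  = reflect(nI, nN);      // the mirror direction: depends only on I and N
    if (envmap != "")
        Cf = environment(envmap, R);  // lookup by direction R alone -- P never enters into it
    else
        Cf = 0;
}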

my reaction: just to ensure i understand the situation correctly, i drew an, albeit primitive, diagram:

#2.

from the book, to overcome this they proceed:

"we assume the environment map exists on the interior of a sphere...(even if we have assembled an environment map from six rectangular faces, because it is indexed by direction only, for simplicity we can just as easily think of it as a spherical map)

instead of indexing the environment by direction only, we define a ray using the position and MIRROR DIRECTION of the point, then calculate the intersection of this ray with our aforementioned environment sphere"

/*what's the mirror direction of a point?*/

mario said:

the actual "overcoming the problem" part is the bit about intersecting the reflection ray with a virtual sphere of *finite* (as opposed to infinite) size, then using that sphere's normal at the point of intersection as the lookup direction.

me again:

k, so we are simplifying the situation by assuming a finite sphere environment - regardless of the actual environment map format.
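a minimal sketch of that fix, assuming the virtual environment sphere is centred at the scene origin with radius rad (the function and all its names are mine, not from the book):

vector finite_env_dir(vector pos, dir; float rad)
{
    vector d = normalize(dir);
    // intersect the ray pos + t*d with the sphere |x| = rad:
    // solve dot(pos + t*d, pos + t*d) == rad*rad for the positive root t
    float b = dot(pos, d);
    float c = dot(pos, pos) - rad*rad;
    float t = -b + sqrt(max(0.0, b*b - c));   // assumes pos lies inside the sphere
    // the sphere's normal at the hit point is the new lookup direction
    return normalize(pos + t*d);
}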

and a diagram to illustrate my understanding of the MIRROR DIRECTION:

//remember to concentrate on the info of the diagram - and not the suckiness of it ... :P //

k, a fun pic, right...


thanx, mr mario

- we are endlessly grateful! :D

[attached diagrams: environment lookup from a single point, and the mirror direction]



some RSL to VEX questions:

#1.

keywords:

FORMAT: the keyword (in RSL), their use in RSL, and the question of their use in VEX:

uniform

if the variable is not going to be changed, make it uniform (for speed)

the opposite is varying

uniform is the default

? any equivalent in VEX ?

extern

? access to global variables ?

? is it necessary here - you just access globals directly, right ...P, N etc ?

output

used in the function header to mark parameters passed by reference, so the function can place its output in them

void foo(float kr&, float kt&)

is the VEX(and C) way of saying u'r getting a reference to the already defined variables

and changing this will change the original

? right ?

#2.

in the lighting models section, the *illumination loop*

you can loop over only specific lights

you can ?group? them by using the *categories* parameter available for each light at object level

RSL says the implementation of diffuse() and specular() respond to the __nondiffuse and __nonspecular parameters

as toggles for whether the light only contributes to the diffuse / specular of the scene

the VEX language reference says to check out the shading.h for implementation of the light definitions...

where might i find that?

-----the shading.h is a header file that contains only the function definition and constants----

-----is there anywhere i can view the implementation of the diffuse() function, for example...?----

----i hope i'm making sense...if there are any questions about, perhaps my lingo...pls dont hesitate to ask...----

thank-you mario,

thats all for 2day...so far... :oneeyedsmiley02:


Hi

About the "extern" type, shaders, depending on their type/context (surface shaders, displacement, light, imager, etc..), can access a specific set of variables that the renderer makes available to them.

For instance, in surface shaders, you can access P, N, dPdu, dPdv, etc... (for a list of the variables that a shader type can access, you can check the RI specs (3.1), or the 3.2 version (pdf only)).

So, in a surface shader, you can access these variables directly. But say you want to keep your shader clean and move some calculations into a function: that function won't be able to access the global variables unless you either pass the globals you need as function parameters, or declare them inside the function with the extern type.

color bar( point PP; )
{
    return noise(PP*2);
}

surface foo()
{   
    Oi = Os;
    Ci = bar( P );
} 

or with the extern type

color bar()
{
    extern point P;
    return noise(P*2);
}

surface foo()
{
    Oi = Os;
    Ci = bar();
}

As for the "output" type, mind you that i'm also learning VEX (my relatively small experience is with RSL though), i think the equivalent is "export".

If you want to return more than 1 value per function, in RSL, you would need to pass explicitly some parameters that you would define as "output" (or "export) that would get filled with the values that you wanted out of the function (in RSL 2.0 you can have structs, but ignore that for the time being). In VEX you can have functions return arrays, so in a way, returning more than 1 value.

In RSL though, this would be

void foobar( output float foo, bar; )
{
	extern point P;
	foo = noise(P*23);
	bar = noise(P*42);
}

So in order to get those 2 values out of the function foobar, you would pass 2 variables, explicitly declared as output type (or export). This means that they will be filled with whatever values they're set to inside the function, and those values will be available in the caller function or shader.

You can also use output types as shader arguments, to get AOVs (Arbitrary Output Variables).

About the storage class, uniform vs varying, i don't think there's an equivalent in VEX, but i might be wrong.

About the illuminance() looping construct: it's a way for surface shaders to loop over all light sources and access global variables that are made available by light shaders (such as Cl, the light color, and L, the (incident) light direction vector) -- basically, for each light source, do foobar. Inside an illuminance() construct, you can also query some light variables (those exported in the light shaders with the "output" type) using message passing, via the lightsource() shadeop.

For instance

    illuminance( P, Nf, PI/2 )
    {
        uniform float nonspec = 0;
        lightsource("__nonspecular", nonspec);

        if (nonspec < 1) {
            /* light contributes to specular illumination,
               so do whatever is needed here... */
        }
    }

In VEX you can also use message passing to query some variables exported from other shader types, but for the lightsource() case -- the __nondiffuse and __nonspecular toggles -- you can use light masks (someone correct me if i'm wrong, i haven't read much into this yet).
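For comparison, here's a bare-bones VEX counterpart of that RSL construct (just a Lambert accumulation; the per-light query is left out since, as noted above, the exact VEX mechanism is still an open question here):

#include <math.h>

surface simple_lambert(float Kd = 1)
{
    vector nf  = frontface(normalize(N), I);   // face the normal toward the eye
    vector sum = 0;
    illuminance(P, nf, M_PI/2)                 // loop over lights in the hemisphere above nf
    {
        // Cl and L are the per-light globals, just like in the RSL version
        sum += Cl * Kd * dot(normalize(L), nf);
    }
    Cf = sum;
}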

As for information on this, the shading.h header is at $HFS/houdini/vex/include/shading.h

Hope this helped.


Hey Suzanne,

Regarding your (and other people's -- don't worry, you're not alone :)) puzzlement at why an environment map should be indexed using only a direction and ignoring the ray's origin, you wrote:

1.

the book: "which tend to have ALL their points index the SAME SPOT on the environment map"

and your reaction: "you can see how all those rays I just described would end up indexing the map at almost the same pixel"

what bothers me about this is that it's supposed to be a lookup in order to get a mirror reflection - yet all points that reflect in the same direction just default to the same lookup pt

Suzanne's diagrams.

Nice diagram. The problem with that image though (as a tool to help you visualize and understand what's going on), is that you've drawn a finite sphere (well... not just finite, but so small as to be in the same scale domain as that of the object itself). At these relative scales, position *does* matter, and so you're finding it hard to understand why the heck they'd only use a direction as a lookup (ignoring position), when your diagram clearly shows that different ray-origin positions land at different envmap positions, right?

The whole purpose behind using an environment map is that they make for a nice, compact way to store *distant* radiance. And by "distant", one usually means "at scales that are several orders of magnitude bigger (or farther) than the receiver object" -- and in the particular case of environment maps, we mean that, for all practical purposes, the environment is so large (relative to the object) that you might as well say that it is "infinitely far away (or big)". So, if you were to draw a picture that included this infinitely large sphere, you would be forced to draw your entire object (heck, your entire scene) as a single dimensionless point at its center. And in this context, the positions from which the rays originate no longer matters -- the only meaningful measurement left is their direction.

Here's an attempt at a visual explanation: as the surrounding sphere gets larger, the rays emanating from a flat surface (and in the same direction) project to an ever decreasing solid angle; until they all end up projecting to a single point on the sphere (everything drawn in 2D for simplicity).

[attached diagram: rays off a flat surface projecting to an ever smaller solid angle as the sphere grows]

do i understand the MIRROR DIRECTION of a point correctly as the reflection angle being the same as the incoming angle,

so that it is the correct environment lookup for that specific pt...?

as in the diagram..(try to concentrate on the diagram's info and look past its suckiness...)

Suzanne's diagram.

Yes, but to clarify: the mirror direction of a *vector* (not a point) about a plane with normal N is the original vector rotated 180 degs around the axis N. And yes, by definition, they both make the same angle with N ("theta" in your diagram). The reflection vector will also lie in the same half-space (the same side) as the incident vector. There's some more detail on the reflection vector at this odWiki page (if you're feeling somewhat masochistic :)).
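For reference, a tiny sketch of the formula behind that (assuming N has been normalized; the shader itself is just my throwaway example):

surface show_reflection()
{
    // mirror (reflection) of the incident direction about the unit normal:
    //     R = I - 2 * dot(N, I) * N
    // which is what VEX's built-in reflect() computes:
    vector R = reflect(normalize(I), normalize(N));
    Cf = 0.5 + 0.5*R;   // just visualize the direction as a colour
}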

#1.

keywords:

FORMAT: the keyword (in RSL), their use in RSL, and the question of their use in VEX:

uniform

if the variable not going to be changed make it uniform (for speed)

the opposite is varying

uniform is the default

? any equivalent in VEX ?

The RSL storage modifiers (uniform and varying) are not needed in VEX. On the bound attribute side of things (the geometry attributes that may override shader parameters), Mantra will deduce the storage class based on the geometry attribute class -- so inbound point attributes will almost certainly be treated as "varying", whereas inbound primitive and detail attributes will most likely be "uniform" (there's likely more analysis going on than that, but I believe that's roughly the situation for bound attributes). On the shader side (local variables and function parameters which are not tied to bound attributes), again Mantra will deduce the "const-ness" (whether they can be treated as uniform or varying) of these based on their usage, so you don't have to do it manually. Long story short, you can safely write the following in some global VEX header:

#define  uniform  
#define  varying

and keep all your RSL-based "uniform" and "varying" keywords untouched in your code since they will be replaced with empty strings by the preprocessor -- i.e: vcc won't even see them.

extern

? access to global variables ?

? is it necessary here - you just access globals directly, right ...P, N etc ?

In RSL, "extern" is not just for global variables, but for any variable that's outside the current scope -- i.e: it would be needed for inline functions or external functions which pull these variables from an external scope instead of taking them as parameters (which is a better way to go about it, really). Mantra doesn't provide this mechanism for local scopes (which is just fine since it forces you to pass them as parameters in most cases), but you do have implicit access to context globals from inside user functions (though I would strongly advice against using this feature). So vcc will let you write something like this:

vector f() {
   return N;
}
surface test() {
   Cf = f();
}

without needing to use the "extern" keyword, as in RSL, like this:

color f() {
   extern varying normal N;
   return color(N);
}
surface test() {
   Ci = f();
}

But again, I would recommend passing all necessary parameters as just that: parameters. Like this:

vector f(vector n) {
   return normalize(n);
}
surface test() {
   Cf = f(N);
}

output

used in the function header to define incoming references to place the output in

void foo(float kr&, float kt&)

is the VEX(and C) way of saying u'r getting a reference to the already defined variables

and changing this will change the original

? right ?

The equivalent of RSL's "output" in VEX is "export", but!...

Both RSL and VEX pass by reference, but whereas RSL's compiler won't let you assign to function parameters that have not been declared as "output" parameters, vcc will, and this can result in some unexpected behaviour if you're not careful. Take the function f() in this fragment, for example:

void f(vector n) {
   n = {0,1,0};
}
surface test() {
   f(N);
   Cf = N;
}

The little function f() ends up overwriting the global N -- needless to say: do not do this! :)

Having said that, VEX's "export" keyword *is* expected and enforced in the case of parameters to context functions, so you do need to explicitly use them there. For example, if a shader is to export the object-space version of N (for an AOV layer, for example), then it has to explicitly declare one of its parameters as "export", like this:

surface test( export vector Nobj = 0; ) {
   Nobj = normalize(ntransform("space:object",N));
   Cf = 1;
}

Without the "export" keyword, the assignment would generate a compile error.

The bottom line: get used to NOT writing to parameters unless you absolutely need to (for shader exports, or extended returns from functions). And in all cases where it must happen, go ahead and decorate all instances with the "export" keyword (even if, in the case of parameters to user functions, this is just a placebo keyword). Another way of saying it would be: keep doing what you do in RSL (and assume the compiler will slap you if you write to a parameter that was not declared as "output"), but when writing in VEX, add another define to some global VEX header:

#define output export

This will let you continue using the "output" keyword as you do in RSL.

[EDIT] Rendertan beat me to it :) Sorry for some of the duplicated info [/EDIT]


wow,

thanx very much for the insights

- mario and rendertan -for the great diagrams, and willingness to engage :lol:

think i grasp environment lookup and keywords now...

so enough prelim questions --> an implementation... oren-nayar....

not to be useful, simply to try out the vex muscle

houdini's version:

[render: houdini's built-in oren-nayar]

---smooth and milky looking---

houdini's code with my comments:

#include <voptype.h>
#include <voplib.h>

surface
vopsurface1()
{
	vector	  clr;
	vector	  illum;
	bsdf		f;

	// Code produced by: oren1
	VOPvector ii = (0 != 0) ? {0.0, 0.0, 0.0} : normalize(I);		   //always normalize I
	VOPnormal nf = (0 != 0) ? {0.0, 0.0, 0.0} : normalize(N);	   //always normalize N

	if (1) nf = vop_frontface(nf, ii);					//always assign the frontfacing normals 
	illum = diffuse(nf, -ii, 0.1);						   //diffuse() implements oren-nayar to get the illumination of the surf ????????? can i inspect this function somewhere ??????????????
	clr = 1 * { 1, 1, 1 } * illum;						 //white color * the illumination
	#if defined(__vex)									  //???????? what's __vex ? i.e where can i find it?????????????
	f = 1 * { 1, 1, 1 } * diffuse();					   //get the oren_nayar BSDF 
	#endif

	// Code produced by: output1
	vector tempCf = clr;
	Cf = tempCf;
}

mine works fine if you apply it far away! ...ok no, my version goes horribly wrong:

[render: my VEX version, noticeably darker]

it's much darker and there are dark patches as you progress closer to the camera,

or where there's insufficient light

---it is the oren-nayar formula implemented in the illuminance loop----

C += Cl * cos(theta_i) * ( A + B * max(0, cos(phi_i - phi_r)) * sin(alpha) * tan(beta) ),  where sigma^2 = roughness^2,  A = 1 - 0.5*sigma^2/(sigma^2 + 0.33),  B = 0.45*sigma^2/(sigma^2 + 0.09),  alpha = max(theta_i, theta_r),  beta = min(theta_i, theta_r)

// formula courtesy AdvancedRenderman book//

#include <math.h>
/*  
 *  oren-nayar rough surface
 *  roughness = 0 == lambert surface
 */
surface
RSL_orenNayar(float roughness=0.1;)
{
	vector V = normalize(-I);								  //I is from eye to surf pt

	//surf roughness coeff's for the oren-nayar formula
	float sigma2 = roughness * roughness;
	float A = 1-0.5 * sigma2 / (sigma2 + 0.33);
	float B = 0.45 * sigma2 / (sigma2 + 0.09);
	float theta_r = acos(dot(V,N));						   //angle betw V and N
	vector V_perp_N = normalize(V-N * dot(V,N));	//part of V perpendicular to N  

	//accumulate incoming radiance from lights in C
	vector C = 0;
	illuminance(P,N,M_PI/2.0)
	{
			vector nL = normalize(L);   //L is from surf pt to light
			float cos_theta_i = dot(nL,N);
			float cos_phi_diff = dot(V_perp_N, normalize(nL -N * cos_theta_i));
			float theta_i = acos(cos_theta_i);
			float alpha = max(theta_i, theta_r);
			float beta = min(theta_i, theta_r);
			C +=  Cl * cos_theta_i *
					(A + B * max(0,cos_phi_diff) * sin(alpha) * tan(beta)); 
	}
	Cf = {1,1,1}*C;
}

i don't even know what to ask here...i have no idea what's the matter :blink:

i'm sure this'll be an easy-as-pie answer for u...



houdini's code with my comments:

#include <voptype.h>
#include <voplib.h>

surface
vopsurface1()
{
	vector	  clr;
	vector	  illum;
	bsdf		f;

	// Code produced by: oren1
	VOPvector ii = (0 != 0) ? {0.0, 0.0, 0.0} : normalize(I);	 //always normalize I
	VOPnormal nf = (0 != 0) ? {0.0, 0.0, 0.0} : normalize(N); //always normalize N

	if (1) nf = vop_frontface(nf, ii);	 //always assign the frontfacing normals 
	illum = diffuse(nf, -ii, 0.1); //diffuse() implements oren-nayar to get the
                                       //illumination of the surf ????????? 
                                       //can i inspect this function somewhere ??????????????
	clr = 1 * { 1, 1, 1 } * illum;	 //white color * the illumination
	#if defined(__vex)	//???????? what's __vex ? i.e where can i find it?????????????
	f = 1 * { 1, 1, 1 } * diffuse(); //get the oren_nayar BSDF 
	#endif

	// Code produced by: output1
	vector tempCf = clr;
	Cf = tempCf;
}

You're looking at the code generated by a VOP network, and there are certain preprocessor symbols that exist in the VOP environment which do not in vanilla VEX. These include predefined constants which are non-zero only when something is connected to a VOP's input, and so on. Not to mention layers upon layers of indirect assignments to temporary variables (depending on the complexity of the network).

In short: VOP code is not written by a human for humans to read, and trying to analyze it won't help you much with learning how to write VEX shaders -- it'll more likely just give you a massive headache.

In any case, to satisfy your curiosity, here's how that silly-looking code came to be:

// Code produced by: oren1
VOPvector ii = (0 != 0) ? {0.0, 0.0, 0.0} : normalize(I);  //always normalize I
VOPnormal nf = (0 != 0) ? {0.0, 0.0, 0.0} : normalize(N); //always normalize N
if (1) nf = vop_frontface(nf, ii);	 //always assign the frontfacing normals

Those pointless conditionals in each assignment are likely the result of code that originally read something like this:

VOPvector $ii = ($isconnected_nI != 0) ? $nI : normalize(I);
VOPvector $nf = ($isconnected_nN != 0) ? $nN : normalize(N);
if($facefwd) $nf = vop_frontface($nf, $ii);

By the time the VOP processor gets to those lines, the symbolic constants isconnected_nI and isconnected_nN will have been defined and given a value of either 1 if something was connected to the corresponding VOP input, or 0 otherwise. The processor then replaces those symbols by the values they actually represent (exactly like vcc's preprocessor does, or RenderMan's, or cpp) and leaves you with those silly-looking statements (some of which will get tagged as "dead code" and get thrown to the garbage by the optimizer).

The VOP parameters that those inputs represent (nN and nI) were obviously given a default value of {0,0,0} -- a value which would have been overridden by the input if something had actually been connected to it. But since nothing was, you see the defaults of {0,0,0} appearing in the "truth" clause of each conditional, and the fallback values of normalize(I) and normalize(N) in the "false" clause -- i.e: to be used when nothing is connected to the inputs nN and nI.

Similarly, the facefwd "variable" is actually a VOP parameter (int toggle), and as such, can be replaced by a constant (VOPs cannot be time dependent) -- said constant being the current value of the parameter, which in this case is 1. Again, the processor makes this replacement, and leaves you with the seemingly nonsensical statement: if ( 1 ) do_something;. Had the current value of this parameter been 0, then that statement would have read: if ( 0 ) do_something; and so no front-facing would have been done.

Did I mention that you're probably better off not looking at VOP-generated VEX code? :)

illum = diffuse(nf, -ii, 0.1); //diffuse() implements oren-nayar to get the illumination of the surf ????????? can i inspect this function somewhere ??????????????

Yes, diffuse() implements Oren-Nayar when given a roughness parameter >0. When roughness==1, you get pure Oren-Nayar, and at 0 you get pure Lambert. See the vex documentation.

No, you cannot look at the actual implementation of VEX's built-in diffuse() function (ditto for PRMan's). However, they also make the BRDF version of the function available (diffuseBRDF() -- see the docs), so you can always use that one when inside an illuminance loop (the diffuse() function does its own illuminance loop, same as PRMan's).

clr = 1 * { 1, 1, 1 } * illum; //white color * the illumination

Again, the 1 and the {1,1,1} are both VOP parameters (probably something like "diffuse intensity" and "diffuse color") that the VOP processor has replaced with constants.

#if defined(__vex) //???????? what's __vex ?????????????

//... some VEX-specific code goes here...//

#endif

A single question mark will do :)

The compiler (vcc) defines this symbolic constant. It can be used to do compile-time branching (as is the case here, since the bsdf type won't be recognized by other compilers).

OK. That's all the time I can spend on this right now. I'll take a look at your VEX port of Larry Gritz's implementation when I next get some time.

... but I can't leave this without giving you some unsolicited advice (I hope you don't mind)...

Writing a non-trivial BRDF at this early stage (even if it's simply copying code from someone else's RSL source) is probably a little overly ambitious. It will most likely only succeed in getting you very frustrated, very fast. Why not try something simpler (but far more rewarding), like a repeating pattern for example? Something you can truly attempt on your own, from scratch... and feel like a million bucks when it actually works.

The thing is... copy-pasting is fine -- everybody does it, and if the pasted code happens to work perfectly (highly doubtful), then great; go ahead and use it. But the moment it doesn't work, you suddenly realize you have to actually understand every single comma, semicolon, and every other syntactical wackiness (not to mention the actual algorithm) in order to be able to fix it. So might as well start slow, with something simple that you can understand completely and write yourself from scratch.

Here's a very popular web site [RManNotes] which gets you going slowly, but gives you a good structure in terms of how to think about building shaders. It's for RSL, but simpler than the "Advanced Renderman" book (which assumes you already know how to write shaders). Maybe try porting those exercises to VEX first, and see how it goes. (I learned a lot from those RManNotes when I was starting out myself -- yup, they've been around for a long time :))
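For example, something as small as this (a hedged sketch -- all the names and defaults are mine) already counts as a first pattern shader you can build on:

surface stripes(vector cA = {1,1,1}; vector cB = {0.1,0.1,0.6}; float freq = 10)
{
    float x = frac(s * freq);   // repeat the parametric s direction 'freq' times
    Cf = cB;
    if (x < 0.5)                // hard-edged stripes (no filtering/antialiasing yet)
        Cf = cA;
}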

Cheers!


haai!

so, walking before running ... the RManNotes ...this is what i can gather so far

...pls feel free to steer any misconceptions into the right direction... :D

ST and UV coordinates:

UV are the parametric surface coordinates:

thus the transformation of the 3D object to 2D space

with UV ranging [0,1]

[diagram: parametric (u,v) over the surface]

by default ST are assigned per primitive face, also in a range [0,1]

so if you use the default (s,t) coordinates in your shader - it runs the shader per primitive:

but by assigning the UV coordinates to the ST coordinates you use in your shader, you span the whole UV range

[diagram: uv assigned to st, spanning the whole surface]

now that u'r working in the (uv) space - range [0,1],

you can repeat the pattern...and rotate the pattern...and

//	  rotate2d(pt, angle, origin, result) -- clockwise
#define rotate2d(x,y,rad,ox,oy,rx,ry) \
  rx = ((x) - (ox)) * cos(rad) - ((y) - (oy)) * sin(rad) + (ox); \
  ry = ((x) - (ox)) * sin(rad) + ((y) - (oy)) * cos(rad) + (oy)
#define pulse(a,b,x) (filterstep((a),(x)) - \
						   filterstep((b),(x)))
#define blend(a,b,x) ((a) * (1 - (x)) + (b) * (x))
#define repeat(x,freq)	((x*freq) % 1.0)

surface
foo2(vector uv=0;)
{
	vector surf_color,layer_color;
	float layer_opac;
	/*****isbound -> if there is such an attrib on the surf->override the param value*/
	vector st = isbound("uv")? uv : set(s,t,0);		 //(ss,tt) now range between [0,1] over the stretch of the whole UV space
	float ss = repeat(getcomp(st,0),5);
	float tt = repeat(getcomp(st,1),5);					//(ss,tt) now range between [0,1] 5 times over the UV space
	float S,T;
	rotate2d(ss, tt, radians(45), 0.5, 0.5, S, T);

	//backgr layer0
	surf_color = {0.3,0.3,0.3};

	//layer1 BLUE
	layer_color = {0,0,1};
	layer_opac = pulse(0.35,0.65,S);
	surf_color =  blend(surf_color,layer_color,layer_opac);

	//layer2 GREEN
	layer_color = {0,1,0};
	layer_opac = pulse(0.35,0.65,T);
	surf_color =  blend(surf_color,layer_color,layer_opac);

	Cf = surf_color;
}

surprise pic if u need a break from reading thru code

[attached image]

question:

i have aliasing at the end of each 'block' - any thoughts on this? :unsure:

---i found that if i use the smooth() instead of the filterstep() in the pulse() aux function the aliasing disappears... :D ---

---don't know why though---

this is my understanding of the filterstep() and smooth()

[diagram: filterstep()]

[diagram: smooth()]

and:

when i reverse the order of transformation (ie rotation before the scaling) my result is this: :o

[render: rotation applied before scaling]

...that's not right...why are the lines shifted to the top and bottom right corners?

...thank-you for the input...



UV are the parametric surface coordinates:

thus the transformation of the 3D object to 2D space

with UV ranging [0,1]

Yes, u and v in RSL are equivalent to s and t in VEX and they both represent parametric coordinates. Parametric coordinates are the parameters associated with the construction of a surface element: So for polygons, meshes, and primitives (sphere, tube, torus, circle/disk, open curves, etc) they'll be in [0,1] over the surface element (e.g: a polygon element is a face, and a sphere element is the whole sphere), but for parametric surfaces (NURBs and Bezier) there is no guarantee that the range will be [0,1]. In fact, as a general rule, don't assume a [0,1] range for parametric coordinates (and certainly never for texture coordinates).

Keep in mind that even though RSL also has the global varying floats s and t, these are meant as *texture* coordinates (for convenience) which DEFAULT to the values of u and v (and can be overridden by bound attributes of the same name/type); the true RSL *parametric* coordinates are still u and v (not s and t as in VEX). So:

RSL | VEX
----+--------------------------------------------------------
u   | s
v   | t
s   | N/A (or at least not guaranteed to be equal to VEX's s)
t   | N/A (or at least not guaranteed to be equal to VEX's t)
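So in a VEX surface shader the parametric coordinates are just the globals s and t; a throwaway sketch (mine, not from the notes) that simply visualizes them:

surface show_st()
{
    Cf = set(s, t, 0);   // red ramps along parametric s, green along parametric t
}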


by default ST are assigned per primitve face, also in a range [0,1]

Well... mmmnnnyeah.... kind'a :)

The "transformation" is a "projection" -- the kind of thing that the UVProjectSOP does.

And again, you can't assume that texture space will be in the range [0,1] -- that is: a range of [-200.8,1.65285] is just as valid and your code should therefore be prepared to deal with it (or indeed, any other real-valued range).
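One hedged way to cope with an arbitrary incoming range is to remap it explicitly before doing anything else (smin/smax/tmin/tmax here are hypothetical shader parameters, not something from the notes):

surface remap_st(float smin = 0; float smax = 1; float tmin = 0; float tmax = 1; float freq = 5)
{
    // bring whatever range the coordinates actually cover back into [0,1]
    float ss = fit(s, smin, smax, 0, 1);
    float tt = fit(t, tmin, tmax, 0, 1);
    Cf = set(frac(ss*freq), frac(tt*freq), 0);   // visualize the remapped, tiled coords
}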

now that u'r working in the (uv) space - range [0,1],

you can repeat the pattern...and rotate the pattern...and

Change that to "range unknown", but yes, you can do all that.

//	  rotate2d(pt, angle, origin, result) -- clockwise
#define rotate2d(x,y,rad,ox,oy,rx,ry) \
  rx = ((x) - (ox)) * cos(rad) - ((y) - (oy)) * sin(rad) + (ox); \
  ry = ((x) - (ox)) * sin(rad) + ((y) - (oy)) * cos(rad) + (oy)
#define pulse(a,b,x) (filterstep((a),(x)) - \
						   filterstep((b),(x)))
#define blend(a,b,x) ((a) * (1 - (x)) + (b) * (x))
#define repeat(x,freq)	((x*freq) % 1.0)

Functions like your "rotate2d" shouldn't really be macros. The reason is that the arguments will be exchanged verbatim by the preprocessor. Meaning that if you call it with an 'x' argument of 'smooth(s,abs(dot(N,R)),exp(-length(I)*dot(a,b )))', then that whole expensive expression will be evaluated twice (one for each assignment 'ry' and 'ry') -- and ditto for the other arguments. Make it a true function instead (which IIRC is what the RMan Notes header does as well):

void rotate2d(float x, y, rad, ox,oy; export float rx,ry) {
   float sang = sin(rad), cang = cos(rad);
   float dx = x-ox, dy = y-oy;
   rx = dx*cang - dy*sang + ox;
   ry = dx*sang + dy*cang + oy;
}

And similarly for the other macros: pulse, blend, and repeat. Also, your "blend()" is already available in VEX as "lerp()", so you might as well use that.

surface
foo2(vector uv=0;)
{

Unless you enjoy recompiling every time you want to try out a different rotation or different colors, I'd suggest adding some parameters to your shader:

surface foo2 (
	  vector csurf	= 0.3;
	  vector clayer1  = {0,0,1};
	  vector clayer2  = {0,1,0};
	  float  ufreq	= 1;
	  float  vfreq	= 1;
	  float  rotation = 0;

	  vector uv	   = 0;
   )
{
}

And then you might as well give them some #pragma directives to tell vcc how to construct the UI for each one of those parameters:

#pragma label  csurf    "Surface Color"
#pragma hint   csurf    color
#pragma label  clayer1  "Layer1 Color"
#pragma hint   clayer1  color
#pragma label  clayer2  "Layer2 Color"
#pragma hint   clayer2  color
#pragma label  rotation "Rotation"
#pragma range  rotation -180 180
#pragma label  ufreq    "U Frequency"
#pragma label  vfreq    "V Frequency"
#pragma hint   uv       hidden
surface foo2 (
      vector csurf    = 0.3;
      vector clayer1  = {0,0,1};
      vector clayer2  = {0,1,0};
      float  ufreq    = 1;
      float  vfreq    = 1;
      float  rotation = 0;

      vector uv       = 0;
   )
{

}

Now you can play with it without recompiling every time you want to test different settings.

The source of most of the problems you're seeing is the modulo function (or the remainder operator '%' in VEX). As an aside, note that RSL's mod() function and VEX's '%' operator are not interchangeable. But regardless of which one you use, at least in this particular case, you'll end up getting the same kinds of artifacts. A variable 'x' that gets put through a modulo(x,1) operator will instantaneously jump from 0.99999 back to 0 at every integral boundary, and so a 0.2-wide filter for example, will, at a point near the boundary (say 0.9), attempt to filter over the interval [0.9..1-to-0...0.1] wrapping over an entire "tile" at the boundary -- which is why you see those ghosted greens and blues at the edges of each tile.

To avoid this, split the variable ('x' in this example) into its integral and fractional parts, then use these to feed the step and pulse functions.

Here's a complete fixed version, with plenty of comments to explain each step (hopefully :)):

// A slight variation on rotate2d(). Here we rewrite it so
// it can return the rotated x,y bundled as a vector
vector rotate2d(vector p; float angle_rad; vector pivot) {
   float sang = sin(angle_rad);
   float cang = cos(angle_rad);
   float dx   = p.x-pivot.x, dy = p.y-pivot.y;
   float rx   = dx*cang - dy*sang + pivot.x;
   float ry   = dx*sang + dy*cang + pivot.y;
   return set(rx,ry,0); // return the result as a vector
}

// Your pulse function. This will work as long as the 
// frequency is not too high
float pulse(float e0,e1,x) {
   return filterstep(e0,x)-filterstep(e1,x);
}

// VEX has "blend()" except it calls it "lerp()" so here
// we'll tell the preprocessor to replace every instance
// of the word "blend" with the word "lerp" in our code
#define blend  lerp

// Now we tell vcc how to construct the UI for our parameters
#pragma label  csurf    "Surface Color"
#pragma hint   csurf    color

#pragma label  clayer1  "Layer1 Color"
#pragma hint   clayer1  color
#pragma label  olayer1  "Layer1 Opacity"
#pragma label  wlayer1  "Layer1 Width"
#pragma range  wlayer1  0! 1!

#pragma label  clayer2  "Layer2 Color"
#pragma hint   clayer2  color
#pragma label  olayer2  "Layer2 Opacity"
#pragma label  wlayer2  "Layer2 Width"
#pragma range  wlayer2  0! 1!

#pragma label  rotation "Rotation"
#pragma range  rotation -180 180
#pragma label  pivot    "Pivot"

#pragma label  ufreq    "U Frequency"
#pragma label  vfreq    "V Frequency"

#pragma hint   uv       hidden

// And finally the shader
surface foo2 (
      vector csurf    = 0.3;        // Surface color
      vector clayer1  = {0,0,1};    // Layer 1 color
      float  olayer1  = 1;          // Layer 1 opacity
      float  wlayer1  = 0.3;        // Layer 1 width
      vector clayer2  = {0,1,0};    // Layer 2 color
      float  olayer2  = 1;          // Layer 2 opacity
      float  wlayer2  = 0.3;        // Layer 2 width
      float  ufreq    = 1;          // Frequency in U
      float  vfreq    = 1;          // Frequency in V
      float  rotation = 0;          // UV Rotation in degrees
      vector pivot    = 0.5;        // Origin of rotation

      vector uv       = 0;          // Bound (hidden) texture UV's
   )
{
   // get in the habit of initializing variables (even though
   // in this case it admittedly won't matter much)
   vector surf_color  = csurf;
   vector layer_color = 1;
   float  layer_opac  = 1;
   float  layer_width = 1;    // we're adding a width control here

   // get the current texture coords (defaulting to parametric coords)
   vector st = isbound("uv") ? uv : set(s,t,0);

   // now scale and rotate the texture space, if "rotation" is not 0
   if(rotation!=0) st = rotate2d(st,radians(rotation),pivot);
   st *= set(ufreq,vfreq,0);

   float  ss = st.x;
   float  tt = st.y;

   // now extract the integral parts of s and t -- this is the
   // continuous equivalent of the zero result in the discontinuous
   // mod(s,1) or s%1 functions
   int    si = floor(ss);     // integral part of s
   int    ti = floor(tt);     // integral part of t

   // Some temp variables we'll use when calculating where each 
   // layer "band" starts and stops (the extents fed to the pulse 
   // function)
   float layer_start=0.35, layer_end=0.65;

   // Now we calc each layer. Note how I call the function "blend"
   // when there's no such function defined anywhere in this module.
   // This works because the preprocessor will have replaced it with
   // the word "lerp" (which *does* exist in VEX) by the time the 
   // compiler (vcc) sees it. And this is achieved via that #define
   // directive near the top of the page.

   // layer1
   layer_color = clayer1;
   layer_width = clamp(abs(wlayer1),0,1); // good habit: force input to valid domain
   layer_start = si + 0.5 - (layer_width*0.5);
   layer_end   = layer_start + layer_width;
   layer_opac  = olayer1 * pulse(layer_start,layer_end,ss);
   surf_color  = blend(surf_color,layer_color,layer_opac);

   // layer2
   layer_color = clayer2;
   layer_width = clamp(abs(wlayer2),0,1); // good habit: force input to valid domain
   layer_start = ti + 0.5 - (layer_width*0.5);
   layer_end   = layer_start + layer_width;
   layer_opac  = max(0,olayer2) * pulse(layer_start,layer_end,tt);
   surf_color  = blend(surf_color,layer_color,layer_opac);

   // rinse and repeat... 

   Cf = surf_color;
}

[render of the fixed shader]

And finally, a small suggestion: You're much more likely to get a quick response if you keep things to a single focused question per post. Otherwise it's very hard to find enough time to sit and pick through twenty items in one sitting. Thanks,

Cheers!



morning all!

...progress report...


"the renderman shading language guide" (by cortes and raghavachary) has arrived and that'll be what i'm working thru

--it starts from the very basics--

--by the way if the idea of programming or maths sends chills down u'r spine this is the book to start with--

the book recommends using xemacs for development, since you can customize it as you pls,

but i'm wondering whether this is just from the point of view of the old school masters who are used to that kind of environment (like vim)?

then there is cutter - a renderman text editor with extension for vex,

even though rsl and vex are alike, is this really the best environment for vex development?

i'm thinking kate is probably the easiest to use,

you just have to set it up as a c-environment and u'r good to go - not to mention the pretty gui - thats important ;)

i was wondering which editor u've found to be most effective,

and whether u'd say that its simply a personal choice

thank-you!


I'm guessing you're going to get a lot of 'personal choice' answers to that question.

Having said that I use kate because I'm just too lazy to learn xemacs/emacs/vi or vim. The people who use them swear by them, but I question their sanity anyway... so there you go. ;).

M


haha,


i've reached the light shaders section...

the VEX equivalent to the RSL:

illuminate() - a light contribution statement used within the light shader,

to define a light that emits thru a 3d solid cone

solar() - to define a light that shines as a distant light source -> the sun

from mario's uber reply to the ambient light shader post

setting L=0 is equivalent to leaving out the illuminate() or solar() calls in the light shader

this makes your light an ambient light i.e the ambient() picks this light up and the illumination() loop ignores it

The main difference between VEX and RSL lights is that in VEX you describe the light's directionality by assigning a value to L,

whereas in RSL you set it implicitly via an illuminate() or solar() block...the important thing to take away from all this is that, in order to

have a true ambient light, you need to set L to zero.

thus if u'r not wanting an ambient light: L = nonzero ;)

...coz an ambient light has no direction - it's equally radiant from all directions...
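so, going by the rule quoted above, a minimal VEX ambient light would look something like this (the parameter names are my own guesses, not from mario's post):

light ambient_light(vector lightcolor = 1; float intensity = 1)
{
    L  = 0;                       // no direction: this is what makes it "ambient"
    Cl = intensity * lightcolor;  // constant contribution, independent of direction
}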

...while i'm at it ... the volume shader section uses RSL:

Atmosphere shader call to apply the volume shader between the camera and rendering surface

Interior shader call to apply the volume shader inside the surface

...can't find them in the VEX function list...am i looking in the wrong place? :blink:


the VEX equivalent to the RSL:

illuminate() - a light contribution statement used within the light shader,

to define a light that emits thru a 3d solid cone

solar() - to define a light that shines as a distant light source

from mario's uber reply to the ambient light shader post

setting L=0 is equivalent to leaving out the illuminate() or solar() calls in the light shader

this makes your light an ambient light i.e the ambient() picks this light up and the illumination() loop ignores it

thus if u not wanting a ambient light : L = nonzero ;)

...coz an ambient light has no direction - its equally radiant from all directions...

See page 4 of that reply. The behaviour of solar() depends on how it's parameterized, and therefore its VEX equivalent is slightly different for each case -- this is all detailed (to the best of my knowledge) in that reply.

...while i'm at it ... the volume shader section uses RSL:

Atmosphere shader call to apply the volume shader between the camera and rendering surface

Interior shader call to apply the volume shader inside the surface

...can't find them in the VEX function list...am i looking in the wrong place? :blink:

In VEX, the atmosphere context is called "fog".

There's no VEX equivalent to RSL's "interior" context.


whoops...reading on... :blush:

solar()

to define a light that shines as a distant light source - like the sun

from the ambient light shader post

--repeated here in the name of completeness--

The difference between illuminate() and solar() is just that solar() doesn't take into account the light's position -- i.e: it's purely directional.

1. Solar without parameters :

This is every point at infinity -- i.e: an illumination map.

L = normalize(N);

2. Solar with direction but zero angle:

Distant light -- i.e: purely directional, with zero cone angle and direction 'D' matching the light's +Z direction.

L = Lz*dot(L,Lz);

3. Solar with direction and non-zero angle:

Distant light over some cone. Presumably, this would use some function 'f' of the angle 'theta' between the light's direction and the direction to the surface to modulate intensity.

vector D = Lz*dot(L,Lz);

Cl = intensity * color * f(dot(D,L)/length(L));

the dot product boils down to:

A . B = |A| |B| cos(theta)   (so for unit vectors, A . B = cos(theta))

NB - only if A and B are unit vectors (normalized)

[render: ambient light shader on a plastic surface - that be a hot Tpot]

...thank-u mario...

...moving on...


Morning!

on to BRDFs

the day i figure this one out...

firstly to quote the book (p301):

"BDRF's are acquired using instruments such as gonioreflectometers, sensor arrays, digital camera's and so on.

You can find BDRF data for a variety of real-world materials online"

"Given a viewing direction V (usually normalize(-I)) and a light direction L, a true BDRF renderman shader would simply look up the reflectance from a BDRF file or table,

interpolating between existing data samples.

Such data-driven BDRF's are less common in practice compared to the use of alternative illumination models, which include procedural descriptions (arbitrary and possibly non-physical), phenomenological models (based on empirical observation, qualitatively similar to reality), analytical as well as simulated approximations of the real-world surface characteristics , models fitted to measured BDRF data , and so on."

...so is there BRDF data available for look-up - if they can be measured with a super-ultra-meter? --if so, where on the internet?--

and the rest of that long paragraph means -> look for the latest siggraph paper on the material, right?


numero uno...diffuse BRDF but translucent surface

/*
* also shade OTHER SIDE of surfaces
* so not only FRONTFACE
*/
#include <math.h>

#pragma label Kd_f "Front Face Diffuse Amp"
#pragma label tex_f "Front Face Texture" 
#pragma range Kd_f 0 1
#pragma label Kd_b "Back Face Diffuse Amp"
#pragma label tex_b "Back Face Texture"
#pragma range Kd_b 0 1

surface
RSLG_illum_translucent(float Kd_f = 0.5, Kd_b = 0.3;
					   string tex_f = "", tex_b = "";)
{
	vector c = 0, c_f = 0, c_b = 0;
	vector Ln = 0;

	if(tex_f != "")
		c_f = texture(tex_f);
	else
		c_f = 1;
	if(tex_b != "")
		c_b = texture(tex_b);
	else
		c_b = 1;

	vector Nn = normalize(N);
	vector Nf = frontface(Nn,I);

	//FRONT
	illuminance(P,Nf,M_PI/2)		//will iterate over lights in the cone defined by Nf and angle M_PI/2
	{
		Ln = normalize(L);
		c += Cl * c_f *Kd_f * dot(Nf,Ln);
	}
	//BACK
	illuminance(P,-Nf,M_PI/2)	//iterate over the lights defined by the -Nf and the angle M_PI/2
	{
		Ln = normalize(L);
		c += Cl * c_b *Kd_b * dot(-Nf,Ln);
	}	
	Cf = c;
}


my result is a diffuse surface, shaded on the inside too, but

the result of this in the book is a translucent surface...

am i combining the front and back contributions incorrectly?

[render: the combined Nf and -Nf lighting contributions]

[render: only the Nf lighting contribution]

PS if i wanted to add the surface color - Cs in RSL - how do i access that in VEX?

thank-you for the input :D

