
Translucency in Houdini ... how to?!



Hi everyone

Sorry if I'm being an "over-asker" lately, but I badly want to get my head around Houdini's way of thinking! So excuse me for that, and thanks to you all.

One of the issues I'm facing now is translucency: whatever I try, I cannot get it to work. What I want is simple - a light behind a sheet of paper ...

I tried the Translucent material with different settings, and while searching all over the place for an answer I found one technique consisting of negating the normals and passing them to the nN input of a Lambertian lighting model ... but that also didn't work ...
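
Roughly, the technique I found amounts to something like this in raw VEX (my simplified reading of it, so I may well be getting it wrong):

    surface backface_translucent(vector tint = {1, 1, 1})
    {
        // negate the normal so it faces the light *behind* the sheet
        vector nN = normalize(-N);
        vector illum = 0;
        illuminance(P, nN, M_PI / 2)
        {
            // lambertian term against the flipped normal
            illum += Cl * max(dot(normalize(L), nN), 0.0);
        }
        Cf = tint * illum;
    }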

Any help or clarification on that subject would be much appreciated.

Best Regards


Thanks Jason for the file, it was very helpful.

One other question, though: do you know how the default Translucent material in Houdini works?

I played with it for some time and wasn't able to get any result out of it, not even close!

And since it's there ... I would really want to know whether it works or not ... :)

Thanks again Jason

Best Regards



As far as I know it was intended for things like glass (clear transmission with refraction). I wouldn't be surprised if that VOP has fallen into disrepair over the years, in favour of the Glass VOP and such.


Hi again Jason, I'm back with a few questions on this subject.

1 - I tried applying displacement to the shader, and whenever I use some displacement it appears as if the light starts going straight past the translucent object (as direct light, I mean) in a random pattern. So if you have your translucent object between the light source and another, normal object: no displacement = good result; with displacement you get direct-light patterns on the normal object ... which is problematic ... any ideas?

2 - When I disconnect the lighting model you called "front face" in your network, it seems to make no difference at all ... so why did you put it there? Does its effect only appear in some conditions I haven't run into yet?

3 - Instead of the global N, I tried using the N output of a Burlap noise, which gave me some lines on my translucent object. But if I create and change UVs on this object, it doesn't affect that bump at all :S ... is that normal? (I didn't forget to link s and t from the global variables.) In fact, I did another test and realised that if I use the uvCoords node I do get the influence of the UV change, but if I use the ones from the Global node I don't get any change. Why is that? :S

Thanks for your help

Regards



The lighting model in the frontface box is there so that the front_side_light contributes to the color of the shader (i.e. it gives the grid a green tint).

As long as we're asking questions, I have one of my own, Jason. You note: "Please note that you must turn off "Ensure Face Point Forward" in the Lighting Model VOPs." This only applies when the normals of the grid are pointing away from the torus. When they are pointing towards the torus (I rotated the grid 180°), it doesn't matter whether "Ensure Face Point Forward" is on or off. So I suppose the question is: am I right about that? And if so, when using this technique, is the orientation of the normals something you always need to think about, or can you not worry about it and just uncheck "Ensure Face Point Forward" if it's not working?



Yeah, the "Normals Point Forward" thing really means "Normals Point Toward Camera" - and so if you do this, the back-facing-ness of the normals when they point away is then lost: something we don't want to lose quite yet.

One of the big reasons for this frontface() operation (the name of the actual VEX function inside the Lighting Model VOP) to exist is so that N is well prepared to enter the multitude of lighting models - diffuse, blinn, specular, and so on. Many of these operations will otherwise return funky/negative values: for example, in the case of lambertian diffuse we compute dot(N, L) (L is the vector from the surface to the light), and we don't want negative values.

This is really important in the case of semi-transparent surfaces where we see through to the back faces of the model, but a more subtle case exists at the grazing edges of rounded objects, where the smoothed normal interpolation combined with the polygon approximation of the curved surface results in N starting to face away from the camera on a piece of polygon that is still facing the camera. This is obviously impossible in reality (having something facing us, yet shaded as if the surface were facing away from us), but this frontface() thing saves us from ugly artifacts. Am I making sense? Should I illustrate it?
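
In raw VEX, the gist is something like this (a minimal sketch of the idea only, not the Lighting Model VOP's exact internals):

    surface frontface_demo()
    {
        // frontface() flips N so it never points away from the camera;
        // without it, dot(nf, L) can go negative on back-facing normals
        // and at the grazing edges of rounded objects.
        vector nf = frontface(normalize(N), I);
        vector clr = 0;
        illuminance(P, nf, M_PI / 2)
        {
            clr += Cl * max(dot(normalize(L), nf), 0.0);
        }
        Cf = clr;
    }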



Thanks for the tip geneome. I first did the tests on Jason's scene, then replicated the VOP network in another scene which doesn't have any front light and did more tests there; I deleted the front face lighting model and saw no difference, forgetting about the tests I'd done in Jason's scene. My mistake ...


I kind of understand a little what is happening, but an illustration would be a great help. What I'm wondering, though, is why other apps don't have such an option. Is it because they do it under the hood without letting us know, or are they not doing it at all?

Besides, doesn't it cause problems for the faces that SHOULD have their normals facing away from the camera?

Any answer for my other two questions?

- What's the difference between the s and t from the uvCoords node and the s and t from the Global node? They give different results ...

- When I apply displacement to my geometry with Jason's surface material, I get very weird results on the surrounding objects ...

(For more details about those two questions, please refer to my post above.)

Regards



I kind of understand a little what is happening, but an illustration would be a great help. What I'm wondering, though, is why other apps don't have such an option. Is it because they do it under the hood without letting us know, or are they not doing it at all?

Besides, doesn't it cause problems for the faces that SHOULD have their normals facing away from the camera?

Yes, other renderers are all doing this under the hood, in their built-in shaders. It's only that in Mantra (like RenderMan) the shading language is open for you to do all kinds of things. VOPs hide just a few of these little technical snafus for you ... but as you can see, not all of them.

- What's the difference between the s and t from the uvCoords node and the s and t from the Global node? They give different results ...

The uvCoords are generally read in from the uv point attribute on the geometry - something you've created yourself. The s and t on the Global VOP are the intrinsic parametric coordinates that exist per primitive. For polygons, that's a 0-1 coordinate per polygon, which is obviously not so useful, but for patch surfaces (like NURBS, meshes, etc.) it is a built-in coordinate space that spans the entire surface. It's actually quite rare to use s and t, since most of the time you define your own UVs using texture projections, etc.
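
A tiny sketch to make the difference concrete (illustrative only; it relies on Mantra binding shader parameters to geometry attributes of the same name, so the uv parameter picks up your uv attribute):

    surface show_coords(vector uv = {0, 0, 0}; int use_uv = 0)
    {
        // s and t are globals: the intrinsic parametric space per primitive.
        // The uv parameter is auto-bound to your own uv attribute.
        if (use_uv)
            Cf = set(uv.x, uv.y, 0.0);  // follows the UVs you edit in SOPs
        else
            Cf = set(s, t, 0.0);        // ignores your uv attribute entirely
    }

That's why a bump driven by s and t never reacts to UV edits, while one driven by uvCoords does.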

- When I apply displacement to my geometry with Jason's surface material, I get very weird results on the surrounding objects ...


I'll take a look soon - I can't tell why without opening it up and seeing what you are doing.


Sorry if this comes off harsh, but it would be really helpful if you posted your sample file - people are taking their time to help you, and that would be the least you can do to make it a bit easier. I also realize that sometimes it's not possible, so please construct a simplified version. In the case of displacements, I can't seem to reproduce your issue, but it could also be a problem of displacement bounds, or re-dicing, or raytraced displacements, and so on. Not having the scene to analyze makes it a lot harder.

In any case, here's Jason's example with displacement.

post-1116-1242563455_thumb.jpg

cheers,

Abdelkareem

backface_diplaced.hip


Sorry if this comes off harsh, but it would be really helpful if you posted your sample file - people are taking their time to help you, and that would be the least you can do to make it a bit easier. I also realize that sometimes it's not possible, so please construct a simplified version.

Hi Anamous, first of all I'm really sorry if it gave that impression, but sometimes I post my questions from work and I'm not allowed to post files from there ... No worries though, I'll try to ask my questions when I get home; that way I'll be able to put together example scenes more easily. Thanks for the remark, and sorry again.

In the case of displacements, I can't seem to reproduce your issue, but it could also be a problem of displacement bounds, or re-dicing, or raytraced displacements, and so on. Not having the scene to analyze makes it a lot harder.

Thank you very much for the tips and the example file. Apparently the problem was the displacement bounds: when I played with it a little it solved the problem, even though I don't really know what it does, so now I'm searching for some docs about it.

Thanks again for your time

Regards



You'll notice that Mantra renders in buckets - little blocks of, say, 32x32 pixels. Using this technique, Mantra can make more clever decisions about its RAM usage and attempt to keep its memory footprint at a minimum. The problem with displacement shaders is that they can move geometry into a bucket where it might not have been visible beforehand. Also, programmable displacement shaders can be written in such a way that the artist can do whatever he likes with the resultant position of the surface - things the renderer cannot know about until it's actually attempting to render the surface.

So all of this means that only the artist himself really knows how far his shader is going to displace the geometry, and he has to inform Mantra that a surface has the potential to be displaced into other buckets by a certain amount. Having the tightest bounds around your object makes Mantra more efficient, computation- and memory-wise.
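
As a toy example (mine, not the shader from the file above): if a displacement shader can push points along the normal by at most amp, then amp is the tightest safe value to declare as the bound:

    displace push_along_normal(float amp = 0.1; float freq = 4)
    {
        // float-valued noise(), roughly in the 0-1 range, so P moves
        // along N by at most amp units - that's the bound to declare
        float d = noise(P * freq);
        P += normalize(N) * amp * d;
        N = computenormal(P);  // rebuild the shading normal after moving P
    }

With a shader like that, setting the Displacement Bound render property to amp (or slightly more) should keep Mantra from clipping the displaced surface at bucket borders without wasting memory.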

You'll find that PRMan has the exact same concept, so any searches that hit RenderMan results are probably quite valid, conceptually.


Thank you very much for the explanation Jason, it was very helpful.

From the RfM docs (I omitted some things that aren't relevant here):

While RenderMan displacements are both detailed and fast, there are a couple of issues that you should be familiar with. The most important concept is displacement bounds. Displacement bounds set up a bounding box around the object, for use when the object is rendered. The bounding box determines when the object is loaded by RenderMan. If a displacement shader pushes an object outside of its bounding box, you will see that part of the displacement is being clipped.

The solution in this case is to increase the bounding box. Adjust the Displacement Bounds attribute. The correct setting will vary depending on the size of your object in world space. Generally, a good value to start with is the farthest distance an object may be displaced. Note that too large of a displacement bound can cause an object to consume more memory than needed, so the tightest displacement bound possible is recommended.

What I didn't understand here is what the displacement bound value actually represents. Is it the amount you want to add to your original bounding box on each side? If so, why isn't it a 3D value? And if not ... what does it represent?

Regards

