About marcosimonvfx
  1. technical: How are geometry lights rendered

    I never thanked you for these! Thanks! I didn't quite find what I was looking for, but I eventually found my answer in the Ray Tracing Gems book: http://www.realtimerendering.com/raytracinggems/ [page 216ff]. TL;DR: instead of shooting a ray at a chosen point on the light source and testing whether that exact point is reachable, only the direction to it is used; a ray is shot in that direction, and if it hits the light source anywhere, the sample counts as lit.
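    A minimal Python sketch of that trick (my own illustration, not code from the book; a sphere light keeps the intersection test short): only the direction toward the sampled point is traced, and any hit on the light counts. For a convex light this means a sample toward the far side still registers as lit, which removes the self-shadowing bias.

```python
import math
import random

def ray_sphere_hit(origin, direction, center, radius):
    """Return True if the ray hits the sphere (standard quadratic test,
    assuming a normalized direction so the 'a' coefficient is 1)."""
    oc = [origin[i] - center[i] for i in range(3)]
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False
    root = math.sqrt(disc)
    # Accept a hit at either intersection distance, as long as it is in front.
    return (-b - root) / 2.0 > 1e-6 or (-b + root) / 2.0 > 1e-6

def sample_light_direction(shaded_point, center, radius, rng):
    """Pick a random point on the light's surface only to derive a direction."""
    while True:
        p = [rng.gauss(0.0, 1.0) for _ in range(3)]
        n = math.sqrt(sum(x * x for x in p))
        if n > 1e-9:
            break
    surface = [center[i] + radius * p[i] / n for i in range(3)]
    d = [surface[i] - shaded_point[i] for i in range(3)]
    dn = math.sqrt(sum(x * x for x in d))
    return [x / dn for x in d]

rng = random.Random(0)
shaded = [0.0, 0.0, 0.0]
light_center, light_radius = [0.0, 5.0, 0.0], 1.0

# Even a direction toward the far side of the light still hits the
# near side first, so the sample is accepted rather than self-shadowed:
direction = [0.0, 1.0, 0.0]  # normalized direction toward the far pole (0, 6, 0)
hit = ray_sphere_hit(shaded, direction, light_center, light_radius)
```

    Because the light is convex, every sampled direction yields a hit, so the 1-in-6 bias from the original point-visibility test disappears.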
  2. Runtime/Execution time of script (and parts) in vex

    Thanks guys, very helpful!
  3. Hi, I am building a (somewhat) complex VEX script inside an Attribute Wrangle node. At the moment the whole thing is very slow, and I am not entirely sure where that time goes. Is there a way to clock CPU time (and print it)? Cheers!
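    As far as I know VEX has no built-in clock call, so one common workaround is to time the node's cook from a Python shell (e.g. wrapping `hou.Node.cook(force=True)` between two timestamps, or using the Performance Monitor pane, which reports per-node cook times). The timing pattern itself, sketched outside Houdini with a stand-in for the expensive cook:

```python
import time

def time_call(fn, *args):
    """Time a single call; the same pattern works around hou.Node.cook()."""
    t0 = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - t0
    return result, elapsed

# Hypothetical stand-in for an expensive wrangle cook.
def heavy(n):
    return sum(i * i for i in range(n))

value, seconds = time_call(heavy, 100000)
```

    Inside Houdini the call would be `time_call(hou.node("/obj/geo1/attribwrangle1").cook, True)` or similar (the node path is an assumption); to find where time goes *within* the wrangle, bisecting the code or the Performance Monitor is usually the practical route.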
  4. Hi all, this is a technical question to help me understand what's going on under the hood: how are geometry lights rendered? I am playing with writing my own raytracer within Houdini, where I also want to use geometry lights. My approach is that each shaded point does a shadow test against a randomly generated point on the surface of the light geometry. Because this happens completely at random, it is fairly likely that the generated point lies on the far side of the geometry, so the light geometry casts a shadow on itself (self-shadowing on). This produces the first attached picture.

    This makes sense to me: consider a shaded point on the wall behind the turquoise cube. It has only a 1-in-6 chance of generating a point on that cube that it can actually see, whereas a point above and to the right of it sees three sides of the same light geometry and therefore has a higher chance of generating a point it is illuminated by. This is not what I would expect to see in real life (though maybe my intuition is wrong). However, when I rebuild and render the scene in Houdini/Mantra (light type "geometry" with a box as the geo and self-shadowing on), the result is very different (see attachment 2). Maybe someone can shed some light on how Mantra does its magic. Cheers!
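    For comparison, the naive sampling described above can be sketched like this (a hypothetical stand-in for the light geometry, using a unit cube): a uniform random point on the surface lands on any one face with probability 1/6, which is exactly the visibility bias discussed.

```python
import random

def sample_cube_surface(rng):
    """Uniform random point on the surface of the unit cube [0,1]^3.
    All six faces have equal area, so pick a face uniformly first,
    then a uniform point on that face."""
    axis = rng.randrange(3)          # which coordinate is pinned to a face
    side = float(rng.randrange(2))   # 0.0 or 1.0: which of the two faces
    p = [rng.random(), rng.random(), rng.random()]
    p[axis] = side
    return p

rng = random.Random(1)
samples = [sample_cube_surface(rng) for _ in range(1000)]
# A shaded point that can only see the x == 0 face gets a visible
# sample roughly 1/6 of the time; the other 5/6 are self-shadowed:
on_far_face = sum(1 for p in samples if p[0] == 0.0)
```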
  5. .rat files, linear and color spaces

    Hey, can you help us with a bit more information? What exactly is your workflow? Where do you import the .rats (Houdini or MPlay), and where/how were those .rats generated?
  6. Hi all, I stumbled across this video about using filmic LUTs in Blender: https://www.youtube.com/watch?v=m9AT7H4GGrA This led me down a rabbit hole about how color works and is managed in Houdini.

    First, what color space does Houdini work in natively? I would expect linear sRGB, i.e. the sRGB primaries but with a linear transfer function. Textures that come in as ACES, for example, would then need to be transformed internally to sRGB to be workable. Since Houdini has been around much longer than ACES, I don't suppose it uses ACES internally. As an example: if I create a shader with an emission of R:1, G:0, B:0, the renderer displays an sRGB value of 1,0,0, and if I save that as a PNG (which converts it to sRGB when the colorspace transform is active), the same values are there - which is why I think it's using the same primaries.

    Secondly (this relates to the video above): at work we use OCIO ACES color management with a display LUT of "sRGB", but it doesn't specify what color space goes in. If I'm right that Houdini works in linear sRGB, then the color management would have to translate linear sRGB into ACES, and ACES back into (non-linear) sRGB for display, correct?

    This leads me to another point from the video, where he uses ACES-based LUTs to preview his renders within Blender. He has several LUTs to choose from that give him different looks (high contrast, low contrast, ...). As a lighter I can see how this has a huge influence on how I light a scene. In the video he shows that without a LUT the preview in Blender looks too contrasty, because values above 1 get clipped immediately. Since we don't want to produce clipped-looking images (even if the data isn't clipped, the artist sees a clipped preview), artists will use dimmer lights and overuse fill lights to compensate for the missing bounce light. With a LUT, however, you can squash much higher contrast levels into the 0-1 display range, so the sunlight could have crazy values, there'd be lots of bounce light, and the highlights still wouldn't clip - I guess more similar to what we, and also a camera, see.

    How can I be sure that the standard "sRGB" LUT we use at work is right for the scene/show I'm doing? Is there one that fits all, or should we - as I know they do on sets - create different preview LUTs for different shows? Does anybody have experience with using different LUTs? I could imagine even taking one from a real camera, e.g. whatever a Canon DSLR uses to convert its raws into 8-bit (display-referred) images. Or I could create one myself, but then how do I know what is physical and what isn't? Cheers, M
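    To make the clipping point concrete, here is the sRGB transfer function itself (per the sRGB standard, IEC 61966-2-1): every linear value above 1.0 encodes to the same display value, which is exactly the hard clip a filmic LUT avoids by compressing the range instead.

```python
def srgb_encode(x):
    """Linear light -> sRGB display encoding (input clipped to [0, 1])."""
    x = min(max(x, 0.0), 1.0)  # anything above 1.0 clips, as described above
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1.0 / 2.4) - 0.055

def srgb_decode(y):
    """sRGB display value -> linear light (inverse of srgb_encode)."""
    if y <= 0.04045:
        return y / 12.92
    return ((y + 0.055) / 1.055) ** 2.4

# Linear middle grey (0.18) encodes to roughly 0.46 for display,
# while linear 5.0 and linear 1.0 both clip to the same white:
mid = srgb_encode(0.18)
```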
  7. Awesome, that works! Thanks for your help!
  8. Unexpected result with Portal Lights

    Very odd. I downloaded your scene and did nothing but remove the texture from the light (because I don't have it), then rendered it with and without the portal geo enabled. With the portal geo it's much faster and less noisy than without; I didn't change a thing. What version are you using? (I'm on 16.5.323.)
  9. Hi, at work I ran into a problem trying to get mirror-like, completely sharp reflections. My setup is super simple: a sphere with a camera at its center looking at a grid. The sphere has a high-resolution texture, and the grid has a mirror-like material: everything but reflectivity is set to 0, and roughness is 0. I've tried both the Principled Shader and the Classic Shader (I'm using the 16.5 education license here at home). I attached a picture of what I get with both shaders. However, the Classic Shader lets me change the shading model, and if I switch it to Phong, for example, I get the super crisp and clear reflections I expect (attachment 2). Is there a way to get reflections this crisp with the Principled Shader? I attached the scene if anybody wants to play. mirror_reflection.rar
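    For reference, a roughness-0 mirror should reduce to the plain reflection formula R = I - 2(I·N)N with no random spread at all; a quick sketch:

```python
def reflect(incident, normal):
    """Mirror reflection: R = I - 2 (I . N) N, with N assumed unit length."""
    d = sum(i * n for i, n in zip(incident, normal))
    return [i - 2.0 * d * n for i, n in zip(incident, normal)]

# A ray going diagonally down onto a floor with normal (0, 1, 0)
# bounces back up with its vertical component flipped:
r = reflect([1.0, -1.0, 0.0], [0.0, 1.0, 0.0])
```

    If a shader still blurs at roughness 0, the spread is coming from the shading model's lobe (or a roughness remapping), not from this formula.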
  10. Baking color channel

    @lamer3d It still doesn't seem to work for me. I mean, it's OK for now; I can get my exports with custom layers. It would just be interesting to know why these layers aren't there. Even when I don't connect anything to the shader's base color input and instead load a texture in the "Textures" tab, it comes out black (I also tried adding the Baking tab to my Mantra node).
  11. Baking color channel

    Thanks davpe, your help got me on the right track. I managed to output a texture in the Cf channel. Keeping at it, I finally solved the output problem by throwing everything into a Material Builder and putting my bind exports there (outside, they didn't work). Now my custom channels also get exported, whether "disable lighting and emission" is enabled or not. Still, unless I additionally put a bind export in there named "export_basecolor", I don't get a basecolor channel from my Principled Shader, even though when I dive into the Principled Shader I can see a bind export with that exact name inside - rather strange. At least this way I can work around it.
  12. Baking color channel

    Hi, this seemingly simple thing has bugged me all day and I couldn't get it to work: I want to bake the color channel of a Principled Shader to a file. 1. When I create a Principled Shader and connect my texture to the "basecolor" input, I can't see it in the "basecolor" image plane. After trial and error I found that it ends up in the Cf image plane - why? 2. When I create a Bake Texture node, plug in my object, and ask it to bake the Cf channel, the baked texture is always completely black. When I render the image with a Mantra node I can see the texture in the Cf channel, but the baked result is always black. Thanks a bunch for any help!
  13. Hi folks, let's get the disclaimer out of the way first: this question is rather technical. I hit "the problem" in a rendering context, so I thought people here might have the answer, but it could apply elsewhere too. Here we go: I'm trying - for my own learning - to build a raytracer in VOPs. It has been done before, but I want to build it from scratch to get the most out of it.

    Currently I'm stuck on the "diffuse/blurry reflection" of a ray, i.e. adding randomness to the direction of a vector. I couldn't find a good manual on how to do it, so I figured it might work by expressing the vector in spherical coordinates, adding a random offset to the angles, and converting it back for further use. It seems cumbersome, but so far it's my only solution.

    Here's what I've got: picture 1 shows the vector I'd like to randomly perturb - let's say by about 10 degrees (+/-5 north-south, +/-5 east-west). Picture 2 shows the result, which looks rather good. The problem appears when the initial direction of the vector is close to the up vector (the y-axis in this case) - picture 3; picture 4 shows it from the top. Following the approach detailed on Scratchapixel (as far as I could), I first offset theta, the angle to the up vector, and then phi, the angle between x and z. The problem is that if the initial vector is very close to the up vector and the random offset for theta happens to be close to 0, the subsequent offset of phi just spins the vector around itself, not "down" towards the x/z plane. Any help is much appreciated - not just on how to solve this, but also on a better approach altogether. Thanks a bunch!
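    One robust alternative to offsetting global theta/phi is to build a local orthonormal basis around the vector itself and sample the cone offset in that frame; the pole problem disappears because the global up axis is never used. A Python sketch (assuming a unit-length input):

```python
import math
import random

def perturb(direction, max_angle_rad, rng):
    """Rotate a unit vector by a random amount up to max_angle_rad,
    uniformly over the spherical cap, using a local frame built
    around the vector itself so the global up axis never matters."""
    x, y, z = direction
    # Pick the helper axis least aligned with the input, for stability.
    helper = (1.0, 0.0, 0.0) if abs(x) < 0.9 else (0.0, 1.0, 0.0)
    # Tangent t = helper x direction, normalized.
    tx, ty, tz = (helper[1] * z - helper[2] * y,
                  helper[2] * x - helper[0] * z,
                  helper[0] * y - helper[1] * x)
    tn = math.sqrt(tx * tx + ty * ty + tz * tz)
    tx, ty, tz = tx / tn, ty / tn, tz / tn
    # Bitangent b = direction x t; (t, b, direction) is orthonormal.
    bx, by, bz = (y * tz - z * ty, z * tx - x * tz, x * ty - y * tx)
    # Uniform direction inside a cone of half-angle max_angle_rad
    # around the local +Z (which maps to `direction` in world space).
    cos_t = 1.0 - rng.random() * (1.0 - math.cos(max_angle_rad))
    sin_t = math.sqrt(max(0.0, 1.0 - cos_t * cos_t))
    phi = 2.0 * math.pi * rng.random()
    lx, ly, lz = sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t
    # Transform from the local frame back to world space.
    return [lx * tx + ly * bx + lz * x,
            lx * ty + ly * by + lz * y,
            lx * tz + ly * bz + lz * z]

rng = random.Random(7)
v = perturb([0.0, 1.0, 0.0], math.radians(5.0), rng)  # no pole trouble
```

    Drawing cos(theta) uniformly in [cos(max_angle), 1] is what makes the samples uniform over the spherical cap rather than bunched at its center.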
  14. Baking Indirect Illumination

    Woohoo - Sir, you deserve a cookie (if your browser accepts them :P)! Solved! Many thanks!
  15. Baking Indirect Illumination

    Hi, the title says it all. It seems like it should be a very simple (or at least widely used) thing, and yet after two evenings of trying, failing, and googling I'm no closer to a solution. I want to bake the indirect illumination of a scene into a texture, which I then want to add to the diffuse channel to save some rendering time and cut the diffuse bounces. The Houdini help mentions baking indirect illumination, but the Bake Texture node has no options for the number of bounces like a normal Mantra node does. I never see the effect in the generated map - there's only the direct lighting - and when I extract just the indirect channel it's always pitch black. I tried using a GI Light, but that doesn't seem to work either: it renders fine with a normal Mantra node (where I see its effect using a pre-rendered photon map that I load), but it has no effect on the baked texture. Any help pointing me in the right direction would be greatly appreciated. Cheers, Marco