About marcosimonvfx

  • Rank

Personal Information

  • Name

Recent Profile Visitors

510 profile views
  1. Hi all, this is a technical question to help me understand what's going on under the hood with how geometry lights are rendered: I am playing with writing my own raytracer within Houdini, where I also want to use geometry lights. The way I go about it is that each shaded point does a shadow test towards a randomly generated point on the surface of the light geometry. Because this happens completely at random, it is quite likely that the generated point will lie on the far side of the geometry, so the light geometry itself will cast a shadow (self-shadow on). This results in the first attached picture. This makes sense to me: consider a shaded point on the wall behind the turquoise cube - it has only a 1/6 chance of generating a point on that cube that it can actually see, whereas a point to the top and right of it sees three sides of the same light geometry and therefore has a higher chance of generating a point it is illuminated by. This is not what I would expect to see in real life (though maybe my conception is wrong). However, when I rebuild and render the scene in Houdini/Mantra (light type "geometry" with a box as geo and self-shadow on), the result is very different (see attachment 2). Maybe someone can shed some light on how Mantra does its magic. Cheers!
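For what it's worth, one common renderer trick (a sketch of the general idea, with my own hypothetical function names - not necessarily what Mantra actually does) is to sample only the light faces that actually face the shading point and fold that restriction into the sample's pdf, instead of letting back-facing samples fail the shadow test and darken the estimate:

```python
import random

# Hypothetical sketch for a unit-cube geometry light centred at light_center:
# restrict sampling to the faces visible from the shading point, and account
# for the smaller sample domain in the pdf.

def facing_faces(shade_pos, light_center):
    """Faces (axis, sign) of a unit cube whose outer side contains shade_pos."""
    faces = []
    for axis in range(3):
        for sign in (-1, 1):
            # the face plane sits at centre + 0.5 * normal along this axis
            d = shade_pos[axis] - (light_center[axis] + 0.5 * sign)
            if d * sign > 0:  # shading point lies on the outer side of the plane
                faces.append((axis, sign))
    return faces

def sample_light(shade_pos, light_center):
    """Uniform point on a visible face, plus the pdf of that sample."""
    faces = facing_faces(shade_pos, light_center)
    axis, sign = random.choice(faces)
    p = [light_center[i] + random.uniform(-0.5, 0.5) for i in range(3)]
    p[axis] = light_center[axis] + 0.5 * sign
    # unit faces have area 1, so pdf = 1 / (number of visible faces)
    pdf = 1.0 / len(faces)
    return p, pdf
```

With this, the wall point behind the cube sees one visible face with pdf 1, instead of wasting five out of six samples, so it is no longer artificially darkened relative to points that see three faces.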
  2. .rat files, linear and color spaces

    Hey, could you give us a bit more information? What exactly is your workflow, where do you import the .rats (Houdini or MPlay), and where/how were those .rats generated?
  3. Hi all, I stumbled across this video about using filmic LUTs in Blender: https://www.youtube.com/watch?v=m9AT7H4GGrA This led me down a rabbit hole about how color works and is managed in Houdini. First, what colorspace does Houdini work with natively? I would expect it to be linear sRGB, i.e. using the same primaries as sRGB but, of course, working in a linear space. Textures that come in as ACES, for example, would then need to be transformed internally to sRGB to be workable? As Houdini has been around much longer than ACES, I don't suppose it's working with ACES internally. As an example: if I create a shader with an emission of R:1, G:0, B:0, the renderer will display a value of 1,0,0 in sRGB, and if I save that as a PNG, which converts it to sRGB when the colorspace transform is active, the same values are there - which is why I think it's using the same primaries. Secondly (this is related to the above video): at work we use OCIO ACES color management with a display LUT of "sRGB" - however, it doesn't specify what color space goes in. If my statement above is correct, that Houdini works in linear sRGB, then the ACES color management software would have to translate linear sRGB into ACES, and ACES back into (non-linear) sRGB for display, right? This leads me to another point from the video, where he uses ACES-based LUTs to preview his renders within Blender. He has several LUTs to choose from that give him different looks (high contrast, low, ...). As a lighter I can see how this has a huge influence on how I light the scene. In the video he shows that without a LUT the preview image in Blender looks too contrasty, because values above 1 immediately get clipped. As we don't want to produce clipped images (even if the data isn't clipped, the artist sees the clipped preview), artists will use less bright lights and overuse fill lights to compensate for the missing bounce light.
But with a LUT you can squash much higher contrast levels into the display range of 0-1 - so the sunlight could have crazy values, there'd be a lot of bounce light, and still the highlights wouldn't clip. I guess that's more similar to what we, and also a camera, see. How can I be sure that the standard "sRGB" LUT we use at work is OK for the scene/show I'm working on? Is there a one-size-fits-all LUT, or should we - as I know they do on sets - create different preview LUTs for different shows? Does anybody have experience with using different LUTs? I could imagine even taking one from a real camera - what a Canon DSLR, for example, uses to convert its raws into 8-bit (display-referred) images. Or I could create one myself, but then how do I know what is physical and what isn't? Cheers, M
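As a side note on the linear vs. display question above: "linear sRGB" and display sRGB share the same primaries and differ only in a per-channel transfer curve, which is why a pure (1,0,0) emission stays (1,0,0) after encoding. A minimal sketch of the official sRGB piecewise curve (IEC 61966-2-1):

```python
def linear_to_srgb(x):
    # sRGB encoding (IEC 61966-2-1): linear-light -> display-encoded,
    # linear segment near black, power-law segment above the cutoff
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1.0 / 2.4) - 0.055

def srgb_to_linear(y):
    # inverse transform: display-encoded -> linear-light
    if y <= 0.04045:
        return y / 12.92
    return ((y + 0.055) / 1.055) ** 2.4
```

Note that 0 maps to 0 and 1 maps to 1, so saturated primaries look unchanged; only the mid-tones shift, which is exactly the behavior described above with the PNG export.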
  4. Awesome, that works! Thanks for your help!
  5. Unexpected result with Portal Lights

    Very odd - I downloaded your scene and did nothing but remove the texture from the light (because I don't have it) and tried to render it, with and without the portal geo on. With portal geo it's much faster and less noisy than without - I didn't change a thing. What version are you using (I'm on 16.5.323)?
  6. Hi, at work I discovered a problem trying to get mirror-like, completely sharp reflections. My setup is super simple: a sphere, and in its center a camera that looks towards a grid. The sphere has a high-resolution texture, and the grid has a mirror-like material: everything but reflectivity is set to 0, and roughness is at 0. I've tried both the Principled Shader and the Classic Shader (I'm using the education license of 16.5 here at home). I attached a picture of what I get with both the Principled and the Classic Shader. However! The Classic Shader lets me change the shading model. If I switch this to Phong, for example, I get super crisp and clear reflections as expected (attachment 2). Is there a way to get equally crisp reflections with the Principled Shader? I attached the scene if anybody wants to play. mirror_reflection.rar
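For context on what "roughness 0" should mean: a perfect mirror simply reflects the incoming direction about the surface normal, with no random scattering at all. A tiny sketch of that formula (illustrative only, not Mantra's shader code):

```python
def reflect(d, n):
    # perfect mirror reflection of incoming direction d about unit normal n:
    # r = d - 2 (d . n) n
    dn = sum(di * ni for di, ni in zip(d, n))
    return [di - 2.0 * dn * ni for di, ni in zip(d, n)]
```

If a shader still blurs at roughness 0, it is usually because its microfacet model clamps roughness to a small minimum internally rather than falling back to this delta reflection.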
  7. Baking color channel

    @lamer3d still doesn't seem to work for me. I mean, it's OK for now, I can get my exports with custom layers. It would just be interesting to know why these layers are not there. Even when I don't connect anything to the shader's base color input and instead load a texture in the "Textures" tab, it comes out black (I also tried to add the Baking tab to my mantra node).
  8. Baking color channel

    Thanks davpe, your help got me on the right track. I managed to output a texture in the Cf channel. Sticking with it, I finally solved the output problem by throwing everything into a materialbuilder and putting my bind exports there (outside, they didn't work). Now my custom channels get exported too, whether "Disable Lighting and Emission" is enabled or disabled. Still, unless I additionally put a bind export in there and name it "export_basecolor", I don't get a basecolor channel from my Principled Shader, even though when I dive into the Principled Shader I see a bind export with the same name in there - rather strange. At least this way I can work around it.
  9. Baking color channel

    Hi, this seemingly simple thing has bugged me all day and I couldn't get it to work: I want to bake the color channel of a Principled Shader to a file. 1. When I create a Principled Shader and connect my texture to the "basecolor" input, I can't see it in the "basecolor" image plane. After trial and error I found that it's in the Cf image plane - why? 2. When I create a baketexture node, plug in my object, and say I want to bake the Cf channel, the baked texture is always completely black. When I render the image with a mantra node I can see the texture in the Cf channel, but the baked texture is always black. Thanks a bunch for any help!
  10. Hi folks, let's get the disclaimer out of the way first: this question is rather technical. I hit "the problem" in a rendering context, which is why I thought people here might have the answer - but it could also come up elsewhere. Here we go: I'm trying - for my own learning - to build a raytracer in VOPs. It has been done before, but I want to build it from scratch to get the most out of it. Currently I'm stuck on the "diffuse/blurry reflection" of a ray, i.e. adding randomness to the direction of a vector. I couldn't find a good manual on how to do this, so I figured it might work by expressing the vector in spherical coordinates and then adding a random offset to the angles (and converting back for further use). Seems cumbersome, but so far it's my only solution. Here's what I've got: picture 1 shows the vector I'd like to randomly perturb - let's say by about 10 degrees (+/-5 north-south, +/-5 east-west). Picture 2 shows the result. That's looking rather good. The problem comes when the initial direction of the vector is close to the up vector (the y-axis in this case) - picture 3. Picture 4 shows it from the top. Following the approach detailed on Scratchapixel (as far as I could follow it), I first offset theta, the angle to the up vector, and afterwards phi, the angle between x and z. The problem is that if the initial vector is very close to the up vector and the random offset for theta happens to be close to 0, the subsequent offset of phi just spins the vector around itself - not "down" towards the x/z plane. Any help is much appreciated - not just on how to solve this, but also on a better approach altogether. Thanks a bunch!
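A common way around the pole problem described above is to skip world-space spherical coordinates entirely: build a local orthonormal frame around the ray direction itself, sample a direction inside a cone in that local frame, and transform it back. Since the frame follows the vector, there is no special case near the y-axis. A sketch in Python (the helper names are mine; in VOPs the same thing maps to a couple of cross products and a matrix transform):

```python
import math
import random

def orthonormal_basis(w):
    # build tangent/bitangent around unit vector w; pick the helper axis
    # least aligned with w, so there is never a degenerate cross product
    a = (0.0, 1.0, 0.0) if abs(w[0]) > 0.9 else (1.0, 0.0, 0.0)
    # u = normalize(cross(a, w))
    u = (a[1]*w[2] - a[2]*w[1], a[2]*w[0] - a[0]*w[2], a[0]*w[1] - a[1]*w[0])
    ul = math.sqrt(sum(c * c for c in u))
    u = tuple(c / ul for c in u)
    # v = cross(w, u) is already unit length
    v = (w[1]*u[2] - w[2]*u[1], w[2]*u[0] - w[0]*u[2], w[0]*u[1] - w[1]*u[0])
    return u, v

def sample_cone(w, max_angle):
    # uniform (over solid angle) direction within a cone of half-angle
    # max_angle around unit vector w
    cos_t = 1.0 - random.random() * (1.0 - math.cos(max_angle))
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    phi = 2.0 * math.pi * random.random()
    u, v = orthonormal_basis(w)
    return tuple(cos_t * w[i]
                 + sin_t * math.cos(phi) * u[i]
                 + sin_t * math.sin(phi) * v[i] for i in range(3))
```

Every sampled direction stays within max_angle of w by construction, whether w points along y or anywhere else, which is exactly the behavior the theta/phi offset approach loses at the pole.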
  11. Baking Indirect Illumination

    Woohoo - Sir, you deserve a cookie (if your browser accepts them :P)! Solved! Many thanks!
  12. Baking Indirect Illumination

    Hi, the title says it all - it seems like it should be a very simple (or at least widely used) thing, and yet after two evenings of trying, failing, and googling I'm nowhere closer to a solution. I want to bake the indirect illumination of a scene into a texture, which I later want to add to the diffuse channel to save some rendering time and cut the diffuse bounces. The Houdini help mentions baking indirect illumination, but the Bake Texture node has no options for the number of bounces like a normal Mantra node does. I never see the effect in the generated map - there's only the direct lighting - and when I extract just the indirect channel, it's always pitch black. I tried using a gilight, but that also doesn't seem to work. It works fine when rendering with a normal Mantra node (there I can see its effect when using a prerendered photon map that I load), but it has no effect on the baked texture. Any help pointing me in the right direction would be greatly appreciated. Cheers, Marco
  13. Expanding Footprints

    Hi. In the project I'm doing, a character is running across a plane. Every time his feet touch the ground, they want a "ring" in the shape of his foot outline expanding out from the point of contact (it can be rendered as a texture override, no need for geometry). The ring should eventually fade but keep the shape of the foot for its whole lifetime (except for the size, which keeps expanding). I've tried various approaches; the most promising was to transfer attributes (color) from the foot to the ground plane with a solver, and then (with the same solver) use those points to transfer attributes back onto the ground plane geometry itself so that the shape expands every frame. The problem is that the result is always a solid (filled) shape, not a ring. Also, the shape of the foot soon degrades into a rectangle with rounded corners. Is there a better way to do this?
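One way to get a ring instead of a filled shape, sketched below with hypothetical numbers: stamp the distance from each ground point to the foot outline once at contact time (in Houdini this could come from an SDF of the footprint), then shade only a thin band around a radius that grows with age. Because the band follows iso-lines of the original distance field, the ring keeps the foot's shape instead of rounding off:

```python
# Hypothetical sketch: d is the precomputed distance from a ground point to
# the foot outline at contact time; the ring is a thin band centred on an
# expanding radius, fading out with age.

def ring_intensity(d, t, t_contact, speed=0.2, width=0.05, fade=2.0):
    """Intensity in [0, 1] of the expanding ring at one ground point."""
    age = t - t_contact
    if age < 0:
        return 0.0          # contact hasn't happened yet
    radius = speed * age    # ring front moves outward over time
    in_band = abs(d - radius) < width
    return max(0.0, 1.0 - age / fade) if in_band else 0.0
```

The key difference from the color-transfer approach: the distance field is computed once and only the band test is animated, so nothing is repeatedly diffused and the outline never degrades.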
  14. Flip Fluid collapsing

    Thanks! I tried that. For mid-res simulations this is a good value, and I can even scale up a little with the resolution. But when I tried to sim with the final particle count, it collapsed again. Is that the only parameter I can adjust? And why is the old flipsolver so fast compared to the newer one, even when I disable the evaluation of viscosity and friction? 0.o
  15. Flip Fluid collapsing

    Hi there, I'm having problems with the FLIP fluids in Houdini. I'm trying to simulate a kind of "empty-watercooler-over-head" effect and I can't get good results. Here is what I have so far: https://drive.google.com/open?id=0B2Ra4vnwxtl_VDQ4aUQ2TUZON2s&authuser=0 - this is the result with the default values for separation, grid scale, etc. (few particles for the sake of sim times). As you see, the particles collapse over time. https://drive.google.com/open?id=0B2Ra4vnwxtl_Tml3dWpaaFctSEU&authuser=0 - this is the FLIP fluid with adjusted parameters: Particle Radius Scale is up to 2, Grid Scale is down to 0.5 (3 substeps of the DOP network). This would be fine, but the problem is that it takes FOREVER to simulate (I guess because of the grid size) and doesn't scale well. Sim times shoot up fast, and at some point the particles start collapsing again, so I would need to adjust the parameters - which is an ordeal with sim times that high (above 10 hours for fewer than 400k particles). To compare: https://drive.google.com/open?id=0B2Ra4vnwxtl_bEFMVzg2YlhMajA&authuser=0 - this is what I get with the old flipsolver (not the 2.0). This doesn't look like much, but it scales really well: https://drive.google.com/open?id=0B2Ra4vnwxtl_VjZla29oVVVHZ0k&authuser=0 - this took just a couple of minutes to simulate and already looks very good. Overnight I did another sim with even more particles and it looks exactly like I want: cool, nice splashes and good collision behavior. Unfortunately, the old flipsolver lacks solving for viscosity, friction, etc., which is why I would prefer the 2.0. Is there just some parameter that I'm missing? I've played around with this forever and get the same errors (fluid collapsing) in quickly set-up test scenes with a quickly modeled bowl as the container. https://drive.google.com/open?id=0B2Ra4vnwxtl_RDluQ08zM1pwd0k&authuser=0 - here's the scene, though the cooler is normally an Alembic, so I just substituted it with a tube in this file. The problem is the same.
Cheers!