Showing results for tags 'shading'.

  1. As I understand it, stylesheets give you the opportunity to have multiple conditions per override. I have basically organised my stylesheet with one material per override, and then I add the material assignments per geometry with conditions. However, geometry drops out and renders default grey if I have multiple conditions on the same override. Does that sound familiar? Am I doing stylesheets wrong?
  2. Hello friends, I was using Arnold for a long time while also playing with Mantra, but until now I never really had time to dive deep into Mantra. The thing I really adore about Arnold is the beautiful "samples calculator" in its settings. It seems like just a small detail, but in my personal experience it has been a great help for optimizing heavy renders. So a few weeks ago I decided to try to create a similar calculator for Mantra.

    At first I implemented it following the Arnold example, which works like this (I'm not 100% sure about the equations, but in my tests they work):

    Camera (AA) Samples = pow(Camera (AA) samples parameter, 2)
    Diffuse Samples (Min) = pow(Camera (AA) samples parameter, 2) * pow(Diffuse samples parameter, 2)
    Diffuse Samples (Max) = pow(Camera (AA) samples parameter, 2) * pow(Diffuse samples parameter, 2) + (Diffuse depth parameter - 1) * pow(Camera (AA) samples parameter, 2)
    Specular Samples (Min) = pow(Camera (AA) samples parameter, 2) * pow(Specular samples parameter, 2)
    Specular Samples (Max) = pow(Camera (AA) samples parameter, 2) * pow(Specular samples parameter, 2) + (Specular depth parameter - 1) * pow(Camera (AA) samples parameter, 2)
    Transmission Samples (Min) = pow(Camera (AA) samples parameter, 2) * pow(Transmission samples parameter, 2)
    Transmission Samples (Max) = pow(Camera (AA) samples parameter, 2) * pow(Transmission samples parameter, 2) + (Transmission depth parameter - 1) * pow(Camera (AA) samples parameter, 2)
    Total (No lights) Samples (Min) = sum of all min samples above
    Total (No lights) Samples (Max) = sum of all max samples above

    But soon I realized that Mantra does not work this way. (Well yes, it was silly to think it works the same way.) So after reading a lot about how sampling works in Mantra and talking to my friends, I came up with this calculator: ray_count_calculator.hdanc

    It counts rays like this:

    Camera Samples (Min) = clamp(Pixel samples X parameter, 1, ∞) * clamp(Pixel samples Y parameter, 1, ∞)
    Camera Samples (Max) = clamp(Pixel samples X parameter, 1, ∞) * clamp(Pixel samples Y parameter, 1, ∞)
    Diffuse Samples (Min) = clamp(Pixel samples X parameter, 1, ∞) * clamp(Pixel samples Y parameter, 1, ∞) * Diffuse samples parameter * Global multiplier parameter * Min ray samples parameter
    Diffuse Samples (Max) = clamp(Pixel samples X parameter, 1, ∞) * clamp(Pixel samples Y parameter, 1, ∞) * Diffuse samples parameter * Global multiplier parameter * Max ray samples parameter
    Reflection Samples (Min) = clamp(Pixel samples X parameter, 1, ∞) * clamp(Pixel samples Y parameter, 1, ∞) * Reflection samples parameter * Global multiplier parameter * Min ray samples parameter
    Reflection Samples (Max) = clamp(Pixel samples X parameter, 1, ∞) * clamp(Pixel samples Y parameter, 1, ∞) * Reflection samples parameter * Global multiplier parameter * Max ray samples parameter
    Refraction Samples (Min) = clamp(Pixel samples X parameter, 1, ∞) * clamp(Pixel samples Y parameter, 1, ∞) * Refraction samples parameter * Global multiplier parameter * Min ray samples parameter
    Refraction Samples (Max) = clamp(Pixel samples X parameter, 1, ∞) * clamp(Pixel samples Y parameter, 1, ∞) * Refraction samples parameter * Global multiplier parameter * Max ray samples parameter
    Total (No lights) Samples (Min) = sum of all min samples above
    Total (No lights) Samples (Max) = sum of all max samples above

    This works from the premise that these parameters in Arnold and Mantra influence the same things:

    Arnold Camera Samples = Mantra Pixel Samples
    Arnold Diffuse Samples = Mantra Diffuse Quality
    Arnold Specular Samples = Mantra Reflection Quality
    Arnold Transmission Samples = Mantra Refraction Quality
    Arnold SSS Samples = Mantra SSS Quality

    But there is a catch:

    In Arnold, if you set Diffuse Samples to 0 you get a black diffuse indirect pass.
    In Arnold, if you set Specular Samples to 0 you get a black specular indirect pass.
    In Mantra, if you set Diffuse Quality to 0 you still get samples in the diffuse indirect pass.
    In Mantra, if you set Reflection Quality to 0 you still get samples in the reflection indirect pass.

    So I think we can be sure that Mantra's pixel samples also fire diffuse/reflection/refraction/SSS samples - so when the diffuse/reflection/refraction/SSS parameters are set to 0, their corresponding ray counts can't be 0 (but I really don't know, and can't find out, how many of them are fired). Also pay attention to the clamping of the pixel samples - in my tests the pixel samples parameters were always clamped as clamp(Pixel samples parameter, 1, ∞): when using values lower than 1, the result was always the same as when using 1.

    This catch made my calculator useless. It seems that Mantra fires all kinds of rays even when using pixel samples only, while Arnold does not. I personally did not expect this behavior and, as far as I know, it is not even documented (or at least I could not find it). I spent a few days trying to figure out how these parameters could relate to each other, but I did not find any good solution. So in my frustration I decided it would probably be better to ask you guys whether you have tried to create a calculator like this before, or have found out how all the Mantra parameters relate to each other. I think it would be a great help for all Mantra users to find out how Mantra works "under the hood". Thank you very much for any advice, and have a nice day.
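
    To make the arithmetic above easy to poke at, here is the same thing written out as a small VEX snippet (it will run in any wrangle). The variable names and example values are only placeholders standing in for the parameters, not actual Mantra property names, and the relationships are just the ones I observed, so please treat it as a sketch rather than a statement of how Mantra really works internally:

        // Sketch of the Mantra-side ray-count formulas described above.
        // All names and values are placeholders, not real render properties.
        float pixel_samples_x = 3;   // Pixel Samples X
        float pixel_samples_y = 3;   // Pixel Samples Y
        float diffuse_quality = 2;   // Diffuse Quality
        float reflect_quality = 1;   // Reflection Quality
        float refract_quality = 1;   // Refraction Quality
        float global_quality  = 1;   // Global Quality multiplier
        float min_ray_samples = 1;   // Min Ray Samples
        float max_ray_samples = 9;   // Max Ray Samples

        // Pixel samples appear to be clamped to a minimum of 1.
        float camera = max(pixel_samples_x, 1.0) * max(pixel_samples_y, 1.0);

        float diff_min = camera * diffuse_quality * global_quality * min_ray_samples;
        float diff_max = camera * diffuse_quality * global_quality * max_ray_samples;
        float refl_min = camera * reflect_quality * global_quality * min_ray_samples;
        float refl_max = camera * reflect_quality * global_quality * max_ray_samples;
        float refr_min = camera * refract_quality * global_quality * min_ray_samples;
        float refr_max = camera * refract_quality * global_quality * max_ray_samples;

        float total_min = camera + diff_min + refl_min + refr_min;
        float total_max = camera + diff_max + refl_max + refr_max;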
  3. Hope this is in the right section - it's a bit of both effects and lighting/shading! I am working on a shot that involves an ocean surface in overcast lighting and I'm really struggling to get it right. I am aiming for something like the attached image, but my attempts have come out looking way too glossy/specular by comparison. When I try to push the overcast lighting, I also run into the problem of the scene really flattening out. How do I keep the detail and contrast in the waves without any bright highlights? I am currently working with Mantra but hope to switch to HtoA. Thank you!
  4. Do any of you have experience with replicating, one to one, the look you have in Substance Painter in Houdini 16 using the Principled Shader? The PBR metal/rough workflow with the metallic and roughness maps seems straightforward, but plugging in my maps from Substance Painter as-is produces a render that is much darker than what I have in Substance Painter. Perhaps the colour space is off?
  5. Mantra render trouble

    Look at my render here... Notice these funny effects where some render buckets appear not to have finished rendering. I have pasted my Mantra settings to the right, top and bottom. What is causing these render anomalies?
  6. Again I am comparing to Maya's Hypershade, where you have all these nodes for adjusting a texture's HSV (hue, saturation and value), levels, gamma, curves, brightness and contrast, etc. Does Houdini have any equivalents in the /mat context? I see all the nodes in /img, but does that mean I have to leave /mat and adjust my textures in /img, or is there a way to keep things inside the /mat context? Or is the correct/intended workflow that you work on your textures over in /img and then bring them into your shader via op:/img/img1/null1?
  7. I am trying my hand at random values. My expression here, fit(rand($PT), 0, 1, 0.7, 1), is meant to feed a random number between 0.7 and 1.0 into saturation. But as the render shows, it seems to just put 0 into saturation. What am I doing wrong here?
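
    For reference, the same per-point randomisation can also be baked into a point attribute in SOPs and read in the shader with a Bind/Parameter VOP instead of evaluating rand($PT) on the material. A minimal sketch, where "satmult" is just a name I made up:

        // Point Wrangle (SOPs): per-point random saturation multiplier in [0.7, 1.0].
        // Bind "satmult" in the shader to drive the saturation parameter.
        f@satmult = fit01(rand(@ptnum), 0.7, 1.0);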
  8. Hello guys, I am trying to build a shader that, in addition to what I have in the image, layers in a black-and-white gradient along the Y axis. I am certain it must be very straightforward for anybody with more than a week of Houdini experience. Could anybody please point me in the right direction or outline the steps? Thanks in advance!
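
    To be concrete about what I mean by "layers in a gradient", here is a rough surface-context VEX sketch of the idea. The ymin/ymax bounds and colours are placeholders; in /mat the same thing would normally be built with Fit and Mix VOPs feeding the shader's base colour:

        // Blend two colours by the shading point's world-space Y position.
        float  ymin = 0.0, ymax = 1.0;            // placeholder gradient bounds
        vector bottom_clr = 0.0, top_clr = 1.0;   // black to white
        vector pw   = ptransform("space:current", "space:world", P);
        float  mask = fit(pw.y, ymin, ymax, 0.0, 1.0);
        vector grad = lerp(bottom_clr, top_clr, mask);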
  9. From SideFX's tutorials on UV Basics III, I understand that you can connect to an image brought in and edited in the /img context (via a null node, named OUT for example) and then reference it in other contexts, specifically the /mat context, via the syntax op:/img/img1/OUT. On the SideFX forums I got the suggestion of instead using op:/img/tex/OUT. This is what I get - actually, I get the same thing whether I use "op:/img/tex/OUT" or the standard "op:/img/img1/OUT". Oddly, my material turns this light pink, as opposed to the standard grey, either way, when I try to call in the texture from the null node. I want to be able to edit my texture images with all kinds of colour corrections and procedural tools over in /img and then bring them live into my shaders in /mat. What am I missing, or messing up?
  10. Hello, does anybody know how to render a flat-shaded scene from the OpenGL ROP? I need to do some viewport renders and I would like to have the exact look of the viewport when it is set to flat shading. The OpenGL ROP renders everything in smooth shading, even when it is set to flat. Is it a bug, or am I doing something wrong? Thanks, Juraj
  11. I've run into this issue a few times now. I have a shader that generates several output variables for extra image planes. The variables that are generated would be really useful in SOPs. The only thing I can think of that would allow me to run the same calculation on each point is to collapse the SHOP network into an OTL that could also exist inside a VOP network. But the parameters are being driven on the shader, and it seems a bit messy to channel-reference all the ramps and other parameters. Is there some way to apply the shop_materialpath and then compute and export a variable on a per-point basis? Thanks!
  12. Making Dessert in Houdini - Training

    Hello everyone, this training is an update to the Tea and Cookies training. It covers fairly similar topics, such as modeling, shading, lighting and rendering. The primary difference is that instead of Mantra, the training focuses on using third-party render engines, namely Redshift, Octane and Arnold. The modeling part covers a variety of techniques, ranging from basic poly modeling, VDBs and fluid simulations to POP grains, to build the scene. The shading and lighting part primarily focuses on building all the various shaders required for the scene using a variety of procedural textures and bitmaps. The training also covers SSS, displacement and building fluid shaders using absorption, and we build relatively detailed metal and plastic shaders as well. For the trailer and further details, kindly click on the link below: http://www.rohandalvi.net/dessert/
  13. Procedural Lake Houses tutorial

    Hello, everyone! I'm very excited to share my Procedural Lake Houses tutorial series, where I show how to generate the houses all the way from base silhouette to final shading. Example of the Generated Content: Link to cmiVFX page: https://cmivfx.com/products/494-procedural-lake-house-building-creation-in-houdini-volume-1 Thank you for watching and have a good day!
  14. Hi everyone, I'm doing a personal project to improve my skills with Houdini. In this project I'm creating a jellyfish, which I have already modeled and animated inside Houdini, but I'm having some problems with shading. I want to create a look like this one: https://www.behance.net/gallery/24460837/Jellyfish-Rise-3D-CGI, but I don't know how to do that or where to start. Do you have any suggestions? Thank you.
  15. I'm posting this on behalf of Benuts, a VFX company based in La Hulpe, Belgium. We are looking for an experienced artist to help with water/ocean simulation and shading. We're looking for someone with a lot of experience on that specific topic to help finalise shots for an ad featuring an opening-sea, Moses-type effect. More information will be provided after contact. If you are interested, please e-mail Alexandra Meese: alex@benuts.be
  16. I have primitive groups set up at SOP level - something simple like a box with a Voronoi fracture applied, giving inside and outside primitive groups. How can I access these primitive groups in the SHOP context? I am trying to test whether or not a shading point belongs to a specified group so I can create a mask. I can create an attribute at SOP level to store the group information on the points, and that works, but I want to do this in the SHOP context because the geometry will be heavily subdivided and displaced and I need the accuracy. Any thoughts?
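
    For what it's worth, the closest workaround I can think of is to bake the group into a primitive (rather than point) attribute, so the mask isn't interpolated across the group boundary when the geometry is subdivided, and then bind it in the shader. A minimal sketch, where "inside_mask" is just a name I picked and would be read in the SHOP with a Bind or Parameter VOP of the same name:

        // Primitive Wrangle (SOPs): 1 for prims in the "inside" group, 0 otherwise.
        f@inside_mask = inprimgroup(0, "inside", @primnum);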
  17. Hi everyone. First post here and new Houdini user, so be gentle! I'm trying to understand how one might go about adjusting an HDR within Houdini rather than adjusting it in another DCC and reloading it in Houdini. I tried loading the HDR into /img and then adjusting it with a colour correct before loading it into the Mantra environment light using an "op:" path, but this introduced a painful delay, as Houdini seems to "cook" all too often, even though, as far as I can tell, the HDR doesn't need to be colour corrected until render time, and even then only on a per-pixel basis during the render. So I assume I am approaching this from a totally non-Houdini way of thinking. The options as I see them are:

    1) Adjust the HDR in a different application.
    2) Adjust the HDR in the environment light by adjusting the colour tint values.
    3) Adjust the HDR in the /img context, save it out with a ROP when satisfied with the result, and then pass the direct path of the saved file to the environment light (rather than pathing to the file-in node within /img).
    4) Access the file-in node within the environment light and add a colour correction in there (although when I open the environment light's node network, there is no sign of the file-in node, as one would expect or as you can find in a standard Mantra shader node network).

    So I guess my question is: where I have a 'map' in Houdini, what is the best way to colour correct the map, either by processing outside the object or inside the object's node network? (Where 'object' could mean primitive/light/shader/etc.) Thanks
  18. What are derivatives?

    Hi, This is something I really don't understand and I didn't find anything in the Houdini help. Can anyone explain it in simple terms? Is it only useful in shading? Thanks
  19. I have been fiddling with making a rust texture whose features can be controlled from a point cloud. That way I can have a bit more artistic control while still being entirely procedural, which is kind of the essence of the assignment I am working on. Unfortunately I am pretty sure I am doing something wrong with the point cloud in the VOP network, because it will go OK without an error for a while, but then it suddenly starts giving me an error that some shader doesn't exist, with a name like Ѣæŗ.vfl or some other random ASCII code. I re-output my .sc file and it works for a bit, then suddenly breaks again. Also, I tried to use an op: reference to it at one point, and it didn't really like that either. Any ideas? rustExperiments.0.0.2.hipnc
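
    For reference, the point-cloud lookup in the VOP network boils down to something like this in surface-context VEX; the file path, channel name, radius and point count here are placeholders rather than the actual values in my file:

        // Minimal point-cloud lookup sketch.
        vector pw  = ptransform("space:current", "space:world", P);
        int handle = pcopen("rust_points.pc", "P", pw, 0.5, 20);
        float rust = pcfilter(handle, "rust");
        pcclose(handle);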
  20. I'm curious about the displacement vs the bump settings in Houdini's principled and mantra surface shaders. Is there any difference between bump and displacement with 'True Displacements' unchecked?
  21. Hey guys, I have been playing with a raytracing shader. The surface I apply the shader to and the actual position I am computing in the shader are quite a fair distance apart - it's like I'm looking through a window. All works well and good; however, I am now looking into implementing deep data in this shader. We can read deep data with dsmpixel, but is there any way of writing it out from the shader? At the moment my deep data comes out with the actual plane surface data that the shader is applied to, but not the objects and procedurals inside the "window" that my shader generates. Do any of you have any clues on how to feed this custom data to Mantra? Is it just a simple array variable to export, or a twin array of both depth and colour/alpha? Is it even possible? Thanks for any ideas you might have. J
  22. Shading Fractal Shapes

    Hello everyone, does anyone know how I can shade a fractal shape like the one in the picture with Houdini Mantra? Thank you.
  23. Layering Normal Maps

    Hi guys, I've been trying to layer 2 normal maps inside one shader and I can't quite figure out the right way of blending them. I tried mixing them, adding them, multiplying them and subtracting the difference, but I can't get them to look correct. As a reference, I combined the 2 normal maps in Photoshop by overlaying the red and green channels and multiplying the blue one, and I still couldn't get it to match in Mantra. In my file you can see that I have the displacement texture nodes and I'm plugging them into baseN, and then you can compare that with the displacement texture that is already loading the layered normal from Photoshop. The only way I have found of doing this in Houdini is going inside the displacement texture node and doing the same thing I did in Photoshop, combining the RGB channels right before they go into the displacement node and the normals are calculated. The problem is that it's not a very elegant solution, and I also have the problem that I can't do this with a bump and a normal, so I'm trying to figure out how I can layer the normals themselves, not the textures with the colour data. How can I blend 2 normal maps together, or 1 normal map and a bump map? Is there any way of doing this by manipulating the normals, or will I always have to resort to blending the RGB channels and then getting the value of the normals? Here are the renders showing the difference between the normals layered in Photoshop and the ones where I add the normals together: Here is the file with the textures. layerNormalMaps.zip Cheers and happy new year!
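
    For reference, the channel combine I ended up doing inside the displacement texture node is essentially this (surface-context VEX; the texture paths are placeholders), mirroring the Photoshop recipe of overlaying the red and green channels and multiplying the blue one:

        // Photoshop-style overlay blend for a single channel.
        float overlay(float a, b)
        {
            return a < 0.5 ? 2.0 * a * b : 1.0 - 2.0 * (1.0 - a) * (1.0 - b);
        }

        vector base     = texture("base_normal.rat",   s, t);
        vector detail   = texture("detail_normal.rat", s, t);
        vector combined = set(overlay(base.x, detail.x),   // red:   overlay
                              overlay(base.y, detail.y),   // green: overlay
                              base.z * detail.z);          // blue:  multiply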
  24. Dennis Albus | Technical Reel 2015

    This is my technical reel with some of the things I've been working on lately - mostly focused on lighting, shading and rendering. Please go to Vimeo and watch it in HD, as a lot of the detail is really hard or impossible to see in the SD version. Feel free to also check out my other work on my website (95% Houdini/Mantra): www.dennisalbus.com/portfolio The blog is unfortunately a graveyard right now, but I am planning on posting regular updates and information about some of my tools and shaders soon. Cheers, Dennis
  25. Hello community, (I hope this is the right thread for this type of question.) In the course of my second-year studies at the Animationsinstitut in Germany, I'm going to shoot a short film showing the metamorphosis of a man whose skin turns into printing-press letters. Concept and references: My skill set mainly covers compositing and the basics of visual effects, so I thought I would use a project with real deadlines to boost my 3D skills. For some time now I have been gradually familiarizing myself with Houdini. It feels like doing some kind of 3D compositing. I like it so far. Now I could definitely use some help with the technical realization of the concept. Tutorials on the web are great, but they don't cover this particular effect. As can be seen from the images, I want to randomly scatter letters across a surface. So far I have experimented with displacement maps, but I'm not sure if that's the right approach. I was told SOPs are the way to go. Is it possible to randomly instance geometry so the pieces sit close together without overlapping? And if that is the case, would there be a procedural approach to generating the different letters/symbols? Regarding animation: I just want the transition to be animated, so we move from skin (live action) to roughly brushed lead to the letters. Imagine the brushed lead gets grooved and reveals the letters underneath. Any ideas and comments are very much appreciated. (By the way, I have it in my head to finish this project somehow by the end of July.) Thank you very much for your attention and interest. Regards, dk