Showing results for tags 'Shading'.




Found 66 results

  1. Material Builder to Layer

    Hey guys, I'm trying to get a custom material to export a layer properly so I can combine other shaders. I've attached the .hip file below. This is the material I built. As a simple example, it works as intended. However, there's no layer yet: So, I dove into the principle_core shader and was able to (unsuccessfully) create a pbrdiffuse node, plug that into the layerpack and then export the layer for mixing: But unfortunately when I assign the layermix to the object, I lose the ramp. It seems to take just the first input of the ramp: It's probably very simple, but this is driving me crazy. Any ideas? material_help.hipnc cw
  2. Hello, Here is a short video showing how volumes can be used to shade and texture semi-translucent organic things. Volumes For Organic Assets This approach allows you to achieve a close-up realistic organic look for semi-translucent assets where SSS or colored refraction is not enough. An object with UVs is converted to a signed distance volume with a UVW grid. The volume is then used to set density and perform the UV lookup at render time. This way, density can be adjusted by depth - i.e. not so dense near the surface and very dense at the core; or not very dense, then a dense patch like an island of fat, then not dense again. The UVW grid is used to texture the volume, so different textures can be used at different depths. This gives a very flexible yet powerful way to place any texture at a given depth: texture X at depth Y, e.g. big veins 5mm under the surface. This approach is best for close-up organic hero renders where SSS or refraction don't look good enough. Attaching example file volumetric_textures_example.hip
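The depth-banded density described above can be sketched as a simple function of the signed distance; the band depths and density values here are purely hypothetical, not taken from the example file:

```python
def density_by_depth(sdf):
    # sdf: signed distance to the surface, negative inside (metres).
    # Hypothetical banding, illustrating "not so dense near the surface,
    # then a dense patch like an island of fat, then not dense again".
    if sdf >= 0.0:
        return 0.0        # outside the surface: no density
    depth = -sdf          # distance below the surface
    if depth < 0.002:     # first 2 mm: sparse shell
        return 0.1
    elif depth < 0.01:    # 2-10 mm: dense band
        return 1.0
    else:                 # deeper core
        return 0.4
```

In the shader this lookup would run per sample on the SDF volume; the same idea extends to picking a different texture per depth band.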
  3. World Space Position?

    Hi everyone, I've got a problem that has me stumped. In my setup, I'm flying a camera through a volume whose density is modulated via a couple of noises in the mat context. But whatever I do, I'm unable to make the noises stick to their position; it seems like the P attribute from the globals node is always in camera space. I must be missing something obvious, since a lengthy internet search didn't give me any meaningful results. I've attached my file; to observe the problem, just render a frame and then jump to another frame - the camera will have moved, but the noise pattern is in exactly the same place. Thanks in advance Paul world_space_noise.hip
  4. Hello friends, Today I watched the Custom Shading in Game Trailers presentation by Sergio Caires (FMX 2018). The thing that really caught my eye was the topic of anisotropic bump anti-aliasing (or specular anti-aliasing). This topic is presented at about 45:00 in the video. Has anyone tried doing something like this? Do you have any hints on how to do this kind of effect inside Houdini? I did not understand it well enough from the presentation to implement it myself. Thanks for any kind of advice and have a nice day Link to the video:
  5. Hi all, I am working on a project that has the pages of a book flipping with the camera on top, so the audience can see the content on the pages. I have already created the animation of the pages flipping. The problem is assigning 10 different/unique page textures, content-wise, to the 10 pages of the animated book. What is the best way to do that? I am using a Copy and Transform to define the number of pages. Any pointers will be helpful. Thanks in advance.
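One way to sketch the per-page assignment: derive each copy's texture path from its copy number, store it as a string attribute on that copy, and have the material read it per page. The naming scheme below is hypothetical, not from the poster's scene:

```python
def page_texture(copynum):
    # Hypothetical naming scheme: one unique texture per copied page.
    # In Houdini this string could be written to an attribute on each
    # copy and bound as the texture path in the material.
    return "pages/page_%02d.jpg" % (copynum + 1)
```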
  6. Check this out guys: The bumps on top of the tentacle are done with displacement - in fact, the same displacement map across the whole tentacle. How can it be that the amount of displacement seems to be inverse to the thickness of the geometry? I would expect the displacement to be uniform, regardless of the geometry on which it sits. Can anybody make sense of this for me, please?
  7. Hi, is there a way to reach the texture from Agent materials? opdef:../..?clothing_`padzero(3,ch("../../texture"))`.jpg I just need to edit it slightly... Thanks BK
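For reference, the padzero(3, ...) part of the expression above just zero-pads the channel value to three digits; a Python equivalent (with a hypothetical index value) looks like:

```python
def clothing_texture(index):
    # Same zero-padding as padzero(3, ch("../../texture"))
    # in the opdef path above.
    return "clothing_%03d.jpg" % index
```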
  8. Hello! So recently I saw two new videos produced by FORTICHE, the same company that produced the League of Legends video for Imagine Dragons' "Warriors" a few years back. While that video was impressive, these two newer ones are outstanding. I am sure it takes a lot of money, time, and work to produce anything in this style, but I just adore it and would like to create something similar one day. Does anyone have any information, theories, or even tutorials on how they achieved the look? (I know they implement a lot of 2D elements along with the 3D, but I'm curious about the 3D part.) Also, I have some questions about hair. I have a basic understanding of how hair works with guide lines and such. What I don't understand is how they make the hair so unique and held up, but still have it rendered with individual strands. Also, for the K-pop video, how do they make the hair seem so flatly shaded over certain sections but still show individual strands? Here are some good frames that showcase mostly 2D-esque 3D. They also have a really nice flat shading effect on the hay which I am really curious about. The main shine for this looks hand-drawn over, but the rest of the jewel looks half painted and half actually shaded. This jacket looks really 2D as well; I'm curious how they got the simulated lighting on it to look so drawn. Here is where we start talking about the hair. The hair in this frame is very artistically styled. It may be a basic question, but is that just achieved through guide hairs? How are the guide hairs made not to fall due to gravity but still simulate like actual hair? It doesn't just fall down, but acts like it is constantly being pulled back to a certain hairstyle - are volumes used? These two shots exemplify what I mean by the hair looking like it is rendered in layers. The hair is made up of individual strands, but it's as though when they are grouped together they are rendered as one entity.
Right under the thing southwest of her ear you can see the shadows affecting the individual hairs, but everything looks so held together; there are very few stray hairs. Here is the hair in motion. Sorry if this post is long, but if someone could help explain how companies get hair beyond just an undercut or simple part, that would be great. y2mate.com_-_kda_popstars_ft_madison_beer_gi_dle_jaira_burns_official_music_video_league_of_legends_UOxkGD8qRB4_1080p.mp4
  9. Hey! Could someone help me understand this? I have noise going into the vector displacement of a principled shader. Why do I have to change the amplitude of the noise in the z-axis to affect the y (up/down) position? If I do the same at SOP level, I can just adjust the amplitude in the y value. I tried messing around with the transform VOP, but I didn't get anything working. Thanks displace.hiplc
  10. Looking for a way to apply a flat and fast shader to particles, and also be able to control size, color and opacity over life, as you can with the Trapcode Particular After Effects plugin or X-Particles. Thx!
  11. Hello people! Please tell me how to build material networks correctly. For example, I have an object with different components and different materials, and I need to apply one mask (such as dirt) across all the materials. I can create the mask at the top level and bring out parameters in all the materials to connect it. That seems to work, and I can export the general mask as an image component. But all the other export components that were created at the individual material level do not work now. Please tell me how to build such a connection properly. Thanks! matnet_v01.hipnc
  12. As I understand it, stylesheets give you the opportunity to have multiple conditions per override. I have basically organised my stylesheet with a material per override and then I add in the material assignments per geometry with conditions. However, I have geometry that drops out and renders default grey if I have multiple conditions on the same override. Does that sound familiar? Am I doing stylesheets wrong?
  13. Houdini For The New Artist

    Download the Course Files & Watch for Free at CG Circuit https://www.cgcircuit.com/tutorial/houdini-for-the-new-artist Hello ODFORCE! I'm very excited to share my first Houdini course, titled Houdini for the New Artist. It is perfect for anyone interested in learning Houdini for the first time. To keep things interesting, we learn about the basics while building "Alfred the Rhino" from scratch. If you're looking for an intro tutorial that gives you a bit of everything, is fun to work with, and gets straight to the point - this is for you. Be sure to check out the full course, download the course files, and practice along. Thank you for watching!
    Course Outline
    Intro 42s
    The Interface 12m 26s
    Setting up Our Project 12m 53s
    Utilizing Attributes 10m 47s
    Caching 8m 23s
    Applying Materials 9m 55s
    Adding the Backdrop 6m 41s
    Basic Shading Parameters 5m 36s
    Lighting 9m 30s
    Rendering 12m 29s
  14. Introduction to FX using Houdini - workshop

    Hey what's up guys, Only 1 week left to sign up for my fast-track Houdini workshop "Introduction to FX using Houdini". Limited student capacity! Over the course of 9 weeks and 33+ hours of tutorial content students learn how to do procedural modeling, particle-, RBD-, pyro-, FLIP- and whitewater simulations. On top of that we're going to talk about lighting, shading, rendering and we're going to dive deep into VEX scripting. I would love to answer any questions that you might have!!! Cheers, Manu Find the complete course syllabus here: https://www.cgmasteracademy.com/courses/16-intro-to-fx-using-houdini Also for more detailed information on the workshop, check out an interview that was done on 80.lv recently: https://80.lv/articles/making-first-steps-in-vfx-with-procedural-tools/ Here's a quick teaser:
  15. Hi guys, I'm attempting to get the thickness of an object and use it to blend between two PBR layers - in this case, a sharp refractive layer into an SSS-y one. I could sample an SDF volume on disk and read the thickness from there, but I'm wondering if there is a way to do this at render time - for example, catching the positions of a ray entering and exiting the surface, with lots of samples. Any ideas? An example scene would be much appreciated! Thanks.
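The blend itself is straightforward once a thickness value is available; a minimal sketch, assuming thickness has already been measured somehow (e.g. sampled from an SDF) and max_thickness is a hypothetical user parameter:

```python
def blend_weight(thickness, max_thickness):
    # 0 at thin areas (sharp refractive layer),
    # 1 at thick areas (SSS layer).
    t = thickness / max_thickness
    return min(max(t, 0.0), 1.0)

def mix(a, b, t):
    # Linear blend between the two layer contributions.
    return a * (1.0 - t) + b * t
```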
  16. Hey, I have been trying to create groups from an FBX subnet that can be accessed after the crowd simulation is done, so I can apply different base colors through a Material Stylesheet. I am also curious whether I can export different groups of objects to other 3D packages if I have to render there. What I want to do is create primitive groups that can be accessed after unpacking the Agent SOP, so I can use that data for further processing. I somehow managed to make the groups based on the shaders applied to particular objects, but some of the shaders were not included in the groups, and I would like a proper workaround for this. Or is there a particular method to make groups when exporting the FBX from Maya? Thank you All.
  17. cops to material

    I am trying to edit a texture using a cop2net or the img network. I can do the edit of the image alright, but when I try to refer to that image in my principled shader, it won't work. Shouldn't I be able, in the bump map slot on the principled shader, to input a relative path like ../../cop2net1/Bump_map/OUT/ or, in the case of the img network, /img/Bump_map/OUT/? I have not used COPs before, so maybe I am doing something wrong. Do I need to force it to export into another context? Any help is appreciated. cheers
  18. Hello friends, I was using Arnold for a long time while also playing with Mantra, but until now I never really had time to dive deep into Mantra. The thing I really adore about Arnold is the beautiful "Samples calculator" in its settings. It seems like just a small detail, but in my personal experience it was a great help for optimizing heavy renders. So a few weeks ago I decided to try to create a similar calculator for Mantra. At first I implemented it following the Arnold example, which works like this (I'm not 100% sure about the equations, but in my tests they work):
    Camera (AA) Samples = pow(Camera (AA) samples parameter, 2)
    Diffuse Samples (Min) = Camera (AA) Samples * pow(Diffuse samples parameter, 2)
    Diffuse Samples (Max) = Camera (AA) Samples * pow(Diffuse samples parameter, 2) + (Diffuse depth parameter - 1) * Camera (AA) Samples
    Specular Samples (Min) = Camera (AA) Samples * pow(Specular samples parameter, 2)
    Specular Samples (Max) = Camera (AA) Samples * pow(Specular samples parameter, 2) + (Specular depth parameter - 1) * Camera (AA) Samples
    Transmission Samples (Min) = Camera (AA) Samples * pow(Transmission samples parameter, 2)
    Transmission Samples (Max) = Camera (AA) Samples * pow(Transmission samples parameter, 2) + (Transmission depth parameter - 1) * Camera (AA) Samples
    Total (No lights) Samples (Min) = sum of all the Min values above
    Total (No lights) Samples (Max) = sum of all the Max values above
    But soon I realized that Mantra does not work this way. (Well yes, it was silly to think it works the same way.)
So after reading a lot about how sampling works in Mantra and talking to my friends, I came up with this calculator: ray_count_calculator.hdanc It counts rays like this, where PS = clamp(Pixel samples X parameter, 1, ∞) * clamp(Pixel samples Y parameter, 1, ∞):
    Camera Samples (Min) = PS
    Camera Samples (Max) = PS
    Diffuse Samples (Min) = PS * Diffuse samples parameter * Global multiplier parameter * Min ray samples parameter
    Diffuse Samples (Max) = PS * Diffuse samples parameter * Global multiplier parameter * Max ray samples parameter
    Reflection Samples (Min) = PS * Reflection samples parameter * Global multiplier parameter * Min ray samples parameter
    Reflection Samples (Max) = PS * Reflection samples parameter * Global multiplier parameter * Max ray samples parameter
    Refraction Samples (Min) = PS * Refraction samples parameter * Global multiplier parameter * Min ray samples parameter
    Refraction Samples (Max) = PS * Refraction samples parameter * Global multiplier parameter * Max ray samples parameter
    Total (No lights) Samples (Min) = sum of all the Min values above
    Total (No lights) Samples (Max) = sum of all the Max values above
This uses the premise that these parameters in Arnold and Mantra influence the same things:
    Arnold Camera Samples = Mantra Pixel Samples
    Arnold Diffuse Samples = Mantra Diffuse Quality
    Arnold Specular Samples = Mantra Reflection Quality
    Arnold Transmission Samples = Mantra Refraction Quality
    Arnold SSS Samples = Mantra SSS Quality
But there is a catch: in Arnold, if you set Diffuse Samples to 0 you get a black diffuse indirect pass, and if you set Specular Samples to 0 you get a black specular indirect pass. In Mantra, if you set Diffuse Quality to 0 you still get samples in the diffuse indirect pass, and if you set Reflection Quality to 0 you still get samples in the reflection indirect pass. So I think we can be sure that Mantra's pixel samples also fire diff/refl/refr/sss samples - even with those parameters set to 0, their corresponding ray counts can't be 0 (but I really don't know and can't find out how many of them are fired). Also pay attention to the clamping of the pixel samples - in my tests the pixel samples parameters were always clamped as clamp(Pixel samples parameter, 1, ∞): using values lower than 1 always gave the same result as using 1. This catch made my calculator useless. It seems that Mantra fires all kinds of rays even when using pixel samples only, while Arnold does not. I personally did not expect this behavior, and as far as I know it is not even documented (or at least I could not find it). I spent a few days trying to figure out how these parameters could relate to each other, but I did not find any good solution. So in my frustration I decided it would probably be better to ask you guys whether you have tried to create a calculator like this before, or whether you know how all the Mantra parameters relate to each other. I think it would be a great help for all Mantra users to find out how Mantra works "under the hood". Thank you very much for any advice and have a nice day.
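For what it's worth, the Mantra-style estimate described above reduces to a couple of small functions. This is only a sketch of my assumed formulas, not Mantra's actual internals (which, per the catch above, clearly fire extra rays these formulas don't count):

```python
def camera_samples(px_x, px_y):
    # Pixel samples behave as if clamped to a minimum of 1
    # (values below 1 rendered the same as 1 in my tests).
    return max(px_x, 1) * max(px_y, 1)

def component_range(px_x, px_y, quality, global_mult, min_ray, max_ray):
    # Min/max ray count for one component (diffuse, reflection, refraction).
    cam = camera_samples(px_x, px_y)
    return (cam * quality * global_mult * min_ray,
            cam * quality * global_mult * max_ray)
```

For example, with 3x3 pixel samples, diffuse quality 1, global multiplier 1, and min/max ray samples 1/9, camera_samples(3, 3) gives 9 camera rays per pixel and component_range(3, 3, 1.0, 1.0, 1, 9) gives (9.0, 81.0) diffuse rays per pixel.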
  19. VDB smoke uv mapping

    Hi guys I'm trying out some stuff for my upcoming project and was wondering how to make UVs for smoke converted to polygons, so that they remain stable rather than jumping around every frame. I found some info on using a dual rest position, however I'm not sure how to tackle it, or if it could work with this approach. Here is what I have right now (I'm not sure if you can see it, but there are scratches on the surface of the smoke which, as expected, change position every frame; the question is how to make them stay in place even though the number of points is changing and the whole thing is deforming). Here is the type of effect I'd like to achieve: Any tips are very much appreciated!
  20. MultiColored Material

    Hi, Would anyone know a good tutorial on how to make multicolored materials like this? I tried it with a color mix node and a cell noise pattern, but it doesn't give me a good result. Also, is there a way to expand the color mix node to use more than 2 colors? Thanks
  21. Layering Normal Maps

    Hi guys, I've been trying to layer 2 normal maps inside one shader and I can't quite figure out what is the right way of blending them. I tried mixing them, adding them, multiplying and subtracting the difference but I can't get them to look correct. As a reference I combined the 2 normal maps in Photoshop by overlaying the red and green channels and multiplying the blue one and I still couldn't get it to match in Mantra. In my file you can see that I have the displacement texture nodes and I'm plugging them into the baseN, and then you can compare it with the displacement texture that is already loading the layered normal from Photoshop. The only way I found of doing this in Houdini is going inside the displacement texture node and doing the same I did with Photoshop combining the RGB channels right before they go into the Displacement node and the normals are calculated. The problem is that it's not a very elegant solution, and I also have the problem that I can't do this with a bump and a normal, so I'm trying to figure out how I can layer the normals themselves, not the textures with the color data. How can I blend 2 normal maps together, or 1 normal map and a bump map? Is there any way of doing this manipulating the normals or will I always have to resort to blending the RGB channels and then getting the value of the normals? Here are the renders showing the difference between the layered normals in Photoshop and the one where I add the normals together: Here is the file with the textures. layerNormalMaps.zip Cheers and happy new year!
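One common way to blend decoded normals directly (rather than the raw RGB textures) is the "whiteout" blend, which roughly matches the Photoshop channel trick described above: sum the tangent-space x/y components, multiply z, then renormalize. A minimal sketch, assuming both inputs are already decoded from [0,1] RGB to [-1,1] unit normals:

```python
import math

def blend_normals_whiteout(n1, n2):
    # n1, n2: decoded tangent-space unit normals (x, y, z),
    # with z pointing out of the surface.
    x = n1[0] + n2[0]          # sum the x/y perturbations
    y = n1[1] + n2[1]
    z = n1[2] * n2[2]          # multiply the "up" components
    length = math.sqrt(x * x + y * y + z * z)
    return (x / length, y / length, z / length)
```

A bump map could be converted to a normal first (from its height gradient) and then blended the same way, which avoids mixing color data at all - though whether that matches the Photoshop result exactly is another question.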
  22. electric shader

    Hi guys, When I make this kind of electric effect, what kind of material should I use? Just a basic principled shader, or something else? Thanks.
  23. Hi everyone, I am trying to find out how exactly to go about exporting procedural displacement patterns from within a material builder, that can then be used to augment or modify other displacement patterns going into other shaders. Somehow, however, whatever I try I cannot get it to work properly. It does not render the way it should. I do not want to use the layer struct for this, since I want to be able to extract only parts of the displacement texture and not the whole thing being used in my principled shader (inside the material builder). In my mind this should be an easy thing to do, so I am really wondering what I am doing wrong, or if it is actually a bug in Houdini? Either way I would really appreciate if someone would check out the attached hip file and analyse. Perhaps someone knows exactly what is going wrong? Help is much appreciated. Thanks, Doug shader_test_v002.hipnc
  24. I would like to ask for some shading advice to render the same type of particle system as the one seen in the Black Panther credits. Is it geo instancing with low-polygon objects? How can we manage the glossiness variation on the particles? Any tips for creating the same result with Redshift would be awesome. Thanks Julien
  25. Hi!! I'm a VFX student new to Houdini, and I'm having some problems rendering a "stone" shader because some of the stones in my displacement show some kind of glitch/flicker. I'm rendering in Mantra with a 3-5 samples setup; I've tried adding a normal node and it makes the same error. My shading setup for the displacement is a ZBrush displacement map + 2 stone textures with some fits + uv noise + uvxform, all connected to a color mix that goes into the displacement. My ZBrush displacement map is 8K resolution in TIFF. Dragons_FlickerGlitch_h264_ivanp.mov