Showing results for tags 'rendering'.




Found 197 results

  1. Redshift Z depth

    Hi, I'm trying to render Z depth for smoke in Houdini via Redshift. It seems to work fine with objects, but somehow it's not working with Pyro smoke. It looks like Redshift doesn't "see" the volume. Any idea why? Thank you
  2. Hi all, I'm not a lighting or shading person. I'm barely a Houdini person, if truth be told, but I struggle on... I am having a terrible time trying to figure out how to assign random textures to packed geometry in Solaris/LOPs/USD/Karma (referred to as SULK from now on, OK?).

    I have a crowd scene with some flag-waving agents. I want the flags to have a material where I can choose a random texture from a set of three textures. With me so far? Excellent. How? I've tried the 'editMaterial' node, I've tried the 'materialVariation' node, and I've even looked at 'variations', but that's a total mess in there - I'm not going anywhere near that!

    The flag geometry is packed and has an attribute called 'flagTex' which holds a random int value from 1 - 3. When I import the geometry into SULK, that attribute becomes primvar@flagTex. So far so mediocre. Now the problem: how do I use that attribute to choose a texture file on the shader? I can do this in Mantra, so I would expect it can be done in SULK, but how? I expected SULK to relieve some of the pain of StyleSheets, but in truth it is a lot worse. Please, someone, help me out here before I throw in the towel and ditch SULK. Many thanks. Dan
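    The lookup being asked for here - a 1-based int primvar selecting one of three texture files - is just index arithmetic, independent of which LOP node ends up doing it. A minimal Python sketch of that mapping, with hypothetical file names:

```python
# Sketch of the index-to-texture mapping described in the post: a random
# int attribute (flagTex, values 1-3) picks one of three texture files.
# The file names below are hypothetical placeholders.
TEXTURES = [
    "flag_tex_01.rat",
    "flag_tex_02.rat",
    "flag_tex_03.rat",
]

def texture_for_flag(flag_tex: int) -> str:
    """Map a 1-based flagTex value to a texture path."""
    # Wrap via modulo so out-of-range values still resolve to a file.
    return TEXTURES[(flag_tex - 1) % len(TEXTURES)]

print(texture_for_flag(1))  # flag_tex_01.rat
print(texture_for_flag(3))  # flag_tex_03.rat
```

    However the shader-side hookup is done, the 1-based-to-0-based offset is the part that is easy to get wrong.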
  3. I am trying to assign either .tiff or .rstexbin files to a character that I am using for RBD. When I assign the textures to the character, they look flat even though I assigned the displacement map. The character seems to have at least 4 UV sets per texture. Is there an easy way to create all of the groups needed, since the character has at least 12 different objects that were merged together before being packed? I unpacked the geo and could see the UV sets all layered on top of one another. I have 25 different texture maps, and I am having to select the polygons individually and then group them so I can use a Material SOP to assign to each group. If there is an easier way to go about this, please let me know. I have attached a URL for the textures and the hip file. https://drive.google.com/uc?id=1mjLxLb2AWJcL2qAZTXe0fIVlP860Ku78&export=download DinoCharacterHelp.v01.hipnc
  4. It's my first week using the Apprentice version (build 18.0.391) and I am having trouble rendering my files out in Mantra. It seems this problem is not unique, but the available solutions are not working for me (spaces in file paths, firewall issues, etc.). I am still working my way through this, but not being able to render my builds has really been bumming me out for a while now. Any leads will be appreciated. Thanks.
  5. Looking for some advice on rendering a smoke/dust sim with a heavy building-destruction sim as a holdout using Mantra. Without holdouts the smoke takes about an hour or so per frame, as expected, but with the holdouts it is taking many hours (up to 6) for some of the heavier frames. I am using packed Alembic caches for each of the layers, then merging them together and setting them as force matte in Mantra.
  6. Pyro Sim not rendering properly

    So I got some help from a guy on here for my uni project. The file he gave me looks amazing, but when I come to render it, the fire doesn't show at all. Does anyone know how to resolve this? PyroSim.zip
  7. Hello! I just got myself a Threadripper 3960X for freelance simulation purposes etc. My workstation is now up and I'm doing some performance tests, and I'm getting some very unexpected results when rendering in Mantra. I'm very curious if someone can help me understand what's going on under the hood. A simple scene I created (with some geo, lights and reflective materials) takes about 2 minutes to render in Mantra in bucket mode. All good and as expected! The same scene takes about 3 times as long in progressive mode. If I monitor the core activity, they're almost idling while rendering; then suddenly, about halfway through, they all accelerate up to max speed and finish the render in seconds... Anyone wanna take me to school? Thanks, Joachim
  8. Hey guys, do you know if it is possible to use intrinsic information in a shader? Something like intrinsic:indexorder in a Render State, just like with packed attributes. Also, still in a Render State, packed:primnum or packed:ptnum doesn't seem to be working. If I make an attribute in a wrangle on packed prims, let's say i@primid = i@primnum, and I call that in the shader with a Render State, it works as expected. Any idea? Thanks, Martino
  9. Hi all, this is a technical question to help me understand what's going on under the hood with how geometry lights are rendered. I am playing with writing my own raytracer within Houdini, where I also want to use geometry lights. The way I go about it is that each shaded point does a shadow test to a randomly generated point on the surface of the light geometry. As this happens completely at random, it is somewhat likely that the generated point will lie on the far side of the geometry, so the light geometry itself will cast a shadow (self-shadow on). This results in the first attached picture. This makes sense to me: if you consider a shaded point on the wall behind the turquoise cube, it has only a 1/6 chance of generating a point on that cube that it can actually see, whereas a point to the top and right of it sees 3 sides of the same light geometry and therefore has a higher chance of generating a point it is illuminated by. This is not what I would imagine seeing in real life (though maybe my conception is wrong). However, when I rebuild and render the scene in Houdini/Mantra (light type set to geometry, with a box as the geo and self-shadow on), the result is very different (see attachment 2). Maybe someone can shed some light on how Mantra does its magic. Cheers!
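    The 1/6-versus-3-sides reasoning above is easy to check numerically: for a convex emitter, a surface sample is visible from a shading point exactly when its outward normal faces that point. A small Monte Carlo sketch in Python (unit cube light, made-up shading points), not a claim about how Mantra samples:

```python
import random

def sample_cube_surface(rng):
    """Uniform point + outward normal on the surface of the unit cube [0,1]^3."""
    face = rng.randrange(6)           # all 6 faces have equal area
    u, v = rng.random(), rng.random()
    axis, side = divmod(face, 2)      # axis 0..2, side 0 (min face) or 1 (max face)
    p = [u, v]
    p.insert(axis, float(side))
    n = [0.0, 0.0, 0.0]
    n[axis] = 1.0 if side else -1.0
    return p, n

def visible_fraction(shade_pt, samples=100_000, seed=1):
    """Fraction of uniform light-surface samples the shading point can see.
    For a convex light, a sample is visible iff its outward normal
    points toward the shading point."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        p, n = sample_cube_surface(rng)
        d = [shade_pt[i] - p[i] for i in range(3)]
        if sum(n[i] * d[i] for i in range(3)) > 0.0:
            hits += 1
    return hits / samples

# A point straight out from one face sees ~1/6 of the samples;
# a point off toward a corner sees roughly three faces' worth (~1/2).
print(visible_fraction((5.0, 0.5, 0.5)))  # ~0.167
print(visible_fraction((5.0, 5.0, 5.0)))  # ~0.5
```

    This reproduces the darkening the post describes: naive uniform sampling wastes most samples on back faces, which is why renderers typically importance-sample only the visible portion of the light.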
  10. Forest Fire

    Forest fire done entirely in Houdini and rendered in Arnold.
  11. Quick question. This is the diffuse AOV from a render using nothing but a distant light. To give you an idea, this scene is rather large: the geometry is an ocean volume and a plane. I have removed the displacement map from the material to isolate this as much as possible. I cannot for the life of me find what sampling quality setting I'm missing here. Does anyone recognize this kind of circle pattern coming from the light?
  12. Reflection with alpha

    Hey everyone, I've got a question that, even after lengthy research, still has me stumped. I've got a pyro fireball and a ground geometry that reflects the fireball. The simple question is: how do I set up the ground plane and its material so it reflects the fireball, including alpha, and just gives me a black alpha where there is nothing to reflect? I've tried forcing it to phantom, but that just makes it invisible; other, somewhat hacky workarounds like this one are rather old and in the SHOP context. Nothing I've tried seems to work. When I'm working in 3ds Max and V-Ray, I use this technique all the time (V-Ray object properties -> reflection/refraction matte), so I just don't get how this can be this much of a faff in Houdini; I'm probably missing something pretty obvious. I'm grateful for any and all help, thanks in advance!
  13. Hi all! There's 1 week left to register for my Mastering Destruction 8-week class on CGMA! In supplemental videos and live sessions we'll be focusing on some of the awesome new features in Houdini 18, and exploring anything else you might be interested in, such as vehicle destruction. For those that haven't seen me post before, I'm an FX Lead at DNEG, and previously worked at ILM / Blue Sky / Tippett. Destruction is my specialty, with some highlights including Pacific Rim 2, Transformers, Jurassic World, and Marvel projects. I've also done presentations for SideFX at FMX/SIGGRAPH/etc, which are available on their Vimeo page. If you have any questions, feel free to ask or drop me a message! https://www.cgmasteracademy.com/courses/25-mastering-destruction-in-houdini https://vimeo.com/keithkamholz/cgma-destruction
  14. Hey, I am new to Houdini and I just created something like an explosion, and I want to render it with a moving camera, so the scene is moving. Can I do that? Will it take longer to render with an animated camera? I am afraid it won't work, so maybe someone who has done it before can tell me?
  15. I've started testing Houdini 18 and Arnold 6. The first test was simple spline rendering: 250,000 splines instanced 25 times, loading a 140 MB Alembic file, rendered on a 6-core Xeon CPU and an Nvidia Quadro RTX 5000 (Windows 10 Pro). The startup for Arnold GPU is slow; it renders faster, so it seems, but clearing up the final image takes forever, or it just drops/crashes - hard to tell on the GPU. The CPU is quite fast, but much slower than the GPU, if the GPU ever would finish (adaptive sampling was on). As soon as Arnold finishes rendering the scene, it stops and no longer refreshes on parameter changes. So far I am not impressed with Arnold GPU rendering. Here is the same scene with Arnold CPU and only direct lighting (on my MacBook), and some tests with Arnold GPU; it performed much better with just direct lighting.
  16. I am working on a sea of clouds. It takes me too much time to render, so I want to render the clouds separately and composite them using Deep Camera Maps, but then there is no interaction between the clouds, because they are rendered separately. I want to know if there is a solution for adding the shadows between clouds. Secondly, what's the difference between a Deep Shadow Map (DSM) and a Deep Camera Map (DCM)?
  17. How can I render my particles with the isotropic volume lighting model and get self-shadows? It's like there is no occlusion: I can see through my particle field from one side to the other, and I can't distinguish which particles are in front and which are in the back... Cheers, Diogo
  18. Are there any good resources on rendering particles with Karma? And does it support micropolygon rendering? Thanks
  19. Recent work from the Chinese animated feature 哪吒 (NeZha). I was responsible for creating and rendering the water effects, using Houdini FLIP to build a guided system for the water, rendering in Mantra, with final compositing done in Nuke.
  20. Hi all, I'm exposing a few parameters on a shader within a Material Builder node. These are things such as spec roughness etc. For some reason, after I've added a few, they no longer have any effect. Any pointers?
  21. Hi! I'm trying to figure out how to add a glittery effect to Houdini Mantra hair. Would I have to build that into the hair shader? If so, could anyone give me a quick rundown on how to start doing that? The effect I'm going for is like you dipped your hair in glitter water, so the end result is very sparkly hair. I would also like to adjust the color of the glitter. Edit: I'm also generally finding it hard to find a good explanation of how to start adjusting/building your own materials.
  22. Friends, I'm struggling with this effect and I need your advice.
  23. Render deep data with Mantra

    Hey guys, I was wondering if it is possible to export EXR images with deep data in them, to use later in Nuke for deep compositing. Also, I would like to know the difference between exporting deep data from a fully textured model versus from an explosion or a fire. Cheers! Trapo.
  24. Hi everyone, I'm trying to set up a TOPs network to cache out sims with multiple parameters being sent to the DOP network. Seemingly basic stuff; however, nothing I'm doing seems to work the way I think it will. What I've tried:

    • Using a Wedge node to set a random attribute and trying to use that attribute in an exterior DOP network - it caches into a viewable geo ROP output, but all the wedges were the same.
    • Rebuilding the pyro sim purely in the TOPs network - it makes something, but when viewed, all the frames appear empty.
    • Having the source inside TOPs and the DOP outside - same result as the first.
  25. Anamorphic Lens Squeeze

    I know this is a bit of a bizarre request, but I'm trying to simulate rendering anamorphic squeezed, to later be unsqueezed in comp. I've set up my resolution and pixel aspect ratio, but as expected it just reframes the image and doesn't produce the squeezing effect you might get from something like Maya's lens squeeze parameter. Any advice on how to achieve this? Thanks!
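    For reference, the usual arithmetic behind rendering squeezed is to divide the horizontal resolution by the squeeze factor and carry that factor as the pixel aspect ratio for downstream desqueezing; whether a given renderer honors it that way is a separate question. A Python sketch of just the target numbers (the 3840x1608 resolution is a made-up example):

```python
def squeezed_render_settings(final_w, final_h, squeeze=2.0):
    """Resolution/pixel-aspect arithmetic for rendering anamorphic-squeezed:
    the render is horizontally compressed by `squeeze`, and a pixel aspect
    ratio equal to `squeeze` tells comp how to desqueeze back to final_w."""
    render_w = round(final_w / squeeze)
    return {"res": (render_w, final_h), "pixel_aspect": squeeze}

# e.g. a 3840x1608 desqueezed target with a 2x anamorphic squeeze:
print(squeezed_render_settings(3840, 1608))
# {'res': (1920, 1608), 'pixel_aspect': 2.0}
```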