Showing results for tags 'redshift'.


Found 82 results

  1. Hi everyone, I've been tasked with laying down a decent workflow/pipeline plan for my team, revolving around creating motion graphics and key visuals for advertising in Houdini, and I was hoping to find some help/suggestions for it. I chose Houdini as the core element to avoid extra plug-ins, which in my experience created a vast amount of issues in the long run, but I feel I won't be able to do everything I need quickly enough using it alone. My first idea was to get licenses for ZBrush, Substance and Redshift to complement Houdini, so basically it would be like this: start by modeling non-procedural things in ZBrush, texture them in Substance, animate everything in Houdini and finally render in Redshift. When I started testing the workflow, I immediately noticed that the Substance Designer plugin is not compatible with the latest versions of Houdini, and my biggest fear came back, since I was hoping to achieve something at least slightly more seamless. I am now considering making all maps and textures through Photoshop and ZBrush only, skipping Substance altogether and sticking as much as possible to procedural shaders in Houdini, if I can produce them quickly enough. Anyway, what kind of workflows do you have or recommend? Could you give me any tips on how I could make things go smoother and avoid unexpected compatibility issues? How has your experience been using Substance Painter or Designer within the workflow so far? Cheers!
  2. Houdini Environment

    Landscape created in Houdini using the HeightField SOP and Redshift proxies.
  3. Hi, I have a two-computer "farm" set up to render out PDG wedge iterations. Rendering with Mantra works fine with the local scheduler as well as with HQueue. Redshift works just fine with the local scheduler, but when rendering with HQueue it renders out just one frame of my iterations, without any error messages. The frames are numbered by @wedgeindex, which apparently worked in 3/4 of the cases listed. Any ideas? Thanks, Hendrik
  4. Redshift Substance Shader

    I've made this to save time loading shaders from Substance Source, but it should work for any Substance material. Just select the parent folder of the texture you want. Figured some others might find it useful too. It has proper viewport handling of Base Colour, Roughness, Metallic, Normal, Emission and Displacement. Not 100% perfect, but pretty damn close. Hdalc for now. I tried to upload it to Orbolt but kept getting an error ("Invalid node type name"); I don't think it likes Redshift nodes. If anyone has any experience with this, let me know! MT_RSSubstance_1.0.hdalc
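    A folder-to-channel loader like this HDA boils down to matching filename tokens against shader slots. A minimal Python sketch of that matching step (the token patterns are assumptions; adjust them to however your Substance exports are named):

```python
import re

# Hypothetical keyword map: which filename tokens feed which shader channel.
# The patterns are assumptions, not the HDA's actual rules.
CHANNELS = {
    "basecolor":    re.compile(r"(base[_ ]?colou?r|albedo|diffuse)", re.I),
    "roughness":    re.compile(r"rough", re.I),
    "metallic":     re.compile(r"metal", re.I),
    "normal":       re.compile(r"normal", re.I),
    "emission":     re.compile(r"emiss", re.I),
    "displacement": re.compile(r"(disp|height)", re.I),
}

def classify_textures(filenames):
    """Map each texture file to the first shader channel whose pattern matches."""
    mapping = {}
    for name in filenames:
        for channel, pattern in CHANNELS.items():
            if pattern.search(name):
                mapping.setdefault(channel, name)  # keep the first match per channel
                break
    return mapping

files = ["brick_BaseColor.png", "brick_Roughness.png",
         "brick_Metallic.png", "brick_Normal.png", "brick_Height.png"]
print(classify_textures(files))
```

    In an HDA you would do the same scan over the chosen parent folder and wire each match into the corresponding texture parameter.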
  5. Hi, I'm trying to set up and use style sheets with Redshift and not having much luck. The main thing I'm trying to do is override the Redshift Object ID so that unique IDs appear in the cryptomatte per point. All I've done is a Voronoi shatter, then assembled the pieces with packing on. I haven't had any luck past this point. If someone could point me in the right direction, or propose an alternative solution, that would be helpful. The only other thing I can find or think of is assigning per-point materials, but this is out of the question since there will be 2000+ points. The post here seems to suggest that this is possible: I've also gone over the Mantra way of doing this here several times without any progress: http://www.sidefx.com/docs/houdini/shade/stylesheets.html
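    Cryptomatte mattes are keyed by hashed names, so one common workaround is to derive a stable integer ID per packed piece from its name attribute and write that out as the object ID. A plain-Python sketch of the idea (zlib.crc32 stands in for whatever hash your renderer actually uses; Cryptomatte proper uses MurmurHash3, and the per-piece names are hypothetical):

```python
import zlib

def piece_id(name, bits=32):
    """Derive a stable, repeatable integer ID from a piece name.
    Illustrative only: crc32 is a stand-in hash, not Redshift's internals."""
    return zlib.crc32(name.encode("utf-8")) % (1 << bits)

# One ID per voronoi piece, e.g. from the 'name' attribute after Assemble.
pieces = ["piece{}".format(i) for i in range(2000)]
ids = {name: piece_id(name) for name in pieces}
print(len(set(ids.values())), "distinct IDs for", len(pieces), "pieces")
```

    The same hash could be computed per point in a wrangle and stored on an integer attribute, which avoids assigning 2000+ individual materials.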
  6. Split Half Underwater

    Hi there everyone, I am trying to recreate this shot from Shutterstock as accurately as possible in Houdini: https://www.shutterstock.com/video/clip-14617168-slow-motion-split-half-underwater-close-up Creating the wave, I imagine, is pretty straightforward, and I have done so before using RealFlow with good results. The main challenges would be how to handle the thickness of the water surface as it intersects the camera; how to render different environments for the underwater and above-water portions, and how to combine them (previously I used masking, with mixed results); and then secondary elements like the bubbles that form with splashes and the distortion of the above-water image caused by water splashing over the camera lens. I would ideally like to use Redshift to render this, but anything that goes towards achieving a photorealistic result, or any ideas or resources anyone might have or can point me towards, would be great. I'm a beginner in Houdini but relatively experienced with 3d in general. Many thanks, H
  7. Hello there, I'm relatively new to Redshift and encountered a problem that I couldn't solve by searching and looking into the documentation. I set up a simple scene with an ocean (Standard material with a slightly modified Water preset) and a rough, sand-like surface beneath the ocean. With caustics enabled in the dome light settings I get "smarties" (fireflies) where, beneath the water surface, caustics should render. Why? Modified Redshift ROP settings: Unified Sampling 128–1024; Global Illumination: Primary > Brute Force, Secondary > Irradiance Point Cloud; Photon Mapping: Caustics & GI enabled. A hint here would be much appreciated! Thanks
  8. Hello Houdini people! May I ask you a simple question? I'm trying to render an extremely difficult scene, with a lot of lights and textures, with Redshift. Unfortunately I can let my computer render only during the night (because during the day I need it for my work). The render I'm trying to make usually takes almost 20 hours, but as I wrote, when I need the computer I have to stop it and start again later, and the interrupted render is unfinished (in the corners, where the buckets didn't complete) and therefore unusable. What I need is, let's say, to select some part of the camera view and render just that part to disk: basically I would divide the whole frame into small parts (one part per night, say) and then put them together in Photoshop. So my question is: is it possible to select a region in the camera view (for example, as in Blender/Cycles with Shift+B+LMB) and then render this selected region to disk (I do not want to render in the scene view)? Thanks a lot for any advice!! Bye!
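    Splitting a frame into per-night chunks is just integer arithmetic on the resolution. A small sketch of the bookkeeping (pure math; whether you feed these numbers into a crop/region parameter on your ROP depends on what your renderer exposes):

```python
def tile_regions(width, height, nx, ny):
    """Split a width x height frame into nx*ny non-overlapping pixel regions.
    Each region is (x0, y0, x1, y1) with exclusive upper bounds, so the
    tiles cover every pixel exactly once even when sizes don't divide evenly."""
    regions = []
    for j in range(ny):
        for i in range(nx):
            x0, x1 = width * i // nx, width * (i + 1) // nx
            y0, y1 = height * j // ny, height * (j + 1) // ny
            regions.append((x0, y0, x1, y1))
    return regions

# Four nightly chunks of a full-HD frame:
for region in tile_regions(1920, 1080, 2, 2):
    print(region)
```

    Because the upper bounds are exclusive, adjacent tiles butt together with no gaps or double-rendered pixels, so reassembly in Photoshop is a straight paste.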
  9. Hi guys, I need help with fisheye rendering of geometry using the Redshift renderer or Mantra in Houdini. Any idea how to do that? Thanks, Rahul
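    For context on what a fisheye camera actually computes: in the common equidistant projection, the image radius is proportional to the angle off the lens axis. A sketch of just that math (this is the projection formula, not any renderer's API):

```python
import math

def fisheye_uv(direction, fov_deg=180.0):
    """Equidistant fisheye projection: image radius r = theta / (fov/2).
    'direction' is a unit view-space vector with +Z as the lens axis.
    Returns (u, v) in [0, 1]^2 with the axis mapping to the image center."""
    x, y, z = direction
    theta = math.acos(max(-1.0, min(1.0, z)))   # angle off the lens axis
    phi = math.atan2(y, x)                      # azimuth around the axis
    r = theta / math.radians(fov_deg / 2.0)     # 0 at center, 1 at the fov edge
    return (0.5 + 0.5 * r * math.cos(phi),
            0.5 + 0.5 * r * math.sin(phi))

print(fisheye_uv((0.0, 0.0, 1.0)))  # straight ahead lands at the image center
```

    A lens shader or CVEX camera that inverts this mapping (pixel to ray direction) is one way such a projection gets implemented when a renderer lacks a built-in fisheye camera type.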
  10. Hi there, Total Houdini novice here with a bit of a strange question: I have a smoke simulation that I created, swirling around a central character that is being rendered in Unreal Engine. https://drive.google.com/file/d/1fuoitdNjuBZ1UPgkAi-e9G63CKW-40YS/view?usp=sharing The smoke itself is rendered out of Houdini in slices and played back in Unreal as the alpha of a video texture, but I was wondering if anyone had any ideas on how it might be possible to set up something like a shadow catcher to catch the shadows cast by the smoke on the surface of the character. Instead of just rendering the shadows that hit a specific piece of geometry to a frame as seen by the camera, it would be applied to the character and rendered out as a texture sequence taking into account the character's UVs, so that it can be played back in Unreal. I feel like this is a pretty weird request and no doubt there is tons wrong with it, but I thought I might as well ask; you can do anything in Houdini, it seems! Many thanks for any advice, H
  11. Hello, How do you render multiple Redshift ROPs? There are several ways to do this with Mantra nodes, but none of them seem to work with Redshift nodes. Thanks.
  12. Hey magicians, I'm having crashes when rendering (Houdini closes directly, without any message). This happens on random frames, and only in some of the setups. I checked the log and it seems to be a GPU issue; I have three GeForce 980 Tis. I will contact their support, but I'm posting here just in case anybody knows a solution. Cheers
  13. Hi guys, I need help with spherical rendering of geometry using Redshift in Houdini. When using Redshift in Maya we can change the camera type to Spherical, but I'm not finding the same kind of option in Houdini. Is there another way to do spherical rendering using Redshift or Mantra?
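    For reference, a spherical (lat-long/equirectangular) camera just maps each image coordinate to a ray direction on the sphere. The underlying math, as a sketch (pure projection formula with +Y up; not any renderer's API):

```python
import math

def latlong_direction(u, v):
    """Map a lat-long image coordinate in [0,1]^2 to a unit ray direction.
    u wraps longitude (full 360 degrees), v spans pole to pole; the image
    center (0.5, 0.5) looks straight down +Z."""
    lon = (u - 0.5) * 2.0 * math.pi   # -pi .. pi
    lat = (v - 0.5) * math.pi         # -pi/2 .. pi/2
    return (math.cos(lat) * math.sin(lon),
            math.sin(lat),
            math.cos(lat) * math.cos(lon))

print(latlong_direction(0.5, 0.5))  # image center -> straight ahead
```

    When a plugin lacks a built-in spherical camera type, a lens shader evaluating exactly this mapping per pixel is the usual substitute.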
  14. Hi guys, I have a problem with Redshift for Houdini. I've got some triangulated geo from Marvelous Designer. There is a simple UV shader and a bump map, no displacement. I get a tiny white edge at seemingly random places. I've tried adding normals, tried subdividing, tried playing around with the Redshift tag, but nothing works. Any ideas? Best, Matias
  15. Recently I checked Rohan Dalvi's procedural texturing and baking lessons, and they were really useful for learning texture baking. But the reason I bought this tutorial series was to get a better understanding of the concepts behind baking. What I really want to learn is how to bake lightmaps using Redshift, but there is little information on the subject. I checked the documentation on the Redshift website (https://docs.redshift3d.com/display/RSDOCS/Baking), but honestly, I don't understand any of it even after learning how to do texture baking. The process goes like this: 1. Creating an appropriate unwrapped UV channel that will be used for baking 2. Creating and configuring one or more bake sets 3. Setting up the AOVs 4. Executing the bake It falls apart for me at step 2: do I need to use the Bake Texture node? And for step 3: why do I need to set up AOVs, and how is this connected to the Bake Texture node? Am I missing some important information here? Could somebody explain to me, step by step, how to create a lightmap using Redshift? Currently I'm using Houdini 16 and Redshift 2.5.32.
  16. With the help of both the Redshift community and resources here, I finally figured out the proper workflow for dealing with Redshift proxies in Houdini. Quick summary: out of the box, Mantra does a fantastic job automagically dealing with instanced packed primitives, carrying all the wonderful Houdini efficiencies right into the render. If you use the same workflow with Redshift, though, RS unpacks all of the primitives, consumes all your VRAM, blows out of core, devours your CPU RAM, and causes a star in a nearby galaxy to supernova, annihilating several inhabited planets in the process. Okay, maybe not that last one, but you can't prove me wrong so it stays. The trick is to use RS proxies, driven in turn by the Houdini instances, instead of letting Redshift unpack the Houdini geometry. A lot of this was based on Michael Buckley's post. I wanted to share an annotated file with some additional tweaks to make it easier for others to get up to speed quickly with RS proxies. Trust me; it's absolutely worth it. The speed compared to Mantra is just crazy. A few notes:
     • Keep the workflow procedural by flagging Compute Number of Points in the Points Generate SOP instead of hard-coding a number.
     • Use paths that reference the Houdini $HIP and/or $JOB variables. For some reason the RS proxy calls fail if absolute paths are used.
     • Do not use the SOP Instance node in Houdini; instead use the instancefile attribute in a wrangle. This was confusing, as it doesn't match the typical Houdini workflow for instancing.
     • A lot of posts on RS proxies mention that you always need to set the proxy geo at the world origin before caching it. That was not the case here, but I left the bypassed transform nodes in the network in case your mileage varies.
     • The newest version of Redshift for Houdini has an Instance SOP Level Packed Primitives flag on the OBJ node under the Instancing tab. This is designed to do basically the same thing that Mantra does automatically. It works for some scenarios but not all; it didn't work for this simple wall-fracturing example. You might want to take that option for a spin before trying this workflow.
    If anyone just needs the Attribute Wrangle VEX code to copy, here it is:

        v@pivot = primintrinsic(1, "pivot", @ptnum);
        3@transform = primintrinsic(1, "transform", @ptnum);
        s@name = point(1, "name_orig", @ptnum);
        v@pos = point(1, "P", @ptnum);
        v@v = point(1, "v", @ptnum);

    Hope someone finds this useful. -- mC Proxy_Example_Final.hiplc
  17. I assume I'm missing something obvious. When you have multiple Redshift ROPs, how do you specify which one is used for IPR rendering? The Render View only allows you to select from Redshift_IPR ROPs, so I'd like to have a Redshift_IPR ROP for each Redshift ROP. How do you explicitly define this connection between the Redshift_IPR ROP and the Redshift ROP? The "Linked ROP" attribute doesn't seem to do anything. I'm a little lost. Please help. Thanks
  18. Stark Houdini Palm Tree FX

    Link to Course In this 3-hour course I show you how to set up a system for dynamic and unique palm trees; no two are alike. I start from the very first leaf and go all the way through to rendering with Redshift in the most efficient way possible. At the end you will understand how to make your own trees, and how to take these concepts and apply them to other setups that will bring your skills to the next level! Includes 13 chapters and the final Houdini file, plus the assets to follow along. Full HD.
  19. I have a dynamic volume that I want to render with Redshift. It is not a pyro sim, but when rendering, the density changes as a function of time, so by about frame 100 it has disappeared. I want it to stay the same. I can't find an attribute in the Redshift volume shader related to time. Can anybody help me? I basically want the render to look the same on frame 500 as it does on frame 1. The volume is generated via a point cloud that is rasterized. Thanks!
  20. Hey, I took advantage of some of the price drops on GPUs lately with the release of the 20-series cards from Nvidia. I got an EVGA 1070 Ti FTW2 card for $429 (it also has a $20 MIR, dropping the final cost to $409). I put this into my machine, which has had an EVGA 1080 Ti FE card in it since I built it a year and a half ago. I wanted to share the "real world" test results in case anyone else is wondering whether it is worth it to pick up another card. The PC is running Win10 Pro, 64 GB DDR4 RAM, an Intel i7 6850K at 3.6 GHz, a Samsung 960 EVO M.2 SSD primary drive and a secondary Crucial SSD, a 1000 W EVGA Gold G3 PSU, Houdini 16.5.268 and Redshift 2.5.48 (I think), etc. I ran a Redshift test on a scene with a decent-sized pyro sim, rendering 60 frames at full HD with fairly high-quality settings. With just the 1080 Ti in the PC, the render took 38 minutes 17 seconds. With the addition of the 1070 Ti, the render took 25 minutes 26 seconds for the 60-frame sequence. Adding the second card took almost 13 minutes off the sequence render time, so I would say it is worth the roughly $400. Using the option to enable/disable GPUs in the Redshift plugin options, I ran a single frame of the render, and here are the results: with just the 1080 Ti, 26 seconds for the first frame; with just the 1070 Ti, 34 seconds; with a little boost to the GPU settings on the 1070 Ti using the EVGA overclock software, 32 seconds (not enough for me to want to keep it overclocked beyond how it arrived); with both GPUs enabled, 15 seconds. I think I would be willing to buy another 1070 Ti while the sale/rebate is going on if it will reduce the render time a further 13 minutes. I'm assuming it would, but maybe I'm not adding something up right here. If adding one 1070 Ti to the machine cut 13 minutes off the render, wouldn't the addition of another 1070 Ti take another 13 minutes off the render time? It would be incredible to drop the test sequence render time from 38 minutes to 12 minutes for roughly $800 in hardware upgrades! I ran all the PC's parts through a component checker online, and even if I add a third card, it should still have about 100 watts of buffer on the 1000 W PSU. I would probably want to add some more/better case fans if increasing the GPU count from one to three. Anyway, that's what adding an extra card did for me. E
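    The single-frame numbers above fit a simple model: render rates add, rather than times subtracting. The combined frame time is the harmonic combination of the per-card times, which is why a third card saves less wall-clock time than the second did. A sketch of that arithmetic (an idealized perfect-scaling model, not a benchmark; the measured 26 s and 34 s frames predict roughly 15 s together, matching the post):

```python
def combined_time(times):
    """Multi-GPU frame time if each card contributes rate 1/t and rates add.
    Assumes perfect scaling with no scheduling or transfer overhead."""
    return 1.0 / sum(1.0 / t for t in times)

print(combined_time([26.0, 34.0]))         # ~14.7 s, close to the observed 15 s
print(combined_time([26.0, 34.0, 34.0]))   # predicted with a second 1070 Ti
```

    By this model a second 1070 Ti would cut the per-frame time from about 15 s to about 10 s: a real gain, but not another full 13 minutes off the 60-frame sequence, because each added card removes a smaller absolute slice of the remaining time.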
  21. Hi, I was playing with this wool effect .otl, and it works for the main, thicker geometry, but I cannot get the fine fluff/strands with Redshift. I have enabled OBJ params on the strands and enabled Render as Strands in the Redshift settings, but the finer fluff that he is getting with Mantra does not show with Redshift. Anybody want to save me hundreds more minutes trying to figure it out? Thanks! Video: https://vimeo.com/243011851 Otl file: https://www.dropbox.com/s/6eg0uyhzqm1fdr6/Create_Wool.otl?dl=0 PS: If there are better methods for a wool/felt effect, I'd love to know!
  22. Hi, has anyone seen much information about the latest Nvidia cards coming out at the end of September, in relation to GPU rendering? I have been waiting for the specs on these cards for months, and they have finally been released, but of course all the articles I have seen so far are still somewhat speculative about performance, with "leaks" of specs that may or may not be real, and all geared towards gaming. I must say some of these leaked tests aren't too impressive, like 5% performance increases for the new RTX 2080 Ti over the old 1080 Ti, but I would have to assume that's because the software doing the tests isn't taking advantage of the RT and tensor cores. I am disappointed that on a $1200 card they still have only the same 11 GB of RAM as the 1080 Ti; although it is faster/newer RAM, I was hoping for more! Have there been any statements by Redshift or OTOY about what speed improvements will come from having a card with RT and tensor cores? Just wondering, because I will be needing another two graphics cards in the next month, and the 10-series cards have had great price drops recently; some 1070 Tis are as low as $399. If these new flashy RT cores are going to be a huge performance gain, then I will probably hold out for at least the 2070s. Any info would be great. Thanks. E
  23. I'm emitting smoke with colors. After adding and creating a DOP fields import with density, vel and Cd channels for rendering, the colors are not visible in the render. When I visualize them as a color channel, everything works fine. How can I make volume colors visible in a Redshift render?
  24. Droid_v1

    Hi all, Here is a quick doodle in Houdini + Redshift + DAZ. Any comments are welcome. Cheers.
  25. Hello everyone, I am new to Houdini, and I am trying to do something I think should be simple, but I can't wrap my head around how to set it up correctly so that it works. I want to transfer a point attribute from points generated on a source mesh back onto that mesh, or ideally into a Redshift shader (RS Material), to control the opacity. I am trying to use the super cool tutorial below, and to use the generated points to drive the reveal of a mesh through the opacity channel in the Redshift shader. I can't seem to understand how to set this up so that the parts that need to see each other can, and do I need to convert the attribute into something that goes from 0 to 1 and/or black to white? I'm just a bit lost, and I'm sure I'm overcomplicating things, so if anyone would please just point me in the right direction at least, it would be much appreciated. What is the general basic order this should go in? Mesh, then generate points; I can see the attribute in my geometry spreadsheet, but now what? I can't seem to use Attribute Transfer correctly. Thank youuu!!!
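    On the 0-to-1 conversion part of the question above: normalizing an arbitrary attribute into an opacity mask is a clamped linear remap, the same idea as Houdini's fit() function. A plain-Python sketch of the math you would mirror in a wrangle (the attribute and range names are illustrative):

```python
def fit01(value, old_min, old_max):
    """Remap value from [old_min, old_max] to [0, 1], clamped at both ends.
    Same shape as Houdini's fit(); degenerate ranges map to 0 to avoid
    dividing by zero."""
    if old_max == old_min:
        return 0.0
    t = (value - old_min) / (old_max - old_min)
    return max(0.0, min(1.0, t))

# e.g. a distance-to-generated-points attribute driving opacity:
# near the points = fully revealed, beyond the range = fully transparent.
for d in (0.0, 0.25, 0.5, 1.0, 2.0):
    print(d, "->", fit01(d, 0.0, 1.0))
```

    Once the attribute sits in 0 to 1, it can be exported as a per-point float and read in the shader's opacity input; values outside the range clamp instead of overshooting.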