Showing results for tags 'redshift'.




Found 60 results

  1. I'm emitting smoke with colors. After adding a DOP Import Fields node to bring in the density, vel and Cd channels for rendering, the colors are not visible in the render. When I visualize Cd as a color channel in the viewport, everything works fine. How can I make volume colors visible in a Redshift render?
  2. Droid_v1

    Hi all, here is a quick doodle in Houdini + Redshift + DAZ. Any comments are welcome. Cheers.
  3. Hello everyone, I am new to Houdini and I'm trying to do something I think should be simple, but I can't wrap my head around how to set it up correctly. I want to transfer a point attribute from points generated on a source mesh back onto that mesh, or ideally into a Redshift shader (RS_Material), to control the opacity. I'm trying to use the super cool tutorial below, using the generated points to drive the reveal of a mesh through the opacity channel of the Redshift shader. I can't work out how to wire things up so that the parts that need to see each other can. Do I also need to convert the attribute into something that goes from 0 to 1 (black to white)? I'm just a bit lost, and I'm sure I'm overcomplicating things, so if anyone would please point me in the right direction, it would be much appreciated. What is the general order of operations? Mesh, generate points, I can see the attribute in my geometry spreadsheet, but now what? I can't seem to use Attribute Transfer correctly. Thank you!!!
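    For the 0-to-1 question in the post above: that conversion is a plain linear remap, the same math as VEX's fit() function. A minimal Python sketch of the idea (illustrative only, not the hou API; the function name mirrors VEX):

```python
def fit(value, old_min, old_max, new_min=0.0, new_max=1.0):
    """Linearly remap value from [old_min, old_max] to [new_min, new_max],
    clamping to the output range (the same behavior as VEX fit())."""
    if old_max == old_min:
        return new_min
    t = (value - old_min) / (old_max - old_min)
    t = max(0.0, min(1.0, t))  # clamp, as VEX fit() does (efit() would not)
    return new_min + t * (new_max - new_min)

# Example: a transferred attribute ranging 0..5 becomes a 0..1 opacity mask
distances = [0.0, 2.5, 5.0, 7.0]
opacity = [fit(d, 0.0, 5.0) for d in distances]
```

    In a wrangle the equivalent one-liner would be VEX's own fit(); the point is only that the attribute has to land in 0-1 before a shader can read it as an opacity weight.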
  4. Houdini Lighting Artists - ETC LA

    Hey guys, we have a big need for Houdini Lighting artists here at Electric Theatre Collective LA. Any referrals are also welcome!
  5. I want to scatter some trees as instanced .rs files on my ground. Each tree contains around 130K points. I have scattered them on a grid using Redshift proxy files and instancing.

    My approach: export each tree as a .rs file, import it on a Redshift Proxy node, instance the proxy, scatter on the grid, done. Render time is painfully slow (or is it?).

    My computer: 32 GB RAM, two graphics cards: one NVIDIA GTX 1080 and one GTX 1070.

    I have 1,000 trees scattered. When I try to render the trees up close, one frame takes around 2 minutes. Is this slow? When I checked out Atom's video on doing something like this, he had a much faster render response, but his .obj/.rs files had fewer points than my objects. Is this the main reason, or am I using the Redshift instancing wrong? I'm not using his workflow; I import each .rs file on a separate node. Thanks for any help at all. I want to optimize my scene, but I'm unsure whether I need to reduce the point count on each .obj/.rs or whether I can do something else to reduce render time. SCENE FILE: redshiftProxy_v01.hiplc
  6. Redshift Object Id's

    Hey all, I'm trying to figure out why I can't get Redshift's puzzleMatte AOV to work. I understand the workflow in Redshift for Maya, as it's pretty easy to set an object's ID there, whereas in Houdini I'm not quite sure where to set the object ID at the SOP level. I've even tried the Material ID right in the RS_Material node and still no dice. What is the proper workflow for something like this? Do I create a wrangle and set the ID in there, or is there some sort of parameter that I need to create in the interface? The issue is I'm not sure what attribute (if it is an attribute at all) Redshift is looking for, or whether it lives on points, prims or detail. I'm a bit lost, and there don't seem to be any tutorials or posts about this! Cheers, -J
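    Independent of where the ID is set, it may help to see what a puzzle matte does with it. In the common convention (hedged: the function below is a conceptual sketch, not the Redshift API), each matte AOV is configured with a triple of integer IDs, and an object's coverage lands in the R, G or B channel whose slot matches its ID:

```python
def matte_channel(object_id, matte_ids):
    """Return which RGB channel (0=R, 1=G, 2=B) an object with object_id
    contributes to in a matte configured with the 3-tuple matte_ids,
    or None if the object is not in this matte at all."""
    for channel, matte_id in enumerate(matte_ids):
        if object_id == matte_id:
            return channel
    return None

# A matte configured for IDs (1, 2, 3): object 2 shows up in green
assert matte_channel(2, (1, 2, 3)) == 1
```

    So an empty puzzle matte usually means the IDs on the objects never match the IDs the AOV was configured with, which is worth checking before suspecting the attribute class.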
  7. What are the latest Houdini + RS versions people are successfully running? For me it's Houdini Indie 16.5.439 + RS 2.6.12 (built for Indie 16.5.268). Any more recent production or daily build of Indie doesn't launch on my setup (Ubuntu 18.04 + NVIDIA Titan V GPU), and any more recent RS version simply won't render; I get strange results in the RS RenderView especially.
  8. Hey all, I started using Redshift a few months ago and so far I like it a lot! I currently have one 1080 Ti 11 GB GPU in the machine I'm using. I'd like to get either another one of those cards, since GPU prices seem to have dropped back to something more reasonable lately, or two 1070 Ti 8 GB cards. I've heard the performance difference is only about a 20% gain for the 1080 over the 1070s, so I might be better off getting two 1070s.

    The main question, though: what happens when you hit the render button on your Redshift node for a sequence inside Houdini if you have more than one GPU? If I had 3 GPUs, would it automatically render frame 1 on the first GPU, frame 2 on the second, frame 3 on the third and so on? Would it render the first frame across all 3 cards simultaneously by splitting the frame up between them; is that even possible? Do the extra GPUs only get used with distributed-rendering software like Deadline, or by launching RS renders from the command line? It seems like you have to launch from the command line to get the other GPUs to render, but I have never worked with a machine with more than one GPU installed. If I were to submit a 100-frame render from within Houdini by clicking the Render to Disk button on my RS node, would it only use the main GPU even with other GPUs installed? Any info from artists using multi-GPU systems would be great. I didn't find much about this on the Redshift site, but I might not have looked deep enough.

    The end goal is the ability to kick off, say, 2 sequences rendering on 2 of the GPUs while leaving my main GPU free to continue working in Houdini, and at the end of the work day let the main GPU render another sequence so all 3 cards are running at the same time. I will most likely need a bigger PSU for the machine, but that is fine. I am running Windows 10 Pro on an i7-6850K hex-core 3.6 GHz CPU with 64 GB RAM on an ASUS X99-Deluxe II motherboard, if that helps evaluate the potential of the setup.

    One last question: SLI bridges. Do you need one when setting up 2 additional GPUs that will only be used to render RS? I do not want the extra GPUs combined to increase power in, say, a video game. I don't know much about SLI and when/why it's needed in multi-GPU setups, but I was under the impression it's for combining the power of the cards for gaming performance. Thanks for any info, E
  9. Greetings on behalf of the Principium team! We are a group of seven third-year students from NHTV University of Applied Sciences in Breda, the Netherlands. We have been working on this cinematic short film project for 19+ weeks, from its conception and storyboard to the stage you see now. We hope to complete and release the short film in July 2018. We are using Houdini and Redshift for Houdini for various effects in the film, some crucial to the narrative and others just to add juice! We want to bring you along for the last two months of our project and share our progress as we near completion. Feel free to give any feedback or comments; we look forward to hearing from you!
  10. I’ve got a simple voronoi fracture sim with added edge detail (from Jonny Farmfield's example). An object made from two Boolean’d spheres falls and breaks on the ground. Everything is working great except I can’t figure out how to get my RSNoise bump texture to stick to my fractured pieces. I’ve tried everything suggested in my search results, but it’s not working. I tried creating a vertex UV attribute as well as a Rest SOP, to no avail. I’m quite certain it’s something simple. It always is. On a side note, I am creating three point attributes from the boolean and fracture groups to pass to my Redshift shaders: inside, outside, and ainsideb. This works great and is picked up by RS even though I’ve not specified that the attributes get passed through my Assemble SOP. When I do pass them through explicitly, they get all jumbled up and no longer correspond to the right points. Does Redshift just figure out these attributes on its own? They aren’t even listed in the Geometry Spreadsheet, yet they work correctly. I didn’t want to assume this behavior, as it might be messing up other attributes needed for the noise to work. I’ve attached a file. simple.edge.displace.v7.hiplc
  11. Hi! I am working on RBD in Houdini 16.5.268 and I render it with Redshift 2.5.56. I have some problems during rendering: we can see the edges of the fractures! I have tried a lot of things without managing to get rid of them. It works fine in Mantra, no artefacts. My geo has UVs, normals and rest positions. Do you have any idea how to correct that, please? The scene is attached, along with some pics of the results. PS: I have also had other problems while rendering RBD in Redshift. In some scenes, applying two different materials with different displacement values to the inside and outside groups of an object results in a gap between the inside and outside faces. In other scenes, the displacement of the inside surfaces flickered; it seemed to move over time, and I couldn't find out why. I have read the Redshift documentation and watched several tutorials, but maybe I'm not working the right way with displacement in Redshift. Could someone help me? Thank you very much! Have a good day! Image captions: no tessellation/no displacement, tessellation/no displacement, no tessellation/displacement, tessellation/displacement, Mantra displacement material. MAT_prob_v01.hipnc
  12. REDSHIFT AOV EMPTY

    If I render with Redshift, my AOVs are always empty, whether Z depth, GI, diffuse raw....
  13. Redshift_Mesh_Light

    I'm practicing Redshift with Rohan Dalvi's tutorials, but I'm stuck where he uses a geometry light, also known as a mesh light. It doesn't work for me: the mesh is there and linked to the RS_Light node, but nothing happens.
  14. Hi guys, recently I followed some tutorials and made this guy, but when I render him in Redshift the wires are so thin I can't see them, just like this T_T. When I rendered him in Mantra the wires have a proper thickness, but I don't like that look, so I want to render him in Redshift, and I'm looking for some clever people to give me tips about it. Sorry for my poor English, and please help me! Thanks for reading and have a good day.
  15. My Houdini version is 16.0.736. I read in the official documentation that the 16.0.705 plugin is compatible with Houdini 16.0 build 705 and above, but it fails for me. Just like this.
  17. Hey magicians, I'm trying to render millions of particles using a method I saw in Peter Quint's videos with the Wedge ROP. My issue is that when I load lots of instances, the PC gets super slow. PQ recommends using packed disk primitives to avoid this, but when I use them, I can't read attributes other than P on the Redshift instance, so I can't apply things like motion blur. Any recommendations to solve this? Thanks!
  18. Hey magicians, I'm new to Redshift. I'm trying to generate some sort of glossy particles to add some sexiness, but I can't get the desired result. What I tried is making 2 instances, one with the full point cloud and the other with the glossy points, but I can't make them pop like the references. References: What I have so far: Thanks!
  19. Hi, in building a terrain I used Entagma's tutorial on live rendering of heightfields in Houdini using Redshift. The issue I'm running into is that my UVs are now super stretched on the terrain from using displacement on a grid. My terrain is huge, about 8,600 units square. If I convert it to polygons, it's pretty heavy. I'm trying to figure out the best workflow that gives me the detail I need without being so polygon-heavy. Would exporting some lower-resolution geo for rendering, along with a baked displacement map, be a good option? I also need to be able to edit the UVs somehow so they aren't stretched on my terrain texture. Any help would be appreciated. Jim
  20. Hello, I recently went through Steven Knipping's tutorial on Rigid Bodies (Volume 2). And now I have a nice scene with a building blowing up. The simulation runs off low-res geometry and then Knipping uses instancing to replace these parts with high-res geometry that has detailing in the break-areas. The low-res geometry has about 500K points as packed disk primitives so it runs pretty fast in the editor. Also, it gets ported to Redshift in a reasonable amount of time (~10 seconds). However, when I switch to the high-res geometry, as you might guess, the 10 seconds turn into around 4 minutes, with a crash rate of about 30% due to memory. When I unpacked it I think it was 40M points, which I can understand are slow to write/read in the Houdini-Redshift bridge, but is there no way to make Redshift use the instancing and packed disk primitives? My theory kind of is that RS unpacks all of that and that's why it takes forever, because when I unpack it beforehand, it works somewhat faster - at least for the low-res. The high-res just crashes. I probably don't understand how Redshift works, and have wrong expectations. It would be nice if someone could give me an explanation. Attached you'll find an archive with the scene with one frame (Frame 18) of the simulation included as well as the folder for the saved instances of low- and high-res geometry. Thanks a lot for your help, Martin (Here's the pic of the low-res geometry, the high-res is basically every piece bricked down into ~80 times more polygons.) Martin_RBDdestr_RSproblem_archive.zip
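    For a rough sense of why unpacking blows up the translation time and memory in a scene like the one above, here is a back-of-envelope estimate. The per-point float count is an assumption (P, N and uv only); real per-point overhead in Houdini/Redshift is higher, so treat this as a lower bound:

```python
def unpacked_geo_bytes(num_points, floats_per_point=9, bytes_per_float=4):
    """Rough lower bound on raw attribute storage for unpacked geometry.
    floats_per_point=9 assumes P (3) + N (3) + uv (3); topology, indices
    and per-primitive data come on top of this."""
    return num_points * floats_per_point * bytes_per_float

low_res = unpacked_geo_bytes(500_000)      # the post's low-res sim
high_res = unpacked_geo_bytes(40_000_000)  # the unpacked high-res geo
print(high_res / 2**30)  # on the order of 1.3 GiB of raw attributes alone
```

    Even under these optimistic assumptions, the 80x jump in point count turns a trivial transfer into gigabytes of data per frame, which is exactly the cost that keeping geometry packed (and letting the renderer instance it) is meant to avoid.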
  21. Hello all, I have created a model in Houdini which consists of 6 different sections. Each section has its own unique point attribute, so I merged everything into one object with the help of VDB and transferred all the point attributes. I would like a different shader on each of those 6 sections of the geometry. My approach was to create one Redshift shader network and, with the help of a Blend material and the unique attributes as blend weights, separate the shaded areas. But for one of the areas I needed a separate Blend material and a different displacement. So do I need to create groups based on those unique attributes and assign shaders within the geometry node using 6 unique Redshift shader networks? Or is there a way to assign those 6 shaders by attribute?
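    The "groups from attributes" route amounts to partitioning point indices by attribute value and then assigning one material per partition. A generic Python sketch of that split (illustrative only, not the hou API; in Houdini the same thing is done with a Partition SOP or group-by-attribute):

```python
from collections import defaultdict

def partition_by_attribute(attr_values):
    """Map each distinct attribute value to the list of point indices
    carrying it - the same split a group-by-attribute would produce."""
    groups = defaultdict(list)
    for point_index, value in enumerate(attr_values):
        groups[value].append(point_index)
    return dict(groups)

# Six sections tagged 0..5 on a handful of points:
sections = [0, 1, 1, 3, 0, 5]
groups = partition_by_attribute(sections)
# groups[0] -> [0, 4], groups[1] -> [1, 2], ...
```

    With one group per section, each group can get its own material assignment, which sidesteps stacking six blend weights inside a single shader network.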
  22. Hello fellow scientists, I am currently working on a shot where I want to render out an indoor explosion over about 50 frames. Since I can't get my Mantra render times below 20 minutes (in FHD), I had hoped to render the volume more quickly in Redshift, because I have had good experiences rendering just density in RS. The problem comes up as soon as I start to use emission with Redshift: I get very weird boxes of emission around the edges of my volume (they look like 16-cubed blocks of voxels, reminding me of compression); screen attached. Screenshot_50 has about the look I want to achieve, but with this weird halo. When I amp up the emission, the boxes become apparent. I'm not sure whether I tried switching to VDB for rendering, but I'm not hopeful that would change much. Has anyone come across this issue before? Attached is also my hip file with the way I set up my render and one cached frame, so you don't have to sim again (including my approach to an indoor explosion for anyone who might learn something from it; please do!). I would really appreciate any thoughts on this, and on whether my decision to move away from Mantra in this case is actually remotely sensible; I'm really unsure. Thanks, Martin Grenade_RnD_v20.hip
  23. PIPELINE TD NEEDED

    Hi everyone, We're urgently looking for a solid Pipeline TD to join our growing team in Bournemouth, UK. Must be eligible to work in the UK. Desired knowledge/expertise: Maya, Houdini, Redshift, Golaem Crowd, Python, Deadline, Windows and Linux, UI design, solving per show render and data-flow challenges, liaising with department heads. We are an 80-seat full service VFX studio working on film, TV and commercials. Recent projects include Life, 47 Meters Down, Philip K Dick's Electric Dreams and Nocturnal Animals. We are looking to expand even further over the coming months to accommodate a number of big shows that we've won recently. We operate a little differently to a lot of our contemporaries, putting staff treatment first and encouraging healthy work/life balance through 7-hour working days and discretionary overtime/TOIL for the odd occasion that deadlines stack up. We're five minutes away from the beach, too. If you get the chance, please check out what we've been up to at www.outpostvfx.co.uk! IF YOU'RE INTERESTED, PLEASE EMAIL US AT TALENT@OUTPOSTVFX.CO.UK. Can't wait to hear from you!
  24. I understand the relationship between unified max samples and the so-called 'local' samples, and the importance of keeping unified samples lower than local samples. But what I find confusing is the relationship between the BF, material and light 'local' samples. Which takes priority, and is there a recommended sequence of steps when reducing noise?
  25. I can't figure out how to get the output created by the "Sky Rig" shelf tool from the Cloud FX shelf to render with Redshift. I have no similar issue with the volume created by the "Cloud Rig" tool, although both seem to me like they create a VDB volume with a channel called "density" (which is what I've tried to render in Redshift with an "RS Volume" shader, which again works just fine with the kind of volume that "Cloud Rig" creates). What am I missing?