rohandalvi Posted April 2, 2014

Hi,

I thought this would be interesting to post. OTOY just announced Octane 2.0. It looks really cool, and they seem to be developing at a really fast pace. The feature list includes:

- Displacement
- Object motion blur (deformation motion blur coming by the end of the year)
- Fur rendering
- OpenSubdiv support
- Improved sky rendering
- etc.

They also announced render passes by the end of the year.

http://render.otoy.com/forum/viewtopic.php?f=5&t=39059

The link above contains a truck animation which pretty much had me fooled in the beginning. I almost thought it was a live shoot with the truck composited into the scene. Apparently it's all 3D and the scene only uses 1.5 GB of RAM. Render time is 4-5 mins per frame on 4 Nvidia Titans.

The rest of the feature list is given in the link below:

http://render.otoy.com/forum/viewtopic.php?f=7&t=38935

There is also a video from their presentation at GTC 2014, which I am posting below. Towards the end of the video they also lay out their roadmap for this year.

http://nvidia.fullviewmedia.com/gtc2014/S4766.html

My main reason for posting this is that maybe it's time SideFX started looking into GPU rendering as well. With cards getting more and more RAM every passing year (as it stands today the Titan has 6 GB, the GTX 780 has 3 GB, a 6 GB GTX 780 has also been announced, and the top-end Quadro has 12 GB), the situation will only improve in the years to come.

I recently picked up a GTX 780 and was trying out a demo version of Octane and also Blender Cycles. Both renderers can easily challenge a 16 or 24 core Xeon system (as long as the scene fits within memory), even with a single GTX 780. For me that's really nice, because the price difference between adding a GTX 780 and trying to assemble or purchase a 24 core Xeon system is pretty huge.

I do understand that GPU renderers have certain limitations, but with every passing year the list of limitations seems to get shorter. So do you think it's time that Mantra got a GPU version as well? Like Blender Cycles, have a CPU and a GPU version so the user has a choice and something to fall back on in case of very heavy scenes. Because honestly the speed difference is getting a little hard to ignore.

regards
Rohan Dalvi
JonathanGranskg Posted April 2, 2014

Looks so cool! I'm sure SideFX is already looking into GPU rendering.
Tom Posted April 2, 2014 (edited)

Redshift already has passes and all that stuff, except volumetric rendering (which is going to be there soon), and it's just so much faster than Octane and Cycles. So the only limitations are bugs and some other integration things (like not supporting all procedural textures). After using Redshift for some time I can say that GPUs will replace (and are replacing) CPUs soon, at least for freelancers. So I'd be more willing to have Redshift in H than Octane or other GPU renderers (except Mantra itself, that would be awesome).
rohandalvi Posted April 2, 2014 (edited)

Hi,

I recently tried Redshift. It felt a lot like a GPU version of V-Ray: all those multiple GI methods and a ton of parameters to tweak. And honestly, after spending years tweaking parameters in V-Ray, I don't want to go back to doing the same thing except on the GPU. I know there is a brute force mode in Redshift as well, but I just wasn't very happy with the result.

Somehow I felt that I was getting better looking images out of Octane than out of Redshift. I know the look of a render is a very subjective thing, but I just felt that Octane looked better. Besides, Octane has a 3ds Max plugin, which is my primary software for work (sadly).

But it's nice to see so many different GPU renderers popping up all over the place. Now we just need the 3D software developers to convert one of their integrated renderers to GPU like Blender has done. So eventually I would prefer it if Mantra got a GPU mode; that way I don't have to depend on any external plugin.

regards
Rohan
sebkaine Posted April 2, 2014 (edited)

The thing I don't understand with GPU-based engines is how you manage a renderfarm... If I am not mistaken:
- A GPU farm will be far more expensive than a CPU one.
- If you run high-end GPUs you will in the end have to buy very good CPUs to make them work.
- There is also the lifecycle parameter: CPUs run around 35-60°C while GPUs are more around 100-120°C, and their lifespan is usually shorter in my experience.

So to me, if you want to deploy your computing power on a farm, a CPU-based engine is still better. GPU computation to accelerate the previz process is very cool indeed, and popular engines like Arnold / Maxwell / PRMan / V-Ray all have this kind of feature. But I don't understand exactly the utility (hype) of a 100% GPU-based engine; maybe I am missing a strategic element in my argument?

I also don't see the point of using an external BIASED engine (Redshift) inside Houdini when you have Mantra at your disposal. For indies with only one desktop it could be useful to get quick render times on a single machine. But for a company that gets unlimited Mantra tokens for free and has a decent CPU renderfarm, I don't know if a GPU engine could compete in terms of price/speed against Mantra.

Cheers
E
Guest mantragora Posted April 2, 2014

The thing I don't understand with GPU-based engines is how you manage a renderfarm... If I am not mistaken: A GPU farm will be far more expensive than a CPU one. If you run high-end GPUs you will in the end have to buy very good CPUs to make them work. There is also the lifecycle parameter: CPUs run around 35-60°C while GPUs are more around 100-120°C, and their lifespan is usually shorter in my experience.

You can fit a couple of Tesla cards in one workstation, so you need fewer nodes to do the same work. And one such node with 3 Tesla cards will do more work than a couple of multi-processor CPU nodes.

This is an older OTOY presentation, but they show more cool stuff there, with instant rendering on 60 GPUs.

http://www.ustream.tv/recorded/36313045
altbighead Posted April 2, 2014

The thing I don't understand with GPU-based engines is how you manage a renderfarm... If I am not mistaken: A GPU farm will be far more expensive than a CPU one. If you run high-end GPUs you will in the end have to buy very good CPUs to make them work. There is also the lifecycle parameter: CPUs run around 35-60°C while GPUs are more around 100-120°C, and their lifespan is usually shorter in my experience. So to me, if you want to deploy your computing power on a farm, a CPU-based engine is still better.

You just summed up the reason why GPU rendering is still not popular: more expensive than CPU, and probably a lot of heat too.
Guest tar Posted April 2, 2014

It's great to have GPU renderers improve, but at the same time CPU renderers are also improving. Ironically, the final goal is to merge CPU and GPU, i.e. Nvidia Pascal. The big question is: what will make GPUs look slow in the future?
Guest mantragora Posted April 2, 2014 (edited)

You just summed up the reason why GPU rendering is still not popular: more expensive than CPU, and probably a lot of heat too.

Xeons are not free, and I don't think that 60 Xeons can outperform 60 Tesla cards.

EDIT: For a Xeon node you also need a pricier server motherboard. And you will need the most powerful Xeons to match the power of the Teslas. I'm not sure, but the cost may not be an issue here. Heat, maybe. Memory is a problem, but Redshift solves this differently than Octane, so maybe it isn't a problem anymore either.
rohandalvi Posted April 2, 2014

Actually a 780 Ti will perform about the same as a Tesla for rendering (both have 2880 CUDA cores) and would be a lot cheaper. There is just a massive difference in RAM: 3 GB vs 12 GB. But if you don't deal with very heavy scenes, a 780 or a Titan would be just fine.

- 780 Ti - $690
- Titan - $1000
- Tesla K40 - $5000

In comparison, a top of the line Xeon is about $1000-plus just for the processor, and I am pretty sure a 780 Ti could outperform it if RAM is not a consideration. Also, not everyone deals with film-level scenes. Sometimes the jobs are a lot simpler, without massive textures or geometry, but take a long time to render because there are a lot of complex material effects.

I recently did a medical animation for a client. The scene file was barely over 8 MB, but it would take over 15-20 mins a frame at full HD on my i7 920 with 4 GB RAM (not the greatest of machines), for 750 frames in total. So eventually I rendered it at my friend's studio on an HP Z820, a 32 core system. The render time on that was around 3-5 mins a frame. This was using V-Ray.

Later on I transferred the scene to Blender to try out Cycles. I had an old card, a GTS 450 (120 cores, 1 GB RAM). Using that card I was getting 4 mins a frame at full HD. I recently upgraded to a GTX 780, and the same scene now renders in under a minute. So for someone like me a GPU renderer can be a godsend.
altbighead Posted April 2, 2014

Xeons are not free, and I don't think that 60 Xeons can outperform 60 Tesla cards.

It's not a case of outperforming but of how multi-purpose the hardware is. 60 Tesla cards are going to sit idle when there are no jobs in the pipeline that use a specialized GPU renderer or GPU software written for Tesla. A CPU, on the other hand - throw 3D rendering, Nuke rendering, simulation, or caching at it - doesn't discriminate. I see the GPU as an aid to productivity, but it's not going to take the CPU's spot in the near future. I think Nvidia is pretty stoked about GPU renderers, but that isn't surprising.
sebkaine Posted April 2, 2014 (edited)

What's interesting is that the gap between CPU and GPU is not that big now. And I agree when you say that not everybody has film computing needs.

Rohan, as you seem to know hardware performance pretty well, would it be possible to devise two systems with similar performance, one GPU-based and one CPU-based? Here is a rough example.

CPU-based:
- Xeon E5-2630 V2 = 600 Euro
- s2011 mobo = 200 Euro
- GPU inside the CPU
= 800 Euro

GPU-based:
- GTX 780 Ti = 600 Euro
- decent CPU like the 4770 = 300 Euro
- s1150 mobo = 100 Euro
= 1000 Euro

So we are quite close. Do you think that for the same price, GPU engines are now (or soon will be) better in price/perf than CPU ones? That would be quite interesting to know, leaving the heat emission problem aside!

EDIT: One other important thing is power needs. Intel has done a great job with their CPUs' power consumption; I guess NVIDIA cards are still sucking electricity like blood. So you have to pick a high-wattage power supply, and thus you have to take into account:
- electricity
- air conditioning to cool all this
- also the risk that with a GPU farm your electrical installation will not handle the load...
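To put that comparison into numbers, here is a minimal sketch in Python. The component prices are the ones listed above; the relative render throughput figures are placeholders only, to be replaced with your own benchmark results, not measurements.

```python
# Sketch of the price/perf comparison above. Component prices are taken from the
# post; the relative throughput figures are PLACEHOLDERS, not benchmark results.
cpu_build = {"Xeon E5-2630 V2": 600, "s2011 mobo": 200}                  # integrated graphics, no discrete GPU
gpu_build = {"GTX 780 Ti": 600, "Core i7 4770": 300, "s1150 mobo": 100}

cpu_cost = sum(cpu_build.values())   # 800 Euro
gpu_cost = sum(gpu_build.values())   # 1000 Euro

# Hypothetical relative render throughput (CPU build = 1.0) - swap in real numbers.
cpu_throughput = 1.0
gpu_throughput = 2.0

for name, cost, perf in [("CPU build", cpu_cost, cpu_throughput),
                         ("GPU build", gpu_cost, gpu_throughput)]:
    print(f"{name}: {cost} EUR total, {cost / perf:.0f} EUR per unit of throughput")
```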
symek Posted April 2, 2014

A) Every Tesla needs a server with a CPU.
B) The price of two modern Xeons: $3000 + $1500 for a node.
C) Tesla: $5000 + $1500 for a node.

Speedy rendering, still pricey, limited, and it doesn't scale at all.
lisux Posted April 3, 2014

Looks so cool! I'm sure SideFX is already looking into GPU rendering.

I hope not. As many others have said before, I can't see GPU rendering as something that fits well in the CGI pipeline for animation/VFX.
Guest tar Posted April 3, 2014

If GPU rendering can be implemented like the current OpenCL acceleration for DOPs - just a checkbox - why not?
willow wafflebeard Posted April 3, 2014

I remember my boss bought 4 Titans for 2 nodes for Octane renders. Within 2 days our fuse blew and everybody was out for a break... For me at least, GPU renders still look like game graphics; maybe Octane inherits such optimizations from realtime games.
willow wafflebeard Posted April 3, 2014 (edited)

I hope not. As many others have said before, I can't see GPU rendering as something that fits well in the CGI pipeline for animation/VFX.

The only thing it has over CPU renderers is speed... everything else, CPU wins. When I'm sitting next to our lighting/rendering artist it feels like there's a volcano under the desk.
Netvudu Posted April 3, 2014

When I'm sitting next to our lighting/rendering artist it feels like there's a volcano under the desk.

why... is she really attractive?
JonathanGranskg Posted April 3, 2014

I hope not. As many others have said before, I can't see GPU rendering as something that fits well in the CGI pipeline for animation/VFX.

I didn't mean that they're going to suddenly include a GPU renderer with Houdini. I definitely think that SESI should not rule out any possibilities just because people think it doesn't work well enough right now. It would be stupid not to research it. And not necessarily for final rendering... The viewport, for example, can still be improved. (Not really the topic of this thread, though.)
krautsourced Posted April 4, 2014

I remember my boss bought 4 Titans for 2 nodes for Octane renders. Within 2 days our fuse blew and everybody was out for a break... For me at least, GPU renders still look like game graphics; maybe Octane inherits such optimizations from realtime games.

4 Titans plus the PC around them are probably something like 1400 W at 100% load. So let's say 2.8 kW for both machines; at 220 V that's 13-something amps. In a normal household, fuses are usually rated around 16 amps, so yeah, if there was other stuff connected, this may trip a fuse. Something to keep in mind with any hardware tbh, not just GPU rendering.
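For reference, here is that back-of-the-envelope arithmetic as a short sketch. The 1400 W per node and 16 A fuse rating are the figures from the post above; the 220 V mains value is the usual European rating, not a measurement.

```python
# Back-of-the-envelope power check for two GPU render nodes.
# Figures are from the post above; 220 V is a typical European mains voltage.
watts_per_node = 1400        # 4 Titans plus the rest of the PC, at full load
node_count = 2
mains_voltage = 220          # volts
fuse_rating = 16             # amps, typical household circuit breaker

total_watts = watts_per_node * node_count        # 2800 W, i.e. ~2.8 kW
total_amps = total_watts / mains_voltage         # ~12.7 A

print(f"Total draw: {total_watts} W -> {total_amps:.1f} A at {mains_voltage} V")
print(f"Headroom on a {fuse_rating} A fuse: {fuse_rating - total_amps:.1f} A")
```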