
GPU and Mantra?


rohandalvi


Do you think Mantra should be GPU accelerated?

Over the last couple of years the RAM capacity of graphics cards has increased quite a bit, and GPU renderers are catching up as far as feature set is concerned. The speed benefits in certain situations are definitely there. Renderers like Redshift are being used for proper production work, and I believe it also supports out-of-core rendering so that you're not limited to the RAM on your graphics card.
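For anyone unfamiliar with the term, here is a minimal sketch of the out-of-core idea: keep only a budget-sized working set of scene pages resident in VRAM and page the rest in on demand. The PageCache structure and the upload_to_gpu / evict_from_gpu calls are hypothetical placeholders for illustration, not Redshift's actual API.

// Toy sketch of out-of-core paging, assuming a fixed VRAM budget and
// hypothetical upload/evict calls; real renderers are far more
// sophisticated about what they page and when.
#include <cstddef>
#include <list>
#include <unordered_map>

using PageId = int;

struct PageCache {
    std::size_t budget_bytes;                         // VRAM we allow ourselves
    std::size_t used_bytes = 0;
    std::list<PageId> lru;                            // most recently used at the front
    std::unordered_map<PageId, std::size_t> resident; // page -> size in bytes

    // Hypothetical device calls, stand-ins for a real GPU API.
    void upload_to_gpu(PageId, std::size_t) {}
    void evict_from_gpu(PageId) {}

    // Make a page resident before a kernel touches it.
    void touch(PageId id, std::size_t size) {
        if (resident.count(id)) {                     // already resident: refresh LRU order
            lru.remove(id);
            lru.push_front(id);
            return;
        }
        while (used_bytes + size > budget_bytes && !lru.empty()) {
            PageId victim = lru.back();               // evict the least recently used page
            lru.pop_back();
            used_bytes -= resident[victim];
            resident.erase(victim);
            evict_from_gpu(victim);
        }
        upload_to_gpu(id, size);
        resident[id] = size;
        used_bytes += size;
        lru.push_front(id);
    }
};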

Beyond that, a lot of other render engines are adding a GPU mode. V-Ray already has a GPU mode in V-Ray RT, Maxwell previewed GPU acceleration recently, and we're even getting Octane for Houdini, which people seem to be quite happy with.

With all these advances happening, do you think that Mantra should also have an option to use the GPU? I don't see the harm in it. I know it can take a fair bit of time to develop, but I think it would definitely help attract more people to Houdini.


IIRC, from the multithreading notes: for Mantra and GPU rendering, the "difficulty is sharing cached structures".

 

http://www.multithreadingandvfx.org/course_notes/2015/houdini_threading.pdf 

 

EDIT: found it, under "What is VEX?":

 

 

What is VEX? [snip] Often it is compared to hardware shading languages, such as GLSL, OpenCL, CUDA, or Cg. This is a bit misleading. If they were truly similar, multithreading VEX would be trivial. And moving it onto the GPU would likewise be a manageable task. We will talk more about the difficulties VEX poses later. 

 

http://www.multithreadingandvfx.org/course_notes/MultithreadingHoudini.pdf
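To make that distinction concrete, here is a small illustrative C++ sketch (my own toy example, not SESI code): a "pure" per-element kernel of the kind GLSL/CUDA express is trivial to multithread, whereas a VEX-style shader that reaches into shared, lazily built caches (point clouds, textures, geometry) is not. The names shade_pure and run are made up; pcopen()/texture()/intersect() are real VEX functions mentioned only for contrast.

// Why the comparison to hardware shading languages is "a bit misleading":
// a side-effect-free per-point kernel parallelises trivially.
#include <tbb/parallel_for.h>
#include <vector>
#include <cmath>

struct Vec3 { float x, y, z; };

// GPU-shader style: the output depends only on the input element.
Vec3 shade_pure(const Vec3& p) {
    float f = std::sin(p.x) * std::cos(p.y);
    return {f, f, p.z * f};
}

void run(std::vector<Vec3>& points) {
    // Trivial to multithread: no shared mutable state, no ordering.
    tbb::parallel_for(std::size_t(0), points.size(), [&](std::size_t i) {
        points[i] = shade_pure(points[i]);
    });
    // A VEX shader, by contrast, may call pcopen()/texture()/intersect(),
    // which read lazily built shared caches. Those caches must be made
    // thread-safe (or duplicated) before the same trick works, and they
    // are exactly the "cached structures" that are hard to share with a GPU.
}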

Edited by tar

I wouldn't be against some GPU acceleration, but it is extremely important imo that you always have the ability to run it in a CPU-only mode.

Even if it's slower and needs more blades, I will always prefer to invest in Intel CPUs for my render farm.

 

Big cards like the 980 Ti / Titan are very expensive, draw a lot of power, generate a lot of heat, are manufactured by NVIDIA, and their performance is driver dependent.

I would not like to bind myself and my outputs to NVIDIA.

 

Of course, if the Mantra ROP offered an option to use the GPU, when one is available, to optimise certain computations, it wouldn't hurt I guess.

Also, using the GPU of the local workstation to speed up scene previews, lookdev and iteration is definitely a great idea.

 

This is what Arnold / RIS / Maxwell are investigating... and I guess SESI is doing the same!

 

But keeping the possibility of sticking with a low-power, low-heat Intel farm is very important imo.

 

Cheers 

 

E

Edited by sebkaine

I think GPU rendering has found its place for some types of projects, which is where Octane shines, but it is still mighty unusable for others. I'm also sure SESI, Solid Angle and many others are looking at this and watching how the tech evolves.

Any reasoning that uses V-Ray RT as an example is immediately void in my mind, because that's not a usable tool. It's more like a CPU-hogging joke that kills your machine with terrible performance in exchange for something resembling a hint of interactive shading/lighting.


I'm not so eager to have Mantra on the GPU. It's a difficult task that would consume a lot of SESI resources, and at the end of the day Houdini shines at managing huge amounts of data, which is exactly the main problem with GPUs.

I prefer to get improvements in the IPR.


The upcoming Xeon Phi should solve some problems in that area, but what's still open is the price :-/

 

Thanks for pointing out this Xeon Phi stuff, Francis!

I'd never heard about it before. :blush:

 

It's not clear in my mind what it is exactly:

- it is not a GPU, but it looks like it plugs into a GPU slot

- so I guess it's a sort of dedicated processor for executing instructions in parallel that communicates with the CPU?

http://www.intel.com/content/dam/www/public/us/en/documents/product-briefs/high-performance-xeon-phi-coprocessor-brief.pdf

 

From their marketing brochure, you can optimise code for parallel execution quite easily without the need to write a GPU app as you would with CUDA.

That's extremely interesting! Thanks again for the info! :)

 

Cheers

 

E

Edited by sebkaine

There is always a wall you hit with GPU rendering as far as scene complexity goes. This is certainly the case with Blender's Cycles. And when you exceed that limit you either have to drop back into CPU rendering or reduce the complexity of your scene.

 

The speed of GPU rendering is great for smaller things and product shots; that is why Octane and the like have such a following. Quicker turnaround in a fast-paced environment.

 

What I would like to see is GPU acceleration put into the OpenGL ROP. Let's fix that system up for high-speed rendering. At least let us render at the same quality as the viewport.


Well, we do get more and more stuff with OpenCL lately. And with GPU RAM increasing steadily, up to 32 GB next year, the gap is closing. If everyone invested more into OpenCL (NVIDIA, for example, instead of their CUDA) I reckon things would start moving in the right direction. GPU for fast workstation turnaround, then send everything off to a CPU farm.
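That split is easy to express with plain OpenCL, since the same kernels can target either device type. Below is a minimal illustrative C++ sketch of the "GPU if present, otherwise CPU" selection using the standard OpenCL host API, with error handling stripped down; it is not how Houdini actually picks its OpenCL device.

// Prefer a GPU for fast local turnaround, fall back to the CPU
// (e.g. on a render farm blade) when no GPU is available.
#include <CL/cl.h>
#include <cstdio>

cl_device_id pick_device() {
    cl_platform_id platform = nullptr;
    clGetPlatformIDs(1, &platform, nullptr);          // take the first platform

    cl_device_id device = nullptr;
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr) != CL_SUCCESS) {
        // Same kernels, different device: run on the CPU instead.
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, nullptr);
    }

    char name[256] = {0};
    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, nullptr);
    std::printf("Running OpenCL on: %s\n", name);
    return device;
}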


From their marketing brochure, you can optimise code for parallel execution quite easily without the need to write a GPU app as you would with CUDA.

That's extremely interesting! Thanks again for the info! :)

Cheers

E

Yep, and I believe that is the main reason why SESI hasn't invested too much time into getting VEX onto the GPU.

It's a huge task, and basically you have to battle against a platform that needs lots of custom code. The guys from Fabric have been trying for some years and they still don't have it working.

VEX has the same features as KL (Fabric's kernel language); it's exactly the same idea. But SESI talks to manufacturers, and sooner or later there will be CPUs with vector-processing capabilities similar to GPUs, without the need for custom code. It's a matter of time.

In my opinion, investing in GPU code is not a good choice in the long term.

Cheers

Edited by lisux

You are right, but you might be surprised by the kind of scenes we are rendering using Redshift... very surprised.

 

There is always a wall you hit with GPU rendering as far as scene complexity goes. This is certainly the case with Blender's Cycles. And when you exceed that limit you either have to drop back into CPU rendering or reduce the complexity of your scene.

 

The speed of GPU rendering is great for smaller things and product shots; that is why Octane and the like have such a following. Quicker turnaround in a fast-paced environment.

 

What I would like to see is GPU acceleration put into the OpenGL ROP. Let's fix that system up for high-speed rendering. At least let us render at the same quality as the viewport.

Edited by jordibares

GPUs are good at doing very specific things, most of them simple and predictable tasks. You have programmable APIs like CUDA or OpenCL, but those kernels are limited in terms of code size, count and data volume, above which they stop doing their job. The same goes for the super-fast GPU memory, which is limited in size and slow in off-card bandwidth. This makes GPU computing great for some sorts of tasks, but unusable for others, which happen to be most tasks in general. Once you create a general-purpose GPU renderer that matches offline renderers in functionality, it is no longer fast enough to justify (a) the investment required to make such a renderer, (b) possibly buying it, and (c) maintaining it and its hardware, as GPU hardware has a shorter life span than CPU (yes, I do render on 8-year-old hardware!).
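For a rough sense of scale (approximate 2015-era, back-of-the-envelope figures): a GTX 980 Ti reads its on-card GDDR5 at roughly 336 GB/s, while PCIe 3.0 x16 moves data on and off the card at about 16 GB/s per direction, around twenty times slower, which is why anything that does not fit in VRAM hurts so badly.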

 

There are just a handful of production-ready, widely used renderers on the market (and off the market): Arnold, Sony's Arnold, PRMan, 3Delight, V-Ray, Mantra, Manuka, CG Studio, Hyperion. None of them is GPU enabled. For a reason.

 

 

skk.

 

 

PS: I wouldn't mind a GPU renderer in Houdini, just not Mantra. (A "Mantrag" with a subset of the functionality would be fine.)

 

PS2: This may change at any time, as the GPU companies and the Redshift/Octane people are doing their job as well.


Yep, and I believe that is the main reason why SESI hasn't invested too much time into getting VEX onto the GPU.

It's a huge task, and basically you have to battle against a platform that needs lots of custom code. The guys from Fabric have been trying for some years and they still don't have it working.

VEX has the same features as KL (Fabric's kernel language); it's exactly the same idea. But SESI talks to manufacturers, and sooner or later there will be CPUs with vector-processing capabilities similar to GPUs, without the need for custom code. It's a matter of time.

In my opinion, investing in GPU code is not a good choice in the long term.

Cheers

 

 

This is a good call too. GPUs are running out of options to get faster: memory is being built in "3D" and placed closer to the GPU die, the PCI bus is being bypassed by NVLink, Pascal drops to lower precision to run computations faster... GPUs are going to hit a wall too.


Thanks for pointing out this Xeon Phi stuff, Francis!

I'd never heard about it before. :blush:

 

It's not clear in my mind what it is exactly:

- it is not a GPU, but it looks like it plugs into a GPU slot

- so I guess it's a sort of dedicated processor for executing instructions in parallel that communicates with the CPU?

http://www.intel.com/content/dam/www/public/us/en/documents/product-briefs/high-performance-xeon-phi-coprocessor-brief.pdf

 

From their marketing brochure, you can optimise code for parallel execution quite easily without the need to write a GPU app as you would with CUDA.

That's extremely interesting! Thanks again for the info! :)

 

Cheers

 

E

 

There is an article that mentions it is possible to run Windows Server on the new Xeon Phis because the cores are Atom based. So if your code already works with Intel's TBB, the porting effort should be very low.

 

Article: http://www.theplatform.net/2015/03/25/more-knights-landing-xeon-phi-secrets-unveiled/
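As a concrete illustration of that claim, here is a minimal TBB sketch (my own toy example, not from the article): code like this is ordinary x86 host code, which is why moving it to a self-booting Knights Landing Xeon Phi is claimed to be closer to a recompile than a CUDA-style rewrite. The function name and data are made up.

// Ordinary TBB reduction over a sample buffer; nothing here is
// GPU- or Phi-specific, which is the whole point.
#include <tbb/parallel_reduce.h>
#include <tbb/blocked_range.h>
#include <vector>

double sum_luminance(const std::vector<double>& samples) {
    return tbb::parallel_reduce(
        tbb::blocked_range<std::size_t>(0, samples.size()),
        0.0,
        // Per-chunk work: accumulate this range into the running value.
        [&](const tbb::blocked_range<std::size_t>& r, double acc) {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                acc += samples[i];
            return acc;
        },
        // Combine partial results from different threads.
        [](double a, double b) { return a + b; });
}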


@jordi: Yeah, but Redshift is not available to Houdini Indie users, only fully licensed Houdini users. That is why I advocate fixing up the OpenGL ROP.

 

Have you tried the GL ROP in H15? The more serious issues with it have been resolved (transparent objects, material issues, BG image support).

