Suggestions towards developing an in-house renderer


We are currently planning to develop our in-house tools, and we've already built some of them for lighting, compositing, etc... but now we're looking to develop an in-house renderer. Does anybody have an idea of how complicated it would be, and roughly how much workforce is required, to develop a REYES-based renderer? I've checked the code for Pixie and would like to develop our own based on it...

Any ideas or suggestions please...


It really depends who you have to do it, in my experience. If you have a Mark Elendt or a Larry Gritz or an Andrew Clinton, you might be able to maintain and develop a new renderer with 1 or 2 people, especially if you've had a kick-start with a good code base (like Pixie, perhaps). It's not a case of "Let's put programmer X on it, and we'll do fine" - the developer needs to be strong, insightful and passionate about rendering. If not, you'll end up lagging in features and stability, and you won't be able to innovate or enjoy your work at all. If you don't have people of that calibre, then I'd suggest you rather put effort into pipeline tools and into getting an existing renderer (like Mantra) to sing for you and your pipeline. Pipeline people are easier to find than senior mathematician types :)

Can I ask why you're interested in your own renderer? Is there something you're trying to do that current options aren't suited for? Say, specialised NPR rendering or some such?

I'm all for innovation and custom tools, but only if they fill a new niche, not if they're just going to shadow existing solutions. It's quite exciting, though :) (but that's my nerd side outshining my production-practical side).


Well, the reason is that I want to use CUDA so that the renders can be ramped up... my tech guy said that CUDA has a very good memory-management structure (which earlier GPGPU approaches didn't have, hence NVIDIA Gelato was not fast enough), so we can expect a significant speedup, maybe 20x to 40x... so I was thinking of going with that concept...

Edited by kensonuken

There has been talk about a swing to multiple-core machines (like a rumoured 90-core processor in development), so the performance gap might be narrowed without CUDA specialization. It might take you a year to develop such a beast, right? I don't know - perhaps Mark Alexander can give us his latest impressions and predictions on the hardware market?

Or just pay SESI to build some CUDA specializations into Mantra ;) Maybe they'd be open to it?


Well, actually, a pair of current 8800GT or 9800GT cards can go up to 1 TFLOPS, and in many applications performance has been boosted 40x, even 100x in molecular sims... so there could be a factor of at least 10x in rendering if things go accordingly... I'm working on that side... but I'd be amazed if I could do anything to get Mantra to use such GPU abilities... can you tell me how this is possible...?
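A quick sanity check on that "at least 10x" hope (all fractions and kernel speedups below are hypothetical illustrations, not measurements): even if the GPU kernels themselves hit 40x, Amdahl's law caps the overall frame-time speedup by whatever fraction of the render stays on the CPU.

```python
def overall_speedup(gpu_fraction, kernel_speedup):
    """Amdahl's law: overall speedup when only `gpu_fraction`
    of the render time benefits from GPU acceleration."""
    return 1.0 / ((1.0 - gpu_fraction) + gpu_fraction / kernel_speedup)

# Hypothetical split: 80% of frame time is GPU-friendly work,
# and those kernels run 40x faster on the card.
print(round(overall_speedup(0.80, 40.0), 1))  # ~4.5x overall
# Even at 95% GPU-friendly, the CPU-bound 5% dominates:
print(round(overall_speedup(0.95, 40.0), 1))  # ~13.6x overall
```

So the claimed 10x+ only materialises if nearly the whole renderer, not just the shading hot spots, moves to the GPU.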


Is there any possibility that I could use the SDK to code GPU abilities into Mantra?

Well, I'd hazard that if you can code the GPU that effectively, you're probably more qualified to answer the question than I am :)

No, but seriously - you don't have any access to the internals of Mantra yourself. You can only code VEX operations and procedurals via the HDK. I'd think that even Pixie relies on a software architecture that probably isn't terribly amenable to such specialization, except in certain phases of the render process. If you want to squeeze every flop out of a GPU, then it'd have to be a completely novel renderer, I'd think.

I'm interested in what you find out in this research of yours :) Please share your findings if you can!


Gelato was not fast... 3Delight and PRMan were much faster. I don't know why - even though it was developed by big shots like Larry Gritz, things were still slow... no significant improvement...

So we thought of going in a different direction for crunching the renders... let's see how it's going to work...


Just so that you are aware of some things before you head down this road:

- Currently OpenGL is single-threaded, meaning CUDA runs on a single host thread, even if it's N times faster

- To take advantage of the massive parallelism, you need to be able to work on a lot of data at once, and do a lot of operations on that data

I remember hearing that Pixar did a little experimentation with their shader engine (which is SIMD). They found that most shaders ran at 0.5-1.5x the CPU speed (i.e. some shaders ran slower), with the exception of a ray-marching volume shader, which was 40 times faster. That sort of underlines the second point: you have to be able to do a lot of work on the data to pay for the transportation costs.

With molecular sims, I'm guessing that they transmit the data, then do a lot of work on the GPU.
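That transfer-versus-compute trade-off can be sketched with a toy cost model. All the bandwidth and FLOP figures below are assumed ballpark numbers for illustration, not measurements:

```python
def gpu_speedup(flops_per_byte, n_bytes,
                pcie_bw=4e9,      # bus transfer rate, bytes/s (assumed)
                gpu_flops=5e11,   # usable GPU compute rate (assumed)
                cpu_flops=1e10):  # CPU compute rate (assumed)
    """Speedup of (upload + GPU compute) versus CPU-only compute,
    as a function of arithmetic intensity (ops per byte moved)."""
    total_flops = flops_per_byte * n_bytes
    t_cpu = total_flops / cpu_flops
    t_gpu = n_bytes / pcie_bw + total_flops / gpu_flops
    return t_cpu / t_gpu

# A cheap shader touching 100 MB: the transfer dominates,
# and the GPU ends up slower than the CPU.
print(round(gpu_speedup(2, 1e8), 2))    # ~0.79x
# A ray-march-style workload doing hundreds of ops per byte
# amortizes the upload and wins big.
print(round(gpu_speedup(500, 1e8), 1))  # ~40.0x
```

With these assumed rates, the break-even point is simply where compute time on the GPU plus the upload matches the CPU time, which is why only the high-intensity shaders in the Pixar anecdote came out ahead.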

Regardless, you might want to have a peek at http://www.redway3d.com/pages/index.php, though I expect there are other players in this space.

