digitallysane Posted December 9, 2008

http://www.khronos.org/news/press/releases..._specification/

Would be interesting to see if it catches on so GPGPU becomes easier for developers.

Dragos
malexander Posted December 11, 2008

Now all we need is the vendor implementations... the spec without a supporting lib to play with is such a tease. I'm glad it's been modified from its original API, which looked a lot more like OpenGL. The familiarity was nice, but it just didn't quite fit.
Marc Posted December 13, 2008

I read an article somewhere that said Nvidia was definitely going to support it on their next generation of cards, and that ATI probably would as well. I just wish I could remember where the article was, for proof.

M
malexander Posted December 13, 2008

Yep, Nvidia and AMD/ATI will support OpenCL. No official news from Intel that I can find, but they'd be a little crazy not to support it with Larrabee (especially since the next architectural enhancement to the i7 will include Larrabee's AVX (16-wide vector) instruction set). Lots of rumours out there, though.

As long as an OpenCL implementation has default CPU support, then I think its adoption will be more widespread. Having to write two versions of your algorithm, one for the GPU and one for the CPU (in case the user doesn't have a supported GPU), was a main concern of many of the developers I talked with at Nvision. Forever after, you have to keep them in sync, which is a developer's nightmare as the number of GPU algorithms in your code increases.

On the other hand, Microsoft isn't going to support it, presumably because they will support compute shaders in DX11. However, this would only be a problem if you had a pre-8x00 Nvidia card, a pre-X1x00 ATI/AMD card, one of those integrated graphics disasters from Intel, or an older driver that doesn't include OpenCL within the driver package.
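To make the CPU-fallback point concrete, here's a rough, untested sketch of what device selection might look like with the OpenCL 1.0 host API as published (no vendor implementation has shipped yet, so treat this purely as an illustration; error handling is minimal):

```c
/* Sketch: prefer a GPU device, fall back to the CPU device, so the same
 * kernel source runs everywhere instead of maintaining two code paths.
 * Based on the OpenCL 1.0 spec; untested, since no implementations exist yet. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_uint num_platforms = 0;
    if (clGetPlatformIDs(1, &platform, &num_platforms) != CL_SUCCESS ||
        num_platforms == 0) {
        fprintf(stderr, "No OpenCL platform found\n");
        return 1;
    }

    /* Ask for a GPU first; if the user has no supported GPU (or an old
     * driver), fall back to a CPU device from the same platform. */
    cl_device_id device;
    cl_int err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    if (err != CL_SUCCESS)
        err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);
    if (err != CL_SUCCESS) {
        fprintf(stderr, "No usable OpenCL device\n");
        return 1;
    }

    cl_int ctx_err;
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &ctx_err);
    if (ctx_err != CL_SUCCESS) {
        fprintf(stderr, "clCreateContext failed (%d)\n", ctx_err);
        return 1;
    }

    /* ...build the kernel and enqueue work here; the same kernel source
     * compiles for either device type... */

    clReleaseContext(ctx);
    return 0;
}
```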
Marc Posted December 14, 2008

> As long as an OpenCL implementation has default CPU support, then I think its adoption will be more widespread. Having to write two versions of your algorithm, one for the GPU and one for the CPU (in case the user doesn't have a supported GPU), was a main concern of many of the developers I talked with at Nvision. Forever after, you have to keep them in sync, which is a developer's nightmare as the number of GPU algorithms in your code increases.

yeah, that would suck.

> On the other hand, Microsoft isn't going to support it, presumably because they will support compute shaders in DX11.

Which is to be expected, really. Idiots.

M