
Move forward back to RISC?


symek


We did have support for the Sony Cell processor in Mantra a few releases ago. I can't say too much about it, though (from sheer ignorance, btw), as the Cell processor had some limitations (limited cache size?) from what I understand.

SGI ran MIPS chips, which were also RISC-based, and Mantra ran just fine back in the day on that hardware, so there you go. Where are they now? Dwarfed by Intel and AMD chips.

This chip architecture looks to have a much healthier on-chip cache size, though, compared to the Cell processor.

It all comes down to cost per platform and whether platforms, on average, ship with these RISC chips on board.

Just looking at the GPU side of things, even there hardware specs are all over the map, what with consumer-grade cards of varying capabilities and seemingly little driver continuity across those platforms, which poses challenges for everyone.


The Cell processor had some silly management decisions on Sony's side working against it, that's what! :D There were a number of projects using it for rendering, protein folding, etc., and then Sony decided to close down the architecture once and for all!

I was really excited about this Adapteva architecture until I remembered that I'm not a developer and won't be able to port anything high-end to it :). I mean, my humble Python scripts won't necessarily benefit from such a development.

Edited by rafaelfs

All true, Jeff, but this platform seems to avoid some of the, say, environmental problems involved with the "big predecessor" chips you mentioned.

It's an open platform, meant to be cheap ($100 for a single 64-bit chip plus the whole platform plus SDK?) and developer-friendly, being a normal C/C++ platform full of open standards, and it won't be killed off by corporate executives. The company's main target right now is scientific computing, which is close enough to rendering to start looking at this as an option (or at least run some tests).

The way I see it, this might be the right path for render-farm accelerators (instead of GPUs).


With only one gigabyte of memory, I think they're targeting a different audience. Being small and efficient, this would be an ideal platform for real-time computer vision for autonomous vehicles and drones (which can be done in one gigabyte of memory). If you're curious, there are other players in the game that have been at it for years, including Intel. I'm not sure what it could bring to the table in terms of rendering, but if it carries the same kinds of limitations GPU rendering has (hair, instances, displacement, motion blur, etc.), then I'm not interested.


Quote: "With only one gigabyte of memory, I think they're targeting a different audience. [...] I'm not sure what it could bring to the table in terms of rendering, but if it carries the same kinds of limitations GPU rendering has (hair, instances, displacement, motion blur, etc.), then I'm not interested."

I'm obviously not talking about the currently advertised boards as an option for rendering. This is just a platform, and in a year it could be reorganized for rendering needs, especially since it's going to be an open specification in both hardware and software. This is exactly why Intel can't change its CPUs drastically: it simply wouldn't make any money out of it. For a small startup company, it's quite the contrary. I see a couple of other reasons why this platform might not play well with rendering, though...


Quote: "I'm obviously not talking about the currently advertised boards as an option for rendering. This is just a platform, and in a year it could be reorganized for rendering needs [...]"

This is from their website. They probably have only one gigabyte of memory because they can't support a whole lot more than that anyway.

The Epiphany memory architecture is based on a flat memory map in which each compute node has a small amount of local memory as a unique addressable slice of the total 32-bit address space.
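To make that concrete, here's a small sketch of how such a flat, per-core-sliced address space can be addressed in plain C. The field widths (6-bit mesh row and column, a 1 MB slice per core) and the function name are my assumptions for illustration; check Adapteva's reference manual for the actual layout.

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of a flat, mesh-addressed memory map like Epiphany's:
 * every core owns a unique slice of one shared 32-bit address space.
 * Assumed layout (illustrative): top 6 bits = mesh row, next 6 bits
 * = mesh column, low 20 bits = offset within that core's 1 MB slice. */
static uint32_t global_addr(uint32_t row, uint32_t col, uint32_t offset)
{
    return (row << 26) | (col << 20) | (offset & 0xFFFFF);
}

int main(void)
{
    /* Global address of byte 0x100 in the local memory of core (32, 8):
     * any core (or the host) can read/write it with an ordinary pointer. */
    printf("0x%08x\n", (unsigned)global_addr(32, 8, 0x100));
    return 0;
}
```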

I think there's potential for other processor architectures besides AMD64 (x86-64), and for co-processors like these (accelerators, or whatever you want to call them), but they're not ready for production. At least not for rendering, anyway.


Pure "CISC" has been dead for a very long time. Modern x86 processors (circa 2000 or later) have a front-end which decodes the x86 CISC instructions into micro-ops which are executed by the RISC back-end. So yeah, talking about RISC vs CISC is a bit silly because x86 processors have had a RISC execution core for over a decade now. Even as an intern back in '96, Intel was already talking about this privately with their ISVs.


  • 1 month later...

Pure "CISC" has been dead for a very long time. Modern x86 processors (circa 2000 or later) have a front-end which decodes the x86 CISC instructions into micro-ops which are executed by the RISC back-end. So yeah, talking about RISC vs CISC is a bit silly because x86 processors have had a RISC execution core for over a decade now. Even as an intern back in '96, Intel was already talking about this privately with their ISVs.

I didn't intend to contrast CISC and RISC per se, but the fact is that RISC has been gone (or has taken a back seat, so to speak) from the high-performance computing world for a while now, so it's interesting to see new approaches being made, even apparently premature ones. (Although if Adapteva's chips have performance comparable to x86 at 1/100th of the watts, that says something about their potential.)

I still read bits about that subject, and it seems like you really can't treat modern x86 as RISC-with-a-wrapper. There is a significant difference in the RISC-ness of x86 processors versus a RISC-based chip like the one mentioned in this topic, or ARM. Because Intel/AMD have to keep compatibility with CISC instructions, they build over-complicated chips, and one of their current biggest bottlenecks is the very instruction decoder that makes them RISC-like. Staying with a hybrid solution is what makes them big, energy-hungry, expensive, and practically slower (than they could be without the CISC overhead) - and these are exactly the things that differentiate x86 from modern RISC chips.

So, as far as I can tell, talking about RISC vs CISC might make some sense these days ;).

Edited by SYmek

The project really needs to move to the new ARM64 architecture before it could conceivably be used for tasks that require huge amounts of memory, like rendering and simulation. A 32-bit virtual address space really limits application memory size, especially if the application is constantly allocating and deallocating large chunks of memory. You'd be lucky to get much more than 1 GB in that case: memory fragmentation prevents the application from getting much more than that, even if they did provide more than 1 GB of physical memory.
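To see why fragmentation bites so hard in a 32-bit process, here's a hypothetical little probe (the function name is mine) that binary-searches for the largest single malloc that still succeeds; in a long-running, fragmented 32-bit address space this typically lands well below the nominal 4 GB. Results also depend on the allocator and the OS overcommit policy, so treat it as illustrative.

```c
#include <stdio.h>
#include <stdlib.h>

/* Find the largest single allocation the process can currently satisfy.
 * In a fragmented 32-bit address space this is bounded by the biggest
 * remaining contiguous hole, not by total free memory. */
static size_t largest_alloc(void)
{
    size_t lo = 0, hi = (size_t)1 << 31;  /* search up to 2 GB */
    while (lo + 1 < hi) {
        size_t mid = lo + (hi - lo) / 2;
        void *p = malloc(mid);
        if (p) { free(p); lo = mid; }     /* mid fits: search higher */
        else   { hi = mid; }              /* mid fails: search lower */
    }
    return lo;
}

int main(void)
{
    printf("largest contiguous malloc: %zu MB\n", largest_alloc() >> 20);
    return 0;
}
```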

It'd also need some pretty fast vector units for our purposes, which don't seem to be specified and so are likely missing.

Quote: "Because Intel/AMD have to keep compatibility with CISC instructions, they build over-complicated chips, and one of their current biggest bottlenecks is the very instruction decoder that makes them RISC-like."

According to reports from Intel, the decoder logic on their next-gen CPU occupies 2% of the core area. Since x86's backwards compatibility has led to Intel's dominance in the consumer space for decades, I'd say that's 2% well spent. They've also optimized the decoder to the point where it's nearly a non-issue (regular and micro-op instruction caches, four decoders, the ability to shut the decoders off).

However, when it comes to the Xeon Phi, I don't think maintaining the x86 instruction set makes as much sense from a hardware standpoint. By their own admission, you need to compile the app with the Phi's architecture in mind to achieve maximum performance, and even then the average improvement is roughly 2x over a top-of-the-line 12-thread Xeon. So while the Xeon Phi can technically run regular x86 code, you'd really be missing the point to do so. And as soon as you need a different compiler (or compiler settings) to produce code for it, who really cares what the instruction set is on the other side?

So the only reason Intel is sticking with x86 CISC for the Phi is that they have massive amounts of experience with it. I don't think their argument that users of the Phi will benefit from its x86 compatibility holds much water. They'd be better off saying that you can use the Phi with minimal (if any) changes to your source code.
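For what it's worth, that "minimal changes" workflow on the first-generation Phi looked roughly like the sketch below, using Intel's offload pragmas with icc. This is from memory and meant as an illustration, not a verified build recipe; other compilers will simply ignore the pragma and run the loop on the host.

```c
#include <stdio.h>

/* Sketch of Intel's offload model for the first-generation Xeon Phi
 * (Language Extensions for Offload, compiled with icc). The pragma is
 * the only Phi-specific change; the loop body is plain C. */
int main(void)
{
    float a[4096], b[4096];
    for (int i = 0; i < 4096; i++) { a[i] = (float)i; b[i] = 0.0f; }

    /* Run the loop on the MIC coprocessor if one is present;
     * in/out clauses describe the data movement over PCIe. */
    #pragma offload target(mic) in(a) out(b)
    for (int i = 0; i < 4096; i++)
        b[i] = a[i] * 2.0f;

    printf("%f\n", b[100]);
    return 0;
}
```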

So while the x86 decoder makes sense when you have 2-8 cores, it starts to become a burden in the 20+ core range (additional power and area). For a project like this, you'd definitely want a RISC processor. Nvidia's shader cores could be considered RISC, as could AMD's new architecture for the 7000 series (the older 2000-6000 series was VLIW, which was also bad for compute).

It'll be interesting to see what their next step is. It seems like this is geared more towards low-power server and DSP applications currently.


  • 2 years later...

It looks like Intel has listened to the Phi customers: http://www.theplatform.net/2015/03/25/more-knights-landing-xeon-phi-secrets-unveiled/

You can even run Windows on it, so the porting requirements are very low.

The price is the big question, though!

I want that Knights Landing chip so bad right now. I am trying hard not to drool all over my desk.

