AMD: Biggest Microprocessor Evolution Since x86-64

Phil Hester, chief technology officer at Advanced Micro Devices, the world's second-largest maker of central processing units, said at a conference that integrating graphics processing units (GPUs) into CPUs will eventually allow personal computers to achieve supercomputer-level performance.

"Get ready for round two of the "attack of the killer micros. By combining graphics processing unit (GPU) and CPU functions in heterogeneous cores, microprocessors will bring supercomputer performance to the desktop," said Phil Hester, in a keynote speech at the International Conference on Computer-Aided Design (ICCAD) in San Jose, California, reports EETimes web-site.

View: The full story
News source: Xbit Labs


49 Comments


Sounds like a really lame idea: once this becomes mainstream, we would have to upgrade not just the GPU but the CPU at the same time...

Torrenza is a good way to start in the direction this news clipping implies; there is already hardware out there jumping onto the HyperTransport bus via a second socket, AMD being the good chaps they are and allowing the HT bus to be used by other vendors... I just don't know if it's going to happen that way. Motherboards will have to gain VGA/DVI/HDMI connections, and all the manufacturers will have to play ball.

Take a look at MXM in laptops: good idea, bad implementation. (You can't get MXM cards anywhere, the ones you can get probably won't work or won't fit, and what good is a standard if it's not standard!)

I'm not sure this is the way forward, because with this technology, if you decide to get a machine based on it, it will be AMD/ATI whether you like it or not; you can't choose to have it with, say, an Nvidia GPU. And if Intel jumps onto the bandwagon, that will most likely be Intel/Nvidia whether you like it or not. I don't know, maybe I've got the wrong end of the stick, but for now I'm happy with how things are.

Load of... you know what. AMD doesn't have a valid roadmap for that any time soon; they can make claims about things that will come "eventually" all they want.

Well, like someone said further up in the posts, I would prefer a high-performance "computing" chip rather than a GPU, because games and graphics are not the only things; think about scientific calculations and the like. They should make it more of a Cell-like thing.

And as for the integrated part, I would like it if they had separate RAM slots or something similar for the GPU as well, because I hate it when the GPU shares the main memory, no matter how much of it there is. Eventually applications will catch up and we will need that small bit of extra memory to go a little further without buying a new rig. And yes, two GPUs in a system is the more realistic and logical idea; it should be up to the consumer what they want in their machine, not the company. But then again, if both Intel and AMD go the same way, we might not have a choice.

Quote - david13lt said @ #16
I think they are going to release an on-chip CPU-GPU... Most people have 2+GB of RAM nowadays... :)

What? No?

Dude, I'm still running on 512MB, and even if I buy a new machine now, it's going to have 2GB, not more than that. Most people I know are buying 1GB, but I always encourage them to buy more because of Vista; as soon as we start having high-end applications running on Vista, 1GB will be crippling.

Most people have 2+GB of RAM? What kind of low-grade crap are you smoking? I DO NOT personally know a single person who has that much memory in their system. Obviously you are one of those "have to keep up with the Joneses" types. I know there are many, many people who do, but to say what you did is absurd! I have 7 machines and not a single one of them has more than 512MB, and I bet I can do anything you can do. Maybe not as fast, or as many things at one time, but that's not the most important thing in the world to me. Always thinking you have to be in a rush only leads to ulcers, high blood pressure, etc. As far as recommending that people buy more memory in preparation for Vista, I don't and won't. XP is quite sufficient. I DON'T play "keep up with the Joneses."

Does this whole thing mean we are going to have two GPUs in our systems? (One in the CPU and another on the video card?)

Or did I miss an important point? Surely they DON'T mean that each time we want a video upgrade we need a new CPU... Let's not forget that the video memory is still on the video card itself, and we need somewhere to plug in our monitor... unless AMD boards will start using a video riser? Or worse, an integrated graphics controller!!! :o

No, seriously, the two-GPUs-in-a-system idea sounds more realistic, right? Please fill me in; this is all too much for me. :cheeky:

"will eventually allow personal computers to achieve supercomputer-level performance"

This is going to happen no matter what AMD or any other company does. The computers we have today would have been considered "supercomputers" 10 or 15 years ago.

It's fun to speculate that in just 10-15 years we could have computer games rendering scenes like the ones we see in advanced virtual cinematography today (e.g. The Matrix Reloaded's Burly Brawl).

I'm willing to bet it will be less than 10 years. Technology is developing at an exponential rate, especially in the graphics industry.

Quote - Mathiasdm said @ #13.4
Looks pretty exponential to me (look at the numbers on the left side).
'Doubling every 18 or 24 months' is not linear.

Right. That graph only appears linear because the vertical axis isn't linear (i.e. there's the same amount of space between 10K and 100K as there is between 1B and 10B). One of the most important things to look at in a graph is how it is scaled; there are all kinds of tricks to make data look a certain way, and they are especially favorites of activists with an agenda. Always pay close attention to the scales so that you aren't deceived.
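
To make the "not linear" point concrete, here is a tiny illustrative program (plain C-style code; the starting value of 1 and the 18-month doubling period are arbitrary assumptions, not measured data). The raw value explodes, while its log10 climbs by the same ~0.30 at every step, which is exactly why a log-scaled axis turns exponential growth into a straight line.

    /* Illustrative only: a quantity that doubles every 18 months.
       The raw value grows explosively, but log10(value) rises by a
       constant ~0.30 per step -- which is why a log-scaled axis makes
       exponential growth look like a straight line. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double value = 1.0;  /* arbitrary starting point */
        for (int months = 0; months <= 180; months += 18) {
            printf("month %3d: %8.0fx  (log10 = %.2f)\n",
                   months, value, log10(value));
            value *= 2.0;
        }
        return 0;
    }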

Quote - PureLegend said @ #12
Hmm...will we see an Intel/nVidia hybrid as well?

No, because nVidia might still be thinking about making CPUs on their own.

Quote - Sartoris said @ #12.1

No, because nVidia might still be thinking about making CPUs on their own.

Which will own anything out there :nuts:

...and the biggest refrigeration system ever made. This will be fun to see (even more so when you take that cooling system off the (multi-core) processor), haha.

Sounds pretty interesting. Just imagine if they bring a physics processor into the picture too; that would be pretty amazing.

I guess AMD's future depends on this new technology. If they succeed, they'll own Intel; if they don't... they're gone.

Quote - KzR said @ #8
I guess AMD's future depends on this new technology. If they succeed, they'll own Intel; if they don't... they're gone.

I remember reading some time ago that Intel is working on integrating the GPU into the CPU. Since they already do integrated graphics I think they have a head start, but AMD is pushing to beat them to launch.

They tried, but like all recent Intel ideas, they stopped developing it when they hit a hiccup. Maybe Intel needs to be thumped by AMD before it does anything new nowadays?

The only way I would buy into a setup like this is if the resulting chip is like the Cell, with the fast pipelines on the "GPU" dedicated entirely to computation, not graphics.

If I want to upgrade my graphics card, I can; I can't if it's on the same chip as my CPU (without buying a new CPU, that is).

Quote - The_Decryptor said @ #7
If I want to upgrade my graphics card, I can; I can't if it's on the same chip as my CPU (without buying a new CPU, that is).

I also like to upgrade my GFX cards every so often. I would like to see the GFX card become a "chip" that plugs into a motherboard, like the CPU does.

Quote - hagjohn said @ #7.1

I also like to upgrade my GFX cards every so often. I would like to see the GFX card become a "chip" that plugs into a motherboard, like the CPU does.

You're nearly there.

With AMD Torrenza, a non-x86 chip (i.e., not an Opteron/Athlon CPU) can be placed in an Opteron socket with a direct connection to RAM and the CPU.

My bet is you will see ATI and NVIDIA graphics chips using Torrenza before an integrated GPU/CPU package.


My bet is you will see something like ATI's Xenos, the GPU in Microsoft's Xbox 360 video game console, which has 10MB of video memory integrated onto the die. Some of its features include "intelligent memory": a section of on-die memory that has logic built in (192 parallel pixel processors).

It seems like the CPU and GPU cores are combined and utilized for different tasks; because GPUs have higher peak performance, they can be optimized for those tasks. Seems like an interesting idea.
I hope it turns out for the better; I like extreme performance.

That's not really going to make it the same as hundreds of CPUs in an array, though, is it? The performance will be faster, but it's hardly a supercomputer. The CPU will still be a single CPU after all (ignoring core count).

From what I know, the GPU is extremely good at certain operations (such as vertex transformation?). By including this ability in the CPU, and using it when needed, I'm sure you would see huge performance gains, but I doubt it will be anywhere near that of a supercomputer.
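
As a rough sketch of why something like vertex transformation maps so well onto GPU-style hardware, here is a minimal CUDA-flavoured kernel (the function name, data layout, and launch configuration are made up for illustration). Every vertex gets its own thread, and each thread does nothing but independent floating-point multiply-adds, which is exactly the kind of work a GPU block on the die could soak up.

    /* Sketch: one thread per vertex applies the same 4x4 transform matrix.
       Every vertex is independent, so thousands of threads can run at once. */
    __global__ void transformVertices(const float4 *in, float4 *out,
                                      const float *m /* 4x4, row-major */, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        float4 v = in[i];
        out[i].x = m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3]*v.w;
        out[i].y = m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7]*v.w;
        out[i].z = m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11]*v.w;
        out[i].w = m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15]*v.w;
    }

    /* Launched with something like:
       transformVertices<<<(n + 255) / 256, 256>>>(d_in, d_out, d_matrix, n); */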

Quote - K3vlar said @ #3.4
From what I know, the GPU is extremely good at certain operations (such as vertex transformation?). By including this ability in the CPU, and using it when needed, I'm sure you would see huge performance gains, but I doubt it will be anywhere near that of a supercomputer.

A GPU is just a CPU with special instructions/operations for graphics.

Let's not forget that Folding@Home not only works on ATI GPUs (not all of them, but still), it is also *much* faster than using the CPU.

Quote - Doli said @ #3.5

A GPU is just a CPU with special instructions/operations for graphics.

Actually it isn't. GPUs are massively parallel processors, and multitasking and threading on a GPU are considerably more advanced than on a CPU. Of course, applying GPU technologies to CPUs could allow for massive gains in performance, especially in multi-threaded applications.

GPUs are becoming more general-purpose, like CPUs, but from a design point of view, CPUs and GPUs are nothing alike.

Remember that supercomputers are generally measured in floating-point operations per second (FLOPS), and one thing a GPU is very good at is floating-point work.
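
To put that in perspective, a back-of-the-envelope peak-throughput calculation looks like the snippet below. The ALU count, clock speed, and two-FLOPs-per-clock multiply-add figure are illustrative assumptions, not official specs for any particular chip, but even modest numbers land in the hundreds of GFLOPS, versus the tens of GFLOPS a desktop CPU of this era manages.

    /* Rough peak-FLOPS arithmetic, illustrative numbers only. */
    #include <stdio.h>

    int main(void)
    {
        double alus            = 128.0;  /* assumed number of shader ALUs       */
        double clock_ghz       = 1.35;   /* assumed shader clock, in GHz        */
        double flops_per_clock = 2.0;    /* a multiply-add counted as two FLOPs */

        /* 128 * 1.35 GHz * 2 FLOPs/clock = ~346 GFLOPS peak. */
        printf("Peak: %.0f GFLOPS\n", alus * clock_ghz * flops_per_clock);
        return 0;
    }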

The thing I like about this is that GPUs are starting to become more like co-processors than just dedicated graphics chips. Even NVIDIA is jumping on this bandwagon with its special non-graphical API for the G80, so hopefully in a few years the applications will be there to really take advantage of this kind of technology.
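
For anyone curious, that non-graphical API for the G80 is NVIDIA's CUDA, and the code it invites looks roughly like this minimal sketch (the array size, launch configuration, and the y = a*x + y example itself are arbitrary illustrative choices): the CPU sets up the data, and thousands of GPU threads each handle one element.

    /* Minimal GPGPU sketch: y = a*x + y over a large array, one thread per element. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        const int n = 1 << 20;               /* one million elements */
        size_t bytes = n * sizeof(float);

        float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

        float *dx, *dy;
        cudaMalloc((void **)&dx, bytes);
        cudaMalloc((void **)&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        /* Thousands of lightweight threads do the floating-point work. */
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

        printf("y[0] = %f (expect 4.0)\n", hy[0]);
        cudaFree(dx); cudaFree(dy); free(hx); free(hy);
        return 0;
    }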