With An 80-Core Chip On The Way, Software Needs To Change

With executives from Intel Corp. saying they're only five to eight years away from producing an 80-core chip, now may be the time for software developers to start working on applications that can take advantage of it. Intel is about a month away from unveiling the specs for a research prototype of an 80-core chip that they've developed. That's right: Not an 8-core; this is an 80-core chip. The microprocessor manufacturer has jumped way ahead of the expected progression from dual-core to quad-core to 8-core, etc., to delve into different ways to make something as complicated as an 80-core chip actually work.

Researchers have built the prototype to study how best to make that many cores communicate with each other. They're also studying new designs for cores and new architectural techniques, according to Manny Vara, a technology strategist with Intel's R&D labs. The chip is just for research purposes and lacks some necessary functionality at this point. To get that many cores on a single chip, while keeping the chip at nearly the same size, Intel's researchers made the cores themselves less complex. "If you look at it, by the time you put dozens of cores on a chip, they won't be the same kind that you can put three or four on a chip today," says Vara. "The new ones will be much simpler. You break the core's tasks into pieces and each task can be assigned to a core. Even if the cores are simpler and slower, you have a lot more of them, so you have more performance."
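Vara's description of breaking a task into pieces and assigning each piece to a simpler core is, in essence, data-parallel decomposition. A minimal sketch of the idea in Python, using `multiprocessing` to stand in for the many simple cores (the worker count and the toy workload are illustrative only, not Intel's design):

```python
# Sketch of the "many simple cores" idea: split one big task into
# independent pieces and hand each piece to a worker process.
from multiprocessing import Pool

def process_chunk(chunk):
    # Each "simple core" handles one small, independent piece of the work.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=8):
    # Break the task into roughly one chunk per worker.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        return sum(pool.map(process_chunk, chunks))

if __name__ == "__main__":
    data = list(range(10000))
    # Matches the serial result, but the chunks run concurrently.
    print(parallel_sum_of_squares(data))
```

Even if each worker were slower than a single fast core, enough of them working on independent chunks can deliver more aggregate throughput — which is exactly the trade Vara describes.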

News source: CRN




Isn't this getting stupid? How much processing power do we need? It's just like the MHz race of the early 21st century: we'll end up hitting a barrier.

lylesback2 said,


An 80-core chip! That's awesome news :D

As stated, 80 cores is nothing new; Sun and others have been capable of making chips with dozens of cores for quite some time.
UltraSPARC T1 chips are 8-core by default, but each core can run 4 threads at a time, for 32 threads at once.


Honestly, all of this, as stated above, is old news. It's only "news" on Neowin today because Intel just said it.

Well, they're right that software has to move to multithreaded designs. There are people who have been shouting this from the rooftops for years (myself included). For example, "The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software", http://www.gotw.ca/publications/concurrency-ddj.htm

One problem we're now seeing empirically is that in software that is multithreaded, we're already running into diminishing returns on 4- and 8-core systems. I've been running Paint.NET rendering benchmarks on various systems and a high priority for an upcoming release is going to be getting it to scale better on 4+ core systems. With my code I'm seeing about 3x scaling for a quad-core QX6700, and 5.2x for an 8-core system (dual quad-core Xeon) (this is compared to running single-threaded). Part of the problem is that imaging code quickly reaches a point where memory bandwidth is the bottleneck, even with dual-channel DDR2-800. I wouldn't be surprised to see quad-channel memory configurations coming to the desktop (this is just hopeful speculation on my part though). Synchronization and division of work incur more overhead as you add more cores as well.

-Rick Brewster (Paint.NET lead dev)
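The diminishing returns Rick describes are what Amdahl's law predicts once any part of the work stays serial (or serializes on memory bandwidth). A minimal sketch in Python; the serial fraction below is fitted to his ~3x-on-4-cores figure purely for illustration, not measured from Paint.NET:

```python
# Amdahl's law: speedup on n cores when a fraction s of the work
# cannot be parallelized.
def amdahl_speedup(n_cores, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

# A serial fraction of about 1/9 reproduces roughly 3x on 4 cores.
s = 1.0 / 9.0
print(round(amdahl_speedup(4, s), 2))   # → 3.0
print(round(amdahl_speedup(8, s), 2))   # → 4.5
print(round(amdahl_speedup(80, s), 2))  # → 8.18, nowhere near 80x
```

Note that with this fitted fraction the model predicts 4.5x on 8 cores while Rick measured 5.2x, so the real bottleneck isn't a fixed serial fraction — but the broader point stands: even a small non-parallel share caps what 80 cores can buy you.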

Um, that's old news too. Software has needed to evolve to be multithreaded for years now; it's just that software developers are slow to adapt, and they won't be any faster than they are now, IMHO. By the time 80-core Intel chips hit the market, the only OSes that will be able to use them even close to their potential will be Unix variants with open-source software.

Don't take this the wrong way; I'm not a Linux/Unix fan, I use Windows. But the fact remains that more Linux/Unix apps are multithreaded than Windows apps. It's a different philosophy.
In the world of Windows, good enough is as good as it normally gets, and adapting to new tech is slow. Look at how barely we use dual-CPU/dual-core (SMP) systems currently, even though they have been around for a VERY long time (I had a dual P133 system back in the Socket 5 days). Software developers who work with MS don't tend to be nimble or fast to adopt "better" ways of doing things.

It's just like game developers at large trying to say it's very hard to multithread games, when in reality it's only hard because they have very little experience programming multithreaded code.
Again, I have some OLD games that are SMP-enabled; Tribes, Starsiege, and Tribes 2 are good examples. They can take advantage of multiple cores/chips, and they're from the '90s.

Linux and Unix, on the other hand, are developed for the most part by people who are familiar with multithreaded coding and SMP, and they try to make their apps take advantage of the resources the system has to offer. For example, my old dual Pentium II 400 Xeon system running Linux is as fast as my newer and far more robust 1.2 GHz Duron, because the OS and apps are made to use the resources they are given, as opposed to Windows, where you get whatever the company selling you the software feels like giving you.

Different root concepts of design and implementation.

So, as I said, this is all old news, at least to anybody who's likely to read sites like Neowin.

This is not news; it's OLD info, and these chips are RISC, not CISC (Wikipedia it).

Sun has had processors with dozens of cores for years now; take a look at the SPARC Niagara, for example. It's a very efficient RISC design.

VIA's C3-C7 chips are RISC and could easily be multi-cored, and they use FAR LESS POWER than even C2D or 65nm K8 chips.

As I said, this isn't anything new; it's just that Intel is pimping it as a breakthrough.

They pretty much said that in the article when they talked about how each of the cores is simpler in design. As for the "old news" comment, I hope it feels good to think you know so much more about it than everyone else. However, the article is about how software will need to evolve to take advantage of these chips; it isn't saying multi-core chips are something new. No offense, but it's pretty annoying when people post "old news" in articles as if everyone in the world should know about it just because they do.

Or maybe they're raising the issue that nowadays most software is only just beginning to get dual-core and 64-bit support, while tens of thousands of people dish out lots of money for the latest and fastest parts. And for what, if no applications can use their potential power?

Why don't you guys understand that 80 cores is better? I'm tired of people saying "stop adding cores and start making it better." Well, guess what you can do with 80 cores? Divide up the instructions so you can run 80 symmetric instructions at once and get the same thing done up to 80 times faster. It takes a good knowledge of opcodes and assembly and some very powerful assemblers/compilers, but it is extremely possible if people would just teach themselves to code this way, instead of thinking in the classic single-core, single-instruction serial processing method. Do processing in parallel instead of serial (yes, single cores also do parallel processing, just in a different sense).
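The "80 symmetric instructions at once" idea is essentially a parallel map: the same operation applied to many independent slices of data at the same time. A minimal sketch in Python (the slice count, worker count, and toy operation here are mine, purely for illustration):

```python
# The same operation applied to independent slices of the data,
# with the slices processed concurrently by a pool of workers.
from concurrent.futures import ProcessPoolExecutor

def apply_op(slice_):
    # The "symmetric instruction": identical work on independent data.
    return [x + 1 for x in slice_]

def parallel_map(data, n_slices=80, max_workers=8):
    size = max(1, len(data) // n_slices)
    slices = [data[i:i + size] for i in range(0, len(data), size)]
    out = []
    # executor.map preserves the order of the input slices.
    with ProcessPoolExecutor(max_workers=max_workers) as ex:
        for part in ex.map(apply_op, slices):
            out.extend(part)
    return out
```

In practice this only pays off when the slices really are independent; as soon as they share state, synchronization eats into that theoretical 80x.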

There is already a 9-core project in the works by a major company for the embedded market. Of course, the processing power of each core is significantly lower than desktop speeds, but heat is a major factor, and dissipating that much heat is a significant leap.

I don't know about 80 cores, but I'm sure it's possible... 10 cores added each year for 8 years... sure.

Windows XP and all of Microsoft Office 2007 working full bore couldn't lag an 80 core processor. It's definitely time to add some serious bloat :cheeky:

Intel is going the easy way: adding more instead of making them better.
We need dual-core or quad-core processors that are better, not just more of the same old s...

Well, this is what AMD announced not long ago with regard to their tech. And to be honest, IBM has been doing this with their Cell processor.

I wholeheartedly agree with Intel on this stance. It's time for software manufacturers to start learning to code more efficient apps, games, and so on and so forth, instead of relying on grunt.

I would love for you to explain your definition of better.

I thought it was overkill at first but on second thought this sounds great to me. I've always wondered why they didn't concentrate on making the chips do more work per clock cycle instead of trying to make them run at "ludicrous speed".