NVIDIA VP claims Moore's law is now dead

NVIDIA is not shy about touting the benefits of its CUDA platform, nor about taking shots at Intel when given the chance. In a feature article on Forbes, NVIDIA chief scientist and vice president Bill Dally said that while CPU speeds have increased, overall processing power has not followed Moore's Law.

According to Forbes, via SlashGear, "as a result, the CPU scaling predicted by Moore's Law is now dead." Dally then goes on to promote the CUDA platform that NVIDIA currently sells as the future.

“Going forward, the critical need is to build energy-efficient parallel computers, sometimes called throughput computers, in which many processing cores, each optimized for efficiency, not serial speed, work together on the solution of a problem. A fundamental advantage of parallel computers is that they efficiently turn more transistors into more performance. Doubling the number of processors causes many programs to go twice as fast. In contrast, doubling the number of transistors in a serial CPU results in a very modest increase in performance–at a tremendous expense in energy.”
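Dally's throughput argument is easiest to see in a data-parallel workload, where each element of a problem gets its own lightweight thread and adding cores simply lets more of those threads run at once. A minimal CUDA sketch of that pattern (illustrative only; the kernel and numbers below are not from the article):

    // Illustrative sketch of the "throughput computing" style Dally describes:
    // each of n elements is handled by its own lightweight thread, so a chip
    // with more cores can retire the same work in proportionally less time.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void scale_add(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
        if (i < n) c[i] = 2.0f * a[i] + b[i];            // independent per-element work
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        float *a, *b, *c;
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        // 256 threads per block; enough blocks to cover all n elements.
        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        scale_add<<<blocks, threads>>>(a, b, c, n);
        cudaDeviceSynchronize();

        std::printf("c[0] = %f (expected 4.0)\n", c[0]);
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

Because every element is independent, the same kernel spreads across however many cores the GPU provides, which is the "more transistors into more performance" point Dally is making.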

The basis of the argument is that current CPUs draw too much power, and that a wall is approaching beyond which the exorbitant power consumption of current processors will no longer translate into performance gains. While this may be true, Intel has built its massive empire around Moore's Law, and it seems unlikely that it will suddenly abandon that business model in favor of a competing technology.


47 Comments


nVidia needs to STFU already. More smoke-and-mirrors bull****. Fermi needs a damn nuclear power plant to run in SLI, with very little performance gain over ATI's alternatives, not to mention the fact that you can use it to cook food.

How is Moore's law broken just because CPUs haven't been doubling in speed (or performance) every 18 or so months? Moore's law isn't about speed, it's about complexity. From Dictionary.com:
Moore's law: "The observation that steady technological improvements in miniaturization leads to a doubling of the density of transistors on new integrated circuits every 18 months."
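For a rough sense of what that observation implies (illustrative arithmetic, not from the comment), doubling every 18 months compounds to roughly a hundredfold increase in transistor density over a decade:

    % Doubling every 18 months, over a 120-month (10-year) span:
    \[ 2^{120/18} = 2^{6.67} \approx 102 \]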

What I don't understand is: if it's a science "law", how the heck is it going to be broken if it has already been proven?

Jose_49 said,
What I don't understand is: if it's a science "law", how the heck is it going to be broken if it has already been proven?
It's not a scientific law. It's more a theory of economics and scientific advancement.

Kirkburn said,
It's not a scientific law. It's more a theory of economics and scientific advancement.

Indeed. It's more of a stupid media-perpetuated catchphrase.


Neobond said,
Who remembers that graphics card NVidia made that sounded like a vacuum cleaner

Who can forget the magnificent product that was the GeForce FX 5800?

Hah, CUDA.

When will nVidia realise that CUDA has been dead ever since OpenCL and DirectCompute arrived on the scene? Why would anyone want to use something that is hardware-platform specific, when both of the competitors allow for multiple hardware platforms and, in the case of OpenCL, multiple software platforms too (if applicable)?

Athernar said,
Hah, CUDA.

When will nVidia realise that CUDA has been dead ever since OpenCL and DirectCompute arrived on the scene? Why would anyone want to use something that is hardware-platform specific, when both of the competitors allow for multiple hardware platforms and, in the case of OpenCL, multiple software platforms too (if applicable)?

From what I've been reading, CUDA has more programs that actually use it than OpenCL or DirectCompute.

Though the pickings are sparse all the way around.
At least nVidia hardware supports them all.

OpenCL and DirectCompute are just additional layers on top of CUDA technology. They don't replace it, they don't kill it; they actually make it way more valuable. nVidia's support for OpenCL is the best out there exactly because of this. BTW, Adobe CS5 is built on CUDA too, and believe it or not, they don't support ATI's solution at all. Why? Exactly because CUDA is light years ahead of anything else available out there, light years... not just the underlying technology but the development tools as well (which is actually a very important point).

aludanyi said,
OpenCL and DirectCompute are just additional layers on top of CUDA technology. They don't replace it, they don't kill it; they actually make it way more valuable. nVidia's support for OpenCL is the best out there exactly because of this. BTW, Adobe CS5 is built on CUDA too, and believe it or not, they don't support ATI's solution at all. Why? Exactly because CUDA is light years ahead of anything else available out there, light years... not just the underlying technology but the development tools as well (which is actually a very important point).

Uh, no.

For a start, you're utterly wrong in regard to OpenCL and DirectCompute. If they were just a "layer" as you put it, then they wouldn't operate at all on ATi hardware, which isn't the case.

The rest of your post sounds like nothing more than fanboy trash.

Athernar said,

Uh, no.

For a start, you're utterly wrong in regard to OpenCL and DirectCompute. If they were just a "layer" as you put it, then they wouldn't operate at all on ATi hardware, which isn't the case.

The rest of your post sounds like nothing more than fanboy trash.

Actually, on nVidia hardware it is; on ATI it isn't. nVidia has a layer over the actual hardware which enables us to use the GPU for "general purpose" floating-point computation, known as CUDA. ATI has something similar (but not nearly as advanced) known as Stream; OpenCL and DirectCompute (which is actually part of DirectX 11) are layers over CUDA and Stream. So I am not utterly wrong; check your facts.


And about being a fanboy... well, I am not a fan and definitely not a boy. Try to show me something nearly as advanced as CUDA, whether applications, scientific stuff, research stuff, development tools, etc.; there is none, and that's a fact. I would prefer there to be at least 10 competing technologies out there and not just CUDA, but the reality is that there aren't. I hate markets without competition; I don't like a single-vendor situation... not a very "fanboy" way of thinking, don't you think?

aludanyi said,
Actually, on nVidia hardware it is; on ATI it isn't. nVidia has a layer over the actual hardware which enables us to use the GPU for "general purpose" floating-point computation, known as CUDA. ATI has something similar (but not nearly as advanced) known as Stream; OpenCL and DirectCompute (which is actually part of DirectX 11) are layers over CUDA and Stream. So I am not utterly wrong; check your facts.

[Citation needed]

aludanyi said,
And about being a fanboy... well, I am not a fan and definitely not a boy. Try to show me something nearly as advanced as CUDA, whether applications, scientific stuff, research stuff, development tools, etc.; there is none, and that's a fact. I would prefer there to be at least 10 competing technologies out there and not just CUDA, but the reality is that there aren't. I hate markets without competition; I don't like a single-vendor situation... not a very "fanboy" way of thinking, don't you think?

You hate single vendor situations yet you like CUDA? A technology that is locked into a specific vendor and likely software-platform too. Seems like a bit of a double standard to me.

CUDA being more advanced? [Citation needed]

Athernar said,

[Citation needed]

You hate single vendor situations yet you like CUDA? A technology that is locked into a specific vendor and likely software-platform too. Seems like a bit of a double standard to me.

CUDA being more advanced? [Citation needed]

There is competition with proprietary technologies... and a single-vendor world is possible with open standards as well. CUDA is a proprietary, close-to-the-metal implementation of general-purpose floating-point calculation on nVidia hardware; Stream is the same on ATI hardware. It is like machine language (it isn't, but for the sake of argument imagine that). OpenCL is a higher-level OPEN STANDARD specification/implementation (imagine it as a C language); it still needs machine instructions underneath to exist. DirectCompute is a higher-level PROPRIETARY, but open, specification/implementation too.

So before we move forward...

1. Proprietary and closed (for example AutoCAD source code) = closed to view and in the sole ownership/decision domain of one company.


2. Proprietary and open (for example CUDA, DirectX, Windows API) = closed to view and in the sole ownership/decision domain of one company, but open to everyone to develop solutions and applications using it.


3. Owned and open (for example OpenCL) = everyone can implement it but it is controlled by a standard committee or organization.


4. Open and open (for example Linux source code) = free to do almost anything you wish with it.


For example, some libraries are proprietary and closed: only one company designs and uses them, and they are not available to anyone else. Some libraries are proprietary but open: only one company designs them, but they are available (free or for a license fee) for anyone else to use. Some libraries are open source and designed by one organization, and open for everyone to use. Some libraries are open to design and open to use by everyone.


OpenCL is a bad name, just like OpenGL: it is not open, it is controlled by a more or less democratic organization. You can't make an OpenCLXYZ if you don't like the official OpenCL; it is copyrighted, and you simply are not free to take it just like that. All you are free to do is implement it AS IS. Now CUDA is different: you can't implement it (unless you are nVidia), but you can use it just as freely as you can use OpenCL; the only difference is that you have to use it on nVidia hardware.

CUDA is more advanced because it has the best and most advanced tools; on Fermi you can use C++ and FORTRAN as well, and there are CUDA libraries available for further use (not supplied by nVidia). That is why people who develop things like scientific simulations and financial applications are using CUDA, why there are so many CUDA applications available, and why universities have CUDA development in their curriculum. So when I said more advanced, I mean you can be much more productive, and you have much more literature and many more people with real experience with CUDA than with any other "similar" technology out there.

You can design better stuff than CUDA, that is not a big deal, but you can't make it a standard overnight. You need time, you need people who use your solution... you need an ecosystem, which can't be designed; it must grow like an organism. nVidia is not enough: you need the people who actually build stuff using your solution and who help you advance it further by showing you new ways to use it... ways you can't even imagine in your research labs. The world cannot be designed, nor can a way to live, a way to work, or a way to build stuff. Being first is a good way to lead (and CUDA does lead), but the real value of CUDA doesn't come from nVidia; it comes from the people who actually use it and build stuff with it.
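For a concrete flavor of the tooling being described, here is a minimal sketch using Thrust, the C++ template library that ships with the CUDA toolkit (the sum-of-squares task is an illustrative example, not something from the comment):

    // Minimal sketch: summing the squares of a million values on the GPU with
    // Thrust, without writing an explicit kernel by hand.
    #include <thrust/device_vector.h>
    #include <thrust/transform_reduce.h>
    #include <thrust/functional.h>
    #include <cstdio>

    struct square {
        __host__ __device__ float operator()(float x) const { return x * x; }
    };

    int main() {
        // 2^20 values, all 2.0f, copied to the GPU by the device_vector constructor.
        thrust::device_vector<float> d(1 << 20, 2.0f);

        // Apply square() to every element, then sum the results on the device.
        float sum = thrust::transform_reduce(d.begin(), d.end(), square(),
                                             0.0f, thrust::plus<float>());

        std::printf("sum of squares = %.0f\n", sum);  // expected: 4 * 2^20 = 4194304
        return 0;
    }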


I hope OpenCL, DirectCompute, etc. will have a significant impact here too, but until they are more useful than CUDA it is immature and naive to throw it away while it is still alive, especially since it is actually the underlying technology for OpenCL and DirectCompute on at least 30% of the market (nVidia hardware).

aludanyi said,
<snip>

You seem to misunderstand what I mean when I say CUDA is "locked-in": it's not so much about the source model as that CUDA locks you into using nVidia hardware. You can go back and forth on the respective advantages/disadvantages of proprietary-closed versus free-open source models, but hardware lock-in such as with CUDA is always a bad thing.


Hardware does not retain its value the way software does, so if you want to scale up your hardware then you're forced to pick nVidia. Consider the utter trainwreck of a failure that Fermi is and you can see why this is categorically not a good idea. You've been locked into nVidia, and no matter how superior the new Intel/AMD/whoever GPU is, you're stuck with it.


So no, it's not immature or naive to "throw" CUDA away; doing so is a boon, as it allows people to choose the hardware they want, not what nVidia wants.

Edited by Athernar, May 4 2010, 12:05am : Comment system is sucky

Maybe not noticeable to the individual, but on a large scale, when you can save 2 watts per computer and multiply that by millions... well, I hope you get the point. That's a huge saving in power consumption.
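For scale (illustrative arithmetic, not from the comment), a 2 W saving across a million always-on machines works out to:

    % 2 W per machine, one million machines, 8,760 hours in a year
    \[ 2\,\mathrm{W} \times 10^{6} = 2\,\mathrm{MW}, \qquad 2\,\mathrm{MW} \times 8760\,\mathrm{h} \approx 17.5\,\mathrm{GWh\ per\ year} \]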

mls67 said,
That's a huge saving in power consumption.

No it isn't. It's huge compared to ONE person. It's still less than chump change compared to what we are all using.

Regardless, the whole point is that power should be FREE (re: solar). The fact that we are still paying for our energy consumption is so 19th century.

They kill me with all this gotta-be-more-energy-efficient crap. It's not like by changing out your processor you are going to see a noticeable benefit on your power bill or something. Seriously, unless it's going to cut the bill by like $50 or more per PC, this isn't something people should be worrying about.

Gotenks98 said,
They kill me with all this gotta-be-more-energy-efficient crap. It's not like by changing out your processor you are going to see a noticeable benefit on your power bill or something. Seriously, unless it's going to cut the bill by like $50 or more per PC, this isn't something people should be worrying about.

They couldn't care less about your power bill. It ultimately comes down to heat. More power = more heat.

CyBeRiANx said,

They couldn't care less about your power bill. It ultimately comes down to heat. More power = more heat.

And more heat = wasted energy.

Gotenks98 said,
They kill me with all this gotta-be-more-energy-efficient crap. It's not like by changing out your processor you are going to see a noticeable benefit on your power bill or something. Seriously, unless it's going to cut the bill by like $50 or more per PC, this isn't something people should be worrying about.
It could quite easily cut the bill by $50. Take a wattage of 200W and a UK cost of 10 pence per kilowatt-hour: that's 2 pence per hour, and at 6 hours a day, 365 days a year, it comes to £43.80.
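Spelling that arithmetic out (same figures as the comment):

    % 200 W for 6 hours a day over a year, at 10 p/kWh
    \[ 0.2\,\mathrm{kW} \times 6\,\tfrac{\mathrm{h}}{\mathrm{day}} \times 365\,\mathrm{days} = 438\,\mathrm{kWh}, \qquad 438\,\mathrm{kWh} \times \pounds 0.10/\mathrm{kWh} = \pounds 43.80 \]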

GPUs are too specialized in what they do. They can't handle general logic or string problems as well as CPUs can (yet), and when they can, they'll just be a CPU and no longer a GPU (or a general-purpose GPU, if you will).

It's just a matter of time before the materials we use today reach their maximum threshold. That will end up breaking the law. It's been one hell of a run so far.

Pixil Eyes said,
It's just a matter of time before the materials we use today reach their maximum threshold. That will end up breaking the law. It's been one hell of a run so far.

I've been hearing that argument for 20+ years now about Intel hitting the wall with Moore's law but they just seem to keep busting through that wall every time. Nvidia would really, really like for Moore's law to be dead, but saying it in a press release doesn't make it so.

It might be for them; we'll see how others cope. Intel seems to be awesome in the respect of besting itself. I am a fan.

seb5150 said,
I've been hearing that argument for 20+ years now about Intel hitting the wall with Moore's law but they just seem to keep busting through that wall every time.

We've never had to deal with atomic-scale limits before. We are now.