Back in the 1950s, machines such as CSIRAC were the superpower computers of their day, famously playing music for the first time; your home desktop or laptop is many orders of magnitude more powerful than that fairly basic machine. But just as your home computer dwarfs CSIRAC, there is a whole class of computers out there many thousands of times as powerful as whatever you are using right now to read this article.
These supercomputers are quite amazing in what they are able to achieve. Currently, the fastest supercomputer in the world is Japan’s “K Computer” (yep, imaginative name), capable of 10.51 petaFLOPS: that is, 10.51 quadrillion floating-point operations per second. The K Computer is not even fully operational yet; its theoretical peak performance is over 11 petaFLOPS.
In comparison, a top-performing Intel “Sandy Bridge” Core i7-2600K clocked at 3.4 GHz only manages between 80 and 90 gigaFLOPS. This means the K Computer can do well over 100,000 times more floating-point calculations per second; to be fair though, the supercomputer has a slight advantage, as it packs 88,128 eight-core Fujitsu SPARC64 VIIIfx processors running at 2 GHz, for a total of 705,024 cores.
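As a rough sanity check on those numbers (illustrative arithmetic only, using the ~90 gigaFLOPS figure quoted above for the i7):

```python
# Compare the K Computer's Linpack score against a single Core i7-2600K,
# using the figures quoted in the article.
k_computer_flops = 10.51e15      # 10.51 petaFLOPS
core_i7_flops = 90e9             # ~90 gigaFLOPS

ratio = k_computer_flops / core_i7_flops
print(f"K Computer vs. Core i7-2600K: ~{ratio:,.0f}x")

# Spread across all 705,024 cores, each core contributes surprisingly little.
cores = 88_128 * 8
per_core_gflops = k_computer_flops / cores / 1e9
print(f"Per core: ~{per_core_gflops:.1f} gigaFLOPS")
```

Interestingly, each individual SPARC64 core only delivers around 15 gigaFLOPS on this workload; the K Computer's speed comes almost entirely from sheer core count rather than per-core muscle.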
This is the K Computer. It's large and very fast
Unfortunately, supercomputers like the K Computer consume an enormous amount of power: somewhere in the range of 12-13 megawatts, roughly the draw of 15,000 average homes, and this is apparently “efficient” for a supercomputer. The annual running costs for the machine are around US$10 million.
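Those two figures are consistent with each other, as a back-of-the-envelope check shows. Note the electricity price here is an assumption for illustration, not a figure from the article:

```python
# Back-of-the-envelope check: does 12-13 MW line up with ~US$10M per year?
power_mw = 12.5                  # midpoint of the 12-13 MW range
hours_per_year = 24 * 365
kwh_per_year = power_mw * 1000 * hours_per_year

price_per_kwh = 0.10             # assumed industrial rate in US$ (not from the article)
annual_cost = kwh_per_year * price_per_kwh

print(f"~{kwh_per_year / 1e6:.0f} GWh per year")
print(f"~US${annual_cost / 1e6:.1f} million per year")
```

Running continuously, the machine burns through roughly 110 GWh a year, so at around ten cents per kilowatt-hour the quoted US$10 million figure is plausible for electricity alone.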
The TOP500 list, which ranks the 500 fastest supercomputers in the world, stated late last year that if you add all the supercomputers’ performance together you get a whopping combined total of 74.2 petaFLOPS. 62% of these systems use processors with six or more cores, and the favourite brand for GPU acceleration is NVIDIA. 29 of the top 500 supercomputers draw more than 1 MW of power.
A hybrid AMD-NVIDIA supercomputer named “Titan” is being built at the Oak Ridge National Laboratory that will apparently reach up to 20 petaFLOPS when it becomes operational in 2012. If supercomputing power keeps increasing at its current rate, we should see machines capable of one exaFLOPS (1,000 petaFLOPS) by 2019, although even that is far short of the one-zettaFLOPS system predicted to be required for accurate two-week weather forecasts.
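It is worth seeing what growth rate that 2019 prediction actually implies. This is illustrative arithmetic over the figures above, not a projection from the article itself:

```python
import math

# What annual growth rate gets us from the K Computer (2011)
# to one exaFLOPS (1,000 petaFLOPS) by 2019?
start_pflops, start_year = 10.51, 2011
target_pflops, target_year = 1000, 2019

years = target_year - start_year
annual_growth = (target_pflops / start_pflops) ** (1 / years)
doubling_months = 12 * math.log(2) / math.log(annual_growth)

print(f"~{annual_growth:.2f}x per year, doubling every ~{doubling_months:.1f} months")
```

Hitting one exaFLOPS by 2019 requires peak performance to roughly double every 14-15 months, which is broadly in line with the historical pace of the top of the TOP500 list.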
On a slightly more local level, the popular distributed-computing project Folding@Home achieves over 6 petaFLOPS by harnessing around 350,000 processors in household computers. Users simply install a program on their machine, and when the machine is idle it helps to simulate protein folding, among other medical research tasks. The project is genuinely beneficial to the community, as the results of its calculations go towards combating disease and developing drugs. Neowin has its own Folding@Home team; check out this thread for details on how to join.
Other uses for supercomputing range from quantum physics and weather/climate modelling to nuclear simulations and (to a lesser extent) code breaking. Supercomputers are a vital part of research, and many academic institutions rely on them to do things that ordinary desktop computers simply cannot.