10 Major Players in Supercomputers


Humans have always had a thing for building big. History is littered with the remains of Great Walls, pyramids and coliseums, each one pushing engineers to imagine the next big step forward. Now we have buildings like the Burj Khalifa, with its towering height of 2,716.5 feet or 828 meters, as testament to our competitive -- and creative -- nature. And no matter how high we go, there's always some architect quietly drawing up plans for the next big thing.

The same holds true for the world of computing, except taller equals smaller and cooler means faster. This industry makes the pace of constructing the world's tallest buildings seem downright glacial. Thanks to the twice-yearly Top500 list of supercomputers painstakingly compiled by Hans Meuer, Erich Strohmaier, Horst Simon and Jack Dongarra, we can see just how rapidly the industry moves forward. The top three machines of 2012 will be lucky to occupy one of the top 20 spots in two years.

Processing power in supercomputers (or high-performance computers, for those who point out that "super" is a relative term) is rated in FLOPS, short for floating-point operations per second. A floating-point operation is basically a computation using fractional numbers, so FLOPS measures how many of those calculations a machine can perform each second. Top supercomputers are measured in petaflops (1,000,000,000,000,000, or 1 quadrillion, flops). The first system to break the 10-petaflop barrier was Japan's K Computer in late 2011. By the summer of 2012, Sequoia at Lawrence Livermore National Laboratory was running even faster. Yep, the competition moves that quickly.
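
Curious what that measurement looks like in practice? Here's a rough Python sketch -- emphatically not the official LINPACK benchmark the Top500 rankings use, just a back-of-envelope timing that assumes a dense matrix multiply costs about 2 * n^3 floating-point operations -- to estimate how many flops your own machine can manage:

import time
import numpy as np

n = 2000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b  # dense matrix multiply: roughly 2 * n**3 floating-point operations
elapsed = time.perf_counter() - start

flops = 2 * n**3 / elapsed
print(f"This machine: ~{flops / 1e9:.1f} gigaflops")
print(f"A 10-petaflop supercomputer is ~{1e16 / flops:,.0f} times faster")

Run it on a typical desktop and the gap between gigaflops and petaflops stops being abstract in a hurry.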

Although the top seat is in constant flux, the major supercomputing locations are usually the United States, Japan and Europe. China has burst onto the scene, too, with big ambitions of being a leader. In this article, we'll stop by 10 sites blowing the roof off of supercomputing. They make the list for two reasons: raw petaflops and their impact on the world of supercomputing. Most are long-standing leaders, but you'll also see a few newcomers who have shaken up the establishment. First up is a space agency.

10
NASA Advanced Supercomputing Division (U.S.)
Awww yeah. NASA's ribbon-cutting crew for Pleiades gets ready to celebrate on Dec. 11, 2008.
Photo courtesy NASA Ames Research Center

Do you know what one of these technological marvels is really capable of? NASA's Advanced Supercomputing (NAS) division at Ames Research Center is happy to explain. After all, Pleiades is the supercomputer behind some of the coolest extraterrestrial computing projects around.

Astronomers at NASA's Kepler Science Operations Center are using Pleiades to search images of the galaxy for other Earth-like planets. Thanks to Pleiades, the team discovered a planet that orbits two stars, called a circumbinary planet, in 2011. A few months later, it discovered more multistar planets, establishing a whole new class of planetary systems [source: Dunbar].

And when Pleiades wasn't busy finding new star systems or revealing the forces that make hurricanes work, it was running simulations to understand how galaxies formed after the big bang. Yes, the SGI system can handle the math required to model entire galaxies, thanks to its 1.24-petaflop performance [source: Dunbar].

See kids, that's how you supercompute. It's all about thinking big -- sometimes astronomically big.

9
National Supercomputing Center in Shenzhen (China)
Manning one of the supercomputers at the Korea Institute of Science and Technology Information
Chung Sung-Jun/Getty Images

While the people who work with high-performance computers tend to say that the machines exist to further science, the fact remains that being a player in the supercomputer field confers a certain status on the host country. No national program exemplifies this better than China's.

Roll the calendar back only a few years and, like almost everything else that's big in China today, the National Supercomputing Center in Shenzhen simply wasn't there. Along with its sister facility in Tianjin (see No. 5), Shenzhen's Nebulae computer blasted onto the scene in 2010 at the No. 2 spot, with no previous history in the field. The sudden arrival of a 1.27-petaflop computer in China was a game changer for long-established labs in other parts of the world. In the early 2000s, China set out to become a dominant force in this field, and about a decade later it had effectively gone from zero to hero.

Nebulae (made by Chinese vendor Dawning) arguably hasn't broken much ground since its debut. Funding shoulders part of the blame: Since local governments footed much of the bill for the computer (which cost more than $1 billion), they got a large say in which projects Nebulae would work on. The system was originally intended to hash out calculations for astrophysicists, but it has actually spent much of its time on local economic projects such as aerospace industry advancements, improved weather forecasting and even the development of animated films [source: Davis]. It's humbling work for one of the world's top supercomputers -- sort of like hiring Stephen Hawking on a research retainer but making him tutor your teenager in algebra.

8
Jülich Supercomputing Center (Germany)

When it comes to supercomputing, the Jülich Supercomputing Center operates on the same "no job is too big or too small" principle as every handyman in the phone book. In June 2012, the center made the Top500 list at No. 8 with JuQueen, its 1.38-petaflop IBM Blue Gene/Q (you'll be seeing a lot of Blue Gene systems on this list). It complements JuGene, a Blue Gene/P that was upgraded in 2009 to become Europe's first petaflop computer, and JuRopa, a Sun-based system.

With all that power, Jülich has become a computational and research simulation hub for many fascinating disciplines: quantum systems, climate science, plasma physics, biology, molecular systems, and mathematical modeling and algorithms.

Call it a stereotype, but the Germans are efficiency nuts. They -- like the Chinese -- have begun incorporating graphics processing units (GPUs) into their systems to achieve higher computational speeds using less energy. The goal is to eventually cross that magical exascale line of 1,000 petaflops. The future of supercomputing may not be as far out there as we think.

7
Cineca (Italy)

For years, Italy has been involved in supercomputing, but only recently has it emerged as a major player in the field. It owes that newfound standing in part to the Cineca High-Performance Computing Center, an academic consortium of more than 50 Italian universities. Ever since people have been ranking the world's top supercomputers, Cineca has held a spot, usually someplace near the middle [source: Top500]. However, in June 2012, the center broke into the top 10 for the first time with FERMI, a 1.72-petaflop IBM Blue Gene/Q system that's actually the latest in a long line of IBM and Cray products in Cineca's history.

With deep enough funding, lots of organizations can join the supercomputing elite, but Cineca earns its place in this article for two reasons: First, it has always maintained an aggressive series of computers; second, it's becoming something of a unifying force in international computing.

Started in September 2012, iCORDI is an international forum between E.U. and U.S. computing agencies. The goal is to develop universal data structures that will make sharing information and research between the two groups neater and easier. Cineca has positioned itself as a leader in solid earth sciences, which covers everything from tectonics to ocean temperatures. With its new role and ever more powerful computers, Italy doesn't appear to be saying ciao to the international stage anytime soon.

6
Oak Ridge National Laboratory (U.S.)

Have you ever had one of those friends who's back at the dealership the moment the new-car smell wears off his or her last vehicle? That's pretty close to what Oak Ridge National Laboratory's (ORNL) rankings look like: You can fully expect today's hot new system to be upgraded or replaced by a new model within a couple of years.

Aside from the aggressive turnover, another interesting fact about ORNL is its use of Cray-built systems (lately, many of the top U.S. players run IBM-built Blue Gene systems). In 2012, its Jaguar XT5 system was being upgraded into the new XK6-based Titan system. The overhaul will help the system leapfrog from 1.94 petaflops to somewhere between 10 and 20 petaflops, which could land it near the top of the world rankings.

Titan uses a combination of AMD CPUs and NVIDIA GPUs to conduct research for the U.S. Department of Energy, including studies on extending the life cycles of nuclear power plants, the viability of new biofuels, population dynamics, solar technology development and climate change modeling. When Japan's Fukushima Daiichi plant was damaged in the earthquake and ensuing tsunami of 2011, ORNL researchers put in long hours simulating some of the emergency scenarios at the nuclear plant [source: Munger].

5
National Supercomputing Center in Tianjin (China)

In 2001, China didn't hold a single spot on the Top500 list; today it is second only to the United States in number of facilities. And it drove its ambitious point home by switching on the system at the National Supercomputing Center in Tianjin (near Beijing).

The Tianhe-1A computer clocks in at about 2.57 petaflops, and it launched the National Supercomputing Center to the top of the international list in late 2010 -- the center's first-ever entry on the list. The international community couldn't ignore that kind of arrival in the marketplace; it's often called China's supercomputing Sputnik moment.

Unfortunately, Tianhe-1A has always drawn a certain level of criticism. Its makeup involves a combination of CPUs and GPUs (graphics processing units) to achieve its speed. GPUs offer an energy-efficient way of adding significant processing power to a supercomputer. The problem is that little of China's software for high-performance computing was compatible with GPU-based systems. Tianhe-1A's detractors like to call it the biggest, baddest gaming machine on the international stage. Chinese scientists counter that argument by pointing out that Tianhe-1A is used for real research, particularly in the fields of energy and mineral exploration [source: Lim].
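
To see why that software gap matters, consider a minimal sketch of the same calculation written for a CPU and for a GPU. The CPU version below runs anywhere Python and NumPy are installed; the GPU version assumes the third-party CuPy library and an NVIDIA graphics card -- illustrative choices on our part, not anything Tianhe-1A actually runs:

import numpy as np

def smooth_cpu(field):
    # A physics-style stencil: average each cell with its four neighbors.
    return 0.25 * (np.roll(field, 1, 0) + np.roll(field, -1, 0)
                   + np.roll(field, 1, 1) + np.roll(field, -1, 1))

def smooth_gpu(field):
    # Same math, but the data must be copied into GPU memory first and
    # every array call swapped for its GPU counterpart.
    import cupy as cp  # assumed available; requires a CUDA-capable GPU
    gpu_field = cp.asarray(field)    # host -> device copy
    result = 0.25 * (cp.roll(gpu_field, 1, 0) + cp.roll(gpu_field, -1, 0)
                     + cp.roll(gpu_field, 1, 1) + cp.roll(gpu_field, -1, 1))
    return cp.asnumpy(result)        # device -> host copy

field = np.random.rand(1024, 1024)
print(smooth_cpu(field).mean())

The point isn't the arithmetic; it's that existing CPU programs need this kind of porting before a GPU-heavy machine can help them, which is exactly where China's software lagged behind its hardware.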

The fact remains that Tianhe-1A was a fairly obvious and successful grab at the world title, but it also exposed China's newcomer status in supercomputing and the areas it still needs to develop. Given the country's commitment to leading in this field, it won't be long before the bugs are worked out of its computing initiative.

4
Leibniz-Rechenzentrum (Germany)

Imagine going to school at a university that had a 2.89-petaflop supercomputer stashed away. Oh, the Warcraft sessions you would have! Seriously though, the Leibniz Computing Center stands out for a few reasons: It housed Europe's fastest high-performance computer as of September 2012, and it's the fastest system on this list run by an academic institution, the Bayerische Akademie der Wissenschaften near Munich.

The Intel-based IBM system dubbed SuperMUC, which debuted on the Top500 list at No. 4 in June 2012, sees use across many disciplines -- unlike the computers at Argonne and Lawrence Livermore, which have more rigid areas of research. Fluid dynamics and automotive aerodynamics are among the early tests being run on the Leibniz machine, and the system is also making headway in modeling earthquakes.

What's perhaps most impressive is Leibniz's use of supercomputers for education. Where other facilities seem to keep everyone's grubby hands away from their computers, Leibniz takes a more open approach that makes learning about high-performance computing far more accessible [source: Jülich].

3
Argonne National Laboratory (U.S.)
There's Argonne's supercomputer Mira, which ranked as the world's third-fastest supercomputer in June 2012.

Spoiler alert! Two of the top three spots on this list belong to the United States Department of Energy (DOE). Researchers at the No. 1 spot are virtually blowing stuff up for the U.S. nuclear program, while the crew at Argonne National Laboratory quietly works away on its science experiments. Sort of.

Argonne, which opened in 1946 and ran its first computer in 1953, was the first national science and engineering research laboratory in the United States. Many of the last century's great technological advances came by way of Argonne's labs, including nuclear reactor technology, nuclear submarine technology and research on subatomic particles.

In 2012, Argonne's 8.16 petaflops of supercomputing power was used to advance fields ranging from biosciences to transportation to climate research. Want to find out whether your city's proposed suspension bridge will come crashing down in high winds? Argonne folks can simulate all the forces involved. After that, they can give you the latest climate change prediction data.

Another great part of Argonne's computing program? It's open to plenty of researchers outside the DOE. Scientists apply for time on Argonne's systems and are awarded a set number of hours. Think of it as a highly elevated form of helping a nation with its science homework.

2
Riken (Japan)

No. 1 is a tough spot to hold onto in the supercomputing game, as witnessed by the K Computer at Japan's Riken Advanced Institute for Computational Science. In mid-2011, the Fujitsu-made machine was up and running and sitting atop the worldwide list; by the end of the year, its installation was complete and it had become the first computer to surpass the 10-petaflop barrier [source: Metz]. But before the celebratory bottles of sake had cooled, the new No. 1 computer in the world was already being installed.

That's not to take anything away from the K Computer. It's an extremely powerful system capable of some astounding research, much of it related to medical advancement. For example, Riken's research teams use computer simulation to study and predict biological processes in the human body at the molecular, genetic, cellular and organ levels.

Finally, one other lesson about international competition to take from the K Computer has as much to do with China as it does with Japan. The Japanese have been leaders in supercomputing for a long time, but pressure from a technologically rising China is driving everyone in the region to work harder. Japan has made no secret of its intention to stay on top.

1
Lawrence Livermore National Laboratory (U.S.)

It's the granddaddy and big mama of U.S. supercomputing all wrapped into one. Lawrence Livermore National Laboratory got its start as a nuclear technology testing facility east of San Francisco in 1952, when researchers used an old Naval infirmary to house the facility's first computer. Since that first machine was installed, Livermore has been among the world's computing elite. Granted, your first smartphone probably had more processing power than that first computer, but Livermore's machines have always been at the heart of America's nuclear program.

For six decades, this is where the United States has studied nuclear reactions and detonations. Today, every time a piece of hardware in the nuclear arsenal changes, it gets run through the simulators at Livermore to make sure the systems will still work if they're ever used.

The lab fired up Sequoia, its 16.32-petaflop IBM Blue Gene/Q monster, in 2012. The system can supposedly calculate all the reactions and interactions in a nuclear blast in about a week; its predecessor took about a month to do the same task. And to put that in even more perspective, your laptop would need about 1,600 years to do the job [source: Derene].
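
Those numbers invite a quick sanity check. This little Python calculation, using only the rough figures quoted above, works out what laptop speed the 1,600-year estimate implies:

# Rough figures from the article; treat the result as order-of-magnitude only.
SEQUOIA_FLOPS = 16.32e15                    # 16.32 petaflops
WEEK_SECONDS = 7 * 24 * 3600
total_ops = SEQUOIA_FLOPS * WEEK_SECONDS    # ~1e22 operations in one week

LAPTOP_SECONDS = 1600 * 365.25 * 24 * 3600  # ~1,600 years
implied_laptop_flops = total_ops / LAPTOP_SECONDS
print(f"Implied laptop speed: ~{implied_laptop_flops / 1e9:.0f} gigaflops")

The script spits out roughly 200 gigaflops -- a generous but not crazy figure for a 2012-era laptop, so the comparison holds up as a ballpark.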



Author's Note: 10 Major Players in Supercomputers

Going into this article, I really didn't know what to expect. I was afraid the information would be drier than a day-old piece of toast. But as I researched the different labs, I was amazed at the range of work these computers are doing -- everything from medical research and space exploration to climate prediction and even animation. Bravo, supercomputers and the people who run you. Bravo!

Sources

  • "Cineca". Top500 Computer Sites. June 2012. (Sept. 2, 2012) http://i.top500.org/site/47495
  • Derene, Glenn. "How IBM Built the Most Powerful Computer in the World." Popular Mechanics. Dec. 28, 2011. (Aug. 27, 2012) http://www.popularmechanics.com/technology/engineering/extreme-machines/how-ibm-built-the-most-powerful-computer-in-the-world
  • Davis, Bob. "China's Not-So-Super Computers." The Wall Street Journal. March 2013. (Sept 1, 2012) http://online.wsj.com/article/SB10001424052702303812904577298062429510918.html
  • Dunbar, Jill. "Searching for Sister Planets." NASA Ames Research Center. March 6, 2012. (Aug. 29, 2012) http://www.nas.nasa.gov/publications/articles/feature_sister_planets_Kepler.html
  • Dunbar, Jill and Jenvey, Karen. "NASA Supercomputer Enables Largest Cosmological Simulations" NASA Ames Research Center. Sept. 29, 2011. http://www.nasa.gov/centers/ames/news/releases/2011/11-77AR.html
  • Hämmerle, Hannelore and Nicole Crémel. "CERN makes it into supercomputing TOP500." Cern Courier. Aug. 20, 2007. (Aug. 29, 2012) http://cerncourier.com/cws/article/cern/30870
  • Hsu, Jeremy. "Supercomputer 'Titans' Face Huge Energy Costs". MSNBC News. Jan 23, 2012. (Sept. 1, 2012) http://www.msnbc.msn.com/id/46077648/ns/technology_and_science-innovation/t/supercomputer-titans-face-huge-energy-costs
  • Lim, Louisa. "China's Supercomputing Goal: From 'Zero To Hero'." All Things Considered. Aug. 2, 2011. (Sept. 1, 2012) http://www.npr.org/2011/08/02/138901851/chinas-supercomputing-goal-from-zero-to-hero
  • Metz, Cade. "Japan Pushes World's Fastest Computer Past 10 Petaflop Barrier." Nov. 2, 2011. (Aug. 30, 2012) http://www.wired.com/wiredenterprise/2011/11/japanese_megamachine/