How Moore's Law Works

[Image: Gordon Moore at Intel's headquarters. AP Photo/Paul Sakuma]

There's a joke about personal computers that has been around almost as long as the devices have been on the market: You buy a new computer, take it home and just as you finish unpacking it you see an advertisement for a new computer that makes yours obsolete. If you're the kind of person who demands to have the fastest, most powerful machines, it seems like you're destined for frustration and a lot of trips to the computer store.

While the joke is obviously an exaggeration, it's not that far off the mark. Even one of today's modest personal computers has more processing power and storage space than the famous Cray-1 supercomputer. In 1976, the Cray-1 was state-of-the-art: it could process 160 million floating-point operations per second (160 megaflops) and had 8 megabytes (MB) of memory.

Today, many personal computers can perform more than 10 times that number of floating-point operations in a second and have 100 times the amount of memory. Meanwhile, on the supercomputer front, the Cray XT5 Jaguar at the Oak Ridge National Laboratory performed a sustained 1.4 petaflops in 2008 [source: Cray]. The prefix peta means 10 to the 15th power -- in other words, one quadrillion. That means the Cray XT5 can process 8.75 million times as many flops as the Cray-1. It took only a little over three decades to reach that milestone.
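
To see where that 8.75-million figure comes from, here's the arithmetic as a quick sketch in Python, using the flops ratings as cited above:

```python
# A quick check of the arithmetic above, using the figures as cited
# (160 megaflops for the Cray-1, 1.4 petaflops for the Cray XT5).
cray_1_flops = 160e6      # 160 million flops (1976)
cray_xt5_flops = 1.4e15   # 1.4 petaflops sustained (2008)

speedup = cray_xt5_flops / cray_1_flops
print(f"{speedup:,.0f}x")   # -> 8,750,000x, i.e., 8.75 million times
```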

If you were to chart the evolution of the computer in terms of processing power, you would see that progress has been exponential. The man who first made this famous observation is Gordon Moore, a co-founder of the microprocessor company Intel. Computer scientists, electrical engineers, manufacturers and journalists extrapolated Moore's Law from his original observation. In general, most people interpret Moore's Law to mean that the number of transistors on a 1-inch (2.5-centimeter) diameter wafer of silicon doubles every x number of months.

The number of months shifts as conditions in the microprocessor market change. Some people say it takes 18 months and others say 24. Some interpret the law to be about the doubling of processing power, not the number of transistors. And the law sometimes seems to be more of a self-fulfilling prophecy than an actual law, principle or observation. To understand why, it's best to go back to the beginning.
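
Stated as a formula, the law says the transistor count multiplies by 2 to the power of t/T, where t is elapsed time and T is the doubling period. A minimal sketch of what the two common readings imply:

```python
# A minimal sketch of the doubling rule: after t months, the count grows by a
# factor of 2**(t / T), where T is the doubling period in months.
def growth_factor(months, doubling_period=24):
    """How many times the transistor count multiplies after `months`."""
    return 2 ** (months / doubling_period)

# Ten years of growth under each common reading of the law:
for period in (18, 24):
    print(f"{period}-month doubling: {growth_factor(120, period):,.0f}x in 10 years")
# 18 months -> ~102x; 24 months -> 32x
```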

Semiconductors, Transistors and Integrated Circuits

[Image: A replica of the first transistor. AP Photo/Paul Sakuma]

The discovery of semiconductors, the invention of the transistor and the creation of the integrated circuit are what make Moore's Law -- and, by extension, modern electronics -- possible. Before the invention of the transistor, the most widely used element in electronics was the vacuum tube. Electrical engineers used vacuum tubes to amplify electrical signals. But vacuum tubes tended to break down, and they generated a lot of heat.

In the 1930s, Bell Laboratories began looking for an alternative to vacuum tubes to stabilize and strengthen the growing national telephone network. In 1945, the lab concentrated on finding a way to take advantage of semiconductors. A semiconductor is a material that can act as both a conductor and an insulator. Conductors are materials that permit the flow of electrons -- they conduct electricity. Insulators have an atomic structure that inhibits electron flow. Semiconductors can behave as either one, depending on conditions.

The control of the flow of electrons is what makes electronics work. Finding a way to harness the unique nature of semiconductors became a high priority for Bell Labs. In 1947, John Bardeen and Walter Brattain built the first working transistor. The transistor is a device designed to control electron flows -- it has a gate that, when closed, prevents electrons from flowing through the transistor. This basic idea is the foundation for the way practically all electronics work.
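
As a toy illustration of that idea (a deliberate simplification, not transistor physics), you can model a transistor as a switch and chain switches together into logic:

```python
# A toy model, not transistor physics: treat a transistor as a switch whose
# gate decides whether current flows from input to output.
def transistor(gate_open: bool, current_in: bool) -> bool:
    """Current flows out only if current is supplied AND the gate is open."""
    return current_in and gate_open

# Chaining two such switches in series behaves like an AND gate -- the seed of
# all digital logic:
def and_gate(a: bool, b: bool) -> bool:
    return transistor(b, transistor(a, True))

print(and_gate(True, True), and_gate(True, False))   # True False
```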

Early transistors were huge compared to the transistors manufacturers produce today. The very first one was half an inch (1.3 centimeters) tall. But once engineers learned how to build a working transistor, the race was on to build them better and smaller. For the first few years, transistors existed only in scientific laboratories as engineers improved the design.

In 1958, Jack Kilby made the next huge contribution to the world of electronics: the integrated circuit. Earlier electric circuits consisted of a series of individual components. Electrical engineers would construct each piece and then attach them to a foundation called a substrate. Kilby experimented with building a circuit out of a single piece of semiconductor material and overlaying the metal parts necessary to connect the different pieces of circuitry on top of it. The result was an integrated circuit.

The next big development was the planar transistor. To make a planar transistor, engineers etch components directly onto a semiconductor substrate, leaving some parts of the substrate higher than others. They then apply an evaporated metal film to the substrate. The film adheres to the raised portions of the semiconductor material, coating them in metal. The metal creates the connections between the different components that allow electrons to flow from one component to another. It's almost like printing a circuit directly onto a semiconductor wafer.

Moore's Observation

[Image: Gordon Moore holds a vacuum tube, the precursor to the transistor. AP Photo/Paul Sakuma]

The physicist Jean Hoerni developed the technique for creating planar transistors in 1959. By 1961, his company, Fairchild Semiconductor, had produced the first planar integrated circuit. From that moment on, the technology advanced rapidly. Physicists and engineers found new and more efficient ways to create integrated circuits. They refined the processes they used to make components smaller and more compact. This meant they could fit more transistors on a single semiconductor wafer than previous generations of the technology could.

During this time, the director of research and development at Fairchild was Gordon Moore. Electronics magazine asked Moore to predict what would happen over the next 10 years of development in the field. Moore responded with an article bearing the snappy title "Cramming more components onto integrated circuits." The magazine published it on April 19, 1965.

Moore based his prediction upon the rapid development of the industry since the introduction of the integrated circuit. He saw that as techniques improved and components on circuits shrank, the price for producing an individual component dropped. Semiconductor companies had an incentive to refine their production techniques -- not only were the new circuits more powerful, the individual elements were more cost efficient. As long as that relationship held true, Moore said the trend would continue.

But Moore included a caveat: As circuits become more complex, the cost to produce each circuit as a whole goes up. So while the individual components are inexpensive to produce, really complex circuits are more expensive to develop. As techniques improve, the cost of creating complex circuits decreases. The cost per component and the cost per circuit created a balancing effect on the industry, resulting in a log-linear evolutionary trend -- plotted on a logarithmic scale, the growth in circuit complexity over time traces a straight line.
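
The sketch below illustrates that balancing act. The circuit cost, defect rate and yield model here are made-up numbers for demonstration, not figures from Moore's paper: packing more components onto a circuit spreads fixed costs thinner, but the odds of a defect rise with complexity, so cost per working component has a sweet spot that shifts as manufacturing improves.

```python
# Illustrative numbers only -- the circuit cost and defect rate below are made
# up for demonstration, not taken from Moore's paper.
def cost_per_component(n, circuit_cost=100.0, defect_rate=0.005):
    """Cost per *working* component on a circuit with n components."""
    yield_fraction = (1 - defect_rate) ** n   # chance all n components work
    return circuit_cost / (n * yield_fraction)

# As manufacturing improves (defect rate falls), the cheapest complexity rises:
for defects in (0.005, 0.001):
    best = min(range(1, 5000), key=lambda n: cost_per_component(n, defect_rate=defects))
    print(f"defect rate {defects}: cost minimized at ~{best} components")
```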

Moore also pointed out that over a 12-month period, the number of components on a one-inch (2.5-centimeter) diameter semiconductor wafer doubled. Moore attributed this to two major trends: companies were finding ways to make smaller components and they were getting better at arranging the components to conserve space on the wafer.

Looking at this data, Moore extrapolated his prediction for the next 10 years. At the time of the article, Moore said that the optimal number of components on a circuit was 50. He projected that by 1975, that number would be closer to 65,000. The prediction held -- by 1975, integrated circuits had nearly 65,000 components.
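
The arithmetic behind that extrapolation is easy to check: ten annual doublings starting from 50 components lands in the same ballpark as the figure Moore projected.

```python
# Ten annual doublings, starting from the 50 components of 1965:
print(50 * 2 ** 10)   # 51200 -- the same ballpark as the ~65,000 Moore projected
```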

Interpretations of Moore's Law

In 1975, Moore wrote a paper for the Institute of Electrical and Electronics Engineers (IEEE) International Electron Devices Meeting. He titled the paper "Progress in digital integrated electronics." Moore acknowledged that his prediction for the rate of advancements in circuit technology had held true and discussed the possibility of the trend continuing.

Moore pointed out that as techniques improved, the potential for defects decreased. That meant circuit manufacturers could work with larger wafers and produce more chips per wafer. Production continued to become more efficient, which in turn helped drive innovation to create even smaller components.

Moore said the trend he had predicted 10 years earlier would progress at the same rate for at least a few more years. But he also believed the semiconductor industry was approaching the limit for some techniques, such as conserving space on a circuit -- a factor he called "circuit cleverness." Eventually, he argued, engineers would reach the optimal arrangement of components, and once that factor dropped out of the equation, the rate of advancement would have to slow. He predicted that after a few years, the component count would double only every 24 months.

While Moore's original observation focused on technological advances and the economics behind producing circuits, many people reduce his observation to the simple statement we call Moore's Law. The most common version of Moore's Law is that the number of transistors on a circuit doubles every 18 (or 24) months. Remarkably, this prediction has held true -- today, Intel's Core i7 microprocessor has 731 million transistors, while its Xeon processor has 1.9 billion transistors [source: Intel].
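
How closely does the Core i7 figure cited above track the 24-month reading? A rough consistency check, taking Intel's 4004 from 1971 at roughly 2,300 transistors as the starting point (a widely quoted figure, though not one from this article):

```python
import math

# Rough consistency check against the 24-month rule. The 4004 figure is
# common knowledge, not from this article; the Core i7 figure is cited above.
t_4004, year_4004 = 2_300, 1971
t_core_i7, year_i7 = 731_000_000, 2008

doublings = math.log2(t_core_i7 / t_4004)
years_per_doubling = (year_i7 - year_4004) / doublings
print(f"{doublings:.1f} doublings, one every {years_per_doubling:.2f} years")
# -> ~18.3 doublings, one every ~2.02 years: right on the 24-month pace
```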

Cramming more components on an integrated circuit doesn't just mean devices are becoming more powerful -- it also means they're getting smaller. The tiny components on compact integrated circuits power all sorts of portable electronic devices. Even a small microprocessor chip today is as powerful as a full-sized chip was a few years ago. The advances in circuit production make devices like smartphones and netbooks possible.

Moore's Law in Action

Why have Moore's observations and predictions held true over so many decades? Moore's Law isn't really a law at all -- in fact, there's no fundamental law of physics behind it. Moore's Law only holds true because of the actions of human beings. But what keeps the cycle going even as the challenge to make more powerful circuits grows?

Much of the reason is psychological and driven by the market. Companies that make integrated circuits are competing against each other and everyone knows about Moore's Law. That means every corporate executive has this in mind: if our company doesn't double the power of our circuits in 18 months, another company will beat us to it.

Because companies don't want to give an edge to competitors, they pour a lot of money into research and development (R&D). These R&D divisions work to develop new techniques to create smaller components and to arrange them in ways that maximize performance. It costs a lot of money to keep the cycle of research going, but this cost is balanced against the threat of competitors gaining a foothold and dominating the market.

Another factor is the simple desire to overcome a challenge. Many people have predicted the end of Moore's Law over the years. Some people thought it would come to an end during the 1980s. Others said the same thing in the mid '90s. It seemed like engineers would eventually bump up against a barrier that would be fundamentally impossible to breach. But engineers somehow manage to find a solution each time, keeping Moore's Law alive.

Consumers also drive Moore's Law. The rapid development of electronics has created a sense of expectation among consumers. Every year, faster and more advanced electronics hit the market. From the consumer's point of view, there's no reason not to expect something better next year.

Moore's Law and the Nanoscale

[Image: Today's circuit boards have millions of transistors. © iStockphoto/Robert Hunt]

Today, transistors on integrated circuits have become so small that it would take more than 2,000 of them lined up side by side to equal the thickness of a human hair. The transistors on Intel's latest chips are only 45 nanometers wide, while the average human hair is about 100,000 nanometers thick [source: National Nanotechnology Initiative].

Creating such narrow transistors is an amazing achievement. Visible light has wavelengths in the range of 400 to 700 nanometers, and conventional light microscope lenses can only resolve objects about half the wavelength of visible light or larger. Creating an image of something this small requires special equipment such as a scanning electron microscope.
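
The scale arithmetic in the last two paragraphs is easy to verify, using the figures as cited:

```python
# The scale arithmetic from the last two paragraphs, figures as cited above.
transistor_nm = 45            # width of a transistor on Intel's latest chips
hair_nm = 100_000             # approximate thickness of a human hair
shortest_visible_nm = 400     # short end of the visible-light range

print(f"{hair_nm / transistor_nm:.0f} transistors per hair-width")   # ~2222
print(f"light-microscope limit: ~{shortest_visible_nm / 2:.0f} nm")  # ~200 nm
```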

One thing you must consider when dealing with such small devices is that as you approach the nanoscale, you leave behind the world of classical physics and enter the realm of quantum mechanics. The rules of physics in the quantum world are very different from the way things work on the macro scale. For example, quantum particles like electrons can pass through extremely thin walls even if they don't have the kinetic energy necessary to break through the barriers. Quantum physicists call this phenomenon quantum tunneling.
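
A back-of-the-envelope sketch shows why tunneling worries chip designers. In the standard textbook approximation for a rectangular barrier (an assumed model here, not something from the article), the odds of an electron slipping through fall off exponentially with the barrier's width:

```python
import math

# Textbook rectangular-barrier approximation (an assumed model, not from the
# article): transmission T ~ exp(-2 * kappa * L), where
# kappa = sqrt(2 * m * (V - E)) / hbar.
HBAR = 1.054e-34   # reduced Planck constant, J*s
M_E = 9.109e-31    # electron mass, kg
EV = 1.602e-19     # joules per electron volt

def tunneling_probability(width_nm, barrier_ev=1.0, energy_ev=0.5):
    """Rough odds an electron tunnels through a barrier `width_nm` wide."""
    kappa = math.sqrt(2 * M_E * (barrier_ev - energy_ev) * EV) / HBAR
    return math.exp(-2 * kappa * width_nm * 1e-9)

# Halving an already-thin barrier makes leakage dramatically worse:
for width in (2.0, 1.0):
    print(f"{width} nm barrier: T ~ {tunneling_probability(width):.1e}")
```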

Because electronics depend upon controlling the flow of electrons to work, issues like quantum tunneling create serious problems. These problems force electrical engineers to re-evaluate the way they design circuits. In some cases, shifting to different materials solves the issue. In others, finding a completely new way to build circuits might work.

There's even the possibility that someone will come up with a revolutionary idea that makes the transistor and integrated circuit obsolete. While that may sound far-fetched, the fact remains that despite numerous tech pundits and engineers pronouncing the end of Moore's Law, circuit manufacturers are still finding ways to keep it going. As it turns out, the challenges may not be quite as impossible as some believe.

The Future of Moore's Law

Even Gordon Moore has questioned how long the cycle of innovation and production can keep up the frenzied pace of the last four decades. He has also expressed amazement at the way companies like Intel find new ways to work around what initially seemed like an insurmountable problem. Will there ever be an end to Moore's Law?

The answer is yes, but it's difficult to pin down when that might happen. For one thing, we could hit a technical barrier that prevents engineers from finding a way to make smaller components. Even if we don't encounter a technical barrier, economics could come into the equation: if it's not economically feasible to produce circuits with smaller transistors, there may be no reason to pursue further development. Or we could bump up against the fundamental laws of physics -- the speed of light, for instance.

The problem with predicting a specific date when one or more of these barriers will stop progress is that we have to base it on what we know today. But every day engineers are learning new ways to design, build and produce circuits. What we know tomorrow may make the things that seem impossible today completely achievable.

Is Moore's Law even relevant today? The era of the personal computer has been dominated by a sense that the consumer needs the latest and greatest machine on the market. But today, some people are questioning that philosophy. Part of that is due to changes in consumer behavior -- many computer owners use their computers for simple tasks like browsing the Web or sending e-mail. These applications don't put a heavy demand on the computer's hardware.

Another reason powerful PCs aren't as necessary is the rise in popularity of cloud computing. Cloud computing shifts the burden of processing and storing data to a network of computers. Users can access applications and information using the Internet, so they don't necessarily need a powerful machine of their own to take advantage of cloud computing.

As a result, devices like smartphones and netbooks are becoming more popular. These devices don't have the raw processing power of the latest desktop and laptop computers. But they still allow users to access the applications and data they need.

Lessons Learned from Moore's Law

If consumers continue to purchase devices like smartphones and netbooks, microprocessor manufacturers will have less of an incentive to meet the expectations of Moore's Law. If there's no market for ultra-powerful processors, then we've hit the economic barrier that could bring an end to the cycle.

That said, some facilities may still push the limits of integrated circuit production. While the average consumer may not see the value in a powerful PC, research facilities still rely on the fastest processors in production. More powerful microprocessors can aid in everything from weather prediction to cosmological studies.

One lesson we can draw from Moore's Law and the semiconductor industry is that pure research can yield beneficial results for society. The engineers at Bell Laboratories had no guarantee that their experimental work with the earliest transistor models would yield positive results. But their research and hard work spawned an industry that changed the way we live. It's an example of how scientific research can have a dramatic impact on our lives even when there's no obvious or immediate benefit.

Perhaps the most important lesson we can take from Moore's Law is that we shouldn't be too quick to say something is impossible. Henry L. Ellsworth, the commissioner of the U.S. Patent Office in 1843, once said "the advancement of the arts, from year to year, taxes our credulity and seems to presage the arrival of that period when human improvement must end" [source: Sass]. Ellsworth was pointing out that the rate of human inventiveness and innovation was so impressive that it was hard to believe. He was not, as some have implied, suggesting that everything that could be invented had been invented already. And as a matter of fact, the rate of innovation has only increased since then.

While we know a great deal about building electronics, there may be far more we don't. Moore's Law serves as a motivational device for inventive engineers. They don't want to disappoint Gordon Moore, even if it means they have to find unique solutions to seemingly impossible problems.

To learn more about Moore's Law and other related topics, take a look at the links that follow.

Lots More Information


  • ACEPT W3 Group. "Color and Light." Department of Physics and Astronomy, Arizona State University. 2000. (Feb. 4, 2009) http://acept.asu.edu/PiN/rdg/color/color.shtml
  • Cray, Inc. "Cray History." (Feb. 3, 2009) http://www.cray.com/About/History.aspx
  • Cray, Inc. "Oak Ridge National Laboratory's Cray XT5 "Jaguar" Supercomputer." (Feb. 3, 2009) http://www.cray.com/Products/XT/Product/ORNLJaguar.aspx
  • Fox, Karen C. and Shurkin, Joel. "Transistorized!" ScienCentral, Inc. and the American Institute of Physics. 1999. (Feb. 3, 2009) http://www.pbs.org/transistor/
  • Intel. "Microprocessor Quick Reference Guide." (Feb. 5, 2009) http://www.intel.com/pressroom/kits/quickreffam.htm
  • Kanellos, Michael. "Myths of Moore's Law." CNET. June 11, 2003. (Feb. 4, 2009) http://news.cnet.com/Myths-of-Moores-Law/2010-1071_3-1014887.html
  • Moore, Gordon E. "Cramming more components onto integrated circuits." Electronics. April 19, 1965. Vol. 38, No. 8.
  • Moore, Gordon E. "Progress in Digital Integrated Electronics." Technical Digest 1975, International Electron Devices Meeting, IEEE. 1975.
  • National Nanotechnology Initiative. "The Scale of Things - Nanometers and More." U.S. Department of Energy. (Feb. 4, 2009) http://www.nano.gov/html/facts/The_scale_of_things.html
  • Nobelprize.org. "The History of the Integrated Circuit." (Feb. 4, 2009) http://nobelprize.org/educational_games/physics/integrated_circuit/history/
  • Sass, Samuel. "A Patently False Patent Myth." The Skeptical Inquirer. Spring 1989. Vol. 13, pp. 310-313.
  • Schaller, Bob. "The Origin, Nature, and Implications of 'Moore's Law'." Microsoft Research. Sept. 26, 1996. http://research.microsoft.com/en-us/um/people/gray/moore_law.html
  • Stern, Dr. David P. "Quantum Tunneling." NASA. Feb. 13, 2005. (Feb. 5, 2009) http://www-istp.gsfc.nasa.gov/stargaze/Q8.htm
  • Stokes, Jon. "Understanding Moore's Law." Ars Technica. Sept. 27, 2008. (Feb. 3, 2009) http://arstechnica.com/hardware/news/2008/09/moore.ars
  • Welter, Kira. "Nano-objects under the light microscope." Royal Society of Chemistry. March 6, 2007. (Feb. 4, 2009) http://www.rsc.org/chemistryworld/News/2007/March/06030701.asp
