
How Mini PCs Work

The Big Deal About Getting Smaller
The Raspberry Pi computer has everything you need for basic computing crammed onto a single circuit board.
Image courtesy of the Raspberry Pi Foundation

To understand how a PC can fit onto something as small as a USB stick, we need to look at the history of miniaturization in the computer industry. One of the most important developments for computers -- and electronics in general -- happened in a lab in 1947.

That's when John Bardeen, William Shockley and Walter Brattain created the first transistor. They worked for Bell Laboratories and had been experimenting with germanium crystals, an early semiconductor material that had come into use near the end of World War II. Brattain wrapped a thin strip of gold around the point of a triangular piece of plastic, leaving a tiny gap right at the tip. He suspended the plastic triangle so that it just barely made contact with the germanium crystal.

Brattain discovered that a small current applied to one side of the gold strip emerged from the other side as an amplified signal. Although this early transistor wasn't a practical component for electronic devices, it paved the way toward replacing the vacuum tube. Because vacuum tubes are bulky and give off a lot of heat, swapping them for transistors opened up new possibilities in computer design.

Over the course of several years, engineers refined the design of the transistor. Eventually, they were able to miniaturize transistors so that many could fit on a small chip of semiconductor material -- a material that conducts electricity under some conditions and blocks it under others.

Then, in 1965, a man named Gordon Moore made an observation that would become something of a self-fulfilling prophecy. He noted that every so often -- depending on whom you ask and when, the period ranges between 18 and 24 months -- improvements in technology and manufacturing processes allow the number of discrete components on a square inch (6.5 square centimeters) of silicon wafer to double. He saw that chipmakers would keep finding ways to shrink components and then refine their manufacturing processes until building the more powerful chips made financial sense. Today, we call this observation Moore's Law.

One way to interpret Moore's Law is to say that computer processors double in processing power every 18 months or so. Another way is to say that at the end of any 18-month span of time, engineers will discover ways to cram twice as many transistors onto a silicon wafer as they did when they started. Yet another way is to say that the size of discrete components on processors gets dramatically smaller every 18 months.
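To get a feel for what that doubling rate adds up to, here's a back-of-the-envelope sketch in Python. The fixed 18-month doubling period and the starting count of 2,300 transistors (roughly the scale of an early-1970s microprocessor) are illustrative assumptions for this example, not figures from the article or from Moore's original paper.

# Rough sketch of Moore's Law as simple exponential doubling.
# Assumptions (illustrative only): a fixed 18-month doubling period
# and a starting count of 2,300 transistors, roughly the scale of an
# early-1970s microprocessor.

def transistors_after(months, start_count=2300, doubling_period=18):
    """Estimate the transistor count after `months` months, assuming
    the count doubles once every `doubling_period` months."""
    return start_count * 2 ** (months / doubling_period)

# After 10 years (120 months) of 18-month doublings:
print(round(transistors_after(120)))   # about 233,000 -- roughly 100 times the starting count

The point of the arithmetic is that steady doubling compounds fast: in a single decade under these assumptions, the count grows by a factor of about 100, which is why chips keep getting dramatically more capable without getting any bigger.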

This means that not only are our computers getting more powerful -- far more powerful than the building-sized monsters from the early days of computing -- but they're also getting smaller. And if you're willing to sacrifice a few features for the sake of size, you can get very small indeed.