Server computers -- machines that host files and applications on computer networks -- have to be powerful. Some have central processing units (CPUs) with multiple processors that give these servers the ability to run complex tasks with ease. Computer network administrators usually dedicate each server to a specific application or task. Many of these tasks don't play well with others -- each needs its own dedicated machine. One application per server also makes it easier to track down problems as they arise. It's a simple way to streamline a computer network from a technical standpoint.
There are a couple of problems with this approach, though. One is that it doesn't take advantage of modern server computers' processing power. Most servers use only a small fraction of their overall processing capabilities. Another problem is that as a computer network gets larger and more complex, the servers begin to take up a lot of physical space. A data center might become overcrowded with racks of servers consuming a lot of power and generating heat.
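To get a feel for the underutilization problem, here's a minimal sketch of how you might estimate how busy a server actually is. It uses only the Python standard library; the function name is an illustrative assumption, and real monitoring tools sample per-CPU counters rather than relying on the load average alone.

```python
import os

def estimated_utilization():
    # Rough estimate: the 1-minute load average (runnable processes)
    # divided by the number of CPU cores. A dedicated single-purpose
    # server often sits far below 1.0 (i.e., below 100 percent busy).
    load_1min, _, _ = os.getloadavg()
    cores = os.cpu_count() or 1
    return load_1min / cores

print(f"Approximate CPU utilization: {estimated_utilization():.0%}")
```

On a lightly loaded machine this number is typically a small fraction, which is exactly the spare capacity that virtualization tries to put to work.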
Server virtualization attempts to address both of these issues in one fell swoop. By using specially designed software, an administrator can convert one physical server into multiple virtual machines. Each virtual server acts like a unique physical device, capable of running its own operating system (OS). In theory, you could create enough virtual servers to use all of a machine's processing power, though in practice that's not always the best idea.
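The partitioning idea above can be sketched as a toy model -- not a real hypervisor, and the class and attribute names here are illustrative assumptions, not any vendor's API. It shows one physical machine's cores and memory being divided among several virtual servers, each nominally running its own OS.

```python
# Toy model of carving one physical server into virtual servers.
# Real hypervisors schedule and isolate VMs; this only tracks resources.

class PhysicalServer:
    def __init__(self, cores, ram_gb):
        self.cores = cores
        self.ram_gb = ram_gb
        self.vms = []

    def create_vm(self, name, cores, ram_gb, os_name):
        # This simple model refuses to overcommit; real hypervisors
        # often allow controlled overcommitment of CPU and memory.
        used_cores = sum(vm["cores"] for vm in self.vms)
        used_ram = sum(vm["ram_gb"] for vm in self.vms)
        if used_cores + cores > self.cores or used_ram + ram_gb > self.ram_gb:
            raise ValueError("not enough free resources on the host")
        vm = {"name": name, "cores": cores, "ram_gb": ram_gb, "os": os_name}
        self.vms.append(vm)
        return vm

host = PhysicalServer(cores=16, ram_gb=64)
host.create_vm("web", cores=4, ram_gb=16, os_name="Linux")
host.create_vm("mail", cores=4, ram_gb=16, os_name="Windows Server")
print(len(host.vms))  # 2 virtual servers sharing one physical machine
```

Note that each virtual server gets its own OS name here -- a stand-in for the real point that every virtual machine can boot a different operating system on the same hardware.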
Virtualization isn't a new concept. Computer scientists have been creating virtual machines on supercomputers for decades, but the technique has become feasible for ordinary servers only in recent years. In the world of information technology (IT), server virtualization is a hot topic. It's still a young technology, and several companies offer different approaches.
Why are so many companies using server virtualization in their computer networks? Find out in the next section.