How Operating Systems Work

Processor Management

The heart of managing the processor comes down to two related issues:

  • Ensuring that each process and application receives enough of the processor's time to function properly
  • Using as many processor cycles as possible for real work

The process is the basic unit of software that the operating system deals with in scheduling the work done by the processor. Applications contain at least one process, and within each process there is at least one thread. Threads execute parts of the process code, and operating systems manage units even as small as threads, assigning them the resources they need to function properly.
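The process-and-thread relationship can be sketched in Python (a simplified illustration, not a model of any real kernel): the script below is one process, and the operating system schedules its threads independently even though they share the process's memory.

```python
import threading

results = []
lock = threading.Lock()  # threads share the process's memory, so access is synchronized

def worker(name):
    # Each thread executes part of the process's code.
    with lock:
        results.append(name)

# One process (this script) containing several threads.
threads = [threading.Thread(target=worker, args=(f"thread-{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait until the OS has run every thread to completion

print(sorted(results))  # sorted, because the OS may schedule the threads in any order
```

Note that the completion order is up to the operating system's scheduler, which is why the output is sorted before printing.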


It's tempting to think of a process as an application, but that gives an incomplete picture of how processes relate to the operating system and hardware. The application you see (word processor, spreadsheet or game) is, indeed, a process, but that application may cause several other processes to begin, for tasks like communications with other devices or other computers. There are also numerous processes that run without giving you direct evidence that they ever exist. For example, an operating system can have dozens of background processes running to handle the network, memory management, disk management, virus checks and so on.

A process, then, is software that performs some action and can be controlled — by a user, by other applications or by the operating system.

It is processes, rather than applications, that the operating system controls and schedules for execution by the CPU. In a single-tasking system, the schedule is straightforward. The operating system allows the application to begin running, suspending the execution only long enough to deal with interrupts and user input.
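That single-tasking schedule can be sketched as a simple loop (the names and events here are illustrative, not a real OS interface): the application runs step by step, suspended only long enough to service whatever interrupts or user input have arrived.

```python
from collections import deque

# Hypothetical pending events waiting for the CPU's attention.
interrupts = deque(["keyboard", "timer"])
app_steps = ["step-1", "step-2", "step-3"]
log = []

for step in app_steps:
    # Suspend the application just long enough to deal with pending interrupts.
    while interrupts:
        log.append(f"handle {interrupts.popleft()}")
    # Then let the application continue running.
    log.append(f"run {step}")

print(log)
```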

Interrupts are special signals sent by hardware or software to the CPU. It's as if some part of the computer suddenly raised its hand to ask for the CPU's attention in a lively meeting. Sometimes the operating system schedules the priority of processes so that interrupts are masked; that is, the operating system ignores interrupts from some sources so that a particular job can be finished as quickly as possible. Some interrupts are so important that they can't be ignored, such as those signaling error conditions, problems with memory, or the message telling you that your laptop's battery is about to run out. These non-maskable interrupts (NMIs) must be dealt with immediately, regardless of the other tasks at hand.
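The distinction between masked and non-maskable interrupts can be sketched like this (the source names and the set of NMI causes are illustrative assumptions, not a real hardware model): while ordinary interrupts are masked they wait in a queue, but an NMI is serviced at once.

```python
from collections import deque

# Assumed non-maskable sources for this sketch.
NMI = {"power-failure", "memory-error"}

pending = deque()   # masked interrupts wait here
handled = []
masked = True       # the OS has masked ordinary interrupts to finish a job quickly

def raise_interrupt(source):
    if source in NMI:
        handled.append(source)    # can't be ignored: service immediately
    elif masked:
        pending.append(source)    # deferred until interrupts are unmasked
    else:
        handled.append(source)

raise_interrupt("keyboard")       # ordinary interrupt while masked: it waits
raise_interrupt("power-failure")  # NMI: handled right away despite the mask
masked = False                    # the urgent job is done; drain the queue
while pending:
    handled.append(pending.popleft())

print(handled)
```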

While interrupts add some complication to the execution of processes in a single-tasking system, the job of the operating system becomes much more complicated in a multitasking system. Now, the operating system must arrange the execution of applications so that you believe that there are several things happening at once. This is complicated because each CPU can only do one thing at a time. Today's multicore processors and multiprocessor machines can handle more work, but each processor core is still capable of managing one task at a time.

To give the appearance of lots of things happening at the same time, the operating system has to switch between different processes thousands of times a second. Here's how it happens:

  • A process occupies a certain amount of RAM. It also makes use of registers, stacks and queues (forms of computer storage) within the CPU and operating-system memory space.
  • When two processes are multitasking, the operating system allots a certain number of CPU execution cycles to one program.
  • After that number of cycles, the operating system makes copies of all the registers, stacks and queues used by that process, and notes the point at which the process paused in its execution.
  • It then loads all the registers, stacks and queues used by the second process and allows it a certain number of CPU cycles.
  • When those are complete, it makes copies of all the registers, stacks and queues used by the second program, and reloads the first program.
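The steps above can be sketched as a round-robin simulation (heavily simplified: the saved context is just a dictionary, and a "cycle" is one loop iteration; no real scheduler works at this level of abstraction):

```python
# Each process's saved context: a program counter and a "register" file.
processes = {
    "A": {"pc": 0, "registers": {"acc": 0}},
    "B": {"pc": 0, "registers": {"acc": 100}},
}
QUANTUM = 3  # CPU cycles allotted to a process before the OS switches away
trace = []

def run(name, state, cycles):
    # Restore the process's saved context, execute, then save it again.
    for _ in range(cycles):
        state["registers"]["acc"] += 1   # stand-in for real work
        state["pc"] += 1                 # note where execution paused
    trace.append((name, state["pc"]))

for _ in range(2):  # two full rounds of switching between the processes
    for name, state in processes.items():
        run(name, state, QUANTUM)

print(trace)
```

Because each process's registers and program counter are saved at every switch and restored at the next turn, both processes make steady progress even though only one runs at any instant.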