
How Operating Systems Work

Process Control Block

All of the information needed to keep track of a process when switching is kept in a data package called a process control block. The process control block typically contains the following (a sketch of one possible layout follows the list):

  • An ID number that identifies the process
  • Pointers to the locations in the program and its data where processing last occurred
  • Register contents
  • States of various flags and switches
  • Pointers to the upper and lower bounds of the memory required for the process
  • A list of files opened by the process
  • The priority of the process
  • The status of all I/O devices needed by the process
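The exact layout differs from one operating system to another, but a simplified C structure can make the idea concrete. This is only an illustrative sketch; the field names here are hypothetical, and a real kernel's process control block tracks far more state.

```c
#include <stdint.h>

#define MAX_OPEN_FILES 16

enum proc_state { PROC_READY, PROC_RUNNING, PROC_SUSPENDED, PROC_TERMINATED };

/* Hypothetical, simplified process control block. */
struct pcb {
    uint32_t        pid;                        /* ID number that identifies the process        */
    uintptr_t       program_counter;            /* where in the program execution last stopped  */
    uintptr_t       stack_pointer;              /* where in its data/stack it left off          */
    uint64_t        registers[16];              /* saved register contents                      */
    uint32_t        flags;                      /* states of various flags and switches         */
    uintptr_t       mem_lower, mem_upper;       /* bounds of the memory used by the process     */
    int             open_files[MAX_OPEN_FILES]; /* list of files opened by the process          */
    int             priority;                   /* scheduling priority of the process           */
    uint32_t        io_status;                  /* status of I/O devices the process is using   */
    enum proc_state state;                      /* ready, running, suspended, ...               */
};
```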

Each process has a status associated with it. Many processes consume no CPU time until they get some sort of input. For example, a process might be waiting for a keystroke from the user. While it is waiting for the keystroke, it uses no CPU time; it is "suspended." When the keystroke arrives, the OS changes the process's status. When the status changes, for example from pending to active or from suspended to running, the information in the process control block must be used like the data in any other program to direct execution of the task-switching portion of the operating system.
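As a rough sketch of that status change, the fragment below uses hypothetical names (not any real kernel's API) to show a suspended process being marked ready again when the input it was waiting for arrives; the scheduler would then use the process control block to resume it.

```c
#include <stdio.h>

enum proc_state { PROC_RUNNING, PROC_READY, PROC_SUSPENDED };

struct pcb {
    int             pid;
    enum proc_state state;
};

/* Called from, say, the keyboard interrupt handler: the process was
   suspended waiting for a keystroke, so mark it ready to run again. */
void wake_process(struct pcb *p)
{
    if (p->state == PROC_SUSPENDED) {
        p->state = PROC_READY;    /* status change: suspended -> ready            */
        /* later, the scheduler restores the saved registers and program
           counter from this PCB and sets the state to PROC_RUNNING     */
    }
}

int main(void)
{
    struct pcb editor = { .pid = 42, .state = PROC_SUSPENDED };
    wake_process(&editor);        /* the keystroke has arrived                    */
    printf("process %d state: %d\n", editor.pid, editor.state);
    return 0;
}
```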


This process swapping happens without direct user intervention, and each process gets enough CPU cycles to accomplish its task in a reasonable amount of time. Trouble may begin if the user tries to have too many processes functioning at the same time. The operating system itself requires some CPU cycles to perform the saving and swapping of all the registers, queues and stacks of the application processes.
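A schematic sketch of that swap, using hypothetical structures, might look like the following: the outgoing process's CPU state is copied into its process control block, and the incoming process's saved state is copied back onto the processor. Those copies are the overhead the operating system pays on every switch; a real context switch is done in architecture-specific assembly.

```c
#include <stdint.h>

struct cpu_context {
    uint64_t  registers[16];
    uintptr_t program_counter;
    uintptr_t stack_pointer;
};

struct pcb {
    int                pid;
    struct cpu_context saved;   /* register contents, PC and SP at last switch   */
};

static struct cpu_context cpu;  /* stands in for the real processor state        */

void context_switch(struct pcb *outgoing, struct pcb *incoming)
{
    outgoing->saved = cpu;      /* save the old task's registers and stacks      */
    cpu = incoming->saved;      /* restore the new task's saved state            */
    /* these copies themselves burn CPU cycles -- the switching overhead
       that grows as more and more processes compete for the processor  */
}

int main(void)
{
    struct pcb a = { .pid = 1 }, b = { .pid = 2 };
    context_switch(&a, &b);     /* a is paused, b's saved state now runs         */
    return 0;
}
```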

Each process requires its own memory allocation, but the operating system must balance the load. The more applications you open, the less memory each app has to work with. If enough processes are started, and if the operating system hasn't been carefully designed, the system begins to use more of its available CPU cycles to swap between processes than to actually run them. When this happens, it's called thrashing, and it usually requires some sort of direct user intervention to stop processes and bring order back to the system. It's a lot like trying to do too many things at once yourself: once you've hit your limit, you feel overwhelmed. That's what thrashing is to a computer.

Developers design their systems to avoid thrashing, but you can do your part by adding more RAM to your computer and closing applications you aren't using. That helps your OS manage resources more effectively and keep things running smoothly.

So far, all the scheduling we've discussed has concerned a single CPU. In a system with two or more CPUs, the operating system must divide the workload among the CPUs, trying to balance the demands of the required processes with the available cycles on the different processors. Asymmetric operating systems use one processor for their own needs and divide application processes among the remaining CPUs. Symmetric operating systems divide the work between the various processors, balancing demand versus availability even when the operating system itself is all that's running. They share the available memory. In fact, symmetric processing also applies to using multiple processor cores on the same chip.

Depending on your computer and operating system, you may be using symmetric processing right now.
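If you want to check, one simple probe is to ask the operating system how many processors it currently has online. On Linux and macOS the sysconf call below works, though _SC_NPROCESSORS_ONLN is a widely supported extension rather than a strict POSIX requirement; treat this as a minimal sketch.

```c
#include <stdio.h>
#include <unistd.h>

/* Ask the OS how many processors/cores it is scheduling work across. */
int main(void)
{
    long online = sysconf(_SC_NPROCESSORS_ONLN);
    if (online < 0) {
        perror("sysconf");
        return 1;
    }
    printf("processors online: %ld\n", online);
    return 0;
}
```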

Even if the operating system is the only software with execution needs, the processors are not the only resource that needs to be scheduled. Memory management is the next crucial step in making sure that all processes run smoothly.