How Shared Computing Works


In a shared computing system, a user can access the processing power of an entire network of computers.

Imagine that you've been assigned the task of pushing a very heavy car up a hill. You're allowed to recruit people who aren't doing anything else to help you move the car. You've got two choices: You can look around for one person big and strong enough to do it all by him or herself, or you could grab several average people to push together. While you might eventually find someone large enough to push the car alone, most of the time it will be easier to just gather a group of average-sized people. It might sound strange, but shared computer systems use the same principle.

When a computational problem is really complex, it can take a single computer a long time to process it -- millions of days, in some cases. Even supercomputers have processing limitations. They're also rare and expensive. Many research facilities require a lot of computational power, but don't have access to a supercomputer. For these organizations, shared computing is often an attractive alternative to supercomputers.

Shared computing is a kind of high-performance computing. A shared computing system is a network of computers that work together to accomplish a specific task. Each computer donates part of its processing power -- and sometimes other resources -- to help achieve a goal. By networking thousands of computers together, a shared computing system can equal or even surpass the processing power of a supercomputer.
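To get a feel for the numbers, here's a rough back-of-envelope calculation. The per-machine throughput, volunteer count and idle fraction below are illustrative assumptions, not figures from any real project:

```python
# Back-of-envelope estimate of a shared computing system's throughput.
# All three numbers are illustrative assumptions, not measurements.
gflops_per_pc = 10          # assumed sustained GFLOPS of one desktop PC
volunteers = 50_000         # assumed number of participating computers
idle_fraction = 0.25        # assumed share of time each PC donates

aggregate_gflops = gflops_per_pc * volunteers * idle_fraction
print(f"Effective throughput: {aggregate_gflops / 1000:.0f} TFLOPS")
```

Even with each machine donating only a quarter of its time, the pooled total lands in supercomputer territory.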

Most of the time, your computer isn't using all of its computational resources, and often it's switched on but not being used at all. A shared computing system takes advantage of these resources that would otherwise go to waste.

Shared computing systems are great for certain complex problems, but aren't useful for others. They can be complicated to design and administer. While several computer scientists are working on a way to standardize shared computing systems, many existing systems rely on unique hardware, software and architecture.

What pieces make up a typical shared computing system? Keep reading to find out.


Shared Computing Systems

In a traditional high-performance computing system, all the computers are the same model and run on the same operating system. Much of the time, every application run on the system has its own dedicated server. Sometimes the entire network relies on hardwired connections, meaning all the elements in the system connect to each other through various hubs. The entire system is efficient and elegant.

A shared computing system can be just as efficient, but it doesn't necessarily look very elegant. A shared computing system is limited only by the software it relies upon to connect computers together. With the right software, a shared computing system can work on different kinds of computers running on different operating systems. Network connections might exist over hardwired networks, local area networks (LANs), wide area networks (WANs) or the Internet. The biggest advantage a shared computing system has over traditional HPC systems is that it's easier to add more resources to a shared computing system. Anyone with a computer capable of running the system's software can join.

The system's software is what gives it access to each computer's unused processing power. Every computer connected to the system must have this software installed in order to participate. There's no definitive shared computing software kit, but in general the software must do the following:

  • Contact the system's administrative server to get a chunk of data
  • Monitor the host computer's CPU usage and utilize the processing power whenever it's available
  • Send analyzed data back to the administrative server in exchange for new data
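Put together, those three duties amount to a simple loop. The sketch below is a toy illustration of that loop, with stand-in functions in place of real networking and CPU monitoring:

```python
import random
import time

def fetch_chunk(server_queue):
    """Stand-in for contacting the administrative server for a chunk of data."""
    return server_queue.pop(0) if server_queue else None

def cpu_is_idle():
    """Stand-in for a real idle check (e.g. sampling OS load averages)."""
    return random.random() < 0.8   # pretend the CPU is free 80% of the time

def analyze(chunk):
    """Stand-in for the project's actual computation."""
    return sum(chunk)

def client_loop(server_queue, results):
    """The three duties listed above: fetch a chunk, compute when the
    CPU is idle, and send the result back in exchange for more work."""
    while (chunk := fetch_chunk(server_queue)) is not None:
        while not cpu_is_idle():        # politely wait for spare cycles
            time.sleep(0.01)
        results.append(analyze(chunk))

work = [[1, 2], [3, 4], [5, 6]]
done = []
client_loop(work, done)
print(done)   # [3, 7, 11]
```

Real clients replace the stand-ins with actual network calls and OS-level load monitoring, but the fetch-compute-report cycle is the same.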

Shared computing systems have a relatively narrow use. They're great for solving big computational problems that scientists can break down into smaller sections. If breaking the problem into smaller chunks is particularly simple, it's called an embarrassingly parallel problem.

For small computational problems or problems that aren't easy to break up, shared computing systems are less useful. The whole point of the system is to decrease the amount of time it takes to finish complex calculations. It won't necessarily increase the speed of simple calculations across the network.
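As an example of a problem that breaks up easily, counting prime numbers in a large range can be split into sub-ranges that need no communication with one another. The chunk boundaries and worker count below are arbitrary, chosen only for illustration:

```python
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in the half-open range [lo, hi) -- one independent chunk."""
    lo, hi = bounds
    return sum(n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
               for n in range(lo, hi))

if __name__ == "__main__":
    # Split one big range into chunks that need nothing from each other --
    # the hallmark of an embarrassingly parallel problem.
    chunks = [(i, i + 2500) for i in range(2, 10002, 2500)]
    with Pool(4) as pool:
        print(sum(pool.map(count_primes, chunks)))
```

Because no chunk depends on another's result, the work scales out to as many computers as volunteer.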

What are the different parts of a shared computing system? Keep reading to find out.

Shared Computing Architecture

Unlike grid computing systems -- which in theory can have as many network interface points as there are users -- a shared computing system usually only has a few points of control. That's because most shared computing systems have specific purposes and aren't general utilities.

It's useful to imagine a typical shared computing system as having a front end and a back end. On the front end are all the computers that are volunteering CPU resources to the project. On the back end are the computers and servers that manage the overall project, divide the main task into smaller chunks, communicate with the computers on the front end and store the information the front end computers send after completing an analysis.

In general, the job of dividing up the computational problem into smaller chunks falls to a program on a back end computer, usually a server. This computer uses specific software to divide up the task into smaller pieces that are easier for an average computer system to manage. When contacted by the companion software installed on a front end computer, the server will send data over the network for analysis. Upon receiving a completed analysis job, the server will direct the data to an appropriate database.
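A minimal sketch of such a back-end dispatcher might look like this. The class and method names are hypothetical, chosen only to mirror the request/submit cycle described above:

```python
from collections import deque

class WorkServer:
    """Hypothetical sketch of a back-end dispatcher: split one big job
    into chunks, hand them out on request, and collect finished results."""

    def __init__(self, data, chunk_size):
        self.pending = deque(data[i:i + chunk_size]
                             for i in range(0, len(data), chunk_size))
        self.results = []            # stands in for the results database

    def request_work(self):
        """Called by the companion software on a front-end computer."""
        return self.pending.popleft() if self.pending else None

    def submit_result(self, result):
        """Store a completed analysis; the client then asks for more work."""
        self.results.append(result)

server = WorkServer(list(range(10)), chunk_size=3)
while (chunk := server.request_work()) is not None:
    server.submit_result(sum(chunk))   # pretend client-side analysis
print(server.results)   # [3, 12, 21, 9]
```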

The system's administrators will usually use another computer to piece completed analyses together. The end goal is to come to a solution of a very large problem by solving it in tiny bits. In many cases, the system's administrators will publish the results so that others can benefit from the information.

If this architecture description seems a little vague, it's because there's no single way to create and administer a shared computing system. Each system has its own unique software and architecture. In most cases, a programmer customizes the software for the specific system's goals. While two different shared computer systems might work the same way in general, once you dig down into details, they can look very different.

What are some shared computing applications, and why do they need specialized software? Find out in the next section.

Shared Computing Applications

Scientists are using shared computing systems to analyze complex computational problems.

There are dozens of active shared computing system projects, each with its own networks and computational tasks. Some of these networks overlap -- it's possible for a user to participate in more than one network, though it does mean that different projects have to divvy up the idle resources. As a result, each individual task takes a little longer.

One example of a shared computer system is the Grid Laboratory of Wisconsin (GLOW). The University of Wisconsin-Madison uses GLOW for multiple projects, which in some ways sets it apart from most shared computing systems. One project uses the GLOW network to study the human genome. Another takes advantage of GLOW's resources to research potential treatments for cancer. Unlike the shared computing systems that are dedicated to a single task, GLOW can accommodate multiple projects.

The software that makes GLOW possible is called Condor. It's Condor's job to seek out idle processors within the GLOW network and use them to work on individual projects. When one project is inactive, Condor borrows its resources for the other projects. However, if any previously inactive project comes back online, Condor releases the respective computers' processors.
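The borrowing behavior described above can be illustrated with a toy allocator that lends idle processors to whichever projects currently have work. This is purely illustrative and is not how Condor is actually implemented:

```python
def allocate_processors(processors, projects):
    """Toy round-robin allocator: processors go only to projects that
    currently have work; a dormant project's share drops to zero until
    it comes back online. (Illustrative only -- not Condor's design.)"""
    active = [p for p in projects if p["has_work"]]
    shares = {p["name"]: 0 for p in projects}
    for i in range(processors):
        owner = active[i % len(active)]
        shares[owner["name"]] += 1
    return shares

projects = [{"name": "genome", "has_work": True},
            {"name": "cancer", "has_work": True},
            {"name": "physics", "has_work": False}]
print(allocate_processors(8, projects))
# {'genome': 4, 'cancer': 4, 'physics': 0}
```

When the dormant project wakes up, rerunning the allocation with its `has_work` flag set would immediately give it back a share of the processors.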

Some other shared computing systems include:

  • SETI@home: A project that analyzes data from radio telescopes in search of intelligent extraterrestrial life.
  • Africa@home: This project dedicates computer power to research programs designed to improve the quality of life in Africa with a focus on malaria control initiatives.
  • Proteins@home, Predictor@home, Rosetta@home and Folding@home: Each of these projects studies proteins in various ways.
  • Einstein@home, Cosmology@home, Milkyway@home and Orbit@home: These projects study astronomical data.

Other projects study everything from the physics of fluid dynamics to simulated nanotechnology environments.

So shared computing systems can be really useful, but are there any dangers? Read on, if you aren't scared.

Concerns About Shared Computing

Any time a system allows one computer access to another computer's resources, questions come up about safety and privacy. What stops the program's administrators from snooping around a particular user's computer? If the administrators can tap into CPU power, can they also access files and sensitive data?

The simple answer to these questions is that it depends on the software each participating computer must install to join the system. Everything a shared computing system can do with an individual computer depends on that software application. Most of the time, the software doesn't give anyone direct access to the contents of the host computer. Everything is automated, and only the CPU's processing power is accessible.

There are exceptions, though. A zombie computer system, or botnet, is an example of a malicious shared computing system. Headed by a hacker, a zombie computer system turns innocent computer owners into victims. First, the victim must install specific software on his or her computer before a hacker can access it -- usually an application disguised as a harmless program. Once it's installed, the hacker can use the victim's computer to perform malicious tasks like launching a distributed denial of service (DDoS) attack or sending out massive amounts of spam. A botnet can span hundreds or thousands of computers, all without the victims being aware of what's going on.

Shared computing systems also need a plan in place for the times when a particular computer goes offline or otherwise becomes unavailable for an extended time. Most systems have a procedure in place that puts a time limit on each task. If the participant's computer doesn't complete the task in a certain amount of time, the control server will cancel that computer's task and assign the task to a new computer.
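That timeout-and-reassign procedure can be sketched as follows. The class, its method names and the deadlines are hypothetical, meant only to illustrate the rule:

```python
import time

class TaskTracker:
    """Hypothetical sketch of the timeout rule described above: if a task
    isn't finished within its time limit, reassign it to a spare computer."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.assignments = {}        # task id -> (worker, time assigned)

    def assign(self, task_id, worker, now=None):
        self.assignments[task_id] = (worker,
                                     now if now is not None else time.time())

    def reassign_expired(self, spare_workers, now=None):
        """Cancel overdue tasks and hand them to spare computers."""
        now = now if now is not None else time.time()
        moved = []
        for task_id, (worker, t0) in list(self.assignments.items()):
            if now - t0 > self.timeout and spare_workers:
                self.assign(task_id, spare_workers.pop(), now)
                moved.append(task_id)
        return moved

tracker = TaskTracker(timeout=60)
tracker.assign("chunk-1", "pc-a", now=0)    # assigned 100 seconds ago
tracker.assign("chunk-2", "pc-b", now=30)   # assigned 70 seconds ago
print(tracker.reassign_expired(["pc-c"], now=100))   # ['chunk-1']
```

Here only one spare computer is available, so the oldest overdue task is reassigned first and the other waits for the next pass.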

One criticism of shared computing is that while it capitalizes on idle processors, it increases power consumption and heat output. As computers use more of their processing power, they require more electricity. Some shared computing system administrators urge participants to leave their computers on all the time so that the system has constant access to resources. Sometimes a shared computing system initiative comes into conflict with green initiatives, which emphasize energy conservation.

Perhaps the biggest criticism of shared computing systems is that they aren't comprehensive enough. While they pool processing power resources together, they don't take advantage of other resources like storage. For that reason, many organizations are looking at implementing grid computing systems, which take advantage of more resources and allow a larger variety of applications to leverage networks.

Are shared computing systems the future, or will grid computing systems take their place? As both models become more commonplace, we'll see which system wins out. To learn more about shared computing and other topics, hop on over to the next page and follow the links.
