Parallel Processing Computations
Individually, each processor works the same as any other microprocessor. The processors execute machine-level instructions. Based on these instructions, the processors perform mathematical operations on data pulled from computer memory. The processors can also move data to a different memory location.
In a sequential system, it's not a problem if data values change as a result of a processor operation. The processor can incorporate the new value into future processes and carry on. In a parallel system, changes in values can be problematic. If multiple processors are working from the same data but the data's values change over time, the conflicting values can cause the system to falter or crash. To prevent this, many parallel processing systems use some form of messaging between processors.
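One way to picture message passing is with a small sketch, assuming Python's standard `multiprocessing` module: instead of letting workers mutate shared data (which could conflict), each worker computes on its own copy and reports its result as a message on a queue.

```python
from multiprocessing import Process, Queue

def worker(worker_id, numbers, results):
    # Each worker operates on its own copy of the data and sends its
    # result back as a message, rather than changing shared values.
    results.put((worker_id, sum(numbers)))

if __name__ == "__main__":
    results = Queue()
    data = [[1, 2, 3], [4, 5, 6]]
    procs = [Process(target=worker, args=(i, chunk, results))
             for i, chunk in enumerate(data)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # Collect one message per worker; arrival order may vary.
    totals = dict(results.get() for _ in procs)
    print(totals)  # the two partial sums, keyed by worker id
```

Because the workers never write to a common data structure, there is no risk of one processor seeing a half-updated value from another.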
Processors rely on software to send and receive messages. The software allows a processor to communicate information to other processors. By exchanging messages, processors can adjust data values and stay in sync with one another. This is important because once all processors finish their tasks, the system must reassemble all the individual solutions into an overall solution for the original computational problem. Think of it like a puzzle -- if all the processors remain in sync, the pieces of the puzzle fit together seamlessly. If the processors aren't in sync, pieces of the puzzle might not fit together at all.
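The "reassemble the pieces" step can be sketched in a few lines, again assuming Python's `multiprocessing` module: each chunk of the problem is solved independently, and the partial answers are then combined into one overall result.

```python
from multiprocessing import Pool

def solve_piece(chunk):
    # Each processor solves its piece of the puzzle independently.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    # Split one problem (sum of squares from 0 to 999) into pieces.
    pieces = [range(0, 250), range(250, 500), range(500, 1000)]
    with Pool() as pool:
        partials = pool.map(solve_piece, pieces)  # results come back in order
    total = sum(partials)  # reassemble into the overall solution
    print(total)  # prints 332833500
```

`Pool.map` returns the partial results in the same order as the input pieces, which is what lets the final combining step fit them together cleanly.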
Two major factors can limit a parallel system's performance: latency and bandwidth. Latency is the delay between one processor sending data and another processor beginning to receive it. Bandwidth is how much data the connection can carry in a specific amount of time. If a processor can run its algorithm faster than it can transmit the results back to the rest of the system, communication becomes the bottleneck, and a sequential computer system might be more appropriate. A good parallel processing system will have both low latency and high bandwidth.
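A quick back-of-the-envelope calculation shows how latency and bandwidth can erase the benefit of parallelism. The numbers below are purely hypothetical, chosen for illustration:

```python
# Hypothetical numbers for illustration -- not real measurements.
compute_time = 0.002   # seconds for one processor to run its algorithm
latency = 0.0005       # fixed delay, in seconds, for each message
bandwidth = 1e9        # bytes per second the connection can carry
result_size = 8e6      # bytes of results to send back to the system

transfer_time = latency + result_size / bandwidth
print(f"compute: {compute_time * 1000:.1f} ms, "
      f"transfer: {transfer_time * 1000:.1f} ms")
if transfer_time > compute_time:
    print("Communication dominates -- parallelism may not pay off here.")
```

With these made-up figures, sending the results (8.5 ms) takes far longer than computing them (2 ms), which is exactly the situation where a sequential machine could win.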
Sometimes, parallel processing isn't faster than sequential computing. If it takes too long for the computer's CPU to reassemble all the individual parallel processor solutions, a sequential computer might be the better choice. As computer scientists refine parallel processing techniques and programmers write effective software, this might become less of an issue.
To learn more about parallel processing, follow the links below.
Related HowStuffWorks Articles
- How Bits and Bytes Work
- How Boolean Logic Works
- How Computer Memory Works
- How E-commerce Works
- How Electronic Gates Work
- How Encryption Works
- How Hackers Work
- How Home Networking Works
- How Microprocessors Work
- How Operating Systems Work
- How PCs Work
- How Quantum Computers Will Work
- How Semiconductors Work
- How Web Servers Work
More Great Links
- Brown, Martin. "Comparing traditional grids with high-performance computing." IBM. June 13, 2006. http://mcslp.com/gridpdfs/gr-tradhp.pdf
- Dietz, Hank. "Linux Parallel Processing HOWTO." Aggregate.org. Jan. 5, 1998. Retrieved March 29, 2008. http://aggregate.org/LDP/19980105/pphowto.html
- "Parallel processing." SearchDataCenter. March 27, 2007. Retrieved March 29, 2008. http://searchdatacenter.techtarget.com/sDefinition/0,,sid80_gci212747,00.html
- "Programming Models." Tommesani. Retrieved March 29, 2008. http://www.tommesani.com/ProgrammingModels.html