Two of the major approaches to designing computer systems are serial (single processor) systems and parallel (multiprocessor) systems.
Single processor systems have just one processor (computer chip). Although modern processors are very fast, performing billions of calculations per second, these systems are still limited in speed. The processor performs one calculation at a time, then loads some new data and performs another calculation, over and over again in rapid-fire fashion.
Parallel processor systems, on the other hand, can have many computer processors working in tandem. Part of the computer breaks each big problem down into many smaller calculations. The central processor then assigns each of these smaller calculations to one of many processors. Each processor works on its share of the problem by itself, at the same time as all the other processors. When the processors finish their small calculations, they report their results back to the central processor, which then assigns them more work. Although a small amount of performance is lost to the overhead of coordinating tasks, the overall gain in computing efficiency is enormous when tackling complex computational projects.
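The divide-and-combine pattern described above can be sketched in a few lines of Python. This is a simplified, hypothetical illustration (not how a supercomputer actually schedules work): a "big problem" (summing the squares of a million numbers) is broken into chunks, each chunk is handed to a separate worker process, and a coordinator combines the partial results. The function names `partial_sum` and `parallel_sum_of_squares` are invented for this example.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """One worker's share of the problem: sum the squares over its range."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    """Break the big problem into chunks, farm them out, combine results."""
    step = n // workers
    chunks = [(k * step, (k + 1) * step) for k in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs any remainder
    with Pool(workers) as pool:
        # Each chunk runs in its own process, all at the same time.
        partials = pool.map(partial_sum, chunks)
    # The coordinator combines the partial results into the final answer.
    return sum(partials)

if __name__ == "__main__":
    total = parallel_sum_of_squares(1_000_000)
    print(total == sum(i * i for i in range(1_000_000)))
```

The coordination overhead mentioned above shows up here too: starting worker processes and passing results back takes time, so for a tiny problem the serial loop would actually be faster. The payoff comes only when each chunk is substantial.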
As an example, NCAR's Cheyenne supercomputer, which began operations in January 2017, has more than 145,000 processor cores.
Supercomputers with many parallel processors are used to run models of extremely complex systems. Examples include:
- Global climate models
- Models of supernova explosions in space
- Aerodynamic modeling of state-of-the-art jet aircraft
- The complex folding patterns of proteins, which help us understand diseases like Alzheimer's and cystic fibrosis
- Simulations of the ways tsunamis interact with coastlines
- Modeling nuclear explosions, limiting the need for real nuclear testing