Serial vs. Parallel Processing Activity

Students learn the difference between parallel processing and serial processing computer systems by racing to complete a task of stacking blocks. This uneven race shows how a computer using a parallel processing approach can solve problems much more quickly than a computer using a single serial processor. Supercomputers use thousands of parallel processors to solve extremely complex problems in a fraction of the time required by single-processor computers.

Learning Objectives

  • Students will learn about two major approaches to computing: serial processing and parallel processing.
  • Students will experience the significantly greater speed of the parallel processing approach over the serial processing approach via a hands-on activity with blocks.
  • Students will learn that supercomputers make extensive use of the parallel processing approach, which makes them much faster at performing complex calculations than they otherwise would be.


Time Required

  • Class time: 15-20 minutes

Educational Standards

Next Generation Science Standards

  • Science and Engineering Practices
    • developing and using models
    • planning and carrying out investigations
    • analyzing and interpreting data
    • using mathematics and computational thinking
  • Crosscutting Concepts
    • scale, proportion, and quantity
    • structure and function
  • Disciplinary Core Ideas
    • 3-5-ETS1.A Defining and Delimiting Engineering Problems
    • 3-5-ETS1.B Developing Possible Solutions
    • 4-PS4.C: Information Technologies and Instrumentation
    • MS-PS4.C: Information Technologies and Instrumentation
    • MS-ETS1.A: Defining and Delimiting Engineering Problems
    • MS-ETS1.B: Developing Possible Solutions
    • HS-PS4.C: Information Technologies and Instrumentation
    • HS-ETS1.A: Defining and Delimiting Engineering Problems
    • HS-ETS1.B: Developing Possible Solutions


Materials

  • Stopwatch or similar timer
  • Stackable blocks (Lego® or similar)
    • 6 sets of 18 blocks - each set a single, distinct color or similar shades (e.g. shades of blue)
    • one additional set of 18 blocks of each of the six colors (108 blocks) for the combined bin
  • 7 small bins or bowls to hold groups of blocks


Preparation

Place blocks into bins or bowls.

  • Place 18 blocks of a single color (or similar shades) into each of the first six bins. For example, bin 1 would contain 18 red blocks, bin 2 would contain 18 blue blocks, etc.
  • In the seventh bin, place 18 blocks of each color (a total of 108 blocks!). Keep the colors separated in different areas of the bin.

These instructions presume that groups of 7 students will participate in the activity. The activity can readily be modified for smaller groups of 5 or 6 students as follows:

  • If the group has 5 students, prepare 4 bins with 18 blocks of a single color in each bin and one bin with 18 blocks of each of 4 colors (72 total blocks) using the same colors as the 4 individual color bins.
  • For a group of 6 students, prepare 5 bins with 18 blocks of a single color in each bin and one bin with 18 blocks of each of 5 colors (90 blocks total).


Arrange students into groups of 5-7. In each group, assign students to the following roles:

  • One student in each group will play the role of the serial or single computer processor. Give this student the bin with the large collection of multicolored blocks.
  • 4-6 students will work together as a team, representing the individual processors of a parallel computer system. Give each of these students a bin with 18 blocks of a single color.

The block-stacking task is the same for each simulated computer system, both the 4-6 person parallel processing group and the serial processing "group" of one student. The task is to assemble the blocks into the longest possible stack, with blocks of the same color grouped together within the overall stack. Each of the individual members of the parallel system will work on separate parts of the task and then have a few moments to coordinate their work and complete the task in unison.

Allow students to discover, during the course of the activity, that the parallel processor is much, much faster at completing the task than the single, serial processor. The student who represents a single processing computer will only have a fraction of the block-stacking task completed when the parallel processing group is done with the task. This is a very unfair, uneven race; make sure the student who serves as the single, serial processor understands that she or he is in no way at fault for being much slower to finish the task.

Step-by-step Instructions

  1. Explain the block-stacking task to the students. Tell them that they don't need to stack the blocks vertically; it's easier to assemble the stack horizontally on a table or desk.
  2. Arrange the students into groups of 5-7.
  3. Assign roles within the groups. One student will act as the single, serial processor. The remaining students in each group join the parallel processing system.
  4. Provide students with bins of blocks.
  5. When students and materials are in place, start the timer and tell the students to begin stacking the blocks.
  6. When the individual processors in the multiprocessor system have all finished their sub-tasks (stacking the blocks of a single color), tell them to work together to assemble their single-colored stacks into an overall stack of all of the blocks of various colors in their group. Note the time required by the individual processors to complete their sub-tasks (typically 45-90 seconds). Allow the single, serial processor student to continue working while the multi-processor group is combining their separate stacks.
  7. Note the time again once the students in the parallel processing group have combined their individual stacks of blocks into a single, large, multi-colored stack. This will typically require only about 10-15 more seconds. This represents the "task completion time" for the parallel processing group.
  8. Pause the timer and lead a brief discussion with your students. Ask them to discuss the significant difference in progress between the parallel processor group (which has entirely completed the assigned task) and the serial processor (which has quite a bit more work to do!).
  9. Restart the timer. Have the single, serial processor student continue to stack blocks.
  10. Once the serial processor student finishes stacking blocks, stop the timer and note the elapsed time.
  11. Reinforce the point that the parallel processor group completed the task far more quickly than the serial processor by leading another brief discussion to wrap up the activity.

Assembly Steps

Step 1: Blocks representing the serial processor are sorted by color with some stacks begun. Blocks representing the parallel processor have all been stacked by color.

Step 2: Blocks representing the serial processor are still being stacked by color. Blocks representing the parallel processor have been stacked together to complete the task.

Step 3: Blocks representing the serial processor are almost completely assembled into stacks based on color. Blocks representing the parallel processor have already been completely assembled.

Step 4: Blocks representing the serial processor are stacked by color but the stacks are still separate. Blocks representing the parallel processor are already completed.

In each step shown above, the serial processor is shown in the top part of the image and the parallel processor is shown in the bottom portion. These photos illustrate a setup in which three processors work together in the parallel processing group.

Discussion & Assessment

  • Have the students compare the time required to complete the entire block-stacking task by the parallel processing system versus the single, serial processor. How much faster was the parallel processor?
  • Ask the students to consider the time lost by the parallel processing group as they worked together to assemble their individual, single-colored stacks into the overall, combined stack. Explain to the students that this overhead cost in terms of time is a standard feature of a parallel processing system. Guide them through a discussion comparing the small amount of time lost in coordinating their efforts versus the tremendous amount of time saved by having multiple processors and dividing up the overall task.
  • Ask students to contemplate how far one might go with this parallel processing approach. Do they think that even faster computers could be built with dozens or even hundreds of processors? Explain to students that modern supercomputers have many thousands of processors.
  • With advanced students, lead a discussion about how the work might be distributed among the processors if some of the sub-tasks were more complex than others.
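The comparison in the first discussion point can be sketched as a short calculation. The numbers below are assumptions drawn from the typical timings in the step-by-step instructions (roughly 45-90 seconds per single-color sub-task and 10-15 seconds to combine the stacks); a class can substitute its own measured times:

```python
# Rough timing model of the block-stacking race.
# SUB_TASK and OVERHEAD are assumed values picked from the typical
# ranges in the instructions; replace them with your measured times.
SUB_TASK = 60   # seconds for one processor to stack 18 blocks of one color
OVERHEAD = 12   # seconds for the parallel group to combine partial stacks
COLORS = 6      # number of single-color sub-tasks

serial_time = SUB_TASK * COLORS       # one student does all six sub-tasks
parallel_time = SUB_TASK + OVERHEAD   # six students work at the same time
speedup = serial_time / parallel_time

print(serial_time, parallel_time, round(speedup, 1))  # 360 72 5.0
```

With these assumed numbers the parallel group finishes about five times faster, and the coordination overhead (12 of 72 seconds) is small compared with the time saved.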


Background

Two of the major approaches to designing computer systems are serial or single processor systems and parallel processor systems.

Single processor systems have just one processor (computer chip). Although modern processors are very fast, performing billions of simple calculations per second, these systems are still limited in speed. The processor performs one calculation at a time, then loads some new data and performs another calculation. It does this in a rapid-fire fashion, over and over and over again.

Parallel processor systems, on the other hand, can have many computer processors working in tandem. Part of the computer breaks each big problem down into many smaller calculations. The central processor then assigns each of these smaller calculations to one of many processors. Each processor works on its share of the problem by itself, at the same time as all the other processors. When the processors are done with their small calculations, they report their results back to the central processor, which then assigns them more work. Although a small amount of performance is lost due to the need to coordinate tasks, the overall increase in computing efficiency is very, very large when tackling complex computational projects.
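The divide-assign-combine pattern described above can be sketched in a few lines of Python, using the block-stacking activity itself as the task. This is an illustrative sketch only: in CPython, threads do not actually speed up this kind of work because of the global interpreter lock, and real parallel systems use separate hardware processors. The point is the structure - split the task, let workers handle their pieces at the same time, then combine the partial results:

```python
from concurrent.futures import ThreadPoolExecutor

# The seventh bin: 18 blocks of each of six colors (108 blocks total).
COLORS = ["red", "blue", "green", "yellow", "white", "black"]
blocks = [color for color in COLORS for _ in range(18)]

def stack_one_color(color, pile):
    """One 'processor' handles its sub-task: stack all blocks of one color."""
    return [b for b in pile if b == color]

# Serial processor: one worker handles every color, one after another.
serial_stack = []
for color in COLORS:
    serial_stack.extend(stack_one_color(color, blocks))

# Parallel system: six workers each take one color at the same time,
# then the partial stacks are combined (the coordination step).
with ThreadPoolExecutor(max_workers=6) as pool:
    partial_stacks = list(pool.map(stack_one_color, COLORS, [blocks] * 6))
combined_stack = [block for stack in partial_stacks for block in stack]

assert combined_stack == serial_stack  # same result, different path
print(len(combined_stack))  # 108 blocks, grouped by color
```

Both approaches produce the same 108-block stack; the parallel version simply gets there by dividing the work and paying a small coordination cost at the end.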

Cheyenne supercomputer at NCAR's Wyoming Supercomputing Center

As an example, NCAR's Cheyenne supercomputer, which began operations in January 2017, has more than 140,000 processors.

Supercomputers with many parallel processors are used to run models of extremely complex systems. Examples include:

  • Global climate models
  • Models of supernova explosions in space
  • Aerodynamic modeling of state-of-the-art jet aircraft
  • The complex folding patterns of proteins, which help us understand diseases like Alzheimer's and cystic fibrosis
  • Simulations of the ways tsunamis interact with coastlines
  • Modeling nuclear explosions, limiting the need for real nuclear testing


Credits

This activity was developed by Marijke Unger of the Computational & Information Systems Laboratory (CISL) at the National Center for Atmospheric Research (NCAR) and Tim Barnes of the UCAR Center for Science Education.