SAVE TIME AND/OR MONEY: In theory, throwing more resources at a task will shorten its time to completion, with potential cost savings. Parallel computers can be built from cheap, commodity components.
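The limit on how much extra resources can shorten a task is commonly modeled by Amdahl's law (not named in the text above, but the standard model for this trade-off): the serial fraction of a task caps the overall speedup no matter how many workers are added. A minimal sketch:

```python
def amdahl_speedup(parallel_fraction, n_workers):
    # Amdahl's law: total speedup = 1 / (serial_fraction + parallel_fraction / n_workers).
    # The serial fraction is untouched by extra workers, so it bounds the gain.
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_workers)

# A task that is 90% parallelizable falls well short of 16x on 16 workers,
# because the serial 10% dominates as workers are added.
for n in (2, 4, 16):
    print(n, "workers ->", round(amdahl_speedup(0.90, n), 2), "x speedup")
```

This is why "throwing more resources at a task" only pays off when most of the task actually parallelizes.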

SOLVE LARGER / MORE COMPLEX PROBLEMS: Many problems are so large and/or complex that it is impractical or impossible to solve them on a single computer, especially given limited computer memory. Example: "Grand Challenge Problems" (en.wikipedia.org/wiki/Grand_Challenge) requiring PetaFLOPS and PetaBytes of computing resources. Example: Web search engines and databases processing millions of transactions every second.

PROVIDE CONCURRENCY: A single compute resource can only do one thing at a time. Multiple compute resources can do many things simultaneously. Example: Collaborative Networks provide a global venue where people from around the world can meet and conduct work "virtually".
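Concurrency can be seen in a few lines (a minimal Python sketch, not part of the original text): when several independent waits are overlapped, the total time is roughly one wait rather than their sum.

```python
import threading
import time

def task(delay):
    time.sleep(delay)  # stands in for waiting on a remote service or collaborator

start = time.perf_counter()
threads = [threading.Thread(target=task, args=(0.2,)) for _ in range(5)]
for t in threads:
    t.start()          # all five waits begin at once
for t in threads:
    t.join()           # wait for every thread to finish
elapsed = time.perf_counter() - start

# Five overlapped 0.2 s waits finish in about 0.2 s, not 1.0 s.
print(f"elapsed: {elapsed:.2f} s")
```

A single compute resource would serve these five waits one after another, taking five times as long.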

TAKE ADVANTAGE OF NON-LOCAL RESOURCES: Use compute resources on a wide area network, or even across the Internet, when local compute resources are scarce or insufficient. Two examples, each of which has over 1.7 million contributors globally (May 2018): Example: SETI@home (setiathome.berkeley.edu) Example: Folding@home (folding.stanford.edu)

MAKE BETTER USE OF UNDERLYING PARALLEL HARDWARE: Modern computers, even laptops, are parallel in architecture with multiple processors/cores. Parallel software is specifically intended for parallel hardware with multiple cores, threads, etc. In most cases, serial programs running on modern computers "waste" potential computing power.
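The point above can be sketched in Python (a minimal illustration, not from the original text): a plain loop keeps all but one core idle, while a process pool spreads the same work across every available core. The `burn` function here is a hypothetical stand-in for real computation.

```python
import multiprocessing as mp

def burn(n):
    # CPU-bound stand-in for real work: sum of squares below n.
    return sum(i * i for i in range(n))

def run(tasks):
    # Serial version: one core does everything, the rest sit idle.
    serial = [burn(n) for n in tasks]
    # Parallel version: Pool.map distributes the same tasks across all cores.
    with mp.Pool() as pool:
        parallel = pool.map(burn, tasks)
    assert serial == parallel  # identical answers, computed on all cores
    return parallel

if __name__ == "__main__":
    print(mp.cpu_count(), "cores available; first result:", run([100_000] * 8)[0])
```

On a machine with N cores, the serial loop uses roughly 1/N of the available computing power for this kind of work.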

[Figure: Intel Xeon processor with 6 cores and 6 L3 cache units]

THE FUTURE: During the past 20+ years, the trends indicated by ever faster networks, distributed systems, and multi-processor computer architectures (even at the desktop level) clearly show that parallelism is the future of computing.

In this same time period, there has been a greater than 500,000x increase in supercomputer performance, with no end currently in sight. The race is already on for Exascale Computing! Exaflop = 10^18 calculations per second.

Source: Top500.org