Introduction to Parallel Computing
Traditionally, software has been written for serial computation:
• To be run on a single computer having a single Central Processing Unit (CPU);
• A problem is broken into a discrete series of instructions;
• Instructions are executed one after another;
• Only one instruction may execute at any moment in time.
In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem (a brief code sketch follows the list below):
• To be run using multiple CPUs;
• A problem is broken into discrete parts that can be solved concurrently;
• Each part is further broken down into a series of instructions;
• Instructions from each part execute simultaneously on different CPUs;
• The compute resources can include:
  • A single computer with multiple processors;
  • An arbitrary number of computers connected by a network;
  • A combination of both.
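To make the contrast concrete, here is a minimal sketch of the same problem solved both ways: an array is summed serially on a single instruction stream, and then in parallel by splitting it into discrete parts that separate threads compute concurrently. This is an illustrative sketch, not part of the original text: it assumes a POSIX system with the pthreads library (compile with cc -pthread), and the array contents, thread count, and function names are invented for the example.

/* Serial vs. parallel summation of an array using POSIX threads. */
#include <pthread.h>
#include <stdio.h>

#define N 1000000      /* problem size */
#define NTHREADS 4     /* number of concurrent parts */

static double data[N];
static double partial[NTHREADS];

/* Each thread executes the instructions for one discrete part. */
static void *sum_part(void *arg) {
    long t = (long)arg;
    long lo = t * (N / NTHREADS);
    long hi = (t == NTHREADS - 1) ? N : lo + N / NTHREADS;
    double s = 0.0;
    for (long i = lo; i < hi; i++)
        s += data[i];
    partial[t] = s;
    return NULL;
}

int main(void) {
    for (long i = 0; i < N; i++)
        data[i] = 1.0;

    /* Serial computation: one instruction executes at a time. */
    double serial = 0.0;
    for (long i = 0; i < N; i++)
        serial += data[i];

    /* Parallel computation: the problem is broken into NTHREADS
       parts whose instructions execute simultaneously. */
    pthread_t tid[NTHREADS];
    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, sum_part, (void *)t);

    double parallel = 0.0;
    for (long t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);  /* wait for each part to finish */
        parallel += partial[t];      /* combine the partial results */
    }

    printf("serial = %.0f, parallel = %.0f\n", serial, parallel);
    return 0;
}

Both versions compute the same total; the parallel one simply distributes the loop iterations across threads, which is exactly the decomposition described in the list above.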
Introduction to Parallel Computing in India
Although the performance of single processors has increased steadily over the years, the only way to build the next generation of teraflop supercomputers appears to be through parallel processing technology.
Even with today's workstation-class high-performance processors exceeding 100 megaflops, thousands of processors are required to build a teraflop machine. Furthermore, even the fastest special-purpose vector processors peak at only a few gigaflops, so they too must be used in parallel to achieve teraflop levels of performance.
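A rough back-of-the-envelope check of these figures (an illustration, not from the original text): 1 teraflop is 10^12 floating-point operations per second, while a 100-megaflop processor delivers 10^8, so 10^12 / 10^8 = 10^4, i.e. on the order of ten thousand such processors would be needed for a teraflop machine.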
In 1987, India decided to launch a national initiative in supercomputing in the form of a time-bound mission to design, develop, and deliver a supercomputer in the gigaflops range. The major motivation came from political delays in obtaining a Cray X-MP for weather forecasting. A decision was made to support the development of indigenous parallel processing technology, and the Centre for Development of Advanced Computing (C-DAC) was set up to carry out this mission.