Nowadays, commercial applications are the largest users of parallel computers. A computer that runs such an application must be able to process large amounts of data in sophisticated ways. It is safe to say that commercial applications will define the architecture of future parallel computers, while scientific applications will remain important users of parallel computing technology. Trends in commercial and scientific applications are merging, as commercial applications perform more sophisticated computations and scientific applications become more data-intensive. Today, many parallel programming languages and compilers can, based on dependencies detected in the source code, automatically split a program into multiple processes and/or threads to be executed concurrently on the available processors of a parallel system.
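As an illustration (not drawn from the text above), the C sketch below uses an OpenMP directive, an explicit mechanism rather than fully automatic parallelization, to mark a loop whose iterations carry no dependencies on one another; this is precisely the kind of loop such compilers look for when splitting work across processors. The array size N and the vector-addition workload are arbitrary choices for the sketch.

    #include <stdio.h>

    #define N 1000000

    /* Each iteration writes a distinct element of c and reads only
     * a[i] and b[i], so there is no loop-carried dependency and the
     * iterations may be distributed across the available processors.
     * Compile with: cc -fopenmp vecadd.c (the pragma is ignored, and
     * the program runs serially, if OpenMP is unavailable). */
    int main(void) {
        static double a[N], b[N], c[N];

        for (int i = 0; i < N; i++) {
            a[i] = (double)i;
            b[i] = 2.0 * i;
        }

        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];

        printf("c[%d] = %f\n", N - 1, c[N - 1]);
        return 0;
    }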
Parallel computing is an efficient form of information processing that emphasizes the exploitation of concurrent events in the computing process. Concurrency implies parallelism, simultaneity, and pipelining: parallel events may occur in multiple resources during the same time interval, simultaneous events may occur at the same time instant, and pipelined events may occur in overlapped time spans. Parallel processing demands concurrent execution of many programs in the computer and is a cost-effective means of improving system performance through concurrent activities. The highest level of parallel processing is conducted among multiple jobs or programs through multiprogramming, time-sharing, and multiprocessing. This presentation covers the basics of parallel computing: beginning with a brief overview and some concepts and terminology associated with parallel computing, it then explores parallel memory architectures, parallel computer architectures, and parallel programming models.
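The distinction between parallel, simultaneous, and pipelined events can be made concrete with a small sketch. The C program below uses POSIX threads (an illustrative choice, not prescribed by the text) to start two jobs whose executions overlap in time: on a multiprocessor their steps may run in the same time interval (parallel), while on a single processor they are interleaved by time-sharing (concurrent but not simultaneous).

    #include <stdio.h>
    #include <pthread.h>

    /* Two workers run in overlapping time spans; the interleaving
     * of their output depends on how the system schedules them.
     * Compile with: cc -pthread jobs.c */
    static void *worker(void *arg) {
        const char *name = arg;
        for (int i = 0; i < 3; i++)
            printf("%s: step %d\n", name, i);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, "job A");
        pthread_create(&t2, NULL, worker, "job B");
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }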
CHAPTER ONE
1.0 Preamble
Parallel computing is a form of computation in which many calculations are carried out