INTRODUCTION:
In the last few years there has been a rapid increase in computer processing power, data storage, and communication capacity. Even so, many complex, computation-intensive problems remain that cannot be solved by supercomputers alone; such problems can be tackled only by combining a wide variety of heterogeneous resources. The increased use and popularity of the Internet and the availability of high-speed networks have gradually changed the way we do computing. These technologies make it possible to use a wide variety of geographically distributed resources cooperatively, as if they were a single, more powerful computer. This method of pooling resources to solve large-scale problems is called grid computing. This paper describes the concepts underlying grid computing.
Key points:
History
Definitions
Working
Related technologies
Methods
Pros and cons
Projects
HISTORY:
The term grid computing originated in the early 1990s as a metaphor for making computer power as easy to access as an electric power grid. The power grid metaphor for accessible computing quickly became canonical when Ian Foster and Carl Kesselman published their seminal work, "The Grid: Blueprint for a New Computing Infrastructure" (1999).
CPU scavenging and volunteer computing were popularized beginning in 1997 by distributed.net and later, in 1999, by SETI@home to harness the power of networked PCs worldwide in order to solve CPU-intensive research problems.
The ideas of the grid (including those from distributed computing, object-oriented programming, and Web services) were brought together by Ian Foster, Carl Kesselman, and Steve Tuecke, who are widely regarded as the "fathers of the grid". They led the effort to create the Globus Toolkit, which incorporates not just computation management but also storage management, security provisioning, data movement, and monitoring, along with a toolkit for developing additional services on the same infrastructure, including agreement negotiation, notification mechanisms, trigger services, and information aggregation.