Grid computing
Grid computing is an emerging computing model that achieves high-throughput computing by joining many networked computers into a virtual computer architecture able to distribute process execution across a parallel infrastructure.
Grids use the resources of many separate computers connected by a network (usually the Internet) to solve large-scale computation problems. They make it possible to perform computations on large data sets, by breaking them down into many smaller ones, or to perform many more computations at once than would be possible on a single computer, by modeling a parallel division of labor between processes. Today, resource allocation in a grid is done in accordance with service level agreements (SLAs).
Origins
Like the Internet, grid computing evolved from the computational needs of "big science". The Internet was developed to meet the need for a common communication medium between large, federally funded computing centers. These communication links led to resource and information sharing between the centers, and eventually to providing access to them for additional users. Ad hoc resource-sharing procedures among these original groups pointed the way toward standardization of the protocols needed for communication between any administrative domains. Current grid technology can be viewed as an extension or application of this framework to create a more generic resource-sharing context.
Fully functional proto-grid systems date back to the early 1970s with the Distributed Computing System (DCS) project at the University of California, Irvine, whose main architect was David Farber. The system was well known enough to merit coverage and a cartoon depiction in Business Week on 14 July 1973; the caption read "The ring acts as a single, highly flexible machine in which individual units can bid for jobs". In modern terminology the ring is the network and the units are computers, very similar to how computational capabilities are utilized on a grid. The project's final report was published in 1977. The technology was mostly abandoned in the 1980s, as the administrative and security issues of letting machines you do not control perform your computation were (and by some still are) seen as insurmountable.
The ideas of the grid were brought together by Ian Foster, Carl Kesselman and Steve Tuecke, the so-called "fathers of the grid". They led the effort to create the Globus Toolkit, incorporating not just CPU management (for example, cluster management and cycle scavenging) but also storage management, security provisioning, data movement and monitoring, as well as a toolkit for developing additional services on the same infrastructure, including agreement negotiation, notification mechanisms, trigger services and information aggregation. In short, the term grid has much further-reaching implications than the general public believes. While the Globus Toolkit remains the de facto standard for building grid solutions, a number of other tools have been built that answer some subset of the services needed to create an enterprise grid.
The remainder of this article discusses the details behind these notions.
Features
Functionally, one can classify Grids into several types:
Computational Grids (including CPU-scavenging Grids), which focus primarily on computationally intensive operations.
Data Grids, for the controlled sharing and management of large amounts of distributed data.
Equipment Grids, which are built around a primary piece of equipment, e.g. a telescope, and where the surrounding Grid is used to control the equipment remotely and to analyze the data it produces.
Grid computing offers a model for solving massive computational problems by making use of the unused resources (CPU cycles and/or disk storage) of large numbers of disparate computers, often desktop computers, treated as a virtual cluster embedded in a distributed telecommunications infrastructure. Grid computing's focus on the ability to support computation across administrative domains sets it apart from traditional computer clusters or traditional distributed computing.
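The cycle-scavenging idea mentioned above can be sketched in a few lines. All names here are hypothetical, and the idleness check is randomized purely for illustration; a real scavenger would watch for keyboard and mouse inactivity and low local load.

```python
import random

class DesktopNode:
    """A desktop machine that donates CPU cycles only while idle."""
    def __init__(self, name):
        self.name = name

    def is_idle(self):
        # Stand-in for a real idleness check (no keyboard/mouse
        # activity, low local load); randomized here for illustration.
        return random.random() < 0.5

    def run(self, task):
        return task()

def scavenge(nodes, tasks):
    """Hand each task to the first idle node found; busy machines are skipped."""
    results, pending = [], list(tasks)
    while pending:
        for node in nodes:
            if not pending:
                break
            if node.is_idle():
                results.append(node.run(pending.pop(0)))
    return results
```

The point of the sketch is that no machine is obliged to accept work: the scheduler keeps offering tasks, and only idle resources take them, which is how desktop grids harvest otherwise wasted cycles.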
Grids offer a way to solve Grand Challenge problems like protein folding, financial modeling, earthquake simulation, and climate/weather modeling. They offer a way of using information technology resources optimally inside an organization. They also provide a means of offering information technology as a utility to commercial and non-commercial clients, with those clients paying only for what they use, as with electricity or water.
Grid computing has the design goal of solving problems too big for any single supercomputer, whilst retaining the flexibility to work on multiple smaller problems. Thus Grid computing provides a multi-user environment. Its secondary aims are better exploitation of available computing power and catering for the intermittent demands of large computational exercises.
This approach implies the use of secure authorization techniques to allow remote users to control computing resources.
Grid computing involves sharing heterogeneous resources (based on different platforms, hardware/software architectures, and computer languages), located in different places belonging to different administrative domains over a network using open standards. In short, it involves virtualizing computing resources.
Grid computing is often confused with cluster computing. The key difference is that a cluster is a single set of nodes sitting in one location, while a Grid is composed of many clusters and other kinds of resources (e.g. networks, storage facilities).
Definitions
The term Grid computing originated in the early 1990s as a metaphor for making computer power as easy to access as an electric power grid.
Today there are many definitions of Grid computing:
The definitive definition of a Grid is provided by Ian Foster in his article "What is the Grid? A Three Point Checklist". The three points of this checklist are:
Computing resources are not administered centrally.
Open standards are used.
Non-trivial quality of service is achieved.
Plaszczak/Wellner define Grid technology as "the technology that enables resource virtualization, on-demand provisioning, and service (resource) sharing between organizations."
IBM defines Grid computing as "the ability, using a set of open standards and protocols, to gain access to applications and data, processing power, storage capacity and a vast array of other computing resources over the Internet. A Grid is a type of parallel and distributed system that enables the sharing, selection, and aggregation of resources distributed across 'multiple' administrative domains based on their (the resources') availability, capacity, performance, cost and users' quality-of-service requirements".
An earlier expression of the notion of computing as a utility came in 1965 from MIT's Fernando Corbató, who with the other designers of the Multics operating system envisioned a computer facility operating "like a power company or water company". http://www.multicians.org/fjcc3.html
Buyya defines Grid as "a type of parallel and distributed system that enables the sharing, selection, and aggregation of geographically distributed autonomous resources dynamically at runtime depending on their availability, capability, performance, cost, and users' quality-of-service requirements".
CERN, one of the largest users of Grid technology, talks of The Grid as "a service for sharing computer power and data storage capacity over the Internet."
Pragmatically, Grid computing is attractive to geographically distributed, non-profit collaborative research efforts, such as the NCSA Bioinformatics Grids and BIRN: these are external Grids.
Grid computing is also attractive to large commercial enterprises with complex computation problems that aim to fully exploit their internal computing power: these are internal Grids.
A survey by Heinz Stockinger (conducted in spring 2006; to be published in the Journal of Supercomputing in early 2007) presents a snapshot of how Grid computing was viewed in 2006.
An earlier survey by Miguel L. Bote-Lorenzo et al. (conducted in autumn 2002; published in Springer-Verlag's LNCS series) presents a snapshot of the view in 2002.
Grids can be categorized with a three-stage model of departmental Grids, enterprise Grids and global Grids. These correspond to a firm initially utilizing resources within a single group, e.g. an engineering department connecting desktop machines, clusters and equipment. This progresses to enterprise Grids, where the computing resources of non-technical staff can also be used for cycle-stealing and storage. A global Grid is a connection of enterprise and departmental Grids that can be used in a commercial or collaborative manner.
Grid computing is a subset of distributed computing.
Conceptual framework
Grid computing reflects a conceptual framework rather than a physical resource. The Grid approach is utilized to provision a computational task with administratively-distant resources. The focus of Grid technology is associated with the issues and requirements of flexible computational provisioning beyond the local (home) administrative domain.
Virtual organization
A Grid environment is created to address resource needs. The use of those resources (e.g. CPU cycles, disk storage, data, software programs, peripherals) is usually characterized by their availability outside of the local administrative domain. This 'external provisioning' approach entails creating a new administrative domain, referred to as a Virtual Organization (VO), with a distinct and separate set of administrative policies (home administration policies combined with external resource administration policies yield the VO, i.e. Grid, administrative policies). The context for Grid 'job execution' is distinguished by the requirements that arise when operating outside of the home administrative context. Grid technology (i.e. middleware) is employed to formalize and comply with the Grid context associated with an application's execution.
Virtual Organizations accessing different and overlapping sets of resources
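The policy arithmetic described above (home policies combined with external resource policies yielding the VO's policies) can be sketched as follows. The policy and right names are hypothetical; the composition rule assumed here is that a right is granted only if every participating domain grants it.

```python
def vo_policy(home_policy, resource_policies):
    """Compose a Virtual Organization's effective policy from the home
    domain's policy and each external resource provider's policy.
    A right is granted only if every domain grants it."""
    vo = dict(home_policy)
    for policy in resource_policies:
        for right, allowed in policy.items():
            vo[right] = vo.get(right, True) and allowed
    return vo

home   = {"submit_job": True, "read_data": True, "install_software": True}
site_a = {"submit_job": True, "install_software": False}
site_b = {"submit_job": True, "read_data": True}
print(vo_policy(home, [site_a, site_b]))
# {'submit_job': True, 'read_data': True, 'install_software': False}
```

Real VO policies cover far more than boolean rights (quotas, credentials, scheduling priorities), but the essential point survives: the VO's administrative domain is derived from, and constrained by, the domains it spans.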
Resources
One characteristic that currently distinguishes Grid computing from distributed computing is the abstraction of a 'distributed resource' into a Grid resource. One result of this abstraction is that resource substitution becomes easier to accomplish. Some of the overhead associated with this flexibility is reflected in the middleware layer and in the latency associated with accessing a Grid (or any distributed) resource. This overhead, especially the access latency, must be evaluated in terms of its impact on computational performance when a Grid resource is employed.
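The substitution property that abstraction buys can be illustrated with a small sketch (all class names hypothetical): callers program against an abstract Grid resource interface, so one concrete resource can stand in for another without changing the submission code.

```python
from abc import ABC, abstractmethod

class GridResource(ABC):
    """Callers program against this interface, so one concrete
    resource can be substituted for another without changing them."""
    @abstractmethod
    def execute(self, job):
        ...

class LocalCluster(GridResource):
    def execute(self, job):
        return f"ran {job} on local cluster"

class RemoteSite(GridResource):
    def execute(self, job):
        # A real implementation would pay middleware and network
        # latency here, which, as noted above, must be weighed
        # against the computational benefit.
        return f"ran {job} at remote site"

def submit(job, resource):
    """Submission code is identical whichever resource is supplied."""
    return resource.execute(job)
```

Swapping `LocalCluster()` for `RemoteSite()` changes nothing in `submit`; the overhead of the substitution moves into the concrete resource and the middleware beneath it.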
Web-based resources, or Web-based resource access, are an appealing approach to Grid resource provisioning. A recent OGF (Open Grid Forum) evolution of Grid middleware "re-factored" the architecture and design of the Grid resource concept to use the W3C WSDL (Web Service Description Language) to implement the concept of a WS-Resource. The stateless nature of the Web, while enhancing the ability to scale, can be a concern for applications that migrate from a stateful protocol for accessing resources to a Web-based stateless protocol. The WS-Resource concept includes discussions of how to accommodate the statelessness of Web resource access.
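A minimal sketch of how state can be accommodated over a stateless protocol, in the spirit of the WS-Resource idea (the service and method names here are hypothetical, not taken from any WSRF implementation): each request carries a resource key, and the service looks the state up by key rather than relying on a per-caller session.

```python
class ResourceService:
    """The protocol is stateless: no session is kept for the caller.
    Each request names a resource by key, and the service looks its
    state up, so state lives with the resource, not the connection."""
    def __init__(self):
        self._state = {}  # resource key -> state

    def create(self, key):
        self._state[key] = 0
        return key

    def increment(self, key):
        # Every call is self-contained; only the key links requests.
        self._state[key] += 1
        return self._state[key]
```

An application migrating from a stateful protocol must thread such a key through every call, which is the kind of accommodation the text refers to.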