
Algorithms and the Grid
Geoffrey Fox
Computer Science, Informatics, Physics
Pervasive Technology Laboratories, Indiana University, Bloomington IN 47401
March 18, 2005
gcf@indiana.edu | http://www.infomall.org

Trends in Simulation Research
• 1990-2000: the HPCC (High Performance Computing and Communication) Initiative
  • Established parallel computing
  • Developed wonderful algorithms, especially in the partial differential equation and particle dynamics areas
  • Produced almost no useful software except MPI, the messaging standard for parallel computer nodes
• 1995-now: Internet explosion and development of the Web Service distributed-system model
  • Replaces CORBA, Java RMI, HLA, COM, etc.
• 2000-now: almost no USA academic work in core simulation
  • Major projects like ASCI (DoE) and HPCMO (DoD) thrive
• 2003-?: the Data Deluge becomes apparent, and the Grid links the Internet and HPCC with a focus on data-simulation integration

e-Business, e-Science and the Grid
• e-Business captures an emerging view of corporations as dynamic virtual organizations linking employees, customers and stakeholders across the world.
• e-Science is the similar vision for scientific research, with international participation in large accelerators, satellites or distributed gene analyses.
• The Grid (or CyberInfrastructure) integrates the best of the Web, Agents, traditional enterprise software, high performance computing and peer-to-peer systems to provide the information technology e-infrastructure for e-moreorlessanything.
• A deluge of data of unprecedented and inevitable size must be managed and understood.
• People, computers, data and instruments must be linked.
• On-demand assignment of experts, computers, networks and storage resources must be supported.

Some Important Styles of Grids
• Computational Grids were the origin of Grid concepts and link computers across the globe; high latency stops such a Grid from being used as a parallel machine
• Knowledge and Information Grids link sensors and information repositories, as in Virtual Observatories or Bioinformatics (more detail on the next slide)
• Collaborative Grids link multidisciplinary researchers across laboratories and universities
• Community Grids involve large numbers of peers rather than linking major resources; they connect Grid and peer-to-peer network concepts
• Semantic Grids link the Grid and AI communities with Semantic Web (ontology/metadata-enriched resources) and Agent concepts
• Grid Service Farms supply services-on-demand, as in collaboration, GIS support, or image processing filters

Information/Knowledge Grids
• Distributed (10’s to 1000’s of) data sources: instruments, file systems, curated databases …
• Data Deluge: 1 petabyte/year now, growing to 100’s of petabytes/year by 2012; a Moore’s law for sensors, since the deluge comes from the pixels/year available
• Possible filters assigned dynamically (on-demand); see the sketch below
  • Run an image processing algorithm on a telescope image
  • Run a gene sequencing algorithm on compiled data
• Needs a decision support front end with “what-if” simulations
• Metadata (provenance) is critical to annotate data
• Integrate across experiments, as in multi-wavelength astronomy
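To make on-demand filter assignment concrete, here is a minimal Python sketch of my own; the registry, filter names and payloads are all hypothetical and do not come from any real Grid toolkit:

```python
# Minimal sketch (not from the talk): assign a processing filter to a data
# source at request time, in the spirit of "filters assigned on-demand".
FILTERS = {
    "telescope_image": lambda data: f"image-processed({data})",
    "gene_sequence":   lambda data: f"sequence-aligned({data})",
}

def assign_filter(source_kind, payload):
    """Pick the filter for this data source dynamically, not statically."""
    try:
        return FILTERS[source_kind](payload)
    except KeyError:
        raise ValueError(f"no filter registered for {source_kind!r}")

print(assign_filter("telescope_image", "raw_pixels_0421"))
```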

Virtual Observatory Astronomy Grid: Integrate Experiments
[Figure: images integrated across experiments: Radio, Far-Infrared, Visible, Dust Map, Visible + X-ray, Galaxy Density Map]

e-Business and (Virtual) Organizations
• Enterprise Grids support the information system for an organization; this includes the “university computer center”, the “(digital) library”, sales, marketing, manufacturing …
• Outsourcing Grids link different parts of an enterprise together
  • Manufacturing plants with designers
  • Animators with electronic game or film designers and producers
  • Coaches with aspiring players (e-NCAA or e-NFL etc.)
  • Outsourcing will become easier
• Customer Grids link businesses and their customers, as on many web sites such as amazon.com
• e-Multimedia can use secure peer-to-peer Grids to link creators, distributors and consumers of digital music, games and films while respecting rights
• Distance education Grids link a teacher at one place with students all over the place, plus mentors and graders; shared curriculum, homework, live classes …

DAME: Distributed Aircraft Maintenance Environment (Rolls Royce and the UK e-Science Program)
• In-flight data: ~1 Gigabyte per engine per transatlantic flight; an airline has ~5000 engines
[Diagram: engine data travels over a global network such as SITA to a ground station, then to the Engine Health (Data) Center and the Maintenance Centre, linked by Internet, e-mail and pager]

NASA Aerospace Engineering Grid
It takes a distributed virtual organization to design, simulate and build a complex system like an aircraft.

e-Defense and e-Crisis
• Grids support Command and Control and provide Global Situational Awareness
  • Link commanders and frontline troops to each other and to archival and real-time data; link to what-if simulations
  • Dynamic heterogeneous wired and wireless networks
  • Security and fault tolerance are essential
• System of Systems; Grid of Grids
  • The command information infrastructure of each ship is a Grid; each fleet is linked together by a Grid; the President is informed by and informs the national defense Grid
  • Grids must be heterogeneous and federated
• Crisis management and response enabled by a Grid linking sensors, disaster managers, and first responders with decision support
• Define and build DoD-relevant services: collaboration, sensors, GIS, databases etc.

Old Style Metacomputing Grid
• Spread a single large problem over multiple supercomputers
[Diagram: large-scale parallel computers and large disks feeding analysis and visualization]

Classes of Computing Grid Applications
• Running “pleasingly parallel jobs” as in United Devices or Entropia (Desktop Grid) “cycle stealing systems”; these can be managed (“inside” the enterprise, as in Condor) or more informal (as in SETI@Home)
• Computing-on-demand in industry, where the jobs spawned are perhaps very large (SAP, Oracle …)
• Support for distributed file systems as in Legion (Avaki) and Globus, with a (web-enhanced) UNIX programming paradigm
  • Particle physics will run some 30,000 simultaneous jobs this way
• Pipelined applications linking data/instruments, compute, and visualization
• Seamless access, where Grid portals allow one to choose among multiple resources with a common interface

What is Happening?
• Grid ideas are being developed in (at least) two communities
  • Web Services: W3C, OASIS
  • Grid Forum (High Performance Computing, e-Science)
  • Open Middleware Infrastructure Institute (OMII), currently only in the UK but maybe spreading to the EU and USA
• Service standards are being debated
• Grid operational infrastructure is being deployed
• Grid architecture and core software are being developed
• Particular system services are being developed “centrally”; OGSA is the framework for this
• Lots of fields are setting domain-specific standards and building domain-specific services
• Grids are viewed differently in different areas
  • Largely “computing-on-demand” in industry (IBM, Oracle, HP, Sun)
  • Largely distributed collaboratories in academia

A typical Web Service
• In principle, services can be written in any language (Fortran, Java, Perl, Python) and the interfaces can be method calls, Java RMI messages, CGI Web invocations, or totally compiled away (inlining)
• The simplest implementations involve XML messages (SOAP) and programs written in net-friendly languages like Java and Python
[Diagram: a Portal exchanges secured SOAP messages through WSDL interfaces with Web Services for Payment (Credit Card), Catalog, Warehouse, and Shipping control]

Services and Distributed Objects
• A web service is a computer program running on either a local or a remote machine with a set of well-defined interfaces (ports) specified in XML (WSDL)
• Web Services (WS) have many similarities to Distributed Object (DO) technology, but there are some (important) technical and religious differences (not easy to distinguish)
  • CORBA, Java, COM are typical DO technologies
  • Agents are typically SOA (Service Oriented Architecture)
• Both involve distributed entities, but Web Services are more loosely coupled
  • WS interact with messages; DO with RPC (Remote Procedure Call)
  • DO have “factories”; WS manage instances internally, and interaction-specific state is not exposed and hence need not be managed
  • DO have explicit state (stateful services); WS use context in the messages to link interactions (stateful interactions); see the sketch below
• Claim: DOs do NOT scale; WS build on experience (with CORBA) and do scale
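The stateful-interaction point is easy to show in miniature. This is a sketch of my own, not a real WS toolkit API: the service keeps interaction state keyed by a context token that travels inside each message, rather than exposing per-client object instances the way a DO factory would:

```python
# Illustrative contrast (my sketch, not from the slides): state is managed
# internally and linked across messages by a context token.
import uuid

class ContextLinkedService:
    def __init__(self):
        self._contexts = {}            # internal state, never exposed to callers

    def handle(self, message):
        ctx = message.get("context") or str(uuid.uuid4())
        total = self._contexts.get(ctx, 0) + message["amount"]
        self._contexts[ctx] = total
        # the reply carries the context so the NEXT message can link to this one
        return {"context": ctx, "running_total": total}

svc = ContextLinkedService()
reply = svc.handle({"amount": 5})                             # starts an interaction
reply = svc.handle({"amount": 3, "context": reply["context"]})
print(reply["running_total"])                                 # 8
```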

Grid impact on Algorithms I
• Your favorite parallel algorithm will often run untouched on a Grid node, linked to other simulations using traditional algorithms
• Algorithms tolerant of high latency
• Algorithms for new applications enabled by the Grid
• Data assimilation for data-deluged science, generalizing data mining
  • Where and how to process data
  • Incorporation of data in simulation
• Complex Systems algorithms for non-traditional simulations, as in biology and social systems
  • Cellular automata

Grid impact on Algorithms II
• The MPI software model is not suited for the Grid; use SOAP and publish/subscribe
  • Microsecond (MPI) versus millisecond (Grid) latency
• Grid workflow needs “integration algorithms”
  • Multidisciplinary algorithms for loose code coupling
  • Workflow scheduling algorithms (data oriented)
  • Data caching algorithms
• Algorithms like distributed hash tables for distributed storage and lookup of data (see the sketch below)
• Algorithms for Grid security
  • Efficient support of group keys for multicast
  • Detection of Denial of Service attacks
• Much better software is now available for building toolkits and Problem Solving Environments, i.e. for using algorithms
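As an illustration of the distributed-hash-table idea just mentioned, here is a minimal consistent-hashing sketch; real systems such as Chord or Kademlia add routing, replication and failure handling, and none of the node names below come from the talk:

```python
# Consistent hashing: map a data key to the node responsible for storing it,
# so lookup needs no central index.
import bisect, hashlib

def h(s):
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % 2**32

class HashRing:
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)   # nodes placed on a ring
        self.keys = [k for k, _ in self.ring]

    def node_for(self, key):
        # first node clockwise from the key's hash owns the key
        i = bisect.bisect(self.keys, h(key)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["grid-node-a", "grid-node-b", "grid-node-c"])
print(ring.node_for("sensor-42/2005-03-18.dat"))       # deterministic owner
```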

Data Deluged Science
• In the past we worried about data in the form of parallel I/O or MPI-IO, but we didn’t consider it as an enabler of new algorithms and new ways of computing
• Data assimilation was not central to HPCC; DoE ASCI was set up precisely because it didn’t want test data!
• Now particle physics will get 100 petabytes from CERN
  • Nuclear physics (Jefferson Lab) is in the same situation
  • They will use around 30,000 CPUs simultaneously, 24x7
• Weather, climate, solid earth (EarthScope)
• Bioinformatics curated databases (Biocomplexity has only 1000’s of data points at present)
• Virtual Observatory and SkyServer in astronomy
• Environmental sensor nets

Weather Requirements
[Figure: weather forecasting computational requirements]

Data Deluged Science Computing Paradigm
[Diagram linking Data, Data Assimilation, Simulation, Informatics, Model, Ideas, Computational Science, Datamining and Reasoning]

USArray Seismic Sensors
[Figure: USArray seismic sensor deployment]

[Figure: site-specific irregular scalar measurements and constellations for plate boundary-scale vector measurements; panels include Ice Sheets, Volcanoes, PBO, Greenland, Long Valley CA, Topography (1 km), Stress Change (Northridge CA), and Earthquakes (Hector Mine CA)]

Geoscience Research and Education Grids: From Research to Education
[Diagram: repositories (federated databases), sensors and streaming data, and a field trip database feed SERVOGrid data filter services; research simulations, discovery services, and analysis and visualization sit on the research side; customization services carry results through a portal to an Education Grid backed by a computer farm]

SERVOGrid Requirements
• Seamless access to data repositories and large-scale computers
• Integration of multiple data sources (sensors, databases, file systems) with the analysis system
  • Including filtered OGSA-DAI (Grid database access)
• Rich metadata generation and access, with SERVOGrid-specific schema extending OpenGIS (“Geography as a Web Service”) standards and using the Semantic Grid
• Portals with a component model for user interfaces and web control of all capabilities
• Collaboration to support world-wide work
• Basic Grid tools: workflow and notification
• NOT metacomputing

Data Deluged Science Computing Architecture
[Diagram: distributed Grid data filters (OGSA-DAI Grid Services) massage data for simulation; filtered data flows through Grid and Web Services to Grid data assimilation and HPC simulation, with analysis, control and visualization downstream]

Data Assimilation
• Data assimilation implies one is solving some optimization problem, which might have a Kalman-filter-like structure (see the sketch below)
• Due to the data deluge, the problem will become more and more dominated by the data (N_obs much larger than the number of simulation points)
• The natural approach is to form, for each local (position, time) patch, the “important” data combinations, so that the optimization doesn’t waste time on large-error or insensitive data
• Data reduction is done in a naturally distributed fashion, NOT on the HPC machine, as distributed computing is most cost effective when calculations are essentially independent
  • Filter functions must be transmitted from the HPC machine
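To make the “Kalman filter like structure” concrete, here is the standard scalar Kalman update step, a textbook sketch rather than anything from SERVOGrid:

```python
# One scalar Kalman-filter update: blend a forecast with an observation,
# weighting by their uncertainties.
def kalman_update(x_prior, p_prior, z, r):
    """x_prior: forecast state; p_prior: its variance;
    z: observation; r: observation-error variance."""
    k = p_prior / (p_prior + r)           # Kalman gain: trust data vs forecast
    x_post = x_prior + k * (z - x_prior)  # correct forecast toward observation
    p_post = (1.0 - k) * p_prior          # analysis variance shrinks
    return x_post, p_post

x, p = 10.0, 4.0                          # forecast and its uncertainty
for z in [12.1, 11.8, 12.3]:              # stream of observations
    x, p = kalman_update(x, p, z, r=1.0)
print(round(x, 2), round(p, 3))
```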

Distributed Filtering
N_obs(local patch) >> N_filtered(local patch) ≈ Number_of_Unknowns(local patch)
• In the simplest approach, filtered data are obtained by linear transformations of the original data, based on a Singular Value Decomposition of the least-squares matrix (illustrated below)
[Diagram: geographically distributed sensor patches each pass N_obs(local patch) observations through a data filter on the distributed machine, yielding N_filtered(local patch) values; the HPC machine factorizes the matrix into a product over local patches, sends each needed filter out, and receives the filtered data]
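A small numerical illustration of the SVD-based reduction just described, under assumed sizes (numpy; all names are mine): the filtered data reproduce the full least-squares answer exactly.

```python
# Per-patch reduction sketch: keep only the leading singular directions of the
# local least-squares design matrix, so N_obs rows compress to N_filtered rows.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_unknowns, n_filtered = 500, 3, 3    # N_obs >> N_filtered ~ unknowns

A = rng.normal(size=(n_obs, n_unknowns))     # local design matrix
y = rng.normal(size=n_obs)                   # local observations

U, s, Vt = np.linalg.svd(A, full_matrices=False)
F = U[:, :n_filtered].T                      # the linear filter: this is what
                                             # the HPC machine would transmit
A_f, y_f = F @ A, F @ y                      # filtered data, tiny vs original

# Solving the reduced problem reproduces the full least-squares solution
x_full = np.linalg.lstsq(A, y, rcond=None)[0]
x_red  = np.linalg.lstsq(A_f, y_f, rcond=None)[0]
print(np.allclose(x_full, x_red))            # True
```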

Non Traditional Applications: Critical Infrastructure Simulations
• These include electrical/gas/water grids, the Internet, transportation, and cell/wired phone dynamics
• One has some “classic SPICE-style” network simulations in areas like the power grid (although load and infrastructure data are incomplete)
  • 6,000 to 17,000 generators
  • 50,000 to 140,000 transmission lines
  • 40,000 to 100,000 substations
• Need algorithms both for simulating infrastructures and for linking them

Non Traditional Applications: Critical Infrastructure Simulations
• Activity data for people/institutions are essential for detailed dynamics, but these are not “classic” data; they need to be “fitted” in data assimilation style in terms of some assumed lower-level model
  • They tell you the goals of people but not their low-level movement
• Disease and Internet virus spread and social network simulations can be built on dynamics coming from infrastructure simulations
  • Many results, like the “small world” Internet connection structure, are qualitative, and it is unclear whether they can be extended to detailed simulations
  • There is a lot of interest in (regulatory) networks in biology

(Non) Traditional Structure
• 1) Traditional: known equations plus boundary values
• 2) Data assimilation: somewhat uncertain initial conditions and approximations, corrected by data assimilation
• 3) Data deluged science: phenomenological degrees of freedom swimming in a sea of data
[Diagram: known data plus known equations on agreed degrees of freedom (DoF) produce a prediction, versus phenomenological degrees of freedom swimming in a sea of data]

Some Questions for Non Traditional Applications
• There has been no systematic study of how best to represent data-deluged sciences without known equations
• Obviously data assimilation is very relevant
• Role of Cellular Automata (CA) and refinements like the New Kind of Science of Wolfram (see the sketch below)
  • Can CA or the Potts model parameterize any system?
• Relationship to back propagation and other neural network representations
• Relationship to “just” interpolating data and then extrapolating a little
• Role of uncertainty analysis: everything (equations, model, data) is uncertain!
• Relationship of data mining and simulation
• A new trade-off: how to split funds between sensors and simulation engines
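For readers who have not met them, this is the kind of minimal model the cellular automata bullet refers to: an elementary CA (Rule 110, one of Wolfram’s standard examples), shown as a generic sketch rather than a claim about parameterizing any particular system.

```python
# Elementary cellular automaton: each cell's next value depends only on its
# 3-cell neighborhood, looked up in the bits of the rule number.
def step(cells, rule=110):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31 + [1] + [0] * 31            # single seed cell
for _ in range(16):
    print("".join(".#"[c] for c in row))
    row = step(row)
```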

When is a High Performance Computer?
• We might wish to consider three classes of multi-node computers
  • 1) Classic MPP with microsecond latency and scalable internode bandwidth (tcomm/tcalc ~ 10 or so)
  • 2) Classic cluster, which can vary from configurations like 1) to 3) but typically has millisecond latency and modest bandwidth
  • 3) Classic Grid or distributed systems of computers around the network, with inter-node communication latencies of 100’s of milliseconds but possibly good bandwidth
• All have the same peak CPU performance, but synchronization costs increase as one goes from 1) to 3); see the sketch below
• The cost of the system (dollars per gigaflop) decreases by factors of 2 at each step from 1) to 2) to 3)
• One should NOT use a classic MPP if class 2) or 3) suffices, unless security or data issues dominate over cost-performance
• One should not use a Grid as a true parallel computer; it can link parallel computers together for convenient access etc.
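A back-of-envelope sketch of why synchronization cost, not peak speed, separates the three classes. The latencies are the slide’s rough numbers; the per-step compute time and synchronization count are assumptions of mine:

```python
# The same algorithm loses efficiency as latency grows from MPP to Grid.
latency_s = {"MPP": 5e-6, "Cluster": 1e-3, "Grid": 0.1}
t_step = 1e-2            # compute time per simulation step (assumed)
syncs_per_step = 10      # synchronizations per step (assumed)

for system, lat in latency_s.items():
    overhead = syncs_per_step * lat
    eff = t_step / (t_step + overhead)
    print(f"{system:8s} efficiency ~ {eff:.1%}")
# MPP ~100%, Cluster ~50%, Grid ~1%: don't use a Grid as a parallel computer.
```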

Building PSE’s with the Rule of the Millisecond I
• Typical Web Services are used in situations with interaction delays (network transit) of 100’s of milliseconds
• But the basic message-based interaction architecture only incurs a fraction of a millisecond of delay
• Thus use Web Services to build ALL PSE (Problem Solving Environment) components
  • Use messages and NOT method/subroutine calls or RPC
[Diagram: Nugget 1, Nugget 2, Nugget 3 and Nugget 4 linked by interaction and data messages]

Building PSE’s with the Rule of the Millisecond II
• Messaging has several advantages over scripting languages
  • Collaboration is trivial, by sharing messages
  • Software engineering benefits from greater modularity
  • Web Services do/will have wonderful support
• “Loose” application coupling uses workflow technologies
• Find the characteristic interaction time (milliseconds for programs; microseconds for MPI and particles) and use the best-supported architecture at that level
  • Two levels: Web Service (Grid) and C/C++/C#/Fortran/Java/Python
• The major difficulty with frameworks is NOT building them but rather supporting them
  • IMHO the only hope is to always minimize life-cycle support risks
  • Simulation/science is too small a field to support much!
• Expect to use DIFFERENT technologies at each level even though it is possible to do everything with one technology
  • Trade off support versus performance/customization

Requirements for MPI Messaging
[Diagram: alternating tcalc, tcomm, tcalc phases]
• MPI and SOAP messaging both send data from a source to a destination
  • MPI supports multicast (broadcast) communication
  • MPI specifies a destination and a context (in the comm parameter)
  • MPI specifies the data to send
  • MPI has a tag to allow flexibility in processing in the source processor
  • MPI has calls to understand the context (number of processors etc.); see the sketch below
• MPI requires very low latency and high bandwidth so that tcomm/tcalc is at most 10
  • BlueGene/L has bandwidth between 0.25 and 3 Gigabytes/sec/node and latency of about 5 microseconds
  • Latency dominates unless Message Size/Bandwidth > Latency
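A minimal MPI exchange showing the items the slide lists (destination, communicator context, data, and tag), written in mpi4py syntax; run it with something like `mpiexec -n 2 python example.py`:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD                            # the "context" (comm parameter)
rank = comm.Get_rank()                           # calls to understand context
size = comm.Get_size()                           # (number of processors etc.)

if rank == 0:
    # destination, data and tag all specified explicitly
    comm.send({"t": 0.0, "field": [1.0, 2.0]}, dest=1, tag=42)
elif rank == 1:
    data = comm.recv(source=0, tag=42)           # tag gives processing flexibility
    print(f"rank 1 of {size} got {data}")
```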

Requirements for SOAP Messaging
• Web Services have much the same requirements as MPI, with two differences where MPI is more stringent than SOAP
  • Latencies are inevitably 1 (local) to 100 milliseconds, which is 200 to 20,000 times that of BlueGene/L
    1) 0.000001 ms: CPU does a calculation
    2) 0.001 to 0.01 ms: MPI latency
    3) 1 to 10 ms: wake up a thread or process
    4) 10 to 1000 ms: Internet delay
  • Bandwidths for many business applications are low, as one just needs to send enough information for an ATM and a bank to define transactions
• SOAP has MUCH greater flexibility in areas like security, fault-tolerance, and “virtualizing addressing” because one can run a lot of software in 100 milliseconds
  • It typically takes 1-3 milliseconds to gobble up a modest message in Java and “add value”

Structure of SOAP
• SOAP defines a very obvious message structure with a header and a body, just like email
• The header contains information used by the “Internet operating system”
  • Destination, source, routing, context, sequence number …
• The message body is partly further information used by the operating system and partly information for the application; the latter is not looked at by the “operating system” except to encrypt or compress it
  • Note WS-Security supports separate encryption for different parts of a document
• Much discussion in the field revolves around what is referenced in the header
• This structure makes it possible to define VERY sophisticated messaging (see the example below)
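The header/body split is easiest to see in a concrete envelope. This is a hand-written SOAP 1.1 example; the WS-Addressing style fields, the sequence-number element, and the service URL are illustrative choices of mine:

```python
# The envelope as a plain string, to show what sits where.
envelope = """\
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <!-- read by the "Internet operating system": routing, context, ... -->
    <wsa:To xmlns:wsa="http://www.w3.org/2005/08/addressing"
      >http://grid.example.org/DataFilterService</wsa:To>
    <seq:Number xmlns:seq="http://example.org/stream">17</seq:Number>
  </soap:Header>
  <soap:Body>
    <!-- application payload: opaque to intermediaries, except to
         encrypt or compress it -->
    <app:RunFilter xmlns:app="http://example.org/app">
      <app:dataset>radar-2005-03-18</app:dataset>
    </app:RunFilter>
  </soap:Body>
</soap:Envelope>
"""
print(envelope)
```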

MPI and SOAP Integration
• Note that SOAP specifies the format and, through WSDL, the interfaces; MPI only specifies the interface, so interoperability between different MPIs requires additional work
  • IMPI, http://impi.nist.gov/IMPI/
• Pervasive networks can support high bandwidth (Terabits/sec soon) but the latency issue is not resolvable in a general way
• One can combine MPI interfaces with SOAP messaging, but I don’t think this has been done
• Just as walking, cars, planes and phones coexist with different properties, so SOAP and MPI are both good and should be used where appropriate

NaradaBrokering
• http://www.naradabrokering.org
• We have built a messaging system designed to support traditional Web Services, with an architecture that also allows it to support the high performance data transport required for scientific applications
  • We suggest using this system whenever your application can tolerate 1-10 milliseconds of latency in linking components
  • Use MPI when you need much lower latency
• Use the SOAP approach when MPI interfaces are required but latency is high
  • As in linking two parallel applications at remote sites
• Technically it forms an overlay network, supporting in software features often done at the IP level

Mean Transit Delay for Message Samples in NaradaBrokering
[Chart: transit delay in milliseconds (0-9) versus message payload size in bytes, for different communication hop counts (hop-2, hop-3, hop-5, hop-7); testbed: Pentium-3 1 GHz, 256 MB RAM, 100 Mbps LAN, JRE 1.3, Linux]

Fast Web Service Communication I
• Internet messaging systems allow one to optimize message streams at the cost of “startup time”
• Web Services can then deliver the fastest possible interconnections, with or without reliable messaging
• Typical results from Grossman (UIC) comparing slow SOAP over TCP with binary and UDP transport (the latter gains a factor of 1000):

  Transport       | Time
  Pure SOAP       | 7020
  SOAP over UDP   |
  Binary over UDP | 5.60

Fast Web Service Communication II
• The mechanism only works for streams: sets of related messages
• The SOAP header in a stream is constant except for the sequence number (Message ID), time-stamp, etc. (sketched below)
• One needs two new types of Web Service specification:
  • “WS-StreamNegotiation” to define how one can use WS-Policy to send messages at the start of a stream that define the methodology for treating the remaining messages in the stream
  • “WS-FlexibleRepresentation” to define new encodings of messages
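The proposed WS-StreamNegotiation and WS-FlexibleRepresentation specifications exist only as ideas here, so there is no real API to show; this sketch of mine just illustrates the underlying trick: deposit the constant header once per stream, then send only what varies.

```python
# Header caching for a message stream: token + sequence number + payload,
# instead of re-sending the full ASCII XML header with every message.
import itertools, json, zlib

def open_stream(constant_header):
    """'Negotiation' step: both ends store the invariant header, get a token."""
    token = zlib.crc32(json.dumps(constant_header, sort_keys=True).encode())
    return token, itertools.count(1)

def encode_message(token, seq, payload):
    """Per-message wire format once the stream is set up."""
    body = json.dumps(payload).encode()
    return token.to_bytes(4, "big") + next(seq).to_bytes(4, "big") + body

header = {"to": "http://example.org/sink", "reliability": "WS-RM-like"}
token, seq = open_stream(header)
wire = encode_message(token, seq, {"sample": 3.14})
print(len(wire), "bytes on the wire for this message")
```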

Fast Web Service Communication III
• Then use “WS-StreamNegotiation” to negotiate the stream in Tortoise SOAP (ASCII XML over HTTP and TCP):
  • Deposit the basic SOAP header through the connection; it is part of the context for the stream (the linking of two services)
  • Agree on firewall penetration, reliability mechanism, binary representation and fast transport protocol; the natural transport is UDP plus WS-RM
• Use “WS-FlexibleRepresentation” to define the encoding of a fast transport (on a different port), with messages just carrying a “FlexibleRepresentationContextToken”, sequence number, and time stamp if needed
  • RTP packets have essentially this structure
  • Could add stream termination status
• One can monitor and control with the original negotiation stream
• One can generate different streams optimized for different end-points

Role of Workflow
[Diagram: Service-1, Service-2 and Service-3 linked by two-way message flows]
• Programming SOAP and Web Services (the Grid): workflow describes the linkage between services
  • As the services are distributed, linkage must be by messages
  • Linkage is two-way and carries both control and data
• Apply to multi-disciplinary and multi-scale linkage, multi-program linkage, linking visualization to simulation, GIS to simulations, and visualization filters to each other (see the sketch below)
• The Microsoft-IBM specification BPEL is the currently preferred Web Service XML specification of workflow
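BPEL itself is verbose XML, so here is only a toy rendering of the idea above, entirely my own sketch: workflow as a two-way message linkage between services, with a simulation service feeding a visualization filter and control flowing alongside data.

```python
# Workflow = services linked by messages, not subroutine calls.
def simulation_service(msg):
    return {"control": "ok", "data": [x * x for x in msg["data"]]}

def visualization_service(msg):
    return {"control": "ok", "data": "plot(" + ",".join(map(str, msg["data"])) + ")"}

def run_workflow(stages, initial):
    msg = initial
    for stage in stages:                 # each hop is a message exchange
        msg = stage(msg)
        assert msg["control"] == "ok"    # control travels with the data
    return msg

print(run_workflow([simulation_service, visualization_service],
                   {"control": "ok", "data": [1, 2, 3]}))
```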

Example workflow
• A sensor feeds a datamining application (we are extending datamining in DoD applications with Grossman from UIC)
• The data-mining application drives a visualization

Example Flood Simulation workflow
[Figure: flood simulation workflow]

SERVOGrid Codes, Relationships
[Diagram: SERVOGrid codes and their couplings: Elastic Dislocation Inversion, Viscoelastic FEM, Viscoelastic Layered BEM, Elastic Dislocation, Pattern Recognizers, Fault Model BEM]
This linkage is called Workflow in Grid/Web Service parlance.

Two-level Programming I
• The Web Service (Grid) paradigm implicitly assumes a two-level programming model
• We make a Service (the same as a “distributed object” or “computer program” running on a remote computer) using conventional technologies
  • A C++, Java or Fortran Monte Carlo module
  • Data streaming from a sensor or satellite
  • Specialized (JDBC) database access
• Such services accept and produce data from users, files and databases
[Diagram: a Service consuming and producing Data]
• The Grid is built by coordinating such services, assuming we have solved the problem of programming the service

Two-level Programming II
• The Grid concerns the composition of distributed services with runtime interfaces to the Grid, as opposed to UNIX pipes/data streams
[Diagram: Service 1, Service 2, Service 3 and Service 4 composed into an application]
• This is familiar from the use of UNIX shell, PERL or Python scripts to produce real applications from core programs
• Such interpretative environments are the single-processor analog of Grid programming
• Some projects, like GrADS from Rice University, are looking at integration between the service and composition levels, but the dominant effort looks at each level separately

3 Layer Programming Model
[Diagram, top to bottom:
  Application (level 1 programming: MPI, Fortran, C++ etc.)
  Application Semantics (level 2 “programming”: metadata, ontology, Semantic Web)
  Basic Web Service Infrastructure (Web Service 1, WS 2, WS 3, WS 4) with Workflow (level 3 programming: BPEL)]
Workflow will be built on top of NaradaBrokering as the messaging layer.

Conclusions
• Grids are inevitable and pervasive
• We can expect Web Services and Grids to merge, with a common set of general principles but different implementations with different scaling and functionality trade-offs
• We will be flooded with data, information and purported knowledge
• Develop algorithms that exploit and support the data deluge
• The software infrastructure for building tools is getting much better
• Use MPI where it’s appropriate