iSERVOGrid Architecture Working Group, Brisbane, Australia, June 5 2003
Geoffrey Fox, Community Grids Lab, Indiana University, [email protected]
http://academia.web.cern.ch/academia/lectures/grid/
http://www.grid2002.org


Page 1: iSERVOGrid Architecture Working Group Brisbane Australia June 5 2003

iSERVOGrid Architecture

Working Group, Brisbane, Australia

June 5 2003

Geoffrey Fox
Community Grids Lab, Indiana University
[email protected]

http://academia.web.cern.ch/academia/lectures/grid/

http://www.grid2002.org

Page 2

SERVOGrid Grid Requirements

• Seamless access to data repositories and large-scale computers
• Integration of multiple data sources including sensors, databases, and file systems with analysis systems
  – Including filtered OGSA-DAI
• Rich meta-data generation and access with SERVOGrid-specific schema extending industry standards
• Portals with a component model for user interfaces and web control of all capabilities
• Collaboration to support world-wide work
• Basic Grid tools: workflow and notification
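The meta-data requirement above can be illustrated with a small sketch: a record that mixes generic, industry-standard-style elements with solid-earth extensions. All element names below are illustrative assumptions, not the actual SERVOGrid schema.

```python
# Sketch of a hypothetical SERVOGridML-style metadata record: generic
# (Dublin Core-like) elements extended with domain-specific fields.
# Element names are assumptions for illustration only.
import xml.etree.ElementTree as ET

def make_metadata_record(title, creator, fault_name, magnitude):
    """Build a small XML metadata record mixing generic and domain fields."""
    record = ET.Element("ServoGridRecord")
    # Generic, industry-standard-style elements
    ET.SubElement(record, "title").text = title
    ET.SubElement(record, "creator").text = creator
    # Hypothetical domain-specific extensions
    seismic = ET.SubElement(record, "seismicEvent")
    ET.SubElement(seismic, "faultName").text = fault_name
    ET.SubElement(seismic, "magnitude").text = str(magnitude)
    return record

record = make_metadata_record("1994 Northridge dataset", "JPL", "Northridge", 6.7)
xml_text = ET.tostring(record, encoding="unicode")
```

A real deployment would validate such records against an agreed schema so that catalogs and portals can query them uniformly.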

Page 3

What should SERVOGrid do?

• Make use of Grid technologies and architecture from around the world
• Coordinate with the broad community through the Global Grid Forum and OMII
• Decide on domain-specific standards: SERVOGridML
• Agree on a particular approach within choices in the international suite (use GT3 or not? use portlets or not? choose meta-data technology) and define SERVOGrid community practice
• Develop software system infrastructure and applications specific to solid earth science
• Worry about network interconnection between earthquake scientists and sensors

Page 4

[Figure: SERVOGrid Caricature — a diagram linking databases, closely coupled compute nodes, analysis and visualization, repositories/federated databases, sensor nets with streaming data, and loosely coupled filters]

Page 5

[Figure: SERVOGrid (Complexity) Computing Model — an HPC simulation fed by distributed data filters that massage data for the simulation, alongside other Grid and Web Services, analysis/control, visualization, and OGSA-DAI Grid services. This type of Grid integrates with parallel computing: multiple HPC facilities but only one used at a time, many simultaneous data sources and sinks, and Grid data assimilation.]
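The "distributed filters massage data for simulation" idea in the computing model above can be sketched as a chain of loosely coupled filters applied to a stream of sensor readings before they reach the simulation. The filter names and the calibration constant are illustrative assumptions.

```python
# Minimal sketch of a loosely coupled filter chain feeding a simulation.
# Each filter consumes and yields a stream, so filters can run
# independently and be composed in any order.

def drop_bad_readings(stream):
    """Filter out obviously invalid sensor values (e.g. dropouts)."""
    for value in stream:
        if value is not None:
            yield value

def calibrate(stream, offset=0.5):
    """Apply a (hypothetical) calibration offset to each reading."""
    for value in stream:
        yield value + offset

def assimilate(stream):
    """Stand-in for handing the cleaned stream to the HPC simulation."""
    return list(stream)

raw_sensor_data = [1.0, None, 2.0, 3.0, None]
pipeline = calibrate(drop_bad_readings(raw_sensor_data))
simulation_input = assimilate(pipeline)
```

In a real Grid the filters would be separate services connected by messaging rather than in-process generators, but the composition pattern is the same.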

Page 6

Discussion I

• People
• Compute Nodes
  – ACcESS SGI Altix
  – RIST Intel/Myrinet
  – University of Tokyo
  – VPAC Alpha/Quadrics
  – Indiana IBM SP
  – NASA Ames probably
• Issues
  – Data formats/translation service for input and output data
  – Need list of software, databases
  – Security
  – Visualization
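The data-formats/translation-service issue above can be sketched as converting a flat, whitespace-delimited station record into a structured form that downstream tools can share. The field layout and the sample values are assumptions for illustration only.

```python
# Sketch of a tiny format-translation step: parse one whitespace-delimited
# sensor-station record into a typed dictionary, then serialize it to JSON
# for exchange between Grid services. The field layout is hypothetical.
import json

def translate_record(line):
    """Parse one whitespace-delimited station record into a typed dict."""
    station, lat, lon, disp = line.split()
    return {
        "station": station,
        "latitude": float(lat),
        "longitude": float(lon),
        "displacement_mm": float(disp),
    }

record = translate_record("PARK 35.90 -120.43 12.7")
as_json = json.dumps(record)
```

A shared translation service would sit between heterogeneous input/output formats like this one and the canonical form the analysis systems expect.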

Page 7

Discussion II

• Grid Software Components
• Core Infrastructure – Apache Axis
• Metadata Catalog/Registry – e.g. Globus MDS, but maybe can be done as now in Gateway (as implemented for the NASA JPL project)
• Databases – Xindice, MySQL, plus OGSA-DAI
• Workflow – Gateway
• Job Submission – Gateway
• Portlets for User Interface – Apache Jetspeed
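The workflow and notification components listed above can be sketched as a sequence of named Grid tasks run in order, with a callback standing in for the notification service. The task names and string results are illustrative assumptions, not the Gateway API.

```python
# Rough sketch of the workflow/notification pattern: run tasks in order,
# pass earlier results to later tasks, and notify observers as each
# stage completes. Task names and payloads are hypothetical.

def run_workflow(tasks, notify):
    """Run (name, task) pairs in order; notify as each completes."""
    results = {}
    for name, task in tasks:
        results[name] = task(results)  # each task sees earlier results
        notify(f"{name} completed")
    return results

events = []
tasks = [
    ("stage_data", lambda r: "dataset-ready"),
    ("run_simulation", lambda r: f"simulated({r['stage_data']})"),
    ("visualize", lambda r: f"plot({r['run_simulation']})"),
]
results = run_workflow(tasks, events.append)
```

A workflow engine like Gateway would additionally handle remote job submission, failure, and retry; this sketch only shows the ordering-plus-notification skeleton.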

Page 8

Discussion III

• Process
  – email, shared internal web pages, external -- iSERVO
  – Desktop AV, conferencing
• Activities
  – Identify responsible people
  – Debug collaboration infrastructure
  – Tutorial from Marlon on current US project
• July 2003
  – Analyze test cases for hardware, software, data
  – Policy on machines/data access -- implications