
Page 1: Nick Brook Current status Future Collaboration Plans Future UK plans

Nick Brook

• Current status

• Future Collaboration Plans

• Future UK plans

Page 2: Needs & Requirements

5th November 2001 GridPP Collab Meeting

The goals of the LHCb Data Challenge:

    Size                  Date
    3×10^6 event bursts   2000-2001
    6×10^6 event bursts   2002-2003
    10^7 event bursts     2004-2005

Extrapolated hardware profile for large-scale MC production in the UK:

                        End 2001   Jan 2003
    CPU (SI95)          5650       13200-24750
    Disk (TB)           3.5        7-14
    Robotic tape (TB)   2-13       23-48

Page 3: Current Status

UK is a major external Monte Carlo production centre (RAL & MAP)

UK provides major input into the formulation of the LHCb Grid plans

Frank Harris (Oxford) 75% (WP8 & external computing coordination)

Nick Brook (Bristol) 50% (UK coordination & analysis model)

Glenn Patrick (RAL) 30% (analysis model & testbed)

Girish Patel (Liverpool) 20% (MAP + testbed)

Ulrik Egede (IC) 10% (analysis model)

Artur Barczyk (Edinburgh) 20% (analysis model)

Akram Khan (Edinburgh) 20% (testbed + WP2, analysis framework)

Ian McArthur (Oxford) 20% (metadata management)

A N Other (Oxford) 100% (Gaudi/Grid - WP8)

Chris Jones (Cambridge) 10% (analysis model)

Page 4: Monte Carlo Production

    Centre     OS     Max (ave) CPUs     Typical weekly        % submitted
                      used simultaneously  production (k evts)  through Grid
    CERN       Linux  315 (60)            85                    10
    RAL        Linux  50 (30)             35                    100
    IN2P3      Linux  225 (60)            35                    100
    Liverpool  Linux  300 (250)           150                   0
    Bologna    Linux  20 (20)             35                    0

Hope to include ScotGrid & Bristol soon

Page 5: Monte Carlo Production

Submit via web

Execute on farm

Performance monitoring via web

Update database at CERN

Transfer to mass storage at CERN

Data quality checking

Page 6: Monte Carlo Production

Submit via web (WP1 submission tools)

Execute on farm (WP4 environment, WP1 submission tools)

Performance monitoring via web (WP3 monitoring tools)

Update database at CERN (WP2 metadata tools)

Transfer to mass storage at CERN, RAL, … (WP1 tools, WP2 data replication, WP5 API for mass storage)

Data quality checking (online histo production using Grid pipes)
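The production workflow above can be sketched as a simple pipeline driver. This is purely illustrative: the stage names follow the slide, but every function, field and service name here is hypothetical, not taken from the real LHCb production scripts.

```python
# Hypothetical sketch of the MC production workflow: submit, execute,
# update the book-keeping database, transfer to mass storage, check quality.
# All names are invented for illustration.

def submit_via_web(job):
    job["status"] = "submitted"
    return job

def execute_on_farm(job):
    # Pretend the farm produced every requested event.
    job["events"] = job["requested_events"]
    job["status"] = "done"
    return job

def update_bookkeeping(job, db):
    # Record the produced events in the central book-keeping database.
    db[job["id"]] = job["events"]
    return job

def transfer_to_mass_storage(job, stores):
    # Replicate the output dataset to each mass storage site (CERN, RAL, ...).
    for store in stores:
        store.append(job["id"])
    return job

def check_data_quality(job):
    # Trivial quality check: did the job produce any events at all?
    return job["events"] > 0

def run_production(job, db, stores):
    """Drive one job through the stages shown on the slide, in order."""
    submit_via_web(job)
    execute_on_farm(job)
    update_bookkeeping(job, db)
    transfer_to_mass_storage(job, stores)
    return check_data_quality(job)
```

In the real system each stage is a separate distributed service; the point of the sketch is only the ordering of the stages and the fact that the book-keeping update precedes the transfer.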

Page 7: MC Production Requirements

• Production-site-aware scripts
• Installation kits
• System should be resilient to crashes of:

– any shared filesystem used
– the book-keeping database system
– the mass storage system
– the file transfer technology
– the job submission system
– the LHCb executables!

• Scalability, i.e. easy to add a new MC production site
• Allow replication of remote datasets
• Monitoring and control of the LHCb MC system

Interface to GANGA
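The crash-resilience requirement above amounts to treating every external service (filesystem, book-keeping database, file transfer, job submission) as fallible. A minimal retry wrapper, purely illustrative and not from the actual production scripts, could look like:

```python
import time

def with_retries(action, attempts=3, delay=0.0):
    """Run an unreliable action, retrying on failure.

    `action` stands in for any of the fallible steps on the slide
    (shared filesystem access, book-keeping update, file transfer, ...).
    """
    last_error = None
    for _ in range(attempts):
        try:
            return action()
        except Exception as err:  # real scripts would catch specific errors
            last_error = err
            time.sleep(delay)
    raise RuntimeError("all attempts failed") from last_error
```

Wrapping each stage this way means a transient crash of any one service delays, rather than kills, a production run.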

Page 8: Installation Kit Requirements

• Production Installation Kit

– Installs a self-contained run-time environment:

• a given version of the executable
• "calls" external dynamic libraries in a "path-independent" way
• the appropriate detector database
• a script to check the installation

– Original format compatible with EDG WP4, i.e. RPMs
– Production on the Windows platform??

• Developers Installation Kit

– Installs a self-contained developers environment:

• source code
• the appropriate detector database
• CMT, CVS, C++ installation if needed

– Supports Windows

Interface to GANGA
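The "script to check installation" item above could be as small as a walk over the kit's expected contents. The layout below is invented for illustration; the real kit's file names are not specified on the slide.

```python
import os

# Hypothetical layout of a self-contained production kit; these paths
# are illustrative only, not the real LHCb kit's.
REQUIRED = [
    "bin/executable",   # the given version of the executable
    "db/detector.db",   # the appropriate detector database
    "lib",              # external dynamic libraries, resolved path-independently
]

def check_installation(kit_root):
    """Return the list of required entries missing from the kit."""
    return [p for p in REQUIRED
            if not os.path.exists(os.path.join(kit_root, p))]
```

An empty return value means the self-contained environment is complete; anything else names exactly what the installer failed to unpack.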

Page 9: What is Gaudi?

[Architecture diagram: the Application Manager coordinates Algorithms, Converters and the Event Selector around three transient stores (Transient Event Store, Transient Detector Store, Transient Histogram Store), served respectively by the Event Data Service, the Detector Data Service and the Histogram Service, each backed by a Persistency Service reading and writing data files. Common services include the Message Service, the JobOptions Service, the Particle Properties Service and other services.]

Page 10: Gaudi & The Grid World

[Diagram: the same Gaudi components as on the previous slide, with the persistency and data services now backed by external resources: mass storage via the OS, an Event Database, the PDG Database, a DataSet DB and others. On the Grid side, an Analysis Program interacts with a Monitoring Service, a Histo Presenter, a Job Service, a Config Service and other services.]

Page 11: GANGA: Grid And Gaudi Alliance

[Diagram: the GANGA GUI sits between the GAUDI program and the collective & resource Grid services; it passes job options and algorithms to the program and returns histograms, monitoring information and results to the user.]

Page 12: GANGA Requirements

• Prior to job submission

– configuration of the Gaudi program, e.g. algorithm selection, event data input, requested output, …
– Grid services' estimation of resource requirements, e.g. CPU, storage, …
– user job requirements, e.g. software needed
– submitting the job (or parallel sub-jobs)

• During execution

– monitor progress: displaying of messages, histograms, …

• After execution

– termination status and validation
– e.g. MC production: output files to mass storage & updating of the book-keeping database
– e.g. analysis: return output to the user
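The lifecycle above (configure, estimate resources, submit, monitor, finalize) can be sketched as a thin job wrapper. Every class, method and field name here is hypothetical, standing in for the interfaces GANGA would provide, and the resource formula is a toy.

```python
class GaudiJob:
    """Illustrative stand-in for a GANGA-managed Gaudi job."""

    def __init__(self, algorithms, event_input, requested_output):
        # Prior to submission: configure the Gaudi program
        # (algorithm selection, event data input, requested output).
        self.algorithms = algorithms
        self.event_input = event_input
        self.requested_output = requested_output
        self.status = "configured"
        self.messages = []

    def estimate_resources(self):
        # Grid services' estimate of CPU/storage needs (toy integer formula).
        n_events = self.event_input["n_events"]
        return {"cpu_hours": n_events // 100, "storage_mb": n_events // 5}

    def submit(self):
        # Submit the job (or, in the real system, parallel sub-jobs).
        self.status = "running"

    def monitor(self):
        # During execution: progress messages, histograms, ...
        self.messages.append(f"job is {self.status}")
        return self.messages[-1]

    def finalize(self):
        # After execution: termination status and validation,
        # then output retrieval or book-keeping update.
        self.status = "done"
        return {"status": self.status, "output": self.requested_output}
```

For an MC production job the `finalize` step would push output files to mass storage and update the book-keeping database; for an analysis job it would simply return the output to the user, as the slide distinguishes.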

Page 13: ATLAS & LHCb

• ATLAS & LHCb are using the same s/w framework: Athena/Gaudi

• A large part of the proposed work overlaps, e.g. the GANGA grid interfaces

• The proposed programme has been discussed not only between the 2 UK collaborations BUT also at CERN

Page 14: Summary

• Proposing a challenging programme
• Programme developed in conjunction with the central collaboration
• Programme developed in conjunction with ATLAS, as a whole & in the UK
• Requesting a minimum of 8 staff-years of effort (some will be common with ATLAS)
• UK already playing a leading role in the LHCb Grid activities
• UK already the leading centre(s) in producing Monte Carlo for LHCb