US LHC Tier-2s on behalf of US ATLAS, US CMS, OSG
Ruth Pordes, Fermilab
Nov 17th, 2007
Supported by the Department of Energy Office of Science SciDAC-2 program through the High Energy Physics, Nuclear Physics, and Advanced Software and Computing Research programs, and by the National Science Foundation Mathematical and Physical Sciences, Office of Cyberinfrastructure, and Office of International Science and Engineering Directorates.
US LHC Tier-2s for WLCG
WLCG Tier-2 Accounted Resources (from WLCG)
Federation                  2007 CPU Pledge (kSI2K)  Accounting Name  Site(s)
USA, Great Lakes ATLAS T2   581                      US-AGLT2         AGLT2
USA, Northeast ATLAS T2     394                      US-NET2          BU_ATLAS_Tier2, BU_ATLAS_Tier2o
USA, Midwest ATLAS T2       826                      US-MWT2          MWT2_UC, MWT2_IU, UC_Teraport, UC_ATLAS_MWT2, IU_ATLAS_Tier2, IU_OSG
USA, Southwest ATLAS T2     998                      US-SWT2          OU_OCHEP_SWT2, OU_OSCER_ATLAS, UTA_DPCC, UTA_SWT2
USA, SLAC ATLAS T2          550                      US-WT2           PROD_SLAC
USA, Caltech CMS T2         470                      US-Caltech       CIT_CMS_T2
USA, Florida CMS T2         470                      US-Florida       Uflorida-PG, Uflorida-IHEPA, UFlorida-HPC
USA, MIT CMS T2             470                      US-MIT           MIT_CMS
USA, Nebraska CMS T2        470                      US-Nebraska      Nebraska
USA, Purdue CMS T2          470                      US-Purdue-T2     Purdue-Lear, Purdue-RCAC
USA, UC San Diego CMS T2    470                      US-UCSD          UCSDT2
USA, U. Wisconsin CMS T2    470                      US-Wisconsin     GLOW
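As a quick cross-check on the pledge column, a minimal sketch summing the 2007 CPU pledges per experiment (the numbers are transcribed from the table above, not pulled from WLCG directly):

```python
# 2007 CPU pledges (kSI2K) transcribed from the WLCG Tier-2 accounting table above.
atlas_pledges = {
    "US-AGLT2": 581,
    "US-NET2": 394,
    "US-MWT2": 826,
    "US-SWT2": 998,
    "US-WT2": 550,
}
# Each of the seven US CMS Tier-2 federations pledged 470 kSI2K.
cms_pledges = {name: 470 for name in (
    "US-Caltech", "US-Florida", "US-MIT", "US-Nebraska",
    "US-Purdue-T2", "US-UCSD", "US-Wisconsin",
)}

atlas_total = sum(atlas_pledges.values())   # 3349 kSI2K
cms_total = sum(cms_pledges.values())       # 3290 kSI2K
print(f"US ATLAS 2007 pledge: {atlas_total} kSI2K")
print(f"US CMS   2007 pledge: {cms_total} kSI2K")
```

The ATLAS sum (3,349 kSI2K) agrees with the US ATLAS 2007 total of 3,348 kSI2000 on the Issues slide to within rounding.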
US LHC Tier-2s Resources (2007/2008)
Site                                              kSI2000 (2007 / 2008)  Disk cache TB (2007 / 2008)
Midwest T2: Indiana U., U. of Chicago             1032 / 1390            266 / 353
SouthWest T2: Oklahoma U., U. of Texas Arlington  1248 / 1732            179 / 320
SLAC                                              687 / 1025             285 / 578
NorthEast T2: Boston U.                           492 / 831              129 / 305
Great Lakes T2: Michigan                          726 / 1206             194 / 403
Caltech                                           586 / 1000             60 / 200
Florida                                           519 / 1000             104 / 200
MIT                                               474 / 1000             157 / 200
Nebraska                                          650 / 1000             105 / 200
Purdue                                            743 / 1000             184 / 200
UCSD                                              932 / 1000             150 / 200
Wisconsin                                         547 / 1000             110 / 200
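To quantify the 2007-to-2008 ramp-up implied by the table, a small sketch summing the transcribed numbers (tuple layout is my own; values are copied from the slide):

```python
# (site, CPU 2007, CPU 2008, disk 2007, disk 2008), in kSI2000 and TB,
# transcribed from the resources table above.
resources = [
    ("Midwest T2",     1032, 1390, 266, 353),
    ("SouthWest T2",   1248, 1732, 179, 320),
    ("SLAC",            687, 1025, 285, 578),
    ("NorthEast T2",    492,  831, 129, 305),
    ("Great Lakes T2",  726, 1206, 194, 403),
    ("Caltech",         586, 1000,  60, 200),
    ("Florida",         519, 1000, 104, 200),
    ("MIT",             474, 1000, 157, 200),
    ("Nebraska",        650, 1000, 105, 200),
    ("Purdue",          743, 1000, 184, 200),
    ("UCSD",            932, 1000, 150, 200),
    ("Wisconsin",       547, 1000, 110, 200),
]
cpu07 = sum(r[1] for r in resources)
cpu08 = sum(r[2] for r in resources)
disk07 = sum(r[3] for r in resources)
disk08 = sum(r[4] for r in resources)
print(f"CPU:  {cpu07} -> {cpu08} kSI2000 (x{cpu08 / cpu07:.2f})")
print(f"Disk: {disk07} -> {disk08} TB (x{disk08 / disk07:.2f})")
```

Aggregate CPU grows by roughly 1.5x and disk by roughly 1.75x in a single year, which is the ramp-up stress noted under Issues.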
Issues
• Fast ramp-up puts stress on purchasing and operational teams.
• ATLAS targets for 2010 and 2011 are not met by current plans.

US ATLAS
Year  Total kSI2000  Target kSI2000  Total Disk TB  Target Disk TB
2007  3,348          794             842            428
2008  4,947          5,948           1,567          2,633
2009  6,367          9,171           2,467          4,458
2010  7,681          17,525          3,482          7,525
2011  9,982          23,504          5,015          10,571

US CMS
Year  Total kSI2000  Total Disk TB
2007  4,451          870
2008  7,000          1,200
2009  7,700          2,520
2010  13,580         3,990
2011  13,580         3,990
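Reading the US ATLAS numbers above as planned totals alongside targets (an assumption about the flattened slide layout), a quick check of the 2010/2011 shortfall the bullet refers to:

```python
# US ATLAS numbers transcribed from the table above; the plan/target column
# assignment is an assumption about the original slide layout.
atlas = {
    # year: (cpu_plan, cpu_target, disk_plan, disk_target)
    2007: (3348,   794,  842,   428),
    2008: (4947,  5948, 1567,  2633),
    2009: (6367,  9171, 2467,  4458),
    2010: (7681, 17525, 3482,  7525),
    2011: (9982, 23504, 5015, 10571),
}
for year in (2010, 2011):
    cpu_plan, cpu_target, disk_plan, disk_target = atlas[year]
    print(f"{year}: CPU plan is {cpu_plan / cpu_target:.0%} of target, "
          f"disk plan is {disk_plan / disk_target:.0%} of target")
```

Under this reading, the 2010 and 2011 plans sit below half of the corresponding targets for both CPU and disk.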
[Diagram: OSG software stack, from applications down to resources]
• User Science Codes and Interfaces
• VO middleware and applications: HEP (data and workflow management, etc.), Biology (portals, databases, etc.), Astrophysics (data replication, etc.)
• OSG Release Cache: OSG-specific configurations, utilities, etc.
• Virtual Data Toolkit (VDT): core grid technologies plus stakeholder needs. Condor, Globus, MyProxy (shared with and support for EGEE and TeraGrid), accounting, authz, monitoring, VOMS, and others.
• Existing operating systems, batch systems, and utilities on each resource.
All US LHC Tier-2s are part of OSG
ATLAS and CMS software and services installed on sites through Grid interfaces.
Batch queue configurations ensure priority to ATLAS or CMS jobs.
Experiments responsible for end-to-end systems.
The Operations Center dispatches and “owns” problems until they are solved.
Activities provide common forums for software technical and operational issues.
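The batch-priority point above is implemented site by site, but since Condor is part of the VDT stack, one common pattern of the era was accounting-group quotas in the Condor negotiator. A hypothetical fragment, assuming Condor group quotas; the group names and numbers are illustrative, not any site's actual configuration:

```
# condor_config fragment (illustrative): reserve most slots for the owning
# experiment while letting the other VO backfill opportunistically.
GROUP_NAMES = group_atlas, group_cms, group_other
GROUP_QUOTA_group_atlas = 800
GROUP_QUOTA_group_cms   = 150
GROUP_QUOTA_group_other = 50
# Let groups exceed their quota when slots would otherwise sit idle.
GROUP_AUTOREGROUP = True
# Jobs declare membership in their submit file, e.g.:
#   +AccountingGroup = "group_atlas.username"
```

Whether a site uses group quotas, per-VO queues, or scheduler-specific shares, the effect described on the slide is the same: the owning experiment's jobs take priority on its own Tier-2.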
US LHC Tier-2s are fully integrated into the experiments
• All sites are funded through the US NSF research program, except for DOE support for SLAC.
• Provide Monte Carlo processing, plus hosting and CPU for analysis and MC data.
• Distribute data to/from Tier-1s. Provide analysis centers for Tier-3->N physicists.
• All Tier-2s are successfully reporting into the WLCG Tier-2 accounting reports.
• US ATLAS and CMS Tier-2s are contributing at least their share to the ATLAS analysis challenge and the CMS Computing, Software and Analysis challenge (CSA07).
ATLAS report - Michael Ernst, Rob Gardner
• Robust data distribution supported by the BNL Tier-1.
• Support for the PanDA pilot job infrastructure, with DQ servers locally or remotely. The Athena analysis framework is available locally.
• The Facility Integration program provides a forum for Tier-2 administrators to communicate and converge on common solutions. Computing Integration and Operations meetings and mailing lists are effective forums.
• Semi-annual Tier-2 workshops help newer Tier-2s get up to speed more quickly.
• A mix of dCache- and xrootd-based storage elements.
ATLAS Concerns
• Performance and scalability of dCache for analysis I/O needs.
• End-to-end performance of data distribution and management tools.
ATLAS Tier-1 distributes data to Tier-2s
• Data distribution driven by Tier-2 processing and analysis needs.
e.g. BNL to University of Chicago data distribution
ATLAS Jobs
[Chart: ATLAS jobs by site. The US contributes 33% of jobs, with 96% walltime efficiency; UTA is labeled on the chart.]
US CMS - report from Ken Bloom, Tier-2 coordinator
• Funding of $500K/site provides 2 FTE/site for support.
• Site-specific configurations (e.g. different batch systems), but all sites follow common approaches.
• All use dCache to manage storage; 1 FTE (of the 2) per site is needed for this support.
CMS Concerns
• Robustness and performance of Tier-1 sites hosting data: all Tier-1s serve data to US Tier-2s, and the majority of CMS data will live across an ocean. Reliability is crucial.
• Will the grid be sufficiently robust when we get to analysis? Can user jobs get to the sites, and can the output get back?
• Are we being challenged enough in advance of 2,000 users showing up?
Tier-2s Data Movement
Job hosting 11/07
Summary
• US LHC Tier-2s are full participants in the US ATLAS, US CMS and OSG organizations.
• Communications between the collaborations and the projects are good.
• The two collaborations use each other's resources when capacity is available. Mechanisms are in place to ensure priorities don't get inverted!
• The Tier-2s are ready to contribute to CCRC and data commissioning.