
CIFTS: A Coordinated Infrastructure for Fault Tolerant Systems — Rinku Gupta, Argonne National Laboratory

TRANSCRIPT

Page 1:

CIFTS: A Coordinated Infrastructure for Fault Tolerant Systems

Rinku Gupta

Argonne National Laboratory

Page 2:

Current HPC Systems

System       | CPUs   | Reliability*
ASCI-Q       | 8,192  | MTBI: 6.5 hrs (storage, CPU, memory)
ASCI-W       | 8,192  | MTBF: 5 hrs ('01) and 40 hrs ('03) (storage, CPU, 3rd-party HW)
PSC Lemieux  | 3,016  | MTBI: 9.7 hrs
Google       | 15,000 | 20 reboots/day; 2-3% of machines replaced per year (storage, memory)

* "A Power-Aware Run-Time System for High-Performance Computing", Chung-Hsing Hsu and Wu-chun Feng, ACM/IEEE Supercomputing Conference (SC), 2005

• Top 500 statistics
  – Performance growth of the #1 system: 35.86 TF/s (2002) to 1 PF/s (2008)
  – Average node count growth: 128-258 (2002) to 1024-2048 (2008)
• Today's capability computing is tomorrow's capacity computing

Page 3:

Downtime Cost

Service                   | Cost of One Hour of Downtime*
Catalog Sales Center      | $90,000
Home Shopping Channel     | $113,000
Package Shipping Services | $150,000
Amazon                    | $180,000
eBay                      | $225,000
Credit Card Authorization | $2,600,000
Brokerage Operations      | $6,450,000

"Faults directly impact system downtime and TCO"

Page 4:

Fault Tolerance in HPC

• Available for many HPC components
  – Storage (RAID variations) and file systems (dCache, TeraGrid FS, Panasas, IBRIX, BulkFS)
  – Checkpointing software (application checkpointing, e.g., BLCR, Condor; operating-system checkpointing, e.g., TICK)
  – Software built on hardware technologies such as lm_sensors, OpenIPMI, and BMCs, plus monitoring software such as Ganglia
  – Middleware (FT-MPI, MPICH-V, FE-MPI, FT-ARMCI)

• What's the problem?
  – Virtually no coordination between different components
  – A component cannot inform other components about the faults it is seeing, or benefit from the knowledge of faults that other components are seeing
  – "Every man for himself" is not going to work!

Page 5:

Current Fault Tolerance (Scenario #1)

• The FT job scheduler launches MPI Job 1
• The FT application, running on FT-MPI (Job 1), detects a "communication failure" with node X
• MPI tells the application; the application checkpoints and aborts
• The scheduler launches MPI Job 2, and more failures follow
• Other software on the cluster is agnostic of this MPI job failure, and of the reason behind it

Page 6:

Current Fault Tolerance (Scenario #2)

• An I/O node fails and the parallel file system goes down
• I/O fails in each FT application independently; no application knows why

Page 7:

The Fault Tolerance Scene Today

Each component relies on its own inbuilt fault tolerance, in isolation:

• HPC middleware
• Job scheduler / resource manager
• File systems
• Operating system
• Applications
• System monitoring software (Ganglia)
• System management hardware (BMC, OpenIPMI)
• Checkpoint software
• Specific hardware (RAID)
• Admin scripts
• ...

Page 8:

Our Solution: The Coordinated Infrastructure for Fault Tolerant Systems

• CIFTS provides a coordination framework through which different components exchange hardware and software fault information

• The core of the CIFTS infrastructure is the Fault Tolerance Backplane (FTB)
  – Provides a scalable event handling and notification mechanism for all HPC software

• Several components can use the FTB to coordinate with other components
  – MPI (MPICH2, MVAPICH2, Open MPI), file systems (PVFS), ...

• CIFTS gives software developers the opportunity to improve their software's fault tolerance based on faults experienced by others on the system

Page 9:

CIFTS Framework

The Fault Tolerance Backplane connects:

• HPC middleware
• Linear algebra libraries
• Job scheduler / resource manager
• File systems
• Checkpointing software
• Advanced networks and networking libraries
• System monitoring software
• System management hardware
• Operating system
• Applications
• Advanced systems (Crays, BGs)

and supports services such as a universal logger, automatic actions, diagnostic tools, and event analysis.

Page 10:

FTB Architectural Layers

Two software stacks share the lower layers:

• Component software stack: Component 1 ... Component n sit on the Client Library, exposed through the FTB Client API
• FTB Agent software stack: the FTB Agent sits on the Manager Library through the FTB Manager API
• Both stacks use the Manager Library and a Network layer with pluggable modules (Network Module 1, Network Module 2)
• Target platforms: Linux, BG, Cray

Page 11:

FTB Architectural Layers (continued)

• Same layering as the previous slide, on Linux, BGL, and Cray
• Components see just the FTB Client API; the FTB Manager API, Manager Library, and network modules stay beneath it

Page 12:

The FTB Client API

• Provides a set of simple FTB routines, loosely based on the publish-subscribe model:
  – FTB_Connect()
  – FTB_Publish_event()
  – FTB_Subscribe()
  – FTB_Poll_for_event()
  – FTB_Unsubscribe()
  – FTB_Disconnect()
  – .....
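The routines above can be mimicked with a toy in-process publish-subscribe model. This is only a sketch: the real FTB Client API is a C library, and the classes, signatures, and the `Backplane` stand-in below are all illustrative assumptions rather than CIFTS code (connect/disconnect are folded into object construction).

```python
# Toy in-process model of the FTB publish-subscribe pattern.
# NOT the real FTB C API: method names merely mirror the routines listed
# on the slide, and every signature here is an illustrative assumption.
from collections import defaultdict, deque

class Backplane:
    """Stands in for the network of FTB agents that routes events."""
    def __init__(self):
        self.subscriptions = defaultdict(list)   # event name -> subscriber queues

class Client:
    def __init__(self, backplane, name):        # plays the role of FTB_Connect()
        self.bp, self.name = backplane, name
        self.queue = deque()                    # events delivered to this client

    def FTB_Subscribe(self, event_name):
        self.bp.subscriptions[event_name].append(self.queue)

    def FTB_Publish_event(self, event_name, payload=None):
        for q in self.bp.subscriptions[event_name]:
            q.append((event_name, payload))     # deliver to every subscriber

    def FTB_Poll_for_event(self):
        return self.queue.popleft() if self.queue else None

bp = Backplane()
fs = Client(bp, "parallel-fs")
scheduler = Client(bp, "job-scheduler")
scheduler.FTB_Subscribe("FS_DOWN")

fs.FTB_Publish_event("FS_DOWN", {"node": "io-3"})
print(scheduler.FTB_Poll_for_event())   # ('FS_DOWN', {'node': 'io-3'})
```

Polling (rather than callbacks) matches the FTB_Poll_for_event() routine on the slide; a publisher never needs to know who, if anyone, is listening.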

Page 13:

The FTB Framework

• FTB agents run across the system and together form the backplane
• Software1, Software2, and Software3 each connect to a nearby FTB agent
• Some components subscribe to a set of events; others publish events
• The agents route each published event to the components that subscribed to it

Page 14:

A Scenario Using FTB-Enabled Software

• An I/O node fails and the parallel file system goes down
• The file system shares this information over the FTB
• The FT job scheduler launches new jobs with the NFS file system and migrates existing jobs
• MPI-IO prints a coherent error message
• FT applications checkpoint to a different file system
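The point of the scenario is that one shared event triggers a different recovery action in each subscriber. A minimal sketch of that dispatch pattern follows; the component names, event fields, and handler mechanism are all illustrative assumptions, not the real CIFTS code.

```python
# Hypothetical sketch: several subscribers react differently to the same
# "file system down" event. All names and fields here are illustrative.

def on_fs_down(event, reactions):
    # Each subscriber decides its own recovery action for the shared event.
    return [react(event) for react in reactions]

scheduler = lambda e: f"scheduler: migrate jobs off {e['fs']}, launch new jobs on NFS"
mpi_io    = lambda e: f"MPI-IO: print coherent error message for {e['fs']}"
app       = lambda e: "application: checkpoint to a different file system"

event = {"type": "FS_DOWN", "fs": "pvfs0"}
for line in on_fs_down(event, [scheduler, mpi_io, app]):
    print(line)
```

The file system publishes once; it never needs to know which recovery policies exist downstream, which is what makes the coordination loosely coupled.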

Page 15:

Benchmarking the Impact of FTB

• Benchmarking a system like FTB is challenging
  – A series of synthetic benchmarks shows the impact of using FTB in different scenarios

Benchmark #1
1. Understand the impact of FTB communication on FTB-agnostic applications
2. Measure FTB-agnostic application performance in the presence of communicating FTB agents
3. Run the agnostic application on some nodes (leaf nodes) while the other nodes (root nodes) perform an FTB all-to-all exchange

Page 16:

Impact of FTB Communication

• Test environment
  – MPI latency test: OSU v3.1.1
  – FTB all-to-all benchmark run on 88 nodes: each node publishes 50 events and catches 88 * 50 = 4,400 events
  – An agent runs on every node (connected through Myri-10G MX)
  – Nodes: 2.8 GHz dual-core AMD Opteron 2220, 4 GB memory

Page 17:

Benchmarking the Impact of FTB on an Integer Sort Application

Benchmark #2
1. The NPB 3.3 integer sort (IS) benchmark was modified to publish NUM_EVENTS events and catch NUM_EVENTS events; NUM_EVENTS varies from 1 to 1024
2. A logger, which subscribes to and catches ALL system-wide events, runs on all nodes
3. An agent runs on all nodes

For an 8-node run with NUM_EVENTS = 1024, each agent handles > 80K events
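The ">80K events" figure can be reproduced with one plausible accounting, stated here as an assumption since the slide does not spell out exactly what is counted: 8 IS instances each publish 1,024 events, every event is caught by the logger on all 8 nodes, and each IS instance additionally catches NUM_EVENTS events.

```python
# One plausible accounting of the ">80K events" figure (an assumption;
# the slide does not say precisely what the agent count includes).
nodes, num_events = 8, 1024

published      = nodes * num_events        # 8,192 events published in total
logger_catches = published * nodes         # every node's logger catches all: 65,536
is_catches     = nodes * num_events        # each IS instance catches NUM_EVENTS: 8,192

total_handled = published + logger_catches + is_catches
print(total_handled)                       # 81,920 -> consistent with ">80K"
```

Under this accounting the agent layer moves 81,920 events in total, which matches the slide's ">80K" claim.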

Page 18:

Impact of an Over-Worked FTB Agent on the IS Application

• Test environment
  – NPB 3.3 IS (Class C) benchmark run on 2 (or 8) nodes
  – A polling logger benchmark, which subscribes to all events, runs on the same 2 (or 8) nodes
  – An agent runs on all nodes (connected through GbE)
  – Nodes: 2.8 GHz dual-core AMD Opteron 2220, 4 GB memory

Page 19:

CIFTS group

• Argonne National Laboratory

• Indiana University

• Lawrence Berkeley National Laboratory

• Oak Ridge National Laboratory

• Ohio State University

• University of Tennessee, Knoxville

Page 20:

FTB-Enabled Software -- Planned

Around the Fault Tolerance Backplane: BLCR, FT-LA, SWIM, IPS, LAMMPS, Open MPI, PVFS, MPICH2, MVAPICH2, LAM/MPI, Cobalt, ScaLAPACK, ROMIO, NWChem, ZeptoOS, CCA, and other applications

Target platforms: InfiniBand, Linux clusters, IBM BG, Crays

Page 21:

Status Quo / Future Plans

• FTB 0.6 released in September 2008
  – With support for Linux, Cray, and the BG series

• FTB 0.7 planned for Q2 '09
  – Improvements to the inherent fault tolerance of FTB
    • Distributed database with multiple distributed replicas
    • Topological improvements for fast restoration on faults (currently a tree)
  – Event coalescing (currently each event is handled separately)
  – Event prioritization
  – Native support for high-speed networks within FTB

• Upcoming events
  – BOF at SC'08
  – Multiple CIFTS demos at SC'08

Page 22:

Questions

• Website: http://www.mcs.anl.gov/research/cifts/
• Mailing list: [email protected]
• Contacts
  – Rinku Gupta ([email protected]), Technical Lead
  – Pete Beckman ([email protected]), PI