Appro Xtreme-X Supercomputers

APPRO INTERNATIONAL INC


DESCRIPTION

Appro Xtreme-X Supercomputers, APPRO INTERNATIONAL INC. Company overview: leading developer of high-performance servers, clusters and supercomputers; established in 1991; headquartered in Milpitas, CA; sales & service office in Houston, TX.

TRANSCRIPT

Page 1: Appro Xtreme-X Supercomputers

Appro Xtreme-X Supercomputers

APPRO INTERNATIONAL INC

Page 2: Appro Xtreme-X Supercomputers


:: Corporate Snapshot

Company Overview

• Leading developer of high-performance servers, clusters and supercomputers

– Established in 1991

– Headquartered in Milpitas, CA

– Sales & Service office in Houston, TX

– Hardware manufacturing in Asia

– Global presence via strategic and channel partners

– Profitable, with a 72% CAGR over the past 3 years

– Deployed the second-largest supercomputer in Japan

– Six top-ranked computing systems listed in the Top500

– Delivering a balanced architecture for scalable performance

• Target Markets

– Financial Services

– Government / Defense

– Manufacturing

– Oil & Gas

Page 3: Appro Xtreme-X Supercomputers


Strategic Partnership

• NEC has a strong presence in the EMEA HPC Market with over 20 years of experience

• This is a breakthrough for Appro’s entry into the EMEA HPC market

• Provides sustainable competitive advantages enabling both companies to participate in this growing market segment

• Appro and NEC look forward to working together to offer powerful, flexible and reliable solutions to EMEA HPC markets

:: Appro & NEC Join Forces in HPC Market

Formal Press Announcement will go out on Tuesday, 9/16/08

Page 4: Appro Xtreme-X Supercomputers


:: Past Performance History

HPC Experience

• NOAA Cluster (2006.9): 1,424 cores, 2.8 TB system memory, 15 TFlops

• LLNL Atlas Cluster (2006.11): 9,216 cores, 18.4 TB system memory, 44 TFlops

• LLNL Minos Cluster (2007.6): 6,912 cores, 13.8 TB system memory, 33 TFlops

• DE Shaw Research Cluster (2008.2): 4,608 cores, 9.2 TB system memory, 49 TFlops

Page 5: Appro Xtreme-X Supercomputers


:: Past Performance History

HPC Experience

• TLCC Clusters (2008.4): 48,384 cores at LLNL, LANL and SNL, 426 TFlops

• Tsukuba University Cluster (2008.6): 10,784 cores, quad-rail IB, 95 TFlops

• Renault F1 CFD Cluster (2008.7): 4,000 cores, dual-rail IB, 38 TFlops

• LLNL Hera Cluster (2008.8): 13,824 cores, 120 TFlops

Page 6: Appro Xtreme-X Supercomputers


HPC Challenges

• Petascale deployments (4,000+ node deployments)

– Balanced Systems (CPU / Memory / Network)

– Scalability (SW & Network)

– Reliability (Real RAS: Network, Node, SW)

– Facilities (Space, Power & Cooling)

• Integrated exotics (GPU clusters)

– Solutions still being evaluated

:: Changes in the Industry

Page 7: Appro Xtreme-X Supercomputers


:: Based on a Scalable Multi-Tier Architecture

[Architecture diagram] Network tiers: InfiniBand for computing, 10GbE operation, GbE operation, and GbE management. Each compute server group contains compute nodes and I/O nodes; every node has a 4X IB link to the InfiniBand network and 2x GbE to its group's GbE operation network. The I/O nodes connect over 4X IB to parallel file system servers or a bridge. A management node reaches the nodes over a GbE management network. Storage controllers and storage servers host the global file system, attached via FC or GbE, with GbE or 10GbE (2x 10GbE) uplinks toward the operation network. The external network is reached through a firewall/router on the 10GbE operation network.

Petascale Deployments
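
A minimal, illustrative sketch of the tiers above expressed as data (Python). The compute and operation per-node link counts (4X IB, 2x GbE) come from the diagram; the management link count, the dictionary layout, the helper function, and the group sizes used in the example call are assumptions for illustration:

# Hypothetical description of the multi-tier network shown above.
# The compute/operation per-node link counts come from the diagram; the
# management link count and the group sizes below are made-up values.
networks = {
    "compute":    {"fabric": "4X InfiniBand", "links_per_node": 1},
    "operation":  {"fabric": "GbE (10GbE at the top tier)", "links_per_node": 2},
    "management": {"fabric": "GbE", "links_per_node": 1},
}

def ports_per_fabric(nodes_per_group, groups):
    """Rough switch-port count needed by the compute server groups."""
    total_nodes = nodes_per_group * groups
    return {name: total_nodes * net["links_per_node"] for name, net in networks.items()}

print(ports_per_fabric(nodes_per_group=64, groups=4))
# {'compute': 256, 'operation': 512, 'management': 256}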

Page 8: Appro Xtreme-X Supercomputers


ACE (Appro Cluster Engine™) features:

• Middleware Hooks

• Job Scheduling

• IB Subnet Manager

• Virtual Cluster Manager

• Instant SW Provisioning

• BIOS Synchronization

• 3D Torus Network Topology Support

• Dual-Rail Networks

• Stateless Operation

• Remote Lights-Out Management

• Standard Linux OS Support

• Failover & Recovery

:: Scalable cluster management software

"Appro Cluster Engine™ software turns a cluster of servers into a functional, usable, reliable and available computing system."

Jim Ballew, CTO, Appro

Petascale Deployments

Page 9: Appro Xtreme-X Supercomputers


• Delivers cold air directly to the equipment for optimum cooling efficiency

• Delivers comfortable air temperature to the room for return to the chillers

• Back-to-back rack configuration saves floor space in the datacenter and encloses the cold aisles inside the racks

• FRU and maintenance work are done from the front side of the rack cabinet

:: Innovative Cooling and Density needed

Petascale Deployments

Top View

• Up to 30% improvement in density with greater cooling efficiency

Page 10: Appro Xtreme-X Supercomputers


:: Path to PetaFLOP Computing

Appro Xtreme-X Supercomputer - Modular Scalable Performance

Number of Racks        1        2        8        48        96        192
Number of Processors   128      256      1,024    5,952     11,904    23,808
Number of Cores        512      1,024    4,096    23,808    47,616    95,232
Peak Performance       6 TF/s   12 TF/s  49 TF/s  279 TF/s  558 TF/s  1.1 PF/s
Memory Capacity        1.5 TB   3 TB     12 TB    72 TB     143 TB    286 TB

Memory BW Ratio: 0.68 GB/s per GF/s
Memory Capacity Ratio: 0.26 GB per GF/s
IO Fabric Interconnect: Dual-Rail QDR
IO BW Ratio: 0.17 GB/s per GF/s
Usable Node-Node BW: 6.4 GB/s
Node-Node Latency: <2 us

Performance numbers are based on 2.93 GHz Intel Nehalem processors and include only compute nodes.

Petascale Deployments
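
As a quick sanity check, the Peak Performance row is consistent with each core retiring 4 double-precision FLOPs per clock at 2.93 GHz; the 4 FLOPs/cycle figure is an assumption about the Nehalem cores, not something stated on the slide:

# Peak FLOP/s = cores x clock (Hz) x FLOPs per core per cycle.
# FLOPS_PER_CYCLE = 4 (128-bit SSE: one add + one multiply per cycle) is an
# assumption about the 2.93 GHz Nehalem parts named above.
CLOCK_HZ = 2.93e9
FLOPS_PER_CYCLE = 4

for cores in (512, 1_024, 4_096, 23_808, 47_616, 95_232):
    peak_tf = cores * CLOCK_HZ * FLOPS_PER_CYCLE / 1e12
    print(f"{cores:>6} cores -> {peak_tf:8.1f} TF/s")
# 512 cores -> 6.0 TF/s ... 95,232 cores -> 1116.1 TF/s (about 1.1 PF/s),
# in line with the 6 TF/s to 1.1 PF/s column above.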

Page 11: Appro Xtreme-X Supercomputers


:: Possible Path to PetaFLOP GPU Computing

GPU Computing Cluster – Solution still being evaluated

Number of Racks         3        5        10       18       34
Number of Blades        64       128      256      512      1,024
Number of GPUs          32       64       128      256      512
Peak GPU Performance    128 TF   256 TF   512 TF   1 PF     2 PF
Peak CPU Performance    6 TF     12 TF    24 TF    48 TF    96 TF
Max Memory Capacity     1.6 TB   3.2 TB   6.4 TB   13 TB    26 TB

Bandwidth to GPU: 6.4 GB/s
Node Memory Bandwidth: 32 GB/s
Max IO Bandwidth (2x QDR x4 IB): 6.4 GB/s
Node-to-Node Latency: 2 us

Xtreme-X Supercomputer
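
The per-unit figures implied by the table follow from simple division on the 3-rack column; no assumption is made about which GPU or CPU parts are meant:

# Per-unit figures implied by the 3-rack column of the GPU table above.
gpus, blades = 32, 64
gpu_tf_per_unit = 128 / gpus        # 4.0 TF of peak GPU performance per GPU entry
cpu_gf_per_blade = 6_000 / blades   # about 94 GF of peak CPU performance per blade
blades_per_gpu = blades / gpus      # 2 blades per GPU entry
print(gpu_tf_per_unit, round(cpu_gf_per_blade, 1), blades_per_gpu)
# 4.0 93.8 2.0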

Page 12: Appro Xtreme-X Supercomputers

Appro Xtreme-X Supercomputers

Thank you

Questions?

APPRO INTERNATIONAL INC