Paving The Road to Exascale Computing Highest Performing, Most Efficient End-to-End Connectivity for Servers and Storage Peter Waxman VP of HPC Sales April 2011 [email protected]


TRANSCRIPT

Page 1: Paving The Road to Exascale Computing

Paving The Road to Exascale Computing

Highest Performing, Most EfficientEnd-to-End Connectivity for Servers and Storage

Peter Waxman

VP of HPC Sales

April 2011

[email protected]

Page 2: Paving The Road to Exascale Computing

© 2011 MELLANOX TECHNOLOGIES - MELLANOX CONFIDENTIAL - 2

Company Overview

Leading connectivity solutions provider for data center servers and storage systems
• Foundation for the world’s most powerful and energy-efficient systems
• >7.0M ports shipped as of Dec. ’10

Company headquarters
• Yokneam, Israel; Sunnyvale, California
• ~700 employees; worldwide sales & support

Solid financial position
• Record revenue in FY’10: $154.6M
• Q4’10 revenue: $40.7M

Completed acquisition of Voltaire, Ltd.

Ticker: MLNX

Recent Awards

Page 3: Paving The Road to Exascale Computing


Connectivity Solutions for Efficient Computing

Leading Connectivity Solution Provider for Servers and Storage

Market segments: Enterprise HPC, High-end HPC, HPC Clouds

Mellanox Interconnect Networking Solutions: ICs • Adapter Cards • Switches/Gateways • Cables • Host/Fabric Software

Page 4: Paving The Road to Exascale Computing


Combining Best-in-Class Systems Knowledge and Software with Best-in-Class Silicon

Mellanox Brings
• InfiniBand and 10GbE silicon technology & roadmap
• Adapter leadership
• Advanced HW features
• End-to-end experience
• Strong OEM engagements

Voltaire Brings
• InfiniBand and 10GbE switch systems experience
• IB switch market share leadership
• End-to-end SW & systems solutions
• Strong enterprise customer engagements

[Diagram: Mellanox + Voltaire = combined entity, across the InfiniBand and Ethernet/VPI product lines]

Combined Entity
• Silicon, adapters and systems leadership
• IB market share leadership
• Full service offering
• Strong customer and OEM engagements

InfiniScale & ConnectX
• HCA and switch silicon
• HCA adapters, FIT
• Scalable switch systems
• Dell, HP, IBM, and Oracle

Grid Directors & Software
• UFM fabric management SW
• Applications acceleration SW
• Enterprise-class switches
• HP, IBM

InfiniBand Market Leader
• End-to-end silicon, systems, software solutions
• FDR/EDR roadmap
• Application acceleration and fabric management software
• Full OEM coverage

10GbE and 40GbE Adapters
• Highest-performance Ethernet silicon
• 10GbE LOM and mezzanine adapters at Dell, HP and IBM

10GbE Vantage Switches & SW
• UFM fabric management SW
• Applications acceleration SW
• 24-, 48-, and 288-port 10GbE switches
• HP, IBM

Ethernet Innovator
• End-to-end silicon, systems, software solutions
• 10GbE, 40GbE and 100GbE roadmap
• Application acceleration and fabric management software
• Strong OEM coverage

Page 6: Paving The Road to Exascale Computing


Most Complete End-to-End InfiniBand Solutions

Adapter market and performance leadership
• First to market with 40Gb/s (QDR) adapters
  – Roadmap to end-to-end 56Gb/s (FDR) in 2011
• Delivers next-gen application efficiency capabilities
• Global Tier-1 server and storage availability
  – Bull, Dawning, Dell, Fujitsu, HP, IBM, Oracle, SGI, T-Platforms

Comprehensive, performance-leading switch family
• Industry’s highest density and scalability
• World’s lowest port-to-port latency (25-50% lower than competitors)

Comprehensive, feature-rich management/acceleration software
• Enhances application performance and network ease of use

High-performance converged I/O gateways
• Optimal scaling, consolidation, energy efficiency
• Lowers space and power and increases application performance

Copper and fiber cables
• Exceed IBTA mechanical & electrical standards
• Ultimate reliability and signal integrity

Page 7: Paving The Road to Exascale Computing


Expanding End-to-End Ethernet Leadership

Industry’s highest performing Ethernet NIC
• 10/40GigE w/FCoE with hardware offload
• Ethernet industry’s lowest end-to-end latency: 1.3μs
• Faster application completion, better server utilization

Tremendous ecosystem support momentum
• Multiple Tier-1 OEM design wins (Dell, IBM, HP)
  – Servers, LAN on Motherboard (LOM), and storage systems
• Comprehensive OS support: VMware, Citrix, Windows, Linux

High-capacity, low-latency 10GigE switches
• 24 to 288 ports with 600-1200ns latency
• Sold through multiple Tier-1 OEMs (IBM, HP)
• Consolidation over shared fabrics

Integrated, complete management offering
• Service-oriented infrastructure management with open APIs

Page 8: Paving The Road to Exascale Computing


Mellanox in the TOP500

Mellanox InfiniBand builds the most powerful clusters
• Connects 4 of the Top 10 and 61 systems in the Top 100

InfiniBand represents 43% of the TOP500
• 98% of the InfiniBand clusters use Mellanox solutions

Mellanox InfiniBand enables the highest utilization on the TOP500
• Up to 96% system utilization

Mellanox 10GigE connects the highest-ranked 10GigE system (#126)

[Chart: Top500 InfiniBand Trends – number of clusters: 142 (Nov ’08), 182 (Nov ’09), 215 (Nov ’10)]
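As a quick cross-check of the figures on this slide, the Nov ’10 chart value of 215 InfiniBand clusters is consistent with the 43%-of-TOP500 claim:

```python
# Sanity check of the TOP500 figures quoted on this slide (Nov 2010 list).
ib_clusters = 215          # InfiniBand systems on the Nov '10 TOP500 (from the chart)
total = 500                # total systems on the TOP500 list

share = ib_clusters / total
print(f"InfiniBand share of TOP500: {share:.0%}")   # 43%, matching the slide
```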

Page 9: Paving The Road to Exascale Computing


Mellanox Accelerations for Scalable HPC

GPUDirect – accelerating GPU communications
Scalable offloading for MPI/SHMEM
Highest throughput and scalability – paving the road to Exascale computing
Maximizing network utilization through routing & management (3D-Torus, Fat-Tree)

Reported gains range from 30+% and 80+% up to 10s-100s% boosts, depending on the acceleration.
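The fat-tree topology mentioned above scales in a well-known way. As an illustrative sketch (the 36-port radix is an assumption about the switch silicon of that era, not a figure taken from this deck), a non-blocking two-level fat tree built from k-port switches supports k²/2 hosts:

```python
# Rough host-count estimate for a non-blocking fat tree (folded Clos)
# built from k-port switches. k = 36 is an illustrative assumption.
def fat_tree_nodes(k: int, levels: int = 2) -> int:
    # Each leaf switch uses k/2 ports down (to hosts) and k/2 up;
    # a two-level folded Clos therefore supports 2 * (k/2)**2 = k**2 / 2 hosts.
    return 2 * (k // 2) ** levels

print(fat_tree_nodes(36))     # 648 hosts at full bisection bandwidth
print(fat_tree_nodes(36, 3))  # a third level multiplies capacity by k/2
```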

Page 10: Paving The Road to Exascale Computing


Software Accelerators

Highest Performance
• iSCSI storage
• Messaging latency
• MPI performance

Page 11: Paving The Road to Exascale Computing


UFM Fabric Management

Provides deep visibility
• Real-time and historical monitoring of fabric health and performance
• Central fabric dashboard
• Unique fabric-wide congestion map

Optimizes performance
• Quality of Service
• Traffic-Aware Routing Algorithm (TARA)
• Multicast routing optimization

Eliminates complexity
• A single pane of glass to monitor and configure fabrics of thousands of nodes
• Enables advanced features like segmentation and QoS by automating provisioning
• Abstracts the physical layer into logical entities such as jobs and resource groups

Maximizes fabric utilization
• Threshold-based alerts to quickly identify fabric faults
• Performance optimization for maximum link utilization
• Open architecture for integration with other tools, in-context actions, and a fabric database

Page 12: Paving The Road to Exascale Computing


LLNL Hyperion Cluster

1152 nodes, dedicated cluster for development testing

Open Environment

CPUs: mix of Intel 4-core Xeon L5420 and 4-core Xeon E5530

Mellanox InfiniBand QDR switches and adapters

Page 13: Paving The Road to Exascale Computing


Mellanox MPI Optimizations – MPI Natural Ring

Page 14: Paving The Road to Exascale Computing


Mellanox MPI Optimization – MPI Random Ring

Page 15: Paving The Road to Exascale Computing


Mellanox MPI optimizations enable linear strong scaling for LLNL applications

World Leading Performance and Scalability

Mellanox MPI Optimization – Highest Scalability at LLNL
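The Natural Ring and Random Ring tests named on these slides (in the style of the HPC Challenge latency-bandwidth benchmark) differ mainly in how ring neighbors map onto nodes. A minimal sketch of the two orderings, assuming an illustrative 8 ranks per node:

```python
import random

# Sketch of the two ring orderings behind HPCC-style Natural Ring /
# Random Ring tests. The placement of 8 ranks per node is an
# illustrative assumption, not a parameter from this deck.
def ring_pairs(order):
    # Each rank sends to its successor in the ring (wrapping around).
    return [(order[i], order[(i + 1) % len(order)]) for i in range(len(order))]

def off_node_fraction(pairs, ranks_per_node=8):
    off = sum(1 for a, b in pairs if a // ranks_per_node != b // ranks_per_node)
    return off / len(pairs)

ranks = list(range(1024))
natural = ring_pairs(ranks)          # neighbours are mostly on the same node
random.seed(0)
shuffled = ranks[:]
random.shuffle(shuffled)
randomized = ring_pairs(shuffled)    # neighbours land on arbitrary nodes

print(off_node_fraction(natural))    # 0.125: only 1 hop in 8 leaves the node
print(off_node_fraction(randomized)) # near 1.0: almost every hop crosses the fabric
```

The random ordering therefore stresses the full fabric rather than intra-node paths, which is why it is the harder test of interconnect scalability.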

Page 16: Paving The Road to Exascale Computing


Summary

Performance: lowest latency, highest throughput, highest message rate
Scalability: highest application scalability through network accelerations
Reliability: from silicon to system, highest signal/data integrity
Efficiency: highest CPU/GPU availability through advanced offloading

Mellanox Connectivity Solutions – target markets:
Financial, Clustered Database, Academic Research, Computer-Aided Engineering, Cloud & Web 2.0, Bioscience, Oil and Gas, Weather


Page 18: Paving The Road to Exascale Computing


Thank You
[email protected]

Page 19: Paving The Road to Exascale Computing


Mellanox Scalable InfiniBand Solutions

Mellanox InfiniBand solutions are Petascale-proven
• Connecting 4 of the 7 worldwide Petascale systems
• Delivering the highest scalability, performance, robustness
• Advanced offloading/acceleration capabilities for MPI/SHMEM
• Efficient, congestion-free networking solutions

Mellanox InfiniBand solutions enable flexible HPC
• Complete hardware offloads – transport, MPI
• Allows CPU interventions and PIO transactions
• Latency: ~1μs ping-pong; bandwidth: 40Gb/s per port with QDR, 56Gb/s with FDR

Delivering advanced HPC technologies and solutions
• Fabric Collectives Acceleration (FCA) – MPI/SHMEM collectives offload
• GPUDirect for GPU acceleration
• Congestion control and adaptive routing

Mellanox MPI optimizations
• Optimize and accelerate the InfiniBand channel interface
• Optimize resource management and utilization (HW, SW)
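A note on the 40Gb/s and 56Gb/s figures on this slide: these are 4x-link signalling rates, and the usable data rate depends on the line encoding (8b/10b for QDR, 64b/66b for FDR). A quick calculation:

```python
# Effective data rate of a 4x InfiniBand link. The signalling rates and
# encodings are standard IBTA figures; this is a back-of-envelope check,
# not taken from the deck itself.
def effective_gbps(lanes, lane_gbps, payload_bits, coded_bits):
    return lanes * lane_gbps * payload_bits / coded_bits

qdr = effective_gbps(4, 10.0, 8, 10)      # 8b/10b: 32.0 Gb/s of data on a "40 Gb/s" QDR link
fdr = effective_gbps(4, 14.0625, 64, 66)  # 64b/66b: ~54.5 Gb/s on a "56 Gb/s" FDR link
print(qdr, round(fdr, 1))
```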

Page 20: Paving The Road to Exascale Computing


Mellanox Advanced InfiniBand Solutions

Application Accelerations
- Collectives accelerations (FCA/CORE-Direct)
- GPU accelerations (GPUDirect)
- MPI/SHMEM
- RDMA
- Quality of Service

Networking Efficiency/Scalability
- Adaptive routing
- Congestion management
- Traffic-aware routing (TARA)

Host/Fabric Software Management
- UFM, Mellanox-OS
- Integration with job schedulers
- Inbox drivers

Server and Storage High-Speed Connectivity
- Latency
- Bandwidth
- CPU utilization
- Message rate

Page 21: Paving The Road to Exascale Computing


Scalable Performance


Page 26: Paving The Road to Exascale Computing


Leading End-to-End Connectivity Solution Provider for Servers and Storage Systems

[Diagram: Virtual Protocol Interconnect across the fabric – server/compute and switch/gateway sides run 40G InfiniBand or 10/40GigE; storage front/back-end connects over 40G IB & FCoIB, 10/40GigE & FCoE, and Fibre Channel]

Industry’s Only End-to-End InfiniBand and Ethernet Portfolio
ICs • Adapter Cards • Switches/Gateways • Cables • Host/Fabric Software