Interconnect Your Future with Connect-IB

DESCRIPTION

Presented during Supercomputing 2013 by Yoni Luzon

TRANSCRIPT

Page 1: Interconnect Your Future with Connect-IB

Supercomputing 2013

Interconnect Your Future with Connect-IB

Page 2: Interconnect Your Future with Connect-IB

The Only Provider of End-to-End 40/56Gb/s Solutions

From Data Center to Metro and WAN

x86, ARM and Power-based Compute and Storage Platforms

The Interconnect Provider For 10Gb/s and Beyond

ICs, Adapter Cards, Switches/Gateways, Cables/Modules, Host/Fabric Software

Comprehensive End-to-End InfiniBand and Ethernet Portfolio

Metro / WAN

Page 3: Interconnect Your Future with Connect-IB

Technology Roadmap – One-Generation Lead over the Competition

[Roadmap timeline, 2000–2020: 20Gb/s, 40Gb/s, 56Gb/s, 100Gb/s and 200Gb/s (2015), spanning Terascale, Petascale and Exascale (Mega Supercomputers). Milestones: Virginia Tech (Apple), 3rd in the TOP500 in 2003; the Mellanox-connected “Roadrunner”, 1st.]

Page 4: Interconnect Your Future with Connect-IB

Mellanox Connect-IB Adapter Delivers Highest Clustering ROI

The 7th generation of Mellanox interconnect adapters, based on the new Exascale architecture

World’s first 100Gb/s interconnect adapter (dual-port FDR 56Gb/s InfiniBand)

Delivers 137 million messages per second – 4X higher than competition / previous generations

Supports the new InfiniBand scalable transport – Dynamically Connected Transport (DCT)

Accelerates application performance – up to 200% higher than the competition in some cases

Page 5: Interconnect Your Future with Connect-IB

Memory Scalability with Dynamically Connected Transport

Transport evolution: InfiniHost with Reliable Connection (RC), 2002; InfiniHost-III with Shared Receive Queue (SRQ), 2005; ConnectX with eXtended Reliable Connection (XRC), 2008; Connect-IB with Dynamically Connected Transport (DCT), 2012.

[Chart: Host Memory Consumption (MB), on a log scale from 1,000 to 1,000,000,000, versus cluster size (8, 2K, 10K and 100K nodes) for each transport generation; with DCT, memory consumption stays nearly flat as the cluster grows.]
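To make the scaling picture concrete, the following is a minimal back-of-the-envelope sketch, not taken from the slides: it assumes a fully connected RC fabric needs one queue pair per remote process, XRC needs roughly one queue pair per remote node per local process, and DCT uses a small fixed pool of DC initiators. The per-QP footprint (10 KB), processes per node (16) and pool size (64) are illustrative assumptions, not Mellanox figures.

/* Hedged back-of-the-envelope model of host memory used for transport
 * state, under assumed per-QP footprints (illustrative numbers only). */
#include <stdio.h>

int main(void) {
    const double qp_kb = 10.0;      /* assumed memory per queue pair, KB */
    const int cores = 16;           /* assumed processes per node */
    const int dct_pool = 64;        /* assumed fixed pool of DC initiators */
    const long nodes[] = {8, 2000, 10000, 100000};

    printf("%10s %14s %14s %14s\n", "nodes", "RC (MB)", "XRC (MB)", "DCT (MB)");
    for (int i = 0; i < 4; i++) {
        long n = nodes[i];
        /* RC: every local process keeps a QP to every remote process. */
        double rc  = (double)cores * (n - 1) * cores * qp_kb / 1024.0;
        /* XRC: roughly one QP per remote node per local process. */
        double xrc = (double)cores * (n - 1) * qp_kb / 1024.0;
        /* DCT: a small fixed pool, independent of cluster size. */
        double dct = (double)dct_pool * qp_kb / 1024.0;
        printf("%10ld %14.1f %14.1f %14.1f\n", n, rc, xrc, dct);
    }
    return 0;
}

Under these assumptions, RC grows with the square of the machine (processes times processes), XRC with the node count, and DCT not at all, which is the shape of the curves the slide plots.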

Page 6: Interconnect Your Future with Connect-IB

Dynamically Connected Transport Performance Advantages

Page 7: Interconnect Your Future with Connect-IB

Connect-IB Provides Highest Server and Storage Throughput

Source: Prof. DK Panda

[Charts: server and storage throughput for Connect-IB FDR (dual port), ConnectX-3 FDR, ConnectX-2 QDR and a competing InfiniBand adapter; higher is better.]

Performance Leadership

Page 8: Interconnect Your Future with Connect-IB

Connect-IB Delivers Highest Application Performance

WIEN2k is quantum mechanical simulation software. FDR InfiniBand delivers scalable performance, and the Connect-IB adapter demonstrates 200% higher performance versus the competition.

[Chart: WIEN2k Performance – Performance Rating (Jobs/Day, 0–200) versus Number of Cores (80, 160, 320) for Competition (InfiniBand), ConnectX-3 FDR and Connect-IB FDR; higher is better.]

Page 9: Interconnect Your Future with Connect-IB

Connect-IB Delivers Highest Application Performance

Weather Research and Forecasting (WRF) is weather simulation software. Connect-IB delivers 54% higher performance versus the competition.

[Chart: WRF Performance (conus12km) – Performance (Jobs/Day, 0–3000) versus Number of Cores (320, 640) for Competition (InfiniBand), ConnectX-3 FDR and Connect-IB FDR; higher is better.]

Page 10: Interconnect Your Future with Connect-IB

GPUDirect RDMA Technology

[Diagrams: transmit and receive data paths for GPUDirect 1.0 versus GPUDirect RDMA, showing the CPU, chipset, GPU, GPU memory, system memory and the InfiniBand adapter. With GPUDirect 1.0, data is staged through system memory; with GPUDirect RDMA, the InfiniBand adapter accesses GPU memory directly.]
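The usage model this enables is CUDA-aware MPI: the application hands a GPU device pointer directly to MPI, and a GPUDirect-RDMA-capable library such as MVAPICH2-GDR (used on the next slide) can move the data between GPU memory and the InfiniBand adapter without an explicit host staging copy in the application. Below is a minimal sketch assuming two ranks and a CUDA-aware MPI build; the buffer size and tag are illustrative.

/* Minimal CUDA-aware MPI sketch: rank 0 sends a GPU-resident buffer to
 * rank 1. With a GPUDirect-RDMA-enabled MPI (e.g. MVAPICH2-GDR), the
 * transfer can go directly from GPU memory to the InfiniBand adapter. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;                     /* illustrative: 1M floats */
    float *d_buf;
    cudaMalloc((void **)&d_buf, n * sizeof(float));
    cudaMemset(d_buf, 0, n * sizeof(float));

    if (rank == 0) {
        /* Device pointer passed directly to MPI: no explicit cudaMemcpy
         * to a host staging buffer is needed with a CUDA-aware library. */
        MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d floats into GPU memory\n", n);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}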

Page 11: Interconnect Your Future with Connect-IB

GPU-GPU Internode MPI Latency

[Chart: Small Message Latency – Latency (us) versus Message Size (1 byte to 8 KB) for MVAPICH2-1.9 and MVAPICH2-1.9-GDR; lower is better. Latency drops from 19.78 us to 6.12 us, a 69% reduction.]

Performance of MVAPICH2 with GPUDirect RDMA

69% Lower Latency

GPU-GPU Internode MPI Bandwidth

[Chart: Small Message Bandwidth – Bandwidth (MB/s) versus Message Size (1 byte to 8 KB) for MVAPICH2-1.9 and MVAPICH2-1.9-GDR; higher is better, with roughly a 3x gain.]

3X Increase in Throughput

Source: Prof. DK Panda
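As a quick sanity check, the 69% figure above follows directly from the two latencies reported on the slide; the short program below, included only as a worked calculation, reproduces it.

/* Reproduce the slide's headline latency improvement from its own numbers. */
#include <stdio.h>

int main(void) {
    const double baseline_us = 19.78;   /* MVAPICH2-1.9, from the slide */
    const double gdr_us = 6.12;         /* MVAPICH2-1.9-GDR, from the slide */
    double reduction = (baseline_us - gdr_us) / baseline_us * 100.0;
    printf("latency reduction: %.0f%%\n", reduction);  /* prints 69% */
    return 0;
}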

Page 12: Interconnect Your Future with Connect-IB

Execution Time of the HSG (Heisenberg Spin Glass) Application with 2 GPU Nodes

Source: Prof. DK Panda

Performance of MVAPICH2 with GPUDirect RDMA

[Chart: execution time versus problem size.]

Page 13: Interconnect Your Future with Connect-IB

The Only Provider of End-to-End 40/56Gb/s Solutions

From Data Center to Metro and WAN

x86, ARM and Power-based Compute and Storage Platforms

The Interconnect Provider For 10Gb/s and Beyond

ICs, Adapter Cards, Switches/Gateways, Cables/Modules, Host/Fabric Software

Comprehensive End-to-End InfiniBand and Ethernet Portfolio

Metro / WAN

Page 14: Interconnect Your Future with Connect-IB

Thank You