Leader in Next Generation Ethernet
Outline
• Where is iWARP Today?
• Some Proof Points
• Conclusion
• Questions
Latency in Multi-Processor/Multi-Core Systems
[Chart: Cumulative half round trip latency (µs) for 16B writes vs. number of connections (1–64), comparing NE020, InfiniHostIII, Chelsio T3 (R310), and ConnectX.]
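To make the metric concrete: half round trip latency comes from a timed ping-pong exchange, reported as RTT/2. The sketch below illustrates only the measurement methodology, not the benchmark itself: an AF_UNIX socket pair stands in for the RDMA path, whereas the chart above times 16-byte transfers through the adapters' verbs interfaces.

#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/socket.h>

#define MSG_SIZE 16        /* 16-byte payload, as in the chart */
#define ITERS    10000     /* average over many round trips */

static double now_us(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
}

int main(void)
{
    int sv[2];
    char buf[MSG_SIZE] = {0};

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
        return 1;

    if (fork() == 0) {                     /* child: echo server */
        for (int i = 0; i < ITERS; i++) {
            read(sv[1], buf, MSG_SIZE);
            write(sv[1], buf, MSG_SIZE);
        }
        _exit(0);
    }

    double start = now_us();               /* parent: timed ping-pong */
    for (int i = 0; i < ITERS; i++) {
        write(sv[0], buf, MSG_SIZE);
        read(sv[0], buf, MSG_SIZE);
    }
    double rtt_us = (now_us() - start) / ITERS;

    printf("half round trip: %.2f us\n", rtt_us / 2.0);
    return 0;
}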
Industry Leading Multiple Connection Throughput
RDMA Write: the predominant RDMA command for Intel MPI, MVAPICH2, HP MPI, Scali MPI, OpenMPI, MPICH2, and iSER
[Charts: Uni-directional RDMA Write bandwidth (MB/s) vs. message size (64–8192 bytes) for 1, 2, 4, 8, and 16 connections, comparing InfiniHostIII, NE020, Chelsio T3 (R310), Chelsio T3 (R310) Jumbo, and ConnectX.]
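All of these curves exercise the same verbs-level operation. As a rough sketch of how an RDMA Write is posted through the common API, assuming a connected queue pair, a registered memory region, and a remote address/rkey exchanged out of band (the helper name and parameters are illustrative, not from the slides):

#include <stdint.h>
#include <infiniband/verbs.h>

/* Post one RDMA Write of `len` bytes from a registered local buffer
 * to a remote buffer whose address and rkey were exchanged earlier.
 * Only the data path is shown; setup and completion polling omitted. */
static int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                           void *buf, uint32_t len,
                           uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,
        .length = len,
        .lkey   = mr->lkey,
    };
    struct ibv_send_wr wr = {
        .wr_id      = 1,
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_RDMA_WRITE,   /* the command benchmarked above */
        .send_flags = IBV_SEND_SIGNALED,   /* request a completion entry */
    };
    struct ibv_send_wr *bad_wr = NULL;

    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    return ibv_post_send(qp, &wr, &bad_wr);  /* 0 on success */
}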
In Production Applications Now… 10Gb iWARP Ethernet achieves same results as InfiniBand
[Chart: Abaqus/Explicit finite element analysis test suite (models E1–E6); wall clock time (sec) vs. number of processes (4–32), InfiniBand vs. NetEffect 10Gb.]
In Production Applications Now… 10Gb iWARP Ethernet achieves same results as InfiniBand
– Unmodified
– Automatically scaled across nodes
– Wall-clock times equal to or better than 20-Gbps InfiniBand
[Charts: PAM-CRASH crash simulation; wall-clock time (sec) vs. nodes (8–128) for the m13 dataset (100% frontal crash) and the bm1 dataset (100% side impact), DDR InfiniBand (20 Gbps) vs. NetEffect NE020 10GbE.]
In Production Applications Now… 10Gb iWARP Ethernet achieves same results as InfiniBand
[Charts: Fluent rating vs. cores (8–64) for the FL5L1, FL5L2, and FL5L3 benchmarks, InfiniBand vs. NE020 10GbE. Underlying data:]

Fluent rating by core count
                      8       16      32      64
FL5L1  InfiniBand     978.8   1938.3  4004.6  6843.6
       NE020 10GbE   1008.3   1926.9  3901.4  6861.7
FL5L2  InfiniBand     568.1   1145.5  2355.8  4702.0
       NE020 10GbE    611.5   1163.1  2359.8  4821.9
FL5L3  InfiniBand      96.0    191.2   379.0   742.6
       NE020 10GbE      —      195.3   385.3   784.7
Fluent ‘rating’: the number of benchmark runs that can be completed (in sequence) in a 24-hour period. A higher rating means faster performance.
Fluent FL5Lx Benchmarks: Computational Fluid Dynamics (CFD)
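As a worked example of that definition, a rating R over a 24-hour day (86,400 s) implies a per-run wall-clock time of 86400/R seconds. Taking the InfiniBand FL5L1 result at 64 cores from the table above:

\[
R = \frac{86400\,\mathrm{s}}{T_{\mathrm{run}}}
\quad\Longrightarrow\quad
T_{\mathrm{run}} = \frac{86400}{6843.6} \approx 12.6\,\mathrm{s\ per\ run}
\]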
In Production Applications Now… Even 1Gb iWARP Ethernet achieves results on par with InfiniBand in some applications
Landmark Nexus reservoir simulation (dominated by latency and message rate)
[Charts: Landmark Nexus performance results; wall-clock time (h:mm:ss) vs. system configuration (processors-nodes, 2-2 through 32-8), comparing the NetEffect 1GbE adapter, the NetEffect 10GbE adapter, and 10Gb InfiniBand. Standard 1GbE callouts: 2:27:17 at 8 cores, 2:58:41 at 16 cores.]
Conclusion
• We joined forces 2.5 years ago to make a single RDMA API work interoperably across multiple fabrics from multiple vendors (a sketch of that API in practice follows below)
• InfiniBand & Ethernet Working Together
– Multiple IB Vendors
– Multiple iWARP Vendors
• It is now a Reality!
iWARP has moved “from benchmarks to applications”
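For the concrete shape of that single API: with today's OpenFabrics stack, the same librdmacm/libibverbs calls set up a reliable connection whether the address resolves to an iWARP Ethernet NIC or an InfiniBand HCA. A minimal sketch, with a hypothetical helper name and all error handling and data-path code omitted:

#include <rdma/rdma_cma.h>

/* Hypothetical helper: connect to an RDMA peer without caring whether
 * the path is iWARP Ethernet or InfiniBand. rdma_getaddrinfo() and
 * rdma_create_ep() resolve the device behind the address; the verbs
 * queue pair that results is used identically on either fabric. */
int connect_any_fabric(const char *host, const char *port,
                       struct rdma_cm_id **id)
{
    struct rdma_addrinfo hints = { .ai_port_space = RDMA_PS_TCP };
    struct rdma_addrinfo *res = NULL;
    struct ibv_qp_init_attr attr = {
        .cap = { .max_send_wr = 16, .max_recv_wr = 16,
                 .max_send_sge = 1, .max_recv_sge = 1 },
        .qp_type = IBV_QPT_RC,            /* reliable connected, both fabrics */
    };

    if (rdma_getaddrinfo(host, port, &hints, &res))
        return -1;
    int ret = rdma_create_ep(id, res, NULL, &attr);  /* cm_id + QP */
    rdma_freeaddrinfo(res);
    if (ret)
        return -1;
    return rdma_connect(*id, NULL);       /* same call on IB or iWARP */
}

The queue pair this returns is then driven with the same ibv_post_send()/ibv_poll_cq() calls sketched earlier, regardless of fabric.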
Questions
Thank You!
www.neteffect.com