Interop: The 10GbE Top 10


DESCRIPTION

Shaun Walsh, VP of Marketing, Emulex, presented "The 10GbE Top 10" at Interop Las Vegas in May 2011. If you missed it, here are the slides from that discussion.

TRANSCRIPT

Page 1: Interop: The 10GbE Top 10

1

Shaun Walsh, Emulex

The 10GbE Top 10

Page 2: Interop: The 10GbE Top 10

2

The Data Center of the Future

10G Ethernet and FCoE are enabling technologies to support the Virtual Data Center and Cloud Computing.

Discrete Data Center
• 3 Discrete Networks
• Equipment Proliferation
• Management Complexities
• Expanding OPEX & CAPEX

Virtual Data Center
• Converged Networks
• Virtualized
• Simplifies I/O Management
• Reduces CAPEX & OPEX

Cloud Data Center
• Cloud Computing (Private & Public)
• On-Demand Provisioning and Scale
• Modular Building Blocks ("Legos")
• Avoid CAPEX & OPEX

Page 3: Interop: The 10GbE Top 10

3

Drivers of the Move to 10/40GbE

Storage Universe
• 7ZB by 2013
• Mobile and VDI
• Device-Centric

Virtual Networking
• VM I/O Density
• Scalable vDevices
• End-to-End vI/O

Cloud Connectivity
• New I/O Models
• I/O Isolation
• New Server Models

Network Convergence
• Multi-Fabric I/O
• Evolutionary Steps
• RoCEE Low Latency

Page 4: Interop: The 10GbE Top 10

4

VM Density Drives More I/O Throughput

© 2011 Enterprise Strategy Group

[Chart: "What would you estimate is the average number of virtual machines per physical x86 server in your environment today? How do you expect this to change over the next 24 months?" (Percent of respondents, N=463) — responses for today vs. 24 months from now, in buckets of <5, 5-10, 11-25 and >25 VMs per server]

Server Virtualization Impact on the Network: it has created more network traffic in the data center (30%)

Page 5: Interop: The 10GbE Top 10

5

ESG’s Evolution of Network Convergence

© 2011 Enterprise Strategy Group

Current Level – Dedicated Networks
• Organizations keep LAN, SAN and HPC clusters on their own separate networks
• Separate management tools and teams
• Unique skills and training required

Progressing Level – Consolidated Networks
• Starts with Ethernet – run LAN, IP storage and HPC together
• Maintain existing investment in FC
• Consolidate connectivity at the server and storage level
• "Convergence Ready"

Advanced Level – Fully Converged Networks
• Merge onto a single fully converged network
• Run LAN and all storage over Ethernet – FCoE
• Converged adapters, cabling and switches

Page 6: Interop: The 10GbE Top 10

6

Evolving Network Models

[Diagram: host connect and target connect evolving from Discrete Networks to Converged Fabric Networks to Converged Networks]

Page 7: Interop: The 10GbE Top 10

7

Emulex Connect I/O Roadmap

[Roadmap, 2010–2013, spanning Ethernet, High Performance Computing, Unified Storage, Converged Networking and I/O Management: 10Gb, 10GBase-T, 40Gb and 100Gb Ethernet; low-latency RoCEE RDMA; 8Gb, 16Gb and 32Gb Fibre Channel; multi-fabric technology; universal LOMs; PCIe Gen3; SR-IOV and multichannel; value-added I/O services; networked server/power management; 3rd and 4th generation BMC]

Page 8: Interop: The 10GbE Top 10

8

Waves of 10GbE Adoption

10GbE adoption waves, 2010 through 2013 and beyond:

Wave 1
• 10Gb LOM and Adapters for Blade Servers
• 50% of IT Managers Cite Virtualization as the Driver of 10GbE in Blades

Wave 2
• 2-4 Socket x86 and Unix Rack Servers Drive 10Gb with Modular LOMs and UCNAs
• 10Gb NAS and iSCSI Drive IP SAN Storage for Unstructured Data

Wave 3
• Cloud Container Servers for Web Giants and Telcos
• 1-2 Socket Servers with NICs and Modular LOMs
• FCoE Storage 25% of I/O; 10GbE LOM Everywhere

Sources: Dell'Oro 2010 Adapter Forecast; ESG Deployment Paper 2010; IDC WW External Storage Report 2010; IT Brand Pulse 10GbE Survey April 2010

Page 9: Interop: The 10GbE Top 10

9

Wave 2 of 10GbE Market Adoption

Wave 2 – More Than Doubles the 10GbE Revenue Opportunity in CY12/13

x86 & Unix Rack Servers
• 65% of Server Shipments
• More than 2X the Revenue Opportunity
• Romley from Q4:11 to Q2:12
• Modular LOM on Rack
• More Cards vs. LOMs

10GbE IP Storage
• Cisco, Juniper, Brocade announced Multi-Hop FCoE in the last 90 days
• 10GbE NAS, iSCSI & FCoE Convergence

Web Giants
• Next Gen Internet Scale Applications (Search, Games & Social Media)
• Container & High Density Servers Migrating to 10GbE
• Growing to 15% of Servers

Page 10: Interop: The 10GbE Top 10

10

The 10Gb Transition – Cross Over in 2012

[Chart: server connection share by speed, 2009–2014, showing 1 Gbps declining and 10 Gbps rising, with the crossover in 2012. Source: Dell'Oro Group]

Page 11: Interop: The 10GbE Top 10

11

#1) Switch Architecture Top of Rack

Modular unit design – managed by rack

Servers connect to 1-2 Ethernet switches inside rack

Rack connected to data center aggregation layer with SR optic or twisted pair

Blades effectively incorporate TOR design with integrated switch

[Diagram: server cabinet row with 1-2 Ethernet switches per rack, fiber uplinks from each rack to the Ethernet aggregation layer and SAN]

Page 12: Interop: The 10GbE Top 10

12

Switch Architecture End of Row

Server cabinets lined up side-by-side

Rack or cabinet at end (or middle) of row with network switches

Fewer switches in the topology

Managed by row – fewer switches to manage

[Diagram: server cabinet row with copper runs from the servers to switches at the end of the row, and fiber uplinks to the Ethernet aggregation layer and SAN]
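To compare the two architectures more concretely, here is a minimal Python sketch that counts cable runs for each design; the rack counts, servers per rack and uplink counts are illustrative assumptions, not figures from the slides.

```python
# Rough cable-count comparison for Top-of-Rack vs End-of-Row designs.
# All inputs are illustrative assumptions, not figures from the slides.

def top_of_rack(racks, servers_per_rack, ports_per_server, uplinks_per_switch, switches_per_rack=2):
    """Servers patch to switches inside their own rack; each rack uplinks to aggregation."""
    in_rack_copper = racks * servers_per_rack * ports_per_server
    aggregation_fiber = racks * switches_per_rack * uplinks_per_switch
    return in_rack_copper, aggregation_fiber

def end_of_row(racks, servers_per_rack, ports_per_server, uplinks_per_switch, switches_per_row=2):
    """Servers patch across the row to a switch cabinet at the end (longer runs, fewer switches)."""
    cross_row_copper = racks * servers_per_rack * ports_per_server
    aggregation_fiber = switches_per_row * uplinks_per_switch
    return cross_row_copper, aggregation_fiber

if __name__ == "__main__":
    args = dict(racks=10, servers_per_rack=20, ports_per_server=2, uplinks_per_switch=4)
    print("ToR (copper, fiber):", top_of_rack(**args))
    print("EoR (copper, fiber):", end_of_row(**args))
```

The same server-facing run count falls out of both layouts; the difference is where the runs terminate and how many switches and uplinks the row needs, which is the management trade-off the two slides describe.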

Page 13: Interop: The 10GbE Top 10

13

#2) 10Gb LOM on Rack Servers

Rack Servers
• Generally more I/O & memory expansion
• More expensive 10G physical interconnect
• Later 10G LOM due to 10GBASE-T

vs. Blade Servers
• Typically limited I/O expansion (1-2 slots)
• Much cheaper 10G physical interconnect – backplane Ethernet & integrated switch – leading the 10G transition
• Earlier 10G LOM

Page 14: Interop: The 10GbE Top 10

14

#3) Cables

Today's Data Center
– Typically twisted pair for Gigabit Ethernet
• Some Cat 6, more likely Cat 5
– Optical cable for Fibre Channel
• Mix of older OM1 and OM2 (orange) and newer OM3 (green)
• Fibre Channel may not be wired to every rack

Page 15: Interop: The 10GbE Top 10

15

Ethernet cabling timeline:
• 10Mb – UTP Cat 3 (mid 1980's)
• 100Mb – UTP Cat 5 (mid 1990's)
• 1Gb – UTP Cat 5 (early 2000's)
• 10Gb – SFP Fiber; SFP+ Cu, X2, Cat 6/7 (late 2000's)

10GbE Cable Options

Technology | Cable | Transceiver Latency (link) | Power (each side) | Distance
SFP+ Direct Attach | Copper Twinax | ~0.1µs | ~0.1W | 10m
10GBase-SR (short range) | Optic, Multi-Mode | ~0 | ~1W | 62.5µm – 82m; 50µm – 300m
10GBase-LR (long range) | Optic, Single-Mode | Var | ~1W | 10 km
10GBASE-T | Copper Twisted Pair | ~2.5µs | ~2.5W (30m), ~3.5W (100m), ~1W (EEE idle) | Cat 6 – 55m; Cat 6a – 100m; Cat 7 – 100m
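To make the trade-offs in the table concrete, here is a small Python sketch that picks a 10GbE media option for a given link length using the reach and power figures above; it is a simplification, since real selections also weigh latency, cost and the cabling already in the plant.

```python
# Pick a 10GbE physical-layer option by required reach, using the
# reach/power figures from the table above (simplified illustration).

OPTIONS = [
    # (name, max_reach_m, approx_power_w_per_side)
    ("SFP+ direct-attach copper (twinax)", 10, 0.1),
    ("10GBASE-T over Cat 6a/7", 100, 3.5),
    ("10GBASE-SR multi-mode optics (50um OM3)", 300, 1.0),
    ("10GBASE-LR single-mode optics", 10_000, 1.0),
]

def pick_media(link_length_m):
    """Return the lowest-power option that can cover the requested distance."""
    feasible = [o for o in OPTIONS if o[1] >= link_length_m]
    if not feasible:
        raise ValueError(f"No listed option reaches {link_length_m} m")
    return min(feasible, key=lambda o: o[2])

for d in (5, 55, 250, 2_000):
    name, reach, power = pick_media(d)
    print(f"{d:>5} m -> {name} (reach {reach} m, ~{power} W/side)")
```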

Page 16: Interop: The 10GbE Top 10

16

3X Cost and 10X Performance

Cost item | 1Gb Copper | 10Gb DAC | 10Gb Optical | 10GBASE-T
LOM | $5 | – | – | $50
NIC/UCNA | $118 | $337 | $690 | $250
Switch Ports | $87 | $531 | $531 | $350
Cables | $15 | $132 | – | $25
Switch SFP | – | – | $924 | –
Optic Cable | – | – | $100 | –
Total | $225 | $1,000 | $2,245 | $675
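The totals are just the column sums; the short Python sketch below reproduces them from the 2011 prices on the slide, which makes it easy to rerun with your own quotes.

```python
# Recompute the per-server-connection cost totals from the slide's price list.
# Prices are the 2011 figures shown above; substitute current quotes as needed.

costs = {
    "1Gb Copper":   {"LOM": 5,  "NIC/UCNA": 118, "Switch port": 87,  "Cable": 15},
    "10Gb DAC":     {"NIC/UCNA": 337, "Switch port": 531, "Cable": 132},
    "10Gb Optical": {"NIC/UCNA": 690, "Switch port": 531, "Switch SFP": 924, "Optic cable": 100},
    "10GBASE-T":    {"LOM": 50, "NIC/UCNA": 250, "Switch port": 350, "Cable": 25},
}

for option, items in costs.items():
    print(f"{option:12s} total ${sum(items.values()):,}")
```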

Page 17: Interop: The 10GbE Top 10

17

10GBASE-T Will Dominate IT Connectivity

[Chart: Crehan Research 2011 forecast]

Page 18: Interop: The 10GbE Top 10

18

New 10GBASE-T Options

SFP+ is lowest cost option today– Doesn’t support

existing gigabit Ethernet LOM ports

10GBASE-T support emerging– Lower power at 10-30m

reach (2.5w)– Energy Efficient

Ethernet reduces to ~1w on idle

Optical is only option today

10GBASE-T support emerging– Lower power at 10-30m

reach (2.5w this year)– Can do 100m reach

at 3.5w– Energy Efficient

Ethernet reduces to ~1w on idle

Top of RackTop of Rack End of RowEnd of Row

Page 19: Interop: The 10GbE Top 10

19

#4) Data Center Bridging

Ethernet is a "best-effort" network
– Packets may be dropped
– Packets may be delivered out of order
• Transmission Control Protocol (TCP) is used to reassemble packets in the correct order

Data Center Bridging (aka "lossless Ethernet")
– The "bridge to somewhere" – pause-based link between nodes
– Provides the low latency required for FCoE support
– Expected to benefit iSCSI, enable iSCSI convergence
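Because plain Ethernet can drop or reorder frames, upper layers such as TCP put the byte stream back together from sequence numbers; the toy Python below illustrates that reassembly idea only and is not a TCP implementation.

```python
# Toy illustration of sequence-number reassembly, in the spirit of what TCP
# does above a "best-effort" Ethernet network. Not a real TCP implementation.

def reassemble(segments):
    """segments: iterable of (sequence_number, payload) that may arrive out of order."""
    return b"".join(payload for _, payload in sorted(segments))

arrived_out_of_order = [(2, b"lossless "), (0, b"Ethernet "), (3, b"Ethernet"), (1, b"vs. ")]
print(reassemble(arrived_out_of_order).decode())  # Ethernet vs. lossless Ethernet
```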

Page 20: Interop: The 10GbE Top 10

20

IEEE Data Center Bridging Standards

Feature | Benefit | Standards Activity
Priority-based Flow Control (PFC) | Enables lossless Ethernet – manages I/O between initiator and target on a multi-protocol Ethernet link | IEEE 802.1Qbb
Quality of Service (QoS) | Supports 8 priorities for network traffic | IEEE 802.1p
Enhanced Transmission Selection (ETS) | Allocates bandwidth to IP, iSCSI and FCoE traffic – managed with OneCommand 5.0 | IEEE 802.1Qaz
Data Center Bridging Capability Exchange (DCBX) | Extends the DCB network by exchanging Ethernet parameters between DCB switches | IEEE 802.1AB
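As a rough illustration of what an ETS (802.1Qaz) setup amounts to, the sketch below checks a bandwidth allocation across traffic classes – the shares assigned to LAN, iSCSI and FCoE traffic must sum to 100%. This is a simplification of the standard; the class names and percentages are examples, not a recommended split or a real switch configuration format.

```python
# Minimal sanity check for an ETS-style bandwidth allocation across
# traffic classes (illustrative only; not a switch configuration format).

def validate_ets(allocation):
    """allocation: mapping of traffic class -> guaranteed bandwidth percentage."""
    total = sum(allocation.values())
    if total != 100:
        raise ValueError(f"ETS shares must sum to 100%, got {total}%")
    for cls, share in allocation.items():
        if not 0 <= share <= 100:
            raise ValueError(f"Invalid share for {cls}: {share}%")
    return True

example = {"LAN/IP": 40, "iSCSI": 20, "FCoE": 40}   # example split, not a recommendation
print("allocation ok:", validate_ets(example))
```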

Page 21: Interop: The 10GbE Top 10

21

Savings with Convergence

[Diagram: 10 servers, each with 2 LOM, 8 IP and 4 FC ports – 140 cables before convergence; with converged CNAs, 60 cables after convergence]

(Based on 2 LOM, 8 IP and 4 FC ports on 10 servers)

Savings up to:
• 28% on switches and adapters
• 80% on cabling
• 42% on power and cooling
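The before/after cable counts follow directly from the per-server port mix in the caption; the short sketch below redoes that arithmetic so it can be adjusted for a different server count or port mix (the after-convergence mix of 2 LOM + 4 CNA ports per server is an assumption read back from the 60-cable figure).

```python
# Reproduce the slide's cable-count comparison from its stated assumptions:
# 10 servers, each with 2 LOM + 8 IP + 4 FC ports before convergence,
# and 2 LOM + 4 CNA ports after convergence (assumed from the 60-cable figure).

servers = 10
before = servers * (2 + 8 + 4)   # LOM + IP NIC + FC HBA ports per server
after = servers * (2 + 4)        # LOM + converged CNA ports per server
print(f"before convergence: {before} cables")   # 140
print(f"after convergence:  {after} cables")    # 60
```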

Page 22: Interop: The 10GbE Top 10

22

To Converge or Not To Converge?

Volume x86 Systems
• Best business case for convergence
• Systems actually benefit by reducing adapters, switch ports and cabling

High End Unix Systems
• Limited convergence benefit
• Typically a large number of SAN and LAN ports
• A system may go from 24 Ethernet and 24 Fibre Channel ports to 48 converged Ethernet ports
• Benefits come in later stages of data center convergence, when the Fibre Channel SAN is fully running over the Ethernet physical layer (FCoE)

Page 23: Interop: The 10GbE Top 10

23

#7) Select 10Gb Adapter

10Gb LOM becoming standard on high-end servers

Second adapter for high availability

Should support FCoE and iSCSI offload for network convergence

Compare performance for all protocols
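One way to operationalize this checklist is as a simple filter over candidate adapters; the sketch below does that with invented adapter records – the field names and entries are hypothetical placeholders, not Emulex part data.

```python
# Filter candidate 10Gb adapters against the selection criteria on this slide.
# The adapter records below are invented placeholders, not real product data.

candidates = [
    {"name": "Adapter A", "fcoe_offload": True,  "iscsi_offload": True,  "ports": 2},
    {"name": "Adapter B", "fcoe_offload": False, "iscsi_offload": True,  "ports": 1},
    {"name": "Adapter C", "fcoe_offload": True,  "iscsi_offload": False, "ports": 2},
]

def convergence_ready(adapter):
    """Keep adapters that can offload both FCoE and iSCSI and offer a second
    port (or pair with a second adapter) for high availability."""
    return adapter["fcoe_offload"] and adapter["iscsi_offload"] and adapter["ports"] >= 2

print([a["name"] for a in candidates if convergence_ready(a)])   # ['Adapter A']
```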

Page 24: Interop: The 10GbE Top 10

24

Convergence Means Performance

• 40 Gb/sec
• 900K IOPS
• Faster Engine: More Transactions
• More VMs/CPU
• QoS You Demand
• Performance for Storage
• Wider Track: More Data Lanes
• No Virtualization Conflicts
• Capacity for Provisioning

Page 25: Interop: The 10GbE Top 10

25

#8) Bandwidth and Redundancy

NIC Teaming – Link Aggregation

– Multiple physical links (NIC ports) combined into one logical link

– Bandwidth aggregation

– Failover redundancy

FC Multipathing

– Load balancing over multiple paths

– Failover redundancy

[Diagram: redundant paths from adapter ports a/b and c/d through dual core switches]
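A minimal way to picture link aggregation and failover in code: hash each flow onto one of the team's live ports, and rehash when a port goes down. This is a toy model of the idea, not a bonding driver or switch implementation, and the port names and flow identifiers are made up.

```python
# Toy model of NIC teaming: flows are spread across live ports by hashing
# (bandwidth aggregation) and re-spread when a port fails (failover redundancy).

from zlib import crc32

class Team:
    def __init__(self, ports):
        self.live = list(ports)

    def port_for(self, flow_id: str) -> str:
        """Deterministically map a flow onto one currently-live port."""
        if not self.live:
            raise RuntimeError("all ports in the team are down")
        return self.live[crc32(flow_id.encode()) % len(self.live)]

    def fail(self, port: str):
        self.live.remove(port)   # surviving ports pick up the traffic

team = Team(["eth2", "eth3"])
flows = ["10.0.0.5:443", "10.0.0.8:2049", "10.0.0.9:3260"]
print({f: team.port_for(f) for f in flows})   # spread across eth2/eth3
team.fail("eth2")
print({f: team.port_for(f) for f in flows})   # all flows now on eth3
```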

Page 26: Interop: The 10GbE Top 10

26

#9) Management Convergence

Future Proof Investment Protection
• Protects Your Configuration Investment
• Protects Your Management Investment
• Protects Your LAN & SAN Investment

[Diagram: LAN and SAN joined through network convergence into a single network framework]

Page 27: Interop: The 10GbE Top 10

27

Convergence and IT Management

Traditional IT management

– Server

– Storage

– Networking

Convergence is latest technology to perturb IT management

– Prior: Blades, Virtualization, PODs

Drives innovation across Server/Virtualization, Storage and Networking

Page 28: Interop: The 10GbE Top 10

28

#10) Deployment Plan

Upgrade switches as needed to support 10Gb links to servers

Plan for network convergence with switches that support DCB

Unified 10GbE platform for LOM, blade and stand-up adapters

Focus on new servers
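A deployment plan like this can start as a simple inventory pass; the sketch below flags switches that need attention under the criteria on this slide. The inventory records and field names are hypothetical examples, not output from any real tool.

```python
# Flag switches that block the 10GbE / convergence plan on this slide:
# those without 10Gb server-facing ports or without DCB support.
# The inventory entries are invented examples.

inventory = [
    {"switch": "tor-a01", "has_10gb_ports": True,  "supports_dcb": True},
    {"switch": "tor-a02", "has_10gb_ports": True,  "supports_dcb": False},
    {"switch": "eor-b01", "has_10gb_ports": False, "supports_dcb": False},
]

needs_upgrade = [s["switch"] for s in inventory
                 if not (s["has_10gb_ports"] and s["supports_dcb"])]
print("upgrade or replace:", needs_upgrade)   # ['tor-a02', 'eor-b01']
```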

Page 29: Interop: The 10GbE Top 10

29

Shaun Walsh, Emulex

The 10GbE Top 10

Page 30: Interop: The 10GbE Top 10

30

Flexible Converged Fabric Adapter Technology

• Dual Port 8Gb FC (2x8)
• Quad Port 8Gb FC (4x8)
• Dual Port 16Gb FC (2x16)
• Dual Port 10Gb CNA (2x10)
• Quad Port 10Gb CNA (4x10)
• Dual Port 8Gb FC + Dual Port 10Gb CNA (2x8 + 2x10)
• Single Port 16Gb FC + Dual Port 10Gb CNA (1x16 + 2x10)
• Single Port 40Gb CNA (1x40)

Page 31: Interop: The 10GbE Top 10

31

Emulex at Interop Booth # 743

• Interop, May 9, 2011 – First 16Gb Fibre Channel HBA Demonstration
• EMC World, May 9, 2011 – First 10GBase-T UCNA Demonstration
• Interop, May 9, 2011 – First 40GbE UCNA Demonstration

Page 32: Interop: The 10GbE Top 10

32

Wrap Up

10GBASE-T is Like Hanging Out with an Old Friend

Many Phases of Market Transition and Implementation

Network Convergence is Coming; Best to Be Ready

Management of Domains Will Be the Most Challenging

10GBASE-T Will Drive Down 10Gb Cost in Racks

Page 33: Interop: The 10GbE Top 10

33

THANK YOU

For more information, please contact Shaun Walsh, 949.922.7472, [email protected]