TRANSCRIPT
Trends in Data Center Networking
Mickey Gutman, Digital Enterprise Group
Nov 21, 2007
Copyright © Intel Corporation, 2007
Yesterday’s Networking Focus
Getting data in/out of the server
•Faster data throughput
•Using fewer CPU cycles
Just get me to the switch.
Tomorrow’s Networking Focus
Improving data center services and efficiency

“I want data center solutions for a more efficient, nimble enterprise.”

End-to-end Ethernet solutions to address IT customers’ top needs:
1. Virtualization for data center flexibility & efficiency
2. Network storage that is low cost & easy to deploy
3. Low-latency interconnect for high-performance computing
10 Gigabit Server Connections

[Chart: 10GbE NIC/LOM volume vs. time (2005/06, 2007, 2008, 2009/10). Early deployments are fiber with PCI-X NICs, mostly switch-to-switch connections in the enterprise backbone (all NICs). Copper begins deployment with PCIe* NICs for cluster applications, storage backup, video on demand, and embedded markets (>80% NICs) as 12+ port switches ramp. Blade LOM and mezzanine cards follow with blade servers, driven by server consolidation on multi-core platforms and convergence of LAN, SAN, and IPC traffic. Server LOM then reaches mainstream applications (~20% LOM, ~500Ku), with fabric build-out, aggregation of GbE switches, and performance demands of niche applications along the way.]

Source: Intel estimates. All timeframes, products and dates are subject to change without further notification.
Products that meet growing I/O demand …
Optimized for Virtualization
Unified Storage over Ethernet
Designed for Multi-Core Processors
Designed for Multi-Core Processors
• Improve system response and scalability using MSI-X
• Multiple transmit and receive queues improve system throughput and utilization
• Receive-Side Scaling sorts and directs packets to the appropriate CPU cores
• Low Ethernet latency with adaptive and flexible interrupt moderation

Increased performance by distributing workloads across available CPU cores
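The Receive-Side Scaling idea above can be sketched in a few lines. This is a hedged illustration, not driver code: real hardware uses a Toeplitz hash over the flow tuple (replaced here with a generic hash), and the indirection-table size is an assumed value.

```python
# Sketch of RSS queue selection: a hash of the flow 4-tuple indexes an
# indirection table that picks a receive queue, and each queue's MSI-X
# interrupt is steered to a different CPU core.
import hashlib

NUM_QUEUES = 4  # assumed queue count for illustration

# Indirection table: hash low bits -> receive queue (hence CPU core)
indirection_table = [i % NUM_QUEUES for i in range(128)]

def rss_queue(src_ip, src_port, dst_ip, dst_port):
    """Pick a receive queue from the flow 4-tuple, so every packet of one
    TCP flow lands on the same queue/core (preserving cache locality)."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    h = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return indirection_table[h % len(indirection_table)]

# Packets of the same flow always map to the same queue:
q1 = rss_queue("10.0.0.1", 12345, "10.0.0.2", 80)
q2 = rss_queue("10.0.0.1", 12345, "10.0.0.2", 80)
assert q1 == q2
```

Rebalancing load across cores then only requires rewriting the indirection table, not re-hashing flows.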
Intel® I/O Acceleration Technology Features (now and future)
• Platform Capabilities
– Intel® QuickData Technology
– Direct Cache Access
– Low Latency Interrupt
– Message Signaled Interrupts
• LAN Capabilities
– Header/data split
– Receive Side Scaling (RSS)
– TX/RX checksum offload
– TCP segmentation
– Header-splitting / replication
– Receive Side Coalescing
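Receive Side Coalescing from the list above can be illustrated with a simplified sketch; the segment representation and function name here are hypothetical, not the hardware interface:

```python
# Hedged sketch of Receive Side Coalescing (RSC): consecutive, in-order
# segments of one TCP flow are merged into a single larger segment before
# delivery, so the stack processes far fewer packets per byte received.
def coalesce(segments):
    """segments: list of (seq, payload) for one flow, in arrival order."""
    merged = []
    for seq, payload in segments:
        # Extend the previous entry only if this segment starts exactly
        # where it ended; a gap (loss/reorder) starts a new entry.
        if merged and merged[-1][0] + len(merged[-1][1]) == seq:
            last_seq, last_payload = merged[-1]
            merged[-1] = (last_seq, last_payload + payload)
        else:
            merged.append((seq, payload))
    return merged

segs = [(0, b"aaaa"), (4, b"bbbb"), (8, b"cc"), (20, b"dd")]
out = coalesce(segs)  # first three merge; the gap at 20 starts a new entry
```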
Intel 10GbE Performance with IOAT – Windows*
Test configuration:
• Ixia IxChariot* 6.3, 14 clients, High-Performance Throughput script
• File size = 10,000,000 bytes; buffer sizes = 64 bytes to 64 KB
• Data type: zeros; data verification disabled
Server (Supermicro platform):
• Two 2.66 GHz Quad-Core Intel® Xeon® processors, 8 GB RAM
• Windows Server 2003 x64 SP2
Clients:
• 3.0 GHz Xeon processor, 2 GB RAM
• Windows Server 2003 SP1
Network configuration:
• Cisco Catalyst 6509; clients connected at 1000 Mb/s
• Intel products
Source: Intel Labs
Legal Disclaimer:Performance tests and ratings are measured using specific computer systems and/or components and reflect the approximate performance of Intel products as measured by those tests. Any difference in system hardware or software design or configuration may affect actual performance. Buyers should consult other sources of information to evaluate the performance of systems or components they are
considering purchasing. For more information on performance tests and on the performance of Intel products, visit (http://www.intel.com/performance/resources/limits.htm).
[Chart: “Single Port Performance (1500 MTU), Windows* Server 2003, Chariot* 6.3 Testing” — throughput (Gbps, 0–20 scale) and CPU utilization (0–100%) for Transmit, Receive, and Bi-directional tests, with data labels of 10%, 24%, and 35% respectively.]
I/O Virtualization – Two Primary Models

SW Emulation (today’s virtual environments)
Benefits:
• Easy migration model
• Uniform driver model
Challenges:
• High SW overhead limits performance
• VMDq features reduce this overhead

Direct Assignment (a performance I/O environment)
Benefits:
• Performance: higher throughput, lower latency, maximum I/O efficiency
• Standards-based by using SR-IOV
• New usages will expand the overall virtualization TAM
Challenges:
• Requires new SW infrastructure
• VM migration is more challenging

[Diagram: with SW emulation, VMs (VM0…VMn) run emulated drivers over the VMM’s software switch, sharing one NIC, and VM migration is supported. With direct assignment, each VM’s native LAN driver is bound to its own NIC (NIC0…NICn), bypassing the VMM.]

Intel Enabling Both!
Optimized for Virtualization
• Virtual Machine Device Queues (VMDq) improves network performance and lowers CPU utilization
– Ensures transmit fairness and prevents head-of-line blocking
– Reduces the number of decisions made by the VMM software switch
• VMDq is part of the Intel® Virtualization Technology for Connectivity effort, which seeks to drive efficient I/O performance in the virtualized data center

[Diagram: a NIC with VMDq adds a Layer 2 sorter above the MAC/PHY that queues incoming LAN packets per VM; the VMM’s Layer 2 software switch then delivers each queue to the matching vNIC in VM1…VMn.]

Efficient NIC sharing by sorting and grouping packets
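The sorting idea behind VMDq can be sketched as follows; the MAC-to-queue table values and function names are illustrative assumptions, not the hardware interface:

```python
# Hedged sketch of VMDq-style Layer 2 sorting: the NIC groups incoming
# frames into per-VM queues by destination MAC address, so the VMM's
# software switch no longer classifies every packet itself.
from collections import defaultdict

# MAC -> queue index; assumed to be programmed by the VMM (toy values)
mac_to_queue = {"aa:01": 0, "aa:02": 1, "aa:03": 2}

def sort_frames(frames):
    """frames: list of (dst_mac, payload). Returns queue -> list of payloads."""
    queues = defaultdict(list)
    for dst_mac, payload in frames:
        q = mac_to_queue.get(dst_mac, 0)  # unknown MACs fall to a default queue
        queues[q].append(payload)
    return queues

qs = sort_frames([("aa:01", b"p1"), ("aa:02", b"p2"), ("aa:01", b"p3")])
```

With the grouping done in hardware, the software switch only moves each queue’s batch to its vNIC, which is where the CPU-utilization savings come from.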
VMDq Performance with VMware ESX* Server

VMDq provides significant performance improvements

Test configuration:
• Intel dual-socket platform
• Intel® Xeon® Quad-Core processors (2.66 GHz)
• VMware ESX development build
• Intel 10GbE adapter
• 8 GB RAM
• 4 VMs running Windows* 2003
• VMs affinitized to 1 core each
• Ntttcp application
• I/O size is 64 KB
• 8 threads per VM
Source: Intel Labs

[Chart: throughput (Gbps) — 4.0 without VMDq, 9.2 with VMDq, 9.5 with VMDq and jumbo frames.]
Maximize 10 Gigabit Ethernet performance for Virtual machine connectivity
PCI-SIG* I/O Virtualization (IOV) Implementation
Standards-based PCI Express* I/O device sharing

Intel NICs with IOV will support a standard PCIe* Virtual Function interface for assignment to VMs:
– Allows direct VM control of specific NIC functions
– Allows Intel® VT-d to map memory to shared I/O devices
– Eliminates VMM intermediary memory copies in the I/O data path

[Diagram: the NIC exposes Virtual Functions VF1…VFn, each with its own queue (Q1…Qn), behind a Layer 2 classifier with loop-back above the MAC/PHY; each VM’s driver (pDVR) is assigned a VF directly, while the VMM retains an L2 software switch.]
Unified Storage over Ethernet
iSCSI continues to grow, while Fibre Channel remains important for Enterprise data centers
Storage solutions with support for native iSCSI initiators and remote boot
Source: IDC WW Disk Storage Systems 2007-2011 Forecast
Support for native software iSCSI initiators in Windows* and Linux* for broad, low cost deployments
iSCSI remote boot capability allows simpler deployment of updates and patches
Advanced features will allow migration to Fibre Channel over Ethernet
[Chart: “Disk Storage Systems: External Terabytes”, 0–10,000 TB over 2006–2011, showing iSCSI growing rapidly alongside continued Fibre Channel growth.]
Data Center Ethernet Enhancements
Ethernet Enhancements
• Priority Groups: Virtualizes links and allocates resources per traffic class
• Priority Flow Control by traffic class
• End-to-End Congestion Mgmt and notification
• Shortest path bridging: L2 multipathing
Benefits of DCE Enhancements
• Eliminates transient and persistent congestion
• Lossless fabric: “No Drop” storage links
• Deterministic latency for HPC clusters
• Enables a converged Ethernet fabric for reduced cost & complexity
Specifications
• Adoption in IEEE 802.1; ratification expected in 1H’09
• Supported by most network and storage companies
Intel is developing products for Ethernet convergence in virtualized data centers and driving IEEE standards.

A collection of IEEE-based enhancements to classical Ethernet that provide end-to-end QoS
Does not disrupt existing infrastructure
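Priority Flow Control from the list above can be sketched as a toy model; the data structures and function names are illustrative, not the actual 802.1Qbb wire format:

```python
# Hedged sketch of Priority Flow Control (PFC): instead of pausing the whole
# link (classic 802.3x PAUSE), the receiver pauses only one traffic class,
# so storage frames can be made lossless without stalling LAN traffic.
paused = {cls: False for cls in range(8)}  # one flag per 802.1p priority class

def on_pfc_frame(cls, pause):
    """Handle a receiver-generated PFC frame: pause/resume a single class."""
    paused[cls] = pause

def transmit(queues):
    """queues: class -> list of frames. Send only from un-paused classes;
    frames of paused classes stay queued rather than being dropped."""
    sent = []
    for cls, frames in queues.items():
        if not paused[cls]:
            sent.extend(frames)
            frames.clear()
    return sent

on_pfc_frame(3, True)  # e.g. the storage class signals congestion
out = transmit({3: [b"fc1"], 0: [b"lan1"]})  # LAN class still flows
```

This per-class back-pressure is what the slide means by a “no drop” lossless fabric on storage links.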
FCoE – Fibre Channel over Ethernet

Phase 1: Host I/O consolidation into existing FC and LAN networks
[Diagram: hosts carry FCoE and LAN traffic over converged links to an FCoE switch, which forwards storage traffic to the existing FC SAN and data traffic to the LAN.]

Phase 2: Unified data center fabric
[Diagram: LAN, iSCSI, and FCoE traffic share a single DCE-based converged network, merging today’s separate SAN and LAN fabrics.]

FCoE uses the underlying DCE-enhanced Ethernet to provide converged networks for existing Fibre Channel data centers.
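The encapsulation FCoE performs can be sketched as below. This is a deliberately simplified model: the real frame also carries a VLAN tag, an FCoE header with start/end-of-frame delimiters, and an FCS, and the EtherType value shown is the one assigned to FCoE.

```python
# Hedged sketch of FCoE encapsulation: a native Fibre Channel frame is
# carried unchanged inside an Ethernet frame, relying on the lossless DCE
# fabric underneath so FC's no-drop assumption still holds.
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE

def fcoe_encapsulate(dst_mac, src_mac, fc_frame):
    """Wrap an FC frame in a (simplified) Ethernet header."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return eth_header + fc_frame

def fcoe_decapsulate(eth_frame):
    """Recover the FC frame; the FC stack above sees native Fibre Channel."""
    ethertype = struct.unpack("!H", eth_frame[12:14])[0]
    assert ethertype == FCOE_ETHERTYPE, "not an FCoE frame"
    return eth_frame[14:]

fc = b"\x00" * 24 + b"SCSI payload"      # stand-in for a real FC frame
wire = fcoe_encapsulate(b"\xaa" * 6, b"\xbb" * 6, fc)
assert fcoe_decapsulate(wire) == fc      # the FC frame round-trips unchanged
```

Because the FC frame is untouched, existing FC management tools and drivers continue to work, which is what makes Phase 1 consolidation non-disruptive.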
Next Stop …
Intel’s Next Generation 10 Gigabit Ethernet Products
Intel’s Ethernet Leadership

Ethernet
• 1980: Intel, DEC, Xerox publish Ethernet spec
• 1982: Intel ships world’s first high-volume 10 Mbps silicon
• 1983: IEEE 10BASE-5
• 1990: IEEE 10BASE-T
Fast Ethernet
• 1994: Intel ships world’s first 10/100 NIC
• 1995: IEEE 100BASE-TX
• 1997: Intel ships world’s first single-chip 10/100 silicon
Gigabit Ethernet
• 1998: IEEE 1000BASE-SX
• 1999: IEEE 1000BASE-T
• 2001: Intel ships world’s first single-chip 10/100/1000 silicon
• 2002: Intel ships world’s first dual-port 10/100/1000 silicon
10Gb Ethernet
• 2002: IEEE 802.3ae
• 2003: Intel ships first 10GBASE-LR adapter
• 2004: Intel introduces Intel® AMT with 82573
• 2005: Intel introduces PCIe* LAN silicon with Intel® I/OAT
• 2006: Intel introduces first low-profile quad-port NIC
• 2006: 10GBASE-T

Delivering Quality Ethernet Products for 25 Years!

*Other names and brands may be claimed as the property of others.
Intel® 82598 10 Gigabit Ethernet Controller

Dual-port 1/10 Gigabit Ethernet controller
– XAUI, CX4 (802.3ak)
– 10GBASE-KX4, 1000BASE-KX (IEEE 802.3ap)
– PCI Express* v2.0 (2.5Gbps) x8
– NC-SI, SMBus interfaces
Energy-efficient design
– Power: 4.8W typical; 6.5W max (dual port)
Designed for multi-core processors
– 32 Tx and 64 Rx queues per port
– Receive-side scaling (RSS)
– Intel® I/OAT acceleration: MSI-X, Low Latency Interrupt, header-splitting and replication, Direct Cache Access
Optimized for virtualization
– 16 virtual queues supported
– Sorting based on MAC address or 802.1q tag
Unified storage over Ethernet
– Priority grouping (802.1p)
– Per-priority pause
– iSCSI boot

“We believe the Intel 82598 10 Gigabit Ethernet Controller, with its outstanding performance and power efficiency, is ideally suited for many types of today’s data center applications and will complement Cisco’s high-performance Catalyst data center products,” said Richard Palmer, senior vice president and general manager of Security Technology Group, Cisco.

Availability: In production
Intel® 82575 Gigabit Ethernet Controller

Dual-port 10/100/1000 Ethernet controller
– Dual 1000BASE-T, SerDes, and SGMII interfaces
– PCI Express* v2.0 (2.5Gbps) x4
– 25 x 25mm FCBGA
On-board management features
– PXE, iSCSI boot
– NC-SI, SMBus interfaces
– ASF 2.0 support
Designed for multi-core processors
– 4 Tx and Rx queues per port
– Receive-side scaling (RSS)
– Intel® I/OAT acceleration: MSI-X, Low Latency Interrupt, header-splitting and replication, Direct Cache Access
Optimized for virtualization
– 4 virtual queues supported
– Sorting based on MAC address or 802.1q tag
I/O enhancements
– Offloads compatible with IPv4, IPv6
– Multiple VLAN tags
– iSCSI boot

“Microsoft and Intel have worked closely to deliver integrated support for Intel® I/OAT in the upcoming release of Windows* Server 2008,” said Henry Sanders, distinguished engineer and general manager of Microsoft Windows Networking. “Our mutual customers will benefit from the numerous enhancements in these products, resulting in even greater application performance and scalability on their multi-core Intel® Xeon® processor-based Windows Servers.”

Availability: Shipping now!
Thank You!