NIC Teaming and Converged Fabric
DESCRIPTION
Hyper-V.nu meeting 16-04-2013, NIC Teaming and Converged Fabric, Marc van Eijk

TRANSCRIPT
NIC Teaming & Converged Fabric
Marc van Eijk | Hyper-V | Private Cloud | Hosted Cloud
www.hyper-v.nu | @_marcvaneijk | DUVAK
Agenda
• NIC Teaming
• Quality of Service
• System Center VMM 2012 SP1
• Designs
NIC Teaming
Windows Server 2008 R2: dedicated networks
[Diagram: five dedicated 1Gb NIC pairs, one each for Management, Live Migration, Cluster, Storage (MPIO), and the Hyper-V Switch carrying the VMs]
Windows Server 2012
NIC Teaming team members: Connect, Distribute, Combine
A NIC Team exposes Team NICs (tNICs); the Hyper-V Switch exposes Virtual NICs (vNICs).
[Diagram: NIC Team with a tNIC on VLAN 100, a tNIC on VLAN 200 and a tNIC in Default Mode; the Hyper-V Switch on the default-mode tNIC carries the Management, Live Migration, Cluster and VM vNICs over the physical NICs]
Requirements: Ethernet NICs with the Windows Logo
NIC Teaming connection modes: Switch Independent, Switch Dependent (Static), Switch Dependent (LACP)
LACP peers exchange LACPDUs carrying the system LACP priority, system MAC address, port LACP priority, port number and operational key.
NIC Teaming load distribution modes: Address Hash and HyperVPort
Address Hash variants (applied to management OS and VM traffic through the team and Hyper-V Switch):
• TransportPorts: source and destination TCP ports and IP addresses
• IPAddresses: source and destination IP addresses
• MacAddresses: source and destination MAC addresses
HyperVPort distributes per Hyper-V Switch port and works with Dynamic VMQ (D-VMQ).
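The three Address Hash variants can be sketched in a few lines. This is a minimal illustration only, not the Windows implementation: the packet field names, the CRC32 hash and the `pick_member` helper are all assumptions made for the example.

```python
import zlib

def pick_member(packet: dict, team_size: int, algorithm: str) -> int:
    """Return the index of the team member that carries this outbound packet."""
    if algorithm == "TransportPorts":   # 4-tuple: TCP ports + IP addresses
        key = (packet["src_ip"], packet["dst_ip"],
               packet["src_port"], packet["dst_port"])
    elif algorithm == "IPAddresses":    # 2-tuple: IP addresses only
        key = (packet["src_ip"], packet["dst_ip"])
    elif algorithm == "MacAddresses":   # L2 fallback: MAC addresses
        key = (packet["src_mac"], packet["dst_mac"])
    else:
        raise ValueError(f"unknown algorithm: {algorithm}")
    # Deterministic hash: one flow always lands on the same member,
    # which preserves in-order delivery for that flow.
    return zlib.crc32(repr(key).encode()) % team_size

pkt = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
       "src_port": 49152, "dst_port": 445,
       "src_mac": "aa:bb:cc:dd:ee:01", "dst_mac": "aa:bb:cc:dd:ee:02"}
assert pick_member(pkt, 2, "TransportPorts") == pick_member(pkt, 2, "TransportPorts")
```

The practical consequence matches the slide: TransportPorts gives the finest spreading (per TCP connection), IPAddresses and MacAddresses progressively coarser.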
Matrix: Native Teaming / Hyper-V Switch
• Switch Independent / Address Hash: native mode teaming with switch diversity; Active / Standby; teaming in a VM; workloads with heavy outbound and light inbound traffic
• Switch Independent / HyperVPort: maximum use of Virtual Machine Queues (VMQs); more VMs than team members; the bandwidth of one NIC is enough per VM
• Switch Dependent / Address Hash: native teaming with maximum performance and no switch diversity; one VM needs more bandwidth than one team member
• Switch Dependent / HyperVPort: company policy requires LACP; more VMs than team members; the bandwidth of one NIC is enough per VM
[Diagram, repeated on each build slide: NIC Team under a Hyper-V Switch carrying Management, Live Migration, Cluster and VM traffic]
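The HyperVPort rows of the matrix come down to one line of pinning logic: every packet from a given Hyper-V Switch port uses the same team member. A sketch under assumed names; the real port-to-member assignment in Windows is internal.

```python
def hypervport_member(switch_port_id: int, team_size: int) -> int:
    # One switch port (one VM vNIC) is pinned to one team member,
    # so that VM's traffic stays on a single NIC and a single VMQ.
    return switch_port_id % team_size

# Two flows from the same VM (same switch port) land on the same NIC:
assert hypervport_member(7, 2) == hypervport_member(7, 2)
# Different VMs (ports 0 and 1) are spread across a 2-member team:
assert hypervport_member(0, 2) != hypervport_member(1, 2)
```

This is why the matrix says HyperVPort fits "more VMs than team members" but not "one VM needs more bandwidth than one team member": a pinned VM can never exceed one NIC.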
NIC Teaming in a VM (guest teaming)
[Diagram: a VM teams two vNICs, each connected to a separate Hyper-V Switch and physical NIC; with SR-IOV, the guest teams the Virtual Functions (VFs)]
Modes: Switch Independent / Address Hash only
Support / limit: maximum 2 vNICs per guest team, each connected to an external switch
NIC Teaming PowerShell
Get-Command -Module NetLbfo
New-NetLbfoTeam Team1 NIC1,NIC2 -TeamingMode LACP -LoadBalancingAlgorithm HyperVPort
• -TeamingMode: SwitchIndependent, Static, LACP
• -LoadBalancingAlgorithm: TransportPorts, IPAddresses, MacAddresses, HyperVPort
Add-NetLbfoTeamMember NIC1 Team1 (add physical NIC NIC1 to Team1)
Add-NetLbfoTeamNIC Team1 83 (add a team NIC to Team1 with VLAN ID 83)
NIC Teaming Demo
[Diagram: a single host with a NIC Team and Hyper-V Switch carrying Management, Live Migration, Cluster and VM traffic; and a two-host Failover Cluster, each host with its own NIC Team]
Quality of Service
Hyper-V Switch & Quality of Service
[Diagram: NIC Team with a Hyper-V Switch carrying Management, Live Migration, Cluster and VM traffic]
New-VMSwitch "VSwitch" -MinimumBandwidthMode Weight -NetAdapterName "Team" -AllowManagementOS 0

Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "VSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Live Migration" -SwitchName "VSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "VSwitch"

Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Live Migration" -Access -VlanId 11
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 12

-MinimumBandwidthMode options:
• Weight (default)
• Absolute (bits per second), with -DefaultFlowMinimumBandwidthAbsolute for the default flow
• None
[Diagram: weights 10 (Management), 30 (Live Migration), 10 (Cluster) and 50 (default flow); VM vNICs on VLAN ID 58 and VLAN ID 63]
Set-VMSwitch "VSwitch" -DefaultFlowMinimumBandwidthWeight 50
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "Live Migration" -MinimumBandwidthWeight 30
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10
QoS guidelines: keep the total weight at 100, reserve weight for critical workloads, and leave gap weights between the Default Flow and the VMs.
Example (Weight mode): Live Migration [30], Cluster [10], Management [10], Default Flow [50]
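The Weight-mode arithmetic from the guidelines, worked through in a few lines. The 10 Gb link speed is an assumption made for the illustration; the weights are the slide's own.

```python
# Each flow's guaranteed minimum is its weight over the total weight.
weights = {"Live Migration": 30, "Cluster": 10,
           "Management": 10, "Default Flow": 50}
total = sum(weights.values())   # guideline: keep this at 100
link_gbps = 10                  # assumed 10 Gb team for illustration

for name, w in weights.items():
    share = w / total           # fraction of the link guaranteed
    print(f"{name}: {share:.0%} = {share * link_gbps:.0f} Gb/s minimum")
# e.g. Live Migration: 30% = 3 Gb/s minimum
```

These are minimum guarantees, not caps: an idle link lets any flow use the full bandwidth.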
Quality of Service: mixing Weight and Absolute
When Management and the Default Flow [BPS] get an absolute reservation of 5% each, 90% of the bandwidth remains for the weighted flows, divided over the remaining weight of 40:
• Live Migration [30]: 90 / 40 * 30 = 67.5%
• Cluster [10]: 90 / 40 * 10 = 22.5%
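The mixed Weight + Absolute calculation above can be reproduced directly. The 5% figures for the absolute flows are the slide's; everything else follows from them.

```python
# Flows with an absolute reservation come off the top; the leftover
# bandwidth is shared over the remaining weights: leftover / weight_sum * w.
absolute_pct = {"Management": 5, "Default Flow": 5}   # % of link, as on the slide
weights = {"Live Migration": 30, "Cluster": 10}

available_bw = 100 - sum(absolute_pct.values())       # 90% left to share
available_weight = sum(weights.values())              # remaining weight: 40

shares = {name: available_bw / available_weight * w
          for name, w in weights.items()}
assert shares["Live Migration"] == 67.5               # 90 / 40 * 30
assert shares["Cluster"] == 22.5                      # 90 / 40 * 10
```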
Quality of Service Demo
[Diagram: a single host with NIC Team and Hyper-V Switch, and a two-host Failover Cluster, each carrying Management, Live Migration, Cluster and VM traffic]
[Diagram: an LACP / HyperVPort team feeding the Hyper-V Switch (VMs, weight 90) and an LACP / Address Hash team for host traffic (weight 10)]
Designs
Upgrade with existing hardware
[Diagram: Windows Server 2008 R2 uses separate NIC Teams for Management, for Cluster / Live Migration and for the Hyper-V Switch with its VMs; Windows Server 2012 consolidates the same traffic onto fewer NIC Teams]
Converged
• VMs isolated: one NIC Team for the Hyper-V Switch and its VMs, a second NIC Team for the Management, Live Migration and Cluster vNICs
• Single team: one NIC Team carrying a Hyper-V Switch with the Management, Live Migration, Cluster and VM vNICs
Datacenter Bridging (DCB)
• Dedicated switches: iSCSI with MPIO on dedicated NICs and switches, next to the converged NIC Team and Hyper-V Switch for Management, Live Migration, Cluster and VMs
• Converged: iSCSI (MPIO) carried alongside the Management, Live Migration, Cluster and VM traffic, with DCB separating the traffic classes
NIC Teaming & SMB 3.0
SMB 3.0 leans on hardware features: RDMA, SMB Multichannel and RSS.
[Diagram: converged NIC Team with Hyper-V Switch for Management, Live Migration, Cluster and VMs, plus separate NICs for the SMB 3.0 storage traffic]
System Center VMM 2012 SP1
Bare Metal Deployment and existing hosts
[Diagram: a Logical Switch applied consistently across hosts; each host gets a NIC Team and a Hyper-V Switch with Management, Live Migration, Cluster and VM vNICs]
Many, many thanks to: