
vStart 50 Hyper-V Solution Design Guide

Release 2.0 for Dell 12th generation servers

Dell Global Solutions Engineering

Revision: A00

Sep 2012


THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.

© 2012 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell.

Dell, the DELL logo, PowerConnect, PowerEdge, EqualLogic, and OpenManage are trademarks of Dell Inc. Microsoft, Active Directory, Windows, Hyper-V, and Windows Server are either trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries. Intel and Xeon are registered trademarks of Intel Corporation. Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.

Sep 2012 | Rev A00


Table of Contents

1 Introduction
2 Audience
3 Overview
3.1 Component Roles
3.2 Prerequisites and Datacenter Planning
4 Architecture
4.1 Network Architecture Overview
4.2 LAN Architecture
4.3 SAN Architecture
4.4 EqualLogic Storage Configuration
4.5 Hyper-V Server Networking
4.6 Hyper-V Role and Failover Clustering Configuration
4.7 Management Server Networking
4.8 Management Architecture
5 Power and Cooling Configuration
6 Scalability
6.1 Adding new servers to the Hyper-V Cluster
6.2 Adding new storage to the EqualLogic group
7 References
8 Appendix A – IP & VLAN Planning Worksheet
9 Appendix B – IP & VLAN Sample Worksheet

Tables

Table 1. vStart LAN and SAN Switch Overview
Table 2. Traffic Type Summary
Table 3. VLAN and Subnet Examples
Table 4. Sample VLAN Switch Assignment
Table 5. PowerEdge R620 LAN Connectivity
Table 6. PowerEdge R420 LAN Connectivity
Table 7. EqualLogic PS6100 Management LAN Connectivity
Table 8. PowerConnect 7024 SAN Switch Settings
Table 9. PowerEdge R620 SAN Connectivity
Table 10. PowerEdge R420 SAN Connectivity
Table 11. EqualLogic PS6100 SAN Connectivity (vStart 50+ Configuration)
Table 12. Storage Configuration Options


Figures

Figure 1. vStart 50 Component Overview
Figure 2. Network Topology (Logical View)
Figure 3. Component Labels
Figure 4. LAN Switch Port Usage
Figure 5. Server 1 LAN Connectivity
Figure 6. PowerEdge R420 LAN Connectivity
Figure 7. EqualLogic PS6100 Array 1 Management LAN Connectivity
Figure 8. SAN Switch Port Configuration
Figure 9. Server 1 SAN Connectivity
Figure 10. Management Server SAN Connectivity
Figure 11. EqualLogic PS6100 Array 1 SAN Connectivity
Figure 12. EqualLogic Organizational Concepts
Figure 13. PowerEdge R620 Network Adapter Enumeration
Figure 14. Network Connections
Figure 15. BACS Configuration
Figure 16. PowerEdge R620 Network Connections for LAN and iSCSI
Figure 17. Server and Cluster Manager
Figure 18. Hyper-V Virtual Switch Configuration
Figure 19. PowerEdge R420 Network Adapter Enumeration
Figure 20. Network Connections
Figure 21. PowerEdge R420 Server Broadcom Team Configuration
Figure 23. Management Overview (SAN)
Figure 24. vStart 50 Power Cabling


1 Introduction

The vStart solution is a virtualization infrastructure solution that is designed and validated by Dell Engineering. It is delivered racked, cabled, and ready to be integrated into your datacenter. vStart is offered in four configurations: vStart 50, 100, 200, and 1000¹.

The vStart 50 configuration includes Dell™ PowerEdge™ R620 servers running Microsoft® Windows Server® 2012 Datacenter Edition with the Hyper-V role enabled, Dell EqualLogic™ PS6100 Series iSCSI storage, Dell PowerConnect™ 7024 or PowerConnect 6224 switches, and an optional Dell PowerEdge R420 server that hosts the Dell management software for the solution. The configuration also includes the Dell EqualLogic Host Integration Tools for Microsoft (HIT Kit) plug-in². The vStart 50 configuration varies in the number of EqualLogic PS6100 storage arrays to meet resource needs.

The following documents describe the various aspects of the solution. Contact your Dell Sales or Services representative to get the latest revision of each document.

vStart 50 Hyper-V Solution Overview – Provides a solution overview, including the various components and how the solution is delivered.

vStart 50 Hyper-V Solution Specification – Provides a detailed specification of the various components included in the solution.

vStart 50 Hyper-V Solution Design Guide (this document) – Provides a detailed architectural solution design.

2 Audience

IT administrators and managers who have purchased or plan to purchase the vStart 50 configuration can use this document to understand the solution architecture. For those who have purchased the solution, the detailed cabling diagrams and networking details can be utilized during troubleshooting and maintenance. It is assumed that the reader has a basic understanding of Windows Server 2012 Datacenter Edition with the Hyper-V role enabled, Microsoft clustering, EqualLogic storage, and network architecture.

¹ vStart 100, 200, and 1000 are covered by a separate set of documents.

² The HIT Kit helps to automate the detection and configuration of Microsoft iSCSI initiators while providing enhanced MPIO functionality and EqualLogic array detection and interoperability capabilities. HIT Kit 4.5, which will have full support for Windows Server 2012, is scheduled for release in December 2012; until then, the solution uses Microsoft native MPIO. Once released, the HIT Kit can easily be incorporated into the existing solution.


3 Overview

The solution discussed in this white paper is powered by Dell PowerEdge servers, Dell EqualLogic iSCSI storage, Dell PowerConnect networking, and Microsoft Windows Server 2012 with the Hyper-V role enabled. The solution implements Dell and Microsoft best practices. EqualLogic SAN HeadQuarters (SAN HQ) and Group Manager are included in the solution for storage array monitoring and management. The solution also includes the rack, power distribution units (PDUs), an optional uninterruptible power supply (UPS), a KMM (keyboard, monitor, mouse), and a management server.

vStart 50 includes two PowerEdge R620 servers and one EqualLogic PS6100 array. Storage expansion modules and other optional equipment are also offered in this release; additional details are provided in the sections below. Figure 1 provides a high-level overview of the components utilized in the configuration.

Figure 1. vStart 50 Component Overview

[Rack elevation diagram: a 24U rack containing, from top to bottom, an optional keyboard/monitor/mouse (KMM) console, an optional RPS-720, four PowerConnect 7024 switches (two SAN, two LAN), a shelf, the optional PowerEdge R420 management server, the PowerEdge R620 hypervisor cluster, an optional Dell 3750W UPS, one EqualLogic PS6100X array plus an optional second PS6100X, and 1U/2U/3U filler panels.]


3.1 Component Roles

3.1.1 Hypervisor Cluster – PowerEdge R620

Each PowerEdge R620 server is configured with two 8-core Intel® Xeon® E5-2660 2.2GHz processors and 64GB of memory. Each PowerEdge R620 has four 1Gb Ethernet ports on its rack Network Daughter Card (rNDC), and an additional quad-port 1Gb Ethernet card has been added to each server, providing a total of eight 1Gb ports. Four of these ports are utilized for LAN traffic and the remaining four for SAN traffic. In addition, each PowerEdge R620 server is configured with a PERC H710 RAID controller that hosts the Windows Server 2012 Datacenter OS.

The servers are also configured with the EqualLogic Host Integration Tools, which enhance the existing Microsoft multipathing functionality by providing automatic iSCSI connection management and load balancing across multiple active paths. The EqualLogic Multipath I/O Device Specific Module (MPIO DSM) is installed along with the Microsoft iSCSI Initiator. This connection awareness module understands PS Series network load balancing and facilitates host connections to PS Series volumes.

3.1.2 Management Server – PowerEdge R420 (Optional)

The optional PowerEdge R420 server is configured with one 4-core Intel Xeon 2.2GHz processor and 8GB of memory. The PowerEdge R420 server has two onboard 1Gb ports and an add-in dual-port 1Gb NIC, which together provide two ports for LAN traffic and two ports for SAN traffic. The PowerEdge R420 runs Microsoft Windows Server 2012 Standard to host the management applications for the devices in the solution. The primary applications include EqualLogic SAN HQ and Group Manager, and OpenManage™ Server Administrator (OMSA). In addition, management and configuration of the EqualLogic array and PowerConnect switches can be performed through a web-based interface or a serial connection from the PowerEdge R420.

The PowerEdge R620 and PowerEdge R420 servers are all configured with an iDRAC7 Enterprise out-of-band management device that supports direct management of the systems through a command-line or web-based interface. SAN HQ provides consolidated performance and event monitoring for the iSCSI environment, along with historical trend analysis for capacity planning and troubleshooting.

In the vStart 50 solution, the PowerEdge R420 management server is optional. This choice provides customers with the flexibility to manage the vStart 50 from a VM running on the vStart 50 cluster, from an existing physical server that runs virtualization management software and meets the vStart solution requirements, or from a VM that meets the requirements and runs in a separate virtualized environment that can communicate with the vStart 50 solution. More details about the vStart 50 solution requirements are provided in Section 3.2.

3.1.3 iSCSI Storage Array – EqualLogic PS6100

The default EqualLogic PS6100 in the solution has 24 300GB 10K SAS drives. It is configured with two storage controllers for redundancy. Each storage controller has four 1Gb Ethernet network interfaces and a 100Mb interface dedicated to management.


3.1.4 Local Area Network (LAN) and Storage Area Network (SAN) Switches – PowerConnect 7024 or PowerConnect 6224

Four PowerConnect 7024 or four PowerConnect 6224 switches are utilized in the vStart 50 solution. Each switch supports 24 x 1Gb connections and has two expansion bays that support 10Gb Ethernet modules and/or stacking modules. Stacking modules provide the ability to aggregate multiple switches into a single logical switch, which is then managed as a single entity. The vStart 50 solution dedicates two switches to LAN traffic and two to SAN traffic.

LAN and SAN traffic is segregated to minimize latency for iSCSI traffic. In addition, this design decision allows for integration into environments that have already implemented separate networks for LAN and SAN traffic. If the existing environment has a unified fabric (LAN and SAN on a single fabric), then the LAN and SAN switches provided can be uplinked into this unified environment.

Dell PowerConnect 7024 or PowerConnect 6224 Ethernet switches are used to network the servers and storage in the rack. The switch hardware configuration details are summarized below; for detailed information, refer to the PowerConnect 7024 and PowerConnect 6224 user manuals. Table 1 provides an overview of the switch capabilities.

Table 1. vStart LAN and SAN Switch Overview

PC6224

Switch Capability: 24 x 10/100/1000BASE-T auto-sensing Gigabit Ethernet switching ports; one 48Gbps stacking module per LAN switch. For SAN, the 10Gbps stacking module is used in Ethernet mode and configured as an ISL LAG. For LAN, the stacking module is used to create a single logical switch. One 10 Gigabit SFP+ Ethernet module per LAN switch is used to uplink into a core network (if needed).

Performance: switch fabric capacity up to 184Gb/s per switch; forwarding rate up to 95 Mpps.

PC7024

Switch Capability: 24 x 10/100/1000BASE-T auto-sensing Gigabit Ethernet switching ports; dedicated 64Gbps stacking module per LAN switch. For SAN, the 10Gbps stacking module is used in Ethernet mode and configured as an ISL LAG. For LAN, the stacking module is used to create a single logical switch. One 10 Gigabit SFP+ Ethernet module per LAN switch is used to uplink into a core network (if needed).

Performance: switch fabric capacity up to 224Gb/s per switch; forwarding rate up to 160 Mpps.


3.1.5 Uninterruptible Power Supply (UPS)

The UPS provides backup power in the event of a power failure. The Dell UPS model will vary based on the local power requirements of the datacenter, and the UPS is optional for the vStart 50 solution.

3.1.6 Power Distribution Unit (PDU)

As the name suggests, PDUs distribute power from the main power source to the individual components within the 24U rack. Dell PDUs utilize a combination of worldwide-standard IEC power outlet connections with regionalized input options, allowing flexibility across a variety of global power infrastructures. The appropriate PDU model will vary based on the local power requirements of the datacenter. Consult with your Dell Sales and Services representatives about your local requirements.

3.1.7 PowerEdge 2420 Rack Enclosure

A single 24U rack is required to support either configuration. Blanking panels are included to ensure optimal airflow.

3.1.8 Keyboard, Monitor, Mouse (KMM)

A 1U KMM console (touchpad, keyboard, and 17-inch LCD) is cabled to the management server, providing the ability to walk up to the rack and manage the entire solution. The KMM is optional for the vStart 50 solution.


3.2 Prerequisites and Datacenter Planning

To support either of the configurations, the following components are required to be present in the customer environment:

Active Directory® Domain Services (AD DS) – An AD DS domain must be available on the customer's network. The Hyper-V hosts will be joined to an existing or new domain. Cluster services also require AD DS. Consult with your Dell Sales and Services representatives for more details.

Domain Name System (DNS) – DNS must be available on the management network.

Network Time Protocol (NTP) Server – NTP is recommended on the management network.

Power – Sufficient power to support a vStart 50 must be present. Detailed power, weight, and cooling requirements for the datacenter are defined in the vStart 50 Hyper-V Solution Specification document.

Switch Connectivity – The network architecture supports uplinks into the existing switches in the datacenter. The uplink recommendations are discussed in Section 4.1, Network Architecture Overview.

The addition of servers, switches, and iSCSI storage arrays to an existing or new datacenter also requires planning for IP addresses and VLANs. Appendices A and B provide worksheets to help support this planning effort. Before planning can begin, it is important to understand the vStart solution architecture, power and cooling attributes, and scalability characteristics; the remainder of this document covers those subjects. Contact your Dell Sales and Services representatives for more information about planning and prerequisites.


4 Architecture

The architecture discussed in this section focuses on the vStart 50.

4.1 Network Architecture Overview

Hyper-V network traffic in this solution comprises five distinct types: Virtual Machine (VM), Management, Live Migration, Cluster Private, and iSCSI. In addition, support for out-of-band (OOB) management is included. Two separate networks are created to support the different traffic types:

LAN – This network supports Management, VM, Live Migration, Cluster Private, and out-of-band management traffic. In addition, uplinks to the core infrastructure provide connectivity to the solution support services (AD DS, DNS, and NTP).

SAN – This network supports iSCSI data. Uplinks are supported to connect into an existing iSCSI network; however, these uplinks are not required for full solution functionality. SAN switch out-of-band management also occurs on this network.


Figure 2. Network Topology (Logical View)

[Network topology diagram (logical view): the stacked LAN PowerConnect 7024 switches connect the PowerEdge R620 Hyper-V servers, the optional PowerEdge R420 management server, and the core network, which hosts the solution support services (AD/DNS, a database server for SCVMM, and a recommended NTP server). The SAN PowerConnect 7024 switches, joined by a 10Gb ISL, carry iSCSI traffic between the servers and the EqualLogic PS6100X storage (second array optional, available with vStart 50+), with optional uplinks to an existing iSCSI network. Legend: stacking link, 1Gb LAN, 1Gb SAN, out-of-band management, 10Gb ISL.]

Figure 2 depicts the logical LAN and SAN network architecture. Detailed diagrams are provided throughout the remainder of the document.


The table below summarizes the use of each traffic type.

Table 2. Traffic Type Summary

Traffic Type – Use

Management – Supports virtualization management traffic and communication between the Hyper-V servers in the cluster.

Live Migration – Supports migration of VMs between Hyper-V host servers in the cluster.

VM – Supports communication between the VMs hosted on the Hyper-V cluster and external systems.

Cluster Private – Supports internal cluster network communication between the Hyper-V servers in the cluster.

Out-of-Band Management – Supports configuration and monitoring of the servers (through the iDRAC7 management interface), storage arrays, and network switches.

iSCSI Data – Supports iSCSI traffic between the servers and storage array(s), as well as traffic between the arrays.

Throughout the remainder of the networking sections, it is important to understand the correlation between each component's location in the rack and its label. Figure 3 displays the labels for the vStart 50 components. For clarity, the full device descriptions and option details are not noted in Figure 3; refer to Figure 1 for more information about device descriptions and options.


Figure 3. Component Labels

[Rack elevation diagram with the component labels used throughout this document: SAN Switch 1 and 2, LAN Switch 1 and 2, Mgmt Server, Server 1 and 2, Array 1 and 2, UPS, optional keyboard/monitor/mouse, optional RPS-720, shelf, and 1U/2U/3U filler panels.]


4.2 LAN Architecture

The LAN includes two PowerConnect 7024 or PowerConnect 6224 switches, which support the VM, Management, Cluster Private, Live Migration, and OOB traffic. These traffic types are logically separated through the use of VLANs. The two switches are stacked together, which forms a single logical switch and provides a 48Gb link between the two PC6224 switches, or a 64Gb link between the two PC7024 switches. The solution provides four 1Gb uplinks from each switch to link into an existing core network infrastructure. If the core network infrastructure supports 10Gb Ethernet, then 10Gb uplink modules may be added to each switch; however, this option is beyond the scope of this document.

4.2.1 Traffic Isolation using VLANs

The traffic on the LAN is separated into five VLANs: one VLAN each for VM, Management, Live Migration, Cluster Private, and OOB traffic. VLAN tagging for the OOB management and EqualLogic PS6100 management traffic is performed by the PowerConnect 7024 or PowerConnect 6224 switches. For the other traffic types, the tagging is performed by Windows Server 2012 native NIC teaming. Table 3 below provides VLAN and subnet examples. Routing between VLANs depends on the specific infrastructure requirements and is not covered in this document. Consult with your Dell Sales and Services representatives if you have questions about routing or require assistance implementing it in your environment.

If desired, the PowerConnect 7024 or PowerConnect 6224 switches can be configured to provide the routing function.

Table 3. VLAN and Subnet Examples

Traffic Type Sample VLAN Sample Subnet

OOB 10 192.168.10.X

Management 20 192.168.20.X

Live Migration 30 192.168.30.X

Cluster Private 40 192.168.40.X

VM 100 192.168.100.X

Additional VLANs can be implemented for VM traffic, if required.
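
Where helpful, the sample VLAN-to-subnet plan in Table 3 can be captured in a small script so that host addresses are derived consistently during the planning exercise. The sketch below is illustrative only: the VLAN IDs and 192.168.x.0/24 subnets are the samples from Table 3, and the host-numbering convention is an assumption, not part of the vStart specification.

    #!/usr/bin/env python3
    import ipaddress

    # Sample plan from Table 3: traffic type -> (VLAN ID, subnet).
    VLAN_PLAN = {
        "OOB":             (10,  ipaddress.ip_network("192.168.10.0/24")),
        "Management":      (20,  ipaddress.ip_network("192.168.20.0/24")),
        "Live Migration":  (30,  ipaddress.ip_network("192.168.30.0/24")),
        "Cluster Private": (40,  ipaddress.ip_network("192.168.40.0/24")),
        "VM":              (100, ipaddress.ip_network("192.168.100.0/24")),
    }

    def host_address(traffic_type: str, host_index: int) -> ipaddress.IPv4Address:
        """Return the Nth usable address in the subnet for a traffic type.

        host_index is 1-based; mapping a server to a fixed index (for
        example, Server 1 -> .11) is a hypothetical convention.
        """
        vlan_id, subnet = VLAN_PLAN[traffic_type]
        return list(subnet.hosts())[host_index - 1]

    if __name__ == "__main__":
        for name, (vlan_id, subnet) in VLAN_PLAN.items():
            print(f"VLAN {vlan_id:>3}  {subnet}  {name}")
        # Example: a sample Management address for Server 1.
        print("Server 1 Management:", host_address("Management", 11))

Keeping the plan in one place like this makes it easy to extend when additional VM VLANs are introduced.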


4.2.2 PowerConnect 7024 or PowerConnect 6224 LAN Switch Configuration

Table 4 provides a port configuration overview for the two PowerConnect 7024 or PowerConnect 6224 LAN switches. Trunk mode is used to allow multiple VLANs on a port, and access mode is used when the switch performs the VLAN tagging function. Figure 4 below shows how the ports are distributed among the Hyper-V hosts, the management server, out-of-band management, and uplinks. Ports are also available to support future expansion.

Table 4. Sample VLAN Switch Assignment

Ports – Configuration Notes

Hyper-V Hosts – Configured in trunk mode; allow VLANs 20, 30, 40, and 100.

Management Server – Configured in trunk mode; allow VLANs 10 and 20.

Out-of-Band Management – Configured in access mode for VLAN 10.

EqualLogic PS6100 Management – Configured in access mode for VLAN 20.

Uplink – Configured in trunk mode, with all uplink ports in a single Link Aggregation Group (LAG). Allow VLAN 20 for connectivity to the core infrastructure and VLAN 100 for VM traffic; also allow VLAN 10 for possible external out-of-band manageability, and VLANs 30 and 40 if the cluster will be expanded outside the rack.

Future Expansion – These ports are disabled, which prevents unauthorized access or misconfiguration.
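
To make the trunk/access distinction in Table 4 concrete, the following sketch generates sample interface stanzas in a PowerConnect-style CLI syntax. It is a planning aid only: the port identifiers are hypothetical, and the exact CLI keywords must be checked against the switch documentation for the firmware in use.

    #!/usr/bin/env python3

    # Sample VLAN assignments from Table 4 (VLAN IDs from Table 3).
    TRUNK_PORTS = {
        "1/0/9": [20, 30, 40, 100],   # hypothetical Hyper-V host port
        "1/0/1": [10, 20],            # hypothetical management server port
    }
    ACCESS_PORTS = {
        "1/0/5": 10,                  # hypothetical iDRAC (OOB) port
        "1/0/3": 20,                  # hypothetical array management port
    }

    def trunk_stanza(port: str, vlans: list[int]) -> str:
        """Build a trunk-mode stanza allowing multiple VLANs on one port."""
        allowed = ",".join(str(v) for v in vlans)
        return (f"interface gigabitethernet {port}\n"
                f" switchport mode trunk\n"
                f" switchport trunk allowed vlan {allowed}\n"
                f"exit\n")

    def access_stanza(port: str, vlan: int) -> str:
        """Build an access-mode stanza where the switch tags the VLAN."""
        return (f"interface gigabitethernet {port}\n"
                f" switchport mode access\n"
                f" switchport access vlan {vlan}\n"
                f"exit\n")

    if __name__ == "__main__":
        for port, vlans in TRUNK_PORTS.items():
            print(trunk_stanza(port, vlans))
        for port, vlan in ACCESS_PORTS.items():
            print(access_stanza(port, vlan))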

Each PowerConnect 7024 or PowerConnect 6224 LAN switch is configured with a single stacking module that supports two stacking links. The two LAN switches are connected using both of the stacking links to provide redundancy. The stacked LAN switches form a single logical switch in which one of the switch modules acts as the master, and both switches are managed as a single entity by connecting to the master switch. Stacking also provides a high-speed data path between the switch modules. In addition, stacking the switches helps prevent potential loops when the switches are uplinked to the existing network infrastructure.


Figure 4 shows the function that each switch port provides. Individual ports or blocks of contiguous ports are color-coded and labeled per their respective functions.

Figure 4. LAN Switch Port Usage

[Port-usage diagram for LAN Switch 1 and LAN Switch 2: on each switch, port groups are assigned to the management server, array management, iDRAC, Hyper-V hosts, core uplinks, and stacking links.]

4.2.3 PowerEdge R620 LAN Connectivity

Each PowerEdge R620 has eight 1Gb Ethernet ports, four of which are dedicated to LAN traffic. In addition, the iDRAC7 out-of-band management interface is connected to the LAN switches. Figure 5 shows the connectivity of Server 1 to the LAN switches.

Figure 5. Server 1 LAN Connectivity

[Cabling diagram: the four LAN ports of Server 1 (PowerEdge R620) are split between LAN Switch 1 and LAN Switch 2, and the iDRAC port connects to LAN Switch 1. Legend: Mgmt, Out-of-Band Mgmt.]


The other PowerEdge R620 server follows the same connectivity pattern to the LAN switches, with the exception that each server uses a unique set of physical ports on the switches. Table 5 details the connectivity for each of the PowerEdge R620 servers to the LAN switches.

Table 5. PowerEdge R620 LAN Connectivity

Server    rNDC – Port 1       rNDC – Port 3       NIC – Port 1        NIC – Port 2        iDRAC7
Server 1  Switch 1 – Port 9   Switch 2 – Port 9   Switch 1 – Port 10  Switch 2 – Port 10  Switch 1 – Port 5
Server 2  Switch 1 – Port 11  Switch 2 – Port 11  Switch 1 – Port 12  Switch 2 – Port 12  Switch 2 – Port 5

4.2.4 PowerEdge R420 LAN Connectivity

The PowerEdge R420 has four 1Gb Ethernet ports, two of which are dedicated to LAN traffic. In addition, the iDRAC7 OOB interface is connected to the LAN switches. Figure 6 shows the management server connectivity to the LAN switches.

Figure 6. PowerEdge R420 LAN Connectivity

[Cabling diagram: the two LAN ports of the PowerEdge R420 management server connect to LAN Switch 1 and LAN Switch 2, and the iDRAC port connects to LAN Switch 1. Legend: Mgmt, Out-of-Band Mgmt.]

Table 6 details management server connectivity to the LAN switches.

Table 6. PowerEdge R420 LAN Connectivity

LOM – Port 1 LOM – Port 3 iDRAC7

Switch 1 – Port 1 Switch 2 – Port 1 Switch 1 – Port 6


4.2.5 EqualLogic PS6100 LAN Connectivity

Figure 7. EqualLogic PS6100 Array 1 Management LAN Connectivity

[Cabling diagram: the dedicated MANAGEMENT port on each of the two PS6100 control modules connects to a LAN switch, one controller per switch. Legend: 1Gb LAN.]

Figure 7 shows the array controller management connectivity to the LAN switches, while Table 7 details the EqualLogic array management connectivity. These network connections are used for array monitoring and management; Group Manager and SAN HQ utilize these interfaces.

Note: Figure 7 shows a vStart 50 configuration; Table 7 shows vStart 50+ network connectivity.

Table 7. EqualLogic PS6100 Management LAN Connectivity

Array #  Controller #  Mgmt Port
Array 1  Slot 0        Switch 1 – Port 3
Array 1  Slot 1        Switch 2 – Port 3
Array 2  Slot 0        Switch 1 – Port 4
Array 2  Slot 1        Switch 2 – Port 4


4.3 SAN Architecture

The SAN includes two PowerConnect 7024 or PowerConnect 6224 switches, which support iSCSI data and iSCSI management traffic. The two switches are connected together with a stacking module configured in Ethernet mode as an ISL LAG. In addition, the solution supports up to eight 1Gb uplinks from each switch to link into an existing core iSCSI network infrastructure; these uplinks are optional. If required, 10Gb uplink modules may be added to each switch; however, these options are beyond the scope of this document.

4.3.1 PowerConnect 7024 or PowerConnect 6224 Switch Configuration for SAN

Figure 8 shows how the ports are distributed among the Hyper-V hosts, the management server, the storage arrays, and uplinks. Additional ports are available for future expansion.

Figure 8. SAN Switch Port Configuration

[Port-usage diagram for SAN Switch 1 and SAN Switch 2: on each switch, port groups are assigned to the management server, storage arrays, Hyper-V hosts, core uplinks, and the 2 x 10Gb ISL LAG.]

The stacking module, in Ethernet mode, is configured as a 20Gbps Link Aggregation Group (LAG) to provide a path for communication across the switches. In addition, the LAG carries traffic between EqualLogic arrays if more than one array is present in the configuration.

Spanning Tree Protocol (STP) PortFast is enabled on all the server and storage ports; this helps to reduce link downtime in the event of a path or switch failure. Ports left for future expansion are disabled to prevent unauthorized access or misconfiguration. The uplink ports on each switch are configured in a LAG, and a separate LAG is created for the two stacking module Ethernet links.


Switch-level configuration details are defined in Table 8. A single VLAN (not VLAN 1) is used for all iSCSI traffic.

Table 8. PowerConnect 7024 SAN Switch Settings

Item Setting

Rapid STP Enabled

Jumbo Frames Enabled

Flow Control On

Unicast Storm Control Disabled

It is important to understand the spanning tree configuration on the SAN if both uplinks will be utilized. Spanning tree costs should be set appropriately so that STP does not block the LAG between these two switches, which would result in a longer Ethernet switch path for SAN traffic and potentially increased SAN latency.

4.3.2 PowerEdge R620 SAN Connectivity

Each PowerEdge R620 has eight 1Gb Ethernet ports, four of which are dedicated to SAN traffic. Figure 9 shows Server 1 connectivity to the SAN switches.

Figure 9. Server 1 SAN Connectivity

[Cabling diagram: the four SAN ports of Server 1 (PowerEdge R620) are split between SAN Switch 1 and SAN Switch 2. Legend: iSCSI.]

The other PowerEdge R620 server follows the same connectivity pattern to the SAN switches, with the exception that each server uses a unique set of physical ports on the switches.


Table 9 details PowerEdge R620 server connectivity to the SAN switches for the vStart 50 configuration.

Table 9. PowerEdge R620 SAN Connectivity

Server    rNDC – Port 2       rNDC – Port 4       NIC – Port 2        NIC – Port 4
Server 1  Switch 1 – Port 11  Switch 2 – Port 11  Switch 1 – Port 12  Switch 2 – Port 12
Server 2  Switch 1 – Port 13  Switch 2 – Port 13  Switch 1 – Port 14  Switch 2 – Port 14

4.3.3 PowerEdge R420 SAN Connectivity

The PowerEdge R420 has four 1Gb Ethernet ports, two of which are dedicated to SAN traffic. These ports allow the PowerEdge R420 to manage the SAN switches and mount iSCSI-based volumes if required. Figure 10 shows the management server connectivity to the SAN switches.

Figure 10. Management Server SAN Connectivity

[Cabling diagram: the two SAN ports of the PowerEdge R420 management server connect to SAN Switch 1 and SAN Switch 2. Legend: iSCSI.]

Table 10 details the management server connectivity to the SAN switches.

Table 10. PowerEdge R420 SAN Connectivity

LOM – Port 2 NIC – Port 2

Switch 1 – Port 1 Switch 2 – Port 1

In addition, iSCSI volumes may be presented to the management server to provide additional storage capacity for storing items such as ISO images and scripts. Windows Server 2012 supports file shares, which may be mounted by the Hyper-V cluster hosts to access these files. To account for this potential use case, the Dell EqualLogic Host Integration Tools for Microsoft (HIT) is installed on the management server. The HIT provides the multipath I/O (MPIO) plug-in for the Microsoft iSCSI framework; however, HIT is not required to use Group Manager. The HIT is also installed on the PowerEdge R620 Hyper-V hosts. Group Manager is the user interface for managing the EqualLogic PS6100 storage.

4.3.4 EqualLogic PS6100 SAN Connectivity

The EqualLogic PS6100 contains redundant storage controllers. Each storage controller has four 1Gb connections supporting iSCSI data and a dedicated 100Mb management port. The four iSCSI data connections on each controller are split between the two SAN switches for redundancy. Figure 11 shows how the two controllers on Array 1 are connected.

Figure 11. EqualLogic PS6100 Array 1 SAN Connectivity

[Cabling diagram: the four iSCSI ports (Ethernet 0–3) on each PS6100 control module are split between SAN Switch 1 and SAN Switch 2. Legend: iSCSI.]

The optional second EqualLogic array follows the same connectivity pattern to the SAN switches, with the exception that it uses a unique set of physical ports on the switches. Table 11 details the connectivity for each of the EqualLogic iSCSI arrays to the SAN switches.

Table 11. EqualLogic PS6100 SAN Connectivity (vStart 50+ Configuration)

Array #  Controller #  Ethernet 0         Ethernet 1         Ethernet 2          Ethernet 3
Array 1  Slot 0        Switch 1 – Port 3  Switch 2 – Port 3  Switch 1 – Port 5   Switch 2 – Port 5
Array 1  Slot 1        Switch 2 – Port 4  Switch 1 – Port 4  Switch 2 – Port 6   Switch 1 – Port 6
Array 2  Slot 0        Switch 1 – Port 7  Switch 2 – Port 7  Switch 1 – Port 9   Switch 2 – Port 9
Array 2  Slot 1        Switch 2 – Port 8  Switch 1 – Port 8  Switch 2 – Port 10  Switch 1 – Port 10


4.4 EqualLogic Storage Configuration

4.4.1 EqualLogic Group and Pool Configuration

Each EqualLogic array (or member) is assigned to a particular group. Groups simplify management by enabling all members in a group to be managed from a single interface. Each group contains one or more storage pools; each pool must contain one or more members, and each member is associated with only one storage pool. As an example, Figure 12 shows a group with three members distributed across two pools.

Figure 12. EqualLogic Organizational Concepts

[Diagram: a single group containing two pools, with three members distributed across the two pools.]

The iSCSI volumes are created at the pool level. When multiple members are placed in a single pool, the data is distributed among the members of the pool. With data distributed over a larger number of disks, the potential performance of iSCSI volumes within the pool increases with each member added.

For the vStart 50+, one pool can be created with two members, or two pools can be created with one member each. Depending upon the storage options selected, the vStart 50+ EqualLogic organization options are expanded.

Table 12. Storage Configuration Options

vStart Model  Base vStart Storage Array Configuration  With Storage Expansion Configuration
vStart 50     1 x EqualLogic PS6100 array              2 x EqualLogic PS6100 arrays

Using the information from Table 12 as inputs, a vStart 50+ with storage expansion can be configured as one pool with two members, or as two pools with one member each. It is important to note that two arrays is the maximum number of EqualLogic PS6100 arrays supported in a single storage group. Other EqualLogic array types can be added to the existing storage pool or group. Understanding the expected storage workload profile will help to determine the appropriate array selection and pool configuration. For more information, consult with your Dell Sales and Services representatives about planning and designing an EqualLogic storage solution. Also refer to the following white papers: Using Tiered Storage in a PS Series SAN, available at http://www.equallogic.com/WorkArea/DownloadAsset.aspx?id=5239, and PS Series Storage Arrays: Choosing a Member RAID Policy, available at http://www.equallogic.com/WorkArea/DownloadAsset.aspx?id=5231.

4.4.2 Volume Size Considerations

Volumes are created in the storage pools. Volume sizes depend on the customer environment and the type of workloads. Volumes must be sized to accommodate not only the VM virtual hard drive, but also the virtual memory of the VM and additional capacity for any snapshots of the VM. Depending on the environment, one may decide to create multiple ~500 GB volumes, each hosting multiple VMs. It is important to include space for the guest operating system memory cache, snapshots, and VM configuration files when sizing these volumes. Additionally, volumes can be thin-provisioned so that they grow on demand, only when additional storage is needed; thin provisioning can increase the efficiency of storage utilization. A rough sizing sketch follows.
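
As a back-of-the-envelope aid, the sizing rule above can be expressed as a short calculation. The sketch below is a simplified illustration, not a Dell sizing tool: the 20% snapshot reserve, the 1GB configuration-file allowance, and the per-VM inputs are assumptions chosen for the example.

    #!/usr/bin/env python3

    def volume_size_gb(vhd_gb: float, vm_memory_gb: float,
                       config_gb: float = 1.0,
                       snapshot_reserve: float = 0.20) -> float:
        """Estimate the capacity one VM needs on a volume.

        Accounts for the virtual hard drive, the VM's memory (saved state /
        memory cache), an allowance for configuration files, and a snapshot
        reserve expressed as a fraction of the VHD size. The 20% reserve is
        an assumption for illustration, not a Dell recommendation.
        """
        base = vhd_gb + vm_memory_gb + config_gb
        return base + vhd_gb * snapshot_reserve

    if __name__ == "__main__":
        # Example: how many 60GB-VHD, 8GB-memory VMs fit in a ~500 GB volume?
        per_vm = volume_size_gb(vhd_gb=60, vm_memory_gb=8)
        print(f"Per-VM estimate: {per_vm:.0f} GB")          # ~81 GB
        print(f"VMs per 500 GB volume: {int(500 // per_vm)}")  # 6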

With each volume created and presented to the servers, additional iSCSI sessions are initiated. When planning the solution, it is important to understand that group and pool limits exist for the number of simultaneous iSCSI sessions. For more information, refer to the current EqualLogic Firmware (FW) Release Notes, available at the EqualLogic Support site: https://support.equallogic.com/secure/login.aspx.

4.4.3 Storage Array RAID Configuration

The storage array RAID configuration is highly dependent on the workload in your virtual environment. The EqualLogic PS Series storage arrays support three RAID types: RAID 6, RAID 10, and RAID 50. The RAID configuration will depend on workloads and customer requirements. In general, RAID 10 provides the best performance, at the expense of storage capacity. RAID 10 generally provides better performance in random I/O situations, and requires additional overhead in the case of a drive failure scenario. RAID 50 generally provides more usable storage, but has less performance than RAID 10. RAID 6 provides better data protection than RAID 50. A rough capacity comparison is sketched below.
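
The capacity side of this trade-off can be illustrated with simple arithmetic. The sketch below approximates usable capacity for the default 24 x 300GB PS6100 configuration; the spare-drive count, the RAID 50 span width, and the parity model are assumptions for illustration, and actual usable capacity depends on the array's RAID policy and firmware.

    #!/usr/bin/env python3

    DRIVES = 24      # default PS6100 drive count in this solution
    DRIVE_GB = 300   # default drive size

    def usable_gb(raid: str, drives: int = DRIVES, drive_gb: int = DRIVE_GB,
                  spares: int = 2, raid50_span: int = 6) -> float:
        """Approximate usable capacity, ignoring metadata and formatting overhead.

        Assumes 2 hot spares and 6-drive RAID 50 spans (illustrative values only).
        """
        data = drives - spares
        if raid == "RAID 10":              # mirroring: half the data drives
            return data / 2 * drive_gb
        if raid == "RAID 50":              # one parity drive per span
            spans = data // raid50_span
            return (data - spans) * drive_gb
        if raid == "RAID 6":               # dual parity across the set
            return (data - 2) * drive_gb
        raise ValueError(f"unknown RAID type: {raid}")

    if __name__ == "__main__":
        for level in ("RAID 10", "RAID 50", "RAID 6"):
            print(f"{level}: ~{usable_gb(level):,.0f} GB usable")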

For more information on configuring RAID on EqualLogic arrays, refer to the white paper How to Select the Correct RAID for an EqualLogic SAN, available at http://www.equallogic.com/WorkArea/DownloadAsset.aspx?id=5231.

4.4.4 Storage Access Control

Access to the created volumes can be restricted to a subset of the servers that have physical connectivity to the EqualLogic arrays. For each volume, access can be restricted based on IP address, iSCSI qualified name (IQN), and/or Challenge Handshake Authentication Protocol (CHAP). When creating a volume for the servers in the Hyper-V cluster, access must be granted to all servers in the cluster.
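
These access rules can be thought of as a simple matching problem: an initiator may connect to a volume when at least one access record matches it. The sketch below models that logic for planning purposes; the record format and the sample IQNs are illustrative assumptions, not the EqualLogic implementation.

    #!/usr/bin/env python3
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AccessRecord:
        """One access-control entry on a volume; any combination of fields may be set."""
        ip: Optional[str] = None
        iqn: Optional[str] = None
        chap_user: Optional[str] = None

    def matches(record: AccessRecord, ip: str, iqn: str,
                chap_user: Optional[str]) -> bool:
        """True when every field set on the record matches the initiator."""
        return all([
            record.ip is None or record.ip == ip,
            record.iqn is None or record.iqn == iqn,
            record.chap_user is None or record.chap_user == chap_user,
        ])

    # Hypothetical example: grant both Hyper-V cluster nodes access by IQN,
    # per the rule that every cluster node needs access to a cluster volume.
    volume_acl = [
        AccessRecord(iqn="iqn.1991-05.com.microsoft:server1.example.local"),
        AccessRecord(iqn="iqn.1991-05.com.microsoft:server2.example.local"),
    ]

    def host_allowed(ip: str, iqn: str, chap_user: Optional[str] = None) -> bool:
        return any(matches(r, ip, iqn, chap_user) for r in volume_acl)

    if __name__ == "__main__":
        print(host_allowed("192.168.50.11",
                           "iqn.1991-05.com.microsoft:server1.example.local"))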


4.5 Hyper-V Server Networking

4.5.1 PowerEdge R620 Network Adapter Enumeration

Windows Server 2012 Datacenter detects the Broadcom adapter physical devices and labels them as "Broadcom NetXtreme Gigabit Ethernet #x" in Windows Device Manager. Each of these devices is tied to an individual PCI bus location (PCI bus, device, function). Figure 13 shows how the PCI bus location information can be used to determine which physical port on the server is associated with its Windows friendly name. By default, the Broadcom virtual bus driver loads an NDIS (Network Driver Interface Specification) VBD (Virtual Bus Driver) client driver for each port.

Figure 13. PowerEdge R620 Network Adapter Enumeration

[Diagram: PCI bus locations for the PowerEdge R620 network ports. rNDC ports: rNDC1 = Bus 1, Dev 0, Func 0; rNDC2 = Bus 1, Dev 0, Func 1; rNDC3 = Bus 2, Dev 0, Func 0; rNDC4 = Bus 2, Dev 0, Func 1. Add-in NIC ports: NIC1 = Bus 5, Dev 0, Func 0; NIC2 = Bus 5, Dev 0, Func 1; NIC3 = Bus 5, Dev 0, Func 2; NIC4 = Bus 5, Dev 0, Func 3.]

Windows detects each NDIS device as a network adapter, labels rNDC ports as "NICx" and add-in NIC ports as "SLOT x", and lists them in Network Connections as shown in Figure 14.

Figure 14. Network Connections


4.5.2 LAN Teaming

Windows Server 2012 native NIC teaming is used to configure and team the network ports for LAN traffic so that each traffic type reaches the correct VLAN. The PCI bus information (PCI bus, device, function) is used to determine which adapters need to be configured in the team for LAN traffic. VLAN tags for the Management, Live Migration, Cluster Private, and VM traffic types need to be created. Broadcom adapters in Windows Server 2012 are enumerated in a predictable order: adapter names with the prefix "NIC" represent network daughter card (rNDC) ports, and names with the prefix "SLOT" represent add-in network adapters. A sketch of this selection logic follows.
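
Because the port-selection rule is mechanical, it can be expressed in a few lines. The sketch below mimics the mapping using the PCI locations from Figure 13; the "SLOT 4" slot number and the choice of which ports join the LAN team are illustrative assumptions, not the only valid assignment.

    #!/usr/bin/env python3
    from dataclasses import dataclass

    @dataclass
    class Adapter:
        name: str   # Windows friendly name: "NICx" for rNDC, "SLOT x ..." for add-in
        bus: int
        dev: int
        func: int

    # PCI locations from Figure 13 (PowerEdge R620). Only the
    # bus/device/function values come from the figure.
    ADAPTERS = [
        Adapter("NIC1", 1, 0, 0), Adapter("NIC2", 1, 0, 1),   # rNDC
        Adapter("NIC3", 2, 0, 0), Adapter("NIC4", 2, 0, 1),   # rNDC
        Adapter("SLOT 4 Port 1", 5, 0, 0), Adapter("SLOT 4 Port 2", 5, 0, 1),
        Adapter("SLOT 4 Port 3", 5, 0, 2), Adapter("SLOT 4 Port 4", 5, 0, 3),
    ]

    # Hypothetical LAN/SAN split for illustration: two rNDC ports and two
    # add-in ports join the LAN team; the rest stay unteamed for iSCSI (MPIO).
    LAN_TEAM = {"NIC1", "NIC3", "SLOT 4 Port 1", "SLOT 4 Port 3"}

    if __name__ == "__main__":
        for a in sorted(ADAPTERS, key=lambda a: (a.bus, a.dev, a.func)):
            kind = "rNDC" if a.name.startswith("NIC") else "add-in"
            role = "LAN team" if a.name in LAN_TEAM else "SAN (MPIO)"
            print(f"Bus {a.bus} Dev {a.dev} Func {a.func}  {a.name:<14} {kind:<7} {role}")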

Figure 15 displays the teamed configuration and the four virtual VLAN adapters. Note the four adapters classified as "ISCSIx", which are used for iSCSI traffic. Also note that the LAN adapters are teamed for fault tolerance, while the SAN adapters are not teamed because they use MPIO.

Figure 15. Windows Native NIC Team

4.5.3 Windows Network Connections

For each Hyper-V server, the Network Connections control panel window lists all of the network adapters. The Broadcom devices should be labeled appropriately based on the PCI bus information, which can be found in Windows Device Manager (Device Properties > Details) or in the BACS application (System Devices Information tab). Figure 16 below shows the Network Connections details.


Figure 16. PowerEdge R620 Network Connections for LAN and iSCSI

4.5.4 iSCSI MPIO

The HIT Kit helps to automate the detection and configuration of Microsoft iSCSI initiators while providing enhanced MPIO functionality and EqualLogic array detection and interoperability capabilities. The Microsoft iSCSI initiators are the link between the Hyper-V hosts and the storage array. The iSCSI initiator IP addresses and authentication details are configured per each customer's requirements.

4.6 Hyper-V Role and Failover Clustering Configuration

4.6.1 Roles and Features

The Hyper-V role is a core component of the Windows Server 2012 Datacenter host and the foundational role for enabling the hypervisor. The Hyper-V role is enabled by default from the Dell factory, saving time and helping to ensure consistency across the OS configuration of the PowerEdge R620 hosts. Since the Hyper-V role is installed at the Dell factory, no additional steps are required to install the role once the PowerEdge R620 hosts are powered on at the customer site. Configuration steps such as assigning the default VM network are still required, however, and additional Hyper-V virtual switch configuration steps will be performed at the customer site so that customer-specific requirements can be incorporated into the Hyper-V virtual switch configuration.

Failover Clustering is a feature that, when combined with the Hyper-V role, provides fault tolerance at the Hyper-V server level and enables capabilities such as Live Migration and VM failover. Since Failover Clustering depends upon available Active Directory Domain Services, ensure that the PowerEdge R620 servers are joined to the domain and are properly communicating with AD DS and DNS prior to enabling the feature. To enable Failover Clustering, add the Failover Clustering feature from the Server Manager interface.


Prior to creating the cluster, best practice is to run the cluster validation wizard and resolve any reported issues before proceeding. More information on setting up failover clusters for Hyper-V can be found on Microsoft TechNet in Hyper-V: Failover Clustering Overview, available at http://technet.microsoft.com/en-us/library/hh831579.
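
A minimal sketch of validation and cluster creation from PowerShell, using the sample node names and cluster IP from Appendix B (the cluster name here is a placeholder):

# Run the full validation suite against the intended members and review the report.
Test-Cluster -Node node1,node2
# Create the cluster once validation passes.
New-Cluster -Name vstart50cl -Node node1,node2 -StaticAddress 192.168.20.50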

Cluster Shared Volumes (CSV) are implemented on the Hyper-V cluster to allow multiple virtual machines to utilize the same volume and to migrate to any host in the cluster. The Live Migration feature of Windows Server 2012 allows movement of a virtual machine from one host to another without perceivable downtime. Figure 17 below shows Server Manager and Failover Cluster Manager in the vStart 50 environment. For more information on CSV and live migration networking, see Use Cluster Shared Volumes in a Windows Server 2012 Failover Cluster and Configure and Use Live Migration on Non-clustered Virtual Machines, available at:

http://technet.microsoft.com/en-us/library/jj612868

http://technet.microsoft.com/en-us/library/jj134199.aspx

Figure 17. Server and Cluster Manager
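
As a sketch, a disk already presented to both hosts can be brought into the cluster and converted to a CSV from PowerShell; "Cluster Disk 1" stands in for whatever name the cluster assigns.

# Add eligible shared disks to the cluster, then promote one to a Cluster Shared Volume.
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"
# CSVs appear on every node under C:\ClusterStorage.
Get-ClusterSharedVolume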


4.6.2 Virtual Networks

For each Hyper-V server, the Hyper-V Virtual Switch Manager is used to create virtual networks to allow

the virtual machines access to the VLANs. Virtual networks can be created as needed for any of the

VLAN networks. Routing can be implemented so that VMs can communicate between the VLANs as

needed.
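
A minimal sketch of creating such a virtual network from PowerShell, bound to the LAN_VM team interface shown in Figure 15; excluding the management OS is equivalent to the unchecked checkbox described below.

# Bind an external virtual switch to the tagged LAN_VM team interface and keep
# the management OS off the switch.
New-VMSwitch -Name "VM Network" -NetAdapterName "LAN_VM" -AllowManagementOS $false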

The screenshot below in Figure 18 shows the Virtual Switch Manager configured with a virtual network

connected to the VM VLAN. Note that the “Allow management operating system to share this network

adapter” checkbox is unchecked and the LAN_VM adapter does not have an IP address.

Figure 18. Hyper-V Virtual Switch Configuration


4.7 Management Server Networking

4.7.1 PowerEdge R420 Network Adapter Enumeration

Windows Server 2012 detects the Broadcom Adapter physical devices and labels them as “Broadcom

NetXtreme Gigabit Ethernet #x” in Windows Device Manager. Each of these devices is tied to an

individual PCI Bus location (PCI Bus, Device, Function). The PCI bus location information can be used to

determine which physical port on the server is associated with its Windows friendly name. In addition,

by default, the Broadcom virtual bus driver loads an NDIS VBD Client driver for each port. Figure 19

shows the PCI bus to network adapter enumeration details.

Figure 19. PowerEdge R420 Network Adapter Enumeration

[Figure 19 depicts the PowerEdge R420 rear view with its adapter-to-PCI-bus mapping: rNDC1 = Bus 1, Dev 0, Func 0; rNDC2 = Bus 1, Dev 0, Func 1; NIC1 = Bus 3, Dev 0, Func 0; NIC2 = Bus 3, Dev 0, Func 1; plus the dedicated iDRAC port.]

Broadcom adapters in Windows Server 2012 are enumerated in a predictable order: NICs whose names begin with the prefix 'NIC' represent the network daughter card (rNDC) ports, and NICs whose names begin with the prefix 'SLOT' represent add-in network adapters.

Figure 20. Network Connections


4.7.2 PowerEdge R420 Network Adapter Teaming Configuration

The two adapters dedicated to the LAN are configured into a switch-independent team using Windows Server 2012 native NIC teaming. This provides redundancy in the event that one of the paths or network adapters fails. No switch configuration is required to support this teaming mode.

Figure 21. PowerEdge R420 Server Team Configuration

Figure 21 displays the teamed configuration and the two virtual VLAN adapters. Note the two adapters classified as “ISCSIx”, which are used for iSCSI traffic. Also note that the LAN adapters are teamed for fault tolerance; the SAN adapters are not teamed because they use MPIO.


4.8 Management Architecture

This section assumes initial configuration of the devices has been performed and pertains to ongoing

management of the vStart configuration. For additional information on managing the vStart

configuration, refer to the vStart 50 Hyper-V Solution Overview document.

4.8.1 Management on the LAN

The management of the devices on the LAN includes the following items:

- Out-of-band server management through the iDRAC Enterprise
- Server management through Dell OpenManage Server Administrator
- Hyper-V and Cluster Manager
- LAN switch management through CLI or web browser
- EqualLogic array management through CLI or web browser
- EqualLogic array monitoring

Server Out-of-Band Management: The PowerEdge R620 and PowerEdge R420 servers can be managed

directly by connecting to the iDRAC Web interface. In addition, the iDRAC supports remote KVM

emulation through the virtual console.

Dell OpenManage Server Administrator (OMSA): OMSA should be installed on the PowerEdge R620 servers and the PowerEdge R420 server. OMSA is available for download from support.dell.com. It can be installed on each Hyper-V server either using the self-extracting executable or via a Dell Update Package. For more information about OMSA, refer to the Dell TechCenter at http://www.delltechcenter.com/page/OpenManage+Server+Administrator+-+OMSA.

Hyper-V Cluster Management: Management of the Hyper-V hosts is performed by connecting to each

server through the Hyper-V Manager that can be run from the PowerEdge R420 Management Server or

VM. Server Manager can also be utilized to access Hyper-V Manager, while the cluster is managed from

Failover Cluster Manager.

LAN and SAN Switch Management: Management of the LAN and SAN switches can be performed

through a web browser, serial cable, or telnet.

EqualLogic Array Management: The EqualLogic arrays are managed through the EqualLogic Group Manager web interface, which can be accessed from the management server. An administrator's primary tasks within Group Manager include configuration and troubleshooting of the arrays.


EqualLogic Array Monitoring: SAN HQ is installed on the management server to provide current

performance monitoring and historical statistics. Group Manager can also be used for array monitoring.

A logical overview of the LAN management architecture is shown in Figure 22.

Figure 22. Management Overview (LAN)

[Figure 22 depicts the LAN management topology: the PowerConnect 7024 LAN switches uplinked to the core network; the PowerEdge R420 optional management server (SAN HQ); the PowerEdge R620 Hyper-V servers; the EqualLogic PS6100X iSCSI storage (second array optional, available with vStart 50+); and connections to solution support services, NTP (recommended), and an AD domain controller. Legend: 1Gb LAN, out-of-band management.]


4.8.2 SAN Switch Management

SAN management consists of managing the SAN switches, which can be performed through a web browser, serial cable, or telnet from the management server.

A logical overview of the SAN management architecture is shown in Figure 23.

Figure 23. Management Overview (SAN)

[Figure 23 depicts the SAN management topology: the PowerEdge R620 Hyper-V servers and the PowerEdge R420 management server (SAN HQ) connected to the PowerConnect 7024 SAN switches on the iSCSI network. Legend: 1Gb SAN, stack/LAG.]


5 Power and Cooling Configuration

The vStart 50 configuration supports datacenters that have two separate sources of power. For the servers and iSCSI storage arrays, redundant power supplies are provided, and each power supply is connected to a different PDU to avoid a single point of failure. The four PowerConnect switches are configured with an external redundant power unit; the primary power input on each switch is connected to a separate PDU rather than to the redundant power unit.

The UPS and PDU models may vary based on the deployment needs of the datacenter. As such, detailed power cabling information will be provided by Dell Services as part of the deployment process. Figure 24 below depicts the power configuration for the vStart 50+ configuration.

Figure 24. vStart 50 Power Cabling

[Figure 24 depicts the vStart 50 rack power cabling: the redundant 750W power supplies in the PowerEdge R620 servers (Svr 1 and Svr 2), the optional PowerEdge R420, and the EqualLogic PS6100X array(s) (Array 1, optional Array 2) are split between PDU A and PDU B; the PowerConnect 7024 or 6224 switches (LAN 1, LAN 2, SAN 1, SAN 2) draw primary power from a PDU and backup power from an RPS-720 or RPS-600 redundant power unit; PDU A is fed from datacenter power source A (optionally through the UPS) and PDU B from datacenter power source B. PDU connectivity varies based upon UPS presence and datacenter power infrastructure. The rack also houses a keyboard/monitor/mouse shelf and filler panels.]


PDU A should be cabled to one power source and PDU B to a separate power source. When the optional UPS is included in the solution, the recommendation is to cable PDU A into the UPS and PDU B directly into a datacenter power source; in that case, the UPS itself should be cabled to a power source separate from the one feeding PDU B, if two power sources are available.

Detailed information on the power, cooling, and related datacenter rack requirements of the vStart 50 is available in the vStart 50 Hyper-V Solution Specifications document.

6 Scalability

When adding servers or storage to the rack, power, weight, and cooling requirements must be taken into account. The power limits of the PDUs and UPSs must also be understood prior to installing a new system.

Switch ports on both the LAN and SAN switches are available for expansion. Those ports must be enabled and configured to support the new servers and/or storage arrays.

6.1 Adding New Servers to the Hyper-V Cluster

If additional VMs will be deployed that exceed current resource capabilities, or the Hyper-V cluster has reached its acceptable maximum (CPU and memory) resource utilization, then additional servers can be added to the cluster, up to a maximum of 64 nodes, depending on rack and datacenter capacity. See the cluster scalability section at http://technet.microsoft.com/en-us/library/hh831414#BKMK_SCALE for more information.

Previously created iSCSI volumes on the EqualLogic array may require modifications to the access

controls to grant access to the newly added servers.

When adding servers to a Hyper-V cluster, it is recommended that their configuration be identical to the other systems in the cluster. If this is not achievable, there may be restrictions on certain actions, such as Live Migration between the differing systems. To understand Live Migration requirements, refer to Virtual Machine Live Migration Overview, available at http://technet.microsoft.com/en-us/library/hh831435.
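
A sketch of the expansion steps, assuming a hypothetical third node named node3 that is domain-joined and configured to match the existing hosts:

# Re-run validation with the new node included before changing the cluster.
Test-Cluster -Node node1,node2,node3
# Join the new node to the existing cluster (placeholder cluster name).
Add-ClusterNode -Cluster vstart50cl -Name node3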

6.2 Adding New Storage to the EqualLogic Group

New EqualLogic arrays can be added to the existing EqualLogic group. As each new array is added to

the storage group, the storage capacity and performance, in terms of both bandwidth and IOPS, are

increased. This increased capacity can be utilized without downtime. When a new array is added to an

existing pool, the existing data is automatically distributed to the newly added array.

If EqualLogic thin provisioning is utilized and the allocated virtual capacity is nearing the limit of physical capacity, adding a storage array to the constrained pool addresses this issue. As noted in Section 4.4.2, the impact on the total iSCSI session count for the EqualLogic group and pools must be understood when adding either new servers or EqualLogic arrays.


7 References

Microsoft references:

Windows Server 2012
http://technet.microsoft.com/en-us/library/hh801901

Windows Server 2012 Hyper-V Whitepaper
http://download.microsoft.com/download/5/D/B/5DB1C7BF-6286-4431-A244-438D4605DB1D/WS%202012%20White%20Paper_Hyper-V.pdf

Failover Clusters in Windows Server 2012
http://technet.microsoft.com/en-us/library/hh831579

EqualLogic references:

Dell EqualLogic PS Series Architecture Whitepaper
http://www.dell.com/downloads/global/products/pvaul/en/dell_equallogic_architecture.pdf

Host Integration Tools for Windows
http://www.dell.com/downloads/global/products/pvaul/en/equallogic-host-software.pdf

PS Series Storage Arrays: Choosing a Member RAID Policy
http://www.equallogic.com/WorkArea/DownloadAsset.aspx?id=5231

Using Tiered Storage in a PS Series SAN
http://www.equallogic.com/resourcecenter/assetview.aspx?id=5239

Monitoring your PS Series SAN with SAN HQ
http://www.equallogic.com/resourcecenter/assetview.aspx?id=8749


8 Appendix A – IP & VLAN Planning Worksheet

VLAN Configuration

Traffic Type                VLAN    Subnet    Subnet Mask    Gateway
Management                  ____    ______    ___________    _______
Live Migration              ____    ______    ___________    _______
Cluster Private             ____    ______    ___________    _______
Out-of-Band Management      ____    ______    ___________    _______
VM                          ____    ______    ___________    _______
iSCSI / iSCSI Management    ____    ______    ___________    _______

Existing Infrastructure

DNS     NTP     NTP for SAN    Database for Optional Mgmt Server
____    ____    ___________    _________________________________

PowerConnect 7024 or 6224 Switches

The IP address for the PowerConnect LAN switches should be on the out-of-band management network, and the SAN switches should be on the iSCSI network. Only one IP address is required for the LAN switches due to the stacked configuration.

Switch       IP Address
LAN 1 & 2    __________
SAN 1        __________
SAN 2        __________

PowerEdge R420 Server or Mgmt VM: ___________ IP Address, hostname, FQDN _______________

iDRAC7    Management    OOB    iSCSI 1    iSCSI 2
______    __________    ___    _______    _______


PowerEdge R620 Servers

Microsoft Cluster FQDN: ____________
Microsoft Cluster IP: _______________

Server 1: ____________ IP Address, hostname, FQDN _______________

iDRAC7    Management    Live Migration    Cluster Private    iSCSI 1    iSCSI 2
______    __________    ______________    _______________    _______    _______

Server 2: ____________ IP Address, hostname, FQDN _______________

iDRAC7    Management    Live Migration    Cluster Private    iSCSI 1    iSCSI 2
______    __________    ______________    _______________    _______    _______

EqualLogic PS6100 Array(s)

EqualLogic Group Name: ____________
EqualLogic Group IP: ____________
EqualLogic Management IP: _______________

           iSCSI 1    iSCSI 2    iSCSI 3    iSCSI 4    Management
Array 1    _______    _______    _______    _______    __________
Array 2    _______    _______    _______    _______    __________


9 Appendix B – IP & VLAN Sample Worksheet

VLAN Configuration (the VLANs, TCP/IP subnets, and gateways should be changed to match the existing infrastructure; the information in this and the other tables in Appendix B is provided as an example)

Traffic Type                VLAN    Subnet           Subnet Mask      Gateway
Management                  20      192.168.20.X     255.255.255.0    192.168.20.1
Live Migration              30      192.168.30.X     255.255.255.0    192.168.30.1
Cluster Private             40      192.168.40.X     255.255.255.0
Out-of-Band Management      10      192.168.10.X     255.255.255.0    192.168.10.1
VM                          100     192.168.100.X    255.255.255.0    192.168.100.1
iSCSI / iSCSI Management    50      192.168.50.X     255.255.255.0    N/A

Existing Infrastructure (IP addresses should be changed to match the existing infrastructure)

DNS             NTP             NTP for SAN     Database for Optional Mgmt Server
192.168.20.2    192.168.20.4    192.168.50.2    192.168.20.3

PowerConnect 7024 or PowerConnect 6224 Switches

The IP address for the PowerConnect LAN switches should be on the out-of-band management network, and the SAN switches should be on the iSCSI network. Only one IP address is required for the LAN switches due to the stacked configuration.

Switch       IP Address
LAN 1 & 2    192.168.10.201
SAN 1        192.168.50.201
SAN 2        192.168.50.202

PowerEdge R420 Server or Mgmt VM: 192.168.20.10, management, management.vstart50.lab

iDRAC7           Management       Out-of-Band
192.168.10.10    192.168.20.10    192.168.10.60

iSCSI 1          iSCSI 2
192.168.50.10    192.168.50.60


PowerEdge R620 Servers

Microsoft Cluster FQDN: ____________
Microsoft Cluster IP: 192.168.20.50

Server 1: Hyper-V node 1: 192.168.20.11, node1, node1.vstart50.lab

iDRAC7           Management       Live Migration    Cluster Private
192.168.10.11    192.168.20.11    192.168.30.11     192.168.40.11

iSCSI 1          iSCSI 2
192.168.50.11    192.168.50.61

Server 2: Hyper-V node 2: 192.168.20.12, node2, node2.vstart50.lab

iDRAC7           Management       Live Migration    Cluster Private
192.168.10.12    192.168.20.12    192.168.30.12     192.168.40.12

iSCSI 1          iSCSI 2
192.168.50.12    192.168.50.62

EqualLogic PS6100 Array(s)

EqualLogic Group Name: ____________
EqualLogic Group IP: 192.168.50.100
EqualLogic Management IP: 192.168.20.100

           iSCSI 1           iSCSI 2           iSCSI 3           iSCSI 4           Management
Array 1    192.168.50.101    192.168.50.102    192.168.50.103    192.168.50.104    192.168.20.102
Array 2    192.168.50.105    192.168.50.106    192.168.50.107    192.168.50.108    192.168.20.103