
IBM System x Solution for Microsoft Hyper-V on X6

Reference Architecture

February 2014

Author: Kent Swalin
Contributing Authors: Roland Mueller, Scott Smith, David Ye

© Copyright IBM Corporation, 2014


Table of contents

Introduction
Business problem and business value
    Business value
Architectural overview
Hardware Overview
    IBM System x3850 X6
    IBM Storwize V7000
    IBM RackSwitch™ G8124E Top-of-Rack Network Switch
    IBM System Storage™ SAN48B-5 Fibre Switch
    Microsoft Windows Server 2012 R2
    Deployment diagram
Best Practice and Implementation Guidelines
    Racking and Power Distribution
    IBM System x3850 X6 Setup
        Pre-OS Installation
        OS Installation and Configuration
    Network Configuration
        Key Networking Concepts and Terminology
        IBM RackSwitch G8124E Top-of-Rack Network Switches
        x3850 X6 Hyper-V Host Server Network Ports
        Network Topology Design
        VLAN Definitions
        Network Switch Port Definitions
        Using ISCLI to configure the G8124E switch
            Accessing Global Configuration Mode
            Configure the ISL and VLAG Peer Relationship
            Enable Tagging on Host Ports
            Configure the VLANs
            Configure LACP teams and enable VLAG on the host connections
            Recommended: Backup the switch configuration
        Host Network Configuration
    Storage Design
        Key Storage Concepts and Terminology
        IBM System Storage SAN48B-5 Fibre Switches
        x3850 X6 Hyper-V Host Server Storage Ports
        Storage Topology Design


    Storage Configuration
        Storage Zoning
        Configuring the Host Bus Adapters
        Enabling Multipathing
        Host Definition on the Storwize V7000
        Storage Partitioning
    Failover Cluster Creation
    Microsoft Hyper-V Configuration
        Setting default paths to the CSV
        Creating a Highly Available Virtual Machine
        Virtual Machine Configuration
    Virtual Machine Fibre Channel Storage Connections (Optional)
        Enable Multi-Path Support on the Virtual Machine
        Enable NPIV on HBA Ports
        Creating Virtual SAN Devices
        Creating Virtual HBAs
        Verify Connectivity and Add Additional Zoning
    Additional Best Practices and Requirements
        Quorum Best Practices
        Cluster Shared Volume Requirements
Summary
Appendix 1: Bill of Material
Resources
Trademarks and special notices


Introduction

This document describes a virtualization reference architecture for a 2-node Microsoft® Hyper-V® cluster using IBM® System x3850 X6 servers and IBM Storwize® V7000 storage. This paper provides planning guidance, design considerations, and best practices for implementation.

This reference architecture and implementation guide targets organizations implementing Hyper-V and IT engineers who are familiar with the hardware and software that make up this reference architecture. Additionally, the System x® sales teams and their customers evaluating or pursuing Hyper-V virtualization solutions will benefit from this validated configuration.

Comprehensive experience with the various reference configuration technologies is recommended.

Business problem and business value

Today’s IT managers are looking for efficient ways to manage and grow their IT infrastructure with confidence. Good IT practices recognize the need for high availability and maximum resource utilization. Rapidly responding to changing business needs with simple, fast deployment and configuration, while maintaining healthy systems and services, directly affects the vitality of your business. Natural disasters, malicious attacks, and even simple software upgrade patches can cripple services and applications until administrators resolve the problems and restore any backed-up data. The challenge of maintaining uptime only becomes more critical as businesses consolidate physical servers into a virtual server infrastructure to reduce data center costs, maximize utilization, and increase workload performance.

Business value

Microsoft Hyper-V technology continues to gain competitive traction as a key cloud component in many customer virtualization environments. Hyper-V is included as a standard component in both Windows Server® 2012 R2 Standard and Datacenter editions. Windows Server 2012 R2 Microsoft Hyper-V Virtual Machines (VMs) support up to sixty-four virtual processors and 1TB of memory.

This IBM x3850 X6 solution for Microsoft Hyper-V provides businesses with an affordable, interoperable and reliable industry-leading virtualization and cloud solution. This offering, built around the latest IBM System x servers, storage and networking, takes the complexity out of the solution by providing a step-by-step implementation guide. This reference architecture combines Microsoft software, consolidated guidance, and validated configurations for compute, network, and storage. The design provides a high level of redundancy and fault tolerance across the servers, storage, and networking to ensure high availability of private cloud pooled resources.

This reference architecture includes an implementation guide which provides setup, configuration details, and ordering information for the highly available, 2-node clustered environment described in this document. This is ideal for enterprise organizations that are ready to take their virtualization to the next level.

Architectural overview

The design consists of two servers which are part of a failover cluster. Microsoft Failover Clustering helps eliminate single points of failure so users have near-continuous access to important server-based, business-productivity applications. Multiple paths connect the clustered servers to the networking and storage infrastructure to maintain access to critical resources in the event of a planned or unplanned outage.

Each clustered server has the Hyper-V role installed and will host virtual machines. Virtualizing business-critical workloads has multiple benefits:


Administration can be simplified by standardizing the hardware and software configuration.

Virtual machines can be saved and deployed rapidly via self-service portals to end customers.

Virtual machines can be migrated among clustered host servers to support resource balancing, scheduled maintenance, and in the event of physical or logical outages, virtual machines can be automatically restarted on remaining cluster nodes. As a result, clients minimize downtime. This seamless operation is attractive for organizations trying to create new business and maintain healthy service level agreements.

Individual virtual machines have their own operating system instance and are completely isolated from the host operating system as well as other virtual machines. Virtual machine isolation helps promote higher business-critical application availability while the Microsoft failover clustering feature, found in the Windows Server 2012 R2 Standard and Datacenter Editions, can dramatically improve production system uptimes.

Figure 1 illustrates the design of the system.

Figure 1 Architectural overview: Node-1 and Node-2 form a 2-node Hyper-V cluster hosting virtual machines, connected through redundant 10GbE network switches and fibre switches to storage controllers A and B; the partitioned storage includes the quorum disk, the CSV, and the remaining storage, with domain controllers and management systems on the network.

Hardware Overview

A short summary of the software and hardware components used in the Hyper-V Virtualization Reference Architecture is listed below, followed by implementation guidelines in later sections.

The Hyper-V Virtualization Reference Configuration is constructed of the following enterprise-class components:

Two IBM System x3850 X6 servers in a failover cluster and installed with the Hyper-V role

One IBM Storwize V7000 storage system with dual controllers

Two IBM RackSwitch™ G8124E Top-of-Rack Network Switch units

Two IBM System Storage™ SAN48B-5 Fibre Channel Switch units

Microsoft Windows Server 2012 R2 Datacenter Edition

Together, these software and hardware components form a high-performance, cost-effective solution that supports Microsoft Hyper-V cloud environments for most business-critical applications and many custom third-party solutions.

This design consists of two IBM System x3850 X6 servers running the Microsoft Windows Server 2012 R2 operating system, attached via Fibre Channel to an IBM Storwize V7000 Storage System using two IBM SAN48B-5 fibre switches. Networking leverages two IBM RackSwitch G8124E Top-of-Rack network switches. This configuration can be expanded to multiple servers for additional compute capacity.

IBM System x3850 X6

At the core of this reference architecture, the IBM System x3850 X6 server delivers the high compute capacity, performance, and reliability required for virtualizing business-critical applications in Hyper-V cloud environments. To provide the virtualization performance expected of any Microsoft production environment, IBM System x3850 X6 servers can be equipped with processors from the Intel Xeon E7-4800/8800 v2 product families at up to 3.2 GHz, up to 1800 MHz memory access, 60 cores per server, and up to 6 TB of memory (Windows Server 2012 R2 is limited to 4 TB). The IBM System x3850 X6 includes a ServeRAID controller and a choice of hot-swap spinning SAS or SATA disks as well as SFF hot-swap solid-state drives. Up to 11 total PCIe expansion slots (9 rear, 2 front) are available for data and storage connections. It also supports remote management via the IBM Integrated Management Module (IMM), which enables continuous management capabilities. All of these key features, including many not listed, help solidify the dependability IBM customers have grown accustomed to with System x servers.

By virtualizing with Microsoft Hyper-V technology on the IBM System x3850 X6 (Figure 2), businesses reduce physical server sprawl, power consumption, and total cost of ownership (TCO). Virtualizing the server environment also results in lower server administrative overhead, giving IT administrators the capability to manage more systems than in purely physical environments. Highly available critical applications residing on clustered host servers can be managed with greater flexibility and minimal downtime due to Microsoft’s Hyper-V Live Migration capabilities.


Figure 2 IBM System x3850 X6

IBM Storwize V7000

IBM Storwize V7000 is a powerful midrange disk system that has been designed to be easy to use and to enable rapid deployment without additional resources. With its simple, efficient and flexible approach to storage, the IBM Storwize V7000 is a cost-effective complement to the Virtualization Hyper-V Reference Architecture. By offering substantial features at a price that fits most budgets, the IBM Storwize V7000 delivers superior price/performance ratios, functionality, scalability and ease of use for the mid-range storage user.

The Storwize V7000 offers:

Simplified management with an integrated, intuitive user interface for faster system accessibility

Reduced network complexity with FCoE and iSCSI connectivity

Up to five times more active data in the same disk space using IBM Real-time Compression™

Optimize costs for mixed workloads, with up to 200 percent better performance with solid-state drives (SSDs) using IBM System Storage® Easy Tier®

Support business applications that need to grow dynamically, while consuming only the space actually used with thin provisioning.

Improved application availability and resource utilization for organizations of all sizes

IBM Storwize V7000 (Figure 3) is well-suited for Microsoft virtualized cloud environments. The Storwize V7000 complements the IBM System x3850 X6, the IBM RackSwitch G8124E network switches, and IBM System Storage SAN48B-5 fibre channel switches in an end-to-end Microsoft Hyper-V private cloud solution by delivering proven disk storage in flexible, scalable configurations. The Storwize V7000 consists of a control enclosure and optionally up to nine expansion enclosures. The system also supports intermixing 3.5-inch and 2.5-inch type controller and expansion enclosures. Each enclosure houses two controllers; the control enclosure contains two Node controllers and each expansion enclosure contains two Expansion controllers.

The Storwize V7000 provides a choice of up to 120 x 3.5-inch or 240 x 2.5-inch Serial-Attached SCSI (SAS) drives for the internal storage and uses SAS cables and connectors to attach to the optional expansion enclosures. In addition to the hard disk drives, there are also small-form-factor 2.5-inch solid-state drives (SSDs) available.


Figure 3 IBM Storwize V7000

IBM RackSwitch™ G8124E Top-of-Rack Network Switch

The IBM RackSwitch G8124E Network Switch (Figure 4) is a 1/10 Gigabit Ethernet switch specifically designed for the data center, providing a virtualized and easier network solution. The G8124E offers twenty-four 1/10 Gigabit Ethernet ports in a high-density, 1U footprint. Designed with top performance in mind, the RackSwitch G8124E provides line-rate, high-bandwidth switching, filtering, and traffic queuing without delaying data, along with large data-center-grade buffers to keep traffic moving.

The G8124E is virtualized―providing rack-level virtualization of networking interfaces. VMready software enables movement of virtual machines—providing matching movement of VLAN assignments, ACLs and other networking and security settings. VMready works with all leading VM providers, including Microsoft Hyper-V. The G8124E also supports Virtual Fabric, which allows for the carving up of a physical NIC into 2 - 8 virtual NICs (vNICs) and creates a virtual pipe between the adapter and the switch (using the IBM Networking® OS) for improved performance, availability and security, while reducing cost and complexity.

The G8124E is easier―with server-oriented provisioning via point-and-click management interfaces. Its industry standard CLI, along with seamless interoperability, simplifies configuration for those familiar with Cisco environments.

Figure 4 IBM RackSwitch G8124E Top-of-Rack Network Switch

IBM System Storage™ SAN48B-5 Fibre Channel Switch

The IBM System Storage SAN48B-5 Fibre Channel Switch (Figure 5) is designed to meet the demands of hyper-scale private or hybrid cloud storage environments by delivering 16 Gbps Fibre Channel technology and capabilities that support highly virtualized environments. To enable greater flexibility and investment protection, SAN48B-5 is configurable in 24, 36 or 48 ports and supports 2, 4, 8, 10 or 16 Gbps speeds in an efficiently designed 1U package. This switch—now enhanced with enterprise connectivity options that add support for IBM FICON® connectivity—can provide a highly reliable infrastructure when used with fast, scalable IBM System x® servers.


SAN48B-5 features:

16 Gbps performance with up to 48 ports in an energy-efficient, 1U form factor

Ports on Demand (PoD) capabilities for scaling from 24 to 48 ports in 12-port increments

2, 4, 8, 10 or 16 Gbps speed on all ports producing an aggregate 768 Gbps full-duplex throughput

16 Gbps optimized Inter-Switch Links (ISLs)

128 Gbps high-performance and resilient frame-based trunking

Simplified deployment process and point-and-click user interface

Figure 5 IBM System Storage SAN48B-5 Fibre Switch

Microsoft Windows Server 2012 R2

Windows Server 2012 R2 with Hyper-V provides the enterprise with a scalable and highly elastic platform for virtualization environments. With support for today’s largest servers (up to 4 TB of RAM, 320 logical processors, and sixty-four nodes per cluster), combined with key features such as high-availability clustering, simultaneous Live Migration, in-box network teaming, and improved Quality of Service (QoS), IT organizations can simplify the resource pools used to support their cloud environments. Hyper-V, under Windows Server 2012 R2, also leverages the operating system’s ability to better utilize resources presented to virtual machines by offering up to 64 virtual CPUs, 1 TB of RAM, and virtual HBA (vHBA) support.

Deployment diagram

Figure 6 shows the hardware as it would be deployed in the data center. As illustrated, the rack includes two IBM System x3850 X6 servers, one IBM Storwize V7000 dual-controller enclosure, two IBM RackSwitch G8124E 10GbE network switches, and two IBM System Storage SAN48B-5 fibre channel switches.


Figure 6 Deployment diagram (front and rear views of a 25U rack) showing the actual hardware deployed: two IBM System Storage SAN48B-5 fibre switches, two IBM RackSwitch G8124E 10GbE network switches, two IBM System x3850 X6 servers, and one IBM Storwize V7000.

Best Practice and Implementation Guidelines

The success of a Microsoft Hyper-V deployment and its ongoing operation can be significantly attributed to a set of test-proven planning and deployment techniques. Proper planning includes sizing the server resources (CPU and memory), storage (capacity and IOPS), and networking (bandwidth and VLAN assignment) needed to support the infrastructure. This information can then be implemented using industry-standard best practices to achieve optimal performance and the growth headroom necessary for the life of the solution.

Configuration best practices and implementation guidelines for the Hyper-V Virtualization Reference Architecture, which aid in planning and configuration of the solution, are shared in the remaining sections below. Categorically, they are broken down into the following topics:

Racking and Power Distribution

IBM System x3850 X6 Setup

Network Configuration

Storage Design

Storage Configuration

Failover Cluster Creation

Microsoft Hyper-V Configuration


Racking and Power Distribution

The installation of power distribution units (PDUs) and their cabling should be performed before any system is racked. When cabling the PDUs, keep the following in mind:

Ensure sufficient, separate electrical circuits and receptacles to support the required PDUs.

To minimize the chance of a single electrical circuit failure taking down a device, ensure there are sufficient PDUs to feed redundant power supplies using separate electrical circuits.

For devices that have redundant power supplies, plan for individual electrical cords from separate PDUs.

Maintain appropriate shielding and surge suppression practices, as well as employ appropriate battery back-up techniques.

IBM System x3850 X6 Setup

The failover cluster consists of two IBM System x3850 X6 servers, each with the following configuration:

256GB RAM

Four Intel Xeon E7-4800 v2 processors

One 2-port Broadcom NetXtreme II 10GbE*

One 2-port Emulex LPe12002-M8 fibre host bus adapter*

*Only one HBA adapter and one network adapter are shown for simplicity. Additional adapters may be used depending on client requirements for redundancy and throughput.

Setup involves the installation and configuration of Windows Server 2012 R2 Datacenter edition, networking, and storage on each server.

Note: Windows Server 2012 R2 Datacenter edition allows unlimited Windows virtual machine rights on the host servers and is the preferred version for building private cloud configurations. Windows Server 2012 R2 Standard edition now supports clustering as well, but only provides licensing rights for up to two Windows virtual machines (additional licenses would be needed for additional virtual machines). Windows Server 2012 R2 Standard edition is intended for physical servers that are hosting very few or no virtual machines.

Pre-OS Installation

Confirm the 2-port Broadcom NetXtreme network adapter device is installed in each host server

Confirm the 2-port Emulex LPe12002-M8 fibre host bus adapter is installed in each host server

Validate firmware levels are consistent across both servers

Verify two local disks are configured as a RAID 1 array

OS Installation and Configuration

Install Windows Server 2012 R2 Datacenter edition

Set your server name, and join the domain

Install the Hyper-V role and the Failover Clustering feature (a PowerShell sketch for this step is shown at the end of this section)

Run Windows Update to ensure any new patches are installed

Note: All the servers in a failover cluster should have the same software updates (patches) and service packs.

Download and install the latest network adapter and HBA drivers from IBM Fix Central: http://www.ibm.com/support


Install the Emulex OneCommand Manager utility to provide additional insight to the storage infrastructure. A link to this tool is listed below: http://www.emulex.com/downloads/emulex/drivers/windows/windows-server-2012-r2/management/

Multipath I/O is used to provide balanced and fault tolerant paths to the Storwize V7000. Installation of multipath features is covered in section Enabling Multipathing.
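As a convenience, the role and feature installation step above can also be scripted. The following is a minimal PowerShell sketch using the in-box Install-WindowsFeature cmdlet; it is not part of the original validated procedure, and the -Restart switch reboots the host to complete the Hyper-V installation.

# Install the Hyper-V role and the Failover Clustering feature with their management tools
PS> Install-WindowsFeature -Name Hyper-V, Failover-Clustering -IncludeManagementTools -Restart

Running Get-WindowsFeature Hyper-V, Failover-Clustering afterwards should show both items in the Installed state on each host.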

Network Configuration

This section describes the networking topology and includes instructions to correctly configure the network environment.

Key Networking Concepts and Terminology

This section covers basic networking concepts and terminology that will be used throughout the following sections.

Inter-Switch Link (ISL) – a physical network connection from a physical network port on switch 1 to a physical network port on switch 2 that enables communication between the switches. This reference architecture uses two physical connections between the two networking switches.

Trunk Group – creates a virtual link between two switches which operates with aggregated throughput of the physical ports used. The switch used in this reference architecture supports two trunk types: static trunk groups and dynamic LACP trunk groups. This reference architecture uses dynamic trunk groups. Figure 7 illustrates a dynamic trunk group aggregating two ports from each switch to form an ISL.

Figure 7 A dynamic trunk group aggregating two ISL connections between two switches

Link Aggregation Control Protocol (LACP) - an IEEE 802.3ad standard for grouping several physical ports into one logical port (known as a dynamic trunk group) with any device that supports the standard. The 802.3ad standard allows standard Ethernet links to form a single Layer 2 link. Link aggregation groups physical link segments of the same media type and speed, operating in full duplex, and treats them as a single logical link segment. If a link in an LACP trunk group fails, traffic is reassigned dynamically to the remaining link(s) of the dynamic trunk group.

Virtual Link Aggregation Group (VLAG) - as shown in Figure 8, a switch or server in the access layer may be connected to more than one switch in the aggregation layer to provide for network redundancy. Typically, Spanning Tree Protocol (STP) is used to prevent broadcast loops, blocking redundant uplink paths. This has the unwanted consequence of reducing the available bandwidth between the layers by as much as 50%. In addition, STP may be slow to resolve topology changes that occur during a link failure, and can result in considerable MAC address flooding. Using VLAGs, the redundant uplinks remain active, utilizing all available bandwidth. To maintain maximum bandwidth over the multiple connections, VLAG is enabled on the LACP teams in this reference architecture.


Figure 8 STP blocking implicit loops

Virtual LAN (VLAN) - a way to logically segment networks to increase network flexibility without changing the physical network topology. With network segmentation, each switch port connects to a segment that is a single broadcast domain. When a switch port is configured to be a member of a VLAN, it is added to a group of ports that belong to one broadcast domain. Each VLAN is identified by a VLAN identifier (VID). A VID is a 12-bit portion of the VLAN tag in the frame header that identifies an explicit VLAN.

Tagged Port – a port that has been configured as a tagged member of a specific VLAN. When an untagged frame exits the switch through a tagged member port, the frame header is modified to include the 32-bit tag associated with the PVID. When a tagged frame exits the switch through a tagged member port, the frame header remains unchanged (original VID remains).

Untagged Port – a port that has been configured as an untagged member of a specific VLAN. When an untagged frame exits the switch through an untagged member port, the frame header remains unchanged. When a tagged frame exits the switch through an untagged member port, the tag is stripped and the tagged frame is changed to an untagged frame.

IBM RackSwitch G8124E Top-of-Rack Network Switches

This reference architecture uses two IBM RackSwitch G8124E top-of-rack network switches containing up to twenty-four 10GbE ports (or 1GbE ports if 1GbE SFP+s are used) each. The G8124E provides primary data communication services. Redundancy across the switches is achieved by creating an LACP team over two inter-switch links (ISL) and enabling Virtual Link Aggregation (VLAG) between the two switches.

x3850 X6 Hyper-V Host Server Network Ports

Each Hyper-V host server has a single 2-port Broadcom NetXtreme II 10GbE network adapter that will be used for networking. Each host will maintain one 10Gbps connection to each of the two IBM RackSwitch G8124E network switches used in this reference architecture. Windows Server 2012 R2 NIC teaming is used for all networks to provide fault tolerance, and spread the workload across the network interfaces.


A visual representation of the connections between the network adapters and the switches is shown in Figure 9.

Figure 9 Network connections between host servers and network switches: each x3850 X6 host server has one 2-port 10GbE network adapter (only one adapter per server is shown for simplicity), with one port cabled to each G8124E network switch; host server 1 connects to port 3 on each switch and host server 2 connects to port 4 on each switch.

Network Topology Design

Four isolated networks are required to support this reference architecture: a cluster private network (heartbeat), a cluster public network used to connect to the domain, a network for virtual machine Live Migration, and a network for virtual machine communication.

A combination of physical and virtual isolated networks is configured at the host and the switch layers to satisfy isolation best practices.

At the physical host layer, each server is installed with a 2-port Broadcom NetXtreme II 10GbE network adapter. At the physical switch layer, there are two redundant IBM RackSwitch G8124E top-of-rack network switches with 24x 10GbE SFP+ ports.

On the data network side, Windows Server 2012 R2 NIC teaming is used to provide fault tolerance and load balancing for all the communication networks (cluster private, cluster public, live migration, and virtual machine communication). This setup allows the most efficient use of network resources with a highly optimized configuration for network connectivity.

At the physical switch layer, VLANs are used to provide logical isolation between the various networks. A key element is properly configuring the switches to maximize available bandwidth and reduce congestion. However, based on individual environment preferences, there is flexibility regarding how many VLANs are created and what type of role-based traffic they handle. Once a final selection is made, ensure the switch configurations are saved and backed up.


Switch ports used for cluster private, cluster public, live migration, and VM communication should be configured as “tagged”, and the VLAN definitions specified on the respective ports for each switch. The previously mentioned networks will require VLAN assignments to be made in Windows Server.

To enable communication between the two G8124E switches, an inter-switch link (ISL) is required. However, because a single ISL would be limited in bandwidth to the 10Gbps of a single connection and would not be redundant, two ISL connections are recommended. Create two ISL connections by physically cabling port 1 of switch 1 to port 1 of switch 2 and port 2 of switch 1 to port 2 of switch 2 with 10GbE networking cables.

Link Aggregation Control Protocol (LACP) is used to combine the two ISL connections into a single virtual link, called a trunk group. LACP teams provide for higher bandwidth connections and error correction between LACP team members.

Not only are LACP teams formed on the ISLs between the switches, but LACP teams are also formed on the host connections to the switches providing for host connection redundancy. To maintain maximum bandwidth over the multiple connections, VLAGs are also configured on the LACP teams.

Note: Disabling Spanning Tree on the LACP/VLAG teams helps avoid the wasted bandwidth associated with links blocked by spanning tree.

An illustration of the LACP/VLAG configuration used for this reference architecture is shown below in Figure 10.


Figure 10 LACP/VLAG design: the two G8124E switches are joined by a trunk group with LACP and VLAG over the ISL, and each x3850 X6 server's two connections (one to each switch) form an LACP team with VLAG enabled.

VLAN Definitions

VLANs are used to provide logical isolation between the various data traffic. This reference architecture uses seven VLANs. The VLANs used are described in Table 1 for quick reference and described in more detail below.


Table 1 VLAN definitions

VLAN 70 - Cluster Public Network: used for host management, storage management, the cluster public network, and out-of-band communication to IMM devices.

VLAN 60 - Cluster Private Network: cluster private traffic (heartbeat and cluster shared volume).

VLAN 50 - Management Cluster Live Migration Network: virtual machine Live Migration traffic for the 2-node management cluster.

VLAN 40 - Virtual Machine Communication Network: network adapters assigned to virtual machines will be configured with VLAN 40 for communication.

VLAN 200 - VLAG: used for VLAG communication.

VLAN 100 - VLAG: used for VLAG communication.

VLAN 4094 - Inter-Switch Link (ISL) VLAN: dedicated to the ISL.

All VLAN data traffic is over a single NIC team which is built using the 2-port Broadcom NetXtreme II 10GbE network adapter installed in each of the Hyper-V host servers. The NIC team is created using the Windows Server 2012 R2 in-box NIC teaming feature, which provides fault tolerance and load balancing for the networks. The VLANs described in this section will be sharing the bandwidth of the single NIC team. Therefore, Quality of Service (QoS) will be applied from Windows to ensure each VLAN has available bandwidth.

Three virtual network adapters will be created from Windows on each Hyper-V host server and assigned VLANs 70, 60, and 50 using Windows PowerShell. These adapters will be used for host management related data traffic (cluster public, cluster private, and Live Migration). In addition to the virtual network adapters used by the Hyper-V host servers, each virtual machine hosted by the Hyper-V host servers will be assigned one or more virtual network adapters from within Hyper-V. The virtual machine’s network adapters will be assigned VLAN 40 (virtual machine communication) to isolate virtual machine data traffic from the other networks. Switch ports should be configured to appropriately limit the scope of each of these VLANs. This will require the appropriate switch ports (see Table 2) for each Hyper-V host server to be set to tagged, and the VLAN definitions should include these ports for each switch.

Cluster Public Network (VLAN 70)

This network supports communication for the host management, storage management, cluster public network, and out-of-band communication with the servers’ IMM interface.


Cluster Private and CSV Networks (VLAN 60)

This network is reserved for cluster private (heartbeat and cluster shared volume) communication between clustered Hyper-V host servers. There should be no IP routing or default gateways for cluster private networks.

Live Migration Network (VLAN 50)

A separate VLAN should be created to support live migration for the management cluster. There should be no routing on the live migration VLAN.

Virtual Machine Communication Network (VLAN 40)

This network supports virtual machine data traffic. Quality of Service (QoS) will be applied from Windows for the virtual machine communication.

Note: If additional segregation between networks used by the virtual machines is required, then the switch ports can have additional VLAN IDs assigned. Each virtual machine can then be assigned the necessary VLAN ID as part of its network settings under Hyper-V Manager.

VLAG Communication Network (VLANs 100 and 200)

These networks support communication between the host and switch ports configured as VLAG/LACP teams.

Trunk Group Network (VLAN 4094)

A dedicated VLAN to support the inter-switch link trunk group between the two switches should be implemented. The spanning tree protocol should be disabled on the trunk group VLAN.

Figure 11 illustrates the VLANs which are defined on the virtual network adapters that communicate over the NIC team.


Figure 11 VLANs used for the 2-node Hyper-V Failover Cluster

Network Switch Port Definitions

By default, the G8124E top-of-rack network switch ports are configured as untagged ports. To support traffic from the multiple VLANs that will be operating over the host ports and trunk group ports, the ports need to be configured as tagged ports and VLAN IDs assigned. The default VLAN ID will remain as PVID equal to ‘1’.

Table 2 describes the roles each of the switch ports provides for the two G8124E switches in the configuration.

The G8124E switches provide fault-tolerant data connectivity. Fault-tolerant NIC teams and LACP teams across the switches provide redundant communication paths for both the servers and virtual machines.


Table 2 G8124E switch port roles

Port 1 - Switch 1: Inter-switch link (connected to port 1 on switch 2). Switch 2: Inter-switch link (connected to port 1 on switch 1).

Port 2 - Switch 1: Inter-switch link (connected to port 2 on switch 2). Switch 2: Inter-switch link (connected to port 2 on switch 1).

Port 3 - Switch 1: Server 1, Network Adapter Port 1. Switch 2: Server 1, Network Adapter Port 2. Carries Cluster Public, Cluster Private, VM Live Migration, and Virtual Machine Communication traffic for Hyper-V Host Server 1.

Port 4 - Switch 1: Server 2, Network Adapter Port 1. Switch 2: Server 2, Network Adapter Port 2. Carries Cluster Public, Cluster Private, VM Live Migration, and Virtual Machine Communication traffic for Hyper-V Host Server 2.

Table 3 describes the VLAN configuration of the ports for each of the two G8124E switches used in this reference architecture.

Table 3 G8124E switch port VLAN assignments

Port  Tagging  PVID  Switch 1 VLANs        Switch 2 VLANs
1     Yes      4094  70, 60, 50, 40, 4094  70, 60, 50, 40, 4094
2     Yes      4094  70, 60, 50, 40, 4094  70, 60, 50, 40, 4094
3     Yes      100   70, 60, 50, 40, 100   70, 60, 50, 40, 100
4     Yes      200   70, 60, 50, 40, 200   70, 60, 50, 40, 200

Ports are configured as untagged by default. To enable multi-VLAN traffic, all ports will be set to tagged in this configuration. The commands used to enable tagging on the G8124E switch ports are shown in the next section.

In addition to the assigned VLAN IDs and PVID, each LACP team will have its own unique Port Admin Key (VLAG ID) with each port that is a member of that team being set to this unique value.

Spanning Tree Protocol is disabled on the VLAG communication VLAN over the ISL. The commands to disable Spanning Tree Protocol are listed in the next section.

Figure 12 illustrates the concept of enabling VLAGs on LACP teams to allow active-active use of both connections.


Figure 12 LACP/VLAG configuration: the trunk group (ISL) LACP team with VLAG uses ports 1 and 2 on each switch with LACP/VLAG admin key 100, PVID 4094, and VLAG VLAN 4094; the LACP team for x3850 X6 server 1 (connected to port 3 on switch 1 and port 3 on switch 2) uses admin key 101, PVID 100, and VLAG VLAN 100; the LACP team for x3850 X6 server 2 (connected to port 4 on switch 1 and port 4 on switch 2) uses admin key 102, PVID 200, and VLAG VLAN 200.

Table 4 describes the VLAG/LACP configurations for both switch 1 and 2.

Table 4 LACP/VLAG configurations

VLAG                          Switch 1     Switch 2     LACP/VLAG Admin Key  VLAG Communication VLAN
Server 1 LACP Team            Port 3       Port 3       101                  100
Server 2 LACP Team            Port 4       Port 4       102                  200
Trunk Group (ISL) LACP Team   Ports 1 & 2  Ports 1 & 2  100                  4094

Using ISCLI to configure the G8124E switch

Below are some examples of the ISCLI commands used to set up and configure the switches for the ISL, VLAGs, ports, and VLANs.

Important: The following commands are for use with switches running IBM Networking OS 7.2 software. If your switch has a different version, please refer to the applicable Application Guide for the correct commands.

To access the ISCLI, refer to the RackSwitch G8124E Command Reference manual.


Accessing Global Configuration Mode

To affect changes to the running configuration on the switch, you must access Global Configuration Mode. Global Configuration Mode is accessed from Privileged EXEC mode, which in turn is accessed from User Exec Mode.

1. After entering the appropriate password to enter User Exec Mode, run the following command to access Privileged EXEC Mode:

RS G8124> enable

2. To access Global Configuration Mode from Privileged EXEC Mode, enter the following command:

RS G8124# configure terminal

Note: The G8124E switch supports abbreviated commands. For example, conf t can be used in place of configure terminal.

The commands shown in the remainder of this section are run from Global Configuration Mode.

Configure the ISL and VLAG Peer Relationship

This section covers the creation of an inter-switch link (ISL) between switch 1 and 2, allowing traffic to flow between the two switches as if they are one logical entity. VLAG is then enabled to utilize the full bandwidth of the two ports being used in the ISL.

Important: The commands in this section are meant to be run in the order shown.

1. If Spanning-Tree is desired on the switch, use PVRST or MSTP mode only. To configure STP to run in PVRST mode run the following command on each switch before continuing:

RS G8124E(config)# spanning-tree mode pvrst

2. Enable VLAG globally by running the following command on each switch before continuing:

RS G8124-E(config)# vlag enable

3. Place the ISL into a dedicated VLAN (VLAN 4094 is recommended) by running these commands on each switch before continuing:

RS G8124-E(config)# vlan 4094
RS G8124-E(config-vlan)# enable
RS G8124-E(config-vlan)# member 1-2
RS G8124-E(config-vlan)# exit

4. Configure the ISL ports on each switch and place them into a port trunk group by running the following commands on each switch before continuing:

RS G8124-E(config)# interface port 1-2
RS G8124-E(config-if)# tagging
RS G8124-E(config-if)# lacp mode active
RS G8124-E(config-if)# lacp key 100
RS G8124-E(config-if)# exit

5. Configure the VLAG Tier ID. This is used to identify the VLAG switch in a multi-tier environment:

RS G8124E(config)# vlag tier-id 10


6. If spanning-tree is used on the switch, turn spanning-tree off for the ISL by running the following commands on each switch before continuing:

RS G8124-E(config)# spanning-tree stp 20 vlan 4094
RS G8124-E(config)# no spanning-tree stp 20 enable

Note: Although Spanning Tree is disabled on the ISL, Spanning Tree should be enabled on all switches on other networks according to your organization’s requirements.

7. Define VLAG peer relationship by running the following commands on each switch before continuing:

RS G8124-E(config)# vlag isl vlan 4094
RS G8124-E(config)# vlag isl adminkey 100
RS G8124-E(config)# exit

8. Save the configuration changes by running the following command on each switch:

RS G8124-E# write

Enable Tagging on Host Ports

Before moving forward with VLAN creation, tagging should be enabled on both host ports (ports 3 and 4 in our test environment) on each switch.

Important: The commands in this section are meant to be run in the order shown and after the commands in previous sections have been run.

1. Run the following commands from the CLI on each switch:

RS G8124-E(config)# interface port 3-4
RS G8124-E(config-if)# tagging
RS G8124-E(config-if)# exit

Configure the VLANs

Before the VLAGs are formed on the host ports, the VLANs must be defined and assigned.

Important: The commands in this section are meant to be run in the order shown and after the commands in previous sections have been run.

1. From the ISCLI of Switch 1:

RS G8124-E(config)# vlan 70
RS G8124-E(config-vlan)# enable
RS G8124-E(config-vlan)# member 1-4
RS G8124-E(config-vlan)# exit
RS G8124-E(config)# vlan 60
RS G8124-E(config-vlan)# enable
RS G8124-E(config-vlan)# member 1-4
RS G8124-E(config-vlan)# exit
RS G8124-E(config)# vlan 50
RS G8124-E(config-vlan)# enable
RS G8124-E(config-vlan)# member 1-4
RS G8124-E(config-vlan)# exit


RS G8124-E(config)# vlan 40
RS G8124-E(config-vlan)# enable
RS G8124-E(config-vlan)# member 1-4
RS G8124-E(config-vlan)# exit

2. To verify the VLANs have been configured correctly, run the following command:

RS G8124-E# show vlan

Note: The command above is run from Privileged EXEC mode rather than Global Configuration mode.

Figure 13 The results of the show vlan command

3. Write the running configuration to the startup configuration by running the following command:

RS G8124-E# write

Note: The command above is run from Privileged EXEC mode rather than Global Configuration mode.

Configure LACP teams and enable VLAG on the host connections

This section describes the steps required to create LACP teams on the two connections from each host to each of the switches. Once the LACP teams have been created, VLAG is enabled on the team to provide active-active usage of the connections.

Important: The commands in this section are meant to be run in the order shown and after the commands in previous sections have been run.

1. Configure the LACP team for host server 1. Run the following commands on both switches before continuing. Note: Comments are shown in parentheses after the commands for informational purposes only; they are not typed at the command line.

RS G8124(config)# vlan 100   (Create VLAN 100 for LACP communication)
RS G8124(config-vlan)# enable
RS G8124(config-vlan)# member 1-2, 3   (Adds the ISL ports and the host port to the VLAN)
RS G8124(config-vlan)# exit
RS G8124(config)# interface port 3   (Select the host port for host server 1)
RS G8124(config-if)# lacp mode active   (Activates LACP for the host port)
RS G8124(config-if)# lacp key 101   (Assigns a unique admin key for the LACP team)
RS G8124(config-if)# exit

2. Enable VLAG for the LACP team on each switch. This allows the LACP teams to be formed across the two G8124E switches. Run the following command on both switches before continuing.


RS G8124-E(config)# vlag adminkey 101 enable
RS G8124-E(config)# exit

3. Configure the LACP team for host server 2. Run the following commands on both switches before continuing.

Note: Comments are shown in parentheses after the commands for informational purposes only; they are not typed at the command line.

RS G8124(config)# vlan 200   (Create VLAN 200 for LACP communication)
RS G8124(config-vlan)# enable
RS G8124(config-vlan)# member 1-2, 4   (Adds the ISL ports and the host port to the VLAN)
RS G8124(config-vlan)# exit
RS G8124(config)# interface port 4   (Select the host port for host server 2)
RS G8124(config-if)# lacp mode active   (Activates LACP for the host port)
RS G8124(config-if)# lacp key 102   (Assigns a unique admin key for the LACP team)
RS G8124(config-if)# exit

4. Enable VLAG for the LACP team on each switch. This allows LACP teams to be formed across the two G8124E switches. Run the following command on both switches before continuing.

RS G8124-E(config)# vlag adminkey 102 enable

The LACP teams have been created and VLAG has been enabled. However, the VLAGs will show as offline in the ISCLI of each of the switches until the NIC team has been formed for each host within Windows Server 2012 R2.

5. Write the running configuration to the startup configuration by running the following command:

RS G8124-E(config)# write

Recommended: Backup the switch configuration

IBM recommends backing up the switch configuration before continuing.

1. Back up the switch configuration to a TFTP server by running the following commands. Note: This document does not cover the configuration or use of a TFTP server.

RS G8124-E# copy running-config tftp mgt-port
Enter the IP address of the TFTP Server: xx.xx.xx.yy
Enter the filename: SW1-BackupConfig.cfg

Host Network Configuration

IBM recommends renaming the network interface ports in Windows to better document the network topology. In the example shown in Figure 14, the ports are renamed to identify the ports as 10Gb Ethernet (10gbE), assign each port an identifier (PT1 = Port 1), and document the switch and port to which it is connected (SW1-3 = Switch 1: Port 3).
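As an illustrative sketch only (the default adapter names, shown here as "Ethernet 3" and "Ethernet 4", are hypothetical and will differ on your servers), the in-box Rename-NetAdapter cmdlet can apply this naming scheme from PowerShell:

# Rename the physical 10GbE ports to document the switch and port each one connects to
PS> Rename-NetAdapter -Name "Ethernet 3" -NewName "10gbE-PT1-SW1-3"
PS> Rename-NetAdapter -Name "Ethernet 4" -NewName "10gbE-PT2-SW2-3"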


Figure 14 Renamed 10GbE network ports in Windows Server 2012 R2

Once the network adapter ports have been renamed, the NIC team, virtual switch, and virtual network adapters can be created using Windows PowerShell commands.

Run the following commands from each of the Hyper-V hosts:

1. Run the following command from PowerShell to form a NIC team named “ClusterTeam” using the two 10GbE network ports:

Note: When running this command from the second Hyper-V host server, the adapter names used in the command must be changed to match the name of the adapter ports in the server. For example, this reference architecture uses “10gbE-PT1-SW1-4” and “10gbE-PT2-SW2-4” as the adapter port names on Hyper-V host server 2.

PS> New-NetLbfoTeam -name "ClusterTeam" -TeamMembers "10gbE-PT1-SW1-3", "10gbE-PT2-SW2-3" -TeamingMode LACP
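As an optional verification step (not part of the original procedure), the in-box teaming cmdlet can confirm that the team formed and is up; once the team is active on both hosts, the switch-side VLAGs should also come online, as noted in the switch configuration section.

# Confirm the team exists, its teaming mode is LACP, and its status is Up
PS> Get-NetLbfoTeam -Name "ClusterTeam"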

2. Run the following command to create a virtual switch, named “ClusterSwitch”, which uses the newly formed NIC team:

PS> New-VMSwitch -Name "ClusterSwitch" -NetAdapterName "ClusterTeam" -MinimumBandwidthMode Weight -AllowManagementOS $true

3. When a virtual switch is created, a virtual network adapter is automatically created with the same name as the new virtual switch (in this case the name is “ClusterSwitch”). To better document the role of the network connection IBM recommends changing the name of the automatically-created network adapter to something more meaningful. To change the name to “ClusterPublicAdapter” run the following command:

PS> Rename-VMNetworkAdapter -ManagementOS -Name "ClusterSwitch" -NewName "ClusterPublicAdapter"

4. One virtual network adapter will not fulfill the networking requirements of the reference architecture; therefore, more virtual network adapters must be created. Run the following command to create two additional virtual network adapters (“ClusterPrivateAdapter” and “LiveMigrationAdapter”):

PS> Add-VMNetworkAdapter -ManagementOS -Name "ClusterPrivateAdapter" -SwitchName "ClusterSwitch"
PS> Add-VMNetworkAdapter -ManagementOS -Name "LiveMigrationAdapter" -SwitchName "ClusterSwitch"

5. Because the various virtual network adapters will communicate over a single 10GbE pipe, Quality of Service must be configured to ensure each network has available bandwidth. The commands below will guarantee the “ClusterPublicAdapter” virtual network adapter and the “ClusterPrivateAdapter” virtual network adapter each have a minimum bandwidth of 10% of the total bandwidth available. The “LiveMigrationAdapter” virtual network adapter may require more bandwidth to migrate virtual machines; therefore it is assigned a minimum bandwidth of 20%.

PS> Set-VMNetworkAdapter -ManagementOS -Name "ClusterPublicAdapter" -MinimumBandwidthWeight 10
PS> Set-VMNetworkAdapter -ManagementOS -Name "ClusterPrivateAdapter" -MinimumBandwidthWeight 10
PS> Set-VMNetworkAdapter -ManagementOS -Name "LiveMigrationAdapter" -MinimumBandwidthWeight 20

6. To logically isolate the various networks, run the following commands to assign VLANs to the virtual network adapters:

Note: The VLAN ID for VLAN 40 will be assigned to each virtual machine’s network adapter from within the Hyper-V Management UI rather than through PowerShell.

PS> Set-VMNetworkAdapterVLAN -ManagementOS -VMNetworkAdapterName "ClusterPublicAdapter" -Access -VlanId 70
PS> Set-VMNetworkAdapterVLAN -ManagementOS -VMNetworkAdapterName "ClusterPrivateAdapter" -Access -VlanId 60
PS> Set-VMNetworkAdapterVLAN -ManagementOS -VMNetworkAdapterName "LiveMigrationAdapter" -Access -VlanId 50

7. When configuring the IP addresses on the virtual network adapters, it is important the ClusterPublicAdapter’s IP address be on the same subnet as the domain controller and DNS server in your environment. Also, to prevent a warning during Windows failover cluster validation, a gateway IP address should be configured on the ClusterPublicAdapter.

To configure the network IP address on the virtual network adapters, run the following commands:

Note: When running these commands on the second host server, increment the last octet by 1 (e.g. rather than 192.168.70.71, use 192.168.70.72 on the second host server).

PS> New-NetIPAddress -InterfaceAlias "vEthernet (ClusterPublicAdapter)" -IPAddress 192.168.70.71 -PrefixLength 24
PS> New-NetIPAddress -InterfaceAlias "vEthernet (ClusterPrivateAdapter)" -IPAddress 192.168.60.61 -PrefixLength 24
PS> New-NetIPAddress -InterfaceAlias "vEthernet (LiveMigrationAdapter)" -IPAddress 192.168.50.51 -PrefixLength 24

Note: IBM recommends implementing an IP naming scheme similar to what is used here to better document the network topology. In this example, the third octet of the IP address designates the VLAN (70 for VLAN 70, 60 for VLAN 60 etc.), while the fourth octet designates the server (71 for server 1, 72 for server 2, etc.).

8. Finally, set the DNS server on the “ClusterPublicAdapter” by running the following command:

Note: The DNS server should be on the same subnet as the “ClusterPublicAdapter” virtual network adapter.

PS> Set-DnsClientServerAddress -Interfacealias "vEthernet (ClusterPublicAdapter)" -ServerAddress <IP Address of your DNS server>

9. Set the network binding order so the cluster public network (ClusterPublicAdapter - VLAN 70) is at the top. This can be accomplished from the Network Connections window in the host server’s operating system. Press Alt to make the command ribbon visible and click Advanced. Click Advanced Settings to open the Advanced Settings window. The available connections are listed under Connections. Use the arrows to move ClusterPublicAdapter to the top of the list. Click OK. Do this on each host server.

Before continuing, test the network implementation thoroughly to ensure communication is not lost despite the loss of a network switch or connection.
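As a starting point for that testing, the teaming, switch, and VLAN configuration can be checked from PowerShell on each host. This is a minimal verification sketch; the 192.168.70.72 address is simply the example address assigned to host server 2 earlier in this document:

PS> Get-NetLbfoTeam -Name "ClusterTeam"
PS> Get-VMSwitch -Name "ClusterSwitch"
PS> Get-VMNetworkAdapterVlan -ManagementOS
PS> Test-Connection 192.168.70.72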

Storage Design

This section describes the storage topology and includes instructions to correctly configure the storage environment.

Key Storage Concepts and Terminology

This section covers basic concepts and terminology that will be used throughout the following sections.

Managed Disk (MDisk) - refers to the unit of storage that IBM Storwize V7000 virtualizes. This unit could be a logical volume on an external storage array presented to the IBM Storwize V7000 or a RAID array consisting of internal drives. The IBM Storwize V7000 can then allocate these MDisks into various storage pools. An MDisk is not visible to a host system on the storage area network, as it is internal or only zoned to the IBM Storwize V7000 system.

Storage Pool – a collection of MDisks that are grouped together to provide capacity for volumes. All MDisks in the pool are split into extents with the same size. Volumes are then allocated out of the storage pool and are mapped to a host system.

World Wide Name (WWN) - 64-bit identifiers for devices or ports. All devices with multiple ports have WWNs for each port, which provides more detailed management. Because of their length, WWNs are expressed in hexadecimal numbers, similarly to MAC addresses on network adapters.

Zoning - lets you isolate a single server to a group of storage devices or a single storage device, or associate a grouping of multiple servers with one or more storage devices, as might be needed in a server cluster deployment. Zoning is implemented at the hardware level (by using the capabilities of Fibre Channel switches) and can usually be done either on a port basis (hardware zoning) or on a WWN basis (software zoning). Zoning is configured on a per-target and initiator basis.

Cluster Shared Volume (CSV) - Microsoft Windows failover clustering supports Cluster Shared Volumes. A CSV is a logical drive concurrently visible to all cluster nodes, and allows for simultaneous access from each node.

IBM System Storage SAN48B-5 Fibre Channel Switches

This reference architecture uses two IBM System Storage SAN48B-5 Fibre Channel switches containing up to 48 8Gbps Fibre channel ports. The SAN48B-5 provides primary storage communication services.

x3850 X6 Hyper-V Host Server Storage Ports

In this instance, each Hyper-V host server has a single 2-port Emulex LightPulse LPe 12002-M8 8Gbps HBA that will be used for connecting to the storage-area network (SAN). Each host will maintain one 8Gbps connection to each of the two IBM SAN48B-5 fibre channel switches used in this reference architecture. Additional adapters may be configured depending on client requirements for redundancy.

A visual representation of the fibre connections between the HBAs and the switches is shown in Figure 15.

Figure 15 Storage connections between host servers and SAN48B-5 fibre switches (each host server’s 8Gbps HBA connects one port to each of the two 48-port switches; for simplicity only one HBA is shown)

Storage Topology Design

At the physical host layer, a single 2-port Emulex LightPulse LPe 12002-M8 8Gbps HBA is installed on each of the Hyper-V host servers.

At the physical switch layer, two redundant SAN48B-5 fibre switches each with forty-eight 8Gbps Fibre Channel ports provide storage connectivity to the hosts and the storage system.

On the storage I/O side, the IBM Subsystem Device Driver Specific Module (SDDDSM) multi-path driver in conjunction with the Windows Server 2012 R2 Multi-path I/O (MPIO) feature is used to provide fault tolerance and path management between hosts and storage. In the event one or more hardware component fails, causing a path to fail, multi-pathing logic chooses an alternate path for I/O so applications can still access their data. Multi-path driver details and installation are covered in more depth in section, Enabling Multipathing.

At the logical layer, zoning is used to provide isolation between the various data traffic. Zoning will be covered in more depth in section, Storage Zoning.

Figure 16 illustrates the fibre connections and zoning between the servers, switches and the Storwize V7000.

Note: Each set of connections in Figure 16 is defined by a unique color. Each color represents a single zone.

Connections shown in Figure 16:

x3850 X6 Server 1: Port 1 -> Switch 1: Port 0; Port 2 -> Switch 2: Port 0

x3850 X6 Server 2: Port 1 -> Switch 1: Port 1; Port 2 -> Switch 2: Port 1

Storwize V7000 Controller A: Port 1 -> Switch 1: Port 2; Port 2 -> Switch 2: Port 2; Port 3 -> Switch 1: Port 8; Port 4 -> Switch 2: Port 8

Storwize V7000 Controller B: Port 1 -> Switch 1: Port 3; Port 2 -> Switch 2: Port 3; Port 3 -> Switch 1: Port 9; Port 4 -> Switch 2: Port 9

Figure 16 SAN cabling and zoning diagram (zones defined by color)

Storage Configuration

This section covers the configuration of the storage and SAN. The following topics are covered:

Storage Zoning

Configuring the Host Bus Adapters

Enabling Multipathing

Host Definition on the Storwize V7000

Storage Partitioning

Storage Zoning

When creating a failover cluster designed to host virtual machines, IBM recommends using WWPN-based zoning (software zoning) rather than port-based zoning (hardware zoning) on the fibre switches. In port-based zoning, a port is placed into a zone and anything connecting to that port is included in the zone (or zones). In WWPN-based zoning, zones are defined using the WWPNs of the connected interfaces. WWPN-based zoning allows for a virtual SAN to be defined at the virtualization layer which uses the same physical HBA ports the host uses while maintaining isolation between the host and virtual machine’s data traffic.

IBM recommends using single-initiator zoning for path isolation. In single-initiator zoning, zones are created based on a single initiator. This means that each zone will contain a single WWPN, or initiator. Multiple storage array WWPNs can be added to the zone without violating the single initiator rule—storage arrays are the targets.

Zoning can be configured from the SAN48B-5 command-line interface or from the management web interface. This reference architecture focuses on the command-line interface.

More information about managing the SAN48B-5 can be found in the Fabric OS 7.1 Administrator’s Guide: http://www.brocade.com/downloads/documents/product_manuals/B_SAN/FOS_AdminGd_v710.pdf

Note: If each switch is cabled as described in this document, each switch will have the same zoning configuration.

Table 5 lists the port definitions with connected devices, as well as the WWPNs and fibre channel aliases that will be defined for each of the SAN48B-5 fibre switches.

The zones in Table 5 are indicated in the original table with shading; in text form, the zone membership is as follows. On switch 1, zone 1 contains Server1Port1, ControllerAPort1, and ControllerBPort1, while zone 2 contains Server2Port1, ControllerAPort3, and ControllerBPort3. The zones on switch 2 are formed the same way using the aliases in the second section of the table: zone 1 contains Server1Port2, ControllerAPort2, and ControllerBPort2, while zone 2 contains Server2Port2, ControllerAPort4, and ControllerBPort4.

Note: The WWPNs in Table 5 are unique to the test environment used for the creation of this reference architecture. The WWPNs for your environment will be different than those shown here. To obtain the WWPNs, the following command was run for each connected port on each of the fibre switches:

IBM_2498_F48:admin> portloginshow <port number>

Table 5 Port definitions on the SAN48B-5 fibre switches

IBM System Storage SAN48B-5 Fibre Switch 1

Port   Connected To                            Physical WWPN              Alias
0      HBA Port-1 on Hyper-V Host Server 1     10:00:00:00:C9:71:BF:B8    Server1Port1
1      HBA Port-1 on Hyper-V Host Server 2     10:00:00:00:C9:7D:2B:2A    Server2Port1
2      Fibre Port-1 on V7000 Controller A      50:05:07:68:03:04:11:62    ControllerAPort1
3      Fibre Port-1 on V7000 Controller B      50:05:07:68:03:04:11:63    ControllerBPort1
8      Fibre Port-3 on V7000 Controller A      50:05:07:68:03:0C:11:62    ControllerAPort3
9      Fibre Port-3 on V7000 Controller B      50:05:07:68:03:0C:11:63    ControllerBPort3

IBM System Storage SAN48B-5 Fibre Switch 2

Port   Connected To                            Physical WWPN              Alias
0      HBA Port-2 on Hyper-V Host Server 1     10:00:00:00:C9:71:BF:B9    Server1Port2
1      HBA Port-2 on Hyper-V Host Server 2     10:00:00:00:C9:7D:2B:2B    Server2Port2
2      Fibre Port-2 on V7000 Controller A      50:05:07:68:03:08:11:62    ControllerAPort2
3      Fibre Port-2 on V7000 Controller B      50:05:07:68:03:08:11:63    ControllerBPort2
8      Fibre Port-4 on V7000 Controller A      50:05:07:68:03:10:11:62    ControllerAPort4
9      Fibre Port-4 on V7000 Controller B      50:05:07:68:03:10:11:63    ControllerBPort4

To define the zones on the switches, perform the following steps:

1. Log in to one of the switches as admin using a serial cable connection or an SSH connection to the switch’s IP address.

2. Define the aliases for each of the connected devices (using Table 5 as a reference). The WWPNs in your environment will be different than shown in the example below:

IBM_2498_F48:admin> alicreate Server1Port1, "10:00:00:00:C9:71:BF:B8"

IBM_2498_F48:admin> alicreate Server2Port1, "10:00:00:00:C9:7D:2B:2A"

IBM_2498_F48:admin> alicreate ControllerAPort1, "50:05:07:68:03:04:11:62"

IBM_2498_F48:admin> alicreate ControllerBPort1, "50:05:07:68:03:04:11:63"

IBM_2498_F48:admin> alicreate ControllerAPort3, "50:05:07:68:03:0C:11:62"

IBM_2498_F48:admin> alicreate ControllerBPort3, "50:05:07:68:03:0C:11:63"

3. Once you have created the aliases, type the following command to verify they are correct:

IBM_2498_F48:admin> alishow *

4. From the prompt, run the following command to create the first zone (using Table 5 as a reference):

IBM_2498_F48:admin> zonecreate ClusterZone1, "Server1Port1; ControllerAPort1; ControllerBPort1"

5. From the prompt, run the following command to create the second zone (again using Table 5 as a reference):

IBM_2498_F48:admin> zonecreate ClusterZone2, "Server2Port1; ControllerAPort3; ControllerBPort3"

6. Now that the zones have been created, they need to be added to an active configuration. If there is not already an active configuration on the switch, the following command will create a configuration and add the zones:

IBM_2498_F48:admin> cfgcreate ClusterCfg, "ClusterZone1; ClusterZone2"

7. If there is already an active configuration on the switch, then the following command will add the newly created zones to the active configuration:

IBM_2498_F48:admin> cfgadd <name of configuration>, "ClusterZone1; ClusterZone2"

8. Once you have either created a new configuration or added the zones to an existing configuration, the configuration must be enabled. Run the following command to enable the configuration:

IBM_2498_F48:admin> cfgenable ClusterCfg

Or, if you added the zones to an existing configuration then run the following command using the name of the active configuration:

IBM_2498_F48:admin> cfgenable <name of configuration>

9. Save the active configuration by running the following command:

IBM_2498_F48:admin> cfgsave

10. Repeat this process on the second fibre switch to complete the switch zoning.

Configuring the Host Bus Adapters

After you install the Emulex HBA and the driver, you must configure the HBA. IBM recommends using the latest supported IBM driver for the Emulex HBA when connecting to the Storwize V7000.

To check your driver version, open Microsoft Windows Device Manager and expand the Storage Controllers section.

Figure 17 illustrates the current IBM driver using Microsoft Windows Device Manager.

Figure 17 Storport driver for the Emulex LightPulse HBAs

IBM recommends setting the topology to 1 (1=F_Port Fabric) using the Emulex utility OneCommand Manager (default is 0).

Figure 18 shows the location of the topology setting in OneCommand Manager.

Figure 18 Changing the topology of the HBA ports using OneCommand Manager

Modify the topology setting for each of the HBA ports on each of the servers before continuing.

Enabling Multipathing

IBM provides the IBM Subsystem Device Driver Device-Specific Module (SDDDSM) for use with the IBM Storwize V7000. The IBM SDDDSM is a multi-path driver that is required when multiple connections to storage are present.

The SDDDSM works in conjunction with the Microsoft Multipath I/O (MPIO) feature to support dynamic multi-pathing. Dynamic multi-pathing automatically configures and updates available paths to a storage volume. The SDDDSM uses a load-balancing policy to equalize the load across all preferred paths. No user intervention is required, other than the typical new device discovery on a Windows operating system. The SDDDSM depends on the Microsoft MPIO feature; if MPIO is not already installed, the SDDDSM installer will install it.
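Whether MPIO is present, and whether the SDDDSM has claimed the Storwize V7000 paths after installation, can be checked from an elevated prompt. This is a minimal sketch; mpclaim is the in-box Windows MPIO utility and the output will vary with your path count:

PS> Get-WindowsFeature -Name Multipath-IO
PS> mpclaim -s -d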

You can download the SDDDSM using the link provided below: http://www-01.ibm.com/support/docview.wss?uid=ssg1S4000350

Important: When you use SDDDSM for multipathing, it is important to use the latest HBA driver provided by IBM.

For more information about the IBM SDDDSM please visit the following URL: http://pic.dhe.ibm.com/infocenter/svc/ic/index.jsp?topic=%2Fcom.ibm.storage.svc.console.doc%2Fsvc_w2kmpio_21oxvp.html

To install the SDDDSM, download it from the IBM website and extract the zip file onto each of the servers. Run the setup.exe file to install the SDDDSM.

Figure 19 shows the SDDDSM during the install process.

Note: SDDDSM version 2.4.3.4-4 for Windows Server 2012 was used for this reference architecture.

Figure 19 IBM SDDDSM install progress

Host Definition on the Storwize V7000

Before performing the storage partitioning, the hosts should be defined on the Storwize V7000. To do so, first document the WWPNs of the HBA ports installed on each of the host servers.

Open the Emulex OneCommand Manager application and document the WWPN of each port of the HBA installed on each of the host servers as shown in Figure 20.

Figure 20 Finding WWPNs using Emulex OneCommand Manager

Once you have documented the HBA port WWPNs, open the Storwize V7000 web interface and authenticate. The web interface is shown in Figure 21.

Figure 21 IBM Storwize V7000 web UI

To define the hosts follow the steps below:

1. Open the Hosts window by clicking the icon illustrated by a disk-group attached to a server (highlighted in Figure 22) and then click Hosts.

Figure 22 Open the Hosts window on the Storwize V7000

2. Click New Host and then click Fibre Channel Host on the Create Host window.

3. Type a name for the host, then select one of the WWPNs from the dropdown list and click the Add Port to List button. Select the second WWPN associated to the server and click the Add Port to List button.

Figure 23 Entering the WWPNs of the HBA on the Hyper-V host server

4. Click the Create Host button to create the host.

5. Repeat the process for the HBA ports on the second Hyper-V host server.

Storage Partitioning

This reference architecture uses two logical drives which will be accessed simultaneously by both members of the failover cluster. Two logical drives should be created initially: a small 5GB volume for the cluster’s quorum disk and a larger 4TB logical drive that will become the CSV housing the virtual hard disk (VHD) files for the virtual machine system files.

Note: IBM recommends storing VHDs used for virtual machine data files on a separate CSV than the CSV housing the VHD files for the virtual machine system files. This reference architecture does not cover the creation of a second CSV, but the process is the same as the one outlined in this document.

The IBM Storwize V7000 supports a concept called Managed Disks (MDisk). An MDisk is a unit of storage that the Storwize V7000 virtualizes. This unit is typically a RAID group created from the available V7000 disks, but can also be logical volumes from an external 3rd party storage system presented to the V7000. The MDisks can be allocated to various storage pools in Storwize V7000 for different usage or workloads.

A storage pool is a collection of MDisks that are grouped together to provide capacity for the creation of the virtual volumes or LUNs that can be mapped to the hosts. MDisks can be added to a storage pool at any time to increase the capacity and performance.

Once a storage pool has been defined, two logical disks should be created initially; one for the cluster quorum and one for a cluster shared volume.

Note: Disk configuration and performance can be highly workload dependent, and while this disk configuration will fit most end user applications, it is recommended to profile and analyze your specific environment to ensure adequate performance for your needs.

Figure 24 illustrates the storage partitioning required to support the Microsoft Windows Failover Cluster.

Figure 24 Storage partitioning on the Storwize V7000

To partition the storage, follow the steps below:

1. Open the Internal Storage window by clicking the graphic shown in Figure 25 and then clicking Internal Storage from the pop-up menu.

Figure 25 Opening the Internal Storage window

2. On the Internal Storage window, click the Configure Storage button to open the Configure Internal Storage dialogue box.

3. On the Configure Internal Storage dialogue box select the radio button labeled Select a different configuration. Next, select the appropriate drive class from the Drive Class drop-down (the drive class will vary depending on the type of drives you have installed). Next, select the type of RAID to use for the MDisks. In this reference architecture RAID-5 is selected. However, the RAID level for your environment is dependent on the workload or other requirements of your organization. Next select whether to optimize for performance or capacity. Again, this is highly dependent on the workload requirements of your organization. In this reference architecture, capacity is used for drive configuration. Finally, enter the number of drives you want defined as MDisks. In this reference architecture 24 disks were used to create three 8-disk MDisks. See Figure 26. Click Next once you have configured the internal storage.

Figure 26 Configuring internal storage

4. On the next page of the dialogue box, select the radio button labeled Create one or more new pools. Finally, type a name for the new storage pool and click Finish.

Figure 27 Creating a Storage Pool using the MDisks

The next step is to define logical volumes from the Storage Pool that was just created and map the volumes to the hosts that were defined in a previous section. To define the logical volumes and map them to the hosts follow the steps below:

1. Open the Volumes by Pool window by clicking on the graphic shown in Figure 28 and then clicking Volumes by Pool.

Figure 28 Opening the Volumes by Pool window

2. On the Volumes by Pool window, click the New Volume button.

3. When the New Volume window opens, click the volume type you would like to use. This reference architecture uses Generic. Click Generic to continue.

4. Once you click Generic, the window expands with a list of available Storage Pools. Click the Storage Pool you created earlier.

5. Enter the size of the logical volume (in GB) and then type a name. Click the Create and Map to Host button. The example shown in Figure 29 is the 5 GB Quorum logical drive.

Figure 29 Defining the size and name for a logical volume

6. Once the task completes, click the Continue button. 7. On the next window, select a host from the drop-down, to open the Modify Host Mappings window. 8. Click the Apply button to apply the current mapping, then using the Host drop-down, select the

second server.

Figure 30 Modify Host Mappings window

9. Click the Apply button again, to apply the same mapping to the second host. At this point a warning dialogue will appear informing you a logical volume has been mapped to multiple hosts. Click Map All Volumes to continue.

Figure 31 Warning that a logical volume is mapped to multiple hosts

10. Close the Modify Mappings window.

11. Repeat the process to create a second logical volume that is 4TB in size and named Cluster_Shared_Volume.

The newly created logical volumes should now be visible to both of the host servers.

Before continuing, verify the logical volumes are visible in the Disks section of Windows Disk Manager on both the host servers.

Note: A disk rescan may be required.

Figure 32 Windows Disk Manager showing the new Logical Volumes

Before continuing, validate that each Hyper-V host server can see both drives and bring them online.

Note: Only one server can have a disk online at a time, until they have been added to Cluster Shared Volumes.

Once you have verified both servers can bring the drives online, from a single server, bring both drives online, and initialize them as GPT. Create a new volume on each of the drives, using the entire available capacity. Assigning drive letters is not required since the disks will be used for specific clustering roles such as CSV and Quorum. Format the new volumes using NTFS.
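The same preparation can be done with PowerShell from one host. This is a minimal sketch that assumes the two Storwize volumes surface as disk numbers 1 and 2; confirm the actual numbers with Get-Disk before running it:

PS> Get-Disk
PS> Set-Disk -Number 1 -IsOffline $false
PS> Set-Disk -Number 2 -IsOffline $false
PS> Initialize-Disk -Number 1 -PartitionStyle GPT
PS> Initialize-Disk -Number 2 -PartitionStyle GPT
PS> New-Partition -DiskNumber 1 -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel "Quorum"
PS> New-Partition -DiskNumber 2 -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel "ClusterSharedVolume"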

Failover Cluster Creation

Failover clusters provide high availability and scalability to many server workloads. These include server applications such as Microsoft Exchange Server, Hyper-V, Microsoft SQL Server, and file servers. The server applications can run on physical servers or virtual machines. In a failover cluster, if one or more of the clustered servers (nodes) fails, other nodes begin to provide service (a process known as failover). In addition, the clustered roles (formerly called clustered services and applications) are proactively monitored to verify that they are working properly. If they are not working, they restart or move to another node. Failover clusters also provide Cluster Shared Volume (CSV) functionality that provides a consistent, distributed namespace that clustered roles can use to access shared storage from all nodes.

Microsoft Windows Failover Clustering will be used to join the two Hyper-V host servers together in a highly available configuration that will allow both servers to run virtual machines to support a production environment.
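The wizard-driven steps that follow can also be approximated with the FailoverClusters PowerShell module. The sketch below is illustrative only; the node names (HV-HOST1, HV-HOST2), cluster name, and cluster IP address are assumptions and must be replaced with values from your environment. The -NoStorage switch mirrors the later confirmation step, where eligible storage is not added automatically. Install-WindowsFeature must be run on both host servers.

PS> Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
PS> Test-Cluster -Node "HV-HOST1", "HV-HOST2"
PS> New-Cluster -Name "HVCLUSTER01" -Node "HV-HOST1", "HV-HOST2" -StaticAddress 192.168.70.70 -NoStorage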

Important: Virtual machine workloads should be balanced across all hosts and careful attention should be given to ensure that the combined resources of all virtual machines do not exceed those available on N-1 cluster nodes. A policy of monitoring resource utilization such as CPU, Memory, and Disk (space, and I/O) will help keep the cluster running at optimal levels, and allow for proper planning to add additional resources as needed.

To create the Windows Failover Cluster follow the steps outlined below:

1. If the Failover Clustering feature has not been installed on both host servers, install the feature on both host servers using Windows Server Manager.

2. Once the Failover Clustering feature is installed, open the Failover Cluster Manager.

Figure 33 Windows Failover Cluster Manager

3. Validate the server configuration by running the Validate a Configuration Wizard from the Failover Cluster Manager. Since the cluster has not been formed, both servers should be added to the validation as shown in Figure 34. The cluster validation wizard checks for available cluster compatible host servers, validates storage, and validates networking. To ensure a successful test, verify the intended cluster storage is online to only one of the cluster nodes.

Important: Temporarily disable the default IBM USB Remote NDIS Network Device on all cluster nodes since it causes the validation to issue a warning during network detection due to all nodes sharing the same IP address.

Figure 34 Validate a Configuration Wizard

4. Address any issues that are flagged during the validation before continuing.

5. Once the validation passes, leave the checkbox labeled Create the cluster now using the validated nodes… checked and click the Finish button to open the Create Cluster Wizard.

6. On the Create Cluster Wizard, type a name for the cluster and assign an IP address for the cluster to use. Click Next to continue.

Figure 35 Naming the cluster

7. On the Confirmation page, uncheck the checkbox labeled Add all eligible storage to the cluster and click Next.

8. Once the cluster has formed, click the Finish button to exit the wizard and return to the Failover Cluster Manager.

9. Expand the Storage section in the navigation pane and click Add Disk in the Actions pane.

Figure 36 Adding a disk to the cluster

10. Both disks should be listed and checked. Click OK to continue.

Figure 37 Cluster disks

11. IBM recommends renaming the disks within the Failover Cluster Manager for better documentation. Once the disks have been added, you can right-click a disk and click Properties. Type a new name for the disk and click the OK button.

12. Right-click the disk named Cluster Shared Volume (the 4TB disk), and click Add to Cluster Shared Volumes from the context-sensitive menu. Once it completes you will see the 4TB volume available as a Cluster Shared volume as shown in Figure 38.

Figure 38 Cluster shared volumes

13. IBM recommends renaming the cluster networks for better documentation. Expand the Networks section in the Navigation pane and right-click each of the listed networks. Click Properties from context-sensitive menu and rename the network. Figure 39 shows the renamed networks.

Figure 39 Renamed networks
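The disk, CSV, and network-renaming steps above can also be scripted. This is a minimal sketch that assumes the 4TB volume surfaces as the cluster resource "Cluster Disk 2" and that the default cluster network names are "Cluster Network 1" through "Cluster Network 3"; verify the actual names with Get-ClusterResource and Get-ClusterNetwork before renaming anything:

PS> Get-ClusterAvailableDisk | Add-ClusterDisk
PS> Add-ClusterSharedVolume -Name "Cluster Disk 2"
PS> (Get-ClusterNetwork "Cluster Network 1").Name = "Cluster Public Network"
PS> (Get-ClusterNetwork "Cluster Network 2").Name = "Cluster Private Network"
PS> (Get-ClusterNetwork "Cluster Network 3").Name = "Live Migration Network"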

Failover Cluster Manager will automatically select the Quorum Witness volume if the volumes are visible to the hosts upon cluster creation. Verify the Quorum Witness was added correctly by following the steps below:

1. With the cluster selected, expand Storage in the Navigation pane and click Disks.

2. Verify the correct disk is used as the Disk Witness in Quorum and that its status is Online.

If you need to manually assign the quorum disk follow the steps below:

1. With the cluster selected, on the Actions pane, click More Actions, and then click Configure Cluster Quorum Settings. The Configure Cluster Quorum Wizard appears. Click Next.

2. On the Select Quorum Configuration Options page, select the radio button labeled Select the quorum witness. Click Next.

3. On the Select Quorum Witness page, select the radio button labeled Configure a disk witness. Click Next.

4. On the Configure Storage Witness page, check the drive you would like to use for the quorum. Click Next.

Figure 40 Select a disk to use as the quorum

5. Click Next on the Confirmation page. Once it completes click Finish to close the wizard.
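The same assignment can be made with a single PowerShell command. This is a sketch that assumes the quorum disk resource has been renamed to "Quorum" as recommended earlier; substitute the actual cluster disk resource name returned by Get-ClusterResource:

PS> Set-ClusterQuorum -DiskWitness "Quorum"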

The last step is to configure Live Migration:

1. Click Live Migration Settings in the Actions pane to open the Live Migration Settings window.

2. Uncheck all networks besides the 192.168.50.x network (previously renamed to Live Migration Network). Click OK.

Microsoft Hyper-V Configuration

This section covers the configuration for Hyper-V.

Setting default paths to the CSV

Using Hyper-V Manager, set the default paths for VM creation to use the CSV.

Note: Disks in CSV are identified with a path name. Each path appears to be on the system drive of the host server as a numbered volume under the \ClusterStorage folder. This path is the same when viewed from any node in the cluster. You can rename the volumes if needed. For example, the CSV created earlier will appear as C:\ClusterStorage\Volume1 from both servers in the cluster.

To configure Hyper-V to use the CSV follow the steps outlined below:

1. Using Hyper-V Manager, with the Hyper-V server selected, click Hyper-V Settings… from the Actions pane.

2. In the Server section, select the first option, Virtual Hard Disks. Type a new path for the VHD files which points to the CSV. See Figure 41.

Figure 41 Changing the path for the VHD files to the CSV

3. Repeat the process to change the path for the virtual machine configuration settings by selecting Virtual Machines and typing a new path which points to the CSV.
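The same default paths can be set from PowerShell on each host; a minimal sketch using the C:\ClusterStorage\Volume1 path described in the note above:

PS> Set-VMHost -VirtualHardDiskPath "C:\ClusterStorage\Volume1" -VirtualMachinePath "C:\ClusterStorage\Volume1"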

Creating a Highly Available Virtual Machine

Follow the steps below to create a highly available virtual machine:

1. In Failover Cluster Manager, with the cluster selected, click Roles in the Navigation pane.

2. In the Actions pane, click Virtual Machines…, and then click New Virtual Machine. The New Virtual Machine Wizard appears. Select one of the host servers and click OK.

3. Click Next to bypass the Before You Begin page.

4. On the Specify Name and Location page, specify a name for the virtual machine. Verify the location to store the virtual machine files is the path to the cluster shared volume and click Next.

5. On the Specify Generation page, select the appropriate generation dependent on the version of Windows you will install on the virtual machine. Click Next.

6. On the Assign Memory page, specify the amount of memory required for the operating system that will run on this virtual machine. Click Next.

7. On the Configure Networking page, connect the network adapter to the virtual switch that is associated with the physical network adapter (in this reference architecture the switch is called ClusterSwitch). Click Next.

8. On the Connect Virtual Hard Disk page, select Create a virtual hard disk. If you want to change the name, type a new name for the virtual hard disk. Click Next.

9. On the Installation Options page, click Install an operating system from a boot CD/DVD-ROM. Under Media, specify the location of the media, and then click Finish.

The virtual machine is created and the High Availability Wizard in Failover Cluster Manager then automatically configures the virtual machine for high availability.
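A highly available virtual machine can also be created from PowerShell. The sketch below is illustrative only; the virtual machine name, memory size, and VHDX size are example values, and the paths assume the CSV used in this reference architecture:

PS> New-VM -Name "VM01" -Generation 2 -MemoryStartupBytes 4GB -Path "C:\ClusterStorage\Volume1" -NewVHDPath "C:\ClusterStorage\Volume1\VM01\VM01.vhdx" -NewVHDSizeBytes 60GB -SwitchName "ClusterSwitch"
PS> Add-ClusterVirtualMachineRole -VMName "VM01"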

Virtual Machine Configuration

To isolate the virtual machine network traffic, the virtual machine’s network adapter(s) must be on a private VLAN. In this reference architecture, VLAN 40 has been configured for virtual machine network traffic.

Note: If a virtual machine requires access to the domain controller, routing will have to be enabled for VLAN 40.

To isolate the virtual machine’s network, follow the steps below:

1. Once a virtual machine has been created, from within Hyper-V Manager, right-click the virtual machine and click Settings.

2. Select Network Adapter from the Hardware section.

3. Verify ClusterSwitch (or the name you defined) is selected as the virtual switch option.

4. Check the checkbox labeled Enable virtual LAN identification and type 40 as the VLAN identifier. Click OK.
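The same VLAN assignment can be made from PowerShell on the host currently running the virtual machine; a sketch assuming the virtual machine is named "VM01":

PS> Set-VMNetworkAdapterVlan -VMName "VM01" -Access -VlanId 40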

Virtual Machine Fibre Channel Storage Connections (Optional)

The majority of the storage used in a virtual environment will consist of virtual hard drives in the form of VHDX files used by Hyper-V. These will typically reside on storage provided by the Cluster Shared Volume and managed by the host server. In some cases there may be the need to create direct Fibre Channel connections from the virtual machine to the storage. An example of this is to support the creation of a cluster between two virtual machines (also known as Guest Clustering).

Follow the steps listed in the sections below to perform the setup and configuration of direct Fibre Channel connections to a virtual machine.

Enable Multi-Path Support on the Virtual Machine

Each virtual machine will have multiple paths available to the storage. Therefore, multi-path support must be enabled.

To enable multi-path support, install the Storwize V7000 SDDDSM driver on each of the virtual machines that require direct access to the storage (for guidance on installing the SDDDSM see section, Enabling Multipathing). Once installation completes, shut down the virtual machines.

Enable NPIV on HBA Ports

When NPIV is not enabled (default), an N_Port has a single N_Port_ID associated with it. The N_Port_ID is a 24-bit address assigned by the Fibre Channel switch. The N_Port_ID is not the same as the World Wide Port Name (WWPN), although there is typically a one-to-one relationship between WWPN and N_Port_ID. Thus, for any given physical N_Port, there would be exactly one WWPN and one N_Port_ID associated with it. NPIV allows a single physical N_Port to have multiple WWPNs, which allows Hyper-V to create a virtual fibre infrastructure using the physical HBA ports but with virtualized WWPNs.

On each of the host servers, open the Emulex OneCommand Manager application and enable N_Port ID Virtualization (NPIV) on each of the Fibre Channel HBA ports used for storage access (by default NPIV is disabled). Once NPIV has been enabled, restart the host servers. Figure 42 illustrates the steps needed to enable NPIV on each HBA port.

Figure 42 Enabling NPIV on Fibre Channel ports

Creating Virtual SAN Devices

Once NPIV has been enabled on both HBA ports on each of the host servers, use Hyper-V Manager to create two virtual SAN devices on each host server. Each virtual SAN device will be assigned one HBA port. Therefore, each virtual SAN device on a host server will connect to a separate physical fibre switch.

Follow the steps outlined below to create two virtual fibre switches on each host:

1. Using Hyper-V Manager, click Virtual SAN Manager in the Actions pane.

2. Once the Virtual SAN Manager page opens, ensure Virtual Fibre Channel SAN is selected and click the Create button.

3. On the New Fibre Channel SAN page, type a new name for the virtual SAN device and select one of the two WWPNs. Previous WWPN information can be reviewed to determine port-to-switch correlation. Click OK to continue. See Figure 43.

Figure 43 Creation of a virtual SAN device in Hyper-V Manager

4. Repeat the process to create a second virtual SAN device.

5. Create two virtual SAN devices on the second host by following the same process.

Important: Use the same names on the second host server as was used on the first host server.
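The virtual SAN devices can also be created from PowerShell. The sketch below assumes the first two Fibre Channel initiator ports returned by Get-InitiatorPort correspond to the HBA ports cabled to switch 1 and switch 2, and the virtual SAN names are examples only; verify the WWPNs against Table 5 before creating the devices:

PS> $fc = Get-InitiatorPort | Where-Object { $_.ConnectionType -eq "Fibre Channel" }
PS> New-VMSan -Name "VSAN-Switch1" -WorldWideNodeName $fc[0].NodeAddress -WorldWidePortName $fc[0].PortAddress
PS> New-VMSan -Name "VSAN-Switch2" -WorldWideNodeName $fc[1].NodeAddress -WorldWidePortName $fc[1].PortAddress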

Creating Virtual HBAs

Once the virtual SAN devices have been created, create two HBAs on each of the virtual machines that require fibre channel connectivity. This step should be performed from the Failover Cluster Manager rather than Hyper-V Manager.

Follow the steps below to create the virtual HBAs and associate them to the virtual SAN devices:

1. From Failover Cluster Manager, select your cluster in the Navigation pane and then click Roles.

2. Right-click the virtual machine and click Settings from the context-sensitive menu.

3. Select Add Hardware from the Navigation pane, select Fibre Channel Adapter on the Add Hardware page, and click the Add button.

4. On the Fibre Channel Adapter page select one of the virtual SAN devices using the drop-down.

Figure 44 Creating a virtual HBA

5. Document the WWPN that will be provided to the new virtual HBA under “Address Set A”. If this virtual machine will be Live Migrated also record the WWPN seen under “Address Set B”.

6. Repeat this process to create a second virtual HBA that is assigned to the second virtual SAN device.

7. Start the virtual machine.
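These steps can also be scripted. This is a minimal sketch that assumes the virtual machine is named "VM01" and that the virtual SAN devices were named VSAN-Switch1 and VSAN-Switch2 as in the sketch in the previous section; Get-VMFibreChannelHba returns the generated WWPN address sets that must be recorded for zoning:

PS> Add-VMFibreChannelHba -VMName "VM01" -SanName "VSAN-Switch1"
PS> Add-VMFibreChannelHba -VMName "VM01" -SanName "VSAN-Switch2"
PS> Get-VMFibreChannelHba -VMName "VM01" | Format-List SanName, WorldWidePortNameSetA, WorldWidePortNameSetB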

Verify Connectivity and Add Additional Zoning

At this point, the newly created virtual HBA’s WWPNs should be seen by the physical fibre switches.

When the virtual machine is hosted by Hyper-V host server 1, port 0 on fibre switch 1 and port 0 on fibre switch 2 should see the virtual machine’s virtual HBA’s WWPNs. When the virtual machine is hosted by Hyper-V host server 2, port 1 on fibre switch 1 and port 1 on fibre switch 2 should see the virtual machine’s virtual HBA’s WWPNs.

To verify connectivity, log in to one of the fibre switches and run the following command (0 is the port on the fibre switch that is connected to the physical HBA for host server 1):

IBM_2498_F48:admin> portloginshow 0

Figure 45 shows an example output of the command shown above. The new virtual HBA’s WWPN is highlighted by the red box.

Figure 45 Results of portloginshow showing the new virtual HBA's WWPN

Once connectivity has been established, new zoning must be implemented to isolate the data traffic over the new virtual SAN.

Start by assigning aliases to the new virtual WWPNs on each of the fibre switches (aliases for the V7000 ports should have already been defined on the switches). Then create two additional zones on each of the fibre switches. For guidance on performing these actions see section Storage Zoning, using Table 6 (Additional zoning to support virtual HBAs) as a reference.

As in Table 5, each zone in Table 6 contains a single initiator. On switch 1, zone 1 contains Server1VMPort1 with ControllerAPort1 and ControllerBPort1, and zone 2 contains Server2VMPort1 with ControllerAPort3 and ControllerBPort3. On switch 2, zone 1 contains Server1VMPort2 with ControllerAPort2 and ControllerBPort2, and zone 2 contains Server2VMPort2 with ControllerAPort4 and ControllerBPort4.

Note: The WWPNs in Table 6 are unique to the test environment used for the creation of this reference architecture. The WWPNs for your environment will be different than those shown here. To obtain the WWPNs, the following command was run for each connected port on each of the fibre switches:

IBM_2498_F48:admin> portloginshow <port number>

Table 6 Additional zoning to support virtual HBAs

IBM System Storage SAN48B-5 Fibre Switch 1

Port   Connected To                                              Virtual/Physical WWPN      Alias
0      Virtual HBA Port-1 when hosted on Hyper-V Host Server 1   C0:03:FF:AC:8F:F6:00:02    Server1VMPort1
1      Virtual HBA Port-1 when hosted on Hyper-V Host Server 2   C0:03:FF:AC:8F:F6:00:03    Server2VMPort1
2      Fibre Port-1 on V7000 Controller A                        50:05:07:68:03:04:11:62    ControllerAPort1
3      Fibre Port-1 on V7000 Controller B                        50:05:07:68:03:04:11:63    ControllerBPort1
8      Fibre Port-3 on V7000 Controller A                        50:05:07:68:03:0C:11:62    ControllerAPort3
9      Fibre Port-3 on V7000 Controller B                        50:05:07:68:03:0C:11:63    ControllerBPort3

IBM System Storage SAN48B-5 Fibre Switch 2

Port   Connected To                                              Virtual/Physical WWPN      Alias
0      Virtual HBA Port-2 when hosted on Hyper-V Host Server 1   C0:03:FF:AC:8F:F6:00:04    Server1VMPort2
1      Virtual HBA Port-2 when hosted on Hyper-V Host Server 2   C0:03:FF:AC:8F:F6:00:05    Server2VMPort2
2      Fibre Port-2 on V7000 Controller A                        50:05:07:68:03:08:11:62    ControllerAPort2
3      Fibre Port-2 on V7000 Controller B                        50:05:07:68:03:08:11:63    ControllerBPort2
8      Fibre Port-4 on V7000 Controller A                        50:05:07:68:03:10:11:62    ControllerAPort4
9      Fibre Port-4 on V7000 Controller B                        50:05:07:68:03:10:11:63    ControllerBPort4

Once zoning is finished, verify the zoning is correct by running the following command from each of the fibre switches:

IBM_2498_F48:admin> zoneshow

The result of the command should look like the output shown in Figure 46 and match the content of Table 6 (using WWPNs consistent with your environment):

Figure 46 Final zoning

Once zoning is complete, the virtual HBAs’ WWPNs should be visible to the Storwize V7000.

Note: Only two WWPNs will be visible at a time, depending on which Hyper-V host server is currently hosting the virtual machine.

Open the Storwize V7000 GUI and create a new host using the two visible WWPNs. For guidance on creating new hosts within the Storwize V7000 management GUI see section, Host Definition on the Storwize V7000.

Figure 47 illustrates creating a new host with two visible WWPNs.

Figure 47 Creating a new host using the virtual HBA's WWPNs

To add the other two ports, either type in the WWPNs manually, or migrate the virtual machine to the second host server, making the second set of WWPNs visible to the storage, and add the newly visible WWPNs using the drop-down.

Figure 48 illustrates the final host definition within the Storwize V7000 management GUI. Notice only two ports are visible at one time. Upon a virtual machine migration, the two active ports will go offline and the two offline ports will become active.

Figure 48 Virtual machine virtual HBA WWPNs

Now that zoning has been completed and the host has been created on the Storwize V7000, volumes can be created and assigned directly to the virtual machine. For additional guidance on creating volumes and assigning the volumes to hosts see section, Storage Partitioning.

Additional Best Practices and Requirements

This section includes additional best practices when creating a Hyper-V cluster.

Quorum Best Practices

Make sure that the file share has a minimum of 5 MB of free space.

Make sure that the file share is dedicated to the cluster and is not used in other ways (including storage of user or application data).

Do not place the share on a node that is a member of this cluster or will become a member of this cluster in the future.

You can place the share on a file server that has multiple file shares servicing different purposes. This may include multiple file share witnesses, each one a dedicated share. You can even place the share on a clustered file server (in a different cluster), which would typically be a clustered file server containing multiple file shares servicing different purposes.

For a multi-site cluster, you can co-locate the external file share at one of the sites where a node or nodes are located. However, we recommend that you configure the external share in a separate third site.

Place the file share on a server that is a member of a domain, in the same forest as the cluster nodes.

For the folder that the file share uses, make sure that the administrator has Full Control share and NTFS permissions.

Do not use a file share that is part of a Distributed File System (DFS) Namespace.

A CSV cannot be used as a quorum witness disk.

Cluster Shared Volume Requirements

To use CSV, your nodes must meet the following requirements:

In Windows Server 2012 R2, a disk or storage space for a CSV volume must be a basic disk that is partitioned with NTFS; you cannot use a disk for a CSV that is formatted with FAT or FAT32.

On all cluster nodes, the drive letter for the system disk must be the same.

Authentication protocol. The NTLM protocol must be enabled on all nodes. This is enabled by default.

Summary

Upon completing the implementation steps, an operational, highly available Microsoft Hyper-V failover cluster helps form a high-performing, interoperable, and reliable IBM private cloud architecture. Enterprise-class, multi-level software and hardware fault tolerance is achieved by configuring a robust collection of industry-leading IBM System x3850 X6 servers, System Storage, SAN, and networking components to meet Microsoft’s Private Cloud Fast Track program guidelines. The program’s unique framework promotes standardized and highly manageable cloud environments which help satisfy even the most challenging business-critical virtualization demands.

Appendix 1: Bill of Material

Part #     Description                                                                                    Qty
3837A4J    IBM System x3850 X6                                                                            2
44X3996    Intel E7-4890 v2 (15C/2.8GHz/37.5M/155W)                                                       6
46W0671    16GB (1x16GB, 2Rx4, 1.35V) PC3L-12800 CL11 ECC DDR3 1600MHz LP RDIMM                           32
00D2028    Broadcom NetXtreme II ML2 Dual Port 10GbE SFP+ Adapter                                         2
00AJ156    IBM 200GB SATA 1.8'' MLC Enterprise SSD                                                        4
44X4152    Power Supplies 1400W AC                                                                        8
44X4106    1.8" SSD drive bay                                                                             2
44X4051    Standard I/O Book (2 x8, 1 x16 PCIe Gen3, 5 x USB 2.0, 1GbE IMM Port, Dedicated ML Socket)     2
44X4049    Half-Length I/O Book (2 x8, 1 x16 PCIe Gen 3 HL/FH)                                            2
A3YZ       ServeRAID M5210 SAS/SATA Controller for IBM System x                                           2
3581       Emulex 8Gb FC Dual-port HBA for IBM System x                                                   2
6756FCK    1 Year Onsite Repair 24x7 4 hour response                                                      2
2076-124   Storwize V7000 Disk Control Enclosure (2.5 in Drives)                                          1
2076-224   IBM Storwize V7000 Expansion Enclosure (for additional drives if needed)                       0
85Y6185    IBM 300 GB 15k 2.5 in SAS Disk Drive for Storwize V7000                                        24
6942-25B   1 Year Onsite Repair 24x7, 4 hour response                                                     1
39M5697    5M Fibre Optic Cable LC-LC                                                                     8
69Y2876    FP Transceiver GBIC 8 Gbps SW                                                                  8
2498F48    IBM System Storage SAN48B-5 FC Switch                                                          2
7309BR6    IBM RackSwitch G8124E (Rear to Front) 10 GbE Network Switch                                    2
90Y9430    IBM 3m Passive DAC SFP+ Cable                                                                  6

Resources

IBM Support: http://www.ibm.com/support

IBM System x3850 X6 Overview: http://www-03.ibm.com/systems/x/x6/4socket.html

IBM Firmware update and best practices guide: http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5082923

IBM Redbooks: Implementing the IBM Storwize V7000: http://www.redbooks.ibm.com/abstracts/sg247938.html

IBM Redbooks: IBM System Storage SAN48B-5: http://www.redbooks.ibm.com/Redbooks.nsf/RedbookAbstracts/tips1125.html?Open

IBM Redbooks: IBM System Networking RackSwitch G8124E: http://www.redbooks.ibm.com/redbooks.nsf/RedbookAbstracts/tips0787.html

IBM Fast Setup: http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=TOOL-FASTSET

IBM x86 Server Cloud Solutions: http://www-03.ibm.com/systems/x/solutions/cloud/index.html

Microsoft Fast Track Deployment Guide: http://www.microsoft.com/en-us/download/details.aspx?id=39088

Microsoft IaaS Fabric Management Architecture Guide: http://www.microsoft.com/en-us/download/confirmation.aspx?id=38813

Microsoft Fast Track Fabric Manager Architecture Guide: http://www.microsoft.com/en-us/download/details.aspx?id=38813

Trademarks and special notices

© Copyright IBM Corporation 2014.

References in this document to IBM products or services do not imply that IBM intends to make them available in every country.

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Intel, Intel Inside (logos), MMX, and Pentium are trademarks of Intel Corporation in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

Information is provided "AS IS" without warranty of any kind.

All customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.

Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages. IBM has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the supplier of those products.

All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Contact your local IBM office or IBM authorized reseller for the full text of the specific Statement of Direction.

Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function or delivery schedules with respect to any future products. Such commitments are only made in IBM product announcements. The information is presented here to communicate IBM's current investment and development activities as a good faith effort to help with our customers' future planning.

Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here.

Photographs shown are of engineering prototypes. Changes may be incorporated in production models.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.