

    © Copyright IBM Corporation, 2014

Microsoft Hyper-V with IBM SAN Volume Controller and IBM Storwize family

    products 

     Best practices and guidelines 

     IBM Systems and Technology Group ISV Enablement  

     March 2014 



    Table of contents

    Abstract ..................................................................................................................................... 1

    Introduction .............................................................................................................................. 1

    IBM systems ............................................................................................................................. 1

    Storwize family products and SVC...................................................................................................... 1

    System x servers ............................................................................................................................... 3

    Hyper-V overview ..................................................................................................................... 3

    Prerequisites ............................................................................................................................. 4

    Storage configuration .............................................................................................................. 5

    Volumes............................................................................................................................................. 5

    Thin provisioning ................................................................................................................................ 8

    Multipathing ....................................................................................................................................... 9

    SAN zoning .......................................................................................................................................10

    Virtual machine storage ......................................................................................................... 11

Fibre Channel ...................................................................................................................................11

Hyper-V virtual Fibre Channel configuration ......................................................................................12

    iSCSI ................................................................................................................................................16

    FCoE ................................................................................................................................................18

    VHD volumes ....................................................................................................................................18

    SAS ..................................................................................................................................................19

    Direct attached ..................................................................................................................................19

    Pass-through disks............................................................................................................................20

    Hyper-V CSV Cache .........................................................................................................................20

    Hyper-V clustering .................................................................................................................. 20

    Guest clusters ...................................................................................................................................21

    Virtual machine migrations .................................................................................................... 21

    Hyper-V storage live migration ..........................................................................................................22

    Storwize seamless data migration .....................................................................................................22

    Resource metering ................................................................................................................. 23

    System Center 2012 VMM ...................................................................................................... 23

    Backup .................................................................................................................................... 25

    IBM FlashCopy .................................................................................................................................26

Tivoli Storage FlashCopy Manager ...................................................................................................26

    Easy Tier ................................................................................................................................. 28

IBM Real-time Compression .................................................................................................. 28

Conclusion .............................................................................................................................. 30

    Resources ............................................................................................................................... 31

    Trademarks and special notices ........................................................................................... 32


    Abstract

This paper covers IBM Storwize family products and IBM System Storage SAN Volume Controller (SVC) best practices for Microsoft Hyper-V. The focus includes storage-specific configuration guidance including array configuration, connectivity and disk options, backup, and integration with Microsoft System Center 2012 Virtual Machine Manager (VMM).

    Introduction

Virtualization and private cloud environments are now in widespread use in most data centers. Both

    server and storage virtualization have improved efficiency and lowered IT costs while providing highly

    scalable infrastructures and a much smaller footprint. Microsoft® Hyper-V allows IT organizations to

    consolidate resources and still use the familiar Microsoft interfaces and technology for their virtualized

    solutions.

    The IBM® Storwize® family products and IBM System Storage® SAN Volume Controller provide robust

    and reliable feature-rich storage for Hyper-V. All of these storage systems run the same code base,

    architecture and feature sets, and are compliant with Hyper-V and Storage Management Initiative

Specification (SMI-S) standards that Microsoft System Center 2012 Virtual Machine Manager (VMM) relies

    on. The result is a highly efficient, integrated, and easily managed virtual environment that combines the

    benefits of both virtual storage and servers.

    The content in this paper is a collection of best practices and guidance for Hyper-V implementation with a

    focus on storage-specific topics. It is not intended to be a detailed step-by-step guide to install and

configure Hyper-V (there are abundant resources available to help with that). This paper is about the

    general guidelines for Hyper-V with IBM Storwize family and SVC systems; however, not everything

    applies to all customer situations. Each feature or suggestion must be carefully reviewed and evaluated to

    determine if it makes sense for a given environment.

    IBM systems

    This section describes the features of the IBM Storwize family products and IBM System x® servers. IBM

    storage and servers are tightly integrated and provide proven combinations for robust, reliable, and

    scalable solutions.

    Storwize family products and SVC

    The IBM Storwize family products and SVC include the following storage systems.

      IBM Storwize V7000

  IBM Storwize V7000 Unified – combined block and file storage

  IBM Storwize V5000

      IBM Storwize V3500 (China)

      IBM Storwize V3700

      IBM Flex System® V7000 Storage Node

      IBM SAN Volume Controller models


    This paper refers to software and features that are common across all Storwize family products and SVC

    systems, unless otherwise noted. The system used for proof of concept testing was IBM Storwize V7000.

    The IBM Storwize family products and SVC systems provide internal storage and also external storage

virtualization, with the exception of the Storwize V3700, which provides internal storage only. This makes it possible to integrate

    with and manage existing heterogeneous storage along with the internal storage from a single interface.

The web-based management GUI runs directly on the IBM Storwize family products and SVC systems and

    can be accessed by a web browser.

    Figure 1: IBM Storwize V7000

    Each IBM Storwize family product consists of a control enclosure and optionally multiple expansion

    enclosures. This provides 3.5 inch or 2.5 inch serial-attached SCSI (SAS) drives for the internal storage.

    The SVC system does not support expansion enclosures or internal disk storage, but virtualizes external

    disk storage only. The systems now include support for clustering multiple Storwize family products of the

    same model together, allowing for additional scalability.

    Figure 2: View of the Storwize family and SVC management GUI

    The IBM Storwize family products and SVC also include a function called IBM System Storage

    Easy Tier®. This takes advantage of the high performance solid-state drives (SSDs) by automatically


    moving the most active data from the hard disk drives (HDDs) to the SSDs using a very sophisticated

    algorithm.

    In addition, there is also a full array of features including:

      8 Gbps Fibre Channel (FC), 1 Gbps iSCSI, or optionally 10 GbE iSCSI

  Seamless data migration to internal or external arrays

  Thin provisioning, volume mirroring, and thin mirroring

      Global Mirror and Metro Mirror replication, IBM FlashCopy® with 256 targets,

    cascaded, incremental, space efficient (thin provisioned)

      Integration with IBM Tivoli® Productivity Center

    You can find more detailed information about IBM Storwize family products and SVC systems at:

    ibm.com/systems/storage/disk

    System x servers

    IBM System x servers can deliver the performance and reliability required for virtualizing mission-critical

applications on Hyper-V with Microsoft Failover Clustering. A full line of rack-mount and blade systems are available to fit any budget or data center requirements. System x servers are available in several

    performance ranges, from mission-critical or general business level, to entry-level systems that allow a

    company to start small and expand as needed.

    You can find details about the full line of System x servers and more information at:

    ibm.com/systems/x/index.html

    Hyper-V overview

    Hyper-V is Microsoft’s hypervisor included with recent versions of Microsoft Windows® Server, and it

    continues to gain ground in the competitive server virtualization market. It offers Microsoft-based

    infrastructure environments a built-in, lower-cost server virtualization solution, along with add-on

    integration and management products such as System Center 2012 VMM to simplify administering a

    growing virtual data center.

    Hyper-V supports many storage-specific features and enhancements, including:

      Virtual Fibre Channel SAN switches and adapters

      iSCSI connectivity

      Pass-through disks

      Storage migration

      Storage automation with Microsoft System Center 2012 Virtual Machine Manager

      Virtual hard disk (VHD) and the improved virtual hard disk X (VHDX) format

      Cluster Shared Volumes (CSV) cache

      Virtual machine (VM) snapshots

      Resource metering


Hyper-V with Windows Server 2012 R2 includes additional new features and improvements. At the time of

this writing, Windows 2012 R2 is not fully supported on IBM servers and all software components;

therefore, the R2 features are not included in this version of the paper.

    Throughout this paper, the Hyper-V host server is referred to as the parent partition. This is where the

Hyper-V hypervisor is installed as a Windows feature. The virtual machines are referred to as guests or child partitions.

You can find in-depth information about Hyper-V architecture and features on the Microsoft website at:

    http://technet.microsoft.com/en-us/library/hh831531.aspx

    Prerequisites

    The remaining sections of this paper cover best practices for Hyper-V environments with IBM Storwize

family and SVC systems. The paper assumes systems administrator-level knowledge of Windows servers,

Hyper-V, storage, SAN, and networking, as some of the information is presented at a higher level. Newer

    concepts will include more details or links to in-depth information.

    Before configuring any environment, it is important that all systems and components have the latest

    supported firmware and drivers installed. Unusual technical issues are often resolved by applying these

    overlooked updates. Additionally, Microsoft security and important patches must be applied and up to

    date. There are also specific Microsoft patches that might be needed to provide functionality or stability in

    a Hyper-V environment. You can find a list of these at:

    http://social.technet.microsoft.com/wiki/contents/articles/15576.hyper-v-update-list-for-windows-server-

    2012.aspx

    Note: Install the IBM Subsystem Device Driver Device Specific Module (SDDDSM) multipath software

before applying Microsoft patches. Refer to the note in the “Multipathing” section of this paper for more

    details on the reason and KB number of the conflicting patch.

     A list of recommended patches for failover cluster environments is available at:

    http://support.microsoft.com/kb/2784261

    Windows PowerShell can also be used to automatically check for any missing patches or fixes related to

    Hyper-V and failover clustering environments. You can find a PowerShell script for Windows Server 2012

    from Microsoft at:

    http://blogs.technet.com/b/cedward/archive/2013/05/24/validating-hyper-v-2012-and-failover-clustering-

    2012-hotfixes-and-updates-with-powershell.aspx

    It is also important to verify that a newer OS or application is supported with the planned hardware or

feature sets, whether it be servers, storage, applications or network devices. For example, Windows Server 2012 has been tested in depth, with wide support statements, while Windows 2012 R2 might not be

    fully supported yet due to its relatively recent release date. Because the support levels are always

    updated, checking the IBM support links provided in this paper is the best way to ensure a given system

    and software combination is compatible.

    The IBM support site offers a useful utility for verifying supported combinations of hardware, software,

    protocols, and operating systems. It is called the System Storage Interoperation Center (SSIC) and it


    provides detailed information on supported combinations on IBM systems. When in doubt, this is the best

    source to verify interoperability. You can find the tool at:

    ibm.com/systems/support/storage/ssic/interoperability.wss

    When setting up the Hyper-V host server, Microsoft recommends keeping the number of roles and

    features limited to Hyper-V, failover clustering, and Multipath I/O. Running additional components can

introduce instability and complicate troubleshooting. The reasoning behind this is that when the

    Hyper-V role is installed, the host OS becomes the parent partition and the hypervisor is added between

    the parent partition and the hardware. Microsoft recommends the Hyper-V host server be primarily

    dedicated to running as a hypervisor, to minimize the additional load and complexity of other services

    running on it.
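
As a quick illustration, the following Windows PowerShell sketch installs only the recommended roles and features on a new Hyper-V host (the feature selection shown is an assumption to illustrate the guidance above; adjust it to your environment):

# Install only the components recommended for a dedicated Hyper-V host
# (Hyper-V, failover clustering, and Multipath I/O), including management tools.
Install-WindowsFeature -Name Hyper-V, Failover-Clustering, Multipath-IO -IncludeManagementTools -Restart

# Review what is installed and remove anything that is not needed
Get-WindowsFeature | Where-Object Installed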

    When configuring the default path for virtual hard disks, the best practice is to locate it on a non-system

    drive. This is to avoid system disk latency and prevent the possibility of the server running out of disk

    space.

Antivirus software running in a Hyper-V environment needs to have the recommended Hyper-V files

    excluded from scanning to avoid instability and unexpected issues. You can find the list of file exclusions

    and more detailed information about this topic at:

    http://social.technet.microsoft.com/wiki/contents/articles/2179.hyper-v-anti-virus-exclusions-for-hyper-v-

    hosts.aspx

This is far from a complete list of prerequisites for a Hyper-V environment; however, covering these areas

    can help avoid some of the more common issues that can create unexpected instability. For a complete

    list of prerequisites and detailed best practices for each aspect of Hyper-V, search the Microsoft TechNet

    website. You can find an excellent Hyper-V predeployment and configuration guide from Microsoft at:

    http://blogs.technet.com/b/askpfeplat/archive/2013/03/10/windows-server-2012-hyper-v-best-practices-in-

    easy-checklist-form.aspx

    Storage configuration

     A successful Hyper-V implementation takes some planning to avoid performance or capacity issues as the

    environment grows. The storage configuration deployed depends on how the volumes will be connected

    and the type of workloads running on them.

    To have the best performance and reliability, all of the storage components need to be configured

    optimally and working together, including the SAN, volume structures, host I/O management and sizing.

    Volumes

Volumes can be thin provisioned or standard, with most environments opting for thin-provisioned volumes as the most efficient configuration. More information on this is covered later in the “Thin provisioning”

    section.

    It is common for Hyper-V hosts to have very large volumes to store multiple virtual hard disks. In such

    configurations, for performance reasons, it is important to ensure there are adequate disks in the

    underlying storage pools to support the expected I/O load, as well as the needed space. However, even if


    such performance ceilings are reached, Hyper-V and Storwize family systems provide options such as

    virtual machine and storage migration to quickly remedy the situation.

Virtual machines can use one of several directly connected volume methods. These types of volumes

    offer the best performance and overall control. When using directly connected volumes, they are sized

    closer to what typical physical server environments use. Another important use for directly attached

    volumes is when using backup solutions that use Microsoft Volume Shadow Copy Service (VSS) to take

    snapshots and restore individual volumes, an approach that cannot be used with shared virtual hard disk

    (VHD) volumes.

    In most instances, VHD files are recommended and provide adequate balanced performance. However,

    directly connected drives might be required by an application or are needed for the highest performance.

    Often, a virtual server uses a combination of direct-attached and VHD files to balance efficiency and

performance. If performance problems persist with VHDs, try placing heavy I/O data on directly attached volumes.

Volume capacity planning needs to allow enough space for dynamically expanding VHD files, which start out at

a minimal size but can have a high maximum size limit configured. If administrators are not careful, disk

    arrays can be provisioned too small to accommodate this type of growth.

    It can also be easy to overlook I/O performance, and only size volumes for expected capacity. Multiple

    high I/O virtual machines can quickly overwhelm a shared VHD volume. In such cases, the best practice is

    to spread heavy I/O virtual machines over many volumes, or use directly mapped volumes to control VM

    storage performance more directly.

    If using System Center 2012 VMM to provision virtual machines through SAN copy, it is important to

    remember that these are limited to 1:1 ratio of VM to disk. This restriction is imposed by VMM template

    deployment rules, and is covered in more detail in the “System Center 2012 VMM” section. 

    The following points briefly summarize the Storwize family and SVC volume design.

      RAID arrays, called managed disks (MDisks), are created from the available internal

physical disks.

  The MDisks are placed in pools, where they are virtualized as extents.

      External back-end storage systems from any vendor can also present their own arrays as

    MDisks imported into the Storwize or SVC systems as a virtual front end, which are also

    placed in pools. This is called external virtualization and it allows other storage systems to

    take advantage of centralized management from the Storwize easy-to-use GUI in addition

    to the performance benefits from I/O caching, copy services, and striping volumes over

    multiple arrays.

  Lastly, volumes are provisioned from the available space in these pools, and can either be

sequentially mapped to the physical MDisks, or striped across multiple MDisks.


    Figure 3: View of storage pools and volumes

The only method available when using the Storwize GUI is striped volumes. This provides the best performance and

    provisioning flexibility for most workloads. Provisioning a sequential volume, using the space on only one

    MDisk, requires the use of command-line interface (CLI) commands to configure the volumes. As an

    example use of sequential volumes, when testing for the Microsoft SQL Fast Track data warehouse

workload, the sequential volumes slightly outperformed striped volumes. You can find this Fast Track

configuration guide, including scripts for creating and mapping sequential volumes, at:

    ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102243
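
As an illustration only (the pool, MDisk, volume, and host names are examples, not taken from the guide above), a sequential volume can be created and mapped from the Storwize or SVC CLI along these lines:

# Create a 500 GB sequential volume on a single MDisk in pool Pool1
mkvdisk -mdiskgrp Pool1 -mdisk mdisk5 -vtype seq -size 500 -unit gb -name SQLFT_Data1

# Map the new volume to the host definition for the Hyper-V server
mkvdiskhostmap -host HyperV_Host1 SQLFT_Data1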

    Virtual extent size is preconfigured in the new Storwize software to a default of 1 GB. This should not be

    confused with volume stripe size, which is 256 KB by default. Consider the following points when

    determining the extent size of new storage pools.

      The extent size must be specified when the new storage pool is created.

      Extent size cannot be changed later.

      Storage pools can have different extent sizes.

      Pools must have the same extent size to migrate data between them.

      The extent size affects the maximum size of a volume in the storage pool.

    Figure 4 shows the maximum volume capacity possible for each extent size. The maximum is different for

    thin-provisioned volumes because Storwize allocates a whole number of extents to each volume created;

    using a larger extent size might increase the amount of storage that is wasted at the end of each thin

    volume. Larger extent sizes also reduce the ability of the Storwize V7000 system to distribute sequential

    I/O workloads across many MDisks, which reduces the performance benefits of virtualization.


    Figure 4: Maximum volume sizes per extent

    Thin provisioning

Thin provisioning is a technology that has been around for a while, and most environments are able to take

    advantage of the feature to keep their data storage as efficient as possible, including Hyper-V volumes.

    You need to consider a few things to keep the thin provisioned volumes optimized and efficient.

    Thin-provisioned volumes have both a virtual (or logical) capacity and a real capacity. The real capacity is

    what is actually used by the system and the virtual capacity is its configured maximum size that the

    volume can expand to. The real space needs to be monitored and must have warning thresholds

    configured to prevent the disk from reaching its full virtual size and shutting down. Another solution to this

    is to enable the auto expand feature on the Storwize family or SVC volume.

     Although the impact is minor, I/O rates for thin-provisioned disks are slightly slower than fully allocated

    volumes, all other things being equal. This is due to the additional processing needed to maintain

    metadata information regarding real and virtual space usage.

    When creating thin-provisioned volumes on the Storwize family or SVC system, a grain size is specified.

    The recommended size is 256 KB. Although smaller grain sizes use less space, they create more

    metadata processing and can negatively affect performance. The size of the grain used also affects the

    virtual size limit of the volume. A size of 256 KB has been determined to be the best for performance. If

    using FlashCopy with thin-provisioned source and target volumes, the volumes must use the same grain

    size.
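
For illustration (the pool and volume names are examples), a thin-provisioned volume with the recommended 256 KB grain size and auto expand enabled could be created from the CLI roughly as follows:

# Create a 1 TB thin-provisioned volume: 2% real capacity up front,
# auto expand enabled, 256 KB grain size, warning at 80% of virtual capacity
mkvdisk -mdiskgrp Pool1 -size 1024 -unit gb -rsize 2% -autoexpand -grainsize 256 -warning 80% -name HyperV_CSV1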

    On the Storwize family and SVC systems, thin-provisioned volumes can be converted to fully allocated

    volumes and fully allocated volumes in turn can be converted to thin-provisioned volumes.


    Figure 5: Detailed properties view of thin provisioned volume

    One of the administrative concerns with thin provisioning is the need to reclaim space after files are

    deleted at the operating system level. If space is not reclaimed, then over a period of time, thin

    provisioning loses its efficiency. Windows 2012 uses the SCSI UNMAP command to reclaim space, with

    both real-time and manual space reclamation capabilities. The Storwize family systems and SVC systems

    do not currently support this method; however, support is expected as a future enhancement. Until then,

    as an alternative, there are third-party tools available that can be used to reclaim the space.

    Multipathing

    Regardless of the connection methods used, it is important to install the IBM SDDDSM to manage

    multipath I/O traffic. SDDDSM relies on the Microsoft MPIO feature, and the installer will check for the

    feature and install it. This software needs to be installed before mapping any volumes to the server.

    You can download the software from the IBM support website, and it is important to download the version

    specific to the Storwize or SVC model and operating system version, as cataloged on the IBM website.

    You can download the SDDDSM packages at: ibm.com/support/docview.wss?uid=ssg1S4000350

The software installation must be launched from the command line with administrative permissions, by

opening the command prompt with the Run as administrator option.

    The recommended installation process is to let the SDDDSM installer install the native Microsoft MPIO

    feature. Attempts to install with a pre-existing Microsoft MPIO configuration might result in unexpected

    errors and conflicts with leftover connections or configurations, depending on the history of the server. The


    best approach to avoid such issues is to uninstall the Microsoft MPIO feature first, restart, and then start

the SDDDSM installation and allow it to install the Microsoft MPIO component during the process. After the

    server restarts, connect the storage and scan for new volumes. Verify that the volumes show up as

    expected in Windows Disk Manager.

    If the environment is only using volumes mapped to the Hyper-V hosts, such as VHDs, then the multipath

    software only needs to be installed on the Hyper-V hosts, and not the virtual machines. However, if there

    are direct connections and volume mappings to the virtual machines, then the SDDDSM software must be

    installed on the virtual machines. SDDDSM can handle these the same way it manages the physical

    server connections.

    When volumes are created on the Storwize family or SVC systems, the volumes are alternately assigned

a node owner in a balanced manner. The controller node that owns the volume is considered to be the

    preferred path. SDDDSM load-balancing attempts to balance the load across all preferred paths.

    SDDDSM uses the path that has the least I/O at the time. If SDDDSM cannot find an available preferred

    path, it balances the load across all the paths found and uses the least active non-preferred path.

    Note: On Windows Server 2012, there is a Microsoft patch that conflicts with the SDDDSM installation. If

    KB2779768 is installed on the server and then SDDDSM is installed, the list of IBM storage devices is not

populated. This is because the registry key MPIOSupportedDeviceList under MPDEV does not get

    properly updated. The issue is being investigated. The workaround is to ensure that SDDDSM is installed

first, and then apply all applicable Microsoft patches. If installed in this order, the issue does not occur.

    The other workaround is to import a known good registry key from an unaffected system.

The version of SDDDSM used for proof-of-concept testing was 2.4.3.4-4.
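
As a simple safeguard, the patch order described in the note can be checked with Windows PowerShell before installing SDDDSM (a minimal sketch; the KB number is the one referenced above):

# Warn if the conflicting patch is already present before SDDDSM is installed
$kb = Get-HotFix -Id KB2779768 -ErrorAction SilentlyContinue
if ($kb) {
    Write-Warning "KB2779768 is already installed; install SDDDSM on a clean system first, or import a known good MPIO registry key."
} else {
    Write-Output "KB2779768 not found; install SDDDSM, then apply Microsoft patches."
}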

    SAN zoning

    Single initiator zoning is the recommended configuration. In this type of zoning, each host’s host bus

    adapter (HBA) port is zoned with two storage ports, one from each Storwize controller. With multiple

    storage ports, the zones must rotate the use of the four ports on each controller to balance the load and

    performance. The illustration in Figure 6 shows a typical zoning configuration.

    Figure 6: Single initiator zoning example

Figure 7 is a simple example of the zones being load balanced across the four ports on each Storwize controller node. In the example, the server has 4 HBA ports, and is balanced across all 8 FC ports on the Storwize system, using both controller nodes (A and B). Each line in the table is a zone.


    Figure 7: Sample zoning configuration

    Virtual machine storage

    There are several connectivity options available for Hyper-V on IBM Storwize family and SVC systems.

Volumes can be connected at the Hyper-V host as VHD storage, mapped directly to the virtual machines, or a combination of the two. The direct connections to virtual machines are still using the underlying

    physical network or HBA, by creating a virtual connection mapped through these devices. As with any

    Hyper-V resource, bandwidth considerations need to be based on the throughput and capabilities of the

    physical hardware that the virtual machines rely on. An example of combining the connectivity types is

    having the operating system on a VHD, with direct-attached volumes dedicated to a database or similar

    heavy I/O or high-throughput application. Whenever high performance is needed, direct attached is the

    fastest, whether it is iSCSI, virtual Fibre Channel or pass-through disks.

When using directly connected drives, the volume sizing is configured similar to physical server standards.

The volume is no longer shared with other VHD files, but is a direct, application-specific volume designed

    for capacity and performance accordingly.

    Fibre Channel

    Fibre Channel connections are used at the Hyper-V host as volumes for VHD files or directly to the virtual

    machine. Making direct Fibre Channel connections requires the use of Microsoft’s virtual Fibre Channel

    technology. N_Port ID virtualization (NPIV) compliant switches and HBAs can take advantage of virtual

    Fibre Channel direct connection mappings to a VM, which was introduced with Windows Server 2012 and

    uses Hyper-V virtual SAN switches and HBAs. NPIV is a Fibre Channel feature that allows multiple FC

    initiators to occupy a single physical port. You can find more information about virtual Fibre Channel at:

    http://technet.microsoft.com/en-us/library/hh831413.aspx

    Direct virtual machine connections can also be Fibre Channel connected pass-through disks, which are

    less common as newer direct connection options gain popularity.


    Hyper-V virtual Fibre Channel configuration

    This section covers virtual Fibre Channel configuration. Although it is often used as an ideal direct

connection method for shared storage and failover clustering, it can also be used with stand-alone

Hyper-V hosts. In this example, Microsoft failover clustering is used.

Note: NPIV is only supported on Storwize family or SVC systems if connected to a switch that supports the protocol. NPIV is not supported with direct-attached storage configurations (connecting

    without a switch).

The IBM multipath driver (SDDDSM) must also be installed on each virtual machine, as covered

earlier. The multipathing is managed from within the virtual machine by this software and the Microsoft

    MPIO component. SDDDSM can also be installed on the Hyper-V host, and does not create any

    conflict.

Since virtual FC is a newer feature, more detailed steps are shown for configuring it. Follow the

    steps below to connect a volume with Hyper-V virtual FC switches and HBAs.

1. Open Hyper-V Manager, and in the right panel, click Virtual SAN Manager.

2. Click Create.

3. Enter a name for the virtual SAN switch. Note that the same virtual switch name must be used on

    each Hyper-V host server in the host cluster, or migration of the virtual machine will fail.

    4. Select the check boxes next to the physical HBA ports to be used. In a typical production

    environment, two virtual SAN switches must be configured for redundancy, similar to a typical

    physical SAN topology.

    5. Click Apply to save the switch configuration.

    6. Repeat steps 1 to 5 on the second node of the host cluster, naming the switch the same. This

    completes the virtual SAN switch setup. Refer to Figure 8 for a view of the Virtual SAN Manager.


    Figure 8: Hyper-V virtual FC SAN switch interface

    Configure Hyper-V virtual Fibre Channel adapters

    Perform the following steps to configure Hyper-V virtual FC adapters.

    1. From the Failover Cluster Manager, stop each virtual machine to allow configuration changes.

    2. Right-click the virtual machine and click Settings.

3. In the left panel, click Add Hardware, then click Fibre Channel Adapter, and then click Add. This

    opens the Fibre Channel Adapter settings screen as shown in Figure 9. 


    Figure 9: Hyper-V virtual Fibre Channel adapter and WWPN address set

    4. Select the virtual SAN switch (that you created earlier) from the Virtual SAN list.

    5. Use the default worldwide port name (WWPN) and worldwide node name (WWNN) that are

    provided. These are used to create NPIV virtual ports on the host automatically when the virtual

machine is started. Each virtual Fibre Channel adapter has two WWPNs assigned to it as an address set. The virtual ports are automatically removed when the virtual machine is turned off.

Only one virtual port will be active and visible on the physical HBA at a time; however, both

    WWPNs must be zoned to the storage system. During a live migration, both ports will be in use to

    maintain connectivity during the move.

    6. Repeat steps 3 to 5 to create any additional adapters.

    7. Click Apply to save the configuration.

    8. Start the virtual machine. Note that the VM might fail to start if NPIV is not enabled on the HBA,

    switch, and individual switch ports. Enable NPIV on the switch ports connecting both the HBAs

    and the Storwize or SVC system. It can also fail to start if the switch firmware or HBA drivers are

    not at the most recent versions to support NPIV.

9. Using the physical HBA management software on the Hyper-V host server, verify that the virtual ports are created on the expected physical HBA ports. Figure 10 illustrates the QLogic HBA

    command-line utility and shows the virtual ports listed as vPorts.


    Figure 10: Virtual ports from Hyper-V virtual FC adapters

    10. Create a zone on the physical FC switch to enable connectivity between the new vPort WWPN

    and the storage. Because only one of the two virtual WWPNs is enabled on the physical HBA at a

    time, the second one will not be visibly discovered by the switch. It can be added manually.

     Alternately, the position of the WWPNs in the virtual Fibre Channel adapter properties page can

be swapped, which results in the other WWPN being created and visible for zoning. It is critical that

    both virtual WWPNs are zoned.

    11. Create a host definition on the storage system using the vPort WWPNs. Only one of these will be

    visible in the Storwize GUI port drop-down list when creating the host. The second one must be

    manually added, and will show as unverified. This is fine. The host will also be in a degraded state

    as it can only see one of the virtual ports. During a live migration, both ports will become active.

    12. Map volumes to the host and verify that multipath is working on the host.

    13. Test live migration to verify that the virtual machine migrates without errors.

    You can find more detailed information on Microsoft Hyper-V virtual FC features at:

    http://technet.microsoft.com/en-us/library/hh831413.aspx
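
For reference, the equivalent configuration can also be scripted with the Hyper-V PowerShell module. This is a hedged sketch using example names and example worldwide names; substitute the values for your HBAs and virtual machines:

# Create a virtual SAN switch bound to a physical HBA port (WWNN/WWPN values are examples)
New-VMSan -Name "vSAN_A" -WorldWideNodeName C003FF0000FFFF00 -WorldWidePortName C003FF5778E50000

# Add a virtual Fibre Channel adapter to the stopped VM and connect it to the virtual SAN
Add-VMFibreChannelHba -VMName "SQLVM01" -SanName "vSAN_A"

# Review the WWPN address set assigned to the virtual adapter (zone both WWPNs to the storage)
Get-VMFibreChannelHba -VMName "SQLVM01"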

The Windows event log provides Hyper-V virtual Fibre Channel-specific logging that can be helpful for

troubleshooting virtual FC issues. You can find the log under Applications and Services Logs > Microsoft >

Windows > Hyper-V SynthFC (refer to Figure 11).


    Figure 11: Hyper-V virtual Fibre Channel event log

    There is a limit to the number of Fibre Channel paths supported to a single volume from the Storwize

    system to a host server. The number of paths must not exceed the number of Fibre Channel ports on the

    storage system. If this is exceeded, then it is considered an unsupported configuration. As an example, on

    the IBM Storwize V7000 system, each controller node has four FC ports and each Storwize V7000 system

has two controller nodes, providing a total of eight ports. In this case, without any zoning, the total number of paths to a volume is eight times the number of host ports. This rule exists to limit the paths that must be

    resolved by the multipathing software and to provide a reasonable path failover time. A host port and two

    storage ports would result in two paths per zone. The maximum number of volumes supported for a

Windows host is 512.

    iSCSI

The IBM Storwize family and SVC systems support 1Gb and 10Gb iSCSI connections. For 1Gb connections, the

built-in Ethernet ports on the Storwize controllers can be used, including port 1 on each controller, which is

shared as a management port. The Storwize system can assign multiple IP addresses to this port, one for management and one for iSCSI connections. For the 10Gb connections, the Storwize systems use

    expansion cards with 10Gb iSCSI / Fibre Channel over Ethernet (FCoE) ports.

    The following section provides more detail on configuring iSCSI connections. It is the same process

    regardless of physical or virtual server.


    Microsoft failover clustering is supported with iSCSI, with some limitations, as listed in the following IBM

    support page: ibm.com/support/docview.wss?uid=ssg1S1004450. Information on this page is indexed by

    the operating system and protocols, and provides detailed compatibility listings.

    Jumbo frames should be enabled and the maximum transmission unit (MTU) set to 9000 (or 9014

    depending on the hardware) when setting the virtual network interface cards (NICs) as shown below. The

    method for setting this is the same on both the physical and virtual NICs.

    Figure 12: Configuring jumbo frames on network card

    This can improve performance significantly. When enabling this, it must be done at all points along the

    data path – physical and virtual NICs, storage, and the physical switch. There is no configuration needed

    on the Hyper-V virtual switch; it automatically senses MTU sizes.
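
On Windows Server 2012 hosts and guests, the jumbo frame setting can also be applied from PowerShell instead of the adapter properties dialog (a sketch; the adapter name is an example and the exact registry keyword and value vary by NIC driver):

# Enable jumbo frames on the iSCSI NIC (value is commonly 9014, depending on the driver)
Set-NetAdapterAdvancedProperty -Name "iSCSI1" -RegistryKeyword "*JumboPacket" -RegistryValue 9014

# Confirm the setting
Get-NetAdapterAdvancedProperty -Name "iSCSI1" -RegistryKeyword "*JumboPacket"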

    To configure jumbo frames on the Storwize family and SVC systems, use the cfgportip CLI command:

cfgportip -mtu <mtu_size> -iogrp <io_group_id> <port_id>

    For example:

    cfgportip -mtu 9000 -iogrp 0 1

     After configuring the components, verify that the jumbo frames are set up correctly by pinging the storage

    IP with an 8k packet from the Hyper-V server or guest VM, as follows:

ping -f -l 8000 <storage iSCSI IP address>

    If the replies are received with bytes=8000, then jumbo frames are configured correctly. Technically, the

    ping test might be as large as 8972, which is 9000 minus wrappers and associated packet structures.

    For the most up-to-date information on configuring iSCSI with Storwize systems, refer to:

      ibm.com/partnerworld/wps/servlet/ContentHandler/stg_ast_sto_wp_best-practices-with-ibm-

    storwize-v7000

      http://pic.dhe.ibm.com/infocenter/storwize/ic/index.jsp?topic=%2Fcom.ibm.storwize.v7000.doc%2

    Fsvc_rulesiscsi_334gow.html


    FCoE

    Fibre Channel over Ethernet takes advantage of the latest 10Gb networking to run the Fibre Channel

    protocol for block storage on 10Gb Ethernet cables. The servers connect to a supported converged switch

    using converged networking adapter (CNA) cards, with each card providing the operating system both a

    network port and a Fibre Channel port. The Storwize family systems connect to the converged networking

    switch through the 10Gb expansion cards in the storage controllers. These expansion card ports are used

    for either 10Gb iSCSI or FCoE.

    FCoE configurations need to be using supported CNA cards, switches, and firmware levels. You can find

    the supported hardware and software levels at: ibm.com/support/docview.wss?uid=ssg1S1004450

    This link has an indexed list sorted by operating systems and protocols. This link is specific to Storwize

V7000, but there are similar pages available for the other Storwize models. Because the Storwize systems

    all run similar architecture and protocols, what is true about one model usually applies to the others.

     Although other combinations of cards and switches might work, if they are not listed on the page, they

    have not been tested.

For FCoE connections, the ports need to be zoned with the 10Gb storage ports on the converged switch, then a Fibre Channel host definition can be created on the Storwize system. The host creation, mapping of

    drives, and drive initialization on the host server is similar to any other Fibre Channel connected drive.

    FCoE is supported at the Windows Hyper-V host level and as such supports Hyper-V virtual hard drives

    hosted on the Hyper-V host server’s volumes. There is no direct-attached option with FCoE, other than the

use of pass-through disks, which are volumes mounted on the Hyper-V host server and mapped directly to

    a virtual machine.

     Although Microsoft failover clustering supports FCoE, Microsoft clustering with FCoE is not currently

    supported with Storwize family systems and SVC at the time of this writing. As a result, Hyper-V systems

    are currently limited to stand-alone configurations. Support for clustering with FCoE should be available at

a later date, and the link can be checked for updates on supported configurations. If your environment requires clustering support with FCoE, contact your IBM representative for updated information on support

    availability options.

    VHD volumes

    The most common storage for virtual machines is the volume on the Hyper-V host server where virtual

    hard drive files are stored. In many cases, these are configured as cluster shared volumes, because a

    typical Hyper-V environment consists of multinode high availability clusters, with the volume presented to

    multiple hosts and access controlled by the Windows disk and cluster management features.

    VHD files are connected as either IDE or SCSI controller devices in Hyper-V. Boot disks must use

integrated drive electronics (IDE). SCSI disks are the more scalable method, which supports multiple disks

    per controller. Each VM can have four SCSI controllers with 64 disks per controller. As a result, the

    recommended method for adding additional VHD storage to a VM is SCSI. Of course, the SCSI connected

    device can be either a VHD or a physical drive.


    For heavy I/O applications, multiple SCSI adapters used to be the norm to avoid bottlenecks due to limited

    channels, queue depths, and processor storage I/O interrupt management of each SCSI adapter.

    However, with Windows Server 2012, additional storage performance was gained by adding channels,

with the interrupt management distributed over all virtual processors. As a result, the use of multiple virtual

    SCSI adapters is usually not needed.

    VHD files can be either fixed or dynamically expanding. There are some impacts to storage provisioning

    depending on the type of VHD used. One of the impacts is on thin provisioning. Placing a fixed size VHD

on a thin-provisioned volume results in the full VHD file size being written to the volume, which can take time,

but results in better performance once it is written to disk because it avoids the processing associated with dynamic

expansion. If storage efficiency is more important than performance, then use dynamically expanding VHDs.

    These disks will not allocate all storage that is specified for them, but rather dynamically allocate it as

    needed. There will be more processing with the ongoing expansion, but the virtual drive and the thin-

    provisioned volumes will only use space as needed.

    Windows 2012 introduced the VHDX virtual hard drive format. These can also be fixed or dynamic;

    however, the dynamic format VHDX addresses most of the performance concerns that existed with

Windows 2008. As a result, the performance of dynamic VHDXs is much closer to that of fixed VHDXs. However, if maximum storage I/O performance is needed, using fixed VHDX files is still recommended.

VHDX supports larger volume sizes, up to 64 TB, and offers improved protection against data corruption by

    logging updates to VHDX metadata files. New virtual hard disks should use the VHDX format whenever

    possible to take advantage of the new features and capabilities.

     Another performance-related concern with dynamic virtual hard drives over fixed ones is available

capacity. The dynamic volumes can fill up the defined space and shut down if administrators or monitoring

    solutions are not watching the capacity levels and trends closely.

     As virtual machines are migrated to Windows Server 2012, consider converting VHDs to VHDX format to

    take advantage of the improved performance and capabilities.
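
For illustration, both tasks can be done with the Hyper-V PowerShell module (the file paths and sizes are examples; the VM must be offline during a conversion, and its configuration updated to point at the new file afterward):

# Create a new dynamically expanding VHDX (allocates space only as data is written)
New-VHD -Path "D:\VHDs\data01.vhdx" -SizeBytes 500GB -Dynamic

# Convert an existing VHD to the VHDX format (format is inferred from the file extension)
Convert-VHD -Path "D:\VHDs\app01.vhd" -DestinationPath "D:\VHDs\app01.vhdx"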

    SAS

Some of the newer Storwize systems support SAS host connectivity. For example, the IBM Storwize V3700

    and V5000 systems come with four 6Gb SAS ports for host connectivity. SAS connections are direct

    attached only. On Storwize systems that support SAS host connections, the new host wizard in the

    Storwize management GUI includes an option for a SAS host, and new CLI commands to support SAS

    host and port management from the command line.

    Direct attached

     Although in most cases, SAN-attached hosts are the recommended approach, most of the Storwize family

    systems now support direct-attached hosts. Direct-attached storage offers smaller environments a less-

    complex and lower-cost starting point, with the option to expand to SAN-attached, as needed later.

    When directly attaching, the host needs to make one or more connections to each controller node in the

    Storwize system to provide redundant I/O paths.


    Refer to the SSIC link provided earlier to determine the most recent information regarding which systems

    and protocols support direct host connections.

    Pass-through disks

    Pass-through disks provide one of the better performance storage options, similar to iSCSI or virtual Fibre

Channel. If the systems support it, virtual Fibre Channel is the recommended direct connection method for

    the best performance. However, if the systems do not support virtual Fibre Channel, then pass-through

    disks provide a good alternative.

    Pass-through disks are volumes presented originally to the Hyper-V host servers, and kept offline to the

    host server, with control and connectivity passed into the virtual machine. The virtual machine uses a

virtual SCSI controller to connect the pass-through disk. Up to 64 disks can be connected per virtual SCSI

    controller. Up to four SCSI controllers can be configured per virtual machine. So, clearly, this is a good

    option for storage performance and scalability in larger systems. After the pass-through disks are

    connected, they are configured similar to a disk on a physical server.

    It is possible to use the pass-through disk as a boot disk for the operating system. This requires the pass-

    through disk to be set up before installing the operating system, and must be connected as an IDE

    controller device instead of the usual SCSI device.
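
A minimal PowerShell sketch of attaching a pass-through disk to a VM over a virtual SCSI controller follows (the disk number and VM name are examples; verify the correct disk before taking it offline):

# Take the volume offline on the Hyper-V host so it can be passed through
Set-Disk -Number 5 -IsOffline $true

# Attach the offline physical disk to the VM's SCSI controller
Add-VMHardDiskDrive -VMName "SQLVM01" -ControllerType SCSI -DiskNumber 5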

    Hyper-V CSV Cache

    Hyper-V introduced CSV Cache with Windows Server 2012 to improve storage performance by using

    system memory (RAM) as a write-through cache. This can improve read request performance, while write

    through results in no caching of write requests. Up to 20% of RAM can be allocated for Windows Server

    2012, and up to 80% with Windows Server 2012 R2. CSV Cache is built into the Failover Clustering

    feature, which manages and balances the performance across all nodes in the cluster. The recommended

    value is 512 MB, which is a good starting point. Additional testing needs to be done with larger settings to

    validate the best performance setting for the environment.

    You can find more detailed information from Microsoft at:

    http://blogs.msdn.com/b/clustering/archive/2013/07/19/10286676.aspx
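
On Windows Server 2012, the CSV cache can be configured from PowerShell roughly as follows (a sketch; the cluster property and per-volume parameter names shown are the Windows Server 2012 names described in the article above, and the volume name is an example):

# Reserve 512 MB of host RAM for the CSV cache on every node in the cluster
(Get-Cluster).SharedVolumeBlockCacheSizeInMB = 512

# Enable the cache on a specific Cluster Shared Volume
Get-ClusterSharedVolume "Cluster Disk 1" | Set-ClusterParameter CsvEnableBlockCache 1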

    Hyper-V clustering

    High availability for virtual machines is accomplished by using Windows Failover Clustering and shared

    storage. The virtual machine and files are cluster resources that reside on shared storage which can fail

    over between multiple Windows Hyper-V host cluster nodes. After a Hyper-V VM has been made highly

available, it must be managed from the failover cluster manager. Although this level of clustering at the Hyper-V parent or host level protects the virtual machine and its related resources, it does not necessarily

    protect the operating system or the applications running on the VM. This is where guest clustering can be

    very valuable.


Figure 13: View of highly available virtual machines in Microsoft Failover Cluster Manager

    Guest clusters

    Clustering of Hyper-V guests adds another layer of clustering to protect the applications and operating

system running on the virtual machines. The virtual machines become cluster nodes, which access directly

    connected shared storage. Not all storage types support this feature. The available storage methods are

limited to iSCSI if running Windows Server 2008 or Windows Server 2012, or virtual Fibre Channel for Windows Server 2012. Pass-through or virtual hard disks cannot be used for guest clustering.

    Virtual machine migrations

    Most Hyper-V environments take advantage of VM migration capabilities. Although quick migrations are

    still used, they can result in some downtime and require a state save to disk. The live migration feature

    allows VMs to be moved without any downtime and minimal performance impact. It requires a well-

    connected dedicated network to be successful. If using virtual disks for live migration, CSVs are

    recommended. However, pass-through disks, standard virtual disks, virtual Fibre Channel and iSCSI are

all supported.

Virtual machine sprawl is very common in today's virtualized environments. It is easy for IT departments to

    start creating a lot of VMs rather quickly out of convenience without factoring in the performance impact.

    Moving virtual machines to alternate infrastructure resources is a common administrative task with

    Hyper-V, whether it be moving a VM to another node to balance out performance or moving a VM’s files to

    different storage. This section covers the storage migration for VMs and the different ways to accomplish

    this.


    Hyper-V storage live migration

With Windows Server 2012, Hyper-V VHD files can be nondisruptively migrated to different storage using either

    the Hyper-V manager or the Windows failover cluster manager. The migration can also be initiated from

    System Center 2012 VMM. During the migration copy process, the I/O continues to be routed to the

source location. After the copy is complete, I/O is mirrored to both locations and, after the copies are fully synchronized, it is directed only to the target VM file. Finally, the original VM file is deleted. With this new feature, it is no

    longer necessary to take a virtual machine offline to move the machine’s VHD file to another storage

    location.
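A minimal PowerShell sketch of a storage live migration, assuming a virtual machine named VM01 and a destination path on another CSV (both names are examples):

    # Move the VM's virtual hard disks and configuration files to a new location
    # while the virtual machine continues to run
    Move-VMStorage -VMName "VM01" -DestinationStoragePath "C:\ClusterStorage\Volume2\VM01"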

    Storwize seamless data migration

Another benefit of the IBM Storwize and SVC family systems is the ability to seamlessly migrate volumes that are experiencing performance issues, or that need maintenance, to more adequately sized storage pools. This

    process resolves overloaded disk performance issues quickly without interrupting or making any changes

    to the production systems.

    Relocating data is one of the most common causes of downtime. The Storwize family and SVC systems

have virtualized the data, which allows the underlying physical layout to change without interrupting host access or

    performance. As a result, applications stay online during common storage maintenance or movement

    activities.

    The application of this feature for Hyper-V means any storage used by virtual machines or the Hyper-V

    hosts can be moved quickly to different storage pools as needed. The performance and capacity of virtual

machines can be constantly changing with the business needs. Sometimes slower disks are needed; at other times, it is more space, higher performance, load balancing, or just a maintenance window that is

    needed.
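In addition to the GUI operation shown in Figure 14, the migration can be started from the Storwize or SVC command-line interface. The following is a hedged sketch that assumes an SSH client such as plink, a volume named HyperV_CSV1, and a target pool named Pool2 (all example names); verify the exact syntax against the CLI reference for your software level:

    # Start a volume migration to another storage pool; hosts remain online throughout
    & plink.exe -ssh superuser@svc-cluster "svctask migratevdisk -mdiskgrp Pool2 -vdisk HyperV_CSV1"

    # Monitor the progress of the migration
    & plink.exe -ssh superuser@svc-cluster "svcinfo lsmigrate"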

    Figure 14: View of volume migration to another storage pool


    Resource metering

A new addition to Windows Server 2012 Hyper-V is resource metering. This feature tracks and gathers data about the physical processor, memory, network, and storage used by each virtual machine. By giving administrators in-depth data on virtual machine resource utilization, it helps them detect hot spots and bottlenecks and avoid outages.

    This data can be used to plan capacity, track usage of resources by business unit for chargeback, or

    analyze the costs of a particular workload. Additionally, the movement of virtual machines such as live,

offline, or storage migrations, does not affect the measurements. The metering is accomplished with either PowerShell cmdlets or through APIs in the virtualization WMI provider (a PowerShell example follows the list of measurements below).

     Available measurements include:

       Average CPU usage, in MHz

       Average physical memory usage, in MB

      Minimum memory usage (lowest physical memory)

      Maximum memory usage (highest physical memory)

      Maximum amount of disk space used by each virtual machine

      Total inbound network traffic, in MB, by virtual network adapter

      Total outbound network traffic, in MB, by virtual network adapter
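A brief PowerShell sketch of resource metering (the VM name is an example):

    # Enable metering for a virtual machine, then report the collected measurements
    Enable-VMResourceMetering -VMName "VM01"
    Measure-VM -VMName "VM01" | Format-List

    # Reset the counters after the data has been recorded
    Reset-VMResourceMetering -VMName "VM01"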

    System Center 2012 VMM

Microsoft System Center 2012 VMM SP1 provides deep private cloud management functionality, allowing system administrators to perform common server and storage tasks from a single graphical interface. Figure 15 shows a view of the VMM management console. You can find more information about

    VMM features and capabilities at:

    http://technet.microsoft.com/en-us/library/gg610610.aspx


    Figure 15: System Center 2012 SP1 Virtual Machine Manager interface

    The IBM Storwize family products and SVC fully support integration with System Center 2012 VMM and

    are compliant with SMI-S standards that VMM relies on. Unlike many SMI-S providers, a separate

    installation or management system is not needed with the IBM SMI-S provider because it runs on the

    Storwize system, included with Storwize software version 7.0 and later. This single-tier design simplifies

setup and administration, allowing direct communication between VMM or other storage management applications and the storage system.

    You can find detailed information on IBM storage integration with SMI-S and Common Information Model

    (CIM) at:

http://pic.dhe.ibm.com/infocenter/strhosts/ic/index.jsp?topic=%2Fcom.ibm.help.strghosts.doc%2Fhsg_smi-s_main.html

The high-level steps to set up System Center 2012 VMM with Storwize family or SVC systems include:

      Preparing the VMM server to make sure that VMM can communicate with the storage.

    This includes some registry and Microsoft patch changes.

  Discovering and connecting the storage system. This involves creating a connection to the Storwize or SVC system.

  Classifying storage levels, based on performance tiers or RAID types.

       Allocating storage to Hyper-V host groups. Before using VMM to provision storage for

    virtual machines, the storage pools must be allocated to host groups.

     After these steps are completed, the solution is ready to start provisioning storage for Hyper-V hosts from

    the VMM console.


    There is another IBM solution guide that covers this configuration in depth for Storwize and SVC systems

    and you can find the paper at:

    ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102408

Backup

There are several options to consider when discussing Hyper-V backup planning. A backup can capture

    an entire virtual machine, as a point-in-time snapshot of the entire system. This type of snapshot can be

initiated from Hyper-V Manager or VMM.
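As a small example (the VM and snapshot names are illustrative), the same point-in-time snapshot can be taken with PowerShell:

    # Create a snapshot (checkpoint) of the entire virtual machine
    Checkpoint-VM -Name "VM01" -SnapshotName "Before-maintenance"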

    Figure 16: Hyper-V Manager virtual machine snapshot

However, backup software can also back up applications within the virtual machine, known as application-aware backups. Often, both methods are combined to provide the highest level of integrity and data

    recovery. For most Microsoft environments and applications, VSS is the preferred and often only backup

    method supported. VSS providers can be both software and hardware based. Most solutions make use of

    hardware-based VSS if it is available, as it allows the storage system to do the heavy lifting, rather than

    the operating system.

    For Hyper-V, the IBM VSS Hardware Provider supports installation within the VM and provides VSS

    snapshot capability of direct-attached volumes just like a physical server. These are application-aware

    backups, using a VSS requestor and the application writer such as Exchange or SQL Server to coordinate

    the snapshots. IBM provides a proven VSS backup solution with IBM Tivoli® Storage FlashCopy®

    Manager.


    IBM FlashCopy

    The FlashCopy function of Storwize family and SVC systems creates point-in-time snapshots of entire

    volumes. The data is available immediately on the target volume. Although it takes some time to complete

fully, the result is a full clone of the source volume. The copies are available immediately for read and write

    operations. FlashCopy works with normal or thin provisioned volumes – for either source or target

    locations.

    FlashCopy is fully compatible with the Microsoft VSS framework to coordinate snapshots between backup

    requestors, application writers, and software or hardware providers. IBM supports hardware VSS

    operations with the IBM VSS Hardware Provider.

    There are several methods available for creating snapshots, such as copy-on-write, redirect-on-write, split

    mirror, and so on. IBM FlashCopy uses a unique method of copy-on-write with background copy. This

    provides both an immediate copy of the data, and a full clone or copy after the background copy is

    complete. The time it takes for the background copy to finish depends on the copy rate, and what other

    loads are occurring on the storage system. The copy rate can be set between 0 and 100, with the default

    set to 50. This is recommended to balance the speed of copying and minimizing impact on the storage

    system resources. If faster copies are required, the copy rate can be set closer to 100; however, storage

performance must be monitored to ensure that the impact on other storage operations remains acceptable.
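As a hedged CLI sketch (volume and mapping names are examples; verify the syntax against the CLI reference for your software level), a FlashCopy mapping with the default copy rate can be created and started from the Storwize or SVC command line, invoked here through an SSH client:

    # Create a FlashCopy mapping from a source to a target volume with copy rate 50
    & plink.exe -ssh superuser@svc-cluster "svctask mkfcmap -source HyperV_CSV1 -target HyperV_CSV1_copy -copyrate 50 -name fcmap_csv1"

    # Prepare and start the mapping; the target is usable while the background copy runs
    & plink.exe -ssh superuser@svc-cluster "svctask startfcmap -prep fcmap_csv1"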

    For Hyper-V environments, the important thing to remember is that this or any snapshot technology snaps

the entire volume. If the volume is used to store VHDs or multiple file systems, the snapshot captures everything on it. When the data is restored, it overwrites everything that is on the disk. As a result,

    the recommendation when using snapshots is to use directly connected dedicated volumes for data

locations to ensure that other data is not overwritten during restores. Some snapshot backup software, such as Tivoli Storage FlashCopy Manager, allows a hybrid approach in which the entire volume is captured but data can still be restored granularly, thereby avoiding the otherwise full-volume overwrite.

    Tivoli Storage FlashCopy Manager

    Backing up business-critical applications running on virtual machines has often been a hurdle for IT

    administrators. IBM Tivoli Storage FlashCopy Manager meets this challenge with support for hardware-

    assisted snapshots within Hyper-V guest machines using IBM VSS Hardware Provider and the IBM

    Storwize family products and SVC. This provides application-aware VSS backup capability for Hyper-V

    guests, with the VSS Hardware Provider and FlashCopy Manager running on the guest machine as if it

    were a physical server.

    Figure 17 shows a sample configuration of FlashCopy Manager backup for Exchange servers running on

    Hyper-V virtual machines.


    Figure 17: FlashCopy Manager Hyper-V configuration

Figure 18: View of FlashCopy Manager GUI


FlashCopy Manager provides a feature-rich graphical interface and full command-line capabilities. Figure 18 shows a view of the FlashCopy Manager GUI in an Exchange environment.

     Another IBM solution guide is available that covers FlashCopy Manager configuration with Hyper-V. The

    paper can be found at: ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102328

    Easy Tier

    The Storwize family and SVC systems include a feature called IBM System Storage Easy Tier that

dynamically moves hot data to faster-performing SSDs, and back again when the data becomes cold. Using a complex algorithm built into the Storwize and SVC software, the system monitors data access patterns over a period of time, and after the most active data is identified, it is migrated to SSDs.

    To activate Easy Tier, an existing storage pool made up of traditional disks just needs an SSD MDisk

    added to the pool. The pool then becomes a hybrid pool, and Easy Tier is automatically activated at this

point. As soon as the SSD MDisks are added, Easy Tier starts measuring performance in preparation for data block migrations. By default, the monitoring process runs for about 24 hours before data starts

    migrating.

    When sizing Hyper-V storage pools, spread the virtual machine storage resources out over several pools.

    Place the heavy I/O virtual machine storage in a pool or group of pools that will have SSDs added to it.

    Then add an equal amount of SSDs to each pool to spread out the benefit evenly. If many virtual servers

are pulling from one pool, only a percentage of the servers can benefit from the SSDs, as they contend for

    SSD resources. For large mail servers, for example, the pools might be configured for only one server per

    pool, to ensure that each server can benefit from Easy Tier and the SSDs in each pool evenly.

Easy Tier also has a monitoring mode that can be enabled at the storage pool level; it measures the data and predicts the performance gains that can be expected. This provides an opportunity to plan and weigh the cost benefits of SSDs before actually purchasing and installing them in the system.

    The monitoring measurements are predictions, and the actual benefit can vary depending on how the data

    is structured and the system is configured. Before deploying Easy Tier in production, the configurations

need to be tested in a lab or a preproduction sandbox environment to validate the SSD distribution design and corresponding performance gains.

    IBM Real-time Compression

Beginning with Storwize and SVC software version 7.1, IBM Real-time Compression™ is

    included and implemented by simply creating a compressed volume, which is a new type of volume. This

    feature uses the Random Access Compression Engine (RACE), previously implemented in the IBM Real-

    time Compression Appliance™, which compresses inbound data before it is written to disk.

Since Real-time Compression is defined at the volume level, any Hyper-V virtual hard disks hosted on the

    volume are compressed. Of course, direct-attached disks can also be provisioned as compressed

    volumes.
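Figure 19 shows the GUI dialog for creating a compressed volume. The following is a hedged CLI sketch of the same operation (pool, I/O group, and volume names are examples; verify the parameters against the CLI reference for your software level):

    # Create a 500 GB compressed, thin-provisioned volume in an existing storage pool
    & plink.exe -ssh superuser@svc-cluster "svctask mkvdisk -mdiskgrp Pool1 -iogrp 0 -size 500 -unit gb -rsize 2% -autoexpand -compressed -name HyperV_Compressed1"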


    Figure 19: Creating a new compressed volume

Data can be compressed by up to 80%, allowing up to five times as much data to be stored in the same physical

    space. The compression can be used with live and heavy I/O applications such as databases or email,

    and it occurs in real time without a queue of uncompressed data waiting to be processed. What all of this

    means is no performance impact and very dense storage for a better storage return on investment (ROI).

    With Hyper-V and other virtualization solutions, testing has shown compression ratios in the 45% to 75%

range, depending on the workload. File systems that contain data that is already in a compressed format, such as audio, video, and compressed files, are not good candidates for Real-time Compression, as the

benefits would be minimal. It would be similar to placing a number of already compressed multimedia files in a compressed archive; the end result might not show much improvement. Because databases store data in

    tables, these types of files are good candidates for a compressed volume, with expected compression

    ratios around 50% to 80%. When it comes to Hyper-V, the benefits are going to be in line with the target

    workload, similar to the way a physical server benefits.

    It is possible to predict the compression benefits by using a new IBM utility called the Comprestimator.

    This is a command-line utility that uses a highly complex statistical algorithm to sample and predict

    compression ratios for a given workload on the sampled volumes. You can find details about the

    Comprestimator download and additional information at:

    ibm.com/support/docview.wss?uid=ssg1S4001012


    Conclusion

    This paper summarizes the Storwize family products and SVC capabilities and configurations with

    Microsoft Windows Server 2012 and Hyper-V. The reliability and performance of the Storwize family

    products and SVC with Microsoft workloads has been proven by customer deployments and involvement

with partner testing programs such as SQL Server I/O Reliability, SQL Server Fast Track, Hyper-V Fast Track, and the Exchange Solution Reviewed Program. IBM virtualized storage systems combined with the

    wide capabilities of Hyper-V and System Center 2012 VMM provide an efficient and simplified

virtualization solution for today’s budget-conscious data centers. Refer to the “Resources” section for

    additional detailed information.


    Resources

    The following websites provide useful references to supplement the information contained in this paper:

      IBM Systems on PartnerWorld

    ibm.com/partnerworld/systems

      IBM Storwize family products

    ibm.com/systems/storage/storwize/

      IBM SAN Volume Controller

    ibm.com/systems/storage/software/virtualization/svc/

      IBM Redbooks

    ibm.com/redbooks

  IBM Publications Center
www.elink.ibmlink.ibm.com/public/applications/publications/cgibin/pbi.cgi?CTY=US

      IBM System x servers

    ibm.com/systems/x/index.html

      Microsoft TechNet: Hyper-V architecture and features overview

    http://technet.microsoft.com/en-us/library/hh831531.aspx

      Microsoft TechNet: General System Center 2012 VMM overview and support

    http://technet.microsoft.com/en-us/library/gg610610.aspx


    Trademarks and special notices

    © Copyright IBM Corporation 2014.

    References in this document to IBM products or services do not imply that IBM intends to make them

    available in every country.

    IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business

    Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked

    terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these

    symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information

    was published. Such trademarks may also be registered or common law trademarks in other countries. A

    current list of IBM trademarks is available on the Web at "Copyright and trademark information" at

    www.ibm.com/legal/copytrade.shtml. 

    Microsoft, Windows, SQL Server, Windows NT, and the Windows logo are trademarks of Microsoft

    Corporation in the United States, other countries, or both.

    Other company, product, or service names may be trademarks or service marks of others.

    Information is provided "AS IS" without warranty of any kind.

     All customer examples described are presented as illustrations of how those customers have used IBM

    products and the results they may have achieved. Actual environmental costs and performance

    characteristics may vary by customer.

    Information concerning non-IBM products was obtained from a supplier of these products, published

    announcement material, or other publicly available sources and does not constitute an endorsement of

    such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly

    available information, including vendor announcements and vendor worldwide homepages. IBM has not

    tested these products and cannot confirm the accuracy of performance, capability, or any other claims

    related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the

    supplier of those products.

     All statements regarding IBM future direction and intent are subject to change or withdrawal without notice

    and represent goals and objectives only. Contact your local IBM office or IBM authorized reseller for the

    full text of the specific Statement of Direction.

Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function or delivery schedules with respect to any future products. Such commitments are only made in IBM product announcements. The information is presented here to communicate IBM's current investment and development activities as a good faith effort

    to help with our customers' future planning.

    Performance is based on measurements and projections using standard IBM benchmarks in a controlled

environment. The actual throughput or performance that any user will experience will vary depending upon

    considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the

    storage configuration, and the workload processed. Therefore, no assurance can be given that an

    individual user will achieve throughput or performance improvements equivalent to the ratios stated here.

    Photographs shown are of engineering prototypes. Changes may be incorporated in production models.


     Any references in this information to non-IBM websites are provided for convenience only and do not in

    any manner serve as an endorsement of those websites. The materials at those websites are not part of

    the materials for this IBM product and use of those websites is at your own risk.