
Intel® Cloud Builders Guide to Cloud Design and Deployment on Intel® Platforms
Storage I/O Control: 10Gb Intel® Ethernet with VMware* vSphere 5.0* SIOC

Audience and Purpose

This paper gives IT professionals architectural insights and methodical instructions for demonstrating Storage I/O Control, run end-to-end on VMware* vSphere 5.0 and 10 Gigabit Ethernet (10GbE) on Intel server platforms. It addresses one of the key aspects of the I/O control usage model requirements defined by the Open Data Center Alliance*. Storage I/O Control solutions and 10GbE networks are critical elements of cloud architecture, and 10GbE provides a cost-effective solution for cloud storage architectures based on commonly used Ethernet technology. Our conclusions can inform IT professionals on how I/O control can bring agility and reduced costs to their environments.

September 2011

Intel® Cloud Builders Guide
Intel® Xeon® Processor-based Servers
Storage I/O Control: 10Gb Intel® Ethernet with VMware* vSphere 5.0* SIOC

Intel® Xeon® Processor 5500 Series

Intel® Xeon® Processor 5600 Series


Table of Contents

Executive Summary
Introduction
Test Bed Configuration and Setup
I/O Control Setup
Usage Model Test Results and Analysis
Host Level Storage I/O Resource Allocation for VM
Data-store Array Level Storage I/O Allocation, Congestion, and Latency Management
Enforcing Limit on Peak I/O Resource Usage
Conclusions
References


Executive Summary

This reference architecture discusses storage I/O control technologies as they relate to virtualization on Intel server platforms running VMware* ESXi 5.0. The increasing trend toward virtualizing larger and more powerful servers creates congestion of I/O throughput and challenges in preserving I/O latency in the virtualized environment. More specifically, a large number of Virtual Machines (VMs) creates a network bottleneck because of the aggregated storage I/O over the shared physical network interface and shared storage. In today’s highly consolidated and dynamic virtualized storage environment, administrators want to be able to run all virtual machines comfortably and protect them from undue negative performance impact caused by misbehaving, I/O-heavy virtual machines, often known as the “noisy neighbor” problem. In particular, the service level of critical virtual machines can be protected by giving them preferential I/O resource allocation.

Introduction

Virtualization I/O performance is one of the key considerations in Information Technology shops. VMware* Storage I/O Control (SIOC) can be used on Intel server platforms to address many of these I/O challenges and may be beneficial to users of virtualization technology.

SIOC achieves these benefits by utilizing a number of Intel platform hardware capabilities and extending the constructs of shares and limits, used extensively for CPU and memory, to manage the allocation of storage I/O resources. SIOC improves upon the previous host-level I/O scheduler by detecting and responding to congestion occurring at the array, and enforcing share-based allocation of I/O resources across all virtual machines and hosts accessing a shared data-store.
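As a rough illustration of that behavior, the conceptual Python sketch below shows how a share-based scheduler might divide a host's device queue once observed data-store latency crosses a congestion threshold. This is not VMware's actual implementation; the function and the numbers are hypothetical.

```python
# Conceptual sketch only -- not VMware's implementation. Illustrates how a
# share-based scheduler could divide a host's device queue slots among VMs
# once observed data-store latency exceeds the congestion threshold.
def adjust_queue_slots(observed_latency_ms: float, threshold_ms: float,
                       vm_shares: dict[str, int], queue_depth: int) -> dict[str, int]:
    if observed_latency_ms <= threshold_ms:
        # No congestion detected: every VM may use the full device queue.
        return {vm: queue_depth for vm in vm_shares}
    total = sum(vm_shares.values())
    # Congestion: throttle each VM to a share-weighted slice of the queue.
    return {vm: max(1, queue_depth * s // total) for vm, s in vm_shares.items()}

# 2:1 shares with 35 ms observed latency against a 30 ms threshold:
print(adjust_queue_slots(35.0, 30.0, {"app04": 1000, "app05": 2000}, 32))
# -> {'app04': 10, 'app05': 21}
```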

Key Intel platform hardware capabilities that SIOC utilizes are:

• Intel® QuickData Technology enables data copy by the chipset instead of the CPU, to move data more efficiently through the server and provide fast, scalable, and reliable throughput.

• Direct Cache Access (DCA) allows a capable I/O device, such as a network controller, to place data directly into CPU cache, reducing cache misses and improving application response times.

• Extended Message Signaled Interrupts (MSI-X) distributes I/O interrupts to multiple CPUs and cores, for higher efficiency, better CPU utilization, and higher application performance.

• Receive Side Coalescing (RSC) aggregates packets from the same TCP/IP flow into one larger packet, reducing per-packet processing costs for faster TCP/IP processing.

• Low Latency Interrupts tune interrupt interval times depending on the latency sensitivity of the data, using criteria such as port number or packet size, for higher processing efficiency.

• Intel® Virtualization Technology for Connectivity (Intel® VT-c) using Virtual Machine Device Queues (VMDq), and VMware* NetQueue offload the sorting burden from the CPU and hypervisor to the network controller, to accelerate network I/O throughput.

SIOC extends the constructs of shares and limits, used extensively for CPU and memory, to manage the allocation of storage I/O resources.

With SIOC, VMware vSphere administrators can mitigate the performance loss of critical workloads due to high congestion and storage latency during peak load periods. The use of SIOC will produce better and more predictable performance behavior for workloads

during periods of congestion. Benefits of leveraging SIOC:

• Provides performance protection by enforcing proportional fairness of access to shared storage

• Detects and manages bottlenecks at the array

• Maximizes your storage investments by enabling higher levels of virtual-machine consolidation across your shared data-stores

The purpose of this reference architecture is to demonstrate the usage models enabled by this I/O control technology, show how to set up and configure the different parameters, and serve as a reference for end users deploying this technology in their virtualized environments.

Test Bed Configuration and Setup

To demonstrate Storage I/O Control in a virtualized environment, set up the hardware test bed with the components detailed in Table 1. The completed test bed, shown in Figure 1, runs a constant and stable iSCSI I/O workload on two virtual machines. Use cases then follow that demonstrate the application and result of applying resource allocation policies to the virtualized environment using SIOC. Three Intel® Xeon® Processor 5600 series server platforms form 1) a VMware vSphere Cluster of two servers and 2) a Microsoft Storage Server, connected by 3) one private 10GbE network built with the Intel® Ethernet Server Adapter X520 Series 10GbE NIC in conjunction with a Cisco 5020 10GbE switch.

The VMware vSphere Cluster contains two VMware ESXi 5.0 hosts, two shared data-stores, and two virtual machines, both initially homed on one of the VMware ESXi hosts. A Virtual Distributed Switch (vDSwitch) is required, using one 10GbE port from each host's Intel Ethernet Server Adapter X520 Series 10GbE NIC. The vDSwitch carries the VMware vMotion migration of a VM between VMware ESXi hosts as well as the iSCSI data traffic, generated by Iometer, between the VMs and the Microsoft Storage Server.

Each virtual machine, sized at 20 GB, is installed with the Windows* Server 2008 OS. Additionally, a second 5GB hard drive is added from the 40 GB iSCSI LUN located on the Microsoft Storage Server; this drive serves as the I/O workload target.

The storage hardware used for this test was a white-box server with two Intel® Xeon® 5670 processors, 25GB of RAM, and 500GB SAS drives in a RAID 5 configuration. The platform runs Windows Server 2008 R2 with Microsoft Storage Server software to enable iSCSI targets. Two targets/LUNs were created: one for the guest operating systems and one for the storage testing.

For the purposes of this reference architecture, a synthetic I/O workload is required to demonstrate the various VMware SIOC policy selections available to the administrator. Iometer is configured and started on each VM, targeting the 5GB Windows partition with sequential reads across the 10GbE network to the Microsoft Storage Server. This Iometer workload must be tuned to achieve the I/O latency defined later in the individual use cases. The initial default trigger level for SIOC is 30 ms, so the initial workload tuning needs to exceed this trigger point. Use esxtop and switch the display to the Disk Adapter view to validate the baseline latency.

Table 1: Setup and Configuration

Host Servers (two required) • Intel® Xeon® processor 5680 @ 3.33 GHz (2 sockets, 6 cores/socket), SMT enabled, 24GB RAM, 3x146GB SAS HDD RAID-0

• VMware ESXi 5.0 build 411354

• 2 VMs for guest OS – 25GB VMDK, 4 vCPU, 4GB RAM, Windows Server 2008 R2

• System BIOS version 58

Management Server • VMware vSphere Client version 5.0 build 434157

• VMware vCenter Server version 5.0 build 434157

Network Configuration • Cisco Nexus 5020 Chassis 40x10GE/Supervisor

• Intel® Ethernet Server Adapter X520 Series 10GbE NIC

• VMware ESXi 5.0 Native iSCSI Initiator

• MTU = 1500

Storage Resource • Dual Intel Xeon 5670 processors, 24GB RAM, 500GB SAS drives in a RAID 5 configuration

• Intel® Ethernet Server Adapter X520 Series 10GbE NIC

• Microsoft Storage Server 2008 configured with one 40 GB iSCSI target and presented to both VMware Hosts

Application Configuration • IOmeter v.2006.7.27

• # of Managers = 1

• # of Workers = 4

• # of Outstanding I/O = 32

• I/O Size = 1MB packet, 100% read, 100% sequential

Data Collection Tools • Throughput in MB/s measured from Iometer screenshots

• Captured throughput and latency from esxtop


I/O Control Setup

Configuration Overview

VMware vSphere data centers supporting clusters of virtualized hosts can use shared storage infrastructure. Applications in those virtualized hosts that share the storage infrastructure can become I/O constrained without proper control of the shared resources; this control is what Storage I/O Control (SIOC) provides. SIOC is the ability to set thresholds and share adjustments within the VMware ESXi environment. Setting rate limits and triggers allows administrators to define a pre-programmed response to contention for storage resources, avoiding congestion and ensuring performance for critical VMs. Three SIOC settings are used for this experiment: Share Value, Limit-IOPs, and Congestion Threshold.

Share Value and Limit-IOPs are both set on a per-VM basis. Share Value assigns a relative weight to the VM, giving it more or less priority for the data-store. The changes made here affect the VMs on a single host regardless of whether SIOC is enabled or not. Enabling SIOC comes into play when there is more than one host. As part of this experiment, a VM is migrated from one host to another.

With SIOC enabled, settings made to Share Value or Limit-IOPs are applied during the migration and on the host to which the VM is migrated. Shares can be set to High (2000), Normal (1000), Low (500), or Custom, with Custom in the range of 200 to 4000. Limit-IOPs sets the upper bound for storage resources in the range of 16 to 2,147,483,647.

Figure 1: Architecture of Test Environment

Congestion Threshold is a setting in the data-store (target) properties and is only available with SIOC enabled. Congestion Threshold is a latency trigger that, when exceeded, throttles I/O throughput. A lower value results in lower device latency and stronger virtual machine I/O performance isolation, whereas a higher value typically results in higher aggregate throughput and weaker isolation. Congestion Threshold can be set in the range of 5ms to 100ms.
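For administrators who prefer scripting these settings, the hedged sketch below shows roughly how a per-VMDK Share Value and Limit-IOPs could be applied with pyVmomi, the Python bindings for the vSphere API. The vCenter address, credentials, and the Low/62 values are placeholders chosen for illustration, and the exact pyVmomi class paths should be verified against your environment; this is a sketch, not the procedure used in this paper.

```python
# Hedged sketch -- placeholder vCenter address and credentials; the pyVmomi
# type paths used here should be verified against your environment.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only; validate certs in production
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

# Find the VM by name (app05 in this paper's test bed).
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app05")

# Locate the first virtual disk and rewrite its storage I/O allocation:
# Share Value "Low" (500) and a Limit-IOPs of 62 (~500 Mbps with 1MB I/Os).
disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk))
disk.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
    shares=vim.SharesInfo(level=vim.SharesInfo.Level.low, shares=500),
    limit=62)

spec = vim.vm.ConfigSpec(deviceChange=[vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=disk)])
task = vm.ReconfigVM_Task(spec=spec)      # wait for the task before proceeding
```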

The steps to configure Share Value, Limit-IOPs, Congestion Threshold, and enable SIOC are as follows:

1. From the VMware vSphere Console home page, navigate to the Inventory > Datastores and Datastore Clusters page.

2. Select the LUN on which testing will be performed to see the available options and views. Navigate to the Virtual Machines tab to begin the process. Note that both VMs have the default settings for Limit-IOPs and Data-store % Shares. To make changes to these options, right-click the VM to be changed and select Edit Settings.


3. Navigate to the Resources tab and then choose Disk. There are two primary items that can be changed: Shares and Limit-IOPs. The Shares adjustment gives a VM a higher or lower percentage of the host's resources. Shares can be set to High (2000), Normal (1000), Low (500), or Custom, with Custom in the range of 200 to 4000. Limit-IOPs sets the upper bound for storage resources in the range of 16 to 2,147,483,647.

4. A “Share” example of 1:1 or Normal-Normal shows a Share Value of 1000 and 50% Share per VM.

5. A “Share” example of 2:1 or Normal-Low shows a Share Value of 1000 and 500 with 66% and 33% Share per VM respectively.

6. A “Share” example of 1:1 or Normal-Normal with a Limit-IOPs value of 62 (500 Mbps) shows a Share Value of 1000 and a 50% share per VM, plus rate-limiting of the App05 VM to 62 IOPs.


7. Navigate to the Configuration tab and select the storage volume or LUN to be modified. On this page there are two notable items: the first is a list of servers that have access to the storage, and the second is the state of SIOC. Clicking Properties displays the storage properties dialog box. For this paper's use cases, there are two options: enable/disable SIOC and edit the Congestion Threshold, which is found by clicking the Advanced button.

8. Changing the Congestion Threshold is only possible with SIOC enabled.
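The data-store-level settings from steps 7 and 8 can likewise be scripted. The sketch below is a hedged pyVmomi example that assumes the `si` connection from the previous sketch and a placeholder data-store name; verify the class and method names against your environment before relying on it.

```python
# Hedged sketch -- assumes an existing pyVmomi connection 'si' (see the previous
# sketch) and a placeholder data-store name; verify type paths before use.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.Datastore], True)
ds = next(d for d in view.view if d.name == "iscsi-lun-01")   # placeholder name

spec = vim.StorageResourceManager.IORMConfigSpec()
spec.enabled = True                 # enable Storage I/O Control on the data-store
spec.congestionThreshold = 30       # milliseconds; valid range is 5-100 ms

task = content.storageResourceManager.ConfigureDatastoreIORM_Task(
    datastore=ds, spec=spec)        # wait for the task to complete
```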

Usage Model Test Results and Analysis

Usage Model #1: Host Level Storage I/O Resource Allocation for VM

This usage model covers storage I/O resource allocation at the host level. The setup consists of a single VMware ESXi 5.0 server hosting two virtual machines (app04 and app05), and the disk I/O shares for the virtual disks (VMDKs) of these virtual machines are varied with the SIOC feature enabled and disabled. As a result there are four scenarios that will be discussed below. The scenarios are:

a. Disk shares ratio of 1:1 (with SIOC disabled)

b. Disk shares ratio of 1:1 (with SIOC enabled)

c. Disk shares ratio of 2:1 (with SIOC disabled)

d. Disk shares ratio of 2:1 (with SIOC enabled)

Note: Please refer to the section “I/O Control Setup” for how to set shares on virtual disks and how to enable/disable SIOC. Also note that the Congestion Threshold was set to 30ms (default) in all scenarios when SIOC is enabled.

Disk shares ratio of 1:1 (with SIOC disabled)

In this sub-case the disk I/O shares for the virtual disks on both VMs, app04 and app05, are set to “Normal” or 1000 shares as shown in the figure below. Hence the disk share ratio is 1:1 for both VMs. Also note that SIOC is disabled.

Once the disk shares are set, Iometer (1MB 100% read test) was run on both VMs. The throughput (MBps) and IOPs were captured from the Iometer tool running on the VMs, and the total aggregate throughput was captured from the VMware ESXi host by running esxtop, as shown in the figure below.

The throughput on the app04 and app05 VMs was observed to be ~328 MBps and ~329 MBps, respectively.


Disk shares ratio of 1:1 (with SIOC enabled)

With the same disk share ratio as in sub-case (a), SIOC was enabled and the Iometer test was re-run on the VMs; the throughput is ~329 MBps on both app04 and app05, as shown in the figure below.

Disk shares ratio of 2:1 (with SIOC disabled)

In this sub-case the disk shares were set to “Normal” (1000 shares) on app04 and “High” (2000 shares) on app05. As a result the disk share ratio is 2:1, as shown in the figure below.


With the disk shares set, Iometer was run on both VMs, and throughput of about ~430 MBps and ~215 MBps was observed on app05 and app04, respectively, as shown in the figure below. The throughput numbers were proportional to the 2:1 disk shares.

Disk shares ratio of 2:1 (with SIOC enabled)

With SIOC enabled, the above test was re-run and the throughput from the Iometer tool was observed to be ~440 MBps and ~220 MBps for app05 and app04, respectively, as shown in the screenshot below. These throughput numbers are again proportional to the 2:1 disk shares.


The above four scenarios can be summarized in a simple table as follows:

SIOC        Disk shares ratio    App05 VM    App04 VM
Disabled    1:1                  329 MBps    328 MBps
Enabled     1:1                  328 MBps    328 MBps
Disabled    2:1                  430 MBps    215 MBps
Enabled     2:1                  440 MBps    220 MBps

From the above table it can be seen that, when the VMs are on the same host, the performance of the VMs is proportional to the distribution of disk shares irrespective of whether SIOC is enabled or disabled. This is because the local host-level disk scheduler enforces the shares.
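The proportionality shown in the table can be sanity-checked with a few lines of plain Python. The sketch below is illustrative only; it is not part of any VMware API, and the aggregate figure is simply the sum of the two measurements above.

```python
# Illustrative only: split an observed aggregate throughput in proportion to
# each VM's disk shares, mirroring the host-level scheduler's behavior.
def split_by_shares(shares: dict[str, int], aggregate_mbps: float) -> dict[str, float]:
    total = sum(shares.values())
    return {vm: round(aggregate_mbps * s / total, 1) for vm, s in shares.items()}

# 2:1 shares against the ~660 MBps aggregate observed above (440 + 220):
print(split_by_shares({"app05": 2000, "app04": 1000}, 660))
# -> {'app05': 440.0, 'app04': 220.0}
```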

Usage Model #2: Data-store Array Level Storage I/O Allocation, Congestion and Latency Management

This usage model discusses the implementation of resource allocation at the shared data-store level. The setup consists of two VMware ESXi 5.0 servers that are part of a cluster, with the virtual machines app04 and app05 hosted one on each server, and the disk I/O shares for the virtual disks (VMDKs) of these virtual machines are varied with the Storage I/O Control (SIOC) feature enabled and disabled. As a result there are four scenarios that will be discussed below. The scenarios are:

a. Disk shares ratio of 1:1 (with SIOC disabled)

b. Disk shares ratio of 1:1 (with SIOC enabled)

c. Disk shares ratio of 2:1 (with SIOC disabled)

d. Disk shares ratio of 2:1 (with SIOC enabled)

Note: Please refer to the section “I/O Control Setup” for how to set shares on virtual disks and how to enable/disable SIOC. Also note that the Congestion Threshold was set to 30ms (default) in all scenarios when SIOC is enabled.

Disk shares ratio of 1:1 (with SIOC disabled)

To start, the disk I/O shares for the virtual disks on both virtual machines app04 and app05 are set to “Normal” or 1000 shares as shown in the figure below. Hence the disk share ratio is 1:1 for both VMs.


With a disk shares ratio of 1:1, the workload, i.e., Iometer (1MB 100% read test), was run on both VMs. The throughput (MBps) and IOPs were captured from the Iometer tool running on the VMs. Throughput was also captured by running esxtop on the two servers. Both of these screenshots are shown below.

The throughput on the app04 and app05 VMs was observed to be ~395 MBps and ~393 MBps, respectively.


Disk shares ratio of 1:1 (with SIOC enabled)

With the same disk share ratio as in the above sub-case, SIOC was enabled, the Iometer test was re-run on the VMs, and the screenshots for Iometer and esxtop were captured as shown below.

The throughput on the app04 and app05 VMs was observed to be ~394 MBps and ~393 MBps, respectively.


Disk shares ratio of 2:1 (with SIOC disabled)

In this sub-case the disk shares were set to “Normal” (1000 shares) on app04 and “High” (2000 shares) on app05. As a result the disk share ratio is 2:1, as shown in the figure below.

With the disk shares set, Iometer was run on both VMs, and throughput of about ~393 MBps and ~394 MBps was observed on app05 and app04, respectively, as shown in the following screenshots.


Disk shares ratio of 2:1 (with SIOC enabled)

With SIOC enabled, the above test was re-run and the throughput from the Iometer tool was observed to be ~504 MBps and ~276 MBps for app05 and app04, respectively, as shown in the screenshots below. With SIOC enabled on the data-store, the throughput numbers are proportional to the 2:1 disk shares.

For the usage model #2, the above four scenarios can be summarized in a simple table as follows:

SIOC        Disk shares ratio    App05 VM    App04 VM
Disabled    1:1                  393 MBps    395 MBps
Enabled     1:1                  393 MBps    394 MBps
Disabled    2:1                  393 MBps    394 MBps
Enabled     2:1                  504 MBps    276 MBps


The key takeaway from this usage model is that, when the VMs are on multiple/different VMware ESXi hosts, the disk share distribution is enforced only when SIOC is enabled on the common shared data-store. This is because SIOC provides data-store-wide disk scheduling that responds to congestion at the storage array, not just at the host-side HBA. This provides the ability to monitor storage traffic and the priorities of all the virtual machines accessing the shared data-store. It is accomplished by way of a data-store-wide distributed disk scheduler that uses per-virtual-machine I/O shares to allow a higher-priority workload to get better performance. The data-store-wide disk scheduler totals the disk shares for all the VMDKs that a virtual machine has on the given data-store. The scheduler then calculates what percentage of the shares the virtual machine has compared to the total number of shares of all the virtual machines running on the data-store, as illustrated in the sketch below.
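A minimal illustrative sketch of that calculation (hypothetical helper code, not part of vSphere) might look like this:

```python
# Illustrative only: total each VM's VMDK shares on a data-store, then compute
# the percentage of the data-store-wide shares each VM is entitled to.
def datastore_share_percentages(vmdk_shares: dict[str, list[int]]) -> dict[str, float]:
    per_vm = {vm: sum(s) for vm, s in vmdk_shares.items()}
    total = sum(per_vm.values())
    return {vm: round(100.0 * s / total, 1) for vm, s in per_vm.items()}

# One VMDK per VM in this test bed: app05 at High (2000), app04 at Normal (1000).
print(datastore_share_percentages({"app05": [2000], "app04": [1000]}))
# -> {'app05': 66.7, 'app04': 33.3}; the ~504 and ~276 MBps measured with SIOC
#    enabled roughly track these entitlements.
```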

Usage Model #3: Enforcing Limit on Peak I/O Resource Usage

This usage model demonstrates the performance impact of I/O interference, such as that seen during VM migration, and how I/O control can be used to manage the interference. The setup consists of a single VMware ESXi 5.0 server hosting two virtual machines (app04 and app05); the impact on the disk I/O of these virtual machines is measured with the SIOC feature enabled and disabled during the migration of one of the VMs to a second host. All tests start with the two VMs (App04 and App05) on the single VMware ESXi 5.0 host and running Iometer. The VMs were tested with 20GB VMDKs, 4 vCPUs, and 10GB of memory. Iometer runs a 1MB payload size and 32 outstanding I/Os per target. Then, while the App05 VM is migrated to the second host, IOPs are measured on App04 before and during each test.

Test Matrix

As mentioned, the adjustments made in this SIOC experiment were Share Value, Limit-IOPs, and Congestion Threshold. Each adjustment consisted of four tests: SIOC enabled and disabled, and VM migration outbound (Tx) and inbound (Rx). A baseline test was completed before any adjustments were made to capture the default measurements. For each test, the IOPs were recorded prior to the VM migration; this is the “Starting State” column of the Test Matrix. While the VM was migrated, the lowest (most impacted) value was recorded in order to show the impact on existing running VMs; this value is the “Impacted State” column of the Test Matrix. Five adjustments plus the baseline were recorded in this experiment (see the arithmetic sketch after this list for how the Limit-IOPs values map to bandwidth):

• Share Value of 2:1 (1000:500)

• Share Value of 4:1 (2000:500)

• Share Value of 1:1 (1000:1000); Limit-IOPs to 200 on migrating VM

• Share Value of 1:1 (1000:1000); Limit-IOPs to 62 on migrating VM

• Share Value of 2:1 (1000:500); Limit-IOPs to 62 on migrating VM
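Because the Iometer workload uses a fixed 1MB I/O size, a Limit-IOPs value is effectively a bandwidth cap. The illustrative sketch below shows the arithmetic behind the 62 IOPs ≈ 500 Mbps figure used above; the helper function is hypothetical.

```python
# Illustrative arithmetic: with a fixed I/O size, an IOPs cap is a bandwidth cap.
def iops_to_mbps(iops: int, io_size_mb: float = 1.0) -> float:
    return iops * io_size_mb * 8   # MB/s converted to megabits per second

print(iops_to_mbps(62))    # ~496 Mbps -- the ~500 Mbps cap cited in the setup
print(iops_to_mbps(200))   # ~1600 Mbps
```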


Several key takeaways are evident from the Test Matrix. When the VMs are on the same VMware ESXi host, the disk share distribution is enforced whether or not SIOC is enabled on the shared data-store. When a VM was migrated, the storage workload was impacted to the extent of its share value setting: the more skewed the share setting, the more cycles were taken from the VM (App04) under test. The adjustment of Limit-IOPs had the best overall impact of all the tests, because its rate-limit function is completely adjustable. As noted in the previous tests, when VMs are on multiple/different VMware ESXi hosts, the disk share distribution is enforced only when SIOC is enabled. Additionally, because the disk scheduler responds to congestion at the storage array, it allows the higher-priority VM to achieve better performance.

Conclusions

The I/O control capabilities available on Intel server platforms through VMware vSphere SIOC offer I/O prioritization to virtual machines accessing shared storage resources. Administrators now have a new tool to help them increase consolidation density, detect and manage bottlenecks at the array only when congestion exists, and provide performance protection, with the peace of mind of knowing that during periods of peak I/O activity, prioritization and proportional fairness will be enforced across all the virtual machines accessing that shared resource. It allows administrators to implement quality-of-service policies for storage workloads. This is a step forward in the journey toward automated and policy-based management of shared storage resources.

More Intel/VMware reference architectures are available at the Intel Cloud Builders Web site at www.intel.com/cloudbuilders.

Additional Usage Models Under Development:

Flexible partitioning and service assurance using VMware ESXi traffic types and flows on a vNetwork Distributed Switch (vDS) with Network I/O Control (NetIOC) to provide users with the following features:

• Isolation: ensure traffic isolation so that a given flow will never be allowed to dominate over others, thus preventing drops and undesired jitter

• Shares: allow flexible networking capacity partitioning to help users to deal with overcommitment when flows compete aggressively for the same resources

• Limits: enforce traffic bandwidth limit on the overall vDS set of dvUplinks

• Load-Based Teaming: efficiently use a vDS set of dvUplinks for networking capacity

Disclaimers

∆ Intel processor numbers are not a measure of performance. Processor numbers differentiate features within each processor family, not across different processor families. See www.intel.com/products/processor_number for details.

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL® PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL’S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. UNLESS OTHERWISE AGREED IN WRITING BY INTEL, THE INTEL PRODUCTS ARE NOT DESIGNED NOR INTENDED FOR ANY APPLICATION IN WHICH THE FAILURE OF THE INTEL PRODUCT COULD CREATE A SITUATION WHERE PERSONAL INJURY OR DEATH MAY OCCUR.

Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked “reserved” or “undefined.” Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order. Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or by visiting Intel’s Web site at www.intel.com.

Copyright © 2011 Intel Corporation. All rights reserved. Intel, the Intel logo, Xeon, Xeon inside, and Intel Intelligent Power Node Manager are trademarks of Intel Corporation in the U.S. and other countries. *Other names and brands may be claimed as the property of others.
