
EMC® INFRASTRUCTURE FOR CITRIX XENDESKTOP 5.6

EMC VNX™ Series (NFS), VMware vSphere® 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1

Simplify management and decrease TCO
Enable users to install applications on pooled desktops
Minimize the risk of virtual desktop deployment

Proven Solutions Guide

EMC Solutions Group

Abstract

This Proven Solutions Guide provides a detailed summary of tests performed to validate an EMC infrastructure for Citrix XenDesktop 5.6 consisting of the EMC VNX series (NFS), VMware vSphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1. This document focuses on sizing and scalability, and highlights new features introduced in EMC VNX, VMware vSphere, and Citrix XenDesktop 5.6.

August 2012


Copyright © 2012 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided “as is.” EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

VMware, ESXi, VMware vCenter, and VMware vSphere are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions.

All other trademarks used herein are the property of their respective owners.

Part Number: H11007

Table of contents

1 Executive Summary
    Introduction to the EMC VNX series
        Introduction
        Software suites available
        Software packages available
    Business case
    Solution overview
    Key results and recommendations

2 Introduction
    Document overview
        Use case definition
        Purpose
        Scope
        Not in scope
        Audience
        Prerequisites
        Terminology
    Reference Architecture
        Corresponding reference architecture
        Reference architecture diagram
    Configuration
        Hardware resources
        Software resources

3 Citrix XenDesktop Infrastructure
    Citrix XenDesktop 5.6
        Introduction
        Deploying Citrix XenDesktop components
        Citrix XenDesktop Controller
        Citrix personal vDisk
        Citrix Profile Management
    vSphere 5.0 Infrastructure
        vSphere 5.0 overview
        Desktop vSphere clusters
        Infrastructure vSphere cluster
    Windows infrastructure
        Introduction
        Microsoft Active Directory
        Microsoft SQL Server
        DNS server
        DHCP server

4 Storage Design
    EMC VNX series storage architecture
        Introduction
        Storage layout
        Storage layout overview
        File system layout
        EMC VNX FAST Cache
        VSI for VMware vSphere
        vCenter Server storage layout
        VNX shared file systems
        Citrix Profile Manager and folder redirection
        EMC VNX for File Home Directory feature
        Capacity

5 Network Design
    Considerations
        Network layout overview
        Logical design considerations
        Link aggregation
    VNX for File network configuration
        Data Mover ports
        LACP configuration on the Data Mover
        Data Mover interfaces
        Enable jumbo frames on Data Mover interface
    vSphere network configuration
        NIC teaming
        Increase the number of vSwitch virtual ports
        Enable jumbo frames for the VMkernel port used for NFS
    Cisco Nexus 5020 configuration
        Overview
        Cabling
        Enable jumbo frames on Nexus switch
        vPC for Data Mover ports
    Cisco Catalyst 6509 configuration
        Overview
        Cabling
        Server uplinks

6 Installation and Configuration
    Installation overview
        Citrix XenDesktop components
        Citrix XenDesktop installation overview
        Virtual Desktop Agent installation
        Citrix XenDesktop machine catalog configuration
        Citrix Profile Manager configuration
    Storage components
        Storage pools
        NFS active threads per Data Mover
        NFS performance fix
        Enable FAST Cache

7 Testing and Validation
    Validated environment profile
        Profile characteristics
        Use cases
        Login VSI
        Login VSI launcher
        FAST Cache configuration
    Boot storm results
        Test methodology
        Pool individual disk load
        Pool LUN load
        Storage processor IOPS and CPU utilization
        FAST Cache IOPS
        Data Mover CPU utilization
        Data Mover NFS load
        vSphere CPU load
        vSphere disk response time
    Antivirus results
        Test methodology
        Pool individual disk load
        Pool LUN load
        Storage processor IOPS and CPU utilization
        FAST Cache IOPS
        Data Mover CPU utilization
        Data Mover NFS load
        vSphere CPU load
        vSphere disk response time
    Login VSI results
        Test methodology
        Desktop logon time
        Pool individual disk load
        Pool LUN load
        Storage processor IOPS and CPU utilization
        FAST Cache IOPS
        Data Mover CPU utilization
        Data Mover NFS load
        vSphere CPU load
        vSphere disk response time

8 Personal vDisk Implementation considerations
    Personal vDisk Implementation considerations
        Storage Layout
        Login Time
        vSphere CPU Utilization

9 Conclusion
    Summary
    References
        Supporting documents
        Citrix documents

List of Tables

Table 1. Terminology
Table 2. Citrix XenDesktop 5.6—Solution hardware
Table 3. Citrix XenDesktop—Solution software
Table 4. VNX5300—File systems
Table 5. vSphere—Port groups in vSwitch0
Table 6. Citrix XenDesktop—Environment profile

List of Figures

Figure 1. Citrix XenDesktop—Reference architecture
Figure 2. VNX5300—Core reference architecture physical storage layout
Figure 3. VNX5300—User data physical storage layout
Figure 4. VNX5300—Virtual desktop NFS file system layout
Figure 5. VNX5300—Personal vDisk NFS file system layout
Figure 6. VNX5300—CIFS file system layout
Figure 7. Citrix XenDesktop—Network layout overview
Figure 8. VNX5300—Ports of the two Data Movers
Figure 9. vSphere—vSwitch configuration
Figure 10. vSphere—Load balancing policy
Figure 11. vSphere—vSwitch virtual ports and MTU settings
Figure 12. vSphere—VMkernel port MTU setting
Figure 13. Select the virtual desktop agent
Figure 14. Select Personal vDisk Configuration
Figure 15. Virtual Desktop Configuration
Figure 16. Select the machine type
Figure 17. Select the cluster host and Master image
Figure 18. Virtual Machine specifications
Figure 19. Active Directory location and naming scheme
Figure 20. Summary and Catalog name
Figure 21. Citrix Profile Manager—Master Image Profile Redirection
Figure 22. Enable Citrix Profile Manager
Figure 23. Citrix Profile Manager path to user store
Figure 24. VNX5300—nThreads properties
Figure 25. VNX5300—File System Mount Properties
Figure 26. VNX5300—FAST Cache tab
Figure 27. VNX5300—Enable FAST Cache
Figure 28. Personal vDisk Boot storm—IOPS for a single Desktop pool SAS drive
Figure 29. Personal vDisk Boot storm—IOPS for a single personal vDisk pool SAS drive
Figure 30. Personal vDisk Boot storm—Desktop Pool LUN IOPS and response time
Figure 31. Personal vDisk Boot storm—Personal vDisk Pool LUN IOPS and response time
Figure 32. Personal vDisk Boot storm—Storage processor total IOPS and CPU utilization
Figure 33. Personal vDisk Boot storm—Personal vDisk FAST Cache IOPS
Figure 34. Personal vDisk Boot storm—Data Mover CPU utilization
Figure 35. Personal vDisk Boot storm—Data Mover NFS load
Figure 36. Personal vDisk Boot storm—ESXi CPU load
Figure 37. Personal vDisk Boot storm—Average Guest Millisecond/Command counter
Figure 38. Personal vDisk Antivirus—Desktop disk I/O for a single SAS drive
Figure 39. Personal vDisk Antivirus—Personal vDisk disk I/O for a single SAS drive
Figure 40. Personal vDisk Antivirus—Desktop Pool LUN IOPS and response time
Figure 41. Personal vDisk Antivirus—Personal vDisk Pool LUN IOPS and response time
Figure 42. Personal vDisk Antivirus—Storage processor IOPS
Figure 43. Personal vDisk Antivirus—FAST Cache IOPS
Figure 44. Personal vDisk Antivirus—Data Mover CPU utilization
Figure 45. Personal vDisk Antivirus—Data Mover NFS load
Figure 46. Personal vDisk Antivirus—vSphere CPU load
Figure 47. Personal vDisk Antivirus—Average Guest Millisecond/Command counter
Figure 48. Login VSI desktop login time—Personal vDisk vs. nonpersonal vDisk
Figure 49. Personal vDisk Login VSI—Desktop disk IOPS for a single SAS drive
Figure 50. Personal vDisk Login VSI—PvDisk disk IOPS for a single SAS drive
Figure 51. Personal vDisk Login VSI—Desktop Pool LUN IOPS and response time
Figure 52. Personal vDisk Login VSI—Personal vDisk Pool LUN IOPS and response time
Figure 53. Personal vDisk Login VSI—Storage processor IOPS
Figure 54. Personal vDisk Login VSI—FAST Cache IOPS
Figure 55. Personal vDisk Login VSI—Data Mover CPU utilization
Figure 56. Personal vDisk Login VSI—Data Mover NFS load
Figure 57. Personal vDisk Login VSI—vSphere CPU load
Figure 58. Personal vDisk Login VSI—Average Guest Millisecond/Command counter
Figure 59. Login VSI—LUN response time (ms)
Figure 60. Login VSI—Login Time
Figure 61. vSphere—CPU Utilization


1 Executive Summary

This chapter summarizes the proven solution described in this document and includes the following sections:

Introduction to the EMC VNX series

Business case

Solution overview

Key results and recommendations

Introduction to the EMC VNX series

Introduction

The EMC® VNX™ series delivers uncompromising scalability and flexibility for the mid-tier user while providing market-leading simplicity and efficiency to minimize total cost of ownership. Customers can benefit from VNX features such as:

Next-generation unified storage, optimized for virtualized applications.

Extended cache by using Flash drives with Fully Automated Storage Tiering for Virtual Pools (FAST VP) and FAST Cache that can be optimized for the highest system performance and lowest storage cost simultaneously on both block and file.

Multiprotocol support for file, block, and object, with object access through EMC Atmos™ Virtual Edition (Atmos VE).

Simplified management with EMC Unisphere™ for a single management framework for all NAS, SAN, and replication needs.

Up to three times improvement in performance with the latest Intel Xeon multicore processor technology, optimized for Flash.

6 Gb/s SAS back end with the latest drive technologies supported:

3.5-in. 100 GB and 200 GB Flash drives; 3.5-in. 300 GB and 600 GB 15k or 10k rpm SAS drives; and 3.5-in. 1 TB, 2 TB, and 3 TB 7.2k rpm NL-SAS drives

2.5-in. 100 GB and 200 GB Flash drives; and 300 GB, 600 GB, and 900 GB 10k rpm SAS drives

Expanded EMC UltraFlex™ I/O connectivity—Fibre Channel (FC), Internet Small Computer System Interface (iSCSI), Common Internet File System (CIFS), network file system (NFS) including parallel NFS (pNFS), Multi-Path File System (MPFS), and Fibre Channel over Ethernet (FCoE) connectivity for converged networking over Ethernet.

The VNX series includes five software suites and three software packages that make it easier and simpler to attain the maximum overall benefits.


Software suites available

VNX FAST Suite—Automatically optimizes for the highest system performance and the lowest storage cost simultaneously (FAST VP is not part of the FAST Suite for the VNX5100™).

VNX Local Protection Suite—Practices safe data protection and repurposing.

VNX Remote Protection Suite—Protects data against localized failures, outages, and disasters.

VNX Application Protection Suite—Automates application copies and proves compliance.

VNX Security and Compliance Suite—Keeps data safe from changes, deletions, and malicious activity.

Software packages available

VNX Total Efficiency Pack—Includes all five software suites (not available for the VNX5100).

VNX Total Protection Pack—Includes the local, remote, and application protection suites.

VNX Total Value Pack—Includes all three protection software suites and the Security and Compliance Suite (the VNX5100 exclusively supports this package).

Business case

Customers require a scalable, tiered, and highly available infrastructure to deploy their virtual desktop environments. Several new technologies are available to assist them in architecting a virtual desktop solution, but customers need to know how best to use these technologies to maximize their investment, support service-level agreements, and reduce their desktop total cost of ownership.

The purpose of this solution is to build a replica of a common customer end-user computing (EUC) environment, and validate the environment for performance, scalability, and functionality. Customers will achieve:

Increased control and security of their global, mobile desktop environment, typically their most at-risk environment.

Better end-user productivity with a more consistent environment.

Simplified management with the environment contained in the data center.

Better support of service-level agreements and compliance initiatives.

Lower operational and maintenance costs.


Solution overview

This solution demonstrates how to use an EMC VNX platform to provide storage resources for a robust Citrix XenDesktop 5.6 environment and Windows 7 virtual desktops.

Planning and designing the storage infrastructure for Citrix XenDesktop is a critical step as the shared storage must be able to absorb large bursts of input/output (I/O) that occur throughout the course of a day. These large I/O bursts can lead to periods of erratic and unpredictable virtual desktop performance. Users can often adapt to slow performance, but unpredictable performance will quickly frustrate them.

To provide predictable performance for an EUC environment, the storage must be able to handle the peak I/O load from clients without driving up response times. Designing for this workload traditionally means deploying many disks to handle brief periods of extreme I/O pressure, which is expensive to implement. This solution instead uses EMC VNX FAST Cache to reduce the number of disks required, and thus minimizes the cost.
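To make the trade-off concrete, the following sketch estimates spindle counts for a peak desktop workload with and without a Flash cache tier absorbing most of the I/O. Every input (IOPS per desktop, per-drive IOPS rating, cache hit rate) is an illustrative assumption, not a measured value from this solution:

```python
import math

# Rough spindle-count estimate for a virtual desktop I/O peak.
# All inputs are illustrative assumptions, not validated figures.
DESKTOPS = 1000
PEAK_IOPS_PER_DESKTOP = 25      # assumed per-desktop peak (e.g., boot storm)
SAS_15K_IOPS = 180              # assumed small-block IOPS per 15k rpm SAS drive
FAST_CACHE_HIT_RATE = 0.85      # assumed fraction of I/O absorbed by Flash

peak_iops = DESKTOPS * PEAK_IOPS_PER_DESKTOP

# Without a cache tier, every peak I/O lands on the SAS spindles.
drives_without_cache = math.ceil(peak_iops / SAS_15K_IOPS)

# With FAST Cache, only the cache misses reach the spindles.
drives_with_cache = math.ceil(peak_iops * (1 - FAST_CACHE_HIT_RATE) / SAS_15K_IOPS)

print(f"Peak load: {peak_iops} IOPS")
print(f"SAS drives needed without cache: {drives_without_cache}")
print(f"SAS drives needed with cache:    {drives_with_cache}")
```

Under these assumed numbers, the cache tier cuts the spindle requirement by roughly a factor of six, which is the effect the paragraph above describes.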

Key results and recommendations

EMC VNX FAST Cache provides measurable benefits in a desktop virtualization environment. It not only reduces the response time for both read and write workloads, but also supports more virtual desktops and greater IOPS density on fewer drives.

Separating personal vDisk storage from desktop storage improves the user experience. The personal vDisk workload is more sequential than the desktop workload, even though its IOPS is higher. Segregating the two workloads into two different storage pools improves the LUN response time of the personal vDisk storage and thus improves user response time.

The logon process in a personal vDisk XenDesktop environment takes longer than in a nonpersonal vDisk environment. Our testing shows that a personal vDisk logon took 17 seconds, whereas a nonpersonal vDisk logon took only 5 seconds.

A personal vDisk desktop must perform additional processing on each I/O so that it can be sent to the appropriate storage device, which increases CPU utilization on the ESXi servers hosting the virtual desktops. Our testing shows that during steady-state Login VSI testing, the average ESXi CPU utilization in the personal vDisk environment (32%) is about five percentage points higher than in the nonpersonal vDisk environment (27%). During a virus scan, the average ESXi CPU utilization in the personal vDisk environment (20%) is twice as high as in the nonpersonal vDisk environment (10%).

Chapter 7: Testing and Validation provides more details.


2 Introduction

EMC's commitment to consistently maintain and improve quality is led by the Total Customer Experience (TCE) program, which is driven by Six Sigma methodologies. As a result, EMC has built Customer Integration Labs in its Global Solutions Centers to reflect real-world deployments in which TCE use cases are developed and executed. These use cases provide EMC with insight into the challenges currently faced by its customers.

This Proven Solutions Guide summarizes a series of best practices that were discovered or validated during testing of the EMC infrastructure for Citrix XenDesktop 5.6 solution by using the following products:

EMC VNX series

Citrix XenDesktop 5.6

Citrix Profile Manager 4.1

VMware vSphere® 5.0

This chapter includes the following sections:

Document overview

Reference architecture

Prerequisites and supporting documentation

Terminology

Document overview

Use case definition

The following use cases are examined in this solution:

Boot storm

Antivirus scan

Login storm

User workload simulated with Login Consultants Login VSI 3.6 tool

Chapter 7: Testing and Validation contains the test definitions and results for each use case.


Purpose

The purpose of this solution is to provide a virtualized infrastructure for virtual desktops powered by Citrix XenDesktop 5.6, VMware vSphere 5.0, Citrix Profile Manager 4.1, EMC VNX series (NFS), VNX FAST Cache, and VNX storage pools.

This solution includes all the components required to run this environment such as the infrastructure hardware, software platforms including Microsoft Active Directory, and the required Citrix XenDesktop configuration.

Information in this document can be used as the basis for a solution build, white paper, best practices document, or training.

Scope

This Proven Solutions Guide contains the results observed from testing the EMC Infrastructure for Citrix XenDesktop 5.6 solution. The objectives of this testing are to establish:

A reference architecture of validated hardware and software that permits easy and repeatable deployment of the solution.

The best practices for the storage configuration to provide optimal performance, scalability, and protection in the context of the mid-tier enterprise market.

Not in scope

Implementation instructions are beyond the scope of this document; information on how to install and configure the Citrix XenDesktop 5.6 components, vSphere 5.0, and the required EMC products is not covered here. References to supporting documentation for these products are provided where applicable.

Audience

The intended audience for this Proven Solutions Guide is:

Internal EMC personnel

EMC partners

Customers

Prerequisites

It is assumed that the reader has a general knowledge of the following products:

VMware vSphere 5.0

Citrix XenDesktop 5.6

EMC VNX series

Cisco Nexus and Catalyst switches


Terminology

Table 1 lists the terms that are frequently used in this document.

Table 1. Terminology

Term | Definition
EMC VNX FAST Cache | A feature that enables the use of Flash drives as an expanded cache layer for the array.
Login VSI | A third-party benchmarking tool developed by Login Consultants that simulates real-world EUC workloads. Login VSI uses an AutoIT script and determines the maximum system capacity based on the response time of the users.
Citrix Profile Manager | Preserves user profiles and dynamically synchronizes them with a remote profile repository.
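To make the Login VSI capacity concept concrete, the sketch below shows one simple way a response-time ceiling can be detected as sessions ramp up. The data and threshold are invented for illustration; this does not reproduce Login VSI's actual VSImax algorithm:

```python
# Illustrative capacity detection from response times measured as the
# session count ramps up. This mimics the idea behind a response-time-based
# maximum (such as Login VSI's VSImax), not the tool's exact algorithm.

# Hypothetical average response time (ms) with 1, 2, ... concurrent sessions.
response_ms = [800, 820, 850, 900, 980, 1100, 1350, 1700, 2300, 3200, 4600]

BASELINE_MS = 800                     # assumed unloaded baseline
THRESHOLD_MS = BASELINE_MS + 2000     # assumed saturation threshold

def max_sessions(samples, threshold):
    """Return the last session count whose response time stays under threshold."""
    capacity = 0
    for sessions, ms in enumerate(samples, start=1):
        if ms >= threshold:
            break
        capacity = sessions
    return capacity

print(f"Estimated capacity: {max_sessions(response_ms, THRESHOLD_MS)} sessions")
```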

Reference Architecture

Corresponding reference architecture

This Proven Solutions Guide has a corresponding Reference Architecture document that is available on the EMC Online Support website and EMC.com. EMC Infrastructure for Citrix XenDesktop 5.6—EMC VNX Series (NFS), VMware vSphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1—Reference Architecture provides more details.

If you do not have access to these documents, contact your EMC representative.

The reference architecture and the results in this Proven Solutions Guide are valid for 1,000 Windows 7 virtual desktops conforming to the workload described in the Validated environment profile section.


Reference architecture diagram

Figure 1 shows the reference architecture of the midsize solution.

Figure 1. Citrix XenDesktop—Reference architecture

Configuration

Hardware resources

Table 2 lists the hardware used to validate the solution.

Table 2. Citrix XenDesktop 5.6—Solution hardware

Hardware | Quantity | Configuration | Notes
EMC VNX5300 | 1 | Two Data Movers (1 active and 1 passive); two disk-array enclosures (DAEs) configured with twenty-nine 300 GB 15k rpm 3.5-in. SAS disks and three 100 GB 3.5-in. Flash drives | VNX shared storage for the core solution
 | | Two additional DAEs with twenty-five 2 TB 7,200 rpm 3.5-in. NL-SAS disks | Optional; for user data
 | | Five additional 600 GB 15k rpm 3.5-in. SAS disks | Optional; for infrastructure storage
Intel-based servers | 22 | Memory: 72 GB RAM; CPU: two Intel Xeon E5540 2.5 GHz quad-core processors; internal storage: two 146 GB internal SAS disks; external storage: VNX5300 (NFS); dual 1 GbE ports | Two ESXi clusters to host 1,000 virtual desktops
Cisco Catalyst 6509 | 2 | WS-6509-E switch; WS-X6748 1-gigabit line cards; WS-SUP720-3B supervisor | 1-gigabit host connections distributed over two line cards
Cisco Nexus 5020 | 2 | Forty 10-gigabit ports | Redundant LAN A/B configuration

Software resources

Table 3 lists the software used to validate the solution.

Table 3. Citrix XenDesktop—Solution software

Software | Configuration

VNX5300 (shared storage, file systems):
VNX OE for File | Release 7.0.50.2
VNX OE for Block | Release 31 (05.31.000.5.704)
VSI for VMware vSphere: Unified Storage Management | Version 5.2
VSI for VMware vSphere: Storage Viewer | Version 5.2

Cisco Nexus:
Cisco Nexus 5020 | Version 5.1(5)

ESXi servers:
ESXi | 5.0.0 (build 515841)

VMware servers:
OS | Windows 2008 R2 SP1
VMware vCenter Server | 5.0
Citrix XenDesktop | 5.6

Virtual desktops (this software is used to generate the test load):
OS | MS Windows 7 Enterprise SP1 (32-bit)
VMware tools | 8.6.0 build-515842
Microsoft Office | Office Enterprise 2007 SP3
Internet Explorer | 8.0.7601.17514
Adobe Reader | 9.1.0
McAfee VirusScan | 8.7 Enterprise
Adobe Flash Player | 11
Bullzip PDF Printer | 6.0.0.865
Login VSI (EUC workload generator) | 3.6 Professional Edition


3 Citrix XenDesktop Infrastructure

This chapter describes the general design and layout instructions that apply to the specific components used during the development of this solution. This chapter includes the following sections:

Citrix XenDesktop 5.6

vSphere 5.0 Infrastructure

Windows infrastructure

Citrix XenDesktop 5.6

Introduction

Citrix XenDesktop delivers Windows desktops as an on-demand service to any user, on any device, anywhere. XenDesktop quickly and securely delivers any type of virtual desktop, or any type of Windows, web, or SaaS application, to all the latest PCs, Macs, tablets, smartphones, laptops, and thin clients, and does so with a high-definition HDX user experience.

FlexCast delivery technology enables IT to optimize the performance, security, and cost of virtual desktops for any type of user, including task workers, mobile workers, power users, and contractors. XenDesktop helps IT rapidly adapt to business initiatives by simplifying desktop delivery and enabling user self-service. The open, scalable, and proven architecture simplifies management, support, and integration.

This solution used two Citrix XenDesktop Controllers, each capable of scaling up to 1,000 virtual desktops.

Deploying Citrix XenDesktop components

The core elements of this Citrix XenDesktop 5.6 implementation are:

Citrix XenDesktop Controllers

Citrix Profile Manager

Citrix Personal vDisk

Additionally, the following components are required to provide the infrastructure for a Citrix XenDesktop 5.6 deployment:

Microsoft Active Directory

Microsoft SQL Server

DNS server

Dynamic Host Configuration Protocol (DHCP) server


Citrix XenDesktop Controller

The Citrix XenDesktop Controller is the central management location for virtual desktops and has the following key roles:

Broker connections between the users and the virtual desktops

Control the creation and retirement of virtual desktop images

Assign users to desktops

Control the state of the virtual desktops

Control access to the virtual desktops

Citrix personal vDisk

The Citrix personal vDisk feature is introduced in Citrix XenDesktop 5.6. With personal vDisk, users can preserve customization settings and user-installed applications in a pooled desktop. This capability is accomplished by redirecting the changes from the user's pooled VM to a separate disk called the personal vDisk. During runtime, the content of the personal vDisk is blended with the content of the base VM to provide a unified experience to the end user. The personal vDisk data is preserved across reboot and refresh operations.
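Conceptually, this blending behaves like a copy-on-write overlay: writes are captured in the per-user layer, and reads prefer that layer before falling back to the shared base image. The sketch below illustrates only the idea; it is not how Citrix implements the feature:

```python
# Simplified copy-on-write overlay illustrating the personal vDisk concept.
# Conceptual only; this is not Citrix's implementation.

class OverlayDisk:
    def __init__(self, base_image: dict):
        self.base = base_image      # shared, read-only pooled image
        self.overlay = {}           # per-user personal vDisk layer

    def write(self, path: str, data: str) -> None:
        # User changes land in the personal layer, never in the base image.
        self.overlay[path] = data

    def read(self, path: str) -> str:
        # Reads prefer the personal layer, then fall back to the base image.
        return self.overlay.get(path, self.base.get(path, ""))

base = {"C:/Windows/system.ini": "stock settings"}
disk = OverlayDisk(base)
disk.write("C:/Apps/myapp.cfg", "user-installed app config")

print(disk.read("C:/Windows/system.ini"))   # served from the base image
print(disk.read("C:/Apps/myapp.cfg"))       # served from the personal vDisk
# A pooled-image refresh replaces only `base`; the overlay (and with it the
# user's customizations) survives, which is the behavior described above.
```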

Citrix Profile Management

Citrix Profile Manager 4.1 preserves user profiles and dynamically synchronizes them with a remote profile repository. Citrix Profile Manager does not require the configuration of Windows roaming profiles, eliminating the need to use Active Directory to manage Citrix user profiles.

Citrix Profile Manager provides the following benefits over traditional Windows roaming profiles:

With Citrix Profile Manager, a user's remote profile is dynamically downloaded when the user logs in to a XenDesktop desktop. XenDesktop downloads persona information only when the user needs it (see the sketch after this list).

The combination of Citrix Profile Manager and pooled desktops provides the experience of a dedicated desktop while potentially minimizing the amount of storage required in an organization.
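The download-on-demand behavior mentioned in the first benefit is essentially lazy loading. The miniature sketch below illustrates the pattern; it is a conceptual illustration only, not how Citrix Profile Manager is implemented:

```python
# Lazy profile loading in miniature: fetch a profile item from the remote
# store only on first access, then serve it from the local cache.
# Conceptual only; not Citrix Profile Manager's implementation.

REMOTE_STORE = {                      # stands in for the remote user store
    "HKCU/wallpaper": "beach.jpg",
    "AppData/editor.cfg": "tabs=4",
}

class LazyProfile:
    def __init__(self, remote: dict):
        self.remote = remote
        self.local = {}               # downloaded-on-demand cache

    def get(self, key: str) -> str:
        if key not in self.local:     # first touch: pull from the user store
            print(f"downloading {key} ...")
            self.local[key] = self.remote[key]
        return self.local[key]

profile = LazyProfile(REMOTE_STORE)
profile.get("HKCU/wallpaper")         # triggers a download at first use
profile.get("HKCU/wallpaper")         # served locally; no second download
```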


vSphere 5.0 Infrastructure

vSphere 5.0 overview

VMware vSphere 5.0 is the market-leading virtualization hypervisor used across thousands of IT environments around the world. VMware vSphere 5.0 can virtualize computer hardware resources, including CPUs, RAM, hard disks, and network controllers, to create fully functional virtual machines, each of which runs its own operating system and applications just like a physical computer.

The high-availability features in VMware vSphere 5.0, along with VMware Distributed Resource Scheduler (DRS) and Storage vMotion®, enable seamless migration of virtual desktops from one vSphere server to another with minimal or no disruption to the customers.

Desktop vSphere clusters

This solution deploys two vSphere clusters to host the virtual desktops. These server types were chosen due to availability; similar results should be achievable with a variety of server configurations, provided that the ratios of server RAM per desktop and desktops per CPU core are upheld.

Both clusters consist of eleven dual eight-core vSphere 5 servers to support 500 desktops each, resulting in around 45 to 46 virtual machines per vSphere server. Each cluster has access to five NFS datastores: four for storing virtual desktops and one for storing personal vDisks.
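As a quick sanity check of those ratios, the sketch below recomputes desktops per host, desktops per core, and the implied RAM share per desktop from the figures above and the 72 GB per-host RAM in Table 2. The arithmetic is illustrative only; it is not a sizing rule from this guide:

```python
import math

# Sanity-check the desktop-cluster ratios quoted above (illustrative only).
DESKTOPS_PER_CLUSTER = 500
HOSTS_PER_CLUSTER = 11
CORES_PER_HOST = 16          # "dual eight-core" servers
HOST_RAM_GB = 72             # per-host RAM from Table 2

desktops_per_host = DESKTOPS_PER_CLUSTER / HOSTS_PER_CLUSTER
print(f"Desktops per host: {desktops_per_host:.1f}")   # ~45.5, i.e., 45 to 46
print(f"Desktops per core: {desktops_per_host / CORES_PER_HOST:.1f}")
print(f"RAM per desktop:   {HOST_RAM_GB / math.ceil(desktops_per_host):.2f} GB")
```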

Infrastructure vSphere cluster

One vSphere cluster is deployed in this solution for hosting the infrastructure servers. This cluster is not required if the necessary resources to host the infrastructure servers are already present within the host environment.

The infrastructure vSphere 5.0 cluster consists of two dual quad-core vSphere 5 servers. The cluster has access to a single datastore that is used for storing the infrastructure server virtual machines.

The infrastructure cluster hosts the following virtual machines:

One Windows 2008 R2 SP1 domain controller—Provides DNS, Active Directory, and DHCP services.

One VMware vCenter 5 Server running on Windows 2008 R2 SP1—Provides management services for the VMware clusters.

Two Citrix XenDesktop controllers, each running on Windows 2008 R2 SP1—Provide services to manage the virtual desktops.

One SQL Server 2008 SP2 instance on Windows 2008 R2 SP1—Hosts databases for the VMware vCenter Server and the Citrix XenDesktop controllers.


Windows infrastructure

Introduction

Microsoft Windows provides the infrastructure that is used to support the virtual desktops and includes the following components:

Microsoft Active Directory

Microsoft SQL Server

DNS server

DHCP server

Microsoft Active Directory

The Windows domain controller runs the Active Directory service that provides the framework to manage and support the virtual desktop environment. Active Directory performs the following functions:

Manages the identities of users and their information

Applies group policy objects

Deploys software and updates

Microsoft SQL Server

Microsoft SQL Server is a relational database management system (RDBMS). A dedicated SQL Server 2008 SP2 instance provides the required databases to the vCenter Server and the XenDesktop controllers.

DNS server

DNS is the backbone of Active Directory and provides the primary name resolution mechanism for Windows servers and clients. In this solution, the DNS role is enabled on the domain controllers.

DHCP server

The DHCP server provides the IP address, DNS server name, gateway address, and other information to the virtual desktops. In this solution, the DHCP role is enabled on one of the domain controllers. The DHCP scope is configured with an IP range that is large enough to support 1,000 virtual desktops.
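The scope arithmetic is easy to check. The sketch below verifies that an assumed /22 scope subnet covers 1,000 desktops; the guide does not specify the scope's actual prefix length, so the subnet here is purely a hypothetical example:

```python
import ipaddress

# Check whether an assumed DHCP scope can cover 1,000 virtual desktops.
# The /22 prefix is a hypothetical example, not the solution's actual scope.
DESKTOPS = 1000
scope = ipaddress.ip_network("10.0.0.0/22")

usable = scope.num_addresses - 2      # minus network and broadcast addresses
print(f"{scope} provides {usable} usable addresses")
print("Large enough for 1,000 desktops:", usable >= DESKTOPS)
```

In practice the scope would also need to leave room for exclusions and reservations (servers, gateways), so a real deployment would size it with headroom beyond the bare desktop count.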

Introduction

Microsoft Active Directory

Microsoft SQL Server

DNS server

DHCP server


4 Storage Design

This chapter describes the storage design that applies to the specific components of this solution.

EMC VNX series storage architecture

The EMC VNX series is a dedicated network server optimized for file and block access that delivers high-end features in a scalable and easy-to-use package.

The VNX series delivers a single-box block and file solution that offers a centralized point of management for distributed environments. This makes it possible to dynamically grow, share, and cost-effectively manage multiprotocol file systems and provide multiprotocol block access. Administrators can take advantage of simultaneous support for the NFS and CIFS protocols, which enables Windows and Linux/UNIX clients to share files by using the sophisticated file-locking mechanisms of VNX for File, while VNX for Block serves high-bandwidth or latency-sensitive applications.

This solution uses file-based storage to leverage the benefits that each of the following provides:

File-based storage over the NFS protocol is used to store the VMDK files for all virtual desktops, which has the following benefits:

Unified Storage Management plug-in provides seamless integration with VMware vSphere to simplify the provisioning of datastores or virtual machines.

EMC vSphere Storage APIs for Array Integration (VAAI) plug-in for vSphere 5 supports the vSphere 5 VAAI primitives for NFS on the EMC VNX platform.

File-based storage over the CIFS protocol is used to store user data and the Citrix Profile Manager repository which has the following benefits:

Redirection of user data and Citrix Profile Manager data to a central location for easy backup and administration.

Single instancing and compression of unstructured user data to provide the highest storage utilization and efficiency.

This section explains the configuration of the storage provisioned over NFS for the vSphere cluster to store the Desktop images and the storage provisioned over CIFS to redirect user data and provide storage for the Citrix Profile Manager repository.



Storage layout

Figure 2 shows the physical storage layout of the disks in the core reference architecture; this configuration accommodates only the virtual desktops. The Storage layout overview section provides more details about the physical storage configuration.


Figure 2. VNX5300–Core reference architecture physical storage layout

Figure 3 shows the physical storage layout of the disks allocated for user data and the Citrix Profile Manager repository. This storage is in addition to the core storage shown in Figure 2.


Figure 3. VNX5300–user data physical storage layout

Storage layout overview

The following configurations are used in the core reference architecture, as shown in Figure 2:

Four SAS disks (0_0_0 to 0_0_3) are used for the VNX OE.

Disk 0_0_4 is the hot spare for the SAS disks. Disk 1_0_14 is the hot spare for the Flash drives. These disks are marked as hot spares in the storage layout diagram.

Ten SAS disks (1_0_0 to 1_0_9) in the RAID 5 storage pool 1 are used to store virtual desktops. FAST Cache is enabled on this pool.

For NAS, ten LUNs of 203 GB each are carved out of the pool to provide the storage required to create eight NFS file systems. The file systems are presented to the ESXi servers as eight NFS datastores.

Ten SAS disks (0_0_5 to 0_0_14) on the RAID 5 storage pool 2 are used to store personal vDisk. FAST Cache is enabled on this pool.



For NAS, ten LUNs of 203 GB each are carved out of the pool to provide the storage required to create two NFS file systems. The file systems are presented to the ESXi servers as two NFS datastores.

Two Flash drives (1_0_12 and 1_0_13) are used for EMC VNX FAST Cache.

Disks 1_0_10 and 1_0_11 are unbound. They are not used for testing this solution.

The following configurations are used for the user data and Citrix Profile Manager repository storage shown in Figure 3:

Disk 1_1_9 is hot spare for the NL-SAS disks. This disk is marked as hot spare in the storage layout diagram.

Five SAS disks (1_1_10 to 1_1_14) on the RAID 5 storage pool 4 are used to store the infrastructure virtual machines and SQL database and logs.

For NAS, one LUN of 1 TB is carved out of the pool to provide the storage required to create one NFS file system. The file system is presented to the ESXi servers as one NFS datastore.

Twenty four NL-SAS disks (0_1_0 to 0_1_14 and 1_1_0 to 1_1_8) on the RAID 6 storage pool 3 are used to store user data and Citrix profile manager user profiles. FAST Cache is enabled for the entire pool.

Twenty-five LUNs of 1 TB each are carved out of the pool to provide the storage required to create two CIFS file systems. The file systems are exported as CIFS shares for user data and Citrix Profile Manager profiles.


File system layout

Figure 4 shows the layout of the NFS file systems used to store virtual desktops.

Figure 4. VNX5300–Virtual desktop NFS file system layout

Ten LUNs of 203 GB each are provisioned out of a RAID 5 storage pool configured with 10 SAS drives. Ten drives are used because the block-based storage pool internally creates 4+1 RAID 5 groups. Therefore, the number of SAS drives used is a multiple of five. Likewise, ten LUNs are used because AVM stripes across five dvols, so the number of dvols is a multiple of five. The LUNs are presented to VNX File as dvols that belong to a system-defined pool.

Eight file systems are then provisioned out of the Automatic Volume Management (AVM) system pool and are presented to the vSphere servers as datastores. A total of 1,000 desktops are evenly distributed among the eight NFS datastores.

Starting from VNX for File version 7.0.35.3, AVM is enhanced to intelligently stripe across dvols that belong to the same block-based storage pool. There is no need to manually create striped volumes and add them to user-defined file-based pools.
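As a minimal sketch of this provisioning step (the file system name, size, AVM pool name, and export option are illustrative assumptions, not the exact values used in the validated build), a file system can be created from the Control Station and exported over NFS:

$ nas_fs -name desktop_fs1 -create size=250G pool=clar_r5_performance
$ server_mountpoint server_2 -create /desktop_fs1
$ server_mount server_2 desktop_fs1 /desktop_fs1
$ server_export server_2 -Protocol nfs -option root=<vmkernel-ip> /desktop_fs1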



Figure 5 shows the layout of NFS file systems to store personal vDisk.

Figure 5. VNX5300–Personal vDisk NFS file system layout

Ten LUNs of 203 GB each are provisioned out of a RAID 5 storage pool configured with 10 SAS drives. The LUNs are presented to VNX File as dvols that belong to a system-defined pool.

Two file systems are then provisioned out of the Automatic Volume Management (AVM) system pool and are presented to the vSphere servers as datastores. Personal vDisk data is stored on these NFS datastores.


Figure 6 shows the layout of CIFS file systems.

Figure 6. VNX5300–CIFS file system layout

Twenty-five LUNs of 1 TB each are provisioned out of a RAID 6 storage pool configured with 24 NL-SAS drives. Twenty-four drives are used because the block-based storage pool internally creates 6+2 RAID 6 groups, so the number of NL-SAS drives used is a multiple of eight. Likewise, twenty-five LUNs are used because AVM stripes across five dvols, so the number of dvols is a multiple of five. The LUNs are presented to VNX File as dvols that belong to a system-defined pool.

Like the NFS file systems, the CIFS file systems are provisioned from the AVM system pool to store user home directories and the Citrix Profile Manager repository. The two file systems are grouped in the same storage pool because their I/O profiles are sequential.

FAST Cache is enabled on all the storage pools that are used to store the NFS and CIFS file systems.

EMC VNX FAST Cache

VNX Fully Automated Storage Tiering (FAST) Cache, a part of the VNX FAST Suite, uses Flash drives as an expanded cache layer for the array. The VNX5300 is configured with two 100 GB Flash drives in a RAID 1 configuration for a 91 GB read/write-capable cache. This is the minimum amount of FAST Cache; larger configurations are supported for scaling beyond 1,000 desktops.

FAST Cache is an array-wide feature available for both file and block storage. FAST Cache works by examining 64 KB chunks of data in FAST Cache-enabled objects on the array. Frequently accessed data is copied to the FAST Cache and subsequent accesses to the data chunk are serviced by FAST Cache. This enables immediate promotion of very active data to the Flash drives. The use of Flash drives dramatically improves the response times for very active data and reduces data hot spots that can occur within the LUN.

FAST Cache is an extended read/write cache that enables Citrix XenDesktop to deliver consistent performance at Flash-drive speeds by absorbing read-heavy activities (such as boot storms and antivirus scans). This extended read/write cache is an ideal caching mechanism for XenDesktop Controller because the base desktop image and other active user data are so frequently accessed that the data is serviced directly from the Flash drives without accessing the slower drives at the lower storage tier.

VSI for VMware vSphere

EMC Virtual Storage Integrator (VSI) for VMware vSphere is a plug-in to the vSphere client that provides a single management interface for managing EMC storage within the vSphere environment. Features can be added and removed from VSI independently, which provides flexibility for customizing VSI user environments. Features are managed by using the VSI Feature Manager. VSI provides a unified user experience that allows new features to be introduced rapidly in response to changing customer requirements.

The following VSI features were used during the validation testing:

Storage Viewer (SV)—Extends the vSphere client to facilitate the discovery and identification of EMC VNX storage devices that are allocated to VMware vSphere hosts and virtual machines. SV presents the underlying storage details to the virtual data center administrator, merging the data of several different storage mapping tools into a few seamless vSphere client views.

Unified Storage Management—Simplifies storage administration of the EMC VNX platforms. It enables VMware administrators to provision new NFS and VMFS datastores, and RDM volumes seamlessly within vSphere client.

The EMC VSI for VMware vSphere product guides available on the EMC online support website provide more information.

vCenter Server storage layout

The desktop storage pool is configured with ten 300 GB SAS drives to provide the storage required for the virtual desktops. Eight file systems are carved out of the pool and presented to the ESX clusters as eight datastores. Each of these 250 GB datastores accommodates 125 virtual machines, allowing each desktop to grow to a maximum average size of 2 GB. The pool of desktops created in XenDesktop is balanced across the eight datastores.

The personal vDisk storage pool is configured with ten 300 GB SAS drives to provide the storage required for the virtual desktops' personal vDisks. Two file systems are carved out of the pool and presented to the ESX clusters as two datastores. Each of these 500 GB datastores accommodates the personal vDisks of 500 virtual machines.



VNX shared file systems

Virtual desktops use two VNX shared file systems, one for the Citrix Profile Manager repository and the other for redirected user data. Each file system is exported to the environment through a CIFS share.

Table 4 lists the file systems used for user profiles and redirected user storage.

Table 4. VNX5300—File systems

File system    Use                                 Max size
profiles_fs    Citrix Profile Manager repository   15 TB
home_fs        User data                           15 TB

Citrix Profile Manager and folder redirection

Local user profiles are not recommended in an EUC environment. One reason is the performance penalty incurred when a new local profile must be created each time a user logs in to a new desktop image. Solutions such as Citrix Profile Manager and folder redirection enable user data to be stored centrally on a network location that resides on a CIFS share hosted by the EMC VNX array. This reduces the performance impact during user logon while enabling user data to roam with the profiles.

EMC VNX for File Home Directory feature

The EMC VNX for File Home Directory feature uses the home_fs file system to automatically map the H: drive of each virtual desktop to the user's own dedicated subfolder on the share. This ensures that each user has exclusive rights to a dedicated home drive share. The share is created by the File Home Directory feature and does not need to be created manually; the feature automatically maps it for each user.

The Documents folder for each user is also redirected to this share. Users can recover the data in the Documents folder by using the VNX Snapshots for File. The file system is set at an initial size of 100 GB, and extends itself automatically when more space is required.

Capacity

The file systems leverage EMC Virtual Provisioning™ and compression to provide flexibility and increased storage efficiency. If single instancing and compression are enabled, unstructured data such as user documents typically sees about a 50 percent reduction in consumed storage.
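As a minimal sketch (assuming the file system names used in this solution; the exact command syntax can vary by VNX OE release), file-level single instancing and compression can be switched on per file system from the Control Station:

$ fs_dedupe -modify profiles_fs -state on
$ fs_dedupe -modify home_fs -state on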

The VNX file systems for the Citrix Profile Manager repository and user documents are configured as follows:

profiles_fs can be configured to consume up to 15 TB of space. With 50 percent space saving, each profile can grow up to 30 GB in size.

home_fs can be configured to consume up to 15 TB of space. With 50 percent space saving, each user can store up to 30 GB of data.
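As an illustrative check of these figures: with the stated 50 percent space saving, 15 TB of configured capacity stores roughly 30 TB of logical data, and 30 TB ÷ 1,000 users = 30 GB per user.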



5 Network Design

This chapter describes the network design in this solution and contains the following sections:

Considerations

VNX for File network configuration

vSphere network configuration

Cisco Nexus 5020 configuration

Cisco Catalyst 6509 configuration

Considerations

Network layout overview

Figure 7 shows the 10-gigabit Ethernet (GbE) connectivity between the two Cisco Nexus 5020 switches and the EMC VNX storage. The uplink Ethernet ports from the Nexus switches can be used to connect to a 10 Gb or 1 Gb external LAN. In this solution, a 1 Gb LAN through Cisco Catalyst 6509 switches is used to extend Ethernet connectivity to the desktop clients, Citrix XenDesktop components, and the Windows Server infrastructure.



Figure 7. Citrix XenDesktop–Network layout overview

Logical design considerations

This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security.

The IP scheme for the virtual desktop network must be designed with enough IP addresses in one or more subnets for the DHCP server to assign them to each virtual desktop.

Link aggregation

VNX platforms provide network high availability or redundancy by using link aggregation. This feature is one of the methods used to address the problem of link or switch failure.

Link aggregation enables multiple active Ethernet connections to appear as a single link with a single MAC address, and potentially multiple IP addresses.

In this solution, Link Aggregation Control Protocol (LACP) is configured on VNX, combining two 10 GbE ports into a single virtual device. If a link is lost on an Ethernet port, the link fails over to another port. All network traffic is distributed across the active links.



VNX for File network configuration

Data Mover ports

The EMC VNX5300 in this solution includes two Data Movers. The Data Movers can be configured in an active/active or an active/standby configuration. In the active/standby configuration, the standby Data Mover serves as a failover device for any of the active Data Movers. In this solution, the Data Movers operate in the active/standby mode.
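The standby relationship is established from the Control Station. A minimal sketch, assuming server_2 is the active Data Mover and server_3 is its standby (the policy value shown is one common choice, not necessarily the one used in this validation):

$ server_standby server_2 -create mover=server_3 -policy auto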

The VNX5300 Data Movers are configured for two 10-gigabit interfaces on a single I/O module. Link Aggregation Control Protocol (LACP) is used to configure ports fxg-1-0 and fxg-1-1 to support virtual machine traffic, home folder access, and external access for the Citrix Profile Manager repository.

Figure 8 shows the rear view of two VNX5300 Data Movers that include two 10-gigabit fiber Ethernet (fxg) ports each in I/O expansion slot 1.


Figure 8. VNX5300–Ports of the two Data Movers

LACP configuration on the Data Mover

To configure the link aggregation that uses fxg-1-0 and fxg-1-1 on Data Mover 2, run the following command:

$ server_sysconfig server_2 -virtual -name <Device Name> -create trk -option "device=fxg-1-0,fxg-1-1 protocol=lacp"

To verify if the ports are channeled correctly, run the following command:

$ server_sysconfig server_2 -virtual -info lacp1

server_2:

*** Trunk lacp1: Link is Up ***

*** Trunk lacp1: Timeout is Short ***

*** Trunk lacp1: Statistical Load Balancing is IP ***

Device Local Grp Remote Grp Link LACP Duplex Speed

--------------------------------------------------------------

fxg-1-0 10000 4480 Up Up Full 10000 Mbs

fxg-1-1 10000 4480 Up Up Full 10000 Mbs

The remote group number must match for both ports and the LACP status must be “Up.” Verify that the expected speed and duplex are established.



Data Mover interfaces

It is recommended to create two Data Mover interfaces with IP addresses on the same subnet as the VMkernel port on the vSphere servers. Half of the NFS datastores are accessed by using one IP address and the other half by using the second IP address. This enables the VMkernel traffic to be load balanced among the vSphere NIC teaming members. The following command output shows an example of two IP addresses assigned to the same virtual interface named lacp1.

$ server_ifconfig server_2 -all

server_2:

lacp1-1 protocol=IP device=lacp1

inet=192.168.16.2 netmask=255.255.255.0 broadcast=192.168.16.255

UP, Ethernet, mtu=9000, vlan=276, macaddr=0:60:48:1b:76:92

lacp1-2 protocol=IP device=lacp1

inet=192.168.16.3 netmask=255.255.255.0 broadcast=192.168.16.255

UP, Ethernet, mtu=9000, vlan=276, macaddr=0:60:48:1b:76:93
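A minimal sketch of how such interfaces might be created, reusing the device and addresses shown above (the interface names lacp1-1 and lacp1-2 are illustrative):

$ server_ifconfig server_2 -create -Device lacp1 -name lacp1-1 -protocol IP 192.168.16.2 255.255.255.0 192.168.16.255
$ server_ifconfig server_2 -create -Device lacp1 -name lacp1-2 -protocol IP 192.168.16.3 255.255.255.0 192.168.16.255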

Enable jumbo frames on the Data Mover interface

To enable jumbo frames for the link aggregation interface, run the following command to increase the MTU size:

$ server_ifconfig server_2 lacp1-1 mtu=9000

To verify if the MTU size is set correctly, run the following command:

$ server_ifconfig server_2 lacp1-1

server_2:

lacp1 protocol=IP device=lacp1

inet=192.168.16.2 netmask=255.255.255.0 broadcast=192.168.16.255

UP, Ethernet, mtu=9000, vlan=276, macaddr=0:60:48:1b:76:92



vSphere network configuration

All network interfaces on the vSphere servers in this solution use 1 GbE connections. All virtual desktops are assigned an IP address by using a DHCP server. The Intel-based servers use two onboard Broadcom GbE Controllers for all the network connections. Figure 9 shows the vSwitch configuration in vCenter Server.

Figure 9. vSphere–vSwitch configuration

Virtual switch vSwitch0 uses two physical network interface cards (NICs). Table 5 lists the configured port groups in vSwitch0.

Table 5. vSphere—Port groups in vSwitch0

Virtual switch   Configured port groups   Used for
vSwitch0         Management Network       VMkernel port used for vSphere host management
vSwitch0         VM Network               Network connection for virtual desktops and LAN traffic
vSwitch0         VMKernelStorage          NFS datastore traffic

NIC teaming

The NIC teaming load balancing policy for the vSwitches must be set to Route based on IP hash, as shown in Figure 10.

Figure 10. vSphere—Load balancing policy
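The same policy can also be applied from the ESXi command line; a sketch, assuming vSwitch0 is the vSwitch being configured:

# esxcli network vswitch standard policy failover set --load-balancing=iphash --vswitch-name=vSwitch0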



Increase the number of vSwitch virtual ports

By default, a vSwitch is configured with 120 virtual ports, which may not be sufficient in an EUC environment. On the vSphere servers that host the virtual desktops, each virtual desktop consumes one port. Set the number of ports according to the number of virtual desktops that will run on each vSphere server, as shown in Figure 11.

Note: Reboot the vSphere server for the changes to take effect.

Figure 11. vSphere—vSwitch virtual ports and MTU settings

If a vSphere server fails or needs to be placed in the maintenance mode, other vSphere servers within the cluster must accommodate the additional virtual desktops that are migrated from the vSphere server that goes offline. Consider the worst-case scenario when determining the maximum number of virtual ports per vSwitch. If there are not enough virtual ports, the virtual desktops will not be able to obtain an IP address from the DHCP server.
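As an illustrative worst-case check using this solution's numbers: each 500-desktop cluster has 11 hosts, so with one host offline the remaining 10 hosts carry 500 ÷ 10 = 50 desktops each, well within the 120-port default. A smaller cluster, or one sized closer to its limits, would need the port count raised to the next available vSwitch size (for example, 248 or 504).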

Enable jumbo frames for the VMkernel port used for NFS

For a VMkernel port to access the NFS datastores by using jumbo frames, the MTU size must be set accordingly on both the vSwitch to which the VMkernel port belongs and the VMkernel port itself.

The MTU size is set on the properties windows of the vSwitch and the VMkernel ports. Figure 11 and Figure 12 show how a vSwitch and a VMkernel port are configured to support jumbo frames.



Figure 12. vSphere–VMkernel port MTU setting

The MTU values of the vSwitch and the VMkernel ports must be set to 9,000 to enable jumbo frame support for NFS traffic between the vSphere hosts and the NFS datastores.
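These values can also be set from the ESXi 5.0 command line; a sketch, assuming vSwitch0 carries the NFS traffic and vmk1 is the NFS VMkernel interface:

# esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000
# esxcli network ip interface set --interface-name=vmk1 --mtu=9000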

Cisco Nexus 5020 configuration

Overview

The two 40-port Cisco Nexus 5020 switches provide redundant, high-performance, low-latency 10 GbE connectivity, delivered by a cut-through switching architecture, for 10 GbE server access in next-generation data centers.

Cabling

In this solution, the VNX Data Mover cabling is spread across the two Nexus 5020 switches to provide redundancy and load balancing of the network traffic.

Enable jumbo frames on the Nexus switch

The following excerpt of the switch configuration shows the commands required to enable jumbo frames at the switch level, because per-interface MTU is not supported:

policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo



vPC for Data Mover ports

Because the Data Mover connections for the two 10-gigabit network ports are spread across two Nexus switches, and LACP is configured for the two Data Mover ports, virtual Port Channel (vPC) must be configured on both switches.

The following excerpt is an example of the switch configuration pertaining to the vPC setup for one of the Data Mover ports. The configuration on the peer Nexus switch is mirrored for the second Data Mover port.

n5k-1# show running-config
feature vpc
vpc domain 2
  peer-keepalive destination <peer-nexus-ip>
interface port-channel3
  description channel uplink to n5k-2
  switchport mode trunk
  vpc peer-link
  spanning-tree port type network
interface port-channel4
  switchport mode trunk
  vpc 4
  switchport trunk allowed vlan 275-277
interface Ethernet1/4
  description 1/4 vnx dm2 fxg-1-0
  switchport mode trunk
  switchport trunk allowed vlan 275-277
  channel-group 4 mode active
interface Ethernet1/5
  description 1/5 uplink to n5k-2 1/5
  switchport mode trunk
  channel-group 3 mode active
interface Ethernet1/6
  description 1/6 uplink to n5k-2 1/6
  switchport mode trunk
  channel-group 3 mode active

To verify that the vPC is configured correctly, run the following command on both switches. The output should look like the following:

n5k-1# show vpc

Legend:

(*) - local vPC is down, forwarding via vPC peer-link

vPC domain id : 2

Peer status : peer adjacency formed ok

vPC keep-alive status : peer is alive

Configuration consistency status: success

vPC role : secondary

Number of vPCs configured : 1

Peer Gateway : Disabled

Dual-active excluded VLANs : -



vPC Peer-link status

------------------------------------------------------------------

id Port Status Active vlans

-- ---- ------ -----------------------------------------------

1 Po3 up 1,275-277

vPC status

------------------------------------------------------------------

id Port Status Consistency Reason Active vlans

------ ----------- ------ ----------- --------------- -----------

4 Po4 up success success 275-277

Cisco Catalyst 6509 configuration

Overview

The 9-slot Cisco Catalyst 6509-E switch provides high port densities that are ideal for many wiring closet, distribution, and core network deployments, as well as data center deployments.

Cabling

In this solution, the vSphere server cabling is evenly spread across two WS-X6748 1 Gb line cards to provide redundancy and load balancing of the network traffic.

Server uplinks

The server uplinks to the switch are configured in a port channel group to increase the utilization of server network resources and to provide redundancy. The vSwitches are configured to balance the network traffic according to the IP hash.

The following is a configuration example for one of the server ports.

description 8/10 9048-43 rtpsol189-1
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 276,516-527
switchport mode trunk
mtu 9216
no ip address
spanning-tree portfast trunk
channel-group 23 mode on



6 Installation and Configuration

This chapter describes how to install and configure this solution and includes the following sections:

Installation overview

Citrix XenDesktop components

Storage components

Installation overview

This section provides an overview of the configuration of the following components:

Virtual Desktop Agent on the master image

Desktop pools

Storage pools

FAST Cache

VNX Home Directory

The installation and configuration steps for the following components are available on the Citrix and VMware websites:

Citrix XenDesktop 5.6

Citrix Profile Manager 4.1

VMware vSphere 5.0

The installation and configuration steps for the following components are not covered:

Microsoft Active Directory, Group Policies, DNS, and DHCP

Microsoft SQL Server 2008 SP2


Citrix XenDesktop components

Citrix XenDesktop installation overview

The Citrix XenDesktop Installation document available on the Citrix website provides detailed procedures on how to install Citrix XenDesktop 5.6. No special configuration instructions are required for this solution.

Virtual Desktop Agent installation

In this solution, virtual desktops are configured with personal vDisk. Complete the following steps while installing the XenDesktop Virtual Desktop Agent on the master image:

1. On the master image, insert the XenDesktop 5.6 DVD and double-click AutoSelect. The XenDesktop Install Virtual Desktop Agent window appears.

2. Click Install Virtual Desktop Agent. The Installation option window appears.

3. Select Advanced Install. The License Agreement window appears.

4. Select I accept the terms and conditions, and then click Next.

5. In the Select the Virtual Desktop agent window, select the appropriate agent, and then click Next.

Figure 13. Select the virtual desktop agent

6. On the Select components to install page, select the appropriate components and installation location. Click Next.

7. In the Personal vDisk configuration window, select Yes, enable personal vDisk. Click Next. The Controller Location window appears.



Figure 14. Select Personal vDisk Configuration

8. Select the appropriate radio button to locate the controller. Click Next.

9. On the Virtual Desktop Configuration page, select the appropriate virtual desktop configurations check boxes. Click Next. The Summary page is displayed.


Figure 15. Virtual Desktop Configuration

10. Verify the settings, and click Install to start the installation.

Citrix XenDesktop machine catalog configuration

In this solution, pooled desktops with personal vDisk are created by using Machine Creation Services (MCS) so that users can maintain their desktop customizations. Complete the following steps to create a pooled with personal vDisk catalog:

1. On the XenDesktop Controller, select Start > All Programs > Citrix > Desktop Studio. The Citrix Desktop Studio window appears.

2. Right-click Machines in the left pane and select Create catalog. The Create Catalog window appears.

3. On the Machine type page, select pooled with personal vDisk from the Machine type list box.



Figure 16. Select the machine type

4. Click Next. The Master Image and Hosts page is displayed.

5. In the Hosts list, select the cluster host from which the virtual desktops are to be deployed.

6. Click the button to select a virtual machine or VM snapshot as the master image.


Figure 17. Select the cluster host and Master image

7. Click Next. The Number of VMs page is displayed.

8. In the Number of virtual machines to create box, enter or select the number of virtual machines to be created. Set the virtual machine specifications in Master Image including personal vDisk and select the appropriate Active Directory computer accounts option. In this example, the default options are selected.


Figure 18. Virtual Machine specifications

9. Click Next. The Create accounts page is displayed.

Complete the following steps:

a. In the Domain area, select an Active Directory container to store the computer accounts.

b. In the Account naming scheme field, enter your account naming scheme.

An example of XD#### will create computer account names XD0001 through XD0500. These names are used when the virtual machines are created.


Figure 19. Active directory location and naming scheme

10. Click Next. The Administrators page is displayed.

11. In the Administrators page, make any changes and click Next. The Summary page is displayed.

12. In the Summary page, verify the settings for the catalog and specify the name under Catalog name.

Figure 20. Summary and Catalog name


13. Click Finish to start the deployment of the virtual machines.

Citrix Profile Manager configuration

The profiles_fs CIFS share is used for the Citrix Profile Manager repository. Citrix Profile Manager is enabled by using a Windows group policy template. The template, ctxprofile4.1.0.adm, is located in the GPO-Templates folder of the Profile Manager installation software and is required to configure Citrix Profile Manager. Complete the following steps to configure Citrix Profile Manager:

1. On the master image, install Citrix Profile Manager.

2. On the master image, open the registry editor and set the value of EnableUserProfileRedirection to 0. The value is located in HKLM\Software\Citrix\personal vDisk\Configuration.

Figure 21. Citrix Profile Manager–Master Image Profile Redirection

3. On the Active Directory server, open Group Policy Management. Right-click the OU containing the computers on which Profile Manager is installed and create a new GPO.

4. Right-click the new GPO and select Edit.

5. Navigate to Computer Configuration > Policies and right-click Administrative Templates.

6. Select Add/Remove templates and click Add.

7. Browse to the folder containing ctxprofile4.1.0.adm and click Open to import the Profile Manager template.

8. In the GPO created in step 3, navigate to Computer Configuration > Policies > Administrative Templates > Classic Administrative Templates > Citrix > Profile Management, and configure the following settings:

a. Double-click Enable Profile Management. The Enable Profile Management window appears. Select Enabled.



Click OK to close the window.

Figure 22. Enable Citrix Profile Manager

b. Double-click Path to user store. The Path to user store window appears. Select Enabled and set the UNC path of the profiles_fs CIFS share.

Click OK to close the window.


Figure 23. Citrix Profile Manager path to user store

Storage components

Storage pools

Storage pools in the EMC VNX OE support heterogeneous drive pools. Four storage pools were configured in this solution:

A RAID 5 storage pool (XD5_10Disk_SAS_Desktop) is configured from 10 SAS drives. Ten 203 GB thick LUNs were created from this storage pool. This pool is used to store the NFS datastores containing the virtual desktops. FAST Cache is enabled for the pool.

A RAID 5 storage pool (XD5_SAS_vDisk) is configured from 10 SAS drives. Ten 203 GB thick LUNs were created from this storage pool. This pool is used to store the NFS datastores containing the virtual desktops personal vDisks. FAST Cache is enabled for the pool.

A RAID 5 storage pool (XD5_SAS_NFS_Infrastructure_OS) is configured from 5 SAS drives. Five 203 GB thick LUNs are created from this storage pool. This pool is used to store the NFS file systems containing the infrastructure virtual servers. FAST Cache is disabled for the pool.

A RAID 6 storage pool (XD5_NLSAS_userdata) is configured from 24 NL-SAS drives. Twenty-five 1 TB thick LUNs were created from this storage pool. This pool is used to store the user home directory and Citrix Profile Manager repository CIFS shares. FAST Cache is enabled for the pool.



NFS active threads per Data Mover

The default number of threads dedicated to serving NFS requests is 384 per Data Mover on the VNX. Some use cases, such as antivirus scanning of desktops, might require more active NFS threads. It is recommended to increase the number of active NFS threads to the maximum of 2048 on each Data Mover. The nthreads parameter can be set by using the following command:

# server_param server_2 -facility nfs -modify nthreads -value 2048

Reboot the Data Mover for the change to take effect.

Type the following command to confirm the value of the parameter:

# server_param server_2 -facility nfs -info nthreads

server_2 :

name = nthreads

facility_name = nfs

default_value = 384

current_value = 2048

configured_value = 2048

user_action = reboot DataMover

change_effective = reboot DataMover

range = (32,2048)

description = Number of threads dedicated to serve nfs requests This param represents number of threads dedicated to serve nfs requests. Any changes made to this param will be applicable after reboot only

The NFS active threads value can also be configured by editing the properties of the nthreads Data Mover parameter in the Settings > Data Mover Parameters menu in Unisphere. Highlight the nthreads value you want to edit and select Properties to open the nthreads properties window. Update the Value field with the new value and click OK, as shown in Figure 24. Repeat this procedure for each of the nthreads Data Mover parameters listed in the menu. Reboot the Data Movers for the changes to take effect.



Figure 24. VNX5300–nThreads properties

NFS performance fix

VNX file software contains a performance fix that significantly reduces NFS write latency. The minimum software patch required for the fix is 7.0.13.0. In addition to the patch upgrade, the performance fix takes effect only when the NFS file system is mounted by using the uncached option:

# server_mount server_2 -option uncached fs1 /fs1

The uncached option can be verified by using the following command:

# server_mount server_2

server_2 :

root_fs_2 on / uxfs,perm,rw

root_fs_common on /.etc_common uxfs,perm,ro

fs1 on /fs1 uxfs,perm,rw,uncached

fs2 on /fs2 uxfs,perm,rw,uncached

fs3 on /fs3 uxfs,perm,rw,uncached

fs4 on /fs4 uxfs,perm,rw,uncached

fs5 on /fs5 uxfs,perm,rw,uncached

fs6 on /fs6 uxfs,perm,rw,uncached

fs7 on /fs7 uxfs,perm,rw,uncached

fs8 on /fs8 uxfs,perm,rw,uncached


The uncached option can also be configured by editing the properties of the file system mount in the Storage > Storage Configuration > File Systems > Mounts menu in Unisphere. Highlight the file system mount you want to edit and select Properties to open the Mount Properties window, as shown in Figure 25. Select Set Advanced Options to display the advanced menu options, select Direct Writes Enabled, and click OK. The uncached option is now enabled for the selected file system.

Figure 25. VNX5300–File System Mount Properties


Enable FAST Cache

FAST Cache is enabled as an array-wide feature in the system properties of the array in EMC Unisphere. Click the FAST Cache tab, then click Create and select the Flash drives to create the FAST Cache. There are no user-configurable parameters for FAST Cache. Figure 26 shows the FAST Cache settings for the VNX5300 array in this solution.

Figure 26. VNX5300–FAST Cache tab



To enable FAST Cache for any LUN in a pool, navigate to the Storage Pool Properties page in Unisphere, and then click the Advanced tab. Select Enabled to enable FAST Cache, as shown in Figure 27.

Figure 27. VNX5300–Enable FAST Cache


7 Testing and Validation

This chapter provides a summary and characterization of the tests performed to validate the solution. The goal of the testing is to characterize the performance of the solution and its component subsystems during the following scenarios:

Boot storm of all desktops

McAfee antivirus full scan on all desktops

User workload testing using Login VSI on all desktops

Validated environment profile

Table 6 provides the validated environment profile.

Table 6. Citrix XenDesktop—environment profile

Profile characteristic                                              Value
Number of virtual desktops                                          1,000
Virtual desktop OS                                                  Windows 7 Enterprise SP1 (32-bit)
CPU per virtual desktop                                             1 vCPU
Number of virtual desktops per CPU core                             5.7
RAM per virtual desktop                                             1 GB
Average desktop storage available for each virtual desktop          2 GB
Average personal vDisk storage available for each virtual desktop   2 GB
Average IOPS per virtual desktop in steady state                    11
Average peak IOPS per virtual desktop during boot storm             34
Number of datastores used to store virtual desktops                 8
Number of datastores used to store personal vDisk                   2
Number of virtual desktops per datastore                            125
Disk and RAID type for datastores                                   RAID 5, 300 GB, 15k rpm, 3.5-in. SAS disks
Disk and RAID type for CIFS shares to host the Citrix Profile
Manager repository and home directories                             RAID 6, 2 TB, 7,200 rpm, 3.5-in. NL-SAS disks
Number of VMware clusters for virtual desktops                      2
Number of vSphere servers in each cluster                           11
Number of virtual desktops in each cluster                          500

Use cases

Three common use cases are executed to validate whether the solution performs as expected under heavy-load situations.

The following use cases are tested:

Simultaneous boot of all desktops

Full antivirus scan of all desktops

Login and steady-state user load simulated using the Login VSI medium workload on all desktops

Each use case is executed with both the personal vDisk and the non-personal vDisk configuration. A number of key metrics are compared between these two configurations to show the overall performance difference of the solution.

Login VSI

Virtual Session Indexer (VSI) version 3.6 is used to run a user load on the desktops. VSI provides guidance to gauge the maximum number of users a desktop environment can support. The Login VSI workload is categorized as light, medium, heavy, multimedia, core, and random (also known as workload mashup). The medium workload selected for this test had the following characteristics:

The workload emulated a medium knowledge user who used Microsoft Office Suite, Internet Explorer, Adobe Acrobat Reader, Bullzip PDF Printer, and 7-zip.

After a session started, the medium workload repeated every 12 minutes.

The response time was measured every 2 minutes during each loop.

The medium workload opened up to five applications simultaneously.

The typing rate was 160 ms per character.

Approximately 2 minutes of idle time was included to simulate real-world users.

Each loop of the medium workload used the following applications:

Microsoft Outlook 2007—Browsed 10 email messages.

Microsoft Internet Explorer (IE)—One instance of IE browsed the BBC.co.uk website, another instance browsed Wired.com, Lonelyplanet.com, and another instance opened a flash-based 480p video file.



Microsoft Word 2007—One instance of Microsoft Word 2007 was used to measure the response time, while another instance was used to edit a document.

Bullzip PDF Printer and Adobe Acrobat Reader—The Word document was printed to PDF and reviewed.

Microsoft Excel 2007—A very large Excel worksheet was opened and random operations were performed.

Microsoft PowerPoint 2007—A presentation was reviewed and edited.

7-zip—The command line version was used to zip the output of the session.

Login VSI launcher

A Login VSI launcher is a Windows system that launches desktop sessions on target virtual desktops. There are two types of launchers, master and slave. There is only one master in a given test bed, while there can be as many slave launchers as required.

The number of desktop sessions that a launcher can run is typically limited by CPU or memory resources. By default, the graphics device interface (GDI) limit is not tuned; in that case, Login Consultants recommends using a maximum of 45 sessions per launcher with two CPU cores (or two dedicated vCPUs) and 2 GB of RAM. When the GDI limit is tuned, this limit extends to 60 sessions per two-core machine.

In this validated testing, 1,000 desktop sessions were launched from 32 launchers, with approximately 32 sessions per launcher. Each launcher was allocated two vCPUs and 4 GB of RAM. No bottlenecks were observed on the launchers during the Login VSI tests.

FAST Cache configuration

For all tests, FAST Cache is enabled for the storage pools holding the virtual desktop datastores, personal vDisks, user home directories, and the Citrix Profile Manager repository.

Boot storm results

Test methodology

This test is conducted by selecting all the desktops in vCenter Server and then selecting Power On. Overlays are added to the graphs to show when the last power-on task completes and when the IOPS to the pool LUNs reach a steady state.

For the boot storm test, all 1,000 desktops are powered on within 3 minutes. The steady state is achieved in another 10 minutes for the personal vDisk environment. This section describes the boot storm results for desktop pools with and without the personal vDisk configuration.



Pool individual disk load

Figure 28 shows the disk IOPS and response time for a single SAS drive in the storage pool. Each disk had similar results; therefore, only the results from a single disk are shown in the graph.

Figure 28. Personal vDisk Boot storm—IOPS for a single Desktop pool SAS drive

During peak load, the desktop pool disk serviced a maximum of 350 IOPS with a response time of 32 ms.

Figure 29. Personal vDisk Boot storm—IOPS for a single personal vDisk pool SAS drive

During peak load, the personal vDisk pool disk serviced a maximum of 290 IOPS and experienced a response time of 12ms.

Pool LUN load

Figure 30 shows the LUN IOPS and response time of one of the storage pool LUNs. Each LUN in a particular pool had similar results; therefore, only the results from a single LUN are shown in the graph.

Figure 30. Personal vDisk Boot storm—Desktop Pool LUN IOPS and response time

During peak load, the Desktop pool LUN serviced a maximum of 2,900 IOPS and experienced a peak response time of 3.6ms.

Figure 31. Personal vDisk Boot storm—personal vDisk Pool LUN IOPS and response time

During peak load, the personal vDisk pool LUN serviced a maximum of 590 IOPS and experienced a peak response time of 9ms.

Storage processor IOPS and CPU utilization

Figure 32 shows the total IOPS serviced by the storage processors during the test.

Figure 32. Personal vDisk Boot storm—Storage processor total IOPS and CPU Utilization

During peak load, the storage processors serviced 34,000 IOPS. The storage processor utilization remained below 45 percent. The EMC VNX5300 had sufficient scalability headroom for this workload.

FAST Cache IOPS

Figure 33 shows the IOPS serviced from FAST Cache during the boot storm test.

Figure 33. Personal vDisk Boot storm—personal vDisk FAST Cache IOPS

During peak load, FAST Cache serviced 29,000 IOPS from the datastores. The FAST Cache hits include IOPS serviced by the Flash drives and the storage processor memory cache. If memory cache hits are excluded, the two Flash drives alone serviced 15,500 IOPS during peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that approximately 87 SAS drives would be required to achieve the same level of performance. The Desktop storage pool had more FAST Cache hits than the personal vDisk storage pool.
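The drive-equivalence figures quoted in these results come from simple division against EMC's 180 IOPS-per-drive planning estimate. A minimal sketch of the calculation:

    import math

    # Drive-equivalence estimate used throughout this guide.
    flash_iops = 15_500      # IOPS served by the Flash drives at peak
    sas_drive_iops = 180     # EMC's standard estimate for a 15k rpm SAS drive

    print(math.ceil(flash_iops / sas_drive_iops))  # -> 87 equivalent SAS drives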

Data Mover CPU utilization

Figure 34 shows the Data Mover CPU utilization during the boot storm test.

Figure 34. Personal vDisk Boot storm—Data Mover CPU utilization

The Data Mover achieved a peak CPU utilization of 70 percent during this test.

Data Mover NFS load

Figure 35 shows the NFS operations per second from the Data Mover during the boot storm test.

Figure 35. Personal vDisk Boot storm—Data Mover NFS load

During peak load, there were 61,000 total NFS operations per second. The Data Mover cache helped reduce the load on the disks.

vSphere CPU load

Figure 36 shows the CPU load from the vSphere servers in the VMware clusters. Each server had similar results; therefore, only the results from a single server are shown in the graph.

Figure 36. Personal vDisk Boot storm—ESXi CPU load

The vSphere server achieved a peak CPU utilization of 49 percent. Hyper-threading was enabled to double the number of logical CPUs.

vSphere disk response time

Figure 37 shows the Average Guest Millisecond/Command counter, which is reported as GAVG in ESXTOP. This counter represents the response time for I/O operations initiated to the storage array. The datastore hosting the personal vDisks is shown as PvDisk FS GAVG, and the average of all the datastores hosting the desktop storage is shown as Desktop FS GAVG in the graph. Each server had similar results; therefore, only the results from a single server are shown in the graph.

Figure 37. Personal vDisk Boot storm—Average Guest Millisecond/Command counter

The peak GAVG of the file system hosting the desktops was 118ms, and that of the personal vDisk file system was 61ms. The overall impact of this brief spike in GAVG values was minimal because all 1,000 desktops attained steady state in less than 13 minutes after the initial power-on.
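GAVG values such as these can also be captured non-interactively by running ESXTOP in batch mode and post-processing the resulting CSV. The sketch below illustrates the idea only; the counter header text varies by ESXi build, so the substring match is an assumption to verify against your own capture, and this is not the collection script used in this validation.

    import csv

    # Parse an esxtop batch capture, for example produced on the host with:
    #   esxtop -b -d 15 -n 240 > esxtop.csv
    # Matching columns on "Guest MilliSec/Command" is an assumption; check
    # the header row of your own capture for the exact counter name.
    with open("esxtop.csv", newline="") as f:
        rows = list(csv.reader(f))

    header = rows[0]
    gavg_cols = [i for i, name in enumerate(header)
                 if "Guest MilliSec/Command" in name]

    # Report the worst GAVG observed for each matched device column.
    for i in gavg_cols:
        samples = [float(r[i]) for r in rows[1:] if len(r) > i and r[i]]
        print(f"{header[i]}: peak GAVG {max(samples):.1f} ms")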

Antivirus results

Test methodology

This test is conducted by using a custom script to initiate an on-demand full scan of all desktops with McAfee VirusScan 8.7i. The full scans are started on all the desktops; the difference between the start time and the finish time is 60 minutes.
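The custom script itself is not reproduced in this guide. As an illustration only, the sketch below shows one way to fan an on-demand scan out to every desktop, assuming Sysinternals PsExec for remote execution and a placeholder scan command in place of the actual McAfee VirusScan 8.7i invocation.

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical fan-out of an on-demand scan to every desktop.
    # SCAN_CMD is a placeholder; substitute the real on-demand scan
    # command used in your McAfee VirusScan deployment.
    DESKTOPS = [f"XD-{n:04d}" for n in range(1, 1001)]
    SCAN_CMD = r"C:\scripts\start_full_scan.bat"

    def start_scan(host: str) -> int:
        # psexec -d does not wait for the remote process to finish; its
        # return value is the remote process ID, not the scan exit code.
        return subprocess.call(["psexec", rf"\\{host}", "-d", SCAN_CMD])

    with ThreadPoolExecutor(max_workers=50) as pool:
        pids = list(pool.map(start_scan, DESKTOPS))

    print(f"Scan launched on {len(pids)} desktops")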

Pool individual disk load

Figure 38 shows the disk I/O for a single SAS drive in the storage pool that stores the virtual desktops. Each disk had similar results; therefore, only the results from a single disk are shown in the graph.

Figure 38. Personal vDisk Antivirus—Desktop Disk I/O for a single SAS drive

During peak load, the disk serviced a peak of 90 IOPS and experienced a maximum response time of 14ms.

Figure 39. Personal vDisk Antivirus—personal vDisk Disk I/O for a single SAS drive

During peak load, the personal vDisk disk serviced a peak of 290 IOPS and experienced a maximum response time of 16ms. The majority of the virus scan I/Os were directed toward the personal vDisk disks.

Pool LUN load

Figure 40 shows the LUN IOPS and response time of one of the storage pool LUNs. Each LUN had similar results; therefore, only the results from a single LUN are shown in the graph.

Figure 40. Personal vDisk Antivirus—Desktop Pool LUN IOPS and response time

During peak load, the LUN serviced a peak of 52 IOPS and experienced a peak response time of 4ms.

Figure 41. Personal vDisk Antivirus—Personal vDisk Pool LUN IOPS and response time

During peak load, the personal vDisk LUN serviced 1,200 IOPS and experienced a response time of 7ms. The majority of the virus scan I/Os were directed toward the personal vDisk LUNs.

Storage processor IOPS and CPU utilization

Figure 42 shows the total IOPS serviced by the storage processors during the test.

Figure 42. Personal vDisk Antivirus—Storage processor IOPS

During peak load, the storage processors serviced 14,000 IOPS. The antivirus scan operations caused a peak CPU utilization of 60 percent.

FAST Cache IOPS

Figure 43 shows the IOPS serviced from FAST Cache during the test.

Figure 43. Personal vDisk Antivirus—FAST Cache IOPS

During peak load, FAST Cache serviced 11,900 IOPS from the datastores. The FAST Cache hits include IOPS serviced by the Flash drives and the storage processor memory cache. If memory cache hits are excluded, the two Flash drives alone serviced almost all of the IOPS during peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that approximately 67 SAS drives would be required to achieve the same level of performance.

Data Mover CPU utilization

Figure 44 shows the Data Mover CPU utilization during the antivirus scan test.

Figure 44. Personal vDisk Antivirus—Data Mover CPU utilization

The Data Mover achieved a peak CPU utilization of 65 percent during this test.

Data Mover NFS load

Figure 45 shows the NFS operations per second from the Data Mover during the antivirus scans.

Figure 45. Personal vDisk Antivirus—Data Mover NFS load

During peak load, there were 34,000 NFS operations per second. The Data Mover cache helped reduce the load on the disks.

vSphere CPU load

Figure 46 shows the CPU load from the vSphere servers in the VMware clusters. Each server had similar results; therefore, only the results from a single server are shown in the graph.

Figure 46. Personal vDisk Antivirus—vSphere CPU load

The vSphere server achieved a peak CPU utilization of 35 percent. Hyper-threading was enabled to double the number of logical CPUs.

vSphere disk response time

Figure 47 shows the Average Guest Millisecond/Command counter, which is reported as GAVG in ESXTOP. This counter represents the response time for I/O operations initiated to the storage array. The datastore hosting the personal vDisk storage is shown as PvDisk FS GAVG, and the average of all the datastores hosting the desktop storage is shown as Desktop FS GAVG in the graph. Each server had similar results; therefore, only the results from a single server are shown in the graph.

Figure 47. Personal vDisk Antivirus—Average Guest Millisecond/Command counter

The peak GAVG of the file system hosting the desktops was 105ms, and that of the personal vDisk file system was 55ms.

Login VSI results

Test methodology

This test is conducted by scheduling 1,000 users to connect through remote desktop sessions in a 60-minute window and starting the Login VSI medium workload with Flash. The workload ran for 30 minutes in a steady state to observe the load on the system.
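Spreading 1,000 logins evenly across a 60-minute window implies a steady session-launch cadence, as the short calculation below shows.

    # Login storm pacing (assuming an even distribution across the window).
    users = 1000
    window_seconds = 60 * 60     # 60-minute login window

    interval = window_seconds / users              # seconds between session starts
    rate_per_minute = users / (window_seconds / 60)
    print(f"one login every {interval:.1f}s (~{rate_per_minute:.0f} logins/min)")
    # -> one login every 3.6s (~17 logins/min)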

Desktop login time

Figure 48 shows the time required for the desktops to complete the user login process.

Figure 48. Login VSI Desktop login time—Personal vDisk vs. nonpersonal vDisk

The time required to complete the login process reached a maximum of 17 seconds during the peak of the 1,000-desktop login storm on the personal vDisk configuration. Citrix Profile Manager was enabled in the personal vDisk environment and helped reduce the login time.

Pool individual disk load

Figure 49 shows the disk IOPS for a single SAS drive that is part of the storage pool. Each disk had similar results; therefore, only the results from a single disk are shown in the graph.

Figure 49. Personal vDisk Login VSI—Desktop Disk IOPS for a single SAS drive

During peak load, the SAS disk serviced 280 IOPS and experienced a response time of 7ms.

Figure 50. Personal vDisk Login VSI—PvDisk Disk IOPS for a single SAS drive

During peak load, the SAS disk serviced 180 IOPS and experienced a response time of 12ms.

Pool LUN load

Figure 51 shows the LUN IOPS and response time from one of the storage pool LUNs. Each LUN had similar results; therefore, only the results from a single LUN are shown in the graph.

Figure 51. Personal vDisk Login VSI—Desktop Pool LUN IOPS and response time

During peak load, the LUN serviced 800 IOPS and experienced a response time of 6.5ms.

Figure 52. Personal vDisk Login VSI—personal vDisk Pool LUN IOPS and response time

During peak load, the LUN serviced 550 IOPS and experienced a response time of 1.8ms.

Storage processor IOPS and CPU utilization

Figure 53 shows the total IOPS serviced by the storage processors during the test.

Figure 53. Personal vDisk Login VSI—Storage processor IOPS

During peak load, the storage processors serviced a maximum of 16,000 IOPS. The storage processor peak utilization was 36 percent.

FAST Cache IOPS

Figure 54 shows the IOPS serviced from FAST Cache during the test.

Figure 54. Personal vDisk Login VSI—FAST Cache IOPS

During peak load, FAST Cache serviced 12,000 IOPS from the datastores. The FAST Cache hits included IOPS serviced by the Flash drives and the storage processor memory cache. If memory cache hits are excluded, the four Flash drives alone serviced 9,500 IOPS at peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that approximately 53 SAS drives would be required to achieve the same level of performance.

Data Mover CPU utilization

Figure 55 shows the Data Mover CPU utilization during the Login VSI test.

Figure 55. Personal vDisk Login VSI—Data Mover CPU utilization

The Data Mover achieved a peak CPU utilization of 40 percent during this test.

Data Mover NFS load

Figure 56 shows the NFS operations per second from the Data Mover during the Login VSI test.

Figure 56. Personal vDisk Login VSI—Data Mover NFS load

During peak load, there were 14,000 NFS operations per second. The Data Mover cache helped reduce the load on the disks.

vSphere CPU load

Figure 57 shows the CPU load from the vSphere servers in the VMware clusters. Each server had similar results; therefore, only the results from a single server are shown in the graph.

Figure 57. Personal vDisk Login VSI—vSphere CPU load

The CPU load on the vSphere server reached a maximum of 35 percent utilization during peak load. Hyper-threading was enabled to double the number of logical CPUs.

vSphere disk response time

Figure 58 shows the Average Guest Millisecond/Command counter, which is reported as GAVG in ESXTOP. This counter represents the response time for I/O operations initiated to the storage array. The datastore hosting the PvDisk storage is shown as PvDisk FS GAVG, and the average of all the datastores hosting the desktop storage is shown as Desktop FS GAVG in the graph. Each server had similar results; therefore, only the results from a single server are shown in the graph.

Figure 58. Personal vDisk Login VSI—Average Guest Millisecond/Command counter

The peak GAVG of the file system hosting the PvDisk was 7ms, and that of the desktop file system was 6ms.

8 Personal vDisk Implementation considerations

This chapter provides a few important guidelines for implementing the personal vDisk feature in a Citrix XenDesktop environment.

Storage Layout

Citrix XenDesktop provides the option to place the personal vDisk storage on the same storage repository as the Desktop storage or on a dedicated storage repository. Tests show that separating personal vDisk storage from Desktop storage provides a better user experience. Figure 59 compares the LUN response time of the Desktop storage pool and personal vDisk pool LUNs.

Figure 59. Login VSI—LUN response time (ms)

If a single pool is used, the Login VSI test fails because of higher application access times. Using a separate pool for the personal vDisks reduces the application access times and enables the Login VSI test to pass.

Login Time

The login time of a personal vDisk-enabled virtual desktop was compared with that of a nonpersonal vDisk pooled virtual desktop. The pooled virtual desktop was deployed with the same number of spindles in a single pool (20 SAS drives for the desktops and two Flash drives for FAST Cache). Figure 60 shows the login time for the virtual desktops.

Figure 60. Login VSI—Login Time

Tests show that the personal vDisk login time is 9 seconds longer than the nonpersonal vDisk login time.

vSphere CPU Utilization

The vSphere server CPU utilization of a personal vDisk-enabled virtual desktop was compared with that of a nonpersonal vDisk pooled virtual desktop. The pooled virtual desktop was deployed with the same number of spindles in a single pool (20 SAS drives for the desktops and two Flash drives for FAST Cache). Figure 61 shows the CPU utilization of the vSphere server.

Figure 61. vSphere—CPU Utilization

Tests show that during steady-state Login VSI testing, the average ESXi CPU utilization in the personal vDisk environment (32 percent) is about five percentage points higher than in the nonpersonal vDisk environment (27 percent). During the virus scan, the average ESXi CPU utilization in the personal vDisk environment (20 percent) is twice that of the nonpersonal vDisk environment (10 percent).

9 Conclusion

This chapter includes the following sections:

Summary

References

Summary

As shown in Chapter 7: Testing and Validation, EMC VNX FAST Cache provides measurable benefits in a desktop virtualization environment. It not only reduces the response time for both read and write workloads, but also effectively supports more users on fewer drives, delivering greater IOPS density with a lower drive requirement.

Separating personal vDisk storage from desktop storage improves the user experience. The personal vDisk storage workload is more sequential, even though its IOPS are higher than those of the desktop storage. Segregating these two workloads into two different storage pools improves the LUN response time of the personal vDisk storage and thus improves the user response time.

The logon process in a personal vDisk XenDesktop environment takes slightly longer than in a nonpersonal vDisk XenDesktop environment. Our testing shows a peak personal vDisk logon time of 17 seconds, roughly 9 seconds longer than in the nonpersonal vDisk environment.

A personal vDisk desktop requires additional processing of I/Os so that they can be sent to the appropriate storage device. This processing increases the CPU utilization on the ESXi server that hosts the virtual desktops. Our testing shows that during steady-state Login VSI testing, the average ESXi CPU utilization in the personal vDisk environment (32 percent) is about five percentage points higher than in the nonpersonal vDisk environment (27 percent). During the virus scan, the average ESXi CPU utilization in the personal vDisk environment (20 percent) is twice that of the nonpersonal vDisk environment (10 percent).

References

Supporting documents

The following documents, located on the EMC online support website, provide additional relevant information. Access to these documents depends on your login credentials. If you do not have access to a document, contact your EMC representative:

EMC Performance Optimization for Microsoft Windows XP for the Virtual Desktop Infrastructure—Applied Best Practices

The following documents, located on Powerlink.emc.com, also provide useful information:

Reference Architecture: EMC Infrastructure for Citrix XenDesktop 5.5 (PVS) EMC VNX Series (NFS), Cisco UCS, Citrix XenDesktop 5.5 (PVS), XenApp 6.5, and XenServer 6

Proven Solution Guide: EMC Infrastructure for Citrix XenDesktop 5.5 (PVS) EMC VNX Series (NFS), Cisco UCS, Citrix XenDesktop 5.5 (PVS), XenApp 6.5, and XenServer 6

Citrix documents