XenServer 6.2 Technical Sales Presentation
TRANSCRIPT
XenServer 6.2 Technical Overview, June 2013
What is XenServer?
© 2013 Citrix | Confidential – Do Not Distribute
What’s so Great About Xen?
• It’s robust
ᵒ Native 64-bit hypervisor
ᵒ Runs on bare metal
ᵒ Directly leverages CPU hardware for virtualization
• It’s widely-deployed
ᵒ Hundreds of thousands of organizations have deployed Xen
• It’s advanced
ᵒ Optimized for hardware-assisted virtualization and paravirtualization
• It’s trusted
ᵒ Open, resilient Xen security framework
• It’s part of mainline Linux
• Xen Project is a widely supported Linux Foundation collaborative effort
Understanding Architectural Components
The Xen Project hypervisor and control domain (dom0) manage physical server resources among virtual machines
Understanding the Domain 0 Component
Domain 0 is a compact specialized Linux VM that manages the network and storage I/O of all guest VMs … and isn’t the XenServer hypervisor
Understanding the Linux VM Component
Linux VMs include paravirtualized kernels and drivers, and Xen is part of Mainline Linux 3.0
Understanding the Windows VM Component
Windows VMs use paravirtualized drivers to access storage and network resources through Domain 0
XenServer Meets All Virtualization Needs
Enterprise Data Center
• High performance, resilient virtualization platform
• Simple deployment and management model
Desktop Virtualization
• Optimized for high performance desktop workloads
• Storage optimizations to control VDI CAPEX
Cloud
• Scalable platform for IaaS and Cloud Service Providers
• Powers the NetScaler SDX platform
Enterprise Data Center Virtualization
Scalable virtualization platform
XenCenter – Simple XenServer Management
• Single pane of glass for management
• Manage XenServer hosts
ᵒ Start/Stop VMs
• Manage XenServer resource pools
ᵒ Shared storage
ᵒ Shared networking
• Configure advanced features
ᵒ HA, Reporting, Alerting
• Manage updates
Management Architecture Comparison
“The Other Guys”
Traditional Management Architecture
Single backend management server
Citrix XenServer
Distributed Management Architecture
Clustered management layer
Role-Based Administration
• Provide user roles with varying permissions
• Pool Admin
• Pool Operator
• VM Power Admin
• VM Admin
• VM Operator
• Read-only
• Roles are defined within a Resource Pool
• Assigned to Active Directory users, groups
• Audit logging via Workload Reports
XenMotion Live VM Migration
[Diagram: three XenServer hosts attached to shared storage]
More about XenMotion
Live Storage XenMotion
• Migrates VM disks from any storage type to any other storage type
ᵒ Local, DAS, iSCSI, FC
• Supports cross-pool migration
ᵒ Requires compatible CPUs
• Encrypted migration model
• Specify management interface for optimal performance
More about Storage XenMotion
Heterogeneous Resource Pools
Safe Live Migrations
[Diagram: XenServer 1 has an older CPU with Features 1-4; XenServer 2 has a newer CPU with Features 1-4 plus Feature 5. The extra feature is masked so a virtual machine can migrate safely between the two hosts]
Mixed Processor Pools
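The masking behavior above amounts to a set intersection across hosts: a minimal sketch in Python (illustrative only, not XenServer's actual CPUID masking; the feature names are invented examples):

```python
# Illustrative sketch: a heterogeneous pool exposes only the CPU features
# common to every host, so a VM never sees a feature it could lose by
# migrating to an older host.

def pool_feature_mask(hosts):
    """Return the feature set that is safe to expose to VMs in the pool."""
    feature_sets = iter(hosts.values())
    common = set(next(feature_sets))
    for features in feature_sets:
        common &= features              # intersect across all hosts
    return common

hosts = {
    "xenserver1": {"sse2", "sse3", "sse4.1"},           # older CPU
    "xenserver2": {"sse2", "sse3", "sse4.1", "avx"},    # newer CPU
}
mask = pool_feature_mask(hosts)
print(sorted(mask))   # the newer CPU's 'avx' is masked away
```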
Memory Overcommit
• Feature name: Dynamic Memory Control
• Ability to over-commit RAM resources
• VMs operate in a compressed or balanced mode within a set range
• Allows memory settings to be adjusted while the VM is running
• Can increase number of VMs per host
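A rough sketch of the idea (illustrative only; the real Dynamic Memory Control ballooning logic differs): each VM has a dynamic memory range, and when host RAM cannot satisfy every VM's maximum, allocations are squeezed proportionally toward the minimums.

```python
# Illustrative model of memory overcommit with per-VM (min, max) ranges.
# Assumption for this sketch: every VM is squeezed by the same fraction.

def squeeze(vms, host_ram):
    """Return per-VM allocations (MiB) that fit in host_ram."""
    total_max = sum(hi for lo, hi in vms.values())
    total_min = sum(lo for lo, hi in vms.values())
    if total_max <= host_ram:
        return {name: hi for name, (lo, hi) in vms.items()}
    if total_min > host_ram:
        raise RuntimeError("overcommitted beyond dynamic minimums")
    # scale every VM the same fraction of the way from max down to min
    f = (host_ram - total_min) / (total_max - total_min)
    return {name: round(lo + f * (hi - lo)) for name, (lo, hi) in vms.items()}

vms = {"vm1": (1024, 4096), "vm2": (2048, 8192)}   # MiB ranges
print(squeeze(vms, 16384))  # fits: every VM gets its dynamic maximum
print(squeeze(vms, 6144))   # short on RAM: VMs squeezed proportionally
```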
Virtual Appliances (vApp)
• Support for “vApps” or Virtual Appliances
ᵒ OVF definition of Virtual Appliance
• A vApp contains one or more Virtual Machines
• Enables grouping of VMs which can be utilized by
ᵒ XenCenter
ᵒ Integrated Site Recovery
ᵒ Appliance Import and Export
ᵒ HA
High Availability in XenServer
• Automatically monitors hosts and VMs
• Easily configured within XenCenter
• Relies on Shared Storage
ᵒ iSCSI, NFS, HBA
• Reports failure capacity for DR planning purposes
More about HA
Advanced Data Center Automation
Optimizing Storage – Integrated StorageLink
Virtualization can hinder the linkage between servers and storage, turning expensive storage systems into little more than “dumb disks”.
Citrix StorageLink™ technology lets your virtual servers fully leverage all the power of existing storage systems.
More about StorageLink
Integrated Site Recovery
• Supports LVM SRs
• Replication/mirroring setup outside scope of solution
ᵒ Follow vendor instructions
ᵒ Breaking of replication/mirror also manual
• Works with every iSCSI and FC array on HCL
• Supports active-active DR
More about Site Recovery
Live Memory Snapshot and Rollback
• Live VM snapshot and revert
ᵒ Both memory and disk state are captured
ᵒ Optional quiesce option via VSS provider (Windows guests)
ᵒ One-click revert
• Snapshot branches
ᵒ Support for parallel subsequent checkpoints based on a previous common snapshot
Desktop Optimized XenServer
Supporting High Performance Graphics
• Feature name: GPU pass-through
• Enables high-end graphics in VDI deployments with HDX 3D Pro
• Optimal CAD application support with XenDesktop
• More powerful than RemoteFX, virtual GPUs, or other general purpose graphics solutions
Benefits of GPU Pass-through
With GPU pass-through, hardware costs are cut by up to 75%
[Diagram: multiple GPU cards in a single XenServer host]
Without GPU pass-through, each user requires their own Blade PC
More about GPU Pass Through
Controlling Shared Storage Costs – IntelliCache
• Caching of XenDesktop images
• Leverages local storage
• Reduce IOPS on shared storage
IntelliCache Fundamentals
1. Master Image is created through XenDesktop MCS
2. VM is configured to use the Master Image
3. VM using the Master Image is started
4. XenServer creates a read cache object on local storage
5. Reads in the VM are served from the local cache
6. Additional reads are served from the SAN when required
7. Writes happen in a VHD child per VM
8. The local “write” cache is deleted when the VM is shut down/restarted
9. Additional VMs will use the same read cache
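Steps 4-9 can be sketched as a toy read/write path (illustrative Python only, not the actual XenServer caching code; block names and contents are invented):

```python
# Illustrative IntelliCache model: the first read of a block misses the
# shared local read cache and goes to the SAN; later reads by any VM on
# the host hit the local cache; writes land in a per-VM child that is
# discarded on reboot.

class IntelliCacheHost:
    def __init__(self, san_master):
        self.san = dict(san_master)   # shared master image on NFS storage
        self.read_cache = {}          # shared read cache on local storage
        self.san_reads = 0

    def read(self, vm, block):
        if block in vm.writes:                 # the VM's own write wins
            return vm.writes[block]
        if block not in self.read_cache:       # miss: fetch once from SAN
            self.san_reads += 1
            self.read_cache[block] = self.san[block]
        return self.read_cache[block]

class VM:
    def __init__(self):
        self.writes = {}              # per-VM child VHD (write cache)
    def reboot(self):
        self.writes = {}              # write cache deleted on restart

host = IntelliCacheHost({"blk0": b"boot", "blk1": b"apps"})
vm1, vm2 = VM(), VM()
host.read(vm1, "blk0")
host.read(vm2, "blk0")               # second read is served locally
vm1.writes["blk1"] = b"patched"      # write stays private to vm1
print(host.san_reads)                # only one SAN read so far
```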
[Diagram: XenDesktop VMs on a XenServer host read the Master Image from NFS-based storage through a local cache]
Cost Effective VM Densities
• Supporting VMs with up to:
ᵒ 16 vCPU per VM
ᵒ 128 GB memory per VM
• Supporting XenServer hosts with up to:
ᵒ 1 TB physical RAM
ᵒ 160 logical processors
• Yielding up to 225 Desktop images per host (LoginVSI Medium)
• Included at no cost with all XenDesktop purchases
• Cisco Validated Design for XenDesktop on UCS
Cloud Optimized XenServer
Distributed Virtual Network Switching
• Virtual Switch
ᵒ Open source: www.openvswitch.org
ᵒ Provides a rich layer 2 feature set
ᵒ Cross host internal networks
ᵒ Rich traffic monitoring options
ᵒ OVS 1.4 compliant
• DVS Controller
ᵒ Virtual appliance
ᵒ Web-based GUI
ᵒ Can manage multiple pools
ᵒ Can exist within the pool it manages
ᵒ Note: Controller is deprecated, but supported
Switch Policies and Live Migration
Per-VM policies follow the VM across live migration, for example:
• Linux VM1: allow all traffic
• Linux VM2: allow SSH on eth0, allow HTTP on eth1
• Windows VM: allow RDP and deny HTTP
• SAP VM: allow only SAP traffic, RSPAN to VLAN 26
More about DVSC
Single Root IO Virtualization (SR-IOV)
• PCI specification for direct I/O access
ᵒ Hardware supports multiple PCI IDs
ᵒ Presents multiple virtual NICs from a single NIC
• Virtual NICs presented directly into guests
ᵒ Minimizes hypervisor overhead in high performance networks
• Not without downsides
ᵒ Requires specialized hardware
ᵒ Cannot participate in DVS
ᵒ Does not support live migration
ᵒ Limited number of virtual NICs
[Diagram: each guest VM’s application uses a VF driver bound directly to a virtual NIC of the physical NIC, bypassing the dom0 physical driver and vSwitch]
More about SR-IOV
NetScaler SDX – Powered by XenServer
• Complete tenant isolation
• Complete independence
• Partitions within instances
• Supports
ᵒ NetScaler VPX and Branch Repeater
ᵒ Windows 2008 R2
ᵒ Virtualized StoreFront
ᵒ Virtualized ShareFile
• Optimized network: 120+ Gbps
• Runs default XenServer 6
XenServer Editions
Core Feature Matrix
Feature: XenServer 6.2
64-bit Xen Hypervisor: ✓
Active Directory Integration: ✓
Role-Based Administration and Audit Trail: ✓
VMware to XenServer Conversion Utilities: ✓
Multi-Server Management with XenCenter: ✓
Live VM Migration with XenMotion™: ✓
Live Storage Migration with Storage XenMotion™: ✓
Dynamic Memory Control: ✓
Host Failure Protection with High Availability: ✓
Performance Reporting and Alerting: ✓
Mixed Resource Pools with CPU Masking: ✓
GPU Pass-Through for Desktop Graphics Processing: ✓
IntelliCache™ for XenDesktop Storage Optimization: ✓
Live Memory Virtual Machine Snapshot and Revert: ✓
OpenFlow Distributed Virtual Switch: ✓
Integrated Multi-site Recovery: ✓
Added Operating Systems: Windows 8, Windows Server 2012, Debian Wheezy, *EL 6.3/6.4
Supported Virtual Machine Density: 225 Windows 8 LoginVSI Medium / 500 general purpose
Price: Free
Simple Packaging and Pricing

Citrix XenServer (New)
Commercially packaged and certified distribution of XenServer that includes*:
• Product License
• XenServer Maintenance*
• Premier Support
• Subscription Advantage
• Ability to use XenCenter for updates
Perpetual: $1,250/socket* | Annual: $500/socket*

Open Source XenServer (FREE)
Free, open source distribution of XenServer.
• Citrix Technical Support not available
• Access limited to major updates only
• Command line updates/patches

*Citrix XenServer is sold in conjunction with 1 year of XenServer Maintenance
New Pricing: Citrix XenServer 6.2

Annual
XenServer License – 1 year (per socket): $400
XenServer License – 3 year (per socket): $1,050
XenServer Maintenance (SA + Support) – 1 year (per socket)*: $100
XenServer Maintenance (SA + Support) – 3 year (per socket)*: $300

Perpetual
XenServer License (per socket): $1,025
XenServer Maintenance (SA + Support) – 1 year (per socket)*: $225

*Citrix XenServer is sold in conjunction with 1 year of XenServer Maintenance
*XenServer Maintenance is required when purchasing Citrix XenServer
XenServer Annual Promo
• Encourage first-time users or existing Free and OSS users to get the commercial version of XenServer with an aggressively priced annual license option
• 1-year annual XenServer 6.2 license term only
• Available July 25, 2013 through Dec 31, 2013
• All standard discount programs apply
• Promo pricing available in the Online Store
XenServer License Types (XenServer 6.2)
Annual XenServer License – 1 year (per socket)*: $99 (*Maintenance is required)
XenServer Software Maintenance Annual (SA + Support) – 1 year (per socket): $100
Total: $199
Additional Support Options

XenServer Premier Support
• Cost: Included with Software Maintenance
• Coverage Hours: 24x7x365
• Incidents: Unlimited
• Named Contacts: Unlimited
• Type of Access: Phone/Web/Email

Add-on Service Options
• Software or Hardware TRM: 200 hours / unlimited incidents / 1 region: $40,000
• Additional TRM hours: 100 hours: $20,000
• Fully Dedicated TRM: 1600 hours / unlimited incidents / 1 region: $325,000
• On-site Days: on-site technical support service: $2,000 per day
• Assigned Escalation: 200 hours / 1 region (must have TRM): $16,000
• Fully Dedicated Assigned Escalation: 1600 hours: $480,000
It’s Your Budget … Spend it Wisely
Single Vendor
• Vendor lock-in is great for the vendor
• Beware product lifecycles and tool set changes

ROI Can Be Manipulated
• ROI calculators always show the vendor author as best
• Use your own numbers

Understand Support Model
• Over-buying is costly; get what you need
• Support call priority with tiered models

Use Correct Tool
• Some projects have requirements best suited to a specific tool
• Understand deployment and licensing impact

Leverage Costly Features as Required
• Blanket purchases benefit only the vendor
• Chargeback to project for feature requirements
Work better. Live better.
GPU Pass-through Details
How GPU Pass-through Works
• Identical GPUs in a host auto-create a GPU group
• The GPU group can be assigned to a set of VMs; each VM will attach to a GPU at VM boot time
• When all GPUs in a group are in use, additional VMs requiring GPUs will not start
• GPU and non-GPU VMs can (and should) be mixed on a host
• GPU groups are recognized within a pool
ᵒ If Servers 1, 2, and 3 each have GPU type 1, then VMs requiring GPU type 1 can be started on any of those servers
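The boot-time attach behavior above can be sketched as a simple allocator (illustrative Python, not XenServer code; GPU and VM names are invented):

```python
# Illustrative GPU-group model: a VM bound to the group grabs any free GPU
# at boot, and fails to start once every GPU in the group is in use.

class GPUGroup:
    def __init__(self, gpus):
        self.free = list(gpus)
        self.attached = {}            # vm name -> gpu

    def start_vm(self, vm):
        if not self.free:
            raise RuntimeError(f"cannot start {vm}: no free GPU in group")
        self.attached[vm] = self.free.pop()
        return self.attached[vm]

    def stop_vm(self, vm):
        self.free.append(self.attached.pop(vm))

group = GPUGroup(["quadro-0", "quadro-1"])   # two identical GPUs, one group
group.start_vm("vdi-1")
group.start_vm("vdi-2")
try:
    group.start_vm("vdi-3")                  # third GPU VM will not start
except RuntimeError as e:
    print(e)
```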
GPU Pass-through HCL is Server Specific
• Server
ᵒ HP ProLiant WS460c G8 Workstation series*
ᵒ IBM System x3650 M3, dx360
ᵒ Dell Precision R5500, R720
ᵒ Cisco UCS C240 M3
• GPU (1-4 per host)
ᵒ NVIDIA Quadro 2000, 4000, 5000, 6000
ᵒ NVIDIA Tesla M2070-Q
ᵒ NVIDIA GRID K1 and K2
ᵒ AMD FirePro S7000, S9000
• Support for Windows guests only
• Important: Combinations of servers + GPUs must be tested as a pair
Limitations of GPU Pass-through
• GPU pass-through binds the VM to the host for the duration of the session
ᵒ Restricts XenMotion
• Multiple GPU types can exist in a single server
ᵒ E.g. high performance and mid performance GPUs
• VNC will be disabled, so RDP is required
• Fully supported for XenDesktop, best effort for other Windows workloads
ᵒ Not supported for Linux guests
• HCL is very important
IntelliCache Details
Enabling IntelliCache on XenServer Hosts
• IntelliCache requires local EXT3 storage, to be selected during XenServer installation
• If this is selected during installation the host is automatically enabled for IntelliCache
• Manual steps in Admin guide
Enabling IntelliCache in XenDesktop
• http://support.citrix.com/article/CTX129052
• Use IntelliCache checkbox when adding a host in Desktop Studio
• Supported from XenDesktop 5 FP1
IOPS – 1000 Users – No IntelliCache
[Chart: NFS read and write operations over a ~44-minute run, peaking near 18,000 NFS ops]
IOPS – 1000 Users – Cold Cache Boot
[Chart: NFS read and write operations over a ~42-minute run, peaking near 3,000 NFS ops]
IOPS – 1000 Users – Hot Cache Boot
[Chart: NFS read and write operations over a ~44-minute run, peaking near 35 NFS ops]
Limitations of IntelliCache
• Best results achieved with local SSD drives
ᵒ SAS and SATA supported, but spindled disks are slower
• XenMotion with pooled images
• Best practice local space sizing
ᵒ Expecting 50% cache usage per user + daily log off
ᵒ [real size of master image] + [number of users per server] × [size of master image] × 0.5
ᵒ Cache disk may vary according to VM lifecycle definition (reboot cycle)
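A worked example of the sizing rule above (an illustrative helper; the 50% cache-usage-per-user factor is the assumption stated on the slide):

```python
# Local cache sizing per the rule:
# cache = master image + users * master image * 0.5

def intellicache_local_gb(master_image_gb, users_per_server, cache_factor=0.5):
    return master_image_gb + users_per_server * master_image_gb * cache_factor

# e.g. a 20 GB master image and 100 desktops per host:
print(intellicache_local_gb(20, 100))   # 20 + 100 * 20 * 0.5 = 1020 GB
```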
IntelliCache Conclusions
• Dramatic reduction of I/O for pooled desktops
• Significant reduction of I/O for assigned desktops
ᵒ Still need IOPS for write traffic
ᵒ Local write cache benefits
• Storage investment much lower – and more appropriate
• Overall TCO improvement of 15-30%
• Continued evolution of features to yield better performance and TCO
Integrated Site Recovery Details
Integrated Site Recovery
• Supports LVM SRs only
• Replication/mirroring setup outside scope of solution
ᵒ Follow vendor instructions
ᵒ Breaking of replication/mirror also manual
• Works with every iSCSI and FC array on HCL
• Supports active-active DR
Feature Set
• Integrated in XenServer and XenCenter
• Supports failover and failback
• Supports grouping and startup order through vApp functionality
• Failover pre-checks
ᵒ Power state of source VM
ᵒ Duplicate VMs on target pool
ᵒ SR connectivity
• Ability to start VMs paused (e.g. for dry-run)
How it Works
• Depends on “Portable SR” technology
ᵒ Different from the metadata backup/restore functionality
• Creates a logical volume on the SR during setup
• The logical volume contains
ᵒ SR metadata information
ᵒ VDI metadata information for all VDIs stored on the SR
• Metadata information is read during failover via sr-probe
Integrated Site Recovery - Screenshots
Deprecated
Note: Aspects of the functionality contained in this section have been deprecated. This means you are fully supported if you deploy using this version, but no new features or functionality will be developed.
Distributed Virtual Switch Details
Only the Virtual Switch Controller is deprecated
Terminology
• OpenFlow
ᵒ An open standard that separates the control and data paths for switching devices
• OpenFlow switch
ᵒ Could be physical or virtual
ᵒ Includes packet processing and remote configuration/control support via OpenFlow
• Open vSwitch
ᵒ An OSS Linux-based implementation of an OpenFlow virtual switch
ᵒ Maintained at www.openvswitch.org
• vSwitch Controller
ᵒ A commercial implementation of an OpenFlow controller
ᵒ Provides integration with XenServer pools
ᵒ Note: Controller functionality has been deprecated, but remains supported
Core Distributed Switch Objectives
• Extend network management to virtual networks
• Provide network monitoring using standard protocols
• Define network policies on virtual objects
• Support multi-tenant virtual data centers
• Provide cross host private networking without VLANs
• Answer to VMware VDS and Cisco Nexus 1000v
Understanding Policies
• Access control
ᵒ Basic Layer 3 firewall rules
ᵒ Definable by pool/network/VM
ᵒ Inheritance controls
Understanding Policies
• Access control
• QoS
ᵒ Rate limits to control bandwidth
Understanding Policies
• Access control
• QoS
• RSPAN
ᵒ Transparent monitoring of VM level traffic
What is NetFlow?
• Layer 3 monitoring protocol
• UDP/SCTP based
• Broadly adopted solution
• Implemented in three parts
ᵒ Exporter (DVS)
ᵒ Collector
ᵒ Analyzer
• DVSC is NetFlow v5 based
ᵒ Enabled at pool level
Performance Monitoring
• Enabled via NetFlow
• Dashboard
ᵒ Throughput
ᵒ Packet flow
ᵒ Connection flow
• Flow statistics
ᵒ Slice and dice reports
ᵒ See top VM traffic
ᵒ Data goes back one week
Bonus Features
• Jumbo Frames
• Cross Server Private Networks
• LACP
• 4 NIC bonds
High Availability Details
Protecting Workloads
• Not just for mission critical applications anymore
• Helps manage VM density issues
• The "virtual" definition of HA is a little different from the physical one
• Low cost / complexity option to restart machines in case of failure
High Availability Operation
• Pool-wide settings
• Failure capacity: the number of host failures the pool can tolerate while still carrying out the HA plan
• Uses network and storage heartbeat to verify servers
VM Protection Options
• Restart Priority
ᵒ Do not restart
ᵒ Restart if possible
ᵒ Restart
• Start Order
ᵒ Defines a sequence and delay to ensure applications run correctly
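The two options above combine into a simple restart plan after a host failure; a minimal sketch (illustrative Python, not the XenServer HA planner; host and VM names are invented):

```python
# Illustrative HA restart planning: protected VMs on the failed host are
# restarted on surviving hosts in start-order sequence, honoring each
# VM's restart priority.

def plan_restarts(vms, failed_host):
    """Return the ordered list of VM names to restart after a host fails."""
    impacted = [v for v in vms if v["host"] == failed_host]
    protected = [v for v in impacted if v["priority"] != "do-not-restart"]
    return [v["name"] for v in sorted(protected, key=lambda v: v["order"])]

vms = [
    {"name": "db",   "host": "h1", "priority": "restart",        "order": 0},
    {"name": "app",  "host": "h1", "priority": "restart",        "order": 1},
    {"name": "test", "host": "h1", "priority": "do-not-restart", "order": 0},
    {"name": "web",  "host": "h2", "priority": "restart",        "order": 2},
]
print(plan_restarts(vms, "h1"))   # db restarts before app; test is skipped
```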
HA Design – Hot Spares
Simple Design
ᵒ Similar to a hot spare in a disk array
ᵒ Guaranteed available
ᵒ Inefficient: idle resources

Failure Planning
ᵒ If surviving hosts are fully loaded, VMs will be forced to start on the spare
ᵒ Could lead to restart delays due to resource plugs
ᵒ Could lead to performance issues if the spare is the pool master
HA Design – Distributed Capacity
Efficient Design
ᵒ All hosts utilized

Failure Planning
ᵒ Impacted VMs automatically placed for best fit
ᵒ Running VMs undisturbed
ᵒ Provides efficient guaranteed availability
HA Design – Impact of Dynamic Memory
Enhances Failure Planning
ᵒ Define reduced memory which meets SLA
ᵒ On restart, some VMs may “squeeze” their memory
ᵒ Increases host efficiency
HA Enhancements in XenServer 6
• HA over NFS
• HA with Application Packages
ᵒ Define multi-VM services
ᵒ Define VM startup order and delays
ᵒ Application packages can be defined from running VMs
• Auto-start VMs are removed
ᵒ Usage conflicted with HA failure planning
ᵒ Created situations where perceived host recovery wasn’t met
High Availability – No Excuses
• Shared storage is the hardest part of setup
ᵒ A simple wizard can have HA defined in minutes
ᵒ Minimally invasive technology
• Protects your important workloads
ᵒ Reduces on-call support incidents
ᵒ Addresses VM density risks
ᵒ No performance, workload, or configuration penalties
• Compatible with resilient application designs
• Fault tolerant options exist through ecosystem
Deprecated
Note: Aspects of the functionality contained in this section have been deprecated. This means you are fully supported if you deploy using this version, but no new features or functionality will be developed.
StorageLink Details
Leverage Array Technologies
• No file system overlay
• Use best-of-breed technologies
ᵒ Thin provisioning
ᵒ Deduplication
ᵒ Cloning
ᵒ Snapshotting
ᵒ Mirroring
• Maximize array performance
[Diagram: in the traditional approach, provisioning, snapshotting, and cloning run in a hypervisor filesystem layered between the VMs and the array OS; with Citrix StorageLink the VMs use the array OS’s own provisioning, snapshotting, and cloning]
No StorageLink – Inefficient LUN Usage
[Diagram: a 1 TB array serves one 600 GB LUN, leaving 400 GB free on the array. Today the customer requests 600 GB; at 4, 8, and 12 weeks the customer adds 5 VMs with 50 GB of disk each inside the LUN. By 12 weeks the LUN cannot hold the new disks, so the customer must request more storage capacity even though 400 GB of the array is still free]
With StorageLink – Maximize Array Utilization
[Diagram: with StorageLink, each 50 GB virtual disk is provisioned as its own 50 GB LUN directly from the 1 TB array. Starting from 1 TB free, the customer adds 5 VMs with 50 GB each at 4, 8, and 12 weeks, leaving 750 GB, 500 GB, and 250 GB free; no capacity is stranded inside an oversized LUN]
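The difference between the two provisioning models above can be shown with a small capacity calculation (illustrative Python using the slide's numbers; the helper and its parameters are invented for the example):

```python
# Illustrative comparison: carving one big LUN up front versus per-disk
# LUNs provisioned on demand from the array. Growth pattern from the
# slides: 5 VMs with 50 GB of disk added every 4 weeks.

def free_after(weeks, array_gb=1000, disk_gb=50, disks_per_period=5,
               upfront_lun_gb=None):
    used = weeks // 4 * disks_per_period * disk_gb
    if upfront_lun_gb is not None:             # traditional: LUN pre-carved
        if used > upfront_lun_gb:
            raise RuntimeError("LUN exhausted: request new storage capacity")
        return array_gb - upfront_lun_gb       # array free space is stranded
    return array_gb - used                     # StorageLink: on-demand LUNs

print(free_after(8, upfront_lun_gb=600))   # 400 GB free but unusable by VMs
print(free_after(8))                       # 500 GB genuinely free
```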
StorageLink – Efficient Snapshot Management
Without StorageLink, VM snapshot capacity is limited to the free space in the LUN: a 600 GB LUN holding five 50 GB disks leaves 350 GB of snapshot capacity.
With StorageLink, VM snapshot capacity is limited only by the storage pool size: 750 GB free in the array.
Integrated StorageLink Architecture
[Diagram: on a XenServer host, the XAPI daemon calls into the SMAPI storage layer, which supports LVM, NFS, NetApp, and a CSLG bridge to array adapters such as EQL, NTAP, and SMI-S]
SR-IOV Details
Network Performance for GbE with PV drivers
• XenServer PV drivers can sustain peak throughput on GbE
ᵒ However, limited to 2.9 Gb/s in total
• But XenServer uses significantly more CPU cycles than Linux
ᵒ Fewer available cycles for the application
ᵒ 10 GbE networks: CPU saturation in dom0 prevents achieving line rate
• Need to reduce I/O virtualization overhead in XenServer networking
I/O Virtualization Overview – Hardware Solution
• VMDq (Virtual Machine Device Queue)
ᵒ Separate Rx and Tx queue pairs of the NIC for each VM; software “switch”
ᵒ Network only
• Direct I/O (VT-d)
ᵒ Improved I/O performance through direct assignment of an I/O device to an HVM or PV workload
ᵒ VM exclusively owns the device
• SR-IOV (Single Root I/O Virtualization)
ᵒ Changes to I/O device silicon to support multiple PCI device IDs, so one I/O device can support multiple directly assigned guests; requires VT-d
ᵒ One device, multiple virtual functions
Where Does SR-IOV Fit In?
Technique: Efficiency | Hardware Abstraction | Applicability | Scalability
• Emulation: Low | Very high | All device classes | High
• Para-virtualization: Medium | High (requires installing paravirtual drivers on the guest) | Block, network | High
• Acceleration (VMDq): High | Medium (transparent to apps; may require device-specific accelerators) | Network only, hypervisor dependent | Medium (for accelerated interfaces)
• PCI pass-through: High | Low (explicit device plug/unplug; device-specific drivers) | All devices | Low
SR-IOV addresses this: pass-through efficiency without its scalability limits
XenServer Solarflare SR-IOV Implementation

Typical SR-IOV implementation
[Diagram: each guest VM’s application uses a VF driver bound directly to the NIC, bypassing the dom0 physical driver and vSwitch]
Improved performance, but loss of services and management (e.g. live migration)

XS & Solarflare SR-IOV model
[Diagram: each guest combines a plug-in driver with the netfront driver; dom0’s netback driver and vSwitch remain in the path, while traffic can be accelerated through a virtual function (VF)]
Improved performance AND full use of services and management

Experimental
XenMotion in Detail
XenMotion – Live VM Migration
• Requires systems that have compatible CPUs
ᵒ Must be the same manufacturer
ᵒ Can be different speeds
ᵒ Must support maskable features, or be of similar type (e.g. 3450 and 3430)
• Minimal downtime
ᵒ Generally sub-200 ms, mostly due to network switches
• Requires shared storage
ᵒ VM state moves between hosts; underlying disks remain in their existing location
Detailed XenMotion Example
Pre-Copy Migration: Round 1
• Systems verify correct storage and network setup on the destination server
• VM resources are reserved on the destination server
Pre-Copy Migration: Round 1
• While the source VM is still running, XenServer copies the memory image over to the destination server
• XenServer keeps track of any memory changes during this process
Pre-Copy Migration: Round 1
• After the first pass, most of the memory image has been copied to the destination server
• Any memory changes made during the initial memory copy are tracked
Pre-Copy Migration: Round 2
• XenServer now makes another pass at copying over changed memory
Pre-Copy Migration: Round 2
• Xen still tracks any changes during the second memory copy
• The second copy moves much less data
• There is also less time for memory changes to occur
Pre-Copy Migration
• Xen keeps making successive memory copies until there are minimal differences between source and destination
XenMotion: Final
• The source VM is paused and the last bit of memory and machine state is copied over
• The master unlocks storage from the source system and locks it to the destination system
• The destination VM is unpaused and attached to storage and network resources
• Source VM resources are cleared
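The successive rounds above can be simulated with a toy model (illustrative only, not Xen's actual dirty-page tracking; the 10% dirty rate and page counts are invented for the example):

```python
# Illustrative pre-copy simulation: each round copies the pages dirtied
# during the previous round, and the VM pauses only for the final, much
# smaller copy.

import random

def precopy_migrate(total_pages, dirty_rate=0.1, stop_threshold=10, seed=1):
    rng = random.Random(seed)
    rounds = []
    to_copy = total_pages                      # round 1: the whole image
    while to_copy > stop_threshold:
        rounds.append(to_copy)
        # pages dirtied while this round's copy was in flight
        to_copy = sum(1 for _ in range(to_copy) if rng.random() < dirty_rate)
    rounds.append(to_copy)                     # final copy with VM paused
    return rounds

rounds = precopy_migrate(100_000)
print(rounds)   # successive copies shrink roughly 10x per round
```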
Storage XenMotion
Live Storage XenMotion – Upgrading VMs from Local to Shared Storage
[Diagram: within a XenServer pool, a live VM’s VDIs move from local storage on the host to an FC, iSCSI, or NFS SAN]
Live Storage XenMotion – Moving VMs within a Pool with local-only storage
[Diagram: a live VM and its VDIs move between the local storage of two hosts in the same pool]
Live Storage XenMotion – Moving or rebalancing VMs between Pools (Local SAN)
[Diagram: a live VM moves between pools, its VDIs moving from local storage in Pool 1 to an FC, iSCSI, or NFS SAN in Pool 2]
Live Storage XenMotion – Moving or rebalancing VMs between Pools (Local Local)
[Diagram: a live VM and its VDIs move between local storage on hosts in two different pools]
VHD Benefits
• Many SRs implement VDIs as VHD trees
• VHDs are a copy-on-write format for storing virtual disks
• VDIs are the leaves of VHD trees
• Interesting VDI operation: snapshot (implemented as VHD “cloning”)
• A: Original VDI
• B: Snapshot VDI
[Diagram: snapshotting turns the original VDI into a read-only parent with two leaves: A, the read-write original, and B, the read-only snapshot VDI]
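The VHD tree described above can be sketched as a small copy-on-write structure (illustrative Python, not the VHD on-disk format; block names are invented):

```python
# Illustrative copy-on-write tree: a snapshot freezes the current chain as
# a read-only parent; a writable leaf stores only blocks written since,
# falling back to its parents for everything else.

class VHDNode:
    def __init__(self, parent=None):
        self.parent = parent
        self.blocks = {}                       # only locally-written blocks

    def read(self, block):
        node = self
        while node is not None:                # walk up the parent chain
            if block in node.blocks:
                return node.blocks[block]
            node = node.parent
        return None

    def snapshot(self):
        """Freeze self as RO parent; return (RW original A, RO snapshot B)."""
        return VHDNode(parent=self), VHDNode(parent=self)

base = VHDNode()
base.blocks["b0"] = "os"
a, b = base.snapshot()                 # A: original (RW), B: snapshot (RO)
a.blocks["b1"] = "new-data"            # write lands only in leaf A
print(a.read("b0"), a.read("b1"), b.read("b1"))
```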
VDI Mirroring Flow
[Diagram: during the copy, the live VM’s active writes are mirrored from source to destination while the root of the VHD chain is copied in the background; destination VDIs start empty and fill as the copy proceeds]
Benefits of VDI Mirroring
• Optimization: start with the most similar VDI
ᵒ Another VDI with the least number of different blocks
ᵒ Only transfer blocks that are different
• New VDI field: Content ID for each VDI
ᵒ Easy way to confirm that different VDIs have identical content
ᵒ Preserved across VDI copy, refreshed after VDI is attached RW
• Worst case is a full copy (common in server virtualization)
• Best case occurs when you use VM “gold images” (i.e. XenDesktop)
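The Content ID optimization can be sketched as follows (illustrative Python, not the XenServer implementation; content IDs and block maps are invented for the example):

```python
# Illustrative content-ID diffing: if the destination already holds a VDI
# whose content ID matches a base of the disk being moved, only the blocks
# that differ from that base need to transfer.

def blocks_to_transfer(src_blocks, dest_vdis_by_cid, base_content_id):
    """Diff the source disk against the best matching destination VDI."""
    base = dest_vdis_by_cid.get(base_content_id, {})   # {} = worst case
    return {blk: data for blk, data in src_blocks.items()
            if base.get(blk) != data}

gold = {"b0": "os", "b1": "apps"}          # a "gold image" both sides hold
src = {"b0": "os", "b1": "apps", "b2": "user-delta"}
dest = {"cid-gold": gold}                  # destination VDIs keyed by Content ID

diff = blocks_to_transfer(src, dest, "cid-gold")
print(diff)   # only the blocks differing from the gold image move
```

When no matching Content ID exists on the destination, the diff degrades to a full copy, matching the worst case noted above.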
Work better. Live better.