Virtual Volumes (VVols) Beta What's New
TECHNICAL MARKETING DOCUMENTATION v 0.1 / August 2014
Contents

1. Introduction
   1.1 Software Defined Storage
   1.2 What is Virtual Volumes?
   1.3 Virtual Volumes Architecture
       1.3.1 Protocol Endpoints (PE)
       1.3.2 Storage Containers (SC)
       1.3.3 Vendor Provider (VP)
       1.3.4 Virtual Volumes (VVols)
2. Architecture Comparison
   1.4 Storage Policy-Based Management (SPBM)
   2.1 vSphere Requirements
       2.1.1 vCenter Server & vSphere Hosts
   2.2 Storage Requirements
       2.2.1 Storage Array
Conclusion
Acknowledgements
About the Author
Introduction

1.1 Software Defined Storage
VMware's plan for Software-Defined Storage focuses on a set of initiatives around local storage, shared storage, and storage/data services. In essence, the goal is to make vSphere a platform for storage services. Software-Defined Storage aims to provide storage services and service-level-agreement automation through a software layer on the hosts that integrates with, and abstracts, the underlying hardware. In early 2014, VMware introduced the Storage Policy-Based Management (SPBM) framework, the first release of the Software-Defined Storage policy control plane. VMware Virtual SAN 5.5 was the first storage product to take advantage of this capability, as a locally attached, hypervisor-converged storage solution. With Virtual Volumes, VMware plans to extend the VM-centric management capabilities and other value propositions of the Software-Defined Storage strategy to SAN/NAS arrays.
Virtual SAN 5.5 delivers a radically simple hypervisor-converged storage tier that implements the complete VMware vision for Software-Defined Storage on x86 server storage. The next step in VMware's technology roadmap is to bring Software-Defined Storage to external SAN/NAS storage by introducing Virtual Volumes and extending Storage Policy-Based Management to Virtual Volume-capable SAN/NAS arrays. Together, Virtual Volumes and SPBM enable VM-centric, policy-based automation for external storage, and the initiative has gathered tremendous support from storage vendors. This document examines Virtual Volumes (VVols), a solution that extends VM-centric, policy-driven storage management to external SAN/NAS arrays, in detail.
1.2 What is Virtual Volumes?
Virtual Volumes (VVols) is a new virtual machine disk management and integration framework that exposes virtual disks as native objects on the storage array, enabling array-based operations at the virtual disk level. Virtual Volumes transforms the data plane of SAN/NAS storage systems by aligning storage consumption and operations with virtual machines. In other words, Virtual Volumes makes SAN/NAS storage systems VM-aware and unlocks the ability to leverage array-based data services and storage array capabilities with a VM-centric approach, at the granularity of a single virtual disk. With Virtual Volumes, most data operations are offloaded to the storage array. New data management operations and communication mechanisms have been implemented to handle the required communication between vSphere, the storage arrays, and the Virtual Volumes.
Figure 1: Virtual Volume Logical Construct
1.3 Virtual Volumes Architecture
Virtual Volumes implements a significantly different and improved storage architecture, allowing operations to be conducted at the VM level using native array capabilities. Additionally, Virtual Volumes eliminates the need to provision and manage large numbers of LUNs/volumes per host, increasing the manageability of the vSphere infrastructure by reducing operational overhead while enabling scalable data services at a per-VM level.
Figure 2: Virtual Volumes Logical Architecture
To manage and operate the new features and capabilities introduced by Virtual Volumes, a new set of communication mechanisms and management constructs has been introduced:
1.3.1 Protocol Endpoints (PE)
Protocol Endpoints (PE) are the transport mechanisms, or access points, that enable and manage access control and communications between ESXi hosts and storage array systems. Protocol Endpoints are part of the physical storage fabric, and they are capable of establishing data paths on demand from virtual machines to their respective virtual volumes. Protocol Endpoints are compatible with the industry-standard SAN/NAS protocols:

• iSCSI
• NFS v3
• NFS v4.1
• Fibre Channel (FC)
• Fibre Channel over Ethernet (FCoE)

Protocol Endpoints are set up and configured by storage administrators.
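The on-demand data-path behavior described above can be sketched as a small conceptual model. Note that the class and method names below are hypothetical illustrations, not the actual ESXi or VASA interfaces; in a real system, bind/unbind operations are requests between the ESXi host and the array.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualVolume:
    vvol_id: str

@dataclass
class ProtocolEndpoint:
    """Access point on the physical fabric (e.g. a proxy LUN or an NFS mount)."""
    name: str
    protocol: str  # e.g. "iSCSI", "NFS", "FC", "FCoE"
    bound: dict = field(default_factory=dict)  # vvol_id -> data-path ID

    def bind(self, vvol: VirtualVolume) -> str:
        # The array establishes a data path on demand and returns an ID
        # through which I/O to this VVol is multiplexed over the endpoint.
        path_id = f"{self.name}:{len(self.bound) + 1}"
        self.bound[vvol.vvol_id] = path_id
        return path_id

    def unbind(self, vvol: VirtualVolume) -> None:
        # Tear down the data path when the VM no longer needs it.
        self.bound.pop(vvol.vvol_id, None)

pe = ProtocolEndpoint(name="pe-lun-256", protocol="iSCSI")
path = pe.bind(VirtualVolume(vvol_id="vm01-data-01"))
print(path)  # pe-lun-256:1
```

The key design point the sketch illustrates: the endpoint itself is a single fabric-visible entity, while the (potentially many) virtual volumes behind it are connected and disconnected dynamically.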
1.3.2 Storage Containers (SC)
Storage Containers are logical storage constructs that are responsible for managing storage capacity on the storage array, and are also designed for grouping and storing virtual volumes. Storage Containers are typically defined and set up by the storage administrators on the storage arrays in order to define the following:

• Storage capacity allocations and restrictions
• Storage policy settings based on data service capabilities, on a per-VM basis

Storage Containers are logical constructs that are mapped in vSphere as a datastore whenever a vSphere administrator adds storage.
Similar to Protocol Endpoints, Storage Containers are set up and configured by the storage administrator.
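The two responsibilities of a storage container noted above, enforcing capacity allocations and grouping virtual volumes, can be modeled roughly as follows. This is an illustrative sketch only; the names are invented, and a real container lives on the array, not in host software.

```python
class StorageContainer:
    """Logical capacity pool on the array; surfaces in vSphere as a datastore."""

    def __init__(self, name: str, capacity_gb: int):
        self.name = name
        self.capacity_gb = capacity_gb
        self.vvols = {}  # vvol_id -> provisioned size in GB

    def used_gb(self) -> int:
        # Capacity is managed at the container level, not per LUN.
        return sum(self.vvols.values())

    def create_vvol(self, vvol_id: str, size_gb: int) -> None:
        # Enforce the container's capacity restriction on creation.
        if self.used_gb() + size_gb > self.capacity_gb:
            raise ValueError("storage container capacity exceeded")
        self.vvols[vvol_id] = size_gb

sc = StorageContainer("gold-container", capacity_gb=1024)
sc.create_vvol("vm01-data", 100)
print(sc.used_gb())  # 100
```

Because capacity planning happens against the pool rather than against many fixed-size LUNs, the container can hold VVols for many VMs without pre-carving storage.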
Figure 3: Adding a Virtual Volume Datastore in vSphere
1.3.3 Vendor Provider (VP)
The Vendor Provider (VP), also known as the VASA provider, is a storage-side software component (plug-in), developed by the storage vendor, that acts as a storage awareness service for vSphere. ESXi hosts and vCenter Server connect to the Vendor Provider to obtain information about the available storage topology, capabilities, and status. vCenter Server then provides this information to vSphere clients. Vendor Providers can be implemented within the vendor's storage management server or in the storage array firmware.

This plug-in implements the vSphere Storage APIs – Storage Awareness (VASA), a set of out-of-band management APIs used to export storage array features and capabilities and present them to vSphere for granular, per-VM consumption. Vendor Providers are typically set up and configured by the vSphere administrator in one of two ways:

• Automatically, via the array vendor's plug-in
• Manually, through the vCenter Server
Figure 4: Vendor Provider Manual Registration
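The Vendor Provider's role, exporting array capabilities out of band so that vSphere can surface them, can be sketched in miniature. The class and capability names here are hypothetical stand-ins, not the real VASA interface.

```python
class VendorProvider:
    """Stand-in for a VASA provider registered with vCenter Server."""

    def __init__(self, array_capabilities: dict):
        # Capabilities are defined by the storage vendor's implementation.
        self._caps = array_capabilities

    def query_capabilities(self) -> dict:
        # vCenter Server calls this out of band (not on the data path)
        # and surfaces the result in its management interface.
        return dict(self._caps)

vp = VendorProvider({"dedup": True, "replication": "async", "snapshots": True})
print(vp.query_capabilities()["replication"])  # async
```

The essential point is that capability discovery travels over a management channel separate from the I/O path through the Protocol Endpoints.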
1.3.4 Virtual Volumes (VVols)
Virtual Volumes (VVols) are a new type of virtual machine object, created and stored natively on the storage array. VVols are stored in storage containers and map to virtual machine files/objects such as the VM swap file, VMDKs, and their derivatives. There are five types of Virtual Volumes, each mapping to a specific set of a virtual machine's files:

• Data-VVol – equivalent to a VMDK
• Config-VVol – VM home: configuration-related files, logs, etc.
• Memory-VVol – memory snapshots
• Swap-VVol – VM memory swap
• Other-VVol – generic type for solution-specific objects
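The five VVol types and their file mappings above can be summarized as a small lookup. The mapping function is an informal illustration; the file-extension heuristics are assumptions for the sketch, not how the platform actually classifies objects.

```python
from enum import Enum

class VVolType(Enum):
    DATA = "Data-VVol"      # virtual disks (VMDKs)
    CONFIG = "Config-VVol"  # VM home: configuration files, logs, etc.
    MEMORY = "Memory-VVol"  # memory snapshots
    SWAP = "Swap-VVol"      # VM memory swap file
    OTHER = "Other-VVol"    # solution-specific objects

def vvol_for_file(filename: str) -> VVolType:
    """Illustrative mapping from a VM file name to its VVol type."""
    if filename.endswith(".vmdk"):
        return VVolType.DATA
    if filename.endswith((".vmx", ".log")):
        return VVolType.CONFIG
    if filename.endswith(".vswp"):
        return VVolType.SWAP
    if filename.endswith(".vmsn"):
        return VVolType.MEMORY
    return VVolType.OTHER

print(vvol_for_file("vm01.vmdk").value)  # Data-VVol
```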
2. Architecture Comparison
In traditional vSphere storage architectures, the datastore (a LUN formatted with VMFS, or an NFS mount point) serves two purposes: it is an endpoint that receives SCSI or NFS read and write commands, and it is a storage container for a large number of VM metadata and data files. By contrast, in a Virtual Volumes architecture, Protocol Endpoints act as the endpoints and Storage Containers act as the datastores, without imposing a single data policy on all the VMDKs stored within them. Multiplexing a large number of virtual machines onto a single LUN or NFS mount point reduces the number of entities on the fabric, and with it the configuration and scalability overhead, but it limits the granularity at which data services can be applied.
Figure 5: Traditional Storage Architectures vs. Virtual Volumes SDS Architecture
In a Software-Defined Storage architecture with Virtual Volumes, the storage system provides a Protocol Endpoint, which is a discoverable entity on the physical fabric. For example, a SAN storage system could provide a proxy LUN Protocol Endpoint that is discovered using regular LUN discovery commands. Further, this Protocol Endpoint could be accessible over multiple paths, and traffic on these paths may follow well-known path selection policies, as is conventional with LUNs today.
For NAS storage systems, the NFS server mount (an IP address and a share name) is the Protocol Endpoint. Through the Protocol Endpoint, a large number of virtual volumes are accessed; each of these new objects is associated 1-to-1 with a VMDK. Physical fabric methods for device discovery, path selection, and so forth do not apply to the virtual volumes themselves. Virtual Volumes does not replace VMFS or the other existing file systems and storage access protocols supported by vSphere. The vSphere platform is able to connect simultaneously to Virtual Volume-capable arrays and to traditional LUN-based SAN/NAS arrays, and, depending on the vendor implementation, a single array could support both Virtual Volume and LUN-based operations. Virtual Volumes presents several advantages over traditional storage solutions, both in technical limits and in operational impact.
File System Limits             Value      VVol Maximums                               Value
VMDK size                      64 TB      Data-VVol size                              64 TB
Virtual disks per host         2K         VVols bound per host                        4K
LUNs/NAS mounts per host       256 each   PEs per host                                256 each
LUN-to-datastore mapping       1:1        Storage container-to-datastore mapping      N/A
Volume size                    64 TB      Storage container size                      2^64
Volumes per host               256        Storage containers per host                 256
Adapter queue depth            32         Adapter queue depth                         32
Table 1: Virtual Volumes vs. current storage limitations
Storage Admins

Datastore provisioning
  Current: many standard-size LUNs, with one policy per application; capacity management may be tied to LUNs; VMFS must be formatted.
  With VVols: a small number of datastores, shared across workloads; capacity planning done on a pool; capabilities defined for consumption without static limitations.
  Benefit: vendor management tools may need to be updated.

Troubleshooting and monitoring
  Current: keyed off datastores.
  With VVols: based on the VM/VMDK, with VM-awareness on the storage system.
  Benefit: vendor management tools may need to be updated.

Provisioning
  Current: to datastores, using tag-based policies.
  With VVols: to datastores, with granular per-VM policies.
  Benefit: storage consumption is automated; no need to map VMs to LUNs.

Troubleshooting and monitoring
  Current: per VM, datastore-driven; a manual process to ensure policy compliance.
  With VVols: per VM, per storage container; automated monitoring of policy compliance.
  Benefit: VM-awareness at the storage layer and full visibility of policy compliance; VMware tools could be made better with VVols.
1.4 Storage Policy-Based Management (SPBM)
The Storage Policy-Based Management (SPBM) framework delivers an orchestration and automation engine that translates the storage requirements expressed in a VM Storage Policy into VM-granular provisioning, with dynamic resource allocation and management of storage-related services.

Through the integration of the vSphere Storage APIs – Storage Awareness (VASA), storage array capabilities are pushed up through the vSphere stack, and the vendor-specific capabilities are surfaced in the vCenter Server management interface.

Using VM Storage Policies, vSphere administrators can specify the set of storage capabilities required by any particular virtual machine, in order to match the service levels required by the applications it hosts.
SPBM is a common management framework that is applied consistently across all tiers of the Software-Defined Storage virtual data plane (for Virtual SAN and VVol-enabled arrays), regardless of the underlying infrastructure. It also provides integration points with data center management tools such as PowerCLI and cloud automation solutions such as vCloud Automation Center.
Note: a maximum of 64 VVol-managed storage arrays can be configured per ESXi host.
Figure 6: Policy Based Management with Virtual Volumes and SPBM
In the existing LUN/volume-dependent model, accurately delivering storage SLAs at a per-VM level is complex and time-consuming. Array-enabled services such as snapshots, clones, and replication are limited to the datastore level and are based on vendor-specific implementations, which require specific administration skill sets for each vendor. Additionally, each virtual machine must be placed on the appropriate pre-defined storage tier and pre-configured storage container (LUN/volume). In many cases, a datastore with the right combination of capacity, performance, and data services may not be available, resulting in sub-optimal placement of application tiers that leads to overprovisioning of data services and high-performance resources, and ultimately to extra cost. In contrast, SPBM allows storage requirements for capacity, performance, data protection, and so on to be specified at the granularity of each individual virtual machine, via policy. In the case of external SAN/NAS, SPBM leverages Virtual Volumes to recommend compliant datastores for VM placement and to transparently turn on the necessary data services based on native array capabilities. Through SPBM, VM-tailored data services are executed by the array. Coupled with Virtual Volumes, SPBM ensures policy compliance throughout the VM lifecycle.
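The placement behavior just described, matching a VM's policy against advertised datastore capabilities to recommend compliant targets, can be sketched as follows. The capability names and dict-based representation are assumptions made for illustration, not SPBM's actual data model.

```python
def compliant_datastores(policy: dict, datastores: dict) -> list:
    """Return the datastores whose advertised capabilities satisfy the policy.

    `policy` maps a capability name to its required value; each datastore
    maps capability names to what the array advertises (via its VASA provider).
    """
    return [name for name, caps in datastores.items()
            if all(caps.get(k) == v for k, v in policy.items())]

# Capabilities as they might be surfaced from the array (illustrative values).
datastores = {
    "gold":   {"replication": "sync",  "snapshots": True},
    "silver": {"replication": "async", "snapshots": True},
    "bronze": {"replication": None,    "snapshots": False},
}
policy = {"replication": "async", "snapshots": True}
print(compliant_datastores(policy, datastores))  # ['silver']
```

Because the comparison is per VM rather than per LUN, a VM needing only snapshots would match both "gold" and "silver" without requiring a dedicated pre-provisioned datastore.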
Figure 7: SPBM Policy Compliance Enforcement
With SPBM, SLA monitoring and change management are also significantly simplified. SPBM monitors compliance with policies on an ongoing basis and alerts administrators if the policies associated with a VM are no longer satisfied.
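The ongoing compliance check described above amounts to periodically re-evaluating each VM's policy against its current datastore's capabilities. A minimal sketch, with hypothetical names and dict-based capability data:

```python
def check_compliance(vm_policies: dict, placements: dict, datastores: dict) -> list:
    """Return the VMs whose current datastore no longer satisfies their policy."""
    noncompliant = []
    for vm, policy in vm_policies.items():
        caps = datastores[placements[vm]]  # capabilities of the VM's datastore
        if any(caps.get(k) != v for k, v in policy.items()):
            noncompliant.append(vm)  # would trigger an administrator alert
    return noncompliant

datastores = {"silver": {"snapshots": True}, "bronze": {"snapshots": False}}
placements = {"vm01": "silver", "vm02": "bronze"}
policies = {"vm01": {"snapshots": True}, "vm02": {"snapshots": True}}
print(check_compliance(policies, placements, datastores))  # ['vm02']
```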
2.1 vSphere Requirements
2.1.1 vCenter Server & vSphere Hosts
The latest vSphere 2015 Beta builds are required to test Virtual Volumes. The software and all of its related components are available on the Virtual Volumes beta community site; sign up for the beta to gain access. Virtual Volumes Beta Community site: https://communities.vmware.com/community/vmtn/beta/vsphere-beta/vvols
2.2 Storage Requirements
2.2.1 Storage Array
Virtual Volumes requires a compatible storage array system. A software solution, such as a virtual storage appliance from one of the supporting vendors, is supported for testing management workflows, operations, and functionality. Depending on the vendor-specific Virtual Volumes implementation, the storage array may require a firmware upgrade in order to support Virtual Volumes. Check with your storage vendor for information and configuration procedures.
Conclusion
Virtual Volumes and SPBM bring the benefits of Software-Defined Storage to SAN/NAS arrays by enabling policy-driven provisioning and control of storage services for individual VMs, leveraging native array capabilities. Virtual Volumes exposes virtual disks as native storage objects and enables granular, array-based operations at the virtual disk level. Through automation, and by establishing the VM as the unit of data management, storage workflows are streamlined and the underlying arrays are utilized more efficiently. Virtual Volumes is an industry-wide initiative, supported by all major storage vendors, that will make provisioning of SAN/NAS arrays in vSphere environments simpler and more agile, and will reduce storage costs.
Acknowledgements
I would like to thank Suzy Visvanath, Virtual Volumes Sr. Product Manager, and Juan Novella, Virtual Volumes Product Marketing Manager, for contributing to this paper, and Cormac Hogan, Sr. Technical Marketing Architect in the Cloud Infrastructure Product Marketing Group, Kiran Madnani, Sr. Product Manager for the Storage Platform, and Charu Chaubal, Director of the Storage and Availability Technical Marketing team, for reviewing it.

About the Author
Rawlinson is a Senior Technical Marketing Architect in the Cloud Infrastructure Storage and Availability Technical Marketing Group at VMware, Inc., focused on Software-Defined Storage technologies such as VMware Virtual SAN, Virtual Volumes, and other storage-related topics. Previously, as an architect in VMware's Cloud Infrastructure & Management Professional Services Organization, he specialized in vSphere and cloud enterprise architectures for VMware's Fortune 100 and Fortune 500 customers. Rawlinson is among the few VMware Certified Design Experts in the world and is the author of multiple publications based on VMware and other technologies.

• Follow Rawlinson's blogs: http://blogs.vmware.com/vsphere/storage and http://www.punchingclouds.com
• Follow Rawlinson on Twitter: @PunchingClouds