vSphere 4.0 Storage: Features and Enhancements
Nathan Small
Staff Engineer, Rev E
Last updated 3rd August 2009
VMware Inc
Introduction
This presentation is a technical overview and deep dive of some of the features and enhancements to the storage stack and related storage components of vSphere 4.0
New Acronyms in vSphere 4
MPX = VMware Generic Multipath Device (No Unique Identifier)
NAA = Network Addressing Authority
PSA = Pluggable Storage Architecture
MPP = Multipathing Plugin
NMP = Native Multipathing
SATP = Storage Array Type Plugin
PSP = Path Selection Plugin
vSphere Storage
Section 1 - Naming Convention Change
Section 2 - Pluggable Storage Architecture
Section 3 - iSCSI Enhancements
Section 4 - Storage Administration (VC)
Section 5 - Snapshot Volumes & Resignaturing
Section 6 - Storage VMotion
Section 7 - Volume Grow / Hot VMDK Extend
Section 8 - Storage CLI Enhancements
Section 9 - Other Storage Features/Enhancements
Naming Convention Change in vSphere 4
Although the vmhbaN:C:T:L:P naming convention is visible, it is now known as the run-time name and is no longer guaranteed to be persistent through reboots.
ESX 4 now uses the unique LUN identifiers, typically the NAA (Network Addressing Authority) ID. This is true for the CLI as well as the GUI and is also the naming convention used during the install.
The IQN (iSCSI Qualified Name) is still used for iSCSI targets.
The WWN (World Wide Name) is still used for Fibre Channel targets.
For those devices which do not have a unique ID, you will observe an MPX reference (which basically stands for VMware Multipath X device).
vSphere Storage
Section 1 - Naming Convention Change
Section 2 - Pluggable Storage Architecture
Section 3 - iSCSI Enhancements
Section 4 - Storage Administration (VC)
Section 5 - Snapshot Volumes & Resignaturing
Section 6 - Storage VMotion
Section 7 - Volume Grow / Hot VMDK Extend
Section 8 - Storage CLI Enhancements
Section 9 - Other Storage Features/Enhancements
Pluggable Storage Architecture
PSA, the Pluggable Storage Architecture, is a collection of VMkernel APIs that allow third party hardware vendors to insert code directly into the ESX storage I/O path.
This allows 3rd party software developers to design their own load balancing techniques and failover mechanisms for particular storage array types.
This also means that 3rd party vendors can now add support for new arrays into ESX without having to provide internal information or intellectual property about the array to VMware.
VMware, by default, provides a generic Multipathing Plugin (MPP) called NMP (Native Multipathing Plugin).
PSA co-ordinates the operation of the NMP and any additional 3rd party MPP.
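As an illustration, the claim rules that the PSA evaluates when deciding which plugin owns which paths can be listed from the Service Console (a sketch; the exact rule set depends on the build and on any third-party plugins installed):
# esxcli corestorage claimrule list
Each rule maps paths (by vendor/model, transport, driver or location) to the plugin that should claim them, e.g. the NMP.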
PSA Tasks
Loads and unloads multipathing plugins (MPPs).
Handles physical path discovery and removal (via scanning).
Routes I/O requests for a specific logical device to an appropriate MPP.
Handles I/O queuing to the physical storage HBAs & to the logical devices.
Implements logical device bandwidth sharing between Virtual Machines.
Provides logical device and physical path I/O statistics.
MPP Tasks
The PSA discovers the available storage paths and, based on a set of predefined rules, determines which MPP should be given ownership of each path.
The MPP then associates a set of physical paths with a specific logical device.
The specific details of handling path failover for a given storage array are delegated to a sub-plugin called a Storage Array Type Plugin (SATP).
SATP is associated with paths.
The specific details for determining which physical path is used to issue an I/O request (load balancing) to a storage device are handled by a sub-plugin called Path Selection Plugin (PSP).
PSP is associated with logical devices.
NMP Specific Tasks
Manage physical path claiming and unclaiming.
Register and de-register logical devices.
Associate physical paths with logical devices.
Process I/O requests to logical devices:
Select an optimal physical path for the request (load balance)
Perform actions necessary to handle failures and request retries.
Support management tasks such as abort or reset of logical devices.
Storage Array Type Plugin - SATP
A Storage Array Type Plugin (SATP) handles path failover operations.
VMware provides a default SATP for each supported array as well as a generic SATP (an active/active version and an active/passive version) for non-specified storage arrays.
If you want to take advantage of certain storage specific characteristics of your array, you can install a 3rd party SATP provided by the vendor of the storage array, or by a software company specializing in optimizing the use of your storage array.
Each SATP implements the support for a specific type of storage array, e.g. VMW_SATP_SVC for IBM SVC.
SATP (ctd)
The primary functions of an SATP are to:
Implement the switching of physical paths to the array when a path has failed.
Determine when a hardware component of a physical path has failed.
Monitor the hardware state of the physical paths to the storage array.
There are many storage array type plug-ins. To see the complete list, you can use the following commands:
# esxcli nmp satp list
# esxcli nmp satp listrules
# esxcli nmp satp listrules -s <specific SATP>
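If you want a particular SATP to hand its devices to a different default PSP, the same namespace also provides a setdefaultpsp operation. A sketch, assuming the option names below (check esxcli nmp satp --help on your build; the SATP/PSP pairing is only an example):
# esxcli nmp satp setdefaultpsp --satp VMW_SATP_CX --psp VMW_PSP_RR
Any device subsequently claimed by that SATP would then default to Round Robin.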
Path Selection Plugin (PSP)
If you want to take advantage of more complex I/O load balancing algorithms, you could install a 3rd party Path Selection Plugin (PSP).
A PSP handles load balancing operations and is responsible for choosing a physical path to issue an I/O request to a logical device.
VMware provides three PSPs: Fixed, MRU, and Round Robin.
# esxcli nmp psp list
Name Description
VMW_PSP_MRU Most Recently Used Path Selection
VMW_PSP_RR Round Robin Path Selection
VMW_PSP_FIXED Fixed Path Selection
NMP Supported PSPs
Most Recently Used (MRU) — Selects the first working path discovered at system boot time. If this path becomes unavailable, the ESX host switches to an alternative path and continues to use the new path while it is available.
Fixed — Uses the designated preferred path, if it has been configured. Otherwise, it uses the first working path discovered at system boot time. If the ESX host cannot use the preferred path, it selects a random alternative available path. The ESX host automatically reverts back to the preferred path as soon as the path becomes available.
Round Robin (RR) – Uses automatic path selection, rotating through all available paths and enabling load balancing across them.
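To check or change the PSP in use for a single device, the esxcli nmp device namespace can be used. A sketch; the device ID is only an example and the option names should be confirmed against esxcli nmp device --help on your build:
# esxcli nmp device list -d naa.600601601d311f001ee294d9e7e2dd11
# esxcli nmp device setpolicy --device naa.600601601d311f001ee294d9e7e2dd11 --psp VMW_PSP_RR
This changes the policy for that logical device only; the SATP's default PSP is unchanged.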
Enabling Additional Logging on vSphere 4.0
For additional SCSI Log Messages, set:
Scsi.LogCmdErrors = "1"
Scsi.LogMPCmdErrors = "1" (at GA, the default setting for Scsi.LogMPCmdErrors is "1")
These can be found in the Advanced Settings.
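The same options can also be set from the Service Console with esxcfg-advcfg (a sketch, assuming the option paths map directly to the Advanced Settings names; vicfg-advcfg is the ESXi equivalent):
# esxcfg-advcfg -s 1 /Scsi/LogCmdErrors
# esxcfg-advcfg -s 1 /Scsi/LogMPCmdErrors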
Viewing Plugin Information
The following command lists all multipathing modules loaded on the system. At a minimum, this command returns the default VMware Native Multipath (NMP) plugin & the MASK_PATH plugin. Third-party MPPs will also be listed if installed:
# esxcfg-mpath -G
MASK_PATH
NMP
For ESXi, the following VI CLI 4.0 command can be used:
# vicfg-mpath -G --server <IP> --username <X> --password <Y>
MASK_PATH
NMP
LUN path masking is done via the MASK_PATH Plug-in.
Viewing Device Information
The command esxcli nmp device list lists all devices managed by the NMP plug-in and the configuration of that device, e.g.:
# esxcli nmp device list
naa.600601601d311f001ee294d9e7e2dd11
Device Display Name: DGC iSCSI Disk (naa.600601601d311f001ee294d9e7e2dd11)
Storage Array Type: VMW_SATP_CX
Storage Array Type Device Config: {navireg ipfilter}
Path Selection Policy: VMW_PSP_MRU
Path Selection Policy Device Config: Current Path=vmhba33:C0:T0:L1
Working Paths: vmhba33:C0:T0:L1
mpx.vmhba1:C0:T0:L0
Device Display Name: Local VMware Disk (mpx.vmhba1:C0:T0:L0)
Storage Array Type: VMW_SATP_LOCAL
Storage Array Type Device Config:
Path Selection Policy: VMW_PSP_FIXED
Path Selection Policy Device Config: {preferred=vmhba1:C0:T0:L0;current=vmhba1:C0:T0:L0}
Working Paths: vmhba1:C0:T0:L0
Specific configuration for EMC Clariion & Invista products
mpx is used as an identifier for devices that do not have their own unique ids
NAA is the Network Addressing Authority identifier, which is guaranteed to be unique
Viewing Device Information (ctd)
Get current path information for a specified storage device managed by the NMP.
# esxcli nmp device list -d naa.600601604320170080d407794f10dd11
naa.600601604320170080d407794f10dd11
Device Display Name: DGC Fibre Channel Disk (naa.600601604320170080d407794f10dd11)
Storage Array Type: VMW_SATP_CX
Storage Array Type Device Config: {navireg ipfilter}
Path Selection Policy: VMW_PSP_MRU
Path Selection Policy Device Config: Current Path=vmhba2:C0:T0:L0
Working Paths: vmhba2:C0:T0:L0
Viewing Device Information (ctd)
Lists all paths available for a specified storage device on ESX:
# esxcfg-mpath -b -d naa.600601601d311f001ee294d9e7e2dd11
naa.600601601d311f001ee294d9e7e2dd11 : DGC iSCSI Disk (naa.600601601d311f001ee294d9e7e2dd11)
vmhba33:C0:T0:L1 LUN:1 state:active iscsi Adapter: iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b Target: IQN=iqn.1992-04.com.emc:cx.ck200083700716.b0 Alias= Session=00023d000001 PortalTag=1
vmhba33:C0:T1:L1 LUN:1 state:standby iscsi Adapter: iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b Target: IQN=iqn.1992-04.com.emc:cx.ck200083700716.a0 Alias= Session=00023d000001 PortalTag=2
ESXi has an equivalent vicfg-mpath command.
Viewing Device Information (ctd)
# esxcfg-mpath -l -d naa.600601601d311f001ee294d9e7e2dd11
iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b-00023d000001,iqn.1992-04.com.emc:cx.ck200083700716.a0,t,2-naa.600601601d311f001ee294d9e7e2dd11
Runtime Name: vmhba33:C0:T1:L1
Device: naa.600601601d311f001ee294d9e7e2dd11
Device Display Name: DGC iSCSI Disk (naa.600601601d311f001ee294d9e7e2dd11)
Adapter: vmhba33 Channel: 0 Target: 1 LUN: 1
Adapter Identifier: iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b
Target Identifier: 00023d000001,iqn.1992-04.com.emc:cx.ck200083700716.a0,t,2
Plugin: NMP
State: standby
Transport: iscsi
Adapter Transport Details: iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b
Target Transport Details: IQN=iqn.1992-04.com.emc:cx.ck200083700716.a0 Alias= Session=00023d000001 PortalTag=2
iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b-00023d000001,iqn.1992-04.com.emc:cx.ck200083700716.b0,t,1-naa.600601601d311f001ee294d9e7e2dd11
Runtime Name: vmhba33:C0:T0:L1
Device: naa.600601601d311f001ee294d9e7e2dd11
Device Display Name: DGC iSCSI Disk (naa.600601601d311f001ee294d9e7e2dd11)
Adapter: vmhba33 Channel: 0 Target: 0 LUN: 1
Adapter Identifier: iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b
Target Identifier: 00023d000001,iqn.1992-04.com.emc:cx.ck200083700716.b0,t,1
Plugin: NMP
State: active
Transport: iscsi
Adapter Transport Details: iqn.1998-01.com.vmware:cs-tse-h33-34f33b4b
Target Transport Details: IQN=iqn.1992-04.com.emc:cx.ck200083700716.b0 Alias= Session=00023d000001 PortalTag=1
Storage array (target) iSCSI Qualified Names (IQNs)
Third-Party Multipathing Plug-ins (MPPs)
You can install the third-party multipathing plug-ins (MPPs) when you need to change specific load balancing and failover characteristics of ESX/ESXi.
The third-party MPPs replace the behaviour of the NMP and entirely take control over the path failover and the load balancing operations for certain specified storage devices.
Third-Party SATP & PSP
Third-party SATP
Generally developed by third-party hardware manufacturers who have ‘expert’ knowledge of the behaviour of their storage devices.
Accommodates specific characteristics of storage arrays and facilitates support for new arrays.
Third-party PSP
Generally developed by third-party software companies.
More complex I/O load balancing algorithms.
NMP coordination
Third-party SATPs and PSPs are coordinated by the NMP, and can be simultaneously used with the VMware SATPs and PSPs.
vSphere Storage
Section 1 - Naming Convention Change
Section 2 - Pluggable Storage Architecture
Section 3 - iSCSI Enhancements
Section 4 - Storage Administration (VC)
Section 5 - Snapshot Volumes & Resignaturing
Section 6 - Storage VMotion
Section 7 - Volume Grow / Hot VMDK Extend
Section 8 - Storage CLI Enhancements
Section 9 - Other Storage Features/Enhancements
iSCSI Enhancements
ESX 4 includes an updated iSCSI stack which offers improvements to both software iSCSI (initiator that runs at the ESX layer) and hardware iSCSI (a hardware-optimized iSCSI HBA).
For both software and hardware iSCSI, functionality (e.g. CHAP support, digest acceleration, etc.) and performance are improved.
Software iSCSI can now be configured to use host based multipathing if you have more than one physical network adapter.
In the new ESX 4.0 Software iSCSI stack, there is no longer any requirement to have a Service Console connection to communicate to an iSCSI target.
Software iSCSI Enhancements
iSCSI Advanced Settings
In particular, data integrity checks in the form of digests.
CHAP Parameters Settings
A user will be able to specify CHAP parameters as per-target CHAP and mutual per-target CHAP.
Inheritance model of parameters.
A global set of configuration parameters can be set on the initiator and propagated down to all targets.
Per target/discovery level configuration.
Configuration settings can now be set on a per target basis which means that a customer can uniquely configure parameters for each array discovered by the initiator.
Software iSCSI Multipathing – Port Binding
You can now create a port binding between a physical NIC and an iSCSI VMkernel port in ESX 4.0.
Using the "port binding" feature, users can map multiple iSCSI VMkernel ports to different physical NICs. This enables the software iSCSI initiator to use multiple physical NICs for I/O transfer.
Connecting the software iSCSI initiator to the VMkernel ports can only be done from the CLI using the esxcli swiscsi commands.
Host based multipathing can then manage the paths to the LUN.
In addition, the Round Robin path policy can be configured to simultaneously use more than one physical NIC for the iSCSI traffic to the iSCSI target.
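A sketch of the port binding workflow from the CLI, assuming two VMkernel ports (vmk1 and vmk2) have already been created on the vSwitch and that the software iSCSI adapter is vmhba33:
# esxcli swiscsi nic add -n vmk1 -d vmhba33
# esxcli swiscsi nic add -n vmk2 -d vmhba33
# esxcli swiscsi nic list -d vmhba33
After a rescan, each bound VMkernel port appears as a separate path that the NMP (for example with Round Robin) can use.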
Hardware iSCSI Limitations
Mutual CHAP is disabled.
Discovery is supported by IP address only (storage array name discovery not supported).
Running with the Hardware and Software iSCSI initiator enabled on the same host at the same time is not supported.
vSphere Storage
Section 1 - Naming Convention Change
Section 2 - Pluggable Storage Architecture
Section 3 - iSCSI Enhancements
Section 4 - Storage Administration (VC)
Section 5 - Snapshot Volumes & Resignaturing
Section 6 - Storage VMotion
Section 7 - Volume Grow / Hot VMDK Extend
Section 8 - Storage CLI Enhancements
Section 9 - Other Storage Features/Enhancements
GUI Changes - Display Device Info
Note that there are no further references to vmhbaC:T:L. Unique device identifiers such as the NAA id are now used.
GUI Changes - Display HBA Configuration Info
Again, notice the use of NAA ids rather than vmhbaC:T:L.
GUI Changes - Display Path Info
Note the reference to the PSP & SATP.
Note the (I/O) status designating the active path.
GUI Changes - Data Center Rescan
Degraded Status
If we detect fewer than 2 HBAs or 2 Targets in the paths of the datastore, we mark the datastore multipathing status as "Partial/No Redundancy" in the Storage Views.
Storage Administration
VI4 also provides new monitoring, reporting and alarm features for storage management.
This now gives an administrator of a vSphere environment the ability to:
1. Manage access/permissions of datastores/folders
2. Have visibility of a Virtual Machine’s connectivity to the storage infrastructure
3. Account for disk space utilization
4. Provide notification in case of specific usage conditions
Datastore Monitoring & Alarms
vSphere introduces new datastore and VM-specific alarms/alerts on storage events:
New datastore alarms:
Datastore disk usage %
Datastore disk overallocation %
Datastore connection state to all hosts
New VM alarms:
VM Total size on disk (GB)
VM Snapshot size (GB)
Customers can now track snapshot usage
New Storage Alarms
New Datastore-specific Alarms and New VM-specific Alarms
This alarm allows the tracking of Thin Provisioned disks.
This alarm will trigger if a datastore becomes unavailable to the host.
This alarm will trigger if a snapshot delta file becomes too large.
vSphere 4 Storage
Section 1 - Naming Convention Change
Section 2 - Pluggable Storage Architecture
Section 3 - iSCSI Enhancements
Section 4 - Storage Administration (VC)
Section 5 - Snapshot Volumes & Resignaturing
Section 6 - Storage VMotion
Section 7 - Volume Grow / Hot VMDK Extend
Section 8 - Storage CLI Enhancements
Section 9 - Other Storage Features/Enhancements
Traditional Snapshot Detection
When an ESX 3.x server finds a VMFS-3 LUN, it compares the SCSI_DiskID information returned from the storage array with the SCSI_DiskID information stored in the LVM Header.
If the two IDs don’t match, then by default, the VMFS-3 volume will not be mounted and thus be inaccessible.
A VMFS volume on ESX 3.x could be detected as a snapshot for a number of reasons:
LUN ID changed
SCSI version supported by array changed (firmware upgrade)
Identifier type changed – Unit Serial Number vs NAA ID
New Snapshot Detection Mechanism
To determine whether a device is a snapshot, ESX 4.0 uses a globally unique identifier to identify each LUN, typically the NAA (Network Addressing Authority) ID.
NAA IDs are unique and are persistent across reboots.
There are many different globally unique identifiers (EUI, SNS, T10, etc). If the LUN does not support any of these globally unique identifiers, ESX will fall back to the serial number + LUN ID used in ESX 3.0.
SCSI_DiskId Structure
The internal VMkernel structure SCSI_DiskId is populated with information about a LUN.
This is stored in the metadata header of a VMFS volume.
If the LUN does have a globally unique (NAA) ID, the field SCSI_DiskId.data.uid in the SCSI_DiskId structure will hold it.
If the NAA ID in the SCSI_DiskId.data.uid stored in the metadata does not match the NAA ID returned by the LUN, the ESX knows the LUN is a snapshot.
For older arrays that do not support NAA IDs, the earlier algorithm is used where we compare other fields in the SCSI_DISKID structure to detect whether a LUN is a snapshot or not.
Snapshot Log Messages
8:00:45:51.975 cpu4:81258)ScsiPath: 3685: Plugin 'NMP' claimed path 'vmhba33:C0:T1:L2'
8:00:45:51.975 cpu4:81258)ScsiPath: 3685: Plugin 'NMP' claimed path 'vmhba33:C0:T0:L2'
8:00:45:51.977 cpu2:81258)VMWARE SCSI Id: Id for vmhba33:C0:T0:L2
0x60 0x06 0x01 0x60 0x1d 0x31 0x1f 0x00 0xfc 0xa3 0xea 0x50 0x1b 0xed 0xdd 0x11 0x52 0x41 0x49 0x44 0x20 0x35
8:00:45:51.978 cpu2:81258)VMWARE SCSI Id: Id for vmhba33:C0:T1:L2
0x60 0x06 0x01 0x60 0x1d 0x31 0x1f 0x00 0xfc 0xa3 0xea 0x50 0x1b 0xed 0xdd 0x11 0x52 0x41 0x49 0x44 0x20 0x35
8:00:45:52.002 cpu2:81258)LVM: 7125: Device naa.600601601d311f00fca3ea501beddd11:1 detected to be a snapshot:
8:00:45:52.002 cpu2:81258)LVM: 7132: queried disk ID: <type 2, len 22, lun 2, devType 0, scsi 0, h(id) 3817547080305476947>
8:00:45:52.002 cpu2:81258)LVM: 7139: on-disk disk ID: <type 2, len 22, lun 1, devType 0, scsi 0, h(id) 6335084141271340065>
8:00:45:52.006 cpu2:81258)ScsiDevice: 1756: Successfully registered device "naa.600601601d311f00fca3ea501beddd11" from plugin "
Resignature & Force-Mount
We have a new naming convention in ESX 4.
“Resignature” is equivalent to EnableResignature = 1 in ESX 3.x.
“Force-Mount” is equivalent to DisallowSnapshotLUN = 0 in ESX 3.x.
The advanced configuration options EnableResignature and DisallowSnapshotLUN have been replaced in ESX 4 with a new CLI utility called esxcfg-volume (vicfg-volume for ESXi).
Historically, EnableResignature and DisallowSnapshotLUN were applied server-wide to all volumes on an ESX host. The new Resignature and Force-mount options are volume-specific, so they offer much greater granularity in the handling of snapshots.
Persistent Or Non-Persistent Mounts
If you use the GUI to force-mount a VMFS volume, it will make it a persistent mount which will remain in place through reboots of the ESX host. VC will not allow this volume to be resignatured.
If you use the CLI to force-mount a VMFS volume, you can choose whether it persists or not through reboots.
Through the GUI, the Add Storage Wizard now displays the VMFS label. Therefore if a device is not mounted, but it has a label associated with it, you can make the assumption that it is a snapshot, or to use ESX 4 terminology, a Volume Copy.
Mounting A Snapshot
Original volume is still presented to the ESX.
Snapshot – notice that the volume label is the same as the original volume.
Snapshot Mount Options
Keep Existing Signature – this is a force-mount operation: similar to disabling DisallowSnapshotLUN in ESX 3.x. New datastore has original UUID saved in the file system header.
If the original volume is already online, this option will not succeed and will print a 'Cannot change the host configuration' message when resolving the VMFS volumes.
Assign a new Signature – this is a resignature operation: similar to enabling EnableResignature in ESX 3.x. New datastore has a new UUID saved in the file system header.
Format the disk – destroys the data on the disk and creates a new VMFS volume on it.
New CLI Command: esxcfg-volume
There is a new CLI command in ESX 4 for resignaturing VMFS snapshots. Note the difference between '-m' and '-M':
# esxcfg-volume
esxcfg-volume <options>
-l|--list List all volumes which have been
detected as snapshots/replicas.
-m|--mount <VMFS UUID|label> Mount a snapshot/replica volume,
if its original copy is not
online.
-u|--umount <VMFS UUID|label> Umount a snapshot/replica volume.
-r|--resignature <VMFS UUID|label> Resignature a snapshot/replica
volume.
-M|--persistent-mount <VMFS UUID|label> Mount a snapshot/replica volume
persistently, if its original
copy is not online.
-h|--help Show this message.
esxcfg-volume (ctd)
The difference between a mount and a persistent mount is that the persistent mounts will be maintained through reboots.
ESX manages this by adding entries for force mounts into the /etc/vmware/esx.conf.
A typical set of entries for a force mount look like:
/fs/vmfs[48d247dd-7971f45b-5ee4-0019993032e1]\ /forceMountedLvm/forceMount = "true"
/fs/vmfs[48d247dd-7971f45b-5ee4-0019993032e1]\ /forceMountedLvm/lvmName = "48d247da-b18fd17c-1da1-0019993032e1"
/fs/vmfs[48d247dd-7971f45b-5ee4-0019993032e1]\ /forceMountedLvm/readOnly = "false"
Mount With the Original Volume Still Online
/var/log # esxcfg-volume -l
VMFS3 UUID/label: 496f202f-3ff43d2e-7efe-001f29595f9d/Shared_VMFS_For_FT_VMs
Can mount: No (the original volume is still online)
Can resignature: Yes
Extent name: naa.600601601d311f00fca3ea501beddd11:1 range: 0 - 20223 (MB)
/var/log # esxcfg-volume -m 496f202f-3ff43d2e-7efe-001f29595f9d
Mounting volume 496f202f-3ff43d2e-7efe-001f29595f9d
Error: Unable to mount this VMFS3 volume due to the original volume is still online
esxcfg-volume (ctd)
In this next example, a clone LUN of a VMFS LUN is presented back to the same ESX server. We cannot use either the mount or the persistent-mount option, since the original LUN is already presented to the host, so we will have to resignature:
# esxcfg-volume -l
VMFS3 UUID/label: 48d247dd-7971f45b-5ee4-0019993032e1/cormac_grow_vol
Can mount: No
Can resignature: Yes
Extent name: naa.6006016043201700f30570ed09f6da11:1 range: 0 - 15103 (MB)
esxcfg-volume (ctd)
# esxcfg-volume -r 48d247dd-7971f45b-5ee4-0019993032e1
Resignaturing volume 48d247dd-7971f45b-5ee4-0019993032e1
# vdf
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdg2 5044188 1595804 3192148 34% /
/dev/sdd1 248895 50780 185265 22% /boot
.
.
/vmfs/volumes/48d247dd-7971f45b-5ee4-0019993032e1
15466496 5183488 10283008 33% /vmfs/volumes/cormac_grow_vol
/vmfs/volumes/48d39951-19a5b934-67c3-0019993032e1
15466496 5183488 10283008 33% /vmfs/volumes/snap-397419fa-cormac_grow_vol
Warning – there is no vdf command in ESXi. However the df command reports on VMFS filesystems in ESXi.
Ineffective Workarounds Under ESX 4
Some of our customers have been running with DisallowSnapshotLUN set to '0' as a workaround due to their choice to use inconsistent LUN presentation numbers across their hosts since the early ESX 3.0.x days.
Other customers have been running with this setting after enabling the SPC-2 and SPC-3 bits on the FA ports of their EMC DMX/Symmetrix array and found that their LUNs were now seen as snapshots. This was because the unique IDs of their LUNs changed once these director bits were enabled, since the LUN is then referenced by a unique ID (from page 0x83) rather than by the LUN serial number, or by the NAA ID if running ESX 3.5. This behavior was also observed on other arrays when upgrading firmware/OS.
As there is no longer a global DisallowSnapshotLUN setting available in ESX 4, if your environment is running entirely on "snapshots" then you can use the following script to streamline the force-mount operation on each ESX host:
for i in `/usr/sbin/esxcfg-volume -l | grep VMFS3 | awk '{print $3}' | cut -d/ -f1`; do /usr/sbin/esxcfg-volume -M $i; done
vSphere Storage
Section 1 - Naming Convention Change
Section 2 - Pluggable Storage Architecture
Section 3 - iSCSI Enhancements
Section 4 - Storage Administration (VC)
Section 5 - Snapshot Volumes & Resignaturing
Section 6 - Storage VMotion
Section 7 - Volume Grow / Hot VMDK Extend
Section 8 - Storage CLI Enhancements
Section 9 - Other Storage Features/Enhancements
Storage VMotion Enhancements In VI4
The following enhancements have been made to the VI4 version of Storage VMotion:
GUI Support.
Leverages new features of VI4, including fast suspend/resume and Change Block Tracking.
Supports moving VMDKs from Thick to Thin formats & vice versa
Ability to migrate RDMs to VMDKs.
Ability to migrate RDMs to RDMs.
Support for FC, iSCSI & NAS.
Storage VMotion no longer requires 2 x memory.
No requirement to create a VMotion interface for Storage VMotion.
Ability to move an individual disk without moving the VM’s home.
New Features
Fast Suspend/Resume of VMs
This provides the ability to transition from the source VM to the destination VM reliably and with a fast switching time. This is only necessary when migrating the .vmx file.
Changed Block Tracking
Very much like how we handle memory with standard VMotion in that a bitmap of changed disk blocks is used rather than a bitmap of changed memory pages.
This means Storage VMotion no longer needs to snapshot the original VM and commit it to the destination VM so the Storage VMotion operation performs much faster.
Multiple iterations of the disk copy go through, but each time the number of changed disk blocks is reduced, until eventually all disk blocks have been copied and there is a complete copy of the disk at the destination.
Storage VMotion – GUI Support
• Storage VMotion is still supported via the VI CLI 4.0 as well as the API, so customers wishing to use this method can continue to do so.
• The Change both host and datastore option is only available to powered off VMs.
For a non-passthru RDM, you can choose to convert it to either a Thin Provisioned or Thick VMDK, or you can leave it as a non-passthru RDM.
Storage VMotion – CLI (ctd)
# svmotion --interactive
Entering interactive mode. All other options and environment variables will be ignored.
Enter the VirtualCenter service url you wish to connect to (e.g. https://myvc.mycorp.com/sdk, or just myvc.mycorp.com): VC-Linked-Mode.vi40.vmware.com
Enter your username: Administrator
Enter your password: ********
Attempting to connect to https://VC-Linked-Mode.vi40.vmware.com/sdk.
Connected to server.
Enter the name of the datacenter: Embedded-ESX40
Enter the datastore path of the virtual machine (e.g. [datastore1] myvm/myvm.vmx): [CLAR_L52] W2K3SP2/W2K3SP2.vmx
Enter the name of the destination datastore: CLAR_L53
You can also move disks independently of the virtual machine. If you want the disks to stay with the virtual machine, then skip this step.
Would you like to individually place the disks (yes/no)? no
Performing Storage VMotion.
0% |----------------------------------------------------------------------------------------------------| 100%
##########
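The same migration can be run non-interactively. A sketch of the remote VI CLI syntax, reusing the names from the interactive session above (credentials and datastore names are only examples):
svmotion --url=https://VC-Linked-Mode.vi40.vmware.com/sdk --username=Administrator --password=<password> --datacenter=Embedded-ESX40 --vm='[CLAR_L52] W2K3SP2/W2K3SP2.vmx:CLAR_L53'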
Limitations
The migration of Virtual Machines which have snapshots will not be supported at GA.
Currently the plan is to have this in a future release
The migration of Virtual Machines to a different host and a different datastore simultaneously is not yet supported.
No firm date for support of this feature yet.
Storage VMotion Timeouts
There are also a number of tunable timeout values:
Downtime timeout
Failure: Source detected that destination failed to resume. Update fsr.maxSwitchoverSeconds (default 100 seconds) in the VM's .vmx file.
May be observed on Virtual Machines that have lots of virtual disks.
Data timeout
Failure: Timed out waiting for migration data. Update migration.dataTimeout (default 60 seconds) in the VM's .vmx file.
May be observed when migrating from NFS to NFS on slow networks.
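A sketch of the corresponding overrides in the VM's .vmx file (the values are only examples; the defaults are 100 and 60 seconds respectively):
fsr.maxSwitchoverSeconds = "150"
migration.dataTimeout = "120"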
vSphere Storage
Section 1 - Naming Convention Change
Section 2 - Pluggable Storage Architecture
Section 3 - iSCSI Enhancements
Section 4 - Storage Administration (VC)
Section 5 - Snapshot Volumes & Resignaturing
Section 6 - Storage VMotion
Section 7 - Volume Grow / Hot VMDK Extend
Section 8 - Storage CLI Enhancements
Section 9 - Other Storage Features/Enhancements
Supported Disk Growth/Shrink Operations
VI4 introduces the following growth/shrink operations:
Grow VMFS volumes: yes
Grow RDM volumes: yes
Grow *.vmdk : yes
Shrink VMFS volumes: no
Shrink RDM volumes: yes
Shrink *.vmdk : no
Volume Grow & Hot VMDK Extend
Volume Grow
VI4 allows dynamic expansion of a volume partition by adding capacity to a VMFS without disrupting running Virtual Machines.
Once the LUN backing the datastore has been grown (typically through an array management utility), Volume Grow expands the VMFS partition on the expanded LUN.
Historically, the only way to grow a VMFS volume was to use the extent-based approach. Volume Grow offers a different method of capacity growth.
The newly available space appears as a larger VMFS volume along with an associated grow event in vCenter.
Hot VMDK Extend
Hot extend is supported for VMFS flat virtual disks in persistent mode and without any Virtual Machine snapshots.
Used in conjunction with the new Volume Grow capability, the user has maximum flexibility in managing growing capacity in VI4.
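From the Service Console, a hot extend looks roughly like the following (a sketch; the path and size are only examples, the size given is the new total size of the disk, and the VM must have no snapshots):
# vmkfstools -X 20G /vmfs/volumes/cormac_grow_vol/W2K3SP2/W2K3SP2.vmdk
The guest OS still has to grow its own partition and filesystem afterwards.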
Comparison: Volume Grow & Add Extent
Must power off VMs? Volume Grow: No. Add Extent: No.
Can be done on a newly provisioned LUN? Volume Grow: No. Add Extent: Yes.
Can be done on an existing array-expanded LUN? Volume Grow: Yes. Add Extent: Yes (but not allowed through the GUI).
Limits? Volume Grow: an extent can be grown any number of times, up to 2TB. Add Extent: a datastore can have up to 32 extents, each up to 2TB.
Results in creation of a new partition? Volume Grow: No. Add Extent: Yes.
VM availability impact? Volume Grow: none, if the datastore has only one extent. Add Extent: introduces a dependency on the first extent.
Volume Grow GUI Enhancements
Here I am choosing the same device on which the VMFS is installed; there is currently 4GB free.
This option selects to expand the VMFS using free space on the current device.
Notice that the current extent capacity is 1GB.
VMFS Grow - Expansion Options
Dynamic LUN Expansion applies to the LUN provisioned at the array; VMFS Volume Grow applies to the VMFS volume/datastore provisioned for ESX; VMDK Hot Extend applies to the virtual disk provisioned for the VM.
Hot VMDK Extend
vSphere Storage
Section 1 - Naming Convention Change
Section 2 - Pluggable Storage Architecture
Section 3 - iSCSI Enhancements
Section 4 - Storage Administration (VC)
Section 5 - Snapshot Volumes & Resignaturing
Section 6 - Storage VMotion
Section 7 - Volume Grow / Hot VMDK Extend
Section 8 - Storage CLI Enhancements
Section 9 - Other Storage Features/Enhancements
ESX 4.0 CLI
There have been a number of new storage commands introduced with ESX 4.0 as well as enhancements to the more traditional commands.
Here is a list of commands that have changed or have been added:
esxcli
esxcfg-mpath / vicfg-mpath
esxcfg-volume / vicfg-volume
esxcfg-scsidevs / vicfg-scsidevs
esxcfg-rescan / vicfg-rescan
esxcfg-module / vicfg-module
vmkfstools
This slide deck will only cover a few of the commands mentioned above.
New/Updated CLI Commands (1): esxcfg-scsidevs
The esxcfg-vmhbadevs command has been replaced by the esxcfg-scsidevs command.
To display the old VMware Legacy identifiers (vml), use:
# esxcfg-scsidevs -u
To display Service Console devices:
# esxcfg-scsidevs -c
To display all logical devices on this host:
# esxcfg-scsidevs -l
To show the relationship between COS native devices (/dev/sd) and vmhba devices:
# esxcfg-scsidevs -m
The VI CLI 4.0 has an equivalent vicfg-scsidevs for ESXi.
esxcfg-scsidevs (ctd)
Sample output of esxcfg-scsidevs -l:
naa.600601604320170080d407794f10dd11
Device Type: Direct-Access
Size: 8192 MB
Display Name: DGC Fibre Channel Disk (naa.600601604320170080d407794f10dd11)
Plugin: NMP
Console Device: /dev/sdb
Devfs Path: /vmfs/devices/disks/naa.600601604320170080d407794f10dd11
Vendor: DGC Model: RAID 5 Revis: 0224
SCSI Level: 4 Is Pseudo: false Status: on
Is RDM Capable: true Is Removable: false
Is Local: false
Other Names:
vml.0200000000600601604320170080d407794f10dd11524149442035
Note that this is one of the few CLI commands which will report the LUN size.
New/Updated CLI Commands (2): esxcfg-rescan
You now have the ability to rescan based on whether devices were added or removed.
You can also rescan the current paths and not try to discover new ones.
# esxcfg-rescan -h
esxcfg-rescan <options> [adapter]
-a|--add Scan for only newly added devices.
-d|--delete Scan for only deleted devices.
-u|--update Scan existing paths only and update their state.
-h|--help Display this message.
The VI CLI 4.0 has an equivalent vicfg-rescan command for ESXi.
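For example, to scan only for newly added devices on a single adapter (the adapter name is only an example):
# esxcfg-rescan -a vmhba33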
New/Updated CLI Commands (3): vmkfstools
The vmkfstools command exists in the Service Console and the VI CLI 4.0.
Grow a VMFS:
vmkfstools -G
Inflate a VMDK from thin to thick:
vmkfstools -j
Import a thick VMDK to thin:
vmkfstools -i <src> -d thin
Import a thin VMDK to thick:
vmkfstools -i <src thin disk> -d zeroedthick
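A sketch of full invocations for the inflate and import operations above (the datastore and VMDK paths are only examples):
# vmkfstools -j /vmfs/volumes/CLAR_L52/W2K3SP2/W2K3SP2.vmdk
# vmkfstools -i /vmfs/volumes/CLAR_L52/W2K3SP2/W2K3SP2.vmdk -d thin /vmfs/volumes/CLAR_L53/W2K3SP2-thin.vmdk
# vmkfstools -i /vmfs/volumes/CLAR_L53/W2K3SP2-thin.vmdk -d zeroedthick /vmfs/volumes/CLAR_L52/W2K3SP2-thick.vmdk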
vSphere Storage
Section 1 - Naming Convention Change
Section 2 - Pluggable Storage Architecture
Section 3 - iSCSI Enhancements
Section 4 - Storage Administration (VC)
Section 5 - Snapshot Volumes & Resignaturing
Section 6 - Storage VMotion
Section 7 - Volume Grow / Hot VMDK Extend
Section 8 - Storage CLI Enhancements
Section 9 - Other Storage Features/Enhancements
Other Storage Features/Enhancements
Storage General
The number of LUNs that can be presented to the ESX 4.0 server is still 256.
VMFS
The maximum extent volume size in VI4 is still 2TB.
Maximum number of extents is still 32, so maximum volume size is still 64TB.
We are still using VMFS3, not VMFS 4 (although the version has increased to 3.33).
iSCSI Enhancements
10 GbE iSCSI Initiator – iSCSI over a 10GbE interface is supported. First introduced in ESX/ESXi 3.5 u2 & extended back to ESX/ESXi 3.5 u1.
Other Storage Features/Enhancements (ctd)
NFS Enhancements
IPv6 support (experimental)
Support for up to 64 NFS volumes (the old limit was 32)
10 GbE NFS Support – NFS over a 10GbE interface is supported.
First introduced in ESX/ESXi 3.5 u2
FC Enhancements
Support for 8Gb Fibre Channel
First introduced in ESX/ESXi 3.5 u2
Support for FC over Ethernet (FCoE)
Other Storage Features/Enhancements (ctd)
Paravirtualized SCSI Driver (Guest OS)
Eliminates the need to trap privileged instructions as it uses hypercalls to request that the underlying hypervisor execute those privileged instructions.
Handling unexpected or unallowable conditions via trapping can be time-consuming and can impact performance.
Other Storage Features/Enhancements (ctd)
ESX 3.x boot-time LUN selection: which sd device represents an iSCSI disk and which represents an FC disk?
Other Storage Features/Enhancements (ctd)
ESX 4.0 boot-time LUN selection. Hopefully this will address incorrect LUN selections during install/upgrade.