EMC FAST VP for Unified Storage Systems
A Detailed Review
Abstract
This white paper introduces EMC® Fully Automated Storage Tiering for Virtual Pools (FAST VP) technology
and describes its features and implementation. Details on how to work with the product in the Unisphere™
operating environment are discussed, and usage guidance and major customer benefits are also included.
March 2011
Copyright © 2010, 2011 EMC Corporation. All rights reserved.
EMC believes the information in this publication is accurate as of its publication date. The information is
subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION
MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE
INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable
software license.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.
All other trademarks used herein are the property of their respective owners.
Part Number h8058.1
Table of Contents
Executive summary
    Audience
Introduction
Storage tiers
    Extreme Performance Tier drives: Flash drives (for VNX and CX4 platforms)
    Performance Tier drives: SAS (VNX) and Fibre Channel (CX4)
    Capacity Tier drives: NL-SAS (VNX) and SATA (CX4)
FAST VP operations
    Storage pools
    FAST VP algorithm
        Statistics collection
        Analysis
        Relocation
    Managing FAST VP at the storage pool level
        Automated scheduler
        Manual relocation
    FAST VP LUN management
        Tiering policies
        Initial placement
Using FAST VP for file
    Management
    Best practices
    Automated Volume Manager rules
Guidance and recommendations
    FAST VP and FAST Cache
    What drive mix is right for my I/O profile?
Conclusion
References
Executive summary
Fully Automated Storage Tiering for Virtual Pools (FAST VP) can lower total cost of ownership (TCO) and increase performance by intelligently managing data placement at a sub-LUN level. When FAST VP is implemented, the storage system measures, analyzes, and implements a dynamic storage-tiering policy much faster and more efficiently than a human analyst ever could.
Storage provisioning can be repetitive and time-consuming and produce uncertain results. It is not always
obvious how to match capacity to the performance requirements of a workload’s data. Even when a match
is achieved, requirements change, and a storage system’s provisioning will require constant adjustment.
Storage tiering puts drives of varying performance levels and cost into a storage pool. LUNs use the storage
capacity they need from the pool, on the devices with the required performance characteristics. FAST VP
collects I/O activity statistics at a 1 GB granularity (known as a slice). The relative activity level of each
slice is used to determine which slices should be promoted to higher tiers of storage. Relocation is initiated at the user's discretion, either manually or through an automated scheduler. Working at such a
granular level removes the need for manual, resource-intensive LUN migrations while still providing the
performance levels required by the most active dataset.
FAST VP is a licensed feature available on EMC® VNX™ series and CLARiiON® CX4 series platforms.
The VNX series supports a unified approach to automatic tiering for both file and block data. CX4 systems
running release 30 and later are supported for the tiering of block data. FAST VP licenses are available à la carte for both platforms, or as part of a FAST Suite of licenses that offers complementary licenses for
technologies such as FAST Cache, Analyzer, and Quality of Service Manager.
This white paper introduces the EMC FAST VP technology and describes its features, functions, and
management.
Audience
This white paper is intended for EMC customers, partners, and employees who are considering using the
FAST VP product. Some familiarity with EMC midrange storage systems is assumed. Users should be
familiar with the material discussed in the white papers Introduction to EMC VNX Series Storage Systems
and EMC VNX Virtual Provisioning.
Introduction
Data has a lifecycle. As data progresses through its lifecycle, it experiences varying levels of activity.
When data is created, it is typically heavily used. As it ages, it is accessed less often. This is often referred
to as being temporal in nature. FAST VP is a simple and elegant solution for dynamically matching
storage requirements with changes in the frequency of data access. FAST VP segregates disk drives into the
following three tiers:
Extreme Performance Tier – Flash drives
Performance Tier – SAS drives for VNX platforms and Fibre Channel drives for CX4 platforms
Capacity Tier – Near-Line SAS (NL-SAS) drives for VNX platforms and SATA drives for CX4
platforms
You can use FAST VP to aggressively reduce TCO and/or to increase performance. A target workload that
requires a large number of Performance Tier drives can be serviced with a mix of tiers, and a much lower
drive count. In some cases, an almost two-thirds reduction in drive count is achieved. In other cases,
performance throughput can double by adding less than 10 percent of a pool’s total capacity in Flash
drives.
FAST VP has proven highly effective for a number of applications. Tests in OLTP environments with Oracle¹ or Microsoft SQL Server² show that users can lower their capital expenditure (by 15 percent to 38 percent), reduce power and cooling costs (by over 40 percent), and still increase performance by using FAST VP instead of a homogeneous drive deployment.
FAST VP can be used in combination with other performance optimization software, such as FAST Cache.
A common strategy is to use FAST VP to gain TCO benefits while using FAST Cache to boost overall
system performance. There are other scenarios where it makes sense to use FAST VP for both purposes.
This paper discusses considerations for the best deployment of these technologies.
The VNX series of storage systems delivers even more value than previous systems by providing a unified approach to auto-tiering for file and block data. FAST VP is available on the VNX5300™ and larger
systems. Now, file data served by VNX Data Movers can also use virtual pools and the same advanced data
services as block data. This provides compelling value for users who wish to optimize the use of high-
performing drives across their environment.
Storage tiers
FAST VP can leverage two or all three storage tiers in a single pool. Each tier offers unique advantages in performance and cost.
Extreme Performance Tier drives: Flash drives (for VNX and CX4 platforms)
Flash drives are having a large impact on the external-disk storage system market. They are built on solid-
state drive (SSD) technology that has no moving parts. The absence of moving parts makes these drives
highly energy-efficient, and eliminates rotational latencies. Therefore, migrating data from spinning disks
to Flash drives can boost performance and create significant energy savings.
Tests show that adding a small (single-digit) percentage of Flash capacity to your storage, while using
intelligent tiering products (such as FAST VP and FAST Cache), can deliver double-digit percentage gains
in throughput and response time performance in some applications. Flash drives can deliver an order of
magnitude better performance than traditional spinning disks when the workload is IOPS-intensive and
response-time sensitive. They are particularly effective when small random-read I/Os are part of the profile,
as they are in many transactional database applications. On the other hand, bandwidth-intensive
applications perform only slightly better on Flash drives than on spinning drives.
Flash drives have a higher per-gigabyte cost than traditional spinning drives. To receive the best return,
you should use Flash drives for data that requires fast response times and high IOPS. The best way to
optimize the use of these high-performing resources is to allow FAST VP to migrate data to Flash drives at
a sub-LUN level.
Performance Tier drives: SAS (VNX) and Fibre Channel (CX4)
Traditional spinning drives offer high levels of performance, reliability, and capacity. These drives are
based on industry-standardized, enterprise, mechanical hard-drive technology that stores digital data on a
series of rapidly rotating magnetic platters.
The Performance Tier includes 10k and 15k rpm spinning drives, which are available on all EMC midrange
storage systems. They have been the performance medium of choice for many years. They also have the
highest availability of any mechanical storage device. These drives continue to serve as a valuable storage
tier, offering high all-around performance including consistent response times, high throughput, and good
bandwidth, at a midtier price point.
¹ Leveraging Fully Automated Storage Tiering (FAST) with Oracle Database Applications, EMC white paper
² EMC Tiered Storage for Microsoft SQL Server 2008—Enabled by EMC Unified Storage and EMC Fully Automated Storage Tiering (FAST), EMC white paper
The VNX series and CX4 series use different attach technologies. VNX systems use a 6 Gb/s SAS back
end while the CX4 series uses a 4 Gb/s Fibre Channel back end. Despite the differences in speed, the drive
assemblies on both series are very similar. When sizing a solution with these drives, you need to consider
the rotational speed of the drive. However, the interconnect to the back end does not affect rule-of-thumb
performance assumptions for the drive.
This also applies to the different drive form factors. The VNX series offers 2.5-inch and 3.5-inch form-factor SAS drives. The same IOPS and bandwidth performance guidelines apply to 2.5-inch 10k drives and 3.5-inch 10k drives.
FAST VP differentiates tiers by drive type. However, it does not take rotational speed into consideration.
We strongly encourage you to use one rotational speed for each drive type within a pool. If multiple
rotational-speed drives exist in the array, multiple pools should be implemented as well.
Capacity Tier drives: NL-SAS (VNX) and SATA (CX4)
Using capacity drives can significantly reduce energy use and free up capacity in higher storage tiers.
Studies have shown that 60 percent to 80 percent of the capacity of many applications has little I/O activity.
Capacity drives cost roughly one-quarter as much per gigabyte as performance drives, and a small fraction of the cost of Flash drives. They consume up to 96 percent less power per TB than performance
drives. This offers a compelling opportunity for TCO improvement considering both purchase cost and
operational efficiency.
Capacity drives are designed for maximum capacity at a modest performance level. They have a slower
rotational speed than Performance Tier drives. NL-SAS drives for the VNX series have a 7.2k rotational
speed, while SATA drives available on the CX4 series come in 7.2k and 5.4k rpm varieties. The 7.2k rpm
drives can deliver bandwidth performance within 20 percent to 30 percent of performance class drives.
However, the reduced rotational speed is a trade-off for significantly larger capacity. For example, the
largest Capacity Tier drives are 2 TB, compared to the 600 GB Performance Tier drives and 200 GB Flash
drives. These Capacity Tier drives offer roughly half the IOPS/drive of Performance Tier drives.
Table 1. Feature tradeoffs for Flash, Performance, and Capacity drives

Performance
- Flash drives: High IOPS/GB and low latency; sole-use response time <1-5 ms; multi-access response time <10 ms.
- Performance (SAS, FC): High bandwidth with contending workloads; sole-use response time ~5 ms; multi-access response time 10-50 ms.
- Capacity (NL-SAS, SATA): Low IOPS/GB; sole-use response time 7-10 ms; multi-access response time up to 100 ms; leverage storage array SP cache for sequential and large-block access.

Strengths
- Flash drives: Provide extremely fast access for reads; can execute multiple sequential streams better than SAS/FC.
- Performance (SAS, FC): Sequential reads can leverage read-ahead; sequential writes can leverage system optimizations favoring disks; read/write mixes give predictable performance; large I/O is serviced fairly efficiently.
- Capacity (NL-SAS, SATA): Sequential reads can leverage read-ahead; sequential writes can leverage system optimizations favoring disks.

Limitations
- Flash drives: Writes slower than reads; heavy concurrent writes affect read rates; single-threaded large sequential I/O equivalent to SAS/FC.
- Performance (SAS, FC): Uncached writes are slower than reads; long response times when under heavy write loads.
- Capacity (NL-SAS, SATA): Not as good as FC at handling multiple streams.
FAST VP operations
FAST VP operates by periodically relocating the most active data up to the highest available tier (typically the Extreme Performance or Performance Tier). To ensure sufficient space in the higher tiers, FAST VP relocates less active data to lower tiers (Performance or Capacity Tiers). FAST VP works at a granularity of 1 GB; each 1 GB block of data is referred to as a "slice." When FAST VP relocates data, it moves the entire slice to a different storage tier.
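To make the slice arithmetic concrete, the short sketch below (illustrative values only; the paper specifies just the 1 GB granularity) maps a LUN offset to its slice index and counts the slices in a LUN:

```python
# Illustrative sketch: mapping a LUN offset to its 1 GB FAST VP slice.
SLICE_SIZE = 1 << 30  # 1 GiB, the FAST VP relocation granularity

def slice_index(lun_offset_bytes: int) -> int:
    """Return the index of the slice containing this LUN offset."""
    return lun_offset_bytes // SLICE_SIZE

def slice_count(lun_size_gb: int) -> int:
    """A 500 GB LUN is tracked and relocated as 500 independent slices."""
    return lun_size_gb  # one slice per GB of LUN capacity

print(slice_index(5 * SLICE_SIZE + 123))  # -> 5
print(slice_count(500))                   # -> 500
```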
Storage pools
Heterogeneous storage pools are the framework that allows FAST VP to fully utilize each of the storage tiers discussed. LUNs can then be created at the pool level. These pool LUNs are no longer bound to a single storage tier; instead, they can be spread across different storage tiers within the same pool.
Figure 1. Heterogeneous storage pool concept
LUNs must reside in a pool to be eligible for FAST VP relocation. Pools support thick LUNs and thin
LUNs. Thick LUNs are high-performing LUNs that use contiguous logical block addressing on the
physical capacity assigned from the pool. Thin LUNs use a capacity-on-demand model for allocating drive
capacity. Capacity usage is tracked at a finer granularity than it is for thick LUNs, to maximize capacity optimization. FAST VP is supported on both thick LUNs and thin LUNs.
RAID groups are by definition homogeneous and therefore are not eligible for tiering. LUNs in RAID
groups can be migrated to pools using LUN Migration. For a more in-depth discussion of pools, please see
the white paper EMC VNX Virtual Provisioning - Applied Technology.
FAST VP algorithm
FAST VP uses three strategies to identify and move the correct slices to the correct tiers: statistics collection, analysis, and relocation.
Statistics collection
A slice of data is considered hotter (more active) or colder (less active) than another slice based on the relative activity level of the two slices. Activity level is determined by counting the number of I/Os for each slice. FAST
VP maintains a cumulative I/O count and “weights” each I/O by how recently it arrived. This weight
deteriorates over time. New I/O is given full weight. After approximately 24 hours, the same I/O carries
about half-weight. After a week, the same I/O carries very little weight. Statistics are continuously
collected (as a background task) for all pool LUNs.
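The paper does not publish the exact decay function, so the sketch below simply models the behavior described above as exponential decay with a 24-hour half-life (new I/O at full weight, about half weight after a day, very little after a week). Class and parameter names are illustrative.

```python
import math

HALF_LIFE_HOURS = 24.0  # assumption: "about half-weight" after ~24 hours
DECAY = math.log(2) / HALF_LIFE_HOURS

class SliceStats:
    """Per-slice I/O counter whose accumulated weight decays over time."""
    def __init__(self):
        self.score = 0.0
        self.last_update_h = 0.0

    def record_io(self, now_h: float, ios: int = 1) -> None:
        # Decay the accumulated score for the elapsed time, then add the
        # new I/Os at full weight.
        self.score *= math.exp(-DECAY * (now_h - self.last_update_h))
        self.score += ios
        self.last_update_h = now_h

    def activity(self, now_h: float) -> float:
        return self.score * math.exp(-DECAY * (now_h - self.last_update_h))
```

Under this model, an I/O recorded at hour 0 contributes about 0.5 at hour 24 and roughly 0.008 after a week, matching the "very little weight" behavior described above.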
Analysis
Once per hour, the collected data is analyzed. This analysis produces a rank ordering of each slice within the pool, from the hottest slices to the coldest relative to the other slices in the same pool. (For this reason, a hot slice in one pool may be comparable to a cold slice in another pool.) There is no system-level threshold for activity level.
Relocation
During user-defined relocation windows, 1 GB slices are promoted according to both the rank ordering
performed in the analysis stage and the tiering policy set by the user. During relocation, FAST VP
relocates higher-priority slices to higher tiers; slices are relocated to lower tiers only if the space they
occupy is required for a higher-priority slice. This way, FAST VP fully utilizes the highest-performing
spindles first. Lower-tier spindles are utilized as capacity demand grows. Relocation can be initiated
manually or by a user-configurable, automated scheduler.
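A hedged sketch of the greedy placement just described: slices are ranked hottest to coldest, the highest tiers fill first, and colder slices land in lower tiers only when higher tiers run out of room. Tier names, capacities, and scores are illustrative, and per-LUN tiering policies are omitted for brevity.

```python
def plan_relocation(slices, tiers):
    """slices: list of (slice_id, activity_score) pairs.
    tiers: list of (tier_name, capacity_in_slices), highest tier first.
    Returns a {slice_id: tier_name} placement."""
    ranked = sorted(slices, key=lambda s: s[1], reverse=True)
    placement, i = {}, 0
    for tier_name, capacity in tiers:
        for slice_id, _ in ranked[i:i + capacity]:
            placement[slice_id] = tier_name   # hottest remaining slices fill this tier
        i += capacity
    return placement

tiers = [("extreme_performance", 2), ("performance", 3), ("capacity", 5)]
scores = [90, 5, 70, 1, 40, 3, 8, 2, 60, 11]
slices = [(f"s{n}", score) for n, score in enumerate(scores)]
print(plan_relocation(slices, tiers))
```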
Managing FAST VP at the storage pool level
FAST VP properties can be viewed and managed at the pool level. Figure 2 shows the tiering information for a specific pool.
Figure 2. Storage Pool Properties window
The Tier Status section of the window shows FAST VP relocation information specific to the pool selected.
Scheduled relocation can be selected at the pool level from the drop-down menu labeled Auto-Tiering.
This can be set to either Automatic or Manual. Users can also connect to the array-wide relocation
schedule using the button located in the top right corner. This is discussed in the “Automated scheduler”
section. Data Relocation Status displays the pool's state with regard to FAST VP. The Ready state indicates that relocation can begin on this pool at any time. The amount of data bound for a slower tier is shown next to Data to Move Down, and the amount of data bound for a faster tier is listed next to Data to Move Up. Below that is the estimated time required to migrate all data within the pool to the appropriate tier.
In the Tier Details section, users can see the exact distribution of their data. This panel shows all tiers of
storage residing in the pool. Each tier then displays the amount of data to be moved up and down, the total
capacity allocated (user capacity), and the consumed capacity.
Automated scheduler
Relocations can be scheduled to occur automatically using the scheduler, which is launched from the Relocation Schedule button in the Pool Properties dialog box shown in Figure 2. It is recommended that relocations be scheduled during off-hours to minimize any potential performance impact they may cause. Figure 3 shows the Manage Auto-Tiering window.
Figure 3. Manage Auto-Tiering window
The data relocation schedule shown in Figure 3 initiates relocations every day at 10:00 PM for a duration of eight hours (ending at 6:00 AM). You can select the days on which the relocation schedule should run. In this example, relocations run seven days a week, which is the default setting. From this status window, you can also control the data relocation rate. The default rate is set to Medium so as not to significantly impact host I/O. This rate relocates up to 300-400 GB of data per hour.³
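For planning, the default rate implies that an eight-hour window can relocate roughly 2.4 TB to 3.2 TB. A trivial estimate (illustrative only):

```python
# Illustrative estimate of relocation-window sizing at the default
# Medium rate cited above (300-400 GB per hour).
def hours_to_relocate(pending_gb: float, rate_gb_per_hour: float = 300.0) -> float:
    return pending_gb / rate_gb_per_hour

print(hours_to_relocate(1200))   # 4.0 hours at the conservative 300 GB/h rate
```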
Manual relocation
Manual relocation is initiated by the user through either the Unisphere GUI or the CLI. It can be initiated
at any time. When a manual relocation is initiated, FAST VP performs analysis on all statistics gathered,
independent of its regularly scheduled hourly analysis, prior to beginning the relocation. This ensures that
up-to-date statistics and settings are properly accounted for prior to relocation.
Although the automatic scheduler is an array-wide setting, manual relocation is enacted at the pool level
only. Common situations when users may want to initiate a manual relocation on a specific pool include
the following:
When reconfiguring the pool (for example, adding a new tier of drives)
When LUN properties have been changed and the new priority structure needs to be realized
immediately
As part of a script for a finer-grained relocation schedule
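The third case can be handled with a small host-side script, as sketched below. This is hypothetical: the placeholder command is not a documented CLI invocation; substitute the Secure CLI syntax from your release's Unisphere CLI reference. Only the scheduling logic is illustrated.

```python
# Hypothetical sketch of a finer-grained relocation schedule driven by an
# external script. The command below is a PLACEHOLDER, not a documented
# CLI invocation; consult the Unisphere CLI reference for the real syntax.
import subprocess
import time

POOLS = ["Pool 0", "Pool 1"]        # pools to relocate (illustrative names)
INTERVAL_SECONDS = 6 * 60 * 60      # every six hours instead of nightly

def start_manual_relocation(pool: str) -> None:
    cmd = ["echo", f"start manual relocation on {pool}"]  # placeholder command
    subprocess.run(cmd, check=True)

while True:
    for pool in POOLS:
        start_manual_relocation(pool)
    time.sleep(INTERVAL_SECONDS)
```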
FAST VP LUN management
Some FAST VP properties are managed at the LUN level. Figure 4 shows the tiering information for a single LUN.
³ This rate depends on system type, array utilization, and other tasks competing for array resources. High utilization rates may reduce this relocation rate.
Figure 4. LUN Properties window
The Tier Details section displays the current distribution of 1 GB slices within the LUN. The Tiering
Policy section displays the available options for tiering policy.
Tiering policies
There are four tiering policies available within FAST VP:
Auto-tier (recommended)
Highest available tier
Lowest available tier
No data movement
Auto-tier
Auto-tier is the default setting for all pool LUNs upon their creation. FAST VP relocates slices of these LUNs based on their activity level. Slices belonging to LUNs with the auto-tier policy have second priority for capacity in the highest tier in the pool, after LUNs set to the highest tier.
Highest available tier
The highest available tier setting should be selected for those LUNs which, although not always the most active, require high levels of performance whenever they are accessed. FAST VP will prioritize slices of a LUN with highest available tier selected above all other settings.
Lowest available tier
Lowest available tier should be selected for LUNs that are not performance- or response-time-sensitive. FAST VP will maintain slices of these LUNs on the lowest storage tier available, regardless of activity level.
No data movement
No data movement may only be selected after a LUN has been created. FAST VP will not move slices from their current positions once the no data movement selection has been made.
Initial placement
The tiering policy chosen also affects the initial placement of a LUN’s slices within the available tiers.
Initial placement with the pool set to auto-tier will result in the data being distributed across all storage
tiers available within the pool. LUNs set to highest available tier will have their component slices placed
on the highest tier that has capacity available. LUNs set to lowest available tier will have their component
slices placed on the lowest tier that has capacity available.
LUNs with the tiering policy set to no data movement will use the initial placement policy of the setting
preceding the change to no data movement. For example, a LUN that was previously set to highest tier but
is currently set to no data movement will still take its initial allocations from the highest tier possible.
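A minimal sketch of this initial-placement behavior, assuming three tiers and illustrative helper names (auto-tier's distribution across all tiers is simplified here to a highest-first fill):

```python
# Sketch of initial slice placement by tiering policy. Tier names are
# illustrative; auto-tier's spread across all tiers is simplified.
TIERS_HIGH_TO_LOW = ["extreme_performance", "performance", "capacity"]

def initial_tier(policy: str, free_slices: dict, prior_policy: str = "auto") -> str:
    """free_slices maps tier name -> free slice count."""
    if policy == "no_move":
        policy = prior_policy          # inherit the policy in force before the change
    order = (TIERS_HIGH_TO_LOW if policy in ("highest", "auto")
             else list(reversed(TIERS_HIGH_TO_LOW)))
    for tier in order:                 # first tier with free capacity wins
        if free_slices.get(tier, 0) > 0:
            return tier
    raise RuntimeError("pool out of space")

print(initial_tier("lowest", {"extreme_performance": 10, "capacity": 10}))  # capacity
```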
Using FAST VP for file
In the VNX Operating Environment for File version 7, file data is supported on LUNs created in pools with FAST VP configured, on both VNX unified systems and EMC Symmetrix® gateway systems.
Management
The process for implementing FAST VP for file begins by provisioning LUNs from a pool with mixed tiers (or across tiers for Symmetrix) that are placed in the protected File Storage Group. Rescanning the storage systems from the System tab in Unisphere starts a diskmark that makes the LUNs available to VNX file storage. The rescan automatically creates a pool for file using the same name as the corresponding pool for block. Additionally, it creates a disk volume in a 1:1 mapping for each LUN that was added to the File Storage Group. A file system can then be created from the pool for file on the disk volumes. The FAST VP policy applied to the LUNs presented to file operates as it does for any other LUN in the system, dynamically migrating data between storage tiers in the pool.
Figure 5. FAST VP for file
FAST VP for file is supported in Unisphere and the CLI. All relevant Unisphere configuration wizards
support a FAST VP configuration except for the VNX Provisioning Wizard (known pre-VNX as the
Celerra Provisioning Wizard). FAST VP properties can be seen within the properties pages of pools for file (see Figure 6) and the property pages for volumes and file systems (see Figure 7), but they can
only be modified through the block pool or LUN areas of Unisphere. On the File System Properties page, the FAST VP tiering policy is listed in the Advanced Data Services section, along with whether thin provisioning, compression, or mirroring is enabled. For more information on thin provisioning and compression, refer to the white papers EMC VNX Virtual Provisioning and EMC VNX Deduplication and Compression.
New disk type options of the mapped disk volumes are as follows:
LUNs in a storage pool with a single disk type
o Extreme Performance (Flash drives)
o Performance (10k and 15k rpm SAS drives)
o Capacity (7.2k rpm NL-SAS)
LUNs in a storage pool with multiple disk types (used with FAST VP)
o Mixed
LUNs that are mirrored (mirrored means remote mirrored through MirrorView™ or
RecoverPoint)
o Mirrored_mixed
o Mirrored_performance
o Mirrored_capacity
o Mirrored_Extreme Performance
Figure 6. File Storage Pool Properties window
Figure 7. File System Properties window
Best practices
VNX file configurations, for both Symmetrix and CLARiiON, do not expressly forbid mixing LUNs with different data service attributes, but users are warned that mixing is not recommended because of the impact of spreading a file system across, for example, a thin LUN and a thick LUN, or LUNs with different tiering policies.
Note: VNX file configurations will not allow mixing of mirrored and non-mirrored types in a pool (if attempted,
the disk mark will fail).
Unless you are managing file systems that are used for archival data, it is not recommended to use block
thin provisioning or compression on VNX LUNs used for file system allocations. If thin provisioning for
block is used, closely monitor back-end physical space usage from the block perspective to avoid a
situation where space runs out.
While Automated Volume Manager will attempt to ensure consistent performance for file systems that span
LUNs with different data service attributes, it is recommended that this be managed in one of two ways:
Optimize ease of use on the file side; configure LUNs with different service attributes in separate
pools and make these pools available independently so they can be assigned individually to
specific file systems per the demand of each file system.
Optimize ease of use on the block side and optimize the use of resources on the whole system;
define one or two large pools for use across all users of the system, which will typically have
different service attributes associated with different LUNs.
When used for file systems, these different LUN types appear in a single file-side pool and can be managed with Manual Volume Management to appear independently, such that homogeneous file systems
can be configured in the manual pools. This approach allows users to leverage the aggregation effect of
large pools.
For best performance, it is recommended to keep file data in a separate pool from block data. This gives the pool a more uniform I/O profile, which typically performs better and is easier to manage and troubleshoot if performance issues arise.
Automated Volume Manager rules
Automated Volume Manager (AVM) rules differ when creating a file system on underlying pool LUNs as opposed to file systems on underlying RAID group LUNs. The rules for AVM with underlying pool LUNs are as follows:
For VNX, the following rules apply:
1. Concatenation will be used. Striping will not be used (striping is done at the block pool
level).
2. Unless requested, slicing will not be used. Slicing is turned off by default because slicing
allows multiple file systems on one LUN. Since data services can only be changed on the
LUN level, all file systems on a LUN are affected by a single change. Slicing can cause
the loss of granularity to change data services on the individual file system level.
Turning slicing off forces the file system to use the entire LUN. LUNs should be sized
with consideration to the size of the file system that will be created on them.
3. AVM checks for free disk volumes, and sorts them by thin and thick disk volumes.
4. AVM checks for free disk volumes:
If there are no free disk volumes and the slice option is set to no, there is not
enough space available and the request fails.
If there are free disk volumes:
a. AVM first checks for thick disk volumes that satisfy the size request.
b. If not found, AVM then checks for thin disk volumes that satisfy the
size request.
c. If still not found, AVM combines thick and thin disk volumes to find
ones that satisfy the size request.
5. If one disk volume satisfies the size request exactly, AVM takes the selected disk volume
and uses the whole disk to build the file system.
6. If a larger disk volume is found that is a better fit than any set of smaller disks, then
AVM uses the larger volume.
7. If multiple disk volumes satisfy the size request, AVM sorts the disk volumes from
smallest to largest, and then sorts in alternating SP A and SP B lists. Starting with the
first disk volume, AVM searches through a list for matching data services until the size
request is met. If the size request is not met, AVM searches again but ignores the data
services.
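A condensed sketch of the selection order in rules 3 through 7 above (thick volumes first, then thin, then a mix, with an exact single-volume fit taken outright). The SP A/B alternation and data-service matching of rule 7 are omitted; volume records are illustrative.

```python
# Condensed sketch of AVM disk-volume selection for VNX pool LUNs:
# thick volumes first, then thin, then a thick+thin mix (rules 3-4),
# with an exact single-volume fit taken outright (rule 5).
def pick_volumes(free_volumes, request_gb):
    """free_volumes: list of {"name": str, "size_gb": int, "kind": "thick"|"thin"}."""
    for kinds in (("thick",), ("thin",), ("thick", "thin")):
        candidates = sorted((v for v in free_volumes if v["kind"] in kinds),
                            key=lambda v: v["size_gb"])
        chosen, total = [], 0
        for vol in candidates:
            if vol["size_gb"] == request_gb:
                return [vol]            # exact fit: use the whole disk volume
            chosen.append(vol)
            total += vol["size_gb"]
            if total >= request_gb:
                return chosen           # concatenate until the request is met
    return None                         # not enough space; the request fails
```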
For VNX gateway with Symmetrix, the following rules apply:
1. Unless requested, slicing will not be used.
2. AVM checks for free disk volumes, and sorts them by thin and thick disk volumes for the
purpose of striping together the same type of disk volumes:
If there are no free disk volumes and the slice option is set to no, there is not
enough space available and the request fails
If there are free disk volumes:
a. AVM first checks for a set of eight disk volumes.
b. If a set of eight is not found, AVM then looks for a set of four disk
volumes.
c. If a set of four is not found, AVM then looks for a set of two disk
volumes.
d. If a set of two is not found, AVM finally looks for one disk volume.
3. When free disk volumes are found:
a. AVM first checks for thick disk volumes that satisfy the size request, which can
be equal to or greater than the file system size. If thick disk volumes are
available, AVM first tries to stripe the thick disk volumes that have the same
disk type. Otherwise, AVM stripes together thick disk volumes that have
different disk types.
b. If thick disks are not found, AVM then checks for thin disk volumes that satisfy
the size request. If thin disk volumes are available, AVM first tries to stripe the
thin disk volumes that have the same disk type, where “same” means the single
disk type of the pool in which it resides. Otherwise, AVM stripes together thin
disk volumes that have different disk types.
c. If thin disks are not found, AVM combines thick and thin disk volumes to find
ones that satisfy the size request.
4. If neither thick nor thin disk volumes satisfy the size request, AVM then checks whether striping disk volumes of a single disk type will satisfy the size request, ignoring whether the disk volumes are thick or thin.
5. If still no matches are found, AVM checks whether slicing was requested.
a. If slicing was requested, then AVM checks whether any stripes exist that satisfy
the size request. If yes, then AVM slices an existing stripe.
b. If slicing was not requested, AVM checks whether any free disk volumes can be
concatenated to satisfy the size request. If yes, AVM concatenates disk volumes,
matching data services if possible, and builds the file system.
6. If still no matches are found, there is not enough space available and the request fails.
Managing Volumes and File Systems with VNX AVM provides further information on using AVM with
mapped pools.
Guidance and recommendations
The following table displays the total number of LUNs that can be set to leverage FAST VP, based on the array model. These limits are the same as the total number of pool LUNs per system. Therefore, all pool LUNs in any given system can leverage FAST VP.
Table 2. FAST VP LUN limits

Array model              Maximum number of pool LUNs
VNX5300 / CX4-120        512
VNX5500™ / CX4-240       1,024
VNX5700™ / CX4-480       2,048
VNX7500™ / CX4-960       2,048
FAST VP and FAST Cache
FAST Cache allows the storage system to provide Flash-drive-class performance to the most heavily accessed chunks of data across the entire system. FAST Cache absorbs I/O bursts from applications, thereby reducing the load on back-end hard disks. This improves the performance of the storage solution. For more details on this feature, refer to the EMC CLARiiON, Celerra Unified, and VNX FAST Cache white paper available on Powerlink.
The following table compares the FAST VP and FAST Cache features.

Table 3. Comparison between the FAST VP and FAST Cache features

FAST Cache:
- Enables Flash drives to be used to extend the existing caching capacity of the storage system.
- Finer granularity: operates on 64 KB chunks.
- Copies data from HDDs to Flash drives when it is accessed frequently.
- Designed primarily to improve performance.

FAST VP:
- Leverages pools to provide sub-LUN tiering, enabling the utilization of multiple tiers of storage simultaneously.
- Less granular than FAST Cache: operates on 1 GB slices.
- Moves data between different storage tiers based on a weighted average of access statistics collected over a period of time.
- While it can improve performance, it is primarily designed to improve ease of use and reduce TCO.
FAST Cache and the FAST VP sub-LUN tiering features can be used together to yield high performance
and improved TCO from the storage system. For example, in scenarios where only a limited number of Flash drives is available, they can be used to create FAST Cache, while the FAST VP sub-LUN tiering feature is used on a pool consisting of Performance and Capacity disk drives. From a performance point of view, FAST
Cache will dynamically provide performance benefit to any bursty data while FAST VP will move warmer
data to Performance drives and colder data to Capacity drives. From a TCO perspective, FAST Cache with
a small number of Flash drives serves the data that is accessed most frequently, while FAST VP sub-LUN
tiering with Fibre Channel and SATA drives can optimize disk utilization and efficiency.
As a general rule, FAST Cache should be used in cases where storage system performance needs to be
improved immediately for burst-prone data. On the other hand, FAST VP optimizes storage system TCO as
it moves data to the appropriate storage tier based on sustained data access and demands over time. FAST
Cache focuses on improving performance while FAST VP focuses on improving TCO. Both features are
complementary to each other and help in improving performance and TCO.
The FAST Cache feature is storage-tier-aware and works with the FAST VP sub-LUN tiering feature to
make sure that the storage system resources are not wasted by unnecessarily copying data to FAST Cache if
it is already on a Flash drive. If FAST VP moves a chunk of data to the Extreme Performance Tier (which
consists of Flash drives), FAST Cache will not promote that chunk of data into FAST Cache, even if the FAST Cache promotion criteria are met. This ensures that storage system resources are not wasted in copying data from one Flash drive to another.
A general recommendation for the initial deployment of Flash drives in a storage system is to use them for
FAST Cache. However, in certain cases FAST Cache does not offer the most efficient use of Flash drives.
FAST Cache tracks I/Os that are smaller than 128 KB, and requires multiple hits to 64 KB chunks to initiate promotions from rotating disk to FAST Cache. Therefore, I/O profiles that do not meet these criteria are better served by Flash drives in a pool or RAID group.
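A hedged sketch of that promotion gate: only I/Os smaller than 128 KB are tracked, and a 64 KB chunk needs multiple hits before promotion. The hit threshold is an assumption; the paper says only "multiple hits."

```python
# Sketch of the FAST Cache promotion gate described above. HIT_THRESHOLD
# is an assumption; the paper states only that multiple hits are required.
from collections import defaultdict

CHUNK = 64 * 1024            # FAST Cache tracking granularity
TRACK_LIMIT = 128 * 1024     # I/Os at or above this size are not tracked
HIT_THRESHOLD = 3            # assumed value, not published

hits = defaultdict(int)
promoted = set()

def on_io(offset: int, size: int) -> None:
    if size >= TRACK_LIMIT:
        return                           # large I/O: not a promotion candidate
    chunk = offset // CHUNK
    if chunk in promoted:
        return                           # already served from Flash
    hits[chunk] += 1
    if hits[chunk] >= HIT_THRESHOLD:
        promoted.add(chunk)              # copy the chunk from HDD into FAST Cache
```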
What drive mix is right for my I/O profile?
As previously mentioned, it is common for a small percentage of overall capacity to be responsible for most of the I/O activity. This is known as skew. Analysis of an I/O profile may indicate that 85 percent of the I/Os to a volume involve only 15 percent of the capacity. The resulting active capacity is called the working set. Software like FAST VP and FAST Cache keeps the working set on the highest-performing drives.
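Given a per-slice I/O histogram, the working set can be estimated as the smallest capacity fraction that covers a target share of I/Os. A sketch with illustrative data:

```python
# Estimating the working set: the smallest fraction of capacity that
# serves a target share of the I/Os. Profile data is illustrative.
def working_set_fraction(ios_per_slice, target=0.85):
    total = sum(ios_per_slice)
    covered, used = 0, 0
    for ios in sorted(ios_per_slice, reverse=True):   # hottest slices first
        covered += ios
        used += 1
        if covered >= target * total:
            break
    return used / len(ios_per_slice)

# Strongly skewed profile: ten very hot slices, ninety cold ones.
profile = [500] * 10 + [5] * 90
print(working_set_fraction(profile))   # -> 0.1: 85% of I/Os hit 10% of capacity
```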
It is common for OLTP environments to yield working sets of 20 percent or less of their total capacity.
These profiles hit the “sweet spot” for FAST and FAST Cache. The white papers Leveraging Fully
Automated Storage Tiering (FAST) with Oracle Database Applications and EMC Tiered Storage for
Microsoft SQL Server 2008—Enabled by EMC Unified Storage and EMC Fully Automated Storage Tiering
(FAST) discuss performance and TCO benefits for several mixes of drive types.
Other I/O profiles, like Decision Support Systems (DSS), may have much larger working sets. In these
cases, FAST VP may be used to deploy Flash drives because DSS workloads are not typically FAST
Cache-friendly. Capacity Tier drives may be used to lower TCO. The white paper Leveraging EMC Unified
Storage System Dynamic LUNs for Data Warehouse Deployments on Powerlink offers analysis on the use
of storage pools and FAST VP.
At a minimum, the capacity across the Performance Tier and Extreme Performance Tier (and/or FAST
Cache) should accommodate the working set. However, capacity is not the only consideration. The spindle
count of these tiers needs to be sized to handle the I/O load of the working set. Basic techniques for sizing
disk layouts based on IOPS and bandwidth are available in the EMC VNX Fundamentals for Performance
and Availability white paper on Powerlink.
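A back-of-the-envelope check of those two constraints is sketched below. The per-drive figures are illustrative rules of thumb, not EMC sizing guidance; the point is that whichever constraint (capacity or IOPS) is larger sets the drive count.

```python
# The upper tiers must hold the working set's capacity AND supply its
# IOPS; size to whichever constraint demands more drives. Per-drive
# figures are illustrative rules of thumb, not EMC sizing guidance.
import math

def drives_needed(ws_capacity_gb, ws_iops, drive_capacity_gb, drive_iops):
    by_capacity = math.ceil(ws_capacity_gb / drive_capacity_gb)
    by_iops = math.ceil(ws_iops / drive_iops)
    return max(by_capacity, by_iops)

# Hypothetical working set: 2 TB and 9,000 IOPS on 600 GB performance drives.
print(drives_needed(2000, 9000, drive_capacity_gb=600, drive_iops=180))  # -> 50
```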
As discussed above, using FAST Cache as your first line of Flash-drive deployment is a practical approach
when the I/O profile is amenable to its use. In circumstances where an I/O profile is not FAST-Cache
friendly, Flash can be deployed in a pool or RAID group instead.
Performance Tier drives are versatile in handling a wide spectrum of I/O profiles. Therefore, we highly
recommend that you include Performance Tier drives in each pool. FAST Cache can be an effective tool
for handling a large percentage of activity, but inevitably, there will be I/Os that have not been promoted or
are cache misses. In these cases, Performance Tier drives offer good performance for those I/Os.
Performance Tier drives also facilitate faster promotion of data into FAST Cache by quickly providing
promoted 64 KB chunks to FAST Cache. This minimizes FAST Cache warm-up time as some data gets hot
and other data goes cold. Lastly, if FAST Cache is ever in a degraded state due to a faulty drive, FAST Cache becomes read-only. If the I/O profile has a significant component of random writes, these are best served from Performance Tier drives as opposed to Capacity drives.
Capacity drives can be used for "everything else." This often equates to 60 percent to 80 percent of the pool's capacity. Of course, profiles with low IOPS/GB and/or sequential workloads may warrant a higher percentage of Capacity Tier drives.
EMC Professional Services and qualified partners can be engaged to assist with properly sizing tiers and
pools to maximize investment. They have the tools and expertise to make very specific recommendations
for tier composition based on an existing I/O profile.
Conclusion
Through the use of FAST VP, users can remove complexity and management overhead from their environments. FAST VP utilizes Flash, Performance, and Capacity drives (or any combination thereof) within a single pool. LUNs within the pool can then leverage the advantages of each drive type at the 1 GB slice granularity. This sub-LUN-level tiering ensures that the most active dataset resides on the best-performing drive tier available, while infrequently used data remains on lower-cost, high-capacity drives.
Relocations occur without user interaction on a predetermined schedule, making FAST VP a truly automated offering. When relocation is required on demand, it can be invoked through Unisphere on an individual pool.
Both FAST VP and FAST Cache work by placing data segments on the most appropriate storage tier based on their usage pattern. These two solutions are complementary because they work at different granularities and on different timetables. Implementing both FAST VP and FAST Cache can significantly improve performance and reduce cost in the environment.
References
The following white papers are available on Powerlink:
EMC CLARiiON Storage System Fundamentals for Performance and Availability
EMC CLARiiON Best Practices for Performance and Availability
EMC CLARiiON, Celerra Unified, and VNX FAST Cache
EMC Unisphere: Unified Storage Management Solution
EMC VNX Virtual Provisioning
An Introduction to EMC CLARiiON and Celerra Unified Platform Storage Device Technology
EMC Tiered Storage for Microsoft SQL Server 2008—Enabled by EMC Unified Storage and EMC
Fully Automated Storage Tiering (FAST)
Leveraging Fully Automated Storage Tiering (FAST) with Oracle Database Applications
Leveraging EMC Unified Storage System Dynamic LUNs for Data Warehouse Deployments