Hitachi Dynamic Provisioning Lessons Learned


Page 1: Hitachi Dynamic Provisioning Lessons Learned

© 2009 Hitachi Data Systems

Hitachi Dynamic Provisioning
Lessons Learned
Best Practices Overview

June 17, 2009

John Harker – Hitachi Data Systems Senior Product Marketing Manager
Steve Burr – Hitachi Data Systems Solutions Architect
Donald Naglich – UHC Director, Technology Infrastructure
Steven H. Carlberg – UHC Senior Network Administrator

Page 2: Hitachi Dynamic Provisioning Lessons Learned


Hitachi Data Systems WebTech Educational Seminar Series

Thin Provisioning – Lessons Learned

Until recently an exotic technology available from only a few vendors, thin provisioning is rapidly becoming a standard capability of advanced storage systems. Donald Naglich, Director, Technology Infrastructure, and Steven H. Carlberg, Senior Network Administrator, from University HealthSystems Consortium will talk about how they have taken advantage of this cost-saving storage virtualization feature. With topics ranging from virtual pool configurations to file system thin friendliness, you will hear the latest updates from John Harker, Senior Product Manager, on guidelines and best practices to help you get the most out of the Hitachi Dynamic Provisioning software product. You’ll learn how to:

– Determine what a good starting thin provisioning pool configuration is
– Know the things that affect how ‘thin’ you can run without the risk of running out
– Identify what to look for and what questions to ask about application behavior
– Use replication and migration with virtual volumes

Page 3: Hitachi Dynamic Provisioning Lessons Learned


Hitachi Dynamic Provisioning
• Overview
• What’s New

Lessons Learned
• Getting Started
• Configuration Planning and Design
• Performance
• Server Side
• Operations

Questions and Answers

Table of Contents

Page 4: Hitachi Dynamic Provisioning Lessons Learned


Hitachi Dynamic Provisioning
• Overview
• What’s New
• University HealthSystems Consortium

Lessons Learned
• Getting Started
• Configuration Planning and Design
• Performance
• Server Side
• Operations

Questions and Answers

Hitachi Dynamic Provisioning

Page 5: Hitachi Dynamic Provisioning Lessons Learned


Dynamic Provisioning = Efficient Storage Allocation
Use only what you need, where you need it, when it’s needed

Challenges
• High cost of storage
• Cumbersome provisioning
• Expensive optimization

Solution Capabilities
• Simplified provisioning
• Provision only what is used
• Automated performance optimization
• Automatic leveling
• Replication savings

Business Benefits
• Reduced storage expense
• Reduced operational expense
• IT agility

[Diagram: servers mapped to Hitachi Dynamic Provisioning virtual volumes, which are backed by Hitachi Dynamic Provisioning pool volumes]

Page 6: Hitachi Dynamic Provisioning Lessons Learned


Economically Superior Architectures

Dynamic Provisioning is part of a unique triumvirate of Hitachi Data Systems capabilities that leverage each other to deliver superior storage services:
• Virtualization – increases utilization, tiering, and data mobility; eases migration and asset refresh
• Dynamic Provisioning – reclaims unused but allocated space
• Storage Reclamation Service – integrates best practices and key Hitachi Data Systems technologies to reclaim storage

Page 7: Hitachi Dynamic Provisioning Lessons Learned


New! Get More From Storage Assets with Industry Unique Zero Page Reclaim Facility and Service

• The Zero Page Reclaim facility examines the volumes of physical capacity on a Hitachi Universal Storage Platform®; where the firmware determines that no data other than zeros is found on a Dynamic Provisioning software pool page, the physical storage is unmapped and 'returned' to the pool’s free capacity.

• Zero Page Reclaim is intended to be used after an initial migration or restore
– Migrate from the physical volume to the virtual volume (or restore the volume from tape)
– Zero Page Reclaim then unmaps unused pages, returning physical storage to the dynamically provisioned pool

[Diagram: a server volume presented from a virtual volume; data is migrated from a physical volume to the virtual volume, after which Zero Page Reclaim frees the all-zero pages]

Page 8: Hitachi Dynamic Provisioning Lessons Learned


New! Bringing the advanced capabilities of Hitachi Dynamic Provisioning to the AMS 2000

• To address the needs of midrange customers facing rapid growth of data storage requirements and escalating storage expenses, Hitachi Data Systems adds support for Dynamic Provisioning to the AMS2000 product family

• Availability August 3, 2009

• Inexpensively licensed on a frame basis, similar to SNM2 and Device Manager

• Brings the advantages of thin provisioning – cost savings, automated performance optimization and easy provisioning – to a broader set of customers

Page 9: Hitachi Dynamic Provisioning Lessons Learned


New! Dynamic Provisioning on the USP V / VM features Improved Performance Optimization

[Diagram: a V-VOL’s pages (1–4) are initially mapped across Pool Vol#0 through Pool Vol#2; after pool capacity is added (Pool Vol#3, "Add Pool Capacity") and the "Optimize Pool" operation runs, the pages are rebalanced across all four pool volumes]

Automatic rebalancing after virtual storage pool expansion
– Rebalances the pool, including rebalancing at the individual virtual volume level
– All transparently online, with no effect on application I/O

• Further simplifies storage provisioning

• Improved performance optimization

Page 10: Hitachi Dynamic Provisioning Lessons Learned


• With Hitachi virtualization technologies, Overstock.com has seen storage capacity savings of 50 percent on some storage systems, now provisions storage in 25 percent of the time, and has increased utilization rates by over 30 percent

• HUK Coburg reclaimed 30 percent of capacity on an IBM DS4800 using dynamic provisioning

• A global financial organization reclaimed 40-65 percent of its capacity through dynamic provisioning, saving in excess of 4M USD

• University HealthSystems Consortium has reclaimed 40 percent of storage capacity, increased storage utilization rates, and pushed out new storage acquisitions

Customer Proof Points

Page 11: Hitachi Dynamic Provisioning Lessons Learned


Donald Naglich
Director, Technology Infrastructure

University HealthSystems Consortium

Steven H. Carlberg
Senior Network Administrator

University HealthSystems Consortium

Page 12: Hitachi Dynamic Provisioning Lessons Learned


Hitachi Dynamic Provisioning
• Overview
• What’s New
• University HealthSystems Consortium

Lessons Learned
• Getting Started
• Configuration Planning and Design
• Performance
• Server Side
• Operations

Questions and Answers

Lessons Learned

Page 13: Hitachi Dynamic Provisioning Lessons Learned


Getting Started

• Set realistic utilization objectives
– Generally, target no higher than 60-80 percent capacity utilization per pool
– A buffer should be provided for unexpected growth or a “runaway” application that consumes more physical capacity than was originally planned for
– There should be sufficient free space in the storage pool equal to the capacity of the largest unallocated thin device
– It is critical to understand the application’s behavior or patterns in terms of file allocation and file growth (or shrinkage). This falls under the discipline of capacity planning. Lacking that, you must institute artificial controls when putting unknown applications under Hitachi Dynamic Provisioning

• Automating the monitoring of alert notifications is critical to maintaining an effective Hitachi Dynamic Provisioning operation, as is adopting operational procedures to take immediate action when a pool threshold trigger is encountered (a monitoring sketch follows below)
– The user-selectable threshold should be set so that the pool cannot run out of free capacity before additional pool capacity can be added
– Aside from the single user-specified pool threshold available in Hitachi Dynamic Provisioning via Storage Navigator and Device Manager, a customer can implement additional user-specified thresholds through the monitoring capability in Tuning Manager
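
As a rough illustration of the automated threshold check described above, here is a minimal sketch. It is not HDS tooling; the pool names, capacities, thresholds, and the alert hook are hypothetical, and in practice the utilization figures would come from Tuning Manager or the Device Manager CLI.

```python
# Minimal sketch of a pool-threshold check (hypothetical data source and alert hook).
from dataclasses import dataclass

@dataclass
class Pool:
    name: str
    capacity_gb: float                # total pool capacity
    used_gb: float                    # pages currently consumed
    largest_unalloc_dpvol_gb: float   # largest thin device not yet consuming pages

WARN_UTIL = 0.60   # assumed warning threshold (60%)
CRIT_UTIL = 0.80   # assumed critical threshold (80%)

def check_pool(pool: Pool) -> list[str]:
    """Return alert messages for a pool, following the 60-80% guidance above."""
    alerts = []
    util = pool.used_gb / pool.capacity_gb
    free_gb = pool.capacity_gb - pool.used_gb
    if util >= CRIT_UTIL:
        alerts.append(f"{pool.name}: {util:.0%} used - add pool capacity now")
    elif util >= WARN_UTIL:
        alerts.append(f"{pool.name}: {util:.0%} used - plan pool expansion")
    # Keep enough free space to absorb the largest unallocated thin device.
    if free_gb < pool.largest_unalloc_dpvol_gb:
        alerts.append(f"{pool.name}: free space ({free_gb:.0f} GB) is below the "
                      f"largest unallocated DP-VOL ({pool.largest_unalloc_dpvol_gb:.0f} GB)")
    return alerts

if __name__ == "__main__":
    for msg in check_pool(Pool("pool_01", capacity_gb=10000, used_gb=7200,
                               largest_unalloc_dpvol_gb=2000)):
        print("ALERT:", msg)   # in practice: send to email, SNMP, or paging
```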

Page 14: Hitachi Dynamic Provisioning Lessons Learned


Configuration Planning and Design

• When planning a configuration using thin devices, the first step is determining how many separate thin pools are needed and the required composition of each thin data pool
– This involves conceptually organizing disk storage into separate classes, with further subdivision as needed to allow pool isolation
– Depending on the mix of applications to be placed on thin devices, it will often be necessary to create multiple thin pools. Generally, though, the most efficient use of resources is achieved by using a minimal number of pools

• Typically, a thin pool should be designed for use by a given application, or a set of related applications aligned with a given business group. The applications sharing a thin pool will compete for back-end resources, including thin pool storage capacity, so applications that cannot tolerate this contention should not share the same pool

Page 15: Hitachi Dynamic Provisioning Lessons Learned


Configuration Planning and Design

• Recommended pool size, where optimum performance is a requirement, is a minimum of 4 array groups dedicated to a single pool
– Rather than trying to create 1 or 2 really big pools, be prepared to design 4 or more smaller pools to provide some isolation between candidate applications, based on workload profile, production vs. test/dev, and/or thin-friendly vs. not (including unknown application behaviors)
– Decide on RAID-10, RAID-5, or RAID-6 array groups for a given pool based on normal application design rules. Do not intermix different RAID configurations in the same pool. RAID-6 provides some extra insulation against a failed array group destroying a pool
– Dedicate whole array group(s) to a pool. Each parity group should be used 100 percent for Hitachi Dynamic Provisioning. Define one large LDEV per array group as a pool volume
– With a minimum of 4 array groups per pool, each assigned AG should be behind a different BED pair (if there are fewer than 4 BED pairs, spread across all installed BED pairs)

• Normal design principles apply
– However, the performance requirement may be met by pooling together the requirements (see the mixed environments section)
– You have to aggregate the sum of the applications in the pool: each workload needs to be modeled and then added together. The total pool design is the sum of the individual requirements

• As always, keep ShadowImage P-VOLs and S-VOLs on different parity groups (different pools)

• The usual rules about distribution over multiple parity groups, BEDs, and so on apply: use as many resources as you have

Page 16: Hitachi Dynamic Provisioning Lessons Learned


Configuration Planning and Design

• DP-VOL design
– Because Hitachi Dynamic Provisioning can be sparse if used right, there is less reason to be restrictive about device sizes. If you want, every device can be right-sized. Management may be simpler if you keep it simple (KISS), however
– You no longer have the problem of fitting objects into the restriction of the parity group; devices can be any size. They can be bigger than any normal LDEV
– DP-VOLs can’t be LUSEd, so you can make LUSEd normal devices bigger than the largest DP-VOL
– Put one DP-VOL in each V-VOL group. This seems silly, and a pain if you use SNAV for many different-sized objects, but if you don’t do this you can’t resize them later
– Sometimes there is the question: many devices or one bigger device?
• On some systems there will be performance advantages for many devices: more devices means more device queuing and less transaction interaction. (For example, in the past it was recommended never to use LUSE on AIX for this reason)
• If you go for one big device
– Make sure there is an adequate CMD tag queue depth
– Make sure the application is configured to use it (overlapping options, parallel options, "buffers," and so on). This is standard stuff, not Hitachi Dynamic Provisioning specific

• DP-VOL/V-VOL Group considerations
– Put only a single DP-VOL in each V-VOL Group. If you don’t, you cannot resize the DP-VOL later. This is going to be enforced automatically soon

Page 17: Hitachi Dynamic Provisioning Lessons Learned


Configuration Planning and Design

• Pool Design Algorithm (a sizing sketch follows below)
1. Determine the storage capacity required. Include planned growth and the risks associated with any planned over-provisioning
   a. One approach with over-provisioning is to assume that one application will "go rogue" and expand to its full capacity; for two to do so would be rare
   b. Therefore, include capacity in the pool to accommodate the most over-provisioned applications
   c. Monitor all applications against their plan (DP-VOL monitoring may help here). If one does go rogue, then you need to determine what went wrong and either:
      i. Accept it and expand the pool, or
      ii. Migrate the rogue out of the pool to ensure it doesn’t block others, and then fix the cause
2. Determine the total pool IOPS required. Include planned growth
3. Choose RAID level, disk type, size, etc. All must be the same
4. Decide the number of array groups needed to support the required capacity
5. Decide the number of array groups needed to support the IOPS requirement
6. Take the larger of the results from steps 4 and 5 and round up to a minimum of four RAID groups
7. Repeat steps 3-6 until cost is optimized and there isn’t too much waste
8. Create the RAID groups. Don’t use concatenation
9. Create one LDEV on each RAID group
10. Create the pool
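
To make the arithmetic in steps 4-6 concrete, here is a minimal sketch of the sizing calculation. The per-array-group capacity and IOPS figures are illustrative assumptions, not Hitachi specifications; in practice they would come from your own RAID level and drive type analysis.

```python
import math

# Illustrative per-array-group figures for a chosen RAID level and drive type
# (assumptions for the example, not Hitachi specifications).
AG_USABLE_GB = 2000.0     # usable capacity of one array group
AG_IOPS = 1200.0          # sustainable IOPS of one array group

def array_groups_needed(required_gb: float, required_iops: float,
                        target_util: float = 0.7) -> int:
    """Steps 4-6 of the pool design algorithm above.

    Capacity is divided by the target utilization (60-80 percent guidance)
    so the pool keeps a free-space buffer.
    """
    by_capacity = math.ceil((required_gb / target_util) / AG_USABLE_GB)   # step 4
    by_iops = math.ceil(required_iops / AG_IOPS)                          # step 5
    return max(by_capacity, by_iops, 4)                                   # step 6: >= 4 AGs

# Example: 9 TB of data plus growth, at 5,500 IOPS
print(array_groups_needed(required_gb=9000, required_iops=5500))  # -> 7
```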

Page 18: Hitachi Dynamic Provisioning Lessons Learned


Configuration Planning and Design

• Allocating Pool LDEVs (USP V/VM)
– Pool LDEVs should be formatted using the SVP Install or Storage Navigator VLL function, with all LDEVs for a given pool created/formatted simultaneously, using the same CU and LDEV ID dispersal. All LDEVs assigned to a pool should be added to the pool at the same time and in LDEV ID order. Finish the operation by pressing the Optimize button on the pool window, which will optimize the free page list

Page 19: Hitachi Dynamic Provisioning Lessons Learned


Configuration Planning and Design

• Data availability considerations of wide striping
– The data availability considerations that apply to a thin device configuration are the same as those that apply to a configuration in which device-level wide striping is achieved using RAID
– Wide striping may increase the number of LUNs affected by a data loss event. When designing a configuration involving thin devices, consider the following availability aspects of the drives underlying the thin pool:
• Mean time between drive failures (MTBF) for the drives underlying the set of data devices
• The type of RAID protection used
• The number of RAID groups over which the set of server volumes is spread
• Mean time to repair (MTTR), including drive rebuild time
– So availability is dependent upon the number of RAID groups underlying the thin pool. Because of the use of wide striping, the availability of any thin device using a pool may be impacted by the failure of any RAID group used by the thin pool. The dependency on the MTTR should also be noted
– It is recommended that the drives underlying a thin pool be configured with available permanent spares
– RAID-6 provides some extra insulation against a failed array group destroying a pool
– When designing a configuration involving thin devices (or any other approach that results in devices that are widely striped), the device-level availability implications should be carefully considered

Page 20: Hitachi Dynamic Provisioning Lessons Learned


Performance

• The basic rule of thumb is that static provisioning performance is exactly the same as equivalent dynamic provisioning performance. Using Hitachi Dynamic Provisioning doesn't cost you in performance - the news is all good!

• So, if you start a design with a workload requirement for X IOPS and Y GB you would do analysis to determine a specific Raid level and drive type. With static provisioning this might imply a configuration of A array groups, each of Raid-R. If you wanted to put the same workload onto Hitachi Dynamic Provisioning you would want the same (or larger) pool design - A array groups, each of Raid-R

• The workloads aggregate together for all the applications you will be putting on the pool

• Performance design requirements
– A pool should be constructed from four or more array groups
– If you have multiple BEDs, then spread the load evenly across them
– The array groups used in Hitachi Dynamic Provisioning pools should not be used for any other purpose
– The array groups used in Hitachi Dynamic Provisioning pools should be used for only one pool
– Pools must be homogeneous:
• All internal or all external
• All the same disk type
• All the same disk rotational speed
• All the same RAID level
• With identical Pool-VOL sizes
• Pool volumes should occupy the whole array group
– There are no hard limits on any of the above. You can create a pool which breaks all of these rules, but we don’t recommend it for production use

Page 21: Hitachi Dynamic Provisioning Lessons Learned


Performance

• Common or Separate Pools
– The advantage of larger pools is that there is more opportunity for smoothing out workload differences. In general, the bigger the better
– Tests have been done on an Exchange database with separate pools for log/data and with a common pool. An overall reduction in access time was observed for the common pool. You should, however, put the database and log files on different V-VOLs so that the cache algorithms can schedule for their different random/sequential characteristics
– The possible disadvantage of large pools with multiple workloads is that you cannot prevent one workload from "stealing" all the performance
– For replication, we recommend that you put P-VOLs and S-VOLs in separate pools, just as with static provisioning

Page 22: Hitachi Dynamic Provisioning Lessons Learned


Performance

• More on Performance of Mixed Workloads
– We have found very good results combining logs and data (WP hoped for)
– There is an argument that the larger you make a pool, the more likely you will benefit from natural load balancing and a reduction in hot spots
– But there is the counter-argument that some workloads must be isolated from one another
• These are often operational and management decisions outside of a technically driven storage design
• So you might pool together several workloads that have similar characteristics, for example several OLTP systems
• But avoid mixing a bursty workload (BI or DW) with a response-sensitive workload like OLTP
– You should, however, get some interesting benefits from mixing different workloads. There are four rough classes; see the matrix below
– If the IOPS requirements are all low, then any pool design will work. You might consider the lowest-cost approach with RAID-5, SATA, or external storage
– The more GB you have in the pool, the more spindles are needed to implement it. As a consequence, the Many IOPS + Many GB requirements are relatively easy to meet, but you must do the analysis to determine you have adequate support
– The problem case is Many IOPS with Few GB. This cannot work on its own: either there are too few spindles for the IOPS or too many spindles for the capacity. This is true with Hitachi Dynamic Provisioning or normal provisioning
– But you can leverage the performance-leveling effect of large pools with wide striping
– A low IOPS + many GB application has IOPS to spare. If you combine this workload in the same pool with a many IOPS + few GB application, then both requirements are served (a small pairing sketch follows the matrix). You might even be able to …

Requirement | Few GB | Many GB
Many IOPS   | HL     | HH
Few IOPS    | LL     | LH
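
As a small illustration of the pairing logic described above (not part of the original material), the following sketch classifies workloads into the four quadrants and flags complementary pairs; the GB and IOPS cut-offs are arbitrary assumptions.

```python
# Sketch: classify workloads into the HL/HH/LL/LH quadrants above and
# flag complementary pairings (e.g. HL + LH). Thresholds are arbitrary.
IOPS_HIGH = 5000      # assumed boundary between "Few" and "Many" IOPS
GB_HIGH = 5000        # assumed boundary between "Few" and "Many" GB

def quadrant(iops: float, gb: float) -> str:
    """Return the two-letter class: IOPS letter first, then capacity letter."""
    return ("H" if iops >= IOPS_HIGH else "L") + ("H" if gb >= GB_HIGH else "L")

def complementary(a: str, b: str) -> bool:
    """HL (many IOPS, few GB) pairs well with LH (few IOPS, many GB)."""
    return {a, b} == {"HL", "LH"}

oltp = quadrant(iops=12000, gb=800)      # -> "HL"
archive = quadrant(iops=300, gb=40000)   # -> "LH"
print(oltp, archive, complementary(oltp, archive))  # HL LH True
```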

Page 23: Hitachi Dynamic Provisioning Lessons Learned


Performance

• Performance and Capacity Planning Tools
– Performance Monitor has limited reporting capability, but it can show real-time DP-VOL performance statistics
– Tuning Manager has the best capability for reporting and trending of pool and DP-VOL utilization, as well as I/O statistics. Tuning Manager also allows additional thresholds to be implemented besides the 3 standard thresholds provided by Dynamic Provisioning
– Storage Navigator and Device Manager provide provisioning and utilization information
– With version 6.2, Device Manager shows the total virtual capacity provisioned from a Hitachi Dynamic Provisioning pool. This is the total capacity that can be demanded of the pool. It also shows the amount consumed from the pool, and the amount of capacity provisioned and consumed for a DP-VOL; the latter represents the aggregate total of all pool pages allocated to the DP-VOL
– This is also reported in Tiered Storage Manager and Tuning Manager
– Use the Device Manager CLI to automate reporting and optionally provide summaries (see the sketch below)
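
The slides do not prescribe a specific command or report format, so the following is only a sketch of what an automated summary might look like: it assumes the DP-VOL data has already been exported (for example from the Device Manager CLI) into a CSV file with the hypothetical columns shown in the comments.

```python
import csv
from collections import defaultdict

# Sketch only: assumes a CSV export with these hypothetical columns:
#   pool_id, dpvol_id, provisioned_gb, consumed_gb

def summarize_pools(csv_path: str) -> dict[str, dict[str, float]]:
    totals: dict[str, dict[str, float]] = defaultdict(
        lambda: {"provisioned_gb": 0.0, "consumed_gb": 0.0})
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            pool = totals[row["pool_id"]]
            pool["provisioned_gb"] += float(row["provisioned_gb"])
            pool["consumed_gb"] += float(row["consumed_gb"])
    return totals

if __name__ == "__main__":
    for pool_id, t in summarize_pools("dpvol_report.csv").items():
        ratio = (t["provisioned_gb"] / t["consumed_gb"]) if t["consumed_gb"] else float("inf")
        print(f"{pool_id}: provisioned {t['provisioned_gb']:.0f} GB, "
              f"consumed {t['consumed_gb']:.0f} GB, over-provisioning ratio {ratio:.1f}x")
```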

Page 24: Hitachi Dynamic Provisioning Lessons Learned


OS and File System Thin Friendliness

OS | File System | Metadata Writing (default parameters) | DP Pool Capacity Consumed
Windows Server 2003 | NTFS | Writes metadata to first block | Small (one page)
Windows Server 2008 | NTFS | Under investigation | Under investigation
Linux | XFS | Writes metadata in Allocation Group Size intervals | Depends upon allocation group size; pool space consumed is approximately [Virtual-LU Size] * [32MB / Allocation Group Size]
Linux | Ext2 / Ext3 | Writes metadata in 128MB intervals | About 25% of the size of the Virtual-LU. Note: the default block size for these file systems is 4KB, which results in 25% of the Virtual-LU acquiring 32MB pages. If the file system block size is changed to 2KB or less, the metadata is written at 32MB or smaller intervals, so DP pool capacity consumption becomes 100%
Solaris | UFS | Writes metadata in 52MB increments | Size of Virtual-LU
Solaris | ZFS | Writes metadata in Virtual Storage Pool Size intervals | Small (when the Virtual-LU is more than 1GB)
Solaris | VxFS | Writes metadata to first block | Small (one page)
AIX | JFS | Writes metadata in 8MB intervals | Size of Virtual-LU. Note: if you change the Allocation Group Size settings when you create the file system, the metadata can be written at up to 64MB intervals; approximately 50% of the DP pool is then consumed
AIX | JFS2 | Writes metadata to first block | Small (one page)
AIX | VxFS | Writes metadata to first block | Small (one page)

Legend (file system suitability for DP pool capacity utilization): No problem – make positive use | Low advantage | No advantage

Page 25: Hitachi Dynamic Provisioning Lessons Learned


OS and File System Thin Friendliness (continued)

OS | File System | Metadata Writing (default parameters) | DP Pool Capacity Consumed
HP-UX | JFS (VxFS) | Writes metadata to first block | Small (one page)
HP-UX | HFS | Writes metadata in 10MB intervals | Size of Virtual-LU
VMware (Windows Server 2003 guest) | NTFS | Writes metadata to first block | Small (one page)
VMware (Windows Server 2008 guest) | NTFS | Under investigation | Under investigation

How the metadata interval determines consumption (32MB pool page):
• Case 1: metadata written at 2GB intervals across the Virtual-LU – DP pool capacity consumed: 32MB / 2GB = 1.6%
• Case 2: metadata written at 64MB intervals across the Virtual-LU – DP pool capacity consumed: 32MB / 64MB = 50%
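
The two cases above follow directly from the ratio of page size to metadata interval. As a minimal sketch of that arithmetic (the 32MB page size comes from the slides; the helper itself is illustrative):

```python
PAGE_MB = 32  # Dynamic Provisioning pool page size used in the examples above

def dp_consumption_fraction(metadata_interval_mb: float) -> float:
    """Approximate fraction of a Virtual-LU's pages consumed when a file system
    writes metadata every `metadata_interval_mb` across the whole LU."""
    return min(1.0, PAGE_MB / metadata_interval_mb)

print(f"{dp_consumption_fraction(2048):.1%}")  # 2GB intervals  -> 1.6%
print(f"{dp_consumption_fraction(64):.1%}")    # 64MB intervals -> 50.0%
print(f"{dp_consumption_fraction(8):.1%}")     # 8MB intervals (AIX JFS default) -> 100.0%
```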

Page 26: Hitachi Dynamic Provisioning Lessons Learned


Server Side

• From a server admin point of view, every system should:
– Have moderate over-provisioning (the amount depends on the plan and platform flexibility)
– Leverage volume managers where feasible
– Monitor use of both virtual volumes and the pool:
• Usage on the file system side vs. the pool side; alert on discrepancy, and if present, confirm whether it is expected; if not, change the data class to "leaky", "badly managed", and so on
• Monitor rate of usage vs. planned rate of usage; alert on discrepancy, and if present, do a management review for impact analysis
• Monitor usage versus planned usage; alert on discrepancy, and if present, do a management review for impact analysis
– Have an expansion plan: when and how often to increase
• Implement this with daemons for auto-expansion (see the sketch after this list)

• Server Partition Expansion:
– Create and map an over-provisioned volume
– But don’t allocate all of it to the partition
– When the top of the partition is reached, the OS is forced to reclaim space
– If total space gets low, use the OS to expand the partition
– Can be scripted
– Generally easier than a hardware add, LUN resize, and so on
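
As a rough sketch of the auto-expansion daemon idea above: the mount point, thresholds, and the expansion step are all hypothetical placeholders, and the actual grow operation would use your volume manager or OS tooling rather than the stand-in shown here.

```python
import shutil
import subprocess
import time

MOUNT_POINT = "/data"        # hypothetical file system to watch
EXPAND_AT = 0.85             # expand when the partition is 85% full (assumed policy)
CHECK_INTERVAL_S = 300       # check every five minutes

def usage_fraction(path: str) -> float:
    total, used, _free = shutil.disk_usage(path)
    return used / total

def expand_partition(path: str) -> None:
    # Placeholder: replace with your volume manager / OS grow commands,
    # e.g. extend the logical volume and then grow the file system.
    subprocess.run(["logger", f"auto-expand requested for {path}"], check=False)

if __name__ == "__main__":
    while True:
        if usage_fraction(MOUNT_POINT) >= EXPAND_AT:
            expand_partition(MOUNT_POINT)
        time.sleep(CHECK_INTERVAL_S)
```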

Page 27: Hitachi Dynamic Provisioning Lessons Learned


Server Side

• Hitachi Dynamic Provisioning and VMware
– VMware is a popular choice with Hitachi Dynamic Provisioning. VMware works as you would hope; just use the rules appropriate for the guest OS (Windows NTFS, Linux, and so on). VMFS is thin-friendly:
– It generates little file system metadata
• There is only one VMFS formatting option that is not thin-friendly (eagerzeroedthick)
– VMFS reclaims space efficiently (most recently used space allocation)
– You can leverage over-provisioning at the VMware level (and, where appropriate, at the guest OS level)
– The most important thing is to avoid putting too many guests on the same LUN (where 1 DP-VOL = 1 LUN = 1 VMFS in VMware) to limit issues with SCSI reserve contention; the recommendation is 5 guests per LUN, and no more than 10. The cases implemented have been using a mixture of Windows and Linux guests; the last one (last week) may also deploy some Unix guests
– Regarding thin-friendliness, when you add a LUN (DP-VOL) to VMware control, you either create a VMFS on it or you define it as an RDM (raw device, pass-through to the guest OS). VMFS is thin-friendly; it only writes metadata at the top (it may use additional space for snapshots or clones). RDM writes nothing on the DP-VOL, so thin-friendliness is entirely dependent on the guest OS file system (Windows NTFS vs. Linux EXT3)
– After creating the VMFS, you create a virtual disk for each guest OS (or possibly multiple virtual disks per guest). This is analogous to creating an LV in Veritas VxVM. There is a parameter that controls whether VMware will perform a hard format or write zeros on the virtual disk

Page 28: Hitachi Dynamic Provisioning Lessons Learned


Server Side

• Other Server Side Notes
– Avoid tools which write to the entire disk:
• Defragmentation
• Low-level UNIX media format or check
• Volume-level copy tools (dd)
– Don’t use host-based software RAID-1 mirroring or RAID-5
– Use file-level copy instead

Page 29: Hitachi Dynamic Provisioning Lessons Learned


Operations

• Growing Pools
– Expansion of a pool must maintain the original design rules. The whole pool must be:
• All internal or all external
• All the same disk type
• All the same disk rotational speed
• All the same RAID level
• With identical Pool-VOL sizes
– When expanding a pool:
• For pools created before v5 of Hitachi Dynamic Provisioning: always perform expansion by adding the same number of dedicated AGs with which the pool was originally created
– For example, if a pool is initially created with 4 AGs, expand with 4 more; if initially created with 8 AGs, expand with 8 more
– Assuming the initially assigned AGs are virtually exhausted of free pages, adding in equal increments of AGs is critical to maintain the applications’ existing performance characteristics
• For pools created with v5 or later: expand the pool using any number of dedicated array groups. Hitachi Dynamic Provisioning v5 performs an automatic rebalance of page assignments across the pool, so adding even one array group will retain a balanced pool

Page 30: Hitachi Dynamic Provisioning Lessons Learned


Operations

• Deleting Virtual Volumes
– Thin devices can be deleted once they are unbound from the thin storage pool. When thin devices are unbound, the extents that have been allocated to them from the thin pool are freed, causing all data from the thin device to be discarded

Page 31: Hitachi Dynamic Provisioning Lessons Learned


Operations

• Migrations
– On the USP V and VM, pretty much all the standard migration techniques available with non-Hitachi Dynamic Provisioning volumes are now available, including pool-to-pool migration
– This includes migrating dynamically provisioned volumes to statically provisioned ones, and migrating statically provisioned volumes to dynamically provisioned ones. But, of course, the second would fully allocate the target DP-VOL
– As always with migration, the volumes must be the same capacity at the time of migration (and not changing in size)
– Any other normal limitations also apply
– You cannot migrate back to the same pool
– The target pool cannot be full or blocked (obviously), and you’ll be warned if you would cause it to go over a threshold
– On the AMS, only ShadowImage is supported for migration in the first release

• Migration Restrictions
– You can’t migrate LUSE to non-LUSE (whether Hitachi Dynamic Provisioning is involved or not). This isn’t a Hitachi Dynamic Provisioning issue. There is a method to do this, but it’s complex, so we don’t allow the customer to do it
• As part of a GSS migration engagement we can:
– Virtualize the USP V behind itself
– The LUSE is then presented as a non-LUSE
– We then migrate that
– You cannot migrate POOL-VOLs. Therefore, you cannot change a pool from RAID-5 to RAID-10. You have to move the DP-VOLs, not the pool

Page 32: Hitachi Dynamic Provisioning Lessons Learned


Hitachi Dynamic Provisioning
• Overview
• What’s New
• University HealthSystems Consortium

Lessons Learned
• Getting Started
• Configuration Planning and Design
• Performance
• Server Side
• Operations

Questions and Answers

Questions and Answers

Page 33: Hitachi Dynamic Provisioning Lessons Learned


Upcoming WebTech Seminars

• SAN Series Webcasts

– SAN Consolidation – The Next Step, July 15, 2009, 9 a.m. PT

• Upcoming Webcasts
– Storage Reclamation: A Case Study, "How to Live off your Body Fat in a Down Economy," July 22, 2009, 9 a.m. PT
– What’s Next for Sustainable IT?, July 29, 2009, 9 a.m. PT

• Please check www.hds.com/webtech for:

– A link to the recording, the presentation, and Q&As (available next week)

– Schedule and registration for upcoming WebTech sessions

Page 34: Hitachi Dynamic Provisioning Lessons Learned


Thank You