
© Copyright IBM Corporation, 2012.

IBM storage virtualization combined with IBM PowerVM virtualization

Proof of concept solution example and best practices

IBM Systems and Technology Group ISV Enablement September 2012


Table of contents

Abstract
Introduction
PowerVM virtualization
    Virtual SCSI
    Virtual Fibre Channel
        Virtual Fibre Channel and NPIV configuration
        Multipath MPIO and SDDPCM
    Virtual Ethernet configuration and network planning
Zoning
Storage virtualization
    IBM Storwize V7000 storage system
    SAN Volume Controller
    Provisioning storage on the Storwize V7000 system or SAN Volume Controller
        Creating managed disks
        Creating storage pools
        Creating hosts
        Creating volumes
        Thin-provisioned volumes
        Real-time Compression
        IBM Easy Tier function
    Storwize V7000 Unified system
Live Partition Mobility
    Preparing the environment for active LPM
LPAR disk storage configuration
    Configuring volumes with virtual SCSI through the VIOS partitions
    Creating volume groups, logical volumes, and application file systems
    Setting up the mobile partition boot disk on the new shared hdisk
Partition migration
    Proof of concept test
        Components
Summary
Resources
Trademarks and special notices


Abstract

IBM virtualization products enable businesses to consolidate server and storage resources and provide a more efficient, dynamic IT infrastructure that addresses stringent service level agreements (SLAs). This white paper describes the setup and provisioning of storage from the IBM Storwize V7000 storage system or IBM System Storage® SAN Volume Controller to IBM Power partitions using IBM PowerVM server virtualization, including the requirements and best practices needed to ensure a high degree of availability and fault tolerance. A proof of concept test is described, which includes the migration of an active server partition from one Power system to another using IBM PowerVM Live Partition Mobility (LPM) and virtualized volumes from an IBM Storwize V7000 system. The paper demonstrates the speed of configuration and ease of management that customers can expect when providing a fully virtualized computing platform, one that offers the degree of system and storage infrastructure flexibility required by today's production data centers and cloud environments.

Introduction

Storage virtualization, like server virtualization, is now one of the foundations for building a flexible and reliable infrastructure solution.

The IBM® Storwize® V7000 disk system and IBM System Storage® SAN Volume Controller target medium to large businesses that seek enterprise-class storage efficiency and ease of management in consolidating capacity from different storage systems, both IBM and non-IBM branded, while enabling common copy functions and nondisruptive data movement, and improving performance and availability. IBM storage virtualization products combine hardware and software into an integrated, modular solution that forms a highly scalable storage cluster.

IBM PowerVM® is a combination of hardware, firmware, and software that provides virtualization of server resources. Businesses are turning to PowerVM virtualization to consolidate multiple workloads onto fewer systems, increase server utilization, and reduce cost. PowerVM technology provides a secure and scalable virtualization environment for IBM AIX®, IBM i, and Linux® applications, built upon the advanced reliability, availability, serviceability features and the leading performance of the IBM Power® platform.

The Live Partition Mobility function, offered as part of IBM PowerVM Enterprise Edition, is designed to enable the migration of an entire logical partition (LPAR) from one system to another. LPM uses a simple, automated procedure that transfers the configuration from source to destination without disrupting the hosted applications or the setup of the operating system.

When IBM storage virtualization is combined with PowerVM virtualization, the result is unprecedented flexibility for IBM POWER6® and IBM POWER7® processor-based servers.

Note: Although the configurations and proof of concept test described in this paper are based on AIX LPARs, most of the content for Storwize V7000, SAN Volume Controller, and Power virtualization applies to IBM i and Linux partitions as well.


PowerVM virtualization

IBM Power Systems™ with IBM PowerVM virtualization enable businesses to consolidate servers and applications, virtualize system resources, and provide a more flexible and dynamic IT infrastructure.

The IBM POWER Hypervisor™ enables the hardware to be divided into multiple partitions (LPARs), and ensures isolation between them. The hypervisor orchestrates and manages system virtualization, including the creation of LPARs and dynamically moving resources across multiple operating environments. The POWER Hypervisor also enforces partition security, and can provide interpartition communication that enables the virtual SCSI, virtual Fibre Channel (FC) and virtual Ethernet functions.

PowerVM Editions deliver advanced virtualization functions for AIX, IBM i, and Linux clients such as IBM Micro-Partitioning® technology, Virtual I/O Server (VIOS), Active Memory Sharing, Shared Storage Pools, and Live Partition Mobility. PowerVM features provide the ability to virtualize processor, memory, and I/O resources to increase asset utilization and reduce infrastructure costs. PowerVM also allows server resources to be dynamically adjusted to meet changing workload demands, without a server shutdown.

PowerVM Editions provide Virtual I/O Server technology to facilitate consolidation of LAN and disk I/O resources and minimize the number of physical adapters required in a Power system. The VIOS owns the resources that are shared with clients. A physical adapter assigned to the VIOS partition can be shared by one or more other partitions. The VIOS can use both virtualized storage and network adapters, making use of the virtual SCSI, virtual Fibre Channel with N-Port ID Virtualization (NPIV), and virtual Ethernet facilities.

You can achieve continuous availability of virtual I/O by deploying multiple VIOS partitions (dual VIOS) on a Hardware Management Console (HMC) managed system to provide highly available virtual services to client partitions.

Virtual SCSI

The VIOS allows virtualization of physical storage resources, accessed as standard SCSI-compliant LUNs by the client partition, through virtual SCSI adapters. Virtual SCSI allows client LPARs to share disk storage and tape or optical devices that are assigned to the VIOS partition. The functionality for virtual SCSI is provided by the POWER Hypervisor. Virtual SCSI allows secure communications between partitions and a VIOS that provides storage backing devices. The VIOS is capable of exporting a pool of heterogeneous physical storage as a homogeneous pool of block storage in the form of SCSI disks.

Virtual Fibre Channel is usually the best choice for accessing application volumes, but virtual SCSI is often used to configure the root volume group for AIX client partitions, avoiding possible complexity or downtime during device driver upgrades. Best practice is to configure SAN disks that can be accessed from dual VIOS partitions, with native multipath I/O (MPIO) installed on the VIOS. In such a dual VIOS setup, either Virtual I/O Server can be rebooted for maintenance without loss of access to the virtual SCSI disks. MPIO for virtual SCSI devices supports only failover mode as the path selection algorithm; load balancing is not an option. For any given virtual SCSI disk, a client partition uses only the primary path to one VIOS unless failover to the secondary path is necessary due to failure of the primary path.


Virtual SCSI is also a good choice to share access to optical devices and direct-attached tape drives, and it is the best method to access SAN disk storage through FC devices that do not support NPIV.

Virtual SCSI was used to present a virtual rootvg disk to the client LPAR in the proof of concept test environment.

For details on configuring virtual SCSI in a dual VIOS environment, refer to chapter 4 of IBM PowerVM Virtualization Introduction and Configuration in IBM Redbooks® at: ibm.com/redbooks/redbooks/pdfs/sg247940.pdf

Virtual Fibre Channel

Because of its ease of use in SAN storage management, most administrators will want to consider setting up virtual Fibre Channel adapters using NPIV technology, instead of virtual SCSI, to access the application volumes in production LPARs.

NPIV is a technology that allows multiple LPARs to access independent physical storage through the same physical FC adapter. Each partition is identified by a pair of unique worldwide port names (WWPNs), enabling you to connect each partition to independent physical storage on a SAN. Unlike with virtual SCSI, only the client partition sees the disks; the VIOS partition acts only as a pass-through, managing the data transfer through the IBM POWER Hypervisor. The primary advantages of using NPIV are that storage device configuration is less complicated and the client LPAR recognizes the disk volumes as if dedicated FC adapters were directly attached. Also, the SDD or SDDPCM multipathing software can be loaded on the client to take advantage of load-balancing functionality. Tape libraries should be configured through NPIV as well, because LAN-free backup is not supported through virtual SCSI. When using NPIV, well-established SAN storage management techniques, such as zone mapping and masking, remain valid without the need to additionally map volumes as backing devices in the VIOS partition.

To implement NPIV, you must verify that the Fibre Channel host bus adapters (HBAs), switches, and directors are NPIV-capable, and that the versions of the AIX, VIOS, and HMC support it. The virtual FC adapters for both the VIOS partition and the client partition are configured in the partition profile using the HMC.

If enough physical adapters are available, best practice is to assign ports from two physical FC adapters to each of the VIOS partitions, so that redundant paths remain available even when only a single VIOS is running.


Figure 1: Virtual storage network of the proof of concept test environment

Virtual Fibre Channel and NPIV configuration

The following commands and screen captures can help you configure NPIV virtual FC adapters through each of the dual VIOS partitions.

The lsnports command lists the available physical FC adapters supporting NPIV, shows if the attached switch port is NPIV-capable, and gives a short overview of available NPIV ports (column aports in the command output):

padmin@isvp17v > lsnports
name   physloc                     fabric tports aports swwpns awwpns
fcs0   U78A0.001.DNWK129-P1-C2-T1  1      64     63     2048   2042
fcs1   U78A0.001.DNWK129-P1-C2-T2  1      64     64     2048   2045
fcs2   U78A0.001.DNWK129-P1-C3-T1  0      64     64     2048   2048
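When many adapters are present, the lsnports output above can also be checked programmatically. The following is a minimal sketch, assuming the fixed-column format shown, that selects ports attached to an NPIV-aware fabric (fabric is 1) with virtual ports still available (aports greater than 0); the sample output and the helper function name are illustrative only.

```python
# Minimal sketch: pick NPIV-capable ports on an NPIV-aware fabric
# (fabric == 1) that still have virtual ports free (aports > 0).
# Column layout is assumed to match the lsnports output shown above.
LSNPORTS_OUTPUT = """\
name physloc fabric tports aports swwpns awwpns
fcs0 U78A0.001.DNWK129-P1-C2-T1 1 64 63 2048 2042
fcs1 U78A0.001.DNWK129-P1-C2-T2 1 64 64 2048 2045
fcs2 U78A0.001.DNWK129-P1-C3-T1 0 64 64 2048 2048
"""

def usable_npiv_ports(output: str) -> list[str]:
    lines = output.strip().splitlines()
    header = lines[0].split()
    ports = []
    for line in lines[1:]:
        row = dict(zip(header, line.split()))
        if row["fabric"] == "1" and int(row["aports"]) > 0:
            ports.append(row["name"])
    return ports

print(usable_npiv_ports(LSNPORTS_OUTPUT))  # fcs0 and fcs1; fcs2 is not on an NPIV fabric
```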

Configure a virtual FC adapter on each VIOS partition.


Figure 2: Dual-node virtual FC adapters

Create two virtual FC adapters on the client LPAR, using the Partner Adapter and Server Partition values to assign the VIOS virtual FC adapters you just created. After editing, you need to shut down and restart the partitions using the new profiles.

Figure 3: Virtual FC adapters on the client LPAR

On the Virtual Fibre Channel tab of the HMC Virtual Storage Management panel, select the Fibre Channel port to assign to the client virtual FC adapter. Perform this step for both VIOS partitions. Two WWPNs are displayed: the first is the primary WWPN; the second is used only for Live Partition Mobility.


Figure 4: WWPNs are displayed in the Virtual Storage Management window on the HMC

On the VIOS partitions, the command lsmap -all -npiv shows a compact overview of the current mapping of virtual FC host adapters (vfchost0) to physical FC ports (FC name:fcs0) in the VIOS partition and virtual FC client adapters (VFC client name:fcs0) in the client partition.

padmin@isvp17v > lsmap -all -npiv
Name          Physloc                            ClntID ClntName       ClntOS
------------- ---------------------------------- ------ -------------- ------
vfchost0      U8233.E8B.065D51P-V2-C35           1      isvp17         AIX

Status:LOGGED_IN
FC name:fcs0                    FC loc code:U78A0.001.DNWK129-P1-C2-T1
Ports logged in:3
Flags:a<LOGGED_IN,STRIP_MERGE>
VFC client name:fcs0            VFC client DRC:U8233.E8B.065D51P-V1-C36-T1

Run the cfgmgr command on the client LPAR to recognize the new FC devices. On the client LPAR, the virtual FC adapter appears as an fcsx device, similar to a physical FC adapter. Use the lscfg command to show the adapter’s WWPN and hardware location.

isvp17> cfgmgr
isvp17> lscfg -vl fcs0
fcs0    U8233.E8B.065D51P-V1-C36-T1    Virtual Fibre Channel Client Adapter

        Network Address.............C05076037CF00000
        ROS Level and ID............
        Device Specific.(Z0)........
        Device Specific.(Z1)........
        Device Specific.(Z2)........
        Device Specific.(Z3)........
        Device Specific.(Z4)........
        Device Specific.(Z5)........
        Device Specific.(Z6)........
        Device Specific.(Z7)........
        Device Specific.(Z8)........C05076037CF00000
        Device Specific.(Z9)........
        Hardware Location Code......U8233.E8B.065D51P-V1-C36-T1
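When scripting the zoning step, the WWPN can be extracted from the lscfg listing rather than copied by hand. The following is a minimal sketch, assuming the dotted field layout shown above; the sample text and the function name are illustrative only.

```python
# Minimal sketch: pull the client adapter's WWPN out of lscfg -vl
# output so it can be pasted into a zoning alias. The dotted
# "Network Address" field format matches the listing shown above.
LSCFG_OUTPUT = """\
Network Address.............C05076037CF00000
Device Specific.(Z8)........C05076037CF00000
Hardware Location Code......U8233.E8B.065D51P-V1-C36-T1
"""

def wwpn_from_lscfg(output: str) -> str:
    for line in output.splitlines():
        if line.strip().startswith("Network Address"):
            # The value follows the run of dots; split on dots and
            # take the last non-empty piece.
            return line.split(".")[-1]
    raise ValueError("no Network Address field found")

print(wwpn_from_lscfg(LSCFG_OUTPUT))  # C05076037CF00000
```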

Now zoning can be performed using the client LPAR WWPNs for each virtual adapter. If implementing Live Partition Mobility, zone both WWPNs provided. For detailed NPIV information, refer to section 2.4 of PowerVM Migration from Physical to Virtual Storage from IBM Redbooks at: ibm.com/redbooks/redbooks/pdfs/sg247825.pdf

Multipath MPIO and SDDPCM

Multipath support for AIX 7.1 and Storwize V7000 is delivered through the IBM Subsystem Device Driver Path Control Module (SDDPCM). With the dual VIOS and NPIV setup used in the proof of concept test environment, SDDPCM must be loaded on the client LPAR. SDDPCM provides multipath and load-balancing management for the redundant FC paths defined in the environment.

In the event you decide to use virtual SCSI (VSCSI) to access the application volumes, multipath software should be loaded on the VIOS partitions.

You can download SDDPCM 2.6.1.0 for Storwize V7000, Platform AIX 7.1 from the following website:

ibm.com/support/docview.wss?rs=540&context=ST52G7&uid=ssg1S4000201&loc=en_US&cs=utf-8&lang=en+en#SVC

You will also be required to download and install the host attachment for SDDPCM on AIX from the following website:

ibm.com/support/docview.wss?uid=ssg1S4000203

Follow the installation directions for both packages found in the SDDPCM user's guide at the following website:

ibm.com/support/docview.wss?uid=ssg1S7000303&aid=1

After the multipath software installation is completed, you can check the multipath status through smit and with the lspath command.

isvp17> smitty mpio --> MPIO Device Management --> Change/Show MPIO Device Characteristics --> select device name

Figure 5: smit screen showing MPIO characteristics


isvp17> lspath -l hdisk2
Enabled hdisk2 fscsi0
Enabled hdisk2 fscsi0
Enabled hdisk2 fscsi1
Enabled hdisk2 fscsi1
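The lspath output above is what confirms that the dual-VIOS design actually delivers redundant paths. As a minimal sketch, assuming the three-column lspath format shown, the following checks that a disk has enabled paths through more than one client FC adapter; the sample data and function name are illustrative only.

```python
# Minimal sketch: confirm a disk has enabled paths through more than
# one client FC adapter (fscsi0 and fscsi1 here), as the dual-VIOS
# design requires. Input format matches the lspath output above.
LSPATH_OUTPUT = """\
Enabled hdisk2 fscsi0
Enabled hdisk2 fscsi0
Enabled hdisk2 fscsi1
Enabled hdisk2 fscsi1
"""

def enabled_adapters(output: str, disk: str) -> set[str]:
    adapters = set()
    for line in output.strip().splitlines():
        status, hdisk, parent = line.split()
        if status == "Enabled" and hdisk == disk:
            adapters.add(parent)
    return adapters

adapters = enabled_adapters(LSPATH_OUTPUT, "hdisk2")
print(sorted(adapters))
assert len(adapters) >= 2, "hdisk2 has no redundant path"
```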

Virtual Ethernet configuration and network planning

Virtual LAN (VLAN) capability in the hypervisor allows secure communication between LPARs without the need for physical I/O adapters or cabling. By creating virtual Ethernet adapters on each LPAR and connecting them to virtual LANs, LPARs can communicate with each other without any physical hardware assigned. TCP/IP communication over these virtual LANs is routed through the server firmware. Virtual Ethernet adapters connect to an IEEE 802.1Q-style virtual Ethernet switch; assigning VLAN IDs enables LPARs to share a common logical network. The HMC is used to create virtual Ethernet adapters and make VLAN ID assignments.

The Virtual I/O Server allows shared access to external networks through the Shared Ethernet Adapter (SEA), which provides access by connecting internal VLANs with the VLANs on the external switches. This eliminates the need for each client LPAR to own a real adapter to connect to the external network.

A commonly used method to provide continuous availability for shared Ethernet access to external networks is Shared Ethernet Adapter Failover in a dual VIOS setup, which offers Ethernet redundancy to the client at the VIOS level. In an SEA failover configuration, two Virtual I/O Servers have the bridging functionality of the Shared Ethernet Adapter. They use a control channel to determine which one supplies Ethernet service to the client. If one SEA loses access to the external network through its physical Ethernet adapter or one VIOS is shut down for maintenance, it will automatically fail over.

Only virtual Ethernet adapters are allowed on a mobile partition. Ethernet adapters assigned directly to a mobile partition, including a Logical Host Ethernet Adapter (LHEA) assigned through the VIOS, will generate errors in Live Partition Mobility validation.

The proof of concept test environment took advantage of the SEA in a dual VIOS setup for the private 10.0.0.x network.


Figure 6: Virtual LAN topology of the proof-of-concept test environment

The following Virtual Ethernet Adapter Properties screens from the HMC show the setup of the virtual Ethernet adapters used for the Shared Ethernet Adapter. Virtual Ethernet adapters are created on each VIOS for the VLAN access and its control channel. A virtual Ethernet adapter is created to access the VLAN on the client LPAR.


Figure 7: Virtual Ethernet adapter for the internal VLAN for the dual VIOS SEA setup

Figure 8: Virtual Ethernet adapter for the control channel VLAN for the dual VIOS SEA setup

After the virtual Ethernet adapters are active, create the Shared Ethernet Adapter with the mkvdev command to bridge the VLAN to the physical adapter connected to the network.

padmin@isvp17v2 > mkvdev -sea ent3 -vadapter ent6 -default ent6 -defaultid 8 -attr ha_mode=auto ctl_chan=ent7
ent8 Available

Use the lsmap and lsdev commands to verify the SEA setup.

padmin@isvp17v2 > lsmap -all -net
SVEA   Physloc
------ --------------------------------------------
ent6   U8233.E8B.065D51P-V6-C21-T1

SEA                 ent8
Backing device      ent3
Status              Available
Physloc             U78A0.001.DNWK129-P1-C5-T3


padmin@isvp17v2 > lsdev -dev ent8 -attr
attribute     value     description                                                         user_settable
accounting    disabled  Enable per-client accounting of network statistics                  True
ctl_chan      ent7      Control Channel adapter for SEA failover                            True
gvrp          no        Enable GARP VLAN Registration Protocol (GVRP)                       True
ha_mode       auto      High Availability Mode                                              True
jumbo_frames  no        Enable Gigabit Ethernet Jumbo Frames                                True
large_receive no        Enable receive TCP segment aggregation                              True
largesend     0         Enable Hardware Transmit TCP Resegmentation                         True
netaddr       0         Address to ping                                                     True
pvid          8         PVID to use for the SEA device                                      True
pvid_adapter  ent6      Default virtual adapter to use for non-VLAN-tagged packets          True
qos_mode      disabled  N/A                                                                 True
real_adapter  ent3      Physical adapter associated with the SEA                            True
thread        1         Thread mode enabled (1) or disabled (0)                             True
virt_adapters ent6      List of virtual adapters associated with the SEA (comma separated)  True

Network planning for a highly available virtualized environment is important and complex. For detailed networking information, refer to chapter 4 of IBM PowerVM Best Practices, from IBM Redbooks at: ibm.com/redbooks/redpieces/pdfs/sg248062.pdf

Zoning

In the proof of concept test environment, conventional SAN zoning is configured between the Power system and the Storwize V7000 system to allow for the host attachment of the virtualized disks. When zoning in an NPIV environment, you will use the WWPNs assigned to each client. For Live Partition Mobility, both WWPNs specified for each virtual FC adapter need to be included in the zone. The SAN switch will not recognize the LPM WWPN, because it is not broadcast until migration occurs, so it needs to be added to the zone manually instead of selected from a list in the SAN management software.

If you decide to use virtual SCSI instead of NPIV to access the application volumes, you will use the WWPNs assigned to the VIOS partitions when zoning to the Storwize V7000 volumes.

Zoning for the proof of concept client partition is listed in this section with alias and member names referenced in Figure 1.

zone san3_v7k4_p17npiv0:
    isvp17_npiv0
    isvp17_npiv0_lpm
    v7k4A
    v7k4B
zone san4_v7k4_p17npiv1:
    isvp17_npiv1
    isvp17_npiv1_lpm
    v7k4A
    v7k4B
zone san3_v7k4_p17vios0:
    isvp17_vios0
    v7k4A
    v7k4B
zone san4_v7k4_p18vios0:
    isvp18_vios0
    v7k4A
    v7k4B
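Because the LPM WWPN never logs in until a migration runs, forgetting its alias is an easy mistake that only surfaces at migration time. As a minimal sketch, the following checks that every NPIV client zone also contains the matching "_lpm" alias; the zone and alias names are the ones used in the proof of concept, and the "_lpm" suffix convention is an assumption for illustration.

```python
# Minimal sketch: check that every NPIV client zone also contains the
# matching "_lpm" alias for the second WWPN, which the SAN switch
# cannot discover on its own. The "_lpm" naming convention is an
# assumed convention, not a SAN software requirement.
zones = {
    "san3_v7k4_p17npiv0": ["isvp17_npiv0", "isvp17_npiv0_lpm", "v7k4A", "v7k4B"],
    "san4_v7k4_p17npiv1": ["isvp17_npiv1", "isvp17_npiv1_lpm", "v7k4A", "v7k4B"],
}

def missing_lpm_aliases(zones: dict[str, list[str]]) -> list[str]:
    bad = []
    for name, members in zones.items():
        clients = [m for m in members if "npiv" in m and not m.endswith("_lpm")]
        for alias in clients:
            if alias + "_lpm" not in members:
                bad.append(name)
    return bad

print(missing_lpm_aliases(zones))  # [] -- both zones are LPM-ready
```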

Storage virtualization

Storage virtualization allows an organization to implement pools of storage across physically separate disk systems, which may be from different vendors. Storage can then be deployed from these pools and can be migrated between pools without any outage to the attached host systems. Storage virtualization provides a single set of tools for advanced copy functions, such as instant-copy and remote-mirroring solutions. This means that deploying storage can be performed using a single tool, regardless of the underlying storage hardware.

IBM storage virtualization products yield numerous benefits for storage administration and management that include:

• Combining storage capacity from multiple heterogeneous disk systems into a single reservoir that can be managed as a business resource rather than as separate boxes

• Increasing storage utilization by providing host applications with more flexible access to capacity

• Improving productivity for storage administrators by enabling management of heterogeneous storage systems through a common interface

• Improving application availability by insulating host applications from changes to the underlying physical storage infrastructure

• Enabling a tiered storage environment where the cost of storage can be matched to the value of data and easily migrated between those tiers

• Allowing administrators to apply a single set of replication and copy services that operate in a consistent manner, regardless of the backing storage that is being used


Figure 9: View of the Storwize V7000 system and SAN Volume Controller management through a web-based GUI

IBM Storwize V7000 storage system

The IBM Storwize V7000 solution provides a modular storage system that includes the capability to virtualize external SAN-attached storage as well as its own internal storage. Included with the Storwize V7000 system is a simple and easy-to-use GUI that is designed to allow quick and efficient deployment of storage. The GUI runs through a web-browser window on the Storwize V7000 system, so there is no need for a separate console.

The Storwize V7000 system can virtualize external storage up to 32 PB of usable capacity and supports a range of external disk systems. For a list of compatible external disk systems, refer to the following website: ibm.com/support/docview.wss?uid=ssg1S1003908

The Storwize V7000 enclosures support a wide range of drives including solid-state drive (SSD), serial-attached SCSI (SAS), and nearline SAS drive types, with each enclosure holding 12 x 3.5 inch or 24 x 2.5 inch drives, depending on type. The Storwize V7000 solution consists of a control enclosure and up to nine expansion enclosures, with support for intermixing 3.5 inch and 2.5 inch type enclosures. Each control enclosure provides redundant dual-active controllers, with a combined 16 GB cache, eight 8 Gbps FC ports, four 1 Gbps iSCSI ports, and optionally four 10 Gbps iSCSI / FCoE ports.

Figure 10: Storwize V7000 disk system

The base building block of a Storwize V7000 cluster, called the I/O group, is formed by combining a pair of redundant Storwize V7000 controllers, called nodes, which are based on IBM System x® server technology. There can be up to four I/O groups in a Storwize V7000 cluster. A volume is always presented to a host server by a single I/O group of the cluster, and all I/O for that volume is serviced by the owning I/O group. Within an I/O group, there is a preferred node for each volume, and volumes are split between nodes. Each node acts as a failover partner within the I/O group.

Storage presented to the cluster from internal or external back-end disk is known as managed disk (MDisk). Because volume data can exist on multiple MDisks, there are guidelines for the characteristics that each MDisk must have when combined in a storage pool. Factors such as RAID type, RAID array size, disk type, disk rotations per minute (rpm), and disk controller performance must all be taken into consideration when creating storage pools.
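The guideline that MDisks combined in one pool should share the same characteristics can be expressed as a simple consistency check. The following is a minimal sketch; the MDisk records, pool names, and the chosen characteristic fields (RAID level, drive class, rpm) are invented for illustration only.

```python
# Minimal sketch: flag storage pools whose MDisks mix characteristics
# (RAID level, drive class, rpm), which the pool guidelines above
# advise against. All records here are hypothetical examples.
mdisks = [
    {"name": "mdisk0", "pool": "sas_pool",   "raid": "raid5", "drive": "sas",   "rpm": 10000},
    {"name": "mdisk1", "pool": "sas_pool",   "raid": "raid5", "drive": "sas",   "rpm": 10000},
    {"name": "mdisk2", "pool": "mixed_pool", "raid": "raid5", "drive": "sas",   "rpm": 10000},
    {"name": "mdisk3", "pool": "mixed_pool", "raid": "raid6", "drive": "nlsas", "rpm": 7200},
]

def inconsistent_pools(mdisks: list[dict]) -> list[str]:
    seen: dict[str, tuple] = {}
    bad: set[str] = set()
    for m in mdisks:
        profile = (m["raid"], m["drive"], m["rpm"])
        if m["pool"] in seen and seen[m["pool"]] != profile:
            bad.add(m["pool"])
        seen.setdefault(m["pool"], profile)
    return sorted(bad)

print(inconsistent_pools(mdisks))  # ['mixed_pool']
```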

The Storwize V7000 system automatically reserves and assigns very small areas of storage, called quorum disk candidates, to be used exclusively for cluster management. A quorum disk holds important cluster information and breaks a tie when a SAN fault leaves exactly half of the former cluster nodes present, a situation that might otherwise lead to a corrupting split-brain condition. Only one quorum disk is active at a time; one of the other quorum disk candidates takes over if the active quorum fails.

The Storwize V7000 system includes flexible host-connectivity options with support for 8 Gb Fibre Channel, 1 Gb iSCSI, or 10 Gb iSCSI / FCoE connections. In addition, there is also a full array of advanced software features including:

• Seamless data migration
• Thin provisioning
• IBM Real-time Compression™
• Volume mirroring
• Global Mirror and Metro Mirror replication
• IBM FlashCopy® – 256 targets, cascaded, incremental, space efficient (thin provisioned)
• Integration with IBM Tivoli Productivity Center
• IBM System Storage® Easy Tier® – Provides a mechanism to seamlessly migrate hot spots to a higher performing storage pool within the IBM Storwize V7000 solution

SAN Volume Controller

SAN Volume Controller is designed to deliver the benefits of storage virtualization in environments of all sizes. SAN Volume Controller allows customers to manage all the storage in their IT infrastructure from a single point of control, and also to increase the utilization, flexibility, and availability of storage resources.

IBM SAN Volume Controller combines hardware and software into an integrated, modular solution that is highly scalable. The current nodes are each equipped with an Intel® Xeon® 5500 2.4 GHz quad-core processor, 24 GB of cache, and four 8 Gbps Fibre Channel ports.

IBM storage virtualization combined with IBM PowerVM virtualization © Copyright IBM Corporation, 2012


Figure 11: Two SAN Volume Controller nodes form an I/O group

The SAN Volume Controller provides all of the same features and flexibility as the Storwize V7000 system, with increased processing and throughput to handle even larger storage environments. The primary differences between SAN Volume Controller and the Storwize V7000 system include:

• SAN Volume Controller has no facility for managing internal disk. All managed disk will be in external storage arrays.

• SAN Volume Controller nodes are in two separate enclosures.

For further details, you can refer to SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, at the following website: ibm.com/redbooks/abstracts/sg247521.html?Open

Provisioning storage on the Storwize V7000 system or SAN Volume Controller

Through the intuitive web management GUI and powerful command-line interface (CLI), Storwize V7000 and SAN Volume Controller provide a simplified way to provision and manage a storage infrastructure. For readability, the following instructions mention only Storwize V7000, but the same instructions apply equally to SAN Volume Controller.

Creating managed disks

An external storage system must be present and properly zoned to the Storwize V7000 system before storage can be allocated for MDisks. After the zoning is completed and storage has been provisioned from external storage systems as necessary, an MDisk can be created.

Perform the following steps to create managed disks.

1. Navigate to the Physical Storage → External page and verify that the storage system is shown in the list.

2. Select the storage system and click Detect MDisks to scan for available MDisks.


Figure 12: Creating managed disks


Creating storage pools

Storwize V7000 storage pools contain one or more MDisks. MDisks that are grouped in a storage pool need to share the same physical characteristics. When a storage pool is created, adding one or more MDisks is optional.

To create a storage pool:

1. Navigate to the Physical Storage → Pools page and click New Pool. In the Create Storage Pool dialog box, enter a name for the new storage pool and select a custom icon.

Figure 13: Specify Storage pool name

2. Select the MDisk to include in the storage pool.

Figure 14: Assigning MDisks to the storage pool
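The detect and pool-creation steps can also be scripted with the cluster CLI. The following sketch assumes hypothetical MDisk and pool names and a 256 MB extent size; the commands are built as strings because they run in an ssh session on the cluster:

```shell
# Rescan for newly zoned back-end LUNs, list unmanaged MDisks,
# then group two of them into a pool (names are hypothetical).
DETECT="svctask detectmdisk"
LIST_UNMANAGED="svcinfo lsmdisk -filtervalue mode=unmanaged"
MAKE_POOL="svctask mkmdiskgrp -name Pool1_SAS_R5 -ext 256 -mdisk mdisk1:mdisk2"
printf '%s\n' "$DETECT" "$LIST_UNMANAGED" "$MAKE_POOL"
```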

Creating hosts

A host system is a server image that is connected to the Storwize V7000 system either through an FC adapter or an Ethernet interface. A host object is a logical object within the Storwize V7000 system that represents a list of WWPNs or a list of iSCSI names identifying the interfaces that the host system uses to communicate with Storwize V7000. A typical configuration is to have one host object for each host system that is attached to the Storwize V7000 system. Volumes must be mapped to each host before they are accessible to host systems.

To create hosts on the Storwize V7000 system:

1. Access the Storwize V7000 GUI with superuser authority. On the main GUI menu, double-click Hosts, then click New Host, and then click Fibre-Channel Host.

Figure 15: Creating a new host

2. Enter a name for the host being configured, and then select the WWPNs associated with that specific host from the Fibre-Channel Ports list. The list is populated with WWPNs that are zoned to the Storwize V7000 system; host systems must complete SAN zoning before their WWPNs appear here. If configuring for Live Partition Mobility, both WWPNs associated with the host’s NPIV setup must be included.


Figure 16: Assigning FC ports to host
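The host object can also be created on the CLI. A sketch with a hypothetical host name and WWPN pair follows (for an LPM client, both NPIV WWPNs of the virtual FC adapter go into the same host object); the command is built as a string because it runs on the cluster:

```shell
# Create a host object from a colon-separated WWPN list (values hypothetical).
MAKE_HOST="svctask mkhost -name isvp17 -fcwwpn C0507602A5DB0010:C0507602A5DB0011"
echo "$MAKE_HOST"
```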

Creating volumes

Volumes are made up of one or more volume copies. Each volume copy is an independent physical copy of the data that can be stored within the same or different storage pools. Volumes can be provisioned as generic (where the physical storage consumed equals the maximum volume size) or thin (where the physical storage consumed corresponds only to the data written so far).

Perform the following steps to create a volume.

1. Access the Storwize V7000 GUI with superuser authority. From the main GUI menu, navigate to the Volumes → All Volumes page and click New Volume. Select a preset volume type and storage pool, and then enter a volume name.

Note: By clicking the + sign button next to the Volume Name and Size fields, you can create multiple volumes very quickly.

Figure 17: Naming a new volume


2. After the volume is created, click Continue, and select the source system host name. You may want to make a note of the volume’s UID for future reference. After ensuring that the volume and host assignment are correct, click Apply.

Figure 18: Assigning new volume to the host

3. To create a volume mirror copy, navigate to the Volumes → Volumes by Pool page, right-click the volume that was just created and click Volume Copy Actions → Add Mirrored Copy. Select the target storage pool and click Add Copy.

Figure 19: Setting up a volume mirror

4. Repeat these steps to create all volumes needed to run on your hosts. For hosts that have Live Partition Mobility enabled, all disk volumes must reside on shared storage systems, such as those managed by Storwize V7000, and be accessed either through FC adapters assigned using NPIV or through virtual SCSI via the VIOS partitions.
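Steps 1 through 3 have CLI equivalents as well. The following sketch assumes the hypothetical pool, host, and volume names used for illustration; the commands are built as strings because they run in an ssh session on the cluster:

```shell
# Create a 200 GB generic volume, map it to the host, then add a
# mirrored copy in a second pool (all names hypothetical).
MAKE_VOL="svctask mkvdisk -mdiskgrp Pool1_SAS_R5 -size 200 -unit gb -name appA_data"
MAP_VOL="svctask mkvdiskhostmap -host isvp17 appA_data"
ADD_COPY="svctask addvdiskcopy -mdiskgrp Pool2_SAS_R5 appA_data"
printf '%s\n' "$MAKE_VOL" "$MAP_VOL" "$ADD_COPY"
```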

Thin-provisioned volumes

Although not required for setting up volumes, thin-provisioning functionality is a powerful and popular advantage of implementing the Storwize V7000 system or SAN Volume Controller in your environment.

The thin-provisioning feature consumes physical storage capacity only when data is written to the virtual disks, instead of dedicating physical capacity at the time of provisioning. This lets physical capacity be added as the business grows.

Because thin provisioning is implemented entirely on the Storwize V7000 system or SAN Volume Controller, without requiring any special configuration on the servers, using thin-provisioned volumes is transparent to systems and application administrators. If Live Partition Mobility is implemented, the partition accesses the same thin-provisioned volumes on the target system after a migration that it used on the source system.

In the proof of concept test environment, thin provisioning was used to create the volumes designated for application reporting. To implement thin-provisioned volumes, simply select Thin Provision instead of Generic during the volume creation process.

Figure 20: Selecting thin-provisioned volumes
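On the CLI, thin provisioning corresponds to the -rsize option of mkvdisk. A sketch follows, assuming a hypothetical 200 GB volume whose real (initially allocated) capacity is 2% and which grows automatically; the command is built as a string because it runs on the cluster:

```shell
# Thin-provisioned volume: virtual size 200 GB, real capacity 2%, autoexpand on.
MAKE_THIN="svctask mkvdisk -mdiskgrp Pool1_SAS_R5 -size 200 -unit gb -rsize 2% -autoexpand -name appA_reports"
echo "$MAKE_THIN"
```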

Real-time Compression

The Random Access Compression Engine (RACE) technology, first introduced in the IBM Real-time Compression Appliance™, is integrated into the SAN Volume Controller and Storwize V7000 software stack as of version 6.4. Implementing Real-time Compression in Storwize V7000 or SAN Volume Controller provides the following benefits:


• Compression for active primary data: IBM Real-time Compression can be used with active primary data, so it supports workloads that are not candidates for compression in other solutions. The solution supports online compression of existing data and allows storage administrators to reclaim free disk space in the existing storage system without forcing administrators and users to clean up or archive data. This significantly enhances the value of existing storage assets, the benefits to the business are immediate, and the capital expense of upgrading or expanding the storage system is delayed.

• Compression for replicated / mirrored data: Remote volume copies can be compressed in addition to the volumes at the primary storage tier. This reduces storage requirements in Metro Mirror and Global Mirror destination volumes as well.

• No changes to the existing environment are required: IBM Real-time Compression is an integral part of the storage system, and was designed with transparency in mind so that it can be implemented without changes to applications, hosts, networks, fabrics, or external storage systems.

• Overall savings in operational expenses and reduced rack space: More data can be stored in a given rack space, therefore fewer storage expansion enclosures are required to store a given data set.

• Reduced power and cooling requirements: More data is stored in a given system, requiring less power and cooling per gigabyte of used capacity.

• Reduced software licensing for additional functions in the system: More data stored per enclosure reduces the overall spending on licensing.

• Disk space savings are immediate: The space reduction occurs immediately when the host writes the data, unlike other compression solutions in which some or all of the reduction is realized only after a post-process compression batch job is executed.

To implement compressed volumes, simply select Compressed instead of Generic during the volume creation process, or while creating a volume, select Advanced → Capacity Management → Compressed.


Figure 21: Selecting compressed volumes
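On the CLI, a compressed volume is a thin-provisioned volume created with the -compressed flag (version 6.4 or later). The names and sizes in this sketch are hypothetical, and the command is built as a string because it runs on the cluster:

```shell
# Compressed volume: requires -rsize; data is compressed as it is written.
MAKE_COMP="svctask mkvdisk -mdiskgrp Pool1_SAS_R5 -size 200 -unit gb -rsize 2% -autoexpand -compressed -name appB_reports"
echo "$MAKE_COMP"
```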

You can find additional information on IBM Real-time Compression at: ibm.com/redbooks/redpieces/pdfs/redp4859.pdf

IBM Easy Tier function

Another advanced capability of Storwize V7000 and SAN Volume Controller is the Easy Tier function. When SSDs are configured as part of the Storwize V7000 solution, Easy Tier provides a mechanism to seamlessly migrate SAS-drive hot spots to the high performance SSD storage pool to optimize use of this premium resource.

The Easy Tier function operates independently of AIX and, therefore, no operating system, application tuning, or policy setting is necessary to implement it.

The Easy Tier function is included with the IBM Storwize V7000 system and is available with a separately purchased license for the SAN Volume Controller. If this capability is to be used to optimize the utilization of SSDs, you must enable this function. On Storwize V7000, Easy Tier will be enabled by default for all volumes in a storage pool when adding SSD tier MDisks to that storage pool.

The detailed properties screen for one of the volumes in the proof of concept test (refer to Figure 22) shows that Easy Tier is active for the Pool1_SAS_R5 storage pool and that 1.3 GB of the 200 GB pool is SSD tier storage.


Figure 22: Detailed properties of an Easy Tier volume

Easy Tier has now been enabled and Automatic Data Placement Mode is active for that volume. Extents are automatically migrated to, or from, high performance disk tiers, and the statistic summary collection is active. You can download the statistics log file to analyze the number of extents that have been migrated, and to monitor if it makes sense to add more SSDs to the multitiered storage pool.

On the main GUI menu, click Settings → Support → Show Full Log Listing and filter the output to show files with the string heat in the name. Click Actions → Download to download the heat-log file to your workstation.

Figure 23: STAT tool heat file

To analyze the heat file, you can download the IBM Storage Tier Advisor Tool (STAT), at the following address: ibm.com/support/docview.wss?uid=ssg1S4000935

After you install STAT, run the stat command with the file name of the heat file as an argument. This places an HTML file in the same directory as the STAT executable. To view the report, open the HTML file in your browser.


Figure 24: Easy Tier STAT report

Storwize V7000 Unified system

While the Storwize V7000 system meets enterprise block storage virtualization needs, the Storwize V7000 Unified system combines block and file storage requirements into a single system for simplicity and efficiency. A single management GUI for managing both block and file storage streamlines administration.

The Storwize V7000 Unified system adds two file modules in a clustered system that provide file systems for use as network-attached storage, supporting protocols such as Network File System (NFS), Common Internet File System (CIFS), Hypertext Transfer Protocol (HTTP), and File Transfer Protocol (FTP). Storwize V7000 block storage provides the file modules with volumes.

Another feature provided with the Storwize V7000 Unified system is IBM Active Cloud Engine™ which provides the following functionality:

• Reduces costs through policy-based management of files through use of tiered storage, and improves data governance

• Automates movement of less frequently used files to lower cost tiers of storage, including tape in an IBM Tivoli Storage Manager system

• Automates deletion of unwanted or expired files

You can find additional information on IBM Storwize V7000 Unified implementation at: ibm.com/redbooks/redpieces/pdfs/sg248010.pdf

Live Partition Mobility

Before initiating the migration of a partition, the HMC verifies the capability and compatibility of the source and destination servers, and the characteristics of the mobile partition, to determine whether a migration is possible.


A partition migration operation can occur either when a partition is powered off (inactive), or when a partition is up and providing service (active). Although an active migration requires careful planning and configuration, the ability to complete the migration without shutting down applications, disrupting users, or risking a manual shutdown likely outweighs any inconvenience. The configuration steps provided in this paper are for the more complicated active migration.

The following preparatory tasks may be omitted when configuring for inactive partition mobility.

• Enabling the mover service partition on the source and destination VIOS is not required.
• Synchronizing the time-of-day clocks is not required.
• Resource Monitoring and Control (RMC) connections are not required.
• The mobile partition may have dedicated I/O; these dedicated I/O devices are removed automatically from the partition before the migration occurs.
• Barrier-synchronization registers may be used in the mobile partition.
• The mobile partition may use huge pages.
• The applications need not be migration-aware or migration-safe.

Preparing the environment for active LPM

Perform the following steps to prepare the HMC, Power Systems, VIOS, and mobile partitions for active migration using LPM.

1. Check that the HMC is at Version 7 Release 3.2.0 or later with required fixes. If you do not have this level, upgrade the HMC to the correct level.

2. PowerVM Enterprise Edition must be licensed on both the source and destination Power systems. To verify this, make sure that Active Partition Mobility Capable is set to True on the Capabilities tab of the Properties screen in the HMC GUI.


Figure 25: The licensed capabilities panel in the HMC GUI

3. Both source and destination systems must be at firmware level 01Ex320 or later, where x depends on the type of server. Refer to the migration matrix for more information about migration between different firmware levels at: http://www14.software.ibm.com/webapp/set2/sas/f/pm/migrate.html

4. Ensure that the logical memory block size is the same on the source and destination systems. In the HMC navigation pane, select Systems Management, and then select Servers. Select the system you want to check and click Tasks → Operations → Launch Advanced System Management (ASM). Log in to the Advanced System Management Interface (ASMI) window as admin, select Performance Setup, and then select Logical Memory Block Size. The logical memory block size, as shown in Figure 26, is displayed.


Figure 26: Checking the logical memory block size with ASMI

5. Enable the mover service partitions on both the source and destination VIOS, which allows them to use their Virtual Asynchronous Services Interface (VASI) adapters to communicate with the IBM POWER Hypervisor. Also, enable time-of-day synchronization. Using the HMC, go to the Properties menu for the VIOS partition, and on the General tab, make sure that the Mover service partition check box is selected. Then, on the Settings tab, set Time reference to Enabled.

Figure 27: Enabling mover service partition


Figure 28: Synchronizing the time-of-day clocks

6. Ensure that the RMC connections are established. From a root command prompt on the mobility partition, enter the following command:

# lsrsrc -a b IBM.MCP |grep Status
Status = 1

If Status = 1, then RMC connections are established. Otherwise, refer to IBM PowerVM Live Partition Mobility, section 3.8.2 on IBM Redbooks at: ibm.com/redbooks/redbooks/pdfs/sg247460.pdf

Note: For AIX 6.1 or earlier versions, use the command lsrsrc IBM.ManagementServer |grep ManagerType and check that ManagerType="HMC".

7. Ensure that the mobile partition is not using a virtual serial adapter in slots higher than slot 1. For a logical partition to participate in a partition migration, it cannot have any required virtual serial adapters, except for the two reserved for the HMC. Using the HMC, go to the Properties menu for the mobility partition, and on the Virtual Adapters tab, make sure there are only two serial adapters specified in slots 0 and 1.


Figure 29: Verifying the number of serial adapters on the mobile partition

8. Also, while on this screen, make a note of the connecting adapter for the client SCSI adapter. You need to confirm that the VIOS is correctly referencing this adapter in later steps.

9. Ensure that the mobile partition is not using barrier-synchronization register (BSR) arrays or huge pages. Using the HMC, on the Configuration menu for the mobility partition, click Manage → Profiles, and on the Memory tab of the active profile, make sure that the inputs for BSR arrays and huge pages are set to 0, as shown in Figure 30.


Figure 30: Verifying that the BSR arrays and huge page memory are set to zero

10. Ensure that the applications running in the mobile partition are mobility-safe or mobility-aware. Most applications and databases will be mobility-safe. This is discussed in more detail later in this paper.

Note: If you changed any partition profile attributes, you need to shut down the partition and activate the new profile so that the new values take effect.

LPAR disk storage configuration

This section describes configuring the Storwize V7000 volumes into enhanced concurrent volume groups that can be accessed by an LPM migrating client from either the source or target system. If you are using virtual SCSI to access the volumes, the volumes must be zoned and masked on the SAN to the VIOS partitions on each system.

The client in the proof of concept test accessed the application volumes through NPIV. With NPIV, the volumes are zoned and masked on the SAN to the client LPAR. After assigning the volumes to the client LPAR, skip the VIOS partition configuration steps shown here and go to the steps in the “Creating volume groups, logical volumes, and application file systems” section.


Configuring volumes with virtual SCSI through the VIOS partitions

As shared storage has been configured in the form of Storwize V7000 volumes, you now need to:

• Verify that the VIOS partitions recognize the new devices. • Verify or set required attributes on the new VIOS hdisks. • Assign the new VIOS hdisks to both nodes (client LPARs).

To verify that the VIOS partition recognizes the new devices:

1. Log in as padmin on the VIOS partition.
2. Enter the cfgdev command to detect new devices.
3. Enter the lsdev command and look for new hdisk devices corresponding to the new Storwize V7000 volumes.

$ lspv
NAME            PVID                VG              STATUS
hdisk0          00f65d5193c4e342    rootvg          active
$ lsdev |grep hdisk
hdisk0          Available   MPIO FC 2145
$ cfgdev
$ lspv
NAME            PVID                VG              STATUS
hdisk0          00f65d5193c4e342    rootvg          active
hdisk1          none                None
$ lsdev |grep hdisk
hdisk0          Available   MPIO FC 2145
hdisk1          Available   MPIO FC 2145

To verify or set the required attributes on the new VIOS hdisks:

1. Set the reserve_policy attribute on the new disk to no_reserve. Use the lsdev command to check the attribute, and chdev command to set the attribute, as necessary.

$ lsdev -dev hdisk1 -attr |grep reserve_policy
reserve_policy  single_path  Reserve Policy  True
$ chdev -dev hdisk1 -attr reserve_policy=no_reserve -perm
hdisk1 changed
$ lsdev -dev hdisk1 -attr |grep reserve_policy
reserve_policy  no_reserve   Reserve Policy  True

2. Use the chkdev command to verify that there is a unique identifier set for the disk. You should recognize the UID as part of the Storwize V7000 ID.

$ chkdev -dev hdisk6
NAME:               hdisk6
IDENTIFIER:         332136005076802838002800000000000005804214503IBMfcp
PHYS2VIRT_CAPABLE:  YES
VIRT2NPIV_CAPABLE:  NA
VIRT2PHYS_CAPABLE:  NA

3. Use the lspv command to check that the new disk has a PVID. If it does not, create it using the chdev command.

$ lspv
NAME            PVID                VG              STATUS
hdisk0          00f65d5193c4e342    rootvg          active
hdisk1          none                None
$ chdev -dev hdisk1 -attr pv=yes -perm
hdisk1 changed
$ lspv
NAME            PVID                VG              STATUS
hdisk0          00f65d5193c4e342    rootvg          active
hdisk1          00f65d5168c1bea1    None

4. Repeat steps 1 through 3 for each Storwize V7000 virtual SCSI volume.

To assign the new VIOS hdisks to a client LPAR:

1. Use the mkvdev command to assign the hdisks to the vhost adapter corresponding to the client LPAR.

2. Use the lsmap command to verify that it is assigned correctly.

$ mkvdev -f -vdev hdisk6 -vadapter vhost0 -dev vscsi_root17
vscsi_root17 Available
$ lsmap -vadapter vhost0
SVSA            Physloc                                        Client Partition ID
--------------- ---------------------------------------------- -------------------
vhost0          U8233.E8B.065D51P-V6-C2                        0x00000000

VTD             vscsi_root17
Status          Available
LUN             0x8100000000000000
Backing device  hdisk1
Physloc         U78A0.001.DNWK129-P1-C1-T1-W50050768021000F6-L0
Mirrored        false

3. Repeat steps 1 and 2 for each Storwize V7000 virtual SCSI volume. Make sure to use the PVID (output in the lspv command) to verify that you are working with the correct volume.

Creating volume groups, logical volumes, and application file systems

Use the following steps to recognize the virtual SCSI and NPIV volumes that have been assigned to the client LPAR from the Storwize V7000.

1. Log in to the client LPAR with root authority.
2. Use the cfgmgr, lspv, and lsattr commands to recognize the new disks. The hdisk devices will have been assigned either as virtual SCSI through the VIOS partition or as virtual Fibre Channel (NPIV). The unique_id parameter matches the volume ID from the Storwize V7000 volume. If an hdisk has its PVID set to none, you can assign a PVID using the chdev command.

isvp17> cfgmgr
isvp17> lspv
hdisk0          00f65d51a5aa3cf1    rootvg          active
hdisk1          00f65d5163f9e469    swap_vg         active
hdisk2          none                None
hdisk3          none                None
hdisk4          none                None
hdisk5          none                None
hdisk6          none                None
hdisk7          none                None


isvp17> lsattr -El hdisk2 |grep unique_id
unique_id  332136005076802838002800000000000004F04214503IBMfcp  Device Unique Identification  False
isvp17> chdev -l hdisk2 -a pv=yes
hdisk2 changed
isvp17> lspv
hdisk0          00f65d51a5aa3cf1    rootvg          active
hdisk1          00f65d5163f9e469    swap_vg         active
hdisk2          00f65d5164637057    None
hdisk3          none                None
hdisk4          none                None
hdisk5          none                None
hdisk6          none                None
hdisk7          none                None

The new Storwize V7000 volumes are now available to the client LPAR operating system to be used as raw volumes, or to be configured into AIX Logical Volume Manager volume groups, logical volumes, and file systems.

Setting up the mobile partition boot disk on the new shared hdisk

Live Partition Mobility requires that all mobile client LPARs use virtual SCSI or virtual FC to access all disk storage, including the boot device and root volume group. If you are modifying an existing LPAR that has a local disk assigned for the boot device, relocate the boot device to a Storwize V7000 volume that you assigned in the earlier steps.

1. Use the alt_disk_copy -d hdiskX command to copy the rootvg boot disk to your new hdisk on the Storwize V7000. The command might take several minutes, and it sets up the new hdisk as the primary boot device. After running the command, use the bootlist command to check whether the new hdisk is set up as the boot disk, and then reboot.

isvp17> lspv
hdisk0          00f65d51a5aa3cf1    rootvg            active
hdisk1          00f65d5163f9e469    swap_vg           active
hdisk2          00f65d5164637057    appA_data_vg
hdisk3          00f65d5168101461    appA_logs_vg
hdisk4          00f65d51681be024    appA_reports_vg
hdisk5          00f65d5267e657e9    appB_data_vg
hdisk6          00f65d5268115139    appB_logs_vg
hdisk7          00f65d52681ccf44    appB_reports_vg
hdisk8          00f65d51786c3d5e    caavg_private     active
hdisk9          00f65d518844ae62    orabinA_vg
hdisk10         00f65d518844d6c3    orabinB_vg
isvp17> cfgmgr
isvp17> lspv
hdisk0          00f65d51a5aa3cf1    rootvg            active
hdisk1          00f65d5163f9e469    swap_vg           active
hdisk2          00f65d5164637057    appA_data_vg
hdisk3          00f65d5168101461    appA_logs_vg
hdisk4          00f65d51681be024    appA_reports_vg
hdisk5          00f65d5267e657e9    appB_data_vg
hdisk6          00f65d5268115139    appB_logs_vg
hdisk7          00f65d52681ccf44    appB_reports_vg
hdisk8          00f65d51786c3d5e    caavg_private     active
hdisk9          00f65d518844ae62    orabinA_vg
hdisk10         00f65d518844d6c3    orabinB_vg
hdisk11         00f65d51c105a447    None
isvp17> bootlist -o -m normal
hdisk0 blv=hd5 pathid=0
isvp17> alt_disk_copy -d hdisk11
Calling mkszfile to create new /image.data file.
Checking disk sizes.
Creating cloned rootvg volume group and associated logical volumes.
Creating logical volume alt_hd5
Creating logical volume alt_hd6
:
:
forced unmount of /alt_inst/admin
forced unmount of /alt_inst
forced unmount of /alt_inst
Changing logical volume names in volume group descriptor area.
Fixing LV control blocks...
Fixing file system superblocks...
Bootlist is set to the boot disk: hdisk11 blv=hd5
isvp17> bootlist -o -m normal
hdisk11 blv=hd5 pathid=0
isvp17> lspv
hdisk0          00f65d51a5aa3cf1    rootvg            active
hdisk1          00f65d5163f9e469    swap_vg           active
hdisk2          00f65d5164637057    appA_data_vg
hdisk3          00f65d5168101461    appA_logs_vg
hdisk4          00f65d51681be024    appA_reports_vg
hdisk5          00f65d5267e657e9    appB_data_vg
hdisk6          00f65d5268115139    appB_logs_vg
hdisk7          00f65d52681ccf44    appB_reports_vg
hdisk8          00f65d51786c3d5e    caavg_private     active
hdisk9          00f65d518844ae62    orabinA_vg
hdisk10         00f65d518844d6c3    orabinB_vg
hdisk11         00f65d51c105a447    altinst_rootvg
# shutdown -Fr

2. After the reboot is complete, use the lspv command to verify that the new hdisk is set up as the active rootvg.

# lspv
hdisk0          00f65d51a5aa3cf1    old_rootvg
hdisk1          00f65d5163f9e469    swap_vg           active
hdisk2          00f65d5164637057    appA_data_vg
hdisk3          00f65d5168101461    appA_logs_vg
hdisk4          00f65d51681be024    appA_reports_vg
hdisk5          00f65d5267e657e9    appB_data_vg
hdisk6          00f65d5268115139    appB_logs_vg
hdisk7          00f65d52681ccf44    appB_reports_vg
hdisk8          00f65d51786c3d5e    caavg_private     active
hdisk9          00f65d518844ae62    orabinA_vg
hdisk10         00f65d518844d6c3    orabinB_vg
hdisk11         00f65d51c105a447    rootvg            active

3. After you are certain that you do not need to go back to the old boot device, you can remove it using the alt_rootvg_op and rmdev commands.

# alt_rootvg_op -X old_rootvg
# rmdev -dl hdisk0

4. If the old boot disk was virtualized through the VIOS and assigned to a mobile partition, you need to unassign it through the HMC virtual storage management screens, or by using the rmdev command from the VIOS. You cannot migrate the mobile partition with an hdisk assigned through a VIOS storage pool.


Partition migration

The following steps are for an active LPAR migration, where the mobile partition is up and running. To perform an inactive migration, execute the same steps with the mobile partition shut down, in a Not Running state.

To start the active migration, in the HMC GUI navigation pane, click Systems Management, then click Servers, and then select the mobile partition. Click the Operations menu, and then click Mobility → Migrate.

Figure 31: Migration wizard start menu

Navigate through the wizard menus. Select the destination system name from the list on the Destination page.


Figure 32: Select the migration destination system

The validation step performs several checks to ensure that all LPM prerequisites have been met. All validation errors need to be corrected before the migration can start.

If no errors are found during validation, continue to navigate through the wizard, and click Finish on the Summary page to begin the active partition migration.


Figure 33: Migration summary

An active partition migration might take several minutes depending on the amount of cache data to be transferred and the distance between systems.

If there is a problem with the migration that was not detected in the validation step, you might be instructed to recover the mobile partition on the source Power system. This will restore the active partition to the same state that it was in before migration started.

To start the partition recovery, in the HMC GUI navigation pane, click Systems Management, then click Servers, and then select the mobile partition on the source system. Click the Operations menu, and then click Mobility → Recover.


Figure 34: Migration recover

When a successful migration is complete, you will notice that the LPAR is no longer listed on the source system, but is listed as running on the destination system.

Figure 35: Migration success
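The validation, migration, and recovery steps can also be driven from the HMC command line with migrlpar. A sketch follows with hypothetical managed-system and partition names; the commands are built as strings because they run in an ssh session on the HMC:

```shell
# -o v validates only, -o m migrates, -o r recovers a failed migration.
VALIDATE="migrlpar -o v -m Power750_A -t Power750_B -p isvp17"
MIGRATE="migrlpar -o m -m Power750_A -t Power750_B -p isvp17"
RECOVER="migrlpar -o r -m Power750_A -p isvp17"
printf '%s\n' "$VALIDATE" "$MIGRATE" "$RECOVER"
```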

Proof of concept test

The proof of concept solution is intended to illustrate the increased flexibility and ease of implementation when using Storwize V7000 system storage to set up an active Live Partition Mobility migration.

Components

• Two IBM Power 750 servers configured with dual VIOS partitions, with each server having equal operating resources available to facilitate active partition migration
• A client LPAR configured with IBM AIX 7.1 TL1 SP3
• IBM PowerVM Standard Edition with VIOS 2.2.0.11 configured on each VIOS partition
• SDDPCM for IBM Storwize V7000 2.6.1.0 configured on the client LPAR
• An IBM Storwize V7000 system version 6.3.0.0 with 300 GB 10K drives and 300 GB SSDs

IBM storage virtualization combined with IBM PowerVM virtualization © Copyright IBM Corporation, 2012

41

• IBM HMC V7R7.3.0 • Gb Ethernet LAN connectivity between all partitions with adequate bandwidth to perform active

partition migration • Fibre Channel SAN using two IBM 2498-B24 Fibre Channel switches

A test application using JFS2 file systems was configured and started on the mobile partition. The partition was migrated to the second Power system, and the application was verified to be up and operating normally, accessible through the same IP address used before the migration. The partition was then migrated back to the original Power system and the application was again verified to be operating normally.
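This kind of post-migration verification can be scripted. The sketch below shows one way to confirm that the application's file system is still accessible and that the partition still answers at its original IP address; the mount point and address are assumptions for illustration, not values from the test environment.

```shell
# Post-migration sanity checks. Mount point and IP are illustrative defaults
# that can be overridden from the environment.
APP_FS="${APP_FS:-/}"            # assumed mount point of the application's JFS2 file system
APP_IP="${APP_IP:-127.0.0.1}"    # assumed IP address of the mobile partition

# Report whether the given path resides on a mounted file system.
check_fs() {
    if df -P "$1" >/dev/null 2>&1; then echo "mounted"; else echo "missing"; fi
}

# Report whether the host answers a single ICMP echo request.
check_ip() {
    if ping -c 1 -W 2 "$1" >/dev/null 2>&1; then echo "reachable"; else echo "unreachable"; fi
}

echo "file system $APP_FS: $(check_fs "$APP_FS")"
echo "address $APP_IP: $(check_ip "$APP_IP")"
```

In practice such checks would target the application's actual mount points and service address, and could be run before and after each migration to confirm the partition's state is unchanged.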

Figure 36: The LPM active partition migration proof of concept test environment


Summary

IBM virtualization products enable businesses to consolidate server and storage resources, and provide a more efficient and dynamic IT infrastructure that addresses stringent SLAs.

IBM storage products, Storwize V7000 and SAN Volume Controller, provide the following important features to virtualize storage.

• External storage virtualization allows pools of storage to be configured across physically separate disk systems from different vendors.

• Thin provisioning and Real-time Compression are technologies that help provision storage more efficiently.

• Easy Tier automatically makes efficient use of high-performing, more expensive SSD technology.

PowerVM virtualization features, including virtual Fibre Channel and Ethernet adapters, provide redundancy in the high-availability environment, while efficiently sharing adapters. These virtualized adapters provide the interface to virtualized storage through the Storwize V7000 system or SAN Volume Controller.

Creating disk volumes and assigning them to AIX LPARs is simplified with the use of the Storwize V7000 GUI and NPIV technology.

LPM provides the ability to move an active LPAR to another Power system, and Storwize V7000 can move active storage volumes to different storage pools without disruption. Together, these capabilities give the system administrator powerful tools for upgrading software and firmware, changing performance characteristics of servers and storage, and avoiding outages for planned maintenance.

IBM storage virtualization products and IBM PowerVM are powerful tools in providing a fully virtualized computing platform that offers the degree of system and storage infrastructure flexibility that is required by today’s production data centers and cloud environments.


Resources

The following websites provide useful references to supplement the information contained in this paper:

• System Storage on IBM PartnerWorld® ibm.com/partnerworld/systems/storage

• IBM Publications Center www.elink.ibmlink.ibm.com/public/applications/publications/cgibin/pbi.cgi?CTY=US

• Power Systems InfoCenter http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/index.jsp?topic=/iphat/iphbllparwithhmcp6.htm

• IBM developerWorks® ibm.com/developerWorks

• IBM Redbooks ibm.com/redbooks

• IBM PowerVM Live Partition Mobility ibm.com/redbooks/redbooks/pdfs/sg247460.pdf

• IBM Storwize V7000 Unified implementation ibm.com/redbooks/redpieces/pdfs/sg248010.pdf

• IBM Real-time Compression ibm.com/redbooks/redpieces/pdfs/redp4859.pdf

• SAN Volume Controller Best Practices and Performance Guidelines ibm.com/redbooks/abstracts/sg247521.html?Open

• IBM PowerVM Best Practices ibm.com/redbooks/redpieces/pdfs/sg248062.pdf

• SDDPCM users guide ibm.com/support/docview.wss?uid=ssg1S7000303&aid=1

• IBM PowerVM Virtualization Introduction and Configuration ibm.com/redbooks/redbooks/pdfs/sg247940.pdf

• PowerVM Migrating from Physical to Virtual Storage ibm.com/redbooks/redbooks/pdfs/sg247825.pdf

• Overview of the IBM Storwize V7000 ibm.com/redbooks/redpieces/pdfs/sg247938.pdf


Trademarks and special notices

© Copyright IBM Corporation 2012.

References in this document to IBM products or services do not imply that IBM intends to make them available in every country.

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Intel, Intel Inside (logos), MMX, and Pentium are trademarks of Intel Corporation in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

Information is provided "AS IS" without warranty of any kind.

All customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.

Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages. IBM has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the supplier of those products.

All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Contact your local IBM office or IBM authorized reseller for the full text of the specific Statement of Direction.

Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function or delivery schedules with respect to any future products. Such commitments are only made in IBM product announcements. The information is presented here to communicate IBM's current investment and development activities as a good faith effort to help with our customers' future planning.

Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here.

Photographs shown are of engineering prototypes. Changes may be incorporated in production models.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.