
EMC® RecoverPoint
Deploying RecoverPoint with AIX Hosts
Technical Notes
P/N 300-004-905
Rev A01
January 21, 2009

These technical notes describe required procedures and best practices when deploying RecoverPoint with AIX hosts:

◆ Introduction
◆ Environmental prerequisites
◆ Configuring consistency groups
◆ Working with AIX virtual I/O
◆ Working with HACMP


Introduction

The EMC RecoverPoint system provides full support for data replication and disaster recovery with AIX-based host servers.

This document presents information and best practices relating to deploying the RecoverPoint system with AIX hosts.

RecoverPoint can support AIX hosts with host-based, fabric-based, and array-based splitters. For all splitter types, the outline of the installation tasks is similar to installation with any other host, but the specific procedures differ slightly. The workflow is:

1. Creating and presenting volumes to their respective hosts at each location; LUN masking and zoning host initiators to storage targets (not covered in this document)

2. Creating the file system or setting raw access on the host.

3. Configuring RecoverPoint to replicate volumes (LUNs): configuring volumes, replication sets, and consistency group policies; attaching to splitters; and first-time initialization (full synchronization) of the volumes.

4. Validating failover and failback

Scope The document is intended primarily for system administrators who are responsible for implementing their companies’ disaster recovery plans. Technical knowledge of AIX is assumed.

AIX and replicating boot-from-SAN volumes

RecoverPoint does not support replicating AIX boot-from-SAN volumes when using host-based splitters. When using fabric-based or array-based splitters, boot-from-SAN volumes are replicated the same way as data volumes.

Related documents EMC RecoverPoint Administrator’s Guide

EMC RecoverPoint CLI Reference Guide

EMC RecoverPoint Installation Guide

EMC Deploying RecoverPoint with SANTap Technical Notes


Supported configurations

Consult the EMC Support Matrix for RecoverPoint for information about supported RecoverPoint configurations, operating systems, cluster software, Fibre Channel switches, storage arrays, and storage operating systems.


Environmental prerequisites

This document assumes you have already installed a RecoverPoint system and are either replicating volumes or ready to replicate. In other words, it is assumed that the RecoverPoint ISO image is installed on each RecoverPoint appliance (RPA); that initial configuration, zoning, and LUN masking are completed; and that the license is activated. In addition, it is assumed that AIX hosts with all necessary patches are installed both at the production side and the replica side. In the advanced procedures (LPAR-VIO, HACMP, etc.), it is assumed that the appropriate hardware and environment are set up.

Ensuring fast_fail mode for AIX-based hosts

For AIX hosts using host-based splitters, the fc_err_recov attribute of each FC SCSI I/O Controller Protocol Device (fscsiN) serving replicated devices must be set to fast_fail. Although this setting is not mandatory for fabric-based or array-based splitters, it is required by some multipathing software; check your multipathing software's user manual.

To check the current settings for the attribute, run:

# lsattr -El <FC_SCSI_I/O_Controller_Protocol_Device>

For example:

# lsattr -El fscsi0

Check the output for the current value of fc_err_recov.

If it is necessary to change fc_err_recov for a device, run:

# chdev -l <FC_SCSI_I/O_Controller_Protocol_Device> -a fc_err_recov=fast_fail -P

Make the change for all relevant devices, then reboot the host machine; the change takes effect only after the reboot.
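To apply the setting across several adapters at once, the chdev call can be wrapped in a loop. A minimal sketch, shown as a dry run with hypothetical adapter names (fscsi0, fscsi1); removing the echo prefix would apply the change on a real AIX host:

```shell
# Dry run: print the chdev command for each FC SCSI controller.
# Adapter names are hypothetical; substitute the fscsi instances
# reported by lsdev on your host.
for dev in fscsi0 fscsi1; do
  echo chdev -l "$dev" -a fc_err_recov=fast_fail -P
done
```

The -P flag stores the change in the ODM only, so it takes effect at the next reboot, as described above.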

PowerPath device mapping

If you need to know the relationship between PowerPath logical devices (as used by the Logical Volume Manager) and the hdisks (as seen by the host), use the following command in PowerPath:

# powermt display dev=<powerpath_device_#>

Example:

# powermt display dev=0


The output is in the following format:

Pseudo name=hdiskpower0
Symmetrix ID=000190300519
Logical device ID=011D
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
---------------- Host ----------------  - Stor -  -- I/O Path -  -- Stats ---
###  HW Path   I/O Paths    Interf.   Mode     State   Q-IOs  Errors
==============================================================================
   1 fscsi1    hdisk15      FA 16cA   active   alive       0       2
   1 fscsi1    hdisk21      FA  2cA   active   alive       0       2
   0 fscsi0    hdisk3       FA 16cA   active   alive       0       2
   0 fscsi0    hdisk9       FA  2cA   active   alive       0       2

In this example, hdiskpower0 consists of four other disks: hdisk15, hdisk21, hdisk3, and hdisk9.
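The member hdisks can also be extracted from this output mechanically. A sketch using awk to filter the I/O path rows; the here-document replays the sample output above, and on a live host the same filter could be fed by powermt display dev=0 directly:

```shell
# Print the hdisk member of each I/O path row (third whitespace field).
awk '$3 ~ /^hdisk/ {print $3}' <<'EOF'
   1 fscsi1    hdisk15      FA 16cA   active   alive       0       2
   1 fscsi1    hdisk21      FA  2cA   active   alive       0       2
   0 fscsi0    hdisk3       FA 16cA   active   alive       0       2
   0 fscsi0    hdisk9       FA  2cA   active   alive       0       2
EOF
```

For the sample above, this prints hdisk15, hdisk21, hdisk3, and hdisk9, one per line.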

AIX and SCSI reservation

By default, AIX hosts use SCSI reservation. Whether RecoverPoint can support SCSI reservation depends on the SCSI reservation type (SCSI-2 or SCSI-3) and the code level of the storage array. If RecoverPoint cannot support the SCSI reservation, it must be disabled on the AIX host.

SCSI-3 reservation is supported if the consistency group’s Reservation support is enabled in RecoverPoint.

SCSI-2 reservation is supported with host-based splitters according to Table 1 below.


For third-party storage arrays, consult EMC’s Support Matrix or EMC Customer Service.

The procedure for disabling SCSI reservation on AIX hosts follows.

Disabling SCSI reservations on an AIX host

A RecoverPoint appliance (RPA) cannot access LUNs that have been reserved with SCSI-2 reservations. For the RPA to be able to use those LUNs during replication, AIX SCSI-2 reservation on those LUNs must be disabled; that is, the AIX disk attribute reserve_policy must be set to no_reserve. For more information on the reserve_policy attribute, search for reserve_policy at www.ibm.com. On some AIX systems, reserve_lock = no is used instead of reserve_policy = no_reserve.

1. To check if reservations are enabled for a storage device, run:

# lsattr -El <disk_name>

Check for the reserve_policy. On some AIX systems, reserve_lock is used instead of reserve_policy.

2. To disable AIX reservation, change the value of the reserve_policy as follows:

# chdev -l <disk_name> -a reserve_policy=no_reserve

Table 1  RecoverPoint SCSI-2 support with AIX host-based splitters

CLARiiON
  With PowerPath: stand-alone host, OK; host in cluster, OK
  Without PowerPath: stand-alone host, disable reservations at host; host in cluster, only for fabric-based or array-based splitters

Symmetrix 5772 or later
  With PowerPath: stand-alone host, OK; host in cluster, OK
  Without PowerPath: stand-alone host, OK; host in cluster, OK

Symmetrix 5771 or earlier
  With PowerPath: stand-alone host, disable reservations at host; host in cluster, only for fabric-based or array-based splitters
  Without PowerPath: stand-alone host, disable reservations at host; host in cluster, only for fabric-based or array-based splitters


If chdev returns device_busy, use one of the following workarounds:

• Close all applications using the disk. Then run:

# sync
# umount <mount_point>
# varyoffvg <volume_group>

Then run chdev again as before. Then reactivate the volume group and remount the file system:

# varyonvg <volume_group>
# mount /dev/<logical_volume_name> /<mount_point>

Restart applications.

• Use the same command with the addition of the -P flag:

# chdev -l hdisk1 -a reserve_policy=no_reserve -P

When the -P flag is used, SCSI reservations are disabled at the next reboot; the host must therefore be rebooted before RecoverPoint can be used.
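Steps 1 and 2 can be combined into a single check. A sketch, assuming lsattr prints the attribute in its usual "name value description user-settable" column format; the sample line stands in for the output of lsattr -El hdisk1, and its value (single_path) is hypothetical:

```shell
# Decide from the reserve attribute whether chdev is still needed.
attr_line="reserve_policy single_path Reserve Policy True"  # stand-in for lsattr output
set -- $attr_line
case "$1=$2" in
  reserve_policy=no_reserve|reserve_lock=no)
    echo "reservations already disabled" ;;
  *)
    echo "run: chdev -l <disk_name> -a reserve_policy=no_reserve -P" ;;
esac
```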


Configuring RecoverPoint

To use RecoverPoint with AIX hosts, the following tasks must be completed:

◆ Installing RecoverPoint appliances and configuring the RecoverPoint system

◆ Installing splitters on AIX servers

◆ Required device adjustments when using fabric-based splitters

◆ Configuring consistency groups for the AIX servers

◆ Performing first-time failover

Installing and configuring RecoverPoint appliances

To install RecoverPoint appliances and configure the RecoverPoint system, refer to the EMC RecoverPoint Installation Guide for your version of RecoverPoint.

Installing splitters on AIX servers

To install splitters on AIX servers, refer to the EMC RecoverPoint Installation Guide for your version of RecoverPoint.

Required adjustments when using fabric-based splitters with AIX servers

When using the AIX operating system, adjustments are required to the RecoverPoint system because of the way AIX uses Fibre Channel IDs (FC IDs) and Physical Volume Identifiers (PVIDs).

FC ID The AIX operating system uses the Fibre Channel identifier (FC ID) assigned by the fabric switch as part of the device path to the storage target. Fabric-based splitters rely on changing the FC ID to reroute I/Os. When attaching a volume to a fabric-based splitter, its FC ID may change. As a result, the AIX operating system may not recognize a volume that it accessed previously.

When using Brocade switches, to allow AIX hosts to identify the volume with the new FC ID, remove the storage devices (using rmdev -d) and rediscover them (using cfgmgr). When using SANTap switches, manually set the FC ID to be persistent.


Brocade SAS-based splitter:

a. After a volume is attached to a Brocade SAS-based splitter, remove the volume from the host:

# rmdev -dl <volume>

b. Rediscover the volume, allowing the hosts to reread the FC ID of the volume:

# cfgmgr

This procedure is required for all production and all replica volumes. If the volumes are unbound, it will be necessary to repeat this procedure.

SANTap-based splitter:

a. When a volume is attached to a SANTap-based splitter, manually make the FC ID of the storage volumes persistent. Refer to the section “Persistent FC ID” in EMC Deploying RecoverPoint with SANTap Technical Notes for instructions.

b. When the persistent FC ID is used, no additional special procedures are required, because the FC ID is not altered.

If persistent FC ID is not used, the following procedure must be carried out:

1. After moving the AIX initiators to the front-end VSAN, remove the volume from the host:

# rmdev -dl <volume>

2. Rediscover the volume, allowing the hosts to reread the FC ID of the volume:

# cfgmgr

If the persistent FC ID is not used, this procedure is required for all production and replica volumes. If initiators are moved to the back-end VSAN, it will be necessary to repeat the procedure.
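Since this remove-and-rediscover cycle applies to every production and replica volume, it can be batched. A dry-run sketch with hypothetical hdisk names; removing the echo prefixes would execute the commands:

```shell
# Remove each replicated volume so AIX forgets the stale FC ID...
for vol in hdisk3 hdisk9 hdisk15; do
  echo rmdev -dl "$vol"
done
# ...then one cfgmgr pass rediscovers them all with the new FC IDs.
echo cfgmgr
```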

Physical Volume Identifier

RecoverPoint replicates at the block level. As soon as the replica storage is initialized for replication, the Physical Volume Identifier (PVID) of the production storage devices is copied to the replica storage devices. However, the Object Data Manager at the replica side knows each replica volume by its pre-initialization PVID, so it will not recognize the volume with the replicated PVID. Run “First-time failover” on page 10 to allow the AIX hosts at the replica to update their Object Data Manager and recognize the volumes with the new PVIDs.

Configuring consistency groups

Configuring consistency groups for AIX servers is identical to the procedure for other operating systems. Refer to the EMC RecoverPoint Administrator’s Guide for instructions.

First-time failover You must perform this procedure on a consistency group before you can access images or fail over the consistency group, because RecoverPoint will change the Physical Volume Identifier of the volumes. Carry out the following procedure after first-time initialization of a consistency group:

1. Ensure that the initialization of the consistency group has been completed.

2. At the production host, stop I/O to volumes being replicated, and unmount the production file systems:

# sync
# umount <mount_point>
# varyoffvg <volume_group_name>

3. Access an image. For instructions, refer to “Accessing a Replica” in the EMC RecoverPoint Administrator’s Guide.

4. When using host-based splitters on the replica host, detach all volumes from the replica-side host-based splitters.

5. At the replica-side host, force AIX to reread volumes. Run the following commands.

a. For each storage device, update the volume information:

# chdev -l <hdisk#> -a pv=yes

b. Run the following command on both the production and the replica hosts and verify that the production and replica devices have the same Physical Volume Identifiers:

# lspv

6. Disable image access.

7. When using host-based splitters on the replica side, you must reattach the volumes. Attaching volumes to splitters will trigger a full synchronization. Since the storage volumes are not mounted at this point, no data has changed. As a result, if you need to avoid a full resynchronization because of bandwidth limitations, you may clear markers (attach as clean) and not resynchronize the volumes. The best practice is to allow a full resynchronization at this point and not to clear markers or attach as clean.

a. Reattach the volumes to those host-based splitters. For instructions, refer to the EMC RecoverPoint Administrator’s Guide.

b. Repeat this step for each volume that you detached in Step 4 on page 10.

8. Wait (if needed) for the full resynchronization to finish. Then, at the replica-side host, test the data integrity. To do so, run the following commands.

a. Enable image access.

b. At the replica AIX host, import the volume group:

# importvg -y <vg_name> <hdisk#>

Even if the volume group contains multiple disks, it is sufficient to specify just one; all of the other disks in the volume group will be imported automatically using the information from the one specified.

c. Mount the volume:

# mount /dev/<logical_volume_name> /<mount_point>

d. Verify that all required data is on the volume.

9. At the replica AIX host, export the volume group:

# umount <mount_point(s)>
# varyoffvg <volume_group_name>
# exportvg <volume_group_name>

10. Disable image access.

11. At the production side, mount the storage volumes on the production host.
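The replica-side integrity check in steps 8 and 9 can be summarized as one command sequence. A dry-run sketch; the volume group, logical volume, disk, and mount-point names are hypothetical, and the echo prefixes would be removed on a real replica host:

```shell
vg=datavg; lv=datalv01; mnt=/replica_mnt; disk=hdisk15  # hypothetical names
echo importvg -y "$vg" "$disk"  # one member disk imports the whole VG
echo mount "/dev/$lv" "$mnt"    # mount the file system and inspect the data
echo umount "$mnt"              # then release the volume group
echo varyoffvg "$vg"
echo exportvg "$vg"
```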

Failing over and failing back

After performing first-time failover, subsequent failovers do not require disabling and enabling the storage devices.

Planned failover A planned failover is used to make the remote side the production side, allowing planned maintenance of the local side. For more information about failovers, see the chapter “Failover choices” in the EMC RecoverPoint Administrator’s Guide.

1. Ensure that the first initialization for the consistency group has been completed.

2. At the production-side AIX host, stop the applications. Then flush the local devices and unmount the file systems (stop I/O to the mount point):

# sync
# umount <mount_point>
# varyoffvg <volume_group_name>

3. At the RecoverPoint Management Application, access an image and fail over to it. Refer to the EMC RecoverPoint Administrator’s Guide for instructions.

4. At the replica-side AIX host:

a. Import the volume group:

# importvg -y <volume_group_name> <hdisk#>

Even if the volume group contains multiple disks, it is sufficient to specify just one; all of the other disks in the volume group will be imported automatically using the information from the one specified.

b. Mount the file system:

# mount /dev/<logical_volume_name> /<mount_point>

c. Run the application on the replica-side host.

5. For instructions to fail back to the production side, refer to the EMC RecoverPoint Administrator’s Guide.

Unplanned failover RecoverPoint is designed for recovering from disasters and unexpected failures. Use the following procedure.

1. At the RecoverPoint Management Application, access an image.

2. At the replica-side AIX host, import the volume group:

# importvg -y <volume_group_name> <hdisk#>


Even if the volume group contains multiple disks, it is sufficient to specify just one; all of the other disks in the volume group will be imported automatically using the information from the one specified.

3. Mount the file system:

# mount /dev/<logical_volume_name> /<mount_point>

4. Run the application.

5. Fail over to the image. Verify that it is not corrupted. Refer to the EMC RecoverPoint Administrator’s Guide for instructions.

Failing back For instructions to fail back to the production side, refer to the EMC RecoverPoint Administrator’s Guide.


Working with AIX virtual I/O


Virtual I/O Overview The Virtual I/O Server is part of the IBM System p Advanced Power Virtualization hardware feature. Virtual I/O Server allows sharing of physical resources between logical partitions (LPARs), including virtual SCSI and virtual networking. This allows more efficient utilization of physical resources through sharing between LPARs and facilitates server consolidation.

For more information about the Virtual I/O Server, search at www.ibm.com for Virtual I/O Server Advanced Power Virtualization.

RecoverPoint and Virtual I/O

Deploying RecoverPoint in a system with Virtual I/O Server and the IBM System p Advanced Power Virtualization hardware feature requires detailed knowledge of volumes, and an understanding of the implications and special handling of Virtual I/O configuration.

The following procedures illustrate recommended practices for such a deployment, presenting the steps for configuring Virtual I/O in several basic RecoverPoint scenarios.

Note: Due to the nature of the Virtual I/O implementation, you may replicate a disk volume only between two Virtual I/O systems or between two non-Virtual I/O systems. RecoverPoint does not support replicating between a Virtual I/O system and a non-Virtual I/O system.


Replicating Virtual I/O

Virtual I/O is configured in the following manner:

Storage—a storage array, which contains the user volume(s) and the RecoverPoint volumes (optional).

LUN—the LUN from the storage which is to be replicated with RecoverPoint and which contains the user data, from which the virtual disks are created.

AIX LPAR—dynamic partitioning of server resources into logical partitions (LPARs), each of which can support a virtual server and multiple clients.

Virtual I/O Server—the instance in the LPAR which runs the server portion of the Virtual I/O. This instance uses the physical HBA(s) and sees the ‘real world’.

vdisk—the portion of the LUN presented to the VIO client by the VIO server.

Virtual I/O Client—the instance in the LPAR which runs the client portion of the Virtual I/O. The client does not have access to the physical devices (HBA, LUN, etc.) but gets a virtual device via the Virtual I/O mechanism instead. Its disk is of the type Virtual SCSI Disk Drive.

Volume group—The Virtual I/O server has a volume group configured on the storage LUN. One of the volumes in the volume group is mapped to the Virtual I/O client.

[Figure: Virtual I/O configuration. The Virtual I/O server in an AIX LPAR accesses the storage LUN, configures a volume group on it, and presents vdisks to the Virtual I/O clients.]


First-time failover The need for first-time failover is explained in “Required adjustments when using fabric-based splitters with AIX servers” on page 8. After first-time initialization of a consistency group:

1. Ensure that the initialization of the consistency group has been completed.

2. At the production host, stop I/O to replicated volumes, and unmount the file systems:

# sync
# umount <mount_point>
# varyoffvg <volume_group_name>

3. Access an image. For instructions, refer to “Accessing a Replica” in the EMC RecoverPoint Administrator’s Guide.

4. When using host-based splitters on the replica host, detach all volumes from the replica-side host-based splitters.

5. At the replica-side Virtual I/O server, force AIX to reread volumes. For each storage device, update the disk information:

# chdev -l <virtual_disk#> -a pv=yes

6. Run the following command on both the production and the replica Virtual I/O server and verify that the production and replica devices have the same Physical Volume Identifiers:

# lspv

7. When using host-based splitters on the replica side, you must reattach volumes. Attaching volumes to splitters will trigger a full synchronization. Since storage volumes are not mounted at this point, no data has changed. As a result, if you need to avoid a full resynchronization because of bandwidth limitations, you may clear markers (attach as clean) and not resynchronize the volumes. The best practice is to allow a full resynchronization at this point and not to clear markers or attach as clean.

a. Reattach the volumes to those host-based splitters. For instructions, refer to the EMC RecoverPoint Administrator’s Guide.

b. Repeat this step for each volume that you detached in Step 4.

8. At the replica Virtual I/O server, import the volume group:


# importvg -y <vg_name> <virtual_disk#>

Even if the volume group contains multiple disks, it is sufficient to specify just one; all of the other disks in the volume group will be imported automatically using the information from the one specified.

9. On the replica Virtual I/O Server, map the disk or disks to the Virtual I/O clients. Use the following command:

# ioscli mkvdev -vdev <device_name> -vhost <vhost_name> -dev <virtual_lun_name>

On some AIX systems, vadapter is used instead of vhost.

10. Use the following command to verify the mapping:

# ioscli lsmap -all

11. On the replica virtual I/O client, rediscover the disks:

# cfgmgr
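Steps 9 through 11 can be scripted per disk in the same dry-run style. The device, vhost, and virtual LUN names below are hypothetical; removing the echo prefixes would run the commands on a real Virtual I/O server and client:

```shell
# Map each backing disk to the client through vhost0 (names hypothetical).
for pair in "hdisk15 vclient_lun1" "hdisk21 vclient_lun2"; do
  set -- $pair
  echo ioscli mkvdev -vdev "$1" -vhost vhost0 -dev "$2"
done
echo ioscli lsmap -all  # verify the mappings on the VIO server
echo cfgmgr             # rediscover the vdisks on the VIO client
```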

12. Wait (if needed) for the full resynchronization to finish. Then, at the replica-side virtual I/O client, test the data integrity. To do so, run the following commands.

a. Enable image access.

b. At the replica virtual I/O client, import the volume group:

# importvg -y <volume_group_name> <virtual_disk#>

Even if the volume group contains multiple disks, it is sufficient to specify just one; all of the other disks in the volume group will be imported automatically using the information from the one specified.

c. Mount the volume:

# mount /dev/<logical_volume_name> /<mount_point>

d. Verify that all required data is on the volume.

13. At the replica side, stop host access. On the replica virtual I/O client, unmount the file systems:

# umount <mount_point>

On the replica virtual I/O server, deactivate and export the volume group:


# varyoffvg <volume_group_name>
# exportvg <volume_group_name>

14. Disable image access.

15. At the production side, mount the storage volumes on the production host.

Failover After the first failover has been completed (see “First-time failover” above), subsequent failovers do not require disabling and enabling the storage devices.

Planned failover A planned failover is used to make the remote side the production side, allowing planned maintenance of the local side. For more information about failovers, refer to the EMC RecoverPoint Administrator’s Guide.

1. Ensure that the first initialization for the consistency group has been completed.

2. At the production-side virtual I/O client, stop the applications. Then flush the local devices and unmount the file systems (stop I/O to the mount point):

# sync
# umount <mount_point(s)>
# varyoffvg <volume_group_name>

3. At the RecoverPoint Management Application, access an image. Refer to the EMC RecoverPoint Administrator’s Guide for instructions.

4. At the replica-side virtual I/O server, import the volume group:

# importvg -y <volume_group_name> <virtual_disk#>

Even if the volume group contains multiple disks, it is sufficient to specify just one; all of the other disks in the volume group will be imported automatically using the information from the one specified.

5. On the replica-side virtual I/O server, map the disks to the virtual I/O client:

# ioscli mkvdev -vdev <device_name> -vhost <vhost_name> -dev <virtual_lun_name>

On some AIX systems, vadapter is used instead of vhost.


6. Use the following command to verify the mapping:

# ioscli lsmap -all

7. On the replica virtual I/O client, rediscover the disks:

# cfgmgr

8. Then, at the replica-side virtual I/O client, test the data integrity. To do so, run the following commands.

a. Enable image access.

b. At the replica virtual I/O client, import the volume group:

# importvg -y <volume_group_name> <virtual_disk#>

Even if the volume group contains multiple disks, it is sufficient to specify just one; all of the other disks in the volume group will be imported automatically using the information from the one specified.

c. Mount the volume:

# mount /dev/<logical_volume_name> /<mount_point>

d. Verify that all required data is on the volume. To test a different image, run umount and varyoffvg, select a different image, and run the procedure again.

9. Run the application on the replica-side LPAR (logical partition).

Failing back For instructions to fail back to the production side, refer to the EMC RecoverPoint Administrator’s Guide.

Unplanned failover RecoverPoint is designed for recovering from disasters and unexpected failures. Use the following procedure.

1. At the RecoverPoint Management Application, access an image.

2. At the replica-side virtual I/O server, import the volume group:

# importvg -y <volume_group_name> <virtual_disk_name>

Even if the volume group contains multiple disks, it is sufficient to specify just one; all of the other disks in the volume group will be imported automatically using the information from the one specified.


3. On the replica-side virtual I/O server, map the disks to the virtual I/O client:

# ioscli mkvdev -vdev <device_name> -vhost <vhost_name> -dev <virtual_lun_name>

On some AIX systems, vadapter is used instead of vhost.

4. Use the following command to verify the mapping:

# ioscli lsmap -all

5. On the replica virtual I/O client, rediscover the disks:

# cfgmgr

6. At the replica-side virtual I/O client, test the data integrity.

a. At the replica virtual I/O client, import the volume group:

# importvg -y <volume_group_name> <virtual_disk#>

Even if the volume group contains multiple disks, it is sufficient to specify just one; all of the other disks in the volume group will be imported automatically using the information from the one specified.

b. Mount the volume:

# mount /dev/<logical_volume_name> /<mount_point>

c. Verify that all required data is on the volume. To test a different image, run umount and varyoffvg, select a different image, and run the procedure again.

7. Run the application on the replica-side LPAR (logical partition).

8. If you wish to reverse the direction of replication, at the RecoverPoint Management Application, define the remote side as the production side. Refer to the EMC RecoverPoint Administrator’s Guide for details.

Failing back For instructions to fail back to the production side, refer to the EMC RecoverPoint Administrator’s Guide.


Working with HACMP


HACMP overview HACMP (High Availability Cluster Multi-Processing) is a host cluster system that runs on the IBM AIX operating system.

First-time failover After initial synchronization of a consistency group, it is necessary to disable and enable the storage devices on the AIX servers at the replica side. For details, refer to “Required adjustments when using fabric-based splitters with AIX servers” on page 8.

The first-time failover procedure for HACMP is similar to the procedure for stand-alone hosts, except for the mounting and unmounting of file systems.

You must perform “First-time failover” on page 10 before you can access images or fail over that consistency group, because RecoverPoint will change the Physical Volume Identifiers of the volumes at the replica side. Carry out the following procedure after first-time initialization of a consistency group:

1. Ensure that the initialization of the consistency group has been completed.

2. At the production side, take the replicated resource off-line:

# clRGmove -s false -d -i -g <resource_group_name> -n <node_name>

This command also syncs and unmounts the file systems and varies off the volume group.

3. Access an image. For instructions, refer to “Accessing a Replica” in the EMC RecoverPoint Administrator’s Guide.

4. When using host-based splitters on the replica host, detach all volumes from the replica-side host-based splitters.

5. At the replica-side host, force AIX to reread the volume information. Run the following commands:

a. For each storage device, update the disk information:

# chdev -l <device name> -a pv=yes

b. Run the following command on both the production and the replica Virtual I/O server and verify that the production and replica devices have the same Physical Volume Identifiers:


# lspv

6. Disable image access.

7. When using host-based splitters on the replica side, you must reattach the volumes. Attaching volumes to splitters triggers a full synchronization. Because the storage volumes are not mounted at this point, no data has changed; therefore, if bandwidth limitations require you to avoid a full resynchronization, you may clear markers (attach as clean) and skip the resynchronization. However, the best practice is to allow the full resynchronization at this point and not to clear markers or attach as clean.

a. Reattach the volumes to those host-based splitters. For instructions, refer to the EMC RecoverPoint Administrator’s Guide.

b. Repeat this step for each volume that you detached in Step 4.

8. Bring the resource group back on-line at the replica side. Use the following command:

# clRGmove -s false -u -i -g <resource_group_name> -n <node_name>

This command also varies on the volume group and mounts the file systems.

9. At the replica-side host, test the data integrity. Verify that all required data is on the volume.

10. After testing is completed, disable image access at the replica side.

11. Bring the resource group back on-line at the production side. Use the following command:

# clRGmove -s false -u -i -g <resource_group_name> -n <node_name>

This command also varies on the volume group and mounts the file systems.
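The PVID check in step 5b can be automated by capturing the lspv output on each Virtual I/O server (for example, `lspv > /tmp/prod_lspv.txt` and `lspv > /tmp/replica_lspv.txt`) and comparing the PVID column. The following is a hedged sketch: the file names are hypothetical, and it assumes the standard lspv layout with the PVID in the second column. Only PVIDs are compared, since device names may differ between the two sides.

```shell
#!/bin/sh
# Hedged sketch: verify that production and replica devices report the
# same PVIDs (step 5b). Arguments are two files of saved `lspv` output.
# Assumes lspv's second column is the PVID; device names are ignored
# because hdisk numbering may differ between the two VIO servers.
pvids_match() {
    # $1 = production lspv output, $2 = replica lspv output
    prod=$(awk '{ print $2 }' "$1" | sort)
    repl=$(awk '{ print $2 }' "$2" | sort)
    if [ "$prod" = "$repl" ]; then
        echo "PVIDs match"
    else
        echo "PVID mismatch"
    fi
}
```

Example: `pvids_match /tmp/prod_lspv.txt /tmp/replica_lspv.txt` prints whether the two sides agree before you disable image access in step 6.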

Failing over and failing back

The procedures for failing over and failing back HACMP clusters are identical to the procedures for stand-alone hosts, except for the commands for taking resources off-line and bringing them on-line at the active side.


Use the following command to take resources off-line:

# clRGmove -s false -d -i -g <resource_group_name> -n <node_name>

This command also syncs and unmounts the file systems and varies off the volume group.

Use the following command to bring resources on-line:

# clRGmove -s false -u -i -g <resource_group_name> -n <node_name>

This command also varies on the volume group and mounts the file systems.
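Because the off-line and on-line operations above differ only in the -d/-u flag, they can be wrapped in small helpers so the correct flags are always used. The following is a hedged dry-run sketch: the helpers echo the clRGmove command rather than executing it, and the resource group and node names in the example calls are hypothetical.

```shell
#!/bin/sh
# Hedged dry-run helpers for the HACMP resource group operations above.
# Each helper echoes the clRGmove command line; on a real HACMP node,
# replace 'echo' with direct execution.

rg_offline() {
    # Take resource group $1 off-line on node $2
    # (also syncs, unmounts, and varies off the volume group)
    echo "clRGmove -s false -d -i -g $1 -n $2"
}

rg_online() {
    # Bring resource group $1 on-line on node $2
    # (also varies on the volume group and mounts the file systems)
    echo "clRGmove -s false -u -i -g $1 -n $2"
}

# Example calls with placeholder names:
rg_offline app_rg node_a
rg_online app_rg node_a
```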

Copyright © 2009 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.
