RecoverPoint Implementation Lab Guide Supplement v3.2



RecoverPoint Implementation Lab Guide Supplement

September 2009


Copyright © 2009 EMC Corporation. All Rights Reserved. Version 2.2 Page 2 of 43

Copyright

Copyright © 1996, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. EMC, ICDA® (Integrated Cached Disk Array), and EMC2® (the EMC logo), and Symmetrix®, are registered trademarks of EMC Corporation. EMC™ and SRDF™ are trademarks of EMC Corporation. All other trademarks used herein are the property of their respective owners.


Trademark Information

EMC Trademarks

EMC2, EMC, Symmetrix, Celerra, CLARiiON, CLARalert, Connectrix, Dantz, Documentum, HighRoad, Legato, Navisphere, PowerPath, ResourcePak, SnapView/IP, SRDF, TimeFinder, VisualSAN, and where information lives are registered trademarks and EMC Automated Networked Storage, EMC ControlCenter, EMC Developers Program, EMC OnCourse, EMC Proven, EMC Snap, Access Logix, AutoAdvice, Automated Resource Manager, AutoSwap, AVALONidm, C-Clip, Celerra Replicator, Centera, CentraStar, CLARevent, CopyCross, CopyPoint, DatabaseXtender, Direct Matrix, Direct Matrix Architecture, EDM, E-Lab, Enginuity, FarPoint, FLARE, GeoSpan, InfoMover, MirrorView, NetWin, OnAlert, OpenScale, Powerlink, PowerVolume, RepliCare, SafeLine, SAN Architect, SAN Copy, SAN Manager, SDMS, SnapSure, SnapView, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix DMX, Universal Data Tone, and VisualSRM are trademarks of EMC Corporation. All other trademarks used herein are the property of their respective owners.

Third Party Trademarks

AIX is a registered trademark of International Business Machines Corporation. Brocade, SilkWorm, SilkWorm Express, and the Brocade logo are trademarks or registered trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Compaq and the names of Compaq products referenced herein are either trademarks and/or service marks or registered trademarks and/or service marks of Compaq. Hewlett-Packard, HP, HP-UX, OpenView, and OmniBack are trademarks, or registered trademarks of Hewlett-Packard Company. McDATA, the McDATA logo, and ES-2500 are registered trademarks of McDATA Corporation. Microsoft, Windows, and Windows NT are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. NobleNet is a registered trademark of Rogue Wave Software, Inc. SANbox is a trademark of QLogic Corporation. Sun, Sun Microsystems, the Sun Logo, SunOS and all Sun-based trademarks and logos, Java, the Java Coffee Cup Logo, and all Java-based trademarks and logos, Solaris, and NFS, are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and other countries. UNIX is a registered trademark of The Open Group.


Document Revision History

Rev #   File Name                                   Date
1.0     Lab Guide Supplement 3.0                    March 2008
1.1     Lab Guide Supplement 3.0                    April 2008
2.0     RP3.1 Workshop Lab Guide Supplement.doc     December 2008
2.1     RP3.1.1_Workshop_Labguide_Supplement.doc    July 2009
2.2     RPImp_Labguide_Supplement.doc               September 2009


Table of Contents

Lab Exercise S1: Working With AIX Volume Management ............................... 7
Part 1: AIX Volume Group and File System Setup .................................... 8
Part 2: Working with Bookmarks on AIX ............................................ 13
Lab Exercise S2: Presenting RPAs to a VMware Windows Guest ....................... 15
Part 1: Adding RPAs as RDM Volumes to a Windows Guest ............................ 16
Lab Exercise S3: Working with RDM Volumes on a VMware Guest ...................... 19
Part 1: Adding RDM Volumes to a VMware Windows Guest ............................. 20
Part 2: Initialize the New Windows Disk .......................................... 27
Lab Exercise S4: Brocade Fabric Splitter (Multi-VI) .............................. 31
Part 1: Configure Zoning and Masking – Brocade Splitter in Multi-VI Mode ......... 32
Appendix: Command Syntax Reference ............................................... 40



Lab Exercise S1: Working With AIX Volume Management

Purpose:

Create a volume group, logical volume, and file system for use with data replication exercises on an AIX host.

Tasks: In this lab, you perform the following tasks:
• Configure the new volumes and attain continuous sync on the new consistency group.
• Successfully mount a bookmark file system on the Site 2 host replicated volume group.

References:
• EMC RecoverPoint Installation Guide
• EMC RecoverPoint Administrator’s Guide


Part 1: AIX Volume Group and File System Setup

1. On the Site 1 production host, use the powermt display command to display a list of the PowerPath devices.

2. Use the lsvg command to display a list of the current volume groups. As you perform the steps in this lab, be sure you do not perform any operations on the rootvg volume group.


3. Use smitty to create a new volume group using at least two PowerPath devices. If you are familiar with the commands needed to create a volume group, logical volume, and file system on IBM AIX, you may use those commands instead of smitty.

4. Use the lsvg -p command to list the devices in the volume group and verify the physical volume mapping.

5. Use smitty to create an enhanced journaled file system (jfs2).
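If you prefer the command line to smitty, steps 3 through 5 can be sketched as follows. The hdiskpower device names, volume group and logical volume names, mount point, and sizes below are placeholders for this lab; substitute the values from your own configuration.

```shell
# Create a volume group from two PowerPath devices (names are examples)
mkvg -y rpvg hdiskpower1 hdiskpower2

# Create a logical volume of type jfs2 spanning both physical volumes
# (500 logical partitions is an arbitrary example size)
mklv -y rplv -t jfs2 rpvg 500

# Create an enhanced journaled file system on the logical volume and mount it
crfs -v jfs2 -d rplv -m /rpfs -A yes
mount /rpfs
```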


6. Make the file system large enough to span at least two physical volumes.

7. Use the lsvg -l command to display the newly created volume group, logical volume, and file system details.

8. Mount the new file system (if not already mounted).


9. Use the lsattr command to examine the hdiskpower devices for the reserve_lock attribute.

10. If the reserve_lock attribute is set to yes, use the chdev command to change the setting to no.
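Steps 9 and 10 can be done from the command line as sketched below; hdiskpower1 is an example device name, and the commands must be repeated for each hdiskpower device in the volume group.

```shell
# Check the reserve_lock attribute on an hdiskpower device
lsattr -El hdiskpower1 -a reserve_lock

# If it is set to yes, clear it so the RPAs can access the device
chdev -l hdiskpower1 -a reserve_lock=no
```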


11. Use the lsattr command to examine the fc_err_recov attribute setting.

12. If fc_err_recov is set to delayed_fail, use the chdev command to change the setting to fast_fail.
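A minimal sketch of steps 11 and 12; fscsi0 is an example Fibre Channel adapter instance, and the change should be repeated for each FC adapter on the host.

```shell
# Check the current error-recovery policy on the FC adapter
lsattr -El fscsi0 -a fc_err_recov

# Change delayed_fail to fast_fail; -P defers the change to the next
# reboot if the adapter is busy
chdev -l fscsi0 -a fc_err_recov=fast_fail -P
```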

End of Lab Exercise


Part 2: Working with Bookmarks on AIX

1. Before taking a bookmark, on the Site 1 host, unmount the file system, vary off the volume group, and export the volume group. This ensures the volume group is in the correct state to enable importing on the Site 2 host.
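The quiesce sequence in step 1 looks like this on AIX; /rpfs and rpvg are example names from this lab setup.

```shell
# Quiesce the production copy before taking the bookmark
umount /rpfs        # unmount the file system
varyoffvg rpvg      # deactivate the volume group
exportvg rpvg       # remove the VG definition so Site 2 can import it cleanly
```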

2. From the RecoverPoint Management Console, create and access a bookmark. Reference Lab 5 - Managing Replication Jobs in the main lab guide for specific steps and examples.

3. On the Site 2 target host, run the cfgmgr command to rescan devices.

4. On the Site 2 target host, run the lspv command to validate that the device labels have been created.

5. Use the importvg command to import the Site 2 target volume group and mount the target volume file system.

6. Before exiting logged access on the target volume, unmount the file system, vary off the volume group, and export the volume group.
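Steps 3 through 6 on the Site 2 host can be sketched as follows; rpvg, /rpfs, and hdiskpower1 are example names carried over from the Site 1 setup, and any one physical volume of the group can be named in importvg.

```shell
cfgmgr                          # rescan for devices
lspv                            # verify the device labels arrived with the replica
importvg -y rpvg hdiskpower1    # import the VG using one of its physical volumes
mount /rpfs                     # mount the replicated file system

# Before exiting logged access, reverse the process
umount /rpfs
varyoffvg rpvg
exportvg rpvg
```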

End of Lab Exercise



Lab Exercise S2: Presenting RPAs to a VMware Windows Guest

Purpose:

Present RecoverPoint Appliances as RDM volumes to a VMware Windows guest.

Tasks: In this lab, you perform the following tasks:
• Modify RPA site details to present RPAs to VMware guests.
• Add RPAs to a VMware guest as RDM volumes.

References:
• EMC RecoverPoint Installation Guide
• EMC RecoverPoint Administrator’s Guide


Part 1: Adding RPAs as RDM Volumes to a Windows Guest

1. Log into each RPA and modify Site details > Number of exposed LUNs. Up to 16 LUNs are supported. In this lab there is only one VM, so this number can be set to 1; it would need to be changed if more VMs were added to the RecoverPoint configuration. This must be done for the Site 1 and Site 2 RPAs.

2. From the VMware Client interface, select the ESX server in the tree panel and click the Configuration tab in the management panel on the right. Click Storage Adapters under Hardware and click Rescan. Select Scan for New Storage Devices. Click OK. This makes the RPA targets visible to the ESX server.


3. Once the scan is complete, verify that the RPA targets are visible.

4. The RPA targets must be added as RDMs to the ESX server. From the VMware Client, click the Virtual Machines tab. Right-click your VM and select Edit Settings from the drop-down menu. Use the Add Hardware Wizard to add each RPA target as an RDM.


5. Open a console on your VM and verify that the RPA targets appear under Device Manager.

End of Lab Exercise


Lab Exercise S3: Working with RDM Volumes on a VMware Guest

Purpose:

Create RDM volumes for a VMware Windows guest for use with data replication exercises.

Tasks: In this lab, you perform the following tasks:
• Configure the new volumes and attain continuous sync on the new consistency group.

References:
• EMC RecoverPoint Installation Guide
• EMC RecoverPoint Administrator’s Guide


Part 1: Adding RDM Volumes to a VMware Windows Guest

1. Open a console on your VMware Windows guest OS. Open Computer Management > Device Manager and expand Disk Drives. You see only the VMware virtual disk, which represents the guest OS's boot device. This device is not replicated with RecoverPoint. Shut down Windows before proceeding with Step 2.


2. From the VMware Client interface, select the ESX server in the tree panel and click the Virtual Machines tab in the management panel on the right. Right-click your virtual machine and select Edit Settings from the drop-down menu.


3. Highlight Hard Disk 1 and click Add.

4. From the Add Hardware Wizard, select Hard Disk. Click Next.


5. To replicate devices on a VMware guest with the host splitter, the devices must be added to the guest OS as Raw Device Mappings (RDMs). From the Add Hardware Wizard, select Raw Device Mappings. Click Next.

6. From the Add Hardware Wizard, select a device to configure as a raw LUN. Click Next.


7. From the Add Hardware Wizard, select Store with Virtual Machine. Click Next.

8. From the Add Hardware Wizard, under Compatibility, select Physical. Click Next.


9. From the Add Hardware Wizard, accept the default Virtual Device Node presented by the wizard. Click Next.

10. From the Add Hardware Wizard, review the options summary before committing changes. If the options are correct, click Finish to continue. If they are not, click Back to make changes.


11. Once you click Finish, the Virtual Machine Properties window displays the new drive with a status of (adding). The device is not added until you click OK. If you want to add more devices, click Add. Be aware that the first device you selected is still on the list of devices to add. After clicking OK to complete the add operation, the device is no longer listed.

12. Monitor the status of the Add at the bottom of the VMware Client window in the Recent Tasks panel. Once the Add completes, you can restart the Windows OS.

13. Perform Steps 1 through 12 on the Site 2 VMware server.


Part 2: Initialize the New Windows Disk

1. Open a console on your VM. Open Computer Management > Device Manager and expand Disk Drives. If you do not see the device listed, right-click Disk drives and select Scan for hardware changes from the drop-down menu.

2. Expand Disk drives and verify that your device is visible.


3. From Computer Management, select Disk Management. The Initialize and Convert Disk Wizard launches automatically because the new device has not been initialized. Click Next.

4. From the Initialize and Convert Disk Wizard, select your disk to initialize. Click Next. When presented with the window to convert the disk, click Next without selecting the disk.


5. From the Initialize and Convert Disk Wizard, review the summary and click Finish to complete the initialization.

6. From Computer Management > Disk Management, right-click the new disk and select New Partition from the drop-down menu. Create an NTFS file system on the disk and format it.


7. Once the disk is formatted and has a drive letter assigned, you can add files via Windows Explorer. From this point, use kutils to work with bookmarks as you would for any Windows system.

End of Lab Exercise


Lab Exercise S4: Brocade Fabric Splitter (Multi-VI)

Purpose:

Perform the zoning and masking required to implement a Brocade fabric-based splitter in Multi-VI mode.

Tasks:
• Create zones needed to support fabric splitting in Multi-VI mode
• Perform LUN masking needed to support fabric splitting in Multi-VI mode

References: Deploying RecoverPoint with the Brocade AP7600 and FA4-18 Technical Notes


Part 1: Configure Zoning and Masking – Brocade Splitter in Multi-VI Mode

The procedure in this section is for Multi-VI mode. If you are going to implement Frame Redirect mode, go to Lab 8, Part 4 - Configure Zoning – Frame Redirect in the full lab guide document.

1. To configure Multi-VI, set up zoning and LUN masking for the RPAs, host VIs (HVIs), and host initiators as necessary. Reference the table below for the list of zones needed for Multi-VI. In this exercise, you are migrating from a host splitter environment. Delete the RPA target to host initiator zone and the host initiator to storage target zone when you are finished implementing Multi-VI. If this is a new implementation (rather than a migration), remove the original host to storage zone only.

2. Run the cfgactvshow command to display the current effective configuration on your switch and verify which zones are part of the configuration. Your zones should be configured for host-based splitting. The existing RPA to storage target zone is the RPA Initiator Zone; if this zone does not exist, create it now. Make note of the configuration name; you need this information to add zones to the configuration in a later step.


3. Create the RPA Target Zone. Add the System VI and RPA ports to this zone. This enables the RPA to locate the Connectrix-based splitter.
Command Syntax: zonecreate "zone name", "RPA1 PWWNa; RPA1 PWWNb; RPA2 PWWNa; RPA2 PWWNb; SysVI WWN"

4. Create the RPA Front End Zone. Add the RPA ports and appliance virtual targets (AVTs) to the zone. The AVT WWNs are listed on the BP in:
/thirdparty/recoverpoint/init_host/scimitar_wwns_list.txt
Command Example: BFOS:root> more scimitar_wwns_list.txt
Command Syntax: zonecreate "zone name", "RPA1 PWWNa; RPA1 PWWNb; RPA2 PWWNa; RPA2 PWWNb; AVT WWN; AVT WWN; ..."

Displaying the AVT WWNs Using the more Command
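As a concrete illustration of the zonecreate syntax in steps 3 and 4, the commands below use entirely hypothetical WWNs and zone names; substitute the PWWNs from your fabric and the AVT WWNs read from scimitar_wwns_list.txt.

```shell
# RPA Target Zone: both ports of each RPA plus the System VI (example WWNs)
zonecreate "RPA_Target_Zone", "50:01:24:80:00:0a:11:01; 50:01:24:80:00:0a:11:02; 50:01:24:80:00:0b:11:01; 50:01:24:80:00:0b:11:02; 20:00:00:05:1e:35:ba:01"

# RPA Front End Zone: RPA ports plus the AVT WWNs from scimitar_wwns_list.txt
zonecreate "RPA_FrontEnd_Zone", "50:01:24:80:00:0a:11:01; 50:01:24:80:00:0a:11:02; 50:01:24:80:00:0b:11:01; 50:01:24:80:00:0b:11:02; 20:01:00:05:1e:35:ba:02; 20:02:00:05:1e:35:ba:03"
```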


5. Add the new zones to the configuration and enable the configuration. This makes the splitter visible in the RecoverPoint Management Application; you add the splitter there in the next step.
Command Syntax:
cfgadd "configuration name", "zone name; zone name"
cfgenable "configuration name"

Adding Zones and Enabling Configuration

6 Use the RecoverPoint Management Application or CLI to add the Connectrix-based splitter to the RecoverPoint configuration.

Adding the Splitter in the GUI


7. SSH into the RPA as admin and use the bind_host_initiators CLI command to configure host binding to virtual initiators. Enter the site name and the splitter name when prompted. Choose 2 (No) to configure for Multi-VI mode. Select your physical host initiator PWWN from the list. If the splitter was not successfully added in the previous step, the bind operation will not work.

Example: bind_host_initiators


8. The bind command returns the WWN of the virtual initiator that is bound to the host initiator. You can view the binding relationship from the RPA admin prompt by running the get_initiator_bindings command.

Output: get_initiator_bindings

9. Add the virtual initiator WWN that you obtained during the binding process to the RPA Target Zone.
Command Syntax: zoneadd "RPA Target Zone name", "HVI WWN"

Adding the HVI to the “RPA Target Zone”

10. Create the Connectrix Zone and add the host virtual initiator and physical storage target.
Command Syntax: zonecreate "zone name", "HVI WWN; Storage target PWWN"

Creating the “Connectrix Zone”


11. Add the Connectrix Zone to the configuration and enable the configuration.
Command Syntax:
cfgadd "configuration name", "zone name"
cfgenable "configuration name"

Adding the “Connectrix Zone” and Enabling the Configuration

12. Log into the RPA as admin and run rescan_san. Then run get_virtual_targets to obtain the list of virtual targets accessible to each host initiator.

Output: get_virtual_targets


13. Create the Front-end Zone and add the physical host initiator and virtual storage target.
Command Syntax: zonecreate "zone name", "Host initiator PWWN; Virtual Storage target WWN"

Creating the “Front-end Zone”

14. Add the new zone to the configuration and enable the configuration.
Command Syntax:
cfgadd "configuration name", "zone name"
cfgenable "configuration name"

Adding the “Front-end Zone” and Enabling the Configuration

15. Define storage LUN masking for the virtual initiators according to the host binding scheme. The production volumes must be masked to the host virtual initiator WWN before you remove the original host to storage zone in the next step. If you do not complete the masking operation, the host loses access to the devices when you remove the host to storage zone.

Symmetrix Example: symmask
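A Symmetrix masking pass for step 15 might look like the sketch below. The SID, director, port, device range, and HVI WWN are all placeholders; use the virtual initiator WWN returned by bind_host_initiators and your own array values.

```shell
# Mask the production devices to the host virtual initiator WWN (example values)
symmask -sid 123 -wwn 200300051e35ba04 -dir 13a -p 0 add devs "0100:0101"

# Push the updated VCMDB out to the FA ports
symmask -sid 123 refresh
```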


16. There are currently two zones in your active configuration that are no longer needed. The first is the zone containing the RPA ports and the host initiator PWWN (which was used for the host-based splitter configuration). The second is the original host to storage zone containing the physical host initiator PWWN and the physical storage target PWWN (Initiator-target Zone). Remove these zones from the active configuration.
Command Syntax:
cfgremove "configuration name", "zone name; zone name"
cfgenable "configuration name"

Removing Zones From the Configuration and Enabling the Configuration

17. Re-enable the path you disabled when preparing to migrate from host-based to fabric-based splitting.

18. Repeat these steps on the second fabric, and then at Site 2 (both fabrics).

End of Lab Exercise


Appendix: Command Syntax Reference

Symcli Common Steps

Step 1: List Addresses
symcfg -sid XXX -dir 15a -p 0 -addresses -available list
symdev -sid XXX list

Step 2: Mask Devices
symmask -sid XXX -wwn 123456789ABACABA -dir 13a -p 0 add devs "XXXX:YYYY"

Step 3: Send Changes to the FAs
symmask -sid XXX refresh

Other Commands

Show the contents of the VCMDB:
symmaskdb -sid XXX list database

Show the logins to the VCMDB:
symmaskdb -sid XXX list logins

Show the Symmetrix WWNs:
symcfg -sid XXX -fa all list

:: ADDITIONAL SOLUTIONS ENABLER COMMANDS ::

Symcfg Commands
symcfg discover – discover the storage environment
symcfg list – list local and remote Symmetrixes
symcfg list -clariion – list CLARiiONs
symcfg -dir all list – get configuration and status information about all directors
symcfg list -v – list whether the Symmetrix director has device masking turned on
symcfg list -FA all – list all Fibre directors in a Symmetrix system
symcfg list -dir all -address -sid 6196 – identify the address information for devices
symcfg list -dir all -address -available -sid 6196 – return the next available LUN address
symcfg list -lockn all – list visible Symmetrix exclusive locks
symcfg -sid 098712341357 -lockn 15 release – release a lock on a Symmetrix array

Symdev Commands
symdev list – list all devices on the Symmetrix
symdev -sa -p list – list devices mapped to one FA
symdev list -bcv or -rdf1 – list all BCV or RDF1 volumes


symdev list -noport – list devices not mapped to any FE ports
symdev list -clariion – list CLARiiON devices

Commands to See Devices
sympd list – lists the Symmetrix devices that the host can see
sympd list -vcm – lists all the physical device names in the device masking database
syminq -cid – list CLARiiON devices
syminq hba -fibre – list Fibre HBAs
inq
symdisk

Symmaskdb Commands
symmaskdb list devs – lists all devices accessible to an HBA on a specified Symmetrix system
symmaskdb remove – removes the meta member devices
symmaskdb restore – restores a database from a specified file
symmaskdb backup – backs up a database to a specified file
symmaskdb init – deletes and creates a new VCMDB

Symmask Commands
symmask add devs – adds a device to the list of devices that a WWN can access in the database
symmask remove devs – removes a device from the list of devices that a WWN can access in the database
symmask delete – deletes all access rights for a WWN in the database
symmask replace – allows one HBA to replace another
symmask refresh – refreshes the VCMDB to all FA ports
symmask login – lists, for each Fibre director, which hosts and HBAs are logged in to a Symmetrix system
symmask list hba – lists the WWNs of the Fibre HBAs on this host
symmask -sid 381 -wwn 50060B000024F9F6 -dir 16C -p 1 set heterogeneous on hp_ux

Other SYMCLI Commands
symdg – creates/deletes/renames device groups
symld – adds and removes devices in a device group
symbcv – associates/disassociates BCVs with device groups
symmir – performs (split/establish/restore) BCV mirror commands against device groups
symclone – performs (split/establish/restore/activate/terminate/recreate) clone operations
symsnap – performs (restore/activate/terminate/recreate) snap operations
symrdf – performs (split/establish/restore/failover/update/failback/suspend/resume) against RDF device groups
symcg – performs operations on a Symmetrix RDF composite group
symrslv – displays logical-to-physical mapping information about a logical object stored on a disk
symstat – displays statistics information about a Symmetrix, any or all directors, a device group, a disk, or a device
symioctl – sends I/O control commands to applications


:: MDS-SERIES (CISCO) COMMANDS ::

MDS-SERIES Switch Commands
show running-config – view the configuration running on the switch
show environment – shows the status of all installed hardware components
show flogi database – shows the database list of all FLOGI events
show interface brief – lists the interfaces and their status

MDS-SERIES Zoning Commands
zone name TestZone1 vsan 4 – creates a zone called TestZone1 on vsan 4
member fwwn 10:01:10:01:10:ab:cd:ef – adds a member to a previously created zone
zoneset name Zoneset1 vsan 4 – creates a zoneset called Zoneset1 on vsan 4
zoneset activate name Zoneset1 vsan 4 – activates zoneset Zoneset1 on vsan 4
zone copy active-zoneset full-zoneset vsan 4 – copies the active zoneset to the full zoneset database
copy running-config startup-config – copy from the running to the startup configuration
vsan database – enter vsan configuration mode
show zoneset – shows the configured zonesets
show zone vsan <#> – shows all zones in a vsan
show zoneset active – displays the active zoneset
show vsan – shows the vsans on the switch

:: B-SERIES (BROCADE) COMMANDS ::

B-SERIES Switch Commands
switchDisable – take the switch offline
switchEnable – bring the switch online
ipAddrSet – set the switch IP address
ipAddrShow – display the switch IP address
configure – change switch parameters
routehelp – routing commands
switchShow – display switch information
supportShow – full detailed switch information
portShow # – display port information
nsShow – name server contents
nsAllShow – name server contents for the full fabric
fabricShow – fabric information

B-SERIES Zoning Commands
zoneCreate "Zone1", "20:00:00:e0:69:40:07:08; 50:06:04:82:b8:90:c1:8d"
cfgCreate "Test_cfg", "Zone1; Zone2"
cfgSave – saves zoning information across reboots
cfgEnable "Test_cfg"
zoneShow or cfgShow – shows defined and effective zones and configurations
zoneAdd – adds a member to a zone
zoneRemove – removes a member from a zone
zoneDelete – deletes a zone
cfgAdd – adds a zone to a zone configuration
cfgRemove – removes a zone from a zone configuration
cfgDelete – deletes a zone configuration
cfgClear – clears all zoning information; the effective configuration must be disabled first
cfgDisable – disables the effective zone configuration


:: NAVICLI COMMANDS ::

navicli -h <SP IP> getsp – verify connectivity
navicli -h <SP IP> storagegroup -list – displays all information about existing storage groups
navicli -h <SP IP> getrg -lunlist – lists all existing RAID groups and LUNs
navicli -h <SP IP> getdisk – shows the disks in the storage array
navicli -h <SP IP> getrg <rg id> – shows information for the specified RAID group
navicli -h <SP IP> getlun <lun id> – shows information for the specified LUN
navicli -h <SP IP> getcache – shows the cache settings
navicli -h <SP IP> storagegroup -create -gname <name> – creates a new storage group
navicli -h <SP IP> storagegroup -addhlu -gname <name> -hlu <#> -alu <#> – assigns LUNs to a storage group
navicli -h <SP IP> storagegroup -connecthost -host <hname> -gname <gname> – assigns a host to a storage group

:: POWERPATH COMMANDS ::

powermt – manage the PowerPath environment
powercf – configure PowerPath devices
emcpreg – manage PowerPath license registration
emcpupgrade – convert PowerPath configuration files

:: SUN/SOLARIS HOST COMMANDS ::

SOLARIS Software Installation
ptree -a – shows all running processes in a tree format
showrev -p – displays currently installed Solaris patches
prtconf – prints the system configuration
modinfo – displays information about loaded kernel modules
pkginfo – lists installed software packages
pkgadd – installs software packages
pkgchk -l <package> – lists all files in a package
pkgrm – removes installed software packages

SOLARIS Device Commands
devinfo – prints device-specific information about disk devices
drvconfig – generates special device files
uname – prints system type, name, kernel, build, and patch revision
reboot -- -r – reboots the system to discover configuration changes
/etc/system – system configuration file
/kernel/drv/sd.conf – lists available target IDs and LUNs
/kernel/drv/lpfc.conf – used for persistent binding
/var/adm/messages – system messages

SOLARIS Filesystem Commands
Solaris Volume Manager Administration Guide - http://docs.sun.com/app/docs/doc/816-4520
format – disk partitioning and maintenance program
sysdef – device listing
prtvtoc – disk label
metadevadm – checks device ID configuration
metainit – configures volumes
metastat – displays the status of volumes or hot spare pools
metaset – administers disk sets