
Support for I/O Fencing in a VMware Environment

About this technote

This technote provides information about I/O fencing support for Storage Foundation High Availability (SFHA) and Storage Foundation Cluster File System (SFCFS) clusters of VMware Virtual Machines.

The technote includes the following topics:

Introduction to I/O fencing

Support for virtual environments prior to this technote

Support introduced by this technote

Required patches

About allocating Block Storage

Array requirements

Limitations

Introduction to I/O fencing

I/O fencing is a feature that prevents data corruption in the event of a communication failure in an SFHA or SFCFS cluster. I/O fencing consists of two components: Membership Arbitration and Data Protection.

Membership Arbitration allows only one of the multiple partitions of a cluster to continue operation in case of a network partition. The I/O fencing module uses coordination points such as SCSI3-compliant disks or Coordination Point Servers (CP Servers) for membership arbitration. At the time of a network partition, each partition races for the coordination points; the partition that grabs the majority of coordination points survives, whereas nodes from all other partitions panic.

Data Protection allows write access only for members of the cluster that survive after arbitration. It blocks non-members from accessing storage so that even a node that is accidentally still alive is unable to cause damage to data. Traditionally, I/O fencing uses SCSI3 Persistent Reservation (SCSI3-PR) to ensure that I/O operations from the losing node cannot reach a disk that the surviving partition has taken over.
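As an illustration only, once fencing is configured and running, the vxfenadm utility can be used to inspect the fencing mode, the cluster membership, and the SCSI3-PR registration keys that Data Protection relies on. The commands below are a sketch; the exact paths and output format depend on the SFHA/SFCFS release.

    # Display the I/O fencing mode and the current cluster membership
    vxfenadm -d

    # Read the SCSI3-PR registration keys from the coordinator disks listed in
    # /etc/vxfentab (the keys shown are specific to each cluster and node)
    vxfenadm -s all -f /etc/vxfentab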

For more details about I/O fencing, refer to the following documents:

I/O fencing White Paper on Symantec Connect

http://www.symantec.com/connect/articles/veritas-cluster-server-io-fencing-deployment-considerations


The following sections in the Veritas Cluster Server Installation Guide

About I/O fencing

https://sort.symantec.com/public/documents/vcs/5.1sp1/linux/productguides/html/vcs_install/ch01s03s03.htm

About configuring VCS clusters for data integrity

https://sort.symantec.com/public/documents/vcs/5.1sp1/linux/productguides/html/vcs_install/ch01s06.htm

Support for virtual environments prior to this technote

Prior to this technote, Symantec supported non-SCSI3 server-based I/O fencing in virtual environments. Non-SCSI3 fencing uses CP servers as coordination points, with some additional configuration changes to support I/O fencing in virtual environments.

Symantec did not support SCSI3 Persistent Reservations (SCSI3-PR) and SCSI3 I/O fencing in virtual environments.
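For context, server-based fencing is selected through the customized mode in the /etc/vxfenmode file on each cluster node, with the CP servers listed as coordination points. The fragment below is a hedged sketch of the common settings only; the additional parameters that non-SCSI3 fencing requires in virtual environments are documented in the installation guide, and the CP server address and port shown are placeholders.

    # /etc/vxfenmode -- server-based (CP server) fencing, common settings (sketch)
    vxfen_mode=customized
    vxfen_mechanism=cps
    # Coordination Point Server; 10.10.10.10 and 14250 are placeholder values
    cps1=[10.10.10.10]:14250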

Support introduced by this technote

This technote introduces support for SCSI3 I/O fencing with SFHA and SFCFS clusters of VMware Virtual Machines. Symantec now supports both disk-based I/O fencing [1] and server-based I/O fencing [2]. SCSI3 I/O fencing requires that all data and coordinator disks support SCSI3-PR.

Table 1: Supported versions of VMware ESX and SFHA/SFCFS clusters

VMware ESX Version              Corresponding SFHA/SFCFS Version
4.1 (see Required patches)      5.1 SP1 RP1 or later
5.0, 5.0U1                      5.1 SP1 RP1 or later, 6.0

[1] I/O fencing that uses coordinator disks is referred to as disk-based I/O fencing.
[2] I/O fencing that uses at least one CP server system is referred to as server-based I/O fencing.
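For reference, disk-based SCSI3 fencing is enabled through the /etc/vxfenmode and /etc/vxfendg files on each cluster node. The snippet below is a minimal sketch modeled on the sample files shipped with the product (for example, /etc/vxfen.d/vxfenmode_scsi3_dmp); the coordinator disk group name is a placeholder, and the exact settings for a given release are described in the installation guide.

    # /etc/vxfenmode -- disk-based SCSI3 fencing (sketch)
    vxfen_mode=scsi3
    # Disk access policy for the coordinator disks: dmp or raw
    scsi3_disk_policy=dmp

    # /etc/vxfendg -- name of the coordinator disk group (placeholder name)
    vxfencoorddg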


Supported Guest operating systems

Supported versions of Guest operating systems include those that are supported by:

The SFHA/SFCFS version that is deployed on VMware Virtual Machines

The list of supported operating systems for a particular SFHA or SFCFS version is mentioned in the release notes. The release notes are available at: https://sort.symantec.com/documents/

The VMware ESX version on which the Virtual Machines are deployed

The list of supported operating systems is mentioned in the VMware compatibility guide available at: http://www.vmware.com/resources/compatibility/search.php?deviceCategory=software

Required patches

VMware ESX 4.1 requires the following VMware patch:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2000604

About allocating Block Storage

Block Storage must be allocated to the Virtual Machines using Raw Device Mapping in Pass-through Mode (RDM-P).
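As a hypothetical illustration, a pass-through (physical compatibility) RDM can be created from the ESX host with vmkfstools and then added to each Virtual Machine in the cluster as an existing disk, typically on a dedicated SCSI controller configured for physical bus sharing. The device, datastore, and file names below are placeholders; the vSphere Client can also create the mapping directly when adding a hard disk to the Virtual Machine.

    # Create a pass-through RDM pointer file for a shared LUN (placeholder names)
    vmkfstools -z /vmfs/devices/disks/naa.60060160455025001234567890abcdef \
        /vmfs/volumes/datastore1/vm_a_1/shared-rdmp.vmdk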

Array requirements

The array used for housing the coordinator disks should be listed in the Symantec Hardware Compatibility List (HCL) for 5.1 SP1. It should also be listed under the Guest operating system running inside the VMware Virtual Machine. The HCL is available via the Symantec Operations Readiness Tools (SORT) website at:

https://sort.symantec.com/documents/

Confirming that the array meets the I/O fencing requirements

Confirm that the array meets the I/O fencing requirements by running the vxfentsthdw utility. The vxfentsthdw(1m) utility is available on the product disc in the cluster_server/tools directory. It is not necessary to install SFHA or SFCFS to run this utility.
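For example, assuming the utility has been copied from the cluster_server/tools directory on the product disc (or is available under an installed product path), a non-destructive check of the coordinator disks might look like the following sketch; the disk group name is a placeholder.

    # Read-only (non-destructive) SCSI3-PR test of all disks in the coordinator
    # disk group "vxfencoorddg" (placeholder name); the utility prompts for the
    # names of the two nodes to run the test from
    ./vxfentsthdw -r -g vxfencoorddg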

For more information, see the section “Testing the disks using vxfentsthdw utility” in the Veritas Cluster Server Installation Guide. Installation guides for various SFHA and SFCFS versions are available at:

https://sort.symantec.com/documents/


Limitations

The I/O fencing support for VMware Virtual Machines has the following limitations:

Each SFHA or SFCFS cluster should have no more than one Virtual Machine from each physical server. However, a physical server can host more than one Virtual Machine as long as they do not belong to the same cluster.

For example, consider two ESX servers, Server A and Server B, as depicted in figure 1 and figure 2, with two Virtual Machines created on each server: VM_A_1 and VM_A_2 on Server A, and VM_B_1 and VM_B_2 on Server B. Two clusters can be created: Cluster 1 with VM_A_1 and VM_B_1, and Cluster 2 with VM_A_2 and VM_B_2 (figure 1). However, I/O fencing does not support a cluster whose Virtual Machines reside on the same physical server. For example, I/O fencing does not support a cluster with the VM_A_1 and VM_A_2 Virtual Machines (figure 2) or a cluster with the VM_B_1 and VM_B_2 Virtual Machines.

One set of coordinator disks must be made available to only one Virtual Machine per ESX server.

A unique set of coordinator disks must be used for each cluster. Symantec does not support assigning coordinator disks of one cluster to another cluster.

VMware currently does not support SCSI3-PR with N-Port ID Virtualization (NPIV). Consequently, SCSI3 I/O fencing support in SFHA and SFCFS is restricted to block storage that is allocated in RDM-P mode.

Figure 1: Supported cluster configuration. ESX Server A hosts VM_A_1 and VM_A_2, and ESX Server B hosts VM_B_1 and VM_B_2. Cluster 1 spans VM_A_1 and VM_B_1, Cluster 2 spans VM_A_2 and VM_B_2, and each cluster uses its own coordinator disk group (coorddg1, coorddg2).


Figure 2: Unsupported cluster configuration. Cluster 1 is formed from VM_A_1 and VM_A_2 on ESX Server A, and Cluster 2 is formed from VM_B_1 and VM_B_2 on ESX Server B, so each cluster has both of its Virtual Machines on the same physical server; I/O fencing does not support this configuration.


About Symantec

Symantec is a global leader in providing security, storage and systems management solutions to help businesses and consumers secure and manage their information. Headquartered in Mountain View, Calif., Symantec has operations in 40 countries. More information is available at www.symantec.com.

For specific country offices and contact numbers, please visit our Web site. For product information in the U.S., call toll-free 1 (800) 745 6054.

Symantec Corporation World Headquarters
350 Ellis Street
Mountain View, CA 94043 USA
+1 (408) 517 8000
1 (800) 721 3934
www.symantec.com

Copyright © 2012 Symantec Corporation. All rights reserved. Symantec and the Symantec logo are trademarks or registered trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Other names may be trademarks of their respective owners.