Symantec Technical Network White Paper

WHITE PAPER: HOW-TO

How to configure failover of Solaris non-global zones using Storage Foundation for Oracle/HA

Procedural steps to set up a highly available Oracle application in a Solaris non-global zone environment

Julie Luckcuck, Principal SQA Engineer
Software Integration and Function Testing group



Contents

Introduction to High Availability Zones
About high availability of Solaris Containers
About planning for clustered zones
About creating and installing Solaris Containers
Summary
Where to get more information
Appendix


Introduction to High Availability Zones

Clustering global and non-global zones is one way to create a high availability environment. When a particular section of a cluster fails, its workload is transferred to the remaining active zones in the cluster. Non-global zones appear to exist as stand-alone systems that can run applications independently of the global zone. Each non-global zone runs independently of other non-global zones running on the same host. The terms "non-global zone", "local zone", and "Solaris container" are synonymous for the purposes of this white paper.

About high availability of Solaris Containers

Maximizing host hardware usage and application uptime is a top priority for most data centers. The following pages describe in detail how to achieve the greatest use of host hardware and the longest uptime. They do not describe several underlying concepts, such as how local zones function within a global zone.

Creating a non-global zone high availability cluster involves the following overview steps:

1. Read the "Planning" section of this white paper and verify the clustered environment is ready for non-global zone configuration.
2. Create non-global zones on clustered nodes.
3. Install Oracle in the non-global zones.
4. Configure Oracle to run in a non-global zone and create a test database.
5. Configure the Cluster Server Zone agent for failover of non-global zones to an alternate host in the cluster.
6. Create a symbolic link in an existing NetBackup client to back up the data on the non-global zone.

Sample diagram of proposed configuration:


Diagram Summary:

Node A with Zone1 installed and configured on external shared disk
Node B with Zone2 installed and configured on external shared disk
Node B is the failover host for Zone1 and its Oracle database
Node A is the failover host for Zone2 and its Oracle database
Oracle binaries are installed in Zone1 and Zone2, which can be deported and imported
The Oracle databases are installed on external disks that can be deported and imported independently of Zone1 and Zone2

High Availability failover summary of Zone1:


The Veritas Cluster Server Oracle agent, acting on the main.cf configuration, brings down the Oracle database and Zone1 on Node A
Veritas Cluster Server deports the disk groups for the Oracle database and Zone1 from Node A
Veritas Cluster Server imports the disk group for Zone1 on Node B and initializes the zone
Veritas Cluster Server imports the Oracle database disk group on Node B
The Veritas Cluster Server Oracle agent brings up the database in Zone1, now running on Node B
Zone1 and Zone2, including their Oracle databases, run on Node B

About planning for clustered zones

Planning for clustered zones includes meeting specific hardware, software, and networking requirements. These requirements must be met before attempting to create and configure private and shared non-global zones. If the minimum host hardware requirements are not met, zone creation will fail. If the minimum software requirements are not met, zone creation cannot start. If the minimum networking requirements are not met, zone and Oracle high availability will fail.

Prerequisites

The following prerequisites must be completed before beginning zone creation on any node in the cluster:

All nodes hosting applications configured in zones must be running the same version of the operating system.

Install and configure Storage Foundation for Oracle high availability in the global zone of all nodes before creating local zones.

Reserve unique public IP addresses and hostnames. Network configuration for non-global zones is initiated after the installation process completes.

Reserve a unique public IP address for the configuration of Storage Foundation for Oracle High Availability. The IP address is used for the high availability failover of the Oracle database between non-global zones.

Each shared disk must be a minimum of 6 GB (1 GB for the sparse operating system used by the zone; 5 GB for the Oracle binary).


Each non-global zone must have a minimum of one CPU and 1 GB of memory assigned to it from its host. For example, if each host in a two-node cluster has 6 CPUs and 6 GB of memory, a total of six zones could be created: three zones (3 GB) on each host, leaving 3 GB free on each host so that the other node's non-global zones, or containers, can fail over to it.

Tasks related to setup

The following configuration notes and tasks need to be reviewed and completed when setting up the environment:

The Solaris 10 /etc/system file on all nodes in the cluster must contain the entry set noexec_user_stack=1. If it is not set, Oracle's Database Configuration Assistant will report an error when trying to configure a test database.

Veritas Cluster Server supports UNIX File System (UFS) and Veritas File System (VxFS) mounts for the zone root; Veritas Cluster Server does not support cluster file system mounts for the zone root.

The Veritas Cluster Server and Veritas Storage Services install directories must be defined in the inherit-pkg-dir parameter in the zone configuration (see the sketch at the end of this list).

All application mounts must be part of the zone configuration, and must be configured in the Veritas Cluster Server service group.

Mount points used by applications running in zones must be set relative to the zone's root. You must also set the MountPoint attribute of the Mount resource to this path.

Verify the external disks are attached to the cluster and that each node recognizes the same disks using the format command.

You can create an ID with superuser privileges or use the root ID to enter the commands provided in this white paper. Superuser, in the context of this white paper, is synonymous with the root ID and its functions.
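The following is a minimal zonecfg sketch of the inherit-pkg-dir entries described in this list. The directories shown (/opt/VRTSvcs and /opt/VRTS) are assumptions for illustration only; substitute the directories where Veritas Cluster Server and the Veritas storage software are actually installed on your hosts. Note that inherit-pkg-dir entries can only be added before a zone is installed.

zonecfg -z Zone1
add inherit-pkg-dir
set dir=/opt/VRTSvcs
end
add inherit-pkg-dir
set dir=/opt/VRTS
end
commit
exit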


About creating and installing Solaris Containers

Creating and installing non-global zones can begin after the hardware, software, and networking requirements have been met. The hosts must be members of a Veritas cluster. First the container for the non-global zone is created. The installation process then copies the required sections of the operating system into the container. Lastly, the zone is booted. The zonecfg and zoneadm commands perform these configuration, installation, and boot steps. Once the non-global zone boots, it can run applications inside the local zone.

Review the diagram VCS Cluster1 to determine which host in your cluster will be Node A and which will be Node B. The steps below expect you to substitute your host names where Node A and Node B appear.

Creating volumes on shared disks

1. Verify Veritas Cluster Server high availability is configured in the global zone on both nodes and that both are in the RUNNING state. The output should show the clustered hosts in a RUNNING state. Execute the following command as superuser on one system in the cluster:

hastatus -sum

2. Select one disk that is recognized by Volume Manager and by all hosts in the cluster to use for the local zones. The output of the Volume Manager command should show a list of possible disks. Execute the following command as superuser on all systems in the cluster:

vxdisk list

3. Using Node A create a volume manager disk group using the selected disk. Choose a name that can be visibly associated with Zone1. There will be no visible output of the command. Execute the following command as superuser:

vxdg init Zone1_dg_name disk_name=disk_selected_from_list


4. Mirroring the disks in a disk group is recommended, but not required. To mirror the disks you must add a second disk to the existing disk group. There will be no visible output of the command. Execute the following command on one line as superuser:

vxdg -g Zone1_dg_name adddisk Zone1_disk_name=second_disk_selected_from_list

5. Display the disks on all nodes in the cluster to verify they recognize the disk group created in steps 3 and 4. Expected output is a list of disks, two of which should have the disk group name used in steps 3 and 4. Execute the following command as superuser:

vxdisk -o alldgs list

6. Repeat steps 2-5 on the alternate node in the cluster. Choose a dg_name that can be visibly associated with Zone2.

7. Create a mirrored volume on the Zone1 disk group using Node A. There will be no visible output of the command. Execute the following command as superuser:

vxassist -g Zone1_dg_name make Zone1_volume_name 6g layout=mirror

8. Create a mirrored volume on the Zone2 disk group using Node B. There will be no visible output of the command. Execute the following command as superuser:

vxassist -g Zone2_dg_name make Zone2_volume_name 6g layout=mirror

9. Create a file system for the Zone1 volume using Node A. Output after the command has been entered will state the number of blocks formatted for use. Execute the following command as superuser:


mkfs -F vxfs /dev/vx/rdsk/Zone1_dg_name/Zone1_volume_name

10. Create a file system for the Zone2 volume using Node B. Output after the command has been entered will state the number of blocks formatted for use. Execute the following command as superuser:

mkfs -F vxfs /dev/vx/rdsk/Zone2_dg_name/Zone2_volume_name

11. Use the steps 2 through 10 as a guideline for creating disk groups, volumes, and file systems for the Oracle databases. Create the resources for the Oracle database in Zone1 on Node A. Execute the following commands as superuser on Node A:

vxdisk -o alldgs list
vxdg init Zone1_oracle_dg_name oracle_disk_name=disk_selected_from_list
vxdg -g Zone1_oracle_dg_name adddisk oracle_disk2_name=second_disk_selected_from_list
vxassist -g Zone1_oracle_dg_name make Zone1_oracle_volume_name 6g layout=mirror
mkfs -F vxfs /dev/vx/rdsk/Zone1_oracle_dg_name/Zone1_oracle_volume_name

12. Use the same procedures described in Step 11 to create a disk group, volume, and file system for a database to run in Zone2. Execute the following commands as superuser on Node B:

vxdisk -o alldgs list
vxdg init Zone2_oracle_dg_name oracle_disk_name=disk_selected_from_list
vxdg -g Zone2_oracle_dg_name adddisk oracle_disk2_name=second_disk_selected_from_list
vxassist -g Zone2_oracle_dg_name make Zone2_oracle_volume_name 6g layout=mirror
mkfs -F vxfs /dev/vx/rdsk/Zone2_oracle_dg_name/Zone2_oracle_volume_name

13. Create directories to use as mount points for the file systems of both zones. Execute the following commands as superuser from all the nodes in the cluster:

mkdir -p /zones/Zone1
mkdir -p /zones/Zone2
mkdir -p /oracle/oradataz1
mkdir -p /oracle/oradataz2

14. Mount the Zone1 volume on Node A. Execute the following command as superuser on Node A:

mount -F vxfs /dev/vx/dsk/Zone1_dg_name/Zone1_volume_name /zones/Zone1

15. Mount the Zone1 Oracle database volume on Node A. Execute the following command as superuser on Node A. There will be no visible output of the command:

mount -F vxfs /dev/vx/dsk/Zone1_oracle_dg_name/Zone1_oracle_volume_name /oracle/oradataz1

16. Mount the Zone2 volume on Node B. Execute the following command as superuser on Node B:

mount -F vxfs /dev/vx/dsk/Zone2_dg_name/Zone2_volume_name /zones/Zone2


17. Mount the Zone2 Oracle database volume on Node B. Execute the following command as superuser on Node B. There will be no visible output of the command:

mount -F vxfs /dev/vx/dsk/Zone2_oracle_dg_name/Zone2_oracle_volume_name /oracle/oradataz2

18. Change the permissions of the zone directories to 700. Execute the following command as superuser on all nodes in the cluster:

chmod 700 /zones/Zone1
chmod 700 /zones/Zone2

19. Verify the permissions changed to 700. The output should show the Zone1 and Zone2 directories as read, write, and execute for the root ID only (see the illustrative listing below). Execute the following command as superuser on all nodes in the cluster:

ls -l /zones
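Illustrative output (the ownership and mode are what matter; the link counts, sizes, and dates on your systems will differ):

drwx------   2 root     root         512 Nov  8 10:04 Zone1
drwx------   2 root     root         512 Nov  8 10:05 Zone2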

Creating and configuring Solaris Containers

After the required resources are in place to house the zones, the zones can be created, installed, and started. Once active on the public network, they act as their own independent operating system instances. The Oracle binary can then be installed in the running zone.

1. Initialize the zone creation process on Node A using the zonecfg command. Execute the following command as superuser:

zonecfg -z Zone1 <Zone1 is the hostname assigned to the reserved IP address>


2. Once the zone configuration creation process has initiated, the host will remain in a zone configuration shell until you exit that shell. Enter the following command as superuser:

create

3. Define the location of the zone you are creating. Enter the following command as superuser:

set zonepath=/zones/Zone1 <mount point path for Zone1>

4. Add a public IP address for Zone1. Use the public network address associated with the Zone1 hostname reserved as part of the planning process.

add net
set address=public_ip_for_zone
set physical=public_port_interface
end

5. Set the autoboot property for the zone to false, then commit the configuration and exit the zonecfg shell:

set autoboot=false
commit
exit

6. Install the first zone on Node A. Enter the install command as superuser. Installation will end when the location of the log is printed and the superuser prompt appears.

zoneadm -z Zone1 install

7. List the zone on Node A to verify it is in the installed state:


zoneadm list -cv

8. Start the zone on Node A. Execute the following command as superuser from the global zone:

zoneadm -z Zone1 boot

9. Log in to the zone console and answer the menu-driven prompts to complete the new host configuration. Connect to the zone console as superuser from the global zone:

zlogin -C Zone1

10. Edit /etc/default/login to comment out the CONSOLE entry so that telnet access as superuser is allowed (see the example after the shutdown command below). Then use a standard Solaris command to shut down the zone. Execute the following command as superuser while logged in to the zone:

shutdown -y -i0 -g0
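For reference, the CONSOLE entry mentioned in step 10 looks like this before the edit:

CONSOLE=/dev/console

and like this after it is commented out to permit remote (telnet) root logins:

#CONSOLE=/dev/console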

11. Use the tilde and full stop characters (~.) to exit the zone console.

12. While logged in as superuser on Node A, unmount the local zone file system, Zone1, from the global zone. Execute the following command from the global zone:

umount /zones/Zone1

13. Deport the Zone1 disk group from Node A. Execute the following command as superuser from the global zone:

vxdg deport Zone1_dg_name

14. Configure and install a second zone on Node B. Refer to steps 1-13, substituting Zone2 where Zone1 specifics appear.


Configuring Solaris Non-global zone failover

Non-global zones can now be configured in Veritas Cluster Server to run on either node in the cluster. An entry is added to the /etc/zones/index file when a non-global zone is configured. That file is then modified by the operating system when the zone is installed. A configuration for each zone must be created on the alternate node, and the /etc/zones/index file updated there to include those zones.

1. Import the Zone1 disk group on Node B. Execute the following command as superuser:

vxdg import Zone1_dg_name

2. Import the Zone2 disk group on Node A. Execute the following command as superuser:

vxdg import Zone2_dg_name

3. Start the volumes in the imported disk groups. Execute the following command as superuser from the global zone on both nodes in the cluster:

vxrecover -s

4. Mount the Zone1 file system associated with the volume. Execute the following command as superuser from the global zone on Node B:

mount -F vxfs /dev/vx/dsk/Zone1_dg_name/Zone1_volume_name /zones/Zone1

5. Mount the Zone2 file system associated with the volume. Execute the following command as superuser from the global zone on Node A:


mount -F vxfs /dev/vx/dsk/Zone2_dg_name/Zone2_volume_name /zones/Zone2

6. Create a backup copy of the /etc/zones/index file. Execute the following command as superuser from the global zones on both nodes in the cluster:

cp /etc/zones/index /etc/zones/index.backup

7. Follow steps 1-5 in the section "Creating and configuring Solaris Containers" to create the Zone1 configuration on Node B. After the configuration is created, edit the /etc/zones/index file: locate the entry for Zone1 and change the word configured to installed.

8. Follow steps 1-5 in the section "Creating and configuring Solaris Containers" to create the Zone2 configuration on Node A. After the configuration is created, edit the /etc/zones/index file: locate the entry for Zone2 and change the word configured to installed. (An illustrative index entry follows.)
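As an illustration of the edit in steps 7 and 8, an /etc/zones/index entry created by zonecfg looks similar to the following (the zone path must match your configuration, and some Solaris 10 updates append a UUID field):

Zone1:configured:/zones/Zone1

After the manual edit, the same entry reads:

Zone1:installed:/zones/Zone1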

9. Start Zone1 on Node B. Execute the following command as superuser from the global zone:

zoneadm -z Zone1 boot

10. Start Zone2 on Node A. Execute the following command as superuser from the global zone:

zoneadm -z Zone2 boot

11. Verify the non-global zones started by displaying their status using the zoneadm command. The output of the command should show the zone in running status. Execute the following command as superuser from the global zone on both nodes in the cluster:

zoneadm list -cv


12. Telnet to Zone1 and log in as superuser to verify that its public network is functional, then shut down the zone. Execute the following command as superuser while logged in to Zone1 (hosted on Node B):

shutdown -y -i0 -g0

13. Telnet to Zone2 and log in as superuser to verify that its public network is functional, then shut down the zone. Execute the following command as superuser while logged in to Zone2 (hosted on Node A):

shutdown -y -i0 -g0

14. Unmount the Zone1 file system from the global zone of Node B. Execute the following command as superuser from the global zone on Node B:

umount /dev/vx/dsk/Zone1_dg_name/Zone1_volume_name

15. Unmount the Zone2 file system from the global zone of Node A. Execute the following command as superuser from the global zone on Node A:

umount /dev/vx/dsk/Zone2_dg_name/Zone2_volume_name

16. Deport the Zone1 disk group. Execute the following command as superuser from the global zone on Node B:

vxdg deport Zone1_dg_name

17. Deport the Zone2 disk group. Execute the following command as superuser from the global zone on Node A:

vxdg deport Zone2_dg_name


18. Import the Zone1 disk group on Node A. Execute the following command as superuser from the global zone on Node A:

vxdg import Zone1_dg_name

19. Import the Zone2 disk group on Node B. Execute the following command as superuser from the global zone on Node B:

vxdg import Zone2_dg_name

20. Start the volumes in the imported disk groups. Execute the following command as superuser from the global zone on both nodes in the cluster:

vxrecover -s

21. Mount the Zone1 file system associated with the volume. Execute the following command as superuser from the global zone on Node A:

mount -F vxfs /dev/vx/dsk/Zone1_dg_name/Zone1_volume_name /zones/Zone1

22. Mount the Zone2 file system associated with the volume. Execute the following command as superuser from the global zone on Node B:

mount -F vxfs /dev/vx/dsk/Zone2_dg_name/Zone2_volume_name /zones/Zone2

23. Start Zone1 on Node A. Execute the following command as superuser from the global zone:

zoneadm -z Zone1 boot


24. Start Zone2 on Node B. Execute the following command as superuser from the global zone:

zoneadm -z Zone2 boot

25. Verify the non-global zones started by displaying their status using the zoneadm command. The output of the command should show the zone in running status. Execute the following command as superuser from the global zone on both nodes in the cluster:

zoneadm list -cv

Installing Oracle in the non-global zones

Several system administration functions must be performed on all zones before installing the Oracle application. Other functions need only be performed on the non-global zones.

1. Create a directory for the Oracle home on both non-global zones. Execute the following command as superuser on Zone1 and Zone2:

mkdir -p /export/home/oracle

2. Create the groups for oinstall and dba. Execute the following commands as superuser on Zone1 and Zone2:

groupadd -g 100 oinstall
groupadd -g 101 dba

3. Create the Oracle user id. Execute the following commands as superuser on Zone1 and Zone2:

useradd -g 100 -u 100 -d /export/home/oracle oracle


4. Edit /etc/group to add root and oracle to the oinstall group, and add oracle to the dba group (see the example entries below).
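After the edit in step 4, the relevant /etc/group entries should look similar to the following (the group IDs match those used in step 2):

oinstall::100:root,oracle
dba::101:oracle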

5. On Zone1 and Zone2 create a .rhosts file in the root directory to authorize access for the Oracle binary installation. The file should be owned by the superuser id.

6. On Zone1 and Zone2 create a .rhosts file in the /export/home/oracle directory to authorize access for the Oracle binary installation. The file should be owned by the oracle user ID. (Illustrative .rhosts entries follow.)
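The following is a minimal sketch of the .rhosts content for steps 5 and 6. Which hostnames need to be authorized depends on where the Oracle installer is run from, so the entries below are illustrative assumptions only.

/.rhosts (owned by root):

Zone1 root
Zone2 root

/export/home/oracle/.rhosts (owned by oracle):

Zone1 oracle
Zone2 oracle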

7. Set the password for oracle. Execute the following command as superuser on Zone1 and Zone2:

passwd oracle

8. Using the oracle ID, create the directories for the Oracle installation on Zone1 and Zone2 (a sketch follows).
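A minimal sketch of step 8, assuming an Oracle 10g layout under the oracle user's home directory; the ORACLE_BASE and ORACLE_HOME paths shown are hypothetical and should match your site's conventions:

su - oracle
mkdir -p /export/home/oracle/orabase
mkdir -p /export/home/oracle/orabase/product/10.2.0/db_1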

9. Install the Oracle binaries identically on Zone1 and Zone2.

10. After the Oracle binaries are installed remove the ODM library that is bundled with the Oracle application. Execute the following command in the global zone on both nodes in the cluster:

rm oracle_home/lib/libodm10.so

11. Link the VERITAS Extension for Oracle Disk Manager library into the Oracle 10g home directory. Execute the following command in the global zone on both nodes in the cluster:

ln -s /opt/VRTSodm/lib/sparcv9/libodm.so oracle_home/lib/libodm10.so


12. Use the zonecfg command to add the ODM license directory to the non-global zones. Execute the following commands in the global zone on both nodes in the cluster, repeating them with Zone2 substituted for the second zone:

zonecfg -z Zone1
add fs
set dir=/etc/vx/licenses/lic
set special=/etc/vx/licenses/lic
set type=lofs
end
exit

13. Create a directory in the global zone of Zone1 to mount /dev/odm in the local zone. Execute the following command in the global zone on Node A:

mkdir -p /zones/Zone1/dev/odm

14. Create a directory in the global zone of Zone2 to mount /dev/odm in the local zone. Execute the following command in the global zone on Node B:

mkdir -p /zones/Zone2/dev/odm

15. Restart the local zones on both nodes in the cluster and verify /dev/odm is mounted. Output of the command should show the device is mounted. Execute the following command on Zone1 and Zone2 as the superuser:

mount | grep -i odm

Configuring Oracle to run in non-global zones and creating a test database

A pool of host resources must be assigned to each zone and the id oracle must be assigned authority to use those resources. If adequate host resources are not assigned to each zone then the Oracle application may not be able to start.


1. To create the pool you must first create a script and then call the script using the poolcfg command. As superuser create a script called /tmp/Zone1_pool in the global zone on both nodes in the cluster. The following script assigns one CPU to each non-global zone:

create pset pset_Zone1 ( uint pset.min = 1 ; uint pset.max = 1 )
create pool pool_Zone1
associate pool pool_Zone1 ( pset pset_Zone1 )

2. Create a pool of resources for Zone2. As superuser create a script called /tmp/Zone2_pool in the global zone on both nodes in the cluster. The following script assigns one CPU to the non-global zone:

create pset pset_Zone2 ( uint pset.min = 1 ; uint pset.max = 1 )
create pool pool_Zone2
associate pool pool_Zone2 ( pset pset_Zone2 )

3. Enable the pool facility. Execute the following command as superuser in the global zone of all nodes in the cluster:

pooladm -e

4. Save the current dynamic pool configuration to /etc/pooladm.conf. Execute the following command as superuser in the global zone of all nodes in the cluster:

pooladm -s

5. As superuser execute the pool creation scripts in the global zone of all nodes in the cluster:

poolcfg -f /tmp/Zone1_pool
poolcfg -f /tmp/Zone2_pool


6. Commit (activate) the configuration contained in /etc/pooladm.conf. Execute the following command as superuser in the global zone of all nodes in the cluster:

pooladm -c /etc/pooladm.conf

7. Verify that the output of the pooladm command displays the pool and pset created for Zone1 and Zone2. Execute the following command as superuser in the global zone of all the nodes in the cluster:

pooladm

8. Create a new project called oracle on all of the global and non-global zones. Execute the following command as superuser on all nodes and in all zones of the cluster:

projadd oracle

9. Using the superuser id, edit the /etc/user_attr file on all of the global and non-global zones. Change the line oracle:::: to oracle::::project=oracle. If this line doesn’t exist then add it manually using your editor.

10. Verify the oracle user’s project changed. The output should display the project id of the oracle user id. Execute the following command as superuser on all nodes and in all zones of the cluster:

id -p oracle

11. In separate terminal windows, log in to all global and non-global zones using the oracle ID. At least one process must be running under the oracle project in order to apply resource controls.

12. Return to the windows where root is logged in to each of the global and non-global zones to modify the resource controls. Execute the following command as superuser on all nodes and in all zones of the cluster:


projmod -s -K "project.max-shm-memory=(priv,4gb,deny)" oracle

13. The file system which will house the database must be mounted in the global and local zone. This is done by mounting the file system in the global zone. Do not attempt to mount the file system with the mount command in the non-global zone. Execute the following commands as superuser in the global zone of all nodes in the cluster:

mkdir -p /oracle/oradataz1
mkdir -p /oracle/oradataz2

14. Change the ownership of the Zone1 oracle volume to oracle. Execute the following command as superuser in the global zone of Node A:

vxedit -g Zone1_oracle_dg_name set user=oracle group=oinstall mode=755 Zone1_oracle_volume_name

15. Change the ownership of the Zone2 oracle volume to oracle. Execute the following command as superuser in the global zone of Node B:

vxedit -g Zone2_oracle_dg_name set user=oracle group=oinstall mode=755 Zone2_oracle_volume_name

16. Mount the volume that will house the database for Zone1 in the global zone. Execute the following command as superuser in the global zone of Node A:

mount -F vxfs /dev/vx/dsk/Zone1_oracle_dg_name/Zone1_oracle_volume_name /oracle/oradataz1


17. Mount the volume that will house the database for Zone2 in the global zone. Execute the following command as superuser in the global zone of Node B:

mount -F vxfs /dev/vx/dsk/Zone2_oracle_dg_name/Zone2_oracle_volume_name /oracle/oradataz2

18. Use the global zone to add the file system mount point for the Zone1_oracle_volume mount point to the non-global Zone1. The file system will be added to the non-global zone as a loop back file system. Execute the following commands as superuser in the global zone of Node A:

zonecfg -z Zone1
add fs
set dir=/oracle/oradataz1
set special=/oracle/oradataz1
set type=lofs
end
exit

19. Verify the entry was added by printing the configuration of the zone. Execute the following command as superuser in the global zone of Node A:

zonecfg -z Zone1 export

20. Use the global zone to add the file system mount point for the Zone2_oracle_volume mount point to the non-global Zone2. The file system will be added to the non-global zone as a loop back file system. Execute the following commands as superuser in the global zone of Node B:

zonecfg -z Zone2
add fs
set dir=/oracle/oradataz2
set special=/oracle/oradataz2
set type=lofs
end
exit

21. Verify the entry was added by printing the configuration of the zone. Execute the following command as superuser in the global zone of Node B:

zonecfg -z Zone2 export

22. Restart Zone1 on Node A. When the zone boots, the zone configuration automatically creates the directories in the local zone and mounts the file system from the global zone into the local zone.

23. Restart Zone2 on Node B. When the zone boots, the zone configuration automatically creates the directories in the local zone and mounts the file system from the global zone into the local zone. (A sketch of the restart commands follows.)
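One way to perform the restarts in steps 22 and 23, assuming the zones are currently running, is to reboot them with zoneadm from the global zone of each node:

zoneadm -z Zone1 reboot <from the global zone of Node A>
zoneadm -z Zone2 reboot <from the global zone of Node B>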

24. After successful reboot of the non-global zones verify that /oracle/oradataz1 is mounted on Zone1. Execute the following command as superuser in the non-global zone of Node A:

mount | grep oradata

25. After successful reboot of the non-global zones verify that /oracle/oradataz2 is mounted on Zone2. Execute the following command as superuser in the non-global zone of Node B:

mount | grep oradata


26. Create a profile for the oracle user ID on both zones in the cluster. Verify the environment variables ORACLE_BASE, ORACLE_HOME, and ORACLE_SID are defined in the file /export/home/oracle/.profile (a sketch follows).
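A minimal .profile sketch for step 26. The ORACLE_HOME path and SID shown are assumptions for illustration; set them to the values used by your installation (the sample main.cf later in this paper uses the SID orae10b):

ORACLE_BASE=/export/home/oracle/orabase
ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
ORACLE_SID=orae10b
PATH=$ORACLE_HOME/bin:/usr/bin:/usr/sbin
export ORACLE_BASE ORACLE_HOME ORACLE_SID PATH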

27. Log in to Zone1 as oracle. Use Oracle’s dbca tool executed in Zone1 to create a test database in the /oracle/oradataz1 directory. The dbca tool will exit stating that the installation is successful.

28. Log in to Zone2 as oracle. Use Oracle’s dbca tool executed in Zone2 to create a test database in the /oracle/oradataz2 directory. The dbca tool will exit stating that the installation is successful.

29. The databases can be started using SQL commands after they have been successfully built. While logged into Zone1 and Zone2 as oracle, connect to the database. Execute the following command as oracle on Zone1 and Zone2:

sqlplus "/ as sysdba"

30. Start the database on Zone1. Execute the following command as oracle from the SQL prompt:

startup;

31. Verify the ODM library is linked by finding the message "Oracle instance running with ODM" in the Oracle alert log.

Planning for zone and application failover


Planning for zone and application failover entails running a script that adds a resource type of zone to the application service group. If there is no pre-existing application service group the script will create one. When the script completes the service group must be modified to enable monitoring of an application running in a zone. The script also creates a user account with group administrative privileges to enable inter-zone communication.

Before running the script, review the user-defined parameters that it requires:

servicegroup_name represents the name of the application service group.
zoneres_name represents the name of the resource configured to monitor the zone.
zone_name represents the name of the zone.
password represents the password to be assigned to the Veritas Cluster Server user created by the command.
systems represents the list of systems on which the service group will be configured. Use this option only when creating the service group.

Configuring non-global zone and Oracle application for failover

Non-global zone and application failover between hosts in a cluster begins by executing the hazonesetup script. After the script completes you must edit the failover configuration file main.cf to add specifics about the application it is to manage.

After completion of the script the service group configuration must be modified to enable monitoring the application running in a zone.

1. Verify Veritas Cluster Server high availability is configured in the global zone on both nodes and that both are in the RUNNING state. Verify the output of the command shows the clustered hosts in a RUNNING state. Execute the following command as superuser on one system in the cluster:


hastatus -sum

2. Run the hazonesetup script to set up the zone configuration. Execute the following command in the global zone on Node A as superuser:

/opt/VRTSvcs/bin/hazonesetup service_group_name zone_resource_name zone_name password

3. In order to make the required changes to the failover configuration file main.cf you must stop the cluster. Execute the following command from any node in the cluster as superuser:

hastop -all

4. After the cluster is down, use Node A to edit the main.cf file located in the /etc/VRTSvcs/conf/config directory. Specify the ZoneName attribute of the Zone resource.

5. Sample main.cf entries:

include "types.cf"include "OracleTypes.cf"

cluster e10k (
    UserNames = { admin = INOgNInKOjOOmWOiNL, root = ajjNjmJgkF,
        zoneadmin = aHIkHQgPHfHCgFHnHKgP }
    Administrators = { admin, root, zoneadmin }
    CredRenewFrequency = 0
    CounterInterval = 5
    )

system e10kd1 ()


system e10kd2 ()

system e10kd3 ()

group orae10b_grp (
    SystemList = { e10kd1 = 0, e10kd2 = 1 }
    AutoStart = 0
    AutoStartList = { e10kd1, e10kd2 }
    )

DiskGroup orae10_dg (
    DiskGroup = orae10
    StartVolumes = 0
    StopVolumes = 0
    )

IP orae10b_listener_ip (
    Device = hme0
    Address = "10.182.70.11"
    )

Mount orae10vol_mount (
    MountPoint = "/oracle/oradata"
    BlockDevice = "/dev/vx/dsk/orae10/orae10vol"
    FsckOpt = "-y"
    )

NIC listener_NIC (
    Device = hme0
    )

Netlsnr orae10b_listener (
    Critical = 0
    Owner = oracle
    Home = "/oracle/ohome"
    TnsAdmin = "/oracle/ohome/network/admin"
    Listener = listener
    MonScript = "/opt/VRTSvcs/bin/Netlsnr/LsnrTest.pl"
    ContainerName = e1z0
    )

Oracle ora_e1z0 (
    Critical = 0
    Sid = orae10b
    Owner = oracle
    Home = "/oracle/ohome"
    Pfile = "/oracle/oradata/orae10b/pfileorae10b.ora"
    MonScript = "/opt/VRTSvcs/bin/Oracle/SqlTest.pl"
    ContainerName = e1z0
    )

Volume orae10_vol (
    Volume = orae10vol
    DiskGroup = orae10
    )

Zone ora_zone_e1z0 (
    ZoneName = e1z0
    )

ora_e1z0 requires ora_zone_e1z0
ora_zone_e1z0 requires orae10vol_mount
orae10_vol requires orae10_dg
orae10b_listener requires ora_e1z0
orae10b_listener requires orae10b_listener_ip
orae10vol_mount requires orae10_vol


// resource dependency tree
//
// group orae10b_grp
// {
// NIC listener_NIC
// Netlsnr orae10b_listener
//     {
//     Oracle ora_e1z0
//         {
//         Zone ora_zone_e1z0
//             {
//             Mount orae10vol_mount
//                 {
//                 Volume orae10_vol
//                     {
//                     DiskGroup orae10_dg
//                     }
//                 }
//             }
//         }
//     IP orae10b_listener_ip
//     }
// }

6. Continue editing the main.cf file. Set the ContainerName attribute for the application resource to the name of the zone in which the application runs. Save the configuration file and run the following command as superuser from the /etc/VRTSvcs/conf/config directory. There will be no visible output from the command if the configuration verification passes the syntax checking successfully:


hacf -verify .

Any other changes and additions can be made to the main.cf at this time. Recommended additions to the main.cf include disk groups, volumes, and file system mount points. The main.cf can also manage the Oracle resources, including the disk groups, volumes, and file system mount points for the databases.

7. Save the cluster configuration file and verify it again after any additional edits. Run the following command as superuser from the /etc/VRTSvcs/conf/config directory:

hacf -verify .

8. Start the cluster. Execute the following command in the global zone from Node A as superuser:

hastart

9. After Node A is in the running state, start the cluster on Node B as the superuser:

hastart

10. Verify the Zone Configuration. Execute the following command in the global zone on any node in the cluster as superuser:

hazoneverify service_group_name

Configuring the NetBackup Client for Solaris Containers

When NetBackup 5.1MP4 was released to the general public, not many customers were running Solaris 10 with non-global zones configured. Official support for this environment emerged in later releases of NetBackup.


Currently an unsupported workaround exists for implementing NetBackup 5.1MP4 agents in non-global zones. The following procedure describes the workaround for a known problem that is fixed in more recent releases of the product. You are encouraged to upgrade to a later version of NetBackup (5.1MP5, 5.1MP6, or later), which supports Solaris non-global zones. To implement the proposed workaround you must symbolically link a directory in the global zone to a similar directory in the non-global zone.

1. Create a symbolic link from /usr/openv to /var/openv. Execute the following command in the global zone on all nodes in the cluster as superuser:

ln -s /var/openv /usr/openv

2. Create the /var/openv directory. Execute the following command in all non-global zones in the cluster as superuser:

mkdir /var/openv

3. Verify that all non-global zones have a .rhosts entry created by the superuser on the NetBackup master server (illustrative entries follow).
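As an illustration of step 3, the /.rhosts file on the NetBackup master server would contain one entry per non-global zone hostname; the names below are the example zones used throughout this paper:

Zone1 root
Zone2 root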

4. Install the UNIX client software from the master to the clients.

Your NBU clients should install without error and perform properly.


Summary

Setting up a highly available application in a non-global zone environment can be a challenging and rewarding experience. When all of the planning items are addressed, the time spent creating the environment described in this white paper is minimal. Non-global zones allow you to better utilize and manage your host's resources. Application high availability has been proven to significantly reduce downtime. The ability to compartmentalize a host's workload into different non-global zones can ease the administration and management of applications. The combination of Solaris containers, Veritas Cluster Server, and NetBackup as described in this paper is a viable solution to the availability challenges faced by data centers today.

Where to get more information

http://www.symantec.com

Appendix

Reference 1: Best Practices for Running Oracle Databases in Solaris Containers, Ritu Kamboj and Fernando Castano, September 2005.
Reference 2: Solaris 10 System Administrator Collection, System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
Reference 3: Veritas Cluster Server 4.1 User's Guide, Solaris.


About Symantec

Symantec is a global leader in infrastructure software, enabling businesses and consumers to have confidence in a connected world. The company helps customers protect their infrastructure, information, and interactions by delivering software and services that address risks to security, availability, compliance, and performance. Headquartered in Cupertino, Calif., Symantec has operations in 40 countries. More information is available at www.symantec.com.

For specific country offices and contact numbers, please visit our Web site. For product information in the U.S., call toll-free 1 (800) 745 6054.

Symantec Corporation
World Headquarters
20330 Stevens Creek Boulevard
Cupertino, CA 95014 USA
+1 (408) 517 8000
1 (800) 721 3934

Copyright © 2007 Symantec Corporation. All rights reserved. Symantec and the Symantec logo are trademarks or registered trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Other names may be trademarks of their respective owners.



www.symantec.com