

Oracle Database 11g Release 2 Real Application Clusters with SLVM/RAW on HP-UX Installation Cookbook Technical white paper

Table of contents

Executive Summary
Introduction
Audience
General System Installation Requirements
    Hardware Requirements
    Network Requirements
    Required HP-UX Patches
    Kernel Parameter Settings
Create the Oracle User
Oracle RAC 11g Cluster Preparation Steps
    SLVM Configuration
Preparation for Oracle Software Installation
    Prepare HP-UX Systems for Oracle software installation
    Check Cluster Configuration with Cluster Verification Utility
Install Oracle Clusterware 11gR2
Install and Create Oracle Database RAC 11gR2
Reference


Executive Summary

Oracle Real Application Clusters 11g Release 2 can be configured with various storage options, of which SLVM/RAW is one. This storage option provides many benefits to users: multipathing, an extra level of protection against inadvertent overwrites from nodes inside or outside the cluster, fast database recovery times, and fast cluster reformation times.

This white paper describes the installation of Oracle Real Application Clusters 11g Release 2 and SGeRAC A.11.19 with SLVM/RAW on HP-UX 11.31 HAOE.

Introduction

This white paper is intended to help with installing Oracle Real Application Clusters 11g Release 2 and SGeRAC A.11.19 with SLVM/RAW on HP Integrity servers running the HP-UX 11.31 HAOE operating system. The Oracle RAC 11gR2 GUI does not provide an option to install RAC with SLVM/RAW; you have to follow some manual steps to install RAC 11gR2 with the SLVM/RAW storage option. This white paper describes all of those required steps.

Within this cookbook, all scenarios are based on a two-node cluster: node 1 is referred to as “bike” and node 2 as “cycle”.

Figure 1: Example of two node cluster for all scenarios in this whitepaper.

In this white paper, we use the following notation:

bike# <command> = command needs to be issued as root from node bike

cycle$ <command> = command needs to be issued as oracle from node cycle

bike/cycle # <command> = command needs to be issued as root from both nodes bike + cycle.


Audience

This white paper is intended for readers who wish to install and configure Oracle Real Application Clusters 11g Release 2 with SLVM/RAW on HP-UX.

General System Installation Requirements

Hardware Requirements

• At least 4 GB of physical RAM. Use the following command to verify the amount of memory installed on your system:

# /usr/contrib/bin/machinfo | grep -i Memory
or
# /usr/sbin/dmesg | grep "Physical:"

• Swap space proportional to the available RAM, as follows:

Available RAM              Swap Space Required
Between 4 GB and 8 GB      2 times the size of RAM
Between 8 GB and 32 GB     1.5 times the size of RAM
More than 32 GB            32 GB

Use the following command to determine the amount of swap space installed on your system:

# /usr/sbin/swapinfo -a
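The sizing table above can be turned into a quick check; the helper below is a hypothetical sketch (not part of the original paper) that takes RAM in whole GB and prints the swap the table requires:

```shell
#!/bin/sh
# required_swap_gb RAM_GB -> required swap in GB, per the table above
# (hypothetical helper; fractional GB values are not handled)
required_swap_gb() {
  ram=$1
  if [ "$ram" -lt 4 ]; then
    echo "below minimum: at least 4 GB of RAM is required"
  elif [ "$ram" -le 8 ]; then
    echo $((ram * 2))          # 4-8 GB: 2 x RAM
  elif [ "$ram" -le 32 ]; then
    echo $((ram * 3 / 2))      # 8-32 GB: 1.5 x RAM
  else
    echo 32                    # more than 32 GB: 32 GB is enough
  fi
}

required_swap_gb 16   # a 16 GB node needs 24 GB of swap
```

Compare the printed value against the `swapinfo -a` output on each node.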

• 1 GB of disk space in the /tmp directory. To determine the amount of disk space available in the /tmp directory, enter the following command:

# bdf /tmp

• The Oracle Clusterware (Grid) home requires 5 GB of disk space.
• 8.2 GB of disk space for the Oracle database binaries. You can determine the amount of free disk space on the system using:

# bdf

• At least 5.5 GB for Oracle Clusterware and Database files.
• Operating System: HP-UX 11.31 Itanium. To determine if you have a 64-bit configuration, enter the following command:

# /bin/getconf KERNEL_BITS

• Async I/O is required for Oracle on RAW devices and is configured on HP-UX 11.31 by default. You can check whether you have the following file:

# ll /dev/async

# crw-rw-rw- 1 bin bin 101 0x000000 Jun 9 09:38 /dev/async

Use the following group privileges to establish access to /dev/async, leaving the protection at the default:

# getprivgrp

global privileges: CHOWN

dba: RTPRIO MLOCK RTSCHED

oinstall: RTPRIO MLOCK RTSCHED


• If you want to use Oracle on RAW devices and Async I/O is not configured, then:
– Create the /dev/async character device:

# /sbin/mknod /dev/async c 101 0x0

# chown oracle:dba /dev/async

# chmod 660 /dev/async

– Configure the async driver in the kernel using SAM or HP SMS.
– Set the HP-UX kernel parameter max_async_ports using SAM. max_async_ports limits the maximum number of processes that can concurrently use /dev/async. Set this parameter to the sum of “processes” from init.ora plus the number of background processes. If max_async_ports is reached, subsequent processes will use synchronous I/O.
– Set the HP-UX kernel parameter aio_max_ops using SAM. aio_max_ops limits the maximum number of asynchronous I/O operations that can be queued at any time. Set this parameter to the default value (2048), and monitor it over time using glance.

• To allow you to successfully relink Oracle products after installing this software, please ensure that the following symbolic links have been created:

# cd /usr/lib

# ln -s /usr/lib/libX11.3 libX11.sl

# ln -s /usr/lib/libXIE.2 libXIE.sl

# ln -s /usr/lib/libXext.3 libXext.sl

# ln -s /usr/lib/libXhp11.3 libXhp11.sl

# ln -s /usr/lib/libXi.3 libXi.sl

# ln -s /usr/lib/libXm.4 libXm.sl

# ln -s /usr/lib/libXp.2 libXp.sl

# ln -s /usr/lib/libXt.3 libXt.sl

# ln -s /usr/lib/libXtst.2 libXtst.sl

• Ensure that each member node of the cluster is set (as closely as possible) to the same date and time. Oracle strongly recommends using the Network Time Protocol feature of most operating systems for this purpose, with all nodes using the same reference Network Time Protocol server.
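As an illustration only (the server name is a placeholder, not from this paper), a minimal NTP client configuration on each node might look like the following:

```
# /etc/ntp.conf -- minimal sketch; ntp1.example.com is a placeholder.
# All cluster nodes should point at the same reference server(s).
server ntp1.example.com prefer
driftfile /etc/ntp.drift
```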

Note: Refer to the Oracle Grid Infrastructure Installation Guide 11gR2 for HP-UX for the latest hardware requirements.

Network Requirements

You need the following IP addresses per node to build a RAC 11g cluster:

• Public interface that will be used for client communication.
• Virtual IP address (VIP) that will be bound by Oracle Clusterware to the public interface. It will be used by clients to access the RAC database. If a node or interconnect fails, the affected VIP is relocated to the surviving node.
• Private interface that will be used for inter-cluster traffic. There are three major categories of inter-cluster traffic:
– SG-HB = Serviceguard heartbeat and communications traffic. It is recommended to use two active networks for SG-HB.
– CSS-HB = Oracle CSS heartbeat traffic and communications traffic for Oracle Clusterware. CSS-HB uses a single logical connection over a single subnet network.
– RAC-IC = RAC instance peer-to-peer traffic and communications for Global Cache Service (GCS) and Global Enqueue Service (GES), formerly Cache Fusion (CF) and Distributed Lock Manager (DLM).


• Cluster SCAN address that will be used by all clients connecting to the cluster. It should be on the same subnet as the public interface. It is a domain name registered to at least one and up to three IP addresses. It eliminates the need to reconfigure clients when nodes are added to or removed from the cluster. It should be registered in the Domain Name Server (DNS) or configured in /etc/hosts on all nodes of the cluster.
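Putting the four address categories together, a hypothetical /etc/hosts fragment for the bike/cycle cluster could look like the following (all addresses are made-up examples; in production the SCAN should instead be resolved in DNS with up to three addresses):

```
# Public interfaces
192.0.2.11   bike
192.0.2.12   cycle
# Virtual IPs (unused addresses on the public subnet)
192.0.2.21   bike-vip
192.0.2.22   cycle-vip
# Private interconnect (reserved subnet)
10.0.0.1     bike-priv
10.0.0.2     cycle-priv
# SCAN (a single address only when not resolved via DNS round robin)
192.0.2.31   rac-scan
```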

When configuring these networks, please consider:

• The public and private interface names associated with the network adapters for each network should be the same on all nodes, for example, lan0 for the private interconnect and lan1 for the public interconnect. If this is not the case, you can use the ioinit command to map the LAN interfaces to new device instances:
– Write down the hardware path that you want to use:

# lanscan
Hardware      Station        Crd Hdw   Net-Interface NM  MAC   HP-DLPI DLPI
Path          Address        In# State NamePPA       ID  Type  Support Mjr#
1/0/8/1/0/6/0 0x000F203C346C 1   UP    lan1 snap1    1   ETHER Yes     119
1/0/10/1/0    0x00306EF48297 2   UP    lan2 snap2    2   ETHER Yes     119

– Create a new ASCII file with the following syntax:

Hardware_Path Device_Group New_Device_Instance_Number

Example:

# vi newio
1/0/8/1/0/6/0 lan 8
1/0/10/1/0 lan 9

Please note that you have to choose a device instance number that is currently not in use.

– Activate this configuration with the following command (the -r option will issue a reboot):

# ioinit -f /root/newio -r

– When the system is up again, check the new configuration:

# lanscan
Hardware      Station        Crd Hdw   Net-Interface NM  MAC   HP-DLPI DLPI
Path          Address        In# State NamePPA       ID  Type  Support Mjr#
1/0/8/1/0/6/0 0x000F203C346C 1   UP    lan8 snap8    1   ETHER Yes     119
1/0/10/1/0    0x00306EF48297 2   UP    lan9 snap9    2   ETHER Yes     119

• For the public network:
– Each network adapter must support TCP/IP.
• For the private network:
– Oracle recommends using a subnet reserved for private networks, such as 10.0.0.0 or 192.168.0.0.
– The private IP address and private network name must be configured in the /etc/hosts file on each node.
– The following interconnect technologies are currently supported (see also the Oracle RAC Technologies Certification Matrix for Unix at http://www.oracle.com/technology/products/database/clustering/certify/tech_generic_unix_new):
– UDP over 1Gbit Ethernet
– IPoIB = IP protocol over InfiniBand hardware


– Crossover cables are not supported for the cluster interconnect; a switch is mandatory for production implementations, even for a two-node architecture.
– Please note that the Oracle Clusterware heartbeat timeout default (“misscount”) is 30 seconds for clusters without Serviceguard and 600 seconds for clusters with Serviceguard. This ensures that Serviceguard will be the first to recognize any failures and to initiate cluster reformation activities.
– In order to make this private interconnect highly available, you can use either HP Serviceguard or HP Auto Port Aggregation (APA) LAN Monitor. Oracle does not provide any mechanism to make this private interconnect highly available.

• For the virtual IP (VIP) address:
– This must be on the same subnet as the public interface.
– The VIP address and VIP host name must be currently unused. VIPs must be resolvable, preferably in DNS (the VIP can be registered in DNS, but should not be reachable by a ping command).
– In order to make this VIP highly available, please see Oracle Metalink Note 296874.1 (“Configuring the HP-UX Operating System for the Oracle 10g VIP“), in particular the section “Notes for configuring Oracle VIPs for Oracle RAC 11gR2” in that Note.

• For the cluster SCAN address:
– This must be on the same subnet as the public interface.
– The cluster SCAN address and cluster SCAN host name must be currently unused (the SCAN can be registered in DNS, but should not be reachable by a ping command).
– The SCAN should be resolved in DNS and have 3 IP addresses assigned using round robin load balancing.

• Ping all IP addresses. The public and private IP addresses should respond to ping commands. The VIP addresses and cluster scan addresses should not respond.

Required HP-UX Patches

You should install the latest Serviceguard A.11.19 and SGeRAC A.11.19 patches from the following link: http://itresourcecenter.hp.com/

You should refer to the “Oracle Grid Infrastructure Installation Guide 11gR2 for HP-UX“ for other required patches.

• To determine which operating system patches are installed, enter the following command:

# /usr/sbin/swlist -l patch

• To determine if a specific operating system patch has been installed, enter the following command:

# /usr/sbin/swlist -l patch <patch_number>

• To determine which operating system bundles are installed, enter the following command:

# /usr/sbin/swlist -l bundle


Kernel Parameter Settings

Verify that the kernel parameters shown in the following table (Table 1) are set either to the value shown, or to values greater than or equal to the recommended value shown. If the current value for any parameter is higher than the value listed in this table, do not change the value of that parameter.

Table 1: Kernel Parameters

ksi_alloc_max 32768

executable_stack 0

maxfiles 1024

maxfiles_lim 63488

max_thread_proc 1024

maxdsiz 1073741824 (1 GB)

maxdsiz_64bit 2147483648 (2 GB)

maxssiz 134217728 (128 MB)

maxssiz_64bit 1073741824 (1 GB)

maxuprc 3686

msgmap 4096

msgmni 4096

msgseg 32767

msgtql 4096

ncsize 35840

nflocks 4096

ninode 34816

nkthread 7184

nproc 4096

semmni 4096

semmns 8192

semmnu 4092

semvmx 32767

nfile 126976

These kernel parameter details can change. Please refer to the “Oracle Grid Infrastructure Installation Guide 11gR2 for HP-UX“ for the latest information about these parameters.

You can modify the kernel settings either by using the HP-UX System Management Homepage kcweb application (/usr/sbin/kcweb -F) or by using the kctune command line utility (kmtune on PA-RISC). For System Management Homepage, just visit http://<nodename>:2301. For kctune, you can use the following commands:

# kctune > /tmp/kctune.log (lists all current kernel settings)
# kctune tunable>=value (sets the tunable to value, unless it is already greater)
# kctune -D > /tmp/kctune.log (restricts output to parameters whose changes are being held until next boot)
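The “greater than or equal, never lower” rule from Table 1 can be checked mechanically. The sketch below compares a few tunables against the recommended minimums; a canned sample stands in for real kctune output, and with these sample values maxuprc is flagged as too low:

```shell
#!/bin/sh
# Check a few kernel tunables against the Table 1 minimums.
# A canned sample stands in for real output; on a live node use:
#   kctune | awk '{print $1, $2}' > /tmp/kctune.sample
cat > /tmp/kctune.sample <<'EOF'
maxfiles 1024
maxuprc 2048
nproc 4200
EOF

result=$(awk '
BEGIN {
    # recommended minimums from Table 1
    min["maxfiles"] = 1024
    min["maxuprc"]  = 3686
    min["nproc"]    = 4096
}
($1 in min) {
    if ($2 + 0 < min[$1])
        printf "FAIL %s is %s, need >= %s\n", $1, $2, min[$1]
    else
        printf "OK   %s\n", $1
}' /tmp/kctune.sample)
echo "$result"
```

Extending the min[] table to the full Table 1 list is a straightforward copy of the values above.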


Create the Oracle User

• Log in as the root user.
• Create the database groups on each node. The group ids must be unique. The ids used here are just examples; you can use any group id not used on any of the cluster nodes.
– The OSDBA group, typically dba:

bike/cycle# /usr/sbin/groupadd -g 201 dba

– The optional ORAINVENTORY group, typically oinstall; this group owns the Oracle inventory, which is a catalog of all Oracle software installed on the system.

bike/cycle# /usr/sbin/groupadd -g 200 oinstall

• Create the Oracle software user on each node. The user id must be unique. The user id used below is just an example; you can use any id not used on any of the cluster nodes.

bike/cycle# /usr/sbin/useradd -u 200 -g oinstall -G dba oracle

• Check the user:

bike# id oracle
uid=200(oracle) gid=200(oinstall) groups=201(dba)

• Create a HOME directory for the Oracle user:

bike/cycle# mkdir /home/oracle

bike/cycle# chown oracle:oinstall /home/oracle

• Change the password on each node:

bike/cycle# passwd oracle

• During the installation of Oracle RAC, the Oracle database installer needs to copy files to, and execute programs on, the other nodes in the cluster. To allow the installer to do that, you must configure Secure Shell (SSH) to permit the execution of programs on other nodes in the cluster without password prompts.

SSH Set-up

bike/cycle$ mkdir ~/.ssh

bike/cycle$ chmod 755 ~/.ssh

bike/cycle$ /usr/bin/ssh-keygen -t rsa

Here, we leave the passphrase empty.

Your identification has been saved in /home/oracle/.ssh/id_rsa. Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.

Next, the contents of the id_rsa.pub files of both nodes, bike and cycle, need to be put into a file called /home/oracle/.ssh/authorized_keys on both nodes.

bike$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

bike$ ssh oracle@cycle cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

bike$ chmod 644 ~/.ssh/authorized_keys

bike$ scp ~/.ssh/authorized_keys cycle:~/.ssh/authorized_keys


Oracle RAC 11g Cluster Preparation Steps

In this section, we provide examples of command sequences that can be used to prepare the cluster. All examples demonstrate how storage is configured using the new Next Generation Mass Storage Stack introduced with HP-UX 11.31. This new I/O stack provides native multi-pathing and load balancing, as well as agile and persistent addressing. Using the agile address, HP-UX will automatically and transparently use a redundant path to the LUN in the background.

SLVM Configuration

To use shared raw logical volumes, HP Serviceguard Extension for RAC must be installed on all cluster nodes. For a basic database configuration with the Shared Logical Volume Manager (SLVM), the following shared logical volumes are required. Note that in this scenario, one SLVM volume group is used for Oracle Clusterware and one SLVM volume group is used for database files. In a cluster environment with more than one RAC database, it is recommended to have a separate SLVM volume group for each RAC database.

Table 2: SLVM lvols required for Clusterware and RAC database setup (<dbname> should be replaced with your database name)

Create Device for                        File Size  Sample Name
Oracle Cluster Repository (OCR)          1024 MB    ora_ocr_1024m
Oracle Voting disk                       512 MB     ora_vote_512m
SYSTEM tablespace                        1300 MB    <dbname>_system_1300m
SYSAUX tablespace                        1300 MB    <dbname>_sysaux_1300m
One Undo tablespace per instance         508 MB     <dbname>_undotbs2_508m
EXAMPLE tablespace                       168 MB     <dbname>_example_168m
USERS tablespace                         128 MB     <dbname>_users_128m
Two ONLINE Redo log files per instance   128 MB     <dbname>_redo1_1_128m, <dbname>_redo1_2_128m,
                                                    <dbname>_redo2_1_128m, <dbname>_redo2_2_128m
First and second control file            118 MB     <dbname>_control1_118m, <dbname>_control2_118m
TEMP tablespace                          258 MB     <dbname>_temp_258m
Server parameter file                    5 MB       <dbname>_spfile_raw_5m
Password file                            5 MB       <dbname>_pwdfile_5m

Note: You need to create the OCR and voting disk raw logical volumes only once in the cluster; if you create more than one database, they all share the same OCR and voting disk.
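Since every database entry in Table 2 follows the same naming pattern, the lvcreate calls can be generated rather than typed. The sketch below only prints the commands for review (“rac” and /dev/vg_rac are this paper’s example names, not requirements):

```shell
#!/bin/sh
# Print the lvcreate commands for the Table 2 database volumes.
# Review the output, then run it on the node that owns the volume group.
db=rac            # example <dbname>
vg=/dev/vg_rac

cmds=$(
while read name size; do
  echo "lvcreate -L $size -n ${db}_${name} $vg"
done <<'EOF'
system_1300m 1300
sysaux_1300m 1300
undotbs2_508m 508
example_168m 168
users_128m 128
redo1_1_128m 128
redo1_2_128m 128
redo2_1_128m 128
redo2_2_128m 128
control1_118m 118
control2_118m 118
temp_258m 258
spfile_raw_5m 5
pwdfile_5m 5
EOF
)
echo "$cmds"
```

Each emitted line matches the lvcreate syntax shown later in this section.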


• Disks need to be properly initialized before being added into volume groups. Do the following step for all the disks (LUNs) you want to configure for your RAC volume group(s) from node bike:

bike# pvcreate -f /dev/rdisk/disk30
bike# pvcreate -f /dev/rdisk/disk40

• Create the volume group directories, each with a character special file called group:

bike# mkdir /dev/vg_oracle

bike# mknod /dev/vg_oracle/group c 64 0x050000

bike# mkdir /dev/vg_rac

bike# mknod /dev/vg_rac/group c 64 0x060000

Note: 0x050000 and 0x060000 are the minor numbers in this example. The minor number for each group file must be unique among all the volume groups on the system.
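To see which minor numbers are already taken, one can inspect the existing group files. This sketch parses canned `ls -l /dev/*/group` output (the listing below is made up); on a live node you would feed it the real listing:

```shell
#!/bin/sh
# Report LVM group-file minor numbers already in use, so a free one
# can be chosen. Canned sample output below; on a real node use:
#   ls -l /dev/*/group
sample='crw-r--r-- 1 root sys 64 0x000000 Jan  1 00:00 /dev/vg00/group
crw-r--r-- 1 root sys 64 0x050000 Jan  1 00:00 /dev/vg_oracle/group'

# field 6 of the listing is the minor number (field 5 is the major, 64)
used=$(echo "$sample" | awk '{print $6}' | tr '\n' ' ')
echo "minor numbers in use: $used"
```

Pick any 0xNN0000 value that does not appear in the output.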

• Create the Volume Groups (VG) (optionally using PV-LINKs) and extend the volume groups if required:

bike# vgcreate /dev/vg_oracle /dev/disk/disk30

bike# vgcreate /dev/vg_rac /dev/disk/disk40

• Create LVs for OCR and Voting:

bike# lvcreate -L 1024 -n ora_ocr_1024m /dev/vg_oracle
bike# lvcreate -L 512 -n ora_vote_512m /dev/vg_oracle

• Create logical volumes as shown in Table 2 above for the RAC database with the command:

bike# lvcreate -i 10 -I 1024 -L 100 -n Name /dev/vg_rac

-i: number of disks to stripe across
-I: stripe size in kilobytes
-L: size of logical volume in MB

• Check to see if your volume groups are properly created and available:

bike# strings /etc/lvmtab
bike# vgdisplay -v /dev/vg_rac

• Export the volume group:
– De-activate the volume group:

bike# vgchange -a n /dev/vg_rac

– Create the volume group map file:

bike# vgexport -v -p -s -m mapfile /dev/vg_rac

– Copy the mapfile to all the nodes in the cluster:

bike# rcp mapfile cycle:/tmp/scripts

• Import the volume group on the second node in the cluster:
– Create a volume group directory with the character special file called group:

cycle# mkdir /dev/vg_rac
cycle# mknod /dev/vg_rac/group c 64 0x060000

Note: The minor number has to be the same as on the other node.


– Import the volume group:

cycle# vgimport -v -s -N -m /tmp/scripts/mapfile /dev/vg_rac

Note: the -N option is for HP-UX 11.31 agile addressing

Note: The minor number has to be the same as on the other node.

– Check to see if the devices are imported:

cycle# strings /etc/lvmtab

• Disable automatic volume group activation on all cluster nodes by setting AUTO_VG_ACTIVATE to 0 in file /etc/lvmrc. This ensures that shared volume group vg_rac is not automatically activated at system boot time. In case you need to have any other volume groups activated, you need to explicitly list them at the customized volume group activation section.

• It is recommended best practice to create symbolic links for each of these raw files on all systems of your RAC cluster. For example:

bike/cycle# cd /oracle/RAC/ (directory where you want to have the links)
bike/cycle# ln -s /dev/vg_rac/<dbname>_system_1300m
bike/cycle# ln -s /dev/vg_rac/<dbname>_sysaux_1300m
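Rather than linking each volume by hand, the whole set can be linked in one loop. The demo below uses scratch directories and dummy files so it can be tried safely; on the cluster you would substitute the real /oracle/RAC and /dev/vg_rac paths:

```shell
#!/bin/sh
# Demo of the symbolic-link step using scratch directories, so it can
# be tried without SLVM in place.
vgdir=/tmp/demo_vg_rac      # stands in for /dev/vg_rac
linkdir=/tmp/demo_RAC       # stands in for /oracle/RAC
mkdir -p "$vgdir" "$linkdir"

# dummy files standing in for two of the raw logical volumes
touch "$vgdir/rac_system_1300m" "$vgdir/rac_sysaux_1300m"

# one loop links every volume under its own name
cd "$linkdir"
for lv in "$vgdir"/*; do
  ln -sf "$lv" .
done

ls -l "$linkdir"
```

The same loop over /dev/vg_rac/* covers all of the Table 2 volumes at once.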

• Change the permissions of the database volume group vg_rac to 775, change the permissions of all raw logical volumes to 660, and change the owner to oracle:dba.

bike/cycle# chmod 775 /dev/vg_rac
bike/cycle# chmod 660 /dev/vg_rac/r*
bike/cycle# chown oracle:dba /dev/vg_rac/r*

• Change the permissions of the OCR and voting disk logical volumes:

bike/cycle# chown oracle:dba /dev/vg_oracle/r*
bike/cycle# chmod 640 /dev/vg_oracle/rora_ocr_1024m
bike/cycle# chmod 660 /dev/vg_oracle/rora_vote_512m

Serviceguard/SGeRAC Configuration

After the SLVM set-up, you can now start the Serviceguard cluster configuration.

In general, you can configure your Serviceguard cluster using a lock disk or a quorum server. We describe here the cluster lock disk set-up. Since we have already configured one volume group for the entire RAC cluster, vg_rac, we use vg_rac for the lock volume as well.

1. This step is applicable only if you want to use an LVM cluster lock disk. Skip this step if you are planning to use a Quorum Server or a Lock LUN. Activate the lock disk on the configuration node ONLY. The lock volume can only be activated on the node where the cmapplyconf command is issued, so that the lock disk can be initialized accordingly.

bike# vgchange -a y /dev/vg_rac

2. Create a cluster configuration template:

bike# cmquerycl -n bike -n cycle -v -C /etc/cmcluster/rac.asc

3. Edit the cluster configuration file (rac.asc).

Make the necessary changes to this file for your cluster. For example, change the cluster name, and adjust the heartbeat interval and node timeout to prevent unexpected failovers due to DLM traffic. Configure all shared volume groups that you are using for RAC, including the volume group that contains the Oracle CRS files, using the parameter OPS_VOLUME_GROUP at the bottom of the file. Also, ensure that the right LAN interfaces are configured for the SG heartbeat.

• Check the cluster configuration: bike# cmcheckconf -v -C rac.asc


• Create the binary configuration file and distribute the cluster configuration to all the nodes in the cluster: bike# cmapplyconf -v -C rac.asc

Note: the cluster is not started until you run cmrunnode on each node or cmruncl.

• De-activate the lock disk on the configuration node after cmapplyconf bike# vgchange -a n /dev/vg_rac

• Start the cluster and view it to be sure it’s up and running. See the next section for instructions on starting and stopping the cluster.

How to start up the cluster:

• Start the cluster from any node in the cluster bike# cmruncl -v

Or, on each node bike/cycle# cmrunnode -v

• Make all RAC volume groups and Cluster Lock volume groups sharable and cluster aware (not packages) from the cluster configuration node. This has to be done only once. bike# vgchange -S y -c y /dev/vg_rac

• Then on all the nodes, activate the volume group in shared mode in the cluster. This has to be done each time when you start the cluster. bike# vgchange -a s /dev/vg_rac

• Check the cluster status:

bike# cmviewcl -v
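The start-up sequence above can be collected into a small dry-run script; it only echoes each command, in order, so the sequence can be reviewed before executing anything for real:

```shell
#!/bin/sh
# Dry run of the cluster start-up sequence described above.
# run() only prints; drop the echo to execute for real.
run() { echo "WOULD RUN: $*"; }

run cmruncl -v                      # start the cluster (from any node)
run vgchange -S y -c y /dev/vg_rac  # mark VG shared and cluster aware (once, config node)
run vgchange -a s /dev/vg_rac       # activate in shared mode (all nodes, every start)
run cmviewcl -v                     # confirm the cluster is up
```

Keeping the sequence in one reviewable script helps avoid running the one-time vgchange -S step more than once.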

How to shut down the cluster (not needed here):

• Shut down the RAC instances (if up and running).

• On all the nodes, deactivate the volume group in shared mode in the cluster:

bike/cycle# vgchange -a n /dev/vg_rac

• Halt the cluster from any node in the cluster:

bike# cmhaltcl -v

• Check the cluster status:

bike# cmviewcl -v

Preparation for Oracle Software Installation

The Oracle RAC 11gR2 installation requires you to perform a two-phase process in which you run the Oracle Grid Infrastructure installer and the Oracle Database installer. The first phase installs Oracle Clusterware and the second phase installs the Oracle Database 11gR2 software with RAC.

If you have downloaded the software, you might have the following files:

• hpia64_11gR2_clusterware.zip (Oracle Clusterware)
• hpia64_11gR2_database_1of2.zip (Oracle Database Software)
• hpia64_11gR2_database_2of2.zip (Oracle Database Software)

You can unpack the software with the following commands as root user:

bike# /usr/local/bin/unzip hpia64_11gR2_clusterware.zip


Prepare HP-UX Systems for Oracle software installation

On HP-UX, most processes use a time-sharing scheduling policy. Time sharing can have detrimental effects on Oracle performance by descheduling an Oracle process during critical operations, for example, when it is holding a latch. HP-UX has a modified scheduling policy, referred to as SCHED_NOAGE, that specifically addresses this issue. Unlike the normal time-sharing policy, a process scheduled using SCHED_NOAGE does not increase or decrease in priority, nor is it preempted.

This feature is suited to online transaction processing (OLTP) environments because OLTP environments can cause competition for critical resources. The use of the SCHED_NOAGE policy with Oracle Database can increase performance by 10 percent or more in OLTP environments.

The SCHED_NOAGE policy does not provide the same level of performance gains in decision support environments because there is less resource competition. Because each application and server environment is different, you should test and verify that your environment benefits from the SCHED_NOAGE policy. When using SCHED_NOAGE, Oracle recommends that you exercise caution in assigning highest priority to Oracle processes. Assigning highest SCHED_NOAGE priority to Oracle processes can exhaust CPU resources on your system, causing other user processes to stop responding.

The RTSCHED and RTPRIO privileges grant Oracle the ability to change its process scheduling policy to SCHED_NOAGE and also tell Oracle what priority level it should use when setting the policy. The MLOCK privilege grants Oracle the ability to execute asynch I/Os through the HP asynch driver. Without this privilege, Oracle9i generates trace files with the following error message: “Ioctl ASYNCH_CONFIG error, errno = 1”.

As root, do the following:

• If it does not already exist, create the /etc/privgroup file. Add the following line to the file:

dba MLOCK RTSCHED RTPRIO

• Use the following command syntax to assign these privileges: bike/cycle# setprivgrp -f /etc/privgroup

• Create the /var/opt/oraInventory directory and make it owned by the oracle account. After installation, this directory will contain a few small text files that briefly describe the Oracle software installations and databases on the server. These commands will create the directory and give it appropriate permissions: bike/cycle# mkdir /var/opt/oraInventory

bike/cycle# chown oracle:dba /var/opt/oraInventory

bike/cycle# chmod 755 /var/opt/oraInventory

• Create the following Oracle local home directories:

Oracle Clusterware: bike/cycle# mkdir -p /var/opt/product/crs_r2

bike/cycle# chown -R oracle:dba /var/opt/product

bike/cycle# chmod -R 775 /var/opt/product

Oracle RAC: bike/cycle# mkdir -p /var/opt/oracle/rac_r2

bike/cycle# chown -R oracle:dba /var/opt/oracle

bike/cycle# chmod -R 775 /var/opt/oracle
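Since the directory setup above must be identical on both nodes, it can be scripted; below is a minimal sketch (the create_oracle_dirs function name and the BASE override are illustrative, not part of the guide). Run it as root on each node so the ownership changes succeed.

```shell
# Sketch: create the oraInventory, Clusterware, and RAC directory trees
# with the permissions used in the steps above. BASE defaults to /var/opt;
# point it at a scratch directory to dry-run the layout without root
# (ownership changes are then skipped silently).
create_oracle_dirs() {
  base="${BASE:-/var/opt}"
  mkdir -p "$base/oraInventory" "$base/product/crs_r2" "$base/oracle/rac_r2" || return 1
  chmod 755 "$base/oraInventory"
  chmod -R 775 "$base/product" "$base/oracle"
  # ownership requires root and an existing oracle:dba account
  chown oracle:dba "$base/oraInventory" 2>/dev/null || true
  chown -R oracle:dba "$base/product" "$base/oracle" 2>/dev/null || true
  return 0
}
```

Running the same function on bike and cycle keeps the two trees in sync.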


• Set Oracle environment variables by adding an entry similar to the following example to each user's startup .profile file (Bourne or Korn shell) or .login file (C shell).

# Oracle Environment

export ORACLE_BASE=/var/opt/oracle

export ORACLE_HOME=$ORACLE_BASE/rac_r2

export ORA_CRS_HOME=/var/opt/product/crs_r2

export ORACLE_SID=<SID>

export ORACLE_TERM=xterm

export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$PATH
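Because the installers misbehave when these variables are missing, a quick sanity check of the shell environment can help before proceeding; a sketch (the check_oracle_env helper is mine, not from the guide):

```shell
# Sketch: verify that the Oracle environment variables from .profile are
# set and non-empty in the current shell. Lists each missing variable on
# stderr and returns nonzero if any is unset.
check_oracle_env() {
  rc=0
  for v in ORACLE_BASE ORACLE_HOME ORA_CRS_HOME ORACLE_SID; do
    eval "val=\${$v:-}"
    if [ -z "$val" ]; then
      echo "missing: $v" >&2
      rc=1
    fi
  done
  return $rc
}
```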

Check Cluster Configuration with Cluster Verification Utility

Cluster Verification Utility (Cluvfy) is a cluster utility introduced with Oracle Clusterware 10g Release 2. Its range of deployment spans from initial hardware setup through a fully operational cluster for RAC, covering all intermediate stages of installation and configuration of the various components. Cluvfy is provided as two scripts: runcluvfy.sh, which is designed to be used before installation, and cluvfy, which resides in ORA_CRS_HOME/bin. The script runcluvfy.sh contains temporary variable definitions which enable it to be run before installing Oracle Clusterware or Oracle Database. After you install Oracle Clusterware, use the cluvfy command to check prerequisites and perform other system readiness checks.

Before Oracle software is installed, to enter a Cluvfy command, change directories and start runcluvfy.sh using the following syntax:

cd /mountpoint

./runcluvfy.sh options

With Cluvfy, you can either

• Check the status for a specific component.

cluvfy comp -list
Cluvfy displays a list of components that can be checked, and brief descriptions of how each component is checked.

cluvfy comp -help
Cluvfy displays detailed syntax for each of the valid component checks.

or

• Check the status of your cluster/systems at a specific point (= stage) during your RAC installation.

cluvfy stage -list
Cluvfy displays a list of valid stages.

cluvfy stage -help
Cluvfy displays detailed syntax for each of the valid stage checks.

• Example 1: Checking Network Connectivity among all cluster nodes: bike$ <OraStage>/clusterware/runcluvfy.sh comp nodecon -n bike,cycle [-verbose]

• Example 2: Performing pre-checks for cluster services setup bike$ <OraStage>/clusterware/runcluvfy.sh stage -pre crsinst -n bike,cycle [-verbose]
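The two example checks above can be chained so that a failed node-connectivity check stops the run before the longer crsinst pre-check; a sketch (run_cluvfy_checks is a hypothetical wrapper; <OraStage> remains whatever your staging mount point is):

```shell
# Sketch: run the node-connectivity check, then the crsinst pre-check,
# stopping at the first failure. $1 = staging/mount directory containing
# clusterware/runcluvfy.sh, $2 = comma-separated node list.
run_cluvfy_checks() {
  stage_dir=$1
  nodes=$2
  "$stage_dir/clusterware/runcluvfy.sh" comp nodecon -n "$nodes" -verbose || return 1
  "$stage_dir/clusterware/runcluvfy.sh" stage -pre crsinst -n "$nodes" -verbose || return 1
}
```

For example: run_cluvfy_checks <OraStage> bike,cycle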


Install Oracle Clusterware 11gR2

This section describes the procedures for using the Oracle Grid Infrastructure (OGI) installer to install Oracle Clusterware.

1 Log in as the oracle user and set the ORACLE_HOME environment variable to the Oracle Clusterware home directory. Then start the Oracle Grid Infrastructure installer from the Clusterware mount directory by issuing the commands

bike/cycle$ export ORACLE_HOME=/var/opt/product/crs_r2

bike/cycle$ ./runInstaller &

Ensure that you have the DISPLAY set.

2 At the Oracle Grid Infrastructure First screen, select “Install Grid Infrastructure Software Only” and click NEXT.



3 Select Product Language and click NEXT.

4 Here you can specify operating system groups. You will get a warning message if you select the same OS group for OSDBA, OSOPER, and OSASM.



5 Specify Installation Location.

6 Specify Inventory Directory.



7 Here the installer verifies that your environment meets all minimum requirements for installing and configuring Oracle Clusterware.

8 Ensure that no warnings or errors are reported during the prerequisite check, then click NEXT.



9 Verify the details about the installation that appear on the Summary page and click FINISH or click Back to revise your installation.

10



11 The installer displays the "execute configuration scripts" dialog box. Leave it open for now; the scripts are run manually in the following steps, and you click OK in step 17.

12 Open a terminal window on both nodes (bike and cycle), log in as oracle, open the file /var/opt/product/crs_r2/crs/install/crsconfig_params, and edit the following fields.

ORACLE_HOME=/var/opt/product/crs_r2 # Oracle Home dir

ORACLE_BASE=/var/opt/oracle # Oracle Base dir

OCR_LOCATIONS=/dev/vg_oracle/ora_ocr_1024m

CLUSTER_NAME=bike-cluster # Any name can be given

HOST_NAME_LIST=bike,cycle # All hosts of the cluster

NODE_NAME_LIST=bike,cycle # All nodes of the cluster

PRIVATE_NAME_LIST=bike-priv,cycle-priv # Private host names

VOTING_DISKS=/dev/vg_oracle/ora_vote_512m # Voting disk

CRS_NODEVIPS='bike-vip/255.255.255.0/lan0,cycle-vip/255.255.255.0/lan0'

# VIPs

NODELIST=bike,cycle # Node list

NETWORKS="lan0"/15.154.62.0:public,"lan2"/192.76.1.0:cluster_interconnect

#Public interface

SCAN_NAME=bike-cluster-scan #Cluster scan host name

SCAN_PORT=1521 # Cluster scan port number

13 Execute the rootcrs.pl command on both nodes (bike and cycle) as follows.

/var/opt/product/crs_r2/perl/bin/perl -I/var/opt/product/crs_r2/perl/lib

-I/var/opt/product/crs_r2/crs/install

/var/opt/product/crs_r2/crs/install/rootcrs.pl

14 Execute the following commands on both nodes (bike and cycle) to link the oracle binary with RAC enabled.

export ORACLE_HOME=/var/opt/product/crs_r2 # CRS home

cd $ORACLE_HOME/rdbms/lib

make -f ins_rdbms.mk rac_on ioracle



15 Execute /var/opt/product/crs_r2/crs/install/rootcrs.pl on both nodes (bike and cycle) again.

/var/opt/product/crs_r2/perl/bin/perl -I/var/opt/product/crs_r2/perl/lib

-I/var/opt/product/crs_r2/crs/install

/var/opt/product/crs_r2/crs/install/rootcrs.pl

16 Execute the following command from the Clusterware mount directory on both nodes (bike and cycle). ./runInstaller -updateNodeList -local CRS=true ORACLE_HOME=$ORA_CRS_HOME "CLUSTER_NODES=bike,cycle"

17 Now press OK on the "execute configuration scripts" dialog box (from step 11) on each node.

18 Press CLOSE on final screen.

19 To ensure that the Oracle Clusterware installation is valid, run the following check on all the nodes: $ $ORA_CRS_HOME/bin/crsctl check crs

Output:

CRS-4638: Oracle High Availability Services is online

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online
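When scripting this validation across many nodes, the crsctl output can be checked mechanically instead of by eye; a sketch (check_crs_output is my name for an illustrative helper; it only counts the four "is online" lines shown above):

```shell
# Sketch: read `crsctl check crs` output on stdin and succeed only when
# all four services (CRS-4638/4537/4529/4533) report "is online".
check_crs_output() {
  [ "$(grep -c 'is online')" -eq 4 ]
}
# Typical use on a node:
#   $ORA_CRS_HOME/bin/crsctl check crs | check_crs_output || echo "CRS not healthy"
```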



Install and Create Oracle Database RAC 11gR2

This part describes phase two of the installation: installing Oracle Database 11g with Real Application Clusters (RAC).

1 Log in as the oracle user and set the ORACLE_HOME environment variable to the Oracle home directory. Then start the Oracle Database installer by issuing the following command from the RAC database mount directory.

$ export ORACLE_HOME=/var/opt/oracle/rac_r2

$ ./runInstaller &

Ensure that you have the DISPLAY set.

2 Specify your email address to be informed of the latest updates and press NEXT.



3 Select “Install database software only” as Installation option and click NEXT.

4 Select “Real Application Clusters database installation” and click NEXT.



5 Select Product Language and click NEXT.

6 Select “Enterprise Edition” and click NEXT.



7 Specify Installation Location and click NEXT.

8 Specify OS groups and click NEXT.



9 Now you will get the prerequisite check screen.

10 Ensure that no warnings or errors are reported during the prerequisite check, then click NEXT.

11 Verify the details about the installation that appear on the Summary page and click FINISH or click Back to revise your installation.



12 Now you will get the following screen.

13 Execute the configuration script on both nodes (bike and cycle) as instructed and click OK.



14 You should get the final screen now. Click CLOSE.

15 Execute netca from ORA_CRS_HOME (/var/opt/product/crs_r2) to configure the listener.

Select Listener configuration and click NEXT.



16 Select Add and click NEXT.

17 Specify Listener Name and click NEXT.



18 Select Protocol and click NEXT.

19 Specify the port number and click NEXT. It is recommended to use the standard port number (1521).



20 Select NO and click NEXT.

21 You should get the following screen now. Click NEXT.



22 Click FINISH on the following screen.

23 Execute the following commands to configure the cluster SCAN and the cluster SCAN listener.

a) bike$ srvctl add scan -n SCAN1 # Here SCAN1 is scan name.

b) bike$ srvctl start scan # Start Cluster Scan.

c) bike$ srvctl add scan_listener -l LISTENER_SCAN -s -p TCP:1521

# Add Cluster Scan Listener.

d) bike$ srvctl start scan_listener # Start Cluster Scan Listener.

Verify status of Cluster Scan listener. bike$ srvctl status scan_listener

Output should be

SCAN Listener LISTENER_SCAN_SCAN1 is enabled

SCAN listener LISTENER_SCAN_SCAN1 is running on node bike

24 Now you are ready to create a RAC database. Create the following directories.

Note that "slvmdb" is the database name used here.

mkdir -p $ORACLE_BASE/admin/slvmdb/adump

mkdir -p $ORACLE_BASE/admin/slvmdb/dpdump

mkdir -p $ORACLE_BASE/admin/slvmdb/hdump

mkdir -p $ORACLE_BASE/admin/slvmdb/pfile

mkdir -p $ORACLE_BASE/cfgtoollogs/dbca/slvmdb

mkdir -p $ORACLE_BASE/admin/slvmdb/scripts
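The mkdir sequence above can be collapsed into a small function, which also avoids typos in the repeated path; a sketch (create_db_admin_dirs is an illustrative name, with ORACLE_BASE and the database name passed in):

```shell
# Sketch: create the admin and cfgtoollogs directory trees for a database.
# $1 = ORACLE_BASE, $2 = database name ("slvmdb" in this guide).
create_db_admin_dirs() {
  obase=$1
  db=$2
  for d in adump dpdump hdump pfile scripts; do
    mkdir -p "$obase/admin/$db/$d" || return 1
  done
  mkdir -p "$obase/cfgtoollogs/dbca/$db"
}
```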



25 Create pfile init.ora in $ORACLE_BASE/admin/slvmdb/scripts with the following contents.

###########################################

# Archive

###########################################

log_archive_format=%t_%s_%r.dbf

###########################################

# Cache and I/O

###########################################

db_block_size=8192

###########################################

# Cluster Database

###########################################

#cluster_database=true

#remote_listener=auto-cluster-scan:1521

###########################################

# Cursors and Library Cache

###########################################

open_cursors=300

###########################################

# Database Identification

###########################################

db_domain=ind.hp.com

db_name=slvmdb

###########################################

# File Configuration

###########################################

control_files=("/dev/vg_rac/rorcl_control1_118m", "/dev/vg_rac/rorcl_control2_118m")

###########################################

# Miscellaneous

###########################################

compatible=11.2.0.0.0

diagnostic_dest=/var/opt/oracle

memory_target=3425697792

###########################################

# Processes and Sessions

###########################################

processes=150

###########################################

# Security and Auditing

###########################################

audit_file_dest=/var/opt/oracle/admin/slvmdb/adump

audit_trail=db

remote_login_passwordfile=exclusive

###########################################

# Shared Server

##########################################

dispatchers="( PROTOCOL = TCP ) ( SERVICE = slvmdbXDB )"


slvmdb1.instance_number=1

slvmdb2.instance_number=2

slvmdb2.thread=2

slvmdb1.thread=1

slvmdb2.undo_tablespace=UNDOTBS2

slvmdb1.undo_tablespace=UNDOTBS1

26 Create pfile initslvmdbTemp.ora in $ORACLE_BASE/admin/slvmdb/scripts with the following contents.

###########################################

# Archive

###########################################

log_archive_format=%t_%s_%r.dbf

###########################################

# Cache and I/O

###########################################

db_block_size=8192

###########################################

# Cluster Database

###########################################

#cluster_database=true

#remote_listener=auto-cluster-scan:1521

###########################################

# Cursors and Library Cache

###########################################

open_cursors=300

###########################################

# Database Identification

###########################################

db_domain=ind.hp.com

db_name=slvmdb

###########################################

# File Configuration

###########################################

control_files=("/dev/vg_rac/rorcl_control1_118m", "/dev/vg_rac/rorcl_control2_118m")

###########################################

# Miscellaneous

###########################################

compatible=11.2.0.0.0

diagnostic_dest=/var/opt/oracle

memory_target=3425697792

###########################################

# Processes and Sessions

###########################################

processes=150

###########################################

# Security and Auditing

###########################################

audit_file_dest=/var/opt/oracle/admin/slvmdb/adump

audit_trail=db

remote_login_passwordfile=exclusive


###########################################

# Shared Server

###########################################

#dispatchers=" ( PROTOCOL = TCP ) ( SERVICE = slvmdbXDB )"

_no_recovery_through_resetlogs=true

slvmdb1.instance_number=1

slvmdb2.instance_number=2

slvmdb2.thread=2

slvmdb1.thread=1

slvmdb2.undo_tablespace=UNDOTBS2

slvmdb1.undo_tablespace=UNDOTBS1

27 Export ORACLE_SID with the database instance name and include the Oracle home bin directory in PATH. bike$ ORACLE_SID=slvmdb1

bike$ export ORACLE_SID

bike$ PATH=$ORACLE_HOME/bin:$PATH;

bike$ export PATH

Add the following entry in /etc/oratab: slvmdb:/var/opt/oracle/rac_r2:Y
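Mistyped oratab lines tend to fail silently, so the entry can be validated before moving on; a hypothetical helper that checks the <name>:<oracle_home>:<Y|N> format:

```shell
# Sketch: return success if the argument is a well-formed /etc/oratab
# entry of the form <db_or_sid>:<oracle_home>:<Y|N> (comments rejected).
valid_oratab_entry() {
  printf '%s\n' "$1" | grep -Eq '^[^:#][^:]*:/[^:]+:[YN]$'
}
# Example: valid_oratab_entry 'slvmdb:/var/opt/oracle/rac_r2:Y' && echo OK
```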

28 Connect to SQL*Plus with the following command: sqlplus '/ as sysdba'

You will get the SQL prompt now. Execute the following commands.

• ACCEPT sysPassword CHAR PROMPT 'Enter new password for SYS: ' HIDE

• ACCEPT systemPassword CHAR PROMPT 'Enter new password for SYSTEM: ' HIDE

• ACCEPT sysmanPassword CHAR PROMPT 'Enter new password for SYSMAN: ' HIDE

• ACCEPT dbsnmpPassword CHAR PROMPT 'Enter new password for DBSNMP: ' HIDE

• host /var/opt/oracle/rac_r2/bin/orapwd file=/var/opt/oracle/rac_r2/dbs/orapwslvmdb1 force=y

• host /var/opt/oracle/rac_r2/bin/srvctl add database -d slvmdb -o /var/opt/oracle/rac_r2 -p /dev/vg_rac/rorcl_spfile_5m -n slvmdb -m ind.hp.com

• host /var/opt/oracle/rac_r2/bin/srvctl add instance -d slvmdb -i slvmdb1 -n bike

• host /var/opt/oracle/rac_r2/bin/srvctl add instance -d slvmdb -i slvmdb2 -n cycle

• host /var/opt/oracle/rac_r2/bin/srvctl disable database -d slvmdb

29 Execute the following commands to create the clone datafiles and control files.

• SET VERIFY OFF

• connect "SYS"/"&&sysPassword" as SYSDBA

• set echo on

• startup nomount pfile="/var/opt/oracle/admin/slvmdb/scripts/init.ora";

• set verify off;

• set echo off;

• set serveroutput on;

• select TO_CHAR(systimestamp,'YYYYMMDD HH:MI:SS') from dual;

• variable devicename varchar2(255);

• declare omfname varchar2(512) := NULL;

• done boolean;

• begin dbms_output.put_line(' ');

dbms_output.put_line(' Allocating device.... ');

:devicename := dbms_backup_restore.deviceAllocate;

dbms_output.put_line(' Specifying datafiles... ');

dbms_backup_restore.restoreSetDataFile;

dbms_backup_restore.restoreDataFileTo(1,


'/dev/vg_rac/rorcl_system_1300m', 0, 'SYSTEM');

dbms_backup_restore.restoreDataFileTo(2,

'/dev/vg_rac/rorcl_sysaux_1300m', 0, 'SYSAUX');

dbms_backup_restore.restoreDataFileTo(3,

'/dev/vg_rac/rorcl_undotbs1_512m', 0, 'UNDOTBS1');

dbms_backup_restore.restoreDataFileTo(4,

'/dev/vg_rac/rorcl_users_128m', 0, 'USERS');

dbms_output.put_line(' Restoring ... ');

dbms_backup_restore.restoreBackupPiece('/var/opt/oracle/rac_r2/assistants/dbca/templates/Seed_Database.dfb', done);

if done then

dbms_output.put_line(' Restore done.');

else

dbms_output.put_line(' ORA-XXXX: Restore failed ');

end if;

dbms_backup_restore.deviceDeallocate;

end;

/

• select TO_CHAR(systimestamp,'YYYYMMDD HH:MI:SS') from dual;

30 Execute the following commands to create the clone database.

• SET VERIFY OFF

• connect "SYS"/"&&sysPassword" as SYSDBA

• set echo on

• Create controlfile reuse set database "slvmdb" MAXINSTANCES 32

MAXLOGHISTORY 1

MAXLOGFILES 192

MAXLOGMEMBERS 3

MAXDATAFILES 1024

Datafile

'/dev/vg_rac/rorcl_system_1300m',

'/dev/vg_rac/rorcl_sysaux_1300m',

'/dev/vg_rac/rorcl_undotbs1_512m',

'/dev/vg_rac/rorcl_users_128m'

LOGFILE GROUP 1 ('/dev/vg_rac/rorcl_redo1_1_128m') SIZE 51200K,

GROUP 2 ('/dev/vg_rac/rorcl_redo1_2_128m') SIZE 51200K RESETLOGS;

• exec dbms_backup_restore.zerodbid(0);

• shutdown immediate;

• startup nomount pfile="/var/opt/oracle/admin/slvmdb/scripts/initslvmdbTemp.ora";

• Create controlfile reuse set database "slvmdb" MAXINSTANCES 32

MAXLOGHISTORY 1

MAXLOGFILES 192

MAXLOGMEMBERS 3

MAXDATAFILES 1024

Datafile

'/dev/vg_rac/rorcl_system_1300m',

'/dev/vg_rac/rorcl_sysaux_1300m',

'/dev/vg_rac/rorcl_undotbs1_512m',

'/dev/vg_rac/rorcl_users_128m'


LOGFILE GROUP 1 ('/dev/vg_rac/rorcl_redo1_1_128m') SIZE 51200K,

GROUP 2 ('/dev/vg_rac/rorcl_redo1_2_128m') SIZE 51200K RESETLOGS;

• alter system enable restricted session;

• alter database "slvmdb" open resetlogs;

• exec dbms_service.delete_service('seeddata');

• exec dbms_service.delete_service('seeddataXDB');

• alter database rename global_name to "slvmdb.ind.hp.com";

• ALTER TABLESPACE TEMP ADD TEMPFILE '/dev/vg_rac/rorcl_temp_256m' SIZE 20480K REUSE AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED;

• select tablespace_name from dba_tablespaces where tablespace_name='USERS';

• alter system disable restricted session;

• connect "SYS"/"&&sysPassword" as SYSDBA

• shutdown immediate;

• connect "SYS"/"&&sysPassword" as SYSDBA

• startup restrict pfile="/var/opt/oracle/admin/slvmdb/scripts/initslvmdbTemp.ora";

• select sid, program, serial#, username from v$session;

• alter database character set INTERNAL_CONVERT WE8MSWIN1252;

• alter database national character set INTERNAL_CONVERT AL16UTF16;

• alter user sys account unlock identified by "&&sysPassword";

• alter user system account unlock identified by "&&systemPassword";

• alter system disable restricted session;

31 Execute post script commands.

• SET VERIFY OFF

• connect "SYS"/"&&sysPassword" as SYSDBA

• set echo on

• @/var/opt/oracle/rac_r2/rdbms/admin/dbmssml.sql;

• execute dbms_datapump_utl.replace_default_dir;

• commit;

• connect "SYS"/"&&sysPassword" as SYSDBA

• alter session set current_schema=ORDSYS;

• @/var/opt/oracle/rac_r2/ord/im/admin/ordlib.sql;

• alter session set current_schema=SYS;

• connect "SYS"/"&&sysPassword" as SYSDBA

• connect "SYS"/"&&sysPassword" as SYSDBA

• execute ORACLE_OCM.MGMT_CONFIG_UTL.create_replace_dir_obj;

• CREATE UNDO TABLESPACE "UNDOTBS2" DATAFILE '/dev/vg_rac/rorcl_undotbs2_512m' SIZE 25600K REUSE AUTOEXTEND ON MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL;

32 Execute following commands to create Cluster DB views.

• SET VERIFY OFF

• connect "SYS"/"&&sysPassword" as SYSDBA

• set echo on

• @/var/opt/oracle/rac_r2/rdbms/admin/catclust.sql;

33 Execute following commands to update SPFILE info.

• host echo "SPFILE='/dev/vg_rac/rorcl_spfile_raw_5m'" > /var/opt/oracle/rac_r2/dbs/initslvmdb1.ora


34 Execute the following commands to lock accounts.

• SET VERIFY OFF

• set echo on

• BEGIN

FOR item IN ( SELECT USERNAME FROM DBA_USERS WHERE ACCOUNT_STATUS

IN ('OPEN', 'LOCKED', 'EXPIRED') AND USERNAME

NOT IN ('SYS','SYSTEM') )

LOOP

dbms_output.put_line('Locking and Expiring: ' || item.USERNAME);

execute immediate 'alter user ' ||

sys.dbms_assert.enquote_name(

sys.dbms_assert.schema_name(

item.USERNAME),false) || ' password expire account lock' ;

END LOOP;

END;

/

35 Execute the following commands for the post-database-creation steps.

• SET VERIFY OFF

• connect "SYS"/"&&sysPassword" as SYSDBA

• set echo on

• select 'utl_recomp_begin: ' || to_char(sysdate, 'HH:MI:SS') from dual;

• execute utl_recomp.recomp_serial();

• select 'utl_recomp_end: ' || to_char(sysdate, 'HH:MI:SS') from dual;

• execute dbms_swrf_internal.cleanup_database(cleanup_local => FALSE);

• commit;

• shutdown immediate;

• connect "SYS"/"&&sysPassword" as SYSDBA

• startup mount pfile="/var/opt/oracle/admin/slvmdb/scripts/init.ora";

• alter database archivelog;

• alter database open;

• select group# from v$log where group# =3;

• select group# from v$log where group# =4;

• ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 3 ('/dev/vg_rac/rorcl_redo2_1_128m') SIZE 51200K,

• GROUP 4 ('/dev/vg_rac/rorcl_redo2_2_128m') SIZE 51200K;

• ALTER DATABASE ENABLE PUBLIC THREAD 2;

• host echo cluster_database=true >>/var/opt/oracle/admin/slvmdb/scripts/init.ora;

• host echo remote_listener=auto-cluster-scan:1521>>/var/opt/oracle/admin/slvmdb/scripts/init.ora;

• connect "SYS"/"&&sysPassword" as SYSDBA

• set echo on

• create spfile='/dev/vg_rac/rorcl_spfile_raw_5m' FROM pfile='/var/opt/oracle/admin/slvmdb/scripts/init.ora';

• shutdown immediate;

• host /var/opt/oracle/rac_r2/bin/srvctl enable database -d slvmdb;

• host /var/opt/oracle/rac_r2/bin/srvctl start database -d slvmdb;

• connect "SYS"/"&&sysPassword" as SYSDBA


36 Exit from the SQL prompt. Execute the following command to see the status of the database.

• bike$ crsctl status resource ora.slvmdb.db

Output should be:

NAME=ora.slvmdb.db

TYPE=ora.database.type

TARGET=ONLINE , ONLINE

STATE=ONLINE on bike, ONLINE on cycle

37 Congratulations! You have successfully completed the Oracle Real Application Clusters 11g Release 2 installation with SLVM/RAW.
