
Upgrade Oracle RAC cluster 11.2.0.3 on OEL 6.1 to Oracle RAC cluster 12c on OEL 6.5

In this article we will have a look at the guidelines and steps to upgrade an Oracle RAC cluster 11.2.0.3 on OEL 6.1 to an Oracle RAC cluster 12c on OEL 6.5. The initial cluster configuration (Oracle 11.2.0.3 on OEL 6.1) is described here. The upgrade comprises the following steps:

1. Rolling upgrade of the Virtual Box machines (OEL61A and OEL61B) from

OEL 6.1 to OEL 6.5 with UEKR3 (kernel 3.8.13-35.el6uek.x86_64)

2. Upgrade from Oracle 11gR2 11.2.0.3 to Oracle 12c 12.1.0.1 for both the GI and RDBMS software. The database upgrade involves downtime.

The following software will be used:

1. Oracle VM Virtual Box 4.3.12 – download from here.

2. Oracle 12c (database, grid, examples) – download from OTN here.

3. Oracle OEL 6.5 – download from the Oracle Software Delivery Cloud (formerly eDelivery) here.

I upgraded the Oracle VM Virtual Box software, used in the setup here, to the latest version available at the time of writing. Thus, I will not discuss the Oracle VM Virtual Box software upgrade further in this article.

I will use the existing VMs (OEL61A and OEL61B), without any change from the original configuration, for the upgrade to OEL 6.5 and Oracle 12c. The only changes are to enable internet access for the duration of the OEL upgrade.

Overall, I must admit, I was quite impressed by how easy and smooth the upgrade was compared to similar Oracle RAC upgrades from 9i->10g->11g, where I had to spend more time on MOS (formerly Metalink) working around upgrade issues.

1. Rolling upgrade of the Virtual Box machines (OEL61A and OEL61B) from OEL 6.1 to OEL 6.5 with UEKR3 (kernel 3.8.13-35.el6uek.x86_64)

Prior to the upgrade, OEL61A and OEL61B are running Oracle Enterprise Linux (OEL) 6.1. I will upgrade them to OEL 6.5 using UEKR3 3.8.13-35.el6uek.x86_64. I will implement the upgrade procedure first on OEL61A while Oracle is running on OEL61B, and then implement the same procedure on OEL61B while the already upgraded OEL61A is running.

Essentially, you must make sure that you can successfully run the ‘yum upgrade’ and ‘yum install’ commands while connected to the appropriate channel. The procedure is as follows:

1.1 Stop Oracle Clusterware and the database on the node where the

upgrade to OEL 6.5 is to be performed.
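The stop commands themselves are not shown in the article; a minimal sketch, assuming a standard 11.2 layout (the GI home path and the instance name racdb1 are my assumptions; the database name racdb appears later in the article):

```shell
# As the oracle user: stop the local database instance on this node
# (database name racdb from the article; instance name racdb1 assumed).
srvctl stop instance -d racdb -i racdb1

# As root: stop the whole Clusterware stack on this node only
# (11.2 GI home path assumed; adjust to your installation).
/u01/app/11.2.0/grid/bin/crsctl stop crs
```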

1.2 Configure any interface on oel61a/oel61b to obtain its IP through DHCP, only for the duration of the upgrade from OEL 6.1 to OEL 6.5. This is required to obtain an internet connection to the Public YUM repository. For example, on OEL61A change the interface from the fixed IP to a DHCP-obtained IP. Later I will disable NetworkManager, as it rewrites /etc/resolv.conf.

1.3 Change the Oracle VM Virtual Box network adapter for the corresponding OEL interface from bridged to NAT, only for the duration of the upgrade. This is required to obtain an internet connection to the Public YUM repository.

1.4 Restart the VM

1.5 Stop Oracle Clusterware and the database again on the node where the upgrade is performed (they may have restarted with the VM).

1.6 Change to the /etc/yum.repos.d/ directory by running cd /etc/yum.repos.d/

1.7 Pull the repository definition by running wget http://public-yum.oracle.com/public-yum-ol6.repo

1.8 Edit public-yum-ol6.repo to enable (enabled=1) the public_ol6_latest, public_ol6_UEKR3_latest, public_ol6_UEK_latest and public_ol6_ofed_UEK repositories.

[root@oel61a yum.repos.d]# cat public-yum-ol6.repo
[public_ol6_latest]
name=Oracle Linux $releasever Latest ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1

[public_ol6_addons]
name=Oracle Linux $releasever Add ons ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/addons/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=0

[public_ol6_ga_base]
name=Oracle Linux $releasever GA installation media copy ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/0/base/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=0

[public_ol6_u1_base]
name=Oracle Linux $releasever Update 1 installation media copy ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/1/base/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=0

[public_ol6_u2_base]
name=Oracle Linux $releasever Update 2 installation media copy ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/2/base/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=0

[public_ol6_u3_base]
name=Oracle Linux $releasever Update 3 installation media copy ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/3/base/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=0

[public_ol6_u4_base]
name=Oracle Linux $releasever Update 4 installation media copy ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/4/base/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=0

[public_ol6_u5_base]
name=Oracle Linux $releasever Update 5 installation media copy ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/5/base/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=0

[public_ol6_UEKR3_latest]
name=Latest Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/UEKR3/latest/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1

[public_ol6_UEK_latest]
name=Latest Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/latest/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1

[public_ol6_UEK_base]
name=Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/base/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=0

[public_ol6_playground_latest]
name=Latest mainline stable kernel for Oracle Linux 6 ($basearch) - Unsupported
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/playground/latest/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=0

[public_ol6_MySQL]
name=MySQL 5.5 for Oracle Linux 6 ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/MySQL/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=0

[public_ol6_gdm_multiseat]
name=Oracle Linux 6 GDM Multiseat ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/gdm_multiseat/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=0

[public_ol6_ofed_UEK]
name=OFED supporting tool packages for Unbreakable Enterprise Kernel on Oracle Linux 6 ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/ofed_UEK/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1

[public_ol6_MySQL56]
name=MySQL 5.6 for Oracle Linux 6 ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/MySQL56/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=0

[public_ol6_spacewalk20_server]
name=Spacewalk Server 2.0 for Oracle Linux 6 ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/spacewalk20/server/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=0

[public_ol6_spacewalk20_client]
name=Spacewalk Client 2.0 for Oracle Linux 6 ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/spacewalk20/client/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=0

[root@oel61a yum.repos.d]#
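To double-check which repositories ended up enabled without invoking yum, the .repo file can be scanned directly. A minimal sketch; it runs against a trimmed sample copy so it is self-contained, but on a node you would point REPO_FILE at /etc/yum.repos.d/public-yum-ol6.repo:

```shell
# Trimmed sample of the .repo file so the sketch is runnable anywhere;
# replace REPO_FILE with /etc/yum.repos.d/public-yum-ol6.repo on a real node.
REPO_FILE=$(mktemp)
cat > "$REPO_FILE" <<'EOF'
[public_ol6_latest]
enabled=1
[public_ol6_addons]
enabled=0
[public_ol6_UEKR3_latest]
enabled=1
EOF

# Walk the file: remember the current [section] header and print it
# whenever an enabled=1 line follows it.
awk -F= '/^\[/{sec=$0} /^enabled=1/{print sec}' "$REPO_FILE"
```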

1.9 Run yum install oracle-rdbms-server-12cR1-preinstall

1.10 Run yum update. As the output is quite long, I will not include it all. I encountered the following problem:

Total                                              168 kB/s | 294 kB     00:01
Running rpm_check_debug
ERROR with rpm_check_debug vs depsolve:
libqpidclient.so.5()(64bit) is needed by (installed) matahari-net-0.4.0-5.el6.x86_64
libqpidclient.so.5()(64bit) is needed by (installed) libvirt-qpid-0.2.22-6.el6.x86_64
libqpidcommon.so.5()(64bit) is needed by (installed) matahari-net-0.4.0-5.el6.x86_64
libqpidcommon.so.5()(64bit) is needed by (installed) libvirt-qpid-0.2.22-6.el6.x86_64
libmcommon.so.0.0.1()(64bit) is needed by (installed) matahari-net-0.4.0-5.el6.x86_64
libmnet.so.0.0.1()(64bit) is needed by (installed) matahari-net-0.4.0-5.el6.x86_64
matahari-lib = 0.4.0-5.el6 is needed by (installed) matahari-net-0.4.0-5.el6.x86_64
libmqmfagent.so.0.0.1()(64bit) is needed by (installed) matahari-net-0.4.0-5.el6.x86_64
matahari-agent-lib = 0.4.0-5.el6 is needed by (installed) matahari-net-0.4.0-5.el6.x86_64
libqmf.so.4()(64bit) is needed by (installed) libvirt-qpid-0.2.22-6.el6.x86_64
Please report this error in http://yum.baseurl.org/report
You could try running: rpm -Va --nofiles --nodigest
Your transaction was saved, rerun it with: yum load-transaction /tmp/yum_save_tx-2014-05-20-20-10Xp21Gm.yumtx

[root@oel61a yum.repos.d]# yum install

1.11 Apply the fix for BUG 919514. In my case I ran the following; if you have problems with different RPMs, adjust accordingly.

yum shell

install matahari

remove matahari-service

remove matahari-lib

remove matahari-host

remove matahari-agent

remove matahari-net

run

exit

yum shell

install libvirt-qpid

remove libvirt-qpid

run

exit

1.12 Rerun yum update

1.13 Install the Guest Additions CD, required for clipboard and mouse integration.

1.14 Change the eth0 configuration back to the fixed IP from the DHCP-obtained IP. Essentially, this reverses what I did in step 1.2.

1.15 Change /etc/grub.conf so that it boots 3.8.13-35.el6uek.x86_64 by default, and optimize it by adding divider=10.

[root@oel61a db_1]# cat /etc/grub.conf
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/sda5
#          initrd /initrd-[generic-]version.img
#boot=/dev/sda
default=2
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Oracle Linux Server Red Hat Compatible Kernel (2.6.32-431.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=UUID=ef6e890d-860a-4554-bb70-4315af978e6b rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rhgb quiet numa=off transparent_hugepage=never crashkernel=auto
        initrd /initramfs-2.6.32-431.el6.x86_64.img
title Oracle Linux Server (3.8.13-35.el6uek.x86_64.debug)
        root (hd0,0)
        kernel /vmlinuz-3.8.13-35.el6uek.x86_64.debug ro root=UUID=ef6e890d-860a-4554-bb70-4315af978e6b rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rhgb quiet numa=off transparent_hugepage=never
        initrd /initramfs-3.8.13-35.el6uek.x86_64.debug.img
title Oracle Linux Server (3.8.13-35.el6uek.x86_64)
        root (hd0,0)
        kernel /vmlinuz-3.8.13-35.el6uek.x86_64 ro root=UUID=ef6e890d-860a-4554-bb70-4315af978e6b rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rhgb quiet numa=off transparent_hugepage=never divider=10
        initrd /initramfs-3.8.13-35.el6uek.x86_64.img
title Oracle Linux Server-uek (2.6.32-100.34.1.el6uek.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-100.34.1.el6uek.x86_64 ro root=UUID=ef6e890d-860a-4554-bb70-4315af978e6b rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rhgb quiet numa=off transparent_hugepage=never
        initrd /initramfs-2.6.32-100.34.1.el6uek.x86_64.img
title Oracle Linux Server-uek-debug (2.6.32-100.34.1.el6uek.x86_64.debug)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-100.34.1.el6uek.x86_64.debug ro root=UUID=ef6e890d-860a-4554-bb70-4315af978e6b rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rhgb quiet numa=off transparent_hugepage=never
        initrd /initramfs-2.6.32-100.34.1.el6uek.x86_64.debug.img
title Oracle Linux Server (2.6.32-131.0.15.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-131.0.15.el6.x86_64 ro root=UUID=ef6e890d-860a-4554-bb70-4315af978e6b rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us crashkernel=auto rhgb quiet numa=off transparent_hugepage=never
        initrd /initramfs-2.6.32-131.0.15.el6.x86_64.img
[root@oel61a db_1]#
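Since default= is a 0-based index into the title entries, it is easy to double-check which kernel will boot before restarting. A small self-contained sketch (it uses a trimmed sample of the listing above; on a node, point GRUB_CONF at the real /etc/grub.conf):

```shell
# Trimmed sample of the grub.conf above, written to a temp file so the
# sketch is self-contained; use GRUB_CONF=/etc/grub.conf on a real node.
GRUB_CONF=$(mktemp)
cat > "$GRUB_CONF" <<'EOF'
default=2
timeout=5
title Oracle Linux Server Red Hat Compatible Kernel (2.6.32-431.el6.x86_64)
title Oracle Linux Server (3.8.13-35.el6uek.x86_64.debug)
title Oracle Linux Server (3.8.13-35.el6uek.x86_64)
EOF

# Resolve the 0-based default= index to the matching title entry.
idx=$(awk -F= '/^default=/{print $2}' "$GRUB_CONF")
default_title=$(grep '^title' "$GRUB_CONF" | sed -n "$((idx + 1))p")
echo "$default_title"
```

With default=2, the third title entry is printed, which is the UEKR3 kernel we want.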

1.16 Shutdown the VM

1.17 Change the VM network adapter back to bridged. Essentially, this reverses what I did in step 1.3.

1.18 Start the VM

1.19 Stop GI and database on the node.

1.20 Re-link the Oracle RDBMS and Oracle GI binaries. For the GI binary relink you might want to look at ‘How To Relink The Oracle Grid Infrastructure Standalone (Restart) Installation Or Oracle Grid Infrastructure RAC/Cluster Installation (11.2 or 12c)’ (Doc ID 1536057.1).
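For the RDBMS home the relink itself is a one-liner; a hedged sketch (the 11.2 home path is my assumption, adjust it to your installation):

```shell
# As the oracle user, with the database down on this node
# (the home path below is assumed, not taken from the article).
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
$ORACLE_HOME/bin/relink all
# Inspect $ORACLE_HOME/install/relink.log for errors before restarting.
# For the GI home, follow MOS Doc ID 1536057.1 rather than relinking by hand.
```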

1.21 Start GI and database on the node.

1.22 This concludes the upgrade from OEL 6.1 to OEL 6.5.

Before the upgrade I had

[root@oel61a yum.repos.d]# uname -a

Linux oel61a.gj.com 2.6.32-100.34.1.el6uek.x86_64 #1 SMP Wed May 25

17:46:45 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux

[root@oel61a yum.repos.d]#

After the upgrade I have

[root@oel61a ~]# uname -a

Linux oel61a.gj.com 3.8.13-35.el6uek.x86_64 #2 SMP Tue May 13 14:09:13

PDT 2014 x86_64 x86_64 x86_64 GNU/Linux

[root@oel61a ~]#

Start the OEL61A VM and perform the upgrade on OEL61B using the above procedure.

2. Upgrade from Oracle 11gR2 11.2.0.3 to Oracle 12c 12.1.0.1 for both the GI and RDBMS software. The database upgrade involves downtime.

Note that some of the prerequisites for running 11.2.0.3 are also prerequisites for running 12c. Installing oracle-rdbms-server-12cR1-preinstall performed most of the prerequisite configuration. In addition, perform the following steps while both VMs are up and running:

2.1 Run the Cluster Verification Utility. The output is in Annex A.

./runcluvfy.sh stage -pre crsinst -n oel61a,oel61b

./runcluvfy.sh stage -post hwos -n oel61a,oel61b

2.2 Make sure that the cluster is up and running

I ran

./crsctl check cluster -all

./crsctl stat res -t

The output is in Annex A

2.3 Create directories for the New GI 12c and the new RDBMS homes

mkdir -p /u01/app/12.1.0/grid

chown -R grid:oinstall /u01/app/12.1.0

mkdir -p /u01/app/oracle/product/12.1.0/db_1

chown -R oracle:oinstall /u01/app/oracle/product/12.1.0/db_1

2.4 Oracle 12c asks for the avahi-daemon. Run the following commands:

service avahi-daemon start

chkconfig avahi-daemon on

2.5 Addressing possible PRVE-0420 problem. Look at PRVE-0420 /dev/shm is

not found mounted on node (Doc ID 1568100.1). BUG:17080954 -

/DEV/SHM MOUNT OPTION WRONG.

PRVE-0420 : /dev/shm is not found mounted on node ""

PRVE-0420 : /dev/shm is not found mounted on node ""

If you observe the error while running cluvfy or the OUI, perform the following steps:

Change /etc/fstab to contain the line

tmpfs /dev/shm tmpfs rw,exec,size=3500m 0 0

Run the following command

mount -o remount,size=3500m /dev/shm

The problem is then fixed:

[root@oel61a ~]# mount -o remount,size=3500m /dev/shm

[root@oel61a ~]# df -k

Filesystem 1K-blocks Used Available Use% Mounted on

/dev/sda5 9948012 1092380 8327248 12% /

/dev/sda1 9948012 169128 9250500 2% /boot

/dev/sda6 9948012 23192 9396436 1% /home

/dev/sda7 9948012 47064 9372564 1% /opt

/dev/sda8 9948012 45900 9373728 1% /tmp

/dev/sda11 218783604 33712556 173934436 17% /u01

/dev/sda9 9948012 7948612 1471016 85% /usr

/dev/sda10 9948012 22640 9396988 1% /usr/local

/dev/sda2 9948012 4950092 4469536 53% /var

D_DRIVE 976759804 966175568 10584236 99% /media/sf_D_DRIVE

E_DRIVE 1953382396 1898114636 55267760 98% /media/sf_E_DRIVE

VM 1953382396 1898114636 55267760 98% /media/sf_VM

software 1953382396 1898114636 55267760 98% /media/sf_software

tmpfs 3584000 215908 3368092 7% /dev/shm

[root@oel61a ~]#

2.6 Set in /etc/ssh/sshd_config:

LoginGraceTime 0
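A small sketch of making that change non-interactively (shown against a temporary copy so it is self-contained; on a real node run it against /etc/ssh/sshd_config and restart sshd afterwards):

```shell
# Sample sshd_config fragment in a temp file; use CFG=/etc/ssh/sshd_config
# on a real node, then "service sshd restart" to apply.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
#LoginGraceTime 2m
PermitRootLogin yes
EOF

# Replace an existing (possibly commented-out) LoginGraceTime line,
# or append one if the directive is absent.
if grep -Eq '^#?LoginGraceTime' "$CFG"; then
    sed -E -i 's/^#?LoginGraceTime.*/LoginGraceTime 0/' "$CFG"
else
    echo 'LoginGraceTime 0' >> "$CFG"
fi
grep LoginGraceTime "$CFG"
```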

2.7 Enable the Name Service Cache Daemon (nscd)

chkconfig --level 35 nscd on

service nscd start

2.8 Disable Linux Firewalls.

chkconfig iptables off

2.9 Install cvuqdisk-1.0.9-1.rpm

2.10 Change /etc/resolv.conf to add the GNS name servers:

[root@oel61a bin]# cat cat /etc/resolv.conf

cat: cat: No such file or directory

# Generated by NetworkManager

search gj.com

nameserver 192.168.2.1

nameserver 192.168.2.11

nameserver 192.168.2.52

[root@oel61a bin]#

[root@oel61b bin]# cat /etc/resolv.conf

# Generated by NetworkManager

search gj.com

nameserver 192.168.2.11

nameserver 192.168.2.1

nameserver 192.168.2.52

[root@oel61b bin]#

2.11 Stop NetworkManager to prevent it from overwriting /etc/resolv.conf:

chkconfig NetworkManager off

service NetworkManager stop

2.12 Make profiles (.bash_profile) for the grid and oracle users on both VMs, OEL61A and OEL61B. Look at Annex A for details.
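As a reference, a minimal .bash_profile sketch for the grid user; the GI home follows the rootupgrade.sh output later in the article (/u01/app/12.1.0/grid_1), while ORACLE_BASE and ORACLE_SID are assumptions to verify against your environment. The oracle user's profile is analogous, with ORACLE_HOME set to /u01/app/oracle/product/12.1.0/db_1:

```shell
# Minimal grid-user profile sketch; verify every path against your setup.
export ORACLE_BASE=/u01/app/grid           # assumed grid-user base
export ORACLE_HOME=/u01/app/12.1.0/grid_1  # GI home from the rootupgrade.sh run
export ORACLE_SID=+ASM1                    # +ASM2 on the second node (assumed)
export PATH=$ORACLE_HOME/bin:$PATH
```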

2.13 Unzip 12c binaries in a staging area and set permissions on the

staging area.

2.14 Run runInstaller from grid 12c stage area.

Select Skip and press Next to continue.

Select Upgrade and press Next.

Select the languages and press Next.

Make sure all nodes are selected and press Next.

Select the option to configure Grid Infrastructure Management

Repository.

Review the OS groups and press Next.

Review the location and press Next.

Opt to manually run the scripts.

Review the checks. The errors are as listed below.

The installer asked for the avahi-daemon earlier, but now complains about it. The remaining errors can be ignored:

PRVF-7530 can be ignored: as this is an Oracle VM Virtual Box environment, the memory requirement is not important. In a production environment, increase the RAM size to avoid the error.

PRVF-5636 can be ignored in an Oracle VM Virtual Box environment. Look at The DNS response time for an unreachable node exceeded "15000" ms on following nodes (Doc ID 1356975.1) to avoid the error in a production environment.

PRVG-1359: I cannot find information on this one for the time being.

Daemon "avahi-daemon" not configured and running - This test checks that

the "avahi-daemon" daemon is not configured and running on the cluster

nodes.

Check Failed on Nodes: [oel61b, oel61a]

Verification result of failed node: oel61b

Details:

- PRVG-1359 : Daemon process "avahi-daemon" is configured on node "oel61b"
  Cause: The identified daemon process was found configured on the indicated node.
  Action: Ensure that the identified daemon process is not configured on the indicated node.

- PRVG-1360 : Daemon process "avahi-daemon" is running on node "oel61b"
  Cause: The identified daemon process was found running on the indicated node.
  Action: Ensure that the identified daemon process is stopped and not running on the indicated node.


Verification result of failed node: oel61a

Details:

- PRVG-1359 : Daemon process "avahi-daemon" is configured on node "oel61a"
  Cause: The identified daemon process was found configured on the indicated node.
  Action: Ensure that the identified daemon process is not configured on the indicated node.

- PRVG-1360 : Daemon process "avahi-daemon" is running on node "oel61a"
  Cause: The identified daemon process was found running on the indicated node.
  Action: Ensure that the identified daemon process is stopped and not running on the indicated node.


Physical Memory - This is a prerequisite condition to test whether the

system has at least 4GB (4194304.0KB) of total physical memory.

Check Failed on Nodes: [oel61b, oel61a]

Verification result of failed node: oel61b

Expected Value: 4GB (4194304.0KB)
Actual Value: 3.8614GB (4048936.0KB)

Details:

- PRVF-7530 : Sufficient physical memory is not available on node "oel61b" [Required physical memory = 4GB (4194304.0KB)]
  Cause: Amount of physical memory (RAM) found does not meet minimum memory requirements.
  Action: Add physical memory (RAM) to the node specified.


Verification result of failed node: oel61a

Expected Value: 4GB (4194304.0KB)
Actual Value: 3.8614GB (4048936.0KB)

Details:

- PRVF-7530 : Sufficient physical memory is not available on node "oel61a" [Required physical memory = 4GB (4194304.0KB)]
  Cause: Amount of physical memory (RAM) found does not meet minimum memory requirements.
  Action: Add physical memory (RAM) to the node specified.


Task resolv.conf Integrity - This task checks the consistency of the file /etc/resolv.conf across nodes.

Check Failed on Nodes: [oel61b, oel61a]

Verification result of failed node: oel61b

Details:

- PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: oel61a,oel61b
  Cause: The DNS response time for an unreachable node exceeded the value specified on nodes specified.
  Action: Make sure that 'options timeout', 'options attempts' and 'nameserver' entries in file resolv.conf are proper. On HPUX these entries will be 'retrans', 'retry' and 'nameserver'. On Solaris these will be 'options retrans', 'options retry' and 'nameserver'. Make sure that the DNS server responds back to name lookup request within the specified time when looking up an unknown host name.

- Check for integrity of file "/etc/resolv.conf" failed
  Cause: Cause Of Problem Not Available
  Action: User Action Not Available


Verification result of failed node: oel61a

Details:

- PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: oel61a,oel61b
  Cause: The DNS response time for an unreachable node exceeded the value specified on nodes specified.
  Action: Make sure that 'options timeout', 'options attempts' and 'nameserver' entries in file resolv.conf are proper. On HPUX these entries will be 'retrans', 'retry' and 'nameserver'. On Solaris these will be 'options retrans', 'options retry' and 'nameserver'. Make sure that the DNS server responds back to name lookup request within the specified time when looking up an unknown host name.

- Check for integrity of file "/etc/resolv.conf" failed
  Cause: Cause Of Problem Not Available
  Action: User Action Not Available


Confirm that you want to continue.

Wait until the installation prompts for running the scripts as root. When prompted, run them as root.

On oel61a:

[root@oel61a bin]# /u01/app/12.1.0/grid_1/rootupgrade.sh

Performing root user operation for Oracle 12c

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME= /u01/app/12.1.0/grid_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)

[n]: y

Copying dbhome to /usr/local/bin ...

The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)

[n]: y

Copying oraenv to /usr/local/bin ...

The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)

[n]: y

Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file:

/u01/app/12.1.0/grid_1/crs/install/crsconfig_params

2014/05/23 00:19:09 CLSRSC-363: User ignored prerequisites during

installation

ASM upgrade has started on first node.

OLR initialization - successful

2014/05/23 00:24:23 CLSRSC-329: Replacing Clusterware entries in file

'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.

CRS-4123: Oracle High Availability Services has been started.

2014/05/23 00:29:05 CLSRSC-343: Successfully started Oracle clusterware

stack

clscfg: EXISTING configuration version 5 detected.

clscfg: version 5 is 11g Release 2.

Successfully accumulated necessary OCR keys.

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

2014/05/23 00:33:31 CLSRSC-325: Configure Oracle Grid Infrastructure for

a Cluster ... succeeded

[root@oel61a bin]#

On oel61b

[root@oel61b ~]# /u01/app/12.1.0/grid_1/rootupgrade.sh

Performing root user operation for Oracle 12c

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME= /u01/app/12.1.0/grid_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)

[n]: y

Copying dbhome to /usr/local/bin ...

The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)

[n]: y

Copying oraenv to /usr/local/bin ...

The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)

[n]: y

Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file:

/u01/app/12.1.0/grid_1/crs/install/crsconfig_params

2014/05/23 00:37:21 CLSRSC-363: User ignored prerequisites during

installation

OLR initialization - successful

2014/05/23 00:40:39 CLSRSC-329: Replacing Clusterware entries in file

'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.

CRS-4123: Oracle High Availability Services has been started.

2014/05/23 00:44:27 CLSRSC-343: Successfully started Oracle clusterware

stack

clscfg: EXISTING configuration version 5 detected.

clscfg: version 5 is 12c Release 1.

Successfully accumulated necessary OCR keys.

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

Start upgrade invoked..

Started to upgrade the Oracle Clusterware. This operation may take a few

minutes.

Started to upgrade the OCR.

The OCR was successfully upgraded.

Started to upgrade the CSS.

The CSS was successfully upgraded.

Started to upgrade Oracle ASM.

Started to upgrade the CRS.

The CRS was successfully upgraded.

Oracle Clusterware operating version was successfully set to 12.1.0.1.0

2014/05/23 00:53:16 CLSRSC-325: Configure Oracle Grid Infrastructure for

a Cluster ... succeeded

[root@oel61b ~]#

Wait for the assistants to complete, then review the Cluster Verification Utility output:

[root@oel61a grid_1]# grep ERR /u01/app/oraInventory/logs/installActions2014-05-22_10-39-26PM.log

INFO: ERROR: [Result.addErrorDescription:607] PRVF-7530 : Sufficient

physical memory is not available on node "oel61b" [Required physical

memory = 4GB (4194304.0KB)]

INFO: ERROR: [Result.addErrorDescription:607] PRVF-7530 : Sufficient

physical memory is not available on node "oel61a" [Required physical

memory = 4GB (4194304.0KB)]

ERRORMSG(oel61b): PRVF-7530 : Sufficient physical memory is not

available on node "oel61b" [Required physical memory = 4GB (4194304.0KB)]

ERRORMSG(oel61a): PRVF-7530 : Sufficient physical memory is not

available on node "oel61a" [Required physical memory = 4GB (4194304.0KB)]

INFO: ERROR: [Result.addErrorDescription:607] PRVF-5636 : The DNS

response time for an unreachable node exceeded "15000" ms on following

nodes: oel61a,oel61b

INFO: ERROR: [Result.addErrorDescription:607] PRVF-5636 : The DNS

response time for an unreachable node exceeded "15000" ms on following

nodes: oel61a,oel61b

ERRORMSG(oel61b): PRVF-5636 : The DNS response time for an

unreachable node exceeded "15000" ms on following nodes: oel61a,oel61b

ERRORMSG(oel61a): PRVF-5636 : The DNS response time for an

unreachable node exceeded "15000" ms on following nodes: oel61a,oel61b

INFO: ERROR: [Result.addErrorDescription:618] PRVF-5636 : The DNS

response time for an unreachable node exceeded "15000" ms on following

nodes: oel61a,oel61b

INFO: ERROR: [Result.addErrorDescription:618] PRVF-5636 : The DNS

response time for an unreachable node exceeded "15000" ms on following

nodes: oel61a,oel61b

INFO: ERROR: [Result.addErrorDescription:607] Check for integrity of

file "/etc/resolv.conf" failed

INFO: ERROR: [Result.addErrorDescription:607] Check for integrity of

file "/etc/resolv.conf" failed

ERRORMSG(oel61b): PRVF-5636 : The DNS response time for an

unreachable node exceeded "15000" ms on following nodes: oel61a,oel61b

ERRORMSG(oel61b): Check for integrity of file

"/etc/resolv.conf" failed

ERRORMSG(oel61a): PRVF-5636 : The DNS response time for an

unreachable node exceeded "15000" ms on following nodes: oel61a,oel61b

ERRORMSG(oel61a): Check for integrity of file

"/etc/resolv.conf" failed

INFO: ERROR: [Result.addErrorDescription:607] PRVG-1359 : Daemon process

"avahi-daemon" is configured on node "oel61b"

INFO: ERROR: [Result.addErrorDescription:607] PRVG-1359 : Daemon process

"avahi-daemon" is configured on node "oel61a"

INFO: ERROR: [Result.addErrorDescription:618] PRVG-1359 : Daemon process

"avahi-daemon" is configured on node "oel61b"

INFO: ERROR: [Result.addErrorDescription:618] PRVG-1359 : Daemon process

"avahi-daemon" is configured on node "oel61a"

INFO: ERROR: [Result.addErrorDescription:607] PRVG-1360 : Daemon process

"avahi-daemon" is running on node "oel61b"

INFO: ERROR: [Result.addErrorDescription:607] PRVG-1360 : Daemon process

"avahi-daemon" is running on node "oel61a"

INFO: ERROR: [Result.addErrorDescription:618] PRVG-1360 : Daemon process

"avahi-daemon" is running on node "oel61b"

INFO: ERROR: [Result.addErrorDescription:618] PRVG-1360 : Daemon process

"avahi-daemon" is running on node "oel61a"

ERRORMSG(oel61b): PRVG-1359 : Daemon process "avahi-daemon" is

configured on node "oel61b"

ERRORMSG(oel61b): PRVG-1360 : Daemon process "avahi-daemon" is

running on node "oel61b"

ERRORMSG(oel61a): PRVG-1359 : Daemon process "avahi-daemon" is

configured on node "oel61a"

ERRORMSG(oel61a): PRVG-1360 : Daemon process "avahi-daemon" is

running on node "oel61a"

INFO: ERROR:

[root@oel61a grid_1]#

2.15 Verify GI installation

[root@oel61a bin]# ./crsctl check cluster -all

**************************************************************

oel61a:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

**************************************************************

oel61b:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

**************************************************************

[root@oel61a bin]# ./crsctl stat res -t

-------------------------------------------------------------------------

-------

Name Target State Server State

details

-------------------------------------------------------------------------

-------

Local Resources

-------------------------------------------------------------------------

-------

ora.DATA.dg

ONLINE ONLINE oel61a STABLE

ONLINE ONLINE oel61b STABLE

ora.LISTENER.lsnr

ONLINE ONLINE oel61a STABLE

ONLINE ONLINE oel61b STABLE

ora.asm

ONLINE ONLINE oel61a

Started,STABLE

ONLINE ONLINE oel61b STABLE

ora.net1.network

ONLINE ONLINE oel61a STABLE

ONLINE ONLINE oel61b STABLE

ora.ons

ONLINE ONLINE oel61a STABLE

ONLINE ONLINE oel61b STABLE

ora.registry.acfs

ONLINE OFFLINE oel61a STABLE

ONLINE OFFLINE oel61b STABLE

-------------------------------------------------------------------------

-------

Cluster Resources

-------------------------------------------------------------------------

-------

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE oel61b STABLE

ora.LISTENER_SCAN2.lsnr

1 ONLINE ONLINE oel61a STABLE

ora.LISTENER_SCAN3.lsnr

1 ONLINE ONLINE oel61a STABLE

ora.MGMTLSNR

1 ONLINE ONLINE oel61a 169.254.46.130 10.10.2.21 10.10.10.21,STABLE

ora.cvu

1 ONLINE ONLINE oel61b STABLE

ora.gns

1 ONLINE ONLINE oel61a STABLE

ora.gns.vip

1 ONLINE ONLINE oel61a STABLE

ora.mgmtdb

1 ONLINE ONLINE oel61a Open,STABLE

ora.oc4j

1 ONLINE ONLINE oel61b STABLE

ora.oel61a.vip

1 ONLINE ONLINE oel61a STABLE

ora.oel61b.vip

1 ONLINE ONLINE oel61b STABLE

ora.racdb.db

1 ONLINE ONLINE oel61a Open,STABLE

2 ONLINE OFFLINE Instance Shutdown,STABLE

ora.racdb.racdbsrv.svc

1 ONLINE ONLINE oel61a STABLE

2 ONLINE OFFLINE STABLE

ora.scan1.vip

1 ONLINE ONLINE oel61b STABLE

ora.scan2.vip

1 ONLINE ONLINE oel61a STABLE

ora.scan3.vip

1 ONLINE ONLINE oel61a STABLE

-------------------------------------------------------------------------

-------

[root@oel61a bin]#

2.16 This concludes the Grid Infrastructure upgrade from 11.2.0.3 to 12c

(12.1.0.1).

2.17 Install Oracle 12c RDBMS binaries in a separate $OH.

Run OUI from 12c database stage directory.

Select Skip updates

Select Install software only

Select RAC database installation.

Select All Nodes

Select languages

Select Enterprise Edition

Verify the locations.

Verify OS groups.

Review

The errors are as follows

Task resolv.conf Integrity - This task checks consistency of file

/etc/resolv.conf file across nodes

Check Failed on Nodes: [oel61b, oel61a]

Verification result of failed node: oel61b Details:

-

PRVF-5636 : The DNS response time for an unreachable node exceeded

"15000" ms on following nodes: oel61a,oel61b - Cause: The DNS response

time for an unreachable node exceeded the value specified on nodes

specified. - Action: Make sure that 'options timeout', 'options

attempts' and 'nameserver' entries in file resolv.conf are proper. On

HPUX these entries will be 'retrans', 'retry' and 'nameserver'. On

Solaris these will be 'options retrans', 'options retry' and

'nameserver'. Make sure that the DNS server responds back to name lookup

request within the specified time when looking up an unknown host name.

-

Check for integrity of file "/etc/resolv.conf" failed - Cause: Cause Of

Problem Not Available - Action: User Action Not Available

Back to Top

Verification result of failed node: oel61a Details:

-

PRVF-5636 : The DNS response time for an unreachable node exceeded

"15000" ms on following nodes: oel61a,oel61b - Cause: The DNS response

time for an unreachable node exceeded the value specified on nodes

specified. - Action: Make sure that 'options timeout', 'options

attempts' and 'nameserver' entries in file resolv.conf are proper. On

HPUX these entries will be 'retrans', 'retry' and 'nameserver'. On

Solaris these will be 'options retrans', 'options retry' and

'nameserver'. Make sure that the DNS server responds back to name lookup

request within the specified time when looking up an unknown host name.

-

Check for integrity of file "/etc/resolv.conf" failed - Cause: Cause Of

Problem Not Available - Action: User Action Not Available

Back to Top

Single Client Access Name (SCAN) - This test verifies the Single Client

Access Name configuration. Error:

-

PRVG-1101 : SCAN name "oel61-cluster-scan.gns.grid.gj.com" failed to

resolve - Cause: An attempt to resolve specified SCAN name to a list of

IP addresses failed because SCAN could not be resolved in DNS or GNS

using 'nslookup'. - Action: Check whether the specified SCAN name is

correct. If SCAN name should be resolved in DNS, check the configuration

of SCAN name in DNS. If it should be resolved in GNS make sure that GNS

resource is online.

-

PRVG-1101 : SCAN name "oel61-cluster-scan.gns.grid.gj.com" failed to

resolve - Cause: An attempt to resolve specified SCAN name to a list of

IP addresses failed because SCAN could not be resolved in DNS or GNS

using 'nslookup'. - Action: Check whether the specified SCAN name is

correct. If SCAN name should be resolved in DNS, check the configuration

of SCAN name in DNS. If it should be resolved in GNS make sure that GNS

resource is online.

Check Failed on Nodes: [oel61b, oel61a]

Verification result of failed node: oel61b Back to Top

Verification result of failed node: oel61a Back to Top

For PRVF-5636 Look at : PRVF-5636 : The DNS response time for an unreachable

node exceeded "15000" ms on following nodes (Doc ID 1356975.1)
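The action text of PRVF-5636 points at the resolver options; on Linux, /etc/resolv.conf entries along these lines (the option values are illustrative, not from the article) keep lookups for unreachable names under the 15-second threshold:

```
# /etc/resolv.conf - illustrative values only
options timeout:1
options attempts:2
search gj.com
nameserver 192.168.2.52
nameserver 192.168.2.1
```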

For PRVG-1101 Do the following

Before:

[grid@oel61a ~]$ cluvfy comp gns -postcrsinst -verbose

Verifying GNS integrity

Checking GNS integrity...

Checking if the GNS subdomain name is valid...

The GNS subdomain name "gns.grid.gj.com" is a valid domain name

Checking if the GNS VIP belongs to same subnet as the public network...

Public network subnets "192.168.2.0, 192.168.2.0, 192.168.2.0,

192.168.2.0, 192.168.2.0" match with the GNS VIP "192.168.2.0,

192.168.2.0, 192.168.2.0, 192.168.2.0, 192.168.2.0"

Checking if the GNS VIP is a valid address...

GNS VIP "192.168.2.52" resolves to a valid IP address

Checking the status of GNS VIP...

Checking if FDQN names for domain "gns.grid.gj.com" are reachable

WARNING:

PRVF-5218 : "oel61a-vip.gns.grid.gj.com" did not resolve into any IP

address

PRVF-5827 : The response time for name lookup for name "oel61a-vip.gns.grid.gj.com" exceeded 15 seconds

WARNING:

PRVF-5218 : "oel61b-vip.gns.grid.gj.com" did not resolve into any IP

address

PRVF-5827 : The response time for name lookup for name "oel61b-vip.gns.grid.gj.com" exceeded 15 seconds

Checking status of GNS resource...

Node Running? Enabled?

------------ ------------------------ ------------------------

oel61a yes yes

oel61b no yes

GNS resource configuration check passed

Checking status of GNS VIP resource...

Node Running? Enabled?

------------ ------------------------ ------------------------

oel61a yes yes

oel61b no yes

GNS VIP resource configuration check passed.

GNS integrity check failed

Verification of GNS integrity was unsuccessful on all the specified

nodes.

[grid@oel61a ~]$

Make sure that the GNS static VIP address is in /etc/resolv.conf.

Note that if NetworkManager is running, the file content may be

overwritten.

[root@oel61a bin]# cat /etc/resolv.conf

# Generated by NetworkManager

search gj.com

nameserver 192.168.2.1

nameserver 192.168.2.11

nameserver 192.168.2.52

[root@oel61a bin]#

[root@oel61b bin]# cat /etc/resolv.conf

# Generated by NetworkManager

search gj.com

nameserver 192.168.2.11

nameserver 192.168.2.1

nameserver 192.168.2.52

[root@oel61b bin]#
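A quick way to confirm the fix on every node is to check that the GNS static VIP (192.168.2.52 in this setup) is listed as a nameserver. The sketch below is not from the article; it works on a temporary copy of the file shown above so it is self-contained, and on a real node you would point it at /etc/resolv.conf instead:

```shell
has_nameserver() {
    # has_nameserver <file> <ip>: succeed if <ip> is a nameserver entry
    grep -q "^nameserver[[:space:]]*$2" "$1"
}

GNS_VIP=192.168.2.52
RESOLV=$(mktemp)
cat > "$RESOLV" <<'EOF'
# Generated by NetworkManager
search gj.com
nameserver 192.168.2.11
nameserver 192.168.2.1
nameserver 192.168.2.52
EOF

if has_nameserver "$RESOLV" "$GNS_VIP"; then
    echo "GNS VIP $GNS_VIP is present"
else
    echo "GNS VIP $GNS_VIP is missing - add 'nameserver $GNS_VIP'"
fi
rm -f "$RESOLV"
```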

After the modification we get

[oracle@oel61a dbs]$ nslookup oel61-cluster-scan.gns.grid.gj.com

Server: 192.168.2.52

Address: 192.168.2.52#53

Name: oel61-cluster-scan.gns.grid.gj.com

Address: 192.168.2.117

Name: oel61-cluster-scan.gns.grid.gj.com

Address: 192.168.2.111

Name: oel61-cluster-scan.gns.grid.gj.com

Address: 192.168.2.112

[oracle@oel61a dbs]$

[grid@oel61b ~]$ nslookup oel61-cluster-scan.gns.grid.gj.com

Server: 192.168.2.52

Address: 192.168.2.52#53

Name: oel61-cluster-scan.gns.grid.gj.com

Address: 192.168.2.117

Name: oel61-cluster-scan.gns.grid.gj.com

Address: 192.168.2.111

Name: oel61-cluster-scan.gns.grid.gj.com

Address: 192.168.2.112

[grid@oel61b ~]$ cluvfy comp gns -postcrsinst -verbose

Verifying GNS integrity

Checking GNS integrity...

Checking if the GNS subdomain name is valid...

The GNS subdomain name "gns.grid.gj.com" is a valid domain name

Checking if the GNS VIP belongs to same subnet as the public network...

Public network subnets "192.168.2.0, 192.168.2.0, 192.168.2.0" match with

the GNS VIP "192.168.2.0, 192.168.2.0, 192.168.2.0"

Checking if the GNS VIP is a valid address...

GNS VIP "192.168.2.52" resolves to a valid IP address

Checking the status of GNS VIP...

Checking if FDQN names for domain "gns.grid.gj.com" are reachable

GNS resolved IP addresses are reachable

GNS resolved IP addresses are reachable

GNS resolved IP addresses are reachable

Checking status of GNS resource...

Node Running? Enabled?

------------ ------------------------ ------------------------

oel61a yes yes

oel61b no yes

GNS resource configuration check passed

Checking status of GNS VIP resource...

Node Running? Enabled?

------------ ------------------------ ------------------------

oel61a yes yes

oel61b no yes

GNS VIP resource configuration check passed.

GNS integrity check passed

Verification of GNS integrity was successful.

[grid@oel61b ~]$

Select Ignore All

Wait until prompted for the actions that need to be run as root.

Run the scripts

[root@oel61a bin]# /u01/app/oracle/product/12.1.0/db_1/root.sh

Performing root user operation for Oracle 12c

The following environment variables are set as:

ORACLE_OWNER= oracle

ORACLE_HOME= /u01/app/oracle/product/12.1.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

[root@oel61a bin]#

[root@oel61b ~]# /u01/app/oracle/product/12.1.0/db_1/root.sh

Performing root user operation for Oracle 12c

The following environment variables are set as:

ORACLE_OWNER= oracle

ORACLE_HOME= /u01/app/oracle/product/12.1.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

[root@oel61b ~]#

Exit OUI.

2.18 From the old $OH, make sure that the database is started, the FRA is

large enough, and db_recovery_file_dest_size is sized appropriately.

SQL> alter system set db_recovery_file_dest_size=60G scope=both sid='*';

System altered.

SQL>
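The backup dbua takes lands in the FRA, so db_recovery_file_dest_size should comfortably exceed the database size plus the redo generated during the upgrade. A back-of-envelope sizing sketch, not from the article and with hypothetical figures; query v$datafile and v$log on your own database for real numbers:

```shell
db_size_gb=25        # total datafile size (GB) - hypothetical
redo_margin_gb=10    # archived redo expected during the upgrade (GB) - hypothetical
needed_gb=$(( (db_size_gb + redo_margin_gb) * 2 ))   # x2 safety factor
echo "suggested db_recovery_file_dest_size: ${needed_gb}G"
```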

2.19 Run dbua

Note that dbua can take a convenient backup that you can use for

recovery if something goes wrong. To take advantage of this option,

you need to select it in dbua before you start. I strongly recommend

using it; I had a problem with db_recovery_file_dest_size set too low

and had to restore and retry.

Invoke dbua from the new 12c $OH

Select Upgrade Oracle Database.

Select the database, review and press Next.

Wait for the prerequisites check to complete.

Examine the findings. This particular one is fixable so press Next

to continue.

Note: here is where you specify the backup option, parallelism

options, statistics gathering prior to the upgrade, etc. If something

goes wrong, a restore script will be waiting for you in the specified

location.

Select an option for EM Express

Note: here is where you specify the backup location; the restore

script will be placed there.

You should not get this error if the FRA is big enough. If you are

stuck, restore the database to its state before the upgrade, fix

whatever the problem is, and retry.

Review the summary

Review the actions

Wait for the upgrade to complete.

At the end of the upgrade you may see something like this.

View the results and close dbua.

2.20 Verify that database is successfully upgraded.

[oracle@oel61b ~]$ srvctl config database -d racdb

Database unique name: RACDB

Database name: RACDB

Oracle home: /u01/app/oracle/product/12.1.0/db_1

Oracle user: oracle

Spfile: +DATA/racdb/spfileracdb.ora

Password file:

Domain:

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Server pools: servpool

Database instances:

Disk Groups: DATA

Mount point paths:

Services: racdbsrv

Type: RAC

Start concurrency:

Stop concurrency:

Database is policy managed

[oracle@oel61b ~]$

[oracle@oel61b admin]$ srvctl status database -d racdb

Instance RACDB_1 is running on node oel61b

Instance RACDB_2 is running on node oel61a

[oracle@oel61b admin]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Sat May 24 22:59:37 2014

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit

Production

With the Partitioning, Real Application Clusters, Automatic Storage

Management, OLAP,

Advanced Analytics and Real Application Testing options

SQL> select * from v$active_instances;

INST_NUMBER

-----------

INST_NAME

-------------------------------------------------------------------------

-------

CON_ID

----------

1

oel61b.gj.com:RACDB_1

0

2

oel61a.gj.com:RACDB_2

0

INST_NUMBER

-----------

INST_NAME

-------------------------------------------------------------------------

-------

CON_ID

----------

SQL> set linesize 300

SQL> /

INST_NUMBER INST_NAME

CON_ID

----------- -------------------------------------------------------------

-------------------------------------------------------------------------

-------------------------------------------------------------------------

--------------------------------- ----------

1 oel61b.gj.com:RACDB_1

0

2 oel61a.gj.com:RACDB_2

0

SQL>

SQL> select * from gv$instance;

INST_ID INSTANCE_NUMBER INSTANCE_NAME HOST_NAME

VERSION STARTUP_T STATUS PAR

THREAD# ARCHIVE LOG_SWITCH_WAIT LOGINS SHU DATABASE_STATUS

INSTANCE_ROLE ACTIVE_ST BLO CON_ID INSTANCE_MO EDITION

---------- --------------- ---------------- -----------------------------

----------------------------------- ----------------- --------- ---------

--- --- ---------- ------- --------------- ---------- --- ---------------

-- ------------------ --------- --- ---------- ----------- -------

FAMILY

-------------------------------------------------------------------------

-------

2 2 RACDB_2 oel61a.gj.com

12.1.0.1.0 24-MAY-14 OPEN YES 2

STARTED ALLOWED NO ACTIVE PRIMARY_INSTANCE

NORMAL NO 0 REGULAR EE

1 1 RACDB_1 oel61b.gj.com

12.1.0.1.0 24-MAY-14 OPEN YES 1

STARTED ALLOWED NO ACTIVE PRIMARY_INSTANCE

NORMAL NO 0 REGULAR EE

SQL>

[grid@oel61b ~]$ crsctl stat res -t

-------------------------------------------------------------------------

-------

Name Target State Server State details

-------------------------------------------------------------------------

-------

Local Resources

-------------------------------------------------------------------------

-------

ora.DATA.dg

ONLINE ONLINE oel61a STABLE

ONLINE ONLINE oel61b STABLE

ora.LISTENER.lsnr

ONLINE ONLINE oel61a STABLE

ONLINE ONLINE oel61b STABLE

ora.asm

ONLINE ONLINE oel61a

Started,STABLE

ONLINE ONLINE oel61b

Started,STABLE

ora.net1.network

ONLINE ONLINE oel61a STABLE

ONLINE ONLINE oel61b STABLE

ora.ons

ONLINE ONLINE oel61a STABLE

ONLINE ONLINE oel61b STABLE

ora.registry.acfs

ONLINE OFFLINE oel61a STABLE

ONLINE OFFLINE oel61b STABLE

-------------------------------------------------------------------------

-------

Cluster Resources

-------------------------------------------------------------------------

-------

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE oel61a STABLE

ora.LISTENER_SCAN2.lsnr

1 ONLINE ONLINE oel61b STABLE

ora.LISTENER_SCAN3.lsnr

1 ONLINE ONLINE oel61b STABLE

ora.MGMTLSNR

1 ONLINE ONLINE oel61b 169.254.81.184 10.10.10.22 10.10.5.22,STABLE

ora.cvu

1 ONLINE ONLINE oel61b STABLE

ora.gns

1 ONLINE ONLINE oel61b STABLE

ora.gns.vip

1 ONLINE ONLINE oel61b STABLE

ora.mgmtdb

1 ONLINE ONLINE oel61b Open,STABLE

ora.oc4j

1 ONLINE ONLINE oel61b STABLE

ora.oel61a.vip

1 ONLINE ONLINE oel61a STABLE

ora.oel61b.vip

1 ONLINE ONLINE oel61b STABLE

ora.racdb.db

1 ONLINE ONLINE oel61b Open,STABLE

2 ONLINE ONLINE oel61a Open,STABLE

ora.racdb.racdbsrv.svc

1 ONLINE ONLINE oel61a STABLE

2 ONLINE ONLINE oel61b STABLE

ora.scan1.vip

1 ONLINE ONLINE oel61a STABLE

ora.scan2.vip

1 ONLINE ONLINE oel61b STABLE

ora.scan3.vip

1 ONLINE ONLINE oel61b STABLE

-------------------------------------------------------------------------

-------

[grid@oel61b ~]$

2.21 Handy and useful scripts to recover the database in case of a failed

upgrade.

[root@oel61a RACDB]# cd backup

[root@oel61a backup]# pwd

/u01/app/oracle/admin/RACDB/backup

[root@oel61a backup]# ls

createSPFile_RACDB.sql ctl_backup_1400922874007 df_backup_04p93abe_1_1

RACDB_2_restore.sh

ctl_backup_1400807180325 df_backup_01p8voqo_1_1 init.ora

rmanRestoreCommands_RACDB

[root@oel61a backup]# cat createSPFile_RACDB.sql

connect / as sysdba

CREATE SPFILE='+DATA/racdb/spfileracdb.ora' from

pfile='/u01/app/oracle/admin/RACDB/backup/init.ora';

exit;

[root@oel61a backup]# cat RACDB_2_restore.sh

#!/bin/sh

# -- Run this Script to Restore Oracle Database Instance RACDB_2

echo -- Shutting down the database from the new oracle home ...

ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1; export ORACLE_HOME

/u01/app/oracle/product/12.1.0/db_1/bin/srvctl stop database -d RACDB

echo -- Downgrading the database CRS resources ...

echo y | /u01/app/oracle/product/12.1.0/db_1/bin/srvctl downgrade database -d RACDB -t 11.2.0.3.0 -o /u01/app/oracle/product/11.2.0/db_1

ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1; export ORACLE_HOME

ORACLE_SID=RACDB_2; export ORACLE_SID

echo y | /u01/app/oracle/product/11.2.0/db_1/bin/srvctl modify database -d RACDB -p +DATA/RACDB/spfileRACDB.ora

echo -- Removing /u01/app/oracle/cfgtoollogs/dbua/logs/Welcome_RACDB_2.txt file

rm -f /u01/app/oracle/cfgtoollogs/dbua/logs/Welcome_RACDB_2.txt ;

/u01/app/oracle/product/11.2.0/db_1/bin/sqlplus /nolog @/u01/app/oracle/admin/RACDB/backup/createSPFile_RACDB.sql

/u01/app/oracle/product/11.2.0/db_1/bin/rman @/u01/app/oracle/admin/RACDB/backup/rmanRestoreCommands_RACDB

echo -- Starting up the database from the old oracle home ...

/u01/app/oracle/product/11.2.0/db_1/bin/srvctl start database -d RACDB

[root@oel61a backup]#

[root@oel61a backup]# cat rmanRestoreCommands_RACDB

connect target /;

startup nomount;

set nocfau;

restore controlfile from

'/u01/app/oracle/admin/RACDB/backup/ctl_backup_1400922874007';

alter database mount;

restore database;

alter database open resetlogs;

exit

[root@oel61a backup]#
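Before running the generated restore script, it is worth checking that every backup piece it references is actually on disk. A self-contained sketch of such a check (not from the article; the helper name is mine, and it is demonstrated against a throwaway directory so it runs anywhere -- substitute your own backup directory and file names):

```shell
check_backup_dir() {
    # check_backup_dir <dir> <file>...: succeed if every file exists in <dir>
    dir=$1; shift
    for f in "$@"; do
        [ -f "$dir/$f" ] || { echo "missing: $dir/$f"; return 1; }
    done
    echo "all backup pieces present in $dir"
}

# Demo against a temporary directory with the article's file names
DEMO=$(mktemp -d)
touch "$DEMO/ctl_backup_1400922874007" "$DEMO/init.ora"
check_backup_dir "$DEMO" ctl_backup_1400922874007 init.ora
rm -rf "$DEMO"
```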

2.22 This concludes the database upgrade from 11.2.0.3 to 12c (12.1.0.1).

Annex A

Cluvfy output

[grid@oel61a grid]$ ./runcluvfy.sh stage -pre crsinst -n oel61a,oel61b

Performing pre-checks for cluster services setup

Checking node reachability...

Node reachability check passed from node "oel61a"

Checking user equivalence...

User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity using interfaces on subnet "192.168.2.0"

Node connectivity passed for subnet "192.168.2.0" with node(s)

oel61b,oel61a

TCP connectivity check passed for subnet "192.168.2.0"

Check: Node connectivity using interfaces on subnet "10.10.10.0"

Node connectivity passed for subnet "10.10.10.0" with node(s)

oel61b,oel61a

TCP connectivity check passed for subnet "10.10.10.0"

Check: Node connectivity using interfaces on subnet "10.10.2.0"

Node connectivity passed for subnet "10.10.2.0" with node(s)

oel61a,oel61b

TCP connectivity check passed for subnet "10.10.2.0"

Check: Node connectivity using interfaces on subnet "10.10.5.0"

Node connectivity passed for subnet "10.10.5.0" with node(s)

oel61b,oel61a

TCP connectivity check passed for subnet "10.10.5.0"

Checking subnet mask consistency...

Subnet mask consistency check passed for subnet "10.10.2.0".

Subnet mask consistency check passed for subnet "192.168.2.0".

Subnet mask consistency check passed for subnet "10.10.10.0".

Subnet mask consistency check passed for subnet "10.10.5.0".

Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "10.10.2.0" for multicast communication with multicast

group "224.0.0.251"...

Check of subnet "10.10.2.0" for multicast communication with multicast

group "224.0.0.251" passed.

Checking subnet "10.10.10.0" for multicast communication with multicast

group "224.0.0.251"...

Check of subnet "10.10.10.0" for multicast communication with multicast

group "224.0.0.251" passed.

Checking subnet "10.10.5.0" for multicast communication with multicast

group "224.0.0.251"...

Check of subnet "10.10.5.0" for multicast communication with multicast

group "224.0.0.251" passed.

Check of multicast communication passed.

Checking ASMLib configuration.

Check for ASMLib configuration passed.

Total memory check failed

Check failed on nodes:

oel61b,oel61a

Available memory check passed

Swap space check passed

Free disk space check passed for "oel61b:/usr"

Free disk space check passed for "oel61a:/usr"

Free disk space check passed for "oel61b:/var"

Free disk space check passed for "oel61a:/var"

Free disk space check passed for "oel61b:/etc,oel61b:/sbin"

Free disk space check passed for "oel61a:/etc,oel61a:/sbin"

Free disk space check passed for "oel61b:/u01/app/11.2.0/grid"

Free disk space check passed for "oel61a:/u01/app/11.2.0/grid"

Free disk space check passed for "oel61b:/tmp"

Free disk space check passed for "oel61a:/tmp"

Check for multiple users with UID value 1100 passed

User existence check passed for "grid"

Group existence check passed for "oinstall"

Group existence check passed for "dba"

Membership check for user "grid" in group "oinstall" [as Primary] passed

Membership check for user "grid" in group "dba" passed

Run level check passed

Hard limits check passed for "maximum open file descriptors"

Soft limits check passed for "maximum open file descriptors"

Hard limits check passed for "maximum user processes"

Soft limits check passed for "maximum user processes"

System architecture check passed

Kernel version check passed

Kernel parameter check passed for "semmsl"

Kernel parameter check passed for "semmns"

Kernel parameter check passed for "semopm"

Kernel parameter check passed for "semmni"

Kernel parameter check passed for "shmmax"

Kernel parameter check passed for "shmmni"

Kernel parameter check passed for "shmall"

Kernel parameter check passed for "file-max"

Kernel parameter check passed for "ip_local_port_range"

Kernel parameter check passed for "rmem_default"

Kernel parameter check passed for "rmem_max"

Kernel parameter check passed for "wmem_default"

Kernel parameter check passed for "wmem_max"

Kernel parameter check passed for "aio-max-nr"

Package existence check passed for "binutils"

Package existence check passed for "compat-libcap1"

Package existence check passed for "compat-libstdc++-33(x86_64)"

Package existence check passed for "libgcc(x86_64)"

Package existence check passed for "libstdc++(x86_64)"

Package existence check passed for "libstdc++-devel(x86_64)"

Package existence check passed for "sysstat"

Package existence check passed for "gcc"

Package existence check passed for "gcc-c++"

Package existence check passed for "ksh"

Package existence check passed for "make"

Package existence check passed for "glibc(x86_64)"

Package existence check passed for "glibc-devel(x86_64)"

Package existence check passed for "libaio(x86_64)"

Package existence check passed for "libaio-devel(x86_64)"

Package existence check passed for "nfs-utils"

Check for multiple users with UID value 0 passed

Current group ID check passed

Starting check for consistency of primary group of root user

Check for consistency of root user's primary group passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...

No NTP Daemons or Services were found to be running

Clock synchronization check using Network Time Protocol(NTP) passed

Core file name pattern consistency check passed.

User "grid" is not part of "root" group. Check passed

Default user file creation mask check passed

Checking integrity of file "/etc/resolv.conf" across nodes

"domain" and "search" entries do not coexist in any "/etc/resolv.conf"

file

All nodes have same "search" order defined in file "/etc/resolv.conf"

PRVF-5636 : The DNS response time for an unreachable node exceeded

"15000" ms on following nodes: oel61a,oel61b

Check for integrity of file "/etc/resolv.conf" failed

Time zone consistency check passed

Checking integrity of name service switch configuration file

"/etc/nsswitch.conf" ...

All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"

Check for integrity of name service switch configuration file

"/etc/nsswitch.conf" passed

Checking daemon "avahi-daemon" is not configured and running

Daemon not configured check failed for process "avahi-daemon"

Check failed on nodes:

oel61b,oel61a

Daemon not running check failed for process "avahi-daemon"

Check failed on nodes:

oel61b,oel61a

Starting check for Reverse path filter setting ...

Check for Reverse path filter setting passed

Starting check for /dev/shm mounted as temporary file system ...

Check for /dev/shm mounted as temporary file system passed

Pre-check for cluster services setup was unsuccessful on all the nodes.

[grid@oel61a grid]$

[grid@oel61a grid]$ ./runcluvfy.sh stage -post hwos -n oel61a,oel61b

Performing post-checks for hardware and operating system setup

Checking node reachability...

Node reachability check passed from node "oel61a"

Checking user equivalence...

User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity using interfaces on subnet "192.168.2.0"

Node connectivity passed for subnet "192.168.2.0" with node(s)

oel61b,oel61a

TCP connectivity check passed for subnet "192.168.2.0"

Check: Node connectivity using interfaces on subnet "10.10.10.0"

Node connectivity passed for subnet "10.10.10.0" with node(s)

oel61a,oel61b

TCP connectivity check passed for subnet "10.10.10.0"

Check: Node connectivity using interfaces on subnet "10.10.2.0"

Node connectivity passed for subnet "10.10.2.0" with node(s)

oel61b,oel61a

TCP connectivity check passed for subnet "10.10.2.0"

Check: Node connectivity using interfaces on subnet "10.10.5.0"

Node connectivity passed for subnet "10.10.5.0" with node(s)

oel61b,oel61a

TCP connectivity check passed for subnet "10.10.5.0"

Checking subnet mask consistency...

Subnet mask consistency check passed for subnet "10.10.2.0".

Subnet mask consistency check passed for subnet "192.168.2.0".

Subnet mask consistency check passed for subnet "10.10.10.0".

Subnet mask consistency check passed for subnet "10.10.5.0".

Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "10.10.2.0" for multicast communication with multicast

group "224.0.0.251"...

Check of subnet "10.10.2.0" for multicast communication with multicast

group "224.0.0.251" passed.

Checking subnet "10.10.10.0" for multicast communication with multicast

group "224.0.0.251"...

Check of subnet "10.10.10.0" for multicast communication with multicast

group "224.0.0.251" passed.

Checking subnet "10.10.5.0" for multicast communication with multicast

group "224.0.0.251"...

Check of subnet "10.10.5.0" for multicast communication with multicast

group "224.0.0.251" passed.

Check of multicast communication passed.

Check for multiple users with UID value 0 passed

Time zone consistency check passed

Checking shared storage accessibility...

ASM Disk Group Sharing Nodes (2 in count)

------------------------------------ ------------------------

DATA oel61a oel61b

Disk Sharing Nodes (2 in count)

------------------------------------ ------------------------

/dev/sde oel61a oel61b

Disk Sharing Nodes (2 in count)

------------------------------------ ------------------------

/dev/sde1 oel61a oel61b

Disk Sharing Nodes (2 in count)

------------------------------------ ------------------------

/dev/sdf oel61a oel61b

Disk Sharing Nodes (2 in count)

------------------------------------ ------------------------

/dev/sdf1 oel61a oel61b

Disk Sharing Nodes (2 in count)

------------------------------------ ------------------------

/dev/sdb oel61a oel61b

Disk Sharing Nodes (2 in count)

------------------------------------ ------------------------

/dev/sdb1 oel61a oel61b

Disk Sharing Nodes (2 in count)

------------------------------------ ------------------------

/dev/sdd oel61a oel61b

Disk Sharing Nodes (2 in count)

------------------------------------ ------------------------

/dev/sdd1 oel61a oel61b

Disk Sharing Nodes (2 in count)

------------------------------------ ------------------------

/dev/sdc oel61a oel61b

Disk Sharing Nodes (2 in count)

------------------------------------ ------------------------

/dev/sdc1 oel61a oel61b

Shared storage check was successful on nodes "oel61a,oel61b"

Checking integrity of name service switch configuration file

"/etc/nsswitch.conf" ...

All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"

Check for integrity of name service switch configuration file

"/etc/nsswitch.conf" passed

Post-check for hardware and operating system setup was successful.

[grid@oel61a grid]$

MAKE SURE THE CLUSTER IS UP AND RUNNING

[root@oel61a bin]# ./crsctl check cluster -all

**************************************************************

oel61a:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

**************************************************************

oel61b:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

**************************************************************

[root@oel61a bin]# ./crsctl stat res -t

-------------------------------------------------------------------------

-------

NAME TARGET STATE SERVER STATE_DETAILS

-------------------------------------------------------------------------

-------

Local Resources

-------------------------------------------------------------------------

-------

ora.DATA.dg

ONLINE ONLINE oel61a

ONLINE ONLINE oel61b

ora.LISTENER.lsnr

ONLINE ONLINE oel61a

ONLINE ONLINE oel61b

ora.asm

ONLINE ONLINE oel61a Started

ONLINE ONLINE oel61b Started

ora.gsd

OFFLINE OFFLINE oel61a

OFFLINE OFFLINE oel61b

ora.net1.network

ONLINE ONLINE oel61a

ONLINE ONLINE oel61b

ora.ons

ONLINE ONLINE oel61a

ONLINE ONLINE oel61b

ora.registry.acfs

ONLINE OFFLINE oel61a

ONLINE OFFLINE oel61b

-------------------------------------------------------------------------

-------

Cluster Resources

-------------------------------------------------------------------------

-------

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE oel61a

ora.LISTENER_SCAN2.lsnr

1 ONLINE ONLINE oel61b

ora.LISTENER_SCAN3.lsnr

1 ONLINE ONLINE oel61b

ora.cvu

1 ONLINE ONLINE oel61b

ora.gns

1 ONLINE ONLINE oel61b

ora.gns.vip

1 ONLINE ONLINE oel61b

ora.oc4j

1 ONLINE ONLINE oel61b

ora.oel61a.vip

1 ONLINE ONLINE oel61a

ora.oel61b.vip

1 ONLINE ONLINE oel61b

ora.racdb.db

1 ONLINE ONLINE oel61a Open

2 ONLINE ONLINE oel61b Open

ora.racdb.racdbsrv.svc

1 ONLINE ONLINE oel61a

2 ONLINE ONLINE oel61b

ora.scan1.vip

1 ONLINE ONLINE oel61a

ora.scan2.vip

1 ONLINE ONLINE oel61b

ora.scan3.vip

1 ONLINE ONLINE oel61b

[root@oel61a bin]#
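
The manual checks above are easy to script before and after each upgrade step. The sketch below parses `crsctl check cluster -all` style output and flags any daemon reported offline. The crsctl output is stubbed here (copied from the transcript above) so the parsing logic runs standalone; on a real node you would replace the stub with a call to `$GRID_HOME/bin/crsctl check cluster -all`.

```shell
#!/bin/sh
# Sketch: fail fast if any CRS daemon is reported offline.
# check_cluster_output stubs the crsctl output shown above; on a real
# node, replace the here-document with:
#   "$GRID_HOME/bin/crsctl" check cluster -all
check_cluster_output() {
  cat <<'EOF'
oel61a:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
oel61b:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
EOF
}

# Count daemons that crsctl reports as offline.
offline=$(check_cluster_output | grep -c 'is offline')

if [ "$offline" -eq 0 ]; then
  echo "cluster OK"
else
  echo "cluster DEGRADED: $offline daemon(s) offline"
fi
```

With the stubbed output every daemon is online, so the script prints `cluster OK`; wiring in the real crsctl call makes it a quick pre-flight check to run on each node before proceeding.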

BASH PROFILES

[oracle@oel61a ~]$ cat .bash_profile

# .bash_profile

umask 022

ORACLE_BASE=/u01/app/oracle

ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1

#ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1

ORACLE_HOSTNAME=oel61a

ORACLE_SID=RACDB_2

ORACLE_UNQNAME=RACDB

LD_LIBRARY_PATH=$ORACLE_HOME/lib

PATH=$PATH:$ORACLE_HOME/bin

export ORACLE_BASE ORACLE_HOME ORACLE_SID LD_LIBRARY_PATH PATH ORACLE_HOSTNAME ORACLE_UNQNAME

TEMP=/tmp

TMPDIR=/tmp

export TEMP TMPDIR

ulimit -t unlimited

ulimit -f unlimited

ulimit -d unlimited

#ulimit -s unlimited

ulimit -v unlimited

ulimit -n 36500

if [ -t 0 ]; then

stty intr ^C

fi

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

. ~/.bashrc

fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

. ~/.bashrc

fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

[oracle@oel61a ~]$

[grid@oel61a ~]$ cat .bash_profile

# .bash_profile

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

. ~/.bashrc

fi

umask 022

ORACLE_BASE=/u01/app/grid

ORACLE_HOME=/u01/app/12.1.0/grid_1

ORACLE_HOSTNAME=oel61a

ORACLE_SID=+ASM1

LD_LIBRARY_PATH=$ORACLE_HOME/lib

PATH=$PATH:$ORACLE_HOME/bin

export ORACLE_BASE ORACLE_HOME ORACLE_SID LD_LIBRARY_PATH PATH ORACLE_HOSTNAME

TEMP=/tmp

TMPDIR=/tmp

export TEMP TMPDIR

ulimit -t unlimited

ulimit -f unlimited

ulimit -d unlimited

ulimit -s unlimited

ulimit -v unlimited

if [ -t 0 ]; then

stty intr ^C

fi

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

. ~/.bashrc

fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

[grid@oel61a ~]$

[root@oel61b ~]# su - oracle

[oracle@oel61b ~]$ cat .bash_profile

# .bash_profile

umask 022

ORACLE_BASE=/u01/app/oracle

ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1

ORACLE_HOSTNAME=oel61b

ORACLE_SID=RACDB_1

ORACLE_UNQNAME=RACDB

LD_LIBRARY_PATH=$ORACLE_HOME/lib

PATH=$PATH:$ORACLE_HOME/bin

export ORACLE_BASE ORACLE_HOME ORACLE_SID LD_LIBRARY_PATH PATH ORACLE_HOSTNAME ORACLE_UNQNAME

TEMP=/tmp

TMPDIR=/tmp

export TEMP TMPDIR

ulimit -t unlimited

ulimit -f unlimited

ulimit -d unlimited

#ulimit -s 65000

ulimit -v unlimited

ulimit -n 32560

if [ -t 0 ]; then

stty intr ^C

fi

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

. ~/.bashrc

fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

. ~/.bashrc

fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

[oracle@oel61b ~]$

[oracle@oel61b ~]$ exit

logout

[root@oel61b ~]# su - grid

[grid@oel61b ~]$ cat .bash_profile

# .bash_profile

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

. ~/.bashrc

fi

umask 022

ORACLE_BASE=/u01/app/grid

ORACLE_HOME=/u01/app/12.1.0/grid_1

ORACLE_HOSTNAME=oel61b

ORACLE_SID=+ASM2

LD_LIBRARY_PATH=$ORACLE_HOME/lib

PATH=$PATH:$ORACLE_HOME/bin

export ORACLE_BASE ORACLE_HOME ORACLE_SID LD_LIBRARY_PATH PATH ORACLE_HOSTNAME

TEMP=/tmp

TMPDIR=/tmp

export TEMP TMPDIR

ulimit -t unlimited

ulimit -f unlimited

ulimit -d unlimited

ulimit -s unlimited

ulimit -v unlimited

ulimit -n 32560

if [ -t 0 ]; then

stty intr ^C

fi

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

. ~/.bashrc

fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

[grid@oel61b ~]$
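
Since each profile must point its ORACLE_HOME at the new 12.1.0 home and put `$ORACLE_HOME/bin` on the PATH, a quick sanity check after editing the profiles can catch typos. The sketch below stubs the values from the grid profile shown above so it runs standalone; on a real node you would source `~/.bash_profile` first and drop the two assignment lines.

```shell
#!/bin/sh
# Sketch: sanity-check that the profile's ORACLE_HOME is on the PATH.
# The two assignments stub the grid profile shown above; on a real node,
# source ~/.bash_profile instead and delete them.
ORACLE_HOME=/u01/app/12.1.0/grid_1
PATH=$PATH:$ORACLE_HOME/bin

# Wrap PATH in colons so the pattern matches the entry exactly,
# whether it appears first, last, or in the middle.
case ":$PATH:" in
  *":$ORACLE_HOME/bin:"*) path_ok=yes ;;
  *)                      path_ok=no  ;;
esac

echo "ORACLE_HOME=$ORACLE_HOME path_ok=$path_ok"
```

Run it as each OS user (oracle and grid, on both nodes) after switching the profiles to the 12c homes; a `path_ok=no` means the PATH export did not pick up the new home.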
