
Build two node Oracle RAC 11gR2 11.2.0.3 with GNS (DNS, DHCP) and HAIP

In this article you will have a look at how to use some Oracle VirtualBox features to build a two-node Oracle 11gR2 (11.2.0.3) RAC system on Oracle Enterprise Linux (OEL 6.1). The setup implements role separation with different users for the Oracle RDBMS and Oracle GI, that is, users oracle and grid respectively, in order to split the responsibilities between DBAs and storage administrators. The article will show you how to configure DHCP and a sample DNS setup for a GNS deployment. You will also get a glimpse of deploying the HAIP feature, which allows up to four private interconnect interfaces.

An overview of Oracle virtualization solutions can be seen here. You can see how to use Oracle VM VirtualBox to build a two-node Solaris cluster here. For information related to building a RAC 11gR2 cluster on OEL without GNS click here.

In the article you will see how to configure Linux in Oracle VM VirtualBox virtual machines, install Oracle GI and Oracle RDBMS, and create a policy-managed database and service.

    The following software will be used:

    1. Oracle 11gR2 (11.2.0.3) for Linux (x86-64). Patch 10404530. Download from MOS here.

2. Oracle Enterprise Linux OEL 6.1 (x86-64). Download from here.

    3. Oracle VM VirtualBox 4.1.2. Download from here.

    Three virtual machines will be created and used.

1. OEL61A for RAC node oel61a
2. OEL61B for RAC node oel61b
3. OEL61 for DNS and DHCP server

Ideally, a DNS server should be on a dedicated physical server that is not part of the cluster. Due to limited resources, in this article the DNS server will be configured to meet the prerequisites for GI and RDBMS installation and for a later node addition.

    Two virtual machines, OEL61A and OEL61B, will be configured for RAC nodes

    each with:

4GB RAM
300GB bootable disk (disk space will be dynamically allocated, not a fixed-size pre-allocation)
NIC bridged for the public interface in RAC with addresses 192.168.2.21/22 (first IP 192.168.2.21 on oel61a and second IP 192.168.2.22 on node oel61b). This is the public interface in RAC.
NIC bridged for a private interface in RAC with addresses 10.10.2.21/22 (first IP 10.10.2.21 on oel61a and second IP 10.10.2.22 on node oel61b). This is a private interface in RAC.
NIC bridged for a private interface in RAC with addresses 10.10.5.21/22 (first IP 10.10.5.21 on oel61a and second IP 10.10.5.22 on node oel61b). This is a private interface in RAC.
NIC bridged for a private interface in RAC with addresses 10.10.10.21/22 (first IP 10.10.10.21 on oel61a and second IP 10.10.10.22 on node oel61b). This is a private interface in RAC.
5 attached shared disks of 10GB each for the ASM storage (Normal Redundancy ASM disk groups will be deployed).

Virtual machine OEL61 will be configured as follows (I will use it for the node addition later on):
4GB RAM
300GB bootable disk (disk space will be dynamically allocated, not a fixed-size pre-allocation)
NIC bridged for the public interface in RAC with address 192.168.2.11
NIC bridged for a private interface in RAC with address 10.10.2.11
NIC bridged for a private interface in RAC with address 10.10.5.11
NIC bridged for a private interface in RAC with address 10.10.10.11

The interface IP addresses will be as shown in Table 1 below.

Table 1.
Interface   OEL61 (DNS server,        OEL61A (RAC node   OEL61B (RAC node
            later add node)           oel61a)            oel61b)
eth0        10.10.2.11                10.10.2.21         10.10.2.22
eth1        192.168.2.11              192.168.2.21       192.168.2.22
eth2        10.10.10.11               10.10.10.21        10.10.10.22
eth3        10.10.5.11                10.10.5.21         10.10.5.22

    The following MOS notes were used:

    1. 11gR2 Grid Infrastructure Redundant Interconnect and ora.cluster_interconnect.haip [ID 1210883.1]

    2. DNS and DHCP Setup Example for Grid Infrastructure GNS [ID 946452.1]

The article will cover the following topics:
1. Create an OEL61A VM with OEL 6.1 as guest OS for node oel61a.
2. Configure the OEL61A VM to meet the prerequisites for GI and RAC 11.2.0.3 deployment.
3. Clone OEL61A to OEL61B.
4. Clone OEL61A to OEL61.
5. Set up DNS and DHCP server on OEL61.
6. Install GI 11.2.0.3 on oel61a and oel61b.
7. Install RAC RDBMS 11.2.0.3 on oel61a and oel61b.
8. Create a policy-managed database RACDB on oel61a and oel61b.
9. Verify database creation and create a service.

    Create an OEL61A VM with OEL 6.1 as guest OS for node oel61a

    In this section you will look at how to create a guest OEL 6.1 VM

    using Oracle VM VirtualBox.

    Press the New button on the menu bar and press Next.

    Specify the name of the VM and type of the OS and press Next.

  • Specify the RAM for the OEL61A VM and press Next.

  • Select an option to create a new disk.

    Select VDI type and press Next button.

  • Select dynamically allocated and press Next button.

  • Select a size of 300GB and press Next to continue.

  • Press Create.

  • Change the VM Settings related to network adapters and CPU. Enable I/O

    APIC.

  • Change the number of CPUs.

  • Make sure all network adapters are enabled and configured as bridged

    adapters.

  • Once the VM is created in the left pane select the VM and press the

    Start button (green arrow in 4.1.2). Press Next to continue.

  • Select the ISO image for OEL 6.1 and press Next to continue.

  • Press Start to continue.

  • Select Install or upgrade an existing system.

  • Press Skip to continue.

  • Press Next to continue.

  • Select language.

  • Select keyboard.

  • Select basic storage devices.

  • Press Yes.

  • Specify hostname and press Configure Network.

  • Add static IP addresses to each interface. Use Table 1 above as a reference for the IP-to-interface mapping.

    For eth0 specify

  • For eth1 specify

  • For eth2 specify.

  • For eth3 specify.

  • Select time zone and press Next to continue.

  • Specify root password and press Next to continue.

  • Select Create Custom Layout.

  • The following list specifies the file systems, and their respective sizes, that will be created. Create /u01 last, sized with the remainder of the 300GB disk. The same approach can be used to create any custom sizes. Plan accordingly to have sufficient disk space.

1. / - 10000M
2. /boot - 10000M
3. /home - 10000M
4. /opt - 10000M
5. /tmp - 10000M
6. /usr - 10000M
7. /usr/local - 10000M
8. /var - 10000M
9. swap - 10000M
10. /u01 - the remaining disk space.

    Press Create.

  • Select Standard partition and press Create.

  • Specify / and the fixed size and press OK to continue.

  • Repeat the same steps for all file systems and swap. Once done you

    will have file systems similar to the image. Press Next to continue.

  • Select Database Server and Customize now and press Next to continue.

  • I selected all.

  • Wait until all packages get installed and press Reboot.

  • Skip registration and press Forward.

  • Press Forward.

  • Create user and press Forward.

  • Synchronize NTP and press Forward.

  • Press Finish.

  • After reboot you will have a similar screen.

  • Login as root and select Devices->Install Guest Additions. Press OK.

  • Press the Run button.

  • Wait for the installation to complete.

  • If the auto start window does not prompt you to run Guest Additions

    installation go to the media folder and execute the following command.

    sh ./VBoxLinuxAdditions.run

    Configure the OEL61A VM to meet the prerequisites for GI and RAC

    11.2.0.3 deployment

Add the divider kernel boot parameter to /etc/grub.conf; on virtualized guests divider=10 reduces the timer tick rate, cutting idle CPU overhead.

    Before

    [root@oel61a dhcp]# cat /etc/grub.conf

    # grub.conf generated by anaconda

    #

    # Note that you do not have to rerun grub after making changes to this file

    # NOTICE: You have a /boot partition. This means that

    # all kernel and initrd paths are relative to /boot/, eg.

    # root (hd0,0)

# kernel /vmlinuz-version ro root=/dev/sda5

    # initrd /initrd-[generic-]version.img

    #boot=/dev/sda

    default=0

    timeout=5

    splashimage=(hd0,0)/grub/splash.xpm.gz

    hiddenmenu

    title Oracle Linux Server-uek (2.6.32-100.34.1.el6uek.x86_64)

    root (hd0,0)

    kernel /vmlinuz-2.6.32-100.34.1.el6uek.x86_64 ro root=UUID=ef6e890d-

    860a-4554-bb70-4315af978e6b rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM

    LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rhgb

    quiet

    initrd /initramfs-2.6.32-100.34.1.el6uek.x86_64.img

    title Oracle Linux Server-uek-debug (2.6.32-100.34.1.el6uek.x86_64.debug)

    root (hd0,0)

    kernel /vmlinuz-2.6.32-100.34.1.el6uek.x86_64.debug ro

    root=UUID=ef6e890d-860a-4554-bb70-4315af978e6b rd_NO_LUKS rd_NO_LVM rd_NO_MD

    rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us

    rhgb quiet

    initrd /initramfs-2.6.32-100.34.1.el6uek.x86_64.debug.img

    title Oracle Linux Server (2.6.32-131.0.15.el6.x86_64)

    root (hd0,0)

    kernel /vmlinuz-2.6.32-131.0.15.el6.x86_64 ro root=UUID=ef6e890d-860a-

    4554-bb70-4315af978e6b rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8

    SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us crashkernel=auto rhgb

    quiet

    initrd /initramfs-2.6.32-131.0.15.el6.x86_64.img

    [root@oel61a dhcp]# uname -a

    Linux oel61a.gj.com 2.6.32-100.34.1.el6uek.x86_64 #1 SMP Wed May 25 17:46:45

    EDT 2011 x86_64 x86_64 x86_64 GNU/Linux

    [root@oel61a dhcp]#

    After:

[root@oel61a dhcp]# cat /etc/grub.conf
# grub.conf generated by anaconda

    #

    # Note that you do not have to rerun grub after making changes to this file

    # NOTICE: You have a /boot partition. This means that

    # all kernel and initrd paths are relative to /boot/, eg.

    # root (hd0,0)

    # kernel /vmlinuz-version ro root=/dev/sda5

    # initrd /initrd-[generic-]version.img

    #boot=/dev/sda

    default=0

    timeout=5

    splashimage=(hd0,0)/grub/splash.xpm.gz

    hiddenmenu

    title Oracle Linux Server-uek (2.6.32-100.34.1.el6uek.x86_64)

    root (hd0,0)

kernel /vmlinuz-2.6.32-100.34.1.el6uek.x86_64 ro root=UUID=ef6e890d-860a-4554-bb70-4315af978e6b rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rhgb quiet divider=10

    initrd /initramfs-2.6.32-100.34.1.el6uek.x86_64.img

    title Oracle Linux Server-uek-debug (2.6.32-100.34.1.el6uek.x86_64.debug)

root (hd0,0)

    kernel /vmlinuz-2.6.32-100.34.1.el6uek.x86_64.debug ro

    root=UUID=ef6e890d-860a-4554-bb70-4315af978e6b rd_NO_LUKS rd_NO_LVM rd_NO_MD

    rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us

    rhgb quiet

    initrd /initramfs-2.6.32-100.34.1.el6uek.x86_64.debug.img

    title Oracle Linux Server (2.6.32-131.0.15.el6.x86_64)

    root (hd0,0)

    kernel /vmlinuz-2.6.32-131.0.15.el6.x86_64 ro root=UUID=ef6e890d-860a-

    4554-bb70-4315af978e6b rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8

    SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us crashkernel=auto rhgb

    quiet

    initrd /initramfs-2.6.32-131.0.15.el6.x86_64.img

    [root@oel61a dhcp]#

    [root@oel61a network-scripts]#

Add the following into /etc/sysctl.conf:

    net.bridge.bridge-nf-call-ip6tables = 0

    net.bridge.bridge-nf-call-iptables = 0

    net.bridge.bridge-nf-call-arptables = 0

    fs.aio-max-nr = 1048576

    fs.file-max = 6815744

    kernel.shmall = 2097152

    kernel.shmmni = 4096

    kernel.sem = 250 32000 100 128

    net.ipv4.ip_local_port_range = 9000 65500

    net.core.rmem_default = 262144

    net.core.rmem_max = 4194304

    net.core.wmem_default = 262144

    net.core.wmem_max = 1048586

net.ipv4.conf.eth2.rp_filter = 2
net.ipv4.conf.eth1.rp_filter = 1
net.ipv4.conf.eth0.rp_filter = 2
kernel.shmmax = 2074277888
fs.suid_dumpable = 1
# Controls the total amount of shared memory, in pages
kernel.shmall = 4294967296
Note that rp_filter is not set for eth3 here; the omission surfaces later as a PRVE-0453 finding during the GI prerequisite checks and is fixed at that point. Also, kernel.shmall appears twice in the file; the last value wins when sysctl loads it.
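To activate the new kernel parameters without a reboot, you can reload the file with sysctl (a quick check; sysctl -p reads /etc/sysctl.conf by default):

[root@oel61a ~]# sysctl -p
[root@oel61a ~]# sysctl fs.aio-max-nr kernel.sem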

Set the user limits for the oracle and grid users in /etc/security/limits.conf to restrict the maximum number of processes for the Oracle software owner users to 16384 and the maximum number of open files to 65536.

    oracle soft nproc 2047

    oracle hard nproc 16384

    oracle soft nofile 1024

    oracle hard nofile 65536

    grid soft nproc 2047

    grid hard nproc 16384

    grid soft nofile 1024

    grid hard nofile 65536

As per MOS Note 567524.1, add the line below to /etc/pam.d/login so that the login program loads pam_limits.so, which reads /etc/security/limits.conf and activates and enforces the limits.

session required pam_limits.so
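To verify that the limits are picked up, log in as the grid user (a quick check; the -H flag reports the hard limits set above):

[root@oel61a ~]# su - grid
[grid@oel61a ~]$ ulimit -Hu
16384
[grid@oel61a ~]$ ulimit -Hn
65536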

You have two options for time synchronization: an operating system configured

    network time protocol (NTP), or Oracle Cluster Time Synchronization Service.

    Oracle Cluster Time Synchronization Service is designed for organizations

    whose cluster servers are unable to access NTP services. If you use NTP, then

    the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer

    mode. If you do not have NTP daemons, then ctssd starts up in active mode and

    synchronizes time among cluster members without contacting an external time

    server. So there are two options:

To enable NTP, make sure that /etc/sysconfig/ntpd has the OPTIONS line modified to include -x:

o OPTIONS=-x -u ntp:ntp -p /var/run/ntpd.pid

To disable NTP, make sure that the NTP service is stopped, disabled for auto-start, and that there is no configuration file:

o /sbin/service ntpd stop

o chkconfig ntpd off

o mv /etc/ntp.conf /etc/ntp.conf.org

In this article NTP is disabled.
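With NTP disabled, ctssd should come up in active mode once the GI stack is installed later on. This can be confirmed with crsctl (shown here for reference; CRS-4701 is the expected message):

[grid@oel61a ~]$ crsctl check ctss
CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset (in msec): 0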

    Add users and groups.

    groupadd -g 1000 oinstall

    groupadd -g 1020 asmadmin

    groupadd -g 1021 asmdba

    groupadd -g 1031 dba

    groupadd -g 1022 asmoper

    groupadd -g 1023 oper

    useradd -u 1100 -g oinstall -G asmadmin,asmdba,dba,asmoper grid

    useradd -u 1101 -g oinstall -G dba,asmdba,oper oracle


    [root@oel61a shm]# groupadd -g 1000 oinstall

    [root@oel61a shm]# groupadd -g 1020 asmadmin

    [root@oel61a shm]# groupadd -g 1021 asmdba

    [root@oel61a shm]# groupadd -g 1031 dba

    [root@oel61a shm]# groupadd -g 1022 asmoper

    [root@oel61a shm]# groupadd -g 1023 oper

    [root@oel61a shm]#

    [root@oel61a shm]# useradd -u 1100 -g oinstall -G asmadmin,asmdba,dba,asmoper

    grid

    [root@oel61a shm]# useradd -u 1101 -g oinstall -G dba,asmdba,oper oracle

    [root@oel61a shm]#
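A quick sanity check that the group memberships match the intended role separation (this is what the useradd commands above should produce):

[root@oel61a shm]# id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1020(asmadmin),1021(asmdba),1022(asmoper),1031(dba)
[root@oel61a shm]# id oracle
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1021(asmdba),1023(oper),1031(dba)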

Set the permissions and directories. Note that the Oracle RDBMS home directory will be created by the OUI in the location specified in the profile.

[root@oel61a shm]#

    [root@oel61a shm]# mkdir -p /u01/app/11.2.0/grid

    [root@oel61a shm]# mkdir -p /u01/app/grid

    [root@oel61a shm]# mkdir -p /u01/app/oracle

    [root@oel61a shm]# chown grid:oinstall /u01/app/11.2.0/grid

    [root@oel61a shm]# chown grid:oinstall /u01/app/grid

    [root@oel61a shm]# chown oracle:oinstall /u01/app/oracle

    [root@oel61a shm]# chown -R grid:oinstall /u01

    [root@oel61a shm]# mkdir -p /u01/app/oracle

    [root@oel61a shm]# chmod -R 775 /u01/

    [root@oel61a shm]#

    Create the profiles for the grid and oracle users.

    For grid user

    [grid@oel61a ~]$ cat .bash_profile

    # .bash_profile

    # Get the aliases and functions

    if [ -f ~/.bashrc ]; then

    . ~/.bashrc

    fi

    umask 022

    ORACLE_BASE=/u01/app/grid

    ORACLE_HOME=/u01/app/11.2.0/grid

    ORACLE_HOSTNAME=oel61a

    ORACLE_SID=+ASM1

    LD_LIBRARY_PATH=$ORACLE_HOME/lib

    PATH=$PATH:$ORACLE_HOME/bin

    export ORACLE_BASE ORACLE_HOME ORACLE_SID LD_LIBRARY_PATH PATH ORACLE_HOSTNAME

    TEMP=/tmp

    TMPDIR=/tmp

    export TEMP TMPDIR

    ulimit -t unlimited

    ulimit -f unlimited

    ulimit -d unlimited

    ulimit -s unlimited

    ulimit -v unlimited

    if [ -t 0 ]; then

stty intr ^C

    fi

    # Get the aliases and functions

    if [ -f ~/.bashrc ]; then

    . ~/.bashrc

    fi

    # User specific environment and startup programs

    PATH=$PATH:$HOME/bin

    export PATH

    # User specific environment and startup programs

    PATH=$PATH:$HOME/bin

    export PATH

    [grid@oel61a ~]$

    For oracle user

    [oracle@oel61a ~]$ cat .bash_profile

    # .bash_profile

    umask 022

    ORACLE_BASE=/u01/app/oracle

    ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1

    ORACLE_HOSTNAME=oel61a

    ORACLE_SID=RACDB_1

    ORACLE_UNQNAME=RACDB

    LD_LIBRARY_PATH=$ORACLE_HOME/lib

    PATH=$PATH:$ORACLE_HOME/bin

    export ORACLE_BASE ORACLE_HOME ORACLE_SID LD_LIBRARY_PATH PATH ORACLE_HOSTNAME

    ORACLE_UNQNAME

    TEMP=/tmp

    TMPDIR=/tmp

    export TEMP TMPDIR

ulimit -t unlimited

    ulimit -f unlimited

    ulimit -d unlimited

    ulimit -s unlimited

    ulimit -v unlimited

    if [ -t 0 ]; then

    stty intr ^C

    fi

    # Get the aliases and functions

    if [ -f ~/.bashrc ]; then

    . ~/.bashrc

    fi

    # User specific environment and startup programs

    PATH=$PATH:$HOME/bin

    export PATH

    # Get the aliases and functions

    if [ -f ~/.bashrc ]; then

    . ~/.bashrc

    fi

    # User specific environment and startup programs

    PATH=$PATH:$HOME/bin

    export PATH

    [oracle@oel61a ~]$

    Create shared disks

Let's look at the available disk devices from the OEL perspective.

    [root@oel61a dev]# ls sd*

    sda sda10 sda2 sda4 sda6 sda8

    sda1 sda11 sda3 sda5 sda7 sda9

    [root@oel61a dev]#

Let's create 5 shared disks and attach them to the OEL61A VM.

e:\vb>VBoxManage createhd --filename d:\vb\l1asm1.vdi --size 10240 --format VDI

    --variant Fixed

    0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%

    Disk image created. UUID: 2a173888-d0fb-4cfd-a80c-068951bfc4ff

    e:\vb>VBoxManage createhd --filename d:\vb\l1asm2.vdi --size 10240 --format VDI

    --variant Fixed

    0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%

    Disk image created. UUID: b584ae76-b5a1-4000-b450-e8de6b8356c4

    e:\vb>VBoxManage createhd --filename d:\vb\l1asm3.vdi --size 10240 --format VDI

    --variant Fixed

    0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%

    Disk image created. UUID: f1252521-0011-48e0-84f8-103f7718748b

    e:\vb>VBoxManage createhd --filename d:\vb\l1asm4.vdi --size 10240 --format VDI

    --variant Fixed

    0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%

    Disk image created. UUID: dbfc0831-8b99-4332-adbd-febc8c5268f7

    e:\vb>VBoxManage createhd --filename d:\vb\l1asm5.vdi --size 10240 --format VDI

    --variant Fixed

    0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%

    Disk image created. UUID: bad953ea-2ea0-42a1-b9f1-b943d080ce92

    e:\vb>

e:\vb>VBoxManage storageattach OEL61A --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium d:\vb\l1asm1.vdi --mtype shareable

e:\vb>VBoxManage storageattach OEL61A --storagectl "SATA Controller" --port 2 --device 0 --type hdd --medium d:\vb\l1asm2.vdi --mtype shareable

e:\vb>VBoxManage storageattach OEL61A --storagectl "SATA Controller" --port 3 --device 0 --type hdd --medium d:\vb\l1asm3.vdi --mtype shareable

e:\vb>VBoxManage storageattach OEL61A --storagectl "SATA Controller" --port 4 --device 0 --type hdd --medium d:\vb\l1asm4.vdi --mtype shareable

e:\vb>VBoxManage storageattach OEL61A --storagectl "SATA Controller" --port 5 --device 0 --type hdd --medium d:\vb\l1asm5.vdi --mtype shareable

e:\vb>VBoxManage modifyhd d:\vb\l1asm1.vdi --type shareable

e:\vb>VBoxManage modifyhd d:\vb\l1asm2.vdi --type shareable

e:\vb>VBoxManage modifyhd d:\vb\l1asm3.vdi --type shareable

e:\vb>VBoxManage modifyhd d:\vb\l1asm4.vdi --type shareable

e:\vb>VBoxManage modifyhd d:\vb\l1asm5.vdi --type shareable

    e:\vb>

Now we can confirm that there are 5 new disk devices, sd[b-f].

[root@oel61a dev]# ls sd*

    sda sda10 sda2 sda4 sda6 sda8 sdb sdd sdf

    sda1 sda11 sda3 sda5 sda7 sda9 sdc sde

    [root@oel61a dev]#

    Format each of the new devices. For example for /dev/sdb issue the command

    below.

    [root@oel61a dev]# fdisk /dev/sdb

    Device contains neither a valid DOS partition table, nor Sun, SGI or OSF

    disklabel

    Building a new DOS disklabel with disk identifier 0xb4cd1737.

    Changes will remain in memory only, until you decide to write them.

    After that, of course, the previous content won't be recoverable.

    Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

    WARNING: DOS-compatible mode is deprecated. It's strongly recommended to

    switch off the mode (command 'c') and change display units to

    sectors (command 'u').

    Command (m for help): n

    Command action

    e extended

    p primary partition (1-4)

    p

    Partition number (1-4): 1

    First cylinder (1-1305, default 1):

    Using default value 1

    Last cylinder, +cylinders or +size{K,M,G} (1-1305, default 1305):

    Using default value 1305

    Command (m for help): p

    Disk /dev/sdb: 10.7 GB, 10737418240 bytes

    255 heads, 63 sectors/track, 1305 cylinders

    Units = cylinders of 16065 * 512 = 8225280 bytes

    Sector size (logical/physical): 512 bytes / 512 bytes

    I/O size (minimum/optimal): 512 bytes / 512 bytes

    Disk identifier: 0xb4cd1737

    Device Boot Start End Blocks Id System

    /dev/sdb1 1 1305 10482381 83 Linux

    Command (m for help): w

    The partition table has been altered!

    Calling ioctl() to re-read partition table.

    Syncing disks.

    [root@oel61a dev]#

Repeat the steps shown above for each of the remaining disks, /dev/sdc through /dev/sdf.

    [root@oel61a dev]# ls sd*

    sda sda10 sda2 sda4 sda6 sda8 sdb sdc sdd sde sdf

    sda1 sda11 sda3 sda5 sda7 sda9 sdb1 sdc1 sdd1 sde1 sdf1

    [root@oel61a dev]#
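For reference, the interactive fdisk session above can also be scripted for the remaining disks by piping in the same sequence of answers (n, p, 1, two defaults, w); a sketch, so double-check the device names before use:

[root@oel61a dev]# for d in c d e f; do printf "n\np\n1\n\n\nw\n" | fdisk /dev/sd$d; done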

    Configure ASMlib

[root@oel61a dev]# oracleasm configure -i

    Configuring the Oracle ASM library driver.

    This will configure the on-boot properties of the Oracle ASM library

    driver. The following questions will determine whether the driver is

    loaded on boot and what permissions it will have. The current values

will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

    Default user to own the driver interface []: grid

    Default group to own the driver interface []: asmadmin

    Start Oracle ASM library driver on boot (y/n) [n]: y

    Scan for Oracle ASM disks on boot (y/n) [y]: y

    Writing Oracle ASM library driver configuration: done

    [root@oel61a dev]#

    [root@oel61a dev]# /usr/sbin/oracleasm init

    Creating /dev/oracleasm mount point: /dev/oracleasm

    Loading module "oracleasm": oracleasm

    Mounting ASMlib driver filesystem: /dev/oracleasm

    [root@oel61a dev]#

    Label the shared disks

    [root@oel61a dev]# /usr/sbin/oracleasm createdisk DISK1 /dev/sdb1

    Writing disk header: done

    Instantiating disk: done

    [root@oel61a dev]# /usr/sbin/oracleasm createdisk DISK2 /dev/sdc1

    Writing disk header: done

    Instantiating disk: done

    [root@oel61a dev]# /usr/sbin/oracleasm createdisk DISK3 /dev/sdd1

    Writing disk header: done

    Instantiating disk: done

    [root@oel61a dev]# /usr/sbin/oracleasm createdisk DISK4 /dev/sde1

    Writing disk header: done

    Instantiating disk: done

    [root@oel61a dev]# /usr/sbin/oracleasm createdisk DISK5 /dev/sdf1

    Writing disk header: done

    Instantiating disk: done

    [root@oel61a dev]#

    [root@oel61a dev]# /usr/sbin/oracleasm scandisks

    Reloading disk partitions: done

    Cleaning any stale ASM disks...

    Scanning system for ASM disks...

    [root@oel61a dev]# /usr/sbin/oracleasm listdisks

    DISK1

    DISK2

    DISK3

    DISK4

    DISK5

    [root@oel61a dev]#

Make sure you set proper permissions for the devices by putting the following into /etc/rc.local:

    chmod -R 660 /dev/oracleasm

    chown -R grid:asmadmin /dev/oracleasm
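After the rc.local change and a reboot, the labeled disks should show up with the expected ownership (an illustrative listing; timestamps elided):

[root@oel61a ~]# ls -l /dev/oracleasm/disks
brw-rw---- 1 grid asmadmin 8, 17 ... DISK1
brw-rw---- 1 grid asmadmin 8, 33 ... DISK2
brw-rw---- 1 grid asmadmin 8, 49 ... DISK3
brw-rw---- 1 grid asmadmin 8, 65 ... DISK4
brw-rw---- 1 grid asmadmin 8, 81 ... DISK5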

Reboot oel61a server.

    Clone OEL61A to OEL61B

    Right click the VM and select Clone or press CTRL+O. Enter the new name, mark

    the check box to reinitialize the MAC address of all network interfaces and

    press Next button. Here a new OEL61B VM is created from OEL61A.

    Select Full Clone and press Clone.

Wait until the clone succeeds. This procedure creates a full copy of the original OEL61A, including all 6 disks from OEL61A. As the intention is to have the five ASM disks shared across both OEL61A and OEL61B, I will drop the five non-OS disks from OEL61B and re-attach the shared disks to OEL61B. After dropping the disks from OEL61B I re-attach the shared disks using the following commands.

E:\vb>type oel61_att1.bat

VBoxManage storageattach OEL61B --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium d:\vb\l1asm1.vdi --mtype shareable

VBoxManage storageattach OEL61B --storagectl "SATA Controller" --port 2 --device 0 --type hdd --medium d:\vb\l1asm2.vdi --mtype shareable

VBoxManage storageattach OEL61B --storagectl "SATA Controller" --port 3 --device 0 --type hdd --medium d:\vb\l1asm3.vdi --mtype shareable

VBoxManage storageattach OEL61B --storagectl "SATA Controller" --port 4 --device 0 --type hdd --medium d:\vb\l1asm4.vdi --mtype shareable

VBoxManage storageattach OEL61B --storagectl "SATA Controller" --port 5 --device 0 --type hdd --medium d:\vb\l1asm5.vdi --mtype shareable

VBoxManage modifyhd d:\vb\l1asm1.vdi --type shareable

VBoxManage modifyhd d:\vb\l1asm2.vdi --type shareable

VBoxManage modifyhd d:\vb\l1asm3.vdi --type shareable

VBoxManage modifyhd d:\vb\l1asm4.vdi --type shareable

VBoxManage modifyhd d:\vb\l1asm5.vdi --type shareable

E:\vb>

Now power on OEL61B and complete the following steps.

  • Set up the networking

    [root@oel61b network-scripts]# ifconfig -a | grep eth

    eth4 Link encap:Ethernet HWaddr 08:00:27:B2:90:0F

    eth5 Link encap:Ethernet HWaddr 08:00:27:50:C8:FE

    eth6 Link encap:Ethernet HWaddr 08:00:27:1F:22:8F

    eth7 Link encap:Ethernet HWaddr 08:00:27:31:76:22

    [root@oel61b network-scripts]#

Modify /etc/sysconfig/network-scripts/ifcfg-eth[0-3] as shown below, replacing the IPADDR, HWADDR and DNS1 values. For IPADDR use the values from Table 1 for node oel61b. For HWADDR use the values from the grep example above. For DNS1 use 192.168.2.11, which will be set up later on.
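Because the clone reinitialized the MAC addresses, udev on OEL 6 assigned the NICs new names (eth4-eth7 in the grep output above). One common way to get them renamed back to eth0-eth3 on the next boot is to clear the persistent net rules and let udev regenerate them; a sketch, so verify the resulting interface names afterwards:

[root@oel61b ~]# rm -f /etc/udev/rules.d/70-persistent-net.rules
[root@oel61b ~]# reboot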

    [root@oel61b network-scripts]# cat ifcfg-eth0

    DEVICE="eth0"

    NM_CONTROLLED="yes"

    ONBOOT=yes

    TYPE=Ethernet

    BOOTPROTO=none

    DEFROUTE=yes

    IPV4_FAILURE_FATAL=yes

    IPV6INIT=no

    NAME="System eth0"

    IPADDR=10.10.2.22

    PREFIX=24

    GATEWAY=192.168.2.1

    DNS1=192.168.2.11

    UUID=5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03

    HWADDR=08:00:27:50:C8:FE

    [root@oel61b network-scripts]# cat ifcfg-eth1

    DEVICE="eth1"

    NM_CONTROLLED="yes"

    ONBOOT=yes

    TYPE=Ethernet

    BOOTPROTO=none

    IPADDR=192.168.2.22

    PREFIX=24

    GATEWAY=192.168.2.1

    DNS1=192.168.2.11

    DEFROUTE=yes

    IPV4_FAILURE_FATAL=yes

    IPV6INIT=no

    NAME="System eth1"

    HWADDR=08:00:27:1F:22:8F

    UUID=9c92fad9-6ecb-3e6c-eb4d-8a47c6f50c04

    [root@oel61b network-scripts]# cat ifcfg-eth2

    DEVICE="eth2"

    NM_CONTROLLED="yes"

    ONBOOT=yes

    TYPE=Ethernet

    BOOTPROTO=none

    IPADDR=10.10.10.22

    PREFIX=24

    GATEWAY=192.168.2.1

    DNS1=192.168.2.11

    DEFROUTE=yes

    IPV4_FAILURE_FATAL=yes

IPV6INIT=no

    NAME="System eth2"

    HWADDR=08:00:27:31:76:22

    UUID=3a73717e-65ab-93e8-b518-24f5af32dc0d

    [root@oel61b network-scripts]#

    [root@oel61b network-scripts]# cat ifcfg-eth3

    DEVICE="eth3"

    NM_CONTROLLED="yes"

    ONBOOT=yes

    TYPE=Ethernet

    BOOTPROTO=none

    IPADDR=10.10.5.22

    PREFIX=24

    GATEWAY=192.168.2.1

    DNS1=192.168.2.11

    DEFROUTE=yes

    IPV4_FAILURE_FATAL=yes

    IPV6INIT=no

    NAME="System eth3"

    HWADDR=08:00:27:B2:90:0F

    UUID=c5ca8081-6db2-4602-4b46-d771f4330a6d

    [root@oel61b network-scripts]#

Modify the hostname on the OEL61B VM by replacing the HOSTNAME value with oel61b.gj.com.

    [root@oel61b sysconfig]# cat network

    NETWORKING=yes

    HOSTNAME=oel61b.gj.com

    NOZEROCONF=yes

    [root@oel61b sysconfig]#

    Update the profiles for oracle and grid users

    [grid@oel61b ~]$ cat .bash_profile

    # .bash_profile

    # Get the aliases and functions

    if [ -f ~/.bashrc ]; then

    . ~/.bashrc

    fi

    umask 022

    ORACLE_BASE=/u01/app/grid

    ORACLE_HOME=/u01/app/11.2.0/grid

    ORACLE_HOSTNAME=oel61b

    ORACLE_SID=+ASM2

    LD_LIBRARY_PATH=$ORACLE_HOME/lib

    PATH=$PATH:$ORACLE_HOME/bin

    export ORACLE_BASE ORACLE_HOME ORACLE_SID LD_LIBRARY_PATH PATH ORACLE_HOSTNAME

TEMP=/tmp

    TMPDIR=/tmp

    export TEMP TMPDIR

    ulimit -t unlimited

    ulimit -f unlimited

    ulimit -d unlimited

    ulimit -s unlimited

    ulimit -v unlimited

    if [ -t 0 ]; then

    stty intr ^C

    fi

    # Get the aliases and functions

    if [ -f ~/.bashrc ]; then

    . ~/.bashrc

    fi

    # User specific environment and startup programs

    PATH=$PATH:$HOME/bin

    export PATH

    # User specific environment and startup programs

    PATH=$PATH:$HOME/bin

    export PATH

    [grid@oel61b ~]$

    Reboot oel61b server.

    Clone OEL61A to OEL61

    Repeat the same procedure as described in section Clone OEL61A to

    OEL61B

    Set up DNS and DHCP server on OEL61

Set up DNS

The steps in this section are to be executed as the root user only on oel61. Only /etc/resolv.conf needs to be modified on all three nodes as root.

As root on oel61, set up DNS by creating the following zones in /etc/named.conf:

2.168.192.in-addr.arpa

10.10.10.in-addr.arpa

5.10.10.in-addr.arpa

2.10.10.in-addr.arpa

gj.com.

    [root@oel61 named]# cat /etc/named.conf

    //

    // named.conf

    //

    // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS

    // server as a caching only nameserver (as a localhost DNS resolver only).

    //

    // See /usr/share/doc/bind*/sample/ for example named configuration files.

    //

    options {

    listen-on port 53 { any; };

    listen-on-v6 port 53 { ::1; };

    directory "/var/named";

    dump-file "/var/named/data/cache_dump.db";

    statistics-file "/var/named/data/named_stats.txt";

    // memstatistics-file "/var/named/data/named_mem_stats.txt";

    // recursion yes;

    // allow-recursion { any;};

    // allow-recursion-on { any;};

    // allow-query-cache { any; };

    // allow-query { any; };

    // dnssec-enable yes;

    // dnssec-validation yes;

    // dnssec-lookaside auto;

    /* Path to ISC DLV key */

    bindkeys-file "/etc/named.iscdlv.key";

    };

    logging {

    channel default_debug {

    file "data/named.run";

    severity dynamic;

    };

    };

    zone "2.168.192.in-addr.arpa" IN {

    type master;

    file "gj1.com.reverse";

    allow-update { none; };

    };

  • zone "10.10.10.in-addr.arpa" IN {

    type master;

    file "priv1.com.reverse";

    allow-update { none; };

    };

    zone "5.10.10.in-addr.arpa" IN {

    type master;

    file "priv2.com.reverse";

    allow-update { none; };

    };

    zone "2.10.10.in-addr.arpa" IN {

    type master;

    file "priv3.com.reverse";

    allow-update { none; };

    };

    zone "gj.com." IN {

    type master;

    file "gj1.zone";

    notify no;

    };

    zone "." IN {

    type hint;

    file "named.ca";

    };

    include "/etc/named.rfc1912.zones";

    [root@oel61 named]#

    Create a config file for gj.com zone in /var/named/gj1.zone

    [root@oel61 named]# cat gj1.zone

    $TTL 86400

    $ORIGIN gj.com.

    @ IN SOA oel61.gj.com. root (

    43 ; serial (d. adams)

    3H ; refresh

    15M ; retry

    1W ; expiry

    1D ) ; minimum

    IN NS oel61

    oel61 IN A 192.168.2.11

    oel61a IN A 192.168.2.21

    oel61b IN A 192.168.2.22

    oel61c IN A 192.168.2.23

    raclinux3 IN A 192.168.2.24

    dns CNAME gj.com.

    oel61-priv1 IN A 10.10.10.11

    oel61a-priv1 IN A 10.10.10.21

    oel61b-priv1 IN A 10.10.10.22

    oel61-priv2 IN A 10.10.5.11

    oel61a-priv2 IN A 10.10.5.21

    oel61b-priv2 IN A 10.10.5.22

    oel61-priv3 IN A 10.10.2.11

    oel61a-priv3 IN A 10.10.2.21

    oel61b-priv3 IN A 10.10.2.22

$ORIGIN grid.gj.com.

    @ IN NS gns.grid.gj.com.

    ;; IN NS oel61a.gj.com.

    gns.grid.gj.com. IN A 192.168.2.52

    oel61 IN A 192.168.2.11

    oel61a IN A 192.168.2.21

    oel61b IN A 192.168.2.22

    oel61c IN A 192.168.2.23

    [root@oel61 named]#

Create a config file for the 2.168.192.in-addr.arpa zone in /var/named/gj1.com.reverse.

    [root@oel61 named]# cat gj1.com.reverse

    $ORIGIN 2.168.192.in-addr.arpa.

    $TTL 1H

    @ IN SOA oel61.gj.com. root.dnsoel55.gj.com. ( 2

    3H

    1H

    1W

    1H )

    2.168.192.in-addr.arpa. IN NS oel61.gj.com.

    IN NS oel61.gj.com.

    11 IN PTR oel61.gj.com.

    21 IN PTR oel61a.gj.com.

    22 IN PTR oel61b.gj.com.

    23 IN PTR oel61c.gj.com.

    24 IN PTR raclinux3.gj.com.

    52 IN PTR gns.grid.gj.com.

    [root@oel61 named]#

Create a config file for the 10.10.10.in-addr.arpa zone in /var/named/priv1.com.reverse.

    [root@oel61 named]# cat priv1.com.reverse

    $ORIGIN 10.10.10.in-addr.arpa.

    $TTL 1H

    @ IN SOA oel61.gj.com. root.oel61.gj.com. ( 2

    3H

    1H

    1W

    1H )

    10.10.10.in-addr.arpa. IN NS oel61.gj.com.

    IN NS oel61a.gj.com.

    11 IN PTR oel6-priv1.gj.com.

    21 IN PTR oel61a-priv1.gj.com.

    22 IN PTR oel61b-priv1.gj.com.

    23 IN PTR oel61c-priv1.gj.com.

    [root@oel61 named]#

Create a config file for the 5.10.10.in-addr.arpa zone in /var/named/priv2.com.reverse.

    [root@oel61 named]# cat priv2.com.reverse

$ORIGIN 5.10.10.in-addr.arpa.

    $TTL 1H

    @ IN SOA oel61.gj.com. root.oel61.gj.com. ( 2

    3H

    1H

    1W

    1H )

    5.10.10.in-addr.arpa. IN NS oel61.gj.com.

    IN NS oel61a.gj.com.

    11 IN PTR oel6-priv2.gj.com.

    21 IN PTR oel61a-priv2.gj.com.

    22 IN PTR oel61b-priv2.gj.com.

    23 IN PTR oel61c-priv2.gj.com.

    [root@oel61 named]#

Create a config file for the 2.10.10.in-addr.arpa zone in /var/named/priv3.com.reverse.

    [root@oel61 named]# cat priv3.com.reverse

    $ORIGIN 2.10.10.in-addr.arpa.

    $TTL 1H

    @ IN SOA oel61.gj.com. root.oel61.gj.com. ( 2

    3H

    1H

    1W

    1H )

    2.10.10.in-addr.arpa. IN NS oel61.gj.com.

    IN NS oel61a.gj.com.

    11 IN PTR oel6-priv3.gj.com.

    21 IN PTR oel61a-priv3.gj.com.

    22 IN PTR oel61b-priv3.gj.com.

    23 IN PTR oel61c-priv3.gj.com.

    [root@oel61 named]#

Make sure that you enable the named service for auto-start by issuing the following command.

chkconfig named on

Start the named service by issuing the following command.

service named start
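Before relying on the new zones, the BIND configuration and zone files can be validated with the bundled check tools (illustrative; named-checkconf prints nothing when the syntax is clean):

[root@oel61 named]# named-checkconf /etc/named.conf
[root@oel61 named]# named-checkzone gj.com /var/named/gj1.zone
zone gj.com/IN: loaded serial 43
OK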

Disable the firewall by issuing the following commands on all nodes oel61, oel61a and oel61b as root. Note that chkconfig only disables the auto-start; service iptables stop stops the running firewall immediately.

chkconfig iptables off
service iptables stop

    For production systems it is strongly recommended to adjust the iptables

    rules so that you can have access to the DNS server listening on port 53.

    Here for simplicity the firewall is disabled.

Modify the /etc/resolv.conf file to reflect the DNS IP address specified by the nameserver parameter and the domain specified by the search parameter on all nodes (oel61, oel61a and oel61b).

    [root@oel61a stage]# cat /etc/resolv.conf

    # Generated by NetworkManager

search gj.com

    nameserver 192.168.2.11

    [root@oel61a stage]#

Test accessibility and name resolution for all public and private nodes using nslookup.

    [root@oel61a stage]# nslookup oel61

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    Name: oel61.gj.com

    Address: 192.168.2.11

    [root@oel61a stage]# nslookup oel61a

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    Name: oel61a.gj.com

    Address: 192.168.2.21

    [root@oel61a stage]# nslookup oel61b

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    Name: oel61b.gj.com

    Address: 192.168.2.22

    [root@oel61a stage]# nslookup 192.168.2.11

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    11.2.168.192.in-addr.arpa name = oel61.gj.com.

    [root@oel61a stage]# nslookup 192.168.2.21

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    21.2.168.192.in-addr.arpa name = oel61a.gj.com.

    [root@oel61a stage]# nslookup 192.168.2.22

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    22.2.168.192.in-addr.arpa name = oel61b.gj.com.

    [root@oel61a stage]#

    [root@oel61a stage]# nslookup oel61a-priv1

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    Name: oel61a-priv1.gj.com

    Address: 10.10.10.21

    [root@oel61a stage]# nslookup 10.10.10.21

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    21.10.10.10.in-addr.arpa name = oel61a-priv1.gj.com.

    [root@oel61a stage]#

[root@oel61a stage]# nslookup oel61a-priv2

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    Name: oel61a-priv2.gj.com

    Address: 10.10.5.21

    [root@oel61a stage]# nslookup 10.10.5.21

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    21.5.10.10.in-addr.arpa name = oel61a-priv2.gj.com.

    [root@oel61a stage]#

    [root@oel61a stage]# nslookup oel61a-priv3

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    Name: oel61a-priv3.gj.com

    Address: 10.10.2.21

    [root@oel61a stage]# nslookup 10.10.2.21

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    21.2.10.10.in-addr.arpa name = oel61a-priv3.gj.com.

    [root@oel61a stage]#

    [root@oel61a stage]# nslookup oel61b-priv1

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    Name: oel61b-priv1.gj.com

    Address: 10.10.10.22

    [root@oel61a stage]# nslookup 10.10.10.22

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    22.10.10.10.in-addr.arpa name = oel61b-priv1.gj.com.

    [root@oel61a stage]# nslookup oel61b-priv2

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    Name: oel61b-priv2.gj.com

    Address: 10.10.5.22

    [root@oel61a stage]# nslookup 10.10.5.22

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    22.5.10.10.in-addr.arpa name = oel61b-priv2.gj.com.

    [root@oel61a stage]# nslookup oel61b-priv3

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    Name: oel61b-priv3.gj.com

    Address: 10.10.2.22

[root@oel61a stage]# nslookup 10.10.2.22

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    22.2.10.10.in-addr.arpa name = oel61b-priv3.gj.com.

    [root@oel61a stage]#

    [root@oel61a stage]# nslookup oel61

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    Name: oel61.gj.com

    Address: 192.168.2.11

    [root@oel61a stage]# nslookup 192.168.2.11

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    11.2.168.192.in-addr.arpa name = oel61.gj.com.

    [root@oel61a stage]#

    [root@oel61a stage]# nslookup oel61-priv1

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    Name: oel61-priv1.gj.com

    Address: 10.10.10.11

    [root@oel61a stage]# nslookup 10.10.10.11

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    11.10.10.10.in-addr.arpa name = oel6-priv1.gj.com.

    [root@oel61a stage]#

    [root@oel61a stage]# nslookup oel61-priv2

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    Name: oel61-priv2.gj.com

    Address: 10.10.5.11

    [root@oel61a stage]# nslookup 10.10.5.11

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    11.5.10.10.in-addr.arpa name = oel6-priv2.gj.com.

    [root@oel61a stage]#

    [root@oel61a stage]# nslookup 10.10.5.11

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    11.5.10.10.in-addr.arpa name = oel6-priv2.gj.com.

    [root@oel61a stage]# nslookup oel61-priv3

    Server: 192.168.2.11

    Address: 192.168.2.11#53

Name: oel61-priv3.gj.com

    Address: 10.10.2.11

    [root@oel61a stage]# nslookup 10.10.2.11

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    11.2.10.10.in-addr.arpa name = oel6-priv3.gj.com.

    [root@oel61a stage]#

    Set up DHCP

Create a file /etc/dhcp/dhcpd.conf to specify:

Routers - set to 192.168.2.1

Subnet mask - set to 255.255.255.0

Domain name - grid.gj.com

Domain name server - from Table 1 the IP is 192.168.2.11

Time offset - EST

Range - 192.168.2.100 to 192.168.2.130 will be assigned for GNS delegation.

    [root@oel61 named]# cat /etc/dhcp/dhcpd.conf

    #

    # DHCP Server Configuration file.

    # see /usr/share/doc/dhcp*/dhcpd.conf.sample

    # see 'man 5 dhcpd.conf'

    #

    ddns-update-style interim;

    ignore client-updates;

    subnet 192.168.2.0 netmask 255.255.255.0 {

    option routers 192.168.2.1;

    option subnet-mask 255.255.255.0;

    option domain-name "grid.gj.com";

    option domain-name-servers 192.168.2.11;

    option time-offset -18000; # Eastern Standard Time

    range 192.168.2.100 192.168.2.130;

    default-lease-time 86400;

    }

    [root@oel61 named]#

Enable auto-start by issuing the following command.

chkconfig dhcpd on

Start the dhcpd service by issuing the following command.

service dhcpd start
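To confirm that the daemon is up and handing out leases from the 192.168.2.100-130 range once GNS starts requesting addresses, check the service status and the leases file (paths as on OEL 6):

[root@oel61 ~]# service dhcpd status
[root@oel61 ~]# tail /var/lib/dhcpd/dhcpd.leases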

    Reboot oel61 server.

    Install GI 11.2.0.3 on oel61a and oel61b

    Verify that the prerequisites for GI installation are met.

Run the following commands.

./runcluvfy.sh stage -post hwos -n oel61a,oel61b -verbose

./runcluvfy.sh stage -pre crsinst -n oel61a,oel61b -verbose

The output is in Annex 1.

    The OUI will be used for setting up user equivalence. Run OUI from the

    staging directory.

    [grid@oel61a grid]$ pwd

    /u01/stage/grid

    [grid@oel61a grid]$ ls

    doc install readme.html response rpm runcluvfy.sh runInstaller sshsetup

    stage welcome.html

    [grid@oel61a grid]$ ./runInstaller

    Select skip software updates and press Next to continue.

    Select Install and Configure GI and press Next to continue.

  • Select Advanced installation and press Next to continue.

  • Select languages and press Next to continue.

  • Enter the requested data and press Next to continue. The GNS sub-domain is gns.grid.gj.com. The GNS VIP is 192.168.2.52. The SCAN port is 1521. The SCAN name is oel61-cluster-scan.gns.grid.gj.com.

  • Click Add.

  • Click SSH Connectivity.

  • Select 192.168.2 as public and all 10.10 as private. Press Next to continue.

    HAIP will be deployed and examined.

  • Select ASM and press Next to continue.

  • Select disk group DATA as specified and press Next to continue.

  • Enter password and press Next to continue.

  • De-select IPMI and press Next to continue.

  • Specify the groups and press Next to continue.

  • Specify locations and press Next to continue.

  • Examine the findings.

  • The errors are as follows

    Task resolv.conf Integrity - This task checks consistency of file

    /etc/resolv.conf file across nodes

    Check Failed on Nodes: [oel61b, oel61a]

    Verification result of failed node: oel61b

    Details:

    -

    PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms

    on following nodes: oel61a,oel61b - Cause: The DNS response time for an

    unreachable node exceeded the value specified on nodes specified. - Action:

    Make sure that 'options timeout', 'options attempts' and 'nameserver' entries

    in file resolv.conf are proper. On HPUX these entries will be 'retrans',

    'retry' and 'nameserver'. On Solaris these will be 'options retrans', 'options

    retry' and 'nameserver'.


    Verification result of failed node: oel61a

    Details:

    -

    PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms

    on following nodes: oel61a,oel61b - Cause: The DNS response time for an

    unreachable node exceeded the value specified on nodes specified. - Action:

    Make sure that 'options timeout', 'options attempts' and 'nameserver' entries

    in file resolv.conf are proper. On HPUX these entries will be 'retrans',

'retry' and 'nameserver'. On Solaris these will be 'options retrans', 'options

    retry' and 'nameserver'.

    Reference : PRVF-5636 : The DNS response time for an unreachable node exceeded

    "15000" ms on following nodes

    [root@oel61a bin]# time nslookup not-known

    ;; connection timed out; no servers could be reached

    real 0m15.009s

    user 0m0.002s

    sys 0m0.002s

    [root@oel61a bin]#

    Reverse path filter setting - Checks if reverse path filter setting for all

    private interconnect network interfaces is correct

    Check Failed on Nodes: [oel61b, oel61a]

    Verification result of failed node: oel61b

    Expected Value

    : 0|2

    Actual Value

    : 1

    Details:

    -

    PRVE-0453 : Reverse path filter parameter "rp_filter" for private interconnect

    network interfaces "eth3" is not set to 0 or 2 on node "oel61b.gj.com". -

    Cause: Reverse path filter parameter 'rp_filter' was not set to 0 or 2 for

    identified private interconnect network interfaces on specified node. -

    Action: Ensure that the 'rp_filter' parameter is correctly set to the value of

    0 or 2 for each of the interface used in the private interconnect

    classification, This will disable or relax the filtering and allow Clusterware

    to function correctly. Use 'sysctl' command to modify the value of this

    parameter.


    Verification result of failed node: oel61a

    Expected Value

    : 0|2

    Actual Value

    : 1

    Details:

    -

    PRVE-0453 : Reverse path filter parameter "rp_filter" for private interconnect

    network interfaces "eth3" is not set to 0 or 2 on node "oel61a.gj.com". -

    Cause: Reverse path filter parameter 'rp_filter' was not set to 0 or 2 for

    identified private interconnect network interfaces on specified node. -

    Action: Ensure that the 'rp_filter' parameter is correctly set to the value of

    0 or 2 for each of the interface used in the private interconnect

    classification, This will disable or relax the filtering and allow Clusterware

    to function correctly. Use 'sysctl' command to modify the value of this

    parameter.


    Solution

1. For PRVE-0453 set rp_filter as indicated below on each node of the cluster. Note that this changes only the running kernel; to make the setting persistent across reboots, also add the corresponding net.ipv4.conf.*.rp_filter entries to /etc/sysctl.conf (the eth3 entry was missing there earlier).

[root@oel61a disks]# for i in /proc/sys/net/ipv4/conf/*/rp_filter ; do

> echo 2 > $i

> done

[root@oel61a disks]#

2. For PRVF-10406 the ASM disks need to have the right permissions and ownership.

3. For PRVF-5636 make sure that nslookup always returns in less than 10 seconds. In the example below it takes 15 seconds.

    [root@oel61a bin]# time nslookup not-known

    ;; connection timed out; no servers could be reached

    real 0m15.009s

    user 0m0.002s

    sys 0m0.002s

[root@oel61a bin]#
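One way to keep failed lookups well under the 15000 ms threshold is to bound the resolver retries in /etc/resolv.conf on every node; the values below are illustrative, using exactly the options named in the PRVF-5636 action text:

search gj.com
nameserver 192.168.2.11
options timeout:3 attempts:2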

    Review the Summary settings and press Install to continue.

  • Here is the bottom part.

  • Wait until prompted for running scripts as root.

  • The output from the scripts is as follows.

    [root@oel61a disks]# /u01/app/oraInventory/orainstRoot.sh

    Changing permissions of /u01/app/oraInventory.

    Adding read,write permissions for group.

    Removing read,write,execute permissions for world.

    Changing groupname of /u01/app/oraInventory to oinstall.

    The execution of the script is complete.

    [root@oel61a disks]#

    [root@oel61b disks]# /u01/app/oraInventory/orainstRoot.sh

    Changing permissions of /u01/app/oraInventory.

    Adding read,write permissions for group.

    Removing read,write,execute permissions for world.

    Changing groupname of /u01/app/oraInventory to oinstall.

    The execution of the script is complete.

    [root@oel61b disks]#

    [root@oel61a disks]# /u01/app/11.2.0/grid/root.sh

    Performing root user operation for Oracle 11g

The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME= /u01/app/11.2.0/grid

    Enter the full pathname of the local bin directory: [/usr/local/bin]:

    The contents of "dbhome" have not changed. No need to overwrite.

    The contents of "oraenv" have not changed. No need to overwrite.

    The contents of "coraenv" have not changed. No need to overwrite.

    Entries will be added to the /etc/oratab file as needed by

    Database Configuration Assistant when a database is created

    Finished running generic part of root script.

    Now product-specific root actions will be performed.

    Using configuration parameter file:

    /u01/app/11.2.0/grid/crs/install/crsconfig_params

    User ignored Prerequisites during installation

    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'oel61a'

    CRS-2676: Start of 'ora.cssdmonitor' on 'oel61a' succeeded

    CRS-2672: Attempting to start 'ora.cssd' on 'oel61a'

    CRS-2672: Attempting to start 'ora.diskmon' on 'oel61a'

    CRS-2676: Start of 'ora.diskmon' on 'oel61a' succeeded

    CRS-2676: Start of 'ora.cssd' on 'oel61a' succeeded

    ASM created and started successfully.

    Disk Group DATA created successfully.

    clscfg: -install mode specified

    Successfully accumulated necessary OCR keys.

    Creating OCR keys for user 'root', privgrp 'root'..

    Operation successful.

    CRS-4256: Updating the profile

    Successful addition of voting disk 3744662ab1f94f24bf2c12907758d030.

    Successful addition of voting disk 909062d7ed274ff3bf50842d55dfb419.

    Successful addition of voting disk 4b30586423434fd4bf7120c436bea542.

    Successfully replaced voting disk group with +DATA.

    CRS-4256: Updating the profile

    CRS-4266: Voting file(s) successfully replaced

    ## STATE File Universal Id File Name Disk group

    -- ----- ----------------- --------- ---------

    1. ONLINE 3744662ab1f94f24bf2c12907758d030 (/dev/oracleasm/disks/DISK1)

    [DATA]

    2. ONLINE 909062d7ed274ff3bf50842d55dfb419 (/dev/oracleasm/disks/DISK2)

    [DATA]

    3. ONLINE 4b30586423434fd4bf7120c436bea542 (/dev/oracleasm/disks/DISK3)

    [DATA]

    Located 3 voting disk(s).

    CRS-2672: Attempting to start 'ora.asm' on 'oel61a'

    CRS-2676: Start of 'ora.asm' on 'oel61a' succeeded

    CRS-2672: Attempting to start 'ora.DATA.dg' on 'oel61a'

    CRS-2676: Start of 'ora.DATA.dg' on 'oel61a' succeeded

    CRS-2672: Attempting to start 'ora.registry.acfs' on 'oel61a'

    CRS-2676: Start of 'ora.registry.acfs' on 'oel61a' succeeded

    Configure Oracle Grid Infrastructure for a Cluster ... succeeded

    [root@oel61a disks]#

    [root@oel61b disks]# /u01/app/11.2.0/grid/root.sh

    Performing root user operation for Oracle 11g

    The following environment variables are set as:

    ORACLE_OWNER= grid

ORACLE_HOME= /u01/app/11.2.0/grid

    Enter the full pathname of the local bin directory: [/usr/local/bin]:

    Copying dbhome to /usr/local/bin ...

    Copying oraenv to /usr/local/bin ...

    Copying coraenv to /usr/local/bin ...

    Creating /etc/oratab file...

    Entries will be added to the /etc/oratab file as needed by

    Database Configuration Assistant when a database is created

    Finished running generic part of root script.

    Now product-specific root actions will be performed.

    Using configuration parameter file:

    /u01/app/11.2.0/grid/crs/install/crsconfig_params

    Creating trace directory

    User ignored Prerequisites during installation

    OLR initialization - successful

    Adding Clusterware entries to upstart

    CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS

    daemon on node oel61a, number 1, and is terminating

    An active cluster was found during exclusive startup, restarting to join the

    cluster

    Configure Oracle Grid Infrastructure for a Cluster ... succeeded

    [root@oel61b disks]#

    Wait for the assistants to finish.

  • Verify that GI is installed.

    Check OS processes:

    [root@oel61a bin]# ps -ef | grep d.bin

    root 2823 1 1 06:22 ? 00:00:06 /u01/app/11.2.0/grid/bin/ohasd.bin reboot

    grid 3281 1 0 06:23 ? 00:00:01 /u01/app/11.2.0/grid/bin/oraagent.bin

    grid 3293 1 0 06:23 ? 00:00:00 /u01/app/11.2.0/grid/bin/mdnsd.bin

    grid 3305 1 0 06:23 ? 00:00:00 /u01/app/11.2.0/grid/bin/gpnpd.bin

    grid 3318 1 0 06:23 ? 00:00:02 /u01/app/11.2.0/grid/bin/gipcd.bin

    root 3357 1 0 06:23 ? 00:00:00 /u01/app/11.2.0/grid/bin/cssdmonitor

    root 3369 1 0 06:23 ? 00:00:00 /u01/app/11.2.0/grid/bin/cssdagent

    grid 3381 1 1 06:23 ? 00:00:05 /u01/app/11.2.0/grid/bin/ocssd.bin

    root 3384 1 1 06:23 ? 00:00:06 /u01/app/11.2.0/grid/bin/orarootagent.bin

    root 3400 1 2 06:23 ? 00:00:10 /u01/app/11.2.0/grid/bin/osysmond.bin

root 3549 1 0 06:23 ? 00:00:01 /u01/app/11.2.0/grid/bin/octssd.bin reboot

grid 3592 1 0 06:24 ? 00:00:01 /u01/app/11.2.0/grid/bin/evmd.bin

root 3861 1 0 06:24 ? 00:00:00 /u01/app/11.2.0/grid/bin/ologgerd -m oel61b -r -d /u01/app/11.2.0/grid/crf/db/oel61a

root 3878 1 1 06:24 ? 00:00:04 /u01/app/11.2.0/grid/bin/crsd.bin reboot

grid 3956 3592 0 06:24 ? 00:00:00 /u01/app/11.2.0/grid/bin/evmlogger.bin -o /u01/app/11.2.0/grid/evm/log/evmlogger.info -l /u01/app/11.2.0/grid/evm/log/evmlogger.log

grid 3994 1 0 06:24 ? 00:00:01 /u01/app/11.2.0/grid/bin/oraagent.bin

root 3998 1 0 06:24 ? 00:00:03 /u01/app/11.2.0/grid/bin/orarootagent.bin

grid 4118 1 0 06:24 ? 00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER -inherit

grid 4134 1 0 06:24 ? 00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER_SCAN1 -inherit

    oracle 4166 1 0 06:24 ? 00:00:01 /u01/app/11.2.0/grid/bin/oraagent.bin

    root 5356 5235 0 06:30 pts/1 00:00:00 grep d.bin

    [root@oel61a bin]#
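As an additional quick check, not shown in the transcript above, you can ask crsctl for the overall stack health; this is a minimal sketch, where the first command reports the CRS, CSS and EVM state on every node and the second checks only the local stack:

[root@oel61a bin]# ./crsctl check cluster -all

[root@oel61a bin]# ./crsctl check crs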

    Check GI resource status.

    [root@oel61a bin]# ./crsctl status res -t

    --------------------------------------------------------------------------------

    NAME TARGET STATE SERVER STATE_DETAILS

    --------------------------------------------------------------------------------

    Local Resources

    --------------------------------------------------------------------------------

    ora.DATA.dg

    ONLINE ONLINE oel61a

    ONLINE ONLINE oel61b

    ora.LISTENER.lsnr

    ONLINE ONLINE oel61a

    ONLINE ONLINE oel61b

    ora.asm

    ONLINE ONLINE oel61a Started

    ONLINE ONLINE oel61b Started

    ora.gsd

    OFFLINE OFFLINE oel61a

    OFFLINE OFFLINE oel61b

    ora.net1.network

    ONLINE ONLINE oel61a

    ONLINE ONLINE oel61b

    ora.ons

    ONLINE ONLINE oel61a

    ONLINE ONLINE oel61b

    ora.registry.acfs

    ONLINE ONLINE oel61a

    ONLINE ONLINE oel61b

    --------------------------------------------------------------------------------

    Cluster Resources

    --------------------------------------------------------------------------------

    ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE oel61b

    ora.LISTENER_SCAN2.lsnr

    1 ONLINE ONLINE oel61a

    ora.LISTENER_SCAN3.lsnr

    1 ONLINE ONLINE oel61a

    ora.cvu

    1 ONLINE ONLINE oel61a

    ora.gns

    1 ONLINE ONLINE oel61a

    ora.gns.vip

    1 ONLINE ONLINE oel61a

    ora.oc4j

    1 ONLINE ONLINE oel61a

    ora.oel61a.vip

    1 ONLINE ONLINE oel61a

    ora.oel61b.vip

    1 ONLINE ONLINE oel61b

    ora.scan1.vip

    1 ONLINE ONLINE oel61b

    ora.scan2.vip

    1 ONLINE ONLINE oel61a

    ora.scan3.vip

    1 ONLINE ONLINE oel61a

    [root@oel61a bin]#

    Check the interfaces.

    [grid@oel61a grid]$ oifcfg getif -global

    eth0 10.10.2.0 global cluster_interconnect

    eth1 192.168.2.0 global public

    eth2 10.10.10.0 global cluster_interconnect

    eth3 10.10.5.0 global cluster_interconnect

    [grid@oel61a grid]$

Check the interfaces from the ASM instance.

    SQL> select * from V$CLUSTER_INTERCONNECTS;

    NAME IP_ADDRESS IS_ SOURCE

    --------------- ---------------- --- -------------------------------

    eth0:1 169.254.45.77 NO

    eth3:1 169.254.106.22 NO

    eth2:1 169.254.188.165 NO

    eth0:2 169.254.242.179 NO

    SQL> select * from V$configured_interconnects;

    NAME IP_ADDRESS IS_ SOURCE

    --------------- ---------------- --- -------------------------------

    eth0:1 169.254.45.77 NO

    eth3:1 169.254.106.22 NO

    eth2:1 169.254.188.165 NO

    eth0:2 169.254.242.179 NO

    eth1 192.168.2.21 YES

    SQL>

Check the GNS.

    [grid@oel61b ~]$ cluvfy comp gns -postcrsinst -verbose

    Verifying GNS integrity

    Checking GNS integrity...

Checking if the GNS subdomain name is valid...

    The GNS subdomain name "gns.grid.gj.com" is a valid domain name

    Checking if the GNS VIP belongs to same subnet as the public network...

    Public network subnets "192.168.2.0" match with the GNS VIP "192.168.2.0"

    Checking if the GNS VIP is a valid address...

    GNS VIP "192.168.2.52" resolves to a valid IP address

    Checking the status of GNS VIP...

    Checking if FDQN names for domain "gns.grid.gj.com" are reachable

    GNS resolved IP addresses are reachable

    GNS resolved IP addresses are reachable

    GNS resolved IP addresses are reachable

    GNS resolved IP addresses are reachable

    GNS resolved IP addresses are reachable

    Checking status of GNS resource...

    Node Running? Enabled?

    ------------ ------------------------ ------------------------

    oel61b no yes

    oel61a yes yes

    GNS resource configuration check passed

    Checking status of GNS VIP resource...

    Node Running? Enabled?

    ------------ ------------------------ ------------------------

    oel61b no yes

    oel61a yes yes

    GNS VIP resource configuration check passed.

    GNS integrity check passed

    Verification of GNS integrity was successful.

    [grid@oel61b ~]$

    Install RAC RDBMS 11.2.0.3 on oel61a and oel61b

Log in as the oracle user and start OUI from the staging directory.
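For example, assuming the 11.2.0.3 database media was unzipped into /u01/stage/database (a hypothetical staging path; use whatever directory you staged the patchset in), OUI is started like this:

[oracle@oel61a ~]$ cd /u01/stage/database

[oracle@oel61a database]$ ./runInstaller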

  • Select skip software updates.

  • Select Install software only and press Next to continue.

  • Select RAC installation, select all nodes, and press Next to continue.

  • Establish SSH connectivity.

  • Select language.

  • Select EE and press Next to continue.

  • Select software locations and press Next to continue.

  • Select groups and press Next to continue.

  • Examine the findings.

  • Press Install to continue.

  • Wait until prompted to run scripts as root.

  • Run the scripts as root on both nodes, as shown in the sketch below.
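A sketch of the root script step, using the RDBMS home that appears later in the srvctl output; run it on oel61a first, then on oel61b:

[root@oel61a ~]# /u01/app/oracle/product/11.2.0/db_1/root.sh

[root@oel61b ~]# /u01/app/oracle/product/11.2.0/db_1/root.sh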

  • Create a policy-managed database RACDB on oel61a and oel61b

Log in as the oracle user and start dbca to create a database. Select RAC database and press Next to continue.
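A minimal way to launch dbca, assuming the RDBMS home used in this install and a working X display:

[oracle@oel61a ~]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1

[oracle@oel61a ~]$ export PATH=$ORACLE_HOME/bin:$PATH

[oracle@oel61a ~]$ dbca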

  • Select Create a database.

  • Select create a general purpose database and press Next to continue.

  • Specify SID, server pool name and cardinality.

  • Select Configure Enterprise Manager and press Next to continue.

  • Specify password and press Next to continue.

  • Specify disk group and press Next to continue.

  • Specify FRA and enable archiving and press Next to continue.

  • Select sample schemas and press Next to continue.

  • Specify memory size and other parameters. Once done press Next to continue.

  • Keep the storage settings default and press Next to continue.

  • Review

  • Wait for the dbca to succeed.

  • Change the password and exit

  • Log in to EM DC using the URL specified above.

  • Cluster Database Home page.

  • Cluster home page.

  • Interconnect page.

  • Verify database creation and create a service

Let's verify the database and server pool configuration.

    [oracle@oel61a ~]$ srvctl config srvpool

    Server pool name: Free

    Importance: 0, Min: 0, Max: -1

    Candidate server names:

    Server pool name: Generic

    Importance: 0, Min: 0, Max: -1

    Candidate server names:

    Server pool name: servpool

    Importance: 0, Min: 0, Max: 2

    Candidate server names:

    [oracle@oel61a ~]$
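Here servpool was created by dbca when the server pool name and cardinality were specified in the wizard. An equivalent pool could also be created manually beforehand with srvctl; a hypothetical sketch matching the Min/Max/Importance values above:

[oracle@oel61a ~]$ srvctl add srvpool -g servpool -l 0 -u 2 -i 0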

    [oracle@oel61a ~]$ srvctl config database -d racdb -a -v

    Database unique name: RACDB

    Database name: RACDB

    Oracle home: /u01/app/oracle/product/11.2.0/db_1

    Oracle user: oracle

    Spfile: +DATA/RACDB/spfileRACDB.ora

    Domain:

Start options: open

    Stop options: immediate

    Database role: PRIMARY

    Management policy: AUTOMATIC

    Server pools: servpool

    Database instances:

    Disk Groups: DATA

    Mount point paths:

    Services:

    Type: RAC

    Database is enabled

    Database is policy managed

    [oracle@oel61a ~]$

Let's create a service with the following specification.
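The creation command itself is not shown in the transcript; a sketch that would produce the attributes listed below (UNIFORM cardinality in server pool servpool, TAF SELECT/BASIC, failover retries 10, failover delay 200) is:

[oracle@oel61a ~]$ srvctl add service -d racdb -s racdbsrv -g servpool -c UNIFORM -e SELECT -m BASIC -z 10 -w 200

[oracle@oel61a ~]$ srvctl start service -d racdb -s racdbsrv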

    [oracle@oel61b ~]$ srvctl config service -d racdb

    Service name: racdbsrv

    Service is enabled

    Server pool: servpool

    Cardinality: UNIFORM

    Disconnect: false

    Service role: PRIMARY

    Management policy: AUTOMATIC

    DTP transaction: false

    AQ HA notifications: false

    Failover type: SELECT

    Failover method: BASIC

    TAF failover retries: 10

    TAF failover delay: 200

    Connection Load Balancing Goal: LONG

    Runtime Load Balancing Goal: NONE

TAF policy specification: NONE

    Edition:

    Service is enabled on nodes:

    Service is disabled on nodes:

    [oracle@oel61b ~]$

Now change the TAF failover retries to 200, the failover delay to 10, and set the TAF policy to BASIC.

    [oracle@oel61a admin]$ srvctl modify service -d racdb -s racdbsrv -w 10 -z 200

    [oracle@oel61a admin]$ srvctl modify service -d racdb -s racdbsrv -P BASIC

    [oracle@oel61a admin]$ srvctl status service -d racdb

    Service racdbsrv is running on nodes: oel61a,oel61b

    [oracle@oel61a admin]$ srvctl config service -d racdb

    Service name: racdbsrv

    Service is enabled

    Server pool: servpool

    Cardinality: UNIFORM

    Disconnect: false

    Service role: PRIMARY

    Management policy: AUTOMATIC

    DTP transaction: false

    AQ HA notifications: false

    Failover type: SELECT

    Failover method: BASIC

    TAF failover retries: 200

    TAF failover delay: 10

    Connection Load Balancing Goal: LONG

    Runtime Load Balancing Goal: NONE

    TAF policy specification: BASIC

    Edition:

    Service is enabled on nodes:

    Service is disabled on nodes:

    [oracle@oel61a admin]$

    Edit tnsnames.ora to add

    RACDBSRV =

    (DESCRIPTION =

    (LOAD_BALANCE = YES)

    (FAILOVER = YES )

    (ADDRESS = (PROTOCOL = TCP)(HOST = oel61-cluster-scan.gns.grid.gj.com)(PORT = 1521))

    (ADDRESS = (PROTOCOL = TCP)(HOST = oel61-cluster-scan.gns.grid.gj.com)(PORT = 1521))

    (CONNECT_DATA =

    (SERVER = DEDICATED)

    (SERVICE_NAME = RACDBSRV)

    (FAILOVER_MODE =

    (TYPE = SELECT)

    (METHOD = BASIC)

    (RETRIES = 200)

    (DELAY = 10 )

    )

    )

    )

    [oracle@oel61a admin]$ sqlplus system/sys1@racdbsrv

    SQL*Plus: Release 11.2.0.3.0 Production on Tue Oct 4 01:33:22 2011

    Copyright (c) 1982, 2011, Oracle. All rights reserved.

    Connected to:

    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Partitioning, Real Application Clusters, Automatic Storage Management,

    OLAP,

    Data Mining and Real Application Testing options

    SQL> select * from v$configured_interconnects;

    NAME IP_ADDRESS IS_ SOURCE

    --------------- ---------------- --- -------------------------------

    eth0:2 169.254.57.231 NO

    eth3:1 169.254.85.187 NO

    eth2:1 169.254.146.23 NO

    eth0:1 169.254.233.31 NO

    eth1 192.168.2.22 YES

    eth1:1 192.168.2.52 YES

    eth1:2 192.168.2.117 YES

    eth1:3 192.168.2.112 YES

    eth1:4 192.168.2.100 YES

    eth1:5 192.168.2.111 YES

    eth1:6 192.168.2.113 YES

    11 rows selected.

    SQL> select * from v$cluster_interconnects;

    NAME IP_ADDRESS IS_ SOURCE

    --------------- ---------------- --- -------------------------------

    eth0:2 169.254.57.231 NO

    eth3:1 169.254.85.187 NO

    eth2:1 169.254.146.23 NO

    eth0:1 169.254.233.31 NO

    SQL>
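To confirm that the session connected through the service is TAF-enabled, a check not shown in the original transcript, you can query gv$session:

SQL> select inst_id, failover_type, failover_method, failed_over from gv$session where username = 'SYSTEM';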

There are two types of virtual IPs:

1. Virtual IPs generated by GNS in the 192.168.2.x subnet

2. Virtual IPs generated by HAIP in the 169.254.x.x subnet

As per MOS Note "11gR2 Grid Infrastructure Redundant Interconnect and ora.cluster_interconnect.haip" [ID 1210883.1], HAIP allocates virtual IPs from the reserved 169.254.x.x range for both load balancing and failover.

The IPs generated by HAIP can also be seen using oifcfg:

    [grid@oel61a admin]$ oifcfg getif -global

    eth0 10.10.2.0 global cluster_interconnect

    eth1 192.168.2.0 global public

    eth2 10.10.10.0 global cluster_interconnect

    eth3 10.10.5.0 global cluster_interconnect

    [grid@oel61a admin]$ oifcfg iflist -p -n

    eth0 10.10.2.0 PRIVATE 255.255.255.0

    eth0 169.254.0.0 UNKNOWN 255.255.192.0

    eth0 169.254.192.0 UNKNOWN 255.255.192.0

    eth1 192.168.2.0 PRIVATE 255.255.255.0

    eth2 10.10.10.0 PRIVATE 255.255.255.0

    eth2 169.254.128.0 UNKNOWN 255.255.192.0

    eth3 10.10.5.0 PRIVATE 255.255.255.0

    eth3 169.254.64.0 UNKNOWN 255.255.192.0

    virbr0 192.168.122.0 PRIVATE 255.255.255.0

    [grid@oel61a admin]$

There is a new resource, ora.cluster_interconnect.haip, corresponding to HAIP (added in 11.2.0.2).

    [root@oel61a bin]# ./crsctl status resource -t -init

--------------------------------------------------------------------------------

NAME TARGET STATE SERVER STATE_DETAILS

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

    ora.cluster_interconnect.haip

    1 ONLINE ONLINE oel61a

    Using ifconfig command you can look at the addresses.

    [root@oel61a named]# ifconfig -a

    eth0 Link encap:Ethernet HWaddr 08:00:27:1D:31:C1

    inet addr:10.10.2.21 Bcast:10.255.255.255 Mask:255.255.255.0

    inet6 addr: fe80::a00:27ff:fe1d:31c1/64 Scope:Link

    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

    RX packets:340942 errors:0 dropped:0 overruns:0 frame:0

    TX packets:315184 errors:0 dropped:0 overruns:0 carrier:0

    collisions:0 txqueuelen:1000

    RX bytes:224102003 (213.7 MiB) TX bytes:213005762 (203.1 MiB)

    eth0:1 Link encap:Ethernet HWaddr 08:00:27:1D:31:C1

    inet addr:169.254.45.77 Bcast:169.254.63.255 Mask:255.255.192.0

    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

    eth0:2 Link encap:Ethernet HWaddr 08:00:27:1D:31:C1

    inet addr:169.254.242.179 Bcast:169.254.255.255 Mask:255.255.192.0

    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

    eth1 Link encap:Ethernet HWaddr 08:00:27:88:DD:5D

    inet addr:192.168.2.21 Bcast:192.168.2.255 Mask:255.255.255.0

    inet6 addr: fe80::a00:27ff:fe88:dd5d/64 Scope:Link

    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

    RX packets:86786 errors:0 dropped:0 overruns:0 frame:0

    TX packets:82065 errors:0 dropped:0 overruns:0 carrier:0

    collisions:0 txqueuelen:1000

    RX bytes:39897692 (38.0 MiB) TX bytes:28853528 (27.5 MiB)

    eth1:1 Link encap:Ethernet HWaddr 08:00:27:88:DD:5D

    inet addr:192.168.2.52 Bcast:192.168.2.255 Mask:255.255.255.0

    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

    eth1:2 Link encap:Ethernet HWaddr 08:00:27:88:DD:5D

    inet addr:192.168.2.111 Bcast:192.168.2.255 Mask:255.255.255.0

    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

    eth1:3 Link encap:Ethernet HWaddr 08:00:27:88:DD:5D

    inet addr:192.168.2.112 Bcast:192.168.2.255 Mask:255.255.255.0

    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

    eth1:4 Link encap:Ethernet HWaddr 08:00:27:88:DD:5D

    inet addr:192.168.2.100 Bcast:192.168.2.255 Mask:255.255.255.0

    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

    eth2 Link encap:Ethernet HWaddr 08:00:27:E2:F8:4B

    inet addr:10.10.10.21 Bcast:10.255.255.255 Mask:255.255.255.0

    inet6 addr: fe80::a00:27ff:fee2:f84b/64 Scope:Link

    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

RX packets:164579 errors:0 dropped:0 overruns:0 frame:0

    TX packets:136998 errors:0 dropped:0 overruns:0 carrier:0

    collisions:0 txqueuelen:1000

    RX bytes:104256255 (99.4 MiB) TX bytes:76968904 (73.4 MiB)

    eth2:1 Link encap:Ethernet HWaddr 08:00:27:E2:F8:4B

    inet addr:169.254.188.165 Bcast:169.254.191.255 Mask:255.255.192.0

    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

    eth3 Link encap:Ethernet HWaddr 08:00:27:0C:85:F5

    inet addr:10.10.5.21 Bcast:10.255.255.255 Mask:255.255.255.0

    inet6 addr: fe80::a00:27ff:fe0c:85f5/64 Scope:Link

    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

    RX packets:164007 errors:0 dropped:0 overruns:0 frame:0

    TX packets:137579 errors:0 dropped:0 overruns:0 carrier:0

    collisions:0 txqueuelen:1000

    RX bytes:104288102 (99.4 MiB) TX bytes:77877992 (74.2 MiB)

    eth3:1 Link encap:Ethernet HWaddr 08:00:27:0C:85:F5

    inet addr:169.254.106.22 Bcast:169.254.127.255 Mask:255.255.192.0

    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

    lo Link encap:Local Loopback

    inet addr:127.0.0.1 Mask:255.0.0.0

    inet6 addr: ::1/128 Scope:Host

    UP LOOPBACK RUNNING MTU:16436 Metric:1

    RX packets:416388 errors:0 dropped:0 overruns:0 frame:0

    TX packets:416388 errors:0 dropped:0 overruns:0 carrier:0

    collisions:0 txqueuelen:0

    RX bytes:182438529 (173.9 MiB) TX bytes:182438529 (173.9 MiB)

    virbr0 Link encap:Ethernet HWaddr 52:54:00:DE:7D:10

    inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0

    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

    RX packets:0 errors:0 dropped:0 overruns:0 frame:0

    TX packets:114 errors:0 dropped:0 overruns:0 carrier:0

    collisions:0 txqueuelen:0

    RX bytes:0 (0.0 b) TX bytes:20794 (20.3 KiB)

    virbr0-nic Link encap:Ethernet HWaddr 52:54:00:DE:7D:10

    BROADCAST MULTICAST MTU:1500 Metric:1

    RX packets:0 errors:0 dropped:0 overruns:0 frame:0

    TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

    collisions:0 txqueuelen:500

    RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)

    [root@oel61a named]#

GNS provides name resolution for the following names:

1. oel61b.gns.grid.gj.com

2. oel61a.gns.grid.gj.com

3. oel61a-vip.gns.grid.gj.com

4. oel61b-vip.gns.grid.gj.com

5. oel61-cluster-scan.gns.grid.gj.com

    Example:

    [root@oel61a stage]# nslookup oel61a.gns.grid.gj.com

    Server: 192.168.2.11

    Address: 192.168.2.11#53

Non-authoritative answer:

    Name: oel61a.gns.grid.gj.com

    Address: 192.168.2.21

    [root@oel61a stage]# nslookup oel61a-vip.gns.grid.gj.com

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    Non-authoritative answer:

    Name: oel61a-vip.gns.grid.gj.com

    Address: 192.168.2.100

    [root@oel61a stage]# nslookup oel61b-vip.gns.grid.gj.com

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    Non-authoritative answer:

    Name: oel61b-vip.gns.grid.gj.com

    Address: 192.168.2.113

    [root@oel61a stage]# nslookup oel61-cluster-scan.gns.grid.gj.com

    Server: 192.168.2.11

    Address: 192.168.2.11#53

    Non-authoritative answer:

    Name: oel61-cluster-scan.gns.grid.gj.com

    Address: 192.168.2.117

    Name: oel61-cluster-scan.gns.grid.gj.com

    Address: 192.168.2.111

    Name: oel61-cluster-scan.gns.grid.gj.com

    Address: 192.168.2.112

    [root@oel61a stage]#
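Because GNS itself answers DNS queries on the GNS VIP (192.168.2.52 here), you can also query it directly and bypass the delegating DNS server; a sketch assuming the dig utility from bind-utils is installed:

[root@oel61a stage]# dig @192.168.2.52 oel61-cluster-scan.gns.grid.gj.com +short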

    Resource status can be seen as follows.

    [root@oel61a bin]# ./crsctl status resource -t -init

--------------------------------------------------------------------------------

NAME TARGET STATE SERVER STATE_DETAILS

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

    ora.asm

    1 ONLINE ONLINE oel61a Started

    ora.cluster_interconnect.haip

1 ONLINE ONLINE oel61a

ora.crf

    1 ONLINE ONLINE oel61a

    ora.crsd

    1 ONLINE ONLINE oel61a

    ora.cssd

    1 ONLINE ONLINE oel61a

    ora.cssdmonitor

    1 ONLINE ONLINE oel61a

    ora.ctssd

    1 ONLINE ONLINE oel61a ACTIVE:0

    ora.diskmon

    1 OFFLINE OFFLINE

    ora.drivers.acfs

1 ONLINE ONLINE oel61a

    ora.evmd

    1 ONLINE