

Oracle Grid Infrastructure 11g: Manage Clusterware and ASM

Volume II • Student Guide

D59999GC10

Edition 1.0

February 2010

D65406

andy thorpe (andy.thorpe@sns.bskyb.com) has a non-transferable license to use this Student Guide.

Unauthorized reproduction or distribution prohibited. Copyright © 2010, Oracle and/or its affiliates.


Copyright © 2010, Oracle. All rights reserved.

Disclaimer

This document contains proprietary information and is protected by copyright and other intellectual property laws. You may copy and print this document solely for your own use in an Oracle training course. The document may not be modified or altered in any way. Except where your use constitutes "fair use" under copyright law, you may not use, share, download, upload, copy, print, display, perform, reproduce, publish, license, post, transmit, or distribute this document in whole or in part without the express authorization of Oracle.

The information contained in this document is subject to change without notice. If you find any problems in the document, please report them in writing to: Oracle University, 500 Oracle Parkway, Redwood Shores, California 94065 USA. This document is not warranted to be error-free.

Restricted Rights Notice

If this documentation is delivered to the United States Government or anyone using the documentation on behalf of the United States Government, the following notice is applicable:

U.S. GOVERNMENT RIGHTS

The U.S. Government’s rights to use, modify, reproduce, release, perform, display, or disclose these training materials are restricted by the terms of the applicable Oracle license agreement and/or the applicable U.S. Government contract.

Trademark Notice

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Authors

James Womack

James Spiller

Technical Contributors

David Brower

Jean-Francois Verrier

Mark Fuller

Mike Leatherman

Barb Lundhild

S. Matt Taylor

Rick Wessman

Technical Reviewers

Christopher Andrews

Christian Bauwens

Michael Cebulla

Jonathan Creighton

Al Flournoy

Andy Fortunak

Joel Goodman

Michael Hazel

Pete Jones

Jerry Lee

Markus Michalewicz

Peter Sharman

Ranbir Singh

Linda Smalley

Janet Stern

Richard Strohm

Branislav Valny

Doug Williams

Publishers

Jayanthy Keshavamurthy

Giri Venugopal


Contents

1 Oracle Grid Infrastructure Architecture
Objectives 1-2
Oracle Grid Infrastructure 1-3
Module 1: Oracle Clusterware Concepts 1-4
What Is a Cluster? 1-5
What Is Clusterware? 1-6
Oracle Clusterware 1-7
Oracle Clusterware Architecture and Services 1-8
Goals for Oracle Clusterware 1-9
Oracle Clusterware Networking 1-10
Interconnect Link Aggregation: Single Switch 1-12
Interconnect Link Aggregation: Multiswitch 1-14
Interconnect NIC Guidelines 1-15
Additional Interconnect Guidelines 1-16
Quiz 1-17
Module 2: Oracle Clusterware Architecture 1-18
Oracle Clusterware Startup 1-19
Oracle Clusterware Process Architecture 1-20
Grid Plug and Play 1-22
GPnP Domain 1-23
GPnP Components 1-24
GPnP Profile 1-25
Grid Naming Service 1-26
Single Client Access Name 1-27
GPnP Architecture Overview 1-29
How GPnP Works: Cluster Node Startup 1-31
How GPnP Works: Client Database Connections 1-32
Quiz 1-33
Module 3: ASM Architecture 1-35
ASM and ASM Cluster File System 1-37
ASM Key Features and Benefits 1-39
ASM Instance Designs: Nonclustered ASM and Oracle Databases 1-40
ASM Instance Designs: Clustered ASM for Clustered Databases 1-41
ASM Instance Designs: Clustered ASM for Mixed Databases 1-42
ASM System Privileges 1-43
ASM OS Groups with Role Separation 1-44
Authentication for Accessing ASM Instances 1-45
Password-Based Authentication for ASM 1-46


Managing the ASM Password File 1-47
Using a Single OS Group 1-48
Using Separate OS Groups 1-49
ASM Components: Software 1-50
ASM Components: ASM Instance 1-51
ASM Components: ASM Instance Primary Processes 1-53
ASM Components: ASM Listener 1-54
ASM Components: Configuration Files 1-55
ASM Components: Group Services 1-56
ASM Components: ASM Disk Group 1-57
ASM Disk Group: Failure Groups 1-58
ASM Components: ASM Disks 1-59
ASM Components: ASM Files 1-60
ASM Files: Extents and Striping 1-61
ASM Files: Mirroring 1-62
ASM Components: ASM Clients 1-63
ASM Components: ASM Utilities 1-64
ASM Scalability 1-65
Summary 1-66

2 Grid Infrastructure Installation

Objectives 2-2
Module 1: Preinstallation Planning 2-3
Shared Storage Planning for Grid Infrastructure 2-4
Sizing Shared Storage 2-5
Storing the OCR in ASM 2-6
Oracle Clusterware Repository (OCR) 2-7
Managing Voting Disks in ASM 2-9
CSS Voting Disk Function 2-10
Oracle Clusterware Main Log Files 2-11
Installing ASMLib 2-12
Preparing ASMLib 2-13
Quiz 2-14
Module 2: Grid Infrastructure Preinstallation Tasks 2-15
Oracle Grid Infrastructure 11g Installation 2-16
Checking System Requirements 2-17
Checking Network Requirements 2-18
Software Requirements (Kernel) 2-20
Software Requirements: Packages 2-21
Oracle Validated Configuration RPM 2-22
Creating Groups and Users 2-24
Creating Groups and Users: Example 2-25
Shell Settings for the Grid Infrastructure User 2-26


Quiz 2-28
Module 3: Grid Infrastructure Installation 2-29
Installing Grid Infrastructure 2-30
Grid Plug and Play Support 2-31
Cluster Node Information 2-32
Specify Network Interface Usage 2-33
Storage Option Information 2-34
Failure Isolation Support with IPMI 2-35
Privileged Operating System Groups 2-36
Installation and Inventory Locations 2-37
Prerequisite Checks 2-38
Finishing the Installation 2-39
Verifying the Grid Infrastructure Installation 2-40
Module 4: Configuring ASM Disk Groups and ACFS 2-41
Configuring ASM with ASMCA 2-42
Creating an ASM Disk Group with ASMCA 2-43
Creating an ASM Disk Group: Advanced Options 2-44
Creating a Disk Group with Enterprise Manager 2-45
Creating an ASM Disk Group with Command-Line Tools 2-47
Configuring an ASM Volume: ASMCA 2-48
Configuring an ASM Volume: EM 2-49
Configuring an ASM Volume: ASMCMD and SQL 2-50
Configuring the ASM Cluster File System 2-51
Configuring ACFS with EM 2-52
Configuring ACFS with Command Line 2-53
Mounting ACFS with ASMCA 2-54
Mounting an ACFS with EM 2-55
Mounting ACFS with Command-Line Tools 2-56
Summary 2-57
Practice 2 Overview 2-58
Quiz 2-59

3 Administering Oracle Clusterware
Objectives 3-2
Managing Oracle Clusterware 3-3
Managing Clusterware with Enterprise Manager 3-4
Controlling Oracle Clusterware 3-5
Verifying the Status of Oracle Clusterware 3-6
Determining the Location of Oracle Clusterware Configuration Files 3-7
Checking the Integrity of Oracle Clusterware Configuration Files 3-8
Backing Up and Recovering the Voting Disk 3-9
Adding, Deleting, or Migrating Voting Disks 3-10
Locating the OCR Automatic Backups 3-11


Changing the Automatic OCR Backup Location 3-12
Adding, Replacing, and Repairing OCR Locations 3-13
Removing an Oracle Cluster Registry Location 3-14
Migrating OCR Locations to ASM 3-15
Migrating OCR from ASM to Other Shared Storage 3-16
Performing Manual OCR Backups 3-17
Recovering the OCR by Using Physical Backups 3-18
Recovering the OCR by Using Logical Backups 3-19
Oracle Local Registry 3-20
Determining the Current Network Settings 3-22
Changing the Public VIP Addresses 3-23
Changing the Interconnect Adapter 3-25
Managing SCAN VIP and SCAN Listener Resources 3-27
Quiz 3-30
Summary 3-32
Practice 3 Overview 3-33

4 Managing Oracle Clusterware
Module 4-1: Adding and Deleting Oracle Clusterware Homes 4-2
Objectives 4-3
Adding Oracle Clusterware Homes 4-4
Prerequisite Steps for Running addNode.sh 4-5
Adding a Node with addNode.sh 4-7
Completing OUI Silent Node Addition 4-8
Removing a Node from the Cluster 4-9
Deleting a Node from the Cluster 4-10
Deleting a Node from a Cluster (GNS in Use) 4-13
Deleting a Node from the Cluster 4-14
Quiz 4-15
Summary 4-16
Module 4-2: Patching Oracle Clusterware 4-17
Objectives 4-18
Out-of-Place Oracle Clusterware Upgrade 4-19
Oracle Clusterware Upgrade 4-20
Types of Patches 4-21
Patch Properties 4-22
Configuring the Software Library 4-23
Setting Up Patching 4-24
Starting the Provisioning Daemon 4-25
Obtaining Oracle Clusterware Patches 4-26
Uploading Patches 4-28
Deployment Procedure Manager 4-30
Reduced Down-Time Patching for Cluster Environments 4-31


Rolling Patches 4-32
Checking Software Versions 4-33
Installing a Rolling Patchset with OUI 4-34
Patchset OUI 4-35
Installing a Rolling Patchset with OUI 4-36
OPatch: Overview 4-37
OPatch: General Usage 4-39
Before Patching with OPatch 4-40
Installing a Rolling Patch with OPatch 4-41
Installing a Patch with Minimum Down Time with OPatch 4-43
Quiz 4-44
Summary 4-46

5 Making Applications Highly Available with Oracle Clusterware
Objectives 5-2
Oracle Clusterware High Availability (HA) 5-3
Oracle Clusterware HA Components 5-4
Resource Management Options 5-5
Server Pools 5-6
Server Pool Attributes 5-7
GENERIC and FREE Server Pools 5-9
Assignment of Servers to Server Pools 5-11
Server Attributes and States 5-12
Creating Server Pools with srvctl and crsctl 5-14
Managing Server Pools with srvctl and crsctl 5-15
Adding Server Pools with Enterprise Manager 5-16
Managing Server Pools with Enterprise Manager 5-17
Clusterware Resource Modeling 5-18
Resource Types 5-20
Adding a Resource Type 5-21
Resource Type Parameters 5-23
Resource Type Advanced Settings 5-24
Defining Resource Dependencies 5-25
Creating an Application VIP by Using crsctl 5-27
Creating an Application VIP by Using EM 5-29
Managing Clusterware Resources with EM 5-30
Adding Resources with EM 5-31
Adding Resources by Using crsctl 5-36
Managing Resources with EM 5-37
Managing Resources with crsctl 5-40
HA Events: ONS and FAN 5-42
Managing Oracle Notification Server with srvctl 5-43
Quiz 5-44


Summary 5-46
Practice 5 Overview 5-47

6 Troubleshooting Oracle Clusterware
Objectives 6-2
Golden Rule in Debugging Oracle Clusterware 6-3
Oracle Clusterware Main Log Files 6-5
Diagnostics Collection Script 6-6
Cluster Verify: Overview 6-7
Cluster Verify Components 6-8
Cluster Verify Locations 6-9
Cluster Verify Configuration File 6-10
Cluster Verify Output: Example 6-12
Enabling Resource Debugging 6-13
Dynamic Debugging 6-14
Enabling Tracing for Java-Based Tools 6-15
Preserving Log Files Before Wrapping 6-16
Processes That Can Reboot Nodes 6-17
Determining Which Process Caused Reboot 6-18
Using diagwait for Eviction Troubleshooting 6-19
Linux hangcheck-timer Kernel Module 6-20
Configuring the hangcheck-timer Kernel Module 6-21
Avoiding False Reboots 6-22
Using ocrdump to View Logical Contents of the OCR 6-23
Checking the Integrity of the OCR 6-24
OCR-Related Tools for Debugging 6-25
Browsing My Oracle Support Knowledge Articles 6-26
Summary 6-27
Quiz 6-28
Practice 6 Overview 6-30

7 Administering ASM Instances
Objectives 7-2
ASM Initialization Parameters 7-3
ASM_DISKGROUPS 7-4
Disk Groups Mounted at Startup 7-5
ASM_DISKSTRING 7-6
ASM_POWER_LIMIT 7-8
INSTANCE_TYPE 7-9
CLUSTER_DATABASE 7-10
MEMORY_TARGET 7-11
Adjusting ASM Instance Parameters in SPFILEs 7-12
Starting and Stopping ASM Instances by Using srvctl 7-13


Starting and Stopping ASM Instances by Using SQL*Plus 7-14
Starting and Stopping ASM Instances by Using ASMCA and ASMCMD 7-16
Starting and Stopping ASM Instances Containing Cluster Files 7-17
Starting and Stopping the ASM Listener 7-18
ASM Dynamic Performance Views 7-19
ASM Dynamic Performance Views Diagram 7-20
Quiz 7-22
Summary 7-24
Practice 7 Overview: Administering ASM Instances 7-25

8 Administering ASM Disk Groups
Objectives 8-2
Disk Group Overview 8-3
Creating a New Disk Group 8-4
Creating a New Disk Group with ASMCMD 8-6
Disk Group Attributes 8-7
V$ASM_ATTRIBUTE 8-9
Compatibility Attributes 8-10
Features Enabled by Disk Group Compatibility Attributes 8-11
Support for 4KB Sector Disk Drives 8-12
Supporting 4 KB Sector Disks 8-13
ASM Support for 4KB Sector Disks 8-14
Using the SECTOR_SIZE Clause 8-15
Viewing ASM Disk Groups 8-17
Viewing ASM Disk Information 8-19
Extending an Existing Disk Group 8-21
Dropping Disks from an Existing Disk Group 8-22
REBALANCE POWER 0 8-23
V$ASM_OPERATION 8-24
Adding and Dropping in the Same Command 8-25
Adding and Dropping Failure Groups 8-26
Undropping Disks in Disk Groups 8-27
Mounting and Dismounting Disk Groups 8-28
Viewing Connected Clients 8-29
Dropping Disk Groups 8-30
Checking the Consistency of Disk Group Metadata 8-31
ASM Fast Mirror Resync 8-32
Preferred Read Failure Groups 8-33
Preferred Read Failure Groups Best Practice 8-34


Viewing ASM Disk Statistics 8-35
Performance, Scalability, and Manageability Considerations for Disk Groups 8-37
Quiz 8-38
Summary 8-40
Practice 8 Overview: Administering ASM Disk Groups 8-41

9 Administering ASM Files, Directories, and Templates

Objectives 9-2
ASM Clients 9-3
Interaction Between Database Instances and ASM 9-5
Accessing ASM Files by Using RMAN 9-6
Accessing ASM Files by Using XML DB 9-8
Accessing ASM Files by Using DBMS_FILE_TRANSFER 9-9
Accessing ASM Files by Using ASMCMD 9-10
Fully Qualified ASM File Names 9-11
Other ASM File Names 9-13
Valid Contexts for the ASM File Name Forms 9-15
Single File Creation Examples 9-16
Multiple File Creation Example 9-17
View ASM Aliases, Files, and Directories 9-18
Viewing ASM Files 9-20
ASM Directories 9-21
Managing ASM Directories 9-22
Managing Alias File Names 9-23
ASM Intelligent Data Placement 9-24
Guidelines for Intelligent Data Placement 9-25
Assigning Files to Disk Regions 9-26
Assigning Files to Disk Regions with Enterprise Manager 9-27
Monitoring Intelligent Data Placement 9-28
Disk Group Templates 9-29
Viewing Templates 9-31
Managing Disk Group Templates 9-32
Managing Disk Group Templates with ASMCMD 9-33
Using Disk Group Templates 9-34
ASM Access Control Lists 9-35
ASM ACL Prerequisites 9-36
Managing ASM ACL with SQL Commands 9-37
Managing ASM ACL with ASMCMD Commands 9-38
Managing ASM ACL with Enterprise Manager 9-39
ASM ACL Guidelines 9-41
Quiz 9-42
Summary 9-44


10 Administering ASM Cluster File Systems
Objectives 10-2
ASM Files and Volumes 10-3
ACFS and ADVM Architecture Overview 10-4
ADVM Processes 10-6
ADVM Restrictions 10-7
ASM Cluster File System 10-8
ADVM Space Allocation 10-9
Striping Inside the Volume 10-10
Volume Striping: Example 10-11
Creating an ACFS Volume 10-13
Create an ASM Dynamic Volume with Enterprise Manager 10-14
Managing ADVM Dynamic Volumes 10-17
Create an ASM Cluster File System with Enterprise Manager 10-18
Manage Dynamic Volumes with SQL*Plus 10-19
Registering an ACFS Volume 10-20
Creating an ACFS Volume with ASMCA 10-21
Creating the ACFS File System with ASMCA 10-22
Mounting the ACFS File System with ASMCA 10-23
Managing ACFS with EM 10-24
Extending ASMCMD for Dynamic Volumes 10-25
Linux-UNIX File System APIs 10-26
Linux-UNIX Extensions 10-27
ACFS Platform-Independent Commands 10-28
ACFS Snapshots 10-29
Managing ACFS Snapshots 10-30
Managing ACFS Snapshots with Enterprise Manager 10-32
Creating ACFS Snapshots 10-33
Managing Snapshots 10-34
Viewing Snapshots 10-35
ACFS Backups 10-36
ACFS Performance 10-37
Using ACFS Volumes After Reboot 10-38
ACFS Views 10-39
Quiz 10-40
Summary 10-41
Practice 10 Overview: Managing ACFS 10-42

Appendix A: Practices and Solutions

Appendix B: Cloning Oracle Clusterware

Objectives B-2
What Is Cloning? B-3


Benefits of Cloning Oracle Clusterware B-4
Creating a Cluster by Cloning Oracle Clusterware B-5
Preparing the Oracle Clusterware Home for Cloning B-6
Cloning to Create a New Oracle Clusterware Environment B-9
The clone.pl Script B-11
The clone.pl Environment Variables B-12
The clone.pl Command Options B-13
Cloning to Create a New Oracle Clusterware Environment B-14
Log Files Generated During Cloning B-18
Cloning to Extend Oracle Clusterware to More Nodes B-20
Quiz B-25
Summary B-27


Administering ASM Disk Groups


Objectives

After completing this lesson, you should be able to:

• Create and delete Automatic Storage Management (ASM) disk groups

• Set the attributes of an existing ASM disk group

• Perform ongoing maintenance tasks on ASM disk groups

• Explain key performance and scalability considerations for ASM disk groups

Oracle Grid Infrastructure 11g: Manage Clusterware and ASM 8 - 2


Disk Group Overview

[Slide diagram: six disks (Disk 1 through Disk 6) forming a single disk group, with four files (File 1 through File 4) distributed across the disks. The diagram is explained in the notes below.]

Disk Group Overview

A disk group is a grouping of one or more disks that ASM manages as a unit of storage. Each disk group contains the metadata required for the management of space in that disk group. An ASM disk group is conceptually similar to a logical volume (LV) in a typical Storage Area Network (SAN).

Files are allocated from the space inside a disk group. The content of files that are stored in a disk group are evenly distributed, or striped, across the disks in the disk group to eliminate hot spots and to provide uniform performance across the disks.

ASM striping balances disk usage such that all the disks in a disk group will be used evenly in percentage terms. When a disk group is made up of uniform-sized disks, the amount of data on each disk is approximately the same. When a disk group contains different-sized disks, the larger disks will contain more data than the smaller disks. A comparatively large disk may present an I/O bottleneck within a disk group, if the bandwidth to all disks in the disk group is the same.

By default, ASM automatically rebalances storage whenever the storage configuration of a disk group changes, such as when a disk is added or dropped.

Each ASM file is completely contained within a single disk group. However, a disk group can contain files belonging to several databases and a single database can use different files from multiple disk groups. For most installations, you need only a small number of disk groups.
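The disk groups visible to an instance, along with their state and space usage, can be inspected from the standard V$ASM_DISKGROUP dynamic performance view. A minimal sketch (column list abbreviated):

```sql
-- Run in the ASM instance: one row per discovered disk group,
-- showing mount state, redundancy type, and total/free space.
SELECT name, state, type, total_mb, free_mb
FROM   v$asm_diskgroup;
```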

Oracle Grid Infrastructure 11g: Manage Clusterware and ASM 8 - 3


Creating a New Disk Group

The CREATE DISKGROUP command creates new ASM disk groups.

• Example:

CREATE DISKGROUP diskgroup_name
  [ { HIGH | NORMAL | EXTERNAL } REDUNDANCY ]
  { [ FAILGROUP failgroup_name ]
    DISK qualified_disk_clause [, qualified_disk_clause]...
  } ...
  [ ATTRIBUTE { 'attribute_name' = 'attribute_value' }... ] ;

qualified_disk_clause ::=
  search_string
  [ NAME disk_name ]
  [ SIZE size_clause ]
  [ FORCE | NOFORCE ]

CREATE DISKGROUP FRA NORMAL REDUNDANCY
  DISK 'ORCL:SDD11' NAME 'FRA_DISK1' SIZE 977 M,
       'ORCL:SDD12' NAME 'FRA_DISK2' SIZE 977 M;

Creating a New Disk Group

The CREATE DISKGROUP statement creates a disk group, assigns one or more disks to the disk group, and mounts the disk group for the first time. If you want ASM to automatically mount the disk group when an ASM instance starts, you must add the disk group name to the value of the ASM_DISKGROUPS initialization parameter in your parameter file (PFILE). If you use a server parameter file (SPFILE), the disk group is added to the initialization parameter automatically.

The CREATE DISKGROUP command can also be run using database management tools such as the ASM Configuration Assistant (ASMCA), Oracle Enterprise Manager, and the ASM Command-Line utility (ASMCMD).

The CREATE DISKGROUP statement has the following clauses:

• REDUNDANCY clause

The REDUNDANCY clause lets you specify the redundancy level of the disk group.

NORMAL REDUNDANCY requires the existence of at least two failure groups. By default, NORMAL REDUNDANCY provides a two-way mirror of all ASM files except for control files, which are mirrored three ways. NORMAL REDUNDANCY disk groups can tolerate the loss of one failure group.

HIGH REDUNDANCY requires the existence of at least three failure groups. ASM fixes mirroring at three-way, with each file getting two mirrored copies. HIGH REDUNDANCY disk groups can tolerate the loss of two failure groups.

Oracle Grid Infrastructure 11g: Manage Clusterware and ASM 8 - 4


Creating a New Disk Group (continued)

EXTERNAL REDUNDANCY indicates that ASM does not provide any redundancy for the disk group. The disks within the disk group must provide redundancy (for example, using a storage array), or you must be willing to tolerate the loss of the disk group if a disk fails. You cannot specify the FAILGROUP clause if you specify EXTERNAL REDUNDANCY.

• FAILGROUP clause

Use this clause to specify a name for one or more failure groups. If you omit this clause, and you have specified NORMAL or HIGH REDUNDANCY, then ASM automatically adds each disk in the disk group to its own failure group. The implicit name of the failure group is the same as the name in the NAME clause.

• DISK clause

Use this clause to specify one or more disks for each failure group.

For each disk that you are adding to the disk group, specify the operating system–dependent search string that ASM will use to find the disk. The search_string must point to a subset of the disks returned by discovery using the strings in the ASM_DISKSTRING initialization parameter. If search_string does not point to any disks to which the ASM user has read/write access, then ASM returns an error. If it points to one or more disks that have already been assigned to a different disk group, then Oracle Database returns an error unless you also specify FORCE. For each valid candidate disk, ASM formats the disk header to indicate that it is a member of the new disk group.

The optional NAME subclause is valid only if the search_string points to a single disk. It specifies an operating system–independent name for the disk. The name can be up to 30 alphanumeric characters. The first character must be alphabetic. If you omit this clause and you assigned a label to a disk through ASMLib, then that label is used as the disk name. If you are not using ASMLib, then ASM creates a default name of the form diskgroup_name_nnnn, where nnnn is the disk number. You can use this name to refer to the disk in subsequent ASM operations.

Use the optional SIZE subclause to specify the size of the disk. If you specify a size greater than the capacity of the disk, then ASM returns an error. If you specify a size less than the capacity of the disk, you limit the disk space ASM will use. If you omit this clause, ASM attempts to determine the size of the disk programmatically.

You can specify FORCE or NOFORCE for each disk.

Specify FORCE if you want ASM to add the disk to the disk group even if the disk is already a member of a different disk group. Exercise caution because using FORCE in this way may destroy existing disk groups. For this clause to be valid, the disk must already be a member of a disk group and the disk cannot be part of a mounted disk group.

Specify NOFORCE if you want ASM to return an error if the disk is already a member of a different disk group. NOFORCE is the default.

• ATTRIBUTE clause

Use this clause to set attribute values for the disk group. ASM disk group attributes are described later in this lesson.
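Putting the clauses together, a creation statement that names two failure groups and sets attributes at creation time might look like the following sketch. The disk search strings, disk names, and failure group names are illustrative, not taken from this course's lab environment:

```sql
-- Hypothetical disk paths; search strings must match disks discovered
-- through the ASM_DISKSTRING initialization parameter.
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP controller1 DISK
    '/devices/diska1' NAME diska1,
    '/devices/diska2' NAME diska2
  FAILGROUP controller2 DISK
    '/devices/diskb1' NAME diskb1,
    '/devices/diskb2' NAME diskb2
  ATTRIBUTE 'au_size' = '4M',        -- settable only at creation time
            'compatible.asm' = '11.2';
```

Because NORMAL redundancy is specified with two named failure groups, ASM mirrors each extent across the two controllers, so the disk group survives the loss of either failure group.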

Oracle Grid Infrastructure 11g: Manage Clusterware and ASM 8 - 5


Creating a New Disk Group with ASMCMD

ASMCMD allows an XML configuration file to create and change the disk group.

• Sample XML used with the mkdg command:

<dg name="DATA" redundancy="normal">
  <fg name="fg1">
    <dsk string="/dev/sda1" />
    <dsk string="/dev/sdb1" />
  </fg>
  <fg name="fg2">
    <dsk string="/dev/sdc1" />
    <dsk string="/dev/sdd1" />
  </fg>
  <a name="compatible.asm" value="11.2"/>
  <a name="compatible.rdbms" value="11.2"/>
</dg>

Creating a New Disk Group with ASMCMD

ASMCMD now supports using an XML configuration file to either create a disk group or change a disk group configuration.

The XML file for the mkdg command specifies the name of the disk group, redundancy, attributes, and paths of the disks that form the disk group. Redundancy is an optional parameter; the default is normal redundancy. For some types of redundancy, disks are required to be gathered into failure groups. In the case that failure groups are not specified, every disk will be in its own failure group.

It is possible to set the disk group attribute values during disk group creation. Some attributes, such as AU_SIZE and SECTOR_SIZE, can be set only during disk group creation.

The following is an example of an inline XML configuration for chdg. This XML alters the disk group named DATA. The FG1 failure group is dropped and the DATA_0001 disk is also dropped. The /dev/disk5 disk is added to the FG2 failure group. The rebalance power level is set to 3.

ASMCMD> chdg '<chdg name="DATA" power="3">
  <drop>
    <fg name="FG1"></fg>
    <dsk name="DATA_0001"/>
  </drop>
  <add>
    <fg name="FG2">
      <dsk string="/dev/disk5"/>
    </fg>
  </add>
</chdg>'
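For the create case, the XML document shown in the slide would be saved to a file and passed to mkdg. A sketch, where the file name dg_data.xml is purely illustrative:

```
ASMCMD> mkdg dg_data.xml
```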

Oracle Grid Infrastructure 11g: Manage Clusterware and ASM 8 - 6


Disk Group Attributes

The following attributes can be set for a disk group:

• AU_SIZE — Specifies the allocation unit (AU) size. This attribute can be set only during disk group creation. Valid values: 1M, 2M, 4M, 8M, 16M, 32M, 64M (M = megabyte). Default: 1M.

• DISK_REPAIR_TIME — Specifies the amount of time ASM will wait from when a disk goes offline until ASM drops it and rebalances the disk group. Valid values: 0 to 136 years, specified in minutes (M) or hours (H). Default: 3.6H.

• COMPATIBLE.RDBMS — Specifies the minimum software version required for a database instance to use files in this disk group. Valid values: at least the first two digits of a valid Oracle Database release number. Default: 10.1.

• COMPATIBLE.ASM — Specifies the minimum software version required for an ASM instance to mount this disk group. Valid values: at least the first two digits of a valid Oracle Database release number. Default: 10.1.

• COMPATIBLE.ADVM — Allows the creation of ASM volumes. Valid values: >= 11.2. Default: none.

Disk Group Attributes

First introduced in Oracle Database 11g, disk group attributes govern various aspects of how a disk group functions.

AU_SIZE specifies the allocation unit size, which is the fundamental unit of space within a disk group. It is also the stripe size for coarse-grained striping. This attribute can be set only during disk group creation.

DISK_REPAIR_TIME specifies the amount of time ASM will wait from when a disk goes offline until ASM drops it and rebalances the disk group. This attribute enables the fast mirror resync feature, whereby ASM keeps track of pending changes on an offline disk during an outage and the extents are automatically resynced when the disk is brought back online.

There are three compatibility attributes. COMPATIBLE.RDBMS specifies the minimum software version required for a database instance to use files in this disk group, whereas COMPATIBLE.ASM specifies the minimum software version required for an ASM instance to mount this disk group. The compatibility attributes are discussed in greater detail later in this lesson.

The value of the disk group COMPATIBLE.ADVM attribute determines whether the disk group can contain Oracle ASM volumes. COMPATIBLE.ASM must be set to 11.2 first, and the ASM Dynamic Volume Manager (ADVM) drivers must be loaded.
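For attributes that are not creation-only, values can be changed on an existing disk group with ALTER DISKGROUP ... SET ATTRIBUTE. A sketch, using a hypothetical disk group named DATA:

```sql
-- Advance compatibility so ADVM volumes can be created in this disk group.
-- Compatibility attributes can be advanced but cannot be lowered afterward.
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm'  = '11.2';
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.advm' = '11.2';
```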

Oracle Grid Infrastructure 11g: Manage Clusterware and ASM 8 - 7


Disk Group Attributes

The following attributes can be set for a disk group:

Attribute: ACCESS_CONTROL.ENABLED
Description: Allows the creation of access control lists (ACLs)
Valid values: TRUE, FALSE
Default value: FALSE

Attribute: ACCESS_CONTROL.UMASK
Description: Sets the default permissions for files created in the disk group
Valid values: 0, 2, or 6 in a triple for owner, group, and others
Default value: None

Attribute: SECTOR_SIZE
Description: Optional parameter used with CREATE DISKGROUP to specify the disk sector size
Valid values: 512, 4096, 4K
Default value: 512

Attribute: CELL.SMART_SCAN_CAPABLE
Description: Enables smart scan predicate offload processing if all disks in the disk group are Exadata Grid Disks
Valid values: TRUE, FALSE
Default value: FALSE

Disk Group Attributes (continued)

ACCESS_CONTROL.ENABLED and ACCESS_CONTROL.UMASK are discussed in more detail in the lesson titled “Administering ASM Files, Directories, and Templates.” Both attributes can be set after the disk group is created.
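A minimal sketch of setting both attributes on an existing disk group (the disk group name and umask value are illustrative; ASM File Access Control also requires COMPATIBLE.ASM and COMPATIBLE.RDBMS of 11.2 or higher):

```sql
ALTER DISKGROUP data SET ATTRIBUTE 'access_control.enabled' = 'TRUE';

-- In the umask triple, 0 permits read-write, 2 permits read-only,
-- and 6 permits no access: 026 = owner read-write, group read-only,
-- others no access
ALTER DISKGROUP data SET ATTRIBUTE 'access_control.umask' = '026';
```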

SECTOR_SIZE is used to support 4KB sector disk drives.

Starting with Oracle Database 11g version 11.1.0.7, the CELL.SMART_SCAN_CAPABLE attribute enables smart scan predicate offload processing if all disks in the disk group are Exadata Grid Disks.


V$ASM_ATTRIBUTE

• In an ASM instance, V$ASM_ATTRIBUTE displays one row for each attribute defined.

• In addition to attributes specified by CREATE DISKGROUP and ALTER DISKGROUP statements, the view may show other attributes that are created automatically.

• Disk group attributes are listed in V$ASM_ATTRIBUTE only if the disk group attribute COMPATIBLE.ASM is set to 11.1 or higher.

• In a database instance, V$ASM_ATTRIBUTE displays no rows.

• This is the information displayed by asmcmd lsattr -lm -G <diskGroup>.

V$ASM_ATTRIBUTE

V$ASM_ATTRIBUTE lists the ASM disk group attributes if the disk group attribute COMPATIBLE.ASM is set to 11.1 or higher. If COMPATIBLE.ASM is not set to 11.1 or higher, then V$ASM_ATTRIBUTE will have no rows corresponding to the disk group. Following is a typical example of a query that retrieves attributes for a particular disk group:

SQL> SELECT G.NAME DISK_GROUP, A.NAME ATTRIBUTE, A.VALUE
  2  FROM V$ASM_ATTRIBUTE A, V$ASM_DISKGROUP G
  3  WHERE A.GROUP_NUMBER = G.GROUP_NUMBER
  4  AND G.NAME = 'DATA';

The ASMCMD lsattr -lm -G command shows the same information for a disk group.

$ asmcmd lsattr -lm -G DATA
Group_Name  Name                     Value       RO  Sys
DATA        access_control.enabled   FALSE       N   Y
DATA        access_control.umask     066         N   Y
DATA        au_size                  1048576     Y   Y
DATA        cell.smart_scan_capable  FALSE       N   N
DATA        compatible.asm           11.2.0.0.0  N   Y
DATA        compatible.rdbms         10.1.0.0.0  N   Y
DATA        disk_repair_time         3.6h        N   Y
DATA        sector_size              512         Y   Y


Compatibility Attributes

• Disk group compatibility attributes can be set using CREATE DISKGROUP or ALTER DISKGROUP.

• Values can only be advanced, and the setting is irreversible.

• COMPATIBLE.RDBMS must be less than or equal to COMPATIBLE.ASM.

• Valid attribute combinations:

COMPATIBLE.ASM  COMPATIBLE.RDBMS  COMPATIBLE.ADVM  ASM Instance Version  COMPATIBLE Setting for RDBMS Instance
10.1            10.1              n/a              >=10.1                >=10.1
11.1            10.1              n/a              >=11.1                >=10.1
11.2            11.1              11.2             >=11.2                >=11.1
11.2            11.2              11.2             >=11.2                >=11.2

Compatibility Attributes

You can set disk group compatibility with the CREATE DISKGROUP or ALTER DISKGROUP SQL statement. For example:

ALTER DISKGROUP FRA SET ATTRIBUTE 'compatible.asm' = '11.2';

When setting the values for the COMPATIBLE.RDBMS and COMPATIBLE.ASM attributes, specify at least the first two digits of a valid Oracle Database release number. For example, you can specify compatibility as 10.1 or 11.1. ASM assumes that any missing version number digits are zeros.

Note that although it is possible to alter these attributes, they can only be advanced (for example, from 10.1 to 11.1). Advancing the values for on-disk compatibility attributes is an irreversible operation. To revert to the previous value, you must create a new disk group with the old compatibility attributes and then restore the database files that were in the disk group.
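Before advancing an attribute, it can be worth checking the current settings. V$ASM_DISKGROUP exposes them directly in the COMPATIBILITY and DATABASE_COMPATIBILITY columns:

```sql
SELECT name, compatibility, database_compatibility
FROM   v$asm_diskgroup;
```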


Features Enabled by Disk Group Compatibility Attributes

Disk Group Features Enabled                        COMPATIBLE.ASM  COMPATIBLE.RDBMS  COMPATIBLE.ADVM
Different AU sizes (1, 2, 4, 8, 16, 32, or 64 MB)  >=11.1          >=11.1            n/a
Attributes displayed in the V$ASM_ATTRIBUTE view   >=11.1          >=11.1            n/a
Fast mirror resync (DISK_REPAIR_TIME attribute)    >=11.1          >=11.1            n/a
Exadata storage                                    >=11.1.0.7      >=11.1.0.7        n/a
Variable size extents                              >=11.1          >=11.1            n/a
Preferred read failure groups                      >=11.1          >=11.1            n/a
Intelligent Data Placement                         >=11.2          >=11.2            n/a
OCR and voting disks in a disk group               >=11.2          n/a               n/a
Sector size set to nondefault value                >=11.2          >=11.2            n/a
Oracle ASM SPFILE in a disk group                  >=11.2          n/a               n/a
Oracle ASM File Access Control                     >=11.2          >=11.2            n/a
Volumes in disk groups                             >=11.2          n/a               >=11.2

Features Enabled by Disk Group Compatibility Attributes

Setting the disk group compatibility attributes is essentially a balance between flexibility and functionality.

At one extreme, setting COMPATIBLE.RDBMS and COMPATIBLE.ASM to 10.1 allows the greatest flexibility regarding the ASM and database software versions that can use a disk group. However, such a setting blocks access to various Oracle Database 11g ASM features, such as large (>8 MB) allocation units, fast mirror resync, variable sized extents, and preferred read failure groups.

At the other extreme, setting COMPATIBLE.RDBMS and COMPATIBLE.ASM to 11.1 provides access to the Oracle Database 11g ASM features but requires that Oracle Database 11g software be installed and in use across your environment.


Support for 4KB Sector Disk Drives

Oracle supports 4KB sector disk drives.

• In emulation mode:
  – No changes are required, but there are performance penalties.

• In native mode, there is no performance penalty with:
  – Oracle database files
  – ASM files, at the disk group level

ACFS/ADVM does not support 4KB sector disk drives.

Support for 4KB Sector Disk Drives

The disk drive industry decided several years ago that to continue to develop larger capacity and more reliable hard disks, a change to the sector size was needed. The sector size has been 512 bytes and has not changed in 25 years. This sector size has been hard-coded into operating systems and file system code. The disk manufacturers have announced a plan to start changing over to the “big-sector” disks in 2011. In a few years, they expect to be producing only 4KB sector disk drives. Most operating systems can already query the physical disk device to determine the sector size setting. Because Oracle software—specifically, the Database Writer, Log Writer, and ASM processes—can write directly to the disk without going through the operating system, Oracle is now supporting 4KB sector disk drives for ASM and database files on a file system.

In emulation mode, the performance penalty arises when 512-byte sector writes are translated onto 4 KB sector disks. On a physical disk, sector reads and writes are atomic: the disk drive reads the 4 KB sector into disk cache memory, changes the 512-byte section, and writes back the entire 4 KB sector.

ASM disk group commands have changed to allow you to specify the sector size.


Supporting 4 KB Sector Disks

• Emulation mode:

LBA0 LBA1 LBA2 LBA3 LBA4 LBA5 LBA6 LBA7

4096-byte physical sector

• Native mode:

LBA0

4096-byte physical sector

Logical sector

Physical sector

Supporting 4 KB Sector Disks

4 KB sector disks have physical sectors (shown in gray) and logical sectors (shown in blue). There are two types of 4 KB sector disks: emulation mode and native mode.

• 4 KB sector disks in emulation mode have eight logical sectors per physical sector (as shown in the slide). They maintain a 512-byte interface to their 4 KB physical sectors—that is, the logical block address (LBA) references 512 bytes on disk.

• 4 KB sector disks in native mode have one logical sector per physical sector (as shown in the slide). So, there is only the 4 KB interface. In other words, the LBA references 4096 bytes on disk.

Emulation mode can hurt performance because the disk drive reads the 4 KB sector into disk cache memory, changes the 512-byte section, and then writes the entire 4 KB sector back to disk. For example, when redo is being written to disk in 512-byte chunks, each write requires a read of the 4 KB sector, an update of the 512-byte section, and then a write of the full sector. With native mode, the 4 KB sector is written without requiring a read and update.
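The read-modify-write penalty can be made concrete with a toy model (not Oracle code): it simply counts the physical-sector operations needed to service one aligned write of a given size.

```python
def io_ops_per_write(write_size, physical_sector):
    """Return (reads, writes) of physical sectors for one aligned write.

    A write smaller than the physical sector forces a read-modify-write:
    the drive reads the sector into cache, patches the small range, and
    writes the whole sector back.
    """
    sectors = max(1, -(-write_size // physical_sector))  # ceiling division
    if write_size % physical_sector == 0:
        return (0, sectors)    # full-sector write: no read needed
    return (sectors, sectors)  # partial write: read then rewrite each sector

# Emulation mode: a 512-byte redo write on a 4096-byte physical sector
print(io_ops_per_write(512, 4096))   # -> (1, 1): one read plus one write
# Native mode: a 4096-byte write exactly covers the physical sector
print(io_ops_per_write(4096, 4096))  # -> (0, 1): a single write
```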


ASM Support for 4KB Sector Disks

• ASM commands:– CREATE DISKGROUP…ATTRIBUTE SECTOR_SIZE

• ASMCA support for the new commands

• Enterprise Manager support

• All the disks in a disk group must have the same sector size.

• Restrictions on redo log files:
  – The redo log file BLOCKSIZE must match the disk group SECTOR_SIZE.

ASM Support for 4KB Sector Disks

ASM sector size can be managed from three different tools: the SQL command line, the ASM Command-Line utility (ASMCMD), and Enterprise Manager. Because all the disks in a disk group must have the same sector size, the sector size is set as an attribute of the disk group when the disk group is created. The ALTER DISKGROUP ADD DISK command requires that the candidate disk to be added has the same sector size as the existing disks in the group. (This is handled by checking the disk properties at the time the ADD DISK command is executed.)

Best Practice: For data files and temp files, avoid using data block size of less than 4K bytes with 4KB sector disk drives. The default data block size is 8K bytes.

For redo log files and archive log files, the block size specified for the log file must match the sector size of the disk group. The syntax for specifying a new 100 MB log file in the +DATA disk group with a redo block size of 4 KB is:

SQL> ALTER DATABASE ADD LOGFILE +DATA SIZE 100M BLOCKSIZE 4K;


Using the SECTOR_SIZE Clause

Creating a disk group (in ASM) with a 4 KB sector size:

CREATE DISKGROUP mydgroup1 NORMAL REDUNDANCY

FAILGROUP mycontroller1 DISK

'/devices/diska1',

'/devices/diska2',

'/devices/diska3',

'/devices/diska4'

FAILGROUP mycontroller2 DISK

'/devices/diskb1',

'/devices/diskb2',

'/devices/diskb3',

'/devices/diskb4'

ATTRIBUTE 'sector_size'='4096';

Using the SECTOR_SIZE Clause

You can manage ASM with three different tools: the SQL command line, the ASMCMD utility, and Enterprise Manager.

What you already know: Use the CREATE DISKGROUP clause to name a group of disks and specify that the Oracle database should manage the group for you. The Oracle database manages a disk group as a logical unit and evenly spreads each file across the disks to balance I/O. It also automatically distributes database files across all available disks in disk groups and rebalances storage automatically whenever the storage configuration changes.

What is new: Use the optional SECTOR_SIZE clause in the CREATE DISKGROUP command to explicitly specify the sector size value for the disk group. If you do not use this clause, ASM detects the hardware sector size and uses it. The SECTOR_SIZE disk group attribute can be set only during disk group creation. The values for the SECTOR_SIZE can be 512, 4096, or 4K. ASM provides support for 4 KB sector disk drives without negatively affecting performance.

You can determine the sector size value that has either been assumed or explicitly set for a successful disk group creation by querying the V$ASM_ATTRIBUTE view. You can also query the SECTOR_SIZE column in the V$ASM_DISKGROUP view. The asmcmd lsattr command also displays the sector size of a disk group.


Using the SECTOR_SIZE Clause (continued)

For details about how ASM verifies the sector size (and when this operation fails), see the Oracle Database Storage Administrator’s Guide 11g Release 2 (11.2). All disks in the disk group must have a sector size equal to the attribute value specified. When processing the CREATE DISKGROUP command, ASM queries the operating system for the sector size of every disk specified in this command before the disk is added to the disk group. It ensures that a disk group is made up of disks with an identical sector size. If a disk has a different sector size, the command fails. Similar checks are performed when mounting the disk group. If you omit this attribute in the CREATE DISKGROUP command, ASM proceeds with the create operation as long as all specified disks have an identical sector size and this value is used as the disk group sector size.

When you add new disks to an existing disk group with the ALTER DISKGROUP ADD DISK command, the new disks must also have the same sector size value. ASM verifies this. The ALTER DISKGROUP command fails if any of the disks have a different sector size.

By setting a value for this attribute, you can establish the sector size for the disk group, rather than let ASM assume a value for all the disks in the disk group. You can query the SECTOR_SIZE column of the V$ASM_ATTRIBUTE view to determine the sector size before attempting to add a new disk to the disk group.
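As noted earlier in this lesson, the sector size is also exposed in the SECTOR_SIZE column of the V$ASM_DISKGROUP view, so a quick check before adding a disk might look like this (the disk group name is illustrative):

```sql
SELECT name, sector_size
FROM   v$asm_diskgroup
WHERE  name = 'DATA';
```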

Note: The Oracle Automatic Storage Management Cluster File System (ACFS) does not support 4 KB sector disks.


Viewing ASM Disk Groups

• In an ASM instance, V$ASM_DISKGROUP displays one row for every ASM disk group discovered by the ASM instance.

• In a database instance, V$ASM_DISKGROUP displays one row for every ASM disk group available to the database instance.

• Other ASM dynamic performance views relate to V$ASM_DISKGROUP through the GROUP_NUMBER column.

• The ASMCMD lsdg command provides the same information.

• V$ASM_DISKGROUP_STAT displays performance statistics in the same way that V$ASM_DISKGROUP does, but without performing discovery of new disks.

Viewing ASM Disk Groups

V$ASM_DISKGROUP contains configuration and status information for ASM disk groups. It is commonly queried by itself to display information about disk groups but is also commonly joined with other ASM dynamic performance views as shown in many of the examples in this lesson.

The following example shows a typical V$ASM_DISKGROUP query that summarizes the current space utilization for the ASM disk groups:

SQL> SELECT NAME, TYPE, TOTAL_MB, FREE_MB,
  2  REQUIRED_MIRROR_FREE_MB RMFM, USABLE_FILE_MB
  3  FROM V$ASM_DISKGROUP;

NAME TYPE   TOTAL_MB FREE_MB RMFM USABLE_FILE_MB
---- ------ -------- ------- ---- --------------
DATA NORMAL     9998    4248 1449           1399
ACFS EXTERN     9996    3706    0           3706
FRA  EXTERN     7497    7356    0           7356


Viewing ASM Disk Groups (continued)

The asmcmd lsdg command is a preformatted view of the information from V$ASM_DISKGROUP.

$ asmcmd lsdg
State    Type    Rebal  Sector  Block  AU       Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N      512     4096   1048576  9996      3706     0                3706            0              N             ACFS/
MOUNTED  NORMAL  N      512     4096   1048576  9998      4248     1449             1399            0              N             DATA/
MOUNTED  EXTERN  N      512     4096   1048576  7497      7356     0                7356            0              N             FRA/

Note the following when considering the space utilization statistics provided by V$ASM_DISKGROUP or asmcmd lsdg:

• The TOTAL_MB column is the total capacity of the disk group in megabytes, not taking mirroring into account. In the preceding example, DATA is a NORMAL redundancy disk group, so by default most files that it contains are two-way mirrored. In practical terms, the total usable size of that disk group is approximately half of the number reported.

• The FREE_MB column is the total unused capacity of the disk group in megabytes, again not taking mirroring into account. For a NORMAL redundancy disk group such as DATA, the free space available for new files is approximately half of the number reported.

• The REQUIRED_MIRROR_FREE_MB column shows the amount of space that must remain available in a disk group to restore redundancy after the worst failure that the disk group can tolerate. For the NORMAL redundancy DATA disk group, the worst tolerable failure is the loss of one failure group, because losing more would mean losing data that was spread across the lost failure groups. In essence, as long as DATA retains the reported amount of free space, the loss of any one failure group can be tolerated without compromising mirroring. For the external redundancy ACFS and FRA disk groups, zero is reported because ASM performs no mirroring for them.

• USABLE_FILE_MB is computed by subtracting REQUIRED_MIRROR_FREE_MB from the total free space in the disk group and then adjusting for mirroring. It shows the amount of free space that can be safely used, taking mirroring into account, while still being able to restore redundancy after a disk failure. For DATA, that is (4248 - 1449) / 2 = 1399 MB. A negative reported value indicates that mirroring would be compromised by a failure even though free space remains in the disk group.
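The USABLE_FILE_MB arithmetic can be sanity-checked with a small helper. This is an approximation of the view's calculation for illustration, not Oracle's exact internal formula:

```python
def usable_file_mb(free_mb, required_mirror_free_mb, redundancy_factor):
    """Approximate USABLE_FILE_MB: reserve the space needed to restore
    redundancy, then divide by the mirroring factor (2 for two-way
    NORMAL redundancy, 1 for EXTERNAL). A negative result means a
    failure would compromise mirroring despite free space remaining."""
    return (free_mb - required_mirror_free_mb) // redundancy_factor

# Figures from the V$ASM_DISKGROUP query output above
print(usable_file_mb(4248, 1449, 2))  # DATA, NORMAL redundancy -> 1399
print(usable_file_mb(7356, 0, 1))     # FRA, EXTERNAL redundancy -> 7356
```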


Viewing ASM Disk Information

• In an ASM instance, V$ASM_DISK displays one row for every disk discovered by the ASM instance, including disks that are not part of any disk group.

• In a database instance, V$ASM_DISK displays rows only for disks in disk groups in use by the database instance.

• V$ASM_DISK_STAT has the same columns but does not discover new disks (may not have the latest information).

• ASMCMD lsdsk has several options and two modes.

– Modes:
  — Connected mode shows information from the dynamic performance views.
  — Nonconnected mode shows information from the disk headers.

– Options: -t, -k, and -p provide different preformatted views.

Viewing ASM Disk Information

Following is a typical example of a query that retrieves information about disks in a particular disk group:

SQL> SELECT G.NAME DISK_GROUP, D.NAME, D.STATE, D.TOTAL_MB,
  2  D.FREE_MB
  3  FROM V$ASM_DISK D, V$ASM_DISKGROUP G
  4  WHERE D.GROUP_NUMBER = G.GROUP_NUMBER
  5  AND G.NAME = 'DATA';

DISK_GROUP NAME   STATE    TOTAL_MB FREE_MB
---------- ------ -------- -------- -------
DATA       SDE5   NORMAL        977     136
DATA       SDE6   NORMAL        977     134
DATA       SDE7   NORMAL        977     135
DATA       SDE10  NORMAL        977     150
DATA       SDE8   NORMAL        977     150
DATA       SDE9   NORMAL        977     150


Viewing ASM Disk Information (continued)

Following is another example that uses V$ASM_DISK to display disks that do not belong to any disk group:

SQL> SELECT PATH, MOUNT_STATUS, HEADER_STATUS, OS_MB
  2  FROM V$ASM_DISK
  3  WHERE GROUP_NUMBER = 0;

PATH            MOUNT_S HEADER OS_MB
--------------- ------- ------ -----
ORCL:SDD10      CLOSED  FORMER   977
ORCL:SDD8       CLOSED  FORMER   977
ORCL:SDD9       CLOSED  FORMER   977
ORCL:SDD11      CLOSED  FORMER   977

V$ASM_DISK_STAT displays performance statistics in the same way that V$ASM_DISK does, but without performing discovery of new disks. This results in a less expensive operation. However, because discovery is not performed, the output of this view does not include any data about disks that are new to the system.

The columns for V$ASM_DISK_STAT are the same as those for V$ASM_DISK.

The ASMCMD lsdsk command provides a way to see the V$ASM_DISK information in preformatted columns without having to write SQL statements. The ASMCMD lsdsk command can be used in connected mode, where it retrieves the data from V$ASM_DISK, or nonconnected mode, where the data is collected from the ASM disk headers. The -g and --discovery options control whether V$ASM_DISK or V$ASM_DISK_STAT is used.

The ASMCMD lsdsk -k command connects to the ASM instance and shows the following:

$ asmcmd lsdsk -k
Total_MB  Free_MB  OS_MB  Name   Failgroup  Library                                               Label  UDID  Product  Redund   Path
1953      654      1953   DATA1  DATA1      ASM Library - Generic Linux, version 2.0.4 (KABI_V2)  DATA1                UNKNOWN  ORCL:DATA1
1953      650      1953   DATA2  DATA2      ASM Library - Generic Linux, version 2.0.4 (KABI_V2)  DATA2                UNKNOWN  ORCL:DATA2
1953      647      1953   DATA3  DATA3      ASM Library - Generic Linux, version 2.0.4 (KABI_V2)  DATA3                UNKNOWN  ORCL:DATA3
2329      877      2329   DATA4  DATA4      ASM Library - Generic Linux, version 2.0.4 (KABI_V2)  DATA4                UNKNOWN  ORCL:DATA4


Extending an Existing Disk Group

• The ALTER DISKGROUP command enables you to extend an existing disk group by adding disks to it.

– Example:

• ASMCMD, ASMCA, and Enterprise Manager also provide interfaces to extend the existing disk groups.

ALTER DISKGROUP diskgroup_name
  { ADD
    { [ FAILGROUP failgroup_name ]
      DISK qualified_disk_clause [, qualified_disk_clause ]...
    }...
  }
  [ REBALANCE [POWER integer] [WAIT | NOWAIT] ]
;

ALTER DISKGROUP FRA ADD DISK
  'ORCL:SDE5' NAME 'FRA_DISK3',
  'ORCL:SDE6' NAME 'FRA_DISK4';

Extending an Existing Disk Group

You can use the ADD clause of the ALTER DISKGROUP statement to add a disk or a failure group to a disk group. The same syntax that you use to add a disk or failure group with the CREATE DISKGROUP statement can be used with the ALTER DISKGROUP statement.

ASM automatically rebalances the disk group when disks are added. By default, the ALTER DISKGROUP statement returns immediately after the disks have been added while the rebalance operation continues to run asynchronously. You can query the V$ASM_OPERATION view to monitor the status of the rebalance operation.

You can optionally use the REBALANCE clause to manually control the rebalance process.

In the POWER clause, you can specify a value from 0 to 11. A value of 0 disables rebalancing for this statement. A value of 1 causes the rebalance to take place with minimal resources allocated to it, whereas a value of 11 permits ASM to execute the rebalance as quickly as possible. If you do not specify a value, the POWER clause defaults to the value of the ASM_POWER_LIMIT initialization parameter.

You can specify WAIT or NOWAIT to determine whether the ALTER DISKGROUP statement waits for the rebalance to conclude before returning or not. NOWAIT is the default.
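Putting those clauses together, a disk could be added with an explicit rebalance power, with the statement blocking until the rebalance finishes. This is a sketch; the disk path and names are illustrative:

```sql
ALTER DISKGROUP FRA ADD DISK 'ORCL:SDE9' NAME 'FRA_DISK7'
  REBALANCE POWER 6 WAIT;
```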

You can also extend a disk group by using the Add Disks button in the ASM Disk Groups window of the ASM Configuration Assistant (ASMCA). Enterprise Manager provides an Add button in each specific Disk Group window.


Dropping Disks from an Existing Disk Group

• The ALTER DISKGROUP command enables you to remove disks or failure groups from an existing disk group.

– Example:

ALTER DISKGROUP diskgroup_name
  { DROP
    { DISK
        disk_name [ FORCE | NOFORCE ]
        [, disk_name [ FORCE | NOFORCE ] ]...
    | DISKS IN FAILGROUP
        failgroup_name [ FORCE | NOFORCE ]
        [, failgroup_name [ FORCE | NOFORCE ] ]...
    }
  }
  [ REBALANCE [POWER integer] [WAIT | NOWAIT] ]
;

ALTER DISKGROUP FRA DROP DISK FRA_DISK1, FRA_DISK4;

ALTER DISKGROUP FRA DROP DISK FRA_DISK1, FRA_DISK4;

Dropping Disks from an Existing Disk Group

To drop disks from a disk group, use the DROP DISK clause of the ALTER DISKGROUP statement. A dropped disk can be reused by adding it to a disk group or by using it in the creation of a new disk group. A dropped disk can also be added back to the disk group it was removed from.

When a disk is dropped, the disk group is rebalanced by moving all the file extents from the dropped disk to other disks in the disk group. A drop disk operation will fail if not enough space is available on the other disks. Data will not be lost by dropping a disk.

By default, the ALTER DISKGROUP...DROP DISK statement returns before the drop and rebalance operations are complete. Do not reuse, remove, or disconnect the dropped disk until the HEADER_STATUS column for this disk in the V$ASM_DISK view changes to FORMER. You can query the V$ASM_OPERATION view to determine the amount of time remaining for the drop/rebalance operation to complete.

If you specify the FORCE clause for the drop operation, the disk is dropped even if ASM cannot read from or write to the disk. You cannot use the FORCE flag when dropping a disk from an external redundancy disk group. A DROP FORCE operation leaves data at reduced redundancy for as long as it takes for the subsequent rebalance operation to complete. Take great care with this option because it increases your exposure to data loss if there is a subsequent disk failure during rebalancing.

The REBALANCE clause works in the same way as when extending a disk group. You can also use the Enterprise Manager, ASMCA, or ASMCMD interfaces to drop disks from a disk group.


REBALANCE POWER 0

• You can effectively disable rebalancing when adding disks to or removing disks from a disk group by specifying REBALANCE POWER 0 in the ALTER DISKGROUP statement.

• When adding, the disks become immediately available to the disk group; however, existing data is not moved to the new disks.
  – New data may be written to the new disks.

• When removing, the statement executes and the disks are marked with a status of DROPPING. However, because the operation to move the data to another disk is effectively disabled, the disks remain in this state indefinitely until another operation causes the disk group to be rebalanced.

REBALANCE POWER 0

When adding disks to or removing disks from a disk group, you can effectively disable rebalancing by specifying REBALANCE POWER 0 in the ALTER DISKGROUP statement.

When adding disks to a disk group in conjunction with REBALANCE POWER 0, the disks become immediately available to the disk group; however, existing data is not moved to the new disks. New data may be written to the new disks. This is useful in situations where you need to add disks quickly to a running system and you want to defer the rebalance operation until a later time such as a scheduled maintenance period.

When removing disks from a disk group in conjunction with REBALANCE POWER 0, the statement executes and the disks are marked with a status of DROPPING. However, because the operation to move the data to another disk is effectively disabled, the disk will remain in this state indefinitely until another add or drop operation causes the disk group to be rebalanced or the disk group is manually rebalanced using the ALTER DISKGROUP <diskgroup_name> REBALANCE statement.
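A deferred-rebalance sequence might look like the following sketch (the disk group and disk names are illustrative):

```sql
-- Make the new disks available immediately, but move no existing data yet
ALTER DISKGROUP data ADD DISK 'ORCL:SDE11', 'ORCL:SDE12'
  REBALANCE POWER 0;

-- Later, during a maintenance window, perform the deferred rebalance
ALTER DISKGROUP data REBALANCE POWER 8;
```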


V$ASM_OPERATION

• In an ASM instance, V$ASM_OPERATION displays one row for every active long-running operation executing in the ASM instance.

• In a database instance, V$ASM_OPERATION displays no rows.

• The ASMCMD lsop command shows ASM operations:

ASMCMD [+] > lsop
Group_Name  Dsk_Num  State  Power
FRA         REBAL    REAP   3

V$ASM_OPERATION

V$ASM_OPERATION provides information about long-running operations conducted by ASM. A long-running operation is one of the following:

• A rebalance operation that results from adding disks to or removing disks from a disk group

• A mirror resync operation that results from a disk being brought back online after an outage

The following example shows a typical query listing all the current operations:

SQL> SELECT G.NAME DISK_GROUP, O.*
  2  FROM V$ASM_OPERATION O, V$ASM_DISKGROUP G
  3  WHERE O.GROUP_NUMBER = G.GROUP_NUMBER;

DISK_GROUP GROUP_NUMBER OPERATION STATE POWER ACTUAL
---------- ------------ --------- ----- ----- ------
SOFAR EST_WORK EST_RATE EST_MINUTES ERROR_CODE
----- -------- -------- ----------- ----------
DATA              1 REBAL     RUN       2      2
  644     1237      565           1

In this example, a rebalance operation is currently running. The operation is estimated to require the movement of 1,237 allocation units. So far, 644 allocation units have been moved at a rate of 565 per minute. It is estimated that the operation will complete in 1 minute. No errors have been reported.
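The estimate columns in this view are related by simple arithmetic: the remaining work divided by the rate gives the remaining minutes. A quick sketch in Python (illustration only, not Oracle code; it assumes EST_MINUTES is floored to whole minutes):

```python
def est_minutes(sofar: int, est_work: int, est_rate: int) -> int:
    """Minutes of rebalance work remaining, floored to whole minutes.

    sofar    -- allocation units already moved (SOFAR)
    est_work -- estimated total allocation units to move (EST_WORK)
    est_rate -- allocation units moved per minute (EST_RATE)
    """
    if est_rate <= 0:
        raise ValueError("est_rate must be positive")
    remaining = max(est_work - sofar, 0)
    return remaining // est_rate

# Values from the example above: 644 of 1237 AUs moved at 565 AUs/minute.
print(est_minutes(644, 1237, 565))  # -> 1, matching EST_MINUTES
```

Note that EST_RATE and EST_MINUTES fluctuate while a rebalance runs, so the figure is a moving estimate, not a guarantee.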


Adding and Dropping in the Same Command

You can add and drop disks in the same ALTER DISKGROUP command.

• Is useful when replacing disks

• Avoids multiple rebalancing operations

Example:

ALTER DISKGROUP diskgroup_name
  { ADD ... }
  { DROP ... }
  [ REBALANCE [POWER integer] [WAIT | NOWAIT] ];

ALTER DISKGROUP FRA
  ADD DISK 'ORCL:SDE7' NAME 'FRA_DISK5' SIZE 977 M,
           'ORCL:SDE8' NAME 'FRA_DISK6' SIZE 977 M
  DROP DISK FRA_DISK1, FRA_DISK2;

Adding and Dropping in the Same Command

It is possible to combine add and drop operations in the same ALTER DISKGROUP statement. This approach is recommended in preference to separate operations wherever possible. Adding and dropping in the same command rebalances data extents only once and provides greater assurance that there is enough space for the rebalance operation to succeed.

Adding disks to a disk group and dropping disks from a disk group, whether in the same command or not, is an effective way of migrating ASM from an existing storage platform to a new one. For example, when a new disk subsystem is configured for discovery by ASM, you can migrate your data from its current storage to the new storage without downtime by using a single command for each disk group.


Adding and Dropping Failure Groups

Examples:

ALTER DISKGROUP FRA
  ADD FAILGROUP FAIL1
  DISK 'ORCL:SDE9' SIZE 977 M,
       'ORCL:SDE10' SIZE 977 M
  REBALANCE POWER 0;

ALTER DISKGROUP FRA
  ADD FAILGROUP FAIL2
  DISK 'ORCL:SDE11' SIZE 977 M,
       'ORCL:SDE12' SIZE 977 M
  DROP DISKS IN FAILGROUP FAIL1
  REBALANCE POWER 11 WAIT;

ALTER DISKGROUP FRA
  DROP DISKS IN FAILGROUP FAIL2
  REBALANCE POWER 1;

Adding and Dropping Failure Groups

Adding and dropping failure groups is essentially the same as adding and dropping individual disks. The only noticeable difference from a user’s perspective is the variation in the ALTER DISKGROUP statement to include explicit references to failure group names.

In the slide’s examples, the first statement will add a failure group with two disks. However, no data will be moved into the failure group because rebalancing is disabled for this statement.

If the second statement is executed immediately after the first, the FAIL1 failure group and its disks will be removed and effectively replaced by the FAIL2 failure group and the two disks it contains. This statement will trigger a rebalancing operation to use FAIL2 and it will not return until the rebalance is complete. The rebalance operation will execute as quickly as possible but may consume considerable system resources.

The final statement will remove the FAIL2 failure group and return the disk group to the same disk configuration as it had before this series of statements. This statement will return quickly. The rebalance operation triggered by it will proceed asynchronously and will be allocated minimal system resources.


Undropping Disks in Disk Groups

Use ALTER DISKGROUP ... UNDROP DISKS to cancel all pending disk removal operations within disk groups.

Examples:

ALTER DISKGROUP
  { diskgroup_name [, diskgroup_name ] ...
  | ALL }
  UNDROP DISKS;

ALTER DISKGROUP DATA UNDROP DISKS;

ALTER DISKGROUP DATA2, DATA3 UNDROP DISKS;

ALTER DISKGROUP ALL UNDROP DISKS;

Undropping Disks in Disk Groups

The UNDROP DISKS clause of the ALTER DISKGROUP statement enables you to cancel all pending disk removal operations within disk groups. Pending disk removal operations include those currently in progress.

This statement cannot be used to restore disks that are being dropped as the result of a DROP DISKGROUP statement, or for disks that are being dropped using the FORCE clause.

If a drop disk operation has already completed, this statement cannot be used to restore it. In this case, you can simply add the dropped disk back to the disk group to achieve the same outcome.


Mounting and Dismounting Disk Groups

The ALTER DISKGROUP command enables you to manually mount and dismount disk groups.

Examples:

ALTER DISKGROUP
  { diskgroup_name [, diskgroup_name ] ...
  | ALL }
  { MOUNT [ RESTRICTED | NORMAL ] [ FORCE | NOFORCE ]
  | DISMOUNT [ FORCE | NOFORCE ] };

ALTER DISKGROUP DATA MOUNT;

ALTER DISKGROUP DATA2, DATA3 MOUNT;

ALTER DISKGROUP ALL DISMOUNT;

Mounting and Dismounting Disk Groups

Disk groups must be mounted for database instances to access the files they contain. To mount or dismount disk groups, use the MOUNT or DISMOUNT clauses of the ALTER DISKGROUP statement. You can mount or dismount disk groups by name, or specify ALL. Specify ALL MOUNT to mount all the disk groups specified in the ASM_DISKGROUPS initialization parameter. Specify ALL DISMOUNT to dismount all currently mounted disk groups.

If a disk group is mounted using the RESTRICTED option, no other ASM instance in the same cluster can mount that disk group and the disk group is not usable by any database instances. RESTRICTED mode can be used to improve rebalancing performance on a cluster by eliminating messaging and coordination with other ASM instances and database instances.

For normal and high redundancy disk groups, use the FORCE option of the MOUNT clause to mount a disk group if there are sufficient ASM disks available. The mount succeeds if ASM finds at least one complete set of extents in a disk group. MOUNT FORCE is used in situations where one or more disks are not available. When MOUNT FORCE is used, ASM flags the unavailable disks as offline and drops the disks after DISK_REPAIR_TIME expires.

If you try to dismount a disk group that contains open files, the statement will fail, unless you also specify the FORCE clause. If you DISMOUNT FORCE, the files in the disk group become immediately inaccessible to your databases, and any operations involving those files will fail. In extreme cases, your database instance will abort if a disk group that it uses is forcibly dismounted.


Viewing Connected Clients

To view ASM clients:

• In an ASM instance, V$ASM_CLIENT displays one row for each open ASM disk group used by each client database instance.

• In a database instance, V$ASM_CLIENT displays one row for each open ASM disk group used by the database instance.

• Connect to an ASM instance with the asmcmd lsct command.

Viewing Connected Clients

Following is a typical example of a query that retrieves information about databases that are current clients of a particular disk group using V$ASM_CLIENT from the ASM instance:

SQL> SELECT G.NAME DISK_GROUP, C.*
  2  FROM V$ASM_CLIENT C, V$ASM_DISKGROUP G
  3  WHERE C.GROUP_NUMBER = G.GROUP_NUMBER
  4  AND G.NAME = 'DATA';

                                                    SOFTWARE   COMPATIBLE
DISK_GROUP GROUP# INSTANCE_NAME DB_NAME  STATUS    VERSION    VERSION
---------- ------ ------------- -------- --------- ---------- ----------
DATA            1 racdb2        racdb    CONNECTED 11.2.0.1.0 11.2.0.0.0
DATA            1 +ASM2         +ASM     CONNECTED 11.2.0.1.0 11.2.0.1.0

The asmcmd lsct command provides similar information:

$ asmcmd lsct

DB_Name Status Software_Version Compatible_version Instance_Name Disk_Group

+ASM CONNECTED 11.2.0.1.0 11.2.0.1.0 +ASM2 DATA

racdb CONNECTED 11.2.0.1.0 11.2.0.0.0 racdb2 DATA


Dropping Disk Groups

The DROP DISKGROUP statement enables you to delete an ASM disk group and, optionally, all its files.

Examples:

DROP DISKGROUP diskgroup_name
  [ FORCE INCLUDING CONTENTS
  | { INCLUDING | EXCLUDING } CONTENTS ];

DROP DISKGROUP DATA;

DROP DISKGROUP DATA2 INCLUDING CONTENTS;

DROP DISKGROUP DATA3 FORCE INCLUDING CONTENTS;

Dropping Disk Groups

The DROP DISKGROUP statement enables you to delete an ASM disk group and, optionally, all its files. You can specify the INCLUDING CONTENTS clause if you also want to delete any files that might be contained in the disk group. The default is EXCLUDING CONTENTS, which prevents you from dropping the disk group if it has any contents.

In order for the DROP DISKGROUP statement to succeed, the disk group must be mounted by the current ASM instance only and have none of the disk group files open.

When you drop a disk group, ASM dismounts the disk group and removes the disk group name from the ASM_DISKGROUPS initialization parameter if an SPFILE file is being used. If a text initialization parameter file (PFILE) is being used, you must manually adjust the ASM_DISKGROUPS initialization parameter to make sure that ASM does not attempt to mount the dropped disk group the next time the instance starts.

If you cannot mount a disk group but need to drop it, you can use the FORCE INCLUDING CONTENTS option. This enables you to remove the headers on disks that belong to a disk group that cannot be mounted by any ASM instances. When you use the FORCE INCLUDING CONTENTS option, the ASM instance does not attempt to verify that the disk group is being used by another ASM instance. The FORCE INCLUDING CONTENTS option can be useful in various unusual situations but should be used with extreme caution.


Checking the Consistency of Disk Group Metadata

You can check the internal consistency of disk group metadata using the ALTER DISKGROUP statement.

• This is conceptually the same as fsck in Linux or UNIX.

• An error summary is returned by the statement.

• Error details are written to the ASM instance alert log.

• Use the REPAIR option to resolve inconsistencies.

– Similar to fsck -y in Linux or UNIX

• Additional specific CHECK clauses have been deprecated.

ALTER DISKGROUP diskgroup_name CHECK [ REPAIR | NOREPAIR ];

Checking the Consistency of Disk Group Metadata

You can check the internal consistency of disk group metadata using the ALTER DISKGROUP statement with the CHECK keyword. The disk group must be mounted to perform these checks. The statement displays summary errors, and details about the errors are written to the alert log. The CHECK keyword performs the following operations:

• Verifies the consistency of the disk

• Cross-checks all the file extent maps and allocation tables for consistency

• Checks that the alias metadata directory and file directory are linked correctly

• Verifies that the alias directory tree is linked correctly

• Checks that ASM metadata directories do not have unreachable allocated blocks

Note that these checks do not validate any stored database content such as index structures and table data.

The REPAIR clause specifies that ASM should attempt to repair errors that are found during the check. Use the NOREPAIR clause to receive alerts about inconsistencies without ASM resolving the errors automatically. The default is NOREPAIR.

In earlier releases, you could specify CHECK for ALL, DISK, DISKS IN FAILGROUP, or FILE. These additional clauses have been deprecated. Oracle recommends that you do not introduce these clauses into your new code because they are scheduled for desupport.


ASM Fast Mirror Resync

Enabled when COMPATIBLE.RDBMS >= 11.1

ASM redundancy used

Screen reader hint: This slide contains a diagram that is explained in the speaker notes. The diagram shows four stages: (1) normal operation, with primary (P) and secondary (S) extent copies mirrored across the disks; (2) a disk access failure takes one disk offline; (3) while the failure time is less than DISK_REPAIR_TIME, extents modified during the outage are tracked; (4) the disk becomes accessible and the modified extents are synchronized.

ASM Fast Mirror Resync

Whenever ASM is unable to write an extent, ASM takes the associated disk offline. If the corresponding disk group uses ASM mirroring (NORMAL or HIGH redundancy), at least one mirror copy of the same extent exists on another disk in the disk group.

Before Oracle Database 11g, ASM assumed that an offline disk contained only stale data, so it no longer read from such disks. Shortly after a disk was taken offline, ASM dropped it from the disk group by re-creating the extents allocated to the disk on the remaining disks in the disk group, using the mirrored extent copies. This process is quite resource intensive and can take hours to complete. If the disk is replaced or the failure is repaired, the disk must be added again and another rebalance operation must take place.

ASM fast mirror resync significantly reduces the time required to resynchronize a transient failure of a disk. When a disk goes offline following a transient failure, ASM tracks the extents that are modified during the outage. When the transient failure is repaired, ASM can quickly resynchronize only the ASM disk extents that have been affected during the outage.

Using ASM fast mirror resync, the failed disk is taken offline but not dropped if you have set the DISK_REPAIR_TIME attribute for the corresponding disk group. The setting for this attribute determines the duration of disk outage that ASM will tolerate while still being able to resynchronize after you complete the repair. Note that the tracking mechanism uses one bit for each modified extent and is very efficient.
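The tracking mechanism described above can be pictured with a toy model. The Python below is purely hypothetical (ASM's real change-tracking structures are internal): one bit is kept per extent, writes during the outage set the corresponding bit, and resync touches only the flagged extents:

```python
class OfflineDiskTracker:
    """Toy model of fast mirror resync change tracking: 1 bit per extent."""

    def __init__(self, num_extents: int):
        self.num_extents = num_extents
        self.stale = bytearray((num_extents + 7) // 8)  # one bit per extent

    def mark_modified(self, extent: int) -> None:
        """Record that this extent was written while the disk was offline."""
        self.stale[extent // 8] |= 1 << (extent % 8)

    def extents_to_resync(self):
        """Only these extents need copying when the disk comes back online."""
        return [i for i in range(self.num_extents)
                if self.stale[i // 8] & (1 << (i % 8))]

# A disk with 1000 extents goes offline; three extents are written meanwhile.
tracker = OfflineDiskTracker(1000)
for extent in (7, 42, 511):
    tracker.mark_modified(extent)
print(tracker.extents_to_resync())  # -> [7, 42, 511]
```

Resynchronizing 3 extents instead of re-creating all 1,000 is what makes recovery from a transient failure fast, and the bitmap itself costs only one bit per extent.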


Preferred Read Failure Groups

In a multisite cluster with separate storage at each site, preferred read failure groups allow every node in the cluster to read from its local disks, resulting in better performance.

[Diagram: two disk groups, DG1 and DG2, spanning Site A (+ASM1) and Site B (+ASM2); details in the screen reader hint below.]

Screen reader hint: This slide contains a diagram with an example of how ASM_PREFERRED_READ_FAILURE_GROUPS might be set. In this example, two disk groups,

DG1 and DG2, span two sites, Site A and Site B. Site A has ASM instance +ASM1 and houses failure groups DG1.F1 and DG2.F3. Site B has ASM instance +ASM2 and houses failure groups

DG1.F2 and DG2.F4. The suggested parameter setting for this scenario is:

+ASM1.ASM_PREFERRED_READ_FAILURE_GROUPS = DG1.F1, DG2.F3

+ASM2.ASM_PREFERRED_READ_FAILURE_GROUPS = DG1.F2, DG2.F4

Preferred Read Failure Groups

Before Oracle Database 11g, ASM always read the primary copy of a mirrored extent. It may be more efficient for a node to read from a failure group extent that is closest to the node, even if it is a secondary extent. This is especially true in extended (multisite) cluster configurations, where reading from a local copy of an extent provides improved performance.

With Oracle Database 11g, you can do this by configuring the preferred mirror read using the new initialization parameter, ASM_PREFERRED_READ_FAILURE_GROUPS. The disks in the identified failure groups become the preferred read disks. Thus, every node can be configured to read from its local extent copy. This results in higher efficiency and performance, and reduced network traffic. The setting for this parameter is instance specific.

If there is more than one mirrored copy and you have set a value for the ASM_PREFERRED_READ_FAILURE_GROUPS parameter, ASM first reads the copy that resides on a preferred read disk. If that read fails, ASM attempts to read from another mirrored copy that might not be on a local preferred read disk.
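That read-selection policy can be sketched as follows (hypothetical Python; the names and data layout are invented for illustration and are not ASM internals): prefer a copy in a preferred read failure group, otherwise fall back to the primary copy:

```python
def choose_mirror(copies, preferred_failgroups):
    """Pick which mirror copy of an extent to read.

    copies               -- list of (failgroup_name, is_primary) tuples
    preferred_failgroups -- failure groups named in
                            ASM_PREFERRED_READ_FAILURE_GROUPS for this instance
    """
    # First choice: any copy that lives in a preferred read failure group.
    for failgroup, _is_primary in copies:
        if failgroup in preferred_failgroups:
            return failgroup
    # Fallback: the primary extent copy (pre-11g behavior).
    for failgroup, is_primary in copies:
        if is_primary:
            return failgroup
    return copies[0][0]

# Extent mirrored across two sites; +ASM2 prefers its local failure group F2.
copies = [("DG1_F1", True), ("DG1_F2", False)]
print(choose_mirror(copies, {"DG1_F2"}))  # -> DG1_F2 (local secondary copy)
```

The fallback branch mirrors the behavior described above: if the preferred copy cannot be read, ASM attempts another mirrored copy that might not be local.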

You can use the PREFERRED_READ column in the V$ASM_DISK view to determine whether a particular disk belongs to a preferred read failure group.

To identify performance issues with the ASM preferred read failure groups, use the V$ASM_DISK_IOSTAT view. This view displays input/output statistics for each ASM client.


Preferred Read Failure Groups: Best Practice

• Two sites, normal redundancy: two failure groups, one for each site

• Three sites, high redundancy: three failure groups, one for each site

• Two sites, high redundancy: four failure groups, two for each site

Screen reader hint: This slide contains a diagram that is explained in the speaker notes.

Preferred Read Failure Groups: Best Practice

In practice, there are only a limited number of sensible disk group configurations in an extended cluster. A good configuration takes into account both performance and availability of a disk group in an extended cluster. Here are some possible examples:

For a two-site extended cluster, a normal redundancy disk group should have only two failure groups; all disks local to each site should belong to the same failure group. Also, no more than one failure group should be specified as a preferred read failure group by each instance. If there are more than two failure groups, ASM may not mirror extents across both sites and you will not be protected against possible site failure.

If the disk group is to be created as a high-redundancy disk group, at most two failure groups should be created on each site. This guarantees that at least one extent copy is located at each site. Both local failure groups should be specified as preferred read failure groups for the local instance.

For a three-site extended cluster, a high-redundancy disk group with three failure groups should be used. In this way, ASM can guarantee that each extent has a mirror copy local to each site and that the disk group is protected against a catastrophic disaster on any of the three sites.


Viewing ASM Disk Statistics

• V$ASM_DISK_IOSTAT displays information about disk I/O statistics for each ASM client.

• If V$ASM_DISK_IOSTAT is queried from a database instance, only the rows relating to that instance are shown.

• ASMCMD lsdsk --statistics shows just the disk statistics.

• ASMCMD iostat shows a subset of the statistics depending on the option.

Viewing ASM Disk Statistics

V$ASM_DISK_IOSTAT relates information about disk I/O statistics to each database instance that is a client of ASM. The following example shows the use of V$ASM_DISK_IOSTAT to summarize I/O statistics for each instance and disk group combination:

SQL> SELECT INSTNAME, G.NAME DISKGROUP, SUM(READS) READS,
  2  SUM(BYTES_READ) BYTES_READ, SUM(WRITES) WRITES,
  3  SUM(BYTES_WRITTEN) BYTES_WRITTEN
  4  FROM V$ASM_DISK_IOSTAT I, V$ASM_DISKGROUP G
  5  WHERE I.GROUP_NUMBER = G.GROUP_NUMBER
  6  GROUP BY INSTNAME, G.NAME;

INSTNAME DISKGROUP   READS  BYTES_READ WRITES BYTES_WRITTEN
-------- --------- ------- ----------- ------ -------------
ORA11G1  FRA           730    12373504  20303     382233088
ORA11G1  DATA        73619  1272131584  53549     953606656
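The GROUP BY in that query simply sums the per-disk rows for each instance and disk group pair. The same aggregation, sketched in Python over hypothetical per-disk rows chosen to total the DATA and FRA figures shown above:

```python
from collections import defaultdict

# Hypothetical per-disk rows: (instname, diskgroup, reads, bytes_read)
rows = [
    ("ORA11G1", "DATA", 40000,  700000000),
    ("ORA11G1", "DATA", 33619,  572131584),
    ("ORA11G1", "FRA",    730,   12373504),
]

# Equivalent of SUM(...) ... GROUP BY INSTNAME, G.NAME
totals = defaultdict(lambda: [0, 0])
for inst, dg, reads, bytes_read in rows:
    totals[(inst, dg)][0] += reads
    totals[(inst, dg)][1] += bytes_read

for (inst, dg), (reads, bytes_read) in sorted(totals.items()):
    print(inst, dg, reads, bytes_read)
# -> ORA11G1 DATA 73619 1272131584
#    ORA11G1 FRA 730 12373504
```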


Viewing ASM Disk Statistics (continued)

The following query quickly shows the presence of any I/O errors reported by ASM. If this query returns a value other than zero, further investigation may be undertaken:

SQL> SELECT SUM(READ_ERRS)+SUM(WRITE_ERRS) ERRORS
  2  FROM V$ASM_DISK;

    ERRORS
----------
         0

The ASMCMD lsdsk --statistics command shows just the statistics columns.

$ asmcmd lsdsk --statistics
 Reads  Write  Read_Errs  Write_Errs  Read_time  Write_Time  Bytes_Read  Bytes_Written  Voting_File  Path
  6033  42139          0           0     54.332     597.764    68358144      604267008  Y  ORCL:DATA1
  6278  11890          0           0     71.596     163.1      81133568      131769344  Y  ORCL:DATA2
 34902  39797          0           0     58.86      602.488   545398784      580783616  Y  ORCL:DATA3
191934  44743          0           0    162.28      620.704  3112785920      623890432  N  ORCL:DATA4

The ASMCMD iostat command has more ways to see the statistics, including errors.

$ asmcmd iostat -e
Group_Name  Dsk_Name       Reads      Writes  Read_Err  Write_Err
DATA        DATA1       69439488   612191744         0          0
DATA        DATA2       81936384   132383744         0          0
DATA        DATA3      553680896   588099072         0          0
DATA        DATA4     3160528896   631962624         0          0


Performance, Scalability, and Manageability Considerations for Disk Groups

• Create separate disk groups for database files and fast recovery area.

• Disks in a disk group should have the same size and performance characteristics.

– Allows the disk group to deliver consistent performance

– Allows ASM to use disk space most effectively

– Allows operations with different storage requirements to be matched with different disk groups effectively

• Using separate disk groups for each database as opposed to having multiple databases in a disk group has various benefits and drawbacks.

Performance, Scalability, and Manageability Considerations for Disk Groups

There is no ideal number of disk groups. However, there are a few guiding principles that can help you decide how many disk groups you should create.

• Create separate disk groups for your database files and fast recovery area for backup files. This configuration allows fast recovery in case of a disk group failure.

• Disks in a disk group should have the same size and performance characteristics. If you have several different types of disks in terms of size and performance, create disk groups that contain disks with similar characteristics. ASM load-balances the file activity by uniformly distributing file extents across all the disks in a disk group. For this technique to be effective, it is important that disks in a disk group have similar performance characteristics. For example, the newest and fastest disks might reside in a disk group reserved for the database work area, and slower drives could reside in a disk group reserved for the fast recovery area.

• There are benefits and drawbacks associated with housing multiple databases in the same disk group as opposed to maintaining each database in a separate disk group. Housing multiple databases in a single disk group affords the most efficient use of space. However, any faults or maintenance that affect the disk group may affect many databases. Separate disk groups provide greater isolation from the effects of a fault or maintenance operation. However, achieving this may consume more disk space and may require more disk group maintenance to balance disk resources.


Quiz

If you create a disk group and do not specify the REDUNDANCY clause, the default setting will be:

1. EXTERNAL

2. NORMAL

3. HIGH

4. The default varies depending on the number of failure groups that are defined.

Answer: 2

However, if only one failure group is specified, the CREATE DISKGROUP statement will fail. In any event, it is recommended that the REDUNDANCY clause should be specified explicitly because the setting cannot be changed after the disk group has been created.


Quiz

Adding a disk to a disk group will always cause a rebalance operation to occur.

1. True

2. False

Answer: 2

If the ASM_POWER_LIMIT initialization parameter is set to zero or REBALANCE POWER 0 is specified in the ALTER DISKGROUP statement, then rebalancing will be disabled.


Summary

In this lesson, you should have learned how to:

• Create and delete ASM disk groups

• Set the attributes of an existing ASM disk group

• Perform ongoing maintenance tasks on ASM disk groups

• Explain key performance and scalability considerations for ASM disk groups


Practice 8 Overview: Administering ASM Disk Groups

This practice covers the following topics:

• Configuring disk groups

• Adding and removing disks

• Controlling rebalance operations

• Monitoring disk and disk group I/O statistics


Administering ASM Files, Directories, and Templates


Objectives

After completing this lesson, you should be able to:

• Use different client tools to access Automatic Storage Management (ASM) files

• Describe the format of a fully qualified ASM file name

• Explain how ASM files, directories, and aliases are created and managed

• Describe and manage disk group templates


ASM Clients

Screen reader hint: This slide contains a diagram that is explained in the speaker notes. The diagram shows ASM at the center, surrounded by its clients: the database instance, ASM Cluster File System (ACFS), ASMCMD, SQL clients (SQL>), RMAN, DBMS_FILE_TRANSFER, and the XML DB repository (accessed through ftp and http).

ASM Clients

Oracle 11g ASM is a volume manager and a file system for Oracle database files. This slide introduces various ASM clients. Many of these will be examined in greater detail later in this lesson. Clients that connect directly to ASM include:

• Oracle Database: Oracle Database is the most fundamental ASM client. It makes requests of ASM relating to numerous types of activities such as ASM file creation.

• ASM Cluster File System: This depends on ASM to provide the ASM volumes.

• ASM Clusterware: If the Oracle Cluster Registry (OCR), voting files, or ASM server parameter file (SPFILE) is stored on an ASM disk group, the ASM instance is a client of itself.

• ASMCA: The ASM Configuration Assistant is a graphical interface that allows you to manage ASM instances, disk groups, and volumes.

• Enterprise Manager: This allows you to manage and monitor the ASM instance directly, and indirectly through the database.

• Clusterware: Clusterware uses ASM to store the OCR and voting disk files by default.

• SQL clients (such as SQL*Plus): A series of ASM-specific SQL commands (such as CREATE DISKGROUP) provide the most fundamental management client for ASM.

• ASMCMD: The ASM command-line interface is used to interrogate and manage ASM. It includes many UNIX-like commands that can be used to manage the files and directories in an ASM system.


ASM Clients (continued)

In addition to clients that connect to ASM directly, there are a series of interfaces provided by Oracle Database that can be used to manipulate ASM in various different ways. These include:

• SQL: When an Oracle Database is managed under ASM, activities that create and delete ASM files (such as CREATE DATABASE and DROP TABLESPACE) will implicitly interact with the underlying ASM instance. A database instance will also directly read and write ASM files as a result of SQL data manipulation language (DML) commands (INSERT, UPDATE, and DELETE). However, operations that involve space management (such as extending a data file) will also require interaction with the ASM instance.

• Oracle Recovery Manager (RMAN): RMAN is Oracle’s recommended backup and recovery utility. RMAN is well integrated with ASM and can be used to migrate non-ASM databases into ASM.

• XML DB: ASM files and directories can be accessed through a virtual folder in the XML DB repository. XML DB provides a means to access and manipulate the ASM files and directories with programmatic APIs, such as the DBMS_XDB package, and with XML DB protocol services such as FTP and HTTP/WebDAV.

• DBMS_FILE_TRANSFER: The DBMS_FILE_TRANSFER package provides procedures to copy ASM files within a database or to transfer binary files between a local ASM instance and a remote database file. DBMS_FILE_TRANSFER.COPY_FILE supports all transfer combinations involving ASM and/or your local file system, namely:

- Local file system to local file system
- Local file system to ASM
- ASM to local file system
- ASM to ASM


Interaction Between Database Instances and ASM

[Slide diagram: the file-creation dialog between a database instance, the ASM instance, and ASM storage — 1 Create file, 2 Allocation, 3 Extent map, 4 Initialize, 5 Commit, 6 Close. The diagram is explained in the speaker notes below.]

Interaction Between Database Instances and ASM

The file creation process provides an illustration of the interactions that take place between database instances and ASM. The file creation process occurs as follows:

1. The database requests file creation.
2. An ASM foreground process creates a Continuing Operation Directory (COD) entry and allocates space for the new file across the disk group.
3. A background database process (ASMB) receives an extent map for the new file.
4. The file is now open and the database process initializes the file directly.
5. After initialization, the database process requests that the file creation be committed. This causes the ASM foreground process to clear the COD entry and mark the file as created.
6. Acknowledgement of the file commit implicitly closes the file. The database instance will need to reopen the file for future I/O.

There are two important points to consider from this example:

• The database instance and ASM instance work together in a coordinated fashion. A database instance must interact with ASM to map database files to ASM extents. It also receives a stream of messages relating to ASM operations (such as disk group rebalancing) that may lock or move ASM extents.

• Database I/O is not channeled through the ASM instance. In fact, the database conducts I/O operations directly against ASM files, as illustrated in step 4 in the slide.


Accessing ASM Files by Using RMAN

• RMAN should be used to back up ASM files.
– RMAN is needed to unmap data file extent locations.

– Third-party Oracle Database backup and recovery managers also use RMAN to access ASM.

• ASM can be used as a storage area for RMAN backups.

• Depending on the context, RMAN can reference individual ASM files or entire disk groups.

• RMAN can be used to migrate databases into ASM.
– Migrating a database to ASM can be done one file at a time, or all at once.

RMAN> BACKUP AS COPY
  DATAFILE "+DATA/rdbms/datafile/tbs_5.256.565313879"
  FORMAT "+DATA2";

Accessing ASM Files by Using RMAN

The Oracle Recovery Manager (RMAN) is the preferred method for backup and recovery of databases contained in ASM. RMAN is very well integrated with ASM and is also used by third-party Oracle Database backup and recovery managers.

RMAN can back up from and recover to ASM. It can also use ASM as a store for backup sets.

Depending on the context, RMAN can reference individual ASM files or entire disk groups. For example, the following RMAN BACKUP command refers to an individual ASM file on the second line and also refers to an ASM disk group on the third line.

RMAN> BACKUP AS COPY
  DATAFILE "+DATA/rdbms/datafile/tbs_5.256.565313879"
  FORMAT "+DATA2";

RMAN can also be used to migrate existing databases into ASM. This can be done one file at a time or a complete database can be migrated into ASM in the same operation.

The following provides an example of the procedure you can use to relocate your entire database to an ASM disk group (assuming the use of a server parameter file):

1. Obtain the file names of the current control files and online redo logs using V$CONTROLFILE and V$LOGFILE.

2. Shut down the database consistently.


Accessing ASM Files by Using RMAN (continued)

3. Modify the server parameter file of your database as follows:
- Start the database with the NOMOUNT option.
- Set the DB_CREATE_FILE_DEST parameter to the desired ASM disk group.
- Remove the CONTROL_FILES parameter. It will be re-created automatically.

4. Edit the following RMAN command file to replace the placeholder file and disk group references with your actual locations, and then run it. It backs up the database, switches the current data files to the backups, renames the online redo logs, and re-creates the temporary tablespaces.

RESTORE CONTROLFILE FROM '/u1/c1.ctl';
ALTER DATABASE MOUNT;
BACKUP AS COPY DATABASE FORMAT '+YOURDG';
SWITCH DATABASE TO COPY;
# Repeat command for all online redo log members
SQL "ALTER DATABASE RENAME '/u1/log1' TO '+YOURDG' ";
ALTER DATABASE OPEN RESETLOGS;
# Repeat commands for all temporary tablespaces
SQL "ALTER TABLESPACE temp ADD TEMPFILE";
SQL "ALTER DATABASE TEMPFILE '/u1/temp1' DROP";

5. Delete the old database files.

Note: This example illustrates the procedure for migrating a database into ASM. You may want to use other options and settings to migrate your specific databases into ASM. For a complete discussion of this topic, refer to the Oracle Database Backup and Recovery User’s Guide 11g Release 2.


Accessing ASM Files by Using XML DB

• ASM files and directories can be accessed through the /sys/asm virtual folder in the XML DB repository.

– Access is via PL/SQL APIs, FTP, and HTTP/WebDAV.

• The diagram shows the hierarchy under /sys/asm.

[Slide diagram: the virtual folder hierarchy under /sys/asm, explained in the speaker notes below.]

Accessing ASM Files by Using XML DB

ASM files and directories can be accessed through a virtual folder in the Oracle Database XML DB repository. The repository path to the virtual folder is /sys/asm. The folder is virtual because its contents do not actually reside in the repository; they exist as normal ASM files and directories. /sys/asm provides a means to access and manipulate the ASM files and directories with programmatic APIs, such as the DBMS_XDB package, and with XML DB protocols such as FTP and HTTP/WebDAV. You must log in to XML DB as a user other than SYS and that user must be granted the DBA role to access /sys/asm with XML DB protocols.

A typical use for this capability might be to view /sys/asm as a Web folder in Windows Explorer, and then copy a Data Pump dumpset from an ASM disk group to an operating system file system by dragging and dropping.

Under /sys/asm, the folder hierarchy is defined by the structure of an ASM fully qualified file name. That is, the /sys/asm virtual folder contains one subfolder for every mounted disk group, and each disk group folder contains one subfolder for each database that uses the disk group. In addition, a disk group folder might contain files and folders corresponding to aliases created by the administrator. Continuing the hierarchy, the database folders contain ASM file type folders, which contain the ASM files.

Note: XML DB must be configured to enable FTP and HTTP/WebDAV. Use Enterprise Manager or the script provided at $ORACLE_HOME/rdbms/admin/catxdbdbca.sql to do this.


Accessing ASM Files by Using DBMS_FILE_TRANSFER

• DBMS_FILE_TRANSFER provides procedures to:

– Copy ASM files within a database

– Transfer binary files in either direction between a local ASM instance and a remote database file

• DBMS_FILE_TRANSFER.COPY_FILE supports all transfer combinations between ASM and local files.

• Example:
– Copy a local database file into ASM.

SQL> CREATE DIRECTORY dgb AS '+DATA/dbfiles';
SQL> CREATE DIRECTORY loc AS '/u01/app/oracle/oradata/db';
SQL> BEGIN
  2    DBMS_FILE_TRANSFER.COPY_FILE('loc','tmp.dbf','dgb','tmp.dbf');
  3  END;
  4  /

Accessing ASM Files by Using DBMS_FILE_TRANSFER

The DBMS_FILE_TRANSFER package provides procedures to copy ASM files within a database or transfer binary files between databases that use ASM. The DBMS_FILE_TRANSFER package has the following procedures:

• COPY_FILE: Reads a file from a source directory and creates a copy of the file in a destination directory. The source and destination directories can both be in a local file system or in an ASM disk group. You can also use this procedure to copy between a local file system and an ASM disk group; the copy operation is valid in either direction.

• GET_FILE: Contacts a remote database to read a remote file and then creates a copy of the file in the local file system or ASM disk group

• PUT_FILE: Reads a local file or ASM disk group and contacts a remote database to create a copy of the file in the remote file system

When using DBMS_FILE_TRANSFER, note the following:
• The size of the copied file must be a multiple of 512 bytes.
• The size of the copied file must be less than or equal to two terabytes.
• Transferring a file is not transactional. To guarantee consistency, bring files offline when the database is in use.
• The copied file is treated as a binary file, and no character set conversion is performed.
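As a sketch of the remote-transfer procedures, the following PL/SQL block pulls a data file from a remote database into a local ASM disk group with GET_FILE. The directory objects (ASM_DIR locally, SRC_DIR at the remote site), the database link name remote_db, and the file name are illustrative assumptions, not objects from the course environment.

```sql
-- Hypothetical setup: ASM_DIR is a local directory object on an ASM
-- path, SRC_DIR is defined at the remote database, and remote_db is
-- a database link pointing to that remote database.
SQL> CREATE DIRECTORY asm_dir AS '+DATA/dbfiles';
SQL> BEGIN
  2    DBMS_FILE_TRANSFER.GET_FILE(
  3      source_directory_object      => 'SRC_DIR',
  4      source_file_name             => 'users01.dbf',
  5      source_database              => 'remote_db',
  6      destination_directory_object => 'ASM_DIR',
  7      destination_file_name        => 'users01.dbf');
  8  END;
  9  /
```

Note that the restrictions listed above (512-byte multiples, two-terabyte limit, no character set conversion) apply to this transfer as well.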


Accessing ASM Files by Using ASMCMD

• ASMCMD provides a command-line interface to interrogate and manage ASM.

• It includes many UNIX-like commands that can be used to manage the files and directories in an ASM system.

• Example ASMCMD session:

[grid@host01 ~]$ asmcmd
ASMCMD> cd +DATA/racdb/DATAFILE
ASMCMD> ls -l SYS*
Type      Redund  Striped  Time             Sys  Name
DATAFILE  MIRROR  COARSE   AUG 03 16:00:00  Y    SYSAUX.257.692926339
DATAFILE  MIRROR  COARSE   AUG 03 16:00:00  Y    SYSTEM.256.692926339
ASMCMD> cd ..
ASMCMD> pwd
+DATA/racdb
ASMCMD> exit
[grid@host01 ~]$

Accessing ASM Files by Using ASMCMD

ASMCMD is a command-line utility that you can use to view and manipulate files and directories within ASM disk groups. ASMCMD can list the contents of disk groups, perform searches, create and remove directories and aliases, display space utilization, and more.

When you run ASMCMD, you connect to an ASM instance and you are provided a command line where you can execute any of the ASMCMD commands. The ASMCMD commands include many commands that perform the same basic functions as their UNIX/Linux counterparts. These include: cd, cp, du, find, ls, mkdir, pwd, and rm. There are additional commands that perform ASM-specific functions.

ASMCMD commands are discussed in parallel with the other utilities for performing ASM administration tasks.

When you invoke the ASMCMD utility from the Grid Infrastructure $ORACLE_HOME, the utility attempts to connect AS SYSASM, so the user that invokes ASMCMD must be a member of the OSASM group. When connecting from a database $ORACLE_HOME, ASMCMD attempts to connect AS SYSDBA, and the OS user must be a member of the OSDBA group for ASM. Set the ORACLE_SID environment variable to the local name of the instance. Any OS user who is a member of the appropriate OS group may connect with ASMCMD to a local instance, and specify the connect role with the -a option.


Fully Qualified ASM File Names

• Every file created in ASM gets a system-generated file name, known as the fully qualified file name.
– You cannot set the fully qualified file name.
– ASM guarantees uniqueness within the ASM environment.
– The fully qualified file name is used in database views that display Oracle Database file names.

• Format:
<+group>/<dbname>/<file_type>/<file_type_tag>.<file>.<incarnation>

• Examples:
+DATA/ORA11G/DATAFILE/SYSTEM.262.676172197
+DATA/ORA11G/PARAMETERFILE/spfile.268.676172697
+DATA/ORA11G/CONTROLFILE/Current.257.676172363
+DATA/ORA11G/ONLINELOG/group_1.256.676172371
+FRA/ORA11G/CONTROLFILE/Backup.275.676172942

Fully Qualified ASM File Names

Every file created in ASM gets a system-generated file name, known as the fully qualified file name. You cannot set the fully qualified file name. The fully qualified file name represents a complete path name in the ASM file system. An example of a fully qualified file name is:

+DATA/racdb/DATAFILE/SYSTEM.262.676172197

You can use the fully qualified file name to reference (read or retrieve) an ASM file. You can also use other abbreviated file name formats, such as an alias ASM file name (discussed later in this lesson) to reference an ASM file. You can find the fully qualified file name in database views displaying Oracle Database file names, such as V$DATAFILE and V$LOGFILE.

A fully qualified file name has the following form:

+group/dbname/file_type/file_type_tag.file.incarnation

where:
• +group is the disk group name preceded by a plus sign. You can think of the plus sign (+) as the root directory of the ASM file system, similar to the slash (/) on UNIX or Linux file systems.
• dbname is the DB_UNIQUE_NAME of the database to which the file belongs.
• file_type is the Oracle Database file type.
• file_type_tag is type-specific information about the file.
• file.incarnation is the file/incarnation pair, used to ensure uniqueness.
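Because fully qualified names surface in the database file views, a quick way to inspect them is to query V$DATAFILE from the database instance. This is a hedged illustration: the query is standard, but the output shown merely suggests the shape of the results (the second file name is invented for the example).

```sql
SQL> SELECT name FROM V$DATAFILE;

NAME
------------------------------------------------
+DATA/ORA11G/DATAFILE/SYSTEM.262.676172197
+DATA/ORA11G/DATAFILE/SYSAUX.263.676172207
```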


Fully Qualified ASM File Names (continued)

The following table shows possible values for the file_type and file_type_tag components of a fully qualified file name:

file_type         file_type_tag                      Description
----------------  ---------------------------------  --------------------------------------------
CONTROLFILE       Current or Backup                  Control files and backup control files
DATAFILE          <tsname>                           Data file and data file copies. <tsname> is
                                                     the tablespace containing the data file.
ONLINELOG         group_<group#>                     Online logs
ARCHIVELOG        thread_<thread#>_seq_<sequence#>   Archive logs
TEMPFILE          <tsname>                           Temporary tablespace file. <tsname> is the
                                                     tablespace containing the data file.
BACKUPSET         <sp>_<timestamp>                   Data file backup, data file incremental
                                                     backup, or archive log backup pieces. <sp>
                                                     has the value s when the backup set includes
                                                     the spfile; n indicates that the backup set
                                                     does not include the spfile.
PARAMETERFILE     spfile                             Server parameter files
DATAGUARDCONFIG   <dbname>                           Data Guard configuration file. <dbname> is
                                                     the value of the DB_UNIQUE_NAME database
                                                     initialization parameter.
FLASHBACK         log_<log#>                         Flashback logs
CHANGETRACKING    ctf                                Block change tracking file, used in
                                                     incremental backups
DUMPSET           <user>_<job#>_<file#>              Data Pump dumpset. Dumpset files encode the
                                                     username, the job number that created the
                                                     dump set, and the file number as part of
                                                     the tag.
XTRANSPORT        <tsname>                           Cross-platform transportable tablespace data
                                                     files. <tsname> is the tablespace containing
                                                     the data file.
AUTOBACKUP        <sp>_<timestamp>                   Automatically generated control file backup.
                                                     <sp> has the value s when the backup set
                                                     includes the spfile; n indicates that the
                                                     backup set does not include the spfile.
ASMPARAMETERFILE  spfile                             Name of the Oracle ASM SPFILE
OCRFILE           ocrfile                            Name of the OCR files


Other ASM File Names

• Numeric ASM file name

• Alias ASM file name

• Alias ASM file name with template

• Incomplete ASM file name

• Incomplete ASM file name with template

Format:  <+group>.<file>.<incarnation>
Example: +DATA.262.676172197

Format:  <+group>/<alias>
Example: +DATA/my_dir/my_other_dir/my_file_name.dbf

Format:  <+group>(<template>)/<alias>
Example: +DATA(my_template)/my_dir/my_other_file_name.dbf

Format:  <+group>

Format:  <+group>(<template>)
Example: +DATA(my_template)

Other ASM File Names

In addition to the fully qualified file name, the following other ASM file names can be used in various different situations:

Numeric ASM File Name

The numeric ASM file name can be used for referencing existing files. It is derived from the fully qualified ASM file name and takes the form:

+group.file.incarnation

Alias ASM File Name

Alias ASM file names, or simply aliases, can be used both for referencing existing ASM files and for creating new ASM files. Alias names start with the disk group name preceded by a plus sign, after which you specify a name string of your choosing. Alias file names are implemented using a hierarchical directory structure, with the slash (/) or backslash (\) character separating name components. You can create an alias in any system-generated or user-created ASM directory already in existence except for the root (+) level.

Alias ASM file names are distinguished from fully qualified file names or numeric file names because they do not end in a dotted pair of numbers. It is an error to attempt to create an alias that ends in a dotted pair of numbers.

You can think of aliases as achieving the same function as symbolic links in a UNIX or Linux environment.
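To make the symbolic-link analogy concrete, an alias can be attached to an existing file with ALTER DISKGROUP ... ADD ALIAS. This sketch reuses the fully qualified name from earlier in the lesson and assumes a +DATA/my_dir directory has already been created:

```sql
SQL> ALTER DISKGROUP data ADD ALIAS '+DATA/my_dir/system01.dbf'
  2  FOR '+DATA/racdb/DATAFILE/SYSTEM.262.676172197';
```

The file can then be referenced by either the alias or the fully qualified name.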


Other ASM File Names (continued)

Alias ASM File Name with Template

An alias ASM file name with template is essentially the same as an alias ASM file name. The only difference is that a file created using this type of file name receives the mirroring and striping attributes specified by the named template. The template must belong to the disk group that the file is being created in. An alias ASM file name with template is used only for ASM file creation operations.

Incomplete ASM File Name

Incomplete ASM file names are used only for file creation operations. An incomplete ASM file name is just a reference to a disk group name. When incomplete ASM file names are used, ASM automatically creates fully qualified file names under the specified disk group.

Incomplete ASM File Name with Template

Incomplete ASM file names with templates are also used only for file creation operations. They consist of the disk group name followed by the template name in parentheses. This explicit template reference causes ASM to use the specified template to determine mirroring and striping attributes for the file, instead of the default template for that file type.

Incomplete ASM file names, with and without templates, can be used in conjunction with the Oracle Managed Files (OMF) feature of Oracle Database. If the DB_CREATE_FILE_DEST initialization parameter for the database is set to an incomplete ASM file name, it is possible to create a tablespace without any further reference to a file name.

For example, assume the following initialization parameter setting:

DB_CREATE_FILE_DEST = '+DATA'

The following statement creates the tspace1 tablespace:

CREATE TABLESPACE tspace1;

ASM automatically creates and manages a data file for tspace1 on ASM disks in the DATA disk group. File extents are stored using the attributes defined by the default template for a data file.


Valid Contexts for the ASM File Name Forms

File Name Form            Valid to Reference   Valid for Single   Valid for Multiple
                          Existing Files       File Creation      File Creation
------------------------  -------------------  -----------------  ------------------
Fully Qualified           Yes                  No                 No
Numeric                   Yes                  No                 No
Alias                     Yes, if defined      Yes                No
Alias with Template       No                   Yes                No
Incomplete                No                   Yes                Yes
Incomplete with Template  No                   Yes                Yes

Valid Contexts for the ASM File Name Forms

The table in the slide specifies the valid contexts for each file name form.

A single file creation is an operation that creates a single file, such as a data file or a control file. You can use an alias or incomplete file name wherever a file name is called for in a statement that creates a single file, such as CREATE TABLESPACE or CREATE CONTROLFILE.

A multiple file creation is an operation that can create multiple ASM files. An example of such an operation would be a single ALTER DATABASE statement that adds a new online redo log group with multiple members.


Single File Creation Examples

• Alias ASM file name

• Alias file name with template

• Incomplete file name

SQL> ALTER TABLESPACE myspace ADD
  2  DATAFILE '+DATA/mydir/myspace02.dbf' SIZE 50M;

SQL> ALTER TABLESPACE myspace ADD
  2  DATAFILE '+DATA(mytemplate)/mydir/myspace03.dbf';

SQL> ALTER TABLESPACE myspace ADD
  2  DATAFILE '+DATA' SIZE 50M;

Single File Creation Examples

In a single file creation operation, you specify one of the following ASM file name forms instead of a local file name. As previously mentioned, the valid forms for single file creation operations are:

• Alias ASM file name• Alias ASM file name with template• Incomplete ASM file name• Incomplete ASM file name with template

Note: When the SIZE parameter is not given, the SIZE defaults to 100M.


Multiple File Creation Example

• Create an online redo log group with two members.– Set the following database initialization parameters:

– Then create the log group without any file references:

• The resulting fully qualified file names are:

SQL> ALTER DATABASE ADD LOGFILE;

+DATA/ORA11G/ONLINELOG/group_5.269.699798885
+FRA/ORA11G/ONLINELOG/group_5.256.699799169

DB_CREATE_ONLINE_LOG_DEST_1 = '+DATA'
DB_CREATE_ONLINE_LOG_DEST_2 = '+FRA'

Multiple File Creation Example

Multiple file creation operations work only in conjunction with incomplete ASM file names. In such operations, the incomplete file name reference is implied through the use of the following database initialization parameters:

• DB_CREATE_FILE_DEST
• DB_CREATE_ONLINE_LOG_DEST_<n> (where <n> = 1, 2, 3, 4, or 5)

The slide example illustrates how an online redo log group containing two members is created without any explicit file references.

Although it is recommended that you use the Database Configuration Assistant (DBCA) to create databases, it is possible to set DB_CREATE_FILE_DEST to an ASM disk group and create a complete database using the following simple statement:

CREATE DATABASE sample;

This statement creates a database with at least the following ASM files:
• A SYSTEM tablespace data file in the disk group specified in DB_CREATE_FILE_DEST
• A SYSAUX tablespace data file in the disk group specified in DB_CREATE_FILE_DEST
• A multiplexed online redo log with two log groups. Each log group will have one member in the disk group specified in DB_CREATE_FILE_DEST and a second member in the disk group specified in DB_RECOVERY_FILE_DEST.


View ASM Aliases, Files, and Directories

• In an ASM instance, V$ASM_ALIAS displays one row for each alias, system-generated file name, and directory present in every disk group mounted by the ASM instance.

• V$ASM_ALIAS contains the full hierarchy of aliases, files, and directories.
– Use a hierarchical query to show this.

• In a database instance, V$ASM_ALIAS displays no rows.

• The asmcmd ls command behaves much like the UNIX ls command.

View ASM Aliases, Files, and Directories

V$ASM_ALIAS contains a record for each system-generated file name, user-defined alias, and directory for every currently mounted disk group.

Use a hierarchical query on V$ASM_ALIAS to reconstruct the hierarchy of files, aliases, and directories in ASM. Join V$ASM_ALIAS with V$ASM_FILE to display additional file information, such as the file type. V$ASM_ALIAS is also commonly joined with V$ASM_DISKGROUP to display the disk group name.

The asmcmd ls -l command lists the contents of a directory:

ASMCMD> ls -l +DATA/RACDB/DATAFILE
Type      Redund  Striped  Time             Sys  Name
DATAFILE  MIRROR  COARSE   JUL 31 14:00:00  Y    EXAMPLE.264.692926563
DATAFILE  MIRROR  COARSE   JUL 31 14:00:00  Y    SYSAUX.257.692926339
DATAFILE  MIRROR  COARSE   JUL 31 14:00:00  Y    SYSTEM.256.692926339
DATAFILE  MIRROR  COARSE   JUL 31 14:00:00  Y    UNDOTBS1.258.692926341
DATAFILE  MIRROR  COARSE   JUL 31 14:00:00  Y    UNDOTBS2.265.692926841
DATAFILE  MIRROR  COARSE   JUL 31 14:00:00  Y    USERS.259.692926341

The following query shows an example of how to reconstruct the ASM file hierarchy using V$ASM_ALIAS in combination with other V$ASM views:


View ASM Aliases, Files, and Directories (continued)

SQL> SELECT CONCAT('+'||GNAME, SYS_CONNECT_BY_PATH(ANAME, '/'))
  2         FULL_PATH, SYSTEM_CREATED, ALIAS_DIRECTORY, FILE_TYPE
  3  FROM ( SELECT B.NAME GNAME, A.PARENT_INDEX PINDEX,
  4                A.NAME ANAME, A.REFERENCE_INDEX RINDEX,
  5                A.SYSTEM_CREATED, A.ALIAS_DIRECTORY,
  6                C.TYPE FILE_TYPE
  7         FROM V$ASM_ALIAS A, V$ASM_DISKGROUP B, V$ASM_FILE C
  8         WHERE A.GROUP_NUMBER = B.GROUP_NUMBER
  9           AND A.GROUP_NUMBER = C.GROUP_NUMBER(+)
 10           AND A.FILE_NUMBER = C.FILE_NUMBER(+)
 11           AND A.FILE_INCARNATION = C.INCARNATION(+)
 12       )
 13  START WITH (MOD(PINDEX, POWER(2, 24))) = 0
 14  CONNECT BY PRIOR RINDEX = PINDEX;

FULL_PATH                                               S A FILE_TYPE
------------------------------------------------------- - - ----------------
+DATA/cluster03                                         Y Y
+DATA/cluster03/ASMPARAMETERFILE                        Y Y
+DATA/cluster03/ASMPARAMETERFILE/REGISTRY.253.692923731 Y N ASMPARAMETERFILE
+DATA/cluster03/OCRFILE                                 Y Y
+DATA/cluster03/OCRFILE/REGISTRY.255.692923735          Y N OCRFILE
+DATA/RACDB                                             Y Y
+DATA/RACDB/DATAFILE                                    Y Y
+DATA/RACDB/DATAFILE/UNDOTBS1.258.692926341             Y N DATAFILE
+DATA/RACDB/DATAFILE/EXAMPLE.264.692926563              Y N DATAFILE
+DATA/RACDB/DATAFILE/UNDOTBS2.265.692926841             Y N DATAFILE
+DATA/RACDB/DATAFILE/USERS.259.692926341                Y N DATAFILE
+DATA/RACDB/DATAFILE/SYSTEM.256.692926339               Y N DATAFILE
+DATA/RACDB/DATAFILE/SYSAUX.257.692926339               Y N DATAFILE
+DATA/RACDB/CONTROLFILE                                 Y Y
+DATA/RACDB/CONTROLFILE/Current.260.692926519           Y N CONTROLFILE
+DATA/RACDB/ONLINELOG                                   Y Y
+DATA/RACDB/ONLINELOG/group_1.261.692926525             Y N ONLINELOG
+DATA/RACDB/ONLINELOG/group_2.262.692926529             Y N ONLINELOG
+DATA/RACDB/ONLINELOG/group_4.267.692926921             Y N ONLINELOG
+DATA/RACDB/ONLINELOG/group_3.266.692926919             Y N ONLINELOG
+DATA/RACDB/TEMPFILE                                    Y Y
+DATA/RACDB/TEMPFILE/TEMP.263.692926547                 Y N TEMPFILE
+DATA/RACDB/PARAMETERFILE                               Y Y
+DATA/RACDB/PARAMETERFILE/spfile.268.692926927          Y N PARAMETERFILE
+DATA/RACDB/spfileracdb.ora                             N N PARAMETERFILE

25 rows selected.


Viewing ASM Files

• In an ASM instance, V$ASM_FILE displays one row for every ASM file in every disk group mounted by the ASM instance.

• In a database instance, V$ASM_FILE displays no rows.

Viewing ASM Files

V$ASM_FILE contains information about the physical attributes of ASM files. This includes settings for striping, redundancy, and block size, along with information about the file size, type, redundancy status, creation date, and date of last modification.

Because it does not contain directory path information, V$ASM_FILE is often joined with V$ASM_ALIAS and V$ASM_DISKGROUP to construct fully qualified file names and full alias paths. See the description of V$ASM_ALIAS earlier in this lesson for an example.
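As an illustration of such a join, a query like the following lists each file's alias name alongside its physical attributes. This is a sketch (not the example referred to above); the column names are those documented for V$ASM_FILE and V$ASM_ALIAS:

SQL> SELECT a.name, f.type, f.striped, f.redundancy, f.bytes
  2  FROM V$ASM_FILE f, V$ASM_ALIAS a
  3  WHERE f.group_number = a.group_number
  4  AND f.file_number = a.file_number;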


ASM Directories

• ASM disk groups contain the following system-generated hierarchical directory structure for storing ASM files.

• You can create your own directories within this hierarchy to store aliases that you create.

ASM Directories

ASM disk groups contain a system-generated hierarchical directory structure for storing ASM files. The system-generated file name that ASM assigns to each file represents a path in this directory hierarchy. The following is an example of a system-generated file name:

+DATA/racdb/DATAFILE/SYSTEM.262.676172197

The plus sign represents the root of the ASM file system. The DATA directory is the parent directory for all files in the DATA disk group. The racdb directory is the parent directory for all files in the racdb database, and the DATAFILE directory contains all data files for the racdb database.

You can create your own directories within this hierarchy to store aliases that you create. Thus, in addition to having user-friendly alias names for ASM files, you can have user-friendly paths to those names. For example, the following user-defined directory might be used to store a collection of alias file names:

+DATA/racdb/my_directory

User-defined directories may be created at any level below the disk group directories. That is, you cannot create a user-defined directory at the root (+) level.


Managing ASM Directories

Use the ALTER DISKGROUP statement to create, rename, and drop ASM directories.

Examples:

ALTER DISKGROUP diskgroup_name
{ ADD DIRECTORY 'dir_name' [, 'dir_name' ]...
| DROP DIRECTORY 'dir_name' [ FORCE | NOFORCE ]
    [, 'dir_name' [ FORCE | NOFORCE ]]...
| RENAME DIRECTORY 'old_dir_name' TO 'new_dir_name'
    [, 'old_dir_name' TO 'new_dir_name' ]...
};

SQL> ALTER DISKGROUP DATA ADD DIRECTORY '+DATA/mydir';

SQL> ALTER DISKGROUP DATA
  2  RENAME DIRECTORY '+DATA/mydir' TO '+DATA/myotherdir';

Managing ASM Directories

You can use the ALTER DISKGROUP statement to create, rename, and drop user-defined directories. You cannot rename or drop system-created directories.

When using the ADD DIRECTORY clause, you cannot create a nested directory unless the parent directory already exists. For example, the following statement fails unless first_dir already exists:

ALTER DISKGROUP DATA
  ADD DIRECTORY '+DATA/first_dir/second_dir';

When using the DROP DIRECTORY clause, you cannot drop a directory containing alias names unless you also specify the FORCE clause. If you specify DROP DIRECTORY ... FORCE, ASM will recursively remove all the subdirectories and aliases that exist below the specified directory.
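For example, the following statement (a sketch using the sample mydir directory shown on the slide) removes a directory together with any aliases and subdirectories it contains:

SQL> ALTER DISKGROUP DATA DROP DIRECTORY '+DATA/mydir' FORCE;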

The ASM Command-Line utility (ASMCMD) can also be used to manage ASM directories.

The ASMCMD commands:
• mkdir: Makes a directory
• mv: Renames a file or directory
• rm: Removes a file or directory and its aliases
• rmalias: Removes the alias only
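For example, the directory operations shown earlier in SQL could be performed as follows (an illustrative ASMCMD session; mydir and myotherdir are the same sample names used on the slide, and rm is shown here on an empty directory):

ASMCMD> mkdir +DATA/mydir
ASMCMD> mv +DATA/mydir +DATA/myotherdir
ASMCMD> rm +DATA/myotherdir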

You can also perform all these operations from Enterprise Manager.


Managing Alias File Names

Use the ALTER DISKGROUP statement to create, rename, and drop ASM aliases.

Examples:

ALTER DISKGROUP diskgroup_name
{ ADD ALIAS 'alias_name' FOR 'filename'
    [, 'alias_name' FOR 'filename' ]...
| DROP ALIAS 'alias_name' [, 'alias_name' ]...
| RENAME ALIAS 'old_alias_name' TO 'new_alias_name'
    [, 'old_alias_name' TO 'new_alias_name' ]...
};

SQL> ALTER DISKGROUP DATA ADD ALIAS '+DATA/mydir/system.dbf'
  2  FOR '+DATA/sample/datafile/system.262.676172197';

SQL> ALTER DISKGROUP DATA RENAME ALIAS '+DATA/mydir/datafile.dbf'
  2  TO '+DATA/payroll/compensation.dbf';

Managing Alias File Names

ASM alias file names, or aliases, provide a more user-friendly means of referring to ASM files than the system-generated file names. For example, the following ALTER DATABASE statement uses an alias to refer to the data file that is being taken offline:

ALTER DATABASE DATAFILE '+DATA/mydir/myfile.dbf' OFFLINE;

You can create an alias for a file when you create it in the database. For example:

CREATE TABLESPACE myts
  DATAFILE '+DATA/mydir/myts.dbf' SIZE 50M ONLINE;

The alias name is created automatically, but the command fails if the directory alias does not already exist.

Note: A file created in this manner is not an OMF file.

You can add an alias to an existing file by using the ADD ALIAS clause of the ALTER DISKGROUP statement. You can create an alias in any system-generated or user-created ASM directory already in existence. However, you cannot create an alias at the root level (+). You cannot create multiple aliases for the same ASM file.

The ALTER DISKGROUP statement also provides clauses to RENAME and DROP existing aliases. Dropping an alias does not drop the referenced file. If you use the ALTER DISKGROUP ... DROP FILE statement to drop an ASM file, the alias associated with the file is also removed.

ASMCMD can also be used to manage ASM aliases.
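For example, the following statement (a sketch using the sample alias created on the slide) removes an alias while leaving the underlying file intact:

SQL> ALTER DISKGROUP DATA DROP ALIAS '+DATA/mydir/system.dbf';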


ASM Intelligent Data Placement

ASM Intelligent Data Placement (IDP):

• Places files in hot or cold regions based on:
  – File attribute
  – Template
• Improves performance by:
  – Placing frequently accessed files in the hot region
• Improves disk usage by:
  – Placing infrequently used files in the cold region

IDP attribute changes to existing data are implemented by the next rebalance operation.

ASM Intelligent Data Placement

Access to data on the outermost tracks of a disk (furthest from the spindle) can be as much as 50% faster than access to data on the innermost tracks (closest to the spindle). Placing frequently accessed data in the region furthest from the spindle therefore increases performance. This region is called the hot region, and the region closest to the spindle is the cold region.

Some customers would never use the cold region of the disk nearest the spindle due to the increased latency. With Intelligent Data Placement, you can assign an ASM file to a disk region for best performance and use the lower performance region for less frequently accessed files. Using the disk region settings, you can ensure that frequently accessed data is placed on the outermost tracks (hot region). In addition, when files with similar access patterns are located physically close together, latency is reduced. Intelligent Data Placement also allows the placement of primary and mirror extents in different hot or cold regions.

Intelligent Data Placement settings can be specified for a file or in disk group templates. The disk region settings for a file can be modified after the file has been created. The disk region setting can improve I/O performance by placing more frequently accessed data in regions furthest from the spindle, while reducing your cost by increasing the usable space on a disk.


Guidelines for Intelligent Data Placement

Intelligent Data Placement works best for:

• Data files that are accessed at different rates

• Disk groups more than 50% full

• Disks with the geometry exposed to ASM (JBOD)

Intelligent Data Placement requires the following disk group attributes:
• COMPATIBLE.ASM = 11.2.0 or higher

• COMPATIBLE.RDBMS = 11.2.0 or higher

Guidelines for Intelligent Data Placement

Intelligent Data Placement works best for the following:
• Databases with data files that are accessed at different rates. A database that accesses all data files in the same way is unlikely to benefit from Intelligent Data Placement.
• Disk groups that are more than 50% full. For disk groups less than 50% full, the management overhead is likely to cancel any performance benefit. By default, the disk fills from the outer (hot) sectors, so the first 25% to 50% of the disk is in the hot region and enjoys the faster access. The cold region extents follow the hot region extents, but if all the extents are in the outer sectors of the disk, all are accessed as though they were marked hot. As the disk fills, the performance difference between files marked hot and cold becomes more pronounced. The actual performance benefit depends on the file access pattern.
• Disks that have better performance at the beginning of the media relative to the end. Because Intelligent Data Placement leverages the geometry of the disk, it is well suited to just a bunch of disks (JBOD). In contrast, a storage array with logical units (LUNs) composed of concatenated volumes masks the geometry from Oracle ASM. When LUNs are used, Intelligent Data Placement uses the logical sector numbering, which may have no relation to the physical disk layout.

The COMPATIBLE.ASM and COMPATIBLE.RDBMS disk group attributes must be set to 11.2 or higher to use Intelligent Data Placement.


Assigning Files to Disk Regions

• Create a file template:

  ALTER DISKGROUP data ADD TEMPLATE datafile_hot
    ATTRIBUTES (HOT MIRRORHOT);

• Alter the file attributes:

  ALTER DISKGROUP data MODIFY FILE
    '+data/orcl/datafile/users.259.679156903'
    ATTRIBUTES (COLD MIRRORCOLD);

Assigning Files to Disk Regions

There are two methods for assigning a file to a region.

The first is with a template. Set the primary region and mirror region attributes to the desired regions, and then use the template to create the ASM file.

The second method is to change the region attributes of the file. You can change these attributes either through SQL*Plus with the command shown in the slide or using Enterprise Manager as shown in a following slide.

Changing the assigned region of the disk for a file does not trigger a rebalance operation. If enough space exists in the proper region, any new extents are created in the proper region. The region specification sets a preferred policy, not a commitment. The next rebalance operation, after any IDP changes that require existing data to be moved, will attempt to move extents into the specified region.
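For example, after changing a file's region attributes, a manual rebalance such as the following causes ASM to attempt to move the existing extents into the specified region (the power value here is illustrative):

SQL> ALTER DISKGROUP data REBALANCE POWER 4;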


Assigning Files to Disk Regions with Enterprise Manager

Assigning Files to Disk Regions with Enterprise Manager

An ASM file can be assigned to a disk region through Enterprise Manager. On the Edit File page, you can assign the primary and mirror extents of a file to either the hot or cold region. An online log file—an example is shown in the slide—with a high volume of writes should be assigned to the hot region for both the primary and mirror extents. The default settings of cold region are shown in the slide’s example.


Monitoring Intelligent Data Placement

Performance benefit is dependent on:

• File access pattern

• Correct file access evaluation

Use performance views to select the candidate files:
• V$ASM_FILE

• V$ASM_DISK

• V$ASM_DISK_STAT

Performance benefit can be verified by testing.

• Review Automatic Workload Repository (AWR) reports before and after making a change.

• Use V$FILEMETRIC and V$FILEMETRIC_HISTORY.

Monitoring Intelligent Data Placement

Inappropriate file policy specification can degrade performance. Measuring the frequency of access to each file is an important diagnostic tool. Columns have been added to the ASM fixed views to provide this information.

V$ASM_FILE has the following columns for each file: HOT_READS, HOT_WRITES, HOT_BYTES_READ, HOT_BYTES_WRITTEN, COLD_READS, COLD_WRITES, COLD_BYTES_READ, and COLD_BYTES_WRITTEN. These columns help you to select the files that should be moved to another region and verify that the selection is correct. The V$ASM_DISK and V$ASM_DISK_STAT views have the same statistics as V$ASM_FILE, but aggregated at the disk level. These statistics can be viewed with the ASMCMD command iostat --region.
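For example, a query like the following (a sketch; the columns are those listed above) shows each file's region assignments next to its hot and cold activity, which helps identify candidates for a region change:

SQL> SELECT group_number, file_number, primary_region, mirror_region,
  2         hot_reads, hot_writes, cold_reads, cold_writes
  3  FROM V$ASM_FILE;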

To verify that the change of regions actually changes performance, record the AVERAGE_READ_TIME, AVERAGE_WRITE_TIME, PHYSICAL_READS, and PHYSICAL_WRITES values from V$FILEMETRIC and V$FILEMETRIC_HISTORY before and after the change, or use AWR reports (especially the File I/O statistics in the I/O Stats section). The ASMCMD command iostat --region -t lists these statistics.


Disk Group Templates

• Templates are used to set striping, redundancy, and region attributes for ASM files.

• Striping attribute options are:
  – FINE: 128 KB stripe size
  – COARSE: 1 allocation unit (AU_SIZE attribute) stripe size
• Redundancy attribute options are:
  – MIRROR: Two-way mirroring
  – HIGH: Three-way mirroring
  – UNPROTECTED: No ASM mirroring
• A set of default templates is created for each disk group.
  – Default template settings depend on disk group redundancy.
• The redundancy attribute applies only to NORMAL redundancy disk groups.

Disk Group Templates

Templates are used to set redundancy (mirroring) and striping attributes of each individual file created in an ASM disk group. When a file is created, redundancy and striping attributes are set for that file based on an explicitly named template or the system template that is the default template for the file type.

When a disk group is created, ASM creates a set of default templates for that disk group. The set consists of one template for each file type (data file, control file, redo log file, and so on) that is supported by ASM. For example, a template named ONLINELOG provides the default file redundancy and striping attributes for all redo log files written to that disk group. Default template settings for redundancy depend on the disk group type. The default template for data files for a normal redundancy disk group sets two-way mirroring, whereas the corresponding default template in a high redundancy disk group sets three-way mirroring. You can modify the default templates.

The striping attribute of templates applies to all disk group types (normal redundancy, high redundancy, and external redundancy). In practice, the redundancy attribute of templates applies only to normal redundancy disk groups. It is effectively ignored for high-redundancy disk groups in which every file is always three-way mirrored. It is also effectively ignored for external redundancy disk groups in which no files are mirrored by ASM. Nevertheless, each type of disk group gets a full set of templates, and the redundancy value in each template is always set to the proper default for the disk group type.


Disk Group Templates (continued)

The following table lists the default templates and the attributes that are associated with different ASM file types. As the table shows, the initial redundancy value of each default template depends on the type of disk group that the template belongs to. The region values for all the system templates are set to COLD and MIRRORCOLD.

Template Name (File Type)  Striping  Mirroring,         Mirroring,       Mirroring,
                                     Normal Redundancy  High Redundancy  External Redundancy
-------------------------  --------  -----------------  ---------------  -------------------
CONTROLFILE                FINE      HIGH               HIGH             UNPROTECTED
DATAFILE                   COARSE    MIRROR             HIGH             UNPROTECTED
ONLINELOG                  FINE      MIRROR             HIGH             UNPROTECTED
ARCHIVELOG                 COARSE    MIRROR             HIGH             UNPROTECTED
TEMPFILE                   COARSE    MIRROR             HIGH             UNPROTECTED
BACKUPSET                  COARSE    MIRROR             HIGH             UNPROTECTED
PARAMETERFILE              COARSE    MIRROR             HIGH             UNPROTECTED
DATAGUARDCONFIG            COARSE    MIRROR             HIGH             UNPROTECTED
FLASHBACK                  FINE      MIRROR             HIGH             UNPROTECTED
CHANGETRACKING             COARSE    MIRROR             HIGH             UNPROTECTED
DUMPSET                    COARSE    MIRROR             HIGH             UNPROTECTED
XTRANSPORT                 COARSE    MIRROR             HIGH             UNPROTECTED
AUTOBACKUP                 COARSE    MIRROR             HIGH             UNPROTECTED
OCRFILE                    COARSE    MIRROR             HIGH             UNPROTECTED
ASMPARAMETERFILE           COARSE    MIRROR             HIGH             UNPROTECTED


• In either an ASM or database instance, V$ASM_TEMPLATE displays one row for every template present in every disk group mounted by the ASM instance.

• The ASMCMD lstmpl command shows all the templates for a given disk group.

$ asmcmd lstmpl -l -G data
Group  Group
Name   Num  Name              Stripe  Sys  Redund  PriReg  MirrReg
DATA   1    ARCHIVELOG        COARSE  Y    MIRROR  COLD    COLD
DATA   1    ASMPARAMETERFILE  COARSE  Y    MIRROR  COLD    COLD
DATA   1    CONTROLFILE       FINE    Y    HIGH    COLD    COLD
DATA   1    DATAFILE          COARSE  Y    MIRROR  COLD    COLD

Viewing Templates

Viewing Templates

The V$ASM_TEMPLATE view displays disk group templates and their attribute settings. The following query lists all the templates associated with the DATA disk group. The SYSTEM column returns Y for system-generated templates and N for user-defined templates.

SQL> SELECT T.NAME, T.REDUNDANCY, T.STRIPE, T.SYSTEM
  2  FROM V$ASM_TEMPLATE T, V$ASM_DISKGROUP G
  3  WHERE T.GROUP_NUMBER = G.GROUP_NUMBER
  4  AND G.NAME = 'DATA';

NAME                      REDUNDANCY STRIPE SYSTEM
------------------------- ---------- ------ ------
PARAMETERFILE             MIRROR     COARSE Y
DUMPSET                   MIRROR     COARSE Y
NOMIRROR                  UNPROT     COARSE N
DATAFILE                  MIRROR     COARSE Y
ONLINELOG                 MIRROR     FINE   Y
ARCHIVELOG                MIRROR     COARSE Y
CONTROLFILE               HIGH       FINE   Y
...

Note: The output from this example has been truncated for page-formatting purposes.


Managing Disk Group Templates

Use the ALTER DISKGROUP statement to add, modify, and drop disk group templates.

Examples:

ALTER DISKGROUP diskgroup_name
{ { ADD | MODIFY } TEMPLATE
    qualified_template_clause [, qualified_template_clause ]...
| DROP TEMPLATE template_name [, template_name ]...
};

qualified_template_clause ::=
  template_name ATTRIBUTES
  ([ MIRROR | HIGH | UNPROTECTED ] [ FINE | COARSE ])

SQL> ALTER DISKGROUP DATA ADD TEMPLATE unprot ATTRIBUTES (UNPROTECTED);

SQL> ALTER DISKGROUP DATA ADD TEMPLATE reliable ATTRIBUTES (HIGH FINE);

SQL> ALTER DISKGROUP DATA MODIFY TEMPLATE DATAFILE ATTRIBUTES (FINE);

Managing Disk Group Templates

Creating your own templates allows you to set the right combination of attributes to meet your requirements. It also allows you to specify an intuitive name for the attribute set. For example, in a normal or high redundancy disk group, you might use a template with a redundancy setting of UNPROTECTED to create transient files without the overhead of mirroring. This can be a useful way of creating short-term databases for testing and development purposes.

Using the ALTER DISKGROUP statement, you can add new templates to a disk group, modify existing ones, or drop templates.

When adding a new template, you can omit either the redundancy or the stripe attribute setting. If you omit an attribute setting, it defaults as follows:

• For the redundancy attribute, the value defaults to MIRROR for a normal redundancy disk group, HIGH for a high redundancy disk group, and UNPROTECTED for an external redundancy disk group.

• For the stripe attribute, the value defaults to COARSE.
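For example, the following statement (illustrative; the template name is arbitrary) specifies only the stripe attribute, so in a normal redundancy DATA disk group the redundancy attribute defaults to MIRROR:

SQL> ALTER DISKGROUP DATA ADD TEMPLATE finestripe ATTRIBUTES (FINE);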


Managing Disk Group Templates with ASMCMD

• Use the mktmpl command to add disk group templates.

• Use the chtmpl command to modify disk group templates.

• Use the rmtmpl command to drop disk group templates.

• Use the lstmpl command to view disk group templates.

ASMCMD> mktmpl -G DATA --redundancy unprotected unprotect

ASMCMD> chtmpl -G DATA --striping fine DATAFILE

ASMCMD> rmtmpl -G DATA unprotect

ASMCMD> lstmpl -G DATA

Managing Disk Group Templates (continued)

You can modify both user-defined and system-generated default templates. Modifying a template does not change the attributes of any existing files. The attributes of a modified template are applied only to new files that reference the modified template, either explicitly or implicitly. You can drop only user-defined templates, not system-default templates.

The V$ASM_TEMPLATE view lists all of the templates known to the ASM instance.


Using Disk Group Templates

You can apply disk group templates to newly created files in the following ways:

• Alias file name with template

• Incomplete file name with template

• DB_CREATE_FILE_DEST database initialization parameter

SQL> ALTER TABLESPACE myspace ADD
  2  DATAFILE '+DATA(mytemplate)/mydir/myspace02.dbf' SIZE 50M;

SQL> ALTER TABLESPACE myspace ADD
  2  DATAFILE '+DATA(mytemplate)' SIZE 50M;

SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST = '+DATA(mytemplate)';

SQL> CREATE TABLESPACE yourspace;

Using Disk Group Templates

You can reference a template name when creating a file by using either an alias or an incomplete file name. This allows you to assign desired attributes based on an individual file rather than on the file type.

You can also use an incomplete file name with a template in the DB_CREATE_FILE_DEST database initialization parameter. Using this setting applies the named template to files created by statements such as CREATE TABLESPACE and ALTER TABLESPACE.


ASM Access Control Lists

ASM Access Control Lists (ACLs)

• Set permissions at the ASM file level.

• Set grants to groups or users.
  – Users are database software owners.
  – Users are identified by the OS user ID.

• ACLs are available only on Linux and UNIX operating systems.

ASM Access Control Lists

ASM access control lists (ACLs) provide optional protection for ASM files. The objective of the ASM access control list is not security but separation of duties, to prevent accidental file damage. Without ACLs, any user with the SYSDBA privilege may access ASM files in the mounted disk group, up to and including removing them.

To set up Oracle ASM File Access Control, you must create separate operating system groups for the OSASM, OSDBA for ASM, and OSDBA for database groups. The OSDBA group for the database must be different for each database instance using the same ASM instance.

Each ASM file is created by a DBUSER. This DBUSER is usually the owner of a database instance. The ASM files created for that database are owned by that DBUSER. The operating system user ID of the database software owner identifies the DBUSER to the ASM instance. Access is limited by the operating system effective user ID number of the DBUSER. The operating system user of a running database instance is automatically added to a disk group when the database instance accesses that disk group and creates files.

Each DBUSER can create access control lists.

The ASM ACL includes a user group, a list of DBUSERS, but not any passwords. ASM trusts the OS authentication mechanisms. A DBUSER is a member of the OSDBA group for ASM.


ASM ACL Prerequisites

Access control lists for ASM files require:

• Linux or UNIX operating system

• Job role separation at the OS level

• Disk group attributes:
  – COMPATIBLE.ASM set to 11.2 or higher
  – COMPATIBLE.RDBMS set to 11.2 or higher
  – ACCESS_CONTROL.ENABLED set to TRUE
  – ACCESS_CONTROL.UMASK set to a mask value

• Permissions: none (0), read (4), or read-write (6)

• Mask values: 6 (removes all), 2 (removes write), 0 (removes nothing)

ASM ACL Prerequisites

Access control lists for ASM are available only on Linux and UNIX operating systems. Job role separation must be configured. Each database and ASM instance must have different owners.

Disk group attributes must be set. COMPATIBLE.ASM and COMPATIBLE.RDBMS must be set to 11.2 or higher. For each disk group that uses ACLs, set ACCESS_CONTROL.ENABLED to TRUE and set ACCESS_CONTROL.UMASK to a mask value.

ACCESS_CONTROL.ENABLED must be set to TRUE before ACCESS_CONTROL.UMASK can be set. The umask value removes permissions from full access of read-write for owner, group, and others. In permissions, 6 indicates read-write, 4 indicates read, and 0 indicates none. A umask value of 0 removes nothing; 2 removes the write privilege; 6 removes the read-write privilege. Concatenating the values gives the permissions for owner, group, and others. For example, a umask of 026 causes files to have permissions of 640, which is read-write for the owner, read for the group, and no access for all other users.

ALTER DISKGROUP DATA
  SET ATTRIBUTE 'access_control.enabled' = 'true';

ALTER DISKGROUP DATA
  SET ATTRIBUTE 'access_control.umask' = '026';

Equivalent ASMCMD commands:

ASMCMD> setattr -G data access_control.enabled true
ASMCMD> setattr -G data access_control.umask 026

Attributes of the disk group can also be set with the asmca tool.
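To confirm the settings, the attributes can be queried from the V$ASM_ATTRIBUTE view (a sketch; the group number of the DATA disk group is assumed here to be 1):

SQL> SELECT name, value FROM V$ASM_ATTRIBUTE
  2  WHERE group_number = 1 AND name LIKE 'access_control%';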


Managing ASM ACL with SQL Commands

The ACL for ASM can be managed with SQL commands:

ALTER DISKGROUP ... ADD USERGROUP ... WITH MEMBER

ALTER DISKGROUP DROP USERGROUP

ALTER DISKGROUP MODIFY USERGROUP ADD MEMBER

ALTER DISKGROUP MODIFY USERGROUP DROP MEMBER

ALTER DISKGROUP ADD USER

ALTER DISKGROUP DROP USER

ALTER DISKGROUP SET PERMISSION

ALTER DISKGROUP SET OWNERSHIP

SELECT * FROM V$ASM_USER

SELECT * FROM V$ASM_USERGROUP

Managing ASM ACL with SQL Commands

Access control lists in ASM can be managed with the SQL commands shown.

To create a user group ACL, use the following command:

ALTER DISKGROUP diskgroup_name ADD USERGROUP 'groupname'
  WITH MEMBER 'user' [, 'user' ]...

Each user must already exist.
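For example (a sketch; grid1 and grid2 are hypothetical OS user names that own database software, and test_grp is an arbitrary group name):

SQL> ALTER DISKGROUP DATA ADD USER 'grid1', 'grid2';
SQL> ALTER DISKGROUP DATA ADD USERGROUP 'test_grp'
  2  WITH MEMBER 'grid1', 'grid2';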

Any user with the SYSDBA or SYSASM privilege may create groups. Only the group owner or an ASM administrator may change or drop groups. Only an ASM administrator may add or drop users. Only the owner of a file or an ASM administrator may change the ownership of a file.

Users of files are usually the database owners, and users are added automatically as files are created in a disk group. Adding users to a disk group should seldom be needed. An OS user that is not a database owner could be added to a disk group and a user group.

The SET PERMISSION clause modifies the permissions of an Oracle ASM file. Note that setting the read-only permission on a file that has read-write permission revokes the write permission. Only the file owner or the Oracle ASM administrator can change the permissions of a file. You cannot change the permissions on an open file.

You cannot change the ownership on an open file.
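For example, the following statements (sketches; the file name reuses the sample alias from an earlier slide, and grid1 is a hypothetical OS user) set permissions of 640 on a file and transfer its ownership. Both fail if the file is open:

SQL> ALTER DISKGROUP DATA SET PERMISSION
  2  OWNER = read write, GROUP = read only, OTHER = none
  3  FOR FILE '+DATA/mydir/myfile.dbf';

SQL> ALTER DISKGROUP DATA SET OWNERSHIP OWNER = 'grid1'
  2  FOR FILE '+DATA/mydir/myfile.dbf';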


Managing ASM ACL with ASMCMD Commands

The ACL for ASM can be managed with ASMCMD commands:

chgrp usergroup list_of_files

chmod mode list_of_files

chown user[:usergroup ] list_of_files

groups diskgroup user

grpmod { -a | -d } diskgroup usergroup user1 [user2]…

lsgrp [-Ha] [ -d diskgroup ] [ pattern_expr ]

lsusr [-Ha] [ -d diskgroup ] [ pattern_expr ]

mkgrp diskgroup usergroup [user1] [user2]...

mkusr diskgroup user

passwd user

rmgrp diskgroup usergroup

rmusr diskgroup user

Managing ASM ACL with ASMCMD Commands

The ASMCMD commands allow the storage administrator to perform the same operations as were shown with SQL commands, but in a more familiar format. The ASMCMD commands are patterned after the UNIX commands and have the same restrictions as the SQL commands.

• chmod: Modifies the permissions of an Oracle ASM file or list of files. Note that setting the read-only permission on a file that has the read-write permission revokes the write permission. Only the file owner or the Oracle ASM administrator can change the permissions of a file. You cannot change the permissions on an open file.

• chown: Changes the owner (and optionally the user group) of a file or list of files. You cannot change the ownership of an open file.

In ASMCMD, the user can connect using the -a option to set the privilege, either SYSASM or SYSDBA, but the OS user invoking the command must be a member of the OS group having that privilege. SYSASM is the default. To connect with the SYSDBA privilege, use the following syntax:

asmcmd -a SYSDBA

Managing ASM ACL with Enterprise Manager

Managing ASM ACL with Enterprise Manager

The ASM access control lists can be viewed and managed through Enterprise Manager. This slide shows the attributes and permissions associated with each file.

Managing ASM ACL with Enterprise Manager

Managing ASM ACL with Enterprise Manager (continued)

On the Edit File page, permissions and ownership of an ASM file can be modified. The ownership and permissions cannot be changed on an open file.

ASM ACL Guidelines

• Use ACLs to prevent accidental file damage.

• Be careful when enabling access control that it does not block a user’s access to a critical file.

ASM ACL Guidelines

ASM access control lists are intended as a safety feature rather than a security feature. Without access control lists, any user with SYSDBA privileges can manipulate any file in the disk group, possibly unintentionally dropping or damaging files belonging to other databases. This becomes more important with storage consolidation where multiple databases share a single ASM installation, either on a single server or clustered.

To implement ACLs, each database must have a different OS owner and group, and the ASM instance must have yet another owner as shown in the lesson titled “Oracle Grid Infrastructure Architecture.” Then even if the databases share disk groups, ACLs can prevent a user with SYSDBA privilege in ASM from damaging a file belonging to another database.

When configuring ACL permissions and changing ownership, be careful that the ACL does not block access to a critical file or disk group. For example, make sure the database can access the disk group that contains the control file.

Quiz

RMAN is the only way to back up an Oracle Database that resides in ASM.

1. True

2. False

Answer: 2

It is possible to back up an Oracle Database by using a custom PL/SQL routine with DBMS_FILE_TRANSFER or by using the XML DB repository. RMAN is, however, the preferred method of backing up Oracle Databases that reside in ASM. RMAN is very well integrated with ASM and provides many advanced backup and recovery features to aid administrators.

Quiz

Which of the following ASM file name forms will always be a valid way of referencing an existing ASM file?

1. Fully Qualified

2. Numeric

3. Alias

4. Alias with Template

5. Incomplete

6. Incomplete with Template

Answer: 1, 2

Options 3 and 4 are not correct because there might not be an alias defined for some ASM files. Options 5 and 6 are not correct because incomplete names can be used only when creating files, not for referencing existing files.

Summary

In this lesson, you should have learned how to:

• Use different client tools to access ASM files

• Describe the format of a fully qualified ASM file name

• Explain how ASM files, directories, and aliases are created and managed

• Describe and manage disk group templates

Administering ASM Cluster File Systems

Objectives

After completing this lesson, you should be able to:

• Administer ASM Dynamic Volume Manager

• Manage ASM volumes

• Implement ASM Cluster File System

• Manage ASM Cluster File System (ACFS)

• Use ACFS Snapshots

• Use command-line tools to manage ACFS

ASM Files and Volumes

[Slide diagram: An ASM disk group holds ASM files, including database files and dynamic volume files. The Dynamic Volume Manager in the ASM instance presents each dynamic volume to the OS kernel as an ASM device file (for example, /dev/asm/dgbvol1). On top of a dynamic volume sits the ASM Cluster File System (ACFS), with ACFS snapshots, or a third-party file system.]

ASM Files and Volumes

The Automatic Storage Management (ASM) feature of Oracle database has been extended in Oracle Grid Infrastructure 11g Release 2 to include support for a general purpose cluster file system, the ASM Cluster File System (ACFS). To understand the operation of this feature, some terminology needs to be defined and explained.

At the operating system (OS) level, the ASM instance provides the disk group, which is a logical container for physical disk space. The disk group can hold ASM database files and ASM dynamic volume files. The ASM Dynamic Volume Manager (ADVM) presents the volume device file to the operating system as a block device. The mkfs utility can be used to create an ASM file system in the volume device file.

Four OS kernel modules loaded in the OS provide the data service. On Linux, they are: oracleasm, the ASM module; oracleadvm, the ASM dynamic volume manager module; oracleoks, the kernel services module; and oracleacfs, the ASM file system module. These modules provide the ASM Cluster File System, ACFS snapshots, the ADVM, and cluster services. The ASM volumes are presented to the OS as a device file at /dev/asm/<volume name>-<number>.

The volume device file appears as another ASM file to the ASM instance and the asmcmd utility. The ASM layers are transparent to the OS file system commands. Only the files and directories created in ACFS and the ACFS snapshots are visible to the OS file system commands. Other file system types such as ext3 and NTFS may be created in an ADVM volume by using the mkfs command on Linux and the advmutil command on Windows.
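The Linux device naming convention described above can be illustrated with a small shell sketch. The volume name and numeric suffix below are example values only; on a real system the suffix is assigned by ADVM and is not predictable.

```shell
# Compose a device path per the /dev/asm/<volume name>-<number> convention.
# "testvol" and "403" are illustrative; ADVM assigns the real suffix.
volume_name=testvol
suffix=403
device_path="/dev/asm/${volume_name}-${suffix}"
echo "$device_path"   # -> /dev/asm/testvol-403
```

On a live system, asmcmd volinfo reports the actual device name for a volume.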

ACFS and ADVM Architecture Overview

[Slide diagram: ASM view — an ASM volume file inside an ASM disk group built from ASM disks, managed through SQL, ASMCA, or Enterprise Manager and served by the VDBG, VBGn, and VMB processes. OS view — the ASM dynamic volume appears as a block device (/dev/asm/volname-id on Linux, \\.\asm-volume_name-nnn on Windows) that file systems, such as the ASM cluster file system, or disk-based applications use through tools such as mkfs and mount.]

ACFS and ADVM Architecture Overview

In 10g, Automatic Storage Management (ASM) is both a file system and volume manager built specifically for Oracle database files. Oracle 11g Release 2 includes the ASM dynamic volume manager (ADVM) and Oracle ASM cluster file system (ACFS). ADVM provides volume management services and a standard disk device driver interface to clients. Clients, such as file systems and other disk-based applications, issue I/O requests to ADVM volume devices as they would to other storage devices on a vendor operating system.

ADVM extends ASM by providing a disk driver interface to storage backed by an ASM file. The administrator can use the ADVM to create volumes that contain file systems. These file systems can be used to support files beyond Oracle database files such as executables, report files, trace files, alert logs, and other application data files. With the addition of ADVM and ACFS, ASM becomes a complete storage solution of user data for both database and non-database file needs. ACFS is intended as a general file system accessible by the standard OS utilities. ACFS can be used in either a single server or a cluster environment.

Note: Oracle ACFS file systems cannot be used for an Oracle base directory or an Oracle grid infrastructure home that contains the software for Oracle Clusterware, ASM, Oracle ACFS, and Oracle ADVM components. Oracle ACFS file systems cannot be used for an OS root directory or boot directory.

ACFS and ADVM Architecture Overview (continued)

ASM volumes serve as containers for storage presented as a block device accessed through ADVM. File systems or user processes can do I/O on this “ASM volume device” just as they would on any other device. To accomplish this, ADVM is configured into the operating system. A volume device is constructed from an ASM file. ASM file extents map the ASM volume file to logical blocks located on specific physical devices. Additional processes are started as part of the ASM instance and serve as intermediaries between the ASM instance and ADVM. To use the ADVM driver, an ASM instance must exist with at least one disk group mounted that can be used to contain an ASM volume file.

An ASM volume is an ASM file. It inherits the properties of the ASM disk group and behaves similar to any other ASM file. ASM volume storage is automatically rebalanced whenever a storage configuration change occurs. This reconfiguration can be performed while an ASM volume is in use. Because ASM uses direct I/O, ASM volumes offer performance equivalent to raw disks.

An OS device file is created automatically when an ASM volume is created using either asmcmd, SQL*Plus, ASMCA, or the Enterprise Manager graphical interfaces. On Linux, this device file is created in the /dev/asm directory. You can configure both disk group mount and volume enable operations to occur automatically upon ASM instance startup. The volume device file names are unique clusterwide and persistent across all nodes in the cluster that have an ASM instance running with the disk group mounted and volumes enabled.

Upon Linux system startup, the Oracle Clusterware startup will load the drivers (oracleacfs, oracleoks, and oracleadvm). The ASM instance is started by the Oracle Clusterware (CRS) agent, which will also mount the appropriate ASM disk groups and enable volumes. The CRS agent will then mount any ACFS file systems registered in the Oracle Cluster Registry (OCR).

Similar actions are performed on Windows.

ACFS file systems are accessed through OS file system tools and APIs on UNIX and Linux systems, and accessed through Windows file system tools and APIs on Windows systems. Remote access is supported using standard NAS file access protocols such as network file systems (NFS) and common internet file system (CIFS) in support of heterogeneous file data sharing.

The ACFS File System and ADVM components are installed onto each host along with the other ASM components into the Grid Infrastructure home location. The ACFS components consist of drivers that are dynamically loadable OS modules, several command-line tools, and a set of processes that execute within the ASM instance. However, loading the ACFS drivers requires root privileges on UNIX/Linux and Administrator privileges on Windows. So the configuration and loading of the ACFS drivers is performed by the root scripts associated with the Oracle Grid Infrastructure installation.

ACFS file systems are generally mounted on all cluster synchronization service (CSS) cluster members. In the event of a member failure, another cluster member will recover any outstanding metadata transactions on behalf of the failed member. In addition, any lock tokens held by the failed cluster members will be recovered and the failed member will be I/O fenced from the active CSS cluster. Following recovery, access by other active cluster members and any remote client systems may resume.

ADVM Processes

New ASM instance background processes:

• Volume Driver Background (VDBG)

• Volume Background (VBGn)

• Volume Membership Background (VMB)

[Slide diagram: The VDBG, VBGn, and VMB processes in the ASM instance link the ASM disk group to the ASM dynamic volume block device (/dev/asm/volname-id on Linux, \\.\asm-volume_name-nnn on Windows).]

ADVM Processes

A small number of processes will be added to the ASM instance when a volume is enabled. These processes are not started when there are no volumes configured.

• The Volume Driver Background (VDBG) process forwards ASM requests to the dynamic volume manager driver, such as locking or unlocking an extent for rebalancing, resizing a volume, taking a disk offline, adding or dropping a disk, and forcing a disk group dismount. VDBG is a fatal background process, so the unplanned death of this process brings down the ASM instance.

• Volume Background (VBGn) processes wait for requests from the dynamic volume manager driver that need to be coordinated with the ASM instance. An example of such a request would be opening or closing an ASM volume file when the dynamic volume manager driver receives an open for a volume (possibly due to a file system mount request) or close for an open volume (possibly due to a file system unmount request). The unplanned death of any of these processes does not have an effect on the ASM instance.

• Volume Membership Background (VMB) coordinates cluster membership with the ASM instance.

ADVM Restrictions

• Partitioning of dynamic volumes (using fdisk or similar) is not supported.

• Do not use raw to map ADVM volume block devices into raw volume devices.

• Do not create multipath devices over ADVM devices.

• Do not create ASMLIB devices over ADVM devices.

• ADVM supports all ASM supported storage solutions except NFS and Exadata.

• ADVM volumes cannot be used as a boot device or a root file system.

ADVM Restrictions

Although ADVM provides a standard disk device interface for dynamic volumes, the following restrictions should be noted:

• Device partitions are not supported on Oracle ADVM dynamic volumes. Dynamic volumes supersede traditional device partitioning. Each volume is individually named and may be configured for a single file system. Oracle ADVM volumes may be created on demand from ASM disk group storage and dynamically resized as required. These attributes make Oracle ADVM volumes far more flexible than physical devices and associated partitioning schemes.

• On Linux platforms, Oracle ADVM volume devices are created as block devices regardless of the configuration of the storage underpinning the ASM disk group. Do not use the raw command to map Oracle ADVM volume block devices into raw volume devices.

• You should not create multipath devices over Oracle ADVM volume devices. Multipathing should be applied over the disk devices that are initially consumed by ASM to construct the disk group underpinning an ADVM volume.

• You should not use ASMLIB over an ADVM volume device. You cannot layer ASM over ASM in a recursive fashion as this serves no useful purpose and is not supported.

ASM Cluster File System

• General purpose scalable file system
  – Journaling, extent based
  – Single node and cluster

• Multi-OS platform (Linux and Windows at initial release)
  – POSIX, X/OPEN file system solution for UNIX/Linux
  – Windows file system solution for Windows platforms

• Accessible through NAS protocols (NFS, CIFS)

• Integrated with Oracle Clusterware for cluster support

• Integrated with Oracle system management tools
  – Oracle installation and configuration
  – Enterprise Manager and ASM storage management tools
  – Native OS file system management tools

ASM Cluster File System

The ASM Cluster File System (ACFS) extends Automatic Storage Management (ASM) by providing a robust, modern, general-purpose, extent-based, journaling file system. ACFS provides support for files such as application binaries, report files, trace files, alert logs, and other application data files. With the addition of ACFS, ASM becomes a complete storage management solution for both Oracle database and non-database files. ACFS scales from small files to very large files (exabytes) and supports large numbers of nodes in a cluster. ACFS is an extent-based file system, providing high performance, and it utilizes a log-based metadata transaction engine for file system integrity and fast recovery. The ACFS on-disk structure supports endian-neutral metadata. ACFS file systems can be exported to remote clients through industry-standard protocols such as NFS and CIFS.

ACFS eliminates the need for third-party cluster file system solutions and simplifies management for all file types in single node and grid computing environments. ACFS supports dynamic file system expansion and contraction without any downtime. ACFS makes use of the ASM mirroring and striping features in addition to hardware RAID functionality. Note: ACFS on a single server does not use cluster features, and thus is not a cluster file system.

ACFS can be managed with tools familiar to the DBA: Enterprise Manager, SQL*Plus, and ASMCMD. ACFS is integrated with the native OS file management tools, so the file system is transparent to the users. ASMCMD has UNIX-like syntax and command names that should be familiar to the system administrator and storage administrator. A new graphical tool (asmca) has been introduced to manage ASM and ACFS.

ADVM Space Allocation

A volume allocates space in Volume Allocation Units (VAU) at creation and resize.

• VAU is the stripe column multiplied by the volume extent size.

• Volume extent is statically assigned based on the disk group AU.
  – Volume extent is 64 MB when the AU is 1 MB.
  – Each extent is allocated round-robin on different disks.

• Stripe column is the number of stripes used inside a volume.
  – Stripe column can range from 1 to 8.
  – Stripe column default is 4.

ADVM Space Allocation

ADVM allocates space from an ASM disk group for a volume. The volume allocation unit (VAU) is the smallest allocation when a volume is created and the smallest amount of space that can be added to a volume.

The VAU is based on the volume extent size and the value of the stripe column. The volume extent is assigned based on the disk group allocation unit (AU), which can vary from 1 MB to 64 MB in powers of 2 (1, 2, 4, 8, ..., 64). The default AU of 1 MB yields a volume extent size of 64 MB.

The value of the stripe column is the number of volume extents that are allocated in each VAU. Thus VAU is the stripe column multiplied by the volume extent. The volume extents are allocated round-robin (each on the next disk in the disk group). If the stripe column is set to 1, volume striping is turned off.

Example: If the stripe column is 4 and the AU is 1 MB, the volume extent is 64 MB and the VAU will be 256 MB. When the volume is created, multiples of the VAU will be allocated. If you asked for a volume of 300 MB in size, you would get a volume of 512 MB, assuming the space is available in the disk group. If you resized the volume, the space would be added or dropped in 256 MB chunks.
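The rounding in this example can be checked with a little shell arithmetic. This is an illustrative sketch, not an Oracle utility; it assumes the 1 MB AU, 64 MB volume extent, and stripe column of 4 stated above.

```shell
extent_mb=64          # volume extent size for the default 1 MB AU
stripe_column=4
vau_mb=$(( stripe_column * extent_mb ))   # VAU = 4 * 64 MB = 256 MB

requested_mb=300
# Round the requested size up to the next multiple of the VAU
allocated_mb=$(( (requested_mb + vau_mb - 1) / vau_mb * vau_mb ))
echo "VAU ${vau_mb} MB: requesting ${requested_mb} MB allocates ${allocated_mb} MB"
```

The echo line reports a 256 MB VAU and a 512 MB allocation, matching the example.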

Striping Inside the Volume

Inside a volume, space is allocated based on:

• Stripe column – the number of stripes (default 4)

• Stripe width – the size of each stripe (default 128 KB)

Striping Inside the Volume

Stripe column and stripe width are properties that are set when you create the volume. The stripe column determines both how the initial space for the volume is allocated and how space inside the volume is used.

The volume will be formatted for a file system. As the file system allocates space for files, the files are stored in pieces the size of the stripe width; with the defaults, these are 128 KB pieces. Each piece goes into the next volume extent in turn. Because the volume extents are themselves placed round-robin on the disks of the disk group, the pieces are spread across the disks in the disk group.

Volume Striping: Example

Example:

• Stripe column is 8, stripe width is 16 KB

• First 200 KB of a file inside the file system is written as 13 chunks of 16 KB stripes across eight allocation units.

Volume Striping: Example

For this example, assume the default AU size of 1 MB. This means that the volume extent is 64 MB. For simplicity, assume that there are eight disks in the disk group.

Since the stripe column is 8, eight volume extents will be created in every volume allocation unit of 512 MB. Eight volume extents are shown, one on each of the eight disks. Note: Each VAU will be filled before the next VAU is used.

The diagram on this page shows how ACFS volume striping works. The stripe width is 16 KB. A new file of 200 KB is written, occupying 13 chunks of 16 KB spread round-robin across the eight volume extents. Consequently, reads and writes to this file are spread across eight disks instead of one.
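The chunk placement can be modeled with a short shell sketch. This is illustrative only, not an Oracle tool, and it assumes chunks are dealt round-robin starting at the first volume extent.

```shell
stripe_width_kb=16   # stripe width in this example
stripe_column=8      # eight volume extents per VAU
file_kb=200

# Ceiling division: number of 16 KB chunks needed for the 200 KB file
chunks=$(( (file_kb + stripe_width_kb - 1) / stripe_width_kb ))
echo "chunks: $chunks"   # 13 chunks

# Round-robin placement: chunk i lands in volume extent i mod 8
i=0
while [ "$i" -lt "$chunks" ]; do
  echo "chunk $i -> extent $(( i % stripe_column ))"
  i=$(( i + 1 ))
done
```

With 13 chunks over 8 extents, extents 0 through 4 receive two chunks each and extents 5 through 7 one each, so I/O on the file touches all eight disks.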

Volume Striping: Example

Example:

• Disk group with eight disks and external redundancy

• Default AU size of 1 MB in use

• Another ACFS file of 90 KB written as six chunks of 16 KB stripes across the same eight allocation units

Volume Striping: Example (continued)

Continuing the previous example, the next file of 90 KB is spread across each of the same volume extents in 16 KB chunks. This pattern continues until the first set of volume allocation units is filled and another set is allocated.
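The same ceiling arithmetic gives the chunk count for this second file (a sketch under the example's 16 KB stripe width, not an Oracle utility):

```shell
stripe_width_kb=16
file_kb=90
# Ceiling division: a 90 KB file needs six 16 KB chunks (the last one partial)
chunks=$(( (file_kb + stripe_width_kb - 1) / stripe_width_kb ))
echo "90 KB -> $chunks chunks"
```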

Creating an ACFS Volume

Create the volume:

$ asmcmd volcreate -d DATA -s 100M testvol

View the volume information:

$ asmcmd volinfo -d DATA testvol

Make a mount point directory:

$ mkdir /u01/app/oracle/acfsdata/testvol

Make the file system (as root):

# mkfs -t acfs /dev/asm/testvol-403

Mount the file system to the mount point:

# mount -t acfs /dev/asm/testvol-403 \
  /u01/app/oracle/acfsdata/testvol

Creating an ACFS Volume

The COMPATIBLE.ADVM attribute of the disk group must be 11.2.0 or higher before you can create an ADVM volume in the disk group. You can set the attribute with the command:

asmcmd setattr -G DATA compatibility.advm 11.2.0.0.0

To create and mount an ACFS volume, use the procedure shown above.

As a user with SYSDBA privilege in the ASM instance, perform the following:

1. Create the volume by using the volcreate command.
2. View the volume information to determine the volume name.
3. Make a mount point directory. This is where the volume will be mounted. It should have appropriate ownership and permissions.

As the root user:

1. Make the file system. This command formats the volume for an ACFS file system.
2. Mount the volume to the mount point.

Note: The mkfs command can be used to create other file systems on an ADVM volume. Only one file system is allowed per volume. Only the ACFS file system is supported as a clustered file system.

Create an ASM Dynamic Volumewith Enterprise Manager

Create an ASM Dynamic Volume with Enterprise Manager

1. On the Enterprise Manager home page, scroll down to the Instances section. Click the name of one of the ASM instances.

Create an ASM Dynamic Volumewith Enterprise Manager

Create an ASM Dynamic Volume with Enterprise Manager (continued)

2. On the Automatic Storage Management page, click the ASM Cluster File System tab.
3. On the ASM Cluster File System tab, click Create.
4. On the Create ASM Cluster File System page, click Create ASM Volume.

Create an ASM Dynamic Volumewith Enterprise Manager

Create an ASM Dynamic Volume with Enterprise Manager (continued)

5. On the Create ASM Volume page, enter a volume name and size, and click Ok. In this example, a volume named testvol with a size of 1 GB is created.

If you click Show SQL, you will see the SQL command:

ALTER DISKGROUP DATA ADD VOLUME TESTVOL SIZE 1G;

Optionally, you can specify a region of the physical disks in the disk group where the extents should be placed.

The ASMCMD command to add a volume is:

ASMCMD> volcreate -d DATA -s 1G testvol

ASMCMD and SQL also have commands to view the volume information:

ASMCMD> volinfo -a
SQL> SELECT * FROM V$ASM_VOLUME;

Managing ADVM Dynamic Volumes

Managing ADVM Dynamic Volumes

Enterprise Manager has pages that allow you to create, delete, and manage ASM volumes.

Create an ASM Cluster File Systemwith Enterprise Manager

Create an ASM Cluster File System with Enterprise Manager

Create an ASM Cluster File System on the ASM volume testvol that was created previously. Note that a disk group number is added to the volume name. This number is system assigned.

1. Assign a volume label. In this example, it is labeled test.
2. Assign a mount point. The mount point must be an existing directory. Best practice is to place the ACFS mount points in a common directory below the ORACLE_BASE directory. In this example, the ORACLE_BASE is /u01/app/oracle, the common directory is acfsdata, and the mount point is /u01/app/oracle/acfsdata/test. The mount point directory must be created on all the nodes of the cluster.

If you click Show Command, the commands to make the file system on the ASM volume and to register the volume and mount point are shown. For Linux, the commands are:

$ /sbin/mkfs -t acfs -b 4k /dev/asm/testvol-93 -n "test"
$ /sbin/acfsutil registry -f -a /dev/asm/testvol-93 \
      /u01/app/oracle/acfsdata/test

3. Click OK to perform the operations.

You will be asked for the OS credentials of a user that has SYSASM privileges in the ASM instance.

The file system will be created on the ASM volume and the volume will be registered in the cluster. The file system is then mounted on all the nodes of the cluster.


Manage Dynamic Volumes with SQL*Plus

Command examples:

SQL> ALTER DISKGROUP DGROUPA ADD VOLUME asmvol1 SIZE 10G;
SQL> ALTER DISKGROUP DGROUPA RESIZE VOLUME asmvol1 SIZE 15G;
SQL> ALTER DISKGROUP DGROUPA DROP VOLUME asmvol1;
SQL> ALTER DISKGROUP DGROUPA ENABLE VOLUME asmvol1;
SQL> ALTER DISKGROUP ALL DISABLE VOLUME ALL;
SQL> ALTER DISKGROUP DGROUPA MODIFY VOLUME asmvol1 USAGE 'acfs';

Manage Dynamic Volumes with SQL*Plus

SQL*Plus has added clauses to the ALTER DISKGROUP command that allow you to manage ASM volumes. Notice that each volume is managed as a part of a disk group. SQL*Plus allows you to create volumes with the ADD VOLUME clause, and to resize, drop, enable, and disable volumes. The MODIFY VOLUME clause allows you to set the intelligent data placement option [HOT|COLD], the mount point, and the usage name of the volume.


As the grid owner, register the volume:

$ /sbin/acfsutil registry -a -f \

/dev/asm/volume1-167 \

/u01/app/oracle/acfsdata/volume1

Registering an ACFS Volume

You may register an ACFS volume. Registering the volume enables the cluster-ready services daemon to mount the volume automatically at startup. The -a option adds this volume to the registry. The -f option is used with the add option to allow replacement of an existing registry entry. You may register the volume before or after you mount the volume.


Creating an ACFS Volume with ASMCA

In ASMCA, click the Volume tab, and then, on the Volumes page, click Create.

An alternative way is to click the ASM Cluster File System tab, and then click Create. On the Create ASM Cluster File System page (shown in the slide, upper left) click the Volume field and select Create Volume.

Both methods bring you to the Create Volume page, where you specify the name, disk group, and size of the volume. Optionally, with the Advanced options you can specify the mirroring and striping attributes.

Click OK to create the ACFS volume.


Creating the ACFS File System with ASMCA

Once the volume is created, on the Create ASM Cluster File System page, select the volume, the type of file system, and mount point. ASMCA creates the file system on the volume and registers the volume, but the mount must be done by the root or administrator user.


Mounting the ACFS File System with ASMCA

Because mounting a file system requires root privileges, ASMCA does not mount the file system; instead, it generates the command for you to run.

Starting on the ASMCA ASM Cluster File Systems tab, right-click the new Dismounted volume and select Show Mount Command. On the Mount ACFS Command page, verify the mount point and click Generate Command.

On the Mount ACFS Command page, follow the instructions. Execute the command shown on each of the nodes in the cluster as the root user. The command in this example is:

/bin/mount -t acfs /dev/asm/images-389 /u01/app/oracle/acfsdata/images


Managing ACFS with EM

Enterprise Manager provides the same functionality as ASMCA.

Managing ACFS with EM

The DBA, System Administrator, or Storage Administrator can use OS tools, ASMCMD, ASMCA, or Enterprise Manager to manage an ASM Cluster File System. Each Administrator can use the tools that they prefer. ACFS provides extensions to existing tools on Linux and UNIX systems and new commands on Windows for use by System Administrators.


Command syntax:

asmcmd volcreate dgname volname -s size
    [-r redundancy] [-w stripe_width]
    [-c stripe_columns] [-p [hot|cold]]
    [-i [hot|cold]]
asmcmd volresize dgname volname -s newsize [-f]
asmcmd voldelete dgname volname
asmcmd volenable [-d dgname] [-d dgname -n volname]
asmcmd voldisable [-d dgname] [-d dgname -n volname]
asmcmd volset dgname volname [-u usage_message] |
    [-m mount_path] [-p [hot|cold]] [-i [hot|cold]]
asmcmd volinfo [-d dgname] [-n volname]

Extending ASMCMD for Dynamic Volumes

The commands shown in the slide have been added to ASMCMD to allow the ASM volumes to be managed from ASMCMD. The ASMCMD commands allow you to have the same level of control through ASMCMD as you do through the SQL*Plus commands or Enterprise Manager.


Linux-UNIX File System APIs

ACFS supports:

• Standard POSIX and X/OPEN file system APIs and commands

• User commands: access, ar, cat, chattr, chgrp, chmod, chown, cksum, compress, cp, cpio, dd, df, diff, dir, dirname, du, file, find, fuser, grep, gunzip, gzip, link, ln, lockfile, ls, mkdir, mkfifo, mknod, mktemp, more, mv, od, pwd, rcp, rename, rm, rmdir, size, stat, string, sync, tail, tar, unlink, …

• Administrator commands: fsck, mkfs, mount, and umount

Linux-UNIX File System APIs

The ACFS file system is integrated with the POSIX and X/OPEN file system APIs and commands. This allows users and administrators to use ACFS transparently. As shown in the slide, the file system access commands are supported with ACFS transparently.

Even the administration commands such as mkfs, fsck, mount, and umount are used to manage the ACFS file system.
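Because ACFS presents a standard POSIX interface, an ordinary shell sequence works unchanged against an ACFS mount point. The sketch below uses a temporary directory as a stand-in for an ACFS mount such as /u01/app/oracle/acfsdata/test, so it can be tried on any Linux/UNIX system; the directory and file names are illustrative assumptions only.

```shell
#!/bin/sh
# The same standard tools work on ACFS as on any POSIX file system.
# ACFS_DIR is a stand-in for an ACFS mount point; a temp directory is
# used here so the sequence runs anywhere.
ACFS_DIR=$(mktemp -d)

mkdir -p "$ACFS_DIR/logs"                 # create a directory
echo "hello acfs" > "$ACFS_DIR/logs/a"    # write a file
cp "$ACFS_DIR/logs/a" "$ACFS_DIR/logs/b"  # copy it
ls -l "$ACFS_DIR/logs"                    # list contents
df "$ACFS_DIR"                            # report file system space
cat "$ACFS_DIR/logs/b"                    # read it back

rm -rf "$ACFS_DIR"                        # clean up
```

Run against a real ACFS mount point, df reports the ACFS device (/dev/asm/volume-nnn) just as it would any other mounted file system.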


Linux-UNIX Extensions

• Create an ACFS file system:

mkfs [-vf] -t acfs [-b blksz] [-n name] device [blocks]

$ mkfs -t acfs /dev/asm/vol1-nnn

• Mount an ACFS file system:

mount [-v] -t acfs [-o options] device dir

$ mount -t acfs /dev/asm/vol1-nnn \
      /oracle/cluster1/myacfs

• Unmount an ACFS file system:

umount [-v] device|dir

• Check and repair an ACFS file system:

fsck [-avnf] -t acfs [info] device

$ fsck -t acfs /dev/asm/vol1-nnn

Linux-UNIX Extensions

The administration commands add some extensions to the common commands to handle the differences between ACFS and other file systems. Most of these extensions use the familiar file system type and file system–specific option parameters just as other file systems do.

The example commands in the grey boxes show the most common usage. Each command is typed on one line. The device file is /dev/asm/volume-nnn, where the volume name is appended with a disk group identifier, represented by -nnn in the slide.

Note: The backslash character shown as part of the mount command in the example above is a continuation character and used for formatting purposes only.


Command            Function
acfsutil info      Display new ACFS file and file system features (ACFS, snapshots, registry, size) and information
acfsutil snap      Create and delete ACFS snapshots
acfsutil registry  Register an ACFS file system with the ACFS mount registry (that is, a cluster-wide fstab)
acfsutil rmfs      Remove an ACFS file system
acfsutil size      Resize an ACFS file system
acfsutil tune      View or modify ACFS tunables

ACFS Platform-Independent Commands

New ACFS commands for Linux/UNIX and Windows (not natively supported)

ACFS Platform-Independent Commands

The slide shows a set of commands that are the same on both Windows and Linux/UNIX platforms. The commands listed allow you to display and manipulate ACFS file systems.


ACFS Snapshots

ACFS snapshots:

• Are space efficient. Snapshots store:
  – Metadata
  – Original versions of changed blocks
  – Using sparse files
• Use a directory on the same volume
• Can be used to:
  – Revert to a version in the snapshots
  – Recover a deleted file
  – Back up a consistent data set
• Are limited to 63 snapshot views per volume

ACFS Snapshots

An Oracle ACFS snapshot is an online, read-only, point-in-time copy of an Oracle ACFS file system. The snapshot copy is initially sparse and merely references the storage allocation information maintained by the file system. Before an Oracle ACFS file extent is modified or deleted, its current value is copied to the snapshot using a copy-on-write (COW) technique to maintain the snapshot’s point-in-time view of the file system.

ACFS snapshots provide a low-cost way to preserve the state of a volume at a point in time. When a snapshot is created, a set of metadata about the volume is created in the same volume. This metadata includes the directory structure and the names of all the files. The snapshot consists of pointers to blocks in the volume that have not changed since the snapshot was created, and the actual blocks if the file has changed. After the snapshot is created, data blocks are written to the snapshot as files change, but only the original versions of changed blocks are written. The snapshot allows you to recover deleted files, revert files to the version at the time of the snapshot, and back up a consistent file set by taking a backup of the snapshot.
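The copy-on-write idea can be mimicked with ordinary shell tools. The toy sketch below is not how ACFS implements snapshots internally (ACFS works at the block level with sparse files); it only illustrates the visible behavior, using hard links in a temporary directory, and every path in it is a made-up example.

```shell
#!/bin/sh
# Toy illustration of a space-efficient snapshot (NOT the ACFS internals):
# hard links share data blocks until a file is replaced, at which point
# the "snapshot" still sees the original content.
FS=$(mktemp -d)
mkdir "$FS/data" "$FS/snap"
echo "version 1" > "$FS/data/file.txt"

# "Snapshot": link each file instead of copying its data.
ln "$FS/data/file.txt" "$FS/snap/file.txt"

# Replace the live file; the snapshot keeps the old blocks.
echo "version 2" > "$FS/data/file.tmp"
mv "$FS/data/file.tmp" "$FS/data/file.txt"

cat "$FS/data/file.txt"   # version 2
cat "$FS/snap/file.txt"   # version 1

rm -rf "$FS"
```

The analogy holds only because mv replaces the whole file; ACFS goes further and preserves individual changed extents, which is why its snapshots stay consistent even for in-place updates.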


Managing ACFS Snapshots

Manage ACFS snapshots with acfsutil commands.

• Create snapshots:

$ acfsutil snap create snapshot_2 \
      /u01/app/oracle/acfsdata/testvol

• Delete snapshots:

$ acfsutil snap delete snapshot_2 \
      /u01/app/oracle/acfsdata/testvol

• View file system information, including snapshots:

$ acfsutil info fs <mount_point>
$ ls -l <mount_point>/.ACFS/snaps

Managing ACFS Snapshots

ACFS snapshots can be managed with the acfsutil snap commands to create and delete snapshots. You can view the contents of the snapshots by using standard OS commands to search and view the contents of a directory.

An example of creating a snapshot on the ACFS volume mounted at /u01/app/oracle/acfsdata/testvol with a snapshot name of snapshot_2 is:

acfsutil snap create snapshot_2 /u01/app/oracle/acfsdata/testvol

This create command will create a directory named:

/u01/app/oracle/acfsdata/testvol/.ACFS/snaps/snapshot_2

The snapshot is immediately available for use.
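The snapshot path layout described above is predictable, so it can be built from the mount point and snapshot name alone; this one-line sketch uses the course's example values.

```shell
#!/bin/sh
# Snapshot contents appear under <mount_point>/.ACFS/snaps/<snapshot_name>.
# Build that path from the example mount point and snapshot name.
MOUNT_POINT=/u01/app/oracle/acfsdata/testvol
SNAP_NAME=snapshot_2
SNAP_PATH="$MOUNT_POINT/.ACFS/snaps/$SNAP_NAME"
echo "$SNAP_PATH"
# → /u01/app/oracle/acfsdata/testvol/.ACFS/snaps/snapshot_2
```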

You can view the file system information, including the number of snapshots, with:

acfsutil info fs /u01/app/oracle/acfsdata/testvol


Managing ACFS Snapshots (continued)

/u01/app/oracle/acfsdata/testvol
    ACFS Version: 11.2.0.1.0.0
    flags:        MountPoint,Available
    mount time:   Wed Jul 1 12:00:10 2009
    volumes:      1
    total size:   268435456
    total free:   163360768
    primary volume: /dev/asm/testvol-403
        label:
        flags:             Primary,Available,ADVM
        ...
        ADVM stripe width: 131072
    number of snapshots:  1
    snapshot space usage: 32768

You can see the names of the snapshots with the OS command:

ls /u01/app/oracle/acfsdata/testvol/.ACFS/snaps


Managing ACFS Snapshots with Enterprise Manager

Enterprise Manager allows you to create, manage, and search snapshots. Select the volume and click Create Snapshots to create a snapshot on that volume. To view snapshots for a particular volume, click the snapshots link for that volume.


Creating ACFS Snapshots

Enterprise Manager allows you to create an ACFS snapshot. You can also open a telnet session from Enterprise Manager and use the acfsutil snap create command.


Managing Snapshots

On the ASM Cluster File System page of Enterprise Manager, you can create new snapshots, delete existing snapshots, and search multiple snapshots. To search or browse a single snapshot, click the snapshot name link.


Viewing Snapshots

Using Enterprise Manager, you can view the contents of a snapshot and search for files or directories. In this example, the orahome directory in snapshot_3 is being viewed.


ACFS Backups

An ACFS file system may be backed up using:

• Standard OS file system backup tools

• Oracle Secure Backup

• Third-party backup tools

ACFS snapshots present a stable point-in-time view.

ACFS Backups

ACFS file systems accommodate and enable a variety of backup and restore solutions. Oracle Secure Backup and other enterprise backup tools support full and incremental backups of file systems by reading file system files through the native OS file system interface and storing the results on a choice of storage media (tape or disk). If the file system is active, backup results will reflect the dynamic nature of file modifications in the file system. ACFS file system snapshots can be employed to present a stable point-in-time view of the file system to a backup tool. If the storage subsystem supports atomic copy domains (for example, NetApp or EMC), a system administrator can leverage a storage array snapshot for backing up ACFS file systems.
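Because a snapshot is just a directory tree, standard OS tools can archive it. The sketch below uses tar on a temporary stand-in directory; a real backup would point SNAP_DIR at a snapshot path such as <mount_point>/.ACFS/snaps/snapshot_2, and the file name orders.dat is hypothetical.

```shell
#!/bin/sh
# Sketch: back up a point-in-time directory with a standard OS tool.
# SNAP_DIR stands in for an ACFS snapshot directory; any path works the
# same way because the snapshot is presented as an ordinary directory.
SNAP_DIR=$(mktemp -d)
echo "stable data" > "$SNAP_DIR/orders.dat"

BACKUP=$(mktemp -u).tar.gz
tar -czf "$BACKUP" -C "$SNAP_DIR" .   # archive the snapshot contents

tar -tzf "$BACKUP"                    # verify the archive lists the file

rm -rf "$SNAP_DIR" "$BACKUP"
```

Backing up the snapshot rather than the live mount point is what gives the archive a consistent view while the file system stays in use.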


ACFS Performance

ACFS performance benefits from:

• Distribution and load balancing of ASM file segments

• ACFS file extents distributed across ASM file segments

• User and metadata caching

• In-memory updates of transaction logs

ACFS Performance

The ACFS storage allocation and I/O access schemes benefit from the distribution and load balancing of ASM file segments across the ASM disks in a disk group. ACFS storage allocation further distributes individual ACFS file extents across ASM file segments for increased distribution of individual file data and file I/O parallelism. ACFS user and metadata caching techniques accommodate a variety of non-database application workload I/O access patterns with high cache hit rates that result in reduced I/O traffic to the underlying ASM file and deferred updates of modified file cache data. The ACFS transaction logs are first updated in memory, deferring the actual transfer of log and file data through periodic updates to the associated ASM file.


Using ACFS Volumes After Reboot

In a cluster configuration, ACFS drivers are automatically reloaded after a reboot.

In a stand-alone server configuration:

• Oracle Restart cannot load ACFS drivers

• Root or administrator must load drivers

Using ACFS Volumes After Reboot

The ACFS drivers are loaded when the Grid Infrastructure is installed, but they are not automatically reloaded by Oracle Restart in a single-server environment.

Note: Oracle cluster-ready services reloads these drivers in a RAC environment. The drivers must be loaded by the root user on Linux or administrator on Windows.

On Linux, load the drivers with the command:

# /<GridHome>/bin/acfsload start -s

Enable the volume in the ASM instance:

ALTER ... ENABLE VOLUME <vol_name>;

Mount the volume as root:

# /bin/mount -t acfs -rw <volume_name> <mount_point>
# /bin/mount -t acfs -rw /dev/asm/asm01-480 /u01/app/oracle/acfs01

View file systems as root:

# acfsutil info fs


ACFS Views

Available in SQL*Plus connected to the ASM instance:

• V$ASM_FILESYSTEM contains one row for every mounted file system.
• V$ASM_ACFSVOLUMES contains one row for every ACFS device file that is a member of a mounted ACFS file system.

ASMCMD volinfo -a provides similar volume information:

• Includes unmounted file systems
• Does not include free space or snapshot space usage

The acfsutil info fs <file system> command gives detailed information on each file system.

ACFS Views

The information and availability of ASM dynamic performance views vary depending on the instance that queries the view. The ASM dynamic performance views for ACFS are available only from the ASM instance.

The V$ASM_FILESYSTEM view contains one row for every mounted file system. This view includes space usage and snapshot space usage.

The V$ASM_ACFSVOLUME view contains one row for every ACFS device file that is a member of a mounted ACFS file system.

Similar information can be obtained from the ASMCMD volinfo command. The volinfo output includes unmounted volumes. The space usage information is not provided by the volinfo command.

The command acfsutil info fs gives detailed information about each mounted file system, including space usage, snapshot space used, volume name, and stripe configuration. The acfsutil command is available to the root or administrator user.


Quiz

The ADVM volume can only be used for ACFS file systems.

1. True

2. False

Answer: 2

Other file systems can be created on ADVM volumes, but they are not clustered file systems simply because they reside on an ADVM volume.


Summary

In this lesson, you should have learned how to:

• Administer ASM dynamic volume manager

• Manage ASM volumes

• Implement ASM cluster file system (ACFS)

• Manage ACFS with various tools

• Use ACFS snapshots


Practice 10 Overview: Managing ACFS

This practice covers the following topics:

• Managing an ACFS file system
  – Create
  – Register
  – Mount

• Managing ACFS Snapshots


Appendix A
Practices and Solutions



Table of Contents

Practices for Lesson 1 ................................................... 3
    Practice 1-1: Discovering Cluster Environment ........................ 4
Practices for Lesson 2 ................................................... 8
    Practice 2-1: Performing Preinstallation Tasks for Oracle Grid Infrastructure ... 9
    Practice 2-2: Installing Oracle Grid Infrastructure ................. 18
    Practice 2-3: Creating ASM Disk Groups .............................. 31
    Practice 2-4: Creating ACFS File System ............................. 34
    Practice 2-5: Installing a Silent Database .......................... 36
Practices for Lesson 3 .................................................. 38
    Practice 3-1: Verifying, Starting, and Stopping Oracle Clusterware .. 39
    Practice 3-2: Adding and Removing Oracle Clusterware Configuration Files ... 48
    Practice 3-3: Performing a Backup of the OCR and OLR ................ 52
Practices for Lesson 4 .................................................. 54
    Practice 4-1: Adding a Third Node to Your Cluster ................... 55
Practices for Lesson 5 .................................................. 76
    Practice 5-1: Protecting the Apache Application ..................... 77
Practices for Lesson 6 .................................................. 85
    Practice 6-1: Working with Log Files ................................ 86
    Practice 6-2: Working with OCRDUMP .................................. 89
    Practice 6-3: Working with CLUVFY ................................... 92
Practices for Lesson 7 .................................................. 96
    Practice 7-1: Administering ASM Instances ........................... 97
Practices for Lesson 8 ................................................. 111
    Practice 8-1: Administering ASM Disk Groups ........................ 112
Practices for Lesson 9 ................................................. 118
    Practice 9-1: Administering ASM Files, Directories, and Templates .. 119
Practices for Lesson 10 ................................................ 129
    Practice 10-1: Managing ACFS ....................................... 130
    Practice 10-2: Uninstalling RAC Database ........................... 140



Practices for Lesson 1

The practice for Lesson 1 is a discovery of the classroom environment.



Practice 1-1: Discovering Cluster Environment

In this practice, you will explore the classroom and cluster environment in preparation for cluster installation and management.

1) In an Oracle cluster, there are pieces of information that are important for a successful installation. Several of these are unique to each cluster. In order to make this practice document generic to allow every participant to use the same document and specific enough that each participant can be successful, an environment file has been created called /home/oracle/labs/st_env.sh. This file holds the information that is specific to your cluster. This script is not a script supplied with Oracle software. This script was developed specifically for this class.

2) Open a VNC session as the oracle OS user on your first node. (The instructor will give this node name.) Click the VNC icon on the desktop. In the VNC Viewer:Connection Details box, enter the name of your first node with ':1' as a suffix. In the VNC Viewer:Authentication box, enter the password for the oracle user. The password is oracle.

3) In the VNC session, open a terminal window. Right-click in the VNC session window, and select Open Terminal from the shortcut menu.

4) Find and list the /home/oracle/labs/st_env.sh script.

$ cat /home/oracle/labs/st_env.sh
#Common environment Variables
NODELIST=`cat /home/oracle/nodeinfo`
NODE1=host01
NODE2=host02
NODE3=host03
CLUSTER_NAME=cluster01
GNS_NAME=host01-gns
# Hardware environment definitions
export ST_SOFTWARE_STAGE_GRID=/stage/clusterware/Disk1
export ST_SOFTWARE_STAGE_DB=/stage/database/Disk1
export ST_SOFTWARE_STAGE_DEINST=/stage/deinstall
export ST_NODE1=$NODE1
export ST_NODE2=$NODE2
export ST_NODE3=$NODE3
export ST_CLUSTER_NAME=$CLUSTER_NAME
export ST_NODE3_VIP=${NODE3}-vip
export ST_NODE_DOMAIN=example.com
export ST_NODE_LIST=${NODE1},${NODE2}
export ST_NODE_LIST2=${NODE1},${NODE2},${NODE3}
export ST_GNS_NAME=${GNS_NAME}.${ST_NODE_DOMAIN}
# Software Ownership definitions
export ST_GRID_OWNER=grid
export ST_GRID_OWNER_HOME=/home/grid
export ST_DB1_OWNER=oracle
export ST_DB1_OWNER_HOME=/home/oracle
# Grid Infrastructure definitions
export ST_ASM_INSTANCE1=+ASM1
export ST_ASM_INSTANCE2=+ASM2


export ST_ASM_INSTANCE3=+ASM3
export ST_ASM_HOME=/u01/app/11.2.0/grid
# Database 1 definitions
export ST_DB1_NAME=orcl
export ST_DB1_INSTANCE1=orcl1
export ST_DB1_INSTANCE2=orcl2
export ST_DB1_INSTANCE3=orcl3
export ST_DB1_HOME=/u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1
# Database 2 definitions (standby, rcat for example)
export ST_DB2_NAME=orcl
export ST_DB2_INSTANCE1=orcl1
export ST_DB2_INSTANCE2=orcl2
export ST_DB2_INSTANCE3=orcl3
export ST_DB2_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
# Database 3 definitions
export ST_DB3_NAME=
export ST_DB3_INSTANCE1=

5) The environment for this class can change. The node names will be different for every cluster, and the cluster names will be different for each setup in the class. Discover and record the values for your cluster. Note: The environment variables created for this class begin with ST_ and are referenced by prefixing the variable name with $. For example, the ST_NODE1 variable is referenced as $ST_NODE1.

a) Set the environment variables found in st_env.sh for your terminal session. To set the variables in a session from a script, you must “source” the script—that is, cause it to run in the same shell. This is done by using the “.” command as shown in the code example.

$ . /home/oracle/labs/st_env.sh
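Why sourcing matters can be demonstrated with a throwaway script. This is a minimal sketch; /tmp/demo_env.sh is a hypothetical stand-in, not part of the lab environment:

```shell
#!/bin/bash
# Hypothetical stand-in for st_env.sh, used only to demonstrate
# the difference between executing and sourcing.
cat > /tmp/demo_env.sh <<'EOF'
export DEMO_VAR=hello
EOF

# Executing runs the script in a child shell, so its exports
# do not survive into the current session.
bash /tmp/demo_env.sh
echo "after execute: ${DEMO_VAR:-unset}"   # prints: after execute: unset

# Sourcing with "." runs it in the current shell, so they do.
. /tmp/demo_env.sh
echo "after source: $DEMO_VAR"             # prints: after source: hello
```

The same reasoning is why the practices always use `. /home/oracle/labs/st_env.sh` rather than executing the script directly.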

b) What are the short names for the nodes of your cluster? These nodes are called host01, host02, and host03 in the practices that follow. You will substitute your node names or use the environment variable to refer to the node. For example, $ST_NODE2 refers to your second node, and in the code output, is shown as host02. Your nodes will be named something else. Record the names of your nodes.

First node ($ST_NODE1) _________________

Second node ($ST_NODE2) __________________

Third node ($ST_NODE3) __________________

$ echo $ST_NODE1
host01    << Your node names will be different
$ echo $ST_NODE2
host02


$ echo $ST_NODE3
host03

6) What is the name of your cluster?

Cluster name ($ST_CLUSTER_NAME) ________________

$ echo $ST_CLUSTER_NAME
cluster01    << Your cluster name will be different

7) What is the node domain?

Node domain ($ST_NODE_DOMAIN) _____________________

$ echo $ST_NODE_DOMAIN
example.com

8) What is the name of your GNS ($ST_GNS_NAME)? __________________________

$ echo $ST_GNS_NAME
cluster01-gns.example.com    << Your GNS name will be different

9) Verify that the environment variables for your cluster have been set in your environment. Use the env | grep ST_ command.

$ env | grep ST_
ST_DB1_NAME=orcl
ST_DB1_OWNER_HOME=/home/oracle
ST_SOFTWARE_STAGE_GRID=/stage/clusterware/Disk1
ST_CLUSTER_NAME=cluster01
ST_NODE1=host01
ST_NODE2=host02
ST_NODE3=host03
ST_GRID_OWNER=grid
ST_ASM_HOME=/u01/app/11.2.0/grid
ST_NODE_LIST=host01,host02
ST_GRID_OWNER_HOME=/home/grid
ST_DB2_INSTANCE2=orcl2
ST_GNS_NAME=cluster01-gns.example.com
ST_DB2_INSTANCE3=orcl3
ST_SOFTWARE_STAGE_DB=/stage/database/Disk1
ST_DB2_NAME=orcl
ST_DB2_INSTANCE1=orcl1
ST_DB1_INSTANCE1=orcl1
ST_DB1_INSTANCE3=orcl3
ST_DB1_INSTANCE2=orcl2
ST_DB1_OWNER=oracle
ST_DB2_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
ST_NODE3_VIP=host03-vip
ST_DB3_NAME=
ST_ASM_INSTANCE3=+ASM3
ST_ASM_INSTANCE2=+ASM2
ST_ASM_INSTANCE1=+ASM1
ST_DB3_INSTANCE1=
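The visual check above can also be scripted. A minimal sketch, with stand-in values exported so it runs outside the classroom (on a lab node you would source st_env.sh instead):

```shell
# Check that required ST_ variables are set and non-empty.
# The exports below are stand-in values, not classroom data.
export ST_NODE1=host01 ST_NODE2=host02 ST_CLUSTER_NAME=cluster01

missing=0
for v in ST_NODE1 ST_NODE2 ST_CLUSTER_NAME; do
  if [ -z "$(printenv "$v")" ]; then
    echo "missing: $v" >&2
    missing=1
  fi
done
test "$missing" -eq 0 && echo "all required ST_ variables are set"
```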


ST_NODE_LIST2=host01,host02,host03
ST_SOFTWARE_STAGE_DEINST=/stage/deinstall
ST_DB1_HOME=/u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1
ST_NODE_DOMAIN=example.com


Practices for Lesson 2

In the practices for this lesson, you will perform the tasks that are prerequisites to successfully installing Oracle Grid Infrastructure. You will configure ASMLib to manage your shared disks, and finally, you will install and verify Oracle Grid Infrastructure 11.2. In addition, you will create ASM disk groups and an ACFS file system by using ASMCA.


Practice 2-1: Performing Preinstallation Tasks for Oracle Grid Infrastructure

In this practice, you perform various tasks that are required before installing Oracle Grid Infrastructure. These tasks include:

• Setting up required groups and users
• Creating base directories
• Configuring hangcheck-timer
• Configuring Network Time Protocol (NTPD)
• Setting shell limits
• Editing profile entries
• Configuring ASMLib and shared disks

1) From a graphical terminal session, make sure that the groups asmadmin, asmdba, and asmoper exist (cat /etc/group). Make sure that the user grid exists with the primary group of oinstall and the secondary groups of asmadmin, asmdba, and asmoper. Make sure that the oracle user’s primary group is oinstall with secondary groups of dba, oper, and asmdba. Running the script /home/oracle/labs/less_02/usrgrp.sh as the root user will complete all these tasks. Perform this step on all three of your nodes.

$ cat /home/oracle/labs/less_02/usrgrp.sh
#!/bin/bash
groupadd -g 503 oper
groupadd -g 505 asmdba
groupadd -g 506 asmoper
groupadd -g 504 asmadmin
grep -q grid /etc/passwd
UserGridExists=$?
if [[ $UserGridExists == 0 ]]; then
  usermod -g oinstall -G asmoper,asmdba,asmadmin grid
else
  useradd -u 502 -g oinstall -G asmoper,asmdba,asmadmin grid
fi
echo oracle | passwd --stdin grid
usermod -g oinstall -G dba,oper,asmdba oracle

$ su -
Password: oracle  << password is not displayed
# . /home/oracle/labs/st_env.sh

<<< On node 1 >>>
# /home/oracle/labs/less_02/usrgrp.sh
Creating mailbox file: File exists
Changing password for user grid.
passwd: all authentication tokens updated successfully.


<<< On node 2 >>>
# ssh $ST_NODE2 /home/oracle/labs/less_02/usrgrp.sh
The authenticity of host 'gr7273 (10.196.180.73)' can't be established.
RSA key fingerprint is 4a:8c:b8:48:51:04:2e:60:e4:f4:e6:39:13:39:48:8f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'gr7273,10.196.180.73' (RSA) to the list of known hosts.
root's password: oracle  << password is not displayed
Creating mailbox file: File exists
Changing password for user grid.
passwd: all authentication tokens updated successfully.

<<< On node 3 >>>
# ssh $ST_NODE3 /home/oracle/labs/less_02/usrgrp.sh
The authenticity of host 'host03 (10.196.180.14)' can't be established.
RSA key fingerprint is 4a:8c:b8:48:51:04:2e:60:e4:f4:e6:39:13:39:48:8f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'host03,10.196.180.14' (RSA) to the list of known hosts.
root's password: oracle  << password is not displayed
Creating mailbox file: File exists
Changing password for user grid.
passwd: all authentication tokens updated successfully.
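A membership check like the one this step asks for can be scripted by parsing group entries. This sketch runs against a sample /etc/group-style line (a stand-in, so it works anywhere); on a live node you would read the real file, or simply run id -nG grid:

```shell
# Check whether a user appears in a group's member list.
# group_line is a sample standing in for a real /etc/group entry.
group_line="asmadmin:x:504:grid"

in_group() {
  # $1 = user name, $2 = /etc/group entry line
  echo "$2" | cut -d: -f4 | tr ',' '\n' | grep -qx "$1"
}

if in_group grid "$group_line"; then
  echo "grid is a member of asmadmin"
fi
```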

2) As the root user, create the oracle and grid user base directories. Perform this step on all three of your nodes.

# mkdir -p /u01/app/grid
# chown -R grid:oinstall /u01/app
# chmod -R 775 /u01/app/grid
# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01/app/oracle
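Ownership and permissions can be verified after the fact with stat. A sketch against a scratch directory (a stand-in for /u01/app/grid, so it is safe to run anywhere):

```shell
# Create a scratch directory and verify its mode, the same check
# you would apply to /u01/app/grid after the chmod above.
dir=$(mktemp -d)
chmod 775 "$dir"

# GNU stat uses -c; the BSD form is kept as a fallback.
mode=$(stat -c '%a' "$dir" 2>/dev/null || stat -f '%Lp' "$dir")
echo "mode: $mode"   # prints: mode: 775
rmdir "$dir"
```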

3) View the /etc/sysconfig/ntpd file and confirm that the -x option is specified to address slewing. If necessary, change the file, and then restart the ntpd service with the service ntpd restart command. Perform this step on all three of your nodes.

[root]# cat /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"


# Set to 'yes' to sync hw clock after successful ntpdate
SYNC_HWCLOCK=no

# Additional options for ntpdate
NTPDATE_OPTIONS=""
[root]#
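The -x check can be scripted as well. A minimal sketch; /tmp/ntpd.sample is a stand-in for /etc/sysconfig/ntpd so the check runs outside the lab:

```shell
# Verify that ntpd is started with -x (slew corrections rather
# than stepping the clock).
cfg=/tmp/ntpd.sample    # stand-in for /etc/sysconfig/ntpd
cat > "$cfg" <<'EOF'
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
EOF

if grep -q '^OPTIONS=.*-x' "$cfg"; then
  echo "slewing enabled"
else
  echo "add -x to OPTIONS, then run: service ntpd restart"
fi
```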

4) As the root user, start the name service cache daemon on all three nodes with the service nscd start command. Perform this step on all three of your nodes.

[root]# service nscd start
Starting nscd:                                    [  OK  ]
# ssh $ST_NODE2 service nscd start
root's password: oracle  << password is not displayed
Starting nscd:                                    [  OK  ]
[root]# ssh $ST_NODE3 service nscd start
root's password: oracle  << password is not displayed
Starting nscd:                                    [  OK  ]

5) As the root user, run the /home/oracle/labs/less_02/limits.sh script. This script replaces the profiles for the oracle and grid users, replaces /etc/profile, and replaces the /etc/security/limits.conf file with a new version containing entries for oracle and grid. Use cat to view /home/oracle/labs/less_02/bash_profile and /home/oracle/labs/less_02/profile to see the new files. Perform this step on all three of your nodes.

# cat /home/oracle/labs/less_02/bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
  . ~/.bashrc
fi

# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
umask 022

# cat /home/oracle/labs/less_02/profile
# /etc/profile

# System wide environment and startup programs, for login setup


# Functions and aliases go in /etc/bashrc
pathmunge () {
  if ! echo $PATH | /bin/egrep -q "(^|:)$1($|:)" ; then
    if [ "$2" = "after" ] ; then
      PATH=$PATH:$1
    else
      PATH=$1:$PATH
    fi
  fi
}

# ksh workaround
if [ -z "$EUID" -a -x /usr/bin/id ]; then
  EUID=`id -u`
  UID=`id -ru`
fi

# Path manipulation
if [ "$EUID" = "0" ]; then
  pathmunge /sbin
  pathmunge /usr/sbin
  pathmunge /usr/local/sbin
fi

# No core files by default
ulimit -S -c 0 > /dev/null 2>&1

if [ -x /usr/bin/id ]; then
  USER="`id -un`"
  LOGNAME=$USER
  MAIL="/var/spool/mail/$USER"
fi

HOSTNAME=`/bin/hostname`
HISTSIZE=1000

if [ -z "$INPUTRC" -a ! -f "$HOME/.inputrc" ]; then
  INPUTRC=/etc/inputrc
fi

export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE INPUTRC

for i in /etc/profile.d/*.sh ; do
  if [ -r "$i" ]; then
    . $i
  fi
done


if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
  umask 022
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi

unset i
unset pathmunge

# cat /home/oracle/labs/less_02/limits.conf
# - priority - the priority to run user process with
# - locks - max number of file locks the user can hold
# - sigpending - max number of pending signals
# - msgqueue - max memory used by POSIX message queues (bytes)
# - nice - max nice priority allowed to raise to
# - rtprio - max realtime priority
#<domain>   <type>  <item>      <value>
#*          soft    core        0
#*          hard    rss         10000
#@student   hard    nproc       20
#@faculty   soft    nproc       20
#@faculty   hard    nproc       50
#ftp        hard    nproc       0
#@student   -       maxlogins   4
# End of file
oracle      soft    nofile      131072
oracle      hard    nofile      131072
oracle      soft    nproc       131072
oracle      hard    nproc       131072
oracle      soft    core        unlimited
oracle      hard    core        unlimited
oracle      soft    memlock     3500000
oracle      hard    memlock     3500000
grid        soft    nofile      131072
grid        hard    nofile      131072
grid        soft    nproc       131072
grid        hard    nproc       131072
grid        soft    core        unlimited
grid        hard    core        unlimited
grid        soft    memlock     3500000
grid        hard    memlock     3500000
# Recommended stack hard limit 32MB for oracle installations


# oracle    hard    stack       32768

# cat /home/oracle/labs/less_02/limits.sh
cp /home/oracle/labs/less_02/profile /etc/profile
cp /home/oracle/labs/less_02/bash_profile /home/oracle/.bash_profile
cp /home/oracle/labs/less_02/bash_profile /home/grid/.bash_profile
cp /home/oracle/labs/less_02/limits.conf /etc/security/limits.conf

<<< On Node 1 >>>
# /home/oracle/labs/less_02/limits.sh

<<< On Node 2 >>>
# ssh $ST_NODE2 /home/oracle/labs/less_02/limits.sh
root@gr7213's password: oracle  << password is not displayed

<<< On Node 3 >>>
# ssh $ST_NODE3 /home/oracle/labs/less_02/limits.sh
root@host03's password: oracle  << password is not displayed
#

6) As root, execute the oracleasm configure -i command to configure the Oracle ASM library driver. The owner should be grid and the group should be asmadmin. Make sure that the driver loads and scans disks on boot. Perform this step on all three of your nodes.

<<< On Node 1 >>>
# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done


<<< On Node 2 >>>
# ssh $ST_NODE2 oracleasm configure -i
... output not shown

<<< On Node 3 >>>
# ssh $ST_NODE3 oracleasm configure -i
... output not shown

7) (On the first node only) Create the ASM disks needed for the practices. The /home/oracle/labs/less_02/createdisk.sh script has been provided to do this for you. Look at the script, and then execute it as the root user. Perform this step on the first node only.

# cat /home/oracle/labs/less_02/createdisk.sh
oracleasm init
oracleasm createdisk ASMDISK01 /dev/sda1
oracleasm createdisk ASMDISK02 /dev/sda2
oracleasm createdisk ASMDISK03 /dev/sda3
oracleasm createdisk ASMDISK04 /dev/sda5
oracleasm createdisk ASMDISK05 /dev/sda6
oracleasm createdisk ASMDISK06 /dev/sda7
oracleasm createdisk ASMDISK07 /dev/sda8
oracleasm createdisk ASMDISK08 /dev/sda9
oracleasm createdisk ASMDISK09 /dev/sda10
oracleasm createdisk ASMDISK10 /dev/sda11
oracleasm createdisk ASMDISK11 /dev/sdb1
oracleasm createdisk ASMDISK12 /dev/sdb2
oracleasm createdisk ASMDISK13 /dev/sdb3
oracleasm createdisk ASMDISK14 /dev/sdb5

# /home/oracle/labs/less_02/createdisk.sh
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done


Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
Writing disk header: done
Instantiating disk: done
#

8) As the root user, scan the disks to make sure that they are available with the oracleasm scandisks command. Perform an oracleasm listdisks command to make sure all the disks have been configured. Perform this step on all three of your nodes.

# oracleasm exit
# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm
# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "ASMDISK01"
Instantiating disk "ASMDISK02"
Instantiating disk "ASMDISK03"
Instantiating disk "ASMDISK04"
Instantiating disk "ASMDISK05"
Instantiating disk "ASMDISK06"
Instantiating disk "ASMDISK07"
Instantiating disk "ASMDISK08"
Instantiating disk "ASMDISK09"
Instantiating disk "ASMDISK10"


Instantiating disk "ASMDISK11"
Instantiating disk "ASMDISK12"
Instantiating disk "ASMDISK13"
Instantiating disk "ASMDISK14"
# oracleasm listdisks
ASMDISK01
ASMDISK02
ASMDISK03
ASMDISK04
ASMDISK05
ASMDISK06
ASMDISK07
ASMDISK08
ASMDISK09
ASMDISK10
ASMDISK11
ASMDISK12
ASMDISK13
ASMDISK14
#
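A completeness check over the listdisks output can also be scripted. This sketch generates the expected names in place of live oracleasm listdisks output, so it runs anywhere:

```shell
# Confirm that all 14 expected ASM disks are present. The generated
# list is a stand-in for real `oracleasm listdisks` output.
expected=14
disks="$(printf 'ASMDISK%02d\n' $(seq 1 "$expected"))"

count=$(echo "$disks" | wc -l)
if [ "$count" -eq "$expected" ]; then
  echo "all $expected disks present"
else
  echo "expected $expected disks, found $count" >&2
fi
```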


Practice 2-2: Installing Oracle Grid Infrastructure

In this practice, you install Oracle Grid Infrastructure.

1) Use the Oracle Universal Installer (runInstaller) to install Oracle Grid Infrastructure. You need to know your cluster name and GNS name and address. You also need to know your cluster subdomain. Your instructor will provide this information. YOU MUST USE THE NAMES ASSIGNED TO YOU BY YOUR INSTRUCTOR. Failure to use the proper names will result in a failed or inoperative cluster installation.

For this example, assume the following:

• Your assigned cluster nodes are host01, host02, and host03.

• Your instructor has assigned the cluster name cluster01 and GNS name cluster01-gns.example.com with an IP address of 192.0.2.155.

• Your assigned cluster subdomain is cluster01.example.com.

These values are for illustration purposes only. PLEASE USE THE VALUES GIVEN TO YOU BY YOUR INSTRUCTOR.

a) As the grid user, start a VNC session on any available port. Set up the xstartup file for VNC. Before starting the installation, make sure that the DNS server can resolve your GNS name. The GNS name in the classroom environment is available with the echo $ST_GNS_NAME command. Record the IP address you get for $ST_GNS_NAME: ___________________

[oracle]$ su - grid
Password: oracle  << password not displayed
[grid]$ id
uid=502(grid) gid=501(oinstall) groups=501(oinstall),504(asmadmin),505(asmdba),506(asmoper)
[grid]$ mkdir .vnc
[grid]$ cp /home/oracle/.vnc/xstartup /home/grid/.vnc/xstartup
[grid]$ vncserver :3   <= Starting VNC on port 5803

You will require a password to access your desktops.
Password: oracle  << password not displayed
Verify: oracle  << password not displayed
xauth: creating new authority file /home/grid/.Xauthority

New 'host03.example.com:3 (grid)' desktop is host03.example.com:3

Creating default startup script /home/grid/.vnc/xstartup
Starting applications specified in /home/grid/.vnc/xstartup
Log file is /home/grid/.vnc/host03.example.com:3.log


[grid]$ . /home/oracle/labs/st_env.sh
[grid]$ echo $ST_GNS_NAME
cluster01-gns.example.com
[grid]$ nslookup $ST_GNS_NAME
Server:    192.0.2.14
Address:   192.0.2.14#53

Name:      cluster01-gns.example.com
Address:   192.0.2.155

b) To start a VNC viewer session, click the VNC icon on your desktop, and connect to the grid user’s session. In keeping with the example above, you would enter host01:3. Open a terminal window and change directory to the staged software location provided by your instructor and start the OUI by executing the runInstaller command from the $ST_SOFTWARE_STAGE_GRID directory.

[grid]$ id
uid=502(grid) gid=501(oinstall) groups=501(oinstall) ...
[grid]$ . /home/oracle/labs/st_env.sh
[grid]$ cd $ST_SOFTWARE_STAGE_GRID
[grid]$ ./runInstaller

c) On the Select Installation Option page, select the “Install and Configure Grid Infrastructure for a Cluster” option and click Next.

d) On the Select Installation Type page, select Advanced Installation and click Next.

e) On the Product Languages page, select all languages and click Next.

f) The “Grid Plug and Play Information” page appears next. Make sure that the Configure GNS check box is selected. Input the proper data carefully. DO NOT GUESS here. If you are unsure of a value, PLEASE ASK YOUR INSTRUCTOR. You must input:

• Cluster Name

• SCAN Name

• SCAN Port

• GNS Subdomain

• GNS Address

The values given here are based on the example data presented earlier. To continue with the example, the following values would be assigned:

• Cluster Name: cluster01 (Yours will be different. Use the value recorded in Practice 1-1, step 6.)


• SCAN Name: cluster01-scan.cluster01.example.com (Yours will be different.)

• SCAN Port: 1521 (DEFAULT)

• GNS Sub Domain: cluster01.example.com (Yours will be different. The subdomain is $ST_CLUSTER_NAME.$ST_NODE_DOMAIN.)

• GNS Address: 192.0.2.155 (Yours will be different. Use the value you found in practice 2-2 step 1a.)

Hint: If you enter the cluster name (for example, cluster01), and then enter GNS Sub Domain (for example, cluster01.example.com), the SCAN name will autofill correctly. Leave SCAN Port to default to 1521 and enter the IP address for your GNS. Verify all data entered on this page, and then click Next.

g) On the Cluster Node Information page, you add your second node only. DO NOT for any reason install to all three nodes. Click the Add button and enter the fully qualified name of your second node into the box and click OK. Your second node should appear in the window under your first node. The Virtual IP name values for both nodes will be AUTO. Click the SSH Connectivity button. Enter the grid password, which is oracle. Click the Setup button. A dialog box stating that you have successfully established passwordless SSH connectivity appears. Click OK to close the dialog box. Click Next to continue.

h) On the Specify Network Usage page, you must configure the correct interface types for the listed network interface. Your instructor will indicate the proper usage for each of your interfaces. Again, DO NOT GUESS. The systems that were used to develop the course had four interfaces: eth0 (storage network), eth1 (storage network), eth2 (private network), and eth3 (public network). Using that example, eth0 and eth1 would be marked “Do Not Use,” eth2 would be marked Private, and eth3 would be marked Public. Again, this is only an example. Check with your instructor for proper network interface usage. When you have correctly assigned the interface types, click Next to continue.

i) On the Storage Option Information page, select Automatic Storage Management (ASM) and click Next.

j) On the Create ASM Disk Group page, make sure that Disk Group Name is DATA and Redundancy is Normal. In the Add Disks region, select ORCL:ASMDISK01, ORCL:ASMDISK02, ORCL:ASMDISK03, and ORCL:ASMDISK04. Click Next.

k) On the ASM Passwords page, click the Use Same Password for these accounts button. In the Specify Password field, enter oracle_4U and confirm it in the Confirm Password field. Click Next to continue.

l) Select the “Do not use Intelligent Platform Management Interface (IPMI)” option on the Failure Isolation page and click Next to continue.


m) On the Privileged Operating System Groups page, select “asmdba” for the ASM Database Administrator (OSDBA) group, “asmoper” for the ASM Instance Administration Operator (ASMOPER) group, and “asmadmin” for the ASM Instance Administrator (OSASM) group. Click Next to continue.

n) On the Specify Installation Location page, make sure that Oracle Base is /u01/app/grid and Software Location is /u01/app/11.2.0/grid. Click Next.

o) On the Create Inventory page, Inventory Directory should be /u01/app/oraInventory and the oraInventory Group Name should be oinstall. Click Next.

p) On the Perform System Prerequisites page, the Installer checks whether all the systems involved in the installation meet the minimum system requirements for that platform. If the check is successful, click Next. If any deficiencies are found, click the “Fix & Check Again” button. The Execute Fixup Script dialog box appears. You are instructed to execute a script as root on each node involved in the installation (two nodes in this case). Open a terminal window on the first node, become the root user, and set up the classroom environment variables with the st_env.sh script. Execute the fixup script on your first node, and then ssh to your second node and repeat the process. Exit the root user session on the first node, and then exit the terminal session. When the script has been run on each node, click OK to close the dialog box. Click Next to continue.

<<< On first node >>>

[grid]$ su - root
Password: oracle  << password not displayed
[root]# . /home/oracle/labs/st_env.sh
[root]# /tmp/CVU_11.2.0.1.0_grid/runfixup.sh
Response file being used is :/tmp/CVU_11.2.0.1.0_grid/fixup.response
Enable file being used is :/tmp/CVU_11.2.0.1.0_grid/fixup.enable
Log file location: /tmp/CVU_11.2.0.1.0_grid/orarun.log
Setting Kernel Parameters...
fs.file-max = 327679
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.wmem_max = 262144
net.core.wmem_max = 1048576
#

<<< On Second Node >>>
[root]# ssh $ST_NODE2 /tmp/CVU_11.2.0.1.0_grid/runfixup.sh
root@host02's password: oracle  <= password not echoed
Response file being used is :/tmp/CVU_11.2.0.1.0_grid/fixup.response
Enable file being used is :/tmp/CVU_11.2.0.1.0_grid/fixup.enable


Log file location: /tmp/CVU_11.2.0.1.0_grid/orarun.log
Setting Kernel Parameters...
fs.file-max = 327679
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.wmem_max = 262144
net.core.wmem_max = 1048576
[root]# exit
logout
[grid]$ exit

q) Click Finish on the Summary screen. From this screen, you can monitor the progress of the installation.

r) When the remote operations have finished, the Execute Configuration Scripts window appears. You are instructed to run the orainstRoot.sh and root.sh scripts as the root user on both nodes. Open a terminal window and as the root user set the classroom environment variables.

# su -
Password: oracle  << password not displayed
# . /home/oracle/labs/st_env.sh

(On the first node)
# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
# /u01/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.


Now product-specific root actions will be performed.
2009-08-25 14:46:20: Parsing the host name
2009-08-25 14:46:20: Checking for super user privileges
2009-08-25 14:46:20: User has super user privileges
Using configuration parameter file:
/u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'host01'
CRS-2672: Attempting to start 'ora.mdnsd' on 'host01'
CRS-2676: Start of 'ora.gipcd' on 'host01' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'host01'
CRS-2676: Start of 'ora.gpnpd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host01'
CRS-2676: Start of 'ora.cssdmonitor' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host01'
CRS-2672: Attempting to start 'ora.diskmon' on 'host01'
CRS-2676: Start of 'ora.diskmon' on 'host01' succeeded
CRS-2676: Start of 'ora.cssd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'host01'
CRS-2676: Start of 'ora.ctssd' on 'host01' succeeded
ASM created and started successfully.


DiskGroup DATA created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'host01'
CRS-2676: Start of 'ora.crsd' on 'host01' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 0c270623caf14f07bf57a7e9a1eb5f5c.
Successful addition of voting disk 04f565f5c6ed4f1cbf7444c2b24ebf1a.
Successful addition of voting disk 34c5c13ae40a4f21bf950e0de2777cdf.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                  File Name        Disk group
--  -----    -----------------                  ---------        ----------
 1. ONLINE   0c270623caf14f07bf57a7e9a1eb5f5c (ORCL:ASMDISK01)   [DATA]
 2. ONLINE   04f565f5c6ed4f1cbf7444c2b24ebf1a (ORCL:ASMDISK02)   [DATA]
 3. ONLINE   34c5c13ae40a4f21bf950e0de2777cdf (ORCL:ASMDISK03)   [DATA]
Located 3 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'host01'
CRS-2677: Stop of 'ora.crsd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'host01'
CRS-2677: Stop of 'ora.asm' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'host01'
CRS-2677: Stop of 'ora.ctssd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'host01'
CRS-2677: Stop of 'ora.cssdmonitor' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host01'
CRS-2677: Stop of 'ora.cssd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'host01'
CRS-2677: Stop of 'ora.gpnpd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'host01'
CRS-2677: Stop of 'ora.gipcd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'host01'
CRS-2677: Stop of 'ora.mdnsd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'host01'
CRS-2676: Start of 'ora.mdnsd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'host01'
CRS-2676: Start of 'ora.gipcd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'host01'
CRS-2676: Start of 'ora.gpnpd' on 'host01' succeeded


CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host01'
CRS-2676: Start of 'ora.cssdmonitor' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host01'
CRS-2672: Attempting to start 'ora.diskmon' on 'host01'
CRS-2676: Start of 'ora.diskmon' on 'host01' succeeded
CRS-2676: Start of 'ora.cssd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'host01'
CRS-2676: Start of 'ora.ctssd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host01'
CRS-2676: Start of 'ora.asm' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'host01'
CRS-2676: Start of 'ora.crsd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'host01'
CRS-2676: Start of 'ora.evmd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host01'
CRS-2676: Start of 'ora.asm' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'host01'
CRS-2676: Start of 'ora.DATA.dg' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.registry.acfs' on 'host01'
CRS-2676: Start of 'ora.registry.acfs' on 'host01' succeeded
host01 2009/08/25 14:53:37 /u01/app/11.2.0/grid/cdata/host01/backup_20090825_145337.olr
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 3007 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

# (On the second node)
# ssh $ST_NODE2 /u01/app/oraInventory/orainstRoot.sh
root's password: oracle << password not displayed
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
# ssh $ST_NODE2 /u01/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...


The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2009-08-25 14:56:42: Parsing the host name
2009-08-25 14:56:42: Checking for super user privileges
2009-08-25 14:56:42: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node host01, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'host02'
CRS-2676: Start of 'ora.mdnsd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'host02'
CRS-2676: Start of 'ora.gipcd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'host02'
CRS-2676: Start of 'ora.gpnpd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host02'
CRS-2676: Start of 'ora.cssdmonitor' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host02'
CRS-2672: Attempting to start 'ora.diskmon' on 'host02'
CRS-2676: Start of 'ora.diskmon' on 'host02' succeeded
CRS-2676: Start of 'ora.cssd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'host02'
CRS-2676: Start of 'ora.ctssd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'host02'
CRS-2676: Start of 'ora.drivers.acfs' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host02'
CRS-2676: Start of 'ora.asm' on 'host02' succeeded


CRS-2672: Attempting to start 'ora.crsd' on 'host02'
CRS-2676: Start of 'ora.crsd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'host02'
CRS-2676: Start of 'ora.evmd' on 'host02' succeeded
host02 2009/08/25 15:01:15 /u01/app/11.2.0/grid/cdata/host02/backup_20090825_150115.olr
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 3007 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
#

s) After the scripts are executed on both nodes, click the OK button to close the dialog box. The configuration assistants will continue to execute from the Setup page.

t) When the configuration assistants have finished, click the Close button on the Finish page to exit the Installer.

2) When the installation finishes, verify that the Clusterware software stack is running as expected. Execute the crsctl stat res -t command:

[grid]$ /u01/app/11.2.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------
NAME           TARGET  STATE        SERVER       STATE_DETAILS
--------------------------------------------------------------
Local Resources
--------------------------------------------------------------
ora.DATA1.dg
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
ora.LISTENER.lsnr
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
ora.asm
               ONLINE  ONLINE       host01       Started
               ONLINE  ONLINE       host02       Started
ora.eons
               ONLINE  ONLINE       host01


               ONLINE  ONLINE       host02
ora.gsd
               OFFLINE OFFLINE      host01
               OFFLINE OFFLINE      host02
ora.net1.network
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
ora.ons
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
ora.registry.acfs
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
--------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       host02
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       host01
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       host01
ora.gns
      1        ONLINE  ONLINE       host01
ora.gns.vip
      1        ONLINE  ONLINE       host01
ora.host01.vip
      1        ONLINE  ONLINE       host01
ora.host02.vip
      1        ONLINE  ONLINE       host02
ora.oc4j
      1        OFFLINE OFFLINE
ora.scan1.vip
      1        ONLINE  ONLINE       host02
ora.scan2.vip
      1        ONLINE  ONLINE       host01
ora.scan3.vip
      1        ONLINE  ONLINE       host01

3) Use the dig command to check your GNS and make sure that it is being updated with your cluster addresses. In this example, the IP address is the one assigned to your GNS.

# dig @192.0.2.155 cluster01-scan.cluster01.example.com
# dig @${ST_GNS_NAME} ${ST_CLUSTER_NAME}-scan.${CLUSTER_NAME}.${ST_NODE_DOMAIN}

; <<>> DiG 9.3.4-P1 <<>> @cluster01-gns cluster01-scan.cluster01.example.com


; (1 server found)
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43003
;; flags: qr aa; QUERY: 1, ANSWER: 3, AUTHORITY: 1, ADDITIONAL: 1

;; QUESTION SECTION:
;cluster01-scan.cluster01.example.com. IN A

;; ANSWER SECTION:
cluster01-scan.cluster01.example.com. 120 IN A 192.0.2.231
cluster01-scan.cluster01.example.com. 120 IN A 192.0.2.229
cluster01-scan.cluster01.example.com. 120 IN A 192.0.2.232

;; AUTHORITY SECTION:
cluster01-gns-vip.cluster01.example.com. 10800 IN NS cluster01-gns-vip.cluster01.example.com.

;; ADDITIONAL SECTION:
cluster01-gns-vip.cluster01.example.com. 10800 IN A 192.0.2.155

;; Query time: 55 msec
;; SERVER: 192.0.2.155#53(192.0.2.155)
;; WHEN: Tue Aug 25 15:42:56 2009
;; MSG SIZE  rcvd: 174

4) Use the dig command to make sure that your name server is properly forwarding address requests in your cluster subdomain back to your GNS for resolution.

# cat /etc/resolv.conf
domain example.com
nameserver 10.216.104.27
search example.com

# dig @10.216.104.27 ${ST_CLUSTER_NAME}-scan.${CLUSTER_NAME}.${ST_NODE_DOMAIN}

; <<>> DiG 9.3.4-P1 <<>> @10.216.104.27 cluster01-scan.cluster01.example.com
; (1 server found)
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 14358
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 1, ADDITIONAL: 1


;; QUESTION SECTION:
;cluster01-scan.cluster01.example.com. IN A

;; ANSWER SECTION:
cluster01-scan.cluster01.example.com. 120 IN A 10.196.180.214
cluster01-scan.cluster01.example.com. 120 IN A 10.196.180.233
cluster01-scan.cluster01.example.com. 120 IN A 10.196.182.230

;; AUTHORITY SECTION:
cluster01.example.com. 86400 IN NS cluster01-gns.example.com.

;; ADDITIONAL SECTION:
cluster01-gns.example.com. 86400 IN A 10.196.183.12

;; Query time: 62 msec
;; SERVER: 10.216.104.27#53(10.216.104.27)
;; WHEN: Tue Sep 29 14:53:49 2009
;; MSG SIZE  rcvd: 137


Practice 2-3: Creating ASM Disk Groups In this practice, you create additional ASM disk groups to support the activities in the rest of the course. You create a disk group to hold ACFS file systems and another disk group to hold the Fast Recovery Area (FRA).

1) In the VNC session for the grid user, open a terminal window as the grid user and use the oraenv tool to set the Oracle environment to the +ASM1 instance.

[grid]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/grid

2) Start the ASM Configuration Assistant (ASMCA).

$ asmca

3) Create a disk group named ACFS with four disks and external redundancy; choose disks ASMDISK05 through ASMDISK08. Set the disk group attribute ADVM Compatibility to 11.2.0.0.0.

Step a. Configure ASM: Disk Groups
   Click Create.
Step b. Create Disk Group
   Enter Disk Group Name: ACFS
   In the Redundancy section, select "External (None)."
   In the Select Member Disk section, select:
      ASMDISK05
      ASMDISK06
      ASMDISK07
      ASMDISK08
   Click Show Advanced Options.
   In the Disk Group Attributes section, set ADVM Compatibility to 11.2.0.0.0.
   Click OK.
Step c. Disk Group: Creation
   Click OK.
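For reference, the same disk group can also be created from SQL*Plus instead of ASMCA. The sketch below only assembles and prints the equivalent CREATE DISKGROUP statement so you can review it; actually running it would require connecting to a started ASM instance as SYSASM, which is an assumption outside these steps.

```shell
# Sketch only: the SQL equivalent of the ASMCA disk group steps above.
# The statement is printed, not executed; run it in SQL*Plus (as SYSASM) if desired.
SQL="CREATE DISKGROUP ACFS EXTERNAL REDUNDANCY
  DISK 'ORCL:ASMDISK05', 'ORCL:ASMDISK06', 'ORCL:ASMDISK07', 'ORCL:ASMDISK08'
  ATTRIBUTE 'compatible.asm' = '11.2', 'compatible.advm' = '11.2.0.0.0'"
echo "$SQL"
```

The compatible.advm attribute corresponds to the ADVM Compatibility field set in the ASMCA dialog.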

4) Using ASMCMD, create a disk group named FRA with external redundancy over disks ASMDISK09 through ASMDISK11, using the command:

asmcmd mkdg FRA_dg_config.xml

Review the FRA_dg_config.xml file, and then execute the command.

[grid]$ cat /home/oracle/labs/less_02/FRA_dg_config.xml
<dg name="FRA" redundancy="external">
  <dsk string="/dev/oracleasm/disk/ASMDISK09"/>
  <dsk string="/dev/oracleasm/disk/ASMDISK10"/>
  <dsk string="/dev/oracleasm/disk/ASMDISK11"/>
  <a name="compatible.asm" value="11.2"/>
  <a name="compatible.rdbms" value="11.2"/>


  <a name="compatible.advm" value="11.2"/>
</dg>
[grid]$ asmcmd mkdg /home/oracle/labs/less_02/FRA_dg_config.xml
[grid]$

5) Use ASMCMD to confirm the creation of the FRA disk group and to see which disks are included in the FRA disk group.

[grid]$ asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512    4096  1048576     10644     4396                0            4396              0             N  ACFS/
MOUNTED  NORMAL  N         512    4096  1048576     10648     6249             1111            2569              0             N  DATA/
MOUNTED  EXTERN  N         512    4096  1048576      7984     7928                0            7928              0             N  FRA/
ASMCMD> lsdsk -G FRA
Path
ORCL:ASMDISK09
ORCL:ASMDISK10
ORCL:ASMDISK11
ASMCMD> exit
[grid]$

6) Mount the FRA disk group on your second node. Note: The ASMCMD commands operate only on the local node, so the FRA disk group is mounted only on your first node.

[grid]$ . /home/oracle/labs/st_env.sh
[grid]$ ssh $ST_NODE2
[grid@host02]$ . oraenv
ORACLE_SID = [grid] ? +ASM2
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/grid
[grid@host02]$ asmcmd mount FRA
[grid@host02]$ asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512    4096  1048576      9996     3748                0            3748              0             N  ACFS/
MOUNTED  NORMAL  N         512    4096  1048576      9998     5556             1126            2215              0             N  DATA/


MOUNTED  EXTERN  N         512    4096  1048576      7497     7398                0            7398              0             N  FRA/
[grid@host02]$ exit


Practice 2-4: Creating ACFS File System
In this practice, you create an ASM volume to use as the shared ORACLE_HOME for a database that is being used as the Enterprise Manager repository. Create this volume in the disk group called ACFS, with the volume name DBHOME_1 and the mount point /u01/app/oracle/acfsmount/11.2.0/sharedhome.

1) Start the ASM Configuration Assistant (ASMCA).

$ asmca

2) Create an ASM volume named DBHOME_1 with size 6 GB in the ACFS disk group.

Step a. Configure ASM: Disk Groups
   Click the Volumes tab.
Step b. Configure ASM: Volumes
   Click Create.
Step c. Create Volume
   Enter Volume Name: DBHOME_1
   Enter Size: 6 G Bytes
   Click OK.
Step d. Volume: Creation
   Click OK.
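The volume can also be created without the GUI. The sketch below only assembles and prints the ASMCMD commands that correspond to the ASMCA Volumes tab; running them for real requires the grid environment from step 1 and a started ASM instance, which is assumed here.

```shell
# Sketch only: command-line equivalent of creating and inspecting the volume.
# The commands are printed for review rather than executed.
VOLCREATE="asmcmd volcreate -G ACFS -s 6G DBHOME_1"
VOLINFO="asmcmd volinfo -G ACFS DBHOME_1"
echo "$VOLCREATE"
echo "$VOLINFO"
```

After volcreate completes, volinfo reports the volume device name that ACFS uses.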

3) Open a terminal window and become the root user (password: oracle). Create the mount point directory at /u01/app/oracle/acfsmount/11.2.0/sharedhome. Do this on all three nodes.

$ su - root
Password: oracle << password is not displayed
[root]# mkdir -p /u01/app/oracle/acfsmount/11.2.0/sharedhome
[root]# . /home/oracle/labs/st_env.sh
[root]# ssh $ST_NODE2 mkdir -p /u01/app/oracle/acfsmount/11.2.0/sharedhome
root@host02's password: oracle << password is not displayed
[root]# ssh $ST_NODE3 mkdir -p /u01/app/oracle/acfsmount/11.2.0/sharedhome
root@host03's password: oracle << password is not displayed

4) Create the ACFS file system, and mount and register the ACFS file system.

Step a. Configure ASM: Volumes
   Click the ASM Cluster File Systems tab.
Step b. Configure ASM: ASM Cluster File Systems
   Click Create.
Step c. Create ASM Cluster File System
   Verify that Database Home File System is selected.
   Enter:
      Database Home Mountpoint: /u01/app/oracle/acfsmount/11.2.0/sharedhome
      Database Home Owner Name: oracle
      Database Home Owner Group: oinstall
   Click OK.
Step d. Database Home: Run ACFS Script
   Copy the script shown to the root terminal window.
Step e. Root terminal window
   Execute the script:
   # /u01/app/grid/cfgtoollogs/asmca/scripts/acfs_script.sh
   #
Step f. Database Home: Run ACFS Script
   Click Close.
Step g. Configure ASM: ASM Cluster File Systems
   Click Exit.
Step h. ASM Configuration Assistant
   Click Yes.

5) Exit the root terminal window.

# exit
$


Practice 2-5: Installing a Silent Database In this practice, you start a silent installation of an Oracle RAC database that will be used in this course to enable Enterprise Manager Database Control.

1) In a VNC session for the oracle user (host01:1), establish ssh user equivalency for the oracle user between your two cluster nodes. Open a terminal window on one node as the oracle user, and name the other node in place of remote_host in the example.

[oracle]$ cd /home/oracle/labs/silent_inst
[oracle]$ ./ssh_setup.sh
oracle@host02's password: oracle << password not displayed
Warning: Permanently added 'gr7213,10.196.180.13' (RSA) to the list of known hosts.
oracle@host02's password: oracle << password not displayed
oracle@host02's password: oracle << password not displayed
id_rsa.pub                                    100%  395     0.4KB/s   00:00
oracle@host02's password: oracle << password not displayed
[oracle]$

2) Confirm that the oracle user in this session is a member of the required groups: dba, oinstall, oper, and asmdba. If all the groups do not appear, use the su - oracle command to reset the user groups. Note: The groups are reset only for this terminal window.

$ id
uid=501(oracle) gid=502(oinstall) groups=501(dba),502(oinstall)
[oracle]$ su - oracle
Password: oracle << password not displayed
[oracle@gr7212 ~]$ id
uid=501(oracle) gid=502(oinstall) groups=501(dba),502(oinstall),503(oper),505(asmdba)

3) In the same terminal window, set the classroom environment variables by running /home/oracle/labs/st_env.sh.

$ . /home/oracle/labs/st_env.sh

4) In a terminal window, as the oracle user, change directory to the following or to a directory specified by your instructor:

$ cd $ST_SOFTWARE_STAGE_DB

5) Start the silent installation by using the following command. Be sure to provide the parameters. The installer shows a warning, [WARNING] [INS-35421] (This option installs a single instance database only), which can be ignored.

$ ./runInstaller -silent -responseFile \
/home/oracle/labs/silent_inst/db.rsp -waitforcompletion \
ORACLE_HOST=`hostname` CLUSTER_NODES=$ST_NODE_LIST
Starting Oracle Universal Installer...


Checking Temp space: must be greater than 80 MB.  Actual 29270 MB  Passed
Checking swap space: must be greater than 150 MB.  Actual 3971 MB  Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2009-10-02_11-51-20AM. Please wait ...
[WARNING] [INS-35421] This option installs a single instance database only.
   CAUSE: You have chosen to perform a Desktop class install on a cluster. This option will not install Oracle RAC.
   ACTION: If you wish to install Oracle RAC, choose to perform a Server class install.
[WARNING] [INS-35421] This option installs a single instance database only.
   CAUSE: You have chosen to perform a Desktop class install on a cluster. This option will not install Oracle RAC.
   ACTION: If you wish to install Oracle RAC, choose to perform a Server class install.
You can find the log of this install session at:
/u01/app/oraInventory/logs/installActions2009-10-02_11-51-20AM.log
[WARNING] [INS-35421] This option installs a single instance database only.
   CAUSE: You have chosen to perform a Desktop class install on a cluster. This option will not install Oracle RAC.
   ACTION: If you wish to install Oracle RAC, choose to perform a Server class install.

6) Execute the root.sh script on both nodes to complete the installation.

-- On the first node
# /u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1/root.sh
Check /u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1/install/root_host01_2009-09-11_10-35-54.log for the output of root script
#
-- On the second node
# /u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1/root.sh
Check /u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1/install/root_host02_2009-09-11_10-43-07.log for the output of root script
#


Practices for Lesson 3
In these practices, you will verify, stop, and start Oracle Clusterware. You will add and remove Oracle Clusterware configuration files and back up the Oracle Cluster Registry and the Oracle Local Registry.


Practice 3-1: Verifying, Starting, and Stopping Oracle Clusterware In this practice, you check the status of Oracle Clusterware using both the operating system commands and the crsctl utility. You will also start and stop Oracle Clusterware.

1) Connect to the first node of your cluster as the grid user. You can use the oraenv script to define ORACLE_SID, ORACLE_HOME, PATH, ORACLE_BASE, and LD_LIBRARY_PATH for your environment.

$ id
uid=502(grid) gid=501(oinstall) groups=501(oinstall),504(asmadmin),505(asmdba),506(asmoper)
$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/grid

2) Using the operating system commands, verify that the Oracle Clusterware daemon processes are running on the current node. (Hint: Most of the Oracle Clusterware daemon processes have names that end with d.bin.)

$ pgrep -l d.bin
12895 ohasd.bin
14838 mdnsd.bin
14850 gipcd.bin
14862 gpnpd.bin
14916 ocssd.bin
15062 octssd.bin
15166 oclskd.bin
15181 crsd.bin
15198 evmd.bin
15222 oclskd.bin
15844 gnsd.bin
24709 oclskd.bin

3) Using the crsctl utility, verify that Oracle Clusterware is running on the current node.

$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

4) Verify the status of all cluster resources that are being managed by Oracle Clusterware for all nodes.

$ crsctl stat res -t
-------------------------------------------------------------
NAME           TARGET  STATE        SERVER       STATE_DETAILS


-------------------------------------------------------------
Local Resources
-------------------------------------------------------------
ora.ACFS.dg
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
ora.DATA.dg
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
ora.FRA.dg
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
ora.LISTENER.lsnr
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
ora.acfs.dbhome1.acfs
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
ora.asm
               ONLINE  ONLINE       host01       Started
               ONLINE  ONLINE       host02       Started
ora.eons
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
ora.gsd
               OFFLINE OFFLINE      host01
               OFFLINE OFFLINE      host02
ora.net1.network
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
ora.ons
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
ora.registry.acfs
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
-------------------------------------------------------------
Cluster Resources
-------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       host02
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       host01
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       host01
ora.gns
      1        ONLINE  ONLINE       host01
ora.gns.vip
      1        ONLINE  ONLINE       host01
ora.host01.vip
      1        ONLINE  ONLINE       host01
ora.host02.vip


      1        ONLINE  ONLINE       host02
ora.oc4j
      1        OFFLINE OFFLINE
ora.orcl.db
      1        ONLINE  ONLINE       host01       Open
      2        ONLINE  ONLINE       host02       Open
ora.scan1.vip
      1        ONLINE  ONLINE       host02
ora.scan2.vip
      1        ONLINE  ONLINE       host01
ora.scan3.vip
      1        ONLINE  ONLINE       host01

5) Attempt to stop Oracle Clusterware on the current node while logged in as the grid user. What happens and why?

$ crsctl stop crs
CRS-4563: Insufficient user privileges.
CRS-4000: Command Stop failed, or completed with errors.

6) Switch to the root account and stop Oracle Clusterware only on the current node. Exit the switch user command when the stop succeeds.

$ su -
Password: oracle << Password is not displayed
# /u01/app/11.2.0/grid/bin/crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'host01'
CRS-2673: Attempting to stop 'ora.crsd' on 'host01'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'host01'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'host01'
CRS-2673: Attempting to stop 'ora.FRA.dg' on 'host01'
CRS-2673: Attempting to stop 'ora.orcl.db' on 'host01'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'host01'
CRS-2673: Attempting to stop 'ora.gns' on 'host01'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'host01'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'host01'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.host01.vip' on 'host01'
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'host01'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'host01'
CRS-2677: Stop of 'ora.host01.vip' on 'host01' succeeded


CRS-2672: Attempting to start 'ora.host01.vip' on 'host02'
CRS-2677: Stop of 'ora.scan3.vip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.scan3.vip' on 'host02'
CRS-2677: Stop of 'ora.scan2.vip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'host02'
CRS-2677: Stop of 'ora.gns' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.gns.vip' on 'host01'
CRS-2677: Stop of 'ora.gns.vip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.gns.vip' on 'host02'
CRS-2677: Stop of 'ora.FRA.dg' on 'host01' succeeded
CRS-2676: Start of 'ora.scan2.vip' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'host02'
CRS-2676: Start of 'ora.gns.vip' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.gns' on 'host02'
CRS-2676: Start of 'ora.scan3.vip' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'host02'
CRS-2674: Start of 'ora.host01.vip' on 'host02' failed
CRS-2679: Attempting to clean 'ora.host01.vip' on 'host02'
CRS-2674: Start of 'ora.gns' on 'host02' failed
CRS-2679: Attempting to clean 'ora.gns' on 'host02'
CRS-2681: Clean of 'ora.gns' on 'host02' succeeded
CRS-2681: Clean of 'ora.host01.vip' on 'host02' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'host02' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'host02' succeeded
CRS-2677: Stop of 'ora.orcl.db' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.acfs.dbhome1.acfs' on 'host01'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'host01'
CRS-2677: Stop of 'ora.DATA.dg' on 'host01' succeeded
CRS-5014: Agent "/u01/app/11.2.0/grid/bin/orarootagent.bin" timed out starting process "/u01/app/11.2.0/grid/bin/acfssinglefsmount" for action "stop": details at "(:CLSN00009:)" in "/u01/app/11.2.0/grid/log/host01/agent/crsd/orarootagent_root/orarootagent_root.log"
(:CLSN00009:)Utils:execCmd aborted
CRS-2675: Stop of 'ora.acfs.dbhome1.acfs' on 'host01' failed
CRS-2679: Attempting to clean 'ora.acfs.dbhome1.acfs' on 'host01'
CRS-2675: Stop of 'ora.registry.acfs' on 'host01' failed
CRS-2679: Attempting to clean 'ora.registry.acfs' on 'host01'
CRS-2678: 'ora.acfs.dbhome1.acfs' on 'host01' has experienced an unrecoverable failure
CRS-0267: Human intervention required to resume its availability.

andy thorpe (andyฺthorpe@snsฺbskybฺcom) has a non-transferable

license to use this Student Guideฺ

Unauthorized reproduction or distribution prohibitedฺ Copyright© 2010, Oracle and/or its affiliatesฺ


Practice 3-1: Verifying, Starting, and Stopping Oracle Clusterware (continued)


CRS-2673: Attempting to stop 'ora.asm' on 'host01'
CRS-2677: Stop of 'ora.asm' on 'host01' succeeded
CRS-2681: Clean of 'ora.registry.acfs' on 'host01' succeeded
CRS-2794: Shutdown of Cluster Ready Services-managed resources on 'host01' has failed
CRS-2675: Stop of 'ora.crsd' on 'host01' failed
CRS-2795: Shutdown of Oracle High Availability Services-managed resources on 'host01' has failed
CRS-4687: Shutdown command has completed with error(s).
CRS-4000: Command Stop failed, or completed with errors.

7) If the crsctl utility fails to stop Oracle Clusterware, reissue the command on the same node.

# /u01/app/11.2.0/grid/bin/crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'host01'
CRS-2673: Attempting to stop 'ora.crsd' on 'host01'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'host01'
CRS-2679: Attempting to clean 'ora.acfs.dbhome1.acfs' on 'host01'
CRS-2673: Attempting to stop 'ora.ACFS.dg' on 'host01'
CRS-2681: Clean of 'ora.acfs.dbhome1.acfs' on 'host01' succeeded
CRS-2677: Stop of 'ora.ACFS.dg' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'host01'
CRS-2673: Attempting to stop 'ora.eons' on 'host01'
CRS-2677: Stop of 'ora.ons' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'host01'
CRS-2677: Stop of 'ora.net1.network' on 'host01' succeeded
CRS-2677: Stop of 'ora.eons' on 'host01' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'host01' has completed
CRS-2677: Stop of 'ora.crsd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'host01'
CRS-2673: Attempting to stop 'ora.ctssd' on 'host01'
CRS-2673: Attempting to stop 'ora.evmd' on 'host01'
CRS-2673: Attempting to stop 'ora.asm' on 'host01'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'host01'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'host01'
CRS-2677: Stop of 'ora.cssdmonitor' on 'host01' succeeded
CRS-2677: Stop of 'ora.evmd' on 'host01' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'host01' succeeded
CRS-5016: Process "/u01/app/11.2.0/grid/bin/acfsload" spawned by agent "/u01/app/11.2.0/grid/bin/orarootagent.bin" for action "stop" failed: details at "(:CLSN00010:)" in "/u01/app/11.2.0/grid/log/host01/agent/ohasd/orarootagent_root/orarootagent_root.log"
Waiting for ASM to shutdown.


acfsload: ACFS-9118: oracleadvm.ko driver in use - can not unload.
acfsload: ACFS-9118: oracleoks.ko driver in use - can not unload.
CRS-2677: Stop of 'ora.drivers.acfs' on 'host01' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'host01' succeeded
CRS-2677: Stop of 'ora.asm' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host01'
CRS-2677: Stop of 'ora.cssd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'host01'
CRS-2673: Attempting to stop 'ora.diskmon' on 'host01'
CRS-2677: Stop of 'ora.gpnpd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'host01'
CRS-2677: Stop of 'ora.gipcd' on 'host01' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'host01' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'host01' has completed
CRS-4133: Oracle High Availability Services has been stopped.
# exit

8) Attempt to check the status of Oracle Clusterware now that it has been successfully stopped.

$ crsctl check crs
CRS-4639: Could not contact Oracle High Availability Services
$ crsctl check cluster
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Check failed, or completed with errors.
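In a script, the two outcomes above can be told apart by matching on the message codes rather than parsing free text. The sketch below is a hypothetical helper (the function name and the one-word classification are assumptions, not part of this practice): it reads `crsctl check crs` output on stdin and prints a status word.

```shell
# Hypothetical helper: classify `crsctl check crs` output read from stdin.
# CRS-4639 means Oracle High Availability Services could not be contacted,
# that is, the stack is down on this node.
check_stack_state() {
  if grep -q 'CRS-4639'; then
    echo "stopped"
  else
    echo "running"
  fi
}

# Example using the message captured above:
printf 'CRS-4639: Could not contact Oracle High Availability Services\n' |
  check_stack_state
# prints "stopped"
```

In real use you would pipe the live command into it: `crsctl check crs | check_stack_state`.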

9) Connect to the second node of your cluster and verify that Oracle Clusterware is still running on that node. You may need to set your environment for the second node by using the oraenv utility.

$ . /home/oracle/labs/st_env.sh
$ ssh $ST_NODE2
Last login: Thu Aug 27 17:28:29 2009 from host01.example.com
$ . oraenv
ORACLE_SID = [grid] ? +ASM2
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/grid
$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online


10) Verify that all cluster resources are running on the second node and stopped on the first node, and that the VIP resources from the first node have migrated or failed over to the second node. The ora.oc4j and ora.gsd resources are expected to be offline. Exit the connection to the second node when done.

$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER          STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ACFS.dg
               ONLINE  ONLINE       host02
ora.DATA.dg
               ONLINE  ONLINE       host02
ora.FRA.dg
               ONLINE  ONLINE       host02
ora.LISTENER.lsnr
               ONLINE  ONLINE       host02
ora.acfs.dbhome1.acfs
               ONLINE  ONLINE       host02
ora.asm
               ONLINE  ONLINE       host02
ora.eons
               ONLINE  ONLINE       host02
ora.gsd
               OFFLINE OFFLINE      host02
ora.net1.network
               ONLINE  ONLINE       host02
ora.ons
               ONLINE  ONLINE       host02
ora.registry.acfs
               ONLINE  ONLINE       host02
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       host02
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       host02
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       host02
ora.gns
      1        ONLINE  ONLINE       host02
ora.gns.vip
      1        ONLINE  ONLINE       host02
ora.host01.vip
      1        ONLINE  INTERMEDIATE host02          FAILED OVER
ora.host02.vip
      1        ONLINE  ONLINE       host02
ora.oc4j
      1        OFFLINE OFFLINE


ora.orcl.db
      1        ONLINE  OFFLINE
      2        ONLINE  ONLINE       host02
ora.scan1.vip
      1        ONLINE  ONLINE       host02
ora.scan2.vip
      1        ONLINE  ONLINE       host02
ora.scan3.vip
      1        ONLINE  ONLINE       host02
$ exit
Connection to host02 closed.
$

11) Restart Oracle Clusterware on the first node as the root user. Return to the grid account and verify the results. Note: You may need to check the status of the resources several times until they have all been restarted. The restart is complete when the ora.orcl.db resource shows a state detail of Open. It may take several minutes to restart all resources completely.

$ su -
Password: oracle    << Password is not displayed
# /u01/app/11.2.0/grid/bin/crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
# exit
$
$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER          STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ACFS.dg
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
ora.DATA.dg
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
ora.FRA.dg
               OFFLINE OFFLINE      host01
               ONLINE  ONLINE       host02
ora.LISTENER.lsnr
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
ora.acfs.dbhome1.acfs
               ONLINE  ONLINE       host01


               ONLINE  ONLINE       host02
ora.asm
               ONLINE  ONLINE       host01          Started
               ONLINE  ONLINE       host02
ora.eons
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
ora.gsd
               OFFLINE OFFLINE      host01
               OFFLINE OFFLINE      host02
ora.net1.network
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
ora.ons
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
ora.registry.acfs
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       host01
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       host02
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       host02
ora.gns
      1        ONLINE  ONLINE       host02
ora.gns.vip
      1        ONLINE  ONLINE       host02
ora.host01.vip
      1        ONLINE  ONLINE       host01
ora.host02.vip
      1        ONLINE  ONLINE       host02
ora.oc4j
      1        OFFLINE OFFLINE
ora.orcl.db
      1        ONLINE  ONLINE       host01          Open
      2        ONLINE  ONLINE       host02
ora.scan1.vip
      1        ONLINE  ONLINE       host01
ora.scan2.vip
      1        ONLINE  ONLINE       host02
ora.scan3.vip
      1        ONLINE  ONLINE       host02
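The repeated status checks suggested in the note above can be automated with a small polling loop. This is a sketch under assumptions: wait_for_state is a made-up helper, and the status command is passed in as arguments rather than hard-coded, so in this environment it would be invoked roughly as shown in the trailing comment.

```shell
# Hypothetical helper: rerun a status command until its output contains
# the wanted string (for example "Open"), or give up after 60 attempts.
wait_for_state() {
  want=$1
  shift
  tries=0
  until "$@" | grep -q "$want"; do
    tries=$((tries + 1))
    if [ "$tries" -ge 60 ]; then
      return 1            # the wanted state never appeared
    fi
    sleep 5               # poll every 5 seconds, roughly 5 minutes total
  done
  return 0
}

# In this practice it would be used like:
#   wait_for_state Open crsctl stat res ora.orcl.db
```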


Practice 3-2: Adding and Removing Oracle Clusterware Configuration Files

In this practice, you determine the current location of your voting disks and Oracle Cluster Registry (OCR) files. You will then add another OCR location and remove it.

1) Use the crsctl utility to determine the location of the voting disks that are currently used by your Oracle Clusterware installation.

$ crsctl query css votedisk
##  STATE    File Universal Id                File Name         Disk group
--  -----    -----------------                ---------         ----------
 1. ONLINE   0f7a12c15ece4fcebf4dfe8ab209406e (ORCL:ASMDISK01)  [DATA]
 2. ONLINE   718cf2c254b44f21bfb8d41fbf21fdd5 (ORCL:ASMDISK02)  [DATA]
 3. ONLINE   449422aa69534f81bfc54e41087d677c (ORCL:ASMDISK03)  [DATA]
Located 3 voting disk(s).

2) Use the ocrcheck utility to determine the location of the Oracle Cluster Registry (OCR) files.

$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3228
         Available space (kbytes) :     258892
         ID                       : 1581544792
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
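When scripting around ocrcheck, the configured OCR locations can be pulled out of its output with awk. The function name below is an assumption, not an Oracle utility; the here-document reuses a slice of the output shown above.

```shell
# Hypothetical helper: print the configured OCR device/file names from
# `ocrcheck` output. "Device/File not configured" lines do not match the
# pattern, so only real locations are printed.
extract_ocr_locations() {
  awk '/Device\/File Name/ { print $NF }'
}

# Sample run against the output captured in this step:
extract_ocr_locations <<'EOF'
Device/File Name         : +DATA
Device/File integrity check succeeded
Device/File not configured
EOF
# prints "+DATA"
```

In real use you would pipe the live command into it: `ocrcheck | extract_ocr_locations`.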

3) Verify that the FRA ASM disk group is currently online for all nodes using the crsctl utility.

$ crsctl stat res ora.FRA.dg -t
--------------------------------------------------------------------------------


NAME           TARGET  STATE        SERVER          STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.FRA.dg
               OFFLINE OFFLINE      host01
               ONLINE  ONLINE       host02

4) If the FRA ASM disk group is not online, use the asmcmd utility to mount it, and verify the results. You may have to run the commands on each node. Note: This step is unnecessary on nodes where the disk group is already online.

$ asmcmd mount FRA
$
$ crsctl stat res ora.FRA.dg -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER          STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.FRA.dg
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02

5) Switch to the root account and add a second OCR location that is to be stored in the FRA ASM disk group. Use the ocrcheck command to verify the results.

$ su -
Password: oracle    << Password is not displayed
# /u01/app/11.2.0/grid/bin/ocrconfig -add +FRA
# /u01/app/11.2.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3228
         Available space (kbytes) :     258892
         ID                       : 1581544792
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
         Device/File Name         :       +FRA
                                    Device/File integrity check succeeded
                                    Device/File not configured


                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded

6) Examine the contents of the ocr.loc configuration file to see the changes that reference the new OCR location.

# cat /etc/oracle/ocr.loc
#Device/file getting replaced by device +FRA
ocrconfig_loc=+DATA
ocrmirrorconfig_loc=+FRA

7) Open a connection to your second node as the root user, and remove the second OCR location that was added from the first node. Exit the remote connection and verify the results when completed.

# . /home/oracle/labs/st_env.sh
# ssh $ST_NODE2
root@host02's password: oracle    << Password is not displayed
Last login: Tue Aug 25 13:04:32 2009 from host01.example.com
# /u01/app/11.2.0/grid/bin/ocrconfig -delete +FRA
# exit
Connection to host02 closed.
# /u01/app/11.2.0/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3228
         Available space (kbytes) :     258892
         ID                       : 1581544792
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured


                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded


Practice 3-3: Performing a Backup of the OCR and OLR

In this practice, you determine the location of the Oracle Local Registry (OLR) and perform backups of the OCR and OLR files.

1) Use the ocrconfig utility to list the automatic backups of the Oracle Cluster Registry (OCR) and the node or nodes on which they have been performed.

# /u01/app/11.2.0/grid/bin/ocrconfig -showbackup
host02 2009/08/27 20:17:20 /u01/app/11.2.0/grid/cdata/cluster03/backup00.ocr
host01 2009/08/27 14:53:56 /u01/app/11.2.0/grid/cdata/cluster03/backup01.ocr
host01 2009/08/27 10:53:54 /u01/app/11.2.0/grid/cdata/cluster03/backup02.ocr
host01 2009/08/26 02:53:44 /u01/app/11.2.0/grid/cdata/cluster03/day.ocr
host01 2009/08/24 18:53:34 /u01/app/11.2.0/grid/cdata/cluster03/week.ocr
PROT-25: Manual backups for the Oracle Cluster Registry are not available

2) Perform a manual backup of the OCR.

# /u01/app/11.2.0/grid/bin/ocrconfig -manualbackup
host02 2009/08/27 21:37:49 /u01/app/11.2.0/grid/cdata/cluster03/backup_20090827_213749.ocr

3) Perform a logical backup of the OCR, storing the file in the /home/oracle directory.

# /u01/app/11.2.0/grid/bin/ocrconfig -export /home/oracle/ocr.backup

4) Display only the manual backups that have been performed and identify the node on which each backup was stored. Does the logical backup appear in the display?

# /u01/app/11.2.0/grid/bin/ocrconfig -showbackup manual
host02 2009/08/27 21:37:49 /u01/app/11.2.0/grid/cdata/cluster03/backup_20090827_213749.ocr

5) Determine the location of the Oracle Local Registry (OLR) using the ocrcheck utility.

# /u01/app/11.2.0/grid/bin/ocrcheck -local
Status of Oracle Local Registry is as follows :


         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2204
         Available space (kbytes) :     259916
         ID                       : 1965295944
         Device/File Name         : /u01/app/11.2.0/grid/cdata/host01.olr
                                    Device/File integrity check succeeded
         Local registry integrity check succeeded
         Logical corruption check succeeded

6) Perform a logical backup of the OLR, storing the file in the /home/oracle directory.

# /u01/app/11.2.0/grid/bin/ocrconfig -local -export /home/oracle/olr.backup
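Steps 3 and 6 can be combined into a small date-stamped backup routine, for example to run from cron. Everything here is a sketch: OCRCONFIG and BACKUP_DIR are assumed variables, and the file-name convention is illustrative, not an Oracle convention.

```shell
# Assumed locations; adjust for your environment.
OCRCONFIG=${OCRCONFIG:-/u01/app/11.2.0/grid/bin/ocrconfig}
BACKUP_DIR=${BACKUP_DIR:-/home/oracle}

# Build a date-stamped export name, e.g. /home/oracle/ocr_20090827.backup
backup_name() {
  echo "$BACKUP_DIR/${1}_$(date +%Y%m%d).backup"
}

# Logical backups of both registries (run as root):
#   "$OCRCONFIG" -export "$(backup_name ocr)"
#   "$OCRCONFIG" -local -export "$(backup_name olr)"
```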


Practices for Lesson 4

In this practice, you will add a third node to your cluster.


Practice 4-1: Adding a Third Node to Your Cluster

The goal of this practice is to extend your cluster to a third node. Before you start, make sure that you have completed the following steps of lab 2-1 on your third node: 1, 2, 3, 4, 5, 6, and 8.

Note

• All the scripts you need to execute this lab are located on your first node in the /home/oracle/labs/node_addition directory.

• Unless specified otherwise, you are connected on your first node as the grid user using a terminal session.

1) Set up the ssh user equivalence for the grid user between your first node and your third node.

$ /home/oracle/solutions/configure_ssh
Setting up SSH user equivalency.
grid@gr7213's password:
grid@gr7214's password:
Checking SSH user equivalency.
gr7212
gr7213
gr7214
$

2) Make sure that you can connect from your first node to the third one without being prompted for passwords.

$ . /home/oracle/labs/st_env.sh
$ ssh $ST_NODE3 date
Fri Sep 4 10:38:10 EDT 2009
$
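To check equivalence to several nodes in one pass, ssh's BatchMode option makes the connection fail instead of prompting for a password. The helper below is a sketch (its name is made up); in this environment you would call it with $ST_NODE2 and $ST_NODE3.

```shell
# Hypothetical helper: report which of the given nodes accept
# passwordless SSH. BatchMode=yes disables password prompts, so a node
# without user equivalence fails immediately instead of hanging.
check_equivalence() {
  rc=0
  for node in "$@"; do
    if ssh -o BatchMode=yes "$node" date >/dev/null 2>&1; then
      echo "$node ok"
    else
      echo "$node FAILED"
      rc=1
    fi
  done
  return $rc
}

# Example: check_equivalence "$ST_NODE2" "$ST_NODE3"
```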

3) Make sure that you set up your environment variables correctly for the grid user to point to your grid installation.

$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/grid
$

4) Check the pre-grid installation requirements for your third node using the Cluster Verification Utility. This check fails because the fixup scripts have not yet been run on the third node.

$ . /home/oracle/labs/st_env.sh
$
$ cluvfy stage -pre crsinst -n $ST_NODE3
Performing pre-checks for cluster services setup


Checking node reachability...
Node reachability check passed from node "gr7212"

Checking user equivalence...
User equivalence check passed for user "grid"

Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Node connectivity passed for subnet "10.216.52.0" with node(s) gr7214
TCP connectivity check passed for subnet "10.216.52.0"
Node connectivity passed for subnet "10.216.96.0" with node(s) gr7214
TCP connectivity check passed for subnet "10.216.96.0"
Node connectivity passed for subnet "10.196.28.0" with node(s) gr7214
TCP connectivity check passed for subnet "10.196.28.0"
Node connectivity passed for subnet "10.196.180.0" with node(s) gr7214
TCP connectivity check passed for subnet "10.196.180.0"
Interfaces found on subnet "10.216.52.0" that are likely candidates for VIP are:
gr7214 eth0:10.216.54.232
Interfaces found on subnet "10.216.96.0" that are likely candidates for a private interconnect are:
gr7214 eth1:10.216.98.155
Interfaces found on subnet "10.196.28.0" that are likely candidates for a private interconnect are:
gr7214 eth2:10.196.31.14
Interfaces found on subnet "10.196.180.0" that are likely candidates for a private interconnect are:
gr7214 eth3:10.196.180.14
Node connectivity check passed

Total memory check passed
Available memory check passed


Swap space check passed
Free disk space check passed for "gr7214:/tmp"
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"
Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" failed
Check failed on nodes:
        gr7214
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check failed for "file-max"
Check failed on nodes:
        gr7214
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check failed for "wmem_max"
Check failed on nodes:
        gr7214
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make-3.81"
Package existence check passed for "binutils-2.17.50.0.6"
Package existence check passed for "gcc-4.1.2"
Package existence check passed for "gcc-c++-4.1.2"
Package existence check passed for "libgomp-4.1.2"
Package existence check passed for "libaio-0.3.106"
Package existence check passed for "glibc-2.5-24"
Package existence check passed for "compat-libstdc++-33-3.2.3"
Package existence check passed for "elfutils-libelf-0.125"
Package existence check passed for "elfutils-libelf-devel-0.125"
Package existence check passed for "glibc-common-2.5"
Package existence check passed for "glibc-devel-2.5"
Package existence check passed for "glibc-headers-2.5"
Package existence check passed for "libaio-devel-0.3.106"
Package existence check passed for "libgcc-4.1.2"


Package existence check passed for "libstdc++-4.1.2"
Package existence check passed for "libstdc++-devel-4.1.2"
Package existence check passed for "sysstat-7.0.2"
Package existence check passed for "unixODBC-2.2.11"
Package existence check passed for "unixODBC-devel-2.2.11"
Package existence check passed for "ksh-20060214"
Check for multiple users with UID value 0 passed
Current group ID check passed
Core file name pattern consistency check passed.
User "grid" is not part of "root" group. Check passed
Default user file creation mask check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
NTP Configuration file check passed
Checking daemon liveness...
Liveness check passed for "ntpd"
NTP daemon slewing option check passed
NTP daemon's boot time configuration check for slewing option passed
NTP common Time Server Check started...
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Clock time offset check passed
Clock synchronization check using Network Time Protocol(NTP) passed

Pre-check for cluster services setup was unsuccessful on all the nodes.
$

5) Generate the fixup script for your third node by running cluvfy with the -fixup option.

$ cluvfy stage -pre crsinst -n $ST_NODE3 -fixup
Performing pre-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "host01"

Checking user equivalence...


User equivalence check passed for user "grid"

Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Node connectivity passed for subnet "10.216.52.0" with node(s) host03
TCP connectivity check passed for subnet "10.216.52.0"
Node connectivity passed for subnet "10.216.96.0" with node(s) host03
TCP connectivity check passed for subnet "10.216.96.0"
Node connectivity passed for subnet "10.196.28.0" with node(s) host03
TCP connectivity check passed for subnet "10.196.28.0"
Node connectivity passed for subnet "10.196.180.0" with node(s) host03
TCP connectivity check passed for subnet "10.196.180.0"
Interfaces found on subnet "10.216.52.0" that are likely candidates for VIP are:
host03 eth0:10.216.54.232
Interfaces found on subnet "10.216.96.0" that are likely candidates for a private interconnect are:
host03 eth1:10.216.98.155
Interfaces found on subnet "10.196.28.0" that are likely candidates for a private interconnect are:
host03 eth2:10.196.31.14
Interfaces found on subnet "10.196.180.0" that are likely candidates for a private interconnect are:
host03 eth3:10.196.180.14
Node connectivity check passed

Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "host03:/tmp"
User existence check passed for "grid"
Group existence check passed for "oinstall"
Group existence check passed for "dba"


Membership check for user "grid" in group "oinstall" [as Primary] passed
Membership check for user "grid" in group "dba" failed
Check failed on nodes:
        host03
Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check failed for "file-max"
Check failed on nodes:
        host03
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check failed for "wmem_max"
Check failed on nodes:
        host03
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make-3.81"
Package existence check passed for "binutils-2.17.50.0.6"
Package existence check passed for "gcc-4.1.2"
Package existence check passed for "gcc-c++-4.1.2"
Package existence check passed for "libgomp-4.1.2"
Package existence check passed for "libaio-0.3.106"
Package existence check passed for "glibc-2.5-24"
Package existence check passed for "compat-libstdc++-33-3.2.3"
Package existence check passed for "elfutils-libelf-0.125"
Package existence check passed for "elfutils-libelf-devel-0.125"
Package existence check passed for "glibc-common-2.5"
Package existence check passed for "glibc-devel-2.5"
Package existence check passed for "glibc-headers-2.5"
Package existence check passed for "libaio-devel-0.3.106"
Package existence check passed for "libgcc-4.1.2"
Package existence check passed for "libstdc++-4.1.2"
Package existence check passed for "libstdc++-devel-4.1.2"
Package existence check passed for "sysstat-7.0.2"
Package existence check passed for "unixODBC-2.2.11"
Package existence check passed for "unixODBC-devel-2.2.11"


Package existence check passed for "ksh-20060214"
Check for multiple users with UID value 0 passed
Current group ID check passed
Core file name pattern consistency check passed.
User "grid" is not part of "root" group. Check passed
Default user file creation mask check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
NTP Configuration file check passed
Checking daemon liveness...
Liveness check passed for "ntpd"
NTP daemon slewing option check passed
NTP daemon's boot time configuration check for slewing option passed
NTP common Time Server Check started...
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Clock time offset check passed
Clock synchronization check using Network Time Protocol(NTP) passed

Fixup information has been generated for following node(s):
host03
Please run the following script on each node as "root" user to execute the fixups:
'/tmp/CVU_11.2.0.1.0_grid/runfixup.sh'

Pre-check for cluster services setup was unsuccessful on all the nodes.

6) Run the fixup script as directed.

$ ssh root@${ST_NODE3} /tmp/CVU_11.2.0.1.0_grid/runfixup.sh
root@host03's password:
Response file being used is :/tmp/CVU_11.2.0.1.0_grid/fixup.response
Enable file being used is :/tmp/CVU_11.2.0.1.0_grid/fixup.enable
Log file location: /tmp/CVU_11.2.0.1.0_grid/orarun.log
Setting Kernel Parameters...
fs.file-max = 327679


fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.wmem_max = 262144
net.core.wmem_max = 1048576
uid=502(grid) gid=502(oinstall) groups=505(asmdba),506(asmoper),504(asmadmin),502(oinstall)
$
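Each fixed kernel parameter is a simple numeric comparison: the current value versus the minimum the installer requires. The output above shows fs.file-max going from 327679 to 6815744. A minimal sketch of that comparison in shell (`meets_min` is a hypothetical helper for illustration, not part of cluvfy or the fixup script):

```shell
#!/bin/bash
# Hypothetical helper: succeeds when the current kernel parameter value
# meets the required minimum, the same comparison cluvfy reports on.
meets_min() {
  [ "$1" -ge "$2" ]
}

# Values from the fixup output above: the old file-max fails the check,
# the value written by runfixup.sh passes.
old_ok=$(meets_min 327679 6815744 && echo passed || echo failed)
new_ok=$(meets_min 6815744 6815744 && echo passed || echo failed)
echo "file-max before fixup: $old_ok, after fixup: $new_ok"
```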

7) Using the Cluster Verification Utility, make sure that you can add your third node to the cluster.

$ . /home/oracle/labs/st_env.sh
$
$ cluvfy stage -pre nodeadd -n $ST_NODE3
Performing pre-checks for node addition
Checking node reachability...
Node reachability check passed from node "host01"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Check: Node connectivity for interface "eth3"
Node connectivity passed for interface "eth3"
Node connectivity check passed
Checking CRS integrity...
CRS integrity check passed
Checking shared resources...
Shared resources check for node addition failed
Check failed on nodes:          <-------- You can safely ignore
        host03
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful


Node connectivity passed for subnet "10.216.52.0" with node(s) host01,host02,host03
TCP connectivity check passed for subnet "10.216.52.0"
Node connectivity passed for subnet "10.216.96.0" with node(s) host01,host02,host03
TCP connectivity check passed for subnet "10.216.96.0"
Node connectivity passed for subnet "10.196.28.0" with node(s) host01,host02,host03
TCP connectivity check passed for subnet "10.196.28.0"
Node connectivity passed for subnet "10.196.180.0" with node(s) host01,host02,host03
TCP connectivity check failed for subnet "10.196.180.0"
Interfaces found on subnet "10.216.52.0" that are likely candidates for VIP are:
host01 eth0:10.216.54.233
host02 eth0:10.216.54.234
host03 eth0:10.216.54.235
Interfaces found on subnet "10.216.96.0" that are likely candidates for a private interconnect are:
host01 eth1:10.216.101.101
host02 eth1:10.216.96.144
host03 eth1:10.216.100.226
Interfaces found on subnet "10.196.28.0" that are likely candidates for a private interconnect are:
host01 eth2:10.196.31.15
host02 eth2:10.196.31.16
host03 eth2:10.196.31.17
Interfaces found on subnet "10.196.180.0" that are likely candidates for a private interconnect are:
host01 eth3:10.196.180.15 eth3:10.196.181.239 eth3:10.196.183.15 eth3:10.196.182.229 eth3:10.196.180.232
host02 eth3:10.196.180.16 eth3:10.196.181.231 eth3:10.196.181.244
host03 eth3:10.196.180.17
Node connectivity check passed
Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "host03:/tmp"
User existence check passed for "grid"


Run level check passed
Hard limits check passed for "maximum open file descriptors"
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make-3.81"
Package existence check passed for "binutils-2.17.50.0.6"
Package existence check passed for "gcc-4.1.2"
Package existence check passed for "gcc-c++-4.1.2"
Package existence check passed for "libgomp-4.1.2"
Package existence check passed for "libaio-0.3.106"
Package existence check passed for "glibc-2.5-24"
Package existence check passed for "compat-libstdc++-33-3.2.3"
Package existence check passed for "elfutils-libelf-0.125"
Package existence check passed for "elfutils-libelf-devel-0.125"
Package existence check passed for "glibc-common-2.5"
Package existence check passed for "glibc-devel-2.5"
Package existence check passed for "glibc-headers-2.5"
Package existence check passed for "libaio-devel-0.3.106"
Package existence check passed for "libgcc-4.1.2"
Package existence check passed for "libstdc++-4.1.2"
Package existence check passed for "libstdc++-devel-4.1.2"
Package existence check passed for "sysstat-7.0.2"
Package existence check passed for "unixODBC-2.2.11"
Package existence check passed for "unixODBC-devel-2.2.11"
Package existence check passed for "ksh-20060214"
Check for multiple users with UID value 0 passed
User "grid" is not part of "root" group. Check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...


NTP Configuration file check passed
Checking daemon liveness...
Liveness check passed for "ntpd"
NTP daemon slewing option check passed
NTP daemon's boot time configuration check for slewing option passed
NTP common Time Server Check started...
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Clock time offset check passed
Clock synchronization check using Network Time Protocol(NTP) passed
Pre-check for node addition was unsuccessful on all the nodes.
$

8) Add your third node to the cluster from your first node:

$ . /home/oracle/labs/st_env.sh
$
$ cd $ORACLE_HOME/oui/bin
$
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={$ST_NODE3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={$ST_NODE3_VIP}"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 3755 MB    Passed
Oracle Universal Installer, Version 11.2.0.1.0 Production
Copyright (C) 1999, 2009, Oracle. All rights reserved.
Performing tests to see whether nodes host02,host03 are available
............................................................... 100% Done.
.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /u01/app/11.2.0/grid
   New Nodes


Space Requirements
   New Nodes
      host03
         /: Required 4.76GB : Available 32.38GB
Installed Products
   Product Names
      Oracle Grid Infrastructure 11.2.0.1.0
      Sun JDK 1.5.0.17.0
      Installer SDK Component 11.2.0.1.0
      Oracle One-Off Patch Installer 11.2.0.0.2
      Oracle Universal Installer 11.2.0.1.0
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0
      Enterprise Manager Common Core Files 10.2.0.4.2
      Oracle DBCA Deconfiguration 11.2.0.1.0
      Oracle RAC Deconfiguration 11.2.0.1.0
      Oracle Quality of Service Management (Server) 11.2.0.1.0
      Installation Plugin Files 11.2.0.1.0
      Universal Storage Manager Files 11.2.0.1.0
      Oracle Text Required Support Files 11.2.0.1.0
      Automatic Storage Management Assistant 11.2.0.1.0
      Oracle Database 11g Multimedia Files 11.2.0.1.0
      Oracle Multimedia Java Advanced Imaging 11.2.0.1.0
      Oracle Globalization Support 11.2.0.1.0
      Oracle Multimedia Locator RDBMS Files 11.2.0.1.0
      Oracle Core Required Support Files 11.2.0.1.0
      Bali Share 1.1.18.0.0
      Oracle Database Deconfiguration 11.2.0.1.0
      Oracle Quality of Service Management (Client) 11.2.0.1.0
      Expat libraries 2.0.1.0.1
      Oracle Containers for Java 11.2.0.1.0
      Perl Modules 5.10.0.0.1
      Secure Socket Layer 11.2.0.1.0
      Oracle JDBC/OCI Instant Client 11.2.0.1.0
      Oracle Multimedia Client Option 11.2.0.1.0
      LDAP Required Support Files 11.2.0.1.0
      Character Set Migration Utility 11.2.0.1.0
      Perl Interpreter 5.10.0.0.1
      PL/SQL Embedded Gateway 11.2.0.1.0
      OLAP SQL Scripts 11.2.0.1.0
      Database SQL Scripts 11.2.0.1.0
      Oracle Extended Windowing Toolkit 3.4.47.0.0
      SSL Required Support Files for InstantClient 11.2.0.1.0
      SQL*Plus Files for Instant Client 11.2.0.1.0
      Oracle Net Required Support Files 11.2.0.1.0
      Oracle Database User Interface 2.2.13.0.0
      RDBMS Required Support Files for Instant Client 11.2.0.1.0
      Enterprise Manager Minimal Integration 11.2.0.1.0
      XML Parser for Java 11.2.0.1.0
      Oracle Security Developer Tools 11.2.0.1.0
      Oracle Wallet Manager 11.2.0.1.0


      Enterprise Manager plugin Common Files 11.2.0.1.0
      Platform Required Support Files 11.2.0.1.0
      Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
      RDBMS Required Support Files 11.2.0.1.0
      Oracle Ice Browser 5.2.3.6.0
      Oracle Help For Java 4.2.9.0.0
      Enterprise Manager Common Files 10.2.0.4.2
      Deinstallation Tool 11.2.0.1.0
      Oracle Java Client 11.2.0.1.0
      Cluster Verification Utility Files 11.2.0.1.0
      Oracle Notification Service (eONS) 11.2.0.1.0
      Oracle LDAP administration 11.2.0.1.0
      Cluster Verification Utility Common Files 11.2.0.1.0
      Oracle Clusterware RDBMS Files 11.2.0.1.0
      Oracle Locale Builder 11.2.0.1.0
      Oracle Globalization Support 11.2.0.1.0
      Buildtools Common Files 11.2.0.1.0
      Oracle RAC Required Support Files-HAS 11.2.0.1.0
      SQL*Plus Required Support Files 11.2.0.1.0
      XDK Required Support Files 11.2.0.1.0
      Agent Required Support Files 10.2.0.4.2
      Parser Generator Required Support Files 11.2.0.1.0
      Precompiler Required Support Files 11.2.0.1.0
      Installation Common Files 11.2.0.1.0
      Required Support Files 11.2.0.1.0
      Oracle JDBC/THIN Interfaces 11.2.0.1.0
      Oracle Multimedia Locator 11.2.0.1.0
      Oracle Multimedia 11.2.0.1.0
      HAS Common Files 11.2.0.1.0
      Assistant Common Files 11.2.0.1.0
      PL/SQL 11.2.0.1.0
      HAS Files for DB 11.2.0.1.0
      Oracle Recovery Manager 11.2.0.1.0
      Oracle Database Utilities 11.2.0.1.0
      Oracle Notification Service 11.2.0.0.0
      SQL*Plus 11.2.0.1.0
      Oracle Netca Client 11.2.0.1.0
      Oracle Net 11.2.0.1.0
      Oracle JVM 11.2.0.1.0
      Oracle Internet Directory Client 11.2.0.1.0
      Oracle Net Listener 11.2.0.1.0
      Cluster Ready Services Files 11.2.0.1.0
      Oracle Database 11g 11.2.0.1.0
-----------------------------------------------------------------------------
Instantiating scripts for add node (Tuesday, September 1, 2009 2:46:41 PM EDT)
.                                                                 1% Done.


Instantiation of add node scripts complete
Copying to remote nodes (Tuesday, September 1, 2009 2:46:45 PM EDT)
...............................................................................................   96% Done.
Home copied to new nodes
Saving inventory on nodes (Tuesday, September 1, 2009 2:51:33 PM EDT)
.                                                               100% Done.
Save inventory complete
WARNING:
A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/u01/app/oraInventory/orainstRoot.sh' with root privileges on nodes 'host03'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each cluster node.
/u01/app/oraInventory/orainstRoot.sh #On nodes host03
/u01/app/11.2.0/grid/root.sh #On nodes host03
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node
The Cluster Node Addition of /u01/app/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
$

9) Connect as the root user on your third node in a terminal session and execute the following scripts: /u01/app/oraInventory/orainstRoot.sh and /u01/app/11.2.0/grid/root.sh.

[grid]$ . /home/oracle/labs/st_env.sh
[grid]$ ssh root@${ST_NODE3}
root@host03's password:
Last login: Tue Sep 29 09:59:03 2009 from host01.example.com
# /u01/app/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.


The execution of the script is complete.
#
# /u01/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2009-09-01 15:05:27: Parsing the host name
2009-09-01 15:05:27: Checking for super user privileges
2009-09-01 15:05:27: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node host01, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'host03'
CRS-2676: Start of 'ora.mdnsd' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'host03'
CRS-2676: Start of 'ora.gipcd' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'host03'
CRS-2676: Start of 'ora.gpnpd' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host03'
CRS-2676: Start of 'ora.cssdmonitor' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host03'
CRS-2672: Attempting to start 'ora.diskmon' on 'host03'
CRS-2676: Start of 'ora.diskmon' on 'host03' succeeded
CRS-2676: Start of 'ora.cssd' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'host03'


CRS-2676: Start of 'ora.ctssd' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'host03'
CRS-2676: Start of 'ora.drivers.acfs' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host03'
CRS-2676: Start of 'ora.asm' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'host03'
CRS-2676: Start of 'ora.crsd' on 'host03' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'host03'
CRS-2676: Start of 'ora.evmd' on 'host03' succeeded
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
host03     2009/09/01 15:09:15     /u01/app/11.2.0/grid/cdata/host03/backup_20090901_150915.olr
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 4095 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
#

10) From your first node, verify that the cluster is integrated and has not been divided into separate partitions.

$ . /home/oracle/labs/st_env.sh
$
$ cluvfy stage -post nodeadd -n $ST_NODE3
Performing post-checks for node addition
Checking node reachability...
Node reachability check passed from node "host01"
Checking user equivalence...
User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...


Verification of the hosts config file successful
Check: Node connectivity for interface "eth3"
Node connectivity passed for interface "eth3"
Node connectivity check passed
Checking cluster integrity...
Cluster integrity check passed
Checking CRS integrity...
CRS integrity check passed
Checking shared resources...
Shared resources check for node addition passed
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Node connectivity passed for subnet "10.216.52.0" with node(s) host03,host02,host01
TCP connectivity check passed for subnet "10.216.52.0"
Node connectivity passed for subnet "10.216.96.0" with node(s) host03,host02,host01
TCP connectivity check passed for subnet "10.216.96.0"
Node connectivity passed for subnet "10.196.28.0" with node(s) host03,host02,host01
TCP connectivity check passed for subnet "10.196.28.0"
Node connectivity passed for subnet "10.196.180.0" with node(s) host03,host02,host01
TCP connectivity check passed for subnet "10.196.180.0"
Interfaces found on subnet "10.216.52.0" that are likely candidates for VIP are:
host03 eth0:10.216.54.235
host02 eth0:10.216.54.234
host01 eth0:10.216.54.233


Interfaces found on subnet "10.216.96.0" that are likely candidates for a private interconnect are:
host03 eth1:10.216.100.226
host02 eth1:10.216.96.144
host01 eth1:10.216.101.101
Interfaces found on subnet "10.196.28.0" that are likely candidates for a private interconnect are:
host03 eth2:10.196.31.17
host02 eth2:10.196.31.16
host01 eth2:10.196.31.15
Interfaces found on subnet "10.196.180.0" that are likely candidates for a private interconnect are:
host03 eth3:10.196.180.17 eth3:10.196.182.229 eth3:10.196.180.224
host02 eth3:10.196.180.16 eth3:10.196.181.231 eth3:10.196.181.244
host01 eth3:10.196.180.15 eth3:10.196.181.239 eth3:10.196.183.15 eth3:10.196.180.232
Node connectivity check passed
Checking node application existence...
Checking existence of VIP node application (required)
Check passed.
Checking existence of ONS node application (optional)
Check passed.
Checking existence of GSD node application (optional)
Check ignored.
Checking existence of EONS node application (optional)
Check passed.
Checking existence of NETWORK node application (optional)
Check passed.
Checking Single Client Access Name (SCAN)...
Checking name resolution setup for "cl7215-scan.cl7215.example.com"...
Verification of SCAN VIP and Listener setup passed
User "grid" is not part of "root" group. Check passed


Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed
Checking if CTSS Resource is running on all nodes...
CTSS resource check passed
Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed
Check CTSS state started...
CTSS is in Observer state. Switching over to clock synchronization checks using NTP
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
NTP Configuration file check passed
Checking daemon liveness...
Liveness check passed for "ntpd"
NTP daemon slewing option check passed
NTP daemon's boot time configuration check for slewing option passed
NTP common Time Server Check started...
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Clock time offset check passed
Clock synchronization check using Network Time Protocol(NTP) passed
Oracle Cluster Time Synchronization Services check passed
Post-check for node addition was successful.
$

11) Make sure that the FRA and ACFS ASM disk groups are mounted on all three nodes.

[grid@host01]$ crsctl stat res ora.FRA.dg -t
--------------------------------------------------------------------------------


NAME           TARGET  STATE        SERVER         STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.FRA.dg
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
               OFFLINE OFFLINE      host03
[grid@host01]$ crsctl stat res ora.ACFS.dg -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER         STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ACFS.dg
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
               OFFLINE OFFLINE      host03
[grid@host01]$ ssh $ST_NODE3
Last login: Sat Sep 12 13:16:53 2009 from host01.example.com
[grid@host03]$ . oraenv
ORACLE_SID = [grid] ? +ASM3
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/grid
[grid@host03]$ asmcmd mount ACFS
[grid@host03]$ asmcmd mount FRA
[grid@host03]$ exit
logout
Connection to host03 closed.
[grid@host01]$ crsctl stat res ora.FRA.dg -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER         STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.FRA.dg
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02


               ONLINE  ONLINE       host03
[grid@host01]$ crsctl stat res ora.ACFS.dg -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER         STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ACFS.dg
               ONLINE  ONLINE       host01
               ONLINE  ONLINE       host02
               ONLINE  ONLINE       host03
[grid@host01 ~]$
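On a larger cluster, scanning the crsctl table by eye for the node that still needs a mount is error-prone. The snippet below filters a captured copy of the state lines for OFFLINE entries; the embedded text is an assumption standing in for live output, which on a real cluster you would pipe from crsctl stat res ora.FRA.dg -t instead:

```shell
#!/bin/bash
# Captured state lines in the crsctl -t layout; a live check would pipe:
#   crsctl stat res ora.FRA.dg -t
listing='ONLINE  ONLINE  host01
ONLINE  ONLINE  host02
OFFLINE OFFLINE host03'

# Print the servers whose disk group resource is offline.
offline_nodes=$(printf '%s\n' "$listing" | awk '$1 == "OFFLINE" {print $3}')
echo "disk group offline on: $offline_nodes"
```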


Practices for Lesson 5

In this practice, you will use Oracle Clusterware to protect the Apache application.


Practice 5-1: Protecting the Apache Application

In this practice, you use Oracle Clusterware to protect the Apache application. To do this, you create an application VIP for Apache (HTTPD), an action script, and a resource.

1) As root, source the environment script /home/oracle/labs/st_env.sh. Then verify that the Apache RPMs httpd, httpd-devel, and httpd-manual are installed on your first two nodes.

# su -
Password: oracle  << password not displayed
# . /home/oracle/labs/st_env.sh
# rpm -qa | grep httpd
httpd-2.2.3-22.0.1.el5
httpd-devel-2.2.3-22.0.1.el5
httpd-manual-2.2.3-22.0.1.el5
[root]# . /home/oracle/labs/st_env.sh

Repeat on the second node:

[root]# ssh $ST_NODE2 rpm -qa | grep httpd
root@host02's password: oracle  << password not displayed
httpd-2.2.3-22.0.1.el5
httpd-devel-2.2.3-22.0.1.el5
httpd-manual-2.2.3-22.0.1.el5
[root]#

2) As the root user, start the Apache application on your first node with the apachectl start command.

# apachectl start

From a VNC session on one of your three nodes, access the Apache test page on your first node. For example, if your first node was named host01, the HTTP address would look something like this:

http://host01.example.com


After you have verified that Apache is working properly, repeat this step on your second host. Once Apache is working correctly on both nodes, stop it with the apachectl stop command.

3) Create an action script to control the application. This script must be accessible by all nodes on which the application resource can be located.

a) As the root user, create a script on the first node called apache.scr in /usr/local/bin that will start, stop, check status, and clean up if the application does not exit cleanly. Make sure that the host specified in the WEBPAGECHECK variable is your first node. Use the /home/oracle/labs/less_05/apache.scr.tpl file as a template for creating the script. Make the script executable and test the script.

# cp /home/oracle/labs/less_05/apache.tpl /usr/local/bin/apache.scr
# vi /usr/local/bin/apache.scr

#!/bin/bash
HTTPDCONFLOCATION=/etc/httpd/conf/httpd.conf
WEBPAGECHECK=http://host01.example.com:80/icons/apache_pb.gif

case $1 in
'start')
    /usr/sbin/apachectl -k start -f $HTTPDCONFLOCATION
    RET=$?
    ;;
'stop')
    /usr/sbin/apachectl -k stop
    RET=$?
    ;;
'clean')
    /usr/sbin/apachectl -k stop
    RET=$?
    ;;
'check')
    /usr/bin/wget -q --delete-after $WEBPAGECHECK
    RET=$?
    ;;
*)
    RET=0
    ;;
esac

# 0: success; 1: error
if [ $RET -eq 0 ]; then
    exit 0
else
    exit 1
fi

Save the file.

# chmod 755 /usr/local/bin/apache.scr
# apache.scr start

Verify the web page displays.

# apache.scr stop

Web page should no longer display

b) As root, create a script on the second node called apache.scr in /usr/local/bin that will start, stop, check status, and clean up if the application does not exit cleanly. Make sure that the host specified in the WEBPAGECHECK variable is your second node. Use the /home/oracle/labs/less_05/apache.scr.tpl file as a template for creating the script. Make the script executable and test the script.

# ssh $ST_NODE2
root@host02's password: oracle  << password not displayed
# cp /home/oracle/labs/less_05/apache.tpl /usr/local/bin/apache.scr
# vi /usr/local/bin/apache.scr

#!/bin/bash
HTTPDCONFLOCATION=/etc/httpd/conf/httpd.conf
WEBPAGECHECK=http://host02.example.com:80/icons/apache_pb.gif

case $1 in
'start')
    /usr/sbin/apachectl -k start -f $HTTPDCONFLOCATION
    RET=$?
    ;;
'stop')
    /usr/sbin/apachectl -k stop
    RET=$?
    ;;
'clean')
    /usr/sbin/apachectl -k stop
    RET=$?
    ;;
'check')
    /usr/bin/wget -q --delete-after $WEBPAGECHECK
    RET=$?
    ;;
*)
    RET=0
    ;;
esac

# 0: success; 1: error
if [ $RET -eq 0 ]; then
    exit 0
else
    exit 1
fi

Save the file.

# chmod 755 /usr/local/bin/apache.scr
# apache.scr start

Verify the web page displays.

# apache.scr stop

The web page should no longer display.

4) Next, you must validate the return code of a check failure using the new script. The Apache server should NOT be running on either node. Run apache.scr check and immediately test the return code by issuing an echo $? command. This must be run immediately after the apache.scr check command because the shell variable $? holds the exit code of the previous command run from the shell. An unsuccessful check should return an exit code of 1. You should do this on both nodes.

# apache.scr check
# echo $?
1

Repeat on the second node.
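The exit-code convention the action script must follow can be illustrated with a small self-contained sketch. The `check_wrapper` function below is hypothetical (not part of the lab files); it mirrors the final `if [ $RET -eq 0 ]` test in apache.scr, collapsing any non-zero status from the underlying command into exit code 1, which Clusterware interprets as a check failure.

```shell
#!/bin/bash
# Hypothetical sketch of the apache.scr exit-code convention:
# exit 0 means success, exit 1 means failure, and any non-zero
# status from the wrapped command is collapsed into 1.
check_wrapper() {
    "$@"            # run the supplied check command
    RET=$?
    if [ $RET -eq 0 ]; then
        return 0
    else
        return 1
    fi
}

check_wrapper true
echo "healthy check returns: $?"
check_wrapper false
echo "failed check returns: $?"
```

Running the sketch shows a healthy check returning 0 and a failed check returning 1, exactly the values tested with echo $? above.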

5) As the grid user, create a server pool for the resource called myApache_sp. This pool contains your first two hosts and is a child of the Generic pool.

$ id
uid=502(grid) gid=501(oinstall) groups=501(oinstall),504(asmadmin),505(asmdba),506(asmoper)
$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/grid
$ /u01/app/11.2.0/grid/bin/crsctl add serverpool myApache_sp -attr "PARENT_POOLS=Generic, SERVER_NAMES=$ST_NODE1 $ST_NODE2"

6) Check the status of the new pool on your cluster.

$ /u01/app/11.2.0/grid/bin/crsctl status server -f
NAME=host01
STATE=ONLINE
ACTIVE_POOLS=Generic myApache_sp ora.orcl
STATE_DETAILS=

NAME=host02
STATE=ONLINE
ACTIVE_POOLS=Generic myApache_sp ora.orcl
STATE_DETAILS=
...

7) Add the Apache resource, which can be called myApache, to the myApache_sp subpool that has Generic as a parent. This must be performed as root because the resource listens on the default privileged port 80. Set CHECK_INTERVAL to 30, RESTART_ATTEMPTS to 2, and PLACEMENT to restricted.

# su -
Password: oracle << Password not displayed
# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)
# /u01/app/11.2.0/grid/bin/crsctl add resource myApache -type cluster_resource -attr "ACTION_SCRIPT=/usr/local/bin/apache.scr, PLACEMENT='restricted', SERVER_POOLS=myApache_sp, CHECK_INTERVAL='30', RESTART_ATTEMPTS='2'"
#

8) View the static attributes of the myApache resource with the crsctl status resource myApache -f command.

# /u01/app/11.2.0/grid/bin/crsctl status resource myApache -f
NAME=myApache
TYPE=cluster_resource
STATE=OFFLINE
TARGET=OFFLINE
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=/usr/local/bin/apache.scr
ACTIVE_PLACEMENT=0
AGENT_FILENAME=%CRS_HOME%/bin/scriptagent
AUTO_START=restore
CARDINALITY=1
CARDINALITY_ID=0
CHECK_INTERVAL=30
CREATION_SEED=123
CURRENT_RCOUNT=0
DEFAULT_TEMPLATE=
DEGREE=1
DESCRIPTION=
ENABLED=1
FAILOVER_DELAY=0
FAILURE_COUNT=0
FAILURE_HISTORY=
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=
ID=myApache
INCARNATION=0
LAST_FAULT=0
LAST_RESTART=0
LAST_SERVER=
LOAD=1
LOGGING_LEVEL=1
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=restricted
PROFILE_CHANGE_TEMPLATE=
RESTART_ATTEMPTS=2
SCRIPT_TIMEOUT=60
SERVER_POOLS=myApache_sp
START_DEPENDENCIES=
START_TIMEOUT=0
STATE_CHANGE_TEMPLATE=
STATE_CHANGE_VERS=0
STATE_DETAILS=
STOP_DEPENDENCIES=

STOP_TIMEOUT=0
UPTIME_THRESHOLD=1h

9) Use the crsctl start resource myApache command to start the new resource. Use the crsctl status resource myApache command to confirm that the resource is online on the first node. If you like, open a browser and point it to your first node as shown in step 2.

# /u01/app/11.2.0/grid/bin/crsctl start resource myApache
CRS-2672: Attempting to start 'myApache' on 'host01'
CRS-2676: Start of 'myApache' on 'host01' succeeded
# /u01/app/11.2.0/grid/bin/crsctl status resource myApache
NAME=myApache
TYPE=cluster_resource
TARGET=ONLINE
STATE=ONLINE on host01

10) Confirm that Apache is NOT running on your second node. The easiest way to do this is to check for running /usr/sbin/httpd -k start -f /etc/httpd/conf/httpd.conf processes with the ps command.

# . /home/oracle/labs/st_env.sh
# ssh $ST_NODE2 ps -ef|grep -i "httpd -k"
root@host02's password: oracle << password is not displayed
#

11) Next, simulate a node failure on your first node using the init command as root. Before issuing the reboot on the first node, open a VNC session on the second node and as the root user, execute the /home/oracle/labs/less_05/monitor.sh script so that you can monitor the failover.

ON THE FIRST NODE AS THE root USER:

# init 6    << To initiate a reboot, simulating a node failure

ON THE SECOND NODE AS THE root USER:

# cat /home/oracle/labs/less_05/monitor.sh
while true
do
  ps -ef | grep -i "httpd -k"
  sleep 1
done

# /home/oracle/labs/less_05/monitor.sh    << Execute this on the second node
root 21940 18530 0 11:01 pts/4 00:00:00 grep -i httpd -k
root 21948 18530 0 11:01 pts/4 00:00:00 grep -i httpd -k
root 21951 18530 0 11:01 pts/4 00:00:00 grep -i httpd -k
...
apache 22123 22117 0 11:01 ? 00:00:00 /usr/sbin/httpd -k start -f /etc/httpd/conf/httpd.conf
apache 22124 22117 0 11:01 ? 00:00:00 /usr/sbin/httpd -k start -f /etc/httpd/conf/httpd.conf
apache 22125 22117 0 11:01 ? 00:00:00 /usr/sbin/httpd -k start -f /etc/httpd/conf/httpd.conf
...

Issue a Ctrl-C to stop the monitoring.
#
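The monitor.sh loop runs until interrupted. A finite variant (hypothetical, for illustration only; `monitor_samples` is not part of the lab files) takes a fixed number of timestamped samples of the process list, which can make the exact moment of failover easier to pinpoint when reviewing the output later.

```shell
#!/bin/bash
# Hypothetical finite variant of monitor.sh: take a fixed number of
# timestamped samples of the process list instead of looping until
# Ctrl-C. The [h] bracket trick keeps the grep command itself out of
# the output.
monitor_samples() {
    local count=$1
    for i in $(seq 1 "$count")
    do
        echo "--- sample $i at $(date '+%H:%M:%S') ---"
        ps -ef | grep -i "[h]ttpd -k"
        sleep 1
    done
}

monitor_samples 3
```

On a node where Apache is not running, only the timestamped separator lines appear; once the resource fails over, the httpd processes show up under the corresponding timestamp.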

12) Verify the failover from the first node to the second with the crsctl stat resource myApache -t command.

# /u01/app/11.2.0/grid/bin/crsctl stat resource myApache -t
--------------------------------------------------------------
NAME           TARGET   STATE        SERVER      STATE_DETAILS
--------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------
myApache
      1        ONLINE   ONLINE       host02

Practices for Lesson 6

In this lesson, you will work with Oracle Clusterware log files and learn to use the ocrdump and cluvfy utilities.

Practice 6-1: Working with Log Files

In this practice, you will examine the Oracle Clusterware alert log and then package various log files into an archive format suitable for sending to My Oracle Support.

1) While connected as the grid user to your first node, locate and view the contents of the Oracle Clusterware alert log.

$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/grid
$ . /home/oracle/labs/st_env.sh
$ cd $ORACLE_HOME/log/$ST_NODE1
$ view alert$ST_NODE1.log

Oracle Database 11g Clusterware Release 11.2.0.1.0 - Production Copyright 1996, 2009 Oracle. All rights reserved.
2009-08-24 14:32:24.580
[client(12578)]CRS-2106:The OLR location /u01/app/11.2.0/grid/cdata/host01.olr is inaccessible. Details in /u01/app/11.2.0/grid/log/host01/client/ocrconfig_12578.log.
2009-08-24 14:32:24.933
[client(12578)]CRS-2101:The OLR was formatted using version 3.
2009-08-24 14:32:58.972
[ohasd(12895)]CRS-2112:The OLR service started on node host01.
2009-08-24 14:33:00.090
[ohasd(12895)]CRS-2772:Server 'host01' has been assigned to pool 'Free'.
2009-08-24 14:33:58.187
[ohasd(12895)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
2009-08-24 14:34:02.853
[cssd(14068)]CRS-1713:CSSD daemon is started in exclusive mode
2009-08-24 14:34:05.230
[cssd(14068)]CRS-1709:Lease acquisition failed for node host01 because no voting file has been configured; Details at (:CSSNM00031:) in /u01/app/11.2.0/grid/log/host01/cssd/ocssd.log
2009-08-24 14:34:22.893
[cssd(14068)]CRS-1601:CSSD Reconfiguration complete. Active nodes are host01
...
:q!
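Scanning the alert log for message codes is often quicker than paging through it. The sketch below is hypothetical: it extracts just the CRS-NNNN codes with grep, using a short inline sample in place of the real alert<node>.log so it can be run anywhere.

```shell
#!/bin/bash
# Hypothetical sketch: pull just the CRS message codes out of an alert
# log. A short inline sample stands in for the real alert<node>.log.
cat > /tmp/sample_alert.log <<'EOF'
[client(12578)]CRS-2106:The OLR location is inaccessible.
[ohasd(12895)]CRS-2112:The OLR service started on node host01.
[cssd(14068)]CRS-1601:CSSD Reconfiguration complete.
EOF

# -o prints only the matching portion, one code per line
grep -o 'CRS-[0-9]*' /tmp/sample_alert.log
```

Against a real alert log, the same grep pattern gives a quick inventory of which CRS messages were raised and how often (pipe through sort | uniq -c for counts).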

2) Navigate to the Oracle Cluster Synchronization Services daemon log directory and determine whether any log archives exist.

$ cd ./cssd
$ pwd

/u01/app/11.2.0/grid/log/host01/cssd
$ ls -alt ocssd*
-rw-r--r-- 1 grid oinstall  7564217 Sep  1 14:06 ocssd.log
-rw-r--r-- 1 grid oinstall 52606470 Sep  1 06:06 ocssd.l01
-rw-r--r-- 1 grid oinstall 52527387 Aug 29 22:26 ocssd.l02
-rw-r--r-- 1 root root     52655425 Aug 27 14:18 ocssd.l03
-rw-r--r-- 1 grid oinstall      114 Aug 24 14:34 ocssd.trc

3) Switch to the root user and set up the environment variables for the Grid Infrastructure. Change to the /home/oracle/labs directory and run the diagcollection.pl script to gather all log files that can be sent to My Oracle Support for problem analysis.

$ su -
Password: oracle << Password is not displayed
# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/grid
# cd /home/oracle/labs
# diagcollection.pl --collect --crshome /u01/app/11.2.0/grid
Production Copyright 2004, 2008, Oracle. All rights reserved
Cluster Ready Services (CRS) diagnostic collection tool
The following CRS diagnostic archives will be created in the local directory.
crsData_host01_20090901_1413.tar.gz -> logs, traces and cores from CRS home. Note: core files will be packaged only with the --core option.
ocrData_host01_20090901_1413.tar.gz -> ocrdump, ocrcheck etc
coreData_host01_20090901_1413.tar.gz -> contents of CRS core files in text format
osData_host01_20090901_1413.tar.gz -> logs from Operating System
Collecting crs data
/bin/tar: log/host01/agent/crsd/orarootagent_root/orarootagent_root.log: file changed as we read it
Collecting OCR data
Collecting information from core files
No corefiles found
Collecting OS logs

4) List the resulting log file archives that were generated with the diagcollection.pl script.

# ls -la *tar.gz

-rw-r--r-- 1 root root 89083873 Sep 1 14:17 crsData_host01_20090901_1413.tar.gz
-rw-r--r-- 1 root root    11821 Sep 1 14:18 ocrData_host01_20090901_1413.tar.gz
-rw-r--r-- 1 root root    98372 Sep 1 14:18 osData_host01_20090901_1413.tar.gz

5) Exit the switch user command to return to the grid account.

# exit
$

Practice 6-2: Working with OCRDUMP

In this practice, you will work with the OCRDUMP utility and dump the binary file into both text and XML representations.

1) While connected to the grid account, dump the contents of the OCR to the standard output and count the number of lines of output.

$ ocrdump -stdout | wc -l
469

2) Switch to the root user, dump the contents of the OCR to the standard output and count the number of lines of output. Compare your results with the previous step. How do the results differ?

$ su -
Password: oracle << Password is not displayed
# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/grid
# ocrdump -stdout | wc -l
3243

3) Dump the first 25 lines of the OCR to standard output using XML format.

# ocrdump -stdout -xml | head -25
<OCRDUMP>
<TIMESTAMP>09/01/2009 16:55:11</TIMESTAMP>
<COMMAND>/u01/app/11.2.0/grid/bin/ocrdump.bin -stdout -xml </COMMAND>

<KEY>
<NAME>SYSTEM</NAME>
<VALUE_TYPE>UNDEF</VALUE_TYPE>
<VALUE><![CDATA[]]></VALUE>
<USER_PERMISSION>PROCR_ALL_ACCESS</USER_PERMISSION>
<GROUP_PERMISSION>PROCR_READ</GROUP_PERMISSION>
<OTHER_PERMISSION>PROCR_READ</OTHER_PERMISSION>
<USER_NAME>root</USER_NAME>
<GROUP_NAME>root</GROUP_NAME>

<KEY>
<NAME>SYSTEM.version</NAME>
<VALUE_TYPE>UB4 (10)</VALUE_TYPE>
<VALUE><![CDATA[5]]></VALUE>
<USER_PERMISSION>PROCR_ALL_ACCESS</USER_PERMISSION>
<GROUP_PERMISSION>PROCR_READ</GROUP_PERMISSION>
<OTHER_PERMISSION>PROCR_READ</OTHER_PERMISSION>
<USER_NAME>root</USER_NAME>
<GROUP_NAME>root</GROUP_NAME>

4) Create an XML file dump of the OCR in the /home/oracle/labs directory. Name the dump file ocr_current_dump.xml.

# ocrdump -xml /home/oracle/labs/ocr_current_dump.xml

5) Find the node and directory that contains the automatic backup of the OCR from 24 hours ago.

# ocrconfig -showbackup
host02 2009/09/01 16:17:55 /u01/app/11.2.0/grid/cdata/cluster01/backup00.ocr
host02 2009/09/01 12:17:54 /u01/app/11.2.0/grid/cdata/cluster01/backup01.ocr
host02 2009/09/01 08:17:53 /u01/app/11.2.0/grid/cdata/cluster01/backup02.ocr
host02 2009/08/31 04:17:44 /u01/app/11.2.0/grid/cdata/cluster01/day.ocr
host01 2009/08/24 18:53:34 /u01/app/11.2.0/grid/cdata/cluster01/week.ocr
host02 2009/08/27 21:37:49 /u01/app/11.2.0/grid/cdata/cluster01/backup_20090827_213749.ocr

6) Copy the 24-hour-old automatic backup of the OCR into the /home/oracle/labs directory. This is not a dump, but an actual backup of the OCR. Note: It may be necessary to use scp if the file is located on a different node. Be sure to use your cluster name in place of cluster01 in the path.

# . /home/oracle/labs/st_env.sh
# cp /u01/app/11.2.0/grid/cdata/cluster01/day.ocr /home/oracle/labs/day.ocr

>>>> Or <<<<

# scp $ST_NODE2:/u01/app/11.2.0/grid/cdata/cluster01/day.ocr /home/oracle/labs/day.ocr
root@host02's password: oracle << Password is not displayed
day.ocr 100% 7436KB 7.3MB/s 00:00

7) Dump the contents of the day.ocr backup OCR file in XML format, saving the file in the /home/oracle/labs directory. Name the file day_ocr.xml.

# ocrdump -xml -backupfile /home/oracle/labs/day.ocr /home/oracle/labs/day_ocr.xml

8) Compare the differences between the day_ocr.xml file and the ocr_current_dump.xml file to determine all changes made to the OCR in the last 24 hours. Exit the switch user command when done.

# diff /home/oracle/labs/day_ocr.xml /home/oracle/labs/ocr_current_dump.xml
3,5c3,4
< <TIMESTAMP>09/01/2009 17:25:28</TIMESTAMP>
< <DEVICE>/home/oracle/labs/day.ocr</DEVICE>
< <COMMAND>/u01/app/11.2.0/grid/bin/ocrdump.bin -xml -backupfile /home/oracle/labs/day.ocr /home/oracle/labs/day_ocr.xml </COMMAND>
---
> <TIMESTAMP>09/01/2009 17:17:56</TIMESTAMP>
> <COMMAND>/u01/app/11.2.0/grid/bin/ocrdump.bin -xml /home/oracle/labs/ocr_current_dump2 </COMMAND>
8438c8437
< <VALUE><![CDATA[2009/08/31 00:17:43]]></VALUE>
---
> <VALUE><![CDATA[2009/09/01 16:17:55]]></VALUE>
8486c8485
< <VALUE><![CDATA[2009/08/30 20:17:42]]></VALUE>
---
> <VALUE><![CDATA[2009/09/01 12:17:54]]></VALUE>
8534c8533
< <VALUE><![CDATA[2009/08/30 16:17:40]]></VALUE>
---
> <VALUE><![CDATA[2009/09/01 08:17:53]]></VALUE>
8582c8581
< <VALUE><![CDATA[2009/08/29 04:17:29]]></VALUE>
---
> <VALUE><![CDATA[2009/08/31 04:17:44]]></VALUE>
8630c8629
< <VALUE><![CDATA[2009/08/30 04:17:36]]></VALUE>
---
> <VALUE><![CDATA[2009/09/01 04:17:51]]></VALUE>
# exit
$
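When the raw diff of two OCR dumps is long, it can help to keep only the changed values. The sketch below is hypothetical: two tiny sample files stand in for the real day_ocr.xml and ocr_current_dump.xml, and grep filters the diff down to the CDATA value lines.

```shell
#!/bin/bash
# Hypothetical sketch: narrow a diff between two OCR XML dumps to just
# the changed CDATA values. Tiny sample files stand in for the real
# day_ocr.xml and ocr_current_dump.xml.
cat > /tmp/old_dump.xml <<'EOF'
<VALUE><![CDATA[2009/08/31 00:17:43]]></VALUE>
<NAME>SYSTEM.version</NAME>
EOF
cat > /tmp/new_dump.xml <<'EOF'
<VALUE><![CDATA[2009/09/01 16:17:55]]></VALUE>
<NAME>SYSTEM.version</NAME>
EOF

# diff marks removed lines with '<' and added lines with '>';
# keeping only CDATA lines drops the unchanged keys and hunk headers.
diff /tmp/old_dump.xml /tmp/new_dump.xml | grep 'CDATA'
```

Applied to the real dumps, the same pipeline reduces the comparison to before-and-after value pairs like those shown in the diff output above.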

Practice 6-3: Working with CLUVFY

In this practice, you will work with CLUVFY to verify the state of various cluster components.

1) Determine the location of the cluvfy utility and its configuration file.

$ . /home/oracle/labs/st_env.sh
$ which cluvfy
/u01/app/11.2.0/grid/bin/cluvfy
$ cd $ORACLE_HOME/cv/admin
$ pwd
/u01/app/11.2.0/grid/cv/admin
$ cat cvu_config
# Configuration file for Cluster Verification Utility(CVU)
# Version: 011405
#
# NOTE:
# 1._ Any line without a '=' will be ignored
# 2._ Since the fallback option will look into the environment variables,
#     please have a component prefix(CV_) for each property to define a
#     namespace.
#

#Nodes for the cluster. If CRS home is not installed, this list will be
#picked up when -n all is mentioned in the commandline argument.
#CV_NODE_ALL=

#if enabled, cvuqdisk rpm is required on all nodes
CV_RAW_CHECK_ENABLED=TRUE

# Fallback to this distribution id
CV_ASSUME_DISTID=OEL4

# Whether X-Windows check should be performed for user equivalence with SSH
#CV_XCHK_FOR_SSH_ENABLED=TRUE

# To override SSH location
#ORACLE_SRVM_REMOTESHELL=/usr/bin/ssh

# To override SCP location
#ORACLE_SRVM_REMOTECOPY=/usr/bin/scp

2) Display the stage options and stage names that can be used with the cluvfy utility.

$ cluvfy stage -list

USAGE:
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]

Valid stage options and stage names are:
    -post hwos    : post-check for hardware and operating system
    -pre  cfs     : pre-check for CFS setup
    -post cfs     : post-check for CFS setup
    -pre  crsinst : pre-check for CRS installation
    -post crsinst : post-check for CRS installation
    -pre  hacfg   : pre-check for HA configuration
    -post hacfg   : post-check for HA configuration
    -pre  dbinst  : pre-check for database installation
    -pre  acfscfg : pre-check for ACFS Configuration.
    -post acfscfg : post-check for ACFS Configuration.
    -pre  dbcfg   : pre-check for database configuration
    -pre  nodeadd : pre-check for node addition.
    -post nodeadd : post-check for node addition.
    -post nodedel : post-check for node deletion.

3) Perform a postcheck for the ACFS configuration on all nodes.

$ cluvfy stage -post acfscfg -n $ST_NODE_LIST

Performing post-checks for ACFS Configuration

Checking node reachability...
Node reachability check passed from node "host01"

Checking user equivalence...
User equivalence check passed for user "grid"

Task ACFS Integrity check started...

Starting check to see if ASM is running on all cluster nodes...
ASM Running check passed. ASM is running on all cluster nodes

Starting Disk Groups check to see if at least one Disk Group configured...
Disk Group Check passed. At least one Disk Group configured

Task ACFS Integrity check passed

UDev attributes check for ACFS started...
UDev attributes check passed for ACFS

Post-check for ACFS Configuration was successful.

4) Display a list of the component names that can be checked with the cluvfy utility.

$ cluvfy comp -list

USAGE:
cluvfy comp <component-name> <component-specific options> [-verbose]

Valid components are:
    nodereach : checks reachability between nodes
    nodecon   : checks node connectivity
    cfs       : checks CFS integrity
    ssa       : checks shared storage accessibility
    space     : checks space availability
    sys       : checks minimum system requirements
    clu       : checks cluster integrity
    clumgr    : checks cluster manager integrity
    ocr       : checks OCR integrity
    olr       : checks OLR integrity
    ha        : checks HA integrity
    crs       : checks CRS integrity
    nodeapp   : checks node applications existence
    admprv    : checks administrative privileges
    peer      : compares properties with peers
    software  : checks software distribution
    asm       : checks ASM integrity
    acfs      : checks ACFS integrity
    gpnp      : checks GPnP integrity
    gns       : checks GNS integrity
    scan      : checks SCAN configuration
    ohasd     : checks OHASD integrity
    clocksync : checks Clock Synchronization
    vdisk     : check Voting Disk Udev settings

5) Display the syntax usage help for the space component check of the cluvfy utility.

$ cluvfy comp space -help

USAGE:
cluvfy comp space [-n <node_list>] -l <storage_location> -z <disk_space>{B|K|M|G} [-verbose]

<node_list> is the comma separated list of non-domain qualified nodenames, on which the test should be conducted. If

"all" is specified, then all the nodes in the cluster will be used for verification. <storage_location> is the storage path. <disk_space> is the required disk space, in units of bytes(B),kilobytes(K),megabytes(M) or gigabytes(G). DESCRIPTION: Checks for free disk space at the location provided by '-l' option on all the nodes in the nodelist. If no '-n' option is given, local node is used for this check.

6) Verify that the /tmp directory on each node of the cluster has at least 200 MB of free space, using the cluvfy utility. Use verbose output.

$ cluvfy comp space -n $ST_NODE_LIST2 -l /tmp -z 200M -verbose

Verifying space availability

Checking space availability...

Check: Space available on "/tmp"
Node Name  Available                Required            Comment
---------  -----------------------  ------------------  -------
host02     29.14GB (3.055228E7KB)   200MB (204800.0KB)  passed
host01     29.74GB (3.118448E7KB)   200MB (204800.0KB)  passed
host03     29.69GB (3.1132244E7KB)  200MB (204800.0KB)  passed
Result: Space availability check passed for "/tmp"

Verification of space availability was successful.
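The space check can be approximated on a single node with df, which is useful for a quick sanity check before running the cluster-wide verification. The `check_space` function below is hypothetical (not part of cluvfy or the lab files); it compares the available megabytes on a file system against a required minimum, mirroring the passed/failed column in the cluvfy output.

```shell
#!/bin/bash
# Hypothetical local equivalent of 'cluvfy comp space': check that a
# location has at least the required free space, in megabytes.
check_space() {
    local location=$1 required_mb=$2
    local avail_mb
    # POSIX 'df -Pk' reports 1K blocks; column 4 is the available space.
    avail_mb=$(df -Pk "$location" | awk 'NR==2 {printf "%d", $4/1024}')
    if [ "$avail_mb" -ge "$required_mb" ]; then
        echo "passed: ${avail_mb}MB available on $location"
    else
        echo "failed: only ${avail_mb}MB available on $location"
    fi
}

check_space /tmp 200
```

Unlike cluvfy, this checks only the local node; the real utility also handles node reachability and user equivalence before sampling each node.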

Practices for Lesson 7

In these practices, you will adjust ASM initialization parameters, stop and start instances, and monitor the status of instances.

Practice 7-1: Administering ASM Instances

In this practice, you adjust initialization parameters in the SPFILE, and stop and start the ASM instances on local and remote nodes.

1) Disk groups are reconfigured occasionally to move older data to slower disks. Even though these operations occur at scheduled maintenance times in off-peak hours, the rebalance operations do not complete before regular operations resume. There is some performance impact to the regular operations. The setting for the ASM_POWER_LIMIT initialization parameter determines the speed of the rebalance operation. Determine the current setting and increase the speed by 2.

a) Open a terminal window on the first node, become the grid user, and set the environment to use the +ASM1 instance. Connect to the +ASM1 instance as SYS with the SYSASM privilege. What is the setting for ASM_POWER_LIMIT?

[oracle]$ su - grid
Password: oracle << password is not displayed
[grid]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/grid
[grid]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.1.0 Production on Thu Aug 27 09:57:16 2009
Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> show parameter ASM_POWER_LIMIT

NAME             TYPE        VALUE
---------------- ----------- ------------------------------
asm_power_limit  integer     1
SQL>

b) This installation uses an SPFILE. Use the ALTER SYSTEM command to change the ASM_POWER_LIMIT for all nodes.

SQL> show parameter SPFILE

NAME             TYPE        VALUE
---------------- ----------- ------------------------------
spfile           string      +DATA/cluster01/asmparameterfi
                             le/registry.253.695669633

SQL> ALTER SYSTEM set ASM_POWER_LIMIT=3 SCOPE=BOTH SID='*';

System altered.

SQL> show parameter ASM_POWER_LIMIT

NAME             TYPE        VALUE
---------------- ----------- ------------------------------
asm_power_limit  integer     3
SQL>

2) You have decided that, due to other maintenance operations, you want one instance, +ASM1, to handle the bulk of the rebalance operation, so you will set ASM_POWER_LIMIT to 1 on instance +ASM2 and 5 on instance +ASM1.

SQL> ALTER SYSTEM set ASM_POWER_LIMIT=1 SCOPE=BOTH
  2> SID='+ASM2';

System altered.

SQL> ALTER SYSTEM set ASM_POWER_LIMIT=5 SCOPE=BOTH
  2> SID='+ASM1';

System altered.

SQL> show parameter ASM_POWER_LIMIT

NAME             TYPE        VALUE
---------------- ----------- ------------------------------
asm_power_limit  integer     5

SQL> column NAME format A16
SQL> column VALUE format 999999
SQL> select inst_id, name, value from GV$PARAMETER
  2> where name like 'asm_power_limit';

   INST_ID NAME             VALUE
---------- ---------------- ------
         1 asm_power_limit       5
         2 asm_power_limit       1
         3 asm_power_limit       3
SQL>

3) Exit the SQL*Plus application.

SQL> exit

4) The ASM instance and all associated applications, ACFS, database, and listener on one node must be stopped for maintenance on the physical cabling. Stop all the applications, ASM, and the listener associated with +ASM1 using srvctl.

a) In a new terminal window, as the oracle user, stop Enterprise Manager on your first node.

[oracle]$ . oraenv
ORACLE_SID = [oracle] ? orcl
The Oracle base for ORACLE_HOME=/u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1 is /u01/app/oracle
[oracle]$ export ORACLE_UNQNAME=orcl
[oracle]$ . /home/oracle/labs/st_env.sh
[oracle]$ emctl stop dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation. All rights reserved.
https://host01.example.com:1158/em/console/aboutApplication
Stopping Oracle Enterprise Manager 11g Database Control ...
... Stopped.

b) Stop the orcl database.

[oracle]$ srvctl stop instance -d orcl -n $ST_NODE1

c) Verify that the database is stopped on ST_NODE1. The pgrep command shows that no orcl background processes are running.

[oracle]$ pgrep -lf orcl

d) In a terminal window, become the grid OS user, set the oracle environment, set the class environment, and then stop the ASM instance +ASM1 using the srvctl stop asm -n $ST_NODE1 command.

[oracle]$ su - grid
Password: oracle << password is not displayed
[grid@host01 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/grid
[grid]$ . /home/oracle/labs/st_env.sh
[grid]$ srvctl stop asm -n $ST_NODE1
PRCR-1014 : Failed to stop resource ora.asm
PRCR-1065 : Failed to stop resource ora.asm
CRS-2529: Unable to act on 'ora.asm' because that would require stopping or relocating 'ora.DATA.dg', but the force option was not specified

e) Attempt to stop the ASM instance on ST_NODE1 using the force option, -f.

[grid]$ srvctl stop asm -n $ST_NODE1 -f

PRCR-1014 : Failed to stop resource ora.asm
PRCR-1065 : Failed to stop resource ora.asm
CRS-2670: Unable to start/relocate 'ora.asm' because 'ora.ACFS.dg' has a stop-time 'hard' dependency on it
CRS-0222: Resource 'ora.ACFS.dg' has dependency error.
CRS-2670: Unable to start/relocate 'ora.ACFS.dg' because 'ora.acfs.dbhome1.acfs' has a stop-time 'hard' dependency on it
CRS-0245: User doesn't have enough privilege to perform the operation

f) The ACFS file system is dependent on the disk group, so root must dismount the file system. Become root and dismount the sharedhome file system. Then attempt to stop ASM. Why does it still fail? Check the mounted volumes with the df -h command.

[grid]$ su - root
Password: oracle << password is not displayed
[root]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/grid
[root@host01 ~]# . /home/oracle/labs/st_env.sh
[root@host01 ~]# srvctl stop asm -n $ST_NODE1 -f
PRCR-1014 : Failed to stop resource ora.asm
PRCR-1065 : Failed to stop resource ora.asm
CRS-2673: Attempting to stop 'ora.asm' on 'host01'
ORA-15097: cannot SHUTDOWN ASM instance with connected client
CRS-2675: Stop of 'ora.asm' on 'host01' failed
CRS-2675: Stop of 'ora.asm' on 'host01' failed
[root]# df -h
Filesystem                                      Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00                 9.7G  4.1G  5.1G  45% /
/dev/xvda1                                       99M   20M   74M  22% /boot
tmpfs                                           1.1G  154M  871M  16% /dev/shm
/dev/mapper/VolGroup01-LogVol00                  30G  4.5G   24G  16% /u01
192.0.2.10:/mnt/shareddisk01/software/software   60G   40G   17G  71% /mnt/software
[root]#

g) The df -h command shows that the /u01/app/oracle/acfsmount/sharedhome file system has been dismounted, but the ASM instance still has clients dependent on it. In this case, Oracle Clusterware is dependent on the ASM instance because the OCR is in the ASM disk group.

h) Stop the ASM instance with the crsctl stop cluster -n $ST_NODE1 command. This command stops all the cluster services on the node.

[root]# crsctl stop cluster -n $ST_NODE1
CRS-2673: Attempting to stop 'ora.crsd' on 'host01'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'host01'
CRS-2673: Attempting to stop 'ora.gns' on 'host01'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'host01'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'host01'
CRS-2673: Attempting to stop 'ora.asm' on 'host01'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'host01'
CRS-2677: Stop of 'ora.asm' on 'host01' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'host01'
CRS-2677: Stop of 'ora.scan3.vip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.scan3.vip' on 'host02'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.host01.vip' on 'host01'
CRS-2677: Stop of 'ora.host01.vip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.host01.vip' on 'host02'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'host01'
CRS-2677: Stop of 'ora.scan2.vip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'host02'
CRS-2677: Stop of 'ora.gns' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.gns.vip' on 'host01'
CRS-2677: Stop of 'ora.gns.vip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.gns.vip' on 'host02'
CRS-2676: Start of 'ora.scan2.vip' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'host02'
CRS-2676: Start of 'ora.gns.vip' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.gns' on 'host02'
CRS-2676: Start of 'ora.scan3.vip' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'host02'
CRS-2674: Start of 'ora.host01.vip' on 'host02' failed
CRS-2679: Attempting to clean 'ora.host01.vip' on 'host02'
CRS-2674: Start of 'ora.gns' on 'host02' failed
CRS-2679: Attempting to clean 'ora.gns' on 'host02'
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'host02' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'host02' succeeded
CRS-2681: Clean of 'ora.gns' on 'host02' succeeded
CRS-2681: Clean of 'ora.host01.vip' on 'host02' succeeded

CRS-2673: Attempting to stop 'ora.ons' on 'host01'
CRS-2673: Attempting to stop 'ora.eons' on 'host01'
CRS-2677: Stop of 'ora.ons' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'host01'
CRS-2677: Stop of 'ora.net1.network' on 'host01' succeeded
CRS-2677: Stop of 'ora.eons' on 'host01' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'host01' has completed
CRS-2677: Stop of 'ora.crsd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'host01'
CRS-2673: Attempting to stop 'ora.ctssd' on 'host01'
CRS-2673: Attempting to stop 'ora.evmd' on 'host01'
CRS-2673: Attempting to stop 'ora.asm' on 'host01'
CRS-2677: Stop of 'ora.cssdmonitor' on 'host01' succeeded
CRS-2677: Stop of 'ora.evmd' on 'host01' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'host01' succeeded
CRS-2677: Stop of 'ora.asm' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'host01'
CRS-2677: Stop of 'ora.cssd' on 'host01' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'host01'
CRS-2677: Stop of 'ora.diskmon' on 'host01' succeeded
[root]#

i) Confirm that the listener has been stopped.

[root]# lsnrctl status listener
LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 28-AUG-2009 08:46:51
Copyright (c) 1991, 2009, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
TNS-12541: TNS:no listener
 TNS-12560: TNS:protocol adapter error
  TNS-00511: No listener
   Linux Error: 2: No such file or directory

5) Restart all the cluster resources on node ST_NODE1.

[root]# crsctl start cluster -n $ST_NODE1
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host01'
CRS-2676: Start of 'ora.cssdmonitor' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host01'
CRS-2672: Attempting to start 'ora.diskmon' on 'host01'
CRS-2676: Start of 'ora.diskmon' on 'host01' succeeded
CRS-2676: Start of 'ora.cssd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'host01'
CRS-2676: Start of 'ora.ctssd' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'host01'
CRS-2672: Attempting to start 'ora.evmd' on 'host01'

CRS-2676: Start of 'ora.evmd' on 'host01' succeeded
CRS-2676: Start of 'ora.asm' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'host01'
CRS-2676: Start of 'ora.crsd' on 'host01' succeeded

6) Verify that the resources, database, and Enterprise Manager are restarted on ST_NODE1. The crsctl status resource -n $ST_NODE1 command shows that ASM is online and the ACFS volume dbhome1 is mounted.

[root]# crsctl status resource -n $ST_NODE1
NAME=ora.ACFS.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE
STATE=ONLINE

NAME=ora.DATA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE
STATE=ONLINE

NAME=ora.FRA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE
STATE=ONLINE

NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=ONLINE
STATE=ONLINE

NAME=ora.LISTENER_SCAN1.lsnr
TYPE=ora.scan_listener.type
CARDINALITY_ID=1
TARGET=ONLINE
STATE=ONLINE

NAME=ora.acfs.dbhome1.acfs
TYPE=ora.acfs.type
TARGET=ONLINE
STATE=ONLINE

NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE
STATE=ONLINE

NAME=ora.eons
TYPE=ora.eons.type
TARGET=ONLINE
STATE=ONLINE

NAME=ora.gsd
TYPE=ora.gsd.type
TARGET=OFFLINE
STATE=OFFLINE

NAME=ora.host01.vip
TYPE=ora.cluster_vip_net1.type
CARDINALITY_ID=1
TARGET=ONLINE
STATE=ONLINE

NAME=ora.net1.network
TYPE=ora.network.type
TARGET=ONLINE
STATE=ONLINE

NAME=ora.ons
TYPE=ora.ons.type
TARGET=ONLINE
STATE=ONLINE

NAME=ora.registry.acfs
TYPE=ora.registry.acfs.type
TARGET=ONLINE
STATE=ONLINE

NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
CARDINALITY_ID=1
TARGET=ONLINE
STATE=ONLINE

[root]#

7) In a terminal window, on the first node as the oracle user, start the orcl instance and Enterprise Manager on ST_NODE1.

[oracle]$ . oraenv
ORACLE_SID = [oracle] ? orcl
The Oracle base for ORACLE_HOME=/u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1 is /u01/app/oracle
[oracle]$ . /home/oracle/labs/st_env.sh
[oracle]$ srvctl start instance -d orcl -n $ST_NODE1
[oracle]$ export ORACLE_UNQNAME=orcl
[oracle]$ emctl start dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation.  All rights reserved.
https://host01.example.com:1158/em/console/aboutApplication

Starting Oracle Enterprise Manager 11g Database Control ............ started.
------------------------------------------------------------------
Logs are generated in directory /u01/app/oracle/acfsmount/11.2.0/sharedhome/host01_orcl/sysman/log

8) Determine the Enterprise Manager DB control configuration on the cluster. Notice that dbconsole is running on your first node, an agent is running on your second node, and no EM components have been started on the third node.

[oracle]$ emca -displayConfig dbcontrol -cluster

STARTED EMCA at Aug 28, 2009 9:27:22 AM
EM Configuration Assistant, Version 11.2.0.0.2 Production
Copyright (c) 2003, 2005, Oracle.  All rights reserved.

Enter the following information:
Database unique name: orcl
Service name: orcl

Do you wish to continue? [yes(Y)/no(N)]: Y
Aug 28, 2009 9:27:31 AM oracle.sysman.emcp.EMConfig perform
INFO: This operation is being logged at /u01/app/oracle/cfgtoollogs/emca/orcl/emca_2009_08_28_09_27_21.log.
Aug 28, 2009 9:27:36 AM oracle.sysman.emcp.EMDBPostConfig showClusterDBCAgentMessage
INFO:
**************** Current Configuration ****************
 INSTANCE   NODE       DBCONTROL_UPLOAD_HOST
---------- ---------- ---------------------
 orcl       host01     host01.example.com
 orcl       host02     host01.example.com

Enterprise Manager configuration completed successfully
FINISHED EMCA at Aug 28, 2009 9:27:36 AM

9) In a terminal window, become the root user. Set the grid home environment and set the practice environment variables. Stop all the cluster resources on ST_NODE2.

[oracle]$ su - root
Password: oracle  << password is not displayed
[root]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/grid
[root]# . ~oracle/labs/st_env.sh
[root]# crsctl stop cluster -n $ST_NODE2
CRS-2673: Attempting to stop 'ora.crsd' on 'host02'

CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'host02'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'host02'
CRS-2673: Attempting to stop 'ora.orcl.db' on 'host02'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'host02'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'host02'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'host02'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.host02.vip' on 'host02'
CRS-2677: Stop of 'ora.scan1.vip' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'host01'
CRS-2677: Stop of 'ora.host02.vip' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.host02.vip' on 'host01'
CRS-2677: Stop of 'ora.registry.acfs' on 'host02' succeeded
CRS-2676: Start of 'ora.host02.vip' on 'host01' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'host01' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'host01'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'host01' succeeded
CRS-2677: Stop of 'ora.orcl.db' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.acfs.dbhome1.acfs' on 'host02'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'host02'
CRS-2677: Stop of 'ora.DATA.dg' on 'host02' succeeded
CRS-2677: Stop of 'ora.acfs.dbhome1.acfs' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.ACFS.dg' on 'host02'
CRS-2677: Stop of 'ora.ACFS.dg' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'host02'
CRS-2677: Stop of 'ora.asm' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'host02'
CRS-2673: Attempting to stop 'ora.eons' on 'host02'
CRS-2677: Stop of 'ora.ons' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'host02'
CRS-2677: Stop of 'ora.net1.network' on 'host02' succeeded
CRS-2677: Stop of 'ora.eons' on 'host02' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'host02' has completed
CRS-2677: Stop of 'ora.crsd' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'host02'
CRS-2673: Attempting to stop 'ora.ctssd' on 'host02'
CRS-2673: Attempting to stop 'ora.evmd' on 'host02'
CRS-2673: Attempting to stop 'ora.asm' on 'host02'
CRS-2677: Stop of 'ora.cssdmonitor' on 'host02' succeeded
CRS-2677: Stop of 'ora.evmd' on 'host02' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'host02' succeeded
CRS-2677: Stop of 'ora.asm' on 'host02' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'host02'
CRS-2677: Stop of 'ora.cssd' on 'host02' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'host02'
CRS-2677: Stop of 'ora.diskmon' on 'host02' succeeded
[root]#

10) What is the status of the orcl database on your cluster?

[root]# srvctl status database -d orcl
Instance orcl1 is running on node gr7212
Instance orcl2 is not running on node gr7213

11) As the root user on your first node, start the cluster on your second node (ST_NODE2).

[root@host01 ~]# crsctl start cluster -n $ST_NODE2
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'host02'
CRS-2676: Start of 'ora.cssdmonitor' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'host02'
CRS-2672: Attempting to start 'ora.diskmon' on 'host02'
CRS-2676: Start of 'ora.diskmon' on 'host02' succeeded
CRS-2676: Start of 'ora.cssd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'host02'
CRS-2676: Start of 'ora.ctssd' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'host02'
CRS-2672: Attempting to start 'ora.asm' on 'host02'
CRS-2676: Start of 'ora.evmd' on 'host02' succeeded
CRS-2676: Start of 'ora.asm' on 'host02' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'host02'
CRS-2676: Start of 'ora.crsd' on 'host02' succeeded
[root]#

12) Did the orcl instance on ST_NODE2 start? Use the srvctl status database -d orcl command as any of the users (oracle, grid, root), as long as the Oracle environment is set for that user. Note: The database may take a couple of minutes to restart. If the orcl2 instance is not running, try the status command again until instance orcl2 is running.

[root]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/grid
[root]# srvctl status database -d orcl
Instance orcl1 is running on node host01
Instance orcl2 is running on node host02
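If you want to script this "repeat until running" check instead of rereading the output by hand, the status text can be examined programmatically. The sketch below is an illustration only (the function name is invented); it parses captured srvctl output rather than invoking srvctl, and assumes the line shape shown in the transcripts above:

```python
def not_running(status_text):
    """Return the instance names that srvctl reports as not running.

    Assumes lines shaped like:
      Instance orcl2 is not running on node host02
    """
    down = []
    for line in status_text.splitlines():
        words = line.split()
        # "Instance <name> is not running on node <host>"
        if len(words) >= 4 and words[0] == "Instance" and words[2:4] == ["is", "not"]:
            down.append(words[1])
    return down

# Sample taken from the transcript in step 10:
sample = """Instance orcl1 is running on node gr7212
Instance orcl2 is not running on node gr7213"""
print(not_running(sample))  # ['orcl2']
```

In a polling loop, one would rerun srvctl status database -d orcl, feed its output to this check, and stop once the returned list is empty.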

13) Did Enterprise Manager restart on your second node (ST_NODE2)? Verify. As the oracle user, connect to your second node and run emctl status dbconsole.

[root]# exit
logout

[oracle]$ ssh $ST_NODE2
Last login: Fri Aug 28 09:41:21 2009 from host01.example.com
[oracle]$ . oraenv
ORACLE_SID = [oracle] ? orcl
The Oracle base for ORACLE_HOME=/u01/app/oracle/acfsmount/11.2.0/sharedhome is /u01/app/oracle
[oracle]$ export ORACLE_UNQNAME=orcl
[oracle]$ emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation.  All rights reserved.
https://host02.example.com:1158/em/console/aboutApplication
Oracle Enterprise Manager 11g is not running.
------------------------------------------------------------------
Logs are generated in directory /u01/app/oracle/acfsmount/11.2.0/sharedhome/host02_orcl/sysman/log
[oracle]$ emctl start dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation.  All rights reserved.
https://host02.example.com:1158/em/console/aboutApplication
Oracle Enterprise Manager 11g is not running.
------------------------------------------------------------------
Logs are generated in directory /u01/app/oracle/acfsmount/11.2.0/sharedhome/host02_orcl/sysman/log

14) Configure Database Control to monitor the ASM instance on your first node. Open a terminal window on your first node. Prepare the oracle environment, and then use the lsnrctl status command to determine the listener IP address. Use the second IP address listed.

[oracle]$ . oraenv
ORACLE_SID = [+ASM1] ? orcl
The Oracle base for ORACLE_HOME=/u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1 is /u01/app/oracle
[oracle]$ lsnrctl status
LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 30-SEP-2009 15:37:15

Copyright (c) 1991, 2009, Oracle.  All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.1.0 - Production
Start Date                30-SEP-2009 14:42:04
Uptime                    0 days 0 hr. 55 min. 11 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/11.2.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/gr7212/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.196.180.12)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.196.180.213)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "orcl.example.com" has 1 instance(s).
  Instance "orcl1", status READY, has 1 handler(s) for this service...
Service "orclXDB.example.com" has 1 instance(s).
  Instance "orcl1", status READY, has 1 handler(s) for this service...
The command completed successfully

Verify the configuration by starting Enterprise Manager DB Control. Use the URL: https://first_node_name.example.com:1158/em

a. Cluster Database Home: In the Instances section, click the link for +ASM1.example.com.

b. Automatic Storage Management: Instance name Home: If the instance appears to be down, click Configure.

c. Monitoring Configuration: In the Machine Name field, enter the IP address that you discovered in Step 3 (preceding). Click Test Connection.

d. Monitoring Configuration: An information message displays "Test Successful." Click OK.

e. Automatic Storage Management: Instance name Home: Click the browser Refresh button. You may have to refresh several times before EM reports that the ASM instance is up.

Practices for Lesson 8

In these practices, you will add, configure, and remove disk groups, manage rebalance operations, and monitor disk and disk group I/O statistics.

Practice 8-1: Administering ASM Disk Groups

In this practice, you will change the configuration of a disk group and control the resulting rebalance operations. You will determine the clients connected to the existing disk groups and perform disk group checks. Because the asmadmin group has only one member, grid, open a terminal window and become the grid OS user for this practice. You will use several tools, such as EM, ASMCMD, and ASMCA, to perform the same operations.

1) The FRA disk group has more disks allocated than it needs, so one disk will be dropped from it. Remove ASMDISK11 from the FRA disk group using ASMCMD.

a) As the grid OS user on your first node, confirm that the FRA disk group is mounted. If it is not mounted, mount it on your first and second nodes.

[grid]$ . oraenv
ORACLE_SID = [+ASM1] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/grid
[grid]$ . /home/oracle/labs/st_env.sh
[grid]$ asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512    4096  1048576      9996     3706                0            3706              0  N  ACFS/
MOUNTED  NORMAL  N         512    4096  1048576      9998     5280             1198            2041              0  N  DATA/
ASMCMD> mount FRA
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512    4096  1048576      9996     3706                0            3706              0  N  ACFS/
MOUNTED  NORMAL  N         512    4096  1048576      9998     5280             1198            2041              0  N  DATA/
MOUNTED  EXTERN  N         512    4096  1048576      7497     7088                0            7088              0  N  FRA/
ASMCMD> exit
[grid]$ crsctl status resource ora.FRA.dg -t
--------------------------------------------------------------------------------

NAME           TARGET  STATE        SERVER        STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.FRA.dg
               ONLINE  ONLINE       gr7212
               OFFLINE OFFLINE      gr7213
               ONLINE  ONLINE       gr7214
[grid]$ ssh $ST_NODE2
Last login: Tue Sep 29 16:56:00 2009 from gr7212.example.com
[grid@host02]$ . oraenv
ORACLE_SID = [grid] ? +ASM2
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/grid
[grid@host02]$ asmcmd mount FRA
[grid@host02]$ exit
[grid]$ crsctl status resource ora.FRA.dg -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER        STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.FRA.dg
               ONLINE  ONLINE       gr7212
               ONLINE  ONLINE       gr7213
               ONLINE  ONLINE       gr7214

b) Use the chdg command with inline XML. Note that the command is typed all on one line, without a return:

chdg <chdg name="FRA" power="5"> <drop> <dsk name="ASMDISK11"/> </drop> </chdg>

[grid]$ asmcmd
ASMCMD> chdg <chdg name="FRA" power="5"> <drop> <dsk name="ASMDISK11"/> </drop> </chdg>
ASMCMD>

2) In preparation for adding another disk to the DATA disk group, perform a disk check to verify the disk group metadata. Use the check disk group command chkdg.

ASMCMD> chkdg DATA

3) Add another disk (ASMDISK12) to the DATA disk group and remove a disk (ASMDISK04), but the rebalance operation must wait until a quiet time and then proceed as quickly as possible. Use Enterprise Manager Database Control.

a. Automatic Storage Management: Instance name Home: Click the Disk Groups tab.

b. Automatic Storage Management Login: Enter Username: SYS, Password: oracle_4U. Click Login.

c. Automatic Storage Management: Instance name Disk Groups: Click the DATA link.

d. Disk Group: DATA: Click Add.

e. Add Disks: Change Rebalance Power to 0 (this prevents any rebalance). Select Disks: ORCL:ASMDISK12. Click Show SQL.

f. Show SQL: The SQL statement that will be executed is shown:
   ALTER DISKGROUP DATA ADD DISK 'ORCL:ASMDISK12' SIZE 2500 M REBALANCE POWER 0
   Click Return.

g. Add Disks: Click OK.

h. Add Disks: An "in progress" message is displayed.

i. Disk Group: DATA: Select ASMDISK04. Click Remove.

j. Confirmation: Click Advanced Options. Set Rebalance Power to 0. Click Yes.

4) Perform the pending rebalance operation on disk group DATA.

a. Disk Group: DATA: Click the Automatic Storage Management: +ASM1_host01.example.com locator link at the top of the page.

b. Automatic Storage Management: +ASM1_host01.example.com: Select the DATA disk group. Click Rebalance.

c. Select ASM Instances: Select both ASM instances. Click OK.

d. Confirmation: Click Show Advanced Options. Set Rebalance Power to 11. Click Yes.

e. Automatic Storage Management: +ASM1_host01.example.com: Update Message: "Disk Group DATA rebalance request has been submitted." Click the DATA disk group.

f. Disk Group: DATA: Observe the Used (%) column for the various disks in the DATA disk group. Notice the Status column (ASMDISK04 is marked as DROPPING). Click the browser's Refresh button.

g. Disk Group: DATA: Notice the change in the Used (%) column. Refresh again. After a few minutes, ASMDISK04 no longer appears in the list of member disks.

h. Exit EM.

5) Determine which clients are using ASM and ACFS. This is helpful for determining which clients are preventing the shutdown of ASM. Use the ASMCMD lsct command. Notice that the expected database client and the ACFS client called asmvol are shown, but the cluster +ASM1 instance is also listed because the OCR and voting disk files are in the ASM disk group DATA. Open a terminal window on your first node as the grid user. For ASMCMD to work, the grid infrastructure environment must be set.

[grid]$ . oraenv
ORACLE_SID = [+ASM1] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/grid
[grid]$ asmcmd lsct
DB_Name  Status     Software_Version  Compatible_version  Instance_Name  Disk_Group
+ASM     CONNECTED        11.2.0.1.0          11.2.0.1.0  +ASM1          DATA
asmvol   CONNECTED        11.2.0.1.0          11.2.0.1.0  +ASM1          ACFS
orcl     CONNECTED        11.2.0.1.0          11.2.0.0.0  orcl1          DATA

6) Examine the disk activity involving the DATA disk group.

a) Examine the statistics for all the disk groups and then check the individual disks in the DATA disk group using EM.

a. Cluster Database: orcl.example.com Home: In the Instances section, click the +ASM1.example.com link.

b. Automatic Storage Management: +ASM1_host01.example.com Home: Click the Performance tab.

c. Automatic Storage Management: +ASM1_host01.example.com Performance: Notice the graphs of the various metrics: Response Time, Throughput, Operations per Second, and Operation Size. Almost all the data operations involve the DATA disk group. In the Additional Monitoring Links section, click Disk Group I/O Cumulative Statistics.

d. Automatic Storage Management Login: Enter Username: SYS, Password: oracle_4U. Click Login.

e. Disk Group I/O Cumulative Statistics: Notice that almost all of the I/O calls are against the DATA disk group. Expand the DATA disk group to show the individual disk statistics.

f. Disk Group I/O Cumulative Statistics: Examine the number of reads and writes to each disk in the DATA disk group. The I/O to the disks will not be balanced because ASMDISK12 was just added.

g. Exit EM.

b) Examine the disk I/O statistics using the lsdsk --statistics command.

[grid]$ asmcmd
ASMCMD> lsdsk --statistics
Reads  Write  Read_Errs  Write_Errs  Read_time  Write_Time  Bytes_Read  Bytes_Written  Voting_File  Path
 6893  10103          0           0    324.968        74.8   664005632      300485632  Y  ORCL:ASMDISK01
10273  11096          0           0    338.432      84.596   743536128      346663936  Y  ORCL:ASMDISK02
 6345  10971          0           0     283.62      59.944   599138304      246011392  Y  ORCL:ASMDISK03
  153      0          0           0          1           0     1568768              0  N  ORCL:ASMDISK05
  124      4          0           0      1.056        .016     1490944          16384  N  ORCL:ASMDISK06
   97      0          0           0        1.1           0      397312              0  N  ORCL:ASMDISK07
   99      0          0           0      1.152           0      405504              0  N  ORCL:ASMDISK08
  897   1027          0           0      3.792       5.224    48025600      118779904  N  ORCL:ASMDISK09
  447    631          0           0      4.644         3.8    23568384       92385280  N  ORCL:ASMDISK10

2050 7008 0 0 6.436 199.012 29081600 1234462720 N ORCL:ASMDISK12

c) Examine the disk statistics, bytes, and times for the DATA disk group with the iostat -t -G DATA command.

ASMCMD> iostat -t -G DATA
Group_Name  Dsk_Name   Reads      Writes      Read_Time  Write_Time
DATA        ASMDISK01  664112128  301286912   324.972    74.908
DATA        ASMDISK02  744060416  346926080   338.444    84.624
DATA        ASMDISK03  600055808  246699520   283.656    60.032
DATA        ASMDISK12  30916608   1235182080  6.452      199.116
ASMCMD> exit
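The Reads and Writes columns of iostat -t are cumulative bytes, and Read_Time/Write_Time are cumulative seconds, so a rough average throughput per disk can be derived from two numbers on each row. A small Python sketch that post-processes captured output of this form; the column layout is assumed from the sample above and may differ between releases:

```python
def parse_iostat(text):
    """Parse captured `asmcmd iostat -t -G <dg>` text into {disk: (read_bytes, read_secs)}."""
    stats = {}
    for line in text.splitlines()[1:]:          # skip the header row
        parts = line.split()
        if len(parts) == 6:                     # Group, Disk, Reads, Writes, Read_Time, Write_Time
            _group, disk, reads, _writes, read_time, _write_time = parts
            stats[disk] = (int(reads), float(read_time))
    return stats

# Two rows from the transcript above:
sample = """Group_Name Dsk_Name Reads Writes Read_Time Write_Time
DATA ASMDISK01 664112128 301286912 324.972 74.908
DATA ASMDISK12 30916608 1235182080 6.452 199.116"""

stats = parse_iostat(sample)
bytes_read, secs = stats["ASMDISK01"]
mb_per_sec = bytes_read / secs / 1048576        # rough average read throughput in MB/s
```

Note how little cumulative read time ASMDISK12 shows compared with the original disks, consistent with it having only just been added in step 3.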

7) Run the following ASMCMD commands to return the DATA and FRA disk groups to the configuration at the beginning of the practice.

[grid]$ asmcmd chdg /home/oracle/labs/less_08/reset_DATA.xml
[grid]$ asmcmd chdg /home/oracle/labs/less_08/reset_FRA.xml
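The contents of the two reset files are not reproduced in this guide. For illustration only, a chdg XML file uses the same elements as the inline command in step 1 b); a hypothetical file that re-adds a disk to FRA might look like the following (the disk string and structure are assumptions, not the actual lab file):

```xml
<chdg name="FRA" power="5">
  <add>
    <dsk string="ORCL:ASMDISK11"/>
  </add>
</chdg>
```

Saved to a file, it would be applied with asmcmd chdg <path-to-file>, exactly as the two commands above do.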

Practices for Lesson 9

In this practice, you will administer ASM files, directories, and templates.

Practice 9-1: Administering ASM Files, Directories, and Templates

In this practice, you use several tools to navigate the ASM file hierarchy, manage aliases, manage templates, and move files to different disk regions.

1) ASM is designed to hold database files in a hierarchical structure. Navigate the orcl database files with ASMCMD.

[grid]$ asmcmd
ASMCMD> ls
ACFS/
DATA/
FRA/
ASMCMD> ls DATA
ORCL/
cluster02/
ASMCMD> ls DATA/ORCL
CONTROLFILE/
DATAFILE/
ONLINELOG/
PARAMETERFILE/
TEMPFILE/
spfileorcl.ora
ASMCMD> ls -l DATA/ORCL/*
Type           Redund  Striped  Time             Sys  Name
+DATA/ORCL/CONTROLFILE/:
CONTROLFILE    HIGH    FINE     AUG 31 13:00:00  Y    Current.263.695918587
+DATA/ORCL/DATAFILE/:
DATAFILE       MIRROR  COARSE   AUG 31 13:00:00  Y    EXAMPLE.264.695918623
DATAFILE       MIRROR  COARSE   SEP 01 04:00:00  Y    SYSAUX.260.695918435
DATAFILE       MIRROR  COARSE   AUG 31 13:00:00  Y    SYSTEM.258.695918433
DATAFILE       MIRROR  COARSE   AUG 31 13:00:00  Y    UNDOTBS1.259.695918437
DATAFILE       MIRROR  COARSE   AUG 31 13:00:00  Y    UNDOTBS2.257.695918879
DATAFILE       MIRROR  COARSE   AUG 31 13:00:00  Y    USERS.268.695918437
+DATA/ORCL/ONLINELOG/:
ONLINELOG      MIRROR  COARSE   AUG 31 13:00:00  Y    group_1.262.695918591
ONLINELOG      MIRROR  COARSE   AUG 31 13:00:00  Y    group_2.261.695918595
ONLINELOG      MIRROR  COARSE   AUG 31 13:00:00  Y    group_3.265.695918947
ONLINELOG      MIRROR  COARSE   AUG 31 13:00:00  Y    group_4.256.695918951
+DATA/ORCL/PARAMETERFILE/:
PARAMETERFILE  MIRROR  COARSE   AUG 31 22:00:00  Y    spfile.266.695918955
+DATA/ORCL/TEMPFILE/:
TEMPFILE       MIRROR  COARSE   AUG 31 13:00:00  Y    TEMP.267.695918609
                                                 N    spfileorcl.ora => +DATA/ORCL/PARAMETERFILE/spfile.266.695918955
ASMCMD>

2) The default structure may not be the most useful for some sites. Create a set of aliases for directories and files to match a file system. Use EM.

a. Cluster Database: orcl.example.com
   In the Instances section, click the +ASM1_node_name.example.com link.
b. Automatic Storage Management: +ASM1_node_name.example.com Home
   Click the Disk Groups tab.
c. Automatic Storage Management Login
   Enter: Username: SYS, Password: oracle_4U. Click Login.
d. Automatic Storage Management: +ASM1_host01.example.com Disk Groups
   Click the DATA disk group link.
e. Disk Group: DATA General
   Click the Files tab.
f. Disk Group: DATA Files
   Select ORCL. Click Create Directory.
g. Create Directory
   Enter New Directory: oradata. Click Show SQL.
h. Show SQL
   The SQL that will be executed is shown:
   ALTER DISKGROUP DATA ADD DIRECTORY '+DATA/ORCL/oradata'
   Click Return.
i. Create Directory
   Click OK.
j. Disk Group: DATA Files
   Expand the ORCL folder. Expand the DATAFILE folder. Select EXAMPLE.nnn.NNNNNN. Click Create Alias.

k. Create Alias
   Enter User Alias: +DATA/ORCL/oradata/example_01.dbf. Click Show SQL.
l. Show SQL
   The SQL that will be executed is shown:
   ALTER DISKGROUP DATA ADD ALIAS '+DATA/ORCL/oradata/example_01.dbf' FOR '+DATA/ORCL/DATAFILE/EXAMPLE.264.698859675'
   Click Return.
m. Create Alias
   Click OK.
n. Disk Group: DATA: Files
   Click the EXAMPLE.nnn.NNNNNN link.
o. EXAMPLE.nnn.NNNNNN: Properties
   Notice the properties that are displayed in the General section. Click OK.
p. Disk Group: DATA Files
   Click the example_01.dbf link.
q. example_01.dbf: Properties
   Note that the properties include System Name. Click OK.
r. Exit EM.

3) Using ASMCMD, navigate to view example_01.dbf and display the properties. Using the system name, find the alias with the ls -a command.

[grid]$ asmcmd
ASMCMD> ls +DATA/ORCL/oradata/*
example_01.dbf
ASMCMD> ls -l +DATA/ORCL/oradata/*
Type  Redund  Striped  Time  Sys  Name
                             N    example_01.dbf => +DATA/ORCL/DATAFILE/EXAMPLE.264.695918623
ASMCMD> ls -a +DATA/ORCL/DATAFILE/example*
+DATA/ORCL/oradata/example_01.dbf => EXAMPLE.264.695918623

4) Create a new tablespace. Name the file using a full name. Use EM.

a. Cluster Database: orcl.example.com Home
   Click the Server tab.
b. Cluster Database: orcl.example.com Server
   In the Storage section, click the Tablespaces link.
c. Tablespaces
   Click Create.
d. Create Tablespace: General
   Enter Name: XYZ. In the Datafiles section, click Add.
e. Add Datafile
   Enter: Alias directory: +DATA/ORCL/oradata, Alias name: XYZ_01.dbf.

   Click Continue.
f. Create Tablespace: General
   Click OK.
g. Tablespaces

5) Create another data file for the XYZ tablespace. Allow the file to receive a default name. Did both files get system-assigned names?

a. Tablespaces
   Select the XYZ tablespace. Click Edit.
b. Edit Tablespace: XYZ
   In the Datafiles section, click Add.
c. Add Datafile
   Click Continue.
d. Edit Tablespace: XYZ
   Click Show SQL.
e. Show SQL
   Note: The SQL provides only the disk group name. Click Return.
f. Edit Tablespace: XYZ
   Click Apply.
g. Edit Tablespace: XYZ
   In the Datafiles section, note the names of the two files. One name was specified in the previous practice step, xyz_01.dbf, and the other is a system-assigned name. Click the Database tab.
h. Cluster Database: orcl.example.com
   In the Instances section, click the +ASM1_node_name.example.com link.
i. Automatic Storage Management: +ASM1_node_name.example.com Home
   Click the Disk Groups tab.
j. Automatic Storage Management Login
   Enter: Username: SYS, Password: oracle_4U. Click Login.
k. Automatic Storage Management: +ASM1_host01.example.com Disk Groups
   Click the DATA disk group link.
l. Disk Group: DATA General
   Click the Files tab.
m. Disk Group: DATA Files
   Expand the ORCL folder. Expand the DATAFILE folder. Note that there are two system-named files associated with the XYZ tablespace. Expand the oradata folder. Click the xyz_01.dbf link.
n. XYZ_01.dbf: Properties
   Observe that the xyz_01.dbf file is an alias to a file with a system name.

   Click OK.
o. Disk Group: DATA Files

6) Move the files for the XYZ tablespace to the hot region of the DATA disk group.

a. Disk Group: DATA: Files
   Click the General tab.
b. Disk Group: DATA: General
   In the Advanced Attributes section, click Edit.
c. Edit Advanced Attributes for Disk Group: DATA
   Change Database Compatibility to 11.2.0.0. Click Show SQL.
d. Show SQL
   Notice the SET ATTRIBUTE clause. Click Return.
e. Edit Advanced Attributes for Disk Group: DATA
   Click OK.
f. Disk Group: DATA General
   Click the Files tab.
g. Disk Group: DATA: Files
   Expand the ORCL folder. Expand the oradata folder. Select the xyz_01.dbf file. Click Edit File.
h. Edit File: XYZ_01.dbf
   In the Regions section, select the Primary Hot and Mirror Hot options. Click Show SQL.
i. Show SQL
   Note that the SQL statement uses the alias name and attributes clause. Click Return.
j. Edit File: XYZ_01.dbf
   Click Apply.
k. Disk Group: DATA: Files
   Expand the DATAFILE folder. Select the XYZ file that is not in the HOT region. Click Edit File.
l. Edit File: XYZ.nnn.NNNNNN
   In the Regions section, select the Primary Hot and Mirror Hot options. Click Show SQL.
m. Show SQL
   Note that the SQL statement uses the system name and attributes clause. Click Return.
n. Edit File: XYZ.nnn.NNNNNN
   Click Apply.
o. Disk Group: DATA: Files
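The statements behind the Show SQL steps above take this general form (a sketch, not the exact text EM generates; the file names and system-generated numbers come from your own environment):

```sql
-- Raise the disk group's database compatibility so file region attributes are usable
ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.rdbms' = '11.2.0.0';

-- Move a file to the hot region, addressing it by its alias ...
ALTER DISKGROUP DATA MODIFY FILE '+DATA/ORCL/oradata/xyz_01.dbf'
      ATTRIBUTE (HOT MIRRORHOT);

-- ... or by its fully qualified system name (the number here is illustrative)
ALTER DISKGROUP DATA MODIFY FILE '+DATA/ORCL/DATAFILE/XYZ.269.695920000'
      ATTRIBUTE (HOT MIRRORHOT);
```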

7) Create a template that changes the default placement of files to the hot region.

a. Disk Group: DATA: Files
   Click the Templates tab.
b. Disk Group: DATA: Templates
   Click Create.
c. Create Template
   Enter Template Name: HOT_FILES. In the Regions section, select the Primary Hot and Mirror Hot options. Click Show SQL.
d. Show SQL
   Note the SQL statement attributes clause. Click Return.
e. Create Template
   Click OK.
f. Disk Group: DATA: Templates
   Note the attributes of the HOT_FILES template compared with the DATAFILE template.
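The SQL noted in step d should correspond to an ADD TEMPLATE statement of roughly this shape (a sketch, not the exact text EM generates):

```sql
ALTER DISKGROUP DATA ADD TEMPLATE hot_files
      ATTRIBUTES (HOT MIRRORHOT);
```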

8) Add another data file to the XYZ tablespace using the template. Was the file placed in the HOT region?

a. Disk Group: DATA: Templates
   Click the Database tab.
b. Cluster Database: orcl.example.com Home
   Click the Server tab.
c. Cluster Database: orcl.example.com Server
   In the Storage section, click the Tablespaces link.
d. Tablespaces
   Select the XYZ tablespace. Click Edit.
e. Edit Tablespace: XYZ
   In the Datafiles section, click Add.
f. Add Datafile
   Change Template to HOT_FILES. Click Continue.
g. Edit Tablespace: XYZ
   Click Show SQL.
h. Show SQL
   Note the data file specification "+DATA(HOT_FILES)". Click Return.
i. Edit Tablespace: XYZ
   Click Apply. Click the Database tab.
j. Cluster Database: orcl.example.com
   In the Instances section, click the +ASM1_node_name.example.com link.
k. Automatic Storage Management: +ASM1_node_name.example.com Home
   Click the Disk Groups tab.
l. Automatic Storage Management Login
   Enter: Username: SYS, Password: oracle_4U. Click Login.

m. Automatic Storage Management: +ASM1_host01.example.com Disk Groups
   Click the DATA disk group link.
n. Disk Group: DATA General
   Click the Files tab.
o. Disk Group: DATA Files
   Expand the ORCL folder. Expand the DATAFILE folder. Notice that there are three system-named files associated with the XYZ tablespace. All have the HOT and HOT MIRROR attributes set.
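The data file specification noted in step h resolves to a statement of this general form (a sketch; the SIZE value is illustrative, as EM supplies its own defaults):

```sql
ALTER TABLESPACE xyz
      ADD DATAFILE '+DATA(HOT_FILES)' SIZE 100M;
```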

9) Create a table in the XYZ tablespace.

a) In a terminal window, as the oracle OS user, use the following command to connect to the ORCL database (the password for SYS is oracle_4U): sqlplus sys@orcl AS SYSDBA

[oracle]$ . oraenv
ORACLE_SID = [orcl] ? orcl
The Oracle base for ORACLE_HOME=/u01/app/oracle/acfsmount/11.2.0/sharedhome is /u01/app/oracle
[oracle]$ sqlplus sys@orcl as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Tue Sep 1 12:23:43 2009
Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Enter password: oracle_4U  << password is not displayed

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL>

b) Create a large table in the XYZ tablespace called CUST_COPY by executing the cr_cust_copy.sql script. This script makes a copy of the SH.CUSTOMERS table into the XYZ tablespace.

SQL> @/home/oracle/labs/less_09/cr_cust_copy.sql
SQL>

SQL> CREATE TABLE cust_copy TABLESPACE XYZ AS
  2  SELECT * FROM SH.CUSTOMERS;

Table created.

SQL>

10) Query the new table. Select all the rows to force some read activity with the command SELECT * FROM CUST_COPY. Use the SET PAGESIZE 300 command to speed up the display processing.

SQL> SET PAGESIZE 300
SQL> SELECT * FROM CUST_COPY;
… /* rows removed */
100055 Andrew Clark F 1978 Married 77 Cumberland Avenue
74673 Duncan 51402 SC 52722 52790 260-755-4130
J: 190,000 - 249,999 11000 [email protected]
Customer total 52772 01-JAN-98 A

55500 rows selected.

SQL>

11) View the I/O statistics by region. Use EM to view the statistics, and then repeat using ASMCMD.

a) View the I/O statistics by region with Enterprise Manager.

a. Disk Group: DATA Files
   Click the Performance tab.
b. Disk Group: DATA: Performance
   In the Disk Group I/O Cumulative Statistics section, observe the values for Hot Reads and Hot Writes.

b) In a terminal window, on your first node as the grid user, set the oracle environment for the +ASM1 instance. View the I/O statistics by region using ASMCMD.

[grid@host01]$ . oraenv
ORACLE_SID = [grid] ? +ASM1

The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/grid
[grid]$ asmcmd
ASMCMD> iostat --io --region -G DATA
Group_Name  Dsk_Name   Reads   Writes  Cold_Reads  Cold_Writes  Hot_Reads  Hot_Writes
DATA        ASMDISK01  11071   83680   5345        70653        20         263
DATA        ASMDISK02  13056   73855   5411        58344        24         226
DATA        ASMDISK03  46914   48823   36155       29176        31         367
DATA        ASMDISK04  204358  71893   196787      56579        32         362
ASMCMD> exit
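The region counters that asmcmd iostat reports are also exposed by the ASM instance through V$ASM_DISK in 11.2; a sketch of an equivalent query (column names as documented for 11.2, so verify against your release):

```sql
SELECT d.path, d.hot_reads, d.hot_writes, d.cold_reads, d.cold_writes
  FROM v$asm_disk d
  JOIN v$asm_diskgroup g ON g.group_number = d.group_number
 WHERE g.name = 'DATA';
```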

12) Drop the tablespaces and templates created in this practice.

a) As the oracle OS user, connect to the orcl database, and then use the drop_XYZ.sql script to drop the XYZ tablespace.

[oracle]$ sqlplus sys@orcl as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Tue Sep 1 13:29:05 2009
Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Enter password: oracle_4U  << password is not displayed

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> @/home/oracle/labs/less_09/drop_XYZ.sql
SQL>
SQL> DROP TABLESPACE XYZ INCLUDING CONTENTS AND DATAFILES;

Tablespace dropped.

SQL>
SQL> EXIT;
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production

With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
[oracle]$

b) As the grid OS user, use asmcmd to remove the HOT_FILES template.

[grid]$ asmcmd
ASMCMD> rmtmpl -G DATA HOT_FILES
ASMCMD> exit
[grid@host01 ~]$

Practices for Lesson 10

In this practice, you will create, register, and mount an ACFS file system. In addition, you will manage ACFS snapshots.

Practice 10-1: Managing ACFS

In this practice, you will create, register, and mount an ACFS file system for general use. You will see the modules that are loaded for ACFS. You will also create, use, and manage ACFS snapshots.

1) Open a terminal window on your first node and become the root user. Use the lsmod command to list the currently loaded modules, and the grep command to display only the modules that have the ora string in them. Note the first three modules in the list below; these modules are required to enable ADVM and ACFS. The oracleasm module is loaded to enable ASMLib management of the ASM disks. Check all three nodes. If the ACFS drivers are not loaded, load them with the Grid_home/bin/acfsload script. The drivers are loaded and registered automatically when an ACFS volume is created with ASMCA. In this case, the configuration must be manually extended to ST_NODE3.

[oracle]$ su - root
Password: oracle  << password is not displayed

/* on ST_NODE1 */
[root@host01]# lsmod | grep ora
oracleacfs           787588  2
oracleadvm           177792  6
oracleoks            226784  2 oracleacfs,oracleadvm
oracleasm             46356  1
[root@host01]# . /home/oracle/labs/st_env.sh

/* on second node */
[root@host01]# ssh $ST_NODE2 lsmod | grep ora
root@host02's password: oracle  << password is not displayed
oracleacfs           787588  3
oracleadvm           177792  7
oracleoks            226784  2 oracleacfs,oracleadvm
oracleasm             46356  1

/* on ST_NODE3 */
[root@host01 ~]# ssh $ST_NODE3 lsmod | grep ora
root@host03's password: oracle  << password is not displayed
oracleacfs           787588  0
oracleadvm           177792  0
oracleoks            226784  2 oracleacfs,oracleadvm
oracleasm             46356  1
[root@host01 ~]# exit

2) Scenario: Your database application creates a number of image files stored as BFILEs and external tables. These must be stored on a shared resource, and an ACFS file system meets that requirement. Create an ASM volume and the ACFS file system. The ACFS volume should be 3 GB on the ACFS disk group, and the mount point should be /u01/app/oracle/acfsmount/images. These operations can be done with ASMCA as in Practice 2, ASMCMD, Enterprise Manager, or SQL*Plus. The ASMCMD solution is shown here.

a) Open another terminal window as the grid OS user on your first node.

[grid@host01]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/grid

b) Start ASMCMD as the grid OS user.

[grid@host01]$ asmcmd
ASMCMD>

c) Create a volume using the volcreate command.

ASMCMD> volcreate -G ACFS -s 3G IMAGES
ASMCMD>
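The same volume can be created from SQL*Plus on the ASM instance; a sketch of the equivalent statement:

```sql
ALTER DISKGROUP ACFS ADD VOLUME images SIZE 3G;
```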

d) Find the volume device name. Use the volinfo command.

ASMCMD> volinfo -G ACFS -a
Diskgroup Name: ACFS

         Volume Name: DBHOME_1
         Volume Device: /dev/asm/dbhome_1-407
         State: ENABLED
         Size (MB): 6144
         Resize Unit (MB): 256
         Redundancy: UNPROT
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: ACFS
         Mountpath: /u01/app/oracle/acfsmount/11.2.0/sharedhome

         Volume Name: IMAGES
         Volume Device: /dev/asm/images-407
         State: ENABLED
         Size (MB): 3072
         Resize Unit (MB): 256
         Redundancy: UNPROT
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage:
         Mountpath:

ASMCMD>

e) As the root user, in the terminal window that you used in step 1, create an ACFS file system in the IMAGES volume. When using the mkfs -t acfs command, the volume device must be supplied; use the volume device name that you found in the previous step.

[root@host01 ~]# mkfs -t acfs /dev/asm/images-407
mkfs.acfs: version         = 11.2.0.1.0.0
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume          = /dev/asm/images-407
mkfs.acfs: volume size     = 3221225472
mkfs.acfs: Format complete.
[root@host01 ~]#

f) Mount the volume—using the terminal window that you used in step 1—as the root user, create the mount point directory, and mount the volume. The volume device is used again in the mount command. Enter the mount command all on one line. Repeat these commands on the second and third nodes of the cluster as the root user.

/* On first node */
[root@host01 ~]# mkdir -p /u01/app/oracle/acfsmount/images
[root@host01 ~]# mount -t acfs /dev/asm/images-407 /u01/app/oracle/acfsmount/images

/* On second node */
[root@host01 ~]# ssh $ST_NODE2
root@host02's password: oracle  << password not displayed
[root@host02 ~]# mkdir /u01/app/oracle/acfsmount/images
[root@host02 ~]# mount -t acfs /dev/asm/images-407 /u01/app/oracle/acfsmount/images
[root@host02 ~]# exit
logout
Connection to host02 closed.
[root@host01 ~]#

/* On third node */
[root@host01]# ssh $ST_NODE3
root@host03's password: oracle  << password not displayed
[root@host03]# mkdir -p /u01/app/oracle/acfsmount/images
[root@host03]# mount -t acfs /dev/asm/images-407 /u01/app/oracle/acfsmount/images
[root@host03]# exit

g) Verify that the volume is mounted.

/* On first node */
[root@host01 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      9.7G  2.6G  6.6G  28% /
/dev/xvda1             99M   20M   74M  22% /boot
tmpfs                 1.1G  482M  543M  48% /dev/shm
/dev/mapper/VolGroup01-LogVol00
                       30G  4.7G   24G  17% /u01
192.0.2.10:/mnt/shareddisk01/software/software
                       60G   34G   23G  60% /mnt/software
/dev/asm/dbhome_1-407 6.0G  4.8G  1.3G  79% /u01/app/oracle/acfsmount/11.2.0/sharedhome
/dev/asm/images-407   3.0G   43M  3.0G   2% /u01/app/oracle/acfsmount/images

/* On second node */
[root@host01 ~]# ssh $ST_NODE2 df -h /u01/app/oracle/acfsmount/images
root@host02's password: oracle  << password not displayed
Filesystem            Size  Used Avail Use% Mounted on
/dev/asm/images-407   3.0G   73M  3.0G   3% /u01/app/oracle/acfsmount/images
[root@host01 ~]#

/* On third node */
[root@host01 ~]# ssh $ST_NODE3 df -h /u01/app/oracle/acfsmount/images
root@host03's password: oracle  << password not displayed
Filesystem            Size  Used Avail Use% Mounted on
/dev/asm/images-407   3.0G  109M  2.9G   4% /u01/app/oracle/acfsmount/images
[root@host01 ~]#

h) Register the volume and mount point. As the root user, run the following command (use your actual device name): acfsutil registry -a /dev/asm/images-nnn /u01/app/oracle/acfsmount/images

[root@host01]# acfsutil registry -a /dev/asm/images-407 /u01/app/oracle/acfsmount/images

acfsutil registry: mount point /u01/app/oracle/acfsmount/images successfully added to Oracle Registry

i) As the root user, view the registry status of the volume with the acfsutil registry -l command.

[root@host01]# acfsutil registry -l
Device : /dev/asm/images-407 : Mount Point : /u01/app/oracle/acfsmount/images : Options : none : Nodes : all : Disk Group : ACFS : Volume : IMAGES
[root@host01]#

3) An ACFS file system can be resized, and it will automatically resize the volume, if there is sufficient space in the disk group. The images file system is near capacity. Increase the file system by 512 MB to 3.5 GB. As the root user, use the command acfsutil size +512M /u01/app/oracle/acfsmount/images.

[root@host01]# acfsutil size +512M /u01/app/oracle/acfsmount/images
acfsutil size: new file system size: 3758096384 (3584MB)
[root@host01 ~]#
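The two figures in the output agree: the new size in bytes converts back to the reported megabyte value, which is the original 3072 MB volume plus the 512 MB just added:

```shell
# 3758096384 bytes, expressed in MB (1 MB = 1024 * 1024 bytes)
echo $(( 3758096384 / 1024 / 1024 ))    # prints 3584, i.e. 3072 + 512
```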

4) As the grid user, check the size of the volume after the resize operation with asmcmd volinfo.

[grid@host03 ~]$ asmcmd volinfo -G ACFS IMAGES
Diskgroup Name: ACFS

         Volume Name: IMAGES
         Volume Device: /dev/asm/images-407
         State: ENABLED
         Size (MB): 4096
         Resize Unit (MB): 256
         Redundancy: UNPROT
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: ACFS
         Mountpath: /u01/app/oracle/acfsmount/images
[grid@host03 ~]$

5) As root, source the environment script /home/oracle/labs/st_env.sh. The IMAGES file system holds the image files for the orcl database owned by the oracle user. As the root user, change the permissions on the mount point so that the oracle user will own the file system on all three nodes.

/* on ST_NODE1 */
[root@host01]# . /home/oracle/labs/st_env.sh
[root@host01]# chown oracle:dba /u01/app/oracle/acfsmount/images

/* On second node */
[root@host01 ~]# ssh $ST_NODE2 chown oracle:dba /u01/app/oracle/acfsmount/images
root@host02's password: oracle  << password is not displayed

/* on third node */
[root@host01]# ssh $ST_NODE3 chown oracle:dba /u01/app/oracle/acfsmount/images
root@host03's password: oracle  << password is not displayed
[root@host01 ~]#

6) As the oracle user, transfer a set of images to /u01/app/oracle/acfsmount/images. Unzip the images in /home/oracle/labs/less_10/images.zip to the IMAGES file system.

[oracle@host01]$ cd /home/oracle/labs/less_10
[oracle@host01]$ unzip images.zip -d /u01/app/oracle/acfsmount/images
Archive:  images.zip
   creating: /u01/app/oracle/acfsmount/images/gridInstall/
  inflating: /u01/app/oracle/acfsmount/images/gridInstall/asm.gif
  inflating: /u01/app/oracle/acfsmount/images/gridInstall/bullet2.gif
  … Lines removed …
  inflating: /u01/app/oracle/acfsmount/images/gridInstall/view_image.gif
 extracting: /u01/app/oracle/acfsmount/images/gridInstall/white_spacer.gif
[oracle@host01 less_10]$

7) Verify that the files have been extracted.

[oracle@host01]$ ls -R /u01/app/oracle/acfsmount/images
/u01/app/oracle/acfsmount/images:
gridInstall  lost+found

/u01/app/oracle/acfsmount/images/gridInstall:
asm.gif            t20108.gif   t30104.gif   t30119d.gif
bullet2.gif        t20109a.gif  t30105.gif   t30119.gif
bullet.gif         t20109b.gif  t30106.gif   t30120a.gif
divider.gif        t20110.gif   t30107.gif   t30120b.gif
gradient.gif       t20111a.gif  t30108a.gif  t30121d.gif
MoveAllButton.gif  t20111b.gif  t30108.gif   t30123a.gif
MoveButton.gif     t20111c.gif  t30109.gif   t30123b.gif
rpm-oracleasm.gif  t20111.gif   t30110.gif   t30123c.gif

show_me.gif        t20112.gif        t30111.gif   t30201.gif
t10101.gif         t20113.gif        t30112a.gif  t30202.gif
t10102.gif         t20113h.gif       t30112.gif   t30203.gif
t10103.gif         t20114c.gif       t30113a.gif  t30204a.gif
t10201.gif         t20114login.gif   t30113b.gif  t30204.gif
t10202.gif         t20114server.gif  t30114a.gif  t30205.gif
t10203.gif         t20117add.gif     t30114b.gif  t30206.gif
t10204.gif         t20117crtbs.gif   t30114.gif   t30207.gif
t10205.gif         t20117emctl.gif   t30115a.gif  t30208.gif
t20101.gif         t20117tbs.gif     t30115.gif   t40101.gif
t20102.gif         t20119asm.gif     t30116a.gif  t40102.gif
t20103.gif         t2017emctl.gif    t30116b.gif  t40104.gif
t20104.gif         t30101a.gif       t30116c.gif  t40105a.gif
t20105.gif         t30101b.gif       t30116d.gif  t40105b.gif
t20106.gif         t30101c.gif       t30118b.gif  Thumbs.db
t20107a.gif        t30102.gif        t30119b.gif  view_image.gif
t20107.gif         t30103.gif        t30119c.gif  white_spacer.gif
ls: /u01/app/oracle/acfsmount/images/lost+found: Permission denied
[oracle@host01]$

8) Create a snapshot of the IMAGES file system. As the root user, use the acfsutil utility to execute the command:

/sbin/acfsutil snap create snap_001 \
  /u01/app/oracle/acfsmount/images

[root@host01]# /sbin/acfsutil snap create snap_001 /u01/app/oracle/acfsmount/images
acfsutil snap create: Snapshot operation is complete.

9) Find the .ACFS directory and explore the entries. How much space does the gridInstall directory tree use? How much space does the .ACFS/snaps/snap_001/gridInstall directory tree use?

[root@host01]# cd /u01/app/oracle/acfsmount/images
[root@host01]# ls -la
total 92
drwxrwx--- 5 oracle dba       4096 Sep 7 23:31 .
drwxr-xr-x 4 root   root      4096 Sep 7 11:53 ..
drwxr-xr-x 5 root   root      4096 Sep 7 15:04 .ACFS
drwxr-xr-x 2 oracle oinstall 12288 May 6 16:30 gridInstall
drwx------ 2 root   root     65536 Sep 7 15:04 lost+found
[root@host01]# du -h gridInstall
2.0M    gridInstall
[root@host01]# ls .ACFS
repl  snaps
[root@host01 images]# ls .ACFS/snaps
snap_001

[root@host01]# ls .ACFS/snaps/snap_001
gridInstall  lost+found
[root@host01]# du -h .ACFS/snaps/snap_001/gridInstall
2.0M    .ACFS/snaps/snap_001/gridInstall

10) Delete the asm.gif file from the IMAGES file system.

[root@host01]# rm gridInstall/asm.gif
rm: remove regular file `gridInstall/asm.gif'? y

11) Create another snapshot of the IMAGES file system.

[root@host01]# /sbin/acfsutil snap create snap_002 /u01/app/oracle/acfsmount/images
acfsutil snap create: Snapshot operation is complete.

12) How much space is being used by the snapshots and the files that are stored in the IMAGES file system? Use the acfsutil info fs command to find this information.

[root@host01]# /sbin/acfsutil info fs /u01/app/oracle/acfsmount/images
/u01/app/oracle/acfsmount/images
    ACFS Version: 11.2.0.1.0.0
    flags:        MountPoint,Available
    mount time:   Mon Sep 7 12:02:42 2009
    volumes:      1
    total size:   4294967296
    total free:   4112216064
    primary volume: /dev/asm/images-407
        label:
        flags:                 Primary,Available,ADVM
        on-disk version:       39.0
        allocation unit:       4096
        major, minor:          252, 208386
        size:                  4294967296
        free:                  4112216064
        ADVM diskgroup         ACFS
        ADVM resize increment: 268435456
        ADVM redundancy:       unprotected
        ADVM stripe columns:   4
        ADVM stripe width:     131072
    number of snapshots:  2
    snapshot space usage: 2265088
[root@host01 images]#
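As a rough cross-check of the snapshot space usage figure, 2265088 bytes converts to a little over 2 MB, which is in line with the 2.0M that du reported earlier for the gridInstall tree preserved in snap_001:

```shell
# snapshot space usage from acfsutil info fs, converted to KB
echo $(( 2265088 / 1024 ))    # prints 2212, i.e. about 2.2 MB
```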

13) Restore the asm.gif file to the file system from the snapshot. This can be done with OS commands or from Enterprise Manager. This solution uses the OS commands.


a) The snapshot is a sparse file representation of the file system, so you can browse the snapshot as if it were a full file system. All the OS file commands are functional. Find the asm.gif file in the snapshot. The find command is shown in this solution. Perform this operation as the root user.

[root]$ cd /u01/app/oracle/acfsmount/images
[root]$ find . -name asm.gif
find: ./.ACFS/.fileid: Permission denied
find: ./.ACFS/repl: Permission denied
find: ./.ACFS/snaps/snap_001/.ACFS/.fileid: Permission denied
find: ./.ACFS/snaps/snap_001/.ACFS/repl: Permission denied
find: ./.ACFS/snaps/snap_001/.ACFS/snaps: Permission denied
find: ./.ACFS/snaps/snap_001/lost+found: Permission denied
./.ACFS/snaps/snap_001/gridInstall/asm.gif
find: ./.ACFS/snaps/snap_002/.ACFS/.fileid: Permission denied
find: ./.ACFS/snaps/snap_002/.ACFS/repl: Permission denied
find: ./.ACFS/snaps/snap_002/.ACFS/snaps: Permission denied
find: ./.ACFS/snaps/snap_002/lost+found: Permission denied
find: ./lost+found: Permission denied
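The same search pattern can be rehearsed on throwaway paths, with the harmless "Permission denied" noise discarded by redirecting stderr. All paths below are scratch stand-ins under a temporary directory, not the real ACFS mount:

```shell
# Self-contained sketch: locate a file inside a mocked-up snapshot tree;
# 2>/dev/null hides permission errors like those seen in the real output.
tmp=$(mktemp -d)
mkdir -p "$tmp/snaps/snap_001/gridInstall"
touch "$tmp/snaps/snap_001/gridInstall/asm.gif"
cd "$tmp"
found=$(find . -name asm.gif 2>/dev/null)
echo "$found"
```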

b) Restore the asm.gif file by copying from the snapshot to the original location.

[root]$ cp ./.ACFS/snaps/snap_001/gridInstall/asm.gif ./gridInstall/asm.gif
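Not part of the original lab, but a quick way to confirm that a restored copy matches the snapshot copy is to compare checksums. This is a self-contained sketch on scratch files (hypothetical paths, not the real ACFS mount):

```shell
# Compare a "snapshot" copy and a "restored" copy; matching cksum output
# (checksum and byte count) means the restore is byte-identical.
tmp=$(mktemp -d)
mkdir -p "$tmp/snap" "$tmp/live"
printf 'GIF89a demo bytes' > "$tmp/snap/asm.gif"
cp "$tmp/snap/asm.gif" "$tmp/live/asm.gif"      # the restore step
a=$(cksum < "$tmp/snap/asm.gif")
b=$(cksum < "$tmp/live/asm.gif")
[ "$a" = "$b" ] && echo "restore verified"
```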

14) Dismount the Images file system from all three nodes. This command must be executed by the root user. If the directory is busy, use lsof to find the user that is holding the directory open and stop that session.

[root@host01]# umount /u01/app/oracle/acfsmount/images
umount: /u01/app/oracle/acfsmount/images: device is busy
umount: /u01/app/oracle/acfsmount/images: device is busy
[root@host01]# lsof +d /u01/app/oracle/acfsmount/images
COMMAND   PID USER FD  TYPE DEVICE   SIZE NODE NAME
lsof     5770 root cwd DIR  252,5634 4096 2 /u01/app/oracle/acfsmount/images
lsof     5771 root cwd DIR  252,5634 4096 2 /u01/app/oracle/acfsmount/images
bash    23971 root cwd DIR  252,5634 4096 2 /u01/app/oracle/acfsmount/images
/* change directories in the root session, then continue */
[root@host01]# cd
[root@host01]# umount /u01/app/oracle/acfsmount/images
[root@host01 ~]# ssh $ST_NODE2 umount /u01/app/oracle/acfsmount/images
root@host02's password: oracle    << password is not displayed


[root@host01 ~]# ssh $ST_NODE3 umount /u01/app/oracle/acfsmount/images
root@host03's password: oracle    << password is not displayed
[root@host01 ~]#

15) Remove the IMAGES ACFS file system and volume using Enterprise Manager.

Step / Screen or Page / Choices or Values:

a. Enter the URL in the browser: https://host01:1158/em
b. Login: Name: SYS; Password: oracle_4U; Connect as SYSDBA. Click Login.
c. Cluster Database: orcl.example.com Home: In the Instances section, click the link for +ASM1_host01.example.com.
d. Automatic Storage Management: +ASM1_host01.example.com Home: Click the ASM Cluster File System tab.
e. Automatic Storage Management Login: Username: SYS; Password: oracle_4U. Click Login.
f. Automatic Storage Management: +ASM1_host01.example.com ASM Cluster File System: Select /u01/app/oracle/acfsmount/images. Click Deregister.
g. ASM Cluster File System Host Credentials: host01.example.com: Username: grid; Password: oracle. Click Continue.
h. Deregister ASM Cluster File System: /dev/asm/images-407: Click OK.
i. Automatic Storage Management: +ASM1_host01.example.com ASM Cluster File System: Click the Disk Groups tab.
j. Automatic Storage Management: +ASM1_host01.example.com Disk Groups: Click the link for the ACFS disk group.
k. Disk Group: ACFS General: Click the Volumes tab.
l. Disk Group: ACFS Volumes: Select the IMAGES volume. Click Delete.
m. Confirmation: Click Yes.
n. Disk Group: ACFS General: (no action)
o. Disk Group: ACFS: Click Logout. Close the browser.


Practice 10-2: Uninstalling RAC Database

In this practice, you will uninstall the RAC database, leaving the clusterware installed on three nodes.

1) Open a terminal window as the oracle OS user on your first node and set the environment with the oraenv and st_env.sh commands. Enter the name of the database when prompted.

[oracle@host01]$ . oraenv
ORACLE_SID = [oracle] ? orcl
The Oracle base for ORACLE_HOME=/u01/app/oracle/acfsmount/11.2.0/sharedhome is /u01/app/oracle
[oracle]$ . /home/oracle/labs/st_env.sh

2) Stop Enterprise Manager on your first and second nodes. It was not started on your third node.

[oracle@host01]$ export ORACLE_UNQNAME=orcl
[oracle@host01]$ emctl stop dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation. All rights reserved.
https://host01.example.com:1158/em/console/aboutApplication
Stopping Oracle Enterprise Manager 11g Database Control ...
 ... all attemps to stop oc4j failed... now trying to kill 9 ...
Stopped.
[oracle@host01]$ ssh $ST_NODE2
Last login: Thu Sep 24 20:56:37 2009 from host01.example.com
[oracle@host02]$ . oraenv
ORACLE_SID = [oracle] ? orcl
The Oracle base for ORACLE_HOME=/u01/app/oracle/acfsmount/11.2.0/sharedhome is /u01/app/oracle
[oracle@host02]$ export ORACLE_UNQNAME=orcl
[oracle@host02]$ emctl stop dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation. All rights reserved.
https://host02.example.com:1158/em/console/aboutApplication
Stopping Oracle Enterprise Manager 11g Database Control ...
Cannot determine Oracle Enterprise Manager 11g Database Control process.
/u01/app/oracle/acfsmount/11.2.0/sharedhome/host02_orcl/emctl.pid does not exist.
[oracle@host02 ~]$ exit
logout


Connection to host02 closed.
[oracle@host01 ~]$

3) Change directories to the oracle user's home directory: /home/oracle.

[oracle@host01]$ cd /home/oracle

4) Start the deinstall tool. The tool automatically discovers the database parameters. When prompted whether to change the discovered parameters, answer n.

[oracle@host01]$ $ORACLE_HOME/deinstall/deinstall -silent
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################## CHECK OPERATION START ########################
Install check configuration START

Checking for existence of the Oracle home location /u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1
Oracle Home type selected for de-install is: RACDB
Oracle Base selected for de-install is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0/grid
The following nodes are part of this cluster: host01,host02

Install check configuration END

Network Configuration check config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check5746757283308326616.log
Network Configuration check config END

Database Check Configuration START
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check7510048152035885516.log

###### For Database 'orcl' ######


RAC Database
The nodes on which this database has instances: [host01, host02]
The instance names: [orcl1, orcl2]
The local instance name on node: orcl1
The diagnostic destination location of the database: /u01/app/oracle/diag/rdbms/orcl
Storage type used by the Database: ASM

Database Check Configuration END

Enterprise Manager Configuration Assistant START
EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_check.log
Checking configuration for database orcl
Enterprise Manager Configuration Assistant END
Oracle Configuration Manager check START
OCM check log file location : /u01/app/oraInventory/logs//ocm_check7494.log
Oracle Configuration Manager check END

######################### CHECK OPERATION END #########################

####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid
The cluster node(s) on which the Oracle home exists are: (Please input nodes seperated by ",", eg: node1,node2,...)host01,host02
Oracle Home selected for de-install is: /u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
The following databases were selected for de-configuration : orcl
Database unique name : orcl
Storage used : ASM
Will update the Enterprise Manager configuration for the following database(s): orcl
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
host01 : Oracle Home exists with CCR directory, but CCR is not configured
host02 : Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished


A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2009-09-28_08-24-07-AM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2009-09-28_08-24-07-AM.err'

######################## CLEAN OPERATION START ########################
Enterprise Manager Configuration Assistant START
EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_clean.log
Updating Enterprise Manager Database Control configuration for database orcl
Updating Enterprise Manager ASM targets (if any)
Updating Enterprise Manager listener targets (if any)
Enterprise Manager Configuration Assistant END
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean8536757532603850049.log
Database Clean Configuration START orcl
This operation may take few minutes.
Database Clean Configuration END orcl
Network Configuration clean config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean4830516361423529044.log
De-configuring backup files...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END
Oracle Configuration Manager clean START
OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean7494.log
Oracle Configuration Manager clean END
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1' from the central inventory on the local node : Done


Delete directory '/u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1' on the local node : Done
<<< following Failed messages are expected >>>
Failed to delete the directory '/u01/app/oracle/acfsmount/images'. The directory is in use.
Failed to delete the directory '/u01/app/oracle/acfsmount/11.2.0/sharedhome/.ACFS/.fileid'. The directory is in use.
Failed to delete the directory '/u01/app/oracle/acfsmount/11.2.0/sharedhome/.ACFS/repl'. The directory is in use.
Failed to delete the directory '/u01/app/oracle/acfsmount/11.2.0/sharedhome/.ACFS/snaps'. The directory is in use.
Failed to delete the directory '/u01/app/oracle/acfsmount/11.2.0/sharedhome/.ACFS'. The directory is not empty.
Failed to delete the directory '/u01/app/oracle/acfsmount/11.2.0/sharedhome/lost+found'. The directory is in use.
Failed to delete the directory '/u01/app/oracle/acfsmount/11.2.0/sharedhome'. The directory is not empty.
Failed to delete the directory '/u01/app/oracle/acfsmount/11.2.0'. The directory is not empty.
Failed to delete the directory '/u01/app/oracle/acfsmount'. The directory is not empty.
Failed to delete the directory '/u01/app/oracle/acfsmount/images'. The directory is in use.
Failed to delete the directory '/u01/app/oracle/acfsmount/11.2.0/sharedhome/.ACFS/.fileid'. The directory is in use.
Failed to delete the directory '/u01/app/oracle/acfsmount/11.2.0/sharedhome/.ACFS/repl'. The directory is in use.
Failed to delete the directory '/u01/app/oracle/acfsmount/11.2.0/sharedhome/.ACFS/snaps'. The directory is in use.
Failed to delete the directory '/u01/app/oracle/acfsmount/11.2.0/sharedhome/.ACFS'. The directory is not empty.
Failed to delete the directory '/u01/app/oracle/acfsmount/11.2.0/sharedhome/lost+found'. The directory is in use.
Failed to delete the directory '/u01/app/oracle/acfsmount/11.2.0/sharedhome'. The directory is not empty.


Failed to delete the directory '/u01/app/oracle/acfsmount/11.2.0'. The directory is not empty.
Failed to delete the directory '/u01/app/oracle/acfsmount'. The directory is not empty.
Failed to delete the directory '/u01/app/oracle'. The directory is not empty.
Delete directory '/u01/app/oracle' on the local node : Failed <<<<
Detach Oracle home '/u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1' from the central inventory on the remote nodes 'host02' : Done
Delete directory '/u01/app/oracle' on the remote nodes 'host02' : Failed <<<<
The directory '/u01/app/oracle' could not be deleted on the nodes 'host02'.
Oracle Universal Installer cleanup completed with errors.
Oracle Universal Installer clean END
Oracle install clean START
Clean install operation removing temporary directory '/tmp/install' on node 'host01'
Clean install operation removing temporary directory '/tmp/install' on node 'host02'
Oracle install clean END

######################### CLEAN OPERATION END #########################

####################### CLEAN OPERATION SUMMARY #######################
Updated Enterprise Manager configuration for database orcl
Successfully de-configured the following database instances : orcl
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Successfully detached Oracle home '/u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1' from the central inventory on the local node.


Successfully deleted directory '/u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1' on the local node.
Failed to delete directory '/u01/app/oracle' on the local node.
Successfully detached Oracle home '/u01/app/oracle/acfsmount/11.2.0/sharedhome/dbhome_1' from the central inventory on the remote nodes 'host02'.
Failed to delete directory '/u01/app/oracle' on the remote nodes 'host02'.
Oracle Universal Installer cleanup completed with errors.
Oracle install successfully cleaned up the temporary directories.

##############################################################
############# ORACLE DEINSTALL & DECONFIG TOOL END ###########
[oracle@host01 ~]$

5) The deinstall tool does not completely remove the ORACLE_HOME directory when ORACLE_HOME is placed in the ACFS file system. Using ASMCA, remove the ACFS file system.

a) Open a window as the grid user on your first node and set the Grid Infrastructure home with oraenv.

$ su - grid
Password: oracle    <<< password not displayed
[grid]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/grid
[grid]$

b) Start ASMCA with the asmca command as the grid user.

[grid]$ asmca


c) Dismount the ACFS file system.

Step / Screen or Page / Choices or Values:

a. ASM Configuration Assistant: Click the ASM Cluster File Systems tab.
b. ASM Cluster File System: Right-click the row with Volume DBHOME_1.
c. In the pop-up menu: Select Show Dismount Command.
d. Dismount ACFS Command: Follow the instructions: start a window as the root user, then copy the command from the ASMCA window to the terminal window and execute it.
e. Dismount ACFS Command: Click Close.

d) Remove the ORACLE_HOME directory

Step / Screen or Page / Choices or Values:

a. ASM Cluster File System: Right-click the row with Volume DBHOME_1.
b. In the pop-up menu: Select Delete.
c. Confirm Delete: Click Yes.
d. ASM Cluster File System: Deletion: Follow the instructions: start a window as the root user, copy the command from the ASMCA window to the terminal window and execute it, then exit the terminal window.
e. ASM Cluster File System: Deletion: Click Close.
f. ASM Cluster File System: Click Exit.
g. ASM Configuration Assistant: Click Yes.

6) Close all terminal windows.


Copyright © 2010, Oracle. All rights reserved.

Cloning Oracle Clusterware


Objectives

After completing this lesson, you should be able to:

• Describe the cloning procedure

• Prepare the software for cloning

• Describe the cloning script variables

• Clone Oracle Clusterware to create a new cluster

• Clone Oracle Clusterware to extend an existing cluster

• Examine cloning log files



What Is Cloning?

Cloning is the process of copying an existing Oracle Clusterware installation to a different location. It:

• Requires a successful installation as a baseline

• Can be used to create new clusters

• Cannot be used to remove nodes from the cluster

• Does not perform the operating system prerequisites to an installation

• Is useful to build many clusters in an organization

What Is Cloning?

Cloning is a process that allows the copying of an existing Oracle Clusterware installation to a different location and then updating the copied installation to work in the new environment. The cloned copy can be used to create a new cluster from a successfully installed cluster. To add or delete Oracle Clusterware from nodes in the cluster, use the addNode.sh and rootcrs.pl scripts. The cloning procedure cannot be used to remove a node from an existing cluster.

The cloning procedure is responsible for the work that would have been done by the Oracle Universal Installer (OUI) utility. It does not automate the prerequisite work that must be done on each node before installing the Oracle software.

This technique is very useful if a large number of clusters need to be deployed in an organization. If only one or two clusters are being deployed, you should probably use the traditional installation program to perform the installations.

Note: The Oracle Enterprise Manager administrative tool with the Provisioning Pack feature installed has automated wizards to assist with cloning exercises.



Benefits of Cloning Oracle Clusterware

The following are some of the benefits of cloning Oracle Clusterware:

• Can be completed in silent mode from a Secure Shell (SSH) terminal session

• Contains all patches applied to the original installation

• Can be done very quickly

• Is a guaranteed method of repeating the same installation on multiple clusters

Benefits of Cloning Oracle Clusterware

Using the cloning procedure presented in this lesson has several benefits compared to the traditional Oracle Universal Installer (OUI) installation method. The OUI utility is a graphical program and must be executed from a graphical session. Cloning can be completed in silent mode from a command-line Secure Shell (SSH) terminal session without the need to load a graphical windows system. If the OUI program were to be used to install the software from the original installation media, all patches that have been applied since the first installation would have to be reapplied. The clone technique presented in this lesson includes all successfully applied patches, and can be performed very quickly to a large number of nodes. When the OUI utility performs the copying of files to remote servers, the job is executed serially to one node at a time. With cloning, simultaneous transfers to multiple nodes can be achieved. Finally, the cloning method is a guaranteed way of repeating the same installation on multiple clusters to help avoid human error.

The cloned installation acts the same as the source installation. It can be patched in the future and removed from a cluster if needed using ordinary tools and utilities. Cloning is an excellent way to instantiate test clusters or a development cluster from a successful base installation.



Creating a Cluster by Cloning Oracle Clusterware

[Slide diagram: Cluster A, Node 1 (Clusterware installed) is cloned to Cluster B, Nodes 1 and 2 (Clusterware cloned).]

Creating a Cluster by Cloning Oracle Clusterware

This example shows the result when you successfully clone an installed Oracle Clusterware environment to create a cluster. The environment on Node 1 in Cluster A is used as the source, and Cluster B, Nodes 1 and 2, are the destination. The clusterware home is copied from Cluster A, Node 1, to Cluster B, Nodes 1 and 2. When completed, there are two separate clusters. The OCR and voting disks are not shared between the two clusters after you successfully create a cluster from a clone.



Preparing the Oracle Clusterware Home for Cloning

The following procedure is used to prepare the Oracle Clusterware home for cloning:

1. Install Oracle Clusterware on the first machine.
   A. Use the Oracle Universal Installer (OUI) GUI interactively.

B. Install patches that are required (for example, 11.1.0.n).

C. Apply one-off patches, if necessary.

2. Shut down Oracle Clusterware.

3. Make a copy of the Oracle Clusterware home.

# crsctl stop crs -wait

# mkdir /stagecrs
# cp -prf /u01/app/11.2.0/grid /stagecrs

Preparing the Oracle Clusterware Home for Cloning

The cloning method requires that an existing, successful installation of Oracle Clusterware be already performed in your organization. Verify that all patch sets and one-off patches have been applied before starting the clone procedure to minimize the amount of work that will have to be performed for the cloning exercise.

To begin the cloning process, start by performing a shutdown of Oracle Clusterware on one of the nodes in the existing cluster with the crsctl stop crs -wait command. The other nodes in the existing cluster can remain active. After the prompt is returned from the shutdown command, make a copy of the existing Oracle Clusterware installation into a temporary staging area of your choosing. The disk space requirements in the temporary staging area will be equal to the current size of the existing Oracle Clusterware installation.

The copy of the Oracle Clusterware files will not include files that are outside the main installation directory such as /etc/oraInst.loc and the /etc/oracle directory. These files will be created later by running various root scripts.



Preparing the Oracle Clusterware Home for Cloning

4. Remove files that pertain only to the source node.

5. Create an archive of the source.

# cd /stagecrs/grid
# rm -rf /stagecrs/grid/log/<hostname>
# rm -rf root.sh*
# rm -rf gpnp/*
# find . -name '*.ouibak' -exec rm {} \;
# find . -name '*.ouibak.1' -exec rm {} \;
# rm -rf inventory/ContentsXML/oraclehomeproperties.xml

# cd ./cfgtoollogs
# find . -type f -exec rm -f {} \;

# cd /stagecrs/grid
# tar -zcvf /tmp/crs111060.tgz .

Preparing the Oracle Clusterware Home for Cloning (continued)

Each installation of Oracle Clusterware on local storage devices contains files and directories that are applicable to only that node. These files and directories should be removed before making an archive of the software for cloning. Perform the commands in the slide to remove node-specific information from the copy that was made in step 3. Then create an archive of the Oracle Clusterware copy. An example using the Linux tar command with gzip compression (the -z option) to reduce the file size is shown in the slide. On Windows systems, use the WinZip utility to create a zip file for the archive.

Note: Do not use the Java Archive (JAR) utility to copy and compress the Oracle Clusterware home.
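The cleanup-and-archive sequence in steps 3 through 5 can be rehearsed end to end on a scratch directory before touching a real Grid home. Everything below uses throwaway stand-in paths and file names, not the actual installation:

```shell
# Miniature dry run: build a fake "grid home", strip node-specific files,
# then archive the cleaned copy with tar/gzip as in the slide.
stage=$(mktemp -d)
mkdir -p "$stage/grid/log/host01" "$stage/grid/gpnp" "$stage/grid/bin"
echo demo > "$stage/grid/bin/crsctl"
touch "$stage/grid/root.sh" "$stage/grid/gpnp/profile.xml"
cd "$stage/grid"
rm -rf log/host01 root.sh* gpnp/*          # node-specific files removed
tar -zcf "$stage/crsclone.tgz" .           # compressed archive of the clean copy
tar -ztf "$stage/crsclone.tgz" | grep -q 'bin/crsctl' && echo "archive ok"
```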



Preparing the Oracle Clusterware Home for Cloning

6. Restart Oracle Clusterware.

# crsctl start crs



Cloning to Create a New Oracle Clusterware Environment

The following procedure uses cloning to create a new cluster:

1. Prepare the new cluster nodes. (See the lesson titled “Grid Infrastructure Installation” for details.)
   A. Check system requirements.

B. Check network requirements.

C. Install the required operating system packages.

D. Set kernel parameters.

E. Create groups and users.

F. Create the required directories.

G. Configure installation owner shell limits.

H. Configure SSH and enable user equivalency.
I. Use the Cluster Verification Utility (cluvfy) to check prerequisites.

Cloning to Create a New Oracle Clusterware Environment

The archive of the source files that was made when preparing the Oracle Clusterware home for cloning becomes the source for the installation on the new cluster, and the clone.pl script is used to turn it into a working environment. These actions do not perform the varied prerequisite tasks that must be performed on the operating system before an Oracle Clusterware installation. The prerequisite tasks are the same as the ones presented in the lesson titled “Grid Infrastructure Installation” in the topic about installing Oracle Clusterware.

One of the main benefits of cloning is rapid instantiation of a new cluster. The prerequisite tasks are manual in nature, but it is suggested that a shell script be developed to automate these tasks. One advantage of the Oracle Enterprise Linux operating system is that an oracle-validated-1.x.x.rpm file can be obtained that will perform almost all the prerequisite checks including the installation of missing packages with a single command.

This cloning procedure is used to build a new distinct cluster from an existing cluster. In step 1, prepare the new cluster nodes by performing prerequisite setup and checks.

If cluvfy fails to execute because of user equivalence errors, the passphrase needs to be loaded with the following commands before executing cluvfy:

exec /usr/bin/ssh-agent $SHELL
ssh-add



Cloning to Create a New Oracle Clusterware Environment

2. Deploy Oracle Clusterware on each of the destination nodes.
A. Extract the tar file created earlier.

B. Change the ownership of files, and create Oracle Inventory.

C. Remove any network files from /u01/app/11.2.0/grid/network/admin

# mkdir -p /u01/app/11.2.0/grid
# cd /u01/app/11.2.0
# tar -zxvf /tmp/crs111060.tgz

# chown -R crs:oinstall /u01/app/11.2.0/grid
# mkdir -p /u01/app/oraInventory
# chown grid:oinstall /u01/app/oraInventory

$ rm /u01/app/11.2.0/grid/network/admin/*

Cloning to Create a New Oracle Clusterware Environment (continued)

Step 2 is the deployment and extraction of the source archive to the new cluster nodes. If a shared Oracle Clusterware home on a Cluster File System (CFS) is not being used, extract the source archive to each node's local file system. It is possible to change the operating system owner to one different from that of the source with recursive chown commands, as illustrated in the slide. If other Oracle products have been previously installed on the new nodes, the central Oracle Inventory directory may already exist. It is possible for this directory to be owned by a different user than the Oracle Grid Infrastructure user; however, both should belong to the same primary group, oinstall. Execute the preupdate.sh script on each target node, logged in as root. This script changes the ownership of the CRS home and root-owned files to the Oracle CRS user.
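The extract-and-stage sequence can be rehearsed safely in a scratch directory. In the sketch below the paths stand in for /u01/app/11.2.0/grid and /u01/app/oraInventory, and a tiny fabricated archive stands in for the real crs111060.tgz; the chown is shown commented out because it requires root and the real software-owner accounts.

```shell
#!/bin/sh
# Rehearse step 2 against a scratch directory. ROOT stands in for "/",
# and the fabricated tarball stands in for /tmp/crs111060.tgz.
ROOT=$(mktemp -d)
ARCHIVE="$ROOT/crs111060.tgz"

# Fabricate a tiny stand-in archive (on a real node this is the copy
# made when the source Clusterware home was prepared for cloning).
mkdir -p "$ROOT/stage/grid/bin"
echo demo > "$ROOT/stage/grid/bin/crsctl"
tar -C "$ROOT/stage" -czf "$ARCHIVE" grid

# A. Extract the tar file into the new Grid home's parent directory.
mkdir -p "$ROOT/app/11.2.0"
tar -C "$ROOT/app/11.2.0" -zxf "$ARCHIVE"

# B. Create the Oracle Inventory directory.
mkdir -p "$ROOT/app/oraInventory"
# chown -R grid:oinstall "$ROOT/app/11.2.0/grid"  # root-only on a real node
```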


The clone.pl Script

Cloning to a new cluster and cloning to extend an existing cluster both use the Perl-based clone.pl script, which:

• Can be used on the command line

• Can be contained in a shell script

• Accepts many parameters as input

• Is invoked by the Perl interpreter

# perl <CRS_home>/clone/bin/clone.pl $E01 $E02 $E03 $E04 $C01 $C02

The clone.pl Script

The Perl-based clone.pl script is used in place of the graphical OUI utility to perform the installation on the new nodes so that they can participate in the existing cluster or become valid nodes in a new cluster. The script can be executed directly on the command line, or at a DOS command prompt on Windows platforms. The clone.pl script accepts several parameters as input, typed directly on the command line. Because clone.pl is sensitive to the parameters passed to it, including the use of braces, single quotation marks, and double quotation marks, it is recommended that you create a shell script to execute clone.pl and supply its arguments. This makes the command easier to rework if a syntax error is generated. A total of seven arguments can be passed as parameters to the clone.pl script: four define environment variables, and the remaining three supply processing options. If your platform does not include a Perl interpreter, you can download one at:

http://www.perl.org


The clone.pl Environment Variables

The clone.pl script accepts four environment variables as input. They are as follows:

Symbol Variable Description

E01 ORACLE_BASE The location of the Oracle base directory

E02 ORACLE_HOME The location of the Oracle Grid Infrastructure home. This directory location must exist and must be owned by the Oracle operating system group: oinstall.

E03 ORACLE_HOME_NAME The name of the Oracle Grid Infrastructure home

E04 INVENTORY_LOCATION The location of the Oracle Inventory

The clone.pl Environment Variables

The clone.pl script accepts four environment variables as command-line arguments providing input. These variables correlate to some of the questions presented by the OUI utility during an installation. Each variable is associated with a symbol that is used for reference purposes when developing a shell script. The choice of symbol is arbitrary. Each variable is case-sensitive. The variables are as follows:
• ORACLE_BASE: The location of the Oracle base directory. A suggested value is /u01/app/grid. The ORACLE_BASE value should be unique for each software owner.
• ORACLE_HOME: The location of the Oracle Grid Infrastructure home. This directory location must exist and be owned by the Oracle Grid Infrastructure software owner and the Oracle Inventory group, typically grid:oinstall. A suggested value is /u01/app/<version>/grid.
• ORACLE_HOME_NAME: The name of the Oracle Grid Infrastructure home. This is stored in the Oracle inventory and defaults to the name Ora11g_gridinfrahome1 when performing an installation with OUI. Any name can be selected, but it should be unique in the organization.
• INVENTORY_LOCATION: The location of the Oracle Inventory. This directory location must exist and must initially be owned by the Oracle operating system group: oinstall. A typical location is /u01/app/oraInventory.
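Tying the table to practice, the four variables can be assembled in a shell wrapper like the sketch below. The paths are the suggested values from the text, and the echo makes this a dry run; on a real node you would remove the echo and add the -silent flag plus the C01/C02 options.

```shell
#!/bin/sh
# Assemble the four environment-variable arguments for clone.pl.
# Paths are the suggested values from the text; adjust for your site.
ORACLE_BASE=/u01/app/grid
GRID_HOME=/u01/app/11.2.0/grid

E01=ORACLE_BASE=${ORACLE_BASE}
E02=ORACLE_HOME=${GRID_HOME}
E03=ORACLE_HOME_NAME=Ora11g_gridinfrahome1
E04=INVENTORY_LOCATION=/u01/app/oraInventory

# Dry run: echo the command instead of executing it.
echo perl ${GRID_HOME}/clone/bin/clone.pl $E01 $E02 $E03 $E04
```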


The clone.pl Command Options

The clone.pl script accepts two required command options as input. They are as follows:

# Variable Data Type Description

C01 CLUSTER_NODES String The list of short node names for the nodes in the cluster

C02 LOCAL_NODE String The short name of the local node

The clone.pl Command Options

The clone.pl script accepts two command options as input. The command options are case-sensitive and are as follows:
• CLUSTER_NODES: The list of short node names for the nodes in the cluster
• LOCAL_NODE: The short name of the local node


Cloning to Create a New Oracle Clusterware Environment

3. Create a shell script to invoke clone.pl supplying input.

#!/bin/sh
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/11.2.0/grid
THIS_NODE=`hostname -s`

E01=ORACLE_BASE=${ORACLE_BASE}
E02=ORACLE_HOME=${GRID_HOME}
E03=ORACLE_HOME_NAME=OraGridHome1
E04=INVENTORY_LOCATION=${ORACLE_BASE}/oraInventory

#C00="-O'-debug'"
C01="-O'\"CLUSTER_NODES={node1,node2}\"'"
C02="-O'\"LOCAL_NODE=${THIS_NODE}\"'"

perl ${GRID_HOME}/clone/bin/clone.pl -silent $E01 $E02 $E03 $E04 $C01 $C02

Syntax per bug 8718041

Cloning to Create a New Oracle Clusterware Environment

The clone.pl script requires you to provide many setup values when it is executed. You can enter the values interactively on the command line or create a script to supply them. By creating a script, you can modify it and execute it a second time if errors occur. The setup values are case-sensitive and sensitive to the use of braces, single quotation marks, and double quotation marks.

For step 3, create a shell script that invokes clone.pl supplying command-line input variables. All the variables should appear on a single line.


Cloning to Create a New Oracle Clusterware Environment

4. Run the script created in Step 3 on each node.

5. Prepare the crsconfig_params file.

$ /tmp/my-clone-script.sh

Cloning to Create a New Oracle Clusterware Environment (continued)

Step 4: Run the script you created in Step 3 as the operating system user that installed Oracle Clusterware. If you do not have a shared Oracle Grid Infrastructure home, run this script on each node. The clone.pl command instantiates the crsconfig_params file in the next step.

Step 5: Prepare the /u01/app/11.2.0/grid/install/crsconfig_params file on all of the nodes in the cluster. You can copy the file from one node to all of the other nodes. More than 50 parameters are named in this file.

Note: In 11.2.0.1, the clone.pl script does not propagate the crsconfig_params file.
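Because the file is not propagated automatically, the copy to the other nodes can be scripted. The node names below are examples and the echo keeps this a dry run; drop the echo (and use the real node list) to perform the copies with scp.

```shell
#!/bin/sh
# Copy crsconfig_params from the local node to the remaining cluster
# nodes. Node names are examples; "echo" makes this a dry run.
PARAMS=/u01/app/11.2.0/grid/install/crsconfig_params
NODES="node2 node3"

planned=""
for n in $NODES; do
    planned="$planned ${n}"
    echo scp "$PARAMS" "${n}:${PARAMS}"
done
```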


Cloning to Create a New Oracle Clusterware Environment

6. Run the orainstRoot.sh script on each node.

7. Run the <CRS_home>/root.sh and rootcrs.pl scripts on each node.

# /u01/app/oraInventory/orainstRoot.sh

# /u01/app/11.2.0/grid/root.sh
# /u01/app/11.2.0/grid/perl/bin/perl \
  -I/u01/app/11.2.0/grid/perl/lib \
  -I/u01/app/11.2.0/grid/crs/install \
  /u01/app/11.2.0/grid/crs/install/rootcrs.pl

Cloning to Create a New Oracle Clusterware Environment (continued)

When the shell script containing clone.pl successfully completes, it is necessary to execute the orainstRoot.sh script for step 6 and the root.sh script for step 7 on each node, both as the root user.

Note
• Step 4 must be run to completion before you start Step 5. Similarly, Step 5 must be run to completion before you start Step 6.
• You can perform Step 4, Step 5, and Step 6 simultaneously on different nodes. Step 6 must be complete on all nodes before you can run Step 7.
• Step 7: Ensure that the root.sh and rootcrs.pl scripts have completed on the first node before running them on the second node and subsequent nodes.
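The ordering rules can be sketched as a small driver script: step 6 runs on every node before any step-7 script starts, and step 7 runs serially, finishing on one node before moving to the next. The node names and the use of ssh as root are assumptions for illustration, and the echo keeps it a dry run.

```shell
#!/bin/sh
# Dry-run driver that respects the step ordering. Node names and
# ssh access as root are assumptions for illustration.
NODES="node1 node2"
plan=""

# Step 6: orainstRoot.sh on all nodes first.
for n in $NODES; do
    plan="$plan ${n}:orainstRoot.sh"
    echo ssh root@"$n" /u01/app/oraInventory/orainstRoot.sh
done

# Step 7: root.sh serially, completing on each node before the next.
for n in $NODES; do
    plan="$plan ${n}:root.sh"
    echo ssh root@"$n" /u01/app/11.2.0/grid/root.sh
done
```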


Cloning to Create a New Oracle Clusterware Environment

8. Run the configuration assistants on each new node.

9. Run the cluvfy utility to validate the installation.

$ /u01/app/11.2.0/grid/bin/netca \
  /orahome /u01/app/11.2.0/grid \
  /orahnam OraGridHome1 /instype typical \
  /inscomp client,oraclenet,javavm,server \
  /insprtcl tcp /cfg local \
  /authadp NO_VALUE \
  /responseFile \
  /u01/app/11.2.0/grid/network/install/netca_typ.rsp \
  /silent

$ cluvfy stage -post crsinst -n all -verbose

Cloning to Create a New Oracle Clusterware Environment (continued)

Step 8: Execute the command shown in the slide on the first node.

If you are using ASM, then run the following command:

$ /u01/app/11.2.0/grid/bin/asmca -silent -postConfigureASM \
  -sysAsmPassword oracle -asmsnmpPassword oracle

If you plan to run a pre-11g Release 2 (11.2) database on this cluster, then you should run oifcfg as described in the Oracle Database 11g Release 2 (11.2) documentation.

To use IPMI, use the crsctl command to configure IPMI on each node:

# crsctl set css ipmiaddr ip_address

At this point, Oracle Clusterware is fully installed and configured to operate in the new cluster. The cluvfy utility can be invoked in the post-CRS installation stage to verify the installation with the following syntax:

cluvfy stage -post crsinst -n all -verbose

If cluvfy fails to execute because of user equivalence errors, the passphrase needs to be loaded with the following commands before executing cluvfy again:

exec /usr/bin/ssh-agent $SHELL
ssh-add


Log Files Generated During Cloning

The following log files are generated during cloning to assist with troubleshooting failures.

• Detailed log of the actions that occur during the OUI part:

• Information about errors that occur when OUI is running:

• Other miscellaneous messages generated by OUI:

/u01/app/oraInventory/logs/cloneActions<timestamp>.log

/u01/app/oraInventory/logs/oraInstall<timestamp>.err

/u01/app/oraInventory/logs/oraInstall<timestamp>.out

Log Files Generated During Cloning

Several log files are generated when the clone.pl script is executed and are useful in diagnosing any errors that may occur. For a detailed log of the actions that occur during the OUI part of the cloning, examine the log file:

<Central_Inventory>/logs/cloneActions<timestamp>.log

If errors occurred during the OUI portion of the cloning process, examine the log file:

<Central_Inventory>/logs/oraInstall<timestamp>.err

Other miscellaneous messages generated by OUI can be found in the output file:

<Central_Inventory>/logs/oraInstall<timestamp>.out
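Because each run appends a new timestamped file, a quick way to inspect the latest attempt is to sort by modification time. The sketch below fabricates an inventory layout in a scratch directory so it can be run anywhere; on a cluster node you would point INV at the real central inventory (for example, /u01/app/oraInventory).

```shell
#!/bin/sh
# Find the most recent cloneActions log. INV is a scratch stand-in for
# the real central inventory; the touch commands fabricate two logs.
INV=$(mktemp -d)
mkdir -p "$INV/logs"
touch -t 202002011000 "$INV/logs/cloneActions2010-02-01_10-00AM.log"
touch -t 202002011100 "$INV/logs/cloneActions2010-02-01_11-00AM.log"

# ls -t sorts newest first by modification time.
latest=$(ls -t "$INV"/logs/cloneActions*.log | head -1)
echo "Most recent clone log: $latest"
```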


Log Files Generated During Cloning

The following log files are generated during cloning to assist with troubleshooting failures:

• Detailed log of the actions that occur before cloning as well as during cloning operations:

• Information about errors that occur before cloning as well as during cloning operations:

/u01/app/grid/clone/logs/clone-<timestamp>.log

/u01/app/grid/clone/logs/error-<timestamp>.log

Log Files Generated During Cloning (continued)

Several log files are generated when the clone.pl script is executed and are useful in diagnosing errors that may occur. For a detailed log of the actions that occur before cloning as well as during the cloning process, examine the log file:

<CRS home>/clone/logs/clone-<timestamp>.log

For a detailed log of the errors that occur before cloning as well as during the cloning process, examine the log file:

<CRS home>/clone/logs/error-<timestamp>.log


Cloning to Extend Oracle Clusterware to More Nodes

The procedure is very similar to cloning to create new clusters.

1. Prepare the new cluster nodes. (See the lesson titled “Oracle Clusterware Installation” for details. Consider making this into a shell script for reuse.)
A. Check system requirements.

B. Check network requirements.

C. Install the required operating system packages.

D. Set kernel parameters.

E. Create groups and users.

F. Create the required directories.

G. Configure installation owner shell limits.

H. Configure SSH and enable user equivalency.
I. Use the cluvfy utility to check prerequisites.

Cloning to Extend Oracle Clusterware to More Nodes

The cloning procedure using the clone.pl script can also be used to extend Oracle Clusterware to more nodes within the same cluster, with steps similar to the procedure for creating a new cluster. The first step shown here is identical to the first step performed when cloning to create a new cluster. These steps differ depending on the operating system used.

When you configure Secure Shell (SSH) and enable user equivalency, remember that the authorized_keys and known_hosts files exist on each node in the cluster. Therefore, it will be necessary to update these files on the existing nodes with information about the new nodes that Oracle Clusterware will be extended to.
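The update can be made idempotent so that re-running it never duplicates entries. The sketch below works against a scratch directory; on the real nodes you would operate on ~/.ssh/authorized_keys (and known_hosts) for the Clusterware software owner and distribute the updated files to every node. The key string is a placeholder, not a real key.

```shell
#!/bin/sh
# Idempotently append a new node's public key to authorized_keys.
# Uses a scratch directory; the key below is a placeholder.
SSH_DIR=$(mktemp -d)/.ssh
mkdir -p "$SSH_DIR"
AUTH="$SSH_DIR/authorized_keys"
NEW_KEY="ssh-rsa AAAAB3...example== grid@node3"

touch "$AUTH"
# Append only if the exact line is not already present.
grep -qxF "$NEW_KEY" "$AUTH" || echo "$NEW_KEY" >> "$AUTH"
# Running the update again is a no-op.
grep -qxF "$NEW_KEY" "$AUTH" || echo "$NEW_KEY" >> "$AUTH"
```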

Note: Not supported in 11.2.0.1


Cloning to Extend Oracle Clusterware to More Nodes

2. Deploy Oracle Clusterware on the destination nodes.
A. Extract the tar file created earlier.

B. Change the ownership of files and create Oracle Inventory.

C. Run the preupdate.sh script on each target node.

# mkdir -p /u01/app/crs
# cd /u01/app/crs
# tar -zxvf /tmp/crs111060.tgz

# chown -R crs:oinstall /u01/app/crs
# mkdir -p /u01/app/oraInventory
# chown crs:oinstall /u01/app/oraInventory

# /u01/app/crs/install/preupdate.sh \
  -crshome /u01/app/crs -crsuser crs -noshutdown

Cloning to Extend Oracle Clusterware to More Nodes (continued)

Step 2 is the deployment and extraction of the source archive to the new cluster nodes. If a shared Oracle Clusterware home on a CFS is not being used, extract the source archive to each node's local file system. Because this node will participate with the existing nodes in a cluster, it is not possible to use different account names for the Oracle Clusterware software owner. If other Oracle products have been previously installed on the new nodes, the central Oracle Inventory directory may already exist. It is possible for this directory to be owned by a different user than the Oracle Clusterware user; however, both should belong to the same primary group, oinstall. Execute the preupdate.sh script on each target node, logged in as root. This script changes the ownership of the CRS home and root-owned files to the Oracle CRS user if needed.


Cloning to Extend Oracle Clusterware to More Nodes

3. Create a shell script to invoke clone.pl supplying input.

4. Run the shell script created in step 3 on each new node.

#!/bin/sh
# /tmp/my-clone-script.sh
E01=ORACLE_BASE=/u01/app
E02=ORACLE_HOME=/u01/app/crs
E03=ORACLE_HOME_NAME=OraCrs11g
C01="-O'sl_tableList={node3:node3-priv:node3-vip:N:Y}'"
C02="-O'INVENTORY_LOCATION=/u01/app/oraInventory'"
C03="-O'-noConfig'"

perl /u01/app/crs/clone/bin/clone.pl $E01 $E02 $E03 $C01 $C02 $C03

$ /tmp/my-clone-script.sh

Cloning to Extend Oracle Clusterware to More Nodes (continued)

For step 3, the number of input variables to the clone.pl procedure is greatly reduced because the existing cluster has already defined many of the settings that are needed. Creating a shell script to provide these input values is still recommended. For step 4, run the shell script that you developed to invoke the clone.pl script.


Cloning to Extend Oracle Clusterware to More Nodes

5. Run the orainstRoot.sh script on each new node.

6. Run the addNode script on the source node.

7. Run the rootaddnode.sh script on the source node.

# /u01/app/oraInventory/orainstRoot.sh

$ /u01/app/crs/oui/bin/addNode.sh -silent \
  "CLUSTER_NEW_NODES={new_nodes}" \
  "CLUSTER_NEW_PRIVATE_NODE_NAMES={new_node-priv}" \
  "CLUSTER_NEW_VIRTUAL_HOSTNAMES={new_node-vip}" \
  -noCopy

# /u01/app/crs/install/rootaddnode.sh

Cloning to Extend Oracle Clusterware to More Nodes (continued)

For step 5, run the orainstRoot.sh script on each new node as the root user. For step 6, run the addNode.sh script on the source node, not on the destination node. Because the clone.pl script has already been run on the new nodes, this script only updates the inventories of the existing nodes. For step 7, again on the source node, run the rootaddnode.sh script as root to instantiate the node.


Cloning to Extend Oracle Clusterware to More Nodes

8. Run the <CRS_home>/root.sh script on the new node.

9. Run the configuration assistants on each node that is listed in the configToolAllCommands file.

10. Run cluvfy to validate the installation.

# /u01/app/crs/root.sh

$ cat /u01/app/crs/cfgtoollogs/configToolAllCommands
$ onsconfig add_config node3:6501

$ cluvfy stage -post crsinst -n all -verbose

Cloning to Extend Oracle Clusterware to More Nodes (continued)

Next, in step 8, run the root.sh script as the root user on each new node joining the cluster. For step 9, additional commands for the configuration assistants need to be run on the new nodes joining the cluster as the Oracle Clusterware software owner. The list of commands can be found in the <CRS home>/cfgtoollogs/configToolAllCommands file. Finally, for step 10, run cluvfy to verify the success of the Oracle Clusterware installation.


Quiz

An Oracle Clusterware home that was created with cloning techniques can be used as the source for additional cloning exercises.

1. True

2. False

Answer: 1


Quiz

Which scripting language is used for the cloning script?

1. Java

2. PERL

3. Shell

4. Python

Answer: 2


Summary

In this lesson, you should have learned how to:

• Describe the cloning process
• Describe the clone.pl script and its variables

• Perform a clone of Oracle Clusterware to a new cluster

• Extend an existing cluster by cloning
