
Sun Cluster 3.2 1/09 Release Notes

820-4990

Release Notes

Copyright © 2007, 2009, Oracle and/or its affiliates. All rights reserved.

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.

The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

If this is software or related software documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable:

U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, duplication, disclosure, modification, and adaptation shall be subject to the restrictions and license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License (December 2007). Oracle America, Inc., 500 Oracle Parkway, Redwood City, CA 94065.

This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications which may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark licensed through X/Open Company, Ltd.

This software or hardware and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.


(English) Sun Cluster 3.2 1/09 Release Notes

This document provides the following information for Sun™ Cluster 3.2 1/09 software.

What's New in the Sun Cluster 3.2 1/09 Software
Features Nearing End of Life
Compatibility Issues
Commands Modified in This Release
Product Name Changes
Supported Products
Sun Cluster Security Hardening
Known Issues and Bugs
Patches and Required Firmware Levels
Sun Cluster 3.2 1/09 Documentation
Documentation Issues

Top

What's New in the Sun Cluster 3.2 1/09 Software

This section provides information related to new features, functionality, and supported products in the Sun Cluster 3.2 1/09 software.

New Features and Functionality

The following new features and functionality are provided in patches to Sun Cluster 3.2 1/09 software:

EMC SRDF Data Recovery
Sun Storage 7000 Unified Storage Systems With Oracle RAC
Sun Storage J4200/J4400 SATA Arrays
VLANs Shared Among Multiple Clusters

The following new features and functionality are provided in the initial Sun Cluster 3.2 1/09 release:

Cluster Configuration Checker
Data Service for Informix
Exclusive-IP Zones Support
Global-Devices Namespace on a lofi Device
IPsec on the Cluster Interconnect
Optional Fencing
PostgreSQL WAL Shipping Support
Quorum Enhancements
  Quorum Monitoring
  Software Quorum
  New Automated Response to a Change in Quorum-Device Status
Solaris Containers Clusters, or Zone Clusters
ZFS Root File Systems, Except the Global-Devices Namespace

Top

Cluster Configuration Checker

The sccheck command is replaced by the cluster check command. The new check subcommand performs the same role as sccheck. In addition, a new cluster list-checks command is added, which displays a list of all available cluster configuration checks.

For details about the check and list-checks subcommands, see the cluster(1CL) man page.
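
For example, a minimal sketch of running the new subcommands from a cluster node as superuser; the commands are exactly those documented above:

# cluster check
# cluster list-checks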

Top


Data Service for Informix

A new data service is added for Informix. This data service is supported on the Solaris 10 OS for both SPARC® based platforms and x86 based platforms.

Top

Support EMC SRDF Data Recovery After a Complete Failure in a Primary Room

A patch to Sun Cluster 3.2 1/09 software changes Sun Cluster behavior so that if the primary campus cluster room fails, Sun Cluster automatically fails over to the secondary room. The patch makes the secondary room's storage device readable and writable, and enables the failover of the corresponding device groups and resource groups. When the primary room returns online, you can manually run the procedure that recovers the data from the EMC Symmetrix Remote Data Facility (SRDF) device group by resynchronizing the data from the original secondary room to the original primary room.

The following are the minimum patch versions that contain this new functionality:

Solaris 10 SPARC - 126106-30
Solaris 10 x86 - 126107-31
Solaris 9 - 126105-29

For more information, see the procedure in the Documentation Issues section for the System Administration Guide.

Top

Exclusive-IP Zones Support

Support of SUNW.LogicalHostname resources is added for non-global zones with ip-type=exclusive. Such zones can have complete network isolation from the global zone and from one another, with their own network interfaces, routing, filtering, IPMP groups, and so forth. You can now use zones with ip-type=exclusive to host network resources of type SUNW.LogicalHostname that are used by failover data services. The use of zones with ip-type=exclusive is not available for scalable data services or with zone clusters.

For clusters that upgrade to the Sun Cluster 3.2 1/09 release, you must upgrade the SUNW.LogicalHostname resource to version 3 to make this feature available to the cluster. For new installations of Sun Cluster 3.2 1/09 software, this feature is automatically available.
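
As a hedged sketch, one common way to perform such a resource-type upgrade is to register the new version and then set the Type_version property on each affected resource; the resource name lh-rs below is a hypothetical example, and the resource-type upgrade procedure in the Sun Cluster documentation is authoritative:

# clresourcetype register SUNW.LogicalHostname:3
# clresource set -p Type_version=3 lh-rs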

Top

Global-Devices Namespace on a lofi Device

On the Solaris 10 OS, as an alternative to creating the global-devices namespace on a dedicated partition, you can instead choose to create the namespace on a lofi device. This feature is particularly useful if you are installing Sun Cluster software on systems that are pre-installed with the Solaris 10 OS.

This feature is not available on the Solaris 9 OS.

For more information, see New Choice for the Location of the Global-Devices Namespace.

Top

IPsec on the Cluster Interconnect

Support is added to use IP Security Architecture (IPsec) on the cluster private interconnect. For details, see "How to Configure IP Security Architecture (IPsec) on the Cluster Interconnect" in the Sun Cluster Software Installation Guide.

Top

Optional Fencing

You can now use the cluster command to set a default value for fencing that is applied to all storage devices. You can also now use the new default_fencing property with the cldevice command to enable or disable fencing for particular devices.


If you disable fencing on a device, Sun Cluster software performs neither SCSI-2 nor SCSI-3 reservation-related operations on that device.

The default_fencing property replaces the raw-disk device group localonly property that you used to turn off fencing for devices in specific situations. The localonly property remains available, however, as you still need it to create local VxVM device groups.
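
For example, a minimal hedged sketch that turns fencing off cluster-wide and then for a single device; the DID device name d5 is a hypothetical example, and nofencing_noscrub is the value cited elsewhere in these notes for turning fencing off:

# cluster set -p global_fencing=nofencing_noscrub
# cldevice set -p default_fencing=nofencing_noscrub d5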

For details, see "Administering the SCSI Protocol Settings for Storage Devices" in the Sun Cluster System Administration Guide and the cluster(1CL) and cldevice(1CL) man pages.

Top

PostgreSQL WAL Shipping Support

The Sun Cluster data service for PostgreSQL now supports PostgreSQL Write Ahead Log (WAL) shipping. WAL shipping provides the ability to support log shipping functionality as a replacement for shared storage, thus eliminating the need for shared storage in a cluster when using PostgreSQL databases. This feature provides support for PostgreSQL database replication between two different clusters or between two different PostgreSQL failover resources within one cluster.

For details, see the Sun Cluster Data Service for PostgreSQL Guide.

Top

Quorum Enhancements

New quorum features are available, as described in the following sections.

Software Quorum

Sun Cluster software now supports a new protocol called "software quorum". Software quorum implements all Sun Cluster quorum device operations entirely in software. Software quorum does not use SCSI-2 or SCSI-3 reservation-related operations. Consequently, you can now use any kind of shared disk with Sun Cluster software. For example, you can now use Serial Advanced Technology Attachment (SATA) devices as well as solid-state storage devices as quorum devices.

To use a shared disk with software quorum, you first need to turn off SCSI fencing for that device. When you subsequently configure the device as a quorum device, Sun Cluster software determines that SCSI fencing for the device is turned off and automatically uses software quorum. Note, however, that Sun Cluster still supports the SCSI-2 and SCSI-3 protocols with shared disks.

Software quorum provides a level of data-integrity protection comparable to SCSI-2-based fencing.
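
A minimal hedged sketch of the sequence described above, using a hypothetical shared DID device d4: turn SCSI fencing off for the disk, then add it as a quorum device so that the software quorum protocol is used automatically.

# cldevice set -p default_fencing=nofencing_noscrub d4
# clquorum add d4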

Top

Quorum Monitoring

This new feature periodically probes all configured quorum devices and sends notification to syslog if a quorum device fails. You can monitor syslog for quorum-device failures and proactively replace a failed quorum device before node reconfiguration occurs.

Top

New Automated Response to a Change in Quorum Device Status

In previous Sun Cluster releases, if a configured quorum device stopped working, the cluster would subsequently ignore the quorum device completely. If such a quorum device became functional again, the user needed to unconfigure and then reconfigure the device, to return it to use as a quorum device in the cluster. Alternatively, the user could reboot the entire cluster.

This release changes the quorum device behavior. Now when the cluster encounters a problem with a quorum device, such as during a node reconfiguration, the cluster automatically takes the quorum device offline.

The new quorum-monitoring feature periodically checks the status of all configured quorum devices and initiates the following actions:

Takes offline any quorum device that is not working properly.
Brings online any formerly nonfunctional quorum device whose problem is resolved, and loads the quorum device with the appropriate information.


Any changes in the status of quorum devices are accompanied by notifications on syslog.

The following is one example of how this feature can affect cluster behavior. When a quorum device is located on a different network than the cluster nodes, transient network problems might cause communication failures. The quorum-monitoring feature automatically detects a communication failure and takes the quorum device offline. Similarly, when communications are restored, the quorum-monitoring feature automatically brings the quorum device back online.

If you check the status of quorum devices by using the clquorum status command, or if the cluster undergoes a reconfiguration, the software performs the same actions that the quorum monitor does upon detecting a change in quorum-device status.

In addition, if you check the status of quorum devices by using the clquorum status command, the command now reports the present status of quorum devices, rather than report the status as of the last cluster reconfiguration.

Top

Solaris Containers Clusters, or Zone Clusters

On the Solaris 10 OS, with this release of Sun Cluster software you can now create a Solaris Containers cluster, simply called a zone cluster. The new zone-cluster feature is a virtual cluster, where each virtual node is a cluster brand zone. A zone cluster provides the following primary features:

Cluster application fault isolation
Security isolation
Resource management
License fee cost containment
Cluster application consolidation

A zone cluster is a simplified cluster that only contains those resources that are directly used by applications. There can be any number of zone clusters, and each zone cluster can run any number of applications.

The first data service that has been qualified to run in a zone cluster is Oracle RAC, including all RAC software, such as the database instances, CRS, and ASM. You can also run user-developed applications that run with RAC. As additional data services become qualified to run in a zone cluster, they will be added to Zone Cluster Support in these Release Notes.

For details about configuring a zone cluster, see "Configuring a Zone Cluster" in the Sun Cluster Software Installation Guide and the clzonecluster(1CL) man page.

For a discussion of the terminology that was changed related to virtual clusters, see "Introduction to the Sun Cluster Environment" in the Sun Cluster Concepts Guide.
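
The following is a minimal, hedged sketch of creating a zone cluster with the clzonecluster utility; the names, path, address, and adapter (zc1, phys-host-1, zc-host-1, /zones/zc1, 192.168.10.11, e1000g0) are hypothetical examples, and the procedure in the Sun Cluster Software Installation Guide cited above is authoritative:

# clzonecluster configure zc1
clzc:zc1> create
clzc:zc1> set zonepath=/zones/zc1
clzc:zc1> add node
clzc:zc1:node> set physical-host=phys-host-1
clzc:zc1:node> set hostname=zc-host-1
clzc:zc1:node> add net
clzc:zc1:node:net> set address=192.168.10.11
clzc:zc1:node:net> set physical=e1000g0
clzc:zc1:node:net> end
clzc:zc1:node> end
clzc:zc1> commit
clzc:zc1> exit
# clzonecluster install zc1
# clzonecluster boot zc1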

Top

Support for Sun Storage 7000 Unified Storage Systems With Oracle RAC

Sun Cluster 3.2 1/09 software now supports Sun Storage 7000 Unified Storage Systems, for use with Oracle RAC in the following configurations:

Up to four SPARC nodes
At least Oracle RAC 10.2.0.4 10g Release 2 or Oracle RAC 11.1.0.7 11g software
NFS file systems are used for Oracle RAC database files

No data services other than Oracle RAC are currently supported for use with these storage systems.

Observe the following guidelines when you use Sun Storage 7000 Unified Storage Systems with NFS file systems in a cluster configuration:

You can use an iSCSI LUN from the storage system as a cluster quorum device. You must disable fencing for the LUN, which automatically enables the use of the software quorum protocol. For information about configuring a quorum device with fencing disabled, follow instructions in How to Configure Quorum Devices in the Sun Cluster Software Installation Guide.

When Oracle RAC is installed in the global zone, you can also use NFS file systems for Oracle Clusterware OCR and Voting disks.


When Oracle RAC is installed in a zone cluster, you must use iSCSI LUNs from Sun Storage 7000 Unified Storage Systems as OCR and Voting devices. You cannot use NFS file systems in this configuration due to a problem with Oracle Clusterware's detection of NFS mount options when running inside a non-global zone.

If you use iSCSI LUNs for Oracle Clusterware OCR and Voting disks, either in the global zone or in a zone cluster, configure the corresponding Sun Cluster DID devices with fencing disabled. Use Sun Cluster DID device paths, not Solaris device paths, to reference those LUNs. For instructions to disable fencing globally or for individual disks, see procedures in Administering the SCSI Protocol Settings for Storage Devices. For instructions to add a raw device to a zone cluster, see How to Add a Raw-Disk Device to a Zone Cluster.

Use the mount options that are required by Oracle RAC for NFS file systems. See OracleMetaLink bulletin 359515.1, "Mount Options for Oracle Files When Used with NAS Devices".

Set the mount at boot option to yes in the /etc/vfstab file to automate the NFS file system mounts.

Example:

#device                  device   mount      FS   fsck  mount    mount
#to mount                to fsck  point      type pass  at boot  options
# ...
qualfugu:/export/oracrs  -  /data/crs  nfs  2  yes  rw,bg,forcedirectio,wsize=32768,rsize=32768,hard,noac,nointr,proto=tcp,vers=3
qualfugu:/export/oradb   -  /data/db   nfs  2  yes  rw,bg,forcedirectio,wsize=32768,rsize=32768,hard,noac,nointr,proto=tcp,vers=3

At this time, you cannot use the Sun Cluster clnas command to register Sun Storage 7000 Unified Storage Systems with the cluster.

When you configure the RAC framework and RAC proxy resources, either by using the RAC wizard or by using maintenance commands, use the procedure for hardware RAID storage management and not for NAS NFS. See the Sun Cluster Data Service for Oracle RAC Guide.

Top

Support for Sun Storage J4200/J4400 SATA Arrays

Sun Cluster 3.2 1/09 software now supports the use of Sun Storage J4200 and J4400 arrays with SATA HDDs in a cluster configuration. Sun Storage J4200 and J4400 storage arrays with SATA drives rely on an internal STP/SATA bridge, which handles the mapping to SATA. Storage arrays with SATA drives have certain limitations that must be taken into consideration in a Sun Cluster configuration. Observe the following requirements and guidelines when you configure dual-ported Sun Storage J4200 and J4400 SATA drives for use in a Sun Cluster configuration:

SATA constraint - Because of the SATA protocol's point-to-point nature, only one initiator can be affiliated with a SATA drive on each J4200 or J4400 array's SIM card.
Cabling - To work around the SATA protocol constraints, the cabling layout from hosts to the J4200 or J4400 arrays might expose the SIMs and the HBAs as a single point of failure in certain configurations. Figure 2 shows an example of such a configuration.
Initiators - You must use one initiator from a host to a SIM card, as shown in Figures 1 and 2.


Figure 1 - Simple Configuration With Standalone J44xx Array
Figure 2 - Simple Configuration With Cascaded J44xx Array

Mirroring - To avoid possible loss of data if a disk fails, use host-based mirroring of the data.
Sharing arrays - You cannot share a SATA JBOD array between hosts that belong to different clusters.
Disabling global fencing - You must disable global fencing for all disks in the SATA storage array. This is required regardless of whether any SATA disks are to be used as a quorum device or simply as shared storage.

To disable fencing during initial cluster configuration, run the scinstall utility in Custom Mode. When prompted by the utility whether to disable global fencing, respond "Yes".
To disable fencing of all storage devices in the cluster after the cluster is established, follow instructions in "How to Change the Default Global Fencing Protocol Settings for All Storage Devices" in the Sun Cluster System Administration Guide to set the global_fencing property to nofencing_noscrub. Alternatively, follow instructions in "How to Change the Fencing Protocol for a Single Storage Device" to set the default_fencing property of each SATA disk to nofencing_noscrub.

For more information about setting up Sun Storage J4200 or J4400 arrays, see the Sun Storage J4200/J4400 Array System Overview at http://docs.sun.com/source/820-3223-11/ and the Sun Storage J4200/J4400 Array System Hardware Installation Guide at http://docs.sun.com/app/docs/doc/820-3218-11.

Top

VLANs Shared Among Multiple Clusters

Sun Cluster configurations now support the sharing of the same private-interconnect VLAN among multiple clusters. It is no longer necessary to configure a separate VLAN for each cluster. However, limiting the use of a VLAN to a single cluster provides better fault isolation and interconnect resilience.

Top

ZFS Root File Systems, Except the Global-Devices Namespace

This release supports the use of ZFS for root file systems, with one significant exception. If you use a dedicated partition of the boot disk for the global-devices file system, you must use only UFS as its file system. The global-devices namespace requires the proxy file system (PxFS) running on a UFS file system.


However, a UFS file system for the global-devices namespace can coexist with a ZFS file system for the root (/) file system and other root file systems, for example, /var or /home. Alternatively, if you instead use a lofi device to host the global-devices namespace, there is no limitation on the use of ZFS for root file systems.

Top

Features Nearing End of Life

The following feature is nearing end of life in Sun Cluster software.

Solaris 9

Sun Cluster support for the Solaris 9 OS might be discontinued in a future release.

Top

Compatibility Issues

Accessibility Features for People With Disabilities
GlassFish with Sun Cluster Data Service for Sun Java System Application Server
Loopback File System (LOFS)
SAP 7.1 with Sun Cluster Data Service for SAP Web Application Server
Oracle RAC Clusters With Solaris Volume Manager
Shared QFS With Solaris Volume Manager for Sun Cluster
Solaris 10 Patch 137137-09/137138-09 Causes ZFS Pool Corruption
Solaris Trusted Extensions
Solaris Volume Manager GUI
ZFS Root Disk Encapsulation

This section contains information about Sun Cluster compatibility issues.

Additional Sun Cluster framework compatibility issues are documented in "Planning the Sun Cluster Configuration" in the Sun Cluster Software Installation Guide for Solaris OS.

Additional Sun Cluster upgrade compatibility issues are documented in "Upgrade Requirements and Software Support Guidelines" in the Sun Cluster Upgrade Guide for Solaris OS.

For other known problems or restrictions, see Known Issues and Bugs.

Top

Accessibility Features for People With Disabilities

To obtain accessibility features that have been released since the publishing of this media, consult Section 508 product assessments that are available from Sun on request to determine which versions are best suited for deploying accessible solutions.

Top

GlassFish with Sun Cluster Data Service for Sun Java System Application Server

In GlassFish V2 UR2, the Domain Administration Server (DAS) takes a long time to start if the node agent's host is not online. This behavior is due to an existing bug in GlassFish. To support GlassFish V2 UR2 in a Sun Cluster environment, ensure that node-agent hosts are online before you bring the DAS resource online. For detailed information about the bug, see GlassFish Bug 5057.

Top

Loopback File System (LOFS)

Sun Cluster 3.2 1/09 software does not support the use of LOFS under certain conditions. If you must enable LOFS on a cluster node, such as when you configure non-global zones, first determine whether the LOFS restrictions apply to your configuration. See the guidelines in "Solaris OS Feature Restrictions" in the Sun Cluster Software Installation Guide for Solaris OS for more information about the restrictions and workarounds that permit the use of LOFS when restricting conditions exist.

Top

(SPARC) Oracle RAC Clusters With Solaris Volume Manager

Due to CR 6580729, Solaris Volume Manager support is not available on SPARC based clusters with more than eight nodes that run Oracle RAC.

Top

SAP 7.1 with Sun Cluster Data Service for SAP Web Application Server

The following SAP 7.1 issues affect the operation of the Sun Cluster Data Service for SAP:

In the Sun Cluster environment for SAP 7.1, the SAP Web Application Server goes to STOP_Failed mode after a public-network failure. After a public-network failure, the resource should fail over instead of going to STOP_Failed mode. Refer to SAP note 888312 to resolve this issue.
The SAP Web Application Server probe returns an error message with status Degraded after switchover of the enq-msg resource group. Once the switchover of the enq-msg resource is complete, the SAP Web Application Server restarts due to a dependency on the messaging server. The restart of the SAP Web Application Server fails and returns the error message error 503 service unavailable. Refer to SAP note 966416 and follow the instructions to remove all krnlreg entries from the profile, to prevent deadlock situations.

Top

Shared QFS With Solaris Volume Manager for Sun Cluster

The following behavior has been reported on x86 Oracle RAC configurations with one or more Sun StorageTek QFS shared file systems that mount devices from multiowner disksets of Solaris Volume Manager for Sun Cluster (CR 6655081).

If the QFS file system metadata server is located on a node that is not the master node of the disk set and that node loses all storage connectivity, Oracle CRS will reboot the node. At the same time, the other nodes that are QFS metadata client nodes might experience errors writing to related database files. The write error condition corrects itself when the Oracle RAC instances are restarted. This restart should be an automatic recovery action by Oracle CRS, after the QFS metadata server and Solaris Volume Manager automatic recovery has been completed by Sun Cluster software.

Top

Solaris 10 Patch 137137-09/137138-09 Causes ZFS Pool Corruption

A change to the management of the zpool.cache file by zpool_import(), as delivered by Solaris 10 kernel patches 137137-09 (for SPARC) or 137138-09 (for x86), might cause systems that have their shared ZFS (zfs(1M)) storage pools under the control of HAStoragePlus to be simultaneously imported on multiple cluster nodes. Importing a ZFS storage pool on multiple cluster nodes will result in pool corruption, which might cause data integrity issues or cause a cluster node to panic.

To avoid this problem, install Solaris 10 patch 139579-02 (for SPARC) or 139580-02 (for x86) immediately after you install 137137-09 or 137138-09 but before you reboot the cluster nodes.

Alternatively, only on the Solaris 10 5/08 OS, remove the affected patch before any ZFS pools are simultaneously imported to multiple cluster nodes. You cannot remove patch 137137-09 or 137138-09 from the Solaris 10 10/08 OS, because these patches are preinstalled on that release.

For more information, see Sun Alert 245626.

Top

Solaris Trusted Extensions


Starting with the Solaris 10 11/06 release, the Solaris 10 OS includes support for Solaris Trusted Extensions, including with the use of non-global zones. The interaction between Sun Cluster and Solaris Trusted Extensions when using non-global zones is not yet tested and is not supported.

Top

Solaris Volume Manager GUI

The Enhanced Storage module of Solaris™ Management Console (Solaris Volume Manager) is not compatible with Sun Cluster software. Use the command-line interface or Sun Cluster utilities to configure Solaris Volume Manager software.

Top

VxVM and ZFS Root Disk

There is currently an incompatibility with VxVM root disk encapsulation of a root disk that runs on ZFS. Therefore, you cannot use the clvxvm encapsulate command on a cluster node whose root disk uses ZFS. Instead, you can create a root disk group by using local nonroot disks, or choose not to create a root disk group.

(CR 6803117)

Top

Commands Modified in This Release

This section describes changes to the Sun Cluster command interfaces that might cause user scripts to fail.

clzonecluster Utility

A new clzonecluster utility is added to the Solaris 10 version of the Sun Cluster 3.2 1/09 software. This command is used to create and administer a zone cluster. This utility is similar to the zonecfg utility and accepts resources that are used by the zonecfg and sysidcfg utilities. See the clzonecluster(1M) man page for usage details.

Top

New num_zoneclusters Network Property for the cluster Command

A new network property, num_zoneclusters, is introduced for use with the cluster set-netprops command. This property specifies the number of zone clusters that you expect to include in the global cluster. The cluster infrastructure uses this value to modify the private-network IP address range, to support the additional number of addresses that the specified number of zone clusters would require. Unlike the other network properties, you can set this property while the global cluster is in cluster mode as well as in noncluster mode. This property is valid only on the Solaris 10 OS.
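
For example, a minimal sketch (the value 12 is an arbitrary illustration):

# cluster set-netprops -p num_zoneclusters=12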

Top

New check and list-checks Subcommands for the cluster Command

The new cluster check subcommand replaces the sccheck command and performs the same role as sccheck. For any procedures that use sccheck, run cluster check instead, except continue to use the sccheck command to verify the existence of mount points when you configure a cluster file system. For details about the check subcommand, see the cluster(1CL) man page.

The new list-checks subcommand is also added. Use cluster list-checks to display a list of all available cluster configuration checks. For details about the list-checks subcommand, see the cluster(1CL) man page.

Top

pnmd Daemon


The pnmd daemon is replaced by the cl_pnmd daemon, and the pnmd(1M) man page is replaced by the cl_pnmd(1M) man page. This change has little to no user impact, as this daemon is used only by the system.

Top

Product Name Changes

This section provides information about product name changes for applications that Sun Cluster software supports. Depending on the Sun Cluster software release that you are running, your Sun Cluster documentation might not reflect the following product name changes.

Note: Sun Cluster 3.2 1/09 software is distributed under Solaris Cluster 3.2 1/09 and Sun Java Availability Suite.

Current Product Name - Former Product Name

Sun Cluster HA for MaxDB - Sun Cluster HA for SAP DB

Top

Supported Products

This section describes the supported software and memory requirements for Sun Cluster 3.2 1/09 software.

Common Agent Container
Data Services
File Systems
Memory Requirements
Solaris Operating System (OS)
Sun Management Center
Sun StorageTek Availability Suite
Sun StorEdge Availability Suite
Volume Managers

Common Agent Container

This Sun Cluster release supports the common agent container version 2.1.

Top

Data Services

Contact your Sun sales representative for the complete list of supported data services (agents) and application versions.

Note: Data service documentation, including man pages and wizard online help, is no longer translated from English to other languages.

HA-Containers Brand Support
Non-Global Zones Support
Zone Cluster Support
Resource Type Names

HA-Containers Brand Support

The Sun Cluster Data Service for Solaris Containers supports non-global zones of the following brands:


lx

native

solaris8

solaris9

Non-Global Zones Support

The following Sun Cluster data services are supported to run in non-global zones that are not part of a zone cluster.

Note: This support is available only for native brand non-global zones.

Sun Cluster Data Service for Apache
Sun Cluster Data Service for Apache Tomcat
Sun Cluster Data Service for DHCP
Sun Cluster Data Service for Domain Name Service (DNS)
Sun Cluster Data Service for Kerberos
Sun Cluster Data Service for MaxDB
Sun Cluster Data Service for mySQL
Sun Cluster Data Service for N1 Grid Service Provisioning Server
Sun Cluster Data Service for Oracle
Sun Cluster Data Service for Oracle Application Server
Sun Cluster Data Service for PostgreSQL
Sun Cluster Data Service for Samba
Sun Cluster Data Service for SAP
Sun Cluster Data Service for SAP liveCache
Sun Cluster Data Service for SAP Web Application Server
Sun Cluster Data Service for Sun Java System Application Server
Sun Cluster Data Service for Sun Java System Message Queue Server
Sun Cluster Data Service for Sun Java System Web Server
Sun Cluster Data Service for SWIFTAlliance Access
Sun Cluster Data Service for SWIFTAlliance Gateway
Sun Cluster Data Service for Sybase ASE

The SUNW.gds resource type is supported in a native brand non-global zone.

Note: Procedures for the version of Sun Cluster HA for Sun Java™ System Directory Server that uses Sun Java System Directory Server 5.0 and 5.1 are located in Sun Cluster 3.1 Data Service for Sun ONE Directory Server. Beginning with Version 5.2 of Sun ONE Directory Server, see the Sun ONE Directory Server or Sun Java System Directory Server installation documentation.

Note: The Sun Cluster Data Service for Agfa IMPAX 6.3 is supported only on the Solaris 10 OS in this Sun Cluster release. This data service is not supported on the Solaris 9 OS.

Top

Zone Cluster Support

The following Sun Cluster data services are supported in a zone cluster:

Sun Cluster Data Service for Apache (failover and scalable)
Sun Cluster Data Service for MySQL - requires the following minimum patch version to run in a zone cluster:
  SPARC: 126032-06
  x86: 126033-07

Sun Cluster Support for Oracle RAC

The SUNW.gds resource type is supported in a zone cluster.


Note: Only the Oracle RAC wizard is supported for use in a zone cluster.

Top

Resource-Type Names

The following is a list of Sun Cluster data services and their resource types.

Data Service - Sun Cluster Resource Type

Sun Cluster HA for Agfa IMPAX - SUNW.gds
Sun Cluster HA for Apache - SUNW.apache
Sun Cluster HA for Apache Tomcat - SUNW.gds
Sun Cluster HA for DHCP - SUNW.gds
Sun Cluster HA for DNS - SUNW.dns
Sun Cluster HA for Informix - SUNW.gds
Sun Cluster HA for Kerberos - SUNW.krb5
Sun Cluster HA for MaxDB - SUNW.sapdb, SUNW.sap_xserver
Sun Cluster HA for MySQL - SUNW.gds
Sun Cluster HA for N1 Grid Service Provisioning System - SUNW.gds
Sun Cluster HA for NFS - SUNW.nfs
Sun Cluster HA for Oracle - SUNW.oracle_server, SUNW.oracle_listener
Sun Cluster HA for Oracle Application Server - SUNW.gds
Sun Cluster HA for Oracle E-Business Suite - SUNW.gds
Sun Cluster Support for Oracle Real Application Clusters - SUNW.rac_framework, SUNW.rac_udlm, SUNW.rac_svm, SUNW.rac_cvm, SUNW.oracle_rac_server, SUNW.oracle_listener, SUNW.scaldevicegroup, SUNW.scalmountpoint, SUNW.crs_framework, SUNW.scalable_rac_server_proxy
Sun Cluster HA for PostgreSQL - SUNW.gds
Sun Cluster HA for Samba - SUNW.gds
Sun Cluster HA for SAP - SUNW.sap_ci, SUNW.sap_ci_v2, SUNW.sap_as, SUNW.sap_as_v2
Sun Cluster HA for SAP liveCache - SUNW.sap_livecache, SUNW.sap_xserver
Sun Cluster HA for SAP Web Application Server - SUNW.sapenq, SUNW.saprepl, SUNW.sapscs, SUNW.sapwebas
Sun Cluster HA for Siebel - SUNW.sblgtwy, SUNW.sblsrvr
Sun Cluster HA for Solaris Containers - SUNW.gds
Sun Cluster HA for Sun Grid Engine - SUNW.gds
Sun Cluster HA for Sun Java System Application Server (supported versions before 8.1) - SUNW.s1as
Sun Cluster HA for Sun Java System Application Server (supported versions as of 8.1) - SUNW.jsas, SUNW.jsas-na
Sun Cluster HA for Sun Java System Application Server EE (supporting HADB versions before 4.4) - SUNW.hadb
Sun Cluster HA for Sun Java System Application Server EE (supporting HADB versions as of 4.4) - SUNW.hadb_ma
Sun Cluster HA for Sun Java System Message Queue - SUNW.s1mq
Sun Cluster HA for Sun Java System Web Server - SUNW.iws
Sun Cluster HA for SWIFTAlliance Access - SUNW.gds
Sun Cluster HA for SWIFTAlliance Gateway - SUNW.gds
Sun Cluster HA for Sybase ASE - SUNW.sybase
Sun Cluster HA for WebLogic Server - SUNW.wls
Sun Cluster HA for WebSphere Message Broker - SUNW.gds
Sun Cluster HA for WebSphere MQ - SUNW.gds

Top

File Systems

Solaris 10 SPARC

File System - Additional Information

Solaris UFS
Solaris ZFS - Not supported for the /globaldevices file system
Sun StorEdge QFS:
  QFS 5.0 - Standalone file system: Supported Data Services: All failover data services. External Volume Management: Solaris Volume Manager only.
  QFS 4.6 - Standalone file system: Supported Data Services: All failover data services. External Volume Management: Solaris Volume Manager, VxVM.
  QFS 4.6 and 5.0 - Shared QFS file system: Supported Data Services: Oracle RAC. External Volume Management: Solaris Volume Manager for Sun Cluster.
  QFS 4.6 and 5.0 - Shared QFS clients outside the cluster (SC-COTC): Supported Data Services: None; only a shared file system is supported. External Volume Management: No external volume manager is supported.
  QFS 4.6 and 5.0 - HA-SAM Failover: Supported Data Services: None; only a shared file system is supported. External Volume Management: No external volume manager is supported.
  QFS 4.5 - Standalone file system: Supported Data Services: All failover data services. External Volume Management: Solaris Volume Manager, VxVM.
  QFS 4.5 - Shared QFS file system: Supported Data Services: Oracle RAC. External Volume Management: Solaris Volume Manager for Sun Cluster.
Veritas File System components that are delivered as part of Veritas Storage Foundation 5.0. This support requires VxFS 5.0 MP3.

Top

Solaris 10 x86

File System - Additional Information

Solaris UFS
Solaris ZFS - Not supported for the /globaldevices file system
Sun StorEdge QFS:
  QFS 5.0 - Standalone file system: Supported Data Services: All failover data services. External Volume Management: Solaris Volume Manager only.
  QFS 4.6 - Standalone file system: Supported Data Services: All failover data services. External Volume Management: Solaris Volume Manager, VxVM.
  QFS 4.6 and 5.0 - Shared QFS file system: Supported Data Services: Oracle RAC. External Volume Management: Solaris Volume Manager for Sun Cluster.
  QFS 4.6 and 5.0 - Shared QFS clients outside the cluster (SC-COTC): Supported Data Services: None; only a shared file system is supported. External Volume Management: No external volume manager is supported.
  QFS 4.6 and 5.0 - HA-SAM Failover: Supported Data Services: None; only a shared file system is supported. External Volume Management: No external volume manager is supported.
  QFS 4.5 - Standalone file system: Supported Data Services: All failover data services. External Volume Management: Solaris Volume Manager, VxVM.
  QFS 4.5 - Shared QFS file system: Supported Data Services: Oracle RAC. External Volume Management: Solaris Volume Manager for Sun Cluster.
Veritas File System components that are delivered as part of Veritas Storage Foundation 5.0. This support requires VxFS 5.0 MP3.


Top

Solaris 9 SPARC

File System - Additional Information

Solaris UFS
Sun StorEdge QFS:
  QFS 5.0 - Standalone file system: Supported Data Services: All failover data services. External Volume Management: Solaris Volume Manager only.
  QFS 4.6 - Standalone file system: Supported Data Services: All failover data services. External Volume Management: Solaris Volume Manager, VxVM.
  QFS 4.6 and 5.0 - Shared QFS file system: Supported Data Services: Oracle RAC. External Volume Management: Solaris Volume Manager for Sun Cluster.
  QFS 4.6 and 5.0 - Shared QFS clients outside the cluster (SC-COTC): Supported Data Services: None; only a shared file system is supported. External Volume Management: No external volume manager is supported.
  QFS 4.6 and 5.0 - HA-SAM Failover: Supported Data Services: None; only a shared file system is supported. External Volume Management: No external volume manager is supported.
  QFS 4.5 - Standalone file system: Supported Data Services: All failover data services. External Volume Management: Solaris Volume Manager, VxVM.
  QFS 4.5 - Shared QFS file system: Supported Data Services: Oracle RAC. External Volume Management: Solaris Volume Manager for Sun Cluster.
Veritas File System components that are delivered as part of Veritas Storage Foundation 5.0. This support requires VxFS 5.0 MP3.

Top

Memory Requirements

Sun Cluster 3.2 1/09 software has the following memory requirements for every cluster node:

Minimum of 1 Gbyte of physical RAM (2 Gbytes typical)
Minimum of 6 Gbytes of available hard drive space

Actual physical memory and hard drive requirements are determined by the applications that are installed. Consult the application's documentation or contact the application vendor to calculate additional memory and hard drive requirements.

Top

Solaris Operating System (OS)

Sun Cluster 3.2 1/09 software and Quorum Server software require the following minimum versions of the Solaris OS:

Solaris 9 (SPARC only) - Solaris 9 9/05, Solaris 9 9/05 HW
Solaris 10 - Solaris 10 5/08, Solaris 10 10/08, Solaris 10 5/09*, Solaris 10 10/09*

*The Solaris 10 5/09 and Solaris 10 10/09 OS might require the latest Sun Cluster patches.


Note: Sun Cluster software does not support multiple versions of Solaris software in the same running cluster.

Top

Sun Management Center

This Sun Cluster release supports Sun Management Center software versions 3.6.1 and 4.0.

Top

Sun StorageTek™ Availability Suite

This Sun Cluster release supports Sun StorageTek Availability Suite 4.0 software on the Solaris 10 OS only.

Top

Sun StorEdge™ Availability Suite

This Sun Cluster release supports Sun StorEdge Availability Suite 3.2.1 software on the Solaris 9 OS only.

Top

Volume Managers

This Sun Cluster release supports the following volume managers.

Solaris 10 SPARC

Volume Manager - Cluster Feature

Solaris Volume Manager - Solaris Volume Manager for Sun Cluster
Veritas Volume Manager components that are delivered as part of Veritas Storage Foundation 5.0 (this support requires VxVM 5.0 MP3 RP1) - Veritas Volume Manager 5.0 cluster feature (with RAC only)

Solaris 10 x86

Volume Manager - Cluster Feature

Solaris Volume Manager - Solaris Volume Manager for Sun Cluster
Veritas Volume Manager components that are delivered as part of Veritas Storage Foundation 5.0 (this support requires VxVM 5.0 MP3 RP1) - Not applicable; Sun Cluster 3.2 1/09 software does not support the VxVM cluster feature on the x86 platform.

Solaris 9 SPARC

Volume Manager - Cluster Feature

Solaris Volume Manager - Solaris Volume Manager for Sun Cluster
Veritas Volume Manager components that are delivered as part of Veritas Storage Foundation 5.0 (this support requires VxVM 5.0 MP3 RP1) - Veritas Volume Manager 5.0 cluster feature (with RAC only)

Top

Sun Cluster Security Hardening

Sun Cluster Security Hardening uses the Solaris Operating System hardening techniques recommended by the Sun BluePrints™ program to achieve basic security hardening for clusters. The Solaris Security Toolkit automates the implementation of Sun Cluster Security Hardening.

The Sun Cluster Security Hardening documentation is available at http://www.sun.com/blueprints/0203/817-1079.pdf. You can also access the article from http://www.sun.com/software/security/blueprints. From this URL, scroll down to the Architecture heading to locate the article "Securing the Sun Cluster 3.x Software." The documentation describes how to secure Sun Cluster 3.x deployments in a Solaris environment. The description includes the use of the Solaris Security Toolkit and other best-practice security techniques recommended by Sun security experts. The following data services are supported by Sun Cluster Security Hardening:

Sun Cluster HA for Apache
Sun Cluster HA for Apache Tomcat
Sun Cluster HA for DHCP
Sun Cluster HA for DNS
Sun Cluster HA for MySQL
Sun Cluster HA for N1 Grid Engine
Sun Cluster HA for NFS
Sun Cluster HA for Oracle
Sun Cluster HA for Oracle E-Business Suite
Sun Cluster Support for Oracle Real Application Clusters
Sun Cluster HA for PostgreSQL
Sun Cluster HA for Samba
Sun Cluster HA for Siebel
Sun Cluster HA for Solaris Containers
Sun Cluster HA for SWIFTAlliance Access
Sun Cluster HA for SWIFTAlliance Gateway
Sun Cluster HA for Sun Java System Directory Server
Sun Cluster HA for Sun Java System Message Queue
Sun Cluster HA for Sun Java System Messaging Server
Sun Cluster HA for Sun Java System Web Server
Sun Cluster HA for Sybase ASE
Sun Cluster HA for WebLogic Server
Sun Cluster HA for WebSphere MQ
Sun Cluster HA for WebSphere MQ Integrator

Top

Known Issues and Bugs

The following known issues and bugs affect the operation of the Sun Cluster 3.2 1/09 release. Bugs and issues are grouped into the following categories:

Administration
Data Services
GUI
Installation
Localization
Runtime
Upgrade

Top

Administration


On x86 Systems the CPU Power Management Feature Is Disabled During Cluster Configuration (6870631)

Problem Summary: On all platforms, the cluster configuration process disables power management by renaming the /etc/power.conf file to /etc/power.conf.date_stamp. However, on systems based on x86 processors running the Solaris 10 10/09 OS or the equivalent patch, the power management feature does not interfere with cluster functioning and can remain on.

Workaround: After you configure an x86 based cluster, to reenable the CPU power management feature, perform the following steps:

1. Rename /etc/power.conf.date_stamp to /etc/power.conf.
2. Execute the pmconfig command for the change to take effect.
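
As a sketch, assuming the renamed file is /etc/power.conf.021009 (the actual date stamp in the file name will differ on your system):

# mv /etc/power.conf.021009 /etc/power.conf
# pmconfig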

Top

Logical-Hostname/Shared-Address Resource Creation Fails in Zone Cluster When Adapters Are Not in IPMP (6798276 and 6796386)

Problem Summary: The creation of a logical-hostname or shared-address resource in a zone cluster fails if any public adapter in the cluster is not part of an IPMP group. The action fails with the following message:

Failed to retrieve the ipmp group name for adapter ...

Workaround: Configure in IPMP groups all public adapters that are available on the cluster nodes, including those that are not currently used by Sun Cluster software.

Top

An Update to HAStoragePlus Resources Fails in Sun Cluster 3.2 1/09 on Solaris 9 (6787430)

Problem Summary: On the Solaris 9 OS, an update to HAStoragePlus resources fails with the hastorageplus_validate method being killed with the SIGSEGV signal.

Workaround: Delete the failing resource and recreate the HAStoragePlus resource with the latest required property values.

Top

Validate Method Core Dumps While Creating Logical-Hostname/Shared-Address Resource in Zone Cluster (6784809)

Problem Summary: The creation of a logical-hostname resource or a shared-address resource in a zone cluster fails when there are no other resource groups configured in the cluster. The validate method dumps core and the resource is not created. This problem is not seen if any resource groups already exist in the cluster, either in the global zone or in native brand non-global zones.

Workaround: Before you create a logical-hostname resource or a shared-address resource in a zone cluster, if no resource group already exists in the cluster, create an empty resource group in the global zone by using the clresourcegroup command.
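
For example, a minimal sketch with a hypothetical resource-group name:

# clresourcegroup create placeholder-rg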

Top

Removal of Last Quorum Device From Two-Host Cluster Does Not Work for a Quorum Server or NAS Device (6775391)

Problem Summary: On a two-node cluster with installmode disabled, the last quorum device cannot be removed by using clquorum remove quorum_device_name, which is the usual procedure to remove a quorum device. It is necessary to use the force option, -F, to remove the last quorum device on a two-node cluster with installmode disabled. The usage is clquorum remove -F quorum_device_name. However, the force option works only for scsi2, scsi3, and software_quorum types of quorum devices.


If the last quorum device configured on a two-node cluster with installmode disabled is a quorum server or a NAS quorum device, it cannot be removed by using clquorum remove quorum_device_name or clquorum remove -F quorum_device_name. If the user attempts this action, error messages similar to the following would be shown (in these examples, the last quorum device is a quorum server named qs1, and clq is the short form equivalent of clquorum):

For clquorum remove qs1:

clq:  (C115344) Cluster quorum could be compromised if you remove "qs1".

For clquorum remove -F qs1:

Warning: Force option used. This cluster is at risk of experiencing a serious outage. Configure a new quorum device to prevent loss of cluster quorum and outages.
clq:  (C115344) Cluster quorum could be compromised if you remove "qs1".

Workaround: If the last quorum device configured on a two-node cluster, with installmode disabled, is a quorum server or a NAS quorum device, do the following to remove this last quorum device:

1. Become superuser on a node running in cluster mode.
2. Enable installmode:

# cluster set -p installmode=enabled

3. Remove the last quorum device:

# clquorum remove quorum_device_name

Top

Scalable Services Fail to Fail Over When There Is a Network Outage (6774504)

Problem Summary: The failover of scalable resources does not happen if there is a network outage on the nodes where the resources are currently online. Due to this issue, the resources do not fail over to a healthy node and the resource's status still shows as online on the failed node.

Workaround: There is no workaround for this issue. Contact your Sun service representative to determine whether a patch is available.

Top

clresourcegroup create and add-node Subcommands Cannot Prevent Mixing Global-Cluster Nodes With Zone-Cluster Nodes (6762845)

Problem Summary: When zone clusters are configured, the clresourcegroup create and clresourcegroup add-node subcommands do not prevent the node list from containing both global-cluster nodes and zone-cluster nodes. You will see the following error message when you try to bring a resource group with mixed nodes online:

# clresourcegroup online rg1
clrg:  (C135343) No primary node could be found for resource group rg1; it remains offline

Workaround: If a resource group's node list contains both global-cluster nodes and zone-cluster nodes, change the node list so that the list contains only global-cluster nodes or only zone-cluster nodes. Use the clresourcegroup add-node or clresourcegroup remove-node commands to add or remove nodes from the node list.

Top

Resource Groups Might Come Online on Less-Preferred Nodes When a Zone Cluster Is Rebooted (6761158)

Problem Summary: When an entire zone cluster is rebooted at once, for example, by executing the clzonecluster reboot command, resource groups might fail to come online on their most-preferred nodes. For example, you might find that all resource groups come online on the same node, even though their node lists indicate a preference for different nodes.

Workaround: There are two simple workarounds. However, neither of them exactly matches the behavior that should occur when this bug is fixed:

Manually execute the clresourcegroup remaster command on all resource groups, after the zone cluster has booted up:

# clresourcegroup remaster -Z myzonecluster +

This will cause all resource groups in the zone cluster to switch onto their most-preferred master, if they are not already mastered by that node. This command should be executed just once after a zone cluster reboot, when all of the zone-cluster nodes are up and running. However, on an unattended cluster, this workaround will not automatically take effect, for example, after a power outage when power is restored, since the clresourcegroup remaster command must be executed manually.

Set the Failback property to TRUE on all resource groups in the zone cluster:

# clresourcegroup set -Z myzonecluster -p Failback=TRUE +

With this setting, a resource group might initially come online on a less-preferred node, but will automatically switch over to a more-preferred node that joins the cluster membership. However, the automatic switchover will be performed any time a more-preferred node joins the zone-cluster membership, not only when the zone cluster is initially booted.

Top

rpcbind Authentication Errors When JASS Is Deployed with Sun Cluster 3.2 1/09 (6727594)

Problem Summary: When JASS is installed and enabled on a cluster node, the Sun Cluster commands to retrieve IPMP group status, scstat -i or clnode status -m, might fail with the following message:

scrconf: RPC: Rpcbind failure - RPC: Authentication error

Workaround: Add IP addresses that correspond to all cluster nodes' private hostnames to the rpcbind line in the /etc/hosts.allow file. By default, the cluster nodes' private hostnames are of the form clusternodeN-priv. Their corresponding IP addresses can be determined by using the following command:

# getent hosts clusternodeN-priv

For instance, on a four-node cluster, you might update the rpcbind line in /etc/hosts.allow to be:

rpcbind:        172.16.4.1 172.16.4.2 172.16.4.3 172.16.4.4

Top

Whole-Root Zones of ip-type=exclusive Do Not Have Access to SUNW.LogicalHostname Resource Methods (6711918)

Problem Summary: Attempts to create SUNW.LogicalHostname resources for whole-root zones with ip-type=exclusive fail. Error messages similar to the following can be found in the syslog:


Jun  6 23:01:24 lzkuda-4a Cluster.RGM.fed: [ID 838032 daemon.error] lzkuda-4a.test-rg.kuda-3.2: Couldn't run method tag. Error in execve: No such file or directory.

Workaround: For each whole-root zone with ip-type=exclusive that might host a SUNW.LogicalHostname resource, configure a file-system resource that mounts the method directory from the global zone.

# zonecfg -z myzone
zonecfg:myzone> add fs
zonecfg:myzone:fs> set dir=/usr/cluster/lib/rgm
zonecfg:myzone:fs> set special=/usr/cluster/lib/rgm
zonecfg:myzone:fs> set type=lofs
zonecfg:myzone:fs> end

Top

Data Services

The START_TIMEOUT for WebAS Instance Needs to be Changed to 600 (6808410)

Problem Summary: The default START_TIMEOUT value for the SAP WebAS resource is 300 seconds. The Sun Cluster agent for SAP WebAS starts the instance and then waits for the server to be completely up (by probing for the server) before successfully returning from the start method. Depending on your system resources and the load on your server, the startup of the SAP WebAS instance might take longer than 300 seconds.

Workaround: If the WebAS instance needs more than 300 seconds to start up on the cluster nodes, create the SAP WebAS resource with a higher value for the START_TIMEOUT property, for example, 600.
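
If the resource has already been created, a hedged alternative sketch is to raise the timeout on the existing resource; webas-rs is a hypothetical resource name:

# clresource set -p START_TIMEOUT=600 webas-rs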

Top

Scalable WebAS Resource Fails to Come Online if Wrapper Script Is Not Used in Webas_start_up_script (6808403)

Problem Summary: For the SAP Web Application Server (WebAS) scalable data service, as of the SAP 6.4 release it should no longer be necessary to create wrapper scripts when you create the WebAS instances. But even in SAP 6.4 and later versions, the WebAS resource does not come online if no wrapper scripts are created.

Workaround: To make the SAP Web Application Server (WebAS) instances highly available as a scalable service, create wrapper scripts as described in Step 7 of "How to Install and Configure the SAP Web Application Server and the SAP J2EE Engine" from the Sun Cluster HA for SAP Web Application Server Guide. You must use these wrapper scripts as the startup- and shutdown-script extension properties when you create the WebAS instances, regardless of which version of SAP software you run.

Top

BEA WLS Managed Server Went to Admin State After Fault Injection, Client Access Failed (6626817)

Problem Summary: When a non-global zone that runs BEA WebLogic Server managed servers panics and goes down, the data service for WebLogic Server fails over and starts all its resources in another available non-global zone in the cluster. In some situations, after the failover the WebLogic Managed Servers might go into the Admin state instead of the Running state and require operator assistance.

Workaround: Use the WebLogic Server administration interface to manually change the state to the Running state.

Top

GUI


RAC Wizard Under Some Scenarios Tries to Create Configuration in a Zone Cluster Even When Global Cluster Is Selected (6801490)

Problem Summary: The RAC wizard sometimes attempts to create the configuration for a zone cluster even when the user selects global cluster.

Workaround: Perform the following steps:

1. Remove the state file /opt/cluster/lib/ds/history/RacStateFile, if present.
2. Restart the common agent container:

# cacaoadm restart

Top

The RAC GUI Storage Wizard Does Not Create the Resources in a Zone Cluster whenSolaris Volume Manager Is Selected (6799694)

Problem Summary: The web-based RAC storage wizard attempts to create the scalable device group resources and resource groups for the Solaris Volume Manager for Sun Cluster disk groups in the global cluster, when the user selects Solaris Volume Manager from the storage-scheme selection panel.

Workaround: Use the text-based wizard. The problem exists only in the web-based wizard.

Top

clsetup Does Not Display Existing VxVM Device Groups in the Global Zone (6798909)

Problem Summary: The wizards do not list the existing VxVM device groups in the global zone.

Workaround: Use the command-line interface for RAC configuration on VxVM storage.

Top

Configuration of RAC Framework Fails in a Zone Cluster with a Resource type not found Error (6768473)

Problem Summary: The Oracle RAC wizard fails to create the configuration for a zone cluster with a Resource type not found error message.

Workaround: Copy the RTR files that are used by the RAC configuration, which are present in the /usr/cluster/lib/rgm/rtreg and /opt/cluster/lib/rgm/rtreg directories, from the global cluster to the zoneclusterpath/root/usr/cluster/lib/rgm/rtreg and zoneclusterpath/root/opt/cluster/lib/rgm/rtreg directories, respectively. Namely, the following RTR files need to be copied:

From /usr/cluster/lib/rgm/rtreg in the global zone:

SUNW.ScalMountPoint
SUNW.ScalDeviceGroup
SUNW.crs_framework
SUNW.rac_framework
SUNW.rac_udlm
SUNW.LogicalHostname

From /opt/cluster/lib/rgm/rtreg in the global zone:


SUNW.scalable_rac_server
SUNW.scalable_rac_server_proxy
SUNW.scalable_rac_listener
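The copy itself can be done with ordinary cp commands run from the global zone. The following is a sketch only; /zones/zc1/root stands in for the actual zoneclusterpath/root of your zone cluster:

# ZCROOT=/zones/zc1/root
# cd /usr/cluster/lib/rgm/rtreg
# cp SUNW.ScalMountPoint SUNW.ScalDeviceGroup SUNW.crs_framework \
  SUNW.rac_framework SUNW.rac_udlm SUNW.LogicalHostname ${ZCROOT}/usr/cluster/lib/rgm/rtreg/
# cd /opt/cluster/lib/rgm/rtreg
# cp SUNW.scalable_rac_server SUNW.scalable_rac_server_proxy \
  SUNW.scalable_rac_listener ${ZCROOT}/opt/cluster/lib/rgm/rtreg/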

Top

Installation

Autodiscovery Does Not Work for the qlge Driver for PCIe FCoE CNA (6939847)

Problem Summary: During an Oracle Solaris Cluster installation, auto discovery for the qlge driver for the PCIe LP and PCIe ExpressModule FCoE Converged Network Adapters (CNAs) does not work. The following products are affected:

Oracle's Sun Storage 10GbE FCoE PCIe CNA
Oracle's Sun Storage 10GbE FCoE ExpressModule CNA

Workaround: When you run the scinstall utility and you are prompted for the interconnect adapters, select Other and type the name of each qlge interface.

Top

Auto Discovery During Installation for Sun Cluster/Transport Does Not Work for igb Interfaces on Solaris OS x64 Platforms (6821699)

Problem Summary: During a Sun Cluster installation, auto discovery for Sun Cluster/transport does not work for igb interfaces on Solaris OS x64 platforms.

Workaround: When you run scinstall and you are prompted for the interconnect adapters, select Other and type each igb interface name.

Top

sczonecfg Should Set name_service to NONE if Global Zone Does Not Use Name Service (6798938)

Problem Summary: When you create a new zone cluster, the only common system identification parameter that the user is required to specify is the root password. The zone cluster software obtains the values for any other sysid parameters, such as name_service and timezone, from the global zone's /etc/sysidcfg file.

If the /etc/sysidcfg file does not exist on the system or the zone-cluster software cannot open the file to obtain the parameter values, the name_service parameter is left unspecified. Because a value of name_service= is invalid, system initialization of the new zone cluster fails and the boot process does not complete. The user is prompted to supply the missing parameter value.

Workaround: When you create a new zone cluster, manually specify a value for the name_service parameter. Specify either the name service to use or NONE if no name service is used.

Top

Common Agent Container Dependency on RGM Service Was Removed in rgm_starter Service (6779277)

Problem Summary: When you install Sun Cluster 3.2 1/09 software and establish the cluster, the svc:/application/management/common-agent-container-1:default service might be in the maintenance state. If so, Sun Cluster Manager and some of the command-line interfaces will not work.

Workaround: Add the cluster Resource Group Manager (RGM) rgm-starter service as a dependency of the cluster-management service on all cluster nodes right after Sun Cluster 3.2 1/09 software is installed and configured. Perform the following steps on each node of the cluster:


1. Become superuser on a node of the cluster.

2. Add a property group of dependents to the rgm-starter SMF service.

# svccfg -s rgm-starter:default addpg dependents framework

3. Add the value for the dependents property group for the rgm-starter SMF service.

# svccfg -s rgm-starter:default addpropvalue dependents/rgm_cacao fmri:svc:/application/management/common-agent-container-1:default

4. Verify that the value is correctly set for the dependents property group for the rgm-starter SMF service.

# svccfg -s rgm-starter:default listprop | grep dependents

5. If the svc:/application/management/common-agent-container-1:default service is in the maintenance state, clear the common-agent-container SMF service from the maintenance state and enable the SMF service.

# svcadm clear svc:/application/management/common-agent-container-1:default
# svcadm enable svc:/application/management/common-agent-container-1:default

6. Verify that Sun Cluster Manager now works properly by using a web browser to connect to the URL https://nodename:6789.

Top

Localization

[zh_CN]: Result of System Requirements Checking Is Wrong (6495984)

Problem Summary: When you use the installer utility to install Sun Cluster software, the software that checks the system requirements incorrectly reports that the swap space is 0 Mbytes in the Simplified Chinese and Traditional Chinese locales.

Workaround: Ignore this reported information. In these locales, you can run the following command to determine the correct swap space:

# df -h | grep swap

Top

Runtime

HAStoragePlus Resources Fail To Import Using Cachefile After Kernel Patch 141444-09 Is Applied (6895580)

Problem Summary: If Solaris patch 141444-09 (SPARC) or 141445-09 (x86) is applied, HAStoragePlus resources will import ZFS zpools without using the zpool.cachefile, slowing the import. If there are many zpools or zpools with many disks, the import might take hours.

This issue also occurs if the Solaris OS is upgraded to the Solaris 10 10/09 release on a Sun Cluster 3.2 1/09 configuration.

Workaround: Apply the latest version of the Sun Cluster 3.2 CORE patch: patch 126106 for Solaris 10 SPARC, or patch 126107 for Solaris 10 x86.

Top


Upgrade

Rebooting Multiple Upgraded Nodes Into the Cluster During Rolling Upgrade Panics Nodes on Old Software (6788866)

Problem Summary: In a rolling upgrade, old software nodes are taken out of cluster mode and upgraded to the new software, and then booted back into cluster mode. Depending on the quorum configuration and the availability of operational quorum on the running cluster, the user could take one or more nodes out of the cluster at a time for upgrades, and then boot them back into the cluster after upgrade.

While performing a rolling upgrade of a cluster to Sun Cluster 3.2 1/09 software, if more than one upgraded node is booted back into the cluster at the same time, then one or more of the nodes running the older software version in the cluster might panic.

Workaround: After nodes are upgraded to run Sun Cluster 3.2 1/09 software, boot only one node at a time into cluster mode to join the running cluster.

Top

Common Agent Container Dependency on RGM Service Was Removed in rgm_starter Service (6779277)

Problem Summary: When you upgrade to Sun Cluster 3.2 1/09 software by using the live upgrade method, the svc:/application/management/common-agent-container-1:default service might be in the maintenance state. If so, Sun Cluster Manager and some of the command-line interfaces will not work.

Workaround: Add the cluster Resource Group Manager (RGM) rgm-starter service as a dependency of the cluster-management service on all cluster nodes right after Sun Cluster 3.2 1/09 software is upgraded. Perform the following steps on each node of the cluster:

1. Become superuser on a node of the cluster.

2. Add a property group of dependents to the rgm-starter SMF service.

# svccfg -s rgm-starter:default addpg dependents framework

3. Add the value for the dependents property group for the rgm-starter SMF service.

# svccfg -s rgm-starter:default addpropvalue dependents/rgm_cacao fmri:svc:/application/management/common-agent-container-1:default

4. Verify that the value is correctly set for the dependents property group for the rgm-starter SMF service.

# svccfg -s rgm-starter:default listprop | grep dependents

5. If the svc:/application/management/common-agent-container-1:default service is in the maintenance state, clear the common-agent-container SMF service from the maintenance state and enable the SMF service.

# svcadm clear svc:/application/management/common-agent-container-1:default
# svcadm enable svc:/application/management/common-agent-container-1:default

6. Verify that Sun Cluster Manager now works properly by using a web browser to connect to the URL https://nodename:6789.

Top

Sun Cluster Manager Does Not Work After Upgrading From Pre-3.2 2/08 Release to 3.2 1/09 Release (6768115)


Problem Summary: After performing a live upgrade from the Sun Cluster 3.2 release or earlier to Sun Cluster 3.2 1/09 software, activating the new BE, and booting the nodes, the svc:/system/cluster/rgm:default service is in the maintenance state and the svc:/application/management/common-agent-container-1:default service is offline. Due to these states, Sun Cluster Manager cannot display the cluster configuration data.

Workaround: Become superuser and issue the following commands on all nodes:

# svccfg delete -f svc:/system/cluster/rgm:default
# svcadm enable svc:/application/management/common-agent-container-1:default

You can then verify that Sun Cluster Manager works properly by connecting to the URL https://nodename:6789.

Top

Upgrade From 3.2 Plus Patch 126107-15 Failed; pkgrm of SUNWscr Fails (6747530)

Problem Summary: On the Solaris 10 OS, upgrading Sun Cluster software from the 3.2 2/08 release to the 3.2 1/09 release fails when removing the SUNWscr package.

# scinstall -u update

Starting upgrade of Sun Cluster framework software
Saving current Sun Cluster configuration
Do not boot this node into cluster mode until upgrade is complete.

Renamed "/etc/cluster/ccr" to "/etc/cluster/ccr.upgrade".

** Removing Sun Cluster framework packages **
        Removing SUNWsctelemetry..done
        Removing SUNWscderby..done
...
        Removing SUNWscr.....failed
scinstall:  Failed to remove "SUNWscr"
        Removing SUNWscscku..done
        Removing SUNWscsckr..done
        Removing SUNWsczu....done
        Removing SUNWsccomzu..done
        Removing SUNWsczr....done
        Removing SUNWsccomu..done
        Removing SUNWscu.....done

scinstall:  scinstall did NOT complete successfully!

Workaround: Before starting the upgrade, install one of the following pairs of patches as appropriate, which provide a new pkg_preremove script that resolves this problem. The patch version numbers shown are the minimum required:

140017-01 and 126538-02 (S10 SPARC)
140018-01 and 126539-02 (S10 x86)

If you have already started the upgrade and it fails with the above error, you can install the appropriate patches, then restart the upgrade by running scinstall -u update.
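As a sketch only, the patches can be added with the standard patchadd utility on each node before you rerun scinstall -u update; check each patch README for the required order, revision, and prerequisites:

# patchadd 126538-02
# patchadd 140017-01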

Top

Zones With ip-type=exclusive Cannot Host SUNW.LogicalHostname Resources After Upgrade (6702621)

Problem Summary: Resource type SUNW.LogicalHostname is registered at version 2 (use the clresourcetype list command to display the version). After upgrade, logical-hostname resources can be created for zones with ip-type=exclusive, but network access to the logical hostname, for example, telnet or rsh, does not work.

Workaround: Perform the following steps:

1. Delete all resource groups with a node list that contains a zone with ip-type=exclusive that hosts logical-hostname resources.

2. Upgrade the SUNW.LogicalHostname resource type to version 3:


# clresourcetype register SUNW.LogicalHostname:3
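You can confirm the registered version afterward with the clresourcetype list command mentioned above; output similar to the following (shown here only as an illustration) indicates that version 3 is in use:

# clresourcetype list | grep LogicalHostname
SUNW.LogicalHostname:3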

Top

Patches and Required Firmware Levels

This section provides information about patches for Sun Cluster configurations, including the following subsections:

Applying the Sun Cluster 3.2 1/09 Core Patch
Removing the Sun Cluster 3.2 1/09 Core Patch
Patch Management Tools
My Oracle Support Online
Sun Cluster Patch Lists

If you are upgrading to Sun Cluster 3.2 1/09 software, see the Sun Cluster Upgrade Guide for Solaris OS. Applying a Sun Cluster 3.2 1/09 Core patch does not provide the same result as upgrading the software to the Sun Cluster 3.2 1/09 release.

Note: Read the patch README file before applying or removing any patch.

If you are using the rebooting patch (node) method to install the Sun Cluster core patch 126105 (S9/SPARC), 126106 (S10/SPARC), or 126107 (S10/x64), you must have the 125510-02, 125511-02, or 125512-02 core patch, respectively, installed before you can install the 126105, 126106, or 126107 patch. If you do not have the 125510-02, 125511-02, or 125512-02 patch installed and want to install 126105, 126106, or 126107, you must use the rebooting cluster method.

See the following list for examples of patching scenarios:

If you have Sun Cluster 3.2 1/09 software using the Solaris 10 Operating System on SPARC with patch 125511-02 installed and want to install 126106, use the rebooting node or rebooting cluster method.

If you have Sun Cluster 3.2 1/09 software using the Solaris 10 Operating System on SPARC without 125511-02 installed and want to install 126106, choose one of the following methods:

Use the rebooting cluster method to install 126106.
Install 125511-02 by using the rebooting node method and then install 126106 by using the rebooting node method.

Note: You must be a registered My Oracle Support™ user to view and download the required patches for the Sun Cluster product. If you do not have a My Oracle Support account, contact your Oracle service representative or sales engineer, or register online at My Oracle Support.

Top

Applying the Sun Cluster 3.2 1/09 Core Patch

Complete the following procedure to apply the Sun Cluster 3.2 1/09 core patch. Ensure that all nodes of the cluster are maintained at the same patch level.

How to Apply the Sun Cluster 3.2 1/09 Core Patch

1. Install the patch by using the usual rebooting patch procedure for a core patch.

2. Verify that the patch has been installed correctly on all nodes and is functioning properly.

3. Register the new version of resource types SUNW.HAStoragePlus, SUNW.ScalDeviceGroup, and SUNW.ScalMountPoint that are being updated in this patch. Perform resource type upgrade on any existing resources of these types to the new versions.

For information about registering a resource type, see "Registering a Resource Type" in the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.


Caution: If the Sun Cluster 3.2 1/09 core patch is removed, any resources that were upgraded in Step 3 must be downgraded to the earlier resource type versions. The procedure for downgrading requires planned downtime of these services. Therefore, do not perform Step 3 until you are ready to commit the Sun Cluster 3.2 1/09 core patch permanently to your cluster.

Top

Removing the Sun Cluster 3.2 1/09 Core Patch

Complete the following procedure to remove the Sun Cluster 3.2 1/09 core patch.

How to Remove the Sun Cluster 3.2 1/09 Core Patch


1. List the resource types on the cluster.

# clresourcetype list

2. If the list returns SUNW.HAStoragePlus:5, SUNW.ScalDeviceGroup:2, or SUNW.ScalMountPoint:2, you must remove these resource types. For instructions about removing a resource type, see "How to Remove a Resource Type" in the Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

3. Reboot all nodes of the cluster into noncluster, single-user mode. For instructions about rebooting cluster nodes into noncluster, single-user mode, see "How to Boot a Cluster Node in Noncluster Mode" in the Sun Cluster System Administration Guide for Solaris OS.

4. Remove the Sun Cluster 3.2 1/09 core patch from each node on which you installed the patch.

# patchrm patch-id

5. Reboot into cluster mode all of the nodes from which you removed the Sun Cluster 3.2 1/09 core patch. Rebooting all of the nodes from which you removed the Sun Cluster 3.2 1/09 core patch before rebooting any unaffected nodes ensures that the cluster is formed with the correct configuration information on all nodes. If all nodes on the cluster were patched with the core patch, you can reboot the nodes into cluster mode in any order.

6. Reboot any remaining nodes into cluster mode.

For instructions about rebooting nodes into cluster mode, see "How to Reboot a Cluster Node" in the Sun Cluster System Administration Guide for Solaris OS.

Top

Patch Management Tools

Information about patch management options for the Solaris OS is available at the web sites for Sun xVM Ops Center and Sun Connection Update Manager.

The following tools are part of the Solaris OS. Refer to the version of the manual that is published for the Solaris OS release that is installed on your system:

Information for using the Solaris patch management utility, patchadd, is provided in Solaris Administration Guide: Basic Administration at http://docs.sun.com.

Information for using Solaris Live Upgrade to apply patches is provided in the Solaris installation guide for Live Upgrade and upgrade planning at http://docs.sun.com.

If some patches must be applied when the node is in noncluster mode, you can apply them in a rolling fashion, one node at a time, unless a patch's instructions require that you shut down the entire cluster. Follow procedures in "How to Apply a Rebooting Patch (Node)" in the Sun Cluster System Administration Guide for Solaris OS to prepare the node and boot it into noncluster mode. For ease of installation, consider applying all patches at once to a node that you place in noncluster mode.


Top

Patch for Cluster Support for StorageTek 2530 Array

The Sun StorageTek Common Array Manager (CAM) software, Version 6.0.1, provides SCSI3 or PGR support for the Sun StorageTek 2530 array for up to three nodes. The patch is not a required upgrade for the Sun StorEdge 6130, 2540, 6140, and 6540, and StorageTek FLX240, FLX280, and FLX380 platforms. The CAM 6.0.1 patch is available from the Sun Download Center.

Top

My Oracle Support Online

The My Oracle Support™ Online Web site provides 24-hour access to the most up-to-date information regarding patches, software, and firmware for Oracle products. Access the My Oracle Support Online site at My Oracle Support for the most current matrixes of supported software, firmware, and patch revisions.

Sun Cluster 3.2 1/09 third-party patch information is provided through a My Oracle Support (formerly SunSolve) Info Doc. This Info Doc page provides any third-party patch information for specific hardware that you intend to use in a Sun Cluster 3.2 1/09 environment. To locate this Info Doc, log on to My Oracle Support. Type Sun Cluster 3.x Third-Party Patches in the search criteria box.

Before you install Sun Cluster 3.2 1/09 software and apply patches to a cluster component (Solaris OS, Sun Cluster software, volume manager software, data services software, or disk hardware), review each README file that accompanies the patches that you retrieved. All cluster nodes must have the same patch level for proper cluster operation.

For specific patch procedures and tips on administering patches, see Chapter 10, "Patching Sun Cluster Software and Firmware", in the Sun Cluster System Administration Guide for Solaris OS.

Top

Sun Cluster Patch Lists

The Sun Cluster Patch Klatch provides a complete and up-to-date list of patches that you need to apply to the Solaris OS, to Sun Cluster software, and to other software in your cluster configuration, based on the versions of the particular software that you are using.

Top

Sun Cluster 3.2 1/09 Documentation

The Sun Cluster 3.2 1/09 user documentation set consists of the following collections:

Sun Cluster 3.2 1/09 Software Manuals for Solaris OS
Sun Cluster 3.2 1/09 Reference Manuals for Solaris OS
Sun Cluster 3.2 1/09 Data Service Manuals for Solaris OS (SPARC Platform Edition)
Sun Cluster 3.2 1/09 Data Service Manuals for Solaris OS (x86 Platform Edition)
Sun Cluster 3.1-3.2 Hardware Collection for Solaris OS (SPARC Platform Edition)
Sun Cluster 3.1-3.2 Hardware Collection for Solaris OS (x86 Platform Edition)

Note: Procedures for the version of Sun Cluster HA for Sun Java™ System Directory Server that uses Sun Java System Directory Server 5.0 and 5.1 are located in Sun Cluster 3.1 Data Service for Sun ONE Directory Server. Beginning with Version 5.2 of Sun ONE Directory Server, see the Sun ONE Directory Server or Sun Java System Directory Server installation documentation.

Some manuals were not updated since the previous Sun Cluster 3.2 release. Their content, however, also applies to the Sun Cluster 3.2 1/09 release.

The Sun Cluster 3.2 1/09 user documentation is available in PDF and HTML format at the following web site:


http://docs.sun.com/app/docs/prod/sun.cluster32

Note: Beginning with the Sun Cluster 3.2 release, documentation for individual data services is not translated. Documentation for individual data services is available only in English.

Top

Sun Cluster 3.2 1/09 Software Manuals for Solaris OS

Part Number Book Title

820-4683 Sun Cluster 3.2 1/09 Documentation Center

820-4676 Sun Cluster Concepts Guide for Solaris OS

820-4680 Sun Cluster Data Services Developer's Guide for Solaris OS

820-4682 Sun Cluster Data Services Planning and Administration Guide for Solaris OS

820-4681 Sun Cluster Error Messages Guide for Solaris OS

820-4675 Sun Cluster Overview for Solaris OS

819-6811 Sun Cluster Quick Reference

820-4989 Sun Cluster Quick Start Guide for Solaris OS

820-4677 Sun Cluster Software Installation Guide for Solaris OS

820-4679 Sun Cluster System Administration Guide for Solaris OS

820-4678 Sun Cluster Upgrade Guide for Solaris OS

Top

Sun Cluster 3.2 1/09 Reference Manuals for Solaris OS

Part Number Book Title

820-4685 Sun Cluster Reference Manual for Solaris OS

820-4684 Sun Cluster Data Services Reference Manual for Solaris OS

820-3381 Sun Cluster Quorum Server Reference Manual for Solaris OS

Top

Sun Cluster 3.2 1/09 Data Service Manuals for Solaris OS (SPARC Platform Edition)

Part Number Book Title

820-5026 Sun Cluster Data Service for Agfa IMPAX Guide for Solaris OS

820-2092 Sun Cluster Data Service for Apache Guide for Solaris OS

819-3057 Sun Cluster Data Service for Apache Tomcat Guide for Solaris OS

819-3058 Sun Cluster Data Service for DHCP Guide for Solaris OS

819-2977 Sun Cluster Data Service for DNS Guide for Solaris OS

820-5024 Sun Cluster Data Service for Informix Guide for Solaris OS


819-5415 Sun Cluster Data Service for Kerberos Guide for Solaris OS

820-5034 Sun Cluster Data Service for MaxDB Guide for Solaris OS

820-5027 Sun Cluster Data Service for MySQL Guide for Solaris OS

819-3060 Sun Cluster Data Service for N1 Grid Service Provisioning System for Solaris OS

820-2565 Sun Cluster Data Service for NFS Guide for Solaris OS

820-3041 Sun Cluster Data Service for Oracle Guide for Solaris OS

820-2572 Sun Cluster Data Service for Oracle Application Server Guide for Solaris OS

820-2573 Sun Cluster Data Service for Oracle E-Business Suite Guide for Solaris OS

820-5043 Sun Cluster Data Service for Oracle RAC Guide for Solaris OS

820-5074 Sun Cluster Data Service for PostgreSQL Guide for Solaris OS

820-3040 Sun Cluster Data Service for Samba Guide for Solaris OS

820-5033 Sun Cluster Data Service for SAP Guide for Solaris OS

820-5035 Sun Cluster Data Service for SAP liveCache Guide for Solaris OS

820-2568 Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS

820-5036 Sun Cluster Data Service for Siebel Guide for Solaris OS

820-5025 Sun Cluster Data Service for Solaris Containers Guide

820-3042 Sun Cluster Data Service for Sun Grid Engine Guide for Solaris OS

820-5032 Sun Cluster Data Service for Sun Java System Application Server EE (HADB) Guide for Solaris OS

820-5029 Sun Cluster Data Service for Sun Java System Application Server Guide for Solaris OS

820-5031 Sun Cluster Data Service for Sun Java System Message Queue Guide for Solaris OS

820-5030 Sun Cluster Data Service for Sun Java System Web Server Guide for Solaris OS

820-5038 Sun Cluster Data Service for SWIFTAlliance Access Guide for Solaris OS

820-5039 Sun Cluster Data Service for SWIFTAlliance Gateway Guide for Solaris OS

820-2570 Sun Cluster Data Service for Sybase ASE Guide for Solaris OS

820-5037 Sun Cluster Data Service for WebLogic Server Guide for Solaris OS

819-3068 Sun Cluster Data Service for WebSphere Message Broker Guide for Solaris OS

819-3067 Sun Cluster Data Service for WebSphere MQ Guide for Solaris OS

Top

Sun Cluster 3.2 1/09 Data Service Manuals for Solaris OS (x86 Platform Edition)

Part Number Book Title

820-7092 Sun Cluster Data Service for Apache Guide for Solaris OS

819-3057 Sun Cluster Data Service for Apache Tomcat Guide for Solaris OS

819-3058 Sun Cluster Data Service for DHCP Guide for Solaris OS

819-2977 Sun Cluster Data Service for DNS Guide for Solaris OS

820-5024 Sun Cluster Data Service for Informix Guide for Solaris OS


819-5415 Sun Cluster Data Service for Kerberos Guide for Solaris OS

820-5034 Sun Cluster Data Service for MaxDB Guide for Solaris OS

820-5027 Sun Cluster Data Service for MySQL Guide for Solaris OS

819-3060 Sun Cluster Data Service for N1 Grid Service Provisioning System for Solaris OS

820-2565 Sun Cluster Data Service for NFS Guide for Solaris OS

820-3041 Sun Cluster Data Service for Oracle Guide for Solaris OS

820-2572 Sun Cluster Data Service for Oracle Application Server Guide for Solaris OS

820-5043 Sun Cluster Data Service for Oracle RAC Guide for Solaris OS

820-5074 Sun Cluster Data Service for PostgreSQL Guide for Solaris OS

820-3040 Sun Cluster Data Service for Samba Guide for Solaris OS

820-5033 Sun Cluster Data Service for SAP Guide for Solaris OS

820-5035 Sun Cluster Data Service for SAP liveCache Guide for Solaris OS

820-2568 Sun Cluster Data Service for SAP Web Application Server Guide for Solaris OS

820-5025 Sun Cluster Data Service for Solaris Containers Guide

820-3042 Sun Cluster Data Service for Sun Grid Engine Guide for Solaris OS

820-5032 Sun Cluster Data Service for Sun Java System Application Server EE (HADB) Guide for Solaris OS

820-5029 Sun Cluster Data Service for Sun Java System Application Server Guide for Solaris OS

820-5031 Sun Cluster Data Service for Sun Java System Message Queue Guide for Solaris OS

820-5030 Sun Cluster Data Service for Sun Java System Web Server Guide for Solaris OS

820-2570 Sun Cluster Data Service for Sybase ASE Guide for Solaris OS

820-5037 Sun Cluster Data Service for WebLogic Server Guide for Solaris OS

819-3067 Sun Cluster Data Service for WebSphere MQ Guide for Solaris OS

819-3068 Sun Cluster Data Service for WebSphere Message Broker Guide for Solaris OS

Top

Sun Cluster 3.1-3.2 Hardware Collection for Solaris OS (SPARC Platform Edition)

Part Number Book Title

819-2993 Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS

819-2994 Sun Cluster 3.1 - 3.2 With Fibre Channel JBOD Storage Device Manual

819-3024 Sun Cluster 3.1 - 3.2 With Network-Attached Storage Devices Manual for Solaris OS

819-2995 Sun Cluster 3.1 - 3.2 With SCSI JBOD Storage Device Manual for Solaris OS

820-1692 Sun Cluster 3.1 - 3.2 With StorageTek 2540 RAID Arrays Manual for Solaris OS

819-7306 Sun Cluster 3.1 - 3.2 With StorageTek RAID Arrays Manual for Solaris OS

819-3015 Sun Cluster 3.1 - 3.2 With Sun StorEdge 3310 or 3320 SCSI RAID Array Manual for Solaris OS

819-3016 Sun Cluster 3.1 - 3.2 With Sun StorEdge 3510 or 3511 FC RAID Array Manual for Solaris OS

819-3017 Sun Cluster 3.1 - 3.2 With Sun StorEdge 3900 Series or Sun StorEdge 6900 Series System Manual


819-3018 Sun Cluster 3.1 - 3.2 With Sun StorEdge 6120 Array Manual for Solaris OS

819-3019 Sun Cluster 3.1 - 3.2 With Sun StorEdge 6130 Array Manual

819-3020 Sun Cluster 3.1 - 3.2 With Sun StorEdge 6320 System Manual for Solaris OS

819-3021 Sun Cluster 3.1 - 3.2 With Sun StorEdge 9900 Series Storage Device Manual for Solaris OS

819-2996 Sun Cluster 3.1 - 3.2 With StorEdge A1000 Array, Netra st A1000 Array, or StorEdge A3500 System Manual

819-3022 Sun Cluster 3.1 - 3.2 With Sun StorEdge A3500FC System Manual for Solaris OS

819-3023 Sun Cluster 3.1 - 3.2 With Sun StorEdge T3 or T3+ Array Manual for Solaris OS

Top

Sun Cluster 3.1-3.2 Hardware Collection for Solaris OS (x86 Platform Edition)

Part Number Book Title

819-2993 Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS

819-3024 Sun Cluster 3.1 - 3.2 With Network-Attached Storage Devices Manual for Solaris OS

819-2995 Sun Cluster 3.1 - 3.2 With SCSI JBOD Storage Device Manual for Solaris OS

820-1692 Sun Cluster 3.1 - 3.2 With StorageTek 2540 RAID Arrays Manual for Solaris OS

819-7306 Sun Cluster 3.1 - 3.2 With StorageTek RAID Arrays Manual for Solaris OS

817-0180 Sun Cluster 3.1 - 3.2 With Sun StorEdge 3310 or 3320 SCSI RAID Array Manual for Solaris OS

819-3016 Sun Cluster 3.1 - 3.2 With Sun StorEdge 3510 or 3511 FC RAID Array Manual for Solaris OS

819-3018 Sun Cluster 3.1 - 3.2 With Sun StorEdge 6120 Array Manual for Solaris OS

819-3019 Sun Cluster 3.1 - 3.2 With Sun StorEdge 6130 Array Manual

819-3020 Sun Cluster 3.1 - 3.2 With Sun StorEdge 6320 System Manual for Solaris OS

819-3021 Sun Cluster 3.1 - 3.2 With Sun StorEdge 9900 Series Storage Device Manual for Solaris OS

Top

Documentation Issues

This section discusses errors or omissions for documentation, online help, or man pages in the Sun Cluster 3.2 1/09 release.

Software Installation Guide
System Administration Guide
Data Services Developer's Guide
Data Services Planning and Administration Guide
Upgrade Guide
Network-Attached Storage Devices Manual
Man Pages

Top

Software Installation Guide

This section discusses errors and omissions in the Sun Cluster Software Installation Guide for Solaris OS.

Change to Restriction of ZFS for Root (/) File Systems


In "Guidelines for the Root ( ) File System" in Chapter 1, a Note states that no file-system type other than UFS is valid for the/

root file system. This restriction is no longer valid. However, the partition for the global-devices namespace/globaldevices

still requires the UFS file system.

Top

Restriction for Encapsulation of a ZFS Root Disk

The procedure "SPARC: How to Encapsulate the Root Disk" in Chapter 5, "Installing and Configuring Veritas Volume Manager", isnot valid when the root disk uses ZFS rather than UFS. For a ZFS root disk, you must instead create the root disk group on localnonroot disks, or choose to not create a root disk group.

Top

New Choice for the Location of the Global-Devices Namespace

The following information applies to all procedures in the chapter "Establishing the Cluster" that you would use to establish a new cluster or a new cluster node.

When you establish a new cluster or new cluster node on the Solaris 10 OS, you can now choose between using a dedicated partition or using a lofi-based file on which to create the global-devices namespace.

Use one of the following methods to choose a lofi device for the global-devices namespace:

Run the interactive scinstall utility in Custom mode and, when prompted, specify the use of a lofi device.

If a /globaldevices entry does not exist in the /etc/vfstab files on the hosts that you are configuring as cluster nodes, run the interactive scinstall utility in Typical mode and, when prompted, choose to use a lofi device.

Note: If the scinstall utility in Typical mode finds an /etc/vfstab entry for /globaldevices, the utility creates the namespace on that file system, the same as in previous Sun Cluster releases. If you choose not to use lofi and no /etc/vfstab entry for /globaldevices exists, the installation fails.

Add the -G lofi option to the scinstall command in noninteractive mode or the clnode add command.

If you use a lofi device to create the global-devices namespace, no /global/.devices/node@nodeID entry is added to the /etc/vfstab file. The root (/) file system must have 100 MBytes of free space to create a lofi-based global-devices namespace.

See the following for additional information:

For information about specifying a lofi device when you establish a cluster or cluster node from the command line, see the scinstall(1M) and clnode(1CL) man pages.

For more information about lofi devices, see the lofi(7D) man page.

For procedures to migrate the global-devices namespace from a dedicated partition to a lofi device or the reverse, see Migrating the Global-Devices Namespace.

Top

Replacement Command for sccheck

The sccheck command is replaced by the cluster check command. This new check subcommand performs the same role as sccheck, with the exception of checking the existence of mount points. In the procedure "How to Create Cluster File Systems", continue to use the sccheck command.

For details about the check subcommand, see the cluster(1CL) man page.

Top

Setting the IPsec p2_idletime_secs Parameter

After you complete the procedures in "How to Configure IP Security Architecture (IPsec) on the Cluster Private Interconnect", perform the following additional step. This setting provides the time for security associations to be regenerated when a cluster node reboots, and limits how quickly a rebooted node can rejoin the cluster. A value of 30 seconds should be adequate.


4. On each node, edit the /etc/inet/ike/config file to set the p2_idletime_secs parameter.

Add this entry to the policy rules that are configured for cluster transports.

phys-schost# vi /etc/inet/ike/config
{label "clust-priv-interconnect1-clust-priv-interconnect2"
p2_idletime_secs 30}

Top

Non-Shared Storage with LDoms Guest-Domain Cluster Nodes

The following is an addition to "SPARC: Guidelines for Sun Logical Domains in a Cluster".

Non-shared storage - On the Solaris 10 OS, for non-shared storage, such as for LDoms guest-domain OS images, you can use any type of virtual device. You can back such virtual devices by any implementation in the I/O domain, such as files or volumes. However, do not copy files or clone volumes in the I/O domain for the purpose of mapping them into different guest domains of the same cluster. Such copying or cloning would lead to problems because the resulting virtual devices would have the same device identity in different guest domains. Always create a new file or device in the I/O domain, which would be assigned a unique device identity, then map the new file or device into a different guest domain.
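For illustration only, the following Logical Domains Manager commands create a fresh file-backed virtual disk for one guest domain instead of copying an existing image; the file path and the volume, service, and domain names are hypothetical:

# mkfile 20g /ldoms/images/node3-os.img
# ldm add-vdsdev /ldoms/images/node3-os.img node3-os@primary-vds0
# ldm add-vdisk node3-osdisk node3-os@primary-vds0 guest-node3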

Top

System Administration Guide

This section discusses errors and omissions in the Sun Cluster System Administration Guide for Solaris OS.

Restoring an Encapsulated Root File System for VERITAS Volume Manager

The instructions in Chapter 11 for restoring an encapsulated root file system for VERITAS Volume Manager are incorrect. Replace Step 14 with the following step:

14. Run the clvxvm encapsulate command to encapsulate the disk and reboot.

Top

EMC SRDF Data Recovery for Failed Campus Cluster Primary Room

A patch to Sun Cluster 3.2 1/09 software changes Sun Cluster behavior so that if the primary campus cluster room fails, Sun Cluster automatically fails over to the secondary room. The patch makes the secondary room's storage device readable and writable, and enables the failover of the corresponding device groups and resource groups. When the primary room returns online, you can manually run the procedure that recovers the data from the EMC Symmetrix Remote Data Facility (SRDF) device group by resynchronizing the data from the original secondary room to the original primary room.

The following are the minimum patch versions that contain this new functionality:

Solaris 10 SPARC - 126106-30
Solaris 10 x86 - 126107-31
Solaris 9 - 126105-29

In the procedure below, dg1 is the SRDF device group name. At the time of the failure, the primary room in this procedure is phys-campus-1 and the secondary room is phys-campus-2.

How to Recover EMC SRDF Data After a Primary Room's Complete Failure

1. Log in to the campus cluster's primary room and become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

2. From the primary room, use the symrdf command to query the replication status of the RDF devices and view


information about those devices.

phys-campus-1# symrdf -g dg1 query

A device group that is in the split state is not synchronized.

3. If the RDF pair state is split and the device group type is RDF1, then force a failover of the SRDF device group.

phys-campus-1# symrdf -g dg1 -force failover

4. View the status of the RDF devices.

phys-campus-1# symrdf -g dg1 query

5. After the failover, you can swap the data on the RDF devices that failed over.

phys-campus-1# symrdf -g dg1 swap

6. Verify the status and other information about the RDF devices.

phys-campus-1# symrdf -g dg1 query

7. Establish the SRDF device group in the primary room.

phys-campus-1# symrdf -g dg1 establish

8. Confirm that the device group is in a synchronized state and that the device group type is RDF2.

phys-campus-1# symrdf -g dg1 query

Top

Replacement Command for sccheck

The sccheck command is replaced by the cluster check command. This new check subcommand performs the same role as sccheck. For any procedures that use sccheck, run cluster check instead. The exception is that, to check the existence of mount points when you create a cluster file system, continue to use the sccheck command.

In addition, a new list-checks subcommand is added. Use cluster list-checks to display a list of all available cluster checks.

For details about the check and list-checks subcommands, see the cluster(1CL) man page.

Top

Migrating the Global-Devices Namespace

The following procedures describe how to move an existing global-devices namespace from a dedicated partition to a lofi device or the opposite:

How to Migrate the Global-Devices Namespace From a Dedicated Partition to a lofi Device

How to Migrate the Global-Devices Namespace From a lofi Device to a Dedicated Partition

How to Migrate the Global-Devices Namespace From a Dedicated Partition to a lofi Device

1. Become superuser on the global-cluster voting node whose namespace location you want to change.

2. Ensure that no file named /.globaldevices already exists on the node. If the file does exist, delete it.

3. Create the lofi device.


# mkfile 100m /.globaldevices
# lofiadm -a /.globaldevices
# LOFI_DEV=`lofiadm /.globaldevices`
# newfs `echo ${LOFI_DEV} | sed -e 's/lofi/rlofi/g'` < /dev/null
# lofiadm -d /.globaldevices

4. In the /etc/vfstab file, comment out the global-devices namespace entry. This entry has a mount path that begins with /global/.devices/node@nodeID.

5. Unmount the global-devices partition /global/.devices/node@nodeID.

6. Disable and re-enable the globaldevices and scmountdev SMF services.

# svcadm disable globaldevices
# svcadm disable scmountdev
# svcadm enable scmountdev
# svcadm enable globaldevices

A lofi device is now created on /.globaldevices and mounted as the global-devices file system.

7. Repeat these steps on any other nodes whose global-devices namespace you want to migrate from a partition to a lofi device.

8. From one node, populate the global-devices namespace.

# /usr/cluster/bin/cldevice populate

On each node, verify that the command has completed processing before you perform any further actions on the cluster.

# ps -ef | grep scgdevs

The global-devices namespace now resides on a lofi device.

Top

How to Migrate the Global-Devices Namespace From a lofi Device to a Dedicated Partition

1. Become superuser on the global-cluster voting node whose namespace location you want to change.

2. On a local disk of the node, create a new partition that meets the following requirements:

Is at least 512 MB in size
Uses the UFS file system

3. Add an entry to the /etc/vfstab file for the new partition to be mounted as the global-devices file system.

a. Determine the current node's node ID.

# /usr/sbin/clinfo -n
nodeID

b. Create the new entry in the /etc/vfstab file, using the following format:

blockdevice rawdevice /global/.devices/node@nodeID ufs 2 no global

For example, if the partition that you choose to use is /dev/did/rdsk/d5s3, the new entry to add to the /etc/vfstab file would then be as follows:

/dev/did/dsk/d5s3 /dev/did/rdsk/d5s3 /global/.devices/node@3 ufs 2 no global

4. Unmount the global-devices partition /global/.devices/node@nodeID.

5. Remove the lofi device that is associated with the /.globaldevices file.

# lofiadm -d /.globaldevices

6. Delete the /.globaldevices file.


# rm /.globaldevices

7. Disable and re-enable the globaldevices and scmountdev SMF services.

# svcadm disable globaldevices
# svcadm disable scmountdev
# svcadm enable scmountdev
# svcadm enable globaldevices

The partition is now mounted as the global-devices namespace file system.

8. Repeat these steps on any other nodes whose global-devices namespace you want to migrate from a lofi device to a partition.

9. From one node in the cluster, run the cldevice populate command to populate the global-devices namespace.

# /usr/cluster/bin/cldevice populate

Ensure that the process completes on all nodes of the cluster before you perform any further action on any of the nodes.

# ps -ef | grep scgdevs

The global-devices namespace now resides on the dedicated partition.

Top

Data Services Developer's Guide

This section discusses errors and omissions in the Sun Cluster Data Services Developer's Guide for Solaris OS.

Method Timeout Behavior Is Changed

A description of the change in the behavior of method timeouts as of the Sun Cluster 3.2 release is missing. If an RGM method callback times out, the process is now killed by using the SIGABRT signal instead of the SIGTERM signal. Terminating the process by using the SIGABRT signal causes all members of the process group to generate a core file.

Note: Avoid writing a data service method that creates a new process group. If your data service method does need to create a new process group, also write a signal handler for the SIGTERM and SIGABRT signals. Write the signal handlers to forward the SIGTERM or SIGABRT signal to the child process group before the signal handler terminates the parent process. This increases the likelihood that all processes that are spawned by the method are properly terminated.
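The following ksh fragment is a sketch of that pattern, not part of any shipped agent: it assumes a hypothetical daemon /opt/myapp/bin/server that puts itself in its own process group, and forwards SIGTERM or SIGABRT to that child process group before the method script exits:

#!/usr/bin/ksh
# Sketch only: forward SIGTERM/SIGABRT to the child's process group.
/opt/myapp/bin/server &     # hypothetical daemon, assumed to become a process-group leader
child=$!

forward() {
    # pkill -g sends the signal to every process in the child's process group.
    pkill -$1 -g ${child}
    exit 1
}
trap 'forward TERM' TERM
trap 'forward ABRT' ABRT

wait ${child}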

Top

Data Services Planning and Administration Guide

Configuration Wizard Support for Zone Clusters Limited to Oracle RAC

The configuration of a data service in a zone cluster by using the Sun Cluster Manager or clsetup wizards is currently only supported for Oracle RAC. To configure other data services that are supported in a zone cluster, use the Sun Cluster maintenance commands.

Top

Upgrade Guide


This section discusses errors and omissions in the Sun Cluster Upgrade Guide for Solaris OS.

Incorrect Command to Change the Private-Network IP Address Range

In Step 9 of "How to Finish Upgrade to Sun Cluster 3.2 1/09 Software", an incorrect command is provided to change the IPaddress range of the private network.

Incorrect:

phys-schost# cluster set net-props num_zonecluster=N

Correct:

phys-schost# cluster set-netprops -p num_zoneclusters=N

Top

Network-Attached Storage Devices Manual

The section "Requirements When Configuring Sun NAS Devices as Quorum Devices" omits the following requirement:

The Sun NAS device must be located on the same network as the cluster nodes. If a Sun NAS quorum device is not located on the same network as the cluster nodes, the quorum device is at risk of not responding at boot time within the timeout period, causing the cluster bootup to fail due to lack of quorum.

Top

Man Pages

This section discusses errors, omissions, and additions in the Sun Cluster man pages.

clnode(1CL)

The clnode(1CL) man page states that the clnode evacuate subcommand works only in a global cluster. However, functionality was added to the clnode evacuate subcommand so that it does work within a zone cluster.
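For example (the node name zc-node-1 is hypothetical), the subcommand can now be run from inside a zone cluster to switch all resource groups off the specified node:

# clnode evacuate zc-node-1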

Top

clquorum(1CL)

The status subcommand has been enhanced to actively check the state of quorum devices. The updated description of the status subcommand should read as follows:

Checks and displays the current status and vote counts of quorum devices.

You can use this subcommand in the global zone or in a non-global zone to immediately check the status of quorum devices that are connected to the specified node. For quorum devices that are not connected to the node, this subcommand displays the status that was true during the previous cluster reconfiguration. For ease of administration, use this form of the command in the global zone.

Top

clquorumserver(1CL)

disable and enable

Disregard the information previously reported here about disable and enable subcommands to the clquorumserver command. That information was reported in error; these subcommands were not added to the clquorumserver command.


stop

The new -d option to the stop subcommand is not documented in the clquorumserver(1CL) man page. The following is the updated syntax for the stop subcommand:

/usr/cluster/bin/clquorumserver stop [-d] {+ | quorumserver [...]}

The -d option disables the automatic restarting of the quorum server after a reboot.
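For example, the following stops all configured quorum server instances on the host and keeps them from restarting automatically after the next reboot (the + operand stands for all quorum servers in the syntax above):

# clquorumserver stop -d +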

Top

clresource(1CL)

The inter-cluster dependencies are not documented in the clresource(1CL) man page.

A resource in a zone cluster can have a dependency on a resource in another zone cluster or on a resource on the global cluster. Also, a resource from the global cluster can have a dependency on a resource in any of the zone clusters on that global cluster. The inter-cluster dependencies can be set only from the global cluster.

You can use the following command to specify the inter-cluster dependencies:

# clresource set -p resource_dependencies=target-zc:target-rs source-zc:source-rs

For example, if you need to specify a dependency from resource R1 in zone cluster ZC1 to a resource R2 in zone cluster ZC2, use the following command:

# clresource set -p resource_dependencies=ZC2:R2 ZC1:R1

If you need to specify a dependency of zone cluster ZC1 resource R1 on global-cluster resource R2, use the following command:

# clresource set -p resource_dependencies=global:R2 ZC1:R1

The existing resource dependencies (Strong, Weak, Restart, and Offline-Restart) are supported.
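For illustration, the qualified forms of the same property cover the other dependency flavors; assuming the standard property name resource_dependencies_offline_restart, an offline-restart dependency between the same hypothetical resources would be set as follows:

# clresource set -p resource_dependencies_offline_restart=ZC2:R2 ZC1:R1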

Top

clresourcegroup(1CL)

The zone cluster affinities are not documented in the clresourcegroup(1CL) man page.

The cluster administrator can specify affinities between a resource group in a zone cluster and a resource group in another zone cluster or a resource group on the global cluster. You can use the following command to specify the affinities between resource groups in different zone clusters:

# clresourcegroup set -p RG_affinities={+ | ++ | - | --}target-zc:target-rg source-zc:source-rg

The affinity types can be one of the following:

+ (weak positive)
++ (strong positive)
- (weak negative)
-- (strong negative)

Note: The affinity type +++ (strong positive with failover delegation) is not supported in this release.

For example, if you need to specify a strong positive affinity (++) between resource group RG1 in zone cluster ZC1 and resource group RG2 in zone cluster ZC2, use the following command:

# clresourcegroup set -p RG_affinities=++ZC2:RG2 ZC1:RG1


If you need to specify a strong negative affinity (--) between resource group RG1 in zone cluster ZC1 and resource group RG2 in the global cluster, use the following command:

# clresourcegroup set -p RG_affinities=--global:RG2 ZC1:RG1

Top

clresourcetype(1CL)

The description of the set subcommand currently states, "The only resource type property that you can set is the Installed_nodes property." This property is no longer the only property that is tunable at any time. See the rt_properties(5) man page for the descriptions of resource-type properties, including when each is tunable.

Top

cluster(1CL)

-F Option

The check and list-checks subcommands support the following option, which is not documented in the cluster(1CL) man page:

-F, --force

Forces the execution of the subcommand by ignoring the /var/cluster/logs/cluster_check/cfgchk.lck file, if it exists. Use this option only if you are sure that the check and list-checks subcommands are not already running.

Zone Cluster Compatibility

The cluster(1CL) man page states that the cluster command does not work in a zone cluster. However, functionality was added to the following cluster subcommands to make them work within a zone cluster:

cluster show - Lists the zone cluster, nodes, resource groups, resource types, and resource properties.
cluster status - Displays the status of zone cluster components.
cluster shutdown - Shuts down the zone cluster in an orderly fashion.
cluster list - Displays the name of the zone cluster.
cluster list-cmds - Lists the following commands, which are supported inside a zone cluster:

clnode

clreslogicalhostname

clresource

clresourcegroup

clresourcetype

clressharedaddress

cluster

scdpmd.conf(4)

The following new man page describes how to tune the scdpmd daemon.

Name

scdpmd.conf - Disk-path-monitoring daemon configuration file

Synopsis

/etc/cluster/scdpm/scdpmd.conf

Description

The scdpmd daemon monitors the disk paths and takes appropriate action upon path failures. You can tune this daemon by creating or modifying the /etc/cluster/scdpm/scdpmd.conf configuration file with tunable properties and sending a


SIGHUP signal to the scdpmd daemon to read the configuration file.

# pkill -HUP scdpmd

You can tune the following properties in the scdpmd.conf file:

ping_interval

    Description        Interval, in seconds, between disk-path status checks

    Default        600

    Minimum        60

    Maximum        3600

ping_retry

    Description        Number of retries to query the disk-path status on failure

    Default        3

    Minimum        2

    Maximum        10

ping_timeout

    Description        Timeout, in seconds, to query any disk-path status

    Default        5

    Minimum        1

    Maximum        15

Examples

The following is an example of a valid scdpmd.conf file:

ping_interval = 120
ping_retry = 5
ping_timeout = 10

Attributes

See attributes(5) for descriptions of the following attributes:

ATTRIBUTE TYPE ATTRIBUTE VALUE

Interface Stability Evolving

See Also

Sun Cluster 3.2 1/09 Release Notes

45

cldevice(1CL), clnode(1CL)


Top