Thursday, February 11, 2010

How to cleanup ASM installation (RAC and Non-RAC)

How to cleanup ASM installation (RAC and Non-RAC) [ID 311350.1]  

  Modified 27-JUL-2009     Type HOWTO     Status PUBLISHED  

In this Document
  Goal
  Solution
  References


Applies to:

Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 10.2.0.3
Information in this document applies to any platform.
"Checked for relevance on 05-Feb-2008"

Goal

How to drop the ASM instance installed in a separate Oracle Home, for both RAC and non-RAC installations.

Solution

The outline of the steps involved is:
a) Back up all the ASM client database files stored on the diskgroups.
b) Drop all the diskgroups.
c) Remove the ASM resource from CRS (* RAC specific).
d) Remove the ASM disk signatures (if using ASMLib).
e) Remove the ASM pfile/spfile.
f) Remove the ASM entry from the oratab file.
g) Wipe out the disk headers using dd.


The detailed steps are as follows:
1) Log into the ASM instance and run 'select * from v$asm_client;'
2) For each database instance listed above, stop that database.
3) Back up all the datafiles, logfiles, controlfiles, archive logs, etc. that currently use ASM storage, to tape or to a filesystem (using RMAN). This needs to be done for every database (ASM client) using ASM.

** NOTE: Please make sure the data is secure before continuing to the next step.
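
For illustration, a minimal RMAN sketch of step 3 for one database, assuming a filesystem destination /backup with enough free space (the path is hypothetical; adapt this to your own backup strategy and repeat it for every ASM client database):

$ rman target /
RMAN> backup as copy database format '/backup/%U';           <- image copies of all datafiles
RMAN> backup current controlfile format '/backup/ctl_%U';
RMAN> backup archivelog all format '/backup/arch_%U';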

4) Find all the diskgroups: 'select * from v$asm_diskgroup'
5) For each diskgroup listed above:
' drop diskgroup <name> including contents'
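
A hedged SQL*Plus sketch of steps 4 and 5, connected to the ASM instance (SID +ASM, or +ASMn on RAC; the diskgroup names DATA and FRA are only examples, substitute your own):

$ export ORACLE_SID=+ASM
$ sqlplus / as sysdba
SQL> select name, state from v$asm_diskgroup;
SQL> drop diskgroup DATA including contents;
SQL> drop diskgroup FRA including contents;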
6) Shut down all ASM instances (on all RAC nodes).

7) For a RAC install, verify that all ASM instances are stopped:
$ORA_CRS_HOME/bin/crs_stat | more <- look for the ASM resources and make sure TARGET=OFFLINE

8) For a single-instance install, run the following script:
$ORACLE_HOME/bin/localconfig delete

* This cleans up the CSSD configuration.

9) Invoke OUI and de-install the ASM Oracle home.

10) For a RAC install, remove the ASM-related resources:
srvctl remove asm -n <nodename> <- perform for all nodes of the RAC cluster
crs_stat | more <- make sure no ASM resources exist

11) If using ASMLib (on Linux only), then:
a. oracleasm listdisks
b. oracleasm deletedisk <disk_name> (do this for every disk listed above)
c. oracleasm listdisks (to verify they have been deleted)
d. On the other RAC nodes: oracleasm listdisks (to verify they have been deleted there too)
e. On all RAC nodes:
As root run:
# /etc/init.d/oracleasm stop
# /etc/init.d/oracleasm disable
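
As a convenience for steps 11a-11c, a hedged shell loop run as root on the first node (double-check the listdisks output before deleting anything, since this is irreversible):

# oracleasm listdisks
# for d in $(oracleasm listdisks); do oracleasm deletedisk "$d"; done
# oracleasm listdisks                 <- should now return nothing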

12) Delete the ASM pfile or spfile.

13) In the oratab file, remove the entry for the ASM instance.

14) Clean out the disk headers using the dd command. For example:

dd if=/dev/zero of=/dev/<asm_disk_name> bs=1024k count=50

Blogged with the Flock Browser

Saturday, February 6, 2010

Key RDBMS Install Differences in 11gR2 [ID 881063.1]

Key RDBMS Install Differences in 11gR2 [ID 881063.1]  

  Modified 08-JAN-2010     Type REFERENCE     Status PUBLISHED  

In this Document
  Purpose
  Scope
  Key RDBMS Install Differences in 11gR2


Applies to:

Oracle Server - Enterprise Edition - Version: 9.2.0.4 to 11.2.0.2
Information in this document applies to any platform.

Purpose

The purpose of this note is to provide a quick reference to some of the key differences between the Oracle RDBMS 11gR2 product and the previous Oracle RDBMS 9iR2, 10g, and 11g products, in the single focus area of RDBMS install and patching. It is not intended to document every known RDBMS install and patching difference, nor is it intended to document any differences beyond the RDBMS install and patching focus.

Scope

Although the intended audience of this matrix is Oracle Install and Patching Support Engineers, it may also be of some limited use to other Oracle Support Engineers. Because this general overview of RDBMS install and patching differences may also help Oracle customers with installation planning, it is also published externally.

Key RDBMS Install Differences in 11gR2

 

QUICK REFERENCE
----------------------------------------------------
Jump to Difference Category:
RDBMS File Structure | RDBMS Log File Locations | SQL Script Names | Startup Modes | Opatch | Tool Log Locations | Common Problems | Cloning | Metalink Notes

DISCLAIMER
------------
Whenever a contradiction arises between this document and Metalink-Certification, Metalink-Certification is the Certification authority.
Users of this bulletin are obligated to provide real-time corrections/updates in the form of Metalink "Feedback" submissions to ensure the accuracy and up-to-date status of this bulletin.

Difference Category: RDBMS File Structure

  $O_B / $O_H relationship
    11gR2: Each $O_H has a corresponding $O_B. It is recommended that one Oracle Base location be shared by all the Oracle Homes per user. You must set an $ORACLE_BASE.
    11gR1: Each $O_H has a corresponding $O_B. It is recommended that one Oracle Base location be shared by all the Oracle Homes per user. By default, $ORACLE_BASE will be set to $ORACLE_HOME.
    10gR2 / 10gR1 / 9iR2: All $ORACLE_HOMEs have a common $ORACLE_BASE.

  default inventory location
    11gR2 / 11gR1: $ORACLE_BASE/../oraInventory
    10gR2 / 10gR1 / 9iR2: $ORACLE_BASE/oraInventory

  cdump directory
    11gR2 / 11gR1: $ORACLE_BASE/diag/rdbms/{SID}/{SID}/cdump
    10gR2 / 10gR1 / 9iR2: $ORACLE_BASE/admin/{SID}/cdump

  control backup to trace files
    11gR2 / 11gR1: $ORACLE_BASE/diag/rdbms/{SID}/{SID}/trace
    10gR2 / 10gR1 / 9iR2: $ORACLE_BASE/admin/{SID}/udump
Difference Category: RDBMS Log File Locations

  alert logs
    11gR2 / 11gR1: $ORACLE_BASE/diag/rdbms/{SID}/{SID}/alert
    10gR2 / 10gR1 / 9iR2: $ORACLE_BASE/admin/{SID}/bdump/alert_{SID}.log
Difference Category: SQL Script Names

  migration script (upgrades DB between major release levels)
    11gR2 / 11gR1 / 10gR2: @?/rdbms/admin/catupgrd.sql
    10gR1: u#######.sql (for example u0902000.sql); see NOTE 263809.1 for other scripts
    9iR2: u#######.sql (for example u0800060.sql); see NOTE 159657.1 for other scripts

  patchset script (modifies DB at patchset level within the same release)
    11gR2 / 11gR1 / 10gR2: @?/rdbms/admin/catupgrd.sql
    10gR1 / 9iR2: @?/rdbms/admin/catpatch.sql

  downgrade script (removing the Patch Set software)
    11gR2 / 11gR1: @?/rdbms/admin/catdwgrd.sql
    10gR2: @?/rdbms/admin/catdwngrd.sql
    10gR1: @?/rdbms/admin/catbkout.sql and then
    9iR2: @?/rdbms/admin/catbkout.sql and then
Difference Category: Startup Modes

  manual migration
    11gR2 / 11gR1 / 10gR2 / 10gR1: startup upgrade
    9iR2: startup migrate

  CPU Patchset Post-Install Task (@catcpu.sql or @catbundle.sql script)
    11gR2: no 11gR2 patchsets yet
    11gR1 / 10gR2 / 10gR1 / 9iR2: startup

  CPU Patchset Post-Install Task (@view_recompile script)
    11gR2: no 11gR2 patchsets yet
    11gR1 / 10gR2 / 10gR1: startup upgrade
    9iR2: startup migrate

  Release Patchset Post-Install Task
    11gR2: no 11gR2 patchsets yet
    11gR1 / 10gR2 / 10gR1: startup upgrade
    9iR2: startup migrate
Difference Category: Opatch

  Version number
    11gR2: 11.1.0.x.x (not 11.2.0.x.x)
    11gR1: 11.1.0.x.x
    10gR2: 10.2.0.x.x
    10gR1 / 9iR2: 1.0.0.0.xx

  opatch lock file
    all versions: $ORACLE_HOME/.patch_storage/patch_locked

  opatch apply logs
    11gR2 / 11gR1 / 10gR2: $ORACLE_HOME/cfgtoollogs/opatch/
    10gR1 / 9iR2: $ORACLE_HOME/.patch_storage

  opatch rollback logs
    11gR2 / 11gR1 / 10gR2: $ORACLE_HOME/cfgtoollogs/opatch/
    10gR1 / 9iR2: $ORACLE_HOME/.patch_storage

  opatch rollback files (scripts)
    all versions: $ORACLE_HOME/.patch_storage/{patch id}

  opatch history log
    11gR2 / 11gR1 / 10gR2: $ORACLE_HOME/cfgtoollogs/opatch/opatch_history.txt
    10gR1 / 9iR2: did not exist
Difference Category: Tool Log Location

  Automatic Diagnostic Repository (ADR)
    11gR2 / 11gR1: $ORACLE_BASE/diag/rdbms/{SID}/{instance}/
    10gR2 / 10gR1 / 9iR2: did not exist

  DBCA
    11gR2 / 11gR1: $ORACLE_BASE/cfgtoollogs/dbca
    10gR2: $ORACLE_HOME/cfgtoollogs/
    10gR1 / 9iR2: $ORACLE_HOME/assistants/dbca/logs

  DBUA (aka DBMA)
    11gR2 / 11gR1: $ORACLE_BASE/cfgtoollogs/
    10gR2: $ORACLE_HOME/cfgtoollogs/
    10gR1 / 9iR2: $ORACLE_HOME/assistants/dbma/logs

  EMCA
    11gR2 / 11gR1: $ORACLE_BASE/cfgtoollogs/
    10gR2: $ORACLE_HOME/cfgtoollogs/
    10gR1 / 9iR2: $ORACLE_HOME/assistants/emca/logs

  NETCA
    11gR2 / 11gR1: $ORACLE_BASE/cfgtoollogs/
    10gR2: $ORACLE_HOME/cfgtoollogs/
    10gR1 / 9iR2: $ORACLE_HOME/assistants/netca/logs

  OUI
    11gR2 / 11gR1: $ORACLE_BASE/../oraInventory/logs/
    10gR2: $ORACLE_HOME/cfgtoollogs/
    10gR1 / 9iR2: $ORACLE_BASE/oraInventory/logs/
Difference Category: Common Problems

  $O_H file permissions
    11gR2 / 11gR1: already correct after install
    10gR2: 10.2.0.1 - correct after patch 4516865; 10.2.0.2+ - correct after Post-Install Tasks step
    10gR1: 10.1.0.1 thru 10.1.0.4 - correct after install; 10.1.0.5 - correct after Post-Install Tasks step
    9iR2: 9201 thru 9206 - correct after install; 9207+ - correct after patch 4533592

  DST March 2007 (version 4) TimeZone files
    11gR2 / 11gR1: included in release
    10gR2: 10.1.0.1 thru 10.2.0.3 - patch required, see NOTE 359145.1; 10.2.0.4 - included in patchset
    10gR1 / 9iR2: patch required, see NOTE 359145.1

  Documented OS foundation
    11gR2: All Install Guides recommend "default-RPMs", and offer "reduced RPMs" as an alternative
    11gR1: Clusterware for Linux states "default-RPMs" are required; all other platforms and products - only implied
    10gR2 / 10gR1 / 9iR2: the required OS foundation is only implied
Difference Category: Cloning

  Functionality
    11gR2: Oracle OUI and OPatch User's Guide 11g Release 2 (11.2) for Windows and UNIX, chapter 6, and NOTE 300062.1, "How to Clone an Existing RDBMS Installation Using OUI"
    11gR1: Oracle OUI and OPatch User's Guide 11g Release 1 (11.1) for Windows and UNIX, chapter 6, and NOTE 300062.1, "How to Clone an Existing RDBMS Installation Using OUI"
    10gR2: Oracle OUI and OPatch User's Guide 10g Release 2 (10.2) for Windows and UNIX, chapter 7, and NOTE 300062.1, "How to Clone an Existing RDBMS Installation Using OUI"
    10gR1: does not actually work properly
    9iR2: Note 300062.1

Difference Category: Metalink Notes

  Manual Migration
    11gR2: Note 837570.1
    11gR1: Note 429825.1
    10gR2: Note 316889.1
    10gR1: Note 263809.1
    9iR2: Note 159657.1
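
To tie the SQL Script Names and Startup Modes rows together, a hedged sketch of a 10g/11g manual upgrade session (this is only the core sequence; the pre-upgrade checks and post-upgrade steps from the Upgrade Guide and the notes listed above are omitted):

$ sqlplus / as sysdba
SQL> startup upgrade                  <- 'startup migrate' on 9iR2
SQL> spool catupgrd.log
SQL> @?/rdbms/admin/catupgrd.sql
SQL> spool off
SQL> shutdown immediate
SQL> startup
SQL> @?/rdbms/admin/utlrp.sql         <- recompile invalid objects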


Blogged with the Flock Browser

11gR2 Clusterware and Grid Home - What You Need to Know

11gR2 Clusterware and Grid Home - What You Need to Know [ID 1053147.1]  

  Modified 02-FEB-2010     Type BULLETIN     Status PUBLISHED  

In this Document
  Purpose
  Scope and Application
  11gR2 Clusterware and Grid Home - What You Need to Know
     11gR2 Clusterware Key Facts
     Clusterware Startup Sequence
     Important Log Locations
     Clusterware Resource Status Check
     Clusterware Resource Administration
     OCRCONFIG Options:
     OLSNODES Options
     Cluster Verification Options
  References


Applies to:

Oracle Server - Enterprise Edition - Version: 11.2.0.1 to 11.2.0.1 - Release: 11.2 to 11.2
Information in this document applies to any platform.

Purpose

The 11gR2 Clusterware has undergone numerous changes since the previous release. For information on the previous release(s), see Note: 259301.1 "CRS and 10g Real Application Clusters". This document covers the 11.2 Clusterware, which has both similarities to and differences from the previous version(s).

Scope and Application


This document is intended for RAC Database Administrators and Oracle support engineers.

11gR2 Clusterware and Grid Home - What You Need to Know

11gR2 Clusterware Key Facts

  • 11gR2 Clusterware is required to be up and running prior to installing an 11gR2 Real Application Clusters database.
  • The GRID home consists of the Oracle Clusterware and ASM.  ASM should not be in a separate home.
  • The 11gR2 Clusterware can be installed in "Standalone" mode for ASM and/or "Oracle Restart" single node support. This clusterware is a subset of the full clusterware described in this document.
  • The 11gR2 Clusterware can be run by itself or on top of vendor clusterware.  See the certification matrix for certified combinations. Ref: Note: 184875.1 "How To Check The Certification Matrix for Real Application Clusters"
  • The GRID Home and the RAC/DB Home must be installed in different locations.
  • The 11gR2 Clusterware requires shared OCR and voting files.  These can be stored on ASM or a cluster filesystem.
  • The OCR is backed up automatically every 4 hours to <GRID_HOME>/cdata/<scan name>/ and can be restored via ocrconfig. 
  • The voting file is backed up into the OCR at every configuration change and can be restored via crsctl. 
  • The 11gR2 Clusterware requires at least one private network for inter-node communication and at least one public network for external communication.  Several virtual IPs need to be registered with DNS.  This includes the node VIPs (one per node), SCAN VIPs (up to 3).  This can be done manually via your network administrator or optionally you could configure the "GNS" (Grid Naming Service) in the Oracle clusterware to handle this for you (note that GNS requires its own VIP).  
  • A SCAN (Single Client Access Name) is provided to clients to connect to.  For more info on SCAN see Note: 887522.1 and/or http://www.oracle.com/technology/products/database/clustering/pdf/scan.pdf
  • The root.sh script at the end of the clusterware installation starts the clusterware stack.  For information on troubleshooting root.sh issues see Note: 1053970.1
  • Only one set of clusterware daemons can be running per node. 
  • On Unix, the clusterware stack is started via the init.ohasd script referenced in /etc/inittab with "respawn".
  • A node can be evicted (rebooted) if it is deemed to be unhealthy.  This is done so that the health of the entire cluster can be maintained.  For more information on this see: Note: 1050693.1 "Troubleshooting 11.2 Clusterware Node Evictions (Reboots)"
  • Either have time synchronization software (such as NTP) fully configured and running, or have it not configured at all and let CTSS handle time synchronization.  See Note: 1054006.1 for more information.
  • If installing DB homes for a lower version, you will need to unpin the nodes in the clusterware or you will see ORA-29702 errors.  See Note 946332.1 for more info.
  • The clusterware stack can be started by either booting the machine, running "crsctl start crs" to start the clusterware stack on the local node, or running "crsctl start cluster" to start the clusterware on all nodes.  Note that crsctl is in the <GRID_HOME>/bin directory.
  • The clusterware stack can be stopped by either shutting down the machine, running "crsctl stop crs" to stop the clusterware stack on the local node, or running "crsctl stop cluster" to stop the clusterware on all nodes (see the example after this list).  Note that crsctl is in the <GRID_HOME>/bin directory.
  • Killing clusterware daemons is not supported.
Note that it is also a good idea to follow the RAC Assurance best practices in Note: 810394.1
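
For example, a minimal sketch of stopping and starting the stack, run as root from the <GRID_HOME>/bin directory (the -all option and the ordering shown are illustrative):

# ./crsctl stop cluster -all          <- stops the upper clusterware stack on all nodes
# ./crsctl stop crs                   <- stops the complete stack, including OHASD, on the local node
# ./crsctl start crs                  <- starts the complete stack on the local node
# ./crsctl start cluster -all         <- starts the upper clusterware stack on all nodes
# ./crsctl check cluster -all         <- verifies CRS, CSS and EVM are online on all nodes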

Clusterware Startup Sequence

The following is the Clusterware startup sequence (image from the "Oracle Clusterware Administration and Deployment Guide"):


Don't let this picture scare you too much.  You aren't responsible for managing all of these processes, that is the Clusterware's job!

Short summary of the startup sequence: INIT spawns init.ohasd (with respawn) which in turn starts the OHASD process (Oracle High Availability Services Daemon).  This daemon spawns 4 processes.

Level 1: OHASD Spawns:

  • cssdagent - Agent responsible for spawning CSSD.
  • orarootagent - Agent responsible for managing all root owned ohasd resources.
  • oraagent - Agent responsible for managing all oracle owned ohasd resources.
  • cssdmonitor - Monitors CSSD and node health (along with the cssdagent).

Level 2: OHASD rootagent spawns:

  • CRSD - Primary daemon responsible for managing cluster resources.
  • CTSSD - Cluster Time Synchronization Services Daemon
  • Diskmon
  • ACFS (ASM Cluster File System) Drivers 

Level 2: OHASD oraagent spawns:

  • MDNSD - Used for DNS lookup
  • GIPCD - Used for inter-process and inter-node communication
  • GPNPD - Grid Plug & Play Profile Daemon
  • EVMD - Event Monitor Daemon
  • ASM - Resource for monitoring ASM instances

Level 3: CRSD spawns:

  • orarootagent - Agent responsible for managing all root owned crsd resources.
  • oraagent - Agent responsible for managing all oracle owned crsd resources.

Level 4: CRSD rootagent spawns:

  • Network resource - To monitor the public network
  • SCAN VIP(s) - Single Client Access Name Virtual IPs
  • Node VIPs - One per node
  • ACFS Registry - For mounting ASM Cluster File System
  • GNS VIP (optional) - VIP for GNS

Level 4: CRSD oraagent spawns:

  • ASM Resource - ASM Instance(s) resource
  • Diskgroup - Used for managing/monitoring ASM diskgroups.  
  • DB Resource - Used for monitoring and managing the DB and instances
  • SCAN Listener - Listener for single client access name, listening on SCAN VIP
  • Listener - Node listener listening on the Node VIP
  • Services - Used for monitoring and managing services
  • ONS - Oracle Notification Service
  • eONS - Enhanced Oracle Notification Service
  • GSD - For 9i backward compatibility
  • GNS (optional) - Grid Naming Service - Performs name resolution

This image shows the various levels more clearly:


Important Log Locations

Clusterware daemon logs are all under <GRID_HOME>/log/<nodename>.  Structure under <GRID_HOME>/log/<nodename>:

alert<NODENAME>.log - look here first for most clusterware issues
./admin:
./agent:
./agent/crsd:
./agent/crsd/oraagent_oracle:
./agent/crsd/ora_oc4j_type_oracle:
./agent/crsd/orarootagent_root:
./agent/ohasd:
./agent/ohasd/oraagent_oracle:
./agent/ohasd/oracssdagent_root:
./agent/ohasd/oracssdmonitor_root:
./agent/ohasd/orarootagent_root:
./client:
./crsd:
./cssd:
./ctssd:
./diskmon:
./evmd:
./gipcd:
./gnsd:
./gpnpd:
./mdnsd:
./ohasd:
./racg:
./racg/racgeut:
./racg/racgevtf:
./racg/racgmain:
./srvm:

The cfgtoollogs directories under <GRID_HOME> and $ORACLE_BASE contain other important log files, specifically for rootcrs.pl and configuration assistants such as ASMCA.

ASM logs live under $ORACLE_BASE/diag/asm/+asm/<ASM Instance Name>/trace

The diagcollection.pl script under <GRID_HOME>/bin can be used to automatically collect important files for support.  Run this as the root user. 
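
As a quick orientation, a hedged example of looking at the two most commonly needed locations, assuming a hypothetical GRID_HOME of /u01/app/11.2.0/grid, node name racbde1 and ASM instance +ASM1:

$ tail -f /u01/app/11.2.0/grid/log/racbde1/alertracbde1.log        <- clusterware alert log
$ ls -ltr $ORACLE_BASE/diag/asm/+asm/+ASM1/trace | tail            <- most recent ASM trace files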

Clusterware Resource Status Check

The following command will display the status of all cluster resources:


$ ./crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATADG.dg
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
ora.LISTENER.lsnr
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
ora.SYSTEMDG.dg
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
ora.asm
               ONLINE  ONLINE       racbde1                  Started
               ONLINE  ONLINE       racbde2                  Started
ora.eons
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
ora.gsd
               OFFLINE OFFLINE      racbde1
               OFFLINE OFFLINE      racbde2
ora.net1.network
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
ora.ons
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
ora.registry.acfs
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racbde1
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       racbde2
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       racbde2
ora.oc4j
      1        OFFLINE OFFLINE
ora.rac.db
      1        ONLINE  ONLINE       racbde1                  Open
      2        ONLINE  ONLINE       racbde2                  Open
ora.racbde1.vip
      1        ONLINE  ONLINE       racbde1
ora.racbde2.vip
      1        ONLINE  ONLINE       racbde2
ora.scan1.vip
      1        ONLINE  ONLINE       racbde1
ora.scan2.vip
      1        ONLINE  ONLINE       racbde2
ora.scan3.vip
      1        ONLINE  ONLINE       racbde2

Clusterware Resource Administration

Srvctl and crsctl are used to manage clusterware resources.  The general rule is to use srvctl for whatever resource management you can.  Crsctl should only be used for things that you cannot do with srvctl (like start the cluster).  Both have a help feature to see the available syntax.
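
For example, a few hedged day-to-day commands (the database name rac and instance name rac1 are placeholders matching the status output above):

$ srvctl status database -d rac -v
$ srvctl stop instance -d rac -i rac1 -o immediate
$ srvctl start instance -d rac -i rac1
$ srvctl status nodeapps
$ <GRID_HOME>/bin/crsctl check cluster -all          <- cluster-level check, crsctl rather than srvctl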

Note that the following only shows the available srvctl syntax.  For additional explanation on what these commands do, see the Oracle Documentation

Srvctl syntax:

$ srvctl -h
Usage: srvctl [-V]
Usage: srvctl add database -d <db_unique_name> -o <oracle_home> [-m <domain_name>] [-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-t <stop_options>] [-n <db_name>] [-y {AUTOMATIC | MANUAL}] [-g "<serverpool_list>"] [-x <node_name>] [-a "<diskgroup_list>"]
Usage: srvctl config database [-d <db_unique_name> [-a] ]
Usage: srvctl start database -d <db_unique_name> [-o <start_options>]
Usage: srvctl stop database -d <db_unique_name> [-o <stop_options>] [-f]
Usage: srvctl status database -d <db_unique_name> [-f] [-v]
Usage: srvctl enable database -d <db_unique_name> [-n <node_name>]
Usage: srvctl disable database -d <db_unique_name> [-n <node_name>]
Usage: srvctl modify database -d <db_unique_name> [-n <db_name>] [-o <oracle_home>] [-u <oracle_user>] [-m <domain>] [-p <spfile>] [-r {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-s <start_options>] [-t <stop_options>] [-y {AUTOMATIC | MANUAL}] [-g "<serverpool_list>" [-x <node_name>]] [-a "<diskgroup_list>"|-z]
Usage: srvctl remove database -d <db_unique_name> [-f] [-y]
Usage: srvctl getenv database -d <db_unique_name> [-t "<name_list>"]
Usage: srvctl setenv database -d <db_unique_name> {-t <name>=<val>[,<name>=<val>,...] | -T <name>=<val>}
Usage: srvctl unsetenv database -d <db_unique_name> -t "<name_list>"

Usage: srvctl add instance -d <db_unique_name> -i <inst_name> -n <node_name> [-f]
Usage: srvctl start instance -d <db_unique_name> {-n <node_name> [-i <inst_name>] | -i <inst_name_list>} [-o <start_options>]
Usage: srvctl stop instance -d <db_unique_name> {-n <node_name> | -i <inst_name_list>}  [-o <stop_options>] [-f]
Usage: srvctl status instance -d <db_unique_name> {-n <node_name> | -i <inst_name_list>} [-f] [-v]
Usage: srvctl enable instance -d <db_unique_name> -i "<inst_name_list>"
Usage: srvctl disable instance -d <db_unique_name> -i "<inst_name_list>"
Usage: srvctl modify instance -d <db_unique_name> -i <inst_name> { -n <node_name> | -z }
Usage: srvctl remove instance -d <db_unique_name> [-i <inst_name>] [-f] [-y]

Usage: srvctl add service -d <db_unique_name> -s <service_name> {-r "<preferred_list>" [-a "<available_list>"] [-P {BASIC | NONE | PRECONNECT}] | -g <server_pool> [-c {UNIFORM | SINGLETON}] } [-k   <net_num>] [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-y {AUTOMATIC | MANUAL}] [-q {TRUE|FALSE}] [-x {TRUE|FALSE}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z <failover_retries>] [-w <failover_delay>]
Usage: srvctl add service -d <db_unique_name> -s <service_name> -u {-r "<new_pref_inst>" | -a "<new_avail_inst>"}
Usage: srvctl config service -d <db_unique_name> [-s <service_name>] [-a]
Usage: srvctl enable service -d <db_unique_name> -s "<service_name_list>" [-i <inst_name> | -n <node_name>]
Usage: srvctl disable service -d <db_unique_name> -s "<service_name_list>" [-i <inst_name> | -n <node_name>]
Usage: srvctl status service -d <db_unique_name> [-s "<service_name_list>"] [-f] [-v]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -i <old_inst_name> -t <new_inst_name> [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -i <avail_inst_name> -r [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -n -i "<preferred_list>" [-a "<available_list>"] [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> [-c {UNIFORM | SINGLETON}] [-P {BASIC|PRECONNECT|NONE}] [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-y {AUTOMATIC | MANUAL}][-q {true|false}] [-x {true|false}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z <integer>] [-w <integer>]
Usage: srvctl relocate service -d <db_unique_name> -s <service_name> {-i <old_inst_name> -t <new_inst_name> | -c <current_node> -n <target_node>} [-f]
       Specify instances for an administrator-managed database, or nodes for a policy managed database
Usage: srvctl remove service -d <db_unique_name> -s <service_name> [-i <inst_name>] [-f]
Usage: srvctl start service -d <db_unique_name> [-s "<service_name_list>" [-n <node_name> | -i <inst_name>] ] [-o <start_options>]
Usage: srvctl stop service -d <db_unique_name> [-s "<service_name_list>" [-n <node_name> | -i <inst_name>] ] [-f]

Usage: srvctl add nodeapps { { -n <node_name> -A <name|ip>/<netmask>/[if1[|if2...]] } | { -S <subnet>/<netmask>/[if1[|if2...]] } } [-p <portnum>] [-m <multicast-ip-address>] [-e <eons-listen-port>] [-l <ons-local-port>]  [-r <ons-remote-port>] [-t <host>[:<port>][,<host>[:<port>]...]] [-v]
Usage: srvctl config nodeapps [-a] [-g] [-s] [-e]
Usage: srvctl modify nodeapps {[-n <node_name> -A <new_vip_address>/<netmask>[/if1[|if2|...]]] | [-S <subnet>/<netmask>[/if1[|if2|...]]]} [-m <multicast-ip-address>] [-p <multicast-portnum>] [-e <eons-listen-port>] [ -l <ons-local-port> ] [-r <ons-remote-port> ] [-t <host>[:<port>][,<host>[:<port>]...]] [-v]
Usage: srvctl start nodeapps [-n <node_name>] [-v]
Usage: srvctl stop nodeapps [-n <node_name>] [-f] [-r] [-v]
Usage: srvctl status nodeapps
Usage: srvctl enable nodeapps [-v]
Usage: srvctl disable nodeapps [-v]
Usage: srvctl remove nodeapps [-f] [-y] [-v]
Usage: srvctl getenv nodeapps [-a] [-g] [-s] [-e] [-t "<name_list>"]
Usage: srvctl setenv nodeapps {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"}
Usage: srvctl unsetenv nodeapps -t "<name_list>" [-v]

Usage: srvctl add vip -n <node_name> -k <network_number> -A <name|ip>/<netmask>/[if1[|if2...]] [-v]
Usage: srvctl config vip { -n <node_name> | -i <vip_name> }
Usage: srvctl disable vip -i <vip_name> [-v]
Usage: srvctl enable vip -i <vip_name> [-v]
Usage: srvctl remove vip -i "<vip_name_list>" [-f] [-y] [-v]
Usage: srvctl getenv vip -i <vip_name> [-t "<name_list>"]
Usage: srvctl start vip { -n <node_name> | -i <vip_name> } [-v]
Usage: srvctl stop vip { -n <node_name>  | -i <vip_name> } [-f] [-r] [-v]
Usage: srvctl status vip { -n <node_name> | -i <vip_name> }
Usage: srvctl setenv vip -i <vip_name> {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"}
Usage: srvctl unsetenv vip -i <vip_name> -t "<name_list>" [-v]

Usage: srvctl add asm [-l <lsnr_name>]
Usage: srvctl start asm [-n <node_name>] [-o <start_options>]
Usage: srvctl stop asm [-n <node_name>] [-o <stop_options>] [-f]
Usage: srvctl config asm [-a]
Usage: srvctl status asm [-n <node_name>] [-a]
Usage: srvctl enable asm [-n <node_name>]
Usage: srvctl disable asm [-n <node_name>]
Usage: srvctl modify asm [-l <lsnr_name>]
Usage: srvctl remove asm [-f]
Usage: srvctl getenv asm [-t <name>[, ...]]
Usage: srvctl setenv asm -t "<name>=<val> [,...]" | -T "<name>=<value>"
Usage: srvctl unsetenv asm -t "<name>[, ...]"

Usage: srvctl start diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl stop diskgroup -g <dg_name> [-n "<node_list>"] [-f]
Usage: srvctl status diskgroup -g <dg_name> [-n "<node_list>"] [-a]
Usage: srvctl enable diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl disable diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl remove diskgroup -g <dg_name> [-f]

Usage: srvctl add listener [-l <lsnr_name>] [-s] [-p "[TCP:]<port>[, ...][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]"] [-o <oracle_home>] [-k <net_num>]
Usage: srvctl config listener [-l <lsnr_name>] [-a]
Usage: srvctl start listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl stop listener [-l <lsnr_name>] [-n <node_name>] [-f]
Usage: srvctl status listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl enable listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl disable listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl modify listener [-l <lsnr_name>] [-o <oracle_home>] [-p "[TCP:]<port>[, ...][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]"] [-u <oracle_user>] [-k <net_num>]
Usage: srvctl remove listener [-l <lsnr_name> | -a] [-f]
Usage: srvctl getenv listener [-l <lsnr_name>] [-t <name>[, ...]]
Usage: srvctl setenv listener [-l <lsnr_name>] -t "<name>=<val> [,...]" | -T "<name>=<value>"
Usage: srvctl unsetenv listener [-l <lsnr_name>] -t "<name>[, ...]"

Usage: srvctl add scan -n <scan_name> [-k <network_number> [-S <subnet>/<netmask>[/if1[|if2|...]]]]
Usage: srvctl config scan [-i <ordinal_number>]
Usage: srvctl start scan [-i <ordinal_number>] [-n <node_name>]
Usage: srvctl stop scan [-i <ordinal_number>] [-f]
Usage: srvctl relocate scan -i <ordinal_number> [-n <node_name>]
Usage: srvctl status scan [-i <ordinal_number>]
Usage: srvctl enable scan [-i <ordinal_number>]
Usage: srvctl disable scan [-i <ordinal_number>]
Usage: srvctl modify scan -n <scan_name>
Usage: srvctl remove scan [-f] [-y]
Usage: srvctl add scan_listener [-l <lsnr_name_prefix>] [-s] [-p [TCP:]<port>[/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]]
Usage: srvctl config scan_listener [-i <ordinal_number>]
Usage: srvctl start scan_listener [-n <node_name>] [-i <ordinal_number>]
Usage: srvctl stop scan_listener [-i <ordinal_number>] [-f]
Usage: srvctl relocate scan_listener -i <ordinal_number> [-n <node_name>]
Usage: srvctl status scan_listener [-i <ordinal_number>]
Usage: srvctl enable scan_listener [-i <ordinal_number>]
Usage: srvctl disable scan_listener [-i <ordinal_number>]
Usage: srvctl modify scan_listener {-u|-p [TCP:]<port>[/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]}
Usage: srvctl remove scan_listener [-f] [-y]

Usage: srvctl add srvpool -g <pool_name> [-l <min>] [-u <max>] [-i <importance>] [-n "<server_list>"]
Usage: srvctl config srvpool [-g <pool_name>]
Usage: srvctl status srvpool [-g <pool_name>] [-a]
Usage: srvctl status server -n "<server_list>" [-a]
Usage: srvctl relocate server -n "<server_list>" -g <pool_name> [-f]
Usage: srvctl modify srvpool -g <pool_name> [-l <min>] [-u <max>] [-i <importance>] [-n "<server_list>"]
Usage: srvctl remove srvpool -g <pool_name>

Usage: srvctl add oc4j [-v]
Usage: srvctl config oc4j
Usage: srvctl start oc4j [-v]
Usage: srvctl stop oc4j [-f] [-v]
Usage: srvctl relocate oc4j [-n <node_name>] [-v]
Usage: srvctl status oc4j [-n <node_name>]
Usage: srvctl enable oc4j [-n <node_name>] [-v]
Usage: srvctl disable oc4j [-n <node_name>] [-v]
Usage: srvctl modify oc4j -p <oc4j_rmi_port> [-v]
Usage: srvctl remove oc4j [-f] [-v]

Usage: srvctl start home -o <oracle_home> -s <state_file> -n <node_name>
Usage: srvctl stop home -o <oracle_home> -s <state_file> -n <node_name> [-t <stop_options>] [-f]
Usage: srvctl status home -o <oracle_home> -s <state_file> -n <node_name>

Usage: srvctl add filesystem -d <volume_device> -v <volume_name> -g <dg_name> [-m <mountpoint_path>] [-u <user>]
Usage: srvctl config filesystem -d <volume_device>
Usage: srvctl start filesystem -d <volume_device> [-n <node_name>]
Usage: srvctl stop filesystem -d <volume_device> [-n <node_name>] [-f]
Usage: srvctl status filesystem -d <volume_device>
Usage: srvctl enable filesystem -d <volume_device>
Usage: srvctl disable filesystem -d <volume_device>
Usage: srvctl modify filesystem -d <volume_device> -u <user>
Usage: srvctl remove filesystem -d <volume_device> [-f]

Usage: srvctl start gns [-v] [-l <log_level>] [-n <node_name>]
Usage: srvctl stop gns [-v] [-n <node_name>] [-f]
Usage: srvctl config gns [-v] [-a] [-d] [-k] [-m] [-n <node_name>] [-p] [-s] [-V]
Usage: srvctl status gns -n <node_name>
Usage: srvctl enable gns [-v] [-n <node_name>]
Usage: srvctl disable gns [-v] [-n <node_name>]
Usage: srvctl relocate gns [-v] [-n <node_name>] [-f]
Usage: srvctl add gns [-v] -d <domain> -i <vip_name|ip> [-k <network_number> [-S <subnet>/<netmask>[/<interface>]]]
srvctl modify gns [-v] [-f] [-l <log_level>] [-d <domain>] [-i <ip_address>] [-N <name> -A <address>] [-D <name> -A <address>] [-c <name> -a <alias>] [-u <alias>] [-r <address>] [-V <name>] [-F <forwarded_domains>] [-R <refused_domains>] [-X <excluded_interfaces>]
Usage: srvctl remove gns [-f] [-d <domain_name>]

Crsctl Syntax (for further explanation of these commands see the Oracle Documentation)

$ ./crsctl -h
Usage: crsctl add       - add a resource, type or other entity
       crsctl check     - check a service, resource or other entity
       crsctl config    - output autostart configuration
       crsctl debug     - obtain or modify debug state
       crsctl delete    - delete a resource, type or other entity
       crsctl disable   - disable autostart
       crsctl enable    - enable autostart
       crsctl get       - get an entity value
       crsctl getperm   - get entity permissions
       crsctl lsmodules - list debug modules
       crsctl modify    - modify a resource, type or other entity
       crsctl query     - query service state
       crsctl pin       - Pin the nodes in the nodelist
       crsctl relocate  - relocate a resource, server or other entity
       crsctl replace   - replaces the location of voting files
       crsctl setperm   - set entity permissions
       crsctl set       - set an entity value
       crsctl start     - start a resource, server or other entity
       crsctl status    - get status of a resource or other entity
       crsctl stop      - stop a resource, server or other entity
       crsctl unpin     - unpin the nodes in the nodelist
       crsctl unset     - unset a entity value, restoring its default

For more information on each command, run "crsctl <command> -h".
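
A few hedged examples of commands from this list that come up most often, run from <GRID_HOME>/bin (the resource name ora.rac.db is an example):

$ ./crsctl check crs                          <- checks the clusterware daemons on the local node
$ ./crsctl query css votedisk                 <- lists the configured voting files
$ ./crsctl status resource ora.rac.db -t      <- status of a single resource
$ ./crsctl get css misscount                  <- show a CSS parameter value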

OCRCONFIG Options:

Note that the following only shows the available ocrconfig syntax. For additional explanation on what these commands do, see the Oracle Documentation.

$ ./ocrconfig -help
Name:
        ocrconfig - Configuration tool for Oracle Cluster/Local Registry.

Synopsis:
        ocrconfig [option]
        option:
                [-local] -export <filename>
                                                    - Export OCR/OLR contents to a file
                [-local] -import <filename>         - Import OCR/OLR contents from a file
                [-local] -upgrade [<user> [<group>]]
                                                    - Upgrade OCR from previous version
                -downgrade [-version <version string>]
                                                    - Downgrade OCR to the specified version
                [-local] -backuploc <dirname>       - Configure OCR/OLR backup location
                [-local] -showbackup [auto|manual]  - Show OCR/OLR backup information
                [-local] -manualbackup              - Perform OCR/OLR backup
                [-local] -restore <filename>        - Restore OCR/OLR from physical backup
                -replace <current filename> -replacement <new filename>
                                                    - Replace a OCR device/file <filename1> with <filename2>
                -add <filename>                     - Add a new OCR device/file
                -delete <filename>                  - Remove a OCR device/file
                -overwrite                          - Overwrite OCR configuration on disk
                -repair -add <filename> | -delete <filename> | -replace <current filename> -replacement <new filename>
                                                    - Repair OCR configuration on the local node
                -help                               - Print out this help information

Note:
        * A log file will be created in
        $ORACLE_HOME/log/<hostname>/client/ocrconfig_<pid>.log. Please ensure
        you have file creation privileges in the above directory before
        running this tool.
        * Only -local -showbackup [manual] is supported.
        * Use option '-local' to indicate that the operation is to be performed on the Oracle Local Registry
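
For example, hedged usage of the backup-related options shown above, run as root (the export filenames are arbitrary):

# ocrconfig -showbackup                          <- list automatic and manual OCR backups
# ocrconfig -manualbackup                        <- take an on-demand OCR backup
# ocrconfig -export /tmp/ocr_export.bak          <- logical export of the OCR contents
# ocrconfig -local -export /tmp/olr_export.bak   <- the same for the Oracle Local Registry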

OLSNODES Options

Note that the following only shows the available olsnodes syntax. For additional explanation on what these commands do, see the Oracle Documentation.

$ ./olsnodes -h
Usage: olsnodes [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] | [-c] ] [-g] [-v]
        where
                -n print node number with the node name
                -p print private interconnect address for the local node
                -i print virtual IP address with the node name
                <node> print information for the specified node
                -l print information for the local node
                -s print node status - active or inactive
                -t print node type - pinned or unpinned
                -g turn on logging
                -v Run in debug mode; use at direction of Oracle Support only.
                -c print clusterware name
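
For example, combining the most useful switches from the usage above:

$ ./olsnodes -n -i -s -t      <- node name, node number, VIP, active/inactive and pinned/unpinned for every node
$ ./olsnodes -l -p            <- local node name and its private interconnect address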

Cluster Verification Options

Note that the following only shows the available cluvfy syntax. For additional explanation on what these commands do, see the Oracle Documentation.

Component Options:

$ ./cluvfy comp -list

USAGE:
cluvfy comp  <component-name> <component-specific options>  [-verbose]

Valid components are:
        nodereach : checks reachability between nodes
        nodecon   : checks node connectivity
        cfs       : checks CFS integrity
        ssa       : checks shared storage accessibility
        space     : checks space availability
        sys       : checks minimum system requirements
        clu       : checks cluster integrity
        clumgr    : checks cluster manager integrity
        ocr       : checks OCR integrity
        olr       : checks OLR integrity
        ha        : checks HA integrity
        crs       : checks CRS integrity
        nodeapp   : checks node applications existence
        admprv    : checks administrative privileges
        peer      : compares properties with peers
        software  : checks software distribution
        asm       : checks ASM integrity
        acfs       : checks ACFS integrity
        gpnp      : checks GPnP integrity
        gns       : checks GNS integrity
        scan      : checks SCAN configuration
        ohasd     : checks OHASD integrity
        clocksync      : checks Clock Synchronization
        vdisk      : check Voting Disk Udev settings


Stage Options:

$ ./cluvfy stage -list

USAGE:
cluvfy stage {-pre|-post} <stage-name> <stage-specific options>  [-verbose]

Valid stage options and stage names are:
        -post hwos    :  post-check for hardware and operating system
        -pre  cfs     :  pre-check for CFS setup
        -post cfs     :  post-check for CFS setup
        -pre  crsinst :  pre-check for CRS installation
        -post crsinst :  post-check for CRS installation
        -pre  hacfg   :  pre-check for HA configuration
        -post hacfg   :  post-check for HA configuration
        -pre  dbinst  :  pre-check for database installation
        -pre  acfscfg  :  pre-check for ACFS Configuration.
        -post acfscfg  :  post-check for ACFS Configuration.
        -pre  dbcfg   :  pre-check for database configuration
        -pre  nodeadd :  pre-check for node addition.
        -post nodeadd :  post-check for node addition.
        -post nodedel :  post-check for node deletion.
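
For example, a few hedged invocations against a two-node cluster (the node names racbde1,racbde2 are placeholders):

$ ./cluvfy comp nodecon -n racbde1,racbde2 -verbose
$ ./cluvfy comp ocr -n all
$ ./cluvfy stage -post crsinst -n racbde1,racbde2 -verbose
$ ./cluvfy stage -pre dbinst -n all -verbose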


References

NOTE:1050693.1 - Troubleshooting 11.2 Clusterware Node Evictions (Reboots)
NOTE:1053970.1 - Troubleshooting 11.2 Grid Infrastructure Installation Root.sh Issues
NOTE:1054006.1 - CTSSD Runs in Observer Mode Even Though No Time Sync Software is Running
NOTE:184875.1 - How To Check The Certification Matrix for Real Application Clusters
NOTE:259301.1 - CRS and 10g/11.1 Real Application Clusters
NOTE:810394.1 - RAC Assurance Support Team: RAC Starter Kit and Best Practices (Generic)
NOTE:887522.1 - 11gR2 Grid Infrastructure Single Client Access Name (SCAN) Explained
NOTE:946332.1 - Unable To Create 10.1 or 10.2 or 11.1(< 11gR2) ASM RAC Databases (ORA-29702) Using Brand New 11.2 CRS Installation (11gR2 Grid Infrastructure).
Oracle Clusterware Administration and Deployment Guide
http://www.oracle.com/technology/documentation/index.html

Attachments


CLUSTERWARE.JPG (45.36 KB)
cwadd004.gif (29.17 KB)
Blogged with the Flock Browser

RAC Assurance Support Team: RAC Starter Kit and Best Practices (Generic)

RAC Assurance Support Team: RAC Starter Kit and Best Practices (Generic) [ID 810394.1]  

  Modified 03-FEB-2010     Type BULLETIN     Status PUBLISHED  

In this Document
  Purpose
  Scope and Application
  RAC Assurance Support Team: RAC Starter Kit and Best Practices (Generic)
     RAC Platform Specific Starter Kits and Best Practices
     
     RAC Platform Generic Load Testing  and System Test Plan Outline
     
     RAC Platform Generic Highlighted Recommendations
     
     RAC Platform Generic Best Practices
     Getting Started - Preinstallation and Design Considerations
     Clusterware Considerations
     Networking Considerations
     Storage Considerations
     Installation Considerations
     Patching Considerations
     Upgrade Considerations
     Oracle VM Considerations
     Database Initialization Parameter Considerations
     Performance Tuning Considerations
     General Configuration Considerations
     E-Business Suite (with RAC) Considerations
     Peoplesoft (with RAC) Considerations
     Tools/Utilities for Diagnosing and Working with Oracle Support
     11gR2 Specific Considerations
     RAC Platform Generic References
     CRS / RAC Related References
     RAC / RDBMS Related References
     VIP References
     ASM References
     11.2 References
     Infiniband References
     MAA / Standby References
     Patching References
     Upgrade References
     E-Business References
     Unix References
     Weblogic/RAC References
     References Related to Working with Oracle Support


Applies to:

Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 11.2.0.1.0 - Release: 10.2 to 11.2
Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 11.1.0.7
Information in this document applies to any platform.

Purpose

The goal of the Oracle Real Application Clusters (RAC) Starter Kit is to provide you with the latest information on generic and platform specific best practices for implementing an Oracle RAC cluster. This document is compiled and provided based on Oracle's experience with its global RAC customer base.

This Starter Kit is not meant to replace or supplant the Oracle Documentation set, but rather, it is meant as a supplement to the same. It is imperative that the Oracle Documentation be read, understood, and referenced to provide answers to any questions that may not be clearly addressed by this Starter Kit. 

All recommendations should be carefully reviewed by your own operations group and should only be implemented if the potential gain as measured against the associated risk warrants implementation. Risk assessments can only be made with a detailed knowledge of the system, application, and business environment.

As every customer environment is unique, the success of any Oracle Database implementation, including implementations of Oracle RAC, is predicated on a successful test environment. It is thus imperative that any recommendations from this Starter Kit are thoroughly tested and validated using a testing environment that is a replica of the target production environment before being implemented in the production environment to ensure that there is no negative impact associated with the recommendations that are made.

Scope and Application

This article is intended for use by all new (and existing) Oracle RAC implementers.

RAC Assurance Support Team: RAC Starter Kit and Best Practices (Generic)

RAC Platform Specific Starter Kits and Best Practices

While this note focuses on Generic RAC Best Practices, the following notes contain detailed platform specific best practices. Please refer to the below notes for more specifics, including example step-by-step install cookbooks, and sample system test plans.

Note 811306.1 RAC Assurance Support Team:   RAC Starter Kit and Best Practices (Linux)
Note 811280.1 RAC Assurance Support Team:   RAC Starter Kit and Best Practices (Solaris)
Note 811271.1 RAC Assurance Support Team:   RAC Starter Kit and Best Practices (Windows)
Note 811293.1 RAC Assurance Support Team:   RAC Starter Kit and Best Practices (AIX)
Note 811303.1 RAC Assurance Support Team:   RAC Starter Kit and Best Practices (HP-UX)


RAC Platform Generic Load Testing  and System Test Plan Outline

A critical component of any successful implementation, particularly in the High Availability arena, is testing.  For a RAC environment, testing should include both load generation, to monitor and measure how the system works under heavy load, and a system test plan, to understand how the system reacts to certain types of failures.   To assist with this type of testing, this document contains links to documents to get you started in both of these areas.

Click here for a White Paper on available RAC System Load Testing Tools
Click here for a platform generic RAC System Test Plan Outline

Use these documents to validate your system setup and configuration, and also as a means to practice responses and establish procedures in case of certain types of failures.


RAC Platform Generic Highlighted Recommendations

Highlighted Recommendations are recommendations that are thought to have the greatest impact, or answer most commonly addressed questions or issues. In this case, Generic Highlighted Recommendations talk about commonly asked or encountered issues that are generic to RAC implementations across all platforms.


RAC Platform Generic Best Practices

Beyond the Highlighted Recommendations above, the RAC Assurance Team has recommendations for various different parts/components of your RAC setup. These additional recommendations are broken into categories and listed below.

Getting Started - Preinstallation and Design Considerations

  • Check with the disk vendor that the number of nodes, OS version, RAC version, CRS version, network fabric, and patches are certified, as some storage/SAN vendors may require special certification for a certain number of nodes.
  • Use both external and Oracle provided redundancy for the OCR and Voting disks.  Note 428681.1 explains how to add OCR mirror and how to add additional voting disks.
  • Check the support matrix to ensure supportability of product, version and platform combinations or for understanding any specific steps which need to be completed which are extra in the case of some such combinations.  Note 337737.1
  • Avoid SSH and XAUTH warning before RAC 10G installation. Reference Note 285070.1
  • Consider configuring the system logger to log messages to one central server.
  • For CRS, ASM, and Oracle, ensure that one unique user ID with a single name is in use across the cluster. Problems can occur accessing OCR keys when multiple O/S users share the same UID; this also results in logical corruptions and permission problems which are hard to diagnose.
  • Make sure machine clocks are synchronized on all nodes to the same NTP source.
    Implementing NTP (Network Time Protocol) on all nodes prevents evictions and helps to facilitate problem diagnosis. Use the -x option (i.e. ntpd -x, xntpd -x) if available to prevent time from moving backwards in large steps. Slewing spreads a correction across many small changes so that it does not impact Oracle Clusterware. See Note 759143.1 and the Linux sketch after this list.
  • Eliminate any single points of failure in the architecture. Examples include (but are not limited to):  Cluster interconnect redundancy (NIC bonding etc), multiple access paths to storage, using 2 or more HBA's or initiators and multipathing software, and Disk mirroring/RAID
  • Plan and document capacity requirements.  Work with your server vendor to produce a detailed capacity plan and system configuration, but consider: use the normal capacity planning process to estimate the number of CPUs required to run the workload. Both SMP and RAC clusters have synchronization costs as the number of CPUs increases; SMPs normally scale well for a small number of CPUs, while RAC clusters normally scale better than SMPs for a large number of CPUs. Typical synchronization cost: 5-20%.
  • Use proven high availability strategies.  RAC is one component in a high availability architecture. Make sure all parts are covered.  Review Oracle's Maximum Availability Architecture recommendations and references further down in this document.
  • It is strongly advised that a production RAC instance does not share a node with a DEV, TEST, QA or TRAINING instance. These extra instances can often introduce unexpected performance changes into a production environment.
  • Configure servers to boot from SAN disk rather than local disk, for easier repair, quicker provisioning, and consistency.
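
As an illustration of the NTP slewing recommendation above, a hedged Linux-specific sketch; the file location and service name are distribution-dependent, and other platforms are covered in Note 759143.1:

# cat /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"       <- the -x option enables slewing
# service ntpd restart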

Clusterware Considerations

  • Configure 3 or more voting disks (always an odd number).  This is because losing 1/2 or more of all of your voting disks will cause nodes to get evicted from the cluster, or nodes to kick themselves out of the cluster.

Networking Considerations

  • Underscores should not be used in host or domain names, per RFC 952 (the DoD Internet host table specification). The same applies to net, host, gateway, and domain names. Reference: http://www.faqs.org/rfcs/rfc952.html
  • Ensure the default gateway is on the same subnet as the VIP. Otherwise this can cause problems with racgvip and cause the vip and listener to keep restarting.
  • Make sure network interfaces have the same name on all nodes. This is required. To check - use ifconfig (on Unix) or ipconfig (on Windows).
  • Use Jumbo Frames if supported and possible in the system. Reference: Note 341788.1
  • Use non-routable network addresses for private interconnect; Class A: 10.0.0.0 to 10.255.255.255, Class B: 172.16.0.0 to 172.31.255.255, Class C: 192.168.0.0 to 192.168.255.255.  Reference: http://www.faqs.org/rfcs/rfc1918.html and Note 338924.1
  • Make sure network interfaces are configured correctly in terms of speed, duplex, etc. Various tools exist to monitor and test network: ethtool, iperf, netperf, spray and tcp. Note 563566.1
  • Configure NICs for fault tolerance (bonding/link aggregation). Note 787420.1.
  • Performance: check for faulty switches and bad HBAs or ports which drop packets. Most network-related evictions occur either when there is too much traffic on the interconnect (its capacity is exhausted, which is where link aggregation or another hardware solution helps) or when the switch or network card is not configured properly. The latter is evident from "netstat -s | grep udp" (if using the UDP protocol for RAC IPC), which will show underflows (UDP buffer size configuration) or errors due to bad ports, switches, network cards, or network card settings. Review these counters in the context of errors reported for packets sent through the interface (see the sketch after this list).
  • For more predictable hardware discovery, place hba and nic cards in the same corresponding slot on each server in the Grid.
  • Ensure that all network cables are terminated in a grounded socket. A switch is required for the private network. Use dedicated redundant switches for the private interconnect and weigh VLAN considerations carefully. RAC and Clusterware deployment best practices recommend that the interconnect be deployed on a stand-alone, physically separate, dedicated switch.
  • Deploying the RAC/Clusterware interconnect on a shared switch, segmented VLAN may expose the interconnect links to congestion and instability in the larger IP network topology. If deploying the interconnect on a VLAN, there should be a 1:1 mapping of VLAN to non-routable subnet and the VLAN should not span multiple VLANs (tagged) or multiple switches. Deployment concerns in this environment include Spanning Tree loops when the larger IP network topology changes, Asymmetric routing that may cause packet flooding, and lack of fine grained monitoring of the VLAN/port.
  • Consider using Infiniband on the interconnect for workloads that have high volume requirements.   Infiniband can also improve performance by lowering latency, particularly with Oracle 11g, with the RDS protocol.  See Note 751343.1.
  • Configure IPC address first in listener.ora address list. For databases upgraded from earlier versions to 10gR2 the netca did not configure the IPC address first in the listener.ora file. In 10gR2 this is the default but if you upgrade this isn't changed unless you do it manually. Failure to do so can adversely impact the amount of time it takes the VIP to fail over if the public network interface should fail. Therefore, check the 10gR1 and 10gR2 listener.ora file. Not only should the IPC address be contained in the address list but it should be FIRST. Note 403743.1
  • Increase the SDU (and, in older versions, the TDU as well) to a higher value (e.g. 4 KB, 8 KB, up to 32 KB), thus reducing round trips on the network and possibly decreasing response time and the overall perceived responsiveness of the system.  Note 44694.1
  • To avoid ORA-12545 errors, ensure that client HOSTS files and/or DNS are furnished with both VIP and Public hostnames.
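
A hedged sketch of the interface and UDP checks mentioned above, for a Linux host whose interconnect NIC is eth1 (the interface name is an example):

$ /sbin/ifconfig eth1                         <- confirm the interface name, IP and MTU match on every node
# ethtool eth1 | egrep 'Speed|Duplex'         <- confirm negotiated speed and full duplex
$ netstat -su                                 <- UDP statistics; watch for packet receive errors and buffer errors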

Storage Considerations

  • Ensure correct mount options for NFS disks when RAC is used with NFS. The documented mount options are detailed in Note 359515.1 for each platform.
  • Implement multiple access paths to storage array using two or more HBAs or initiators with multi-pathing software over these HBAs. Where possible, use the pseudo devices (multi-path I/O) as the diskstring for ASM. Examples are: EMC PowerPath, Veritas DMP, Sun Traffic Manager, Hitachi HDLM, IBM SDDPC, Linux 2.6 Device Mapper. This is useful for I/O loadbalancing and failover. Reference: Note 294869.1 and Note 394956.1
  • Adhere to ASM best practices. Reference: Note 265633.1 ASM Technical Best Practices
  • ORA-15196 (ASM block corruption) can occur if LUNs larger than 2 TB are presented to an ASM diskgroup. As a result of the fix, ORA-15099 will be raised if a disk larger than 2 TB is specified. This is irrespective of the presence of asmlib. Workaround: do not add disks larger than 2 TB to a diskgroup. Reference: Note 6453944.8
  • On some platforms repeated warnings about AIO limits may be seen in the alert log:
    "WARNING: Oracle process running out of OS kernel I/O resources." Apply patch 6687381, available on many platforms. This issue affects 10.2.0.3, 10.2.0.4, and 11.1.0.6 and is fixed in 11.1.0.7. Note 6687381.8
  • Create two ASM disk groups, one for the database area and one for the flash recovery area, on separate physical disks. RAID storage array LUNs can be used as ASM disks to minimize the number of LUNs presented to the OS. Place database and redo log files in the database area (see the sketch after this list).
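
A hedged sketch of the two-diskgroup layout, using example diskgroup names DATA and FRA and purely illustrative multipath pseudo-device paths; the CREATE DISKGROUP statements run in the ASM instance and the ALTER SYSTEM statements in the database instance:

SQL> create diskgroup DATA external redundancy disk '/dev/mapper/data_lun1', '/dev/mapper/data_lun2';
SQL> create diskgroup FRA external redundancy disk '/dev/mapper/fra_lun1';

SQL> alter system set db_create_file_dest='+DATA' scope=both sid='*';
SQL> alter system set db_recovery_file_dest_size=200G scope=both sid='*';    <- size is illustrative
SQL> alter system set db_recovery_file_dest='+FRA' scope=both sid='*';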

Installation Considerations

  • Check cluster prerequisites using cluvfy (Cluster Verification Utility). Use cluvfy at all stages prior to and during installation of Oracle software. Also, rather than using the version on the installation media, it is crucial to download the latest version of cluvfy from OTN: http://www.oracle.com/technology/products/database/clustering/cvu/cvu_download_homepage.html. Note 339939.1 and Note 316817.1 contain more relevant information on this topic.
  • It is recommended to patch the Clusterware Home to the desired level before doing any RDBMS or ASM home install.
    For example, install clusterware 10.2.0.1 and patch to 10.2.0.4 before installing 10.2.0.1 RDBMS.
  • Install ASM in a separate ORACLE_HOME from the database for maintenance and availability reasons (e.g., to patch and upgrade it independently).
  • If you are installing Oracle Clusterware as a user that is a member of multiple operating system groups, the installer installs files on all nodes of the cluster with group ownership set to that of the user's current active (primary) group. Therefore, either ensure that the first group listed for the user in /etc/group is the current active group, or invoke the Oracle Clusterware installation with the following additional command line option to force the installer to use the proper group when setting group ownership on all files: runInstaller s_usergroup=current_active_group (Bug 4433140)
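
Typical cluvfy invocations for the checks described above might look like the following (node names are placeholders; run as the Oracle software owner):

    # Before installing Oracle Clusterware, from the staged CVU / installation media location
    ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose

    # After the Clusterware installation, and before installing the ASM/RDBMS homes
    cluvfy stage -post crsinst -n node1,node2 -verbose
    cluvfy stage -pre dbinst -n node1,node2 -verbose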

Patching Considerations

This section is targeted towards customers beginning a new implementation of Oracle Real Application Clusters, or customers who are developing a proactive patching strategy for an existing implementation. For new implementations, it is strongly recommended that the latest available patchset for your platform be applied at the outset of your testing. Where the latest version of the RDBMS cannot be used because of lags in internal or 3rd party application certification, or due to other limitations, it is still supported to have the CRS and ASM homes running at a later patch level than the RDBMS home; it may therefore still be possible to run the CRS or ASM home at the latest patchset level. As a best practice (with some exceptions, see the Note in the references section below), Oracle Support recommends that the following be true:
  • The CRS_HOME must be at a patch level or version that is greater than or equal to the patch level or version of the ASM home, and at a patch level or version that is greater than or equal to the patch level or version of the RDBMS home.
  • The ASM_HOME must be at a patch level or version that is greater than or equal to the patch level or version of the RDBMS home, and at a patch level or version that does not exceed the patch level or version of the CRS_HOME.
  • Before patching the database, ASM or Clusterware homes using OPatch, check the available space on the filesystem and use Note 550522.1 to estimate how much space will be needed and how to handle the situation if the filesystem fills up during the patching process (see the space-check sketch below).
  • Review known issues specific to the 10.2.0.4.0 patchset: Note 555579.1.
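
As a sketch of the pre-patching space check, assuming a recent OPatch release that includes the CheckSystemSpace prerequisite check (the patch directory path is a placeholder):

    # Space already consumed by previous patch backups
    du -sh $ORACLE_HOME/.patch_storage

    # Ask OPatch whether enough free space exists for the patch about to be applied
    cd /path/to/unzipped_patch
    $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -ph ./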

    For more detailed notes and references on patching in a RAC environment, see the patching section below, in the "RAC Platform Generic References" section at the end of this note.

Upgrade Considerations

  • Begin with a minimum version of 10.2.0.3 when upgrading from 10.2.0.X to 11.X
  • Use rolling upgrades where appropriate for Oracle Clusterware (CRS) Note 338706.1.  For detailed upgrade assistance, refer to the appropriate Upgrade Companion for your release:  Note 466181.1 10g Upgrade Companion and Note 601807.1 Oracle 11gR1 Upgrade Companion
  • For information about upgrading a database using a transient logical standby, refer to:  Note 949322.1 : Oracle11g Data Guard: Database Rolling Upgrade Shell Script

Oracle VM Considerations

Database Initialization Parameter Considerations

  • Set PRE_PAGE_SGA=false. If set to true, it can significantly increase the time required to establish database connections. If clients complain that connections to the database are very slow, consider setting this parameter to false: doing so avoids mapping the whole SGA at process startup and thus saves connection time. (An illustrative set of parameter commands follows this list.)
  • Set PARALLEL_MIN_SERVERS to CPU_COUNT-1. This pre-spawns recovery slaves at startup time and avoids having to spawn them when recovery is required, which could otherwise delay recovery because the slaves are started serially. Note that SGA memory for the PX msg pool is allocated for all PARALLEL_MAX_SERVERS if you set PARALLEL_MIN_SERVERS.
  • Tune PARALLEL_MAX_SERVERS to your hardware. Start with 2 * (2 threads) * CPU_COUNT = 4 x CPU count, and repeat the test with higher values against representative test data.
  • Consider setting FAST_START_PARALLEL_ROLLBACK. This parameter determines how many processes are used for transaction recovery, which is done after redo application. Optimizing transaction recovery is important to ensure an efficient workload after an unplanned failure. As long as the system is not CPU bound, setting this to a value of HIGH is a best practice. This causes Oracle to use four times the CPU count (4 X cpu_count) parallel processes for transaction recovery. The default for this parameter is LOW, or two times the CPU count (2 X cpu_count).
  • Set FAST_START_MTTR_TARGET to a non-zero value in seconds. Crash recovery will complete within this desired time frame.
  • In 10g and 11g databases, init parameter ACTIVE_INSTANCE_COUNT should no longer be set. This is because the RACG layer doesn't take this parameter into account. As an alternative, you should create a service with one preferred instance.
  • Increase PARALLEL_EXECUTION_MESSAGE_SIZE from the default (normally 2048) to 8192. This can be set higher for data warehouse systems where a lot of data is transferred through PQ.
  • Set OPTIMIZER_DYNAMIC_SAMPLING = 1, or simply gather statistics on your objects, because 10g dynamic sampling can generate extra CR buffers during execution of SQL statements.
  • Tune Data Guard to avoid cluster-related waits. Improperly tuned Data Guard settings can cause high LOG FILE SYNC WAIT and GLOBAL CACHE LOG FLUSH TIME. Reference: http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_DataGuardNetworkBestPractices.pdf, http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_RecoveryBestPractices.pdf, http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_Roadmap.pdf
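
As an illustration only, a few of the settings above might be applied as follows when an spfile is in use; the values shown are placeholders to be validated against your own hardware and workload testing:

    ALTER SYSTEM SET pre_page_sga = FALSE SCOPE=SPFILE SID='*';
    ALTER SYSTEM SET parallel_min_servers = 7 SCOPE=BOTH SID='*';                 -- e.g. CPU_COUNT-1 on an 8-CPU node
    ALTER SYSTEM SET fast_start_parallel_rollback = HIGH SCOPE=BOTH SID='*';
    ALTER SYSTEM SET fast_start_mttr_target = 300 SCOPE=BOTH SID='*';             -- target crash recovery time, in seconds
    ALTER SYSTEM SET parallel_execution_message_size = 8192 SCOPE=SPFILE SID='*';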

Performance Tuning Considerations

In any database system, RAC or single instance, the most significant performance gains are usually obtained from traditional application tuning techniques. The benefits of those techniques are even more remarkable in a RAC database.
  • Many sites run with too few redo logs, or with logs that are sized too small. With too few redo logs configured, there is the potential that the archiver process(es) cannot keep up, which could cause the database to stall. Small redo logs cause frequent log switches, which can put a high load on the buffer cache and I/O system. As a general practice, each thread should have at least three redo log groups with two members in each group.
    Oracle Database 10g introduced the Redo Logfile Size Advisor, which determines the smallest optimal online redo log file size based on the current FAST_START_MTTR_TARGET setting and corresponding statistics; the advisor is therefore enabled only if FAST_START_MTTR_TARGET is set.
    The advice is exposed through the OPTIMAL_LOGFILE_SIZE column of V$INSTANCE_RECOVERY, which shows the redo log file size (in megabytes) considered optimal for the current FAST_START_MTTR_TARGET setting. It is recommended that you size all online redo log files to at least this value (see the sketch after this list).
  • Avoid and eliminate long full table scans in OLTP environments.
  • Use Automatic Segment Space Management (ASSM); it is the default for new tablespaces in 10gR2 and higher. All tablespaces except SYSTEM, TEMP, and UNDO should use ASSM.
  • Increasing sequence caches in insert-intensive applications improves instance affinity to index keys deriving their values from sequences. Increase the cache for application sequences and some system sequences for better performance; use a large cache value, perhaps 10,000 or more. Additionally, use of the NOORDER attribute is most effective, but it does not guarantee that sequence numbers are generated in order of request (NOORDER is the default).
  • The default cache setting for the SYS.AUDSES$ sequence is 20, which is too low for a RAC system where logins can occur concurrently from multiple nodes. Refer to Note 395314.1.
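
For example, the redo log size advice and a larger application sequence cache described above can be checked and applied as follows (the sequence owner and name are placeholders for one of your own insert-intensive sequences):

    -- Redo Logfile Size Advisor: populated only when FAST_START_MTTR_TARGET is set
    SELECT optimal_logfile_size FROM v$instance_recovery;

    -- Increase the cache on a hot application sequence (placeholder owner/name)
    ALTER SEQUENCE app_owner.order_id_seq CACHE 10000 NOORDER;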

General Configuration Considerations

  • In 10gR2 and above, the LMS process is intended to run in the real-time scheduling class. In some instances we have seen this prevented due to incorrect ownership or permissions on the oradism executable, which is stored in the $ORACLE_HOME/bin directory. See Note 602419.1 for more details.
  • Avoid setting the ORA_CRS_HOME environment variable. Setting this variable can cause problems for various Oracle components, and it is never necessary for CRS programs because they all have wrapper scripts.
  • Use Enterprise Manager or Grid Control to create database services - all features are available in one tool. For 10.1 and 10.2 one can also use DBCA to create these services, and hence define their preferred and available instances, as part of database creation. However, in 11.1.0.6 this is only available in Enterprise Manager and has been removed from DBCA.
  • Configure Oracle Net Services load balancing properly to distribute connections. Load balancing should be used in combination with 10g Workload Services to provide the highest availability. The CLB_GOAL attribute of 10g workload services should be configured appropriately depending upon application requirements. Different workloads might require different load balancing goals. Use separate services for each workload with different CLB_GOAL.
  • Ensure the NUMA (Non Uniform Memory Architecture) feature is turned OFF unless explicitly required and tested, as there have been issues reported with NUMA enabled.  Refer to Note 759565.1 for more details.
  • Read and follow the Best Practices Guide for XA and RAC to avoid problems with XA transactions being split across RAC Instances. Reference: http://www.oracle.com/technology/products/database/clustering/pdf/bestpracticesforxaandrac.pdf
  • Increase the retention period for AWR data from 7 days to at least one business cycle. Use the awrinfo.sql script to estimate how much space the retained AWR data will require in SYSAUX and size it accordingly.
  • ONS spins consuming high CPU and/or memory. This is fixed in 10.2.0.4 & 11.1.0.6. Refer to Note 4417761.8 and Note 731370.1 for more details and workaround.
  • Use SRVCTL to register resources as the Oracle user (not as the root user). Registering database, instance, ASM, listener, and service resources as root can lead to inconsistent behavior. During Clusterware installation, nodeapps are created by the root user; only the VIP resource should be owned by root. Any other resources owned by root will need to be removed (as root) and then re-created as the Oracle user. Check the OCRDUMP output for resource keys owned by root.
  • For versions 10gR2 and 11gR1, it is a best practice on all platforms to set the CSS diagwait parameter to 13 in order to provide time for dumping diagnostics in case of node evictions (see the sketch after this list). Note 559365.1 has more details on diagwait. In 11gR2 it is possible, but should not be necessary, to set diagwait.
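
A sketch of the diagwait change per Note 559365.1; the Clusterware stack must be stopped on all nodes while the value is set, and the commands are run as root from the CRS home (<CRS_HOME> is a placeholder):

    # On every node, stop Oracle Clusterware first
    <CRS_HOME>/bin/crsctl stop crs

    # On one node, set diagwait to 13 seconds and verify the new value
    <CRS_HOME>/bin/crsctl set css diagwait 13 -force
    <CRS_HOME>/bin/crsctl get css diagwait

    # Restart Clusterware on all nodes
    <CRS_HOME>/bin/crsctl start crs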

E-Business Suite (with RAC) Considerations

  • Patch against known issues Bug 6142040 :  ICM DOES NOT UPDATE TARGET NODE AFTER FAILOVER and Bug 6161806 : APPSRAP: PCP NODE FAILURE IS NOT WORKING 
  • Change RAC APPS default setting to avoid slow Purchase Order approval.  Note 339508.1
  • It is recommended to set the parameter max_commit_propagation_delay = 0 in the init.ora or spfile for the E-Business Suite on RAC (see the example after this list). Note 259454.1
  • You can use Advanced Planning and Scheduling (APS) on a separate RAC cluster. Merging APS into the OLTP database and isolating its load to a separate RAC instance is supported. Refer to Knowledge Documents Note 279156.1 and Note 286729.1 for more details.
  • You can run Email Center in a RAC environment. Reference Knowledge Document Note 272266.1 for RAC related specific instructions.
  • You can run Oracle Financial Services Applications (OFSA) in a RAC environment. Refer to Knowledge Document Note 280294.1 for RAC related best practices.
  • Activity Based Management (ABM) is supported in a RAC environment. Reference Knowledge Document Note 303542.1 for RAC related best practices.
  • When using Oracle Application Tablespace Migration Utility (OATM) in a RAC environment, be sure to follow the instructions for RAC environments in Note 404954.1.
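
For instance, with an spfile in use, the commit propagation setting above might be applied as follows (the parameter is static, so all instances must be restarted; in 10gR2 the default is already 0):

    ALTER SYSTEM SET max_commit_propagation_delay = 0 SCOPE=SPFILE SID='*';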

Peoplesoft (with RAC) Considerations

  • Each instance and service must have its own row in the PSDBOWNER table. The PSDBOWNER table must have as many rows as the number of database instances in the cluster plus the number of services in the database.
  • If the batch servers are on database nodes, set UseLocalOracleDB=1. By default the Process Scheduler connects to the database using SQL*Net over TCP/IP even when it is running locally. If UseLocalOracleDB=1 is set in the Process Scheduler domain configuration file (prcs.conf), it uses a bequeath connection rather than TCP/IP, which improves performance. If UseLocalOracleDB=1 is set, ORACLE_SID must also be set in the PeopleSoft user's profile, otherwise the Process Scheduler will not boot (see the sketch after this list).
  • For the REN (Remote Event Notification) server to work properly, the DB_NAME parameter must match between the Application Server domain and the Process Scheduler domain configuration used to run the report. With RAC, always use the service name as the database name for the application and batch servers; this keeps DB_NAME consistent for REN and balances the load across all instances.
  • See Note 747587.1 regarding PeopleSoft Enterprise PeopleTools Certifications
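
A minimal sketch of the Process Scheduler settings described above, using the file and variable names as given in this note (exact file locations, domain names and the instance name are environment-specific placeholders):

    # In the Process Scheduler domain configuration file (prcs.conf)
    UseLocalOracleDB=1

    # In the PeopleSoft OS user's profile, so the scheduler boots and uses a bequeath connection
    export ORACLE_SID=PRODDB1    # placeholder local instance name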

Tools/Utilities for Diagnosing and Working with Oracle Support

    • Install and run OSWatcher (OSW) proactively for OS resource utilization diagnosability. OSW is a collection of UNIX shell scripts that collect and archive operating system and network metrics to aid in diagnosing performance issues; it is designed to run continuously and to write the metrics to ASCII files saved to an archive directory. The amount of archived data saved and the frequency of collection are based on user parameters set when starting OSW. It is highly recommended that OSW be installed and run continuously on ALL cluster nodes, at all times. Note 301137.1. When using OSWatcher in a RAC environment, each node must write its output files to a separate archive directory; combining the output files under one archive (even on shared storage) is not supported and causes the OSWg tool to crash. Shared storage is fine, but each node needs its own archive directory.
    • Use the ASM command line utility (asmcmd) to manage Automatic Storage Management (ASM). Oracle Database 10gR2 provides two new options to access and manage ASM files and related information via a command line interface: asmcmd and ASM FTP. Note 332180.1 discusses asmcmd and provides a sample Linux shell script to demonstrate asmcmd in action (see the brief sketch after this list).
    • Use the cluster deinstall tool to remove a CRS installation, if needed. The clusterdeconfig tool removes and deconfigures all of the software and shared files that are associated with an Oracle Clusterware or Oracle RAC database installation, on all of the nodes in a cluster. Reference: http://www.oracle.com/technology/products/database/clustering/index.html
    • Use diagcollection.pl for CRS diagnostic collections. Located in $ORA_CRS_HOME/bin as part of a default installation. Note 330358.1
    • On Windows and Linux Platforms, the Cluster Health Monitor can be used to track OS resource consumption and collect and analyze data cluster-wide. For more information, and to download the tool, refer to the following link on OTN:  http://www.oracle.com/technology/products/database/clustering/ipd_download_homepage.html
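
    A few representative asmcmd commands for day-to-day checks (run from the ASM home with ORACLE_SID pointing at the local ASM instance; the instance, diskgroup and path names are placeholders):

      $ export ORACLE_SID=+ASM1        # local ASM instance (placeholder)
      $ asmcmd lsdg                    # list diskgroups and their space usage
      $ asmcmd ls -l +DATA             # list contents of the DATA diskgroup
      $ asmcmd du +DATA/PROD/DATAFILE  # space used under a path (placeholder)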

    11gR2 Specific Considerations

    RAC Platform Generic References

    CRS / RAC Related References

    RAC / RDBMS Related References

    VIP References

    • Note 298895.1 Modifying the default gateway address used by the Oracle 10g VIP
    • Note 338924.1 CLUVFY Fails With Error: Could not find a suitable set of interfaces for VIPs

    ASM References

    11.2 References

    • Note 1050693.1 Troubleshooting 11.2 Clusterware Node Evictions (Reboots)
    • Note 1053147.1 11gR2 Clusterware and Grid Home - What You Need to Know

    Infiniband References

    MAA / Standby References

    Oracle's Maximum Availability Architecture (MAA) provides superior data protection and availability by minimizing or eliminating planned and unplanned downtime at all technology stack layers including hardware or software components. Data protection and high availability are achieved regardless of the scope of a failure event - whether from hardware failures that cause data corruptions or from catastrophic acts of nature that impact a broad geographic area.

    MAA also eliminates guesswork and uncertainty when implementing a high availability architecture utilizing the full complement of Oracle HA technologies. RAC is an integral component of the MAA architecture, but is just one piece of the MAA strategy. The following references provide more background and references on the Oracle MAA strategy:

    Patching References

    • Note 854428.1 Intro to Patch Set Updates (PSU)
    • Note 850471.1 Oracle Announces First Patch Set Update For Oracle Database Release 10.2
    • Note 756671.1 Oracle Recommended Patches -- Oracle Database
    • Note 567631.1 How to Check if a Patch requires Downtime?
    • Note 761111.1 Online Patches
    • Note 438314.1 Critical Patch Update - Introduction to Database n-Apply CPUs
    • Note 405820.1 10.2.0.X CRS Bundle Patch Information
    • Note 810663.1 11.1.0.X CRS Bundle Patch Information
    • Note 742060.1 Release Schedule of Current Database Patch Sets
    • Note 363254.1 Applying one-off Oracle Clusterware patches in a mixed version home environment
    • Note 550522.1 How To Avoid Disk Full Issues Because OPatch Backups Take Big Amount Of Disk Space.
    • Note 555579.1  10.2.0.4 Patch Set - Availability and Known Issues

      Upgrade References

    E-Business References

    • 11g E-business white papers: http://www.oracle.com/apps_benchmark/html/white-papers-e-business.html
    • Note 455398.1 Using Oracle 11g Release 1 Real Application Clusters and Automatic Storage Management with Oracle E-Business Suite Release 11i (11.1.0.7)
    • Note 388577.1 Using Oracle 10g Release 2 Real Application Clusters and Automatic Storage Management with Oracle E-Business Suite Release 12
    • Note 559518.1 Cloning Oracle E-Business Suite Release 12 RAC-Enabled Systems with Rapid Clone
    • Note 165195.1 Using AutoConfig to Manage System Configurations with Oracle Applications 11i
    • Note 294652.1 E-Business Suite 11i on RAC : Configuring Database Load balancing & Failover
    • Note 362135.1 Configuring Oracle Applications Release 11i with 10g R2 RAC and ASM
    • Note 362203.1 Oracle Applications Release 11i with Oracle 10g Release 2 (10.2.0)
    • Note 241370.1 Concurrent Manager Setup and Configuration Requirements in an 11i RAC Environment
    • Note 240818.1 Concurrent Processing: Transaction Manager Setup and Configuration Requirement in an 11i RAC Environment

    Unix References

    Weblogic/RAC References

    References Related to Working with Oracle Support

    My Oracle Support (formerly MetaLink) Knowledge Documents
    • Note 736737.1 My Oracle Support - The Next Generation Support Platform
    • Note 730283.1 Get the most out of My Oracle Support
    • Note 747242.1 My Oracle Support Configuration Management FAQ
    • Note 209768.1 Database, FMW, Em Grid Control, and OCS Software Error Correction Support Policy
    • Note 868955.1 My Oracle Support Health Checks Catalog
    Process Oriented and Self Service Notes
    Service Request Diagnostics


            Modification History
            [11-Aug-2009] created this Modification History section

            [21-Aug-2009] added ORA-12545 suggestion

            [16-Sep-2009] changed IPD/OS to new name:  Cluster Health Monitor

            [22-Sep-2009] added opatch patch number

            [29-Sep-2009]  clarified support of OATM in RAC environments

            [09-Oct-2009]  added odd # of voting disks recommendation and reference to Health Check catalog note

            [23-Oct-2009]  added reference to space considerations while patching and 11.1 CRS patch bundle reference

            [10-Nov-2009]  uploaded new version of RAC System Load Testing white paper

            [12-Nov-2009]  added 11gR2 specific section

            [24-Nov-2009]  added Infiniband References

            [20-Nov-2009]  added link to 11gR2 upgrade presentation and reference to 555579.1 and 454506.1

            [09-Dec-2009]  added 'REN' success factor

            [21-Dec-2009]  added reference to Rapid Oracle RAC Standby Deployment white paper, Golden Gate reference, created Oracle VM section, added optimizer reference to the 11gR2 section, added reference to PeopleSoft Enterprise PeopleTools Certifications

            [7-Jan-2010]  added some MAA/Standby reference links

            [19-Jan-2010] added reference to Note 1050693.1

            [27-Jan-2010] added reference to Note 1053147.1 11gR2 Clusterware and Grid Home - What You Need to Know

            [28-Jan-2010] modified diagwait best practice to include information on 11gR2

            [1-Feb-2010]  added reference to Note 949322.1 Oracle11g Data Guard: Database Rolling Upgrade Shell Script

            [3-Feb-2010]  added reference to Database Upgrade Using Transportable Tablespaces





          Blogged with the Flock Browser