Step by Step Upgrade Oracle RAC Grid Infrastructure and Database from 11g to 12c

 

Upgrade RAC Grid and Database from 11.2.0.4 to 12.1.0.2 :-

Main steps :

Grid :-

  1.  Check all services are up and running from 11gR2 GRID_HOME
  2.  Perform backup of OCR, voting disk and Database.
  3.  Create new directory for installing 12C software on both RAC nodes.
  4.  Run “runcluvfy.sh” to check for any errors.
  5.  Install and upgrade GRID from 11gR2 to 12cR1
  6. Verify upgrade version

Database  :-

  1. Backup the database before the upgrade
  2. Database upgrade Pre-check
    • Creating Stage for 12c database software
    • Creating directory for 12c oracle home
    • Check the pre upgrade status.
  3. Unzip 12c database software in stage
  4. Install the 12.1.0.2 using the software only installation
  5. Run the preupgrd.sql script in the existing 11.2.0.4 database from the newly installed 12c home.
  6. Run the DBUA to start the database upgrade.
  7. Database post upgrade check.
  8. Check Database version.

Environment variables for the 11g Grid and Database homes :-

GRID :

grid()
{
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME
export ORACLE_SID=+ASM1
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
SQLPATH=/u01/app/oracle/scripts/sql:/u01/app/11.2.0/grid/rdbms/admin:/u01/app/oracle/product/11.2.0/dbhome_1/rdbms/admin; export SQLPATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
}

DATABASE :

11g()
{
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_HOME
ORACLE_BASE=/u01/app/oracle
export ORACLE_BASE
ORACLE_SID=orcl11g1
export ORACLE_SID
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib:.
export LD_LIBRARY_PATH
LIBPATH=$ORACLE_HOME/lib32:$ORACLE_HOME/lib:/usr/lib:/lib
export LIBPATH
TNS_ADMIN=${ORACLE_HOME}/network/admin
export TNS_ADMIN
PATH=$ORACLE_HOME/bin:$PATH:.
export PATH
}
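
These two functions can be appended to the oracle user's ~/.bash_profile, so either environment can be loaded with a single command (the profile is sourced the same way later in this article before running preupgrd.sql):

[oracle@racpb1 ~]$ . ~/.bash_profile
[oracle@racpb1 ~]$ grid       # sets the 11g Grid Infrastructure environment (+ASM1)
[oracle@racpb1 ~]$ 11g        # sets the 11g database environment (orcl11g1)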

Upgrade GRID Infrastructure Software to 12c :-

Check GRID Infrastructure software version and Clusterware status:

[oracle@racpb1 ~]$ grid
[oracle@racpb1 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.4.0]

[oracle@racpb1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

Verify all services are up and running from 11gR2 GRID Home :

[oracle@racpb1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE racpb1
ONLINE ONLINE racpb2
ora.LISTENER.lsnr
ONLINE ONLINE racpb1
ONLINE ONLINE racpb2
ora.asm
ONLINE ONLINE racpb1 Started
ONLINE ONLINE racpb2 Started
ora.gsd
OFFLINE OFFLINE racpb1
OFFLINE OFFLINE racpb2
ora.net1.network
ONLINE ONLINE racpb1
ONLINE ONLINE racpb2
ora.ons
ONLINE ONLINE racpb1
ONLINE ONLINE racpb2
ora.registry.acfs
ONLINE ONLINE racpb1
ONLINE ONLINE racpb2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE racpb2
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE racpb1
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE racpb1
ora.cvu
1 ONLINE ONLINE racpb1
ora.oc4j
1 ONLINE ONLINE racpb1
ora.orcl11g.db
1 ONLINE ONLINE racpb1 Open
2 ONLINE ONLINE racpb2 Open
ora.racpb1.vip
1 ONLINE ONLINE racpb1
ora.racpb2.vip
1 ONLINE ONLINE racpb2
ora.scan1.vip
1 ONLINE ONLINE racpb2
ora.scan2.vip
1 ONLINE ONLINE racpb1
ora.scan3.vip
1 ONLINE ONLINE racpb1

Check Database status and configuration :

[oracle@racpb1 ~]$ srvctl status database -d orcl11g
Instance orcl11g1 is running on node racpb1
Instance orcl11g2 is running on node racpb2

[oracle@racpb1 ~]$ srvctl config database -d orcl11g
Database unique name: orcl11g
Database name: orcl11g
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/orcl11g/spfileorcl11g.ora
Domain: localdomain.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl11g
Database instances: orcl11g1,orcl11g2
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Database is administrator managed

Perform local backup of OCR :

[root@racpb1 ~]# mkdir -p /u01/ocrbkp
[root@racpb1 ~]# cd /u01/app/11.2.0/grid/bin/
[root@racpb1 bin]# ./ocrconfig -export /u01/ocrbkp/ocrfile
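
From 11gR2 onwards, the voting disk data is backed up automatically as part of the OCR backups, so no separate voting disk backup is required. The automatically taken OCR backups can be listed from the same GRID home:

[root@racpb1 bin]# ./ocrconfig -showbackup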

Move the 12c GRID Software to the server and unzip the software :

[oracle@racpb1 12102_64bit]$ unzip -d /u01/ linuxamd64_12102_grid_1of2.zip
Archive:  linuxamd64_12102_grid_1of2.zip
   creating: /u01/grid/
.
.

[oracle@racpb1 12102_64bit]$ unzip -d /u01/ linuxamd64_12102_grid_2of2.zip
Archive:  linuxamd64_12102_grid_2of2.zip
   creating: /u01/grid/stage/Components/oracle.has.crs/
.
.

Run the cluvfy utility to pre-check for any errors :

Execute runcluvfy.sh from 12cR1 software location,

[oracle@racpb1 grid]$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/11.2.0/grid -dest_crshome /u01/app/12.1.0/grid -dest_version 12.1.0.2.0 -verbose

Make sure cluvfy completes successfully. If any errors are reported, resolve them before proceeding with the GRID 12cR1 upgrade; review the cluvfy log for details.

Stop the running 11g database :

[oracle@racpb1 ~]$ ps -ef|grep pmon
oracle 3953 1 0 Dec22 ? 00:00:00 asm_pmon_+ASM1
oracle 4976 1 0 Dec22 ? 00:00:00 ora_pmon_orcl11g1
oracle 23634 4901 0 00:55 pts/0 00:00:00 grep pmon

[oracle@racpb1 ~]$ srvctl stop database -d orcl11g

[oracle@racpb1 ~]$ srvctl status database -d orcl11g
Instance orcl11g1 is not running on node racpb1
Instance orcl11g2 is not running on node racpb2

Take GRID_HOME backup on both nodes :

[oracle@racpb1 ~]$ grid
[oracle@racpb1 ~]$ tar -cvf grid_home_11g.tar $ORACLE_HOME
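
The same backup should be taken on the second node as well:

[oracle@racpb2 ~]$ grid
[oracle@racpb2 ~]$ tar -cvf grid_home_11g.tar $ORACLE_HOME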

Check Clusterware services status before upgrade :

[oracle@racpb1 ~]$ crsctl check cluster -all
**************************************************************
racpb1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racpb2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

Start the 12cR1 upgrade by executing runInstaller :

[oracle@racpb1 ~]$ cd /u01/
[oracle@racpb1 u01]$ cd grid/

[oracle@racpb1 grid]$ ./runInstaller 
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 415 MB. Actual 8565 MB Passed
Checking swap space: must be greater than 150 MB. Actual 5996 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2018-12-23_01

Select the Upgrade option to upgrade the GRID Infrastructure and ASM to 12c.

Check the public host names and the existing GRID_HOME.

Uncheck the EM Cloud Control option to disable EM registration.

Specify the ORACLE_BASE and ORACLE_HOME locations for 12c.

The swap size warning can be ignored (the installer expects swap to be twice the size of the server's physical memory).

 

Execute the rootupgrade.sh script on both nodes :

 First node (racpb1)  :-

[root@racpb1 bin]# sh /u01/app/12.1.0/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/12.1.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
2018/12/23 12:18:59 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2018/12/23 12:18:59 CLSRSC-4012: Shutting down Oracle Trace File Analyzer (TFA) Collector.
2018/12/23 12:19:08 CLSRSC-4013: Successfully shut down Oracle Trace File Analyzer (TFA) Collector.
2018/12/23 12:19:19 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2018/12/23 12:19:22 CLSRSC-464: Starting retrieval of the cluster configuration data
2018/12/23 12:19:30 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2018/12/23 12:19:30 CLSRSC-363: User ignored prerequisites during installation
2018/12/23 12:19:38 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2018/12/23 12:19:38 CLSRSC-482: Running command: '/u01/app/12.1.0/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/11.2.0/grid -oldCRSVersion 11.2.0.4.0 -nodeNumber 1 -firstNode true -startRolling true'

ASM configuration upgraded in local node successfully.

2018/12/23 12:19:45 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2018/12/23 12:19:45 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2018/12/23 12:20:36 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
OLR initialization - successful
2018/12/23 12:24:43 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2018/12/23 12:29:05 CLSRSC-472: Attempting to export the OCR
2018/12/23 12:29:06 CLSRSC-482: Running command: 'ocrconfig -upgrade oracle oinstall'
2018/12/23 12:29:23 CLSRSC-473: Successfully exported the OCR
2018/12/23 12:29:29 CLSRSC-486:
At this stage of upgrade, the OCR has changed.
Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2018/12/23 12:29:29 CLSRSC-541:
To downgrade the cluster:
1. All nodes that have been upgraded must be downgraded.

2018/12/23 12:29:30 CLSRSC-542:
2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.

2018/12/23 12:29:30 CLSRSC-543:
3. The downgrade command must be run on the node racpb1 with the '-lastnode' option to restore global configuration data.
2018/12/23 12:29:55 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2018/12/23 12:30:19 CLSRSC-474: Initiating upgrade of resource types
2018/12/23 12:31:12 CLSRSC-482: Running command: 'upgrade model -s 11.2.0.4.0 -d 12.1.0.2.0 -p first'
2018/12/23 12:31:12 CLSRSC-475: Upgrade of resource types successfully initiated.
2018/12/23 12:31:21 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Second node (racpb2)  :-

[root@racpb2 ~]# sh /u01/app/12.1.0/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/12.1.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
2018/12/23 12:34:35 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2018/12/23 12:35:15 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2018/12/23 12:35:17 CLSRSC-464: Starting retrieval of the cluster configuration data
2018/12/23 12:35:24 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2018/12/23 12:35:24 CLSRSC-363: User ignored prerequisites during installation
ASM configuration upgraded in local node successfully.
2018/12/23 12:35:41 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2018/12/23 12:36:10 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
OLR initialization - successful
2018/12/23 12:36:37 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2018/12/23 12:39:54 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Start upgrade invoked..
2018/12/23 12:40:21 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded

2018/12/23 12:40:21 CLSRSC-482: Running command: '/u01/app/12.1.0/grid/bin/crsctl set crs activeversion'

Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the OCR.
Started to upgrade the CSS.
The CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Successfully upgraded the Oracle Clusterware.
Oracle Clusterware operating version was successfully set to 12.1.0.2.0
2018/12/23 12:42:33 CLSRSC-479: Successfully set Oracle Clusterware active version

2018/12/23 12:42:39 CLSRSC-476: Finishing upgrade of resource types

2018/12/23 12:43:00 CLSRSC-482: Running command: 'upgrade model -s 11.2.0.4.0 -d 12.1.0.2.0 -p last'

2018/12/23 12:43:00 CLSRSC-477: Successfully completed upgrade of resource types

2018/12/23 12:43:34 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

After running the rootupgrade.sh script on both nodes, click the OK button in the installer.

Check the Clusterware upgrade version:

[root@racpb1 ~]# cd /u01/app/12.1.0/grid/bin/
[root@racpb1 bin]# ./crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]

Note: If you are upgrading from 11.2.0.1/11.2.0.2/11.2.0.3 to 12cR1, you may need to apply additional patches before proceeding with the upgrade.

Start the 11g database :

[oracle@racpb1 ~]$ srvctl start database -d orcl11g
[oracle@racpb1 ~]$ srvctl status database -d orcl11g
Instance orcl11g1 is running on node racpb1
Instance orcl11g2 is running on node racpb2

Upgrade RAC database from 11gR2 to 12cR1 :-

Backup the database before the upgrade :

Take a level 0 RMAN backup or a cold backup of the database.
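
For example, a minimal RMAN level 0 backup (assuming a configured backup destination such as the fast recovery area) might look like:

[oracle@racpb1 ~]$ rman target /
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;
RMAN> BACKUP CURRENT CONTROLFILE;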

Database upgrade Pre-check :

  • Creating the stage directory for the 12c database software:

[oracle@racpb1 ~]$ mkdir -p /u01/stage
[oracle@racpb1 ~]$ chmod -R 755 /u01/stage/

  • Creating the directory for the 12c ORACLE_HOME:

[oracle@racpb1 ~]$ mkdir -p /u01/app/oracle/product/12.1.0/db_1
[oracle@racpb1 ~]$ chown -R oracle:oinstall /u01/app/oracle/product/12.1.0/db_1
[oracle@racpb1 ~]$ chmod -R 775 /u01/app/oracle/product/12.1.0/db_1

  • Check the pre-upgrade status :

Run runcluvfy.sh from grid stage location :

[oracle@racpb1 grid]$ ./runcluvfy.sh stage -pre dbinst -upgrade -src_dbhome /u01/app/oracle/product/11.2.0/dbhome_1 -dest_dbhome /u01/app/oracle/product/12.1.0/db_1 -dest_version 12.1.0.2.0

The above command must complete successfully before the database can be upgraded from 11gR2 to 12cR1.

Unzip 12c database software in stage :

[oracle@racpb1 12102_64bit]$ unzip -d /u01/stage/ linuxamd64_12102_database_1of2.zip

[oracle@racpb1 12102_64bit]$ unzip -d /u01/stage/ linuxamd64_12102_database_2of2.zip

Unset the 11g environment :

unset ORACLE_HOME
unset ORACLE_BASE
unset ORACLE_SID

Install the 12.1.0.2 software using the software-only installation :

Set the new 12c environment and execute runInstaller.

[oracle@racpb1 ~]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
[oracle@racpb1 ~]$ export ORACLE_BASE=/u01/app/oracle
[oracle@racpb1 ~]$ export ORACLE_SID=orcl12c
[oracle@racpb1 ~]$ 
[oracle@racpb1 ~]$ cd /u01/stage/database/
[oracle@racpb1 database]$ ./runInstaller 
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB. Actual 8533 MB Passed
Checking swap space: must be greater than 150 MB. Actual 5999 MB Passed
Checking monitor: must be configured to display at least 256 colors. 
Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2018-12-23_02-05-54PM. Please wait ...

Skip the security updates from Oracle Support.

Select the RAC database installation option.

After the 12c database software installation completes, run the root.sh script on both nodes when the installer prompts for it :

 

Run the preupgrd.sql script :

  • The pre-upgrade script identifies any prerequisite tasks that must be completed on the database before the upgrade.
  • Execute the preupgrd.sql script in the existing 11.2.0.4 database from the newly installed 12c ORACLE_HOME.
[oracle@racpb1 ~]$ . .bash_profile
[oracle@racpb1 ~]$ 11g
[oracle@racpb1 ~]$ cd /u01/app/oracle/product/12.1.0/db_1/rdbms/admin/
[oracle@racpb1 admin]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Mon Dec 24 03:35:26 2018

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> @preupgrd.sql

Loading Pre-Upgrade Package...

***************************************************************************
Executing Pre-Upgrade Checks in ORCL11G...
***************************************************************************

====>> ERRORS FOUND for ORCL11G <<====

The following are *** ERROR LEVEL CONDITIONS *** that must be addressed
prior to attempting your upgrade.
Failure to do so will result in a failed upgrade.

You MUST resolve the above errors prior to upgrade

***************************************************************************

====>> PRE-UPGRADE RESULTS for ORCL11G <<====

ACTIONS REQUIRED:

1. Review results of the pre-upgrade checks:
/u01/app/oracle/cfgtoollogs/orcl11g/preupgrade/preupgrade.log

2. Execute in the SOURCE environment BEFORE upgrade:
/u01/app/oracle/cfgtoollogs/orcl11g/preupgrade/preupgrade_fixups.sql

3. Execute in the NEW environment AFTER upgrade:
/u01/app/oracle/cfgtoollogs/orcl11g/preupgrade/postupgrade_fixups.sql

***************************************************************************
Pre-Upgrade Checks in ORCL11G Completed.
***************************************************************************

Run the DBUA to start the database upgrade :
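
DBUA is a GUI tool (its screens are not reproduced here) and must be launched from the newly installed 12c home, for example:

[oracle@racpb1 ~]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
[oracle@racpb1 ~]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@racpb1 ~]$ dbua

Select the orcl11g database in the wizard; DBUA should detect the RAC configuration and upgrade both instances.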

Check Database version and configuration :-

[oracle@racpb1 ~]$ srvctl config database -d orcl11g
Database unique name: orcl11g
Database name: orcl11g
Oracle home: /u01/app/oracle/product/12.1.0/db_1
Oracle user: oracle
Spfile: +DATA/orcl11g/spfileorcl11g.ora
Password file: 
Domain: localdomain.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: 
Disk Groups: DATA
Mount point paths: 
Services: 
Type: RAC
Start concurrency: 
Stop concurrency: 
OSDBA group: dba
OSOPER group: oinstall
Database instances: orcl11g1,orcl11g2
Configured nodes: racpb1,racpb2
Database is administrator managed

[oracle@racpb1 ~]$ srvctl status database -d orcl11g
Instance orcl11g1 is running on node racpb1
Instance orcl11g2 is running on node racpb2
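
For the post-upgrade check, run the postupgrade_fixups.sql script generated earlier by preupgrd.sql and confirm the version and component status, for example:

[oracle@racpb1 ~]$ sqlplus / as sysdba
SQL> @/u01/app/oracle/cfgtoollogs/orcl11g/preupgrade/postupgrade_fixups.sql
SQL> select banner from v$version;
SQL> select comp_id, version, status from dba_registry;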

The RAC Grid Infrastructure and database have been successfully upgraded from 11g to 12c.

Catch Me On:- Hariprasath Rajaram

Telegram: https://t.me/joinchat/I_f4DhGF_Zifr9YZvvMkRg
LinkedIn: https://www.linkedin.com/in/hari-prasath-aa65bb19/
Facebook: https://www.facebook.com/HariPrasathdba
FB Group: https://www.facebook.com/groups/894402327369506/
FB Page: https://www.facebook.com/dbahariprasath/?
Twitter: https://twitter.com/hariprasathdba

Reinstating a Failed Oracle RAC Primary Database Using the Data Guard Broker

In the previous article, we performed the Oracle RAC to RAC physical standby database failover steps in 11gR2. Now, let us continue with the reinstate process for the failed primary RAC database.

Below are the environment details.

Note:-

As per the current status, the primary site ORCL_PRODRAC1 has failed, and the physical standby site ORCL_PRODRAC2 has already been converted to primary by the failover process. So now ORCL_PRODRAC2 is the primary, and ORCL_PRODRAC1 is the failed primary, which must be reinstated.

Steps involved in reinstating the failed primary RAC database:-

A) Check Database Configuration
B) Performing Reinstate
C) Verify the Reinstate of new physical standby database

A) Check Database Configuration:-

Configuration check in Cluster:-

Failed Primary:

$ srvctl config database -d ORCL_PRODRAC1 -a
Database unique name: ORCL_PRODRAC1
Database name: ORCL
Oracle home: /oradb/app/oracle/product/11.2.0.4/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL_PRODRAC1/spfileORCL_PRODRAC1.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ORCL_PRODRAC1
Database instances: ORCL1,ORCL2
Disk Groups: DBWR_DATA,DBWR_FRA
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed

New Primary:

$ srvctl config database -d ORCL_PRODRAC2 -a
Database unique name: ORCL_PRODRAC2
Database name: ORCL
Oracle home: /oradb/app/oracle/product/11.2.0.4/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL_PRODRAC2/spfileORCL_PRODRAC2.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ORCL_PRODRAC2
Database instances: ORCL1,ORCL2
Disk Groups: DBWR_DATA,DBWR_FRA
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed

DG Broker configuration:-

$ dgmgrl
DGMGRL> connect sys/manager@ORCL_PRODRAC2
DGMGRL> show configuration verbose;

Configuration - ORACLEDBWR

Protection Mode: MaxPerformance
Databases:
ORCL_PRODRAC2 - Primary database
ORCL_PRODRAC1 - Physical standby database (disabled)
ORA-16661: the standby database needs to be reinstated

Properties:
FastStartFailoverThreshold = '30'
OperationTimeout = '30'
FastStartFailoverLagLimit = '30'
CommunicationTimeout = '180'
ObserverReconnect = '0'
FastStartFailoverAutoReinstate = 'TRUE'
FastStartFailoverPmyShutdown = 'TRUE'
BystandersFollowRoleChange = 'ALL'
ObserverOverride = 'FALSE'
ExternalDestination1 = ''
ExternalDestination2 = ''
PrimaryLostWriteAction = 'CONTINUE'

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

B) Performing Reinstate:-

During the reinstate using the Data Guard broker, the failed primary site is flashed back and converted to a physical standby database, and then the media recovery process is started automatically. These steps can be viewed in the alert log of the database being reinstated.

DGMGRL> reinstate database 'ORCL_PRODRAC1';

Reinstating database "ORCL_PRODRAC1", please wait...
Operation requires shutdown of instance "ORCL1" on database "ORCL_PRODRAC1"
Shutting down instance "ORCL1"...
Database closed.
Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "ORCL1" on database "ORCL_PRODRAC1"
Starting instance "ORCL1"...
ORACLE instance started.
Database mounted.
Continuing to reinstate database "ORCL_PRODRAC1" ...
Operation requires shutdown of instance "ORCL1" on database "ORCL_PRODRAC1"
Shutting down instance "ORCL1"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "ORCL1" on database "ORCL_PRODRAC1"
Starting instance "ORCL1"...
ORACLE instance started.
Database mounted.
Continuing to reinstate database "ORCL_PRODRAC1" ...
Reinstatement of database "ORCL_PRODRAC1" succeeded

The alert log of instance ORCL1 of database ORCL_PRODRAC1 records the flashback, conversion, and recovery steps performed during the reinstate.

C) Verify the Reinstate of new physical standby database:-

Configuration check in Cluster:-

Failed Primary:

$ srvctl config database -d ORCL_PRODRAC1 -a
Database unique name: ORCL_PRODRAC1
Database name: ORCL
Oracle home: /oradb/app/oracle/product/11.2.0.4/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL_PRODRAC1/spfileORCL_PRODRAC1.ora
Domain:
Start options: read only
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Server pools: ORCL_PRODRAC1
Database instances: ORCL1,ORCL2
Disk Groups: DBWR_DATA,DBWR_FRA
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed

New Primary:

$ srvctl config database -d ORCL_PRODRAC2 -a
Database unique name: ORCL_PRODRAC2
Database name: ORCL
Oracle home: /oradb/app/oracle/product/11.2.0.4/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL_PRODRAC2/spfileORCL_PRODRAC2.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ORCL_PRODRAC2
Database instances: ORCL1,ORCL2
Disk Groups: DBWR_DATA,DBWR_FRA
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed

Verify in DG Broker configuration:-

$ dgmgrl
DGMGRL> connect sys/manager@ORCL_PRODRAC1
DGMGRL> show configuration verbose;

Configuration - ORACLEDBWR

Protection Mode: MaxPerformance
Databases:
ORCL_PRODRAC2 - Primary database
ORCL_PRODRAC1 - Physical standby database

Properties:
FastStartFailoverThreshold = '30'
OperationTimeout = '30'
FastStartFailoverLagLimit = '30'
CommunicationTimeout = '180'
ObserverReconnect = '0'
FastStartFailoverAutoReinstate = 'TRUE'
FastStartFailoverPmyShutdown = 'TRUE'
BystandersFollowRoleChange = 'ALL'
ObserverOverride = 'FALSE'
ExternalDestination1 = ''
ExternalDestination2 = ''
PrimaryLostWriteAction = 'CONTINUE'

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

From the above, we can see that the failed primary has now been converted to a physical standby database. If needed, we can switch it back to primary; refer to the switchover article for that process.

 


Oracle RAC Database Failover Using the Data Guard Broker

In the previous article, we configured the Data Guard broker and performed the switchover from an Oracle 11gR2 two-node RAC primary to a two-node physical standby. In this article, let us perform the failover process. The environment details are provided below.

Steps involved in failover using the Data Guard broker:-

A) Check Database Configuration before Failover
B) Performing Failover
C) Check database configuration after Failover

A) Check Database Configuration before Failover:-

Configuration check in Cluster:-

Primary:

$ srvctl config database -d ORCL_PRODRAC1 -a
Database unique name: ORCL_PRODRAC1
Database name: ORCL
Oracle home: /oradb/app/oracle/product/11.2.0.4/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL_PRODRAC1/spfileORCL_PRODRAC1.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ORCL_PRODRAC1
Database instances: ORCL1,ORCL2
Disk Groups: DBWR_DATA,DBWR_FRA
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed

Standby:

$ srvctl config database -d ORCL_PRODRAC2 -a
Database unique name: ORCL_PRODRAC2
Database name: ORCL
Oracle home: /oradb/app/oracle/product/11.2.0.4/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL_PRODRAC2/spfileORCL_PRODRAC2.ora
Domain:
Start options: read only
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Server pools: ORCL_PRODRAC2
Database instances: ORCL1,ORCL2
Disk Groups: DBWR_DATA,DBWR_FRA
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed

B) Performing Failover:-

Here, connect to the standby site and perform the failover process.

$ dgmgrl
DGMGRL> connect sys/manager@ORCL_PRODRAC2
DGMGRL> failover to 'ORCL_PRODRAC2';
Performing failover NOW, please wait...
Operation requires a connection to instance "ORCL1" on database "ORCL_PRODRAC2"
Connecting to instance "ORCL1"...
Connected.
Failover succeeded, new primary is "ORCL_PRODRAC2"

C) Check database configuration after Failover:-

Configuration check in Cluster:-

Failed Primary(ORCL_PRODRAC1):

$ srvctl config database -d ORCL_PRODRAC1 -a
Database unique name: ORCL_PRODRAC1
Database name: ORCL
Oracle home: /oradb/app/oracle/product/11.2.0.4/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL_PRODRAC1/spfileORCL_PRODRAC1.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ORCL_PRODRAC1
Database instances: ORCL1,ORCL2
Disk Groups: DBWR_DATA,DBWR_FRA
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed

Failedover Primary (ORCL_PRODRAC2):

$ srvctl config database -d ORCL_PRODRAC2 -a
Database unique name: ORCL_PRODRAC2
Database name: ORCL
Oracle home: /oradb/app/oracle/product/11.2.0.4/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL_PRODRAC2/spfileORCL_PRODRAC2.ora
Domain:
Start options: read only
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ORCL_PRODRAC2
Database instances: ORCL1,ORCL2
Disk Groups: DBWR_DATA,DBWR_FRA
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed

From the above configuration, we can see that the database role for both the failed primary site (ORCL_PRODRAC1) and the new primary site (ORCL_PRODRAC2) is PRIMARY. The failed primary site crashed due to an unplanned outage, so it must be reinstated using the Data Guard broker.


Oracle RAC Database Switchover Using the Data Guard Broker

Description:-

In the previous article, we configured the Data Guard broker for an Oracle RAC database with a physical standby in 11gR2. Here, we will continue with the switchover process. Below is the environment.

Steps involved in switchover using the Data Guard broker:-

A) Check Database Configuration before Switchover
B) Performing Switchover
C) Check database configuration after Switchover

A) Check Database Configuration before Switchover:-

Configuration check in Cluster:-

Primary:

$ srvctl config database -d ORCL_PRODRAC1 -a
Database unique name: ORCL_PRODRAC1
Database name: ORCL
Oracle home: /oradb/app/oracle/product/11.2.0.4/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL_PRODRAC1/spfileORCL_PRODRAC1.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ORCL_PRODRAC1
Database instances: ORCL1,ORCL2
Disk Groups: DBWR_DATA,DBWR_FRA
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed

Standby:

$ srvctl config database -d ORCL_PRODRAC2 -a
Database unique name: ORCL_PRODRAC2
Database name: ORCL
Oracle home: /oradb/app/oracle/product/11.2.0.4/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL_PRODRAC2/spfileORCL_PRODRAC2.ora
Domain:
Start options: read only
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Server pools: ORCL_PRODRAC2
Database instances: ORCL1,ORCL2
Disk Groups: DBWR_DATA,DBWR_FRA
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed

Primary Node 1:-

$ lsnrctl status

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 25-AUG-2018 01:09:45

Copyright (c) 1991, 2013, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date 24-AUG-2018 02:18:33
Uptime 0 days 22 hr. 51 min. 11 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /oradb/app/11.2.0.4/grid/network/admin/listener.ora
Listener Log File /oradb/app/oracle/diag/tnslsnr/prodrac101/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.211)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.214)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "ORCLXDB" has 1 instance(s).
Instance "ORCL1", status READY, has 1 handler(s) for this service...
Service "ORCL_PRODRAC1" has 1 instance(s).
Instance "ORCL1", status READY, has 1 handler(s) for this service...
Service "ORCL_PRODRAC1_DGB" has 1 instance(s).
Instance "ORCL1", status READY, has 1 handler(s) for this service...
Service "ORCL_PRODRAC1_DGMGRL" has 1 instance(s).
Instance "ORCL1", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

Primary Node 2:-

$ lsnrctl status

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 25-AUG-2018 01:11:02

Copyright (c) 1991, 2013, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date 24-AUG-2018 02:18:45
Uptime 0 days 22 hr. 52 min. 17 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /oradb/app/11.2.0.4/grid/network/admin/listener.ora
Listener Log File /oradb/app/oracle/diag/tnslsnr/prodrac102/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.212)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.215)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "ORCLXDB" has 1 instance(s).
Instance "ORCL2", status READY, has 1 handler(s) for this service...
Service "ORCL_PRODRAC1" has 1 instance(s).
Instance "ORCL2", status READY, has 1 handler(s) for this service...
Service "ORCL_PRODRAC1_DGB" has 1 instance(s).
Instance "ORCL2", status READY, has 1 handler(s) for this service...
Service "ORCL_PRODRAC1_DGMGRL" has 1 instance(s).
Instance "ORCL2", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

Standby Node 1:-

$ lsnrctl status

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 25-AUG-2018 01:11:29

Copyright (c) 1991, 2013, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date 24-AUG-2018 02:18:46
Uptime 0 days 22 hr. 52 min. 43 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /oradb/app/11.2.0.4/grid/network/admin/listener.ora
Listener Log File /oradb/app/oracle/diag/tnslsnr/prodrac201/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.221)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.224)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "ORCLXDB" has 1 instance(s).
Instance "ORCL1", status READY, has 1 handler(s) for this service...
Service "ORCL_PRODRAC2" has 1 instance(s).
Instance "ORCL1", status READY, has 1 handler(s) for this service...
Service "ORCL_PRODRAC2_DGB" has 1 instance(s).
Instance "ORCL1", status READY, has 1 handler(s) for this service...
Service "ORCL_PRODRAC2_DGMGRL" has 1 instance(s).
Instance "ORCL1", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

Standby Node 2:-

$ lsnrctl status

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 25-AUG-2018 01:12:10

Copyright (c) 1991, 2013, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date 24-AUG-2018 02:18:55
Uptime 0 days 22 hr. 53 min. 14 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /oradb/app/11.2.0.4/grid/network/admin/listener.ora
Listener Log File /oradb/app/oracle/diag/tnslsnr/prodrac202/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.222)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.225)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM2", status READY, has 1 handler(s) for this service...
Service "ORCL_PRODRAC2" has 1 instance(s).
Instance "ORCL2", status READY, has 1 handler(s) for this service...
Service "ORCL_PRODRAC2_DGB" has 1 instance(s).
Instance "ORCL2", status READY, has 1 handler(s) for this service...
Service "ORCL_PRODRAC2_DGMGRL" has 1 instance(s).
Instance "ORCL2", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

Configuration check at the database level (Primary):-

$ sqlplus / as sysdba
SQL> select name, db_unique_name, database_role, switchover_status, open_mode from v$database;

NAME   DB_UNIQUE_NAME  DATABASE_ROLE  SWITCHOVER_STATUS  OPEN_MODE
------ --------------  -------------  -----------------  ---------
ORCL   ORCL_PRODRAC1   PRIMARY        TO STANDBY         READ WRITE

SQL> @stby_gap
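
stby_gap here is a local helper script rather than an Oracle-supplied one; a minimal sketch that compares the last archived and last applied log sequence per thread might look like:

-- stby_gap.sql (illustrative sketch)
select thread#, max(sequence#) "Last Archived"
  from v$archived_log group by thread#;
select thread#, max(sequence#) "Last Applied"
  from v$archived_log where applied = 'YES' group by thread#;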

DG Broker configuration:-

$ dgmgrl
DGMGRL> connect sys/manager@ORCL_PRODRAC1
DGMGRL> show configuration verbose;


Configuration - ORACLEDBWR

Protection Mode: MaxPerformance
Databases:
ORCL_PRODRAC1 - Primary database
ORCL_PRODRAC2 - Physical standby database

Properties:
FastStartFailoverThreshold = '30'
OperationTimeout = '30'
FastStartFailoverLagLimit = '30'
CommunicationTimeout = '180'
ObserverReconnect = '0'
FastStartFailoverAutoReinstate = 'TRUE'
FastStartFailoverPmyShutdown = 'TRUE'
BystandersFollowRoleChange = 'ALL'
ObserverOverride = 'FALSE'
ExternalDestination1 = ''
ExternalDestination2 = ''
PrimaryLostWriteAction = 'CONTINUE'

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Primary database settings:-

DGMGRL> show database verbose 'ORCL_PRODRAC1';

Database - ORCL_PRODRAC1

Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
ORCL1
ORCL2

Properties:
DGConnectIdentifier = 'orcl_prodrac1'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '30'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = ''
LogFileNameConvert = ''
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
ApplyLagThreshold = '0'
TransportLagThreshold = '0'
TransportDisconnectedThreshold = '30'
SidName(*)
StaticConnectIdentifier(*)
StandbyArchiveLocation(*)
AlternateLocation(*)
LogArchiveTrace(*)
LogArchiveFormat(*)
TopWaitEvents(*)
(*) - Please check specific instance for the property value

Database Status:
SUCCESS


DGMGRL> show instance verbose 'ORCL1' on database 'ORCL_PRODRAC1';

Instance 'ORCL1' of database 'ORCL_PRODRAC1'

  Host Name: prodrac101.oracledbwr.com
  PFILE:
  Properties:
    SidName                         = 'ORCL1'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.214)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ORCL_PRODRAC1_DGMGRL)(INSTANCE_NAME=ORCL1)(SERVER=DEDICATED)))'
    StandbyArchiveLocation          = '+DBWR_FRA'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = 'ORCL_%t_%s_%r.arc'
    TopWaitEvents                   = '(monitor)'

Instance Status:
SUCCESS

DGMGRL> show instance verbose 'ORCL2' on database 'ORCL_PRODRAC1';

Instance 'ORCL2' of database 'ORCL_PRODRAC1'

Host Name: prodrac102.oracledbwr.com
PFILE:
Properties:
SidName = 'ORCL2'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.215)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ORCL_PRODRAC1_DGMGRL)(INSTANCE_NAME=ORCL2)(SERVER=DEDICATED)))'
StandbyArchiveLocation = '+DBWR_FRA'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = 'ORCL_%t_%s_%r.arc'
TopWaitEvents = '(monitor)'

Instance Status:
SUCCESS

Standby database settings:-

DGMGRL> show database verbose 'ORCL_PRODRAC2';

Database - ORCL_PRODRAC2

Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
Apply Rate: 187.00 KByte/s
Real Time Query: ON
Instance(s):
ORCL1
ORCL2

Properties:
DGConnectIdentifier = 'orcl_prodrac2'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
DelayMins = '0'
Binding = 'OPTIONAL'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '30'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = ''
LogFileNameConvert = ''
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
ApplyLagThreshold = '0'
TransportLagThreshold = '0'
TransportDisconnectedThreshold = '30'
SidName(*)
StaticConnectIdentifier(*)
StandbyArchiveLocation(*)
AlternateLocation(*)
LogArchiveTrace(*)
LogArchiveFormat(*)
TopWaitEvents(*)
(*) - Please check specific instance for the property value

Database Status:
SUCCESS

DGMGRL> show instance verbose 'ORCL1' on database 'ORCL_PRODRAC2';

Instance 'ORCL1' of database 'ORCL_PRODRAC2'

Host Name: prodrac201.oracledbwr.com
PFILE:
Properties:
SidName = 'ORCL1'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.224)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ORCL_PRODRAC2_DGMGRL)(INSTANCE_NAME=ORCL1)(SERVER=DEDICATED)))'
StandbyArchiveLocation = '+DBWR_FRA'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = 'ORCL_%t_%s_%r.arc'
TopWaitEvents = '(monitor)'

Instance Status:
SUCCESS

DGMGRL> show instance verbose 'ORCL2' on database 'ORCL_PRODRAC2';

Instance 'ORCL2' of database 'ORCL_PRODRAC2'

Host Name: prodrac202.oracledbwr.com
PFILE:
Properties:
SidName = 'ORCL2'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.225)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ORCL_PRODRAC2_DGMGRL)(INSTANCE_NAME=ORCL2)(SERVER=DEDICATED)))'
StandbyArchiveLocation = '+DBWR_FRA'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = 'ORCL_%t_%s_%r.arc'
TopWaitEvents = '(monitor)'

Instance Status:
SUCCESS

B) Performing Switchover:-

The configuration is now ready for the switchover. Note that we do not need to shut down any instances manually; the broker restarts them as needed.

DGMGRL> switchover to 'ORCL_PRODRAC2';
Performing switchover NOW, please wait...
Operation requires a connection to instance "ORCL1" on database "ORCL_PRODRAC2"
Connecting to instance "ORCL1"...
Connected.
New primary database "ORCL_PRODRAC2" is opening...
Operation requires startup of instance "ORCL1" on database "ORCL_PRODRAC1"
Starting instance "ORCL1"...
ORACLE instance started.
Database mounted.
Database opened.
Switchover succeeded, new primary is "ORCL_PRODRAC2"

C) Check database configuration after Switchover:-

Old Primary:-

$ srvctl status database -d ORCL_PRODRAC1 -v
Instance ORCL1 is running on node prodrac101. Instance status: Open.
Instance ORCL2 is running on node prodrac102. Instance status: Mounted (Closed).

$ srvctl config database -d ORCL_PRODRAC1 -a
Database unique name: ORCL_PRODRAC1
Database name: ORCL
Oracle home: /oradb/app/oracle/product/11.2.0.4/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL_PRODRAC1/spfileORCL_PRODRAC1.ora
Domain:
Start options: open
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Server pools: ORCL_PRODRAC1
Database instances: ORCL1,ORCL2
Disk Groups: DBWR_DATA,DBWR_FRA
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed

New Primary:-

$ srvctl status database -d ORCL_PRODRAC2 -v
Instance ORCL1 is running on node prodrac201. Instance status: Open.
Instance ORCL2 is running on node prodrac202. Instance status: Open.

$ srvctl config database -d ORCL_PRODRAC2 -a
Database unique name: ORCL_PRODRAC2
Database name: ORCL
Oracle home: /oradb/app/oracle/product/11.2.0.4/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL_PRODRAC2/spfileORCL_PRODRAC2.ora
Domain:
Start options: read only
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ORCL_PRODRAC2
Database instances: ORCL1,ORCL2
Disk Groups: DBWR_DATA,DBWR_FRA
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed

In the above configuration, we can see that the database role for both databases has been changed automatically in the cluster, whereas in a manual switchover this has to be done by hand. Also, notice that the "Start options" are still open for the old primary and read only for the new primary. It is recommended to correct these in the cluster manually, as shown below.

Old Primary:-

$ srvctl modify database -d ORCL_PRODRAC1 -s 'READ ONLY'
$ srvctl config database -d ORCL_PRODRAC1 -a
Database unique name: ORCL_PRODRAC1
Database name: ORCL
Oracle home: /oradb/app/oracle/product/11.2.0.4/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL_PRODRAC1/spfileORCL_PRODRAC1.ora
Domain:
Start options: read only
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Server pools: ORCL_PRODRAC1
Database instances: ORCL1,ORCL2
Disk Groups: DBWR_DATA,DBWR_FRA
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed

New Primary:-

$ srvctl modify database -d ORCL_PRODRAC2 -s 'OPEN'
$ srvctl config database -d ORCL_PRODRAC2 -a
Database unique name: ORCL_PRODRAC2
Database name: ORCL
Oracle home: /oradb/app/oracle/product/11.2.0.4/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL_PRODRAC2/spfileORCL_PRODRAC2.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ORCL_PRODRAC2
Database instances: ORCL1,ORCL2
Disk Groups: DBWR_DATA,DBWR_FRA
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed

Let us verify in the database that there is no archive gap.

$ sqlplus / as sysdba

SQL> @stby_gap

We can also verify log shipping from the new primary to the old primary by switching a log file on the new primary and tailing the alert log on the old primary.
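
For example (the alert log path below is an assumption based on the default diagnostic destination under /oradb/app/oracle):

New primary:
SQL> alter system archive log current;

Old primary (now standby), node 1:
$ tail -f /oradb/app/oracle/diag/rdbms/orcl_prodrac1/ORCL1/trace/alert_ORCL1.log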

 


Step by Step Configuration of the Data Guard Broker for a RAC Database

Description:-

We have already configured an Oracle RAC database with a physical standby in 11gR2 in a previous article. Now, we are going to configure the Data Guard broker for the same environment.

Below are the high-level steps involved in configuring the Data Guard broker:-

1) Listener entry for the Data Guard broker
2) Enable DG broker parameters
3) Create the configuration for the Data Guard broker using dgmgrl
4) Check the configuration

Environment Details:-

Primary Configuration:-

$  srvctl config database -d ORCL_PRODRAC1 -a
Database unique name: ORCL_PRODRAC1
Database name: ORCL
Oracle home: /oradb/app/oracle/product/11.2.0.4/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL_PRODRAC1/spfileORCL_PRODRAC1.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ORCL_PRODRAC1
Database instances: ORCL1,ORCL2
Disk Groups: DBWR_DATA,DBWR_FRA
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed

Standby Configuration:-

$ srvctl config database -d ORCL_PRODRAC2 -a
Database unique name: ORCL_PRODRAC2
Database name: ORCL
Oracle home: /oradb/app/oracle/product/11.2.0.4/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL_PRODRAC2/spfileORCL_PRODRAC2.ora
Domain:
Start options: read only
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Server pools: ORCL_PRODRAC2
Database instances: ORCL1,ORCL2
Disk Groups: DBWR_DATA,DBWR_FRA
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed

1) Listener entry for the Data Guard broker:-

Make the below entry in the listener.ora file, which already exists in the GRID_HOME:-

Primary Node 1:
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = ORCL_PRODRAC1_DGMGRL)
(ORACLE_HOME = /oradb/app/oracle/product/11.2.0.4/db_1)
(SID_NAME = ORCL1)
)
)
Primary Node 2:
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = ORCL_PRODRAC1_DGMGRL)
(ORACLE_HOME = /oradb/app/oracle/product/11.2.0.4/db_1)
(SID_NAME = ORCL2)
)
)
Standby Node 1:
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = ORCL_PRODRAC2_DGMGRL)
(ORACLE_HOME = /oradb/app/oracle/product/11.2.0.4/db_1)
(SID_NAME = ORCL1)
)
)
Standby Node 2:
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = ORCL_PRODRAC2_DGMGRL)
(ORACLE_HOME = /oradb/app/oracle/product/11.2.0.4/db_1)
(SID_NAME = ORCL2)
)
)

Stop and start the listener in all the Nodes of both primary & standby:-

$ lsnrctl stop
$ lsnrctl start
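
Since the listeners here are managed by Grid Infrastructure, they can equivalently be bounced with srvctl on each cluster:

$ srvctl stop listener
$ srvctl start listener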

Make sure the below entries are available in the tnsnames.ora file in the ORACLE_HOME on all nodes.

ORCL_PRODRAC1 =
(DESCRIPTION =
(ADDRESS_LIST=
(LOAD_BALANCE = ON)
(ADDRESS = (PROTOCOL = TCP)(HOST = prodscn101.oracledbwr.com)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORCL_PRODRAC1)
)
)

ORCL_PRODVIP101 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = prodvip101.oracledbwr.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORCL_PRODRAC1)
(INSTANCE_NAME = ORCL1)
)
)

ORCL_PRODVIP102 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = prodvip102.oracledbwr.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORCL_PRODRAC1)
(INSTANCE_NAME = ORCL2)
)
)

ORCL_PRODRAC2 =
(DESCRIPTION =
(ADDRESS_LIST=
(LOAD_BALANCE = ON)
(ADDRESS = (PROTOCOL = TCP)(HOST = prodscn201.oracledbwr.com)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORCL_PRODRAC2)
(UR = A)
)
)

ORCL_PRODRAC201 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = prodrac201.oracledbwr.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORCL_PRODRAC2)
(INSTANCE_NAME = ORCL1)
(UR = A)
)
)


ORCL_PRODVIP201 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = prodvip201.oracledbwr.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORCL_PRODRAC2)
(INSTANCE_NAME = ORCL1)
(UR = A)
)
)

ORCL_PRODVIP202 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = prodvip202.oracledbwr.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORCL_PRODRAC2)
(INSTANCE_NAME = ORCL2)
(UR = A)
)
)

2) Enable DG Broker Parameters:-

Create the required directories in the ASM disk groups for the Data Guard broker configuration files, then start the DMON process by enabling the "dg_broker_start" parameter.

Primary:-

ASMCMD> mkdir +DBWR_DATA/ORCL_PRODRAC1/DATAGUARDCONFIG
ASMCMD> mkdir +DBWR_FRA/ORCL_PRODRAC1/DATAGUARDCONFIG

SQL> ALTER SYSTEM SET DG_BROKER_CONFIG_FILE1='+DBWR_DATA/ORCL_PRODRAC1/DATAGUARDCONFIG/dgb_config01.ora' SCOPE=BOTH sid='*';

System altered.

SQL> ALTER SYSTEM SET DG_BROKER_CONFIG_FILE2='+DBWR_FRA/ORCL_PRODRAC1/DATAGUARDCONFIG/dgb_config02.ora' SCOPE=BOTH sid='*';

System altered.

SQL> ALTER SYSTEM SET dg_broker_start=true scope=both sid='*';

System altered.

Standby:-

ASMCMD> mkdir +DBWR_DATA/ORCL_PRODRAC2/DATAGUARDCONFIG
ASMCMD> mkdir +DBWR_FRA/ORCL_PRODRAC2/DATAGUARDCONFIG

SQL> ALTER SYSTEM SET DG_BROKER_CONFIG_FILE1='+DBWR_DATA/ORCL_PRODRAC2/DATAGUARDCONFIG/dgb_config01.ora' SCOPE=BOTH sid='*';

System altered.

SQL> ALTER SYSTEM SET DG_BROKER_CONFIG_FILE2='+DBWR_FRA/ORCL_PRODRAC2/DATAGUARDCONFIG/dgb_config02.ora' SCOPE=BOTH sid='*';

System altered.

SQL> ALTER SYSTEM SET dg_broker_start=true scope=both sid='*';

System altered.
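
Once dg_broker_start is set to true, the DMON background process (named ora_dmon_<SID>) should be running on every instance; a quick check:

$ ps -ef | grep dmon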

3) Create the configuration for the Data Guard broker using dgmgrl:-

$ dgmgrl

DGMGRL> connect sys/manager@ORCL_PRODRAC1
DGMGRL> show configuration;
DGMGRL> create configuration 'ORACLEDBWR' as primary database is 'ORCL_PRODRAC1' connect identifier is ORCL_PRODRAC1;

Configuration "ORACLEDBWR" created with primary database "ORCL_PRODRAC1"

DGMGRL> show configuration;

Configuration - ORACLEDBWR

Protection Mode: MaxPerformance
Databases:
ORCL_PRODRAC1 - Primary database

Fast-Start Failover: DISABLED

Configuration Status:
DISABLED

DGMGRL> ADD DATABASE 'ORCL_PRODRAC2' AS CONNECT IDENTIFIER IS ORCL_PRODRAC2;

Database "ORCL_PRODRAC2" added

DGMGRL> show configuration;

Configuration - ORACLEDBWR
Protection Mode: MaxPerformance
Databases:
ORCL_PRODRAC1 - Primary database
ORCL_PRODRAC2 - Physical standby database

Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS

DGMGRL> enable configuration;

4) Check the configuration:-

DGMGRL> show configuration;

Configuration - ORACLEDBWR

Protection Mode: MaxPerformance
Databases:
ORCL_PRODRAC1 - Primary database
ORCL_PRODRAC2 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> show database verbose 'ORCL_PRODRAC1';

Database - ORCL_PRODRAC1

Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
ORCL1
ORCL2

Properties:
DGConnectIdentifier = 'orcl_prodrac1'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '30'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = ''
LogFileNameConvert = ''
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
ApplyLagThreshold = '0'
TransportLagThreshold = '0'
TransportDisconnectedThreshold = '30'
SidName(*)
StaticConnectIdentifier(*)
StandbyArchiveLocation(*)
AlternateLocation(*)
LogArchiveTrace(*)
LogArchiveFormat(*)
TopWaitEvents(*)
(*) - Please check specific instance for the property value

Database Status:
SUCCESS

DGMGRL> show database verbose 'ORCL_PRODRAC2';

Database - ORCL_PRODRAC2

Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 1 second ago)
Apply Rate: 242.00 KByte/s
Real Time Query: ON
Instance(s):
ORCL1
ORCL2
Properties:
DGConnectIdentifier = 'orcl_prodrac2'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
DelayMins = '0'
Binding = 'OPTIONAL'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '30'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = ''
LogFileNameConvert = ''
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
ApplyLagThreshold = '0'
TransportLagThreshold = '0'
TransportDisconnectedThreshold = '30'
SidName(*)
StaticConnectIdentifier(*)
StandbyArchiveLocation(*)
AlternateLocation(*)
LogArchiveTrace(*)
LogArchiveFormat(*)
TopWaitEvents(*)
(*) - Please check specific instance for the property value

Database Status:
SUCCESS


Performing Reinstating a Failed Oracle 2 Node Rac Primary Database

Converting a Failed Primary Into a Standby Database Using Flashback Database

Description:-

In this article, we will be reinstating a failed Oracle 11gR2 two node RAC primary as a physical standby database.

Let us assume we have a primary site with a two node RAC in 11gR2, a standby site similar to the primary with two nodes in 11gR2, and that both nodes of the primary site went down completely, so we have already failed over to the standby. Now, instead of rebuilding the physical standby at the failed primary site from an RMAN backup, from Oracle 11.2.0.3 onward we have the option of reinstating the failed primary as a physical standby, provided flashback was already enabled on the failed primary before the failover.

Below are the steps to be followed to reinstate the failed primary RAC database as a physical standby database.
1) Determine the SCN at which the old standby became the new primary
2) Flash back the failed primary database to that SCN
3) Convert the failed primary database to a physical standby database
4) Start redo transport and apply to the new physical standby database

Below is the primary and standby configuration. Let us verify the status and configuration of the RAC databases:-

New Primary Site:-

$ srvctl config database -d ORCL_PRODRAC2 -a
Database unique name: ORCL_PRODRAC2
Database name: ORCL
Oracle home: /oradb/app/oracle/product/11.2.0.4/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL_PRODRAC2/spfileORCL_PRODRAC2.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ORCL_PRODRAC2
Database instances: ORCL1,ORCL2
Disk Groups: DBWR_DATA,DBWR_FRA
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed

Standby:- (Failed Primary Site)

$ srvctl config database -d ORCL_PRODRAC1 -a
Database unique name: ORCL_PRODRAC1
Database name: ORCL
Oracle home: /oradb/app/oracle/product/11.2.0.4/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL_PRODRAC1/spfileORCL_PRODRAC1.ora
Domain:
Start options: read only
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Server pools: ORCL_PRODRAC1
Database instances: ORCL1,ORCL2
Disk Groups: DBWR_DATA,DBWR_FRA
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed

Step 1 – Determine the SCN at which the old standby became the new primary:-

The following steps bring the old primary database back into the Data Guard configuration as a physical standby database.

On the new primary database, issue the following query to determine the SCN at which the old standby database became the new primary database:

SQL> SELECT TO_CHAR(STANDBY_BECAME_PRIMARY_SCN) FROM V$DATABASE;

TO_CHAR(STANDBY_BECAME_PRIMARY_SCN)
----------------------------------------
2398590
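
Once the failed primary is mounted (Step 2), it is worth confirming that this SCN is still within its flashback window (a sanity check; flashback must have been enabled there before the failover):

SQL> SELECT OLDEST_FLASHBACK_SCN, OLDEST_FLASHBACK_TIME FROM V$FLASHBACK_DATABASE_LOG;

The OLDEST_FLASHBACK_SCN returned must be lower than or equal to the STANDBY_BECAME_PRIMARY_SCN above; otherwise the reinstate is not possible and the standby has to be rebuilt from a backup.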

Step 2 – Flashback the failed primary database:-
Shut down the old primary database (if necessary), mount it, and flash it back to the STANDBY_BECAME_PRIMARY_SCN value determined in Step 1.

$ srvctl status database -d ORCL_PRODRAC1
Instance ORCL1 is not running on node prodrac101
Instance ORCL2 is not running on node prodrac102

$ srvctl start database -d ORCL_PRODRAC1 -i ORCL1 -o mount
$ srvctl status database -d ORCL_PRODRAC1 -v
Instance ORCL1 is running on node prodrac101. Instance status: Mounted.
Instance ORCL2 is not running on node prodrac102

Flash back the database from Node1 (ORCL1) of the failed primary:

SQL> FLASHBACK DATABASE TO SCN 2398590;

Flashback complete.

Step 3 – Convert the failed primary database to a physical standby database:-

Issue the following statement on the old primary database.

SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;

Database altered.

This statement will dismount the database after successfully converting the control file to a standby control file. We can verify it from the alert log.

$ tail -f alert_ORCL1.log
ALTER DATABASE CONVERT TO PHYSICAL STANDBY (ORCL1)
Flush standby redo logfile failed:1649
Clearing standby activation ID 1512257592 (0x5a233838)
The primary database controlfile was created using the
'MAXLOGFILES 192' clause.
There is space for up to 186 standby redo logfiles
Use the following SQL commands on the standby database to create
standby redo logfiles that match the primary database:
ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE 'srl5.f' SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE 'srl6.f' SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE 'srl7.f' SIZE 52428800;
Shutting down archive processes
Archiving is disabled
Completed: ALTER DATABASE CONVERT TO PHYSICAL STANDBY

Shut down and restart the database, then start the recovery process on the new standby database.

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

Database altered.
$ tail -f alert_ORCL1.log
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT
Attempt to start background Managed Standby Recovery process (ORCL1)
Sun Aug 19 16:52:57 2018
MRP0 started with pid=80, OS id=1356
MRP0: Background Managed Standby Recovery process started (ORCL1)
Serial Media Recovery started
Sun Aug 19 16:53:02 2018
Managed Standby Recovery starting Real Time Apply
Media Recovery start incarnation depth : 1, target inc# : 3, irscn : 2398592
Waiting for all non-current ORLs to be archived...
All non-current ORLs have been archived.
Clearing online redo logfile 1 +DBWR_DATA/orcl_prodrac1/onlinelog/group_1.261.983984787
Clearing online log 1 of thread 1 sequence number 84
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT
Clearing online redo logfile 1 complete
Clearing online redo logfile 2 +DBWR_DATA/orcl_prodrac1/onlinelog/group_2.262.983984795
Clearing online log 2 of thread 1 sequence number 82
Clearing online redo logfile 2 complete
Clearing online redo logfile 3 +DBWR_DATA/orcl_prodrac1/onlinelog/group_3.263.983984807
Clearing online log 3 of thread 1 sequence number 83
Clearing online redo logfile 3 complete
Clearing online redo logfile 4 +DBWR_DATA/orcl_prodrac1/onlinelog/group_4.267.983985363
Clearing online log 4 of thread 2 sequence number 66
Clearing online redo logfile 4 complete
Clearing online redo logfile 5 +DBWR_DATA/orcl_prodrac1/onlinelog/group_5.268.983985375
Clearing online log 5 of thread 2 sequence number 67
Sun Aug 19 16:53:07 2018
Clearing online redo logfile 5 complete
Clearing online redo logfile 6 +DBWR_DATA/orcl_prodrac1/onlinelog/group_6.269.983985387
Clearing online log 6 of thread 2 sequence number 65
Clearing online redo logfile 6 complete
Media Recovery Log +DBWR_FRA/orcl_prodrac1/archivelog/2018_08_19/thread_1_seq_84.410.984588465
Identified End-Of-Redo (failover) for thread 1 sequence 84 at SCN 0x0.249980
Resetting standby activation ID 1512257592 (0x5a233838)
Media Recovery End-Of-Redo indicator encountered
Media Recovery Continuing
Media Recovery Log +DBWR_FRA/orcl_prodrac1/archivelog/2018_08_19/thread_1_seq_1.412.984588469
Media Recovery Log +DBWR_FRA/orcl_prodrac1/archivelog/2018_08_19/thread_1_seq_2.413.984588469
Media Recovery Log +DBWR_FRA/orcl_prodrac1/archivelog/2018_08_19/thread_1_seq_3.411.984588467
Media Recovery Log +DBWR_FRA/orcl_prodrac1/archivelog/2018_08_19/thread_2_seq_1.407.984588457
Media Recovery Log +DBWR_FRA/orcl_prodrac1/archivelog/2018_08_19/thread_2_seq_2.406.984588457
Media Recovery Log +DBWR_FRA/orcl_prodrac1/archivelog/2018_08_19/thread_2_seq_3.408.984588457
Media Recovery Log +DBWR_FRA/orcl_prodrac1/archivelog/2018_08_19/thread_1_seq_4.414.984588469
Sun Aug 19 16:53:29 2018
Media Recovery Log +DBWR_FRA/orcl_prodrac1/archivelog/2018_08_19/thread_1_seq_5.415.984588469
Media Recovery Log +DBWR_FRA/orcl_prodrac1/archivelog/2018_08_19/thread_2_seq_4.409.984588461
Media Recovery Log +DBWR_FRA/orcl_prodrac1/archivelog/2018_08_19/thread_2_seq_5.417.984588593
Media Recovery Log +DBWR_FRA/orcl_prodrac1/archivelog/2018_08_19/thread_1_seq_6.416.984588577
Media Recovery Waiting for thread 1 sequence 7 (in transit)
Recovery of Online Redo Log: Thread 1 Group 8 Seq 7 Reading mem 0
Mem# 0: +DBWR_DATA/orcl_prodrac1/onlinelog/group_8.272.984080839
Media Recovery Waiting for thread 2 sequence 6 (in transit)
Recovery of Online Redo Log: Thread 2 Group 11 Seq 6 Reading mem 0
Mem# 0: +DBWR_DATA/orcl_prodrac1/onlinelog/group_11.275.984080865
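
At this point the converted database is running as a physical standby with managed recovery applying redo. Optionally, confirm the new role from SQL*Plus (a quick check; DATABASE_ROLE should now report PHYSICAL STANDBY):

SQL> SELECT DATABASE_ROLE, OPEN_MODE FROM V$DATABASE;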

Step 4 – Verify the redo transport and apply to the new physical standby database:-

On the new primary database, issue the following query to see the current state of the archive destination:

SQL> col DEST_NAME for a20
SQL> col DESTINATION for a20
SQL> SELECT DEST_ID, DEST_NAME, STATUS, PROTECTION_MODE, DESTINATION, ERROR,SRL FROM V$ARCHIVE_DEST_STATUS where DEST_ID=2;

DEST_ID    DEST_NAME            STATUS    PROTECTION_MODE
---------- -------------------- --------- --------------------
2           LOG_ARCHIVE_DEST_2  VALID     MAXIMUM PERFORMANCE 

DESTINATION     ERROR             SRL
--------------- ----------------- ----------------------------------
ORCL_PRODVIP101                    YES

Here we have a valid destination, so let’s verify redo transport from the primary to the new physical standby database.

SQL> ALTER SYSTEM SWITCH LOGFILE;

System altered.
$ tail -f alert_ORCL1.log   <-- New Standby Site - Node1

Tue Aug 21 00:47:11 2018
RFS[7]: Selected log 8 for thread 1 sequence 13 dbid 1511765518 branch 984586385
Tue Aug 21 00:47:11 2018
Archived Log entry 265 added for thread 1 sequence 12 ID 0x5a248ea9 dest 1:
Tue Aug 21 00:47:11 2018
Media Recovery Waiting for thread 1 sequence 13 (in transit)
Recovery of Online Redo Log: Thread 1 Group 8 Seq 13 Reading mem 0
Mem# 0: +DBWR_DATA/orcl_prodrac1/onlinelog/group_8.272.984080839
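
Apart from tailing the alert log, the transport and apply lag can be monitored from the new standby using V$DATAGUARD_STATS (a quick check; values close to +00 00:00:00 indicate the standby is in sync):

SQL> SELECT NAME, VALUE FROM V$DATAGUARD_STATS WHERE NAME IN ('transport lag','apply lag');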

Reference:-

https://docs.oracle.com/cd/E11882_01/server.112/e41134/scenarios.htm#SBYDB00910


Step by Step Oracle 11gR2 RAC Standby Database Manual Failover

Description:-

In this article we are going to perform a manual failover of an Oracle 11gR2 two node RAC primary to its two node RAC physical standby database.

Let us consider a primary site with a two node RAC and a standby site also configured with a two node RAC in 11gR2. You can refer here for the two node RAC to RAC dataguard setup in 11gR2; we are using the same configuration from that setup.

Here, we can see the primary and standby configuration:-

Let us verify the status and configuration of the RAC database:-

$ srvctl config database -d ORCL_PRODRAC1 -a
Database unique name: ORCL_PRODRAC1
Database name: ORCL
Oracle home: /oradb/app/oracle/product/11.2.0.4/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL_PRODRAC1/spfileORCL_PRODRAC1.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ORCL_PRODRAC1
Database instances: ORCL1,ORCL2
Disk Groups: DBWR_DATA,DBWR_FRA
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed
$ srvctl config database -d ORCL_PRODRAC2 -a
Database unique name: ORCL_PRODRAC2
Database name: ORCL
Oracle home: /oradb/app/oracle/product/11.2.0.4/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL_PRODRAC2/spfileORCL_PRODRAC2.ora
Domain:
Start options: read only
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Server pools: ORCL_PRODRAC2
Database instances: ORCL1,ORCL2
Disk Groups: DBWR_DATA,DBWR_FRA
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed

Let us assume the primary and standby sites are geographically separated and that, due to a natural disaster or a power outage, both nodes of the primary site become unavailable to the application and the primary site cannot be restored or recovered within a reasonable period of time. In this situation we have to convert the standby database, which holds the most recent recovered state, into the primary, so as to reduce downtime and keep the database available for the business.

So, let’s follow the below steps for the above scenario.

A) Pre-Checks for Failover
1) Ensure the primary has flashback enabled so the failed primary can later be reinstated
2) Flush any unsent redo from the primary, if an instance can still be mounted (see the sketch after this list)
3) Verify the standby database has the most recent archived redo log from each primary redo thread
4) Identify and resolve any archived redo log gaps (see the sketch after this list)
5) Repeat Step 4 until all gaps are resolved for each thread
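
Items 2 and 4 are sketched below, assuming at least one primary instance can still be mounted for the redo flush; the FLUSH REDO target is this configuration's standby DB_UNIQUE_NAME:

SQL> -- On the primary (mounted): send any unsent redo to the target standby
SQL> ALTER SYSTEM FLUSH REDO TO ORCL_PRODRAC2;

SQL> -- On the standby: list any archived log gaps, per thread
SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;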

B) Failover
1) Stop Redo Apply
2) Finish applying all received redo data
3) Verify that the target standby database is ready to become a primary database
4) Switch the physical standby database to the primary role
5) Open the new primary database

A) Pre-Failover Checks
Step 1 – Ensure the primary has Flashback enabled for reinstating the failed primary:- (Primary Side)

In order to reinstate the failed primary database later, the flashback option must be enabled on the primary:

SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
NO

If flashback is disabled, follow the steps below to enable it on the primary. This must be done before the failover.

I) Ensure db_recovery_file_dest is set

SQL> show parameter db_recovery_file_dest

NAME                          TYPE        VALUE
--------------------------    ----------- ------------------
db_recovery_file_dest         string      +DBWR_FRA
db_recovery_file_dest_size    big integer 12000M

If the above command returns values for both parameters, skip to step III. Otherwise, set a proper directory or disk group:

SQL> alter system set db_recovery_file_dest='+DBWR_FRA' SCOPE=spfile sid='*';

System altered.

II) Ensure db_recovery_file_dest_size is set

SQL> alter system set db_recovery_file_dest_size=20G SCOPE=spfile sid='*';

System altered.

III) Stop and start the database

$ srvctl stop database -d ORCL_PRODRAC1
$ srvctl start database -d ORCL_PRODRAC1 -o mount
SQL> alter database flashback on;

Database altered.
SQL> alter database open; <--- Both Nodes

Database altered.
SQL> alter system set db_flashback_retention_target=2880 sid='*';

System altered.
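
Once the database is open, confirm that flashback is now enabled:

SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
YES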

For our scenario, assume the primary database crashed due to a reason such as hardware failure.

B) Failover
Step 1 – Stop Redo Apply:- (Standby Side) 

Issue the following SQL statement on the target standby database.

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

Database altered.

Step 2 – Finish applying all received redo data:-

Issue the following SQL statement on the target standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;

Database altered.

Step 3 – Verify that the target standby database is ready to become a primary database:-

Query the SWITCHOVER_STATUS column of the V$DATABASE view on the target standby database:

SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;

SWITCHOVER_STATUS
--------------------
TO PRIMARY

A value of either TO PRIMARY or SESSIONS ACTIVE indicates that the standby database is ready to be switched to the primary role. If neither of these values is returned, verify that Redo Apply is active and continue to query this view until either TO PRIMARY or SESSIONS ACTIVE is returned.

Step 4 – Switch the physical standby database to the primary role:-

Issue the following SQL statement on the target standby database:

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;

Database altered.

Note:-
The WITH SESSION SHUTDOWN clause can be omitted from the switchover statement if the query of the SWITCHOVER_STATUS column performed in the previous step returned a value of TO PRIMARY.

Step 5 – Open the new primary database:-

SQL> ALTER DATABASE OPEN;  <-- Both Nodes

Database altered.
$ srvctl status database -d ORCL_PRODRAC2 -v
Instance ORCL1 is running on node prodrac201. Instance status: Open.
Instance ORCL2 is running on node prodrac202. Instance status: Open.

Do not use the srvctl command to start the secondary instance of the new primary database: the start option is still configured as “read only” and the Oracle Cluster Registry does not recognize the switchover/failover automatically. So, if we use srvctl, the secondary instance will shut down as below:

$ tail -f alert_ORCL2.log
Lost write protection disabled
Completed: ALTER DATABASE MOUNT /* db agent *//* {1:13107:16648} */
ALTER DATABASE OPEN READ ONLY /* db agent *//* {1:13107:16648} */
AUDIT_TRAIL initialization parameter is changed to OS, as DB is NOT compatible for database opened with read-only access
Errors in file /oradb/app/oracle/diag/rdbms/orcl_prodrac2/ORCL2/trace/ORCL2_ora_5999.trc:
ORA-16002: database already open for read/write access by another instance
ORA-16002 signalled during: ALTER DATABASE OPEN READ ONLY /* db agent *//* {1:13107:16648} */...
Sun Aug 19 16:15:39 2018
Shutting down instance (abort)
License high water mark = 1
USER (ospid: 6022): terminating the instance
Instance terminated by USER, pid = 6022
Sun Aug 19 16:15:39 2018
Instance shutdown complete

Note:-
We can modify the database configuration as below and then use srvctl if needed.

$ srvctl modify database -d ORCL_PRODRAC2 -r PRIMARY -s OPEN
$ srvctl config database -d ORCL_PRODRAC2 -a
Database unique name: ORCL_PRODRAC2
Database name: ORCL
Oracle home: /oradb/app/oracle/product/11.2.0.4/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL_PRODRAC2/spfileORCL_PRODRAC2.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ORCL_PRODRAC2
Database instances: ORCL1,ORCL2
Disk Groups: DBWR_DATA,DBWR_FRA
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed
  1. After this, start the new primary database using srvctl.
  2. After the failover, take a fresh RMAN backup of the new primary database.

Reference: 

https://docs.oracle.com/cd/E11882_01/server.112/e41134/role_management.htm#SBYDB00625


Step by Step Oracle 11gR2 RAC Standby Database Switchover

Description:- 

In this article, let us perform the two node RAC to RAC manual switchover steps.

Let us consider that we have already configured a primary site with a two node RAC and a standby site also with a two node RAC in 11gR2. You can refer here for the two node RAC to RAC dataguard setup in 11gR2; we are using the same configuration from that setup.

A) Pre-Switchover Checks:-
1) Verify Managed Recovery is running on the standby
2) Verify there are no large Gaps

B) Switchover:-
1) Cancel any apply delay for the target standby
2) Create Guaranteed Restore Points (optional)
3) Verify that the primary database can be switched to the standby role
4) If The Primary is a RAC, then shutdown all secondary primary instances
5) Switchover the primary to a standby database
6) Verify that the standby database can be switched to the primary role
7) Switchover the standby database to a primary
8) Open the new primary database
9) Restart the new standby

C) Post-Switchover Steps:-
1) Drop any Switchover Guaranteed Restore Points

Here, we can see the primary and standby configuration:-

Let us verify the status and configuration of the RAC database:-

Primary:

$ srvctl config database -d ORCL_PRODRAC1 -a
Database unique name: ORCL_PRODRAC1
Database name: ORCL
Oracle home: /oradb/app/oracle/product/11.2.0.4/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL_PRODRAC1/spfileORCL_PRODRAC1.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ORCL_PRODRAC1
Database instances: ORCL1,ORCL2
Disk Groups: DBWR_DATA,DBWR_FRA
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed

Standby:

$ srvctl config database -d ORCL_PRODRAC2 -a
Database unique name: ORCL_PRODRAC2
Database name: ORCL
Oracle home: /oradb/app/oracle/product/11.2.0.4/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL_PRODRAC2/spfileORCL_PRODRAC2.ora
Domain:
Start options: read only
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Server pools: ORCL_PRODRAC2
Database instances: ORCL1,ORCL2
Disk Groups: DBWR_DATA,DBWR_FRA
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed

A) Pre-Switchover Checks:-
Step 1 – Verify Managed Recovery is running on the standby:-
Execute the following statement to verify that managed recovery is running on the standby:

SQL> SELECT PROCESS FROM V$MANAGED_STANDBY WHERE PROCESS LIKE 'MRP%';

no rows selected

Execute the statement below on the primary to verify that recovery is running with the “REAL TIME APPLY” option. In our environment LOG_ARCHIVE_DEST_2 is configured to ship redo to the standby, so we check DEST_ID=2 on the primary:

SQL> SELECT RECOVERY_MODE FROM V$ARCHIVE_DEST_STATUS WHERE DEST_ID=2;

RECOVERY_MODE
-----------------------
IDLE

Here, managed standby recovery is either not running or not started with real-time apply, so restart managed recovery on the standby with the command below. If the status is already “REAL TIME APPLY”, we can go ahead to the next step.

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
Database altered.

Step 2 – Verify there are no large Gaps:-
Identify the current sequence number for each thread on the primary:

SQL> SELECT THREAD#, SEQUENCE# FROM V$THREAD;

THREAD#    SEQUENCE#
---------- ----------
1           53
2           49

Verify that the target physical standby database has applied redo up to, but not including, the sequences reported by the primary query. On the standby, the result of the following query should be within 1 or 2 of the primary query result.

SQL> SELECT THREAD#, MAX(SEQUENCE#) FROM V$ARCHIVED_LOG
WHERE APPLIED = 'YES'
AND RESETLOGS_CHANGE# = (SELECT RESETLOGS_CHANGE#
FROM V$DATABASE_INCARNATION WHERE STATUS = 'CURRENT')
GROUP BY THREAD#;

THREAD#    MAX(SEQUENCE#)
---------- --------------
1          52
2          47

B) Switchover:-
Step 1 – Cancel any apply delay for the target standby:-
Remove any delay in applying redo that may be in effect on the standby database that will become the new primary database. Execute the following statement on the standby if a delay is in effect. Here we don’t have any delay, so we can skip this step:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE NODELAY DISCONNECT FROM SESSION;

Step 2 – Create Guaranteed Restore Points (optional):-
Create a guaranteed restore point on the primary and standby databases participating in the switchover, as a fallback plan in case of any failure.

On the standby:
Stop the apply process:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Database altered.
Create a guaranteed restore point:
SQL> CREATE RESTORE POINT SWITCHOVER_A_TO_B GUARANTEE FLASHBACK DATABASE;

Restore point created.
Start the apply process:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

Database altered.

On the primary:
Create a guaranteed restore point

SQL> CREATE RESTORE POINT SWITCHOVER_A_TO_B GUARANTEE FLASHBACK DATABASE;
Restore point created.

Note: If guaranteed restore points are created, make sure they are dropped post-switchover!

Step 3 – Verify that the primary database can be switched to the standby role:-
Query the SWITCHOVER_STATUS column of the V$DATABASE view on the primary database:

SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;

SWITCHOVER_STATUS
--------------------
TO STANDBY

A value of TO STANDBY or SESSIONS ACTIVE (which requires the WITH SESSION SHUTDOWN clause on the switchover command) indicates that the primary database can be switched to the standby role. If neither of these values is returned, a switchover is not possible because redo transport is either misconfigured or not functioning properly.

Here we have TO STANDBY, so we are now ready for the switchover.

Step 4 – If the primary is a RAC, then shut down all secondary primary instances:-
If the primary is a RAC, shut down all secondary instances at the primary site:

$ srvctl status database -d ORCL_PRODRAC1
Instance ORCL1 is running on node prodrac101
Instance ORCL2 is running on node prodrac102

Here, instance ORCL2 is running on prodrac102, so shut it down with the command below:

$ srvctl stop instance -d ORCL_PRODRAC1 -i ORCL2

Step 5 – Switchover the primary to a standby database:-

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY WITH SESSION SHUTDOWN;
Database altered.
$ tail -f alert_ORCL1.log   <---- In Primary

The primary database controlfile was created using the
'MAXLOGFILES 192' clause.
There is space for up to 186 standby redo logfiles
Use the following SQL commands on the standby database to create
standby redo logfiles that match the primary database:
ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE 'srl5.f' SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE 'srl6.f' SIZE 52428800;
ALTER DATABASE ADD STANDBY LOGFILE 'srl7.f' SIZE 52428800;
Archivelog for thread 1 sequence 54 required for standby recovery
Switchover: Primary controlfile converted to standby controlfile succesfully.
Switchover: Complete - Database shutdown required
USER (ospid: 3779): terminating the instance
Instance terminated by USER, pid = 3779
Completed: ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY WITH SESSION SHUTDOWN
Shutting down instance (abort)
License high water mark = 5
Fri Aug 17 21:02:50 2018
Instance shutdown complete

Step 6 – Verify that the standby database can be switched to the primary role:-
Query the SWITCHOVER_STATUS column of the V$DATABASE view on the standby database:

SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;

SWITCHOVER_STATUS
--------------------
TO PRIMARY

Step 7 – Switchover the standby database to a primary:-

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
Database altered.
$ tail -f alert_ORCL1.log <---- In New Primary

MRP0: Background Media Recovery process shutdown (ORCL1)
Role Change: Canceled MRP
All dispatchers and shared servers shutdown
CLOSE: killing server sessions.
CLOSE: all sessions shutdown successfully.
Fri Aug 17 21:05:17 2018
SMON: disabling cache recovery
Backup controlfile written to trace file /oradb/app/oracle/diag/rdbms/orcl_prodrac2/ORCL1/trace/ORCL1_ora_13556.trc
SwitchOver after complete recovery through change 1976213
Online logfile pre-clearing operation disabled by switchover
Online log +DBWR_DATA/orcl_prodrac2/onlinelog/group_1.265.984104543: Thread 1 Group 1 was previously cleared
Online log +DBWR_DATA/orcl_prodrac2/onlinelog/group_2.264.984104545: Thread 1 Group 2 was previously cleared
Online log +DBWR_DATA/orcl_prodrac2/onlinelog/group_3.263.984104545: Thread 1 Group 3 was previously cleared
Online log +DBWR_DATA/orcl_prodrac2/onlinelog/group_4.262.984104547: Thread 2 Group 4 was previously cleared
Online log +DBWR_DATA/orcl_prodrac2/onlinelog/group_5.261.984104547: Thread 2 Group 5 was previously cleared
Online log +DBWR_DATA/orcl_prodrac2/onlinelog/group_6.260.984104547: Thread 2 Group 6 was previously cleared
Standby became primary SCN: 1976211
AUDIT_TRAIL initialization parameter is changed back to its original value as specified in the parameter file.
Switchover: Complete - Database mounted as primary
Completed: ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN

Step 8 – Open the new primary database:-

SQL> ALTER DATABASE OPEN;
Database altered.
$ srvctl status database -d ORCL_PRODRAC2
Instance ORCL1 is running on node prodrac201
Instance ORCL2 is running on node prodrac202

Step 9 – Start the new standby:-

SQL> STARTUP MOUNT;   <--- In New Standby

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
Database altered.

C) Post-Switchover Steps:-
Step 1 – Drop any Switchover Guaranteed Restore Points:-
On all databases where a guaranteed restore point was created:

On New Standby:-

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Database altered.

SQL> DROP RESTORE POINT SWITCHOVER_A_TO_B;
Restore point dropped.

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
Database altered.
On New Primary:-

SQL> DROP RESTORE POINT SWITCHOVER_A_TO_B;
Restore point dropped.
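
To confirm no guaranteed restore points are left behind on either database (leftover guaranteed restore points retain flashback logs and can fill up the FRA), query V$RESTORE_POINT:

SQL> SELECT NAME, GUARANTEE_FLASHBACK_DATABASE FROM V$RESTORE_POINT;

no rows selected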

Reference: 

https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=579890596063881&id=1304939.1&_adf.ctrl-state=1emnpr0uk_267


Step by Step Configuration Of Oracle 11gR2 Two Node RAC Dataguard Setup

Description:-

In this article we are going to configure an Oracle 11gR2 two node RAC dataguard setup on RHEL 6.8.

Two Node RAC to Two Node RAC Physical Dataguard:

· Prepare Primary Site
· Prepare Standby Site
· Create RAC Physical Standby Database

Below are the environment details for the two node RAC dataguard setup.

Let us consider that we have already configured two node 11gR2 RAC database in prodrac101 & prodrac102 with database name ORCL and database unique name ORCL_PRODRAC1 for primary site.

For the two node RAC setup, please refer here. That article is for 12cR1, but the RAC setup is the same for 11gR2 and 12cR1, so we can follow the same steps for the two node RAC primary setup in 11gR2 as well.

Let’s start the two node dataguard setup for two node RAC in 11gR2.

As on the primary site (prodrac101 & prodrac102), the grid software and database software should already be installed on the standby site (prodrac201 & prodrac202).

Overview of Primary RAC environment:-

IP Address Details of Primary:

#Public IP
192.168.1.211  prodrac101.oracledbwr.com  prodrac101
192.168.1.212  prodrac102.oracledbwr.com  prodrac102

#Private IP
192.168.2.211  prodprv101.oracledbwr.com  prodprv101
192.168.2.212  prodprv102.oracledbwr.com  prodprv102

#Virtual IP
192.168.1.214  prodvip101.oracledbwr.com  prodvip101
192.168.1.215  prodvip102.oracledbwr.com  prodvip102

#Scan IP
192.168.1.218  prodscn101.oracledbwr.com  prodscn101
192.168.1.219  prodscn101.oracledbwr.com  prodscn101
192.168.1.220  prodscn101.oracledbwr.com  prodscn101

Overview of Standby RAC environment:-

IP Address Details of Standby:

#Public IP
192.168.1.221  prodrac201.oracledbwr.com  prodrac201
192.168.1.222  prodrac202.oracledbwr.com  prodrac202

#Private IP
192.168.2.221  prodprv201.oracledbwr.com  prodprv201
192.168.2.222  prodprv202.oracledbwr.com  prodprv202

#Virtual IP
192.168.1.224  prodvip201.oracledbwr.com  prodvip201
192.168.1.225  prodvip202.oracledbwr.com  prodvip202

#Scan IP
192.168.1.228  prodscn201.oracledbwr.com  prodscn201
192.168.1.229  prodscn201.oracledbwr.com  prodscn201
192.168.1.230  prodscn201.oracledbwr.com  prodscn201

Prepare Primary Site:-

Prepare Primary RAC database for DR configuration:-

  • Change Database Unique Name & Enable Force Logging.
  • Enable Archivelog Mode.
  • Modify init Parameters.
  • Create the SRLs (Standby Redo Logs).
  • Create pfile for standby database.
  • Update the tnsnames.ora.

Change Database Unique Name & Enable Force Logging:-

SQL> ALTER SYSTEM SET DB_UNIQUE_NAME='ORCL_PRODRAC1' SCOPE=SPFILE SID='*';
System altered.

$ srvctl stop database -d ORCL
$ srvctl start database -d ORCL_PRODRAC1 -o mount

SQL> ALTER DATABASE FORCE LOGGING;
Database altered.

SQL> ALTER DATABASE ARCHIVELOG;  <--- If not in archivelog mode
Database altered.
$ srvctl stop database -d ORCL_PRODRAC1
$ srvctl start database -d ORCL_PRODRAC1

SQL> SHOW PARAMETER DB_UNIQUE_NAME
NAME           TYPE        VALUE
-------------  ----------  ------------------------------
db_unique_name string      ORCL_PRODRAC1
SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(ORCL_PRODRAC1,ORCL_PRODRAC2)' sid='*';
System altered.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=ORCL_PRODVIP201 NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCL_PRODRAC2' sid='*';
System altered.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE sid='*';
System altered.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_FORMAT='ORCL_%t_%s_%r.arc' SCOPE=SPFILE;
System altered.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=30 sid='*';
System altered.

SQL> ALTER SYSTEM SET REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE SCOPE=SPFILE;
System altered.

SQL> ALTER SYSTEM SET FAL_SERVER='ORCL_PRODRAC2' sid='*';
System altered.

SQL> ALTER SYSTEM SET FAL_CLIENT='ORCL_PRODRAC1' sid='*';
System altered.

SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO SCOPE=BOTH SID='*';
System altered.

Add the standby redo log files on the primary:

SQL> alter database add standby logfile thread 1 group 7 '+DBWR_DATA' size 50M;
Database altered.

SQL> alter database add standby logfile thread 1 group 8 '+DBWR_DATA' size 50M;
Database altered.

SQL> alter database add standby logfile thread 1 group 9 '+DBWR_DATA' size 50M;
Database altered.

SQL> alter database add standby logfile thread 2 group 10 '+DBWR_DATA' size 50M;
Database altered.

SQL> alter database add standby logfile thread 2 group 11 '+DBWR_DATA' size 50M;
Database altered.

SQL> alter database add standby logfile thread 2 group 12 '+DBWR_DATA' size 50M;
Database altered.

Run the query below on both nodes of the primary to list the newly added standby redo log files:

set lines 999 pages 999
col inst_id for 9999
col group# for 9999
col member for a60
col archived for a7
select * from
(select '[ ONLINE REDO LOG ]' AS "REDOLOG FILE TYPE", a.inst_id AS inst_id, a.group#, b.thread#, b.sequence#, a.member, b.status, b.archived, (b.BYTES/1024/1024) AS SIZE_MB
from gv$logfile a, gv$log b
where a.group#=b.group#
and a.inst_id=b.inst_id
and b.thread#=(select value from v$parameter where name = 'thread')
and a.inst_id=( select instance_number from v$instance)
UNION
select '[ STANDBY REDO LOG ]' AS "REDOLOG FILE TYPE", a.inst_id AS inst_id, a.group#, b.thread#, b.sequence#, a.member, b.status, b.archived, (b.BYTES/1024/1024) AS SIZE_MB
from gv$logfile a, gv$standby_log b
where a.group#=b.group#
and a.inst_id=b.inst_id
and b.thread#=(select value from v$parameter where name = 'thread')
and a.inst_id=( select instance_number from v$instance))
order by 2,3;

The query output on Node 1 and Node 2 lists the online and standby redo log files for each thread. Now, verify that all the required parameters have appropriate values.

set lines 999 pages 999
col value for a110
col name for a50
select name, value
from v$parameter
where name in ('db_name','db_unique_name','log_archive_config', 'log_archive_dest_1','log_archive_dest_2',
'log_archive_dest_state_1','log_archive_dest_state_2', 'remote_login_passwordfile',
'log_archive_format','log_archive_max_processes','fal_server','fal_client','standby_file_management');


NAME                      VALUE
-------------------       -------------------------------
log_archive_dest_1        LOCATION=+DBWR_FRA
log_archive_dest_2        SERVICE=ORCL_PRODVIP201 NOAFFIRM ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCL_PRODRAC2
log_archive_dest_state_1  enable
log_archive_dest_state_2  ENABLE
fal_client                ORCL_PRODRAC1
fal_server                ORCL_PRODRAC2
log_archive_config        DG_CONFIG=(ORCL_PRODRAC1,ORCL_PRODRAC2)
log_archive_format        ORCL_%t_%s_%r.arc
log_archive_max_processes 30
standby_file_management   AUTO
remote_login_passwordfile EXCLUSIVE
db_name                   ORCL
db_unique_name            ORCL_PRODRAC1

Create the pfile and password file on the primary and copy them to the standby servers.

SQL> create pfile='/tmp/initORCL1.ora' from spfile;
File created.


$ orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=manager entries=5 force=y

Once the password file is copied to the other nodes, rename the file with its respective $ORACLE_SID name. For example, the password file on Node1 of both primary and standby should be named orapwORCL1, and on Node2 it should be named orapwORCL2.
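
For example, copying from Primary Node1 to Standby Node1 (a sketch; hosts and paths as used in this environment):

$ scp /tmp/initORCL1.ora oracle@prodrac201:/tmp/
$ scp $ORACLE_HOME/dbs/orapwORCL1 oracle@prodrac201:$ORACLE_HOME/dbs/orapwORCL1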

Temporarily create a static listener in the Oracle database home on Standby Node1 and start it, for the RMAN active-duplicate dataguard setup:

cat listener.ora <-- Standby Node1

SID_LIST_LISTENER1 =
 (SID_LIST =
  (SID_DESC =
   (GLOBAL_DBNAME = ORCL_PRODRAC201)
   (ORACLE_HOME = /oradb/app/oracle/product/11.2.0.4/db_1)
   (SID_NAME = ORCL1)
  )
 )

LISTENER1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = prodrac201.oracledbwr.com)(PORT = 1525))
    )
  )

ADR_BASE_LISTENER1 = /oradb/app/oracle

Start the new listener:

lsnrctl start LISTENER1

Add the below TNS entries to the tnsnames.ora file of the Oracle database home on all the nodes.

cat tnsnames.ora <-- All Nodes

ORCL_PRODRAC1 =
(DESCRIPTION =
(ADDRESS_LIST=
(LOAD_BALANCE = ON)
(ADDRESS = (PROTOCOL = TCP)(HOST = prodscn101.oracledbwr.com)(PORT = 1521)))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORCL_PRODRAC1)))

ORCL_PRODVIP101 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = prodvip101.oracledbwr.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORCL_PRODRAC1)
(INSTANCE_NAME = ORCL1)))

ORCL_PRODVIP102 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = prodvip102.oracledbwr.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORCL_PRODRAC1)
(INSTANCE_NAME = ORCL2)))

ORCL_PRODRAC2 =
(DESCRIPTION =
(ADDRESS_LIST=
(LOAD_BALANCE = ON)
(ADDRESS = (PROTOCOL = TCP)(HOST = prodscn201.oracledbwr.com)(PORT = 1521)))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORCL_PRODRAC2)
(UR = A)))

ORCL_PRODRAC201 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = prodrac201.oracledbwr.com)(PORT = 1525))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORCL_PRODRAC2)
(INSTANCE_NAME = ORCL1)
(UR = A)))


ORCL_PRODVIP201 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = prodvip201.oracledbwr.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORCL_PRODRAC2)
(INSTANCE_NAME = ORCL1)
(UR = A)))

ORCL_PRODVIP202 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = prodvip202.oracledbwr.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORCL_PRODRAC2)
(INSTANCE_NAME = ORCL2)
(UR = A)))
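
After updating tnsnames.ora, the new entries can be sanity-checked from each node with tnsping (a quick check, once the corresponding listeners are running; a successful resolution ends with an OK response):

$ tnsping ORCL_PRODRAC1
$ tnsping ORCL_PRODRAC2
$ tnsping ORCL_PRODRAC201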

Prepare Standby Site:-

Prepare Standby Node 1 for DR configuration:-

· Create required directories for Standby Servers.
· Modify the pfile for standby.

Create required dump file directories in the Standby Servers:-

On prodrac201:
mkdir -p /oradb/app/oracle/admin/ORCL_PRODRAC2/adump
mkdir -p /oradb/app/oracle/diag/rdbms/orcl_prodrac2/ORCL1
cd /oradb/app/oracle/diag/rdbms/orcl_prodrac2/ORCL1
mkdir trace cdump

On prodrac202:
mkdir -p /oradb/app/oracle/admin/ORCL_PRODRAC2/adump
mkdir -p /oradb/app/oracle/diag/rdbms/orcl_prodrac2/ORCL2
cd /oradb/app/oracle/diag/rdbms/orcl_prodrac2/ORCL2
mkdir trace cdump

Modify initORCL1.ora file (under $ORACLE_HOME/dbs) which was copied from primary to standby:-

cat $ORACLE_HOME/dbs/initORCL1.ora

__oracle_base='/oradb/app/oracle'#ORACLE_BASE set from environment
*.audit_file_dest='/oradb/app/oracle/admin/ORCL_PRODRAC2/adump'
*.audit_trail='db'
*.compatible='11.2.0.4.0'
*.db_block_size=8192
*.db_create_file_dest='+DBWR_DATA'
*.db_create_online_log_dest_1='+DBWR_DATA'
*.db_domain=''
*.db_name='ORCL'
*.control_files='+DBWR_DATA','+DBWR_DATA','+DBWR_FRA'
*.db_recovery_file_dest='+DBWR_FRA'
*.db_recovery_file_dest_size=12G
*.db_unique_name='ORCL_PRODRAC2'
*.diagnostic_dest='/oradb/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=ORCLXDB)'
*.fal_client='ORCL_PRODRAC2'
*.fal_server='ORCL_PRODRAC1'
*.log_archive_config='DG_CONFIG=(ORCL_PRODRAC1,ORCL_PRODRAC2)'
*.log_archive_dest_1='LOCATION=+DBWR_FRA'
*.log_archive_dest_2='SERVICE=ORCL_PRODVIP101 NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCL_PRODRAC1'
*.log_archive_dest_state_2='ENABLE'
*.log_archive_format='ORCL_%t_%s_%r.arc'
*.log_archive_max_processes=30
*.memory_target=1048576000
*.nls_date_format='DD-MON-YYYY HH24:MI:SS'
*.open_cursors=300
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.standby_file_management='AUTO'
*.job_queue_processes=20
*.memory_max_target=1200M
*.memory_target=1200M
*.undo_tablespace='UNDOTBS1'
*.undo_management='AUTO'
*.undo_retention=10800
*.service_names='ORCL_PRODRAC2'
*.remote_listener='prodscn201.oracledbwr.com:1521'

Create the Physical Standby Database:-

· Start the Instance in NOMOUNT using the modified standby pfile.
· Restore the Standby Database using RMAN DUPLICATE command.
· Start the Managed Recovery Process
· Create the spfile and start the database on both the nodes using spfile.
· Register the New Standby database to the OCR.

Start the Instance in NOMOUNT state:-

Set the environment in Standby Node1 and start the instance in nomount:

export ORACLE_SID=ORCL1
export ORACLE_BASE=/oradb/app/oracle
export ORACLE_HOME=/oradb/app/oracle/product/11.2.0.4/db_1
export PATH=$ORACLE_HOME/bin:$PATH

$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Mon Aug 13 22:12:49 2018

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup nomount pfile=$ORACLE_HOME/dbs/initORCL1.ora
ORACLE instance started.

Total System Global Area 1252663296 bytes
Fixed Size 2252824 bytes
Variable Size 788533224 bytes
Database Buffers 452984832 bytes
Redo Buffers 8892416 bytes
SQL> exit

$ rman target sys/manager@ORCL_PRODVIP101 auxiliary sys/manager@ORCL_PRODRAC201

Recovery Manager: Release 11.2.0.4.0 - Production on Tue Aug 14 02:17:48 2018

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1511765518)
connected to auxiliary database: ORCL (not mounted)

RMAN> run
2> {
3> allocate channel ch1 device type disk;
4> allocate channel ch2 device type disk;
5> allocate auxiliary channel ch3 device type disk;
6> allocate auxiliary channel ch4 device type disk;
7> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE NOFILENAMECHECK;
8> release channel ch1;
release channel ch2;
9> 10> release channel ch3;
11> release channel ch4;
12> }

using target database control file instead of recovery catalog
allocated channel: ch1
channel ch1: SID=105 instance=ORCL1 device type=DISK
allocated channel: ch2
channel ch2: SID=112 instance=ORCL1 device type=DISK
allocated channel: ch3
channel ch3: SID=29 device type=DISK
allocated channel: ch4
channel ch4: SID=1 device type=DISK
Starting Duplicate Db at 14-AUG-18 02:17:56
contents of Memory Script:
{
backup as copy reuse
targetfile '/oradb/app/oracle/product/11.2.0.4/db_1/dbs/orapwORCL1' auxiliary format
'/oradb/app/oracle/product/11.2.0.4/db_1/dbs/orapwORCL1' ;
}
executing Memory Script
Starting backup at 14-AUG-18 02:17:56
Finished backup at 14-AUG-18 02:17:57
contents of Memory Script:
{
backup as copy current controlfile for standby auxiliary format '+DBWR_DATA/orcl_prodrac2/controlfile/current.272.984104279';
restore clone controlfile to '+DBWR_DATA/orcl_prodrac2/controlfile/current.273.984104279' from
'+DBWR_DATA/orcl_prodrac2/controlfile/current.272.984104279';
restore clone controlfile to '+DBWR_FRA/orcl_prodrac2/controlfile/current.267.984104279' from
'+DBWR_DATA/orcl_prodrac2/controlfile/current.272.984104279';
sql clone "create spfile from memory";
shutdown clone immediate;
startup clone nomount;
sql clone "alter system set control_files =
''+DBWR_DATA/orcl_prodrac2/controlfile/current.272.984104279'', ''+DBWR_DATA/orcl_prodrac2/controlfile/current.273.984104279'', ''+DBWR_FRA/orcl_prodrac2/controlfile/current.267.984104279'' comment=
''Set by RMAN'' scope=spfile";
shutdown clone immediate;
startup clone nomount;
}
executing Memory Script
Starting backup at 14-AUG-18 02:17:58
channel ch1: starting datafile copy
copying standby control file
output file name=/oradb/app/oracle/product/11.2.0.4/db_1/dbs/snapcf_ORCL1.f tag=TAG20180814T021759 RECID=12 STAMP=984104282
channel ch1: datafile copy complete, elapsed time: 00:00:07
Finished backup at 14-AUG-18 02:18:06
Starting restore at 14-AUG-18 02:18:06
channel ch4: skipped, AUTOBACKUP already found
channel ch3: copied control file copy
Finished restore at 14-AUG-18 02:18:14
Starting restore at 14-AUG-18 02:18:15
channel ch4: skipped, AUTOBACKUP already found
channel ch3: copied control file copy
Finished restore at 14-AUG-18 02:18:23
sql statement: create spfile from memory
Oracle instance shut down
connected to auxiliary database (not started)
Oracle instance started
Total System Global Area 1252663296 bytes
Fixed Size 2252824 bytes
Variable Size 805310440 bytes
Database Buffers 436207616 bytes
Redo Buffers 8892416 bytes
allocated channel: ch3
channel ch3: SID=29 device type=DISK
allocated channel: ch4
channel ch4: SID=30 device type=DISK
sql statement: alter system set control_files = ''+DBWR_DATA/orcl_prodrac2/controlfile/current.272.984104279'', ''+DBWR_DATA/orcl_prodrac2/controlfile/current.273.984104279'', ''+DBWR_FRA/orcl_prodrac2/controlfile/current.267.984104279'' comment= ''Set by RMAN'' scope=spfile
Oracle instance shut down
connected to auxiliary database (not started)
Oracle instance started
Total System Global Area 1252663296 bytes
Fixed Size 2252824 bytes
Variable Size 805310440 bytes
Database Buffers 436207616 bytes
Redo Buffers 8892416 bytes
allocated channel: ch3
channel ch3: SID=29 device type=DISK
allocated channel: ch4
channel ch4: SID=30 device type=DISK
contents of Memory Script:
{
sql clone 'alter database mount standby database';
}
executing Memory Script
sql statement: alter database mount standby database
contents of Memory Script:
{
set newname for clone tempfile 1 to new;
switch clone tempfile all;
set newname for clone datafile 1 to new;
set newname for clone datafile 2 to new;
set newname for clone datafile 3 to new;
set newname for clone datafile 4 to new;
set newname for clone datafile 5 to new;
set newname for clone datafile 6 to new;
backup as copy reuse
datafile 1 auxiliary format new
datafile 2 auxiliary format new
datafile 3 auxiliary format new
datafile 4 auxiliary format new
datafile 5 auxiliary format new
datafile 6 auxiliary format new
;
sql 'alter system archive log current';
}
executing Memory Script
executing command: SET NEWNAME
renamed tempfile 1 to +DBWR_DATA in control file
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
Starting backup at 14-AUG-18 02:18:59
channel ch1: starting datafile copy
input datafile file number=00001 name=+DBWR_DATA/orcl_prodrac1/datafile/system.256.983984619
channel ch2: starting datafile copy
input datafile file number=00002 name=+DBWR_DATA/orcl_prodrac1/datafile/sysaux.257.983984621
output file name=+DBWR_DATA/orcl_prodrac2/datafile/sysaux.275.984104341 tag=TAG20180814T021900
channel ch2: datafile copy complete, elapsed time: 00:01:57
channel ch2: starting datafile copy
input datafile file number=00005 name=+DBWR_DATA/orcl_prodrac1/datafile/example.265.983984851
output file name=+DBWR_DATA/orcl_prodrac2/datafile/system.274.984104339 tag=TAG20180814T021900
channel ch1: datafile copy complete, elapsed time: 00:02:24
channel ch1: starting datafile copy
input datafile file number=00003 name=+DBWR_DATA/orcl_prodrac1/datafile/undotbs1.258.983984621
output file name=+DBWR_DATA/orcl_prodrac2/datafile/undotbs1.268.984104487 tag=TAG20180814T021900
channel ch1: datafile copy complete, elapsed time: 00:00:36
channel ch1: starting datafile copy
input datafile file number=00006 name=+DBWR_DATA/orcl_prodrac1/datafile/undotbs2.266.983985173
output file name=+DBWR_DATA/orcl_prodrac2/datafile/example.269.984104459 tag=TAG20180814T021900
channel ch2: datafile copy complete, elapsed time: 00:01:04
channel ch2: starting datafile copy
input datafile file number=00004 name=+DBWR_DATA/orcl_prodrac1/datafile/users.259.983984621
output file name=+DBWR_DATA/orcl_prodrac2/datafile/undotbs2.267.984104523 tag=TAG20180814T021900
channel ch1: datafile copy complete, elapsed time: 00:00:08
output file name=+DBWR_DATA/orcl_prodrac2/datafile/users.266.984104523 tag=TAG20180814T021900
channel ch2: datafile copy complete, elapsed time: 00:00:07
Finished backup at 14-AUG-18 02:22:10
sql statement: alter system archive log current
contents of Memory Script:
{
switch clone datafile all;
}
executing Memory Script
datafile 1 switched to datafile copy
input datafile copy RECID=12 STAMP=984104541 file name=+DBWR_DATA/orcl_prodrac2/datafile/system.274.984104339
datafile 2 switched to datafile copy
input datafile copy RECID=13 STAMP=984104541 file name=+DBWR_DATA/orcl_prodrac2/datafile/sysaux.275.984104341
datafile 3 switched to datafile copy
input datafile copy RECID=14 STAMP=984104541 file name=+DBWR_DATA/orcl_prodrac2/datafile/undotbs1.268.984104487
datafile 4 switched to datafile copy
input datafile copy RECID=15 STAMP=984104541 file name=+DBWR_DATA/orcl_prodrac2/datafile/users.266.984104523
datafile 5 switched to datafile copy
input datafile copy RECID=16 STAMP=984104541 file name=+DBWR_DATA/orcl_prodrac2/datafile/example.269.984104459
datafile 6 switched to datafile copy
input datafile copy RECID=17 STAMP=984104542 file name=+DBWR_DATA/orcl_prodrac2/datafile/undotbs2.267.984104523
Finished Duplicate Db at 14-AUG-18 02:22:33
released channel: ch1
released channel: ch2
released channel: ch3
released channel: ch4

Now stop the temporarily created listener, add the cluster parameters on the standby, and convert it to a two node RAC dataguard.

Stop the new listener:

lsnrctl stop LISTENER1

Create pfile from spfile and add the cluster parameters:

SQL> create pfile from spfile;
File created.

SQL> shut immediate

Append the following cluster parameters to the pfile ($ORACLE_HOME/dbs/initORCL1.ora):
*.cluster_database_instances=2
*.cluster_database=true
ORCL1.local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.224)(PORT=1521))'
ORCL2.local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.225)(PORT=1521))'
ORCL1.instance_number=1
ORCL2.instance_number=2
ORCL1.instance_name='ORCL1'
ORCL2.instance_name='ORCL2'
ORCL1.thread=1
ORCL2.thread=2
ORCL1.undo_tablespace='UNDOTBS1'
ORCL2.undo_tablespace='UNDOTBS2'

Copy the new pfile to Node2 of the standby (as initORCL2.ora) and start up the instance on both nodes.

Environment variable setup on node1:

$ export ORACLE_SID=ORCL1
$ export ORACLE_HOME=/oradb/app/oracle/product/11.2.0.4/db_1
$ orapwd file=$ORACLE_HOME/dbs/orapwORCL1 password=manager force=y
$ sqlplus / as sysdba
SQL> startup mount pfile=$ORACLE_HOME/dbs/initORCL1.ora

Environment variable setup on node2:

$ export ORACLE_SID=ORCL2
$ export ORACLE_HOME=/oradb/app/oracle/product/11.2.0.4/db_1
$ orapwd file=$ORACLE_HOME/dbs/orapwORCL2 password=manager force=y
$ sqlplus / as sysdba
SQL> startup mount pfile=$ORACLE_HOME/dbs/initORCL2.ora

Register the database instances with the CRS framework using these commands:

$ srvctl add database -d ORCL_PRODRAC2 -o /oradb/app/oracle/product/11.2.0.4/db_1
$ srvctl add instance -d ORCL_PRODRAC2 -i ORCL1 -n prodrac201
$ srvctl add instance -d ORCL_PRODRAC2 -i ORCL2 -n prodrac202

Now that the instances are registered with CRS, shut down both standby instances and start them using srvctl:

$ srvctl start database -d ORCL_PRODRAC2 -o mount
$ srvctl status database -d ORCL_PRODRAC2 -v

Start the recovery process in Standby:

alter database recover managed standby database disconnect from session;

Create an spfile in ASM and point to it from the pfiles in the $ORACLE_HOME/dbs location on both standby nodes.

SQL> create spfile='+DBWR_DATA/ORCL_PRODRAC2/PARAMETERFILE/spfileORCL.ora' from pfile;

File created.

Use the following query to check the status of all instances of the converted RAC database:

SQL> set lines 180
SQL> col INST_NAME for a80
SQL> select * from v$active_instances;
INST_NUMBER   INST_NAME
------------  -------------------------------------------
          1   prodrac201.oracledbwr.com:ORCL1
          2   prodrac202.oracledbwr.com:ORCL2

In Node1 & Node2 of Standby:-

$ cat initORCL1.ora
spfile='+DBWR_DATA/ORCL_PRODRAC2/PARAMETERFILE/spfileORCL.ora'
$ cat initORCL2.ora
spfile='+DBWR_DATA/ORCL_PRODRAC2/PARAMETERFILE/spfileORCL.ora'

Now shut down both instances, start any one of them using the spfile from ASM, and start the media recovery:-

Here, I start Node1 of the standby and start the media recovery process on it to apply the archived logs from both primary nodes.

$ srvctl stop database -d ORCL_PRODRAC2
$ srvctl start instance -d ORCL_PRODRAC2 -i ORCL1 -o mount 
$ srvctl status database -d ORCL_PRODRAC2 -v
$ sqlplus / as sysdba
SQL> alter database recover managed standby database disconnect from session;
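
To confirm that the media recovery process is up and applying on the standby, query GV$MANAGED_STANDBY (a quick check; MRP0 should show a status such as APPLYING_LOG or WAIT_FOR_LOG):

SQL> SELECT INST_ID, PROCESS, STATUS, THREAD#, SEQUENCE# FROM GV$MANAGED_STANDBY WHERE PROCESS LIKE 'MRP%';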

We can verify the sync between the primary (Node1 & Node2) and the standby (Node1) by switching the logfile on the primary and then either tailing the alert log from the OS or querying it from SQL*Plus. Here, we go with the query.

In Primary – Node1:

SQL> alter system switch logfile;

System altered.

In Standby – Node1:

SQL> @taln.sql

 ID Date Message
-------- -------------------- ------------------------------------------------------------------------------------------------------------------------
6161 14.08.2018 07:13:21 RFS[14]: Assigned to RFS process 56469
6162 14.08.2018 07:13:21 RFS[14]: Selected log 10 for thread 2 sequence 34 dbid 1511765518 branch 983984785
6163 14.08.2018 07:15:21 RFS[14]: Selected log 11 for thread 2 sequence 35 dbid 1511765518 branch 983984785
6164 14.08.2018 07:15:21 Archived Log entry 33 added for thread 2 sequence 34 ID 0x5a1b820b dest 1:
6165 14.08.2018 07:40:19 RFS[13]: Selected log 8 for thread 1 sequence 40 dbid 1511765518 branch 983984785
6166 14.08.2018 07:40:19 Archived Log entry 34 added for thread 1 sequence 39 ID 0x5a1b820b dest 1:
6167 14.08.2018 07:40:40 Archived Log entry 35 added for thread 2 sequence 35 ID 0x5a1b820b dest 1:
6168 14.08.2018 07:40:40 RFS[14]: Selected log 10 for thread 2 sequence 36 dbid 1511765518 branch 983984785
6169 14.08.2018 07:41:08 RFS[13]: Selected log 7 for thread 1 sequence 41 dbid 1511765518 branch 983984785
6170 14.08.2018 07:41:08 Archived Log entry 36 added for thread 1 sequence 40 ID 0x5a1b820b dest 1:

In Primary – Node2:

SQL> alter system switch logfile;

System altered.

In Standby – Node1:

SQL> @taln.sql

 ID Date Message
-------- -------------------- ------------------------------------------------------------------------------------------------------------------------
6163 14.08.2018 07:15:21 RFS[14]: Selected log 11 for thread 2 sequence 35 dbid 1511765518 branch 983984785
6164 14.08.2018 07:15:21 Archived Log entry 33 added for thread 2 sequence 34 ID 0x5a1b820b dest 1:
6165 14.08.2018 07:40:19 RFS[13]: Selected log 8 for thread 1 sequence 40 dbid 1511765518 branch 983984785
6166 14.08.2018 07:40:19 Archived Log entry 34 added for thread 1 sequence 39 ID 0x5a1b820b dest 1:
6167 14.08.2018 07:40:40 Archived Log entry 35 added for thread 2 sequence 35 ID 0x5a1b820b dest 1:
6168 14.08.2018 07:40:40 RFS[14]: Selected log 10 for thread 2 sequence 36 dbid 1511765518 branch 983984785
6169 14.08.2018 07:41:08 RFS[13]: Selected log 7 for thread 1 sequence 41 dbid 1511765518 branch 983984785
6170 14.08.2018 07:41:08 Archived Log entry 36 added for thread 1 sequence 40 ID 0x5a1b820b dest 1:
6171 14.08.2018 07:46:20 Archived Log entry 37 added for thread 2 sequence 36 ID 0x5a1b820b dest 1:
6172 14.08.2018 07:46:21 RFS[14]: Selected log 10 for thread 2 sequence 37 dbid 1511765518 branch 983984785
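As an additional sync check, the highest applied sequence for each thread can be queried on the standby (a standard Data Guard query):

SQL> select thread#, max(sequence#) applied_seq from v$archived_log where applied='YES' group by thread#;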

DONE!!!


Voting Disk In Oracle RAC Database

Description:-

  • In this article, we are going to look at voting disk concepts in an Oracle RAC database.
  • Oracle Clusterware uses voting disk files to determine which nodes are members of a cluster.
  • You can configure voting disks on Oracle ASM, or you can configure voting disks on shared storage.
  • If you do not configure voting disks on Oracle ASM, then for high availability, Oracle recommends that you have a minimum of three voting disks on physically separate storage. This avoids having a single point of failure. If you configure a single voting disk, then you must use external mirroring to provide redundancy.
  • The number of voting disks depends on the redundancy type. From 11.2.0.x onwards, the OCR and voting files are placed in an ASM disk group.

External redundancy = 1 voting disk
Normal redundancy   = 3 voting disks
High redundancy     = 5 voting disks

You can have up to 32 voting disks in your cluster.

Oracle recommends that you configure multiple voting disks during Oracle Clusterware installation to improve availability. If you choose to put the voting disks into an Oracle ASM disk group, then Oracle ASM ensures the configuration of multiple voting disks if you use a normal or high redundancy disk group.

To identify the voting disk location :-

[oracle@rac1 ~]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE b4a7f383bb414f7ebf6aaae7c3873401 (/dev/oracleasm/disks/ASMDISK1) [DATA]
Located 1 voting disk(s).

To back up the voting disk (before 11gR2) :-

dd if=voting_disk_name of=backup_file_name

The following can be used to restore the voting disk from the backup file created:

dd if=backup_file_name of=voting_disk_name
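For example, with purely hypothetical device and backup paths (placeholders, not from this environment):

dd if=/dev/raw/raw1 of=/u01/backup/votedisk1.bak bs=1M
dd if=/u01/backup/votedisk1.bak of=/dev/raw/raw1 bs=1M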

In previous versions of Oracle Clusterware you needed to back up the voting disks with the dd command. Starting with Oracle Clusterware 11g Release 2, you no longer need to back up the voting disks: they are automatically backed up as part of the OCR. In fact, Oracle explicitly indicates that you should not use a backup tool like dd to back up or restore voting disks, as doing so can lead to the loss of the voting disk.
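Since the voting disk data is backed up automatically as part of the OCR from 11gR2 onwards, the automatic OCR backups can be listed as root:

# ocrconfig -showbackup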

What Information is stored in VOTING DISK/FILE?

It contains two types of data:

Static data: information about the nodes in the cluster

Dynamic data: disk heartbeat logging

It contains the important details of cluster node membership, such as:

  • Which node is part of the cluster?
  • Which node is leaving the cluster?
  • Which node is joining the cluster?

Although the voting disk contents do not change frequently, you will need to back up the voting disk file every time:
– you add or remove a node from the cluster, or
– immediately after you configure or upgrade a cluster.

To move the voting disk, create another disk group with external redundancy named ‘DATA1’.

  • From 11gR2, voting files are stored on an ASM disk group.
  • The “add” and “delete” commands are not available when voting files are stored on an ASM disk group; only the “replace” command is available.
  • Note: You cannot create more than one voting disk in the same or in a different disk group when using external redundancy in 11.2.

To identify the status and voting disk location :-

[oracle@rac1 ~]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE b4a7f383bb414f7ebf6aaae7c3873401 (/dev/oracleasm/disks/ASMDISK1) [DATA]
Located 1 voting disk(s).

Replace a voting disk :-

[oracle@rac1 ~]$ crsctl replace votedisk +DATA1
Successful addition of voting disk 9789b4bf42214f8bbf14fda587ba331a.
Successful deletion of voting disk b4a7f383bb414f7ebf6aaae7c3873401.
Successfully replaced voting disk group with +DATA1.
CRS-4266: Voting file(s) successfully replaced

Check the status and verify voting disk location :-

[oracle@rac1 ~]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 9789b4bf42214f8bbf14fda587ba331a (/dev/oracleasm/disks/ASMDISK2) [DATA1]
Located 1 voting disk(s).

Why should we have an ODD number of voting disks?

A node must be able to access more than half of the voting disks at any time.

Scenario:

Let us consider a 2-node cluster with an even number of voting disks, say 2.

  • Say node 1 is able to access only voting disk 1.
  • Node 2 is able to access only voting disk 2.
  • From the above, we see that there is no common file where the clusterware can check the heartbeat of both nodes.
  • If we have 3 voting disks and both nodes are able to access more than half, i.e., 2 voting disks, there will be at least one disk accessible by both nodes. The clusterware can use this disk to check the heartbeat of the nodes.
  • A node unable to access more than half of the voting disks will be evicted from the cluster by another node that can, to maintain the integrity of the cluster (see the majority arithmetic below).
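The underlying rule is simple majority arithmetic: with N voting disks, a node must be able to access at least floor(N/2) + 1 of them to stay in the cluster:

N = 1 -> needs 1 (no voting disk failure tolerated)
N = 2 -> needs 2 (still no failure tolerated; the extra disk adds nothing)
N = 3 -> needs 2 (one voting disk can fail)
N = 5 -> needs 3 (two voting disks can fail)

This is why voting disks are always configured in odd numbers.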

Recover the corrupted voting disk :-

ASMCMD> lsdsk -G DATA1

Path
/dev/oracleasm/disks/ASMDISK2

As the root user, simulate corruption by overwriting the ASM disk:

# dd if=/dev/zero of=/dev/oracleasm/disks/ASMDISK2 bs=4096 count=1000000

The above session will hang.

Check the clusterware status in another session:

**************************************************************
rac1:
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager
**************************************************************
rac2:
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager
**************************************************************

After rebooting both nodes, check the clusterware status :-

[oracle@rac1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager

The voting disk cannot be restored back to the DATA1 disk group, as the disk in DATA1 has been corrupted.

Stop the CRS forcefully on both nodes and check the clusterware status,

[root@rac1 bin]# ./crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.

Start the CRS in exclusive mode on any one node,

[root@rac1 bin]# ./crsctl start crs -excl
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac1'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2676: Start of 'ora.crf' on 'rac1' succeeded
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2679: Attempting to clean 'ora.asm' on 'rac1'
CRS-2681: Clean of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac1'
CRS-2676: Start of 'ora.storage' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded

After the CRS exclusive startup, check the clusterware status,

[root@rac1 bin]# ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4692: Cluster Ready Services is online in exclusive mode
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

Using ASMCA, recreate the ASM disk group named ‘DATA1’, where the voting disk was placed earlier.

ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 30718 20165 0 20165 0 N DATA/
MOUNTED EXTERN N 512 4096 1048576 10236 10183 0 10183 0 N DATA1/

Check the voting disk location :-

[oracle@rac1 ~]$ crsctl query css votedisk
Located 0 voting disk(s).

Replace the voting disk :-

[oracle@rac1 ~]$ crsctl replace votedisk +DATA1
Successful addition of voting disk 5a1ef50fe3354f35bfa7f86a6ccb8990.
Successfully replaced voting disk group with +DATA1.
CRS-4266: Voting file(s) successfully replaced

[oracle@rac1 ~]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 5a1ef50fe3354f35bfa7f86a6ccb8990 (/dev/oracleasm/disks/ASMDISK2) [DATA1]
Located 1 voting disk(s).

Stop the CRS running in exclusive mode,

# crsctl stop crs

Start the CRS (clusterware) on all nodes,

# crsctl start crs

Check the clusterware status on both nodes,

[root@rac1 bin]# ./crsctl check cluster -all
**************************************************************
rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

Reference:-

https://docs.oracle.com/cd/E11882_01/rac.112/e41959/votocr.htm#CWADD91889

Catch Me On:- Hariprasath Rajaram

LinkedIn:https://www.linkedin.com/in/hari-prasath-aa65bb19/
Facebook:https://www.facebook.com/HariPrasathdba
FB Group:https://www.facebook.com/groups/894402327369506/
FB Page: https://www.facebook.com/dbahariprasath/?
Twitter:  https://twitter.com/hariprasathdba