Oracle 12c: 2-Node RAC to Single-Instance Standby Database Setup

Steps for creating a single-instance standby database from a RAC primary database :-

1. Enable archivelog mode :
$ sqlplus / as sysdba

SQL> alter database archivelog;

Database altered.

SQL> alter database open;

Database altered.

SQL> archive log list
Database log mode                  Archive Mode
Automatic archival                 Enabled
Archive destination                +DG01
Oldest online log sequence         299300
Next log sequence to archive       299305
Current log sequence               299305
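
Note: ALTER DATABASE ARCHIVELOG can only be issued while the database is mounted, not open. A minimal sketch of the usual sequence on a RAC primary, assuming the database is first stopped cleanly with srvctl (database name prod taken from this setup):

$ srvctl stop database -d prod
$ sqlplus / as sysdba
SQL> startup mount
SQL> alter database archivelog;
SQL> shutdown immediate
SQL> exit
$ srvctl start database -d prod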

2. Enable force logging mode:

SQL> select force_logging from v$database;

FORCE_LOGGING
---------------------------------------
NO

SQL> alter database force logging;

Database altered.

SQL> select force_logging from v$database;

FORCE_LOGGING
---------------------------------------
YES

3. Primary parameter configuration :

SQL> alter system set log_archive_config='DG_CONFIG=(prod,proddr)' SCOPE=both sid='*';

System altered.

SQL> alter system set log_archive_dest_1='LOCATION=/u01/app/oracle/oradata/prod/arch/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=prod' SCOPE=both sid='*';

System altered.

SQL> alter system set log_archive_dest_2='SERVICE=proddr LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=proddr' SCOPE=both sid='*';
SQL> alter system set fal_server=prod SCOPE=both sid='*';

System altered.

SQL> alter system set fal_client=proddr SCOPE=both sid='*';

System altered.

SQL> alter system set standby_file_management=auto SCOPE=both sid='*';

System altered.

SQL> alter system set REMOTE_LOGIN_PASSWORDFILE=exclusive scope=spfile;

System altered.
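
To confirm the settings are in effect on the primary, the archive destinations and Data Guard parameters can be checked, for example:

SQL> show parameter log_archive_config
SQL> show parameter log_archive_dest_2
SQL> show parameter fal
SQL> select dest_id, status, destination from v$archive_dest where dest_id in (1,2);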

4. Standby Listener Configuration:

[oracle@proddr01 ]$ export ORACLE_SID=prod
[oracle@proddr01 ]$ export ORACLE_HOME=/oracle/app/oracle/product/12.1.0/dbhome_1
[oracle@proddr01 admin]$ cd $ORACLE_HOME/network/admin
[oracle@proddr01 admin]$ cat listener.ora

# listener.ora Network Configuration File: /oracle/app/oracle/product/12.1.0/dbhome_1/network/admin/listener.ora

# Generated by Oracle configuration tools.

SID_LIST_LISTENER =

  (SID_LIST =

    (SID_DESC =

      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)

      (SID_NAME = prod )

    )

  )

LISTENER_PRODDR=

  (DESCRIPTION_LIST =

    (DESCRIPTION =

      (ADDRESS = (PROTOCOL = TCP)(HOST = proddr01)(PORT = 1521))

    )

  )

ADR_BASE_LISTENER = /u01/app/oracle

5. TNS Connection string Configuration :

The primary and standby tnsnames.ora entries should be present on both servers:

[oracle@proddr01 admin]$ cd $ORACLE_HOME/network/admin
[oracle@proddr01 admin]$ cat tnsnames.ora

PROD =

  (DESCRIPTION =

    (ADDRESS_LIST =

      (ADDRESS = (PROTOCOL = TCP)(HOST = prod1)(PORT = 1521))

    )

    (CONNECT_DATA =

       (SERVER = DEDICATED)

        (SID = prod1)

    )

  )

PRODDR =

  (DESCRIPTION =

    (ADDRESS = (PROTOCOL = TCP)(HOST = proddr01)(PORT = 1521))

    (CONNECT_DATA =

      (SERVER = DEDICATED)

      (SID = prod)

    )

  )

6. Create the required directories on the standby server:

[oracle@proddr01 admin]$ mkdir /oracle/app/oracle/oradata/proddr/ctrl
[oracle@proddr01 admin]$ mkdir /oracle/app/oracle/oradata/proddr/data
[oracle@proddr01 admin]$ mkdir /oracle/app/oracle/oradata/proddr/logs
[oracle@proddr01 admin]$ mkdir /oracle/app/oracle/oradata/proddr/arch
[oracle@proddr01 admin]$ mkdir /oracle/app/oracle/admin/proddr/adump
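
The same directories can also be created in a single command with mkdir -p, which creates any missing parent directories as well:

[oracle@proddr01 admin]$ mkdir -p /oracle/app/oracle/oradata/proddr/{ctrl,data,logs,arch} /oracle/app/oracle/admin/proddr/adump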

7. Start Standby listener :

[oracle@proddr01 admin]$ lsnrctl start LISTENER_PRODDR

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 28-JAN-2019 14:05:49

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Starting listener to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=proddr01.localdomain.com)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER_PRODDR
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 03-DEC-2018 14:09:08
Uptime 55 days 23 hr. 56 min. 40 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /oracle/app/oracle/product/12.1.0/db_1/network/admin/listener.ora
Listener Log File /oracle/app/oracle/diag/tnslsnr/proddr01/listener_proddr/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=proddr01)(PORT=1521)))
Services Summary...
Service "proddr" has 1 instance(s).
Instance "proddr", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

8. Copy the password file and parameter file to the standby server:

  • After copying the pfile to the standby, keep only the following parameter entry in it:

db_name

[oracle@prod1 ~]$ cd $ORACLE_HOME/dbs
[oracle@prod1 dbs]$ scp initprod.ora orapwprod oracle@proddr01:/oracle/app/oracle/product/12.1.0/dbhome_1/dbs
oracle@proddr01's password: 
initprod.ora  100% 1536     1.5KB/s   00:00
orapwprod     100% 1536     1.5KB/s   00:00                                 
[oracle@proddr01 dbs]$ cat initprod.ora

db_name='prod'

9. Check connectivity between primary and standby side :

[oracle@proddr01 ]$ tnsping prod      [run on both servers]

[oracle@proddr01 ]$ tnsping proddr    [run on both servers]

10. Standby Database Creation :

Startup in nomount stage :

[oracle@proddr01 ]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Thu Jan 29 01:12:25 2019

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup nomount

ORACLE instance started.

Total System Global Area  217157632 bytes
Fixed Size                  2211928 bytes
Variable Size             159387560 bytes
Database Buffers           50331648 bytes
Redo Buffers                5226496 bytes

11. Connect RMAN and create the standby database.

Note that cluster_database is set to FALSE in the DUPLICATE command, since the target standby is a single instance.

[oracle@proddr01 ]$ rman target sys/****@prod auxiliary sys/****@proddr
Recovery Manager: Release 12.1.0.2.0 - Production on Sun Jan 27 16:15:10 2019 Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

connected to target database: PROD (DBID=1459429229)
connected to auxiliary database: PROD (not mounted)

RMAN> run
{
allocate channel prmy1 type disk;
allocate channel prmy2 type disk;
allocate channel prmy3 type disk;
allocate channel prmy4 type disk;
allocate auxiliary channel stby type disk;
duplicate target database for standby from active database
spfile
parameter_value_convert 'prod','proddr'
set db_file_name_convert='+DG01/prod/datafile','/oradata1/proddr/data' 
set db_unique_name='proddr'
set cluster_database='false'
set log_file_name_convert='+DG01/prod/onlinelog','/oradata1/proddr/logs' 
set control_files='/oracle/app/oracle/oradata/proddr/ctrl/control.ctl'
set fal_client='proddr'
set fal_server='prod'
set audit_file_dest='/oracle/app/oracle/admin/proddr/adump'
set log_archive_config='dg_config=(proddr,prod)'
set log_archive_dest_1='location=/oradata1/prod/arch'
set log_archive_dest_2='service=prod async valid_for=(online_logfiles,primary_role) db_unique_name=prod'
set sga_target='50GB'
set sga_max_size='50GB'
set undo_tablespace='UNDOTBS1'
nofilenamecheck;
}

using target database control file instead of recovery catalog
allocated channel: prmy1
channel prmy1: SID=42 device type=DISK
 
allocated channel: prmy2
channel prmy2: SID=36 device type=DISK
 
allocated channel: prmy3 
channel prmy3 : SID=45 device type=DISK

allocated channel: prmy4 
channel prmy4 : SID=45 device type=DISK
 
allocated channel: stby
channel stby: SID=20 device type=DISK
 
Starting Duplicate Db at 28-JAN-19
.
.
.
.
.
Finished Duplicate Db at 28-JAN-19
released channel: prmy1
released channel: prmy2
released channel: prmy3
released channel: prmy4
released channel: stby
RMAN>
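
Once the duplicate finishes, the role and unique name of the new standby can be confirmed before starting recovery, for example:

SQL> select name, db_unique_name, database_role, open_mode from v$database;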

12. Start managed recovery (MRP) on the standby:

[oracle@proddr01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Mon Jan 28 10:36:39 2019

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production

With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> alter database recover managed standby database disconnect from session;

Database altered.
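
As an alternative, real-time apply can be enabled so redo is applied as it arrives rather than at log switch (a sketch, assuming standby redo logs have been created on the standby):

SQL> alter database recover managed standby database cancel;
SQL> alter database recover managed standby database using current logfile disconnect from session;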

13. Verify standby synchronization:

SQL> SELECT ARCH.THREAD# "Thread",
            ARCH.SEQUENCE# "Last Sequence Received",
            APPL.SEQUENCE# "Last Sequence Applied",
            (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
       FROM (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG
              WHERE (THREAD#, FIRST_TIME) IN
                    (SELECT THREAD#, MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
            (SELECT THREAD#, SEQUENCE# FROM V$LOG_HISTORY
              WHERE (THREAD#, FIRST_TIME) IN
                    (SELECT THREAD#, MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
      WHERE ARCH.THREAD# = APPL.THREAD#
      ORDER BY 1;

Thread     Last Sequence Received Last Sequence Applied Difference
---------- ---------------------- --------------------- -----------
1          299314                 299314                0
2          149803                 149803                0
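
Any gap in the archived logs received by the standby, and the status of the MRP process, can also be checked on the standby, for example:

SQL> select thread#, low_sequence#, high_sequence# from v$archive_gap;

SQL> select process, status, thread#, sequence# from v$managed_standby where process like 'MRP%';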

 


Step by Step Deleting Node In Oracle RAC (12c Release 1) Environment

 

Steps for deleting node in Oracle RAC (12c Release 1) environment :

Steps for Deleting an Instance From the Cluster database :-

Invoke dbca from node 1 (racpb1) :

[oracle@racpb1 ~]$ . .bash_profile 
[oracle@racpb1 ~]$ 
[oracle@racpb1 ~]$ dbca
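
As an alternative to the DBCA GUI, the instance can be removed in silent mode; a sketch of the typical syntax (instance and node names taken from this setup, password placeholder to be replaced):

[oracle@racpb1 ~]$ dbca -silent -deleteInstance -nodeList racpb3 -gdbName orcl11g -instanceName orcl11g3 -sysDBAUserName sys -sysDBAPassword <sys_password>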

Check the running instance status :

[oracle@racpb1 ~]$ srvctl status database -d orcl11g
Instance orcl11g1 is running on node racpb1
Instance orcl11g2 is running on node racpb2

Verify that the instance has been removed from the OCR :

[oracle@racpb1 ~]$ srvctl config database -d orcl11g
Database unique name: orcl11g
Database name: orcl11g
Oracle home: /u01/app/oracle/product/12.1.0/db_1
Oracle user: oracle
Spfile: +DATA/orcl11g/spfileorcl11g.ora
Password file: 
Domain: localdomain.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: 
Disk Groups: DATA
Mount point paths: 
Services: 
Type: RAC
Start concurrency: 
Stop concurrency: 
OSDBA group: dba
OSOPER group: oinstall
Database instances: orcl11g1,orcl11g2
Configured nodes: racpb1,racpb2
Database is administrator managed

Remove Oracle RAC Database home :-

Disable and stop the listener on the node being removed :

[oracle@racpb3 ~]$ srvctl status listener -l LISTENER
Listener LISTENER is enabled
Listener LISTENER is running on node(s): racpb3,racpb2,racpb1
[oracle@racpb3 ~]$ srvctl disable listener -l LISTENER -n racpb3
[oracle@racpb3 ~]$ srvctl stop listener -l LISTENER -n racpb3

Update the inventory on the node being deleted (racpb3) :

[oracle@racpb3 ~]$ export ORACLE_SID=orcl11g3
[oracle@racpb3 ~]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
[oracle@racpb3 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@racpb3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1 "CLUSTER_NODES={racpb3}" -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 5869 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

Deinstall ORACLE_HOME :

Specify the “-local” flag so that only the local node’s software is removed.

[oracle@racpb3 ~]$ $ORACLE_HOME/deinstall/deinstall -local

Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DECONFIG TOOL START ############

######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/app/oracle/product/12.1.0/db_1
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/12.1.0/grid
The following nodes are part of this cluster: racpb3,racpb2,racpb1
Checking for sufficient temp space availability on node(s) : 'racpb3'

## [END] Install check configuration ##

Network Configuration check config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2018-12-28_11-36-29-PM.log
Network Configuration check config END
Database Check Configuration START
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2018-12-28_11-36-31-PM.log
Use comma as separator when specifying list of values as input

Specify the list of database names that are configured locally on this node for this Oracle home. Local configurations of the discovered databases will be removed []: orcl11g
Database Check Configuration END
Oracle Configuration Manager check START
OCM check log file location : /u01/app/oraInventory/logs//ocm_check7641.log
Oracle Configuration Manager check END

######################### DECONFIG CHECK OPERATION END #########################

####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/12.1.0/grid
The following nodes are part of this cluster: racpb3,racpb2,racpb1
The cluster node(s) on which the Oracle home deinstallation will be performed are:racpb3
Oracle Home selected for deinstall is: /u01/app/oracle/product/12.1.0/db_1
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
The option -local will not modify any database configuration for this Oracle home.
Checking the config status for CCR
Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-12-28_11-36-19-PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-12-28_11-36-19-PM.err'
######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2018-12-28_11-37-08-PM.log
Network Configuration clean config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2018-12-28_11-37-08-PM.log
Network Configuration clean config END
Oracle Configuration Manager clean START
OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean7641.log
Oracle Configuration Manager clean END

######################### DECONFIG CLEAN OPERATION END #########################

####################### DECONFIG CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
#######################################################################

############# ORACLE DECONFIG TOOL END #############

Using properties file /tmp/deinstall2018-12-28_11-27-37PM/response/deinstall_2018-12-28_11-36-19-PM.rsp
Location of logs /u01/app/oraInventory/logs/
############ ORACLE DEINSTALL TOOL START ############

####################### DEINSTALL CHECK OPERATION SUMMARY #######################
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-12-28_11-36-19-PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-12-28_11-36-19-PM.err'

######################## DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to racpb3
Setting CLUSTER_NODES to racpb3
Setting CRS_HOME to false
Setting oracle.installer.invPtrLoc to /tmp/deinstall2018-12-28_11-27-37PM/oraInst.loc
Setting oracle.installer.local to true

## [END] Preparing for Deinstall ##

Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/oracle/product/12.1.0/db_1' from the central inventory on the local node : Done

Delete directory '/u01/app/oracle/product/12.1.0/db_1' on the local node : Done

The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is in use by Oracle Home '/u01/app/12.1.0/grid'.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2018-12-28_11-27-37PM' on node 'racpb3'

## [END] Oracle install clean ##

######################### DEINSTALL CLEAN OPERATION END #########################

####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
Successfully detached Oracle home '/u01/app/oracle/product/12.1.0/db_1' from the central inventory on the local node.
Successfully deleted directory '/u01/app/oracle/product/12.1.0/db_1' on the local node.
Oracle Universal Installer cleanup was successful.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################

############# ORACLE DEINSTALL TOOL END #############

Update the inventory on the remaining nodes :

[oracle@racpb1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1 "CLUSTER_NODES={racpb1,racpb2}" 
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 5999 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

Remove GRID_HOME :-

Check the pinned status of nodes :

[oracle@racpb1 ~]$ olsnodes -s -t
racpb1 Active Unpinned
racpb2 Active Unpinned
racpb3 Active Unpinned

If the node is pinned, run crsctl unpin css to unpin it before proceeding.
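
For example, as root from the Grid home (not required here, since olsnodes reports all nodes as Unpinned):

[root@racpb1 ~]# /u01/app/12.1.0/grid/bin/crsctl unpin css -n racpb3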

Deconfigure the Oracle Clusterware stack on the node being deleted (racpb3):

[root@racpb3 ~]# /u01/app/12.1.0/grid/crs/install/rootcrs.pl -deconfig -force -verbose
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
Network 1 exists
Subnet IPv4: 192.168.12.0/255.255.255.0/eth0, static
Subnet IPv6: 
Ping Targets: 
Network is enabled
Network is individually enabled on nodes: 
Network is individually disabled on nodes: 
VIP exists: network number 1, hosting node racpb1
VIP Name: racvr1
VIP IPv4 Address: 192.168.12.130
VIP IPv6 Address: 
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
VIP exists: network number 1, hosting node racpb2
VIP Name: racvr2
VIP IPv4 Address: 192.168.12.131
VIP IPv6 Address: 
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
VIP exists: network number 1, hosting node racpb3
VIP Name: racvr3
VIP IPv4 Address: 192.168.12.132
VIP IPv6 Address: 
VIP is enabled.
VIP is individually enabled on nodes: 
VIP is individually disabled on nodes: 
ONS exists: Local port 6100, remote port 6200, EM port 2016, Uses SSL false
ONS is enabled
ONS is individually enabled on nodes: 
ONS is individually disabled on nodes: 
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racpb3'
CRS-2673: Attempting to stop 'ora.crsd' on 'racpb3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'racpb3'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'racpb3'
CRS-2677: Stop of 'ora.DATA.dg' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'racpb3'
CRS-2677: Stop of 'ora.asm' on 'racpb3' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'racpb3' has completed
CRS-2677: Stop of 'ora.crsd' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.evmd' on 'racpb3'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'racpb3'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'racpb3'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'racpb3'
CRS-2677: Stop of 'ora.drivers.acfs' on 'racpb3' succeeded
CRS-2677: Stop of 'ora.evmd' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'racpb3'
CRS-2673: Attempting to stop 'ora.ctssd' on 'racpb3'
CRS-2673: Attempting to stop 'ora.storage' on 'racpb3'
CRS-2677: Stop of 'ora.gpnpd' on 'racpb3' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'racpb3' succeeded
CRS-2677: Stop of 'ora.storage' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'racpb3'
CRS-2677: Stop of 'ora.ctssd' on 'racpb3' succeeded
CRS-2677: Stop of 'ora.crf' on 'racpb3' succeeded
CRS-2677: Stop of 'ora.asm' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'racpb3'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'racpb3'
CRS-2677: Stop of 'ora.cssd' on 'racpb3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'racpb3'
CRS-2677: Stop of 'ora.gipcd' on 'racpb3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racpb3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2018/12/29 00:13:32 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.

2018/12/29 00:14:03 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.

2018/12/29 00:14:05 CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node

Delete the node from the clusterware configuration, running from one of the remaining nodes :

[root@racpb1 ~]# /u01/app/12.1.0/grid/bin/crsctl delete node -n racpb3
CRS-4661: Node racpb3 successfully deleted.

Check Clusterware status :

[oracle@racpb1 ~]$ crs_stat -t -v
Name Type R/RA F/FT Target State Host 
----------------------------------------------------------------------
ora.DATA.dg ora....up.type 0/5 0/ ONLINE ONLINE racpb1 
ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE racpb1 
ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racpb2 
ora....N2.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racpb1 
ora....N3.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racpb1 
ora.MGMTLSNR ora....nr.type 0/0 0/0 ONLINE ONLINE racpb2 
ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE racpb1 
ora.cvu ora.cvu.type 0/5 0/0 ONLINE ONLINE racpb1 
ora.mgmtdb ora....db.type 0/2 0/1 ONLINE ONLINE racpb2 
ora....network ora....rk.type 0/5 0/ ONLINE ONLINE racpb1 
ora.oc4j ora.oc4j.type 0/1 0/2 ONLINE ONLINE racpb2 
ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE racpb1 
ora.orcl11g.db ora....se.type 0/2 0/1 ONLINE ONLINE racpb1 
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE racpb1 
ora....B1.lsnr application 0/5 0/0 ONLINE ONLINE racpb1 
ora.racpb1.ons application 0/3 0/0 ONLINE ONLINE racpb1 
ora.racpb1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racpb1 
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE racpb2 
ora....B2.lsnr application 0/5 0/0 ONLINE ONLINE racpb2 
ora.racpb2.ons application 0/3 0/0 ONLINE ONLINE racpb2 
ora.racpb2.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racpb2 
ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racpb2 
ora.scan2.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racpb1 
ora.scan3.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racpb1
[oracle@racpb1 ~]$ crsctl check cluster -all
**************************************************************
racpb1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racpb2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[oracle@racpb1 ~]$ olsnodes -s -t
racpb1 Active Unpinned
racpb2 Active Unpinned

Update Inventory :

[oracle@racpb3 ~]$ grid
[oracle@racpb3 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@racpb3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={racpb3}" CRS=TRUE -local 
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 5980 MB Passed
The inventory pointer is located at /etc/oraInst.loc


Deinstall GRID_HOME :

[oracle@racpb3 ~]$ cd /u01/app/12.1.0/grid/deinstall  
[oracle@racpb3 deinstall]$ ./deinstall -local  

Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/
############ ORACLE DECONFIG TOOL START ############
######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /u01/app/12.1.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Standalone Server
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home 
## [END] Install check configuration ##
Traces log file: /u01/app/oraInventory/logs//crsdc_2018-12-28_08-35-48PM.log
Network Configuration check config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2018-12-28_08-35-48-PM.log
Specify all Oracle Restart enabled listeners that are to be de-configured. Enter .(dot) to deselect all. 
[ASMNET1LSNR_ASM,MGMTLSNR,LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1]:
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_check2018-12-28_08-35-48-PM.log
ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: n
ASM was not detected in the Oracle Home
Database Check Configuration START
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_2018-12-28_08-35-48-PM.log
Database Check Configuration END
######################### DECONFIG CHECK OPERATION END #########################
####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: 
The following nodes are part of this cluster: null
The cluster node(s) on which the Oracle home deinstallation will be performed are:null
Oracle Home selected for deinstall is: /u01/app/12.1.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following Oracle Restart enabled listener(s) will be de-configured: ASMNET1LSNR_ASM,MGMTLSNR,LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
ASM was not detected in the Oracle Home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-12-28_08-35-46-PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-12-28_08-35-46-PM.err'
######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2018-12-08_08-36-48-PM.log
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_clean2018-12-28_08-36-48-PM.log
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2018-12-28_08-36-48-PM.log
De-configuring Oracle Restart enabled listener(s): ASMNET1LSNR_ASM,MGMTLSNR,LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
De-configuring listener: ASMNET1LSNR_ASM
Stopping listener: ASMNET1LSNR_ASM
Deleting listener: ASMNET1LSNR_ASM
Listener deleted successfully.
Listener de-configured successfully.
De-configuring listener: MGMTLSNR
Stopping listener: MGMTLSNR
Deleting listener: MGMTLSNR
Listener deleted successfully.
Listener de-configured successfully.
De-configuring listener: LISTENER
Stopping listener: LISTENER
Deleting listener: LISTENER
Listener deleted successfully.
Listener de-configured successfully.
De-configuring listener: LISTENER_SCAN3
Stopping listener: LISTENER_SCAN3
Deleting listener: LISTENER_SCAN3
Listener deleted successfully.
Listener de-configured successfully.
De-configuring listener: LISTENER_SCAN2
Stopping listener: LISTENER_SCAN2
Deleting listener: LISTENER_SCAN2
Listener deleted successfully.
Listener de-configured successfully
De-configuring listener: LISTENER_SCAN1
Stopping listener: LISTENER_SCAN1
Deleting listener: LISTENER_SCAN1
Listener deleted successfully.
Listener de-configured successfully.
De-configuring Listener configuration file...
Listener configuration file de-configured successfully.
De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully. 
Network Configuration clean config END
######################### DECONFIG CLEAN OPERATION END #########################
####################### DECONFIG CLEAN OPERATION SUMMARY #######################
Following Oracle Restart enabled listener(s) were de-configured successfully: ASMNET1LSNR_ASM,MGMTLSNR,LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
Oracle Restart is stopped and de-configured successfully.
#######################################################################
############# ORACLE DECONFIG TOOL END #############
Using properties file /tmp/deinstall2018-12-28_08-33-16PM/response/deinstall2018-12-28_08-33-16PM.rsp
Location of logs /u01/app/oraInventory/logs/
############ ORACLE DEINSTALL TOOL START ############
####################### DEINSTALL CHECK OPERATION SUMMARY #######################
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deinstall2018-12-28_08-33-16PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-12-28_08-33-16PM.err'
######################## DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to racpb3
Setting CRS_HOME to false
Setting oracle.installer.invPtrLoc to /tmp/deinstall2018-12-15_28-33-16PM/oraInst.loc
Setting oracle.installer.local to true
## [END] Preparing for Deinstall ##
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/12.1.0/grid' from the central inventory on the local node : Done

Delete directory '/u01/app/12.1.0/grid' on the local node : Succeeded <<<<

Delete directory '/u01/app/oraInventory' on the local node : Done

The Oracle Base directory '/u01/app/oracle' will not be removed on local node. The directory is not empty.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END

## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2018-12-29_00-52-55AM' on node 'racpb3'

## [END] Oracle install clean ##

######################### DEINSTALL CLEAN OPERATION END #########################
####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
Successfully detached Oracle home '/u01/app/12.1.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/12.1.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Oracle Universal Installer cleanup was successful.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL TOOL END #############

Update the inventory on the remaining nodes :

[oracle@racpb1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={racpb1,racpb2}" CRS=TRUE 
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 5997 MB Passed
The inventory pointer is located at /etc/oraInst.loc

'UpdateNodeList' was successful.

Verify the integrity of the cluster after the node has been removed :

[oracle@racpb1 ~]$ cluvfy stage -post nodedel -n racpb3

Performing post-checks for node removal

Checking CRS integrity...

CRS integrity check passed

Clusterware version consistency passed.

Node removal check passed

Post-check for node removal was successful.

 


Step by Step Adding Node In Oracle RAC (12c Release 1) Environment

Steps for adding node in Oracle RAC (12c Release 1) environment :

To add a node to an existing RAC environment, we first need a working Oracle RAC setup. Follow the earlier post, Steps for Oracle RAC 12cR1 Installation, for the two-node RAC installation.

Existing /etc/hosts file for Two-Node RAC Setup :-

#Public
192.168.12.128 racpb1.localdomain.com racpb1
192.168.12.129 racpb2.localdomain.com racpb2

#Private
192.168.79.128 racpv1.localdomain.com racpv1
192.168.79.129 racpv2.localdomain.com racpv2

#Virtual
192.168.12.130 racvr1.localdomain.com racvr1
192.168.12.131 racvr2.localdomain.com racvr2

#Scan
#192.168.12.140 racsn.localdomain.com racsn
#192.168.12.150 racsn.localdomain.com racsn
#192.168.12.160 racsn.localdomain.com racsn

Add the new node's host entries to the /etc/hosts file on all nodes:

#Public
192.168.12.128 racpb1.localdomain.com racpb1
192.168.12.129 racpb2.localdomain.com racpb2
192.168.12.127 racpb3.localdomain.com racpb3


#Private
192.168.79.128 racpv1.localdomain.com racpv1
192.168.79.129 racpv2.localdomain.com racpv2
192.168.79.127 racpv3.localdomain.com racpv3


#Virtual
192.168.12.130 racvr1.localdomain.com racvr1
192.168.12.131 racvr2.localdomain.com racvr2
192.168.12.132 racvr3.localdomain.com racvr3

#Scan
#192.168.12.140 racsn.localdomain.com racsn
#192.168.12.150 racsn.localdomain.com racsn
#192.168.12.160 racsn.localdomain.com racsn

Create the groups and the oracle user on the new node with the same group and user IDs as on the existing nodes :

groups  : oinstall(primary group)  dba (secondary group)

#groupadd -g 54321 oinstall
#groupadd -g 54322 dba
#useradd -u 54323 -g oinstall -G dba oracle
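
The IDs on the new node can then be compared against an existing node, for example:

[root@racpb3 ~]# id oracle
[root@racpb1 ~]# id oracle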

ASM library Installation and Configuration :

[root@racpb3 Desktop]# rpm -Uvh oracleasmlib-2.0.4-1.el6.x86_64.rpm --nodeps --force
[root@racpb3 Desktop]# rpm -Uvh oracleasm-support-2.1.8-1.el6.x86_64.rpm --nodeps --force

Configure and check ASM disks :

[root@racpb3 Panasonic DBA]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@racpb3 Panasonic DBA]# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size 
Mounting ASMlib driver filesystem: /dev/oracleasm
[root@racpb3 Panasonic DBA]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "DATA"
[root@racpb3 Panasonic DBA]# oracleasm listdisks
DATA

Configure SSH for oracle user on all nodes :

Copy the sshUserSetup.sh script to the new node (racpb3) and execute it.

[root@racpb1 deinstall]# cd /u01/app/12.1.0/grid/deinstall

[root@racpb1 deinstall]# scp sshUserSetup.sh oracle@racpb3:/home/oracle
oracle@racpb3's password: 
sshUserSetup.sh 100% 32KB 31.6KB/s 00:00

Run sshUserSetup.sh

[oracle@racpb3 ~]$ sh sshUserSetup.sh -hosts "racpb3" -user oracle
The output of this script is also logged into /tmp/sshUserSetup_2018-12-27-03-51-12.log
Hosts are racpb3
user is oracle
Platform:- Linux 
Checking if the remote hosts are reachable
PING racpb3.localdomain.com (192.168.12.127) 56(84) bytes of data.
64 bytes from racpb3.localdomain.com (192.168.12.127): icmp_seq=1 ttl=64 time=0.034 ms
64 bytes from racpb3.localdomain.com (192.168.12.127): icmp_seq=2 ttl=64 time=0.032 ms
64 bytes from racpb3.localdomain.com (192.168.12.127): icmp_seq=3 ttl=64 time=0.045 ms
64 bytes from racpb3.localdomain.com (192.168.12.127): icmp_seq=4 ttl=64 time=0.046 ms
64 bytes from racpb3.localdomain.com (192.168.12.127): icmp_seq=5 ttl=64 time=0.075 ms

--- racpb3.localdomain.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.032/0.046/0.075/0.016 ms
Remote host reachability check succeeded.
The following hosts are reachable: racpb3.
The following hosts are not reachable: .
All hosts are reachable. Proceeding further...
firsthost racpb3
numhosts 1
The script will setup SSH connectivity from the host racpb3.localdomain.com to all
the remote hosts. After the script is executed, the user can use SSH to run
commands on the remote hosts or copy files between this host racpb3.localdomain.com
and the remote hosts without being prompted for passwords or confirmations.

NOTE 1:
As part of the setup procedure, this script will use ssh and scp to copy
files between the local host and the remote hosts. Since the script does not
store passwords, you may be prompted for the passwords during the execution of
the script whenever ssh or scp is invoked.

NOTE 2:
AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY
AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEDGES TO THESE
directories.

Do you want to continue and let the script make the above mentioned changes (yes/no)?
yes

The user chose yes
Please specify if you want to specify a passphrase for the private key this script will create for the local host. Passphrase is used to encrypt the private key and makes SSH much more secure. Type 'yes' or 'no' and then press enter. In case you press 'yes', you would need to enter the passphrase whenever the script executes ssh or scp. 
The estimated number of times the user would be prompted for a passphrase is 2. In addition, if the private-public files are also newly created, the user would have to specify the passphrase on one additional occasion. 
Enter 'yes' or 'no'.
yes

The user chose yes
The files containing the client public and private keys already exist on the local host. The current private key may or may not have a passphrase associated with it. In case you remember the passphrase and do not want to re-run ssh-keygen, press 'no' and enter. If you press 'no', the script will not attempt to create any new public/private key pairs. If you press 'yes', the script will remove the old private/public key files existing and create new ones prompting the user to enter the passphrase. If you enter 'yes', any previous SSH user setups would be reset. If you press 'change', the script will associate a new passphrase with the old keys.
Press 'yes', 'no' or 'change'
yes
The user chose yes
Creating .ssh directory on local host, if not present already
Creating authorized_keys file on local host
Changing permissions on authorized_keys to 644 on local host
Creating known_hosts file on local host
Changing permissions on known_hosts to 644 on local host
Creating config file on local host
If a config file exists already at /home/oracle/.ssh/config, it would be backed up to /home/oracle/.ssh/config.backup.
Removing old private/public keys on local host
Running SSH keygen on local host
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Generating public/private rsa key pair.
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
2b:88:2f:d5:38:5d:51:6a:2d:1e:a6:e0:51:a2:7e:c7 oracle@racpb3.localdomain.com
The key's randomart image is:
+--[ RSA 1024]----+
| . . .. |
| . o .o |
| . o *.. |
| . . + =.o |
| . o+E.S |
| o+oo . |
| ..... . |
| .. . |
| .. |
+-----------------+
Creating .ssh directory and setting permissions on remote host racpb3
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR oracle. THIS IS AN SSH REQUIREMENT.
The script would create ~oracle/.ssh/config file on remote host racpb3. If a config file exists already at ~oracle/.ssh/config, it would be backed up to ~oracle/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host racpb3.
Warning: Permanently added 'racpb3,192.168.12.127' (RSA) to the list of known hosts.
oracle@racpb3's password: 
Done with creating .ssh directory and setting permissions on remote host racpb3.
Copying local host public key to the remote host racpb3
The user may be prompted for a password or passphrase here since the script would be using SCP for host racpb3.
oracle@racpb3's password: 
Done copying local host public key to the remote host racpb3
The script will run SSH on the remote machine racpb3. The user may be prompted for a passphrase here in case the private key has been encrypted with a passphrase.
Agent admitted failure to sign using the key.
oracle@racpb3's password: 
cat: /home/oracle/.ssh/known_hosts.tmp: No such file or directory
cat: /home/oracle/.ssh/authorized_keys.tmp: No such file or directory
SSH setup is complete.

------------------------------------------------------------------------
Verifying SSH setup
===================
The script will now run the date command on the remote nodes using ssh
to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP,
THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR
PASSWORDS. If you see any output other than date or are prompted for the
password, ssh is not setup correctly and you will need to resolve the
issue and set up ssh again.
The possible causes for failure could be:
1. The server settings in /etc/ssh/sshd_config file do not allow ssh
for user oracle.
2. The server may have disabled public key based authentication.
3. The client public key on the server may be outdated.
4. ~oracle or ~oracle/.ssh on the remote host may not be owned by oracle.
5. User may not have passed -shared option for shared remote users or
may be passing the -shared option for non-shared remote users.
6. If there is output in addition to the date, but no password is asked,
it may be a security alert shown as part of company policy. Append the
additional text to the <OMS HOME>/sysman/prov/resources/ignoreMessages.txt file.
------------------------------------------------------------------------
--racpb3:--
Running /usr/bin/ssh -x -l oracle racpb3 date to verify SSH connectivity has been setup from local host to racpb3.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
The script will run SSH on the remote machine racpb3. The user may be prompted for a passphrase here in case the private key has been encrypted with a passphrase.
Agent admitted failure to sign using the key.
oracle@racpb3's password: 
Thu Dec 27 03:52:59 IST 2018
-----------------------------------------------------------------------
SSH verification complete.

Copy authorized keys to all nodes running under cluster environment.

[oracle@racpb3 .ssh]$ scp authorized_keys oracle@racpb1:/home/oracle/
oracle@racpb1's password:
authorized_keys 100% 478 0.5KB/s 00:00

[oracle@racpb1 ~]$ cat authorized_keys >> .ssh/authorized_keys
[oracle@racpb1 ~]$ cat .ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAqucDimj845F+iE2cWRtVf4qBP/YYqtMcUgpuORdlWuEsRN3wygAlrLszJ9h3gzlIfORUYGLT01A4lj0ZmQtxxfNjKW74feK25ieYkeQUsADLNPvmsdXwpNSCZ4IerLpp74sm0mzFdAZC8o2hAPhvJwiCU85naxTDo/NSNGDMOf6eCRAE8fSb4rICrC+FNdC+TlagyhM+K1Jxt2MmFpKgauzjCpQcGqkCo6DsD59nppf7fAXUUovL7Ykh1AVufYdEhFGFS6lffhV90qrsHEmOKVodek8p16I9lemeJRNaXdM1QT4UcmBLlC+qWF6WMmh9PYMmq3+3cUca74G1U6gF+w== oracle@racpb1.localdomain.com
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA43diGL6I8oEnOa+WQc0gvIj0KkaNYIT06UwqvWhyfibwCUATBdj0aSQiSIGmiy95+wDiyfWJDKFAR60Bb8ZG5UzgP/XPhoZKcJKYxVMtX2zppeVQjoyXR2mwyElcT5xLR/PNhUMnDHbWPPp9kK6flyMGrpYjxbwh55FzC6MQ/jw19u9VVLDsNtt4q8Zv/LZF7jwwPAn4YXT2WFVnY6Td709C05RD7GVRA35wsVCXiAoQbl5EsQ6/4Hdz9IKEcDSDcD6EnGhaLARnSy2ose1CL/Zk/5/iyMldhKxA8m26ZuVu7G1bZqKIbnUfUWnyx48opSbANLn2fTzPaIIO2Cwd1w== oracle@racpb2.localdomain.com
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA0MWOu3g/Pfw729Fn7ruHif5eJxQDTb6km1SbeUfIZTRPrpA62e9fu6TVDrmVupAqlrswKJU2HueSPk7uidgS2zbLC9BsrBx2O/P/GBO+MgIYVjpzWd0uCJ9yjCAD0ciWosdBjafxVNsO/hZ08Wqc49BqJ9fZV8IbOD9xnYQOJls= oracle@racpb3.localdomain.com

[oracle@racpb1 .ssh]$ scp authorized_keys racpb2:/home/oracle/.ssh/
authorized_keys 100% 1300 1.3KB/s 00:00

[oracle@racpb1 .ssh]$ scp authorized_keys racpb3:/home/oracle/.ssh/

authorized_keys 100% 1300 1.3KB/s 00:00

Check time synchronization :

This also verifies that passwordless SSH is configured correctly and lets you compare the time across nodes. Run the commands below from all nodes.

Example for 1st node :

[oracle@racpb1 .ssh]$ ssh racpb1 date
Thu Dec 27 04:08:16 IST 2018
[oracle@racpb1 .ssh]$ ssh racpb2 date
Thu Dec 27 04:08:19 IST 2018
[oracle@racpb1 .ssh]$ ssh racpb3 date
Thu Dec 27 04:08:23 IST 2018
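
Clock synchronization across all three nodes can also be checked with the Cluster Verification Utility, for example:

[oracle@racpb1 ~]$ cluvfy comp clocksync -n racpb1,racpb2,racpb3 -verbose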

Verify peer compatibility with the Cluster Verification Utility :-

[oracle@racpb1 bin]$ ./cluvfy comp peer -n racpb3 -refnode racpb1 -r 11gr2

Verifying peer compatibility

Checking peer compatibility...

Compatibility check: Physical memory [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 7.8085GB (8187808.0KB) 7.8085GB (8187808.0KB) matched
Physical memory <null>

Compatibility check: Available memory [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 7.80856GB (8187808.0KB) 7.8085GB (8187808.0KB) matched
Available memory <null>

Compatibility check: Swap space [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 5.8594GB (6143996.0KB) 5.8594GB (6143996.0KB) matched
Swap space <null>

Compatibility check: Free disk space for "/u01/app/12.1.0/grid" [reference node: racpb1]storage
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 35.5566GB (3.728384E7KB) 28.9248GB (3.0329856E7KB) matched
Free disk space <null>

Compatibility check: Free disk space for "/tmp" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 6.9678GB (7306240.0KB) 8.1494GB (8545280.0KB) matched
Free disk space <null>

Compatibility check: User existence for "oracle" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 oracle(54321) oracle(54321) matched
User existence for "oracle" check passed

Compatibility check: Group existence for "oinstall" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 oinstall(54321) oinstall(54321) matched
Group existence for "oinstall" check passed

Compatibility check: Group existence for "dba" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 dba(54322) dba(54322) matched
Group existence for "dba" check passed

Compatibility check: Group membership for "oracle" in "oinstall (Primary)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 yes yes matched
Group membership for "oracle" in "oinstall (Primary)" check passed

Compatibility check: Group membership for "oracle" in "dba" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 yes yes matched
Group membership for "oracle" in "dba" check passed

Compatibility check: Run level [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 5 5 matched
Run level check passed

Compatibility check: System architecture [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 x86_64 x86_64 matched
System architecture check passed

Compatibility check: Kernel version [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 2.6.39-400.17.1.el6uek.x86_64 2.6.39-400.17.1.el6uek.x86_64 matched
Kernel version check passed

Compatibility check: Kernel param "semmsl" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 256 256 matched
Kernel param "semmsl" check passed

Compatibility check: Kernel param "semmns" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 32000 32000 matched
Kernel param "semmns" check passed

Compatibility check: Kernel param "semopm" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 100 100 matched
Kernel param "semopm" check passed

Compatibility check: Kernel param "semmni" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 142 142 matched
Kernel param "semmni" check passed

Compatibility check: Kernel param "shmmax" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 4294967295 4294967295 matched
Kernel param "shmmax" check passed

Compatibility check: Kernel param "shmmni" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 4096 4096 matched
Kernel param "shmmni" check passed

Compatibility check: Kernel param "shmall" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 0 0 matched
Kernel param "shmall" check passed

Compatibility check: Kernel param "file-max" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 6815744 6815744 matched
Kernel param "file-max" check passed

Compatibility check: Kernel param "ip_local_port_range" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 9000 65500 9000 65500 matched
Kernel param "ip_local_port_range" check passed

Compatibility check: Kernel param "rmem_default" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 4194304 4194304 matched
Kernel param "rmem_default" check passed

Compatibility check: Kernel param "rmem_max" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 4194304 4194304 matched
Kernel param "rmem_max" check passed

Compatibility check: Kernel param "wmem_default" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 262144 262144 matched
Kernel param "wmem_default" check passed

Compatibility check: Kernel param "wmem_max" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 1048576 1048576 matched
Kernel param "wmem_max" check passed

Compatibility check: Kernel param "aio-max-nr" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 1048576 1048576 matched
Kernel param "aio-max-nr" check passed

Compatibility check: Package existence for "binutils" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 binutils-2.20.51.0.2-5.36.el6 binutils-2.20.51.0.2-5.36.el6 matched
Package existence for "binutils" check passed

Compatibility check: Package existence for "compat-libcap1" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 compat-libcap1-1.10-1 compat-libcap1-1.10-1 matched
Package existence for "compat-libcap1" check passed

Compatibility check: Package existence for "compat-libstdc++-33 (x86_64)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 compat-libstdc++-33-3.2.3-69.el6 (x86_64) compat-libstdc++-33-3.2.3-69.el6 (x86_64) matched
Package existence for "compat-libstdc++-33 (x86_64)" check passed

Compatibility check: Package existence for "libgcc (x86_64)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 libgcc-4.4.7-3.el6 (x86_64),libgcc-4.4.7-3.el6 (i686) libgcc-4.4.7-3.el6 (x86_64),libgcc-4.4.7-3.el6 (i686) matched
Package existence for "libgcc (x86_64)" check passed

Compatibility check: Package existence for "libstdc++ (x86_64)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 libstdc++-4.4.7-3.el6 (x86_64) libstdc++-4.4.7-3.el6 (x86_64) matched
Package existence for "libstdc++ (x86_64)" check passed

Compatibility check: Package existence for "libstdc++-devel (x86_64)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 libstdc++-devel-4.4.7-3.el6 (x86_64) libstdc++-devel-4.4.7-3.el6 (x86_64) matched
Package existence for "libstdc++-devel (x86_64)" check passed

Compatibility check: Package existence for "sysstat" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 sysstat-9.0.4-20.el6 sysstat-9.0.4-20.el6 matched
Package existence for "sysstat" check passed

Compatibility check: Package existence for "gcc" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 gcc-4.4.7-3.el6 gcc-4.4.7-3.el6 matched
Package existence for "gcc" check passed

Compatibility check: Package existence for "gcc-c++" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 gcc-c++-4.4.7-3.el6 gcc-c++-4.4.7-3.el6 matched
Package existence for "gcc-c++" check passed

Compatibility check: Package existence for "ksh" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 ksh-20100621-19.el6 ksh-20100621-19.el6 matched
Package existence for "ksh" check passed

Compatibility check: Package existence for "make" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 make-3.81-20.el6 make-3.81-20.el6 matched
Package existence for "make" check passed

Compatibility check: Package existence for "glibc (x86_64)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 glibc-2.12-1.107.el6 (x86_64),glibc-2.12-1.107.el6 (i686) glibc-2.12-1.107.el6 (x86_64),glibc-2.12-1.107.el6 (i686) matched
Package existence for "glibc (x86_64)" check passed

Compatibility check: Package existence for "glibc-devel (x86_64)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 glibc-devel-2.12-1.107.el6 (x86_64) glibc-devel-2.12-1.107.el6 (x86_64) matched
Package existence for "glibc-devel (x86_64)" check passed

Compatibility check: Package existence for "libaio (x86_64)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 libaio-0.3.107-10.el6 (x86_64) libaio-0.3.107-10.el6 (x86_64) matched
Package existence for "libaio (x86_64)" check passed

Compatibility check: Package existence for "libaio-devel (x86_64)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 libaio-devel-0.3.107-10.el6 (x86_64) libaio-devel-0.3.107-10.el6 (x86_64) matched
Package existence for "libaio-devel (x86_64)" check passed

Verification of peer compatibility was successful.
Checks passed for the following node(s):
racpb3

Verify new node pre-check :

[oracle@racpb1 bin]$ ./cluvfy stage -pre nodeadd -n racpb3 -fixup -verbose > /home/oracle/cluvfy_pre_nodeadd.txt

The node addition pre-check above must pass before adding the node to the existing two-node RAC environment. I have attached the cluvfy_pre_nodeadd output file here.
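
A quick way to scan the saved pre-check output for problems (using the output file generated by the command above) is:

[oracle@racpb1 ~]$ grep -iE "failed|error" /home/oracle/cluvfy_pre_nodeadd.txt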

From racpb1 node,

For GRID_HOME :

[oracle@racpb1 ~]$ . .bash_profile
[oracle@racpb1 ~]$ grid
[oracle@racpb1 ~]$ export IGNORE_PREADDNODE_CHECKS=Y
[oracle@racpb1 ~]$ cd $ORACLE_HOME/addnode

[oracle@racpb1 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={racpb3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={racvr3}"
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB. Actual 7957 MB Passed
Checking swap space: must be greater than 150 MB. Actual 5999 MB Passed
[WARNING] [INS-13014] Target environment does not meet some optional requirements.
CAUSE: Some of the optional prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/addNodeActions2018-12-27_05-25-06AM.log
ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/addNodeActions2018-12-27_05-25-06AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.

Prepare Configuration in progress.

Prepare Configuration successful.
.................................................. 8% Done.
You can find the log of this install session at:
/u01/app/oraInventory/logs/addNodeActions2018-12-27_05-25-06AM.log

Instantiate files in progress.

Instantiate files successful.
.................................................. 14% Done.

Copying files to node in progress.

Copying files to node successful.
.................................................. 73% Done.

Saving cluster inventory in progress.
.................................................. 80% Done.

Saving cluster inventory successful.
The Cluster Node Addition of /u01/app/12.1.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
.................................................. 88% Done.

As a root user, execute the following script(s):
1. /u01/app/oraInventory/orainstRoot.sh
2. /u01/app/12.1.0/grid/root.sh

Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes:
[racpb3]
Execute /u01/app/12.1.0/grid/root.sh on the following nodes:
[racpb3]

The scripts can be executed in parallel on all the nodes.

..........
Update Inventory in progress.
.................................................. 100% Done.

Update Inventory successful.
Successfully Setup Software.

As the root user, execute orainstRoot.sh and root.sh on racpb3 :

[root@racpb3 ]# sh orainstRoot.sh 
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

[root@racpb3 grid]# sh root.sh
Check /u01/app/12.1.0/grid/install/root_racpb3.localdomain.com_2018-12-27_21-52-22.log for the output of root script

I have attached the root script output log here.

Check Clusterware status :-

[root@racpb3 bin]# ./crsctl check cluster -all
**************************************************************
racpb1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racpb2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racpb3:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@racpb3 bin]# ./crs_stat -t -v
Name Type R/RA F/FT Target State Host 
----------------------------------------------------------------------
ora.DATA.dg ora....up.type 0/5 0/ ONLINE ONLINE racpb1 
ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE racpb1 
ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racpb1 
ora....N2.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racpb3 
ora....N3.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racpb2 
ora.MGMTLSNR ora....nr.type 0/0 0/0 ONLINE ONLINE racpb2 
ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE racpb1 
ora.cvu ora.cvu.type 0/5 0/0 ONLINE ONLINE racpb2 
ora.mgmtdb ora....db.type 0/2 0/1 ONLINE ONLINE racpb2 
ora....network ora....rk.type 0/5 0/ ONLINE ONLINE racpb1 
ora.oc4j ora.oc4j.type 0/1 0/2 ONLINE ONLINE racpb2 
ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE racpb1 
ora.orcl11g.db ora....se.type 0/2 0/1 ONLINE ONLINE 
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE racpb1 
ora....B1.lsnr application 0/5 0/0 ONLINE ONLINE racpb1 
ora.racpb1.ons application 0/3 0/0 ONLINE ONLINE racpb1 
ora.racpb1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racpb1 
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE racpb2 
ora....B2.lsnr application 0/5 0/0 ONLINE ONLINE racpb2 
ora.racpb2.ons application 0/3 0/0 ONLINE ONLINE racpb2 
ora.racpb2.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racpb2 
ora....SM3.asm application 0/5 0/0 ONLINE ONLINE racpb3 
ora....B3.lsnr application 0/5 0/0 ONLINE ONLINE racpb3 
ora.racpb3.ons application 0/3 0/0 ONLINE ONLINE racpb3 
ora.racpb3.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racpb3 
ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racpb1 
ora.scan2.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racpb3 
ora.scan3.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racpb2

 

For ORACLE_HOME :

[oracle@racpb1 addnode]$ export ORACLE_SID=orcl11g
[oracle@racpb1 addnode]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
[oracle@racpb1 ~]$ cd $ORACLE_HOME/addnode

[oracle@racpb1 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={racpb3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={racvr3}"
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB. Actual 7937 MB Passed
Checking swap space: must be greater than 150 MB. Actual 5999 MB Passed


Prepare Configuration in progress.

Prepare Configuration successful.
.................................................. 8% Done.
You can find the log of this install session at:
/u01/app/oraInventory/logs/addNodeActions2018-12-28_12-34-24AM.log

Instantiate files in progress.

Instantiate files successful.
.................................................. 14% Done.

Copying files to node in progress.

Copying files to node successful.
.................................................. 73% Done.

Saving cluster inventory in progress.
SEVERE:Remote 'UpdateNodeList' failed on nodes: 'racpb2'. Refer to '/u01/app/oraInventory/logs/addNodeActions2018-12-28_12-34-24AM.log' for details.
It is recommended that the following command needs to be manually run on the failed nodes: 
/u01/app/oracle/product/12.1.0/db_1/oui/bin/runInstaller -updateNodeList -noClusterEnabled ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1 CLUSTER_NODES=racpb1,racpb2,racpb3 CRS=false "INVENTORY_LOCATION=/u01/app/oraInventory" LOCAL_NODE=<node on which command is to be run>. 
Please refer 'UpdateNodeList' logs under central inventory of remote nodes where failure occurred for more details.
.................................................. 80% Done.

Saving cluster inventory successful.
The Cluster Node Addition of /u01/app/oracle/product/12.1.0/db_1 was unsuccessful.
Please check '/tmp/silentInstall.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
.................................................. 88% Done.

As a root user, execute the following script(s):
1. /u01/app/oracle/product/12.1.0/db_1/root.sh

Execute /u01/app/oracle/product/12.1.0/db_1/root.sh on the following nodes: 
[racpb3]

..........
Update Inventory in progress.
.................................................. 100% Done.

Update Inventory successful.
Successfully Setup Software.

Run the below command on the failed node racpb3 :

[oracle@racpb3 db_1]$ /u01/app/oracle/product/12.1.0/db_1/oui/bin/runInstaller -updateNodeList -noClusterEnabled ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1 CLUSTER_NODES=racpb1,racpb2,racpb3 CRS=false "INVENTORY_LOCATION=/u01/app/oraInventory" LOCAL_NODE=3
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 5994 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

Execute root.sh on the new node (racpb3) as the root user :

[root@racpb3 Desktop]# sh /u01/app/oracle/product/12.1.0/db_1/root.sh 
Check /u01/app/oracle/product/12.1.0/db_1/install/root_racpb3.localdomain.com_2018-12-28_00-57-10.log for the output of root script
[root@racpb3 Desktop]# tail -f /u01/app/oracle/product/12.1.0/db_1/install/root_racpb3.localdomain.com_2018-12-28_00-57-10.log
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/12.1.0/db_1
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

Set the environment on the new node (racpb3) :-

grid()
{
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=/u01/app/12.1.0/grid; export ORACLE_HOME
export ORACLE_SID=+ASM3
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
}

11g()
{
ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
export ORACLE_HOME
ORACLE_BASE=/u01/app/oracle
export ORACLE_BASE
ORACLE_SID=orcl11g3
export ORACLE_SID
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib:.
export LD_LIBRARY_PATH
LIBPATH=$ORACLE_HOME/lib32:$ORACLE_HOME/lib:/usr/lib:/lib
export LIBPATH
TNS_ADMIN=${ORACLE_HOME}/network/admin
export TNS_ADMIN
PATH=$ORACLE_HOME/bin:$PATH:.
export PATH
}
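
Assuming these functions are appended to the oracle user's .bash_profile on racpb3 (as is done on the other nodes), the environment for either home can then be loaded in a new shell, for example:

[oracle@racpb3 ~]$ . .bash_profile
[oracle@racpb3 ~]$ 11g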

Check the database status and instances :-

[oracle@racpb3 ~]$ srvctl status database -d orcl11g
Instance orcl11g1 is running on node racpb1
Instance orcl11g2 is running on node racpb2

It shows that only two instances are registered with Clusterware. Add the new instance using DBCA.

Adding Instance to Cluster Database :

Invoke dbca from node 1 (racpb1) :

[oracle@racpb1 ~]$ . .bash_profile 
[oracle@racpb1 ~]$ 11g
[oracle@racpb1 ~]$ dbca
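
If the GUI is not convenient, DBCA can also add the instance in silent mode; a rough sketch using the names from this article (supply -sysDBAPassword as required and check dbca -help for the exact syntax in your release):

[oracle@racpb1 ~]$ dbca -silent -addInstance -nodeList racpb3 -gdbName orcl11g -instanceName orcl11g3 -sysDBAUserName sys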

Check Database status and configuration :

[oracle@racpb3 ~]$ srvctl status database -d orcl11g
Instance orcl11g1 is running on node racpb1
Instance orcl11g2 is running on node racpb2
Instance orcl11g3 is running on node racpb3


[oracle@racpb3 ~]$ srvctl config database -d orcl11g
Database unique name: orcl11g
Database name: orcl11g
Oracle home: /u01/app/oracle/product/12.1.0/db_1
Oracle user: oracle
Spfile: +DATA/orcl11g/spfileorcl11g.ora
Password file: 
Domain: localdomain.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: 
Disk Groups: DATA
Mount point paths: 
Services: 
Type: RAC
Start concurrency: 
Stop concurrency: 
OSDBA group: dba
OSOPER group: oinstall
Database instances: orcl11g1,orcl11g2,orcl11g3
Configured nodes: racpb1,racpb2,racpb3
Database is administrator managed

 


Step by Step Upgrade Oracle RAC Grid Infrastructure and Database from 11g to 12c

 

Upgrade RAC Grid and Database from 11.2.0.4 to 12.1.0.2 :-

Main steps :

Grid :-

  1.  Check that all services are up and running from the 11gR2 GRID_HOME.
  2.  Perform a backup of the OCR, voting disk and database.
  3.  Create a new directory for installing the 12c software on both RAC nodes.
  4.  Run runcluvfy.sh to check for errors.
  5.  Install and upgrade GRID from 11gR2 to 12cR1.
  6.  Verify the upgraded version.

Database  :-

  1. Back up the database before the upgrade.
  2. Database upgrade pre-check:
    • Creating the stage for the 12c database software
    • Creating the directory for the 12c Oracle home
    • Checking the pre-upgrade status
  3. Unzip the 12c database software into the stage.
  4. Install 12.1.0.2 using the software-only installation.
  5. Run the preupgrd.sql script in the existing 11.2.0.4 database from the newly installed 12c home.
  6. Run DBUA to start the database upgrade.
  7. Database post-upgrade check.
  8. Check the database version.

Environment variables for 11g database :-

GRID :

grid()
{
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME
export ORACLE_SID=+ASM1
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
SQLPATH=/u01/app/oracle/scripts/sql:/u01/app/11.2.0/grid/rdbms/admin:/u01/app/oracle/product/11.2.0/dbhome_1/rdbms/admin; export SQLPATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
}

DATABASE :

11g()
{
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_HOME
ORACLE_BASE=/u01/app/oracle
export ORACLE_BASE
ORACLE_SID=orcl11g1
export ORACLE_SID
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib:.
export LD_LIBRARY_PATH
LIBPATH=$ORACLE_HOME/lib32:$ORACLE_HOME/lib:/usr/lib:/lib
export LIBPATH
TNS_ADMIN=${ORACLE_HOME}/network/admin
export TNS_ADMIN
PATH=$ORACLE_HOME/bin:$PATH:.
export PATH
}

Upgrade GRID Infrastructure Software 12c :-

Check GRID Infrastructure software version and Clusterware status:

[oracle@racpb1 ~]$ grid
[oracle@racpb1 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.4.0]

[oracle@racpb1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

Verify all services are up and running from 11gR2 GRID Home :

[oracle@racpb1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE racpb1
ONLINE ONLINE racpb2
ora.LISTENER.lsnr
ONLINE ONLINE racpb1
ONLINE ONLINE racpb2
ora.asm
ONLINE ONLINE racpb1 Started
ONLINE ONLINE racpb2 Started
ora.gsd
OFFLINE OFFLINE racpb1
OFFLINE OFFLINE racpb2
ora.net1.network
ONLINE ONLINE racpb1
ONLINE ONLINE racpb2
ora.ons
ONLINE ONLINE racpb1
ONLINE ONLINE racpb2
ora.registry.acfs
ONLINE ONLINE racpb1
ONLINE ONLINE racpb2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE racpb2
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE racpb1
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE racpb1
ora.cvu
1 ONLINE ONLINE racpb1
ora.oc4j
1 ONLINE ONLINE racpb1
ora.orcl11g.db
1 ONLINE ONLINE racpb1 Open
2 ONLINE ONLINE racpb2 Open
ora.racpb1.vip
1 ONLINE ONLINE racpb1
ora.racpb2.vip
1 ONLINE ONLINE racpb2
ora.scan1.vip
1 ONLINE ONLINE racpb2
ora.scan2.vip
1 ONLINE ONLINE racpb1
ora.scan3.vip
1 ONLINE ONLINE racpb1

Check Database status and configuration :

[oracle@racpb1 ~]$ srvctl status database -d orcl11g
Instance orcl11g1 is running on node racpb1
Instance orcl11g2 is running on node racpb2

[oracle@racpb1 ~]$ srvctl config database -d orcl11g
Database unique name: orcl11g
Database name: orcl11g
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/orcl11g/spfileorcl11g.ora
Domain: localdomain.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl11g
Database instances: orcl11g1,orcl11g2
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Database is administrator managed

Perform local backup of OCR :

[root@racpb1 ~]# mkdir -p /u01/ocrbkp
[root@racpb1 ~]# cd /u01/app/11.2.0/grid/bin/
[root@racpb1 bin]# ./ocrconfig -export /u01/ocrbkp/ocrfile
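
To confirm that the export file was written, and to list the automatic OCR backups Clusterware already maintains, the following can be run from the same directory:

[root@racpb1 bin]# ls -l /u01/ocrbkp/ocrfile
[root@racpb1 bin]# ./ocrconfig -showbackup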

Move the 12c GRID Software to the server and unzip the software :

[oracle@racpb1 12102_64bit]$ unzip -d /u01/ linuxamd64_12102_grid_1of2.zip
Archive:  linuxamd64_12102_grid_1of2.zip
   creating: /u01/grid/
.
.

[oracle@racpb1 12102_64bit]$ unzip -d /u01/ linuxamd64_12102_grid_2of2.zip
Archive:  linuxamd64_12102_grid_2of2.zip
   creating: /u01/grid/stage/Components/oracle.has.crs/
.
.

Run the cluvfy utility to pre-check for any errors :

Execute runcluvfy.sh from the 12cR1 software location :

[oracle@racpb1 grid]$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/11.2.0/grid -dest_crshome /u01/app/12.1.0/grid -dest_version 12.1.0.2.0 -verbose

Make sure cluvfy completes successfully. If there are any errors, address them before proceeding with the GRID 12cR1 upgrade. The cluvfy log is attached here.

Stop the running 11g database :

[oracle@racpb1 ~]$ ps -ef|grep pmon
oracle 3953 1 0 Dec22 ? 00:00:00 asm_pmon_+ASM1
oracle 4976 1 0 Dec22 ? 00:00:00 ora_pmon_orcl11g1
oracle 23634 4901 0 00:55 pts/0 00:00:00 grep pmon

[oracle@racpb1 ~]$ srvctl stop database -d orcl11g

[oracle@racpb1 ~]$ srvctl status database -d orcl11g
Instance orcl11g1 is not running on node racpb1
Instance orcl11g2 is not running on node racpb2

Take GRID_HOME backup on both nodes :

[oracle@racpb1 ~]$ grid
[oracle@racpb1 ~]$ tar -cvf grid_home_11g.tar $ORACLE_HOME

Check Clusterware services status before upgrade :

[oracle@racpb1 ~]$ crsctl check cluster -all
**************************************************************
racpb1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racpb2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

Start the 12cR1 upgrade by executing runInstaller :

[oracle@racpb1 ~]$ cd /u01/
[oracle@racpb1 u01]$ cd grid/

[oracle@racpb1 grid]$ ./runInstaller 
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 415 MB. Actual 8565 MB Passed
Checking swap space: must be greater than 150 MB. Actual 5996 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2018-12-23_01

Select Upgrade option to upgrade GRID 12c infrastructure and ASM.

Check the public host names and existing GRID_HOME

Uncheck the EM cloud control option to disable EM.

Specify location for ORACLE_BASE and ORACLE_HOME for 12c. 

You can ignore the swap size prerequisite warning (it expects swap to be twice the size of the server's memory).

 

Execute rootupgrade.sh script in both nodes :

 First node (racpb1)  :-

[root@racpb1 bin]# sh /u01/app/12.1.0/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/12.1.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
2018/12/23 12:18:59 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2018/12/23 12:18:59 CLSRSC-4012: Shutting down Oracle Trace File Analyzer (TFA) Collector.
2018/12/23 12:19:08 CLSRSC-4013: Successfully shut down Oracle Trace File Analyzer (TFA) Collector.
2018/12/23 12:19:19 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2018/12/23 12:19:22 CLSRSC-464: Starting retrieval of the cluster configuration data
2018/12/23 12:19:30 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2018/12/23 12:19:30 CLSRSC-363: User ignored prerequisites during installation
2018/12/23 12:19:38 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2018/12/23 12:19:38 CLSRSC-482: Running command: '/u01/app/12.1.0/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/11.2.0/grid -oldCRSVersion 11.2.0.4.0 -nodeNumber 1 -firstNode true -startRolling true'

ASM configuration upgraded in local node successfully.

2018/12/23 12:19:45 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2018/12/23 12:19:45 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2018/12/23 12:20:36 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
OLR initialization - successful
2018/12/23 12:24:43 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2018/12/23 12:29:05 CLSRSC-472: Attempting to export the OCR
2018/12/23 12:29:06 CLSRSC-482: Running command: 'ocrconfig -upgrade oracle oinstall'
2018/12/23 12:29:23 CLSRSC-473: Successfully exported the OCR
2018/12/23 12:29:29 CLSRSC-486:
At this stage of upgrade, the OCR has changed.
Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2018/12/23 12:29:29 CLSRSC-541:
To downgrade the cluster:
1. All nodes that have been upgraded must be downgraded.

2018/12/23 12:29:30 CLSRSC-542:
2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.

2018/12/23 12:29:30 CLSRSC-543:
3. The downgrade command must be run on the node racpb1 with the '-lastnode' option to restore global configuration data.
2018/12/23 12:29:55 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2018/12/23 12:30:19 CLSRSC-474: Initiating upgrade of resource types
2018/12/23 12:31:12 CLSRSC-482: Running command: 'upgrade model -s 11.2.0.4.0 -d 12.1.0.2.0 -p first'
2018/12/23 12:31:12 CLSRSC-475: Upgrade of resource types successfully initiated.
2018/12/23 12:31:21 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Second node (racpb2)  :-

[root@racpb2 ~]# sh /u01/app/12.1.0/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/12.1.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
2018/12/23 12:34:35 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2018/12/23 12:35:15 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2018/12/23 12:35:17 CLSRSC-464: Starting retrieval of the cluster configuration data
2018/12/23 12:35:24 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2018/12/23 12:35:24 CLSRSC-363: User ignored prerequisites during installation
ASM configuration upgraded in local node successfully.
2018/12/23 12:35:41 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2018/12/23 12:36:10 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
OLR initialization - successful
2018/12/23 12:36:37 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2018/12/23 12:39:54 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Start upgrade invoked..
2018/12/23 12:40:21 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded

2018/12/23 12:40:21 CLSRSC-482: Running command: '/u01/app/12.1.0/grid/bin/crsctl set crs activeversion'

Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the OCR.
Started to upgrade the CSS.
The CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Successfully upgraded the Oracle Clusterware.
Oracle Clusterware operating version was successfully set to 12.1.0.2.0
2018/12/23 12:42:33 CLSRSC-479: Successfully set Oracle Clusterware active version

2018/12/23 12:42:39 CLSRSC-476: Finishing upgrade of resource types

2018/12/23 12:43:00 CLSRSC-482: Running command: 'upgrade model -s 11.2.0.4.0 -d 12.1.0.2.0 -p last'

2018/12/23 12:43:00 CLSRSC-477: Successfully completed upgrade of resource types

2018/12/23 12:43:34 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

After running the rootupgrade.sh script on both nodes, click the OK button in the installer.

Check the Clusterware upgrade version:

[root@racpb1 ~]# cd /u01/app/12.1.0/grid/bin/
[root@racpb1 bin]# ./crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]

Note: If you are upgrading from 11.2.0.1, 11.2.0.2 or 11.2.0.3 to 12cR1, you may need to apply additional patches before proceeding with the upgrade.

Start the 11g database :

[oracle@racpb1 ~]$ srvctl start database -d orcl11g
[oracle@racpb1 ~]$ srvctl status database -d orcl11g
Instance orcl11g1 is running on node racpb1
Instance orcl11g2 is running on node racpb2

Upgrade RAC database from 11gR2 to 12cR1 :-

Backup the database before the upgrade :

Take a level-0 (incremental level 0) RMAN backup or a cold backup of the database.
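
A minimal RMAN level-0 sketch (the backup destination and channel configuration are assumed to be whatever defaults are already in place for this database):

[oracle@racpb1 ~]$ rman target /
RMAN> backup incremental level 0 database plus archivelog;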

Database upgrade Pre-check :

  • Creating Stage for 12c database software.
[oracle@racpb1 ~]$ mkdir -p /u01/stage
[oracle@racpb1 ~]$ chmod -R 755 /u01/stage/
  • Creating directory for 12c ORACLE_HOME.
[oracle@racpb1 ~]$ mkdir -p /u01/app/oracle/product/12.1.0/db_1
[oracle@racpb1 ~]$ chown -R oracle:oinstall /u01/app/oracle/product/12.1.0/db_1
[oracle@racpb1 ~]$ chmod -R 775 /u01/app/oracle/product/12.1.0/db_1
  • Check the preupgrade status :

Run runcluvfy.sh from grid stage location :

[oracle@racpb1 grid]$ ./runcluvfy.sh stage -pre dbinst -upgrade -src_dbhome /u01/app/oracle/product/11.2.0/dbhome_1 -dest_dbhome /u01/app/oracle/product/12.1.0/db_1 -dest_version 12.1.0.2.0

The above command must complete successfully before upgrading the database from 11gR2 to 12cR1.

Unzip 12c database software in stage :

[oracle@racpb1 12102_64bit]$ unzip -d /u01/stage/ linuxamd64_12102_database_1of2.zip

[oracle@racpb1 12102_64bit]$ unzip -d /u01/stage/ linuxamd64_12102_database_2of2.zip

Unset the 11g env. :

unset ORACLE_HOME
unset ORACLE_BASE
unset ORACLE_SID

Install the 12.1.0.2 software using the software-only installation :

Set the new 12c environment and execute runInstaller.

[oracle@racpb1 ~]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
[oracle@racpb1 ~]$ export ORACLE_BASE=/u01/app/oracle
[oracle@racpb1 ~]$ export ORACLE_SID=orcl12c
[oracle@racpb1 ~]$ 
[oracle@racpb1 ~]$ cd /u01/stage/database/
[oracle@racpb1 database]$ ./runInstaller 
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB. Actual 8533 MB Passed
Checking swap space: must be greater than 150 MB. Actual 5999 MB Passed
Checking monitor: must be configured to display at least 256 colors. 
Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2018-12-23_02-05-54PM. Please wait ...

Skip the security updates from Oracle Support.

Select RAC database installation.

After the 12c database software installation completes, run the root.sh script prompted by the installer on both nodes :

 

Run the preupgrd.sql script :

  • The pre-upgrade script identifies any prerequisite tasks that must be completed in the database before the upgrade.
  • Execute preupgrd.sql in the existing 11.2.0.4 database from the newly installed 12c ORACLE_HOME.
[oracle@racpb1 ~]$ . .bash_profile
[oracle@racpb1 ~]$ 11g
[oracle@racpb1 ~]$ cd /u01/app/oracle/product/12.1.0/db_1/rdbms/admin/
[oracle@racpb1 admin]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Mon Dec 24 03:35:26 2018

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> @preupgrd.sql

Loading Pre-Upgrade Package...

***************************************************************************
Executing Pre-Upgrade Checks in ORCL11G...
***************************************************************************************************************************************

====>> ERRORS FOUND for ORCL11G <<====

The following are *** ERROR LEVEL CONDITIONS *** that must be addressed
prior to attempting your upgrade.
Failure to do so will result in a failed upgrade.

You MUST resolve the above errors prior to upgrade

************************************************************************************************************************

====>> PRE-UPGRADE RESULTS for ORCL11G <<====

ACTIONS REQUIRED:

1. Review results of the pre-upgrade checks:
/u01/app/oracle/cfgtoollogs/orcl11g/preupgrade/preupgrade.log

2. Execute in the SOURCE environment BEFORE upgrade:
/u01/app/oracle/cfgtoollogs/orcl11g/preupgrade/preupgrade_fixups.sql

3. Execute in the NEW environment AFTER upgrade:
/u01/app/oracle/cfgtoollogs/orcl11g/preupgrade/postupgrade_fixups.sql

***************************************************************************************************************************************
Pre-Upgrade Checks in ORCL11G Completed.
******************************************************************************************************************************************************
***********************************************************************

Run the DBUA to start the database upgrade :
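
DBUA has to be invoked from the newly installed 12c home; a sketch of setting the environment and launching it (the GUI then walks through the upgrade of orcl11g):

[oracle@racpb1 ~]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
[oracle@racpb1 ~]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@racpb1 ~]$ dbua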

Check Database version and configuration :-

[oracle@racpb1 ~]$ srvctl config database -d orcl11g
Database unique name: orcl11g
Database name: orcl11g
Oracle home: /u01/app/oracle/product/12.1.0/db_1
Oracle user: oracle
Spfile: +DATA/orcl11g/spfileorcl11g.ora
Password file: 
Domain: localdomain.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: 
Disk Groups: DATA
Mount point paths: 
Services: 
Type: RAC
Start concurrency: 
Stop concurrency: 
OSDBA group: dba
OSOPER group: oinstall
Database instances: orcl11g1,orcl11g2
Configured nodes: racpb1,racpb2
Database is administrator managed

[oracle@racpb1 ~]$ srvctl status database -d orcl11g
Instance orcl11g1 is running on node racpb1
Instance orcl11g2 is running on node racpb2

The RAC Grid Infrastructure and database have been successfully upgraded from 11g to 12c.


Step-by-Step One Node Rac Applying Psu Patch on 12c Grid and DB Home

Description:-

We have already seen how to configure Oracle One Node RAC in 12cR1 and how to relocate the instance from one node to another. In this article, let us apply the July 2018 PSU patch to the same environment.

For Oracle One Node RAC configuration, please click here. Below is the configuration of the environment.

High Level steps for applying the Patch:-

  • Current OPatch Version
  • Upgrade Opatch utility
  • Prepare for Patching
  • Applying Patch
  • Patch Verification

Current OPatch Version:-

Step 1:- Current version of Opatch Tool in our environment

$ export PATH=$ORACLE_HOME/OPatch:$PATH
$ $ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.1.0.1.3

OPatch succeeded.

From the above output, the OPatch version is 12.1.0.1.3. As per the patch README, the minimum OPatch utility version should be 12.2.0.1.12 or later to apply this patch. Oracle recommends using the latest released OPatch version for 12.2, which is available for download from My Oracle Support patch 6880880 by selecting the 12.2.0.1.0 release.

Upgrade Opatch utility:-

Step 2:- Back up the existing OPatch folder

Back up the OPatch directory as the root user for GRID_HOME and as the oracle user for ORACLE_HOME (database) on both nodes of the cluster. If we try to back it up as the oracle user in GRID_HOME, we will run into permission issues.

GRID_HOME:
$ su - root
$ cd /oradb/app/12.1.0.2/grid/
$ mv OPatch/ OPatch_bkp
$ unzip <PATH_TO_PATCH>/p6880880_122010_Linux-x86-64.zip -d .
$ chown -R oracle:oinstall OPatch
$ chmod -R 755 OPatch

ORACLE_HOME:
$ su - oracle
$ cd /oradb/app/oracle/product/12.1.0.2/db_1
$ mv OPatch/ OPatch_bkp
$ unzip <PATH_TO_PATCH>/p6880880_122010_Linux-x86-64.zip -d .
$ chmod -R 755 OPatch

Now, as the oracle user, verify the OPatch utility version.

GRID_HOME:-(Both Nodes)

$ export ORACLE_HOME=/oradb/app/12.1.0.2/grid
$ export PATH=$ORACLE_HOME/OPatch:$PATH
$ opatch version
OPatch Version: 12.2.0.1.14

OPatch succeeded.

ORACLE_HOME:-(Both Nodes)

$ export ORACLE_HOME=/oradb/app/oracle/product/12.1.0.2/db_1
$ export PATH=$ORACLE_HOME/OPatch:$PATH
$ opatch version
OPatch Version: 12.2.0.1.14

OPatch succeeded.

Prepare for Patching:-

Step 3:- Preparing Node 1 to apply the PSU Patch

Now, log in as the root user and set the environment variables.
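
The individual exports are not listed here; typically, before running opatchauto as root, the Grid home and its OPatch directory are put on the path, along these lines (paths taken from this environment):

# export ORACLE_HOME=/oradb/app/12.1.0.2/grid
# export PATH=$ORACLE_HOME/OPatch:$PATH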

Applying Patch:-

Step 4:- Navigate to the patch location and follow the below steps to apply the patch.

$ cd <PATH_TO_PATCH>
$ unzip p27967747_121020_Linux-x86-64.zip
$ cd 27967747
$ $ORACLE_HOME/OPatch/opatchauto apply ./

OPatchauto session is initiated at Wed Sep 26 02:39:52 2018

System initialization log file is /oradb/app/12.1.0.2/grid/cfgtoollogs/opatchautodb/systemconfig2018-09-26_02-40-10AM.log.

Session log file is /oradb/app/12.1.0.2/grid/cfgtoollogs/opatchauto/opatchauto2018-09-26_02-41-29AM.log
The id for this session is WYWB

Executing OPatch prereq operations to verify patch applicability on home /oradb/app/12.1.0.2/grid

Executing OPatch prereq operations to verify patch applicability on home /oradb/app/oracle/product/12.1.0.2/db_1
Patch applicability verified successfully on home /oradb/app/oracle/product/12.1.0.2/db_1

Patch applicability verified successfully on home /oradb/app/12.1.0.2/grid

Verifying SQL patch applicability on home /oradb/app/oracle/product/12.1.0.2/db_1
SQL patch applicability verified successfully on home /oradb/app/oracle/product/12.1.0.2/db_1

Preparing to bring down database service on home /oradb/app/oracle/product/12.1.0.2/db_1

WARNING: The service ORCL.oracledbwr.com configured on orcl will not be switched as it is not configured to run on any other node(s).
Successfully prepared home /oradb/app/oracle/product/12.1.0.2/db_1 to bring down database service

Relocating RACOne home before patching on home /oradb/app/oracle/product/12.1.0.2/db_1
Relocated RACOne home before patching on home /oradb/app/oracle/product/12.1.0.2/db_1

Bringing down CRS service on home /oradb/app/12.1.0.2/grid
Prepatch operation log file location: /oradb/app/12.1.0.2/grid/cfgtoollogs/crsconfig/crspatch_prodrac101_2018-09-26_02-49-04AM.log
CRS service brought down successfully on home /oradb/app/12.1.0.2/grid

Performing prepatch operation on home /oradb/app/oracle/product/12.1.0.2/db_1
Perpatch operation completed successfully on home /oradb/app/oracle/product/12.1.0.2/db_1

Start applying binary patch on home /oradb/app/oracle/product/12.1.0.2/db_1
Binary patch applied successfully on home /oradb/app/oracle/product/12.1.0.2/db_1

Performing postpatch operation on home /oradb/app/oracle/product/12.1.0.2/db_1
Postpatch operation completed successfully on home /oradb/app/oracle/product/12.1.0.2/db_1


Start applying binary patch on home /oradb/app/12.1.0.2/grid
Binary patch applied successfully on home /oradb/app/12.1.0.2/grid

Starting CRS service on home /oradb/app/12.1.0.2/grid
Postpatch operation log file location: /oradb/app/12.1.0.2/grid/cfgtoollogs/crsconfig/crspatch_prodrac101_2018-09-26_03-42-46AM.log
CRS service started successfully on home /oradb/app/12.1.0.2/grid

Relocating back RACOne to home /oradb/app/oracle/product/12.1.0.2/db_1
Relocated back RACOne home successfully to home /oradb/app/oracle/product/12.1.0.2/db_1


Preparing home /oradb/app/oracle/product/12.1.0.2/db_1 after database service restarted
No step execution required.........


Trying to apply SQL patch on home /oradb/app/oracle/product/12.1.0.2/db_1
SQL patch applied successfully on home /oradb/app/oracle/product/12.1.0.2/db_1

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:prodrac101
RAC Home:/oradb/app/oracle/product/12.1.0.2/db_1
Version:12.1.0.2.0
Summary:

==Following patches were SKIPPED:

Patch: /mnt/hgfs/shared/soft/12102/July18_PSU/27967747/26983807
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /mnt/hgfs/shared/soft/12102/July18_PSU/27967747/27762277
Reason: This patch is not applicable to this specified target type - "rac_database"


==Following patches were SUCCESSFULLY applied:

Patch: /mnt/hgfs/shared/soft/12102/July18_PSU/27967747/27547329
Log: /oradb/app/oracle/product/12.1.0.2/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2018-09-26_02-55-50AM_1.log

Patch: /mnt/hgfs/shared/soft/12102/July18_PSU/27967747/27762253
Log: /oradb/app/oracle/product/12.1.0.2/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2018-09-26_02-55-50AM_1.log


Host:prodrac101
CRS Home:/oradb/app/12.1.0.2/grid
Version:12.1.0.2.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /mnt/hgfs/shared/soft/12102/July18_PSU/27967747/26983807
Log: /oradb/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-09-26_03-08-36AM_1.log

Patch: /mnt/hgfs/shared/soft/12102/July18_PSU/27967747/27547329
Log: /oradb/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-09-26_03-08-36AM_1.log

Patch: /mnt/hgfs/shared/soft/12102/July18_PSU/27967747/27762253
Log: /oradb/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-09-26_03-08-36AM_1.log

Patch: /mnt/hgfs/shared/soft/12102/July18_PSU/27967747/27762277
Log: /oradb/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2018-09-26_03-08-36AM_1.log

OPatchauto session completed at Wed Sep 26 04:03:09 2018
Time taken to complete the session 83 minutes, 17 seconds

Patch Verification:-

Step 5:- Once the patch has been applied successfully, verify it in the database like below.

$ sqlplus / as sysdba
SQL> set serveroutput on
SQL> exec dbms_qopatch.get_sqlpatch_status;

Patch Id : 27547329
Action : APPLY
Action Time : 26-SEP-2018 04:03:06
Description : DATABASE PATCH SET UPDATE 12.1.0.2.180717
Logfile :
/oradb/app/oracle/cfgtoollogs/sqlpatch/27547329/22280349/27547329_apply_ORCL_201
8Sep26_04_00_51.log
Status : SUCCESS

PL/SQL procedure successfully completed.
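
The binary patches recorded in each home can also be cross-checked from the OPatch inventory, for example:

$ $ORACLE_HOME/OPatch/opatch lspatches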

Similarly, follow the same steps to apply the patch on Node 2.


 

Oracle12c-RAC One Node Switchover

In the previous article we configured Oracle 12cR1 One Node RAC. Here, let us carry out some hands-on activities in that environment.

Description:-

As mentioned, we have an Oracle 12cR1 One Node RAC database configured on nodes prodrac101 and prodrac102. Due to an OS maintenance activity, we need to stop the Oracle services on Node 1 and relocate them to Node 2, minimizing database downtime and ensuring business continuity.

Let’s start the demo

Below is the database configuration output.

$ srvctl config database -d ORCL
Database unique name: ORCL
Database name: ORCL
Oracle home: /oradb/app/oracle/product/12.1.0.2/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL/PARAMETERFILE/spfile.278.985981865
Password file: +DBWR_DATA/ORCL/PASSWORD/pwdorcl.276.985981257
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ORCLPOOL
Disk Groups: DBWR_FRA,DBWR_DATA
Mount point paths:
Services: ORCL.oracledbwr.com
Type: RACOneNode
Online relocation timeout: 30
Instance name prefix: ORCL
Candidate servers:
OSDBA group: dba
OSOPER group: oper
Database instances:
Database is policy managed

Note down the server pool in which the database is configured. Let us verify on which node the instance is running.

$ srvctl status database -d ORCL
Instance ORCL_1 is running on node prodrac101
Online relocation: INACTIVE

From the above output we can see that the instance is running on the first node, so we will relocate the instance from Node 1 (prodrac101) to Node 2 (prodrac102).

Before we start the relocation, make sure the server pools are configured properly. For example, below is the server pool configuration in our environment.

$ srvctl config srvpool
Server pool name: Free
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names:
Server pool name: Generic
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names:
Server pool name: ORCLPOOL
Importance: 0, Min: 0, Max: 1
Category: hub
Candidate server names:

From the database configuration we already know that our instance runs under the “ORCLPOOL” server pool. In the above output, the Max value of this server pool is 1 and we need to change it; otherwise the relocation will fail as shown below.

$ srvctl relocate database -d ORCL -n prodrac102 -w 5 -v
Online relocation failed, rolling back to original state
PRCD-1222 : Online relocation of database "ORCL" failed but database was restored to its original state
PRCR-1114 : Failed to relocate servers prodrac102 into server pool ora.ORCLPOOL
CRS-2598: Server pool 'ora.ORCLPOOL' is already at its maximum size of '1'

In order to avoid the above error, we need to increase the max value of the server pool as below.

$ srvctl modify srvpool -g ORCLPOOL -l 1 -u 3 -i 999

Once the max value is increased, verify the configuration now.

$ srvctl config srvpool
Server pool name: Free
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names:
Server pool name: Generic
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names:
Server pool name: ORCLPOOL
Importance: 999, Min: 1, Max: 3
Category: hub
Candidate server names:

Now, we can start the relocation process.

$ srvctl relocate database -d ORCL -n prodrac102 -w 5 -v
Configuration updated to two instances
Instance ORCL_2 started
Services relocated
Waiting for up to 5 minutes for instance ORCL_1 to stop ...
Instance ORCL_1 stopped
Configuration updated to one instance

Now, verify the database configuration and on which node the instance is running.

$ srvctl config database -d ORCL
Database unique name: ORCL
Database name: ORCL
Oracle home: /oradb/app/oracle/product/12.1.0.2/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL/PARAMETERFILE/spfile.278.985981865
Password file: +DBWR_DATA/ORCL/PASSWORD/pwdorcl.276.985981257
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ORCLPOOL
Disk Groups: DBWR_FRA,DBWR_DATA
Mount point paths:
Services: ORCL.oracledbwr.com
Type: RACOneNode
Online relocation timeout: 30
Instance name prefix: ORCL
Candidate servers:
OSDBA group: dba
OSOPER group: oper
Database instances:
Database is policy managed
$ srvctl status database -d ORCL
Instance ORCL_2 is running on node prodrac102
Online relocation: INACTIVE

Now, we are sure that the instance has been relocated from Node 1 (prodrac101) to Node 2 (prodrac102).
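
Once the maintenance on Node 1 is complete, the same relocation command can be used to move the instance back, for example:

$ srvctl relocate database -d ORCL -n prodrac101 -w 5 -v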

 


Step by Step Install of Oracle12c RAC One Node On OEL 6.5 Using VMware

Description:-

In this article, let us configure Oracle 12cR1 One Node RAC. Below are the details of the servers we are going to configure.

High Level Steps:-
1) Pre-requisites for RAC Installation
2) SSH Configuration and running runcluvfy
3) Grid Infrastructure Installation
4) Database Binaries Installation
5) One Node RAC Database Creation

Pre-requisites for RAC Installation:-

Below are the IP details of the public, private, virtual and SCAN configuration.

$ cat /etc/hosts

127.0.0.1      localhost.localdomain       localhost

#Public IP
192.168.1.211  prodrac101.oracledbwr.com   prodrac101
192.168.1.212  prodrac102.oracledbwr.com   prodrac102

#Private IP
192.168.2.211  prodprv101.oracledbwr.com   prodprv101
192.168.2.212  prodprv102.oracledbwr.com   prodprv102

#Virtual IP
192.168.1.214  prodvip101.oracledbwr.com   prodvip101
192.168.1.215  prodvip102.oracledbwr.com   prodvip102

#Scan IP
192.168.1.218  prodscn101.oracledbwr.com  prodscn101
192.168.1.219  prodscn101.oracledbwr.com  prodscn101
192.168.1.220  prodscn101.oracledbwr.com  prodscn101

The prerequisite steps involved in a One Node RAC installation are similar to a normal two-node RAC installation. You can refer here for the OS configuration and prerequisites needed for One Node RAC installation (follow up to Step 79 for the OS configuration and prerequisites).

SSH Configuration and running runcluvfy:-

Login into NODE1/NODE2 as oracle:

$ cd <path-to-grid-software>/sshsetup
$ ./sshUserSetup.sh -user oracle -hosts "prodrac101 prodrac102" -noPromptPassphrase

Run the above command on both servers of the cluster where we are going to configure One Node RAC.
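
A quick way to confirm that passwordless SSH works in both directions (hostnames as defined in /etc/hosts above):

$ ssh prodrac102 date
$ ssh prodrac101 date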

Once SSH is configured successfully, execute runcluvfy to check whether all the prerequisites for the RAC installation have been met.

$ cd <path-to-grid-software>/
$ ./runcluvfy.sh stage -pre crsinst -n prodrac101,prodrac102 -r 12cR1 -orainv oinstall -fixup -verbose

Please check here for the cluvfy output.

Grid Infrastructure Installation:-

When cluvfy completes successfully, follow the steps below for the Oracle 12cR1 Grid installation. Go to the unzipped directory of the grid software and start the installation by executing runInstaller.

$ cd <path-to-grid-software>/
$ ./runInstaller

Here, provide the cluster name and the SCAN name of the cluster.

As in a normal two-node RAC installation, provide the public and virtual IP hostnames of the servers on the page below.

Select a disk group where the OCR and voting disk of the cluster will be placed.

$ sh /oradb/app/oraInventory/orainstRoot.sh
Changing permissions of /oradb/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /oradb/app/oraInventory to oinstall.
The execution of the script is complete.

Node 1:-

$ sh /oradb/app/12.1.0.2/grid/root.sh
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /oradb/app/12.1.0.2/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oradb/app/12.1.0.2/grid/crs/install/crsconfig_params
2018/09/01 22:25:44 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.

2018/09/01 22:26:14 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.

OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
2018/09/01 22:27:02 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.evmd' on 'prodrac101'
CRS-2672: Attempting to start 'ora.mdnsd' on 'prodrac101'
CRS-2676: Start of 'ora.mdnsd' on 'prodrac101' succeeded
CRS-2676: Start of 'ora.evmd' on 'prodrac101' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'prodrac101'
CRS-2676: Start of 'ora.gpnpd' on 'prodrac101' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'prodrac101'
CRS-2672: Attempting to start 'ora.gipcd' on 'prodrac101'
CRS-2676: Start of 'ora.cssdmonitor' on 'prodrac101' succeeded
CRS-2676: Start of 'ora.gipcd' on 'prodrac101' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'prodrac101'
CRS-2672: Attempting to start 'ora.diskmon' on 'prodrac101'
CRS-2676: Start of 'ora.diskmon' on 'prodrac101' succeeded
CRS-2676: Start of 'ora.cssd' on 'prodrac101' succeeded

ASM created and started successfully.

Disk Group DBWR_DATA created successfully.

CRS-2672: Attempting to start 'ora.crf' on 'prodrac101'
CRS-2672: Attempting to start 'ora.storage' on 'prodrac101'
CRS-2676: Start of 'ora.storage' on 'prodrac101' succeeded
CRS-2676: Start of 'ora.crf' on 'prodrac101' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'prodrac101'
CRS-2676: Start of 'ora.crsd' on 'prodrac101' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk de3a592de9eb4feabf9fb4121f96c1ae.
Successfully replaced voting disk group with +DBWR_DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE de3a592de9eb4feabf9fb4121f96c1ae (ORCL:DBWR_DATA) [DBWR_DATA]
Located 1 voting disk(s).
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'prodrac101'
CRS-2673: Attempting to stop 'ora.crsd' on 'prodrac101'
CRS-2677: Stop of 'ora.crsd' on 'prodrac101' succeeded
CRS-2673: Attempting to stop 'ora.evmd' on 'prodrac101'
CRS-2673: Attempting to stop 'ora.storage' on 'prodrac101'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'prodrac101'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'prodrac101'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'prodrac101'
CRS-2677: Stop of 'ora.storage' on 'prodrac101' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'prodrac101' succeeded
CRS-2677: Stop of 'ora.evmd' on 'prodrac101' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'prodrac101'
CRS-2673: Attempting to stop 'ora.ctssd' on 'prodrac101'
CRS-2673: Attempting to stop 'ora.asm' on 'prodrac101'
CRS-2677: Stop of 'ora.mdnsd' on 'prodrac101' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'prodrac101' succeeded
CRS-2677: Stop of 'ora.crf' on 'prodrac101' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'prodrac101' succeeded
CRS-2677: Stop of 'ora.asm' on 'prodrac101' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'prodrac101'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'prodrac101' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'prodrac101'
CRS-2677: Stop of 'ora.cssd' on 'prodrac101' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'prodrac101'
CRS-2677: Stop of 'ora.gipcd' on 'prodrac101' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'prodrac101' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'prodrac101'
CRS-2672: Attempting to start 'ora.evmd' on 'prodrac101'
CRS-2676: Start of 'ora.mdnsd' on 'prodrac101' succeeded
CRS-2676: Start of 'ora.evmd' on 'prodrac101' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'prodrac101'
CRS-2676: Start of 'ora.gpnpd' on 'prodrac101' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'prodrac101'
CRS-2676: Start of 'ora.gipcd' on 'prodrac101' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'prodrac101'
CRS-2676: Start of 'ora.cssdmonitor' on 'prodrac101' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'prodrac101'
CRS-2672: Attempting to start 'ora.diskmon' on 'prodrac101'
CRS-2676: Start of 'ora.diskmon' on 'prodrac101' succeeded
CRS-2676: Start of 'ora.cssd' on 'prodrac101' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'prodrac101'
CRS-2672: Attempting to start 'ora.ctssd' on 'prodrac101'
CRS-2676: Start of 'ora.ctssd' on 'prodrac101' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'prodrac101' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'prodrac101'
CRS-2676: Start of 'ora.asm' on 'prodrac101' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'prodrac101'
CRS-2676: Start of 'ora.storage' on 'prodrac101' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'prodrac101'
CRS-2676: Start of 'ora.crf' on 'prodrac101' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'prodrac101'
CRS-2676: Start of 'ora.crsd' on 'prodrac101' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: prodrac101
CRS-6016: Resource auto-start has completed for server prodrac101
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2018/09/01 22:33:42 CLSRSC-343: Successfully started Oracle Clusterware stack

CRS-2672: Attempting to start 'ora.asm' on 'prodrac101'
CRS-2676: Start of 'ora.asm' on 'prodrac101' succeeded
CRS-2672: Attempting to start 'ora.DBWR_DATA.dg' on 'prodrac101'
CRS-2676: Start of 'ora.DBWR_DATA.dg' on 'prodrac101' succeeded
2018/09/01 22:35:24 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Node 2:-

$ sh /oradb/app/12.1.0.2/grid/root.sh
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /oradb/app/12.1.0.2/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oradb/app/12.1.0.2/grid/crs/install/crsconfig_params
2018/09/01 22:45:47 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.

2018/09/01 22:46:16 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.

OLR initialization - successful
2018/09/01 22:47:40 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'prodrac102'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'prodrac102'
CRS-2677: Stop of 'ora.drivers.acfs' on 'prodrac102' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'prodrac102' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'prodrac102'
CRS-2672: Attempting to start 'ora.evmd' on 'prodrac102'
CRS-2676: Start of 'ora.evmd' on 'prodrac102' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'prodrac102' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'prodrac102'
CRS-2676: Start of 'ora.gpnpd' on 'prodrac102' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'prodrac102'
CRS-2676: Start of 'ora.gipcd' on 'prodrac102' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'prodrac102'
CRS-2676: Start of 'ora.cssdmonitor' on 'prodrac102' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'prodrac102'
CRS-2672: Attempting to start 'ora.diskmon' on 'prodrac102'
CRS-2676: Start of 'ora.diskmon' on 'prodrac102' succeeded
CRS-2676: Start of 'ora.cssd' on 'prodrac102' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'prodrac102'
CRS-2672: Attempting to start 'ora.ctssd' on 'prodrac102'
CRS-2676: Start of 'ora.ctssd' on 'prodrac102' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'prodrac102' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'prodrac102'
CRS-2676: Start of 'ora.asm' on 'prodrac102' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'prodrac102'
CRS-2676: Start of 'ora.storage' on 'prodrac102' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'prodrac102'
CRS-2676: Start of 'ora.crf' on 'prodrac102' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'prodrac102'
CRS-2676: Start of 'ora.crsd' on 'prodrac102' succeeded
CRS-6017: Processing resource auto-start for servers: prodrac102
CRS-2672: Attempting to start 'ora.net1.network' on 'prodrac102'
CRS-2676: Start of 'ora.net1.network' on 'prodrac102' succeeded
CRS-2672: Attempting to start 'ora.ons' on 'prodrac102'
CRS-2676: Start of 'ora.ons' on 'prodrac102' succeeded
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'prodrac101'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'prodrac101' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'prodrac101'
CRS-2677: Stop of 'ora.scan1.vip' on 'prodrac101' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'prodrac102'
CRS-2676: Start of 'ora.scan1.vip' on 'prodrac102' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'prodrac102'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'prodrac102' succeeded
CRS-6016: Resource auto-start has completed for server prodrac102
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2018/09/01 22:52:27 CLSRSC-343: Successfully started Oracle Clusterware stack

2018/09/01 22:52:44 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Database Binaries Installation:-

Let us start the Oracle 12cR1 database software installation.

On the installation options screen, select the third option (Oracle RAC One Node database installation), since we are configuring a RAC One Node installation.
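
If you prefer to launch the installer from the command line, a minimal sketch is shown below; /stage/database is a hypothetical staging location for the unzipped 12cR1 database media.

$ cd /stage/database
$ ./runInstaller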

$ sh /oradb/app/oraInventory/orainstRoot.sh
Changing permissions of /oradb/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /oradb/app/oraInventory to oinstall.
The execution of the script is complete.

Node 1:-

$ sh /oradb/app/oracle/product/12.1.0.2/db_1/root.sh
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /oradb/app/oracle/product/12.1.0.2/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

Node 2:-

$ sh /oradb/app/oracle/product/12.1.0.2/db_1/root.sh
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /oradb/app/oracle/product/12.1.0.2/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

One Node RAC Database Creation:-

After completing the Oracle 12cR1 database binaries installation, go to the bin directory and start dbca for database creation.
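
As an alternative to the interactive DBCA session, a silent-mode invocation roughly like the sketch below can be used for a RAC One Node database; the service name, passwords and flag set here are placeholders based on this example environment and should be verified with dbca -help for your release.

$ dbca -silent -createDatabase \
    -templateName General_Purpose.dbc \
    -gdbName ORCL \
    -databaseConfType RACONENODE \
    -RACOneNodeServiceName ORCL_SVC \
    -nodelist prodrac101,prodrac102 \
    -storageType ASM \
    -diskGroupName DBWR_DATA \
    -recoveryGroupName DBWR_FRA \
    -sysPassword <sys_password> \
    -systemPassword <system_password>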

Once the RAC One Node installation and database creation are complete, check the database configuration and find out which node the database is running on using the commands below.

$ srvctl config database -d ORCL
Database unique name: ORCL
Database name: ORCL
Oracle home: /oradb/app/oracle/product/12.1.0.2/db_1
Oracle user: oracle
Spfile: +DBWR_DATA/ORCL/PARAMETERFILE/spfile.278.985981865
Password file: +DBWR_DATA/ORCL/PASSWORD/pwdorcl.276.985981257
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ORCLPOOL
Disk Groups: DBWR_FRA,DBWR_DATA
Mount point paths: 
Services: ORCL.oracledbwr.com
Type: RACOneNode
Online relocation timeout: 30
Instance name prefix: ORCL
Candidate servers: 
OSDBA group: dba
OSOPER group: oper
Database instances: 
Database is policy managed
$ srvctl status database -d ORCL -v
Instance ORCL_1 is running on node prodrac102. Instance status: Open.
Online relocation: INACTIVE

From the above output, we can see that the instance is running on the second node (prodrac102).
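
Since this is a RAC One Node database, the running instance can be moved to the other node with an online relocation; a minimal sketch (assuming prodrac101 is the target node) is shown below.

$ srvctl relocate database -d ORCL -n prodrac101 -w 30
$ srvctl status database -d ORCL -v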

Catch Me On:- Hariprasath Rajaram

Telegram:https://t.me/joinchat/I_f4DhGF_Zifr9YZvvMkRg
LinkedIn:https://www.linkedin.com/in/hari-prasath-aa65bb19/
Facebook:https://www.facebook.com/HariPrasathdba
FB Group:https://www.facebook.com/groups/894402327369506/
FB Page: https://www.facebook.com/dbahariprasath/?
Twitter: https://twitter.com/hariprasathdba

Voting Disk In Oracle Rac Database

Description:-

  • In this article, we are going to look at the Voting Disk concepts in an Oracle RAC database.
  • Oracle Clusterware uses voting disk files to determine which nodes are members of a cluster.
  • You can configure voting disks on Oracle ASM, or you can configure voting disks on shared storage.
  • If you do not configure voting disks on Oracle ASM, then for high availability, Oracle recommends that you have a minimum of three voting disks on physically separate storage. This avoids having a single point of failure. If you configure a single voting disk, then you must use external mirroring to provide redundancy.
  • The number of voting disks depends on the redundancy type of the disk group. From 11.2.0.x onwards, the OCR and voting files are placed in an ASM disk group.

External redundancy = 1 voting disk
Normal redundancy   = 3 voting disks
High redundancy     = 5 voting disks

You can have up to 32 voting disks in your cluster.

Oracle recommends that you configure multiple voting disks during Oracle Clusterware installation to improve availability. If you choose to put the voting disks into an Oracle ASM disk group, then Oracle ASM ensures the configuration of multiple voting disks if you use a normal or high redundancy disk group.

To identify the voting disk location :-

[oracle@rac1 ~]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE b4a7f383bb414f7ebf6aaae7c3873401 (/dev/oracleasm/disks/ASMDISK1) [DATA]
Located 1 voting disk(s).

To backup the voting disk (Before 11gR2) :-

dd if=voting_disk_name of=backup_file_name

The following command can be used to restore the voting disk from the backup file created:

dd if=backup_file_name of=voting_disk_name
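
For example, assuming a hypothetical pre-11gR2 voting disk kept on the raw device /dev/raw/raw2, the backup and restore would look like:

# dd if=/dev/raw/raw2 of=/backup/votedisk_raw2.bak bs=4k
# dd if=/backup/votedisk_raw2.bak of=/dev/raw/raw2 bs=4k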

In previous versions of Oracle Clusterware, you needed to back up the voting disks with the dd command. Starting with Oracle Clusterware 11g Release 2, you no longer need to back up the voting disks; they are automatically backed up as part of the OCR. In fact, Oracle explicitly indicates that you should not use a backup tool like dd to back up or restore voting disks, as doing so can lead to the loss of the voting disk.

What Information is stored in VOTING DISK/FILE?

It contains 2 types of data.

Static data: Information about the nodes in cluster

Dynamic data: Disk heartbeat logging

It contains important details of cluster node membership, such as:

  • Which node is part of the cluster?
  • Which node is leaving the cluster?
  • Which node is joining the cluster?

Although the voting disk contents do not change frequently, you should back up the voting disk file every time
– you add or remove a node from the cluster, or
– immediately after you configure or upgrade a cluster.

To move the voting disk, create another disk group with external redundancy named 'DATA1' (a minimal creation sketch follows the notes below).

  • From 11gR2, voting files are stored in an ASM disk group.
  • The “add” and “delete” commands are not available; only the “replace” command can be used when voting files are stored in an ASM disk group.
  • Note: You cannot have more than one voting disk, whether in the same or a different disk group, when using external redundancy in 11.2.
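
A minimal sketch for creating such a disk group from SQL*Plus (connected to the ASM instance as SYSASM; the disk path is an assumption based on the environment shown below) is:

SQL> CREATE DISKGROUP DATA1 EXTERNAL REDUNDANCY
     DISK '/dev/oracleasm/disks/ASMDISK2';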

To identify the status and voting disk location :-

[oracle@rac1 ~]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE b4a7f383bb414f7ebf6aaae7c3873401 (/dev/oracleasm/disks/ASMDISK1) [DATA]
Located 1 voting disk(s).

Replace a voting disk :-

[oracle@rac1 ~]$ crsctl replace votedisk +DATA1
Successful addition of voting disk 9789b4bf42214f8bbf14fda587ba331a.
Successful deletion of voting disk b4a7f383bb414f7ebf6aaae7c3873401.
Successfully replaced voting disk group with +DATA1.
CRS-4266: Voting file(s) successfully replaced

Check the status and verify voting disk location :-

[oracle@rac1 ~]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 9789b4bf42214f8bbf14fda587ba331a (/dev/oracleasm/disks/ASMDISK2) [DATA1]
Located 1 voting disk(s).

Why should we have an ODD number of voting disks?

A node must be able to access more than half of the voting disks at any time.

Scenario:

Let us consider a 2-node cluster with an even number of voting disks, say 2.

  • Node 1 is able to access only voting disk 1.
  • Node 2 is able to access only voting disk 2.
  • From the above, we can see that there is no common file where the clusterware can check the heartbeat of both nodes.
  • If we have 3 voting disks and both nodes are able to access more than half, i.e., 2 voting disks, there will be at least one disk that is accessible by both nodes. The clusterware can use this disk to check the heartbeat of the nodes.
  • A node that cannot access more than half of the voting disks will be evicted from the cluster by another node that can, in order to maintain the integrity of the cluster.

Recover the corrupted voting disk :-

ASMCMD> lsdsk -G DATA1

Path
/dev/oracleasm/disks/ASMDISK2

As a root user,

#dd if=/dev/zero of=/dev/oracleasm/disks/ASMDISK2 bs=4096 count=1000000

The above session will hang.

Check the clusterware status in another session:

**************************************************************
rac1:
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager
**************************************************************
rac2:
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager
**************************************************************

After rebooting both nodes, check the clusterware status :-

[oracle@rac1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager

The voting disk cannot be restored to the DATA1 disk group directly, since the disk in DATA1 has been corrupted.

Stop the CRS forcefully on both nodes and check the clusterware status:

[root@rac1 bin]# ./crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.

Start the CRS in exclusive mode on any one node:

[root@rac1 bin]# ./crsctl start crs -excl
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac1'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2676: Start of 'ora.crf' on 'rac1' succeeded
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2679: Attempting to clean 'ora.asm' on 'rac1'
CRS-2681: Clean of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac1'
CRS-2676: Start of 'ora.storage' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded

After the CRS exclusive startup, check the clusterware status:

[root@rac1 bin]# ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4692: Cluster Ready Services is online in exclusive mode
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

Using ASMCA, recreate the ASM disk group named 'DATA1' in which the voting disk was previously placed.

ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 30718 20165 0 20165 0 N DATA/
MOUNTED EXTERN N 512 4096 1048576 10236 10183 0 10183 0 N DATA1/

Check the voting disk location :-

[oracle@rac1 ~]$ crsctl query css votedisk
Located 0 voting disk(s).

Replace the voting disk   :-

[oracle@rac1 ~]$ crsctl replace votedisk +DATA1
Successful addition of voting disk 5a1ef50fe3354f35bfa7f86a6ccb8990.
Successfully replaced voting disk group with +DATA1.
CRS-4266: Voting file(s) successfully replaced

[oracle@rac1 ~]$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 5a1ef50fe3354f35bfa7f86a6ccb8990 (/dev/oracleasm/disks/ASMDISK2) [DATA1]
Located 1 voting disk(s).

Stop the CRS running in exclusive mode,

# crsctl stop crs

Start the CRS (clusterware) on all nodes,

# crsctl start crs

Check the clusterware status on both nodes,

[root@rac1 bin]# ./crsctl check cluster -all
**************************************************************
rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

Reference:-

https://docs.oracle.com/cd/E11882_01/rac.112/e41959/votocr.htm#CWADD91889

Catch Me On:- Hariprasath Rajaram

LinkedIn:https://www.linkedin.com/in/hari-prasath-aa65bb19/
Facebook:https://www.facebook.com/HariPrasathdba
FB Group:https://www.facebook.com/groups/894402327369506/
FB Page: https://www.facebook.com/dbahariprasath/?
Twitter:  https://twitter.com/hariprasathdba

 

Oracle Cluster Registry in RAC (OCR)

Description:-

  • In this article, we are going to look at the Oracle Cluster Registry (OCR) concepts in RAC.
  • OCR manages Oracle Clusterware and Oracle RAC database configuration information
  • You can store OCR and voting disks on Oracle Automatic Storage Management (Oracle ASM), or a certified cluster file system
  • Oracle recommends that you use Oracle ASM to store OCR and voting disks.
  • The main purpose of Oracle Cluster Registry (OCR) is to hold cluster and
    database information for RAC and Cluster Ready Services (CRS), such as:
    1) Cluster database instance-to-node mapping
    2) Cluster node list
    3) CRS application resource profiles
    4) Local listener and SCAN listener
    5) VIP, SCAN IP and services
    6) ASM disk groups, volumes, file systems, and instances
    7) OCR automatic and manual backup information

Backing Up Oracle Cluster Registry

  • This section describes how to back up OCR content and use it for recovery. The first method uses automatically generated OCR copies and the second method enables you to issue a backup command manually:
  • Automatic backups: Oracle Clusterware automatically creates OCR backups every four hours. At any one time, Oracle Database always retains the last three backup copies of OCR. The CRSD process that creates the backups also creates and retains an OCR backup for each full day and at the end of each week. You cannot customize the backup frequencies or the number of files that Oracle Database retains.
  • Manual backups: Run the ocrconfig -manualbackup command on a node where the Oracle Clusterware stack is up and running to force Oracle Clusterware to perform a backup of OCR at any time, rather than wait for the automatic backup. You must run the command as a user with administrative privileges. The -manualbackup option is especially useful when you want to obtain a binary backup on demand, such as before you make changes to OCR. The OLR only supports manual backups.
  • When the clusterware stack is down on all nodes in the cluster, the backups that are listed by the ocrconfig -showbackup command may differ from node to node.

Oracle Clusterware 11g Release 2 backs up the OCR automatically every four hours, on a schedule that depends on when the node was started:
4-hour backups (3 max) – backup00.ocr, backup01.ocr and backup02.ocr
Daily backups (2 max) – day.ocr and day_.ocr
Weekly backups (2 max) – week.ocr and week_.ocr

You can use the ocrconfig command to view the current OCR backups,

To see all available backups,

[root@rac1 bin]# ocrconfig -showbackup

To see all available automatic backups,

[root@rac1 bin]# ocrconfig -showbackup auto

Note: Automatic backups will not occur when the cluster is down.

Manually back up the OCR :-

[root@rac1 bin]# ocrconfig -manualbackup      <-- physical backup of OCR
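
To list only the manual backups taken this way, you can run:

[root@rac1 bin]# ocrconfig -showbackup manual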

Logical backup of OCR :-

The above command backs up the OCR under the default backup directory. You can export the contents of the OCR using the command below:

[root@rac1 bin]# ocrconfig -export /tmp/ocr_exp.ocr

The logical backup of the OCR (taken using the export option) can be imported using the command below:

[root@rac1 bin]# ocrconfig -import /tmp/ocr_exp.ocr

Verify the OCR integrity across all cluster nodes by running the CVU command:

[oracle@rac1 ~]$ cluvfy comp ocr -n all
Verifying OCR integrity
Checking OCR integrity...
Checking the absence of a non-clustered configuration...
All nodes free of non-clustered, local-only configurations
Checking daemon liveness...
Liveness check passed for "CRS daemon"
Checking OCR config file "/etc/oracle/ocr.loc"...
OCR config file "/etc/oracle/ocr.loc" check successful
Disk group for ocr location "+DATA" is available on all the nodes
Checking OCR backup location "/u01/app/12.1.0.2/grid/cdata/racscan"
OCR backup location "/u01/app/12.1.0.2/grid/cdata/racscan" check passed
NOTE:
This check does not verify the integrity of the OCR contents. Execute 'ocrcheck' as a privileged user to verify the contents of OCR.
OCR integrity check passed
Verification of OCR integrity was successful.

To find OCR file location :-

To know the OCR location in the cluster environment, run the following as the Grid installation owner.

When the cluster is running,

[oracle@rac1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 1560
Available space (kbytes) : 408008
ID : 150668753
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user

Whether the cluster is running or not, the OCR location can also be found in the ocr.loc file:

[oracle@rac1 bin]$ cat /etc/oracle/ocr.loc — In Linux
ocrconfig_loc=+DATA
local_only=FALSE

To change the default OCR backup location :-

[root@rac1 bin]#./ocrconfig -backuploc /u01

Restore OCR in Grid environment :-

Step 1 :- Stop the cluster on each node (as the root user).

[root@rac1 bin]# crsctl stop crs -f

Step 2 :- Start the cluster in exclusive mode (as the root user)

As root, start Grid Infrastructure in exclusive mode on one node only.

In an 11gR1 cluster environment, use the option below to start the cluster in exclusive mode.

[root@rac1 bin]# crsctl start crs -excl

From 11gR2 onwards, use the option below to start the cluster in exclusive mode.

[root@rac1 bin]# crsctl start crs -excl -nocrs

Note: A new option ‘-nocrs‘ has been introduced with 11.2.0.2, which prevents the start of the ora.crsd resource. It is vital that this option is specified; otherwise the failure to start the ora.crsd resource will tear down ora.cluster_interconnect.haip, which in turn will cause ASM to crash.

If the OCR disk group does not exist, create it first; otherwise, move on to restoring the OCR.

Step 3 :- Restoring OCR

To check whether the OCR is corrupted or not, run ocrcheck:

# ocrcheck

Check the status reported by ocrcheck.

OCR check example:

# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 4404
Available space (kbytes) : 257716
ID : 1306201859
Device/File Name : +DG01
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check failed

Note:

1) Check whether cluster registry integrity check is successful.

2) The logical corruption check is performed only when you run ocrcheck as the root user.
If you run it as the oracle user, the check is bypassed and you will see this line at the end of the "ocrcheck" output:
"Logical corruption check bypassed due to non-privileged user"

If the OCR disk is corrupted, then perform the steps below.

Locate the ocrcheck log file:

$ORACLE_HOME/log/<hostname>/client/ocrcheck_<pid>.log

The command below is used to restore the OCR from the physical backup. Shut down CRS on all nodes first.

Restore the latest OCR backup (as the root user) :-

ocrconfig -restore <file name>

# ocrconfig -restore $ORACLE_HOME/cdata/racscan/backup00.ocr
[oracle@rac1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 1560
Available space (kbytes) : 408008
ID : 150668671
Device/File Name : +DATA
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user

Reference :-

https://docs.oracle.com/cd/E11882_01/rac.112/e41959/votocr.htm#CWADD90974

Catch Me On:- Hariprasath Rajaram

LinkedIn:https://www.linkedin.com/in/hari-prasath-aa65bb19/
Facebook:https://www.facebook.com/HariPrasathdba
FB Group:https://www.facebook.com/groups/894402327369506/
FB Page: https://www.facebook.com/dbahariprasath/?
Twitter:  https://twitter.com/hariprasathdba

Oracle Rac crsctl and srvctl commands

 

CRSCTL Commands :-

Cluster Related Commands
crs_stat -t                      Shows HA resource status (hard to read)
crsstat                          Output of crs_stat -t formatted nicely
crsctl check crs                 Checks that CSS, CRS and EVM appear healthy
crsctl stop crs                  Stops CRS and all other services
crsctl disable crs               Prevents CRS from starting on reboot
crsctl enable crs                Enables CRS start on reboot
crs_stop -all                    Stops all registered resources
crs_start -all                   Starts all registered resources
crsctl stop cluster -all         Stops the cluster on all nodes
crsctl start cluster -all        Starts the cluster on all nodes
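
Note that the crs_stat utility is deprecated from 11gR2 onwards; the equivalent (and more readable) command in current releases is:

crsctl stat res -t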

SRVCTL Commands :-

Database Related Commands
srvctl start instance -d <db_name> -i <inst_name>       Starts an instance
srvctl start database -d <db_name>                      Starts all instances
srvctl stop database -d <db_name>                       Stops all instances, closes database
srvctl stop instance -d <db_name> -i <inst_name>        Stops an instance
srvctl start service -d <db_name> -s <service_name>     Starts a service
srvctl stop service -d <db_name> -s <service_name>      Stops a service
srvctl status service -d <db_name>                      Checks status of a service
srvctl status instance -d <db_name> -i <inst_name>      Checks an individual instance
srvctl status database -d <db_name>                     Checks status of all instances
srvctl start nodeapps -n <node_name>                    Starts gsd, vip, listener, and ons
srvctl stop nodeapps -n <node_name>                     Stops gsd, vip and listener
srvctl status scan                                      Status of the SCAN listener
srvctl config scan                                      Configuration of the SCAN listener
srvctl status asm                                       Status of the ASM instance
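
As a usage sketch, a typical sequence for restarting a single instance for maintenance (ORCL and ORCL1 are placeholder database and instance names) might be:

srvctl stop instance -d ORCL -i ORCL1 -o immediate
srvctl status database -d ORCL -v
srvctl start instance -d ORCL -i ORCL1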

Catch Me On:- Hariprasath Rajaram

LinkedIn:https://www.linkedin.com/in/hari-prasath-aa65bb19/
Facebook:https://www.facebook.com/HariPrasathdba
FB Group:https://www.facebook.com/groups/894402327369506/
FB Page: https://www.facebook.com/dbahariprasath/?
Twitter:  https://twitter.com/hariprasathdba