Connect to Autonomous Data Warehouse Using Oracle Database Tools (SQL*Plus)

Description:-

Applications and tools connect to an Autonomous Data Warehouse instance using Oracle Net Services (also known as SQL*Net). Oracle Net Services enables a network session from a client application to an Oracle Database server.

When a network session is established, Oracle Net Services acts as the data courier for both the client application and the database. It is responsible for establishing and maintaining the connection between the client application and the database, as well as exchanging messages between them.

Oracle Net Services supports a variety of connection types to the Autonomous Data Warehouse, including:

Oracle Call Interface (OCI), which is used by many applications written in C language. Examples include Oracle utilities such as Oracle SQL*Plus, SQL*Loader, and Oracle Data Pump.

Connect with SQL*Plus

SQL*Plus is a command-line interface used to enter and run SQL commands against an Oracle database.

To install and configure the client and connect to the Autonomous Data Warehouse using SQL*Plus, do the following:

Prepare for Oracle Call Interface (OCI), ODBC and JDBC OCI Connections.

Before making an Oracle Call Interface (OCI), ODBC, or JDBC OCI connection, do the following:

Step1:-Install Oracle Client software version 12.2.0.1 (or higher) on your computer. Either the full Oracle Client or the Oracle Instant Client may be used. The Instant Client contains the minimal software needed to make an Oracle Call Interface connection. The Instant Client is sufficient for most applications.

Step2:-Download client credentials and store the file in a secure folder on your computer. See Download Client Credentials (Wallets) here

Step3:-Unzip/uncompress the credentials file into a secure folder on your computer.

Copy these files (sqlnet.ora and tnsnames.ora) to ?/network/admin, where ? stands for the Oracle client installation directory.

 

Step4:-Edit the sqlnet.ora file in the folder where you unzipped the credentials file, replacing “?/network/admin” with the path of the folder containing the client credentials. For example, change:

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="?/network/admin")))
SSL_SERVER_DN_MATCH=yes

To

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="D:\walletcloud")))
SSL_SERVER_DN_MATCH=yes
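This edit can also be scripted. A minimal sketch, assuming the wallet was unzipped to /home/oracle/walletcloud (a made-up path); it works on a throwaway copy of sqlnet.ora under /tmp so nothing real is touched:

```shell
# Create a demo copy of the default sqlnet.ora shipped in the wallet zip.
mkdir -p /tmp/walletdemo
cat > /tmp/walletdemo/sqlnet.ora <<'EOF'
WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="?/network/admin")))
SSL_SERVER_DN_MATCH=yes
EOF

# Point DIRECTORY at the folder containing the unzipped credentials.
sed -i 's|?/network/admin|/home/oracle/walletcloud|' /tmp/walletdemo/sqlnet.ora
cat /tmp/walletdemo/sqlnet.ora
```

On a real system you would run the same sed against the sqlnet.ora in your client's network/admin folder, with your actual wallet path.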

Step5:-Connect using a database user, password, and database service name provided in the tnsnames.ora file.

The Oracle Wallet is transparent to SQL*Plus because the wallet location is specified in the sqlnet.ora file. This is true for any Oracle Call Interface (OCI), ODBC, or JDBC OCI connection.
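To find the database service name for Step 5, you can list the connect aliases defined in tnsnames.ora. A sketch using a made-up wallet file; the alias names (adwdb_high, adwdb_medium, adwdb_low) are illustrative only:

```shell
# Create a sample tnsnames.ora like the one shipped in the wallet zip
# (entries abbreviated; real entries carry host, port and SSL settings).
cat > /tmp/tnsnames.ora <<'EOF'
adwdb_high = (description=(address=(protocol=tcps)(port=1522)(host=adb.example.oraclecloud.com))(connect_data=(service_name=adwdb_high.adb.oraclecloud.com)))
adwdb_medium = (description=(address=(protocol=tcps)(port=1522)(host=adb.example.oraclecloud.com))(connect_data=(service_name=adwdb_medium.adb.oraclecloud.com)))
adwdb_low = (description=(address=(protocol=tcps)(port=1522)(host=adb.example.oraclecloud.com))(connect_data=(service_name=adwdb_low.adb.oraclecloud.com)))
EOF

# Extract the connect aliases (the token at the start of each entry).
grep -oE '^[a-z_]+' /tmp/tnsnames.ora
```

You would then connect with, for example, sqlplus admin@adwdb_high and enter the password when prompted (admin and the alias name are examples; use your own database user and alias).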

You are now successfully connected using SQL*Plus.

Catch Me On:- Hariprasath Rajaram

Telegram:https://t.me/joinchat/I_f4DkeGfZsxfzXxHD6gTg
LinkedIn:https://www.linkedin.com/in/hari-prasath-aa65bb19/
Facebook:https://www.facebook.com/HariPrasathdba
FB Group:https://www.facebook.com/groups/894402327369506/
FB Page: https://www.facebook.com/dbahariprasath/?
Twitter: https://twitter.com/hariprasathdba

DevOps: How Ansible Helps Oracle DBAs

Oracle Database Automation using Ansible Tool 

  • Installation and Configuration of Ansible Here
  • Oracle Automation-Create a DBA User Using Ansible Tool Here
  • Oracle Automation-Creating Oracle 12c Database Using Ansible Tool Here
  • Oracle Automation-Applying PSU patch in Oracle 12c Database Using Ansible Tool Here

New generation of IT Automation tools

  • Ansible
  • Salt
  • Puppet
  • Chef

About These Tools:

What can be Automated using Ansible Tool?

  • Server Setup
  • Code , App Install & Versioning
  • Database Creation
  • Database Software Install
  • Prerequisites
  • Webserver
  • Upgrades
  • Network
  • Patching
  • Maintenance
  • Backup & Recovery
  • Install Packages

Check more on Ansible here

Introduction Of ANSIBLE:

Ansible is an agent-less IT automation tool developed in 2012 by Michael DeHaan, a former Red Hat associate.

The Ansible design goals are for it to be: minimal, consistent, secure, highly reliable, and easy to learn.

Ansible is a simple automation tool that uses YAML syntax to perform tasks. It is written in Python and is available as open source.

ANSIBLE ARCHITECTURE:

Ansible Requirements & Install :

Control Machine

  • Ansible Software
  • Python

How to Install Ansible

  • Using yum – Linux (RHEL/CentOS/Oracle Linux)
  • Using apt – Ubuntu & Debian
  • Using brew – macOS
  • Using pip – Python
  • Clone from Git

Note: Python is required on the target machine for certain modules.

Copy the Ansible control machine's public key into authorized_keys on the target hosts.

Ansible Version & Config :

  • After installing Ansible you can check its version:

ansible --version

  • Ansible shows the default config file as /etc/ansible/ansible.cfg
  • You can set Ansible-related parameters in that file

Ansible Inventory :

  • Inventory is the set of hosts where automation will be executed
  • Inventory is mandatory for Ansible
  • Inventory can be of two types:
  1. Static
  2. Dynamic
  • A static inventory is a file in INI format that contains the host names. Hostnames can be grouped based on the infrastructure of your company
  • By default, /etc/ansible/hosts is the inventory referenced in the ansible.cfg file. You can modify it to point to a different location
  • The inventory file can also be specified at run time using the -i option of the ansible command
  • A dynamic inventory gets its information directly from a CMDB or from a cloud provider such as AWS, Azure or DigitalOcean
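A static inventory in INI format might look like the following (the host and group names here are hypothetical):

```ini
; Hypothetical static inventory; pass it to ansible with -i <file>
[web]
web01.example.com
web02.example.com

[db]
db01.example.com

; A parent group built from other groups
[datacenter:children]
web
db
```

Running ansible -i this_file web -m ping would then target only the two web hosts.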

Ansible Playbooks :

  • Playbooks are the key components of Ansible
  • Playbooks are sets of instructions/tasks that are executed on the hosts through Ansible automation
  • Playbooks reside on the Ansible control server
  • Playbooks are written in YAML
  • Playbooks are idempotent

Ansible Playbooks – YAML:

  • YAML – YAML Ain’t Markup Language
  • Human Readable Data Serialization Language
  • Similar to JSON
  • Human friendly
  • YAML uses space indentation; tabs are not allowed
  • If spacing is incorrect, a syntax error is thrown

YAML Syntax

  • Start of the file: ---
  • Comments: #
  • Strings: not required to be quoted
  • Booleans: True or False
  • Lists (sequences): like arrays in JSON; use hyphens
  • Dictionaries (mappings): like objects in JSON

Ansible Playbook – Explanation :

  • hosts: lists the host or host group against which we want to run the tasks
  • remote_user: tells Ansible to connect as a particular user
  • tasks: the list of actions you want to perform
  • The name parameter describes what the task is doing
  • Modules (such as yum and service) have their own set of parameters

Example:

  • yum – the state parameter with the latest value indicates that the latest httpd package should be installed (yum install httpd)
  • service – the state parameter with the started value indicates that the httpd service should be started (service httpd start)
  • The enabled parameter defines whether the service should start at boot (chkconfig httpd on)
  • become: True tells Ansible that the tasks should be executed with sudo access. If the user is not in sudoers, it will throw an error (sudo su -)
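Putting the pieces above together, a sketch of such a playbook might look like this (the group name web and user ansible are assumptions):

```yaml
---
# Install the latest httpd, start it, and enable it at boot
- hosts: web
  remote_user: ansible
  become: true
  tasks:
    - name: Install the latest httpd package
      yum:
        name: httpd
        state: latest

    - name: Start httpd and enable it at boot
      service:
        name: httpd
        state: started
        enabled: true
```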

Ansible Playbook Execution :

Save the file in any text editor or IDE with the extension *.yml or *.yaml.

Command to execute an Ansible playbook: ansible-playbook -i inventory httpd_install.yml

If groups are present, you can limit the run to a group: ansible-playbook -i inventory --limit web httpd_install.yml

To check syntax: ansible-playbook -i inventory httpd_install.yml --syntax-check

Verbose output (add more v's for more detail):

ansible-playbook -i inventory httpd_install.yml -v

ansible-playbook -i inventory httpd_install.yml -vv

ansible-playbook -i inventory httpd_install.yml -vvv

Ansible parses the playbook in a top-to-bottom order.

Below is the order followed in parsing a playbook

  • Variables are loaded
  • Facts are gathered
  • Pre_tasks are performed
  • Handlers
  • Role execution
  • Task execution
  • Handlers
  • Post_tasks

Execution strategies – Linear & Free

Ansible Variables:

Variables can be defined in the playbooks.

Variables can also be declared in a separate *.yml file and can be referenced in the playbook

Variables can also be passed on the command line using the -e option

Use register to capture the output of any module into a variable

Variables can be defined at the host level or group level in the inventory file.
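The variable techniques above can be combined in one short play. A hypothetical sketch (the path, play name and variable names are illustrative):

```yaml
---
# Playbook-level vars plus register to capture a command's output
- hosts: db
  vars:
    oracle_home: /u01/app/oracle/product/12.1.0/dbhome_1
  tasks:
    - name: Capture the SQL*Plus version banner
      command: "{{ oracle_home }}/bin/sqlplus -V"
      register: sqlplus_version

    - name: Show the captured output
      debug:
        var: sqlplus_version.stdout
```

At run time, -e "oracle_home=/u01/app/oracle" on the ansible-playbook command line would override the playbook's value.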

Summary :

  • Ansible is a simple IT automation tool
  • Inventory is the collection of hosts where the automation will be executed
  • Playbooks are the heart of Ansible; they are a combination of multiple tasks
  • Tasks are the actions performed on remote hosts
  • Roles are collections of tasks, files, variables and templates used to perform specific configuration tasks

References:-

https://www.doag.org/formes/pubfiles/7375105/2015-K-INF-Frits_Hoogland-Automating__DBA__tasks_with_Ansible-Praesentation.pdf


Step by Step Adding Node In Oracle RAC (12c Release 1) Environment

Steps for adding node in Oracle RAC (12c Release 1) environment :

To add a node to an existing RAC environment, we first need a running Oracle RAC setup. Follow the link Steps for Oracle RAC 12cR1 Installation for a two-node RAC installation.

Existing /etc/hosts file for Two-Node RAC Setup :-

#Public
192.168.12.128 racpb1.localdomain.com racpb1
192.168.12.129 racpb2.localdomain.com racpb2

#Private
192.168.79.128 racpv1.localdomain.com racpv1
192.168.79.129 racpv2.localdomain.com racpv2

#Virtual
192.168.12.130 racvr1.localdomain.com racvr1
192.168.12.131 racvr2.localdomain.com racvr2

#Scan
#192.168.12.140 racsn.localdomain.com racsn
#192.168.12.150 racsn.localdomain.com racsn
#192.168.12.160 racsn.localdomain.com racsn

Add the new node's entries to the /etc/hosts file on all nodes:

#Public
192.168.12.128 racpb1.localdomain.com racpb1
192.168.12.129 racpb2.localdomain.com racpb2
192.168.12.127 racpb3.localdomain.com racpb3


#Private
192.168.79.128 racpv1.localdomain.com racpv1
192.168.79.129 racpv2.localdomain.com racpv2
192.168.79.127 racpv3.localdomain.com racpv3


#Virtual
192.168.12.130 racvr1.localdomain.com racvr1
192.168.12.131 racvr2.localdomain.com racvr2
192.168.12.132 racvr3.localdomain.com racvr3

#Scan
#192.168.12.140 racsn.localdomain.com racsn
#192.168.12.150 racsn.localdomain.com racsn
#192.168.12.160 racsn.localdomain.com racsn

Create groups and the oracle user on the new node with the same group and user IDs as on the existing nodes:

Groups: oinstall (primary group), dba (secondary group)

#groupadd -g 54321 oinstall
#groupadd -g 54322 dba
#useradd -u 54323 -g oinstall -G dba oracle
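Before proceeding, it is worth sanity-checking that the IDs on the new node match. A minimal sketch; the getent-style records below are hard-coded samples matching the commands above, and on a real node you would substitute the output of `getent passwd oracle` and `getent group oinstall dba`:

```shell
# Sample records; on a real node replace with, e.g.:
#   passwd_entry=$(getent passwd oracle)
#   oinstall_entry=$(getent group oinstall)
#   dba_entry=$(getent group dba)
passwd_entry="oracle:x:54323:54321::/home/oracle:/bin/bash"
oinstall_entry="oinstall:x:54321:oracle"
dba_entry="dba:x:54322:oracle"

# Field 3 of a passwd record is the UID, field 4 the primary GID;
# field 3 of a group record is the GID.
uid=$(echo "$passwd_entry" | cut -d: -f3)
gid=$(echo "$passwd_entry" | cut -d: -f4)
oinstall_gid=$(echo "$oinstall_entry" | cut -d: -f3)
dba_gid=$(echo "$dba_entry" | cut -d: -f3)

[ "$uid" = "54323" ] && echo "oracle uid matches"
[ "$gid" = "$oinstall_gid" ] && echo "primary group is oinstall"
[ "$dba_gid" = "54322" ] && echo "dba gid matches"
```

Any mismatch here should be fixed before the cluster add-node step, since grid installation expects identical IDs on every node.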

ASM library Installation and Configuration :

[root@racpb3 Desktop]# rpm -Uvh oracleasmlib-2.0.4-1.el6.x86_64.rpm --nodeps --force
[root@racpb3 Desktop]# rpm -Uvh oracleasm-support-2.1.8-1.el6.x86_64.rpm --nodeps --force

Configure and check ASM disks:

[root@racpb3 Panasonic DBA]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@racpb3 Panasonic DBA]# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size 
Mounting ASMlib driver filesystem: /dev/oracleasm
[root@racpb3 Panasonic DBA]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "DATA"
[root@racpb3 Panasonic DBA]# oracleasm listdisks
DATA

Configure SSH for oracle user on all nodes :

Copy the sshUserSetup.sh script to the new node (racpb3) and execute it.

[root@racpb1 deinstall]# cd /u01/app/12.1.0/grid/deinstall

[root@racpb1 deinstall]# scp sshUserSetup.sh oracle@racpb3:/home/oracle
oracle@racpb3's password: 
sshUserSetup.sh 100% 32KB 31.6KB/s 00:00

Run sshUserSetup.sh

[oracle@racpb3 ~]$ sh sshUserSetup.sh -hosts "racpb3" -user oracle
The output of this script is also logged into /tmp/sshUserSetup_2018-12-27-03-51-12.log
Hosts are racpb3
user is oracle
Platform:- Linux 
Checking if the remote hosts are reachable
PING racpb3.localdomain.com (192.168.12.127) 56(84) bytes of data.
64 bytes from racpb3.localdomain.com (192.168.12.127): icmp_seq=1 ttl=64 time=0.034 ms
64 bytes from racpb3.localdomain.com (192.168.12.127): icmp_seq=2 ttl=64 time=0.032 ms
64 bytes from racpb3.localdomain.com (192.168.12.127): icmp_seq=3 ttl=64 time=0.045 ms
64 bytes from racpb3.localdomain.com (192.168.12.127): icmp_seq=4 ttl=64 time=0.046 ms
64 bytes from racpb3.localdomain.com (192.168.12.127): icmp_seq=5 ttl=64 time=0.075 ms

--- racpb3.localdomain.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 3999ms
rtt min/avg/max/mdev = 0.032/0.046/0.075/0.016 ms
Remote host reachability check succeeded.
The following hosts are reachable: racpb3.
The following hosts are not reachable: .
All hosts are reachable. Proceeding further...
firsthost racpb3
numhosts 1
The script will setup SSH connectivity from the host racpb3.localdomain.com to all
the remote hosts. After the script is executed, the user can use SSH to run
commands on the remote hosts or copy files between this host racpb3.localdomain.com
and the remote hosts without being prompted for passwords or confirmations.

NOTE 1:
As part of the setup procedure, this script will use ssh and scp to copy
files between the local host and the remote hosts. Since the script does not
store passwords, you may be prompted for the passwords during the execution of
the script whenever ssh or scp is invoked.

NOTE 2:
AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY
AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEDGES TO THESE
directories.

Do you want to continue and let the script make the above mentioned changes (yes/no)?
yes

The user chose yes
Please specify if you want to specify a passphrase for the private key this script will create for the local host. Passphrase is used to encrypt the private key and makes SSH much more secure. Type 'yes' or 'no' and then press enter. In case you press 'yes', you would need to enter the passphrase whenever the script executes ssh or scp. 
The estimated number of times the user would be prompted for a passphrase is 2. In addition, if the private-public files are also newly created, the user would have to specify the passphrase on one additional occasion. 
Enter 'yes' or 'no'.
yes

The user chose yes
The files containing the client public and private keys already exist on the local host. The current private key may or may not have a passphrase associated with it. In case you remember the passphrase and do not want to re-run ssh-keygen, press 'no' and enter. If you press 'no', the script will not attempt to create any new public/private key pairs. If you press 'yes', the script will remove the old private/public key files existing and create new ones prompting the user to enter the passphrase. If you enter 'yes', any previous SSH user setups would be reset. If you press 'change', the script will associate a new passphrase with the old keys.
Press 'yes', 'no' or 'change'
yes
The user chose yes
Creating .ssh directory on local host, if not present already
Creating authorized_keys file on local host
Changing permissions on authorized_keys to 644 on local host
Creating known_hosts file on local host
Changing permissions on known_hosts to 644 on local host
Creating config file on local host
If a config file exists already at /home/oracle/.ssh/config, it would be backed up to /home/oracle/.ssh/config.backup.
Removing old private/public keys on local host
Running SSH keygen on local host
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Generating public/private rsa key pair.
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
2b:88:2f:d5:38:5d:51:6a:2d:1e:a6:e0:51:a2:7e:c7 oracle@racpb3.localdomain.com
The key's randomart image is:
+--[ RSA 1024]----+
| . . .. |
| . o .o |
| . o *.. |
| . . + =.o |
| . o+E.S |
| o+oo . |
| ..... . |
| .. . |
| .. |
+-----------------+
Creating .ssh directory and setting permissions on remote host racpb3
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR oracle. THIS IS AN SSH REQUIREMENT.
The script would create ~oracle/.ssh/config file on remote host racpb3. If a config file exists already at ~oracle/.ssh/config, it would be backed up to ~oracle/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host racpb3.
Warning: Permanently added 'racpb3,192.168.12.127' (RSA) to the list of known hosts.
oracle@racpb3's password: 
Done with creating .ssh directory and setting permissions on remote host racpb3.
Copying local host public key to the remote host racpb3
The user may be prompted for a password or passphrase here since the script would be using SCP for host racpb3.
oracle@racpb3's password: 
Done copying local host public key to the remote host racpb3
The script will run SSH on the remote machine racpb3. The user may be prompted for a passphrase here in case the private key has been encrypted with a passphrase.
Agent admitted failure to sign using the key.
oracle@racpb3's password: 
cat: /home/oracle/.ssh/known_hosts.tmp: No such file or directory
cat: /home/oracle/.ssh/authorized_keys.tmp: No such file or directory
SSH setup is complete.

------------------------------------------------------------------------
Verifying SSH setup
===================
The script will now run the date command on the remote nodes using ssh
to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP,
THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR
PASSWORDS. If you see any output other than date or are prompted for the
password, ssh is not setup correctly and you will need to resolve the
issue and set up ssh again.
The possible causes for failure could be:
1. The server settings in /etc/ssh/sshd_config file do not allow ssh
for user oracle.
2. The server may have disabled public key based authentication.
3. The client public key on the server may be outdated.
4. ~oracle or ~oracle/.ssh on the remote host may not be owned by oracle.
5. User may not have passed -shared option for shared remote users or
may be passing the -shared option for non-shared remote users.
6. If there is output in addition to the date, but no password is asked,
it may be a security alert shown as part of company policy. Append the
additional text to the <OMS HOME>/sysman/prov/resources/ignoreMessages.txt file.
------------------------------------------------------------------------
--racpb3:--
Running /usr/bin/ssh -x -l oracle racpb3 date to verify SSH connectivity has been setup from local host to racpb3.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
The script will run SSH on the remote machine racpb3. The user may be prompted for a passphrase here in case the private key has been encrypted with a passphrase.
Agent admitted failure to sign using the key.
oracle@racpb3's password: 
Thu Dec 27 03:52:59 IST 2018
-----------------------------------------------------------------------
SSH verification complete.

Copy the authorized_keys file to all nodes in the cluster.

[oracle@racpb3 .ssh]$ scp authorized_keys oracle@racpb1:/home/oracle/
oracle@racpb1's password:
authorized_keys 100% 478 0.5KB/s 00:00

[oracle@racpb1 ~]$ cat authorized_keys >> .ssh/authorized_keys
[oracle@racpb1 ~]$ cat .ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAqucDimj845F+iE2cWRtVf4qBP/YYqtMcUgpuORdlWuEsRN3wygAlrLszJ9h3gzlIfORUYGLT01A4lj0ZmQtxxfNjKW74feK25ieYkeQUsADLNPvmsdXwpNSCZ4IerLpp74sm0mzFdAZC8o2hAPhvJwiCU85naxTDo/NSNGDMOf6eCRAE8fSb4rICrC+FNdC+TlagyhM+K1Jxt2MmFpKgauzjCpQcGqkCo6DsD59nppf7fAXUUovL7Ykh1AVufYdEhFGFS6lffhV90qrsHEmOKVodek8p16I9lemeJRNaXdM1QT4UcmBLlC+qWF6WMmh9PYMmq3+3cUca74G1U6gF+w== oracle@racpb1.localdomain.com
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA43diGL6I8oEnOa+WQc0gvIj0KkaNYIT06UwqvWhyfibwCUATBdj0aSQiSIGmiy95+wDiyfWJDKFAR60Bb8ZG5UzgP/XPhoZKcJKYxVMtX2zppeVQjoyXR2mwyElcT5xLR/PNhUMnDHbWPPp9kK6flyMGrpYjxbwh55FzC6MQ/jw19u9VVLDsNtt4q8Zv/LZF7jwwPAn4YXT2WFVnY6Td709C05RD7GVRA35wsVCXiAoQbl5EsQ6/4Hdz9IKEcDSDcD6EnGhaLARnSy2ose1CL/Zk/5/iyMldhKxA8m26ZuVu7G1bZqKIbnUfUWnyx48opSbANLn2fTzPaIIO2Cwd1w== oracle@racpb2.localdomain.com
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA0MWOu3g/Pfw729Fn7ruHif5eJxQDTb6km1SbeUfIZTRPrpA62e9fu6TVDrmVupAqlrswKJU2HueSPk7uidgS2zbLC9BsrBx2O/P/GBO+MgIYVjpzWd0uCJ9yjCAD0ciWosdBjafxVNsO/hZ08Wqc49BqJ9fZV8IbOD9xnYQOJls= oracle@racpb3.localdomain.com

[oracle@racpb1 .ssh]$ scp authorized_keys racpb2:/home/oracle/.ssh/
authorized_keys 100% 1300 1.3KB/s 00:00

[oracle@racpb1 .ssh]$ scp authorized_keys racpb3:/home/oracle/.ssh/

authorized_keys 100% 1300 1.3KB/s 00:00

Check Time Synchronization:

This also verifies that passwordless SSH is working correctly. Run the commands below from all nodes.

Example for 1st node :

[oracle@racpb1 .ssh]$ ssh racpb1 date
Thu Dec 27 04:08:16 IST 2018
[oracle@racpb1 .ssh]$ ssh racpb2 date
Thu Dec 27 04:08:19 IST 2018
[oracle@racpb1 .ssh]$ ssh racpb3 date
Thu Dec 27 04:08:23 IST 2018

Verify with the Cluster Verification Utility (cluvfy):-

[oracle@racpb1 bin]$ ./cluvfy comp peer -n racpb3 -refnode racpb1 -r 11gr2

Verifying peer compatibility

Checking peer compatibility...

Compatibility check: Physical memory [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 7.8085GB (8187808.0KB) 7.8085GB (8187808.0KB) matched
Physical memory <null>

Compatibility check: Available memory [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 7.80856GB (8187808.0KB) 7.8085GB (8187808.0KB) matched
Available memory <null>

Compatibility check: Swap space [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 5.8594GB (6143996.0KB) 5.8594GB (6143996.0KB) matched
Swap space <null>

Compatibility check: Free disk space for "/u01/app/12.1.0/grid" [reference node: racpb1]storage
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 35.5566GB (3.728384E7KB) 28.9248GB (3.0329856E7KB) matched
Free disk space <null>

Compatibility check: Free disk space for "/tmp" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 6.9678GB (7306240.0KB) 8.1494GB (8545280.0KB) matched
Free disk space <null>

Compatibility check: User existence for "oracle" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 oracle(54321) oracle(54321) matched
User existence for "oracle" check passed

Compatibility check: Group existence for "oinstall" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 oinstall(54321) oinstall(54321) matched
Group existence for "oinstall" check passed

Compatibility check: Group existence for "dba" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 dba(54322) dba(54322) matched
Group existence for "dba" check passed

Compatibility check: Group membership for "oracle" in "oinstall (Primary)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 yes yes matched
Group membership for "oracle" in "oinstall (Primary)" check passed

Compatibility check: Group membership for "oracle" in "dba" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 yes yes matched
Group membership for "oracle" in "dba" check passed

Compatibility check: Run level [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 5 5 matched
Run level check passed

Compatibility check: System architecture [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 x86_64 x86_64 matched
System architecture check passed

Compatibility check: Kernel version [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 2.6.39-400.17.1.el6uek.x86_64 2.6.39-400.17.1.el6uek.x86_64 matched
Kernel version check passed

Compatibility check: Kernel param "semmsl" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 256 256 matched
Kernel param "semmsl" check passed

Compatibility check: Kernel param "semmns" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 32000 32000 matched
Kernel param "semmns" check passed

Compatibility check: Kernel param "semopm" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 100 100 matched
Kernel param "semopm" check passed

Compatibility check: Kernel param "semmni" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 142 142 matched
Kernel param "semmni" check passed

Compatibility check: Kernel param "shmmax" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 4294967295 4294967295 matched
Kernel param "shmmax" check passed

Compatibility check: Kernel param "shmmni" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 4096 4096 matched
Kernel param "shmmni" check passed

Compatibility check: Kernel param "shmall" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 0 0 matched
Kernel param "shmall" check passed

Compatibility check: Kernel param "file-max" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 6815744 6815744 matched
Kernel param "file-max" check passed

Compatibility check: Kernel param "ip_local_port_range" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 9000 65500 9000 65500 matched
Kernel param "ip_local_port_range" check passed

Compatibility check: Kernel param "rmem_default" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 4194304 4194304 matched
Kernel param "rmem_default" check passed

Compatibility check: Kernel param "rmem_max" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 4194304 4194304 matched
Kernel param "rmem_max" check passed

Compatibility check: Kernel param "wmem_default" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 262144 262144 matched
Kernel param "wmem_default" check passed

Compatibility check: Kernel param "wmem_max" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 1048576 1048576 matched
Kernel param "wmem_max" check passed

Compatibility check: Kernel param "aio-max-nr" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 1048576 1048576 matched
Kernel param "aio-max-nr" check passed

Compatibility check: Package existence for "binutils" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 binutils-2.20.51.0.2-5.36.el6 binutils-2.20.51.0.2-5.36.el6 matched
Package existence for "binutils" check passed

Compatibility check: Package existence for "compat-libcap1" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 compat-libcap1-1.10-1 compat-libcap1-1.10-1 matched
Package existence for "compat-libcap1" check passed

Compatibility check: Package existence for "compat-libstdc++-33 (x86_64)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 compat-libstdc++-33-3.2.3-69.el6 (x86_64) compat-libstdc++-33-3.2.3-69.el6 (x86_64) matched
Package existence for "compat-libstdc++-33 (x86_64)" check passed

Compatibility check: Package existence for "libgcc (x86_64)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 libgcc-4.4.7-3.el6 (x86_64),libgcc-4.4.7-3.el6 (i686) libgcc-4.4.7-3.el6 (x86_64),libgcc-4.4.7-3.el6 (i686) matched
Package existence for "libgcc (x86_64)" check passed

Compatibility check: Package existence for "libstdc++ (x86_64)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 libstdc++-4.4.7-3.el6 (x86_64) libstdc++-4.4.7-3.el6 (x86_64) matched
Package existence for "libstdc++ (x86_64)" check passed

Compatibility check: Package existence for "libstdc++-devel (x86_64)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 libstdc++-devel-4.4.7-3.el6 (x86_64) libstdc++-devel-4.4.7-3.el6 (x86_64) matched
Package existence for "libstdc++-devel (x86_64)" check passed

Compatibility check: Package existence for "sysstat" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 sysstat-9.0.4-20.el6 sysstat-9.0.4-20.el6 matched
Package existence for "sysstat" check passed

Compatibility check: Package existence for "gcc" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 gcc-4.4.7-3.el6 gcc-4.4.7-3.el6 matched
Package existence for "gcc" check passed

Compatibility check: Package existence for "gcc-c++" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 gcc-c++-4.4.7-3.el6 gcc-c++-4.4.7-3.el6 matched
Package existence for "gcc-c++" check passed

Compatibility check: Package existence for "ksh" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 ksh-20100621-19.el6 ksh-20100621-19.el6 matched
Package existence for "ksh" check passed

Compatibility check: Package existence for "make" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 make-3.81-20.el6 make-3.81-20.el6 matched
Package existence for "make" check passed

Compatibility check: Package existence for "glibc (x86_64)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 glibc-2.12-1.107.el6 (x86_64),glibc-2.12-1.107.el6 (i686) glibc-2.12-1.107.el6 (x86_64),glibc-2.12-1.107.el6 (i686) matched
Package existence for "glibc (x86_64)" check passed

Compatibility check: Package existence for "glibc-devel (x86_64)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 glibc-devel-2.12-1.107.el6 (x86_64) glibc-devel-2.12-1.107.el6 (x86_64) matched
Package existence for "glibc-devel (x86_64)" check passed

Compatibility check: Package existence for "libaio (x86_64)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 libaio-0.3.107-10.el6 (x86_64) libaio-0.3.107-10.el6 (x86_64) matched
Package existence for "libaio (x86_64)" check passed

Compatibility check: Package existence for "libaio-devel (x86_64)" [reference node: racpb1]
Node Name Status Ref. node status Comment
------------ ------------------------ ------------------------ ----------
racpb3 libaio-devel-0.3.107-10.el6 (x86_64) libaio-devel-0.3.107-10.el6 (x86_64) matched
Package existence for "libaio-devel (x86_64)" check passed

Verification of peer compatibility was successful.
Checks passed for the following node(s):
racpb3

Verify new node pre-check :

[oracle@racpb1 bin]$ ./cluvfy stage -pre nodeadd -n racpb3 -fixup -verbose > /home/oracle/cluvfy_pre_nodeadd.txt

The node-addition pre-check above must pass before nodes can be added to the existing two-node RAC environment. I have attached the cluvfy_pre_nodeadd file here.
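Before moving on it can help to scan the saved report for anything that did not pass. A small sketch (the sample file below stands in for /home/oracle/cluvfy_pre_nodeadd.txt and is an assumption of this example):

```shell
#!/bin/sh
# Build a tiny stand-in for the saved cluvfy report, then count failures.
report=/tmp/cluvfy_pre_nodeadd.txt
printf '%s\n' \
  'Package existence for "gcc" check passed' \
  'Free disk space check failed' > "$report"

failures=$(grep -ci 'failed' "$report")
echo "failed checks: $failures"
```

On a real run, a non-zero count means the pre-check must be fixed before running addnode.sh.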

From the racpb1 node,

For GRID_HOME :

[oracle@racpb1 ~]$ . .bash_profile
[oracle@racpb1 ~]$ grid
[oracle@racpb1 ~]$ export IGNORE_PREADDNODE_CHECKS=Y
[oracle@racpb1 ~]$ cd $ORACLE_HOME/addnode

[oracle@racpb1 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={racpb3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={racvr3}"
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB. Actual 7957 MB Passed
Checking swap space: must be greater than 150 MB. Actual 5999 MB Passed
[WARNING] [INS-13014] Target environment does not meet some optional requirements.
CAUSE: Some of the optional prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/addNodeActions2018-12-27_05-25-06AM.log
ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/addNodeActions2018-12-27_05-25-06AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.

Prepare Configuration in progress.

Prepare Configuration successful.
.................................................. 8% Done.
You can find the log of this install session at:
/u01/app/oraInventory/logs/addNodeActions2018-12-27_05-25-06AM.log

Instantiate files in progress.

Instantiate files successful.
.................................................. 14% Done.

Copying files to node in progress.

Copying files to node successful.
.................................................. 73% Done.

Saving cluster inventory in progress.
.................................................. 80% Done.

Saving cluster inventory successful.
The Cluster Node Addition of /u01/app/12.1.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
.................................................. 88% Done.

As a root user, execute the following script(s):
1. /u01/app/oraInventory/orainstRoot.sh
2. /u01/app/12.1.0/grid/root.sh

Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes:
[racpb3]
Execute /u01/app/12.1.0/grid/root.sh on the following nodes:
[racpb3]

The scripts can be executed in parallel on all the nodes.

..........
Update Inventory in progress.
.................................................. 100% Done.

Update Inventory successful.
Successfully Setup Software.
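The installer output above notes that the two root scripts may run in parallel across nodes. A sketch of fanning them out over ssh (the script paths come from the installer message; the runner indirection and function name are illustrative only):

```shell
#!/bin/sh
# run_root_scripts RUNNER NODE... : run both root scripts on each node.
# A runner command is injected so the loop can be dry-run with `echo`
# instead of a real ssh.
run_root_scripts() {
  runner=$1; shift
  for node in "$@"; do
    "$runner" root@"$node" \
      "sh /u01/app/oraInventory/orainstRoot.sh && sh /u01/app/12.1.0/grid/root.sh"
  done
}

# Dry run: print the command that would be sent to racpb3.
run_root_scripts echo racpb3
```

Replace `echo` with `ssh` to execute for real; with several new nodes, each ssh could be backgrounded to get true parallelism.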

As the root user, execute orainstRoot.sh and root.sh on racpb3:

[root@racpb3 ]# sh orainstRoot.sh 
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

[root@racpb3 grid]# sh root.sh
Check /u01/app/12.1.0/grid/install/root_racpb3.localdomain.com_2018-12-27_21-52-22.log for the output of root script

I have attached the root script output log here.

Check Clusterware status :-

[root@racpb3 bin]# ./crsctl check cluster -all
**************************************************************
racpb1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racpb2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racpb3:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@racpb3 bin]# ./crs_stat -t -v
Name Type R/RA F/FT Target State Host 
----------------------------------------------------------------------
ora.DATA.dg ora....up.type 0/5 0/ ONLINE ONLINE racpb1 
ora....ER.lsnr ora....er.type 0/5 0/ ONLINE ONLINE racpb1 
ora....N1.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racpb1 
ora....N2.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racpb3 
ora....N3.lsnr ora....er.type 0/5 0/0 ONLINE ONLINE racpb2 
ora.MGMTLSNR ora....nr.type 0/0 0/0 ONLINE ONLINE racpb2 
ora.asm ora.asm.type 0/5 0/ ONLINE ONLINE racpb1 
ora.cvu ora.cvu.type 0/5 0/0 ONLINE ONLINE racpb2 
ora.mgmtdb ora....db.type 0/2 0/1 ONLINE ONLINE racpb2 
ora....network ora....rk.type 0/5 0/ ONLINE ONLINE racpb1 
ora.oc4j ora.oc4j.type 0/1 0/2 ONLINE ONLINE racpb2 
ora.ons ora.ons.type 0/3 0/ ONLINE ONLINE racpb1 
ora.orcl11g.db ora....se.type 0/2 0/1 ONLINE ONLINE 
ora....SM1.asm application 0/5 0/0 ONLINE ONLINE racpb1 
ora....B1.lsnr application 0/5 0/0 ONLINE ONLINE racpb1 
ora.racpb1.ons application 0/3 0/0 ONLINE ONLINE racpb1 
ora.racpb1.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racpb1 
ora....SM2.asm application 0/5 0/0 ONLINE ONLINE racpb2 
ora....B2.lsnr application 0/5 0/0 ONLINE ONLINE racpb2 
ora.racpb2.ons application 0/3 0/0 ONLINE ONLINE racpb2 
ora.racpb2.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racpb2 
ora....SM3.asm application 0/5 0/0 ONLINE ONLINE racpb3 
ora....B3.lsnr application 0/5 0/0 ONLINE ONLINE racpb3 
ora.racpb3.ons application 0/3 0/0 ONLINE ONLINE racpb3 
ora.racpb3.vip ora....t1.type 0/0 0/0 ONLINE ONLINE racpb3 
ora.scan1.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racpb1 
ora.scan2.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racpb3 
ora.scan3.vip ora....ip.type 0/0 0/0 ONLINE ONLINE racpb2

 

For ORACLE_HOME :

[oracle@racpb1 addnode]$ export ORACLE_SID=orcl11g
[oracle@racpb1 addnode]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
[oracle@racpb1 ~]$ cd $ORACLE_HOME/addnode

[oracle@racpb1 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={racpb3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={racvr3}"
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB. Actual 7937 MB Passed
Checking swap space: must be greater than 150 MB. Actual 5999 MB Passed


Prepare Configuration in progress.

Prepare Configuration successful.
.................................................. 8% Done.
You can find the log of this install session at:
/u01/app/oraInventory/logs/addNodeActions2018-12-28_12-34-24AM.log

Instantiate files in progress.

Instantiate files successful.
.................................................. 14% Done.

Copying files to node in progress.

Copying files to node successful.
.................................................. 73% Done.

Saving cluster inventory in progress.
SEVERE:Remote 'UpdateNodeList' failed on nodes: 'racpb2'. Refer to '/u01/app/oraInventory/logs/addNodeActions2018-12-28_12-34-24AM.log' for details.
It is recommended that the following command needs to be manually run on the failed nodes: 
/u01/app/oracle/product/12.1.0/db_1/oui/bin/runInstaller -updateNodeList -noClusterEnabled ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1 CLUSTER_NODES=racpb1,racpb2,racpb3 CRS=false "INVENTORY_LOCATION=/u01/app/oraInventory" LOCAL_NODE=<node on which command is to be run>. 
Please refer 'UpdateNodeList' logs under central inventory of remote nodes where failure occurred for more details.
.................................................. 80% Done.

Saving cluster inventory successful.
The Cluster Node Addition of /u01/app/oracle/product/12.1.0/db_1 was unsuccessful.
Please check '/tmp/silentInstall.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
.................................................. 88% Done.

As a root user, execute the following script(s):
1. /u01/app/oracle/product/12.1.0/db_1/root.sh

Execute /u01/app/oracle/product/12.1.0/db_1/root.sh on the following nodes: 
[racpb3]

..........
Update Inventory in progress.
.................................................. 100% Done.

Update Inventory successful.
Successfully Setup Software.

Run the command below on the failed node, racpb3:

[oracle@racpb3 db_1]$ /u01/app/oracle/product/12.1.0/db_1/oui/bin/runInstaller -updateNodeList -noClusterEnabled ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1 CLUSTER_NODES=racpb1,racpb2,racpb3 CRS=false "INVENTORY_LOCATION=/u01/app/oraInventory" LOCAL_NODE=3
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 5994 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.

Execute root.sh on the new node (racpb3) as the root user:

[root@racpb3 Desktop]# sh /u01/app/oracle/product/12.1.0/db_1/root.sh 
Check /u01/app/oracle/product/12.1.0/db_1/install/root_racpb3.localdomain.com_2018-12-28_00-57-10.log for the output of root script
[root@racpb3 Desktop]# tail -f /u01/app/oracle/product/12.1.0/db_1/install/root_racpb3.localdomain.com_2018-12-28_00-57-10.log
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/12.1.0/db_1
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

Set the environment on the new node (racpb3) :-

grid()
{
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=/u01/app/12.1.0/grid; export ORACLE_HOME
export ORACLE_SID=+ASM3
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
}

11g()
{
ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
export ORACLE_HOME
ORACLE_BASE=/u01/app/oracle
export ORACLE_BASE
ORACLE_SID=orcl11g3
export ORACLE_SID
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib:.
export LD_LIBRARY_PATH
LIBPATH=$ORACLE_HOME/lib32:$ORACLE_HOME/lib:/usr/lib:/lib
export LIBPATH
TNS_ADMIN=${ORACLE_HOME}/network/admin
export TNS_ADMIN
PATH=$ORACLE_HOME/bin:$PATH:.
export PATH
}
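The grid() and 11g() functions above are per-home environment switchers, typically kept in the oracle user's ~/.bash_profile; calling one resets ORACLE_HOME, ORACLE_SID, and PATH for the current session. A minimal, self-contained illustration of the pattern (shortened function bodies; the real values are in the functions above):

```shell
#!/bin/sh
# Two switcher functions in the style of grid() and 11g() above.
grid_env() {
  ORACLE_HOME=/u01/app/12.1.0/grid; export ORACLE_HOME
  ORACLE_SID=+ASM3;                 export ORACLE_SID
  PATH=$ORACLE_HOME/bin:$PATH;      export PATH
}
db_env() {
  ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1; export ORACLE_HOME
  ORACLE_SID=orcl11g3;                             export ORACLE_SID
  PATH=$ORACLE_HOME/bin:$PATH;                     export PATH
}

# Switch the session to the database home.
db_env
echo "SID is now $ORACLE_SID"
```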

Check the database status and instances :-

[oracle@racpb3 ~]$ srvctl status database -d orcl11g
Instance orcl11g1 is running on node racpb1
Instance orcl11g2 is running on node racpb2

Only two instances are shown under Clusterware. Add the new instance using DBCA.

Adding Instance to Cluster Database :

Invoke dbca from node 1 (racpb1) :

[oracle@racpb1 ~]$ . .bash_profile 
[oracle@racpb1 ~]$ 11g
[oracle@racpb1 ~]$ dbca
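DBCA also offers a silent mode for adding an instance. The command below is only a sketch assembled from this article's names (racpb3, orcl11g, orcl11g3); the -addInstance flag names are assumptions to verify against `dbca -help` for your release:

```shell
#!/bin/sh
# Sketch only: assemble (but do not run) a silent-mode equivalent of the
# DBCA wizard steps. Verify flag names with `dbca -help` before using.
node=racpb3
instance=orcl11g3
cmd="dbca -silent -addInstance -nodeList $node -gdbName orcl11g -instanceName $instance -sysDBAUserName sys"
echo "$cmd"
```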

Check Database status and configuration :

[oracle@racpb3 ~]$ srvctl status database -d orcl11g
Instance orcl11g1 is running on node racpb1
Instance orcl11g2 is running on node racpb2
Instance orcl11g3 is running on node racpb3


[oracle@racpb3 ~]$ srvctl config database -d orcl11g
Database unique name: orcl11g
Database name: orcl11g
Oracle home: /u01/app/oracle/product/12.1.0/db_1
Oracle user: oracle
Spfile: +DATA/orcl11g/spfileorcl11g.ora
Password file: 
Domain: localdomain.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: 
Disk Groups: DATA
Mount point paths: 
Services: 
Type: RAC
Start concurrency: 
Stop concurrency: 
OSDBA group: dba
OSOPER group: oinstall
Database instances: orcl11g1,orcl11g2,orcl11g3
Configured nodes: racpb1,racpb2,racpb3
Database is administrator managed

 

Catch Me On:- Hariprasath Rajaram

Telegram: https://t.me/joinchat/I_f4DkeGfZsxfzXxHD6gTg
LinkedIn: https://www.linkedin.com/in/hari-prasath-aa65bb19/
Facebook: https://www.facebook.com/HariPrasathdba
FB Group: https://www.facebook.com/groups/894402327369506/
FB Page: https://www.facebook.com/dbahariprasath/?
Twitter: https://twitter.com/hariprasathdba

Scaling Down Autonomous Data Warehouse Database

Remove CPU or Storage Resources

Describes how to scale your Autonomous Data Warehouse on demand by removing CPU cores or storage (TB).

Sign in to your Oracle Cloud Account at cloud.oracle.com.

From the Oracle Cloud Infrastructure page choose your region and compartment.

Select an Autonomous Data Warehouse instance from the links under the Name column.

From the Details page, click Scale Up/Down. On the Scale Up/Down page, select the change in resources for your scale request:
Click the down arrow to select a value for CPU Core Count. The default is no change.
Click the down arrow to select a value for Storage (TB). The default is no change.

Click Update to change your resources.

You have successfully scaled down your Autonomous Data Warehouse database on demand by removing CPU cores.
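The console steps above can also be scripted. The command below is only a sketch: the `oci db autonomous-database update` verb and its `--cpu-core-count` parameter are assumptions to check against your installed OCI CLI version, and <ADB_OCID> is a placeholder for the instance's OCID:

```shell
#!/bin/sh
# Sketch only: assemble (but do not run) the CLI form of the scale request.
# The verb and flag names are assumptions; check `oci db --help` first.
ocid="<ADB_OCID>"
cmd="oci db autonomous-database update --autonomous-database-id $ocid --cpu-core-count 1"
echo "$cmd"
```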

 


 

Scaling up Autonomous Data Warehouse Database

 

Add CPU or Storage Resources

Description:- 

Describes how to scale your Autonomous Data Warehouse on demand by adding CPU cores or storage (TB).

Sign in to your Oracle Cloud Account at cloud.oracle.com.

Open the Oracle Cloud Infrastructure service console from My Services, either through the Services menu (navigation icon) or the Services Dashboard tile.

From the Oracle Cloud Infrastructure page choose your region and compartment.

Select an Autonomous Data Warehouse instance from the links under the Name column.

From the Details page click Scale Up/Down.

On the Scale Up/Down prompt, select the change in resources for your scale request.
Click the up arrow to select a value for CPU Core Count. The default is no change.
Click the up arrow to select a value for Storage (TB). The default is no change.

Click Update to change your resources.

You have successfully scaled up your Autonomous Data Warehouse database on demand by adding CPU cores.


 

Starting and Stopping Autonomous Data Warehouse Database

Description:-

This article describes starting and stopping an Autonomous Data Warehouse instance step by step.

Start Autonomous Data Warehouse

Describes the steps to start an Autonomous Data Warehouse instance.

Sign in to your Oracle Cloud Account at cloud.oracle.com.

From the Oracle Cloud Infrastructure page choose your region and compartment.

Select an Autonomous Data Warehouse instance from the links under the Name column.

On the Details page, from the Actions drop-down list, select Start.
Start is only shown for a stopped instance.

Click Start to confirm.

The database start is in progress...

 

The Autonomous Data Warehouse instance started successfully.

When an Autonomous Data Warehouse instance is started, Autonomous Data Warehouse CPU billing is initiated based on full-hour cycles of usage.

Stop Autonomous Data Warehouse

Describes the steps to stop an Autonomous Data Warehouse instance.

Sign in to your Oracle Cloud Account at cloud.oracle.com.

From the Oracle Cloud Infrastructure page choose your region and compartment.

Select an Autonomous Data Warehouse instance from the links under the Name column.

On the Details page, from the Actions drop-down list, select Stop.
Click Stop to confirm.

The database stop is in progress...

The Autonomous Data Warehouse instance stopped successfully.

When an Autonomous Data Warehouse instance is stopped, the following details apply:

Tools are no longer able to connect to a stopped instance.
Autonomous Data Warehouse in-flight transactions and queries are stopped.
Autonomous Data Warehouse CPU billing is halted based on full-hour cycles of usage.

 


Step by Step Upgrade Oracle RAC Grid Infrastructure and Database from 11g to 12c

 

Upgrade RAC Grid and Database from 11.2.0.4 to 12.1.0.2 :-

Main steps :

Grid :-

  1.  Check all services are up and running from 11gR2 GRID_HOME
  2.  Perform backup of OCR, voting disk and Database.
  3.  Create new directory for installing 12C software on both RAC nodes.
  4.  Run “runcluvfy.sh” to check for errors.
  5.  Install and upgrade GRID from 11gR2 to 12cR1
  6. Verify upgrade version

Database  :-

  1. Backup the database before the upgrade
  2. Database upgrade Pre-check
    • Creating Stage for 12c database software
    • Creating directory for 12c oracle home
    • Check the pre-upgrade status.
  3. Unzip 12c database software in stage
  4. Install the 12.1.0.2 using the software only installation
  5. Run the pre-upgrade script (preupgrd.sql in 12.1) against the existing 11.2.0.4 database from the newly installed 12c home.
  6. Run the DBUA to start the database upgrade.
  7. Database post upgrade check.
  8. Check Database version.

Environment variables for 11g database :-

GRID :

grid()
{
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME
export ORACLE_SID=+ASM1
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
SQLPATH=/u01/app/oracle/scripts/sql:/u01/app/11.2.0/grid/rdbms/admin:/u01/app/oracle/product/11.2.0/dbhome_1/rdbms/admin; export SQLPATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
}

DATABASE :

11g()
{
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_HOME
ORACLE_BASE=/u01/app/oracle
export ORACLE_BASE
ORACLE_SID=orcl11g1
export ORACLE_SID
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/lib:.
export LD_LIBRARY_PATH
LIBPATH=$ORACLE_HOME/lib32:$ORACLE_HOME/lib:/usr/lib:/lib
export LIBPATH
TNS_ADMIN=${ORACLE_HOME}/network/admin
export TNS_ADMIN
PATH=$ORACLE_HOME/bin:$PATH:.
export PATH
}

Upgrade GRID Infrastructure Software 12c :-

Check GRID Infrastructure software version and Clusterware status:

[oracle@racpb1 ~]$ grid
[oracle@racpb1 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.4.0]

[oracle@racpb1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

Verify all services are up and running from 11gR2 GRID Home :

[oracle@racpb1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE racpb1
ONLINE ONLINE racpb2
ora.LISTENER.lsnr
ONLINE ONLINE racpb1
ONLINE ONLINE racpb2
ora.asm
ONLINE ONLINE racpb1 Started
ONLINE ONLINE racpb2 Started
ora.gsd
OFFLINE OFFLINE racpb1
OFFLINE OFFLINE racpb2
ora.net1.network
ONLINE ONLINE racpb1
ONLINE ONLINE racpb2
ora.ons
ONLINE ONLINE racpb1
ONLINE ONLINE racpb2
ora.registry.acfs
ONLINE ONLINE racpb1
ONLINE ONLINE racpb2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE racpb2
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE racpb1
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE racpb1
ora.cvu
1 ONLINE ONLINE racpb1
ora.oc4j
1 ONLINE ONLINE racpb1
ora.orcl11g.db
1 ONLINE ONLINE racpb1 Open
2 ONLINE ONLINE racpb2 Open
ora.racpb1.vip
1 ONLINE ONLINE racpb1
ora.racpb2.vip
1 ONLINE ONLINE racpb2
ora.scan1.vip
1 ONLINE ONLINE racpb2
ora.scan2.vip
1 ONLINE ONLINE racpb1
ora.scan3.vip
1 ONLINE ONLINE racpb1

Check Database status and configuration :

[oracle@racpb1 ~]$ srvctl status database -d orcl11g
Instance orcl11g1 is running on node racpb1
Instance orcl11g2 is running on node racpb2

[oracle@racpb1 ~]$ srvctl config database -d orcl11g
Database unique name: orcl11g
Database name: orcl11g
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/orcl11g/spfileorcl11g.ora
Domain: localdomain.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl11g
Database instances: orcl11g1,orcl11g2
Disk Groups: DATA
Mount point paths:
Services:
Type: RAC
Database is administrator managed

Perform local backup of OCR :

[root@racpb1 ~]$ mkdir -p /u01/ocrbkp
[root@racpb1 ~]# cd /u01/app/11.2.0/grid/bin/
[root@racpb1 bin]# ./ocrconfig -export /u01/ocrbkp/ocrfile

Move the 12c GRID Software to the server and unzip the software :

[oracle@racpb1 12102_64bit]$ unzip -d /u01/ linuxamd64_12102_grid_1of2.zip
Archive:  linuxamd64_12102_grid_1of2.zip
   creating: /u01/grid/
.
.

[oracle@racpb1 12102_64bit]$ unzip -d /u01/ linuxamd64_12102_grid_2of2.zip
Archive:  linuxamd64_12102_grid_2of2.zip
   creating: /u01/grid/stage/Components/oracle.has.crs/
.
.

Run the cluvfy utility to pre-check for any errors :

Execute runcluvfy.sh from 12cR1 software location,

[oracle@racpb1 grid]$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/11.2.0/grid -dest_crshome /u01/zpp/12.1.0/grid -dest_version 12.1.0.2.0 -verbose

Make sure cluvfy completed successfully. If there are any errors, resolve them before starting the GRID 12cR1 upgrade. The cluvfy log is attached here.

Stop the running 11g database :

[oracle@racpb1 ~]$ ps -ef|grep pmon
oracle 3953 1 0 Dec22 ? 00:00:00 asm_pmon_+ASM1
oracle 4976 1 0 Dec22 ? 00:00:00 ora_pmon_orcl11g1
oracle 23634 4901 0 00:55 pts/0 00:00:00 grep pmon

[oracle@racpb1 ~]$ srvctl stop database -d orcl11g

[oracle@racpb1 ~]$ srvctl status database -d orcl11g
Instance orcl11g1 is not running on node racpb1
Instance orcl11g2 is not running on node racpb2

Take GRID_HOME backup on both nodes :

[oracle@racpb1 ~]$ grid
[oracle@racpb1 ~]$  tar -cvf grid_home_11g.tar $GRID_HOME
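After taking the tar backup it is worth listing the archive to confirm it is readable. A self-contained demo of the backup-and-verify pattern, using throwaway /tmp paths rather than the real GRID_HOME:

```shell
#!/bin/sh
# Demo with throwaway paths; substitute $GRID_HOME and a real backup
# location in practice.
mkdir -p /tmp/demo_grid_home/bin
echo demo > /tmp/demo_grid_home/bin/crsctl

tar -cf /tmp/grid_home_demo.tar -C /tmp demo_grid_home

# Listing the archive proves it was written and is readable.
tar -tf /tmp/grid_home_demo.tar | grep -q 'demo_grid_home/bin/crsctl' \
  && echo "archive verified"
```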

Check Clusterware services status before upgrade :

[oracle@racpb1 ~]$ crsctl check cluster -all
**************************************************************
racpb1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racpb2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

Start the 12cR1 upgrade by executing runInstaller :

[oracle@racpb1 ~]$ cd /u01/
[oracle@racpb1 u01]$ cd grid/

[oracle@racpb1 grid]$ ./runInstaller 
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 415 MB. Actual 8565 MB Passed
Checking swap space: must be greater than 150 MB. Actual 5996 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2018-12-23_01

Select the Upgrade option to upgrade Grid Infrastructure and ASM to 12c.

Check the public host names and the existing GRID_HOME.

Uncheck the EM Cloud Control option to disable EM.

Specify the ORACLE_BASE and ORACLE_HOME locations for 12c.

You can ignore the swap size warning; the installer expects swap to be twice the size of the server's memory.

 

Execute the rootupgrade.sh script on both nodes :

 First node (racpb1)  :-

[root@racpb1 bin]# sh /u01/app/12.1.0/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/12.1.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
2018/12/23 12:18:59 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2018/12/23 12:18:59 CLSRSC-4012: Shutting down Oracle Trace File Analyzer (TFA) Collector.
2018/12/23 12:19:08 CLSRSC-4013: Successfully shut down Oracle Trace File Analyzer (TFA) Collector.
2018/12/23 12:19:19 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2018/12/23 12:19:22 CLSRSC-464: Starting retrieval of the cluster configuration data
2018/12/23 12:19:30 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2018/12/23 12:19:30 CLSRSC-363: User ignored prerequisites during installation
2018/12/23 12:19:38 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2018/12/23 12:19:38 CLSRSC-482: Running command: '/u01/app/12.1.0/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/11.2.0/grid -oldCRSVersion 11.2.0.4.0 -nodeNumber 1 -firstNode true -startRolling true'

ASM configuration upgraded in local node successfully.

2018/12/23 12:19:45 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2018/12/23 12:19:45 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2018/12/23 12:20:36 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
OLR initialization - successful
2018/12/23 12:24:43 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2018/12/23 12:29:05 CLSRSC-472: Attempting to export the OCR
2018/12/23 12:29:06 CLSRSC-482: Running command: 'ocrconfig -upgrade oracle oinstall'
2018/12/23 12:29:23 CLSRSC-473: Successfully exported the OCR
2018/12/23 12:29:29 CLSRSC-486:
At this stage of upgrade, the OCR has changed.
Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2018/12/23 12:29:29 CLSRSC-541:
To downgrade the cluster:
1. All nodes that have been upgraded must be downgraded.

2018/12/23 12:29:30 CLSRSC-542:
2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.

2018/12/23 12:29:30 CLSRSC-543:
3. The downgrade command must be run on the node racpb1 with the '-lastnode' option to restore global configuration data.
2018/12/23 12:29:55 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2018/12/23 12:30:19 CLSRSC-474: Initiating upgrade of resource types
2018/12/23 12:31:12 CLSRSC-482: Running command: 'upgrade model -s 11.2.0.4.0 -d 12.1.0.2.0 -p first'
2018/12/23 12:31:12 CLSRSC-475: Upgrade of resource types successfully initiated.
2018/12/23 12:31:21 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Second node (racpb2)  :-

[root@racpb2 ~]# sh /u01/app/12.1.0/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/12.1.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
2018/12/23 12:34:35 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2018/12/23 12:35:15 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2018/12/23 12:35:17 CLSRSC-464: Starting retrieval of the cluster configuration data
2018/12/23 12:35:24 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2018/12/23 12:35:24 CLSRSC-363: User ignored prerequisites during installation
ASM configuration upgraded in local node successfully.
2018/12/23 12:35:41 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2018/12/23 12:36:10 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
OLR initialization - successful
2018/12/23 12:36:37 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2018/12/23 12:39:54 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Start upgrade invoked..
2018/12/23 12:40:21 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded

2018/12/23 12:40:21 CLSRSC-482: Running command: '/u01/app/12.1.0/grid/bin/crsctl set crs activeversion'

Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the OCR.
Started to upgrade the CSS.
The CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Successfully upgraded the Oracle Clusterware.
Oracle Clusterware operating version was successfully set to 12.1.0.2.0
2018/12/23 12:42:33 CLSRSC-479: Successfully set Oracle Clusterware active version

2018/12/23 12:42:39 CLSRSC-476: Finishing upgrade of resource types

2018/12/23 12:43:00 CLSRSC-482: Running command: 'upgrade model -s 11.2.0.4.0 -d 12.1.0.2.0 -p last'

2018/12/23 12:43:00 CLSRSC-477: Successfully completed upgrade of resource types

2018/12/23 12:43:34 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

After running the rootupgrade.sh script on both nodes, click the OK button in the installer.

Check the Clusterware upgrade version:

[root@racpb1 ~]# cd /u01/app/12.1.0/grid/bin/
[root@racpb1 bin]# ./crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]

Note: If you are upgrading from 11.2.0.1, 11.2.0.2, or 11.2.0.3 to 12cR1, you may need to apply additional patches before you proceed with the upgrade.

Start the 11g database :

[oracle@racpb1 ~]$ srvctl start database -d orcl11g
[oracle@racpb1 ~]$ srvctl status database -d orcl11g
Instance orcl11g1 is running on node racpb1
Instance orcl11g2 is running on node racpb2

Upgrade RAC database from 11gR2 to 12cR1 :-

Backup the database before the upgrade :

Take level zero backup or cold backup of database.
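A level-0 backup can be taken with RMAN, for example. This is only a sketch run against a live database; the backup tag is illustrative.

```shell
# Hedged sketch: take an RMAN incremental level 0 backup of the database
# (plus archived logs) before starting the upgrade. Requires a running
# instance and a configured backup destination.
rman target / <<'EOF'
BACKUP INCREMENTAL LEVEL 0 TAG 'PRE_12C_UPGRADE' DATABASE PLUS ARCHIVELOG;
EOF
```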

Database upgrade Pre-check :

  • Creating Stage for 12c database software.
[oracle@racpb1 ~]$ mkdir -p /u01/stage
[oracle@racpb1 ~]$ chmod -R 755 /u01/stage/
  • Creating directory for 12c ORACLE_HOME.
[oracle@racpb1 ~]$ mkdir -p /u01/app/oracle/product/12.1.0/db_1
[oracle@racpb1 ~]$ chown -R oracle:oinstall /u01/app/oracle/product/12.1.0/db_1
[oracle@racpb1 ~]$ chmod -R 775 /u01/app/oracle/product/12.1.0/db_1
  • Check the preupgrade status :

Run runcluvfy.sh from the grid stage location:

[oracle@racpb1 grid]$ ./runcluvfy.sh stage -pre dbinst -upgrade -src_dbhome /u01/app/oracle/product/11.2.0/dbhome_1 -dest_dbhome /u01/app/oracle/product/12.1.0/db_1 -dest_version 12.1.0.2.0

The above command must complete successfully before the database can be upgraded from 11gR2 to 12cR1.

Unzip the 12c database software into the stage directory:

[oracle@racpb1 12102_64bit]$ unzip -d /u01/stage/ linuxamd64_12102_database_1of2.zip

[oracle@racpb1 12102_64bit]$ unzip -d /u01/stage/ linuxamd64_12102_database_2of2.zip

Unset the 11g environment variables:

unset ORACLE_HOME
unset ORACLE_BASE
unset ORACLE_SID

Install the 12.1.0.2 software using a software-only installation :

Set the new 12c environment variables and execute runInstaller.

[oracle@racpb1 ~]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
[oracle@racpb1 ~]$ export ORACLE_BASE=/u01/app/oracle
[oracle@racpb1 ~]$ export ORACLE_SID=orcl12c
[oracle@racpb1 ~]$ 
[oracle@racpb1 ~]$ cd /u01/stage/database/
[oracle@racpb1 database]$ ./runInstaller 
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB. Actual 8533 MB Passed
Checking swap space: must be greater than 150 MB. Actual 5999 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2018-12-23_02-05-54PM. Please wait ...

Skip the security updates from Oracle Support.

Select RAC database installation.

After the 12c database software installation is done, run the script prompted by the installer on both nodes:

Run the preupgrd.sql script :

  • The pre-upgrade script identifies any prerequisite tasks that must be completed on the database before the upgrade.
  • Execute the preupgrd.sql script in the existing 11.2.0.4 database from the newly installed 12c ORACLE_HOME.
[oracle@racpb1 ~]$ . .bash_profile
[oracle@racpb1 ~]$ 11g
[oracle@racpb1 ~]$ cd /u01/app/oracle/product/12.1.0/db_1/rdbms/admin/
[oracle@racpb1 admin]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Mon Dec 24 03:35:26 2018

Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> @preupgrd.sql

Loading Pre-Upgrade Package...

***************************************************************************
Executing Pre-Upgrade Checks in ORCL11G...
***************************************************************************

====>> ERRORS FOUND for ORCL11G <<====

The following are *** ERROR LEVEL CONDITIONS *** that must be addressed
prior to attempting your upgrade.
Failure to do so will result in a failed upgrade.

You MUST resolve the above errors prior to upgrade

***************************************************************************

====>> PRE-UPGRADE RESULTS for ORCL11G <<====

ACTIONS REQUIRED:

1. Review results of the pre-upgrade checks:
/u01/app/oracle/cfgtoollogs/orcl11g/preupgrade/preupgrade.log

2. Execute in the SOURCE environment BEFORE upgrade:
/u01/app/oracle/cfgtoollogs/orcl11g/preupgrade/preupgrade_fixups.sql

3. Execute in the NEW environment AFTER upgrade:
/u01/app/oracle/cfgtoollogs/orcl11g/preupgrade/postupgrade_fixups.sql

***************************************************************************
Pre-Upgrade Checks in ORCL11G Completed.
***************************************************************************
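Following the actions listed in the report, the fixup scripts can be run from SQL*Plus. This is a sketch; the script paths are taken from the pre-upgrade output above and require a running instance with SYSDBA access.

```shell
# Run the pre-upgrade fixups in the SOURCE (11.2.0.4) database before
# starting DBUA.
sqlplus / as sysdba <<'EOF'
@/u01/app/oracle/cfgtoollogs/orcl11g/preupgrade/preupgrade_fixups.sql
EXIT
EOF
# After the upgrade, run the post-upgrade fixups from the new 12c home:
#   @/u01/app/oracle/cfgtoollogs/orcl11g/preupgrade/postupgrade_fixups.sql
```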

Run the DBUA to start the database upgrade :
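DBUA is launched from the new 12c home; a sketch of the invocation is shown below. A silent run is also possible, but the exact flags vary by release, so verify with `dbua -help` before relying on them.

```shell
# Launch DBUA (GUI) from the new 12c Oracle home.
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
$ORACLE_HOME/bin/dbua
# A silent run is also possible; flags below are illustrative -- check
# 'dbua -help' for your release:
#   $ORACLE_HOME/bin/dbua -silent -sid orcl11g1
```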

Check Database version and configuration :-

[oracle@racpb1 ~]$ srvctl config database -d orcl11g
Database unique name: orcl11g
Database name: orcl11g
Oracle home: /u01/app/oracle/product/12.1.0/db_1
Oracle user: oracle
Spfile: +DATA/orcl11g/spfileorcl11g.ora
Password file: 
Domain: localdomain.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: 
Disk Groups: DATA
Mount point paths: 
Services: 
Type: RAC
Start concurrency: 
Stop concurrency: 
OSDBA group: dba
OSOPER group: oinstall
Database instances: orcl11g1,orcl11g2
Configured nodes: racpb1,racpb2
Database is administrator managed

[oracle@racpb1 ~]$ srvctl status database -d orcl11g
Instance orcl11g1 is running on node racpb1
Instance orcl11g2 is running on node racpb2
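Once the instances are back up in the 12c home, the release can also be confirmed from SQL*Plus. A quick sketch, assuming the 12c environment is set:

```shell
# Confirm the database now reports release 12.1 after the upgrade.
sqlplus -s / as sysdba <<'EOF'
SELECT banner FROM v$version WHERE banner LIKE 'Oracle Database%';
EXIT
EOF
```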

The RAC database (Grid and DB) has been successfully upgraded from 11g to 12c.

Catch Me On:- Hariprasath Rajaram

Telegram:https://t.me/joinchat/I_f4DhGF_Zifr9YZvvMkRg
LinkedIn:https://www.linkedin.com/in/hari-prasath-aa65bb19/
Facebook:https://www.facebook.com/HariPrasathdba
FB Group:https://www.facebook.com/groups/894402327369506/
FB Page: https://www.facebook.com/dbahariprasath/?
Twitter: https://twitter.com/hariprasathdba

What Is Docker?

Docker is available in two editions:

  • Community Edition (CE)
  • Enterprise Edition (EE)

Docker Community Edition (CE) is ideal for individual developers and small teams looking to get started with Docker and experiment with container-based apps.

Docker Enterprise Edition (EE) is designed for enterprise development and IT teams who build, ship, and run business critical applications in production at scale.

Capabilities | Docker Engine – Community | Docker Engine – Enterprise | Docker Enterprise
Container engine and built-in orchestration, networking, security | yes | yes | yes
Certified infrastructure, plugins and ISV containers | | yes | yes
Image management | | | yes
Container app management | | | yes
Image security scanning | | | yes

Docker concepts

Docker is a platform for developers and sysadmins to develop, deploy, and run applications with containers. The use of Linux containers to deploy applications is called containerization. Containers are not new, but their use for easily deploying applications is.

Containerization is increasingly popular because containers are:

  • Flexible: Even the most complex applications can be containerized.
  • Lightweight: Containers leverage and share the host kernel.
  • Interchangeable: You can deploy updates and upgrades on-the-fly.
  • Portable: You can build locally, deploy to the cloud, and run anywhere.
  • Scalable: You can increase and automatically distribute container replicas.
  • Stackable: You can stack services vertically and on-the-fly.
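These properties are easy to see from the command line. A minimal first session might look like the following sketch; it requires Docker installed with a running daemon.

```shell
# Pull and run a throwaway container, then inspect local state -- a
# minimal illustration of how lightweight and portable containers are.
docker run --rm hello-world   # pull the image and run it once
docker images                 # images cached locally
docker ps -a                  # containers, running and exited
```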

Features of Docker

  • Docker reduces the development footprint by providing a smaller operating-system footprint via containers.
  • With containers, it becomes easier for teams across different units, such as development, QA, and operations, to work seamlessly across applications.
  • You can deploy Docker containers anywhere: on physical and virtual machines, and even in the cloud.
  • Since Docker containers are quite lightweight, they are very easily scalable.

 

Components of Docker

Docker has the following components

  • Docker for Mac − Allows you to run Docker containers on macOS.
  • Docker for Linux − Allows you to run Docker containers on Linux.
  • Docker for Windows − Allows you to run Docker containers on Windows.
  • Docker Engine − Used for building Docker images and creating Docker containers.
  • Docker Hub − The registry used to host various Docker images.
  • Docker Compose − Used to define applications that consist of multiple Docker containers.
The standard and traditional architecture of virtualization:

Virtualization

  • The server is the physical server that is used to host multiple virtual machines.
  • The Host OS is the base machine such as Linux or Windows.
  • The Hypervisor is either VMWare or Windows Hyper V that is used to host virtual machines.
  • You would then install multiple operating systems as virtual machines on top of the existing hypervisor as Guest OS.
  • You would then host your applications on top of each Guest OS.
The new generation of virtualization that is enabled via Dockers:

Various Layers

  • The server is the physical server that is used to host multiple virtual machines. So this layer remains the same.
  • The Host OS is the base machine such as Linux or Windows. So this layer remains the same.
  • The new layer is the Docker engine. Applications that previously ran in separate guest operating systems on virtual machines now run as Docker containers on this engine.
  • All of the Apps now run as Docker containers.

The clear advantage in this architecture is that you don’t need to have extra hardware for Guest OS. Everything works as Docker containers.

Docker Administration:

  • Docker Configuration — After installing and starting Docker, the dockerd daemon runs with its default configuration. This page gathers resources on how to customize the configuration, start the daemon manually, and troubleshoot and debug the daemon if you run into issues.
  • Collecting Docker Metrics — In order to get as much efficiency out of Docker as possible, we need to track Docker metrics. Monitoring metrics is also important for troubleshooting problems. This page gathers resources on how to collect Docker metrics with tools like Prometheus, Grafana, InfluxDB and more.
  • Starting and Restarting Docker Containers Automatically — Docker provides restart policies to control whether your containers start automatically when they exit, or when Docker restarts. Restart policies ensure that linked containers are started in the correct order. This page gathers resources about how to automatically start Docker containers on boot or after server crash.
  • Managing Container Resources — Resource management for Docker containers is a huge requirement for production users. It is necessary for running multiple containers on a single host efficiently and for ensuring that one container does not starve the others in terms of CPU, memory, I/O, or networking. This page gathers resources about how to improve Docker performance by managing its resources.
  • Controlling Docker With systemd — Systemd provides a standard process for controlling programs and processes on Linux hosts. One of the nice things about systemd is that it is a single command that can be used to manage almost all aspects of a process. This page gathers resources about how to use systemd with Docker daemon service.
  • Docker CLI Commands — There are a large number of Docker client CLI commands, which provide information relating to various Docker objects on a given Docker host or Swarm cluster. Generally, this output is provided in a tabular format. This page gathers resources about how the Docker CLI Work, CLI Tips and Tricks and basic Docker CLI commands.
  • Docker Logging — Logs tell the full story of what is happening, or what happened at every layer of the stack. Whether it’s the application layer, the networking layer, the infrastructure layer, or storage, logs have all the answers. This page gathers resources about working with Docker logs, how to manage and implement Docker logs and more.
  • Troubleshooting Docker Engine — Docker makes everything easier, but even with the easiest platforms you sometimes run into problems. This page gathers resources about how to diagnose and troubleshoot problems, send logs, and communicate with the Docker Engine.
  • Docker Orchestration – Tools and Options — To get the full benefit of Docker containers, you need software to move containers around in response to auto-scaling events, a failure of the backing host, and deployment updates. This is container orchestration. This page gathers resources about Docker orchestration tools, fundamentals and best practices.
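Several of the administration topics above map directly to CLI flags and commands. A sketch follows; the container and image names are illustrative, and a running Docker daemon is assumed.

```shell
# Restart policy: restart the container on failure, up to 3 attempts.
docker run -d --restart=on-failure:3 --name web nginx

# Resource limits: cap memory and CPU so one container cannot starve others.
docker run -d --memory=512m --cpus=1.5 --name worker busybox sleep 3600

# Logging and metrics:
docker logs web            # container stdout/stderr
docker stats --no-stream   # one-shot CPU/memory/IO snapshot per container
```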

Summary

A good summary of what Docker does is included in its very own motto: Build, Ship, Run.

  • Build – Docker allows you to compose your application from microservices, without worrying about inconsistencies between development and production environments, and without locking into any platform or language.
  • Ship – Docker lets you design the entire cycle of application development, testing, and distribution, and manage it with a consistent user interface.
  • Run – Docker offers you the ability to deploy scalable services securely and reliably on a wide variety of platforms.

 


What is Ansible?

Ansible is an open source automation platform. It is very simple to set up, yet powerful. Ansible can help you with configuration management, application deployment, and task automation.

Ansible is an open-source IT automation engine which can remove drudgery from your work life, and will also dramatically improve the scalability, consistency, and reliability of your IT environment.

It can also do IT orchestration, where you have to run tasks in sequence and create a chain of events which must happen on several different servers or devices.

Example: Suppose you have a group of web servers behind a load balancer. Ansible can upgrade the web servers one at a time; while upgrading each server, it can remove it from the load balancer and disable it in your Nagios monitoring system. In short, you can handle complex tasks with a tool that is easy to use.
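Ad-hoc commands give a feel for this style of control. In this sketch, the `webservers` inventory group, the `web1` host, and the `httpd` service name are all hypothetical, and Ansible is assumed to be installed with an inventory configured.

```shell
# Ping every host in the (hypothetical) webservers inventory group.
ansible webservers -m ping

# Restart the web service on one host at a time (here, just web1).
ansible webservers --limit web1 -m service -a "name=httpd state=restarted"
```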

What can Ansible automate?

Provisioning: Set up the various servers you need in your infrastructure.

Configuration management: Change the configuration of an application, OS, or device; start and stop services; install or update applications; implement a security policy; or perform a wide variety of other configuration tasks.

Application deployment: Make DevOps easier by automating the deployment of internally developed applications to your production systems.

Is Ansible Free?

Yes! Ansible is free to use and can be downloaded and installed from a number of sources. It will currently only run on a Linux or Mac machine that has Python installed; it will not run on Windows.

There are also paid products: Ansible Engine, a version of Ansible with full support from Red Hat, and Ansible Tower, a GUI front end for Ansible core. Ansible Tower is licensed on a per-node basis.

But if you just want to download Ansible and use it for your home or production use – it is free to use.

The Key Features of Ansible

Agentless

There is no software or agent to be installed on the client that communicates back to the server.

Idempotent

No matter how many times you call the operation, the result will be the same.

Simple and extensible

Ansible is written in Python and uses YAML for playbook language, both of which are considered relatively easy to learn.

What are Ansible roles?

Roles provide a framework for fully independent or interdependent collections of variables, tasks, files, templates, and modules. In Ansible, the role is the primary mechanism for breaking a playbook into multiple files. This simplifies writing complex playbooks and makes them easier to reuse.

An Ansible playbook is an organized unit of scripts that defines work for a server configuration managed by Ansible. A playbook contains one or more plays, each of which defines the work to be done for a configuration on a managed server. Ansible plays are written in YAML.

By default, Ansible modules require Python to be present on the target machines, since they are all written in Python. An exception is speaking to devices, such as routers, that do not have any Python installed; in such cases, using the shell or command module is much more appropriate.
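A minimal playbook that pulls in roles might look like the following sketch. The `webservers` group and the role names are made up for illustration; writing the file out shows the shape of a playbook.

```shell
# Write a minimal example playbook; group and role names are hypothetical.
cat > site.yml <<'EOF'
---
- name: Configure web servers
  hosts: webservers
  become: yes
  roles:
    - common
    - webserver
EOF
# With Ansible installed, you could validate and run it:
#   ansible-playbook --syntax-check site.yml
#   ansible-playbook -i inventory site.yml
```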

How Does DevOps Help an Oracle DBA?

What are the activities performed by DBA?

The DBA role typically spans multiple production environments, development teams, technologies and stakeholders. They may be tuning a database one minute, applying a security patch the next, responding to a production issue or answering developers’ questions.

They need to ensure backups and replication are configured correctly, the appropriate systems and users have access to the right databases (and no-one else!), and they need to be on hand to troubleshoot unusual system behaviour.

Their real value lies in understanding the mechanics and details of the database itself, its runtime characteristics and configuration, so they can bridge the gap between developers writing queries and operations staff running jobs.

A skilled DBA can identify ways to speed up slow-running queries, either by changing the query logic, altering the database schema or editing database runtime parameters.

For instance, changing the order of joins, introducing an index (or sometimes removing an index!), adding hints to the database Query Execution Planner, or updating database heuristics, can all have a dramatic impact on performance.

Oracle DBA to DevOps :

To work as a DevOps engineer, you need to undergo training and certification.

DevOps is a set of best practices that emphasize the collaboration and communication of IT professionals (developers, operators, and support staff) in the life cycle of applications and services, leading to:

  • Continuous Integration: merging all developed working copies to a shared mainline several times a day
  • Continuous Deployment: release continuously or as often as possible
  • Continuous Feedback: seek feedback from stakeholders during all life cycle stages

ORACLE DBA CAREER FLOW:

On the ‘Dev’ side, DBAs carefully evaluate each change request to ensure that it is well thought out, is compliant with organizational best practices and won’t have unintended consequences on database performance or the validity of dependent objects. They have developed and tested all of the SQL that has materially changed the database and crafted it into what it is today.

 

On the ‘Ops’ side, DBAs have designed and provisioned the data platform. They are in charge of monitoring their databases and keeping them available and high-performing. They manage access to and the overall security of the platform. They perform release activities in support of the application and troubleshoot any errors that happen during that process or during day to day operation.

Powerful DBAs with skills not just in scripting, but in efficiency and logic, were able to take complicated, multi-tier environments and break them down into strategies that could be easily adopted.

As they overcame the challenges of the database being central, and blamed for everything in the IT environment, they were able to dissect and build out complex end-to-end DevOps management and monitoring.

As essential as system, network, and server administration were to the operations group, the database administrator possessed the advanced skills required: a hybrid of the developer and the operations personnel that makes them a natural fit for DevOps.

A database that functions like a well-oiled machine is critical to the implementation of an efficient and high performing DevOps strategy. The reason is simple; a slow database produces slow results – which is bad for business. In addition, within DevOps, DBAs can focus on helping their organisations make strides in innovation.

Independent development teams, faster and earlier corrective measures, as well as more stable deployment configurations are characteristics of database administration that directly affect the success of DevOps initiatives.

The mission of the DBA is to expand the database’s range of functionality. In this ever evolving industry, the objective remains the same: “help organisations extract value from data, integrate it with new and traditional sources, and ensure quality and security.”

While optimising the database for DevOps is a must for organisations, rushing into the transition unprepared could be a detriment instead of an improvement. Understanding your organisation’s goals as well as evaluating the capabilities of your DBAs, developers, and operation managers is imperative for the transition to be a success.

Database Smarts: Automate to Innovate

When you are stuck as a DBA performing database administration activities that you could automate using scripts and schedulers, then you are basically wasting your and your organization’s time due to redundancy and inefficiency. Bill Gates said, “I always choose a lazy person to do a difficult job because he will find an easy way to do it.” The assumption here is, of course, that the lazy person is smart; very smart. And that’s where automation comes in.
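As a trivial sketch of the idea, even a small shell function wired to a scheduler removes a recurring manual check. The threshold, the script path, and the cron line below are illustrative; in practice the usage percentage would come from a SQL*Plus query against the data dictionary.

```shell
# Hypothetical automated check: flag when a usage percentage crosses a limit.
check_usage() {
  pct=$1
  limit=${2:-90}   # default warning threshold: 90%
  if [ "$pct" -ge "$limit" ]; then
    echo "WARN: usage ${pct}% >= ${limit}%"
  else
    echo "OK: usage ${pct}%"
  fi
}

check_usage 95   # prints: WARN: usage 95% >= 90%
check_usage 40   # prints: OK: usage 40%
# Scheduled hourly via cron, e.g.:
#   0 * * * * /home/oracle/scripts/check_tablespace.sh
```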

You may or may not already have automated procedure creation and some sort of code to save time, but the real beauty of fully integrated automation, when faced with managing multiple databases of different calibers, is that you don't even need much of a DBA background to implement this feature.

With DevOps tools like Chef, Puppet, and SaltStack, you can set up and manage a private cloud such as AWS DBaaS on-premise in just a few clicks. These tools not only make life far easier for your DB team but also improve the performance of the application and delivery process. And that’s music to the lazy man’s ears. Now, let’s take a closer look at implementing these new tools.

DevOps Tools: Automate Into the Future

If you are dealing with DB challenges on a daily basis, you have to know that there are complexities that cannot be handled merely via manual automation or the basic built-in tools. Take, for example, a new KPI in the application that requires you to make changes in database objects such as packages, procedures, tables, and all the dependent source code before being able to deploy them in the application.

There may be multiple people working on the database part to get this new KPI working on the application as databases contain important information that also needs to be a part of any new change. Hence, there is a possibility of missing one item of information during deployment that will result in failure to deploy on time. Not a good thing.

 

DevOps tools can handle these changes from a single-window option and prevent unapproved changes from moving to production. They can even generate a detailed report on what needs to be deployed, provide insight into the deployment, and allow you to automate for continuous delivery.

Two great DevOps tools mentioned above that help tackle this challenge are Chef and Puppet, both of which focus on platform automation and enable continuous delivery through their single-window configuration, allowing you to manage change configuration, extensibility, compliance automation, and availability.

Let’s take a look at how you can redefine operations as Developer Services with Chef:

  • Build: Lowers cost of managing and maintaining infrastructure by providing an on-demand and self-service infrastructure that the developer needs.
  • Deploy: Offers management of infrastructure change via automatic testing of deployments, enabling developers to quickly ship quality software at reduced risk.
  • Manage: Provides an easy single-window configuration along with full monitoring so that your organization reduces software and configuration risks; also provides insights into speed and efficiency as well as reduced risk of delivery errors.
  • Collaborate: Reduces friction between teams by enabling full stack transparency and management via separating duties within the team to foster successful collaboration and allow for the delivery of a configured infrastructure with ease.

Summary:

  • Elevate your DBA skills with automation
  • Free up your time for interesting stuff
  • Look for ways to help devs & biz
  • Version control everything
  • Don’t repeat manual tasks more than twice
  • Get started with Ansible
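For that last point, one hedged way to take the first step is shown below; package names and managers vary by platform, so pick the line that matches yours.

```shell
# Install Ansible (choose the line for your platform), then run a first
# ad-hoc command against the local machine.
sudo yum install -y ansible        # RHEL/CentOS (via EPEL)
# sudo apt-get install -y ansible  # Debian/Ubuntu
# pip install --user ansible       # any platform with Python
ansible localhost -m ping
```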

 
