Oracle GoldenGate vs Oracle Data Guard

Oracle GoldenGate and Oracle Data Guard are both configured to synchronize data between two or more systems.

The basic features of the two products may look similar, but GoldenGate uses logical replication while Data Guard ships and applies redo.

Oracle Data Guard and GoldenGate both involve at least two systems, where transactional data from one database must be moved to another database. The purpose of replicating the data can be disaster recovery, data migration, or the preparation of a secondary system.

Data Guard is best for disaster recovery and data protection, while GoldenGate is a more flexible heterogeneous replication mechanism that can also transform the data while it is being replicated.

  • Data Guard is an Oracle-specific technology, while GoldenGate supports heterogeneous database systems, including all the major RDBMSs such as DB2, Sybase, and MySQL.
  • Data Guard supports active-passive replication: one database is the primary database and the other is in an inactive standby mode.
  • GoldenGate supports an active-active replication mode and allows both systems to work simultaneously while maintaining data integrity.
  • GoldenGate allows transformation of the data, with conflict management, while it is being replicated between the two database systems.
  • GoldenGate allows replication across platforms. Data can be extracted on a Unix platform and replicated to an Oracle database running on Windows.
  • GoldenGate has many use cases. The use of flat files for data transportation and the support of heterogeneous systems make the technology very versatile.

 

GoldenGate is the best solution when you need to:
  • Replicate one or many tables to a read-write database.
  • Replicate and transform tables.
  • Perform bi-directional replication.
  • Perform a zero or near-zero downtime upgrade.
  • Replicate between heterogeneous systems.

Typical GoldenGate replication scenarios:

  • Zero Downtime Upgrade/Migration: You may want to set up an exact replica of your database for an upgrade or migration and keep the data in sync until the last moment.
  • High Availability: Your applications cannot afford downtime, so you keep two databases up and running at the same time to support them.
  • Query or Reporting: You want to offload reporting or long-running queries from your production database.
  • Real-Time BI: For your data warehouse requirements.
  • Data Synchronization: You want to synchronize data across different departments of the company (different databases).

Furthermore, the GoldenGate software supports extraction from legacy systems and has no limitation on distance. Additionally, the trail files generated by GoldenGate capture can be compressed and encrypted.

Oracle Data Guard

Used for

  • High availability
  • Data protection
  • Disaster recovery
  • Primary database workload reduction

Database roles

In a Data Guard environment, one database is the primary database and the others are standby databases. These roles are mutually exclusive.
The role of a database can be queried from the database_role column of v$database.

Primary database

The primary DB is the "real" production database.
The primary DB must run in archivelog mode and with forced logging.

Standby databases

Standby databases are (transactionally consistent) copies of the primary database.
Up to 30 standby databases are supported.

Physical and logical standby databases

A standby DB is either a physical or a logical standby database. (A standby database can also be converted to a snapshot standby database.)
A physical standby is an exact, block-for-block copy of the primary database.
A logical standby has the same data as the primary database, but it is not necessarily stored in the same database blocks.
A physical standby is kept synchronized with Redo Apply, a logical standby with SQL Apply.

Cascading standby database

A physical standby database can be configured to receive redo from a primary database and relay it to another standby database.
In this case, the relaying database is called a cascading standby database. The receiving standby database is also known as a terminal destination.
With such a configuration, the load on the primary database can be reduced.

SQL*Plus

Some SQL*Plus statements accept the standby keyword, which indicates that the statement applies to the standby databases.

Role of redo

Data Guard basically works as follows:

  • Archived redo log files from the production DB are copied to the standby DB.
  • They are then registered with the standby DB.
  • Finally, using media recovery, the redo is used to roll forward the standby database.

Preventing physical errors

A standby database validates the redo it receives from the primary DB.
Thus, physical errors (such as data corruption) on the primary DB are not propagated to standby databases.

Changing roles of databases

The role (primary vs. standby) of a database can be changed by either a

  • switchover or
  • a failover.
A switchover is a planned role transition. In a switchover, the primary database becomes a standby database while one of the standby databases becomes the primary database.
A switchover is performed in two steps:
1) the primary DB becomes a standby DB
2) a standby DB becomes the primary DB.
A failover occurs when the primary database becomes unavailable. In this case, one of the standby databases becomes the new primary database.

RAC

The primary DB and the standby databases can run as single instances or as Real Application Clusters (RAC).

Active Data Guard

Active Data Guard offers the same features as Data Guard, plus the standby databases are open read-only while redo is being applied.
Thus, it allows you to query the standby databases without impacting the performance of the production database (for example, to create resource-intensive reports).
The standby used this way is a physical standby database.
The buzzword is real-time query.
Active Data Guard must be licensed separately.

Data protection modes

Data Guard runs in one of three data protection modes. These modes let you balance data protection against system performance requirements.

These protection modes are:

  • Maximum performance (default)
  • Maximum protection
  • Maximum availability

Automatic gap detection

If connectivity is lost between the primary and one or more standby databases, an archive gap can form. Once connectivity is re-established, Data Guard automatically detects the missing archived redo log files and transmits them to the standby databases.
Catch Me On:- Hariprasath Rajaram 

Telegram:https://t.me/joinchat/I_f4DkeGfZuxgMIoJSpQZg LinkedIn:https://www.linkedin.com/in/hariprasathdba Facebook:https://www.facebook.com/HariPrasathdba 
FB Group:https://www.facebook.com/groups/894402327369506/ 
FB Page: https://www.facebook.com/dbahariprasath/? 
Twitter: https://twitter.com/hariprasathdba


 

Oracle Goldengate: Tune TCPBUFSIZE and TCPFLUSHBYTES parameters.


The two RMTHOST parameters, TCPBUFSIZE and TCPFLUSHBYTES, are very useful for increasing the buffer sizes and network packets sent by Data Pump over the network from the source to the target system. This is especially beneficial for high latency networks.
It is recommended that you set these parameters to an initial value of 1MB (1,048,576 bytes) or the calculated value, whichever is larger.

Use the ping command to obtain the average round-trip time (RTT). For example:

% ping ggtarget.doyensys.com
Pinging ggtarget.doyensys.com [192.168.1.171] with 32 bytes of data:
Reply from 192.168.1.171: bytes=32 time=31ms TTL=56
Reply from 192.168.1.117: bytes=32 time=61ms TTL=56
Reply from 192.168.1.117: bytes=32 time=32ms TTL=56
Reply from 192.168.1.117: bytes=32 time=34ms TTL=56
Ping statistics for 192.168.1.117:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 31ms, Maximum = 61ms, Average = 45ms

Calculate the bandwidth-delay product (BDP).
For example, if the network bandwidth between the source and target systems is 125 megabits per second (Mbit/s) and the latency is 45 ms, the calculation would be as follows: BDP = (125,000,000 / 8) * 0.045 = 703,125 bytes

Multiply the result by 4 to determine 4xBDP:
For example: 4xBDP = 703,125 x 4 = 2,812,500
In this example, because the result is more than 1MB, set the values of TCPBUFSIZE and TCPFLUSHBYTES to 2812500.
The parameters are set in the data pump parameter file. For example: RMTHOST ggtarget, MGRPORT 7810, TCPBUFSIZE 2812500, TCPFLUSHBYTES 2812500
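The sizing arithmetic above can be sketched as a small helper. This is only an illustration of the rule in this post (4 x BDP, never below the 1 MB starting value); the function name and the example figures (125 Mbit/s, 45 ms) are taken from, or invented for, this example, not measured from any real network:

```python
def goldengate_tcp_buffer_size(bandwidth_bits_per_sec, rtt_seconds):
    """Return a suggested TCPBUFSIZE / TCPFLUSHBYTES value in bytes.

    BDP = (bandwidth in bytes/sec) * RTT; the recommendation above is to
    use 4 x BDP, but never less than 1 MB (1,048,576 bytes).
    """
    bdp = (bandwidth_bits_per_sec / 8) * rtt_seconds
    return max(int(bdp * 4), 1_048_576)

# Example from the post: 125 Mbit/s link with a 45 ms average RTT
print(goldengate_tcp_buffer_size(125_000_000, 0.045))  # 2812500
```

On a low-latency LAN the computed 4xBDP falls below 1 MB, in which case the helper returns the 1 MB floor instead.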

Note: On non-Windows systems the maximum socket buffer size is usually limited by a kernel default, so the OS limit may need to be raised before these values take effect.


Oracle Goldengate: Configure the Streams pool for integrated Replicat


When using integrated Replicat, the Streams pool must be configured.
When using non-integrated Replicat, the Streams pool is not necessary.

The shared memory that is used by the servers comes from the Streams pool portion of the System Global Area (SGA) in the database. Therefore, you must set the database initialization parameter STREAMS_POOL_SIZE high enough to keep enough memory available for the number of Extract and Replicat processes that you expect to run in integrated mode. Note that Streams pool is also used by other components of the database (like Oracle Streams, Advanced Queuing, and Datapump export/import), so make certain to take them into account while sizing the Streams pool for Oracle GoldenGate.

The size requirement of the Streams pool for integrated Replicat is based on a single parameter, MAX_SGA_SIZE. The MAX_SGA_SIZE parameter defaults to INFINITE which allows the Replicat process to use as much of the Streams pool as possible. Oracle does not recommend setting the MAX_SGA_SIZE parameter.

Set the STREAMS_POOL_SIZE initialization parameter for the database to the following value:
(1GB * # of integrated Replicats) + 25% head room
For example, on a system with two integrated Replicat processes the calculation would be as follows:
(1GB * 2) * 1.25 = 2.5GB, so STREAMS_POOL_SIZE = 2500M

On a system with three integrated Replicat processes:
(1GB * 3) * 1.25 = 3.75GB, so STREAMS_POOL_SIZE = 3750M
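The sizing rule above is simple enough to express as a one-liner. This sketch merely reproduces the post's arithmetic (1 GB, treated as 1000 MB as in the examples above, per integrated Replicat, plus 25% head room); the function name is invented for illustration:

```python
def streams_pool_size_mb(num_integrated_replicats):
    """Suggested STREAMS_POOL_SIZE in MB: 1 GB (1000 MB, as in this
    post's examples) per integrated Replicat, plus 25% head room."""
    return int(num_integrated_replicats * 1000 * 1.25)

print(streams_pool_size_mb(2))  # 2500
print(streams_pool_size_mb(3))  # 3750
```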

 


How To Handle Oracle DDL add a Column When not using DDL replication?


Description:-

No data loss will occur during this activity, because Extract automatically picks up from where it left off.

The following DDL change requires a short outage of the Extract and Replicat processes.

Step 1:- If the newly added column is a composite key column, make sure there are no open transactions on the affected source table.

Step 2:- Issue LAG EXTRACT. When Extract's lag is 0, or it is at EOF, stop Extract.

GGSCI (gg-11.2.com as gguser@source) 44> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
EXTRACT RUNNING DPUMP1 00:00:00 00:00:02
EXTRACT RUNNING EXT1 00:00:00 00:00:04

GGSCI (gg-11.2.com as gguser@source) 45> lag extract ext1

Sending GETLAG request to EXTRACT EXT1 …
No records yet processed.
At EOF, no more records to process.

GGSCI (gg-11.2.com as gguser@source) 46> stop extract ext1

Sending STOP request to EXTRACT EXT1 …
Request processed.

Step 3:- Issue LAG REPLICAT. When Replicat's lag is 0, or it is at EOF, stop Replicat.

GGSCI (gg-12.2.com as gguser@source) 3> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
REPLICAT RUNNING REP1 00:00:00 00:00:00

GGSCI (gg-12.2.com as gguser@source) 4> lag replicat rep1

Sending GETLAG request to REPLICAT REP1 …
Last record lag 343,307 seconds.
At EOF, no more records to process.

GGSCI (gg-12.2.com as gguser@source) 5> stop replicat rep1

Sending STOP request to REPLICAT REP1 …
Request processed.

Step 4:- Add the column to both the source and target tables.

[oracle@gg-11 ~]$ !sq
sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Mon Aug 13 18:40:38 2018

Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> conn sample/sample
Connected.
SQL> show user
USER is “SAMPLE”
SQL> desc tt
Name Null? Type
—————————————– ——– —————————-
ID NUMBER(10)

SQL> alter table tt add (name varchar2(10));

Table altered.

Target Side:

[oracle@gg-12 ~]$ !sq
sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Mon Aug 13 18:41:20 2018

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 – 64bit Production

SQL> conn sample/sample
Connected.
SQL> show user
USER is “SAMPLE”
SQL> desc tt
Name Null? Type
—————————————– ——– —————————-
ID NUMBER(10)

SQL> alter table tt add (name varchar2(10));

Table altered.

Step 5:- Start Extract:

GGSCI (gg-11.2.com as gguser@source) 49> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
EXTRACT RUNNING DPUMP1 00:00:00 00:00:05
EXTRACT STOPPED EXT1 00:00:00 00:00:07

GGSCI (gg-11.2.com as gguser@source) 50> start extract ext1

Sending START request to MANAGER …
EXTRACT EXT1 starting

GGSCI (gg-11.2.com as gguser@source) 51> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
EXTRACT RUNNING DPUMP1 00:00:00 00:00:02
EXTRACT RUNNING EXT1 00:10:18 00:00:02

Step 6:- start Replicat:

GGSCI (gg-12.2.com as gguser@source) 8> start replicat rep1

Sending START request to MANAGER …
REPLICAT REP1 starting

GGSCI (gg-12.2.com as gguser@source) 9> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
REPLICAT RUNNING REP1 00:00:00 00:00:02

Step 7:- Now insert a record that populates the newly added column on the source side.

SQL> insert into tt values (10,'SAM');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from tt where name='SAM';

ID NAME
———- ———-
10 SAM

Step 8:- On the target side, check whether the record was replicated:

SQL> select * from tt where name='SAM';

ID NAME
———- ———-
10 SAM

 


 

Zero downtime database upgrade from 11g to 12c using Oracle Goldengate


Description:-

In this article we are going to see a zero-downtime database upgrade from 11g to 12c using Oracle GoldenGate.
1) Using GoldenGate, we upgrade from an 11g to a 12c database without any downtime.
2) Data is already available in the source 11g schema HARI.

Environment Details:-

High-level steps to upgrade from an 11g to a 12c database:

1) Check the network between source and target.
2) Install the GoldenGate software on both sides.
3) Set up the Extract and data pump on the source side.
4) Set up the Replicat on the target side.
5) Export and import the initial load using an SCN.
6) Start the Replicat using that SCN.

Source 11g database side GG Configuration

Step1:- Log in to the 11g server and connect to GoldenGate 

[oracle@gg-11 gghome]$ ./ggsci
Oracle GoldenGate Command Interpreter for Oracle
Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140727.2135.1_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Aug 7 2014 10:21:34
Operating system character set identified as UTF-8.
Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.

GGSCI (gg-11.2.com) dblogin userid gguser,password gguser
Successfully logged into database.

Step2:-Configure manager 

GGSCI (gg-11.2.com)view param mgr
PORT 7809

Step3:-Add schematrandata

GGSCI (gg-11.2.com )  add schematrandata hari

2018-07-31 19:56:12 INFO OGG-01788 SCHEMATRANDATA has been added on schema hari.
2018-07-31 19:56:12 INFO OGG-01976 SCHEMATRANDATA for scheduling columns has been added on schema hari.

GGSCI (gg-11.2.com ) info schematrandata hari

2018-07-31 19:56:34 INFO OGG-01785 Schema level supplemental logging is enabled on schema HARI.
2018-07-31 19:56:34 INFO OGG-01980 Schema level supplemental logging is enabled on schema HARI for all scheduling columns.

Step4:-Configure EXTRACT Process 

GGSCI (gg-11.2.com) view param ext1

EXTRACT ext1
SETENV (ORACLE_SID="SOURCE")
SETENV (ORACLE_HOME = "/u01/app/oracle/product/11.2.0/db_1")
USERID gguser@source, PASSWORD gguser
DDL INCLUDE ALL
EXTTRAIL /u01/gghome/dirdat/ac
TABLE hari.*;

GGSCI (gg-11.2.com ) add extract ext1 tranlog begin now
EXTRACT added.

GGSCI (gg-11.2.com )  add exttrail /u01/gghome/dirdat/ac extract ext1
EXTTRAIL added.

GGSCI (gg-11.2.com as gguser@source)  info all

Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER STOPPED
EXTRACT STOPPED EXT1 00:00:00 00:00:40

Step5:-Configure Pump Process  

GGSCI (gg-11.2.com )  view param dpump1

EXTRACT dpump1
USERID gguser@source, PASSWORD gguser
RMTHOST gg-12.2, MGRPORT 7810
RMTTRAIL /u01/gghome/dirdat/ad
DDL INCLUDE ALL
TABLE hari.*;

GGSCI (gg-11.2.com) add extract dpump1 exttrailsource /u01/gghome/dirdat/ac
EXTRACT added.

GGSCI (gg-11.2.com ) add rmttrail /u01/gghome/dirdat/ad extract dpump1
RMTTRAIL added.

GGSCI (gg-11.2.com )  info all

Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER STOPPED
EXTRACT STOPPED DPUMP1 00:00:00 00:01:00
EXTRACT STOPPED EXT1 00:00:00 00:04:40

Step6:-Start all the processes and check

GGSCI (gg-11.2.com )  start ext1

Sending START request to MANAGER …
EXTRACT EXT1 starting

GGSCI (gg-11.2.com )  start dpump1

Sending START request to MANAGER …
EXTRACT DPUMP1 starting

GGSCI (gg-11.2.com )  info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
EXTRACT RUNNING DPUMP1 00:00:00 00:04:33
EXTRACT RUNNING EXT1 00:00:00 00:00:03

Target side 12c database GG Configuration

Step1:-Configure manager 

GGSCI (gg-12.2.com) 9> dblogin userid gguser,password gguser
Successfully logged into database.

GGSCI (gg-12.2.com ) info mgr

Manager is running (IP port gg-12.2.com.7810, Process ID 3999).

GGSCI (gg-12.2.com ) view param mgr

PORT 7810

Step2:-Checkpoint table creation

GGSCI (gg-12.2.com)add checkpointtable gguser.checkpoint
Successfully created checkpoint table gguser.checkpoint.

Step3:-Configure Replicat Process 

GGSCI (gg-12.2.com) view param rep1

REPLICAT rep1
ASSUMETARGETDEFS
HANDLECOLLISIONS
USERID gguser@source, PASSWORD gguser
DDL INCLUDE ALL
DDLERROR DEFAULT IGNORE
MAP hari.*, TARGET hari.*;

GGSCI (gg-12.2.com )  add replicat rep1 exttrail /u01/gghome/dirdat/ad checkpointtable gguser.checkpoint
REPLICAT added.

GGSCI (gg-12.2.com ) 12> info all

Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING
REPLICAT STOPPED REP1 00:00:00 00:00:03

Check the process status on the source side before the initial data load starts

GGSCI (gg-11.2.com ) info all

Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING
EXTRACT RUNNING DPUMP1 00:00:00 00:00:00
EXTRACT RUNNING EXT1 00:00:00 00:00:10

Start the initial data load using Data Pump on the source side 11g database

SQL> select current_scn from v$database;

CURRENT_SCN
———–
1584462

SQL> select count(*) from hari.chennai;

COUNT(*)
———-
1835008

Step1:-Export the table “chennai” using flashback_scn

[oracle@gg-11 gghome]$ expdp system/oracle dumpfile=hari1.dmp schemas=hari directory=DATA_PUMP_DIR logfile=hari1.log FLASHBACK_SCN=1584462

Export: Release 11.2.0.3.0 - Production on Tue Jul 31 20:02:25 2018
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** dumpfile=hari1.dmp schemas=hari directory=DATA_PUMP_DIR logfile=hari1.log FLASHBACK_SCN=1584462
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 88 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "HARI"."CHENNAI" 72.63 MB 1835008 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/u01/app/oracle/admin/source/dpdump/hari1.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 20:02:45

Step2:-Copy the dump file from the 11g server to the 12c server

[oracle@gg-11 gghome]$ cd /u01/app/oracle/admin/source/dpdump/
[oracle@gg-11 dpdump]$ scp hari1.dmp oracle@192.168.2.157:/u01/app/oracle/admin/source/dpdump/
The authenticity of host ‘192.168.2.157 (192.168.2.157)’ can’t be established.
RSA key fingerprint is c2:3d:72:16:52:01:ae:5c:41:6b:34:f5:c7:a1:df:74.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ‘192.168.2.157’ (RSA) to the list of known hosts.
oracle@192.168.2.157’s password:
hari1.dmp 100% 73MB 72.8MB/s 00:01

Step3:-After the export, insert more records into the chennai table

[oracle@gg-11 ~]$ sqlplus hari/hari
SQL*Plus: Release 11.2.0.3.0 Production on Tue Jul 31 20:06:47 2018
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> insert into chennai select * from chennai;
1835008 rows created.

SQL> commit;
Commit complete.

SQL> select count(*) from chennai;
COUNT(*)
———-
3670016     

Step4:-Now import into the 12c database (1,835,008 rows)

[oracle@gg-12 dpdump]$ impdp system/oracle dumpfile=hari1.dmp logfile=hari1.log directory=DATA_PUMP_DIR

Import: Release 12.2.0.1.0 - Production on Tue Jul 31 20:09:23 2018
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
import done in AL32UTF8 character set and AL16UTF16 NCHAR character set
export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
Warning: possible data loss in character set conversions
Starting "SYSTEM"."SYS_IMPORT_FULL_01": system/******** dumpfile=hari1.dmp logfile=hari1.log directory=DATA_PUMP_DIR
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "HARI"."CHENNAI" 72.63 MB 1835008 rows
Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at Tue Jul 31 20:09:57 2018 elapsed 0 00:00:24

Step5:-On the target (12c) side, start the Replicat process using the CSN 

GGSCI (gg-12.2.com ) start replicat rep1 aftercsn 1584462    ---(this SCN number comes from the 11g database)

Sending START request to MANAGER ...
REPLICAT REP1 starting

GGSCI (gg-12.2.com )info all

Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING
REPLICAT RUNNING REP1 00:00:00 00:00:01

Note:- The initial load was already completed using Data Pump.
Meanwhile, Extract has been capturing all the changes happening on the source 11g database; these changes are now applied to the target 12c database by the Replicat process.

[oracle@gg-12 dpdump]$ sqlplus hari/hari

SQL*Plus: Release 12.2.0.1.0 Production on Tue Jul 31 20:19:26 2018
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Last Successful login time: Tue Jul 31 2018 20:14:07 +05:30
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select count(*) from chennai;

COUNT(*)
----------
3670016 

Repoint the users from the 11g to the 12c server.

The upgrade completed successfully, with no downtime.


 

 

Upgrade Classic Capture to Integrated Capture in Oracle Goldengate-12c


Description:-

In integrated capture mode, the Extract process communicates directly with the database logmining server to capture change information. Integrated capture supports more data and storage types than classic capture, and it is the only mode that supports capture from a multitenant container database.

While upgrading classic capture to integrated capture, we can watch the logmining server being created in the background with the help of the database alert log.

In integrated capture mode, the Extract process does not read the Oracle redo log files directly; that part of the job is performed by a logmining server residing in the Oracle database server.

Step 1: Stop the primary Extract process.

info extract ext_upg

stop extract ext_upg

info extract ext_upg

Step 2: Register the Extract process with the database.

REGISTER EXTRACT EXT_UPG DATABASE

Step 3: Check whether the Extract process is ready for the upgrade.

INFO EXT_UPG UPGRADE

Step 4: Upgrade the Extract to integrated capture using the UPGRADE INTEGRATED TRANLOG clause.

ALTER EXTRACT EXT_UPG,UPGRADE INTEGRATED TRANLOG

Step 5: Start the Extract process.

start extract EXT_UPG

Step 6: Verify that the Extract process was successfully upgraded from classic capture to integrated capture using the info extract command.

info extract ext_upg

Step 7: The same information can be obtained at the database level from the DBA_CAPTURE view.


Oracle-12c Configure Step By Step Goldengate Bidirectional Method


Description:- 

Oracle GoldenGate bidirectional replication is also called active-active replication. It concurrently captures data from two (or more) databases to keep the data synchronized. Active-active replication is used for high availability and zero-downtime migration.

MANAGER: Runs on all servers involved in the replication. It is the master process which controls all GG activity.

EXTRACT: Runs on the source database server. It extracts data from the Oracle database either directly or indirectly in the case of Integrated capture.

DATAPUMP: Runs on the source database server. It is an optional process which transmits extracted data from the source database server to the target database server. (Optional because the EXTRACT process can do this directly).

SERVER COLLECTOR: Runs on the target database server. Receives extracted data from source database server and stages it in the local trail files on the target database server for the REPLICAT process to apply it to the target database.

REPLICAT: Runs on the target database server. Reads extracted data from the local trail file on the target database server and applies the data to the target database.

Installing Oracle 12.2 GoldenGate on Linux server Here
Oracle-12c Configure Step by step Goldengate Unidirectional Method Here

Source Database

Hostname : ggsource.doyensys.com

Oracle database SID: GGSOURCE

Oracle version: 12.2.0

Oracle GG version: 12.2.0

Target Database

Hostname : ggtarget.doyensys.com

Oracle database SID: GGTARGET

Oracle version: 12.2.0

Oracle GG version: 12.2.0

Check the connectivity from source to target for replication.

Both source and target side add the host information to /etc/hosts file.

Check the connectivity from  target to source for replication.

On source(ggsource.doyensys.com) and target(ggtarget.doyensys.com) verify that LOG_MODE is set to ARCHIVELOG.

Archivelog mode must be enabled on the source side because, with classic capture, the Extract process captures change information from the archived logs, so it is mandatory for classic capture replication.

select LOG_MODE from v$database;
shu immediate;
startup mount;
ALTER DATABASE ARCHIVELOG;
alter database open;
select LOG_MODE from v$database;

On both the source and target side, verify that supplemental logging and force logging are set properly.

ALTER DATABASE FORCE LOGGING;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
select SUPPLEMENTAL_LOG_DATA_MIN,FORCE_LOGGING from v$database;

This parameter must be set on both the source and target databases:

ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE SCOPE=BOTH;
show parameter ENABLE_GOLDENGATE_REPLICATION

Create the administrator and user/schema owners on both source and target database.

create user gguser identified by gguser default tablespace goldengate quota unlimited on goldengate;

grant create session,connect,resource,alter system to gguser;

EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee=>'gguser', privilege_type=>'CAPTURE', grant_optional_privileges=>'*');

On both the source and target side, go to the GoldenGate installation directory (in our scenario /u01/gghome) and run the following built-in GoldenGate scripts to create all the objects needed to support DDL replication.

  1. @marker_setup.sql
  2. @ddl_setup.sql
  3. @role_setup.sql
  4. GRANT GGS_GGSUSER_ROLE TO <loggedUser>
  5. @ddl_enable.sql

SOURCE (GGSOURCE.DOYENSYS.COM):

Start GGSCI and login into database using dblogin command.

./ggsci

dblogin userid gguser, password gguser

The manager parameter file is created by default when the GoldenGate software is installed; we just add the port and user information to it.

PORT 7811

USERIDALIAS gguser

Check the manager parameter and status.

view param mgr

On the source side, add trandata for the tables whose data we want to replicate to the target database.

add trandata demo.*

Create the primary Extract parameter file.

EXTRACT ext1
USERID gguser@ggsource, PASSWORD gguser
EXTTRAIL /u01/gghome/dirdat/aa
DDL INCLUDE ALL
TABLE demo.*;

Create the Extract group and the local Extract trail file and start the extract process.

add extract ext1 tranlog begin now

add exttrail /u01/gghome/dirdat/aa extract ext1

start extract ext1

Check the status of primary extract process.

info ext1

Create the secondary Extract (data pump) parameter file.

EXTRACT dpump1
USERID gguser@ggsource, PASSWORD gguser
RMTHOST ggtarget, MGRPORT 7810
RMTTRAIL /u01/gghome/dirdat/ab
DDL INCLUDE ALL
TABLE demo.*;

Create the data pump group and the remote Extract trail file and start the data pump process.

add extract dpump1 exttrailsource /u01/gghome/dirdat/aa

add rmttrail /u01/gghome/dirdat/ab extract dpump1

start extract dpump1

To check the status of data pump process.

info dpump1

TARGET (GGTARGET.DOYENSYS.COM):

Start GGSCI and log in to the database using the dblogin command.

./ggsci

dblogin userid gguser, password gguser

To check the manager status on target server.

view param mgr

info mgr

To create a checkpoint table in the target database.

add checkpointtable gguser.chkpt

info  checkpointtable gguser.chkpt

Create the Replicat parameter file for target side.

REPLICAT rep1
USERID gguser@ggtarget, PASSWORD gguser
DDL INCLUDE ALL
DDLERROR DEFAULT IGNORE
MAP demo.*, TARGET demo.*;

Create and start the replicat process.

add replicat rep1 exttrail /u01/gghome/dirdat/ab checkpointtable gguser.chkpt

start replicat rep1

To check the status of replicat process.

info rep1

Configuring Bidirectional method:

TARGET SIDE (GGTARGET.DOYENSYS.COM):

For bidirectional replication, we also have to create Extract and data pump processes on the target database.

./ggsci

dblogin userid gguser, password gguser

On the target side, add trandata for the tables whose data we want to replicate back to the source database.

add trandata demo.*

Create the primary Extract parameter file for the target side.

EXTRACT ext2
USERID gguser@ggtarget, PASSWORD gguser
EXTTRAIL /u01/gghome/dirdat/ac
DDL INCLUDE ALL
TABLE demo.*;
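In a bidirectional configuration, transactions applied by the Replicat on one side would be captured again by the local Extract and looped back endlessly unless they are filtered out. A common way to prevent this (a sketch, assuming classic capture and that all Replicat activity runs as gguser) is to exclude the replication user's transactions in both Extract parameter files (ext1 and ext2):

```sql
-- Hypothetical addition to the ext1/ext2 parameter files:
-- skip transactions performed by the GoldenGate apply user so
-- that replicated changes are not captured and sent back again.
TRANLOGOPTIONS EXCLUDEUSER gguser
```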

Create the Extract group and the local Extract trail file and start the extract process.

add extract ext2 tranlog begin now

add exttrail /u01/gghome/dirdat/ac extract ext2

start extract ext2

Check the status of primary extract process.

info ext2

Create the secondary Extract (data pump) parameter file for target side.

EXTRACT dpump2
USERID gguser@ggtarget, PASSWORD gguser
RMTHOST ggsource, MGRPORT 7811
RMTTRAIL /u01/gghome/dirdat/ad
DDL INCLUDE ALL
TABLE demo.*;

Create the data pump group and the remote Extract trail file and start the data pump process.

add extract dpump2 exttrailsource /u01/gghome/dirdat/ac

add rmttrail /u01/gghome/dirdat/ad extract dpump2

start extract dpump2

To check the status of data pump process.

info dpump2

SOURCE SIDE (GGSOURCE.DOYENSYS.COM):

Start GGSCI and log in to the database using the dblogin command.

./ggsci

dblogin userid gguser, password gguser

To create a checkpoint table in the source database.

add checkpointtable gguser.chkpt

info  checkpointtable gguser.chkpt

Create the Replicat parameter file for source side.

REPLICAT rep2
USERID gguser@ggsource, PASSWORD gguser
DDL INCLUDE ALL
DDLERROR DEFAULT IGNORE
MAP demo.*, TARGET demo.*;

Create and start the replicat process.

add replicat rep2 exttrail /u01/gghome/dirdat/ad checkpointtable gguser.chkpt

start replicat rep2

To check the status of replicat process.

info rep2

Now check the replication from source to target database.

Verify that the table and rows were replicated into the target database.

Now check the replication from  target to source database.

Verify that the table and rows were replicated into the source database.
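The two checks above can be sketched with a hypothetical demo.emp table (the table name and values are illustrative only):

```sql
-- On GGSOURCE: create a table and insert a row.
CREATE TABLE demo.emp (id NUMBER PRIMARY KEY, name VARCHAR2(30));
INSERT INTO demo.emp VALUES (1, 'SOURCE_ROW');
COMMIT;

-- On GGTARGET: the table and row should appear shortly;
-- then insert a row in the opposite direction.
SELECT * FROM demo.emp;
INSERT INTO demo.emp VALUES (2, 'TARGET_ROW');
COMMIT;

-- Back on GGSOURCE: both rows should now be visible.
SELECT * FROM demo.emp;
```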

Catch Me On:- Hariprasath Rajaram

LinkedIn:https://www.linkedin.com/in/hari-prasath-aa65bb19/
Facebook:https://www.facebook.com/HariPrasathdba
FB Group:https://www.facebook.com/groups/894402327369506/
FB Page: https://www.facebook.com/dbahariprasath/?
Twitter:  https://twitter.com/hariprasathdba


Oracle-12c Configure Step by step Goldengate Unidirectional Method

Description:- 

In Oracle GoldenGate, a unidirectional configuration allows data to flow in one direction only, from source to target. On the source system, the GoldenGate Extract group captures change information from the redo log files or archived logs and forwards Oracle GoldenGate trails to the remote system. The Replicat group then applies the transactions to the target database, keeping the two databases synchronized.

Replication between two databases requires the setting up of the Manager, Extract and Pump processes on the source database and setting up the Manager and Replicat processes on the target database.

MANAGER: Runs on all servers involved in the replication. It is the master process which controls all GG activity.

EXTRACT: Runs on the source database server. It extracts data from the Oracle database either directly or indirectly in the case of Integrated capture.

DATAPUMP: Runs on the source database server. It is an optional process which transmits extracted data from the source database server to the target database server. (Optional because the EXTRACT process can do this directly).

SERVER COLLECTOR: Runs on the target database server. Receives extracted data from source database server and stages it in the local trail files on the target database server for the REPLICAT process to apply it to the target database.

REPLICAT: Runs on the target database server. Reads extracted data from the local trail file on the target database server and applies the data to the target database.

Installing Oracle 12.2 GoldenGate on Linux server Here
Oracle-12c Configure Step By Step Goldengate Bidirectional Method Here

Source Database

Hostname : ggsource.doyensys.com

Oracle database SID: GGSOURCE

Oracle version: 12.2.0

Oracle GG version: 12.2.0

Target Database

Hostname : ggtarget.doyensys.com

Oracle database SID: GGTARGET

Oracle version: 12.2.0

Oracle GG version: 12.2.0

 

Check the connectivity from source to target for replication.

On both the source and target sides, add the host information to the /etc/hosts file.
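For example, the entries might look like this (the IP addresses are placeholders; substitute the ones used in your environment):

```
192.168.1.10   ggsource.doyensys.com   ggsource
192.168.1.20   ggtarget.doyensys.com   ggtarget
```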

On source(ggsource.doyensys.com) verify that LOG_MODE is set to ARCHIVELOG.

Archive logging must be enabled on the source side: with classic capture, the Extract process may need to read change information from the archived logs, so ARCHIVELOG mode is mandatory for classic-capture replication.

select LOG_MODE from v$database;
shu immediate;
startup mount;
ALTER DATABASE ARCHIVELOG;
alter database open;
select LOG_MODE from v$database;

Verify that supplemental logging and forced logging are set properly.

ALTER DATABASE FORCE LOGGING;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
select SUPPLEMENTAL_LOG_DATA_MIN,FORCE_LOGGING from v$database;

This parameter must be changed on source and target databases:

ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE SCOPE=BOTH;
show parameter ENABLE_GOLDENGATE_REPLICATION

Create the administrator and user/schema owners on both source and target database.

create user gguser identified by gguser default tablespace goldengate quota unlimited on goldengate;

grant create session,connect,resource,alter system to gguser;

EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee=>'gguser', privilege_type=>'CAPTURE', grant_optional_privileges=>'*');

Go to the GoldenGate installation location (in our scenario /u01/gghome) and run the following built-in GoldenGate scripts to create all the objects needed to support DDL replication.

  1. @marker_setup.sql
  2. @ddl_setup.sql
  3. @role_setup.sql
  4. GRANT GGS_GGSUSER_ROLE TO <loggedUser>;
  5. @ddl_enable.sql

Start GGSCI and log in to the database using the dblogin command.

dblogin userid gguser, password gguser

A manager parameter file is created by default when the GoldenGate software is installed; we just add the port and user information to it.

PORT 7811

USERIDALIAS gguser

Check the manager parameter file and status.

view param mgr

info mgr

On the source side, add trandata for the tables whose data we want to replicate to the target database.

add trandata demo.*

Create the primary Extract parameter file.

EXTRACT ext1
USERID gguser@ggsource, PASSWORD gguser
EXTTRAIL /u01/gghome/dirdat/aa
DDL INCLUDE ALL
TABLE demo.*;

Create the Extract group and the local Extract trail file and start the extract process.

add extract ext1 tranlog begin now

add exttrail /u01/gghome/dirdat/aa extract ext1

start extract ext1

Check the status of primary extract process.

info ext1

Create the secondary Extract (data pump) parameter file.

EXTRACT dpump1
USERID gguser@ggsource, PASSWORD gguser
RMTHOST ggtarget, MGRPORT 7810
RMTTRAIL /u01/gghome/dirdat/ab
DDL INCLUDE ALL
TABLE demo.*;

Create the data pump group and the remote Extract trail file and start the data pump process.

add extract dpump1 exttrailsource /u01/gghome/dirdat/aa

add rmttrail /u01/gghome/dirdat/ab extract dpump1

start extract dpump1

To check the status of data pump process.

info dpump1

TARGET (GGTARGET.DOYENSYS.COM):

Start GGSCI and log in to the database using the dblogin command.

./ggsci

dblogin userid gguser, password gguser

To check the manager status on target server.

view param mgr

info mgr

To create a checkpoint table in the target database.

add checkpointtable gguser.chkpt

info  checkpointtable gguser.chkpt

Create the Replicat parameter file.

REPLICAT rep1
USERID gguser@ggtarget, PASSWORD gguser
DDL INCLUDE ALL
DDLERROR DEFAULT IGNORE
MAP demo.*, TARGET demo.*;

Create and start the replicat process.

add replicat rep1 exttrail /u01/gghome/dirdat/ab checkpointtable gguser.chkpt

start replicat rep1

To check the status of replicat process.

info rep1

To check the replication from source to target database.

Create a sample table and run an insert operation on it.

Verify that the table and rows were replicated into the target database.
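A minimal test might look like this (the table name and data are illustrative only):

```sql
-- On GGSOURCE: create a table and insert a row.
CREATE TABLE demo.test_repl (id NUMBER PRIMARY KEY, txt VARCHAR2(20));
INSERT INTO demo.test_repl VALUES (1, 'HELLO');
COMMIT;

-- On GGTARGET, a few seconds later: the DDL and the row
-- should have been applied by rep1.
SELECT * FROM demo.test_repl;
```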



Create CREDENTIALSTORE for oracle Goldengate -12c

Create CREDENTIALSTORE for oracle Goldengate -12c

Description:- 

The ADD CREDENTIALSTORE command is used to create a credential store for Oracle GoldenGate. The credential store manages the user IDs and encrypted passwords that Oracle GoldenGate processes use to connect to the local database, and it avoids having to specify user names and clear-text passwords in the Oracle GoldenGate parameter files.

ADD CREDENTIALSTORE is a new command in Oracle GoldenGate 12c, and the default location of the credential store is the $GGHOME/dircrd directory of the GoldenGate software home.

Let's add a user to the credential store:

GGSCI (ggsource.doyensys.com) 3> DBLOGIN USERID gguser, PASSWORD xxxxx
Successfully logged into database.

GGSCI (ggsource.doyensys.com ) 4> ADD CREDENTIALSTORE

Credential store created in ./dircrd/.

GGSCI (ggsource.doyensys.com ) 5> ALTER CREDENTIALSTORE ADD USER gguser ALIAS gguser
Password:

Credential store in ./dircrd/ altered.

Now check the login with the newly created alias name:

GGSCI (ggsource.doyensys.com ) 6> dblogin useridalias gguser
Successfully logged into database.

$ cd /u01/app/gghome/dircrd
$ ls -lrth
total 4.0K
-rw-r----- 1 oracle dba 517 Feb 22 22:55 cwallet.sso

From the above we can see that the auto-login wallet has been created.

To verify the credentialstore information:

GGSCI (ggsource.doyensys.com) 1> INFO CREDENTIALSTORE

Reading from ./dircrd/:

Default domain: OracleGoldenGate

Alias: gguser
Userid: gguser

 
