A few weeks ago I published two blog posts, one about Online Cloning and one about Relocate for PDBs. As in many cases, the blog posts got triggered by actual customer questions – and of course out of my own curiosity. In addition, we received the ultimate question: Can I Online Clone a PDB and synch with Oracle GoldenGate? And while my answer would have been “Of course!” a few doubts came up. But read below …
Online Clone versus Relocate of PDBs
I received very good ideas and feedback from you regarding Online Cloning and Relocate for PDBs. To summarize briefly: the majority of you would favor Online Cloning over Relocate for PDBs for upgrade (testing), as the first technique keeps your source PDB alive. In the Relocation case, the source gets terminated once you issue a STARTUP UPGRADE on the relocated PDB.
And I fully agree with your points. So thanks to Jeannette and Peter and the others who explained their point of view and experiences.
Real Minimal Downtime
Both Online Cloning and Relocate for PDBs will cause upgrade downtime. In both cases, when the CDBs are on different versions, you will need to upgrade your PDB after the operation has finished. Please be aware of the silent COMPATIBLE change in Multitenant in all of these cases.
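Before you clone into a higher-version CDB, it is worth comparing the COMPATIBLE setting on both sides, since the cloned PDB silently inherits the target's value and you lose the way back. A quick check (assuming you have access to query v$parameter in both CDBs):

```sql
-- Run this in both CDB1 (source) and CDB2 (target) before cloning.
-- A higher value on the target means the cloned PDB will silently
-- inherit that COMPATIBLE setting once it is opened there.
SELECT name, value FROM v$parameter WHERE name = 'compatible';
```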
But what if you need really minimal downtime for such a scenario? Of course, you could use a Transient Logical Standby setup using your physical standby. Or you could simply use Oracle GoldenGate if you have or plan to get an OGG license.
In this case it should be fairly simple to synch between source and destination PDB, shouldn't it? But you need to know the exact SCN at which the cloning process finished.
The Cloning SCN
I reuse my simple example from the Online Cloning blog post. And then I check the alert.log of both CDBs.
On the source side, CDB1, I get only one additional stamp into the alert.log:
2020-04-13T22:24:53.472382+02:00 PDB3(3): AUDSYS.AUD$UNIFIED (SQL_TEXT) - CLOB populated
Of course, on the receiving side, CDB2, there’s more to tell:
2020-04-13T22:24:51.043427+02:00
CREATE PLUGGABLE DATABASE PDB3 FROM PDB3@clonemypdb file_name_convert=('CDB1','CDB2')
2020-04-13T22:24:55.819212+02:00
PDB3(3):Endian type of dictionary set to little
****************************************************************
Pluggable Database PDB3 with pdb id - 3 is created as UNUSABLE.
If any errors are encountered before the pdb is marked as NEW,
then the pdb must be dropped
local undo-1, localundoscn-0x0000000000000107
****************************************************************
2020-04-13T22:24:57.326460+02:00
Applying media recovery for pdb-4099 from SCN 1293822 to SCN 1293852
Remote log information: count-1
thr-1,seq-14,logfile-/u02/fast_recovery_area/CDB1/CDB1/foreign_archivelog/PDB3/2020_04_13/o1_mf_1_14_3210936238_.arc,los-1201204,nxs-18446744073709551615,maxblks-0
PDB3(3):Media Recovery Start
2020-04-13T22:24:57.333484+02:00
PDB3(3):Serial Media Recovery started
PDB3(3):max_pdb is 3
2020-04-13T22:24:57.409144+02:00
PDB3(3):Media Recovery Log /u02/fast_recovery_area/CDB1/CDB1/foreign_archivelog/PDB3/2020_04_13/o1_mf_1_14_3210936238_.arc
2020-04-13T22:24:57.531943+02:00
PDB3(3):Incomplete Recovery applied until change 1293852 time 04/13/2020 22:24:56
2020-04-13T22:24:57.540787+02:00
PDB3(3):Media Recovery Complete (CDB2)
PDB3(3):Autotune of undo retention is turned on.
PDB3(3):Undo initialization recovery: err:0 start: 336909 end: 336928 diff: 19 ms (0.0 seconds)
PDB3(3): Successfully onlined Undo Tablespace 2.
PDB3(3):Undo initialization online undo segments: err:0 start: 336928 end: 336937 diff: 9 ms (0.0 seconds)
PDB3(3):Undo initialization finished serial:0 start:336909 end:336938 diff:29 ms (0.0 seconds)
PDB3(3):Database Characterset for PDB3 is AL32UTF8
PDB3(3):JIT: pid 4698 requesting stop
PDB3(3):Buffer Cache flush started: 3
PDB3(3):Buffer Cache flush finished: 3
Completed: CREATE PLUGGABLE DATABASE PDB3 FROM PDB3@clonemypdb file_name_convert=('CDB1','CDB2')
Note the SCNs in the media recovery line above (from SCN 1293822 to SCN 1293852). These are the SCNs from the source side. This SCN gets overwritten by the SCN of the target CDB2 as soon as I STARTUP UPGRADE the cloned PDB.
This is what I get in CDB2 after the cloning has been finished:
SELECT current_scn FROM V$DATABASE;

CURRENT_SCN
-----------
    4588431

SQL> alter session set container=PDB3;

Session altered.

SQL> alter pluggable database pdb3 open upgrade;

Pluggable database altered.

SQL> SELECT current_scn FROM V$DATABASE;

CURRENT_SCN
-----------
    4588967
My dear colleague John McHugh explained this to us. In brief, an implicit BEGIN and END SCN are set during the clone process to render the cloned image consistent as of the END SCN (see the alert.log excerpt above: 1293852 is the END SCN). This does not reflect the "target" SCN, but we don't need the target SCN. The END SCN is the important one. It is the SCN from the source which appears in the alert.log of the target CDB2. This is the one we'll pass on to Oracle GoldenGate.
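If you don't want to scan the alert.log by eye, the END SCN can be pulled out with a quick grep over the target CDB's alert.log. This is a sketch; the alert.log path below is an assumption, so adjust it to your diagnostic destination:

```shell
# Path is an assumption - adjust to your own diagnostic dest.
ALERT_LOG=/u01/app/oracle/diag/rdbms/cdb2/CDB2/trace/alert_CDB2.log

# The media recovery line during the clone looks like:
#   Applying media recovery for pdb-4099 from SCN 1293822 to SCN 1293852
# The last number is the END SCN we pass on to GoldenGate.
grep "Applying media recovery" "$ALERT_LOG" | tail -1 | sed 's/.*to SCN //'
```

For the excerpt above, this prints 1293852.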
Upgrade the cloned PDB
As the next step, I will upgrade my cloned PDB. I explained and showed this already in detail here:
Synchronization with Oracle GoldenGate
And once my PDB has been upgraded to Oracle 19.7.0 as well, I can start the replicat with Oracle GoldenGate. I assume that my production workload has continued on the source side in the meantime. It is important that the extract starts early enough to capture all necessary transactions.
Many thanks to Nick Wagner, our OGG Product Management Director. As I'm not an OGG expert, I asked Nick. He explained to me that the key command here is START REPLICAT [name] ATCSN 1293852. In this case, the alert.log tells us that the END SCN is 1293852. This SCN written to the alert.log was the next SCN in line to be recovered, so we want to have OGG start AT that SCN value.
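Put together, the GGSCI side of the cutover could look like this. The replicat name REPPDB3 is made up for illustration; only the ATCSN clause and the SCN value come from the example above:

```text
GGSCI> START REPLICAT REPPDB3 ATCSN 1293852
GGSCI> INFO REPLICAT REPPDB3
```

ATCSN starts applying with the transaction at exactly that CSN, whereas AFTERCSN would skip it, which is why Nick's advice is to start AT the END SCN here.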
Now you wait until your PDB is in synch with production. And once you get your additional downtime window, you can switch the application over. I guess, if I prepare everything correctly, I can reverse the Oracle GoldenGate setup before I let the application start on the 19.7.0 PDB, and send my changes back to the old source. This way I'd ensure a seamless fallback.
More Information and Links
- Online Clone a PDB for upgrade testing
- Relocate a PDB – How does this work?
- Oracle Multitenant – Be aware of the silent compatible change
- Transient Logical Standby – Rolling Upgrades – Overview
I am not too familiar with GoldenGate (GG), but why do a clone first and then sync with GG? Can't you do everything with GG with zero downtime?
Because OGG is the synching solution on top, not the tool to duplicate your 9 TB PDB 😉
Thanks for the recent webinars, and for all the content you put here… it’s simply awesome!
One migration+GG related question: Do you have any suggestion regarding migrating data from 10.2.0.3 to 12c using GoldenGate? Is there any limitation that I need to be mindful of, such as data types not supported by 10g GoldenGate? I’ve been searching online and couldn’t find anything solid, so I opened an SR as well, but I was wondering if you had any experience with this at all.
I'm not an OGG expert – but I know that there are some limitations with OGG in older versions. Please check with Support, as you will need a support case anyway to get access to the OGG version working with your 10.2.0.3 database on your OS.