Yesterday, Daniel pinged me to discuss an upgrade testing scenario I had briefly evaluated by myself a while ago. I can’t remember why I gave up back then. But Daniel had some great ideas and had already tried a lot in our OCI cloud. We want to use a refreshable PDB, keep it up to date, and then upgrade it. The idea of this approach: whereas a CLONE or RELOCATE uses the initial command as the marker, a REFRESH allows me to be much more in sync. So the question is: Upgrade Testing with a Refreshable PDB – does this work?
And I’m happy that there are so many sheep pictures on Unsplash 🙂
A clarification upfront
Even though my blog post is headlined with “Testing”, I didn’t mean to use this for testing only. Of course, you can (and actually, you should) use this approach for your production upgrades as well. The idea is really:
- Test it first
- Then use the approach for your production upgrade
I just realized the other day that my wording is a bit misleading.
As usual, let me start with a quick description of my setup. I have a lower-version Oracle CDB with a PDB, and another CDB on Oracle 19.7.0. I will create a refreshable PDB from the lower-version CDB into the higher-version CDB. Then I would like to refresh it several times and see whether this works. Once I’m good to go, I’d like to STARTUP UPGRADE the PDB, and upgrade it to match the CDB’s version. Sounds strange? Let’s see.
I can trigger the REFRESH operation as often as I like. But as soon as I open the PDB with STARTUP UPGRADE, there is no way back. And fallback strategies, also for testing, such as FLASHBACK PLUGGABLE DATABASE don’t work here because my PDB hasn’t been upgraded to 19.7.0 yet.
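You can watch the refresh state of such a PDB from the target CDB root. A quick sketch, using the standard DBA_PDBS dictionary view (available since 12.2):

```sql
-- Run in the target CDB root: shows whether the PDB is still refreshable,
-- how it refreshes, and the SCN of the last successful refresh.
SELECT pdb_name, status, refresh_mode, refresh_interval, last_refresh_scn
  FROM dba_pdbs
 WHERE pdb_name = 'PDB1';
```

As long as REFRESH_MODE is not NONE, the PDB can still be refreshed – and hence has not yet passed the point of no return.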
But I may be able to decrease the downtime significantly: I trigger the upgrade at a point in time AFTER the cloning operation has finished, and not at the point when I started the cloning, potentially hours earlier.
Others, such as Franck Pachot with his Poor-Man’s Standby, have written a lot about refreshable PDBs. But I’d like to go with a very simplified approach:
- Create a common user for the refreshable PDB clone operation and assign the necessary privileges:
CREATE USER c##borg IDENTIFIED BY oracle DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp CONTAINER=ALL;
GRANT CREATE SESSION, CREATE PLUGGABLE DATABASE, SYSOPER TO c##borg CONTAINER=ALL;
- Create the public database link for the clone operation:
CREATE public DATABASE link clonemypdb CONNECT TO c##borg IDENTIFIED BY oracle USING 'CDB1';
- Initiate the refreshable PDB clone:
CREATE PLUGGABLE DATABASE PDB1 FROM PDB1@clonemypdb REFRESH MODE MANUAL file_name_convert=('CDB1','CDB2');
- Refresh once, or repeat several times:
ALTER PLUGGABLE DATABASE PDB1 REFRESH;
- End the refresh mode:
ALTER PLUGGABLE DATABASE PDB1 REFRESH MODE NONE;
- Open the PDB:
ALTER PLUGGABLE DATABASE PDB1 OPEN UPGRADE;
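As a quick sanity check after the last step: a PDB opened with OPEN UPGRADE should report MIGRATE as its open mode. A sketch using the standard V$PDBS view:

```sql
-- Run in the CDB root after ALTER PLUGGABLE DATABASE ... OPEN UPGRADE;
-- the PDB should show OPEN_MODE = 'MIGRATE'.
SELECT name, open_mode, restricted FROM v$pdbs WHERE name = 'PDB1';
```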
Yes, magically this works with an automatic refresh as well. Just exchange:
CREATE PLUGGABLE DATABASE PDB1 FROM PDB1@clonemypdb REFRESH MODE MANUAL file_name_convert=('CDB1','CDB2');
with:
CREATE PLUGGABLE DATABASE PDB1 FROM PDB1@clonemypdb REFRESH MODE EVERY 1 MINUTES file_name_convert=('CDB1','CDB2');
in the above step-by-step path and see what happens (snippet from the alert.log):
PDB3(3):alter pluggable database refresh
2020-05-05T09:22:15.818637+02:00
Applying media recovery for pdb-4099 from SCN 1303052 to SCN 1303086
Remote log information: count-1
thr-1,seq-14,logfile-/u02/fast_recovery_area/CDB1/CDB1/foreign_archivelog/PDB3/2020_05_05/o1_mf_1_14_3210936238_.arc,los-1201204,nxs-18446744073709551615,maxblks-0
PDB3(3):Media Recovery Start
2020-05-05T09:22:15.819146+02:00
PDB3(3):Serial Media Recovery started
PDB3(3):max_pdb is 3
2020-05-05T09:22:15.869461+02:00
PDB3(3):Media Recovery Log /u02/fast_recovery_area/CDB1/CDB1/foreign_archivelog/PDB3/2020_05_05/o1_mf_1_14_3210936238_.arc
2020-05-05T09:22:16.032668+02:00
PDB3(3):Incomplete Recovery applied until change 1303086 time 05/05/2020 09:22:14
2020-05-05T09:22:16.042738+02:00
PDB3(3):Media Recovery Complete (CDB2)
PDB3(3):Completed: alter pluggable database refresh
2020-05-05T09:23:14.881440+02:00
PDB3(3):alter pluggable database refresh
2020-05-05T09:23:16.234524+02:00
Applying media recovery for pdb-4099 from SCN 1303086 to SCN 1303696
Remote log information: count-1
thr-1,seq-14,logfile-/u02/fast_recovery_area/CDB1/CDB1/foreign_archivelog/PDB3/2020_05_05/o1_mf_1_14_3210936238_.arc,los-1201204,nxs-18446744073709551615,maxblks-0
PDB3(3):Media Recovery Start
2020-05-05T09:23:16.235308+02:00
PDB3(3):Serial Media Recovery started
PDB3(3):max_pdb is 3
2020-05-05T09:23:16.298760+02:00
PDB3(3):Media Recovery Log /u02/fast_recovery_area/CDB1/CDB1/foreign_archivelog/PDB3/2020_05_05/o1_mf_1_14_3210936238_.arc
2020-05-05T09:23:16.477247+02:00
PDB3(3):Incomplete Recovery applied until change 1303696 time 05/05/2020 09:23:14
2020-05-05T09:23:16.484314+02:00
PDB3(3):Media Recovery Complete (CDB2)
PDB3(3):Completed: alter pluggable database refresh
In order to avoid the below error, make sure your CDBs are in ARCHIVELOG mode:
2020-05-05T09:54:14.842820+02:00
PDB3(3):alter pluggable database refresh
Applying media recovery for pdb-4099 from SCN 1303729 to SCN 1306371
Remote log information: count-1
thr-1,seq-17,logfile-/u02/fast_recovery_area/CDB1/CDB1/foreign_archivelog/PDB3/2020_05_05/o1_mf_1_17_3210936238_.arc,los-1305794,nxs-18446744073709551615,maxblks-0
PDB3(3):Media Recovery Start
2020-05-05T09:54:14.985606+02:00
PDB3(3):Serial Media Recovery started
PDB3(3):max_pdb is 3
PDB3(3):ORA-65345 signalled during: alter pluggable database refresh...
See MOS Note: 2613419.1 – ORA-65345 during hot clone of pluggable database for more information.
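As a sketch, checking and enabling ARCHIVELOG mode on the source CDB would look like this (standard commands; note that enabling it requires a short instance restart):

```sql
-- Check the current mode first
SELECT log_mode FROM v$database;

-- If it returns NOARCHIVELOG, enable it (brief downtime required):
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```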
Upgrade the cloned PDB
As the next step, I will upgrade my cloned PDB. I explained and showed this in detail already here:
Of course, I can’t revert. If I’d like to test again, I need to repeat the entire procedure. But this looks even smarter than a simple clone. And of course, the upgrade completes successfully.
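For completeness, the command-line upgrade boils down to something like this sketch – using the dbupgrade wrapper around catctl.pl, followed by a recompile via catcon.pl; exact options depend on your release and environment:

```shell
# Environment must point to the target 19c home; upgrade only PDB1
$ORACLE_HOME/bin/dbupgrade -c "PDB1"

# Afterwards, recompile invalid objects in the PDB
# (utlrp.sql, run via catcon.pl against PDB1)
$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl \
  -c "PDB1" -d $ORACLE_HOME/rdbms/admin -b utlrp_pdb1 -- utlrp.sql
```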
Kudos to Daniel for bringing this topic up.
This is a pretty smart way. You clone your PDB. But instead of waiting until the clone is finished before you can upgrade – at which point your PDB may already have been hours old – you can refresh it as long as needed. Hence, YOU determine the point in time when you trigger your database upgrade, and not the cloning process.
In addition, I think it is soon time for a summary blog post with an overview, especially of the different timelines for each approach, as well as the pros and cons of each technique. Furthermore, Daniel and I will publish some blog posts about upgrades in the cloud. There you will see how helpful this refresh can be, especially in cases where your database is larger and/or the cloning time is unpredictable.
Thanks to Andrea Cremonesi who pointed me to this important fact:
If you are not going to Oracle 19c but to Oracle 18c instead, you will need patch 28374604 in addition.
Well, and as it turns out, there are more pitfalls.
Daniel just sent me this one he hit. If you see the error below during CREATE PLUGGABLE DATABASE:
ERROR at line 1:
ORA-19505: failed to identify file "+DATA/CDB1_FRA1H3/A495BBFD2B921151E0532301000A47E5/DATAFILE/users.275.1039261905"
ORA-15173: entry 'CDB1_FRA1H3' does not exist in directory '/'
you may need to apply the fix for:
HOT CLONE OF PDB FAILS WITH ORA-15001 – bug 29469563
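To verify whether such a one-off patch is already installed in your Oracle home, the standard OPatch inventory listing helps. A sketch:

```shell
# List all one-off patches installed in this Oracle home
$ORACLE_HOME/OPatch/opatch lspatches

# Or check for a specific fix, e.g. bug 29469563
$ORACLE_HOME/OPatch/opatch lspatches | grep 29469563
```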
TDE – Transparent Data Encryption
My dear PM colleague, Peter Wahl, created 3 excellent videos on how to achieve the above with Transparent Data Encryption (TDE).
- TDE Academy #1:
Remote Clone an encrypted PDB, and upgrade from Oracle 18.10 to Oracle 19.7
- TDE Academy #2:
Remote Clone an encrypted PDB, and upgrade from an earlier Oracle release to Oracle 19.7.0
- TDE Academy #3:
Remote Clone an encrypted PDB, and upgrade from an earlier Oracle release to Oracle 19.7.0
Further Information and Links
- Upgrade PDBs with RELOCATE – How does it work?
- Cloning with Pluggable Databases in Oracle 18c
- Online Clone a PDB – and synch with Oracle GoldenGate
- Cloning a PDB from Oracle 12.1 to Oracle 12.2 – Shared Undo Case
- MOS Note: 2613419.1 – ORA-65345 during hot clone of pluggable database
I’ve been using this refreshable PDB method to upgrade some 12.2 PDBs to 19c over the last months, always without problems. It is, IMO, the easiest method to upgrade from 12.2 to 19c. You can test the upgrade, drop the PDB, test again, drop again, test and time it – and finally do it in production. The most important point is to run preupgrade.jar on the source and do the necessary fixups.
I fully agree. A colleague phrased it the other day as “the most underestimated feature in 19c”.
PS: You have a great blog!
There is a problem with refreshable PDBs if you are using a common user instead of SYS/SYSTEM.
Since it is a common CDB user with the same password, I did not specify the “connect to … identified by …” clause in the database link. The cloning part goes fine. But when it comes to the refresh, the refresh job is owned by SYS. If the DB link is private, SYS cannot access it and the refresh fails. If the DB link is public, since I did not specify the “connect to” clause, SYS tries to connect to the source DB as SYS instead of as the common user. We would like to avoid passwords in our automation. That’s why we chose a common user with the same password across all DBs, and we use wallets to connect. But with DB links, wallets cannot be passed. Or is there some way to pass the wallet credentials with DB links? Could you help us with this scenario?
This is good input – and I was not aware of it.
Would you mind dropping me an email please – and please not from your GMAIL account but from your company email? I need to be able to identify the customer I’m working with. My mail address is mike.dietrich —– at —– oracle.com
Hi Mike, great blog. It answered a lot of questions for me. Are there any more pointers to the back-end mechanism of how datafiles are copied with hot clone? Some notes in MetaLink indicate the files are copied with backup/restore (Doc ID 2613419.1), but does it provide the normal controls of backup/restore such as section size and compressed backup sets? The only thing I see is the PARALLEL clause as in https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/CREATE-PLUGGABLE-DATABASE.html#GUID-F2DBA8DD-EEA8-4BB7-A07F-78DC04DB1FFC. This is an important consideration when dealing with VLDB migrations, where refreshable mode makes a lot of sense to control the cutover time. Thanks.
It gets copied with an internal backup/restore mechanism over the DB link. And there is no control. PARALLEL is chosen automatically based on CPU_COUNT, as far as I remember.
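So the only knob available at clone time is the PARALLEL clause mentioned above. A sketch, following the CREATE PLUGGABLE DATABASE syntax from the documentation linked in the question:

```sql
-- Request 8 parallel processes for the file copy instead of the
-- automatically chosen degree of parallelism
CREATE PLUGGABLE DATABASE PDB1 FROM PDB1@clonemypdb
  REFRESH MODE MANUAL
  PARALLEL 8
  file_name_convert=('CDB1','CDB2');
```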
Hi Mike, thanks for this nice article. We have one problem when using a refreshable PDB. We are limited to refreshing only once a day, and the refresh takes ages. I know that this is due to the number of archivelogs created over the day. But is there possibly a way to speed up the process? Also, we would like to somehow monitor the state to be able to estimate the refresh time. Thanks in advance for your response.
I need to dig a bit deeper into the reporting capabilities. I guess V$SESSION_LONGOPS would help, but the internal names used are often hard to relate correctly. And I think the only option would be to have the refresh happen more often. Furthermore, I’m not sure whether this is really expected behavior, and what the bottleneck is. It could be the network, it could be the parallel capabilities – or the apply due to the amount of redo.
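A starting point for such monitoring could be a query like this against V$SESSION_LONGOPS – a sketch with standard columns; the OPNAME values used internally for refresh operations may vary:

```sql
-- Show running long operations with progress and estimated time
-- remaining (in seconds)
SELECT sid, opname, sofar, totalwork,
       ROUND(sofar / totalwork * 100, 1) AS pct_done,
       time_remaining
  FROM v$session_longops
 WHERE totalwork > 0
   AND sofar <> totalwork;
```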
I guess, this will require a bit deeper analysis.
Did you open an SR for it?
Thanks for your response. From my observation, it seems the bottleneck can be storage. We are running a DBaaS VM in OCI on a Standard2.24 shape. So from an OCPU and memory perspective, I don’t see a problem, which I also confirmed by monitoring on the OS. Basically, the only thing I can tell is that IO waits went up to ~40% several times.
Yes, I also opened an SR, but the answer was that there is no other way than to increase the frequency of refreshes. That is of course true, but I was trying to find out if this is really the only option. From my previous experience, the answer in an SR is not always 100% accurate, and sometimes there is another option. But in this case, it seems there is no other option.
In this case, I’d guess the answer by Support is correct. I don’t have another workaround or hack.
I need to correct my last comment about IO utilization. It was not ~40% as I mentioned – that was from a different scenario. In the case of the PDB refresh, there was no significant utilization of any of the resources. Sorry for the confusion.
Thanks for the great article. Please shed some light on the following concerns related to the “Refreshable PDB” feature. I am assuming that this feature is also known as “Refreshable PDB Switchover”.
When I have multiple PDBs hosted on a CDB, current Data Guard supports only CDB-level failover, as per my understanding. In the event of a single PDB experiencing an outage, we may have to perform the failover to standby, which will impact all the hosted PDBs.
With the refreshable PDB feature, it is possible to propagate changes to another CDB instance with a PDB replica. But there seems to be a lag between the two PDBs due to the refresh mode setting.
1. How do I overcome this lag and obtain the same level of protection as Data Guard with refreshable PDBs?
2. Is the refreshable PDB feature part of Oracle 19c general availability, or is it limited to Oracle Exadata / Oracle Engineered Systems / Oracle Cloud services?
Based on this URL, it seems it is not available in Enterprise Edition (Consolidation – Table 1-2).
Do you have any documents or URLs to understand this option further, and how to achieve a higher protection level?
I am just baffled that Multitenant is such a hot topic and the push is to consolidate, but once consolidated, the whole set of PDBs has downtime just because of one or two outages at the PDB level.
this is not the same as Refreshable PDB Switchover. As usual, Tim Hall explains this very well:
Hence, I can’t answer your standby-related questions.
Up to and including Oracle 19c, Data Guard works on the CDB level, not on the PDB level.