At OOW 2019 we announced that you can have up to 3 user-created pluggable databases (PDBs) per container database (CDB) without having a Multitenant license. And in this blog post I’d like to demonstrate how you can save patching downtime with Oracle Multitenant.
The Concept
The idea this entire process is built on is very simple: when you create a fresh container database in the target (already patched!) environment, you don’t have to patch it. You only have to relocate the PDB from the source into the target CDB and then execute datapatch. As the PDB needs to be open while datapatch runs, the downtime is essentially just the time it takes to relocate the services. As the files stay in place and everything happens on the same server, the operation can complete very quickly. And as long as the source CDB stays intact, you have a simple and quick fallback, too.
See also how an upgrade with RELOCATE works (link in the “More Information” section below).
The Setup
First, I prepare my new home and patch it. In this case I will use a vanilla 19.3.0 EE home and a patched 19.5.0 EE home.
As the next step, I need to create a new database, CDB2 in this case. It is very important that it has at least the same components installed as the source. It can have more components, but having fewer will lead to problems. Hence, you should check which components are present in your source before you create the receiving CDB.
SQL> select comp_id from dba_registry order by 1;

COMP_ID
------------------------------
CATALOG
CATPROC
OLS
OWM
RAC
XDB
Workspace Manager is still standard, as it is hidden in another create script and not controlled by DBCA. RAC (Option Off) and XDB are both standard, too. Hence, the only component I need to take care of in my new container database is OLS (Oracle Label Security).
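If you’d like to script this check, here is a minimal sketch. It assumes the receiving CDB and the clonemypdb database link (both created below) already exist, and that the link’s user is allowed to query DBA_REGISTRY remotely; the c##borg user created later gets only the privileges needed for the relocate, so it would need an additional SELECT privilege for this:

-- Run in CDB2: components present in the source CDB1 but missing here.
-- The query should return no rows before you relocate.
SELECT comp_id FROM dba_registry@clonemypdb
MINUS
SELECT comp_id FROM dba_registry;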
I need to create an empty container database with no PDBs, as I will unplug from CDB1 and plug into CDB2.
And I should make sure to have the same (recommended!) or more components configured.
You may be more careful than I was and click “Include in PDBs” as well; otherwise the PDB$SEED will be without OLS in case you’d like to provision an additional new PDB in the new container database later on.
See also: https://mikedietrichde.com/2018/05/15/be-aware-of-database-options-selections-for-pdbs-in-dbca/
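If you prefer the command line over the DBCA GUI, a hedged sketch of creating such an empty CDB in silent mode could look like this. The template name and passwords are placeholders, exact parameters vary by release, and selecting non-default components such as OLS may still require a customized template or the GUI:

$ $ORACLE_HOME/bin/dbca -silent -createDatabase \
    -templateName General_Purpose.dbc \
    -gdbName CDB2 -sid CDB2 \
    -createAsContainerDatabase true \
    -numberOfPDBs 0 \
    -sysPassword oracle -systemPassword oracle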
The Patching Process
Actually, before I can patch, I need to relocate the PDB from CDB1 to CDB2. I expect a warning as I move it from 19.3.0 to 19.5.0.
In addition, there are some key requirements which need to be fulfilled in order to make this work smoothly:
- Local UNDO must be configured for both CDBs
- Both CDBs must have ARCHIVELOG mode ON
Tim Hall has compiled a complete list in case you’d like to browse through it.
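A quick sanity check for these two requirements, run in both CDBs:

-- Must return TRUE
SELECT property_value
  FROM database_properties
 WHERE property_name = 'LOCAL_UNDO_ENABLED';

-- Must return ARCHIVELOG
SELECT log_mode FROM v$database;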
Then I need to prepare a few things:
On CDB1 (the source), create a common user for the relocate operation and assign the necessary privileges:

CREATE USER c##borg IDENTIFIED BY oracle
   DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp
   CONTAINER=ALL;

GRANT CREATE SESSION, CREATE PLUGGABLE DATABASE, SYSOPER TO c##borg CONTAINER=ALL;

On CDB2 (the target), create the public database link for the relocate operation:

CREATE PUBLIC DATABASE LINK clonemypdb CONNECT TO c##borg IDENTIFIED BY oracle USING 'CDB1';

On CDB2, initiate the relocation:

CREATE PLUGGABLE DATABASE PDB_CLONE FROM PDB_CLONE@clonemypdb
   RELOCATE AVAILABILITY MAX
   FILE_NAME_CONVERT=('CDB1','CDB2');

On CDB2, open the PDB:

ALTER PLUGGABLE DATABASE PDB_CLONE OPEN;

The PDB will open with errors, in RESTRICTED mode only, because the patch levels are different.
Now I need to run datapatch. This will automatically detect the necessary changes in PDB_CLONE.
$ $ORACLE_HOME/OPatch/datapatch -verbose
SQL Patching tool version 19.5.0.0.0 Production on Thu Nov 28 22:51:13 2019
Copyright (c) 2012, 2019, Oracle. All rights reserved.
Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_4488_2019_11_28_22_51_13/sqlpatch_invocation.log
Connecting to database...OK
Gathering database info...done
Note: Datapatch will only apply or rollback SQL fixes for PDBs
that are in an open state, no patches will be applied to closed PDBs.
Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
(Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done
Current state of interim SQL patches:
No interim patches found
Current state of release update SQL patches:
Binary registry:
19.5.0.0.0 Release_Update 190909180549: Installed
PDB CDB$ROOT:
Applied 19.5.0.0.0 Release_Update 190909180549 successfully on 16-OCT-19 07.42.14.875068 PM
PDB PDB$SEED:
Applied 19.5.0.0.0 Release_Update 190909180549 successfully on 16-OCT-19 07.42.15.686615 PM
PDB PDB1:
Applied 19.5.0.0.0 Release_Update 190909180549 successfully on 16-OCT-19 07.42.15.686615 PM
PDB PDB_CLONE:
Applied 19.3.0.0.0 Release_Update 190410122720 successfully on 28-NOV-19 09.43.23.638656 AM
Adding patches to installation queue and performing prereq checks...done
Installation queue:
For the following PDBs: CDB$ROOT PDB$SEED PDB1
No interim patches need to be rolled back
No release update patches need to be installed
No interim patches need to be applied
For the following PDBs: PDB_CLONE
No interim patches need to be rolled back
Patch 30125133 (Database Release Update : 19.5.0.0.191015 (30125133)):
Apply from 19.3.0.0.0 Release_Update 190410122720 to 19.5.0.0.0 Release_Update 190909180549
No interim patches need to be applied
Installing patches...
Patch installation complete. Total patches installed: 1
Validating logfiles...done
Patch 30125133 apply (pdb PDB_CLONE): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/30125133/23151502/30125133_apply_CDB2_PDB_CLONE_2019Nov28_22_51_53.log (no errors)
SQL Patching tool complete on Thu Nov 28 22:52:40 2019
But still, in this situation the PDB_CLONE needs to be restarted to change from RESTRICTED to UNRESTRICTED mode. You can only achieve this change when you CLOSE and OPEN the PDB. Hence, no matter how smoothly the relocate worked, you’ll have to restart the PDB in this case. And this operation will disconnect all sessions, including those which may have been successfully forwarded.
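The restart itself is just a CLOSE and OPEN; afterwards, the RESTRICTED column in V$PDBS should show NO:

ALTER PLUGGABLE DATABASE PDB_CLONE CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE PDB_CLONE OPEN;

SELECT name, open_mode, restricted FROM v$pdbs WHERE name = 'PDB_CLONE';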
Any Alternative?
Once the RELOCATE command has completed, I need to start the PDB_CLONE in my destination CDB. But even if I issue an alter pluggable database pdb_clone open upgrade; I will still need to stop and restart the PDB afterwards. So even if the connection forwarding works fine, at this point there will be an interruption resulting in a disconnect.
I don’t see any other option than applying the binary patch in a rolling fashion in a RAC environment. This ensures that one PDB service is always up and accepting connections.
In my opinion, the RELOCATE operation does not give you much extra benefit. If I had started my entire CDB1 in the 19.5.0 environment, I would need to run datapatch as well, but I wouldn’t need a second CDB. The only benefit I can see here is that datapatch needs to be run in one single PDB only, instead of in the CDB$ROOT and the PDB$SEED as well.
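For comparison, a minimal sketch of that simpler approach, starting the existing CDB1 straight from the patched home. The 19.5.0 home path is just an example from this setup; you would also copy the spfile and password file into the new home and adjust /etc/oratab accordingly:

$ export ORACLE_HOME=/u01/app/oracle/product/19.5.0    # example path for the patched home
$ export ORACLE_SID=CDB1
$ $ORACLE_HOME/bin/sqlplus / as sysdba
SQL> STARTUP
SQL> EXIT
$ $ORACLE_HOME/OPatch/datapatch -verbose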
Another Pitfall
You should also make sure that both the source and the target CDB have exactly the same COMPATIBLE setting. This is important in case you’d like to have a fallback into the previous CDB: if COMPATIBLE in the newer CDB is higher, the PDB’s setting gets increased automatically.
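Checking this is a single query; run it in both CDBs, and the values must match exactly:

SELECT value FROM v$parameter WHERE name = 'compatible';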
See here:
- The COMPATIBLE Pitfall
- Why and how should you change COMPATIBLE
- Should you change COMPATIBLE when you apply an RU (or RUR)?
And always be aware that unplug/plug operations will implicitly raise COMPATIBLE without giving you any notice, and you will never be able to go backwards with your PDB then.
The Summary
You can use the RELOCATE operation to decrease downtime significantly when you move a PDB from one CDB to another. But as soon as the patch states differ, the PDB in the target CDB will open in RESTRICTED mode only. You need to execute datapatch to clear the situation, and then restart the PDB. The only saving this gets you is a faster datapatch run. But if you want real rolling patching with no downtime for a Release Update (RU), you will need to apply it in a RAC environment.
More Information
- Upgrade with RELOCATE – How does it work [or not]?
- Upgrade Testing – Online Clone a PDB to Oracle 19c
- Announcement at OOW19: You can have up to 3 user-created PDBs without the need for the Multitenant license
- Markus Flechtner’s blog post about RELOCATE
- Tim Hall’s blog post about RELOCATE
- Database Migration: From non-CDB to PDB – Overview
- Upgrade, plug in, convert (noncdb_to_pdb.sql) – 11.2.0.4 and higher
- Plugin, upgrade, convert (noncdb_to_pdb.sql) – 12.2.0.1 and higher
- Clone via NON$CDB, upgrade, convert – 12.1.0.2 and higher
–Mike
>> The only benefit I can see here is the fact that datapatch needs to be run in one single PDB only – instead in the CDB$ROOT and the PDB$SEED as well.
Hello, Mike. Thanks for sharing. We looked into this to minimize downtime but have come to the same conclusion: it doesn’t seem that we save a significant amount of time on CDB$ROOT and the PDB$SEED. Do you have any numbers?
Hi Aleksei,
no, as I have just “toy” databases. There, the additional runtime is insignificant.
Cheers,
Mike
How is the listener (or listeners) managed in this configuration/methodology? Is it necessary for each PDB to have its own listener?
No, I have always only one listener.
Cheers,
Mike
Hi Mike,
I was comparing the runtime of the patch upgrade from this article which was less than 1 min versus the release upgrade from https://mikedietrichde.com/2020/01/30/upgrade-pdbs-with-relocate-how-does-it-work/ which shows ~26 mins.
1) Does “relocate” make sense for patches versus major release upgrades in order to reduce downtime? Even with a patch it looks like we still need to restart the PDB.
2) For a major release upgrade, would DBMS_ROLLING with Active Data Guard make better sense to reduce downtime, if we are fine with the downtime of a switchover?
Hi Shirin,
at first, “patch” and “upgrade” are two different things. Patching is usually faster than upgrading, hence the divergence between 1 minute and 26 minutes.
I wouldn’t use RELOCATE at all but instead REFRESHABLE PDBs:
https://mikedietrichde.com/2020/05/05/upgrade-testing-with-a-refreshable-pdb-does-this-work/
For the Transient Logical Standby, with DBMS_ROLLING (requires an ADG license) or without (does not require extra licenses), you can typically decrease the downtime for a database upgrade to less than 5 minutes, in many cases even less.
Cheers,
Mike
Hi Mike. Excuse me for asking something not quite relevant here. Do you know if there’s any way to improve the performance of applying binary patches? On my 2-node RAC, well tuned, with plenty of CPUs and memory, when applying the April 2021 patchset, the database binary patch took 13 minutes, and the GI binary patch took 19 minutes, on each node. How can we speed this up? The logs don’t tell me any details meaningful to me. Thank you!
Hi Yong,
we think that we have an idea: use opatch directly instead of opatchauto. It seems that one issue is that opatchauto automatically calls datapatch after every node’s patch activity. So once opatchauto is done with node 1, it will check the local inventory, only to realize that it can’t execute datapatch within the database yet. Then it goes on to node 2.
Apart from that, please open an SR and have Support check the logs. I have seen several messages from people complaining that up to 19.10 all worked fine but 19.11 or 19.12 take twice as long in RAC environments. There are even bugs opened already for it.
Thanks,
Mike