This is my next blog post about Database Migration from non-CDB to PDB – Plug in, upgrade, convert. But what is different from the previous one? And why is it necessary? Since Oracle Database 12.2.0.1, you can plug in a non-CDB first, then upgrade and convert it. And I’ll show you this technique here.

Photo by Dan Freeman on Unsplash
High Level Overview
Endianness change possible: No
Source database versions: Oracle 12.2.0.1 or newer (or 12.1.0.2 when the CDB has shared UNDO, or with a patch adding the UNDO commands to DBMS_PDB.DESCRIBE in 12.1.0.2)
Characteristic: Plugin into CDB first
Upgrade necessary: Yes, after plugin
Downtime: Plugin, copy (optional), upgrade and noncdb_to_pdb.sql
Minimal downtime option(s): Oracle GoldenGate
Process overview: Plug in the non-CDB as a PDB first, then upgrade and finally convert it to a PDB
Fallback after plugin: Data Pump – optional: Oracle GoldenGate
Database Migration from non-CDB to PDB – Plug in, upgrade, convert
Since Oracle 12.2.0.1, you have the freedom to plug in a non-CDB first, and then upgrade and adjust it. It is possible with 12.1.0.2 as well but requires shared UNDO in the receiving CDB. And I hadn’t tested it (actually I have tested it now – find the results at the end under Trying with a 12.1.0.2 non-CDB). I still prefer the other solution of upgrading first, then plugging in, as I describe in this article. There’s a simple reason for me here: there is a proven fallback for the upgrade case, whereas there’s no easy fallback once you have plugged in. But I leave it to you to choose your preferred option.
I can’t do this exercise with my 11.2.0.4 databases, as DBMS_PDB.DESCRIBE does not exist in this release. And even Oracle 12.1.0.2 does not support the following operation unless I create my CDB with shared UNDO.
Why “upgrade”?
Your non-CDB has a data dictionary. And this data dictionary must be adjusted to match the dictionary version of the receiving CDB. Whether you do this first or after the plugin is up to you. But here, I will plug in first, then upgrade my 12.2.0.1 non-CDB databases from our Hands-On Lab. And of course, I will use the AutoUpgrade. If you plan to do this exercise without the new AutoUpgrade, you can either follow my blog post or one of the use case scenarios in the Oracle Documentation.
Plugin Check Operation
For the plugin operation, I need to create an XML manifest file. Once the XML manifest file has been created, I can connect to my CDB and plug in the DB12 database.
As I received a lot of ERRORs in my other blog post from DB12 due to the existence of ORDIM and JAVAVM, I remove these components upfront to avoid getting the same issues again. I will cover this entire topic in a separate blog post about the typical pitfalls.
exec DBMS_PDB.DESCRIBE('/home/oracle/DB12.xml');
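Keep in mind that DBMS_PDB.DESCRIBE requires the non-CDB to be open read-only. A sketch of the full sequence on the source, assuming a clean shutdown is acceptable in your environment:

```sql
-- Run in the source non-CDB: DESCRIBE requires a read-only open
shutdown immediate
startup open read only
exec DBMS_PDB.DESCRIBE('/home/oracle/DB12.xml');
shutdown immediate
```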
Once I created the XML manifest file, I need to execute the recommended plugin-compatibility check in my CDB.
set serveroutput on

DECLARE
  compatible CONSTANT VARCHAR2(3) :=
    CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
           pdb_descr_file => '/home/oracle/DB12.xml',
           pdb_name       => 'DB12')
    WHEN TRUE THEN 'YES'
    ELSE 'NO'
    END;
BEGIN
  DBMS_OUTPUT.PUT_LINE('Is the future PDB compatible? ==> ' || compatible);
END;
/
A typical result is this here in the lab:
SQL> start db12_compatible.sql
Is the future PDB compatible? ==> NO

PL/SQL procedure successfully completed.
Why is it not compatible?
Let us check PDB_PLUG_IN_VIOLATIONS for the reason why DB12 is not compatible to be plugged into CDB2. I use this query to check what I need to fix:
set pages 2000
column message format a50
column status format a9
column type format a9
column con_id format 9
column name format a8

select con_id, name, type, message, status
  from PDB_PLUG_IN_VIOLATIONS
 where status<>'RESOLVED'
 order by name, time;
And the result is interesting:
CON_ID NAME     TYPE      MESSAGE                                            STATUS
------ -------- --------- -------------------------------------------------- ---------
     1 DB12     WARNING   PDB plugged in is a non-CDB, requires              PENDING
                          noncdb_to_pdb.sql be run.
     1 DB12     ERROR     PDB's version does not match CDB's version: PDB's  PENDING
                          version 12.2.0.1.0. CDB's version 19.0.0.0.0.
     1 DB12     WARNING   CDB parameter sga_target mismatch: Previous 1200M  PENDING
                          Current 1504M
     1 DB12     WARNING   CDB parameter compatible mismatch: Previous        PENDING
                          '12.2.0' Current '19.0.0'
     1 DB12     WARNING   CDB parameter _fix_control mismatch: Previous      PENDING
                          '25476149:1', '23249829:1', '26019148:1',
                          '26986173:1', '27466597:1', '20107874:1',
                          '27321179:1', '25120742:1', '26536320:1',
                          '26423085:1', '28072567:1', '25405100:1'
                          Current NULL
     1 DB12     WARNING   CDB parameter pga_aggregate_target mismatch:       PENDING
                          Previous 120M Current 200M
     1 DB12     ERROR     DBRU bundle patch 190416 (DATABASE APR 2019        PENDING
                          RELEASE UPDATE 12.2.0.1.190416): Not installed in
                          the CDB but installed in the PDB
     1 DB12     ERROR     '19.3.0.0.0 Release_Update 1904101227' is          PENDING
                          installed in the CDB but no release updates are
                          installed in the PDB
The ERRORs are quite disturbing, and some of them – the patch errors – make no real sense to me. Of course a 12.2.0.1 RU can’t be installed in a 19c CDB (hopefully!). Funny to read the next ERROR telling me that the CDB has an RU (which is correct) but the future PDB has none?!
Well, let me ignore these and focus on the first ERROR:
“PDB’s version does not match CDB’s version: PDB’s version 12.2.0.1.0. CDB’s version 19.0.0.0.0.”
It is misleading to log an error here – even though ERROR may technically be correct. But in fact, the documentation promises that I can plug in this non-CDB – I just need to upgrade it afterwards.
Plugin Operation
I will simply ignore the above ERROR – and pretend that I know better. Let me see whether I can plug in the non-CDB as a new PDB:

SQL> create pluggable database DB12 using '/home/oracle/DB12.xml'
     file_name_convert=('DB12','CDB2/DB12');

Pluggable database created.
This looks good. Let me do a quick check:
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 DB12 MOUNTED
I will open the PDB, but of course it will open RESTRICTED only, as it still needs to be upgraded and assimilated as a “real” PDB.
I did not specify any command option such as COPY or NOCOPY with the create pluggable database command. Hence, the default – COPY – is used. This has the advantage of keeping my source untouched in case anything fails during the plugin operation. But the downside clearly is that I need twice as much disk space. And in case the database is large, the COPY operation may take longer.
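If disk space or copy time is a concern, you could plug in with NOCOPY instead and reuse the original datafiles in place – a sketch only, and keep in mind that with NOCOPY there is no untouched source left to fall back to:

```sql
-- Sketch: reuse the source datafiles in place (no fallback copy remains!)
create pluggable database DB12 using '/home/oracle/DB12.xml'
  nocopy
  tempfile reuse;
```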
SQL> alter pluggable database all open;
Warning: PDB altered with errors.
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 DB12 MIGRATE YES
Please note that it automatically gets opened in MIGRATE mode – which is equivalent to STARTUP UPGRADE.
Can I run noncdb_to_pdb.sql at this stage already?
No, of course not. The new PDB hasn’t been upgraded yet. You will receive this error – and the session gets terminated:

BEGIN
*
ERROR at line 1:
ORA-04023: Object SYS.STANDARD could not be validated or authorized
Upgrade the PDB after plug in
The new PDB needs to be upgraded first. Note that the new AutoUpgrade doesn’t support upgrading a single PDB at the moment, so it can’t be used when you do the steps in this order.
Hence, I’m running my upgrade with the simple command:
dbupgrade -l /home/oracle/logs -c "DB12"
And 20 minutes later, my new PDB “DB12” is upgraded.
This is not the end of the story, as the PDB needs to be recompiled with utlrp.sql – and postupgrade_fixups.sql needs to be run as well.
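These two steps could look like this – a sketch; the postupgrade_fixups.sql path is an assumption, so take the real one from your preupgrade/upgrade log directory:

```sql
-- Sketch: recompile and run the post-upgrade fixups inside the new PDB
-- (the postupgrade_fixups.sql path below is an assumed placeholder)
alter session set container=DB12;
@?/rdbms/admin/utlrp.sql
@/home/oracle/logs/DB12/preupgrade/postupgrade_fixups.sql
```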
Resistance is futile
And still, even though the PDB is now upgraded to Oracle 19c, it can’t be used yet. The final step in this process is the execution of the noncdb_to_pdb.sql script. It “assimilates” (no relation or connection to The Borg) the PDB and makes it fully operational.
SQL> alter session set container=DB12;
Session altered.

SQL> set timing on
SQL> set serverout on
SQL> set echo on
SQL> set termout on
SQL> spool /home/oracle/logs/db12_noncdbtopdb.log
SQL> start ?/rdbms/admin/noncdb_to_pdb.sql
Once noncdb_to_pdb.sql has completed its work (in my case, 10 minutes later), I can restart the PDB to get it out of RESTRICTED mode.
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
3 DB12 READ WRITE YES
SQL> shutdown
Pluggable Database closed.
SQL> startup
Pluggable Database opened.
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
3 DB12 READ WRITE NO
Now my new PDB is fully upgraded and operational.
Fallback?
An important topic you need to plan for in all your upgrade and migration tasks is the various fallback opportunities. In the above case, there is no direct fallback for the upgrade: once you have plugged in DB12, you can’t define a guaranteed restore point, as the plugged-in database isn’t a fully converted PDB yet. It results in:
SQL> create restore point PDB1_GRP1 guarantee flashback database;
create restore point PDB1_GRP1 guarantee flashback database
*
ERROR at line 1:
ORA-39893: PDB restore point could not be created
Makes sense – but it isn’t nice, as this means your only fallback from now on is to drop the PDB and repeat the action.
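In that situation, the cleanup before a new attempt is a simple drop – a sketch, run in the CDB root; since COPY was used, the source database stays untouched:

```sql
-- Remove the failed plugin attempt together with the copied datafiles
alter pluggable database DB12 close;
drop pluggable database DB12 including datafiles;
```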
Once the plugin, upgrade and convert have all succeeded, you have a fallback option again: AFTER the upgrade, your fallback is an export and import with Oracle Data Pump. Transportable Tablespaces can’t be used when your process included an upgrade, as TTS works only within the same version or to a higher version. If you seek minimal downtime, you can use Oracle GoldenGate on top. I’m not aware of any other fallback options once you have started production on the plugged-in PDB.
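Such a Data Pump fallback could look like this – a sketch only; the connect strings, the directory object and the fallback database DB12FB are placeholders for your environment, and VERSION is set to the release you are falling back to so the older release can read the dump files:

```shell
# Sketch: full export from the 19c PDB, import into the (assumed) fallback DB
expdp system@DB12 full=y directory=DATA_PUMP_DIR dumpfile=db12_fb.dmp \
      logfile=db12_fb_exp.log version=12.2

impdp system@DB12FB full=y directory=DATA_PUMP_DIR dumpfile=db12_fb.dmp \
      logfile=db12_fb_imp.log
```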
Trying with a 12.1.0.2 non-CDB
Rodrigo chased me the other day, asking why I think that we can’t do this process with a 12.1.0.2 non-CDB database. Well, I removed my 12.1.0.2 installation a long while ago. But of course, I had to try it out. So I installed 12.1.0.2, patched it quickly to the January 2022 BP and created a non-CDB database V121 to play with.
And here you will find the results of my test showing that it does not work … at first 🙂
As a starting point, I need to create the manifest file in my V121 database, started read-only:
exec DBMS_PDB.DESCRIBE('/home/oracle/V121.xml');
Then I changed to my 19c CDB and executed:
SQL> create pluggable database V121 using '/home/oracle/V121.xml'
     file_name_convert=('V121','CDB2/V121');
create pluggable database V121 using '/home/oracle/V121.xml' file_name_convert=('V121','CDB2/V121')
*
ERROR at line 1:
ORA-65353: The undo tablespace is missing from the XML metadata file.
So it fails. To double check, I quickly ran the same command sequence with my 12.2.0.1 non-CDB, and there all went fine.
I wanted to dig a bit deeper and compared the two manifest files, V121.xml and DB12.xml. And as some of you expected already, in the 12.1.0.2 manifest file there is no UNDO tablespace being moved to the CDB. But the CDB seems to expect at least one UNDO tablespace being moved over as part of a non-CDB. So there is the problem – and you can’t use this technique for a 12.1.0.2 non-CDB database out of the box.
Of course, there is a longer section for this UNDO tablespace later – and this is missing in the 12.1.0.2 manifest file completely.
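If you want to sanity-check a manifest before attempting the plugin, you could parse it and look for an UNDO tablespace entry yourself. A minimal sketch – the element layout (`<tablespace><name>…</name>`) is an assumption based on my manifest files, so verify it against your own XML:

```python
import xml.etree.ElementTree as ET

def undo_tablespaces(manifest_path):
    """Return tablespace names in the manifest that look like UNDO tablespaces."""
    root = ET.parse(manifest_path).getroot()
    undo = []
    for ts in root.iter("tablespace"):        # assumed element name
        name = ts.findtext("name")            # assumed child element
        if name and name.upper().startswith("UNDO"):
            undo.append(name)
    return undo

# usage (hypothetical path): undo_tablespaces("/home/oracle/V121.xml")
# an empty result would hint at the ORA-65353 situation described above
```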
But what if I’d edit the manifest?
Well, a lot of edits later (luckily the command tells me what the divergence between the XML manifest file and the datafile is):

SQL> create pluggable database V121 using '/home/oracle/V121.xml' file_name_convert=('V121','CDB2/V121');
create pluggable database V121 using '/home/oracle/V121.xml' file_name_convert=('V121','CDB2/V121')
*
ERROR at line 1:
ORA-65139: Mismatch between XML metadata file and data file
/u02/oradata/V121/undotbs01.dbf for value of vsn (203423744 in the plug XML
file, 202375680 in the data file)

SQL> create pluggable database V121 using '/home/oracle/V121.xml' file_name_convert=('V121','CDB2/V121');
create pluggable database V121 using '/home/oracle/V121.xml' file_name_convert=('V121','CDB2/V121')
*
ERROR at line 1:
ORA-65139: Mismatch between XML metadata file and data file
/u02/oradata/V121/undotbs01.dbf for value of fileblocks (75520 in the plug
XML file, 133760 in the data file)

SQL> create pluggable database V121 using '/home/oracle/V121.xml' file_name_convert=('V121','CDB2/V121');
create pluggable database V121 using '/home/oracle/V121.xml' file_name_convert=('V121','CDB2/V121')
*
ERROR at line 1:
ORA-65139: Mismatch between XML metadata file and data file
/u02/oradata/V121/undotbs01.dbf for value of createscnbas (3734 in the plug
XML file, 3549 in the data file)

SQL> create pluggable database V121 using '/home/oracle/V121.xml' file_name_convert=('V121','CDB2/V121');
create pluggable database V121 using '/home/oracle/V121.xml' file_name_convert=('V121','CDB2/V121')
*
ERROR at line 1:
ORA-65139: Mismatch between XML metadata file and data file
/u02/oradata/V121/undotbs01.dbf for value of fdbid (1852833295 in the plug
XML file, 3394204379 in the data file)

SQL> create pluggable database V121 using '/home/oracle/V121.xml' file_name_convert=('V121','CDB2/V121');
create pluggable database V121 using '/home/oracle/V121.xml' file_name_convert=('V121','CDB2/V121')
*
ERROR at line 1:
ORA-65139: Mismatch between XML metadata file and data file
/u02/oradata/V121/undotbs01.dbf for value of fcpsb (5879404 in the plug XML
file, 762842 in the data file)

SQL> create pluggable database V121 using '/home/oracle/V121.xml' file_name_convert=('V121','CDB2/V121');
create pluggable database V121 using '/home/oracle/V121.xml' file_name_convert=('V121','CDB2/V121')
*
ERROR at line 1:
ORA-65139: Mismatch between XML metadata file and data file
/u02/oradata/V121/undotbs01.dbf for value of frlt (1005954063 in the plug
XML file, 1100796443 in the data file)

SQL> create pluggable database V121 using '/home/oracle/V121.xml' file_name_convert=('V121','CDB2/V121');

Pluggable database created.

SQL>
But Rodrigo is so much smarter than I am. He found the fix which teaches DBMS_PDB.DESCRIBE in 12.1.0.2 to add the description of the UNDO tablespace: bug 21388707. No idea if this conflicts with my BP – but as you see, in theory it is possible. Still, I will always prefer:
- AutoUpgrade doing this move for me – which will first upgrade and then plug in, so this issue will not arise
It is doable – but still, do it the right way and don’t waste your time editing an XML manifest file.
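Such an AutoUpgrade “upgrade first, then plugin” run is driven by a small config file – a sketch with assumed paths and names (the Oracle homes and the log directory are placeholders for your environment):

```
global.autoupg_log_dir=/home/oracle/logs
upg1.sid=V121
upg1.source_home=/u01/app/oracle/product/12.1.0.2
upg1.target_home=/u01/app/oracle/product/19
upg1.target_cdb=CDB2
```

And then something like: java -jar autoupgrade.jar -config V121.cfg -mode deploy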
Further Information and Links
- The new AutoUpgrade Blog Post Series
- Oracle Database 19c Hands-On Lab
- Hand-On Lab: Plugin UPGR into CDB2
- Plugin 18c – Compatibility Check does not work – August 14, 2018
- Cloning with Pluggable Database in 18c – August 16, 2018
- noncdb_to_pdb.sql – Why does it take so long – June 8, 2017
- Upgrade to Oracle Database 12.2 and plugin into a 12.2 CDB – March 8, 2017
- 9 Upgrade and Migration Scenarios – August 1, 2018
- Fallback Strategy: Flashback a Pluggable Database – August 30, 2017
Typical Plugin Issues and Workarounds
- Typical Plugin Issues and Workarounds
- The Compatible Pitfall
- The Time Zone Pitfall
- The Component Pitfall
- The Patch Level Pitfall
- Various Pitfalls
- The Fallback Challenge
- The Minimal Downtime Challenge
Related Posts
- Upgrade, plug in, convert (noncdb_to_pdb.sql) – 11.2.0.4 and higher
- Plugin, upgrade, convert (noncdb_to_pdb.sql) – 12.2.0.1 and higher
- Clone via NON$CDB, upgrade, convert – 12.1.0.2 and higher
–Mike
Mike,
We have a few 11g databases which currently are not running across all nodes of a cluster. So, the cluster has 4 nodes, the database was created only on the first two nodes, no instances on node 3 and node 4. When I upgrade from 11g to 19c non-CDB, it will still be running on two nodes.
When migrating to 19c CDB/PDB, I would like to make it so that I create the CDB across all 4 nodes, plug-in the two instance non-CDB and get 4 instance PDB. Later, I can shutdown two instances and can start them up if need arises.
I cannot find any documentation which addresses this scenario and any additional steps required.
Thanks,
Arun
Hi Arun,
that is not handled via upgrade/migrate, this is something you define via the service management in your cluster.
I’m not sure if this is covered in the Multitenant guide, or if there’s a MOS note explaining all the details. But it should be pretty straightforward.
Cheers,
Mike