Database Migration from non-CDB to PDB – Plug in, upgrade, convert

This is my next blog post about Database Migration from non-CDB to PDB – Plug in, upgrade, convert. But what is different from the previous one? And why is it necessary? Since Oracle Database 12.2.0.1 you can plug in a non-CDB first, then upgrade and convert it. And I’ll show you this technique here.


High Level Overview

Endianness change possible: No
Source database versions: Oracle 12.2.0.1 or newer (or Oracle 12.1.0.2 when the CDB has shared UNDO)
Characteristic: Plugin into CDB first
Upgrade necessary: Yes, after plugin
Downtime: Plugin, copy (optional), upgrade and noncdb_to_pdb.sql
Minimal downtime option(s): Oracle GoldenGate
Process overview: First plug in the non-CDB as a PDB, then upgrade it and finally convert it to a fully functional PDB
Fallback after plugin: Data Pump – optional: Oracle GoldenGate
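The three phases from the overview can be sketched as follows – the names (DB12, CDB2, /home/oracle/DB12.xml) are the ones used in the lab below:

```sql
-- Phase 1: plug in (executed in the receiving CDB, CDB2)
CREATE PLUGGABLE DATABASE DB12 USING '/home/oracle/DB12.xml'
   FILE_NAME_CONVERT = ('DB12', 'CDB2/DB12');
ALTER PLUGGABLE DATABASE DB12 OPEN;   -- opens in MIGRATE mode only

-- Phase 2: upgrade the new PDB (run from the shell)
-- $ dbupgrade -l /home/oracle/logs -c "DB12"

-- Phase 3: convert it into a "real" PDB
ALTER SESSION SET CONTAINER = DB12;
@?/rdbms/admin/noncdb_to_pdb.sql
```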

Database Migration from non-CDB to PDB – Plug in, upgrade, convert

Since Oracle 12.2.0.1 or higher, you have the freedom to plug in a non-CDB first, and then upgrade and adjust it. It is possible with Oracle 12.1.0.2 as well but requires shared UNDO in the receiving CDB. And I haven’t tested it. I still prefer the other solution of upgrading first, then plugging in, as I describe in this article. There’s a simple reason for me here: there is a proven fallback for the upgrade case, whereas there’s no easy fallback once you have plugged in. But I leave it to you to choose your preferred option.

I can’t do this exercise with my 11.2.0.4 databases, as DBMS_PDB.DESCRIBE does not exist in this release. And even Oracle 12.1.0.2 does not support the following operation unless I create my CDB with shared UNDO.
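By the way, whether a CDB runs with local or shared UNDO can be checked easily – since Oracle 12.2, DATABASE_PROPERTIES exposes this:

```sql
-- Run in the CDB root; TRUE means local UNDO (one undo tablespace per PDB)
SELECT property_value
  FROM database_properties
 WHERE property_name = 'LOCAL_UNDO_ENABLED';
```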

Why “upgrade”?

Your non-CDB has a data dictionary. And this data dictionary must be adjusted to match the dictionary version of the receiving CDB. Whether you do this before or after plugin is up to you. But here, I will plug in first, then upgrade my non-CDB database from our Hands-On Lab. If you plan to do this exercise without the new AutoUpgrade, you can either follow my blog post or one of the use case scenarios in the Oracle Documentation.

Plugin Check Operation

For the plugin operation, I need to create an XML manifest file. Once the XML manifest file has been created, I can connect to my CDB and plug in the DB12 database.

As I received a lot of ERRORs in my other blog post from DB12 due to the existence of ORDIM and JAVAVM, I remove these components upfront to avoid getting the same issues again. I will cover this entire topic in a separate blog post about the typical pitfalls.

exec DBMS_PDB.DESCRIBE('/home/oracle/DB12.xml');
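Note that DBMS_PDB.DESCRIBE requires the source database to be open read-only – a sketch of the full sequence in the non-CDB DB12:

```sql
-- In the non-CDB DB12: restart read-only, then generate the manifest
SHUTDOWN IMMEDIATE
STARTUP OPEN READ ONLY;
exec DBMS_PDB.DESCRIBE('/home/oracle/DB12.xml');
```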

Once I created the XML manifest file, I need to execute the recommended plugin-compatibility check in my CDB.

set serveroutput on

DECLARE
   compatible CONSTANT VARCHAR2(3) :=
      CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
              pdb_descr_file => '/home/oracle/DB12.xml',
              pdb_name       => 'DB12')
      WHEN TRUE THEN 'YES'
      ELSE 'NO'
      END;
BEGIN
   DBMS_OUTPUT.PUT_LINE('Is the future PDB compatible?  ==>  ' || compatible);
END;
/

A typical result is this here in the lab:

SQL> start db12_compatible.sql
Is the future PDB compatible?  ==>  NO
PL/SQL procedure successfully completed.

Why is it not compatible?

Let us check PDB_PLUG_IN_VIOLATIONS for the reason why DB12 is not compatible to be plugged into CDB2. I use this query to check what I need to fix:

set pages 2000

column message format a50
column status format a9
column type format a9
column con_id format 9
column name format a8

select con_id, name, type, message, status
  from pdb_plug_in_violations
 where status <> 'RESOLVED'
 order by name, time;

And the result is interesting:

CON_ID NAME     TYPE      MESSAGE                                            STATUS
------ -------- --------- -------------------------------------------------- ---------
     1 DB12	WARNING   PDB plugged in is a non-CDB, requires noncdb_to_pd PENDING
			  b.sql be run.

     1 DB12	ERROR	  PDB's version does not match CDB's version: PDB's  PENDING
			  version CDB's version

     1 DB12	WARNING   CDB parameter sga_target mismatch: Previous 1200M  PENDING
			  Current 1504M

     1 DB12	WARNING   CDB parameter compatible mismatch: Previous '12.2. PENDING
			  0' Current '19.0.0'

     1 DB12	WARNING   CDB parameter _fix_control mismatch: Previous '254 PENDING
			  76149:1', '23249829:1', '26019148:1', '26986173:1'
			  , '27466597:1', '20107874:1', '27321179:1', '25120
			  742:1', '26536320:1', '26423085:1', '28072567:1',
			  '25405100:1' Current NULL

     1 DB12	WARNING   CDB parameter pga_aggregate_target mismatch: Previ PENDING
			  ous 120M Current 200M

     1 DB12	ERROR	  DBRU bundle patch 190416 (DATABASE APR 2019 RELEAS PENDING
			  E UPDATE Not installed in the CD
			  B but installed in the PDB

     1 DB12	ERROR	  ' Release_Update 1904101227' is installe PENDING
			  d in the CDB but no release updates are installed
			  in the PDB

The ERRORs are quite disturbing, and some of them – the patch errors – make no real sense to me. Of course a 12.2 RU can’t be installed in a 19c CDB (hopefully!). And it’s funny to read the next ERROR telling me that the CDB has an RU (which is correct) but the future PDB has none?!?!

Well, let me ignore these – but focus on the first ERROR:

“PDB’s version does not match CDB’s version: PDB’s version CDB’s version”

Logging an ERROR here is misleading – even though, technically, ERROR may be correct. But in fact, the documentation promises that I can plug in this non-CDB – I just need to upgrade it afterwards.

Plugin Operation

I will simply ignore the above ERROR – and pretend that I know better. Let me see if I can plugin the non-CDB as a new PDB:

create pluggable database DB12 using '/home/oracle/DB12.xml' file_name_convert=('DB12','CDB2/DB12');
Pluggable database created.

This looks good. Let me do a quick check:

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 3 DB12 			  MOUNTED

I will open the PDB but it will open RESTRICTED only of course as it needs to be upgraded and assimilated as a “real” PDB.

I did not specify any command option such as COPY or NOCOPY with the create pluggable database command. Hence, the default – COPY – is used. This has the advantage of keeping my source untouched in case anything fails during the plugin operation. But the downside clearly is that I need twice as much disk space. And in case the database is large, the COPY operation may take a while.
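If disk space or copy time is a concern, NOCOPY reuses the existing datafiles in place – a sketch, keeping in mind that a failed plugin then leaves no untouched source to fall back to:

```sql
-- NOCOPY variant: reuse the non-CDB's datafiles in place (no second copy);
-- TEMPFILE REUSE avoids an error if the tempfile already exists
CREATE PLUGGABLE DATABASE DB12 USING '/home/oracle/DB12.xml'
   NOCOPY TEMPFILE REUSE;
```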

SQL> alter pluggable database all open;
Warning: PDB altered with errors.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 3 DB12 			  MIGRATE    YES

Please note that it automatically gets opened in MIGRATE mode – which is equivalent to STARTUP UPGRADE.

Can I run noncdb_to_pdb.sql in this stage already?

No, of course not. The new PDB hasn’t been upgraded yet. You will receive this error – and the session gets terminated:

ERROR at line 1:
ORA-04023: Object SYS.STANDARD could not be validated or authorized

Upgrade the PDB after plug in

The new PDB needs to be upgraded first. Note that when you do the steps in this order, you can’t use the new AutoUpgrade – at the moment, it doesn’t support upgrading a single PDB.

Hence, I’m running my upgrade with the simple command:

dbupgrade -l /home/oracle/logs -c "DB12"

And 20 minutes later, my new PDB “DB12” is upgraded.

This is not the end of the story, as the PDB needs to be recompiled with utlrp.sql – and postupgrade_fixups.sql needs to be run as well.
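A sketch of these two post-upgrade steps inside the PDB – the location of postupgrade_fixups.sql is an assumption here, so check your upgrade log directory for the actual file:

```sql
ALTER SESSION SET CONTAINER = DB12;

-- Recompile invalid objects
@?/rdbms/admin/utlrp.sql

-- Run the generated post-upgrade fixups (path is an assumption – look
-- in your upgrade log directory, here /home/oracle/logs)
@/home/oracle/logs/postupgrade_fixups.sql
```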

Resistance is futile

And still, even though the PDB is now upgraded to Oracle 19c, it can’t be used yet. The final step in this process is the execution of the noncdb_to_pdb.sql script. It “assimilates” (no relation or connection to The Borg) the PDB and makes it fully operational.

SQL> alter session set container=DB12;
Session altered.

SQL> set timing on
SQL> set serverout on
SQL> set echo on
SQL> set termout on
SQL> spool /home/oracle/logs/db12_noncdbtopdb.log
SQL> start ?/rdbms/admin/noncdb_to_pdb.sql

Once noncdb_to_pdb.sql has completed its work (in my case, 10 minutes later), I can restart the PDB to get it out of RESTRICTED mode.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         3 DB12                           READ WRITE YES
SQL> shutdown
Pluggable Database closed.
SQL> startup
Pluggable Database opened.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         3 DB12                           READ WRITE NO

Now my new PDB is fully upgraded and operational.


An important topic you need to plan for in all your upgrade and migration tasks is the various fallback options. In the above case, there is no direct fallback for the upgrade. Once you have plugged in DB12, you can’t define a guaranteed restore point, as the new PDB isn’t a fully converted PDB yet. The attempt results in:

SQL> create restore point PDB1_GRP1 guarantee flashback database;
create restore point PDB1_GRP1 guarantee flashback database
ERROR at line 1:
ORA-39893: PDB restore point could not be created

Makes sense – but it isn’t nice, as it means your only fallback from now on is to drop the PDB and repeat the entire action.

Once the plug in, upgrade and convert have all succeeded, you have a fallback option again: AFTER the upgrade, your fallback is an export and import with Oracle Data Pump. Transportable Tablespaces can’t be used when your process included an upgrade, as TTS works only within the same version or to a higher version. If you seek minimal downtime, you can use Oracle GoldenGate on top. I’m not aware of any other fallback options once you have started production on the plugged-in PDB.

Further Information and Links

Typical Plugin Issues and Workarounds
