As the first blog post of this series about Database Migration from non-CDB to PDB – Upgrade, plug in, convert, I will explain the most straightforward approach of going to Single- or Multitenant. But you have other options as well, which I’ll showcase in the other blog posts. I demonstrated this technique on the blog a while ago already, but since Oracle 12.2.0.1 a few other options have become available.
High Level Overview
- Endianness change possible: No
- Source database versions: Oracle 11.2.0.4 or newer
- Characteristic: Upgrade non-CDB first
- Upgrade necessary: Yes, before plugin
- Downtime: Upgrade, plugin, copy (optional) and noncdb_to_pdb.sql
- Minimal downtime option(s): Oracle GoldenGate
- Process overview: Upgrade the non-CDB to the version of the receiving CDB, then plug it in, and finally convert it to a PDB
- Fallback after plugin: Data Pump – optional: Oracle GoldenGate
Database Migration from non-CDB to PDB – Upgrade, plug in, convert
As of now, you can’t plug in an Oracle 11.2 database. When your source is Oracle 12.2.0.1 or higher, you have more freedom: you can either plug in first and then upgrade, or upgrade first and then plug in. I prefer the latter approach, which I describe in this article, for the simple reason that there’s a proven fallback for the upgrade case, whereas there’s no easy fallback once you’ve plugged in. But I leave it to you to choose your preferred option.
Why “upgrade”?
Your non-CDB has a data dictionary, and this data dictionary must be adjusted to match the dictionary version of the receiving CDB. Whether you do this first or after the plugin is up to you. But here, I will upgrade my 11.2.0.4 and 12.2.0.1 non-CDB databases in our Hands-On Lab first. And of course, I will use AutoUpgrade. If you plan to do this exercise without the new AutoUpgrade, you can either follow my blog post or one of the use-case scenarios in the Oracle documentation.
For autoupgrade.jar, I’m using the following config file:
global.autoupg_log_dir=/home/oracle/logs
upg1.dbname=DB12
upg1.start_time=NOW
upg1.source_home=/u01/app/oracle/product/12.2.0.1
upg1.target_home=/u01/app/oracle/product/19
upg1.sid=DB12
upg1.log_dir=/home/oracle/logs
upg1.upgrade_node=localhost
upg1.target_version=19
upg2.dbname=FTEX
upg2.start_time=NOW
upg2.source_home=/u01/app/oracle/product/11.2.0.4
upg2.target_home=/u01/app/oracle/product/19
upg2.sid=FTEX
upg2.log_dir=/home/oracle/logs
upg2.upgrade_node=localhost
upg2.target_version=19
Following best practices, I’m running the analyze mode at first, followed by deploy.
java -jar $OH19/rdbms/admin/autoupgrade.jar -config DB12_FTEX.cfg -mode analyze
java -jar $OH19/rdbms/admin/autoupgrade.jar -config DB12_FTEX.cfg -mode deploy
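While deploy is running, you can monitor progress from the autoupgrade console: `lsj` lists all jobs with their current stage, and `status -job <n>` shows the details of one job. A minimal sketch (the job number 100 is just an example):

```
upg> lsj
upg> status -job 100
```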
An hour later, my two databases, FTEX and DB12, are both upgraded to Oracle Database 19c and ready to be plugged in.
Plugin Check Operation
For the plugin operation, I need to create an XML manifest file; for a non-CDB this requires the source database to be open read-only. Once the XML manifest file has been created, I can connect to my CDB and plug in the FTEX and DB12 databases.
exec DBMS_PDB.DESCRIBE('/home/oracle/FTEX.xml');
I will have to repeat the same step with the DB12 database.
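A sketch of that same step for DB12; since DBMS_PDB.DESCRIBE against a non-CDB requires the database to be open read-only, I bounce it around the describe call:

```sql
-- Run in the DB12 non-CDB (open read-only for the describe step)
shutdown immediate
startup open read only
exec DBMS_PDB.DESCRIBE('/home/oracle/DB12.xml');
shutdown immediate
```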
Once I created the two XML manifest files, I need to execute the recommended plugin-compatibility check in my CDB.
set serveroutput on
DECLARE
  compatible CONSTANT VARCHAR2(3) := CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
       pdb_descr_file => '/home/oracle/FTEX.xml',
       pdb_name       => 'FTEX')
    WHEN TRUE THEN 'YES'
    ELSE 'NO'
  END;
BEGIN
  DBMS_OUTPUT.PUT_LINE('Is the future PDB compatible? ==> ' || compatible);
END;
/
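The corresponding check for DB12 differs only in the descriptor file and the PDB name:

```sql
set serveroutput on
DECLARE
  compatible CONSTANT VARCHAR2(3) := CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
       pdb_descr_file => '/home/oracle/DB12.xml',
       pdb_name       => 'DB12')
    WHEN TRUE THEN 'YES'
    ELSE 'NO'
  END;
BEGIN
  DBMS_OUTPUT.PUT_LINE('Is the future PDB compatible? ==> ' || compatible);
END;
/
```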
A typical result is this here in the lab:
SQL> start ftex_compatible.sql
Is the future PDB compatible? ==> YES

PL/SQL procedure successfully completed.

SQL> start db12_compatible.sql
Is the future PDB compatible? ==> NO

PL/SQL procedure successfully completed.
One gets a “Yes” (FTEX), the other one a “No” (DB12). You now need to check PDB_PLUG_IN_VIOLATIONS for the reason why DB12 can’t be plugged into CDB2. I use this query to check what I need to fix:
set pages 2000
column message format a50
column status format a9
column type format a9
column con_id format 9
column name format a8

select con_id, name, type, message, status
from   PDB_PLUG_IN_VIOLATIONS
where  status <> 'RESOLVED'
order  by name, time;
And the result is interesting:
CON_ID NAME     TYPE      MESSAGE                                            STATUS
------ -------- --------- -------------------------------------------------- ---------
     1 DB12     WARNING   PDB plugged in is a non-CDB, requires noncdb_to_pd PENDING
                          b.sql be run.
     1 DB12     ERROR     Database option CATJAVA mismatch: PDB installed ve PENDING
                          rsion 19.0.0.0.0. CDB installed version NULL.
     1 DB12     ERROR     Database option JAVAVM mismatch: PDB installed ver PENDING
                          sion 19.0.0.0.0. CDB installed version NULL.
     1 DB12     ERROR     Database option ORDIM mismatch: PDB installed vers PENDING
                          ion 19.0.0.0.0. CDB installed version NULL.
     1 DB12     ERROR     Database option XML mismatch: PDB installed versio PENDING
                          n 19.0.0.0.0. CDB installed version NULL.
     1 DB12     WARNING   CDB parameter sga_target mismatch: Previous 1200M  PENDING
                          Current 1504M
     1 DB12     WARNING   CDB parameter compatible mismatch: Previous '12.2. PENDING
                          0' Current '19.0.0'
     1 DB12     WARNING   CDB parameter _fix_control mismatch: Previous '201 PENDING
                          07874:1', '23249829:1', '25120742:1', '25405100:1'
                          , '25476149:1', '26019148:1', '26423085:1', '26536
                          320:1', '26986173:1', '27321179:1', '27466597:1',
                          '28072567:1' Current NULL
     1 DB12     WARNING   CDB parameter pga_aggregate_target mismatch: Previ PENDING
                          ous 120M Current 200M
     1 FTEX     WARNING   PDB plugged in is a non-CDB, requires noncdb_to_pd PENDING
                          b.sql be run.
     1 FTEX     WARNING   Database option OLS mismatch: PDB installed versio PENDING
                          n NULL. CDB installed version 19.0.0.0.0.
     1 FTEX     WARNING   Database option OWM mismatch: PDB installed versio PENDING
                          n NULL. CDB installed version 19.0.0.0.0.
     1 FTEX     WARNING   CDB parameter sga_target mismatch: Previous 1104M  PENDING
                          Current 1504M
     1 FTEX     WARNING   CDB parameter compatible mismatch: Previous '11.2. PENDING
                          0.4.0' Current '19.0.0'
     1 FTEX     WARNING   CDB parameter pga_aggregate_target mismatch: Previ PENDING
                          ous 120M Current 200M
Quite a long list. For FTEX I received only WARNINGs. But the ERROR conditions for DB12 will block the plugin operation. You see that these are serious errors: the CDB does not contain options which exist in DB12. I could now either install these options into CDB2, or remove them from DB12. Installing them into my CDB may have implications on future PDBs, so I remove them following these ideas. But before removing these options, you should check whether the components are in use.
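As a starting point for that check, a minimal sketch listing what is installed in the source database via DBA_REGISTRY (whether a component is actually in use needs a closer look, for example at the objects of the owning schemas):

```sql
-- Which components are installed, and in which state?
column comp_id   format a10
column comp_name format a40
column status    format a10

select comp_id, comp_name, status
from   dba_registry
order  by comp_id;
```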
Once I removed the components ORDIM, JAVA and XML from my DB12 database, I create the manifest file again and repeat the compatibility check:
SQL> start db12_compatible.sql
Is the future PDB compatible? ==> YES

PL/SQL procedure successfully completed.
I can safely ignore the other warnings, except the one which tells me to execute noncdb_to_pdb.sql.
Plugin Operation
Now, as everything seems to be ok, I can log in to my CDB and plug both databases in:
create pluggable database FTEX using '/home/oracle/FTEX.xml'
   file_name_convert=('FTEX','CDB2/FTEX');

create pluggable database DB12 using '/home/oracle/DB12.xml'
   file_name_convert=('DB12','CDB2/DB12');
Quick check:
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         4 FTEX                           MOUNTED
         5 DB12                           MOUNTED
I will open the PDBs, but of course they will open RESTRICTED only, as they still need to be assimilated as “real” PDBs.
SQL> alter pluggable database all open;

Warning: PDB altered with errors.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         4 FTEX                           READ WRITE YES
         5 DB12                           READ WRITE YES
I did not specify any command option such as COPY or NOCOPY with the create pluggable database command. Hence, the default, COPY, will be used. This has the advantage of keeping my source untouched in case anything fails during the plugin operation. But the downside clearly is that I need twice as much disk space, and in case the database is large, the COPY operation may take quite a while.
SQL> select con_id, file_name, tablespace_name from cdb_data_files order by con_id, relative_fno;

CON_ID FILE_NAME                                  TABLESPACE
------ ------------------------------------------ ----------
     1 /u02/oradata/CDB2/system01.dbf             SYSTEM
     1 /u02/oradata/CDB2/sysaux01.dbf             SYSAUX
     1 /u02/oradata/CDB2/undotbs01.dbf            UNDOTBS1
     1 /u02/oradata/CDB2/users01.dbf              USERS
     2 /u02/oradata/CDB2/pdbseed/system01.dbf     SYSTEM
     2 /u02/oradata/CDB2/pdbseed/sysaux01.dbf     SYSAUX
     2 /u02/oradata/CDB2/pdbseed/undotbs01.dbf    UNDOTBS1
     4 /u02/oradata/CDB2/FTEX/system01.dbf        SYSTEM
     4 /u02/oradata/CDB2/FTEX/sysaux01.dbf        SYSAUX
     4 /u02/oradata/CDB2/FTEX/users01.dbf         USERS
     4 /u02/oradata/CDB2/FTEX/undotbs100.dbf      UNDOTBS100
     5 /u02/oradata/CDB2/DB12/system01.dbf        SYSTEM
     5 /u02/oradata/CDB2/DB12/sysaux01.dbf        SYSAUX
     5 /u02/oradata/CDB2/DB12/undotbs01.dbf       UNDOTBS1
     5 /u02/oradata/CDB2/DB12/users01.dbf         USERS
Another positive side effect of the COPY operation: the files will all be under one directory tree. Of course, you could move them afterwards with the online datafile move feature, available since Oracle 12.1.0.2, as well.
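A minimal sketch of such an online move; the target directory /u03/... is just a hypothetical example:

```sql
alter session set container=FTEX;

-- moves the datafile while the PDB stays online
alter database move datafile '/u02/oradata/CDB2/FTEX/users01.dbf'
   to '/u03/oradata/CDB2/FTEX/users01.dbf';
```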
Sanity Work
The final part of the entire migration is the execution of noncdb_to_pdb.sql. Unfortunately, the runtime of this once-in-a-lifetime-of-a-database script is unpredictable. It depends a lot on the dictionary content, and especially on the objects requiring recompilation.
SQL> alter session set container=FTEX;

Session altered.

SQL> set timing on
SQL> set serverout on
SQL> set echo on
SQL> set termout on
SQL> spool /home/oracle/logs/ftex_noncdbtopdb.log
SQL> start ?/rdbms/admin/noncdb_to_pdb.sql
In my example, it takes almost exactly 10 minutes for each database to complete. Be aware that these are only toy databases with no user data in them.
Finally, I SHUTDOWN and STARTUP the PDBs, as noncdb_to_pdb.sql puts them into RESTRICTED mode again after completion.
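A quick way to bounce both PDBs in one go:

```sql
alter pluggable database all close immediate;
alter pluggable database all open;
```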
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         4 FTEX                           READ WRITE NO
         5 DB12                           READ WRITE NO
Fallback?
An important topic you need to plan for in all your upgrade and migration tasks is the various fallback opportunities. In the above case, the fallback for the upgrade itself is very straightforward: when you use AutoUpgrade and your database is in ARCHIVELOG mode, a guaranteed restore point (GRP) gets created, and you can easily flash back. Or, in case you deleted the GRP and went live, you can downgrade: as long as COMPATIBLE remains on the pre-upgrade value, downgrade is a viable and fast option.
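A sketch of the flashback path. AutoUpgrade generates the GRP name, so look it up first; the restore point name below is a placeholder, and after flashing back you start the database from the source (pre-upgrade) Oracle home again:

```sql
select name from v$restore_point where guarantee_flashback_database = 'YES';

shutdown immediate
startup mount
flashback database to restore point "AUTOUPGRADE_GRP";  -- substitute the real GRP name
alter database open resetlogs;
```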
But as soon as you invoke the plugin command, there is no easy fallback. Hence, you need to test this carefully. As you see, in the above plugin operation I didn’t specify NOCOPY on purpose. This means all my files get copied to the destination defined by FILE_NAME_CONVERT. COPY is the default and the safer choice here: your source database remains untouched in case the plugin or any subsequent operation fails.
Another pitfall you should be aware of, and it happens in our Hands-On Lab as well: once you plug in, the only AFTER-upgrade fallback is an export and import with Oracle Data Pump. Transportable Tablespaces can’t be used, as TTS works only within the same version or to a higher one. If you seek minimal downtime, you can use Oracle GoldenGate on top. I’m not aware of any other fallback options once you have started production on the plugged-in PDB.
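As a rough sketch of that Data Pump fallback (connect strings and file names are assumptions; the VERSION parameter on export makes the dump readable by the older target release):

```
expdp system@ftex_pdb full=y version=11.2 directory=DATA_PUMP_DIR \
      dumpfile=ftex_fb.dmp logfile=ftex_fb_exp.log

impdp system@ftex_fallback_db full=y directory=DATA_PUMP_DIR \
      dumpfile=ftex_fb.dmp logfile=ftex_fb_imp.log
```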
Further Information and Links
- The new AutoUpgrade Blog Post Series
- Oracle Database 19c Hands-On Lab
- Hands-On Lab: Plugin UPGR into CDB2
- Plugin 18c – Compatibility Check does not work – August 14, 2018
- Cloning with Pluggable Database in 18c – August 16, 2018
- noncdb_to_pdb.sql – Why does it take so long – June 8, 2017
- Upgrade to Oracle Database 12.2 and plugin into a 12.2 CDB – March 8, 2017
- 9 Upgrade and Migration Scenarios – August 1, 2018
Typical Plugin Issues and Workarounds
- The Compatible Pitfall
- The Time Zone Pitfall
- The Component Pitfall
- The Patch Level Pitfall
- Various Pitfalls
- The Fallback Challenge
- The Minimal Downtime Challenge
Related Posts
- Upgrade, plug in, convert (noncdb_to_pdb.sql) – 11.2.0.4 and higher
- Plugin, upgrade, convert (noncdb_to_pdb.sql) – 12.2.0.1 and higher
- Clone via NON$CDB, upgrade, convert – 12.1.0.2 and higher
–Mike
Hello,
I get the feeling that the procedure using Transportable Tablespaces to upgrade and convert an 11.2.0.4 database to a 19c PDB is much easier (and maybe even faster) than the procedure explained above.
See: https://mikedietrichde.com/database-upgrade-hands-on-lab-oracle-18c-and-19c/hol-18c-migrate-ftex/
A big advantage I also see is the fact that you can migrate/convert primary and standby Data Guard databases in one shot.
We use an RMAN copy or a standby DB to move the datafiles into the correct directory prior to upgrading/converting, to minimize downtime.
Do you have timing information for the two methods (AutoUpgrade and TTS)?
Any advantages/disadvantages of the two methods?
Regards,
Jan
Hi Jan,
AutoUpgrade upgrades your database unattended, usually in 15-60 minutes. It depends mainly on the number of installed components.
More components mean more scripts to run.
Then you have the plugin, and noncdb_to_pdb.sql.
In usual cases this completes entirely within 2 hours or less. But there are edge cases where it takes much longer.
TTS requires a lot more extra work and logic, as you transport the “data” only but need to rebuild everything else.
But please stay tuned: I will write up a blog post for the TTS case, most likely next week.
Cheers,
Mike
Mike,
Is there a post-upgrade validation? Let’s say I use the DBUA to upgrade a database, say from an 18c CDB/PDB to 19c, or from 11g to a 19c non-CDB. The DBUA reports everything is fine: there are no invalid SYS/SYSTEM objects, the time zone files get upgraded, DBA_REGISTRY shows all components valid, DBA_REGISTRY_SQLPATCH reports the appropriate patch level, and the TDE wallet is open. What else can I check?
Thanks,
Arun
Hi Arun,
the database upgrade procedures (not the DBUA) have validation routines which run once a component gets upgraded and recompiled.
What are you looking for?
The AutoUpgrade does a bit more than the DBUA, but that is all we can do.
Cheers,
Mike