You may have realized that a few techniques are still missing describing how to do a Database Migration from non-CDB to PDB – Migration with Data Pump is one of them. I will explain the simplest approach of going to Single- or Multitenant. It isn't the coolest – and it isn't very fast once your database has a significant size. But it is not complex. And it allows you to move even from very old versions directly into an Oracle 19c PDB – regardless of patch levels or source and destination platform.
High Level Overview
|Endianness change possible:|Yes|
|Source database versions:|Oracle 10.1.0.2 or newer|
|Characteristic:|Direct migration into PDB|
|Upgrade necessary:|No, happens implicitly|
|Downtime:|Migration – mainly depending on size and complexity|
|Minimal downtime option(s):|Oracle GoldenGate|
|Process overview:|Export from source, import into destination – either via dump file or via Database Link|
|Fallback after plugin:|Data Pump – optional: Oracle GoldenGate|
Database Migration from non-CDB to PDB – Migration with Data Pump
Well, I think I don’t need to explain Oracle Data Pump to anybody. At the end of this blog post you will find a long list of links pointing to the documentation and various workarounds. The big advantages of using Data Pump to migrate from a non-CDB into a PDB are:
- Works with every version since Oracle 10.1.0.2
- Works regardless of patch level
- Does not require any upgrade
- Works across all platforms
- Works regardless of encryption
- Allows multiple transformations
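To illustrate such transformations, here is a minimal sketch of an impdp parameter file – the directory, schema, and tablespace names are made up for this example:

```text
# imp_mig.par – example impdp parameter file (all names are fictitious)
directory=MYDIR
dumpfile=mig_%U.dmp
logfile=imp_mig.log
metrics=y
# Transformations applied during import:
remap_schema=OLD_APP:NEW_APP          # move objects into another schema
remap_tablespace=USERS_OLD:USERS      # relocate segments to another tablespace
transform=lob_storage:securefile      # convert BasicFile to SecureFile LOBs
```

You'd invoke it with something like `impdp system@pdb1 parfile=imp_mig.par`.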
But the disadvantages of Data Pump are obvious as well, as the duration depends mostly on:
- Amount of data
- Complexity of meta information
- Special data types such as LONG and LOB
I’d call Data Pump the most flexible approach but of course potentially also the slowest of all options.
Data Pump allows you either to export into a dump file, and import from this dump file afterwards, or to set up a database link from destination to source, and run the import from the destination across it with the NETWORK_LINK parameter. The advantage of using a database link is that no dump file gets written, and hence nothing needs to be copied over. But not all actions can run in parallel. Plus, not every data type is supported (LONG for instance until 12.2). And your limiting factor is always the source side, as Data Pump implicitly calls expdp on the source side. In my experience this can be faster, but especially when you work on the same storage or SAN, and you don't have to move the dump file around, the dump file approach often works better.
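To make the two variants concrete, here is a hedged sketch – the connect strings, directory, and database link names are examples only:

```text
# Variant 1: dump file – export on the source, copy (if needed), import into the PDB
$ expdp system@srcdb full=y directory=mydir dumpfile=mig_%U.dmp logfile=exp.log
$ impdp system@pdb1  full=y directory=mydir dumpfile=mig_%U.dmp logfile=imp.log

# Variant 2: database link – impdp runs on the destination and pulls from the source,
# no dump file gets written ("sourcedb" is a DB link created inside the PDB)
$ impdp system@pdb1 full=y network_link=sourcedb metrics=y logfile=imp_net.log
```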
Some Best Practices
There are some recommendations for both approaches:
- Always use a par file
- For a consistent export, use either FLASHBACK_TIME=SYSTIMESTAMP or FLASHBACK_SCN
- EXCLUDE=STATISTICS – regathering stats in the destination is faster; alternatively, transport them with a DBMS_STATS staging table
- METRICS=Y and, since 12.1, LOGTIME=ALL
- PARALLEL=<2x number of CPU cores>
- Since Oracle 12.2, metadata gets exported in parallel as well – but not with NETWORK_LINK
- STREAMS_POOL_SIZE=128M (or in the range of 64M-256M)
- BasicFile LOBs (old 8i-style LOBs) are always slow – use TRANSFORM=LOB_STORAGE:SECUREFILE on import to convert them to SecureFile LOBs as part of the migration
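Put together, an export par file following these recommendations could look like this – a sketch with example values, to be adjusted to your environment:

```text
# exp_mig.par – example expdp parameter file reflecting the tips above
directory=MYDIR
dumpfile=mig_%U.dmp
logfile=exp_mig.log
full=y
flashback_time=systimestamp    # consistent export
exclude=statistics             # regather stats in the destination instead
metrics=y
logtime=all
parallel=8                     # roughly 2x the number of CPU cores
```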
When you use Data Pump as your migration approach to move from non-CDB to PDB, I don't expect you to require a very fast fallback scenario in case of failure. The important task is to use the VERSION parameter correctly when you export from the destination PDB. You need to set it to the source's release in order to export in the format, and with the contents, the (old) source will understand when you reimport. Make sure there's an empty database waiting in case fallback is important. And don't clean up your old source home too early.
Be aware of one major pitfall: The time zone version. As typically your source database has a lower time zone version than the destination, you can migrate “forward” (same or higher version) but not “backwards” (lower version). Hence, in case of fallback you most likely need to apply a DST Time Zone patch to the older home in order to allow Data Pump to import. And make sure you follow the supported configurations setup from MOS Note:553337.1 carefully.
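For the fallback export from the 19c PDB, a par file sketch could look like this – assuming, purely as an example, that the old source is 11.2.0.4:

```text
# exp_fallback.par – export from the destination PDB in a format the old source understands
directory=MYDIR
dumpfile=fallback_%U.dmp
logfile=exp_fallback.log
full=y
version=11.2.0.4    # set to the source's release
exclude=statistics
```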
Unfortunately, the fallback strategy over a NETWORK_LINK does not work. Even though the below scenario looks promising, you'll receive an error when you call impdp from the lower version over the DB link. I'd assume that the VERSION parameter does not get propagated in a way to convince the expdp side to export in the lower version's format.
This will be the error you’ll receive:
$ impdp system/oracle@ftex network_link=sourcedb version=11.2.0.4 tables=tab1 metrics=y exclude=statistics directory=mydir logfile=pdb2.log

Import: Release 11.2.0.4.0 - Production on Wed Aug 14 20:27:52 2019

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39169: Local version of 11.2.0.4.0 cannot work with remote version of 19.0.0.0.0.
Further Links and Information
- MOS Note:553337.1 – Data Pump Compatibility and Version Changes
- MOS Note:1264715.1 – Master Note for Data Pump
- Data Pump: Time Zone Pitfalls
- Behavior Changes: No symbolic links in Data Pump directories in 18c/19c
- Data Pump: Metrics and Logtime Parameters
- Data Pump: Exclude Statistics
- Parallel Index Creation with Data Pump
Things to know
- Typical Plugin Issues and Workarounds
- The Compatible Pitfall
- The Time Zone Pitfall
- The Component Pitfall
- The Patch Level Pitfall
- Various Pitfalls
- The Fallback Challenge
- The Minimal Downtime Challenge
- Upgrade, plug in, convert (noncdb_to_pdb.sql) – 11.2.0.4 and higher
- Plugin, upgrade, convert (noncdb_to_pdb.sql) – 12.1.0.2 and higher
- Clone via NON$CDB, upgrade, convert – 12.1.0.2 and higher
- Migration with Data Pump
Is it “clean” to do a full expdp / impdp?
Will “full” also import any old oracle owned stuff like EXFSYS, OWBSYS,…?
Yes, it is 🙂
Do you have expdp/impdp samples or doc material for a Data Pump migration using transportable tablespaces? I am asking because I have a new 19.7.0 home that I want my 12c user data moved to.
I don't seem to see a lot of parfile samples online.
Unfortunately not – but you can find examples for Full Transportable Export Import on my blog as this is the much smoother solution when you’d like to do TTS 🙂
And remember to apply patch 30321076 on the target if you plan to use NETWORK_LINK and have partitioned tables in a source whose version is below 12.2.
Thanks – that is a good hint as it is not obvious – there are one-off patches on top of all 19c RUs (and even for some RURs) available as Data Pump patches don’t get included into RUs and RURs.
Data Pump import over a network link, where the target version is 12.2 or greater and the source version is less than 12.2, could result in the internal error ORA-600 [qesmaGetPamR-NullCtx] when importing a partitioned table.
Actually, I have Oracle EBS: DB 10.2.0.5 and EBS R12.1.3 on Red Hat 5.8 32-bit.
I would like to upgrade the DB to 19c on a new platform, Red Hat 64-bit.
I mean I will migrate to a 64-bit OS along with upgrading the DB from 10g to 19c.
What is the best approach to achieve this mission?
Can we upgrade directly from 10g to 19c via Data Pump?
As per Oracle, if we go with the normal upgrade, we cannot upgrade from 10g to 19c directly – only upgrade to 11g first, then to 19c?
So I need your advice on the above, taking into consideration the OS migration from 32-bit to 64-bit, since the higher Oracle database releases do not work with a 32-bit OS.
Thanks in advance.
This is a way too big step. Please open an SR and check with Oracle Support. From EBS projects I've been involved with in the past, I remember that especially the EBS upgrade path requires a lot of tasks. So this is something I can't give advice on via the blog.
I have been reading your posts on Oracle Multi-tenant architecture. I have a quick question and hope you can help me out here or direct in a right direction. I am pretty new to Multi tenant architecture and do not care about fastest / Zero downtime aspects at this point.
I have Oracle 11g (11.2.0.1) databases that we are planning to migrate to an Oracle 19c multitenant platform on Exadata. Can you please let me know if the plan that I intend to follow is correct and feasible?
1. Create an empty 19c CDB with Local Undo for PDB option.
2. Take a full export of the source 11g database (excluding the stats).
3. Move the export dump files to the target 19c Exadata server.
4. Create the PDB from scratch with the users and roles specified.
5. Create all necessary tablespaces, users, roles, and profiles within the 19c PDB.
6. Run the import job on the PDB, excluding the default 19c database users, roles, and profiles.
What is your take and opinion on the above approach – am I missing anything here? Is this even supported/doable? Any other tips and considerations will be highly appreciated and helpful.
Thanks & regards.
I will try to answer your questions:
1) Yes – and a new 19c CDB always has local undo by default, hence no worries here.
2) If your source database is on the same server, I see no deeper reason to use export/import except for the fact that you can do this while the actual database is up and running. Normally I would use AutoUpgrade to complete this step. Only if you really have 11.2.0.1 and not 11.2.0.4 would expdp/impdp be a good option to avoid a double-hop upgrade.
And regarding the further steps, that is fine.