You may have realized that a few techniques describing how to do a Database Migration from non-CDB to PDB are still missing – Migration with Data Pump is one of them. I will explain the simplest approach of going to Single- or Multitenant. It isn’t the coolest – and it isn’t very fast as soon as your database has a significant size. But it is not complex. And it allows you to move even from very old versions directly into an Oracle 19c PDB – regardless of patch levels or source and destination platform.
High Level Overview
- Endianness change possible: Yes
- Source database versions: Oracle 10.1.0.2 or newer
- Characteristic: Direct migration into PDB
- Upgrade necessary: No, happens implicitly
- Downtime: Migration – mainly depending on size and complexity
- Minimal downtime option(s): Oracle GoldenGate
- Process overview: Export from source, import into destination – either via dump file or via Database Link
- Fallback after plugin: Data Pump – optional: Oracle GoldenGate
Database Migration from non-CDB to PDB – Migration with Data Pump
Well, I think I don’t need to explain Oracle Data Pump to anybody. At the end of this blog post you will find a long list of links pointing to the documentation and various workarounds. The big advantages of using Data Pump to migrate from a non-CDB into a PDB are:
- Works with every version since Oracle 10.1.0.2
- Works regardless of patch level
- Does not require any upgrade
- Works across all platforms
- Works regardless of encryption
- Allows multiple transformations
But the disadvantages of Data Pump are obvious as well, as the duration depends mostly on:
- Amount of data
- Complexity of meta information
- Special data types such as LONG and LOB
I’d call Data Pump the most flexible approach but of course potentially also the slowest of all options.
Process Overview
Using Data Pump, you can either export into a dump file and import from this dump file afterwards. Or you set up a database link from destination to source and run the import on the destination side of the database link using the NETWORK_LINK parameter.
The advantage of using a database link is that no dump file gets written, and hence none needs to be copied over. But not all actions can run in parallel. Plus, not every data type is supported (LONG, for instance, until 12.2). And your limiting factor is always the source side, as Data Pump implicitly calls expdp on the source side. In my experience the database link approach can be faster, but especially when you work on the same storage or SAN and don’t have to move the dump file around, the first approach often works better.
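As a minimal sketch of both variants – the directory object, the database link, and the file names below are just placeholders for illustration:

```
# Variant 1: dump file – export on the source, copy the dump file, import into the PDB
expdp system@source full=y directory=mydir dumpfile=mydb%U.dmp logfile=exp_mydb.log
impdp system@pdb1 full=y directory=mydir dumpfile=mydb%U.dmp logfile=imp_mydb.log

# Variant 2: database link – no dump file; the import runs on the destination
# (requires a database link from the destination PDB pointing to the source database)
impdp system@pdb1 full=y network_link=sourcedb_link directory=mydir logfile=imp_mydb.log
```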
Some Best Practices
There are some recommendations for both approaches – a sample par file putting them together follows after this list:
- Always use a par file
- For a consistent export, use either FLASHBACK_TIME=SYSTIMESTAMP or CONSISTENT=Y (since 11.2)
- Always EXCLUDE=STATISTICS – regathering stats in the destination is faster, or transport them with a STATS table from DBMS_STATS
- Set METRICS=Y and, since 12.1, LOGTIME=ALL
- Use PARALLEL=<2x number of cpu cores>
  - Since Oracle 12.2, metadata gets exported in parallel – but not with NETWORK_LINK
- Preallocate STREAMS_POOL_SIZE=128M (or in the range of 64M-256M)
- BasicFile LOBs (old 8i LOBs) are always slow
  - Use LOB_STORAGE=SECUREFILE to convert to SecureFile LOBs as part of the migration
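To put these recommendations together, here is what an export par file could look like – just a sketch; the directory object, dump file name, and parallel degree are placeholders and need to be adapted to your environment:

```
# expdp_mydb.par – example export par file (all names and values are placeholders)
full=y
directory=mydir
dumpfile=mydb%U.dmp
logfile=exp_mydb.log
# consistent export
flashback_time=systimestamp
# regather or transport statistics separately
exclude=statistics
metrics=y
# LOGTIME requires 12.1 or newer
logtime=all
# roughly 2x the number of CPU cores
parallel=8
```

You would then run it with expdp system parfile=expdp_mydb.par. On the import side, the SecureFile conversion mentioned above maps to the TRANSFORM parameter, e.g. TRANSFORM=LOB_STORAGE:SECUREFILE (available since 12.1).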
Fallback
When you use Data Pump as the migration approach to move from non-CDB to PDB, I don’t expect you to need a fast fallback scenario in case of failure. The important task is to use the VERSION parameter correctly when you export from the destination PDB. You need to set it to the source’s release in order to export in a format, and with the contents, the (old) source will understand when you reimport. Make sure there’s an empty database waiting in case fallback is important. And don’t clean up your old source home too early.
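As a sketch – assuming, for example, an 11.2.0.4 source and placeholder names that need adapting – the fallback export from the 19c PDB and the reimport into the waiting source-release database could look like this:

```
# run with the 19c Data Pump client against the destination PDB
expdp system@pdb1 full=y version=11.2.0.4 directory=mydir dumpfile=fallback%U.dmp logfile=exp_fallback.log

# then import with the 11.2.0.4 client into the empty fallback database
impdp system@fallbackdb full=y directory=mydir dumpfile=fallback%U.dmp logfile=imp_fallback.log
```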
Be aware of one major pitfall: the time zone version. As your source database typically has a lower time zone version than the destination, you can migrate “forward” (to the same or a higher version) but not “backwards” (to a lower version). Hence, in case of fallback you most likely need to apply a DST time zone patch to the older home in order to allow Data Pump to import. And make sure you carefully follow the supported configurations from MOS Note:553337.1.
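To check where you stand, you can compare the DST versions of source and destination upfront – for example:

```
-- run in both the source database and the destination CDB/PDB
SELECT version FROM v$timezone_file;
```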
Unfortunately, the fallback strategy over a NETWORK_LINK does not work. Even though the below scenario looks promising, you’ll receive an error when you call impdp from the lower version over the DB link. I’d assume that the VERSION parameter does not get propagated in a way that convinces the expdp side to export in 11.2.0.4 format.
This will be the error you’ll receive:
$ impdp system/oracle@ftex network_link=sourcedb version=11.2.0.4 tables=tab1 metrics=y exclude=statistics directory=mydir logfile=pdb2.log

Import: Release 11.2.0.4.0 - Production on Wed Aug 14 20:27:52 2019

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

ORA-39001: invalid argument value
ORA-39169: Local version of 11.2.0.4.0 cannot work with remote version of 19.0.0.0.0.
Further Links and Information
- MOS Note:553337.1 – Data Pump Compatibility and Version Changes
- MOS Note:1264715.1 – Master Note for Data Pump
- Data Pump: Time Zone Pitfalls
- Behavior Changes: No symbolic links in Data Pump directories in 18c/19c
- Data Pump: Metrics and Logtime Parameters
- Data Pump: Exclude Statistics
- Parallel Index Creation with Data Pump
Things to know
- Typical Plugin Issues and Workarounds
- The Compatible Pitfall
- The Time Zone Pitfall
- The Component Pitfall
- The Patch Level Pitfall
- Various Pitfalls
- The Fallback Challenge
- The Minimal Downtime Challenge
Related Posts
- Upgrade, plug in, convert (noncdb_to_pdb.sql) – 11.2.0.4 and higher
- Plugin, upgrade, convert (noncdb_to_pdb.sql) – 12.2.0.1 and higher
- Clone via NON$CDB, upgrade, convert – 12.1.0.2 and higher
- Migration with Data Pump
–Mike
Is it “clean” to do a full expdp / impdp?
Will “full” also import any old oracle owned stuff like EXFSYS, OWBSYS,…?
Yes, it is 🙂
Cheers,
Mike
Hi Mike,
Do you have expdp/impdp samples or doc material for a Data Pump migration using transportable tablespaces? I am asking because I have a new 19.7.0 home that I want my 12c user data moved to.
I don’t seem to see a lot of par file samples online.
Thanks
Unfortunately not – but you can find examples for Full Transportable Export Import on my blog as this is the much smoother solution when you’d like to do TTS 🙂
https://mikedietrichde.com/2016/11/03/full-transportable-exportimport-par-file-examples/
Cheers,
Mike
And remember to apply patch 30321076 on the target if you plan to use NETWORK_LINK and have partitioned tables in a source whose version is below 12.2.
Thanks – that is a good hint as it is not obvious. There are one-off patches available on top of all 19c RUs (and even for some RURs), as Data Pump patches don’t get included in RUs and RURs.
Data Pump import over a network link, where the target version is 12.2 or greater and the source version is less than 12.2, could result in the internal error ORA-600 [qesmaGetPamR-NullCtx] when importing a partitioned table.
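If you want to verify whether the one-off is already present in your 19c target home, a quick check like this should do (the OPatch path depends on your environment):

```
$ORACLE_HOME/OPatch/opatch lsinventory | grep 30321076
```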
Thanks
Mike
Hello,
Actually, I have Oracle EBS, DB 10.2.5, and EBS R12.1.3 on Red Hat 5.8, 32-bit.
I would like to upgrade the DB to 19c on a new 64-bit Red Hat platform.
I mean, I will migrate to the 64-bit OS along with upgrading the DB from 10g to 19c.
What is the best approach to achieve this mission?
Can we upgrade directly from 10g to 19c via Data Pump?
As per Oracle, if we go with the normal upgrade, we cannot upgrade directly from 10g to 19c – only upgrade to 11g and then to 19c?
So, I need your advice on the above, taking into consideration the OS migration from 32-bit to 64-bit,
since the higher Oracle database releases do not work with a 32-bit OS.
Thanks in advance.
Hi Ahmed,
this is way too big a step. Please open an SR and check with Oracle Support. From EBS projects I’ve been involved in in the past, I remember that especially the EBS upgrade path requires a lot of tasks. So this is something I can’t give advice on via the blog.
Thanks,
Mike
Hello Mike,
I have been reading your posts on the Oracle Multitenant architecture. I have a quick question and hope you can help me out here or direct me in the right direction. I am pretty new to the Multitenant architecture and do not care about fastest / zero-downtime aspects at this point.
I have Oracle 11g (11.2.0.1) databases that we are planning to migrate to an Oracle 19c (19.0.0.0) Multitenant platform on Exadata. Can you please let me know if the plan that I intend to follow is correct and feasible?
Steps:
————————-
1. Create an empty 19c CDB with Local Undo for PDB option.
2. Take a full export of the source 11g database (excluding the stats).
3. Move the export Dump-files to Target 19 Exadata server.
4. Create the PDB from scratch with the user and roles specified.
5. Create necessary all necessary Tablespaces, Users, Roles, and Profiles within the 19c PDB.
6. Run the import job on the PDB, excluding the default 19c database users, roles, and profiles.
What is your take and opinion on the above approach? Am I missing anything here? Is this even supported/doable? Any other tips and considerations will be highly appreciated and helpful.
Thanks & regards.
Hi Shelley,
I will try to answer your questions:
1) Yes, and a new 19c CDB has always local undo by default, hence, no worries here
2) If your source database is on the same server, I see no deeper reason to use export/import except for the fact that you can do this while the actual database is up and running. But normally I would use AutoUpgrade to complete this step. Only if you really have 11.2.0.1 and not 11.2.0.4 would expdp/impdp be a good option to avoid a double-hop upgrade.
And regarding the further steps, that is fine.
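For reference, a minimal AutoUpgrade config for such a non-CDB-to-PDB move could look roughly like this – the homes, SID, and names below are just placeholders:

```
# db112.cfg – hypothetical example
global.autoupg_log_dir=/home/oracle/autoupgrade
upg1.source_home=/u01/app/oracle/product/11.2.0.4/dbhome_1
upg1.target_home=/u01/app/oracle/product/19/dbhome_1
upg1.sid=DB112
upg1.target_cdb=CDB19
upg1.target_pdb_name=PDB112
```

You would then run it with: java -jar autoupgrade.jar -config db112.cfg -mode deploy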
Cheers,
Mike
Hi Mike,
I have a question for you about the information in this blog entry. If I use datapump export/import to move/upgrade/convert my on-premises 18c non-CDB to an Oracle Cloud Infrastructure 19c pdb, are you saying that I do not need to make a manifest file and run the noncdb_to_pdb.sql script?
Thanks,
Lynne Ray
Hi Lynne,
yes, this is correct.
You ONLY need a manifest file when you “relocate” your PDB. Then, the target needs to learn what belongs to the PDB. But when you export, all is in a dump file (or in multiple dump files), and you will import from it then.
Cheers,
Mike
Thanks, Mike!
Welcome 🙂
Cheers,
Mike