There are several pitfalls when you plug a non-CDB into a CDB environment. I’d like to highlight some of them – and show you potential workarounds as well. This is part of a series of blog posts to make your migration from non-CDB to PDB a bit smoother.
Database Migration from non-CDB to PDB – The Minimal Downtime Challenge
When we start to look at a migration from non-CDB to PDB for an important system, regardless of whether you migrate on premises or to the cloud, reducing the downtime is very important. In this blog post I’d like to highlight the different aspects and pitfalls. You won’t find detailed step-by-step instructions, but you will get an idea of the conceptual exercise.
The Main Challenges
In such scenarios, whether you stay on the same platform (even on the same hardware) or move off to another OS or to the cloud, you will always have to deal with these main downtime factors:
- Same Endianness – Upgrade. Plugin. Convert.
  - Upgrade downtime
  - Plugin downtime
  - Conversion downtime
- Different Endianness – Migration
  - Migration downtime
And of course, you will need to test this beforehand. In the best case, you have a duplicate test environment available. Or you can at least leverage a physical standby database copy to work with. In any case, testing only really makes sense when your test database is a copy of production and not a stripped-down database holding only 10% of your data or containing just the schema information.
Upgrade. Plugin. Convert. Minimal Downtime.
Let us look in detail at the common cases I described in these three blog posts:
- Upgrade, plug in, convert (noncdb_to_pdb.sql) – 11.2.0.4 and higher
- Plugin, upgrade, convert (noncdb_to_pdb.sql) – 12.2.0.1 and higher
- Clone via NON$CDB, upgrade, convert – 12.1.0.2 and higher
and see how we could ensure minimal or close to zero downtime in each of these cases.
Case 1 – Upgrade, plugin, convert
I favor this procedure over the others for the simple reason that we’ll have a proven upgrade fallback in case something goes wrong.
Question no.1 is:
Can I have full production workload while I upgrade my source database?
Of course not, you can’t. Your production database will go into UPGRADE mode. And hence, you’ll encounter downtime, just as with every other database upgrade.
In order to achieve a minimal downtime requirement you will need to run the upgrade on a copy of your source. The most straightforward approach would be to have a physical standby copy, then activate and upgrade the standby, use it to plug in, and finally convert it. But of course you will need to sync with production afterwards. And this can only be ensured with Oracle GoldenGate.
This would be a valid “close to zero downtime” concept.
To my knowledge there are no other options available to reduce downtime in any of the steps.
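For illustration, here is a rough sketch of the plug-in and convert part once the activated standby copy has been upgraded to the target release. The PDB name, the manifest path, and the NOCOPY clause are assumptions for this example; your environment may require COPY/MOVE and file name conversion instead.

```sql
-- On the upgraded former standby (now a standalone non-CDB):
-- generate the XML manifest describing the database
SHUTDOWN IMMEDIATE
STARTUP OPEN READ ONLY
EXEC DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/mydb.xml');
SHUTDOWN IMMEDIATE

-- On the target CDB: check plug-in compatibility first
SET SERVEROUTPUT ON
BEGIN
  IF DBMS_PDB.CHECK_PLUG_COMPATIBILITY(pdb_descr_file => '/tmp/mydb.xml') THEN
    DBMS_OUTPUT.PUT_LINE('Compatible');
  ELSE
    DBMS_OUTPUT.PUT_LINE('Check PDB_PLUG_IN_VIOLATIONS');
  END IF;
END;
/

-- Plug in (NOCOPY keeps the data files in place), then convert and open
CREATE PLUGGABLE DATABASE mydb USING '/tmp/mydb.xml' NOCOPY TEMPFILE REUSE;
ALTER SESSION SET CONTAINER = mydb;
@?/rdbms/admin/noncdb_to_pdb.sql
ALTER PLUGGABLE DATABASE mydb OPEN;
```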
Case 2 – Plugin, upgrade, convert
For the second case as well, the minimal downtime approach looks very similar to case 1 – with the additional downside that you don’t even have a valid fallback for the upgrade part, since you plug in first (and there is no fallback for this operation). Oracle GoldenGate will be the only option to bridge the downtime gap from “plug in” to “upgrade” to “convert”.
And in order to ensure that your production can continue while you run through this process, you can only do it with a copy of your production database.
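The mechanics are essentially the same as in case 1, just in a different order: plug in the (at least 12.2.0.1) non-CDB copy first, then upgrade it inside the CDB, then convert. A minimal sketch, again with assumed names:

```sql
-- Plug in the non-CDB copy (manifest created with DBMS_PDB.DESCRIBE as in case 1)
CREATE PLUGGABLE DATABASE mydb USING '/tmp/mydb.xml' NOCOPY TEMPFILE REUSE;

-- Open the new PDB in UPGRADE mode and upgrade just this container,
-- e.g. from the shell of the target Oracle home:
--   $ORACLE_HOME/bin/dbupgrade -c "MYDB"
ALTER PLUGGABLE DATABASE mydb OPEN UPGRADE;

-- After the upgrade: run the conversion script and open the PDB
ALTER SESSION SET CONTAINER = mydb;
@?/rdbms/admin/noncdb_to_pdb.sql
ALTER PLUGGABLE DATABASE mydb OPEN;
```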
Case 3 – Clone via NON$CDB
The NON$CDB Clone operation leverages a database link. The non-CDB will be copied over this link.
But this case is not much different from case 2. As this is not a clone from a PDB with Local Undo but instead just a copy of a non-CDB, you can’t have workload running on production and keep it in sync while the cloning happens. Hence, the minimal downtime setup looks very similar to case 2, too.
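A minimal sketch of such a remote clone; the database link name, credentials, and TNS alias are placeholders, and the source non-CDB must be at least 12.1.0.2 and opened read only during the copy:

```sql
-- On the target CDB: a database link pointing at the source non-CDB;
-- the connected user needs the CREATE PLUGGABLE DATABASE privilege there
CREATE DATABASE LINK clone_link
  CONNECT TO system IDENTIFIED BY my_password   -- placeholder credentials
  USING 'source_noncdb_tns';

-- Clone the non-CDB over the link (assumes OMF / CREATE_FILE_DEST on the
-- target; otherwise add a FILE_NAME_CONVERT clause)
CREATE PLUGGABLE DATABASE mydb FROM NON$CDB@clone_link;

-- If the source release is lower than the CDB release, upgrade the new PDB
-- first (see case 2), then run the conversion script and open it
ALTER SESSION SET CONTAINER = mydb;
@?/rdbms/admin/noncdb_to_pdb.sql
ALTER PLUGGABLE DATABASE mydb OPEN;
```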
Migration. Minimal Downtime.
For the other cases, using Data Pump, Transportable Tablespaces, or a combination of both called Full Transportable Export/Import, the scenario always looks the same: you provision a PDB first, then you migrate your database (or schemas) over.
But of course, none of these techniques offers you embedded synchronization with the production database. Hence, for minimal downtime, Oracle GoldenGate is necessary – but this is no different from any other migration approach.
Regardless of whether you migrate to a non-CDB or a PDB, you can decrease the downtime, especially for large databases, with RMAN Incremental Backups. But in order to achieve downtime in the range of only a few minutes or even less, Oracle GoldenGate is the only viable approach.
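As one example, a Full Transportable Export/Import sketch run straight over a database link, so no dump file is needed. The PDB name, the link name, and the data file path are assumptions, and the data files themselves still have to be copied (or restored from RMAN incremental backups) to the target before the import:

```sql
-- On the target CDB: provision the empty PDB first
CREATE PLUGGABLE DATABASE mypdb ADMIN USER pdbadmin IDENTIFIED BY pdbadmin;
ALTER PLUGGABLE DATABASE mypdb OPEN;

-- On the source non-CDB: every tablespace being transported must be read only
ALTER TABLESPACE users READ ONLY;

-- Full Transportable Import into the PDB over a database link
-- (VERSION=12 is required when the source is an 11.2.0.3/11.2.0.4 database):
--   impdp system@mypdb full=y transportable=always network_link=source_link \
--         version=12 transport_datafiles='/u02/oradata/MYPDB/users01.dbf' \
--         metrics=y logtime=all logfile=ftex_imp.log
```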
I used Oracle 10.1.0.2 as a source example as this is the lowest possible version allowing Data Pump or cross-platform Transportable Tablespaces to be used. But of course I hope that nobody still uses Oracle 10.1.0.2 – and RMAN Incremental Backups would require a higher source version (10.2.0.3 with patches, as far as I remember). And of course, using Oracle GoldenGate in such a 10.1.0.2 setup would require a lot of extra work.
Summary
You’ll see that in all Upgrade/Plugin/Convert cases, no matter in which order you approach them, you will need a physical standby database to decouple from production so that your workload can continue.
And you will need Oracle GoldenGate in order to capture changes from the point where you defer the log transport, just before activating the standby. At the very end, when the conversion to PDB is completed, you start the delivery and sync with your production database.
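A rough sketch of that decoupling point, assuming the standby is served by log_archive_dest_2 on the primary; the exact destination number and the GoldenGate positioning are environment-specific:

```sql
-- On the primary: note the current SCN as the starting point for the
-- GoldenGate Extract, then stop shipping redo to the standby
SELECT current_scn FROM v$database;
ALTER SYSTEM SET log_archive_dest_state_2 = DEFER SCOPE=BOTH;

-- On the standby: stop recovery and activate it as a standalone database,
-- which then gets upgraded, plugged in, and converted as described above
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE ACTIVATE STANDBY DATABASE;
ALTER DATABASE OPEN;

-- Extract on the primary captures everything from the noted SCN onwards;
-- once the new PDB is ready, Replicat applies the backlog and keeps it in sync.
```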
Using Oracle GoldenGate has one big advantage: since your production remains untouched until you start the workload on the new PDB, you can reverse the OGG processes and sync all changes from the new PDB back into the old non-CDB. This is an ideal fallback scenario.
For the “real” migration scenarios with Data Pump, Transportable Tablespaces (xTTS), Full Transportable Export/Import (FTEX) or even the old export/import, only Oracle GoldenGate can ensure a real minimal downtime approach. You may be able to reduce downtime significantly with the use of RMAN Incremental Backups for xTTS or FTEX, but the metadata needs to be rebuilt. Depending on the complexity of your database this may take a while.
Typical Plugin Issues and Workarounds
- Typical Plugin Issues and Workarounds
- The Compatible Pitfall
- The Time Zone Pitfall
- The Component Pitfall
- The Patch Level Pitfall
- Various Pitfalls
- The Fallback Challenge
- The Minimal Downtime Challenge
Related Posts
- Upgrade, plug in, convert (noncdb_to_pdb.sql) – 11.2.0.4 and higher
- Plugin, upgrade, convert (noncdb_to_pdb.sql) – 12.2.0.1 and higher
- Clone via NON$CDB, upgrade, convert – 12.1.0.2 and higher
–Mike
Hi,
The conversion from non-CDB to PDB seems to be the quick but dirty way to go for multitenant.
The classical expdp/impdp from non-CDB to PDB is the slow but clean way to do this.
Would you ever convert a non-CDB to a PDB in case there are no time or size constraints?
Cheers, Markus
Hi Markus,
Seriously, I have no idea why you speak of a quick/dirty vs. a clean way.
Actually 99% of the customers I work with go the way you describe as quick/dirty – even for banking or telco or EBS systems. SAP does the same.
And btw, Swiss insurers go this route, too 🙂
Cheers,
Mike
Hi,
I’m not sure whether the conversion script will really erase all old-fashioned non-CDB fragments. Will I really get an identical structure regarding users / roles / privileges / objects / etc. compared to the creation of a new PDB?
Cheers, Markus
Yes, you will.
Otherwise it is a bug.
Hope this helps – and of course, you can still use expdp/impdp if you are not fully convinced. But with larger databases you’ll be in pain then, I’d guess.
Cheers,
Mike