Database Migration from non-CDB to PDB – Various Pitfalls

There are several pitfalls when you plug a non-CDB into a CDB environment. I’d like to highlight some of them – and show you potential workarounds as well. This is part of a series of blog posts to make your migration from non-CDB to PDB a bit smoother.

In all the previous blog posts of this series I tried to explain specific pitfalls and how you can work around them. This article collects the “leftovers” – the minor issues and pitfalls which you may not see too often. None of them is worth a full blog post on its own, or I blogged about it in the past already. This blog post may grow over time – I will add a date to each headline indicating when I blogged about the issue or when I added it here.

Different Block Sizes when you plugin – May 8, 2018

Roy came across this issue first, and I have seen it at a large US customer recently as well.

When you plug in your non-CDB and make it a PDB, or when you unplug and plug PDBs between different CDBs, all is fine as long as you use the standard (or a common) DB_BLOCK_SIZE across your entire landscape. DB_BLOCK_SIZE=8k is the default.
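
To see up front whether you are affected, you can check which block sizes the source database actually uses – a quick dictionary query, run in the non-CDB (or PDB) you plan to plug in:

-- List all block sizes in use in the source database
SELECT DISTINCT block_size FROM dba_tablespaces;

-- Compare with the target CDB's default block size
SELECT value FROM v$parameter WHERE name = 'db_block_size';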

But as soon as you have a mix – for instance, your source being a DWH with 16k blocks which you’d like to plug in – you need to allocate a matching cache in the CDB first:

alter system set DB_16k_CACHE_SIZE=100M scope=both;
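
Afterwards you can verify that the extra cache really got allocated – a quick check via v$sga_dynamic_components:

-- Confirm the 16k buffer cache has been allocated in the CDB
SELECT component, current_size
  FROM v$sga_dynamic_components
 WHERE component = 'DEFAULT 16K buffer cache';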

Otherwise the plugin operation will fail. The plugin compatibility check will report this ERROR:

CON_ID TYPE      MESSAGE                                              STATUS
------ --------- ---------------------------------------------------- ---------
     1 ERROR     system tablespace block size(16384) does not match   PENDING
                 configured block sizes, plug in is not allowed

     1 WARNING   CDB parameter db_block_size mismatch: Previous 16384 PENDING
                 Current 8192
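
For reference, this is the kind of output you get from the plugin compatibility check and the PDB_PLUG_IN_VIOLATIONS view. A minimal sketch of running the check – the manifest path /tmp/mypdb.xml and the name MYPDB are placeholders you'd replace with your own:

SET SERVEROUTPUT ON
DECLARE
  compatible BOOLEAN;
BEGIN
  -- /tmp/mypdb.xml and MYPDB are placeholders for your manifest and PDB name
  compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
                  pdb_descr_file => '/tmp/mypdb.xml',
                  pdb_name       => 'MYPDB');
  DBMS_OUTPUT.PUT_LINE('Compatible? ' ||
                       CASE WHEN compatible THEN 'YES' ELSE 'NO' END);
END;
/

-- Afterwards, inspect the details of any reported issues
SELECT con_id, type, message, status
  FROM pdb_plug_in_violations
 WHERE status <> 'RESOLVED'
 ORDER BY time;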

Please read the detailed blog post for more information.

A Health Check may not be a bad idea – July 29, 2019

Today one of our top banking customers alerted me because their plugin from a lower version into a higher version – which required an upgrade – failed with an ORA-600 [qespcNpSC:objnotfound]. As this issue is not exclusive to unplug/plug/upgrade but can also happen when you plug in a non-CDB, as I showed in Plugin/Upgrade/Convert and in Clone via NON$CDB, Upgrade, Convert, I’d like to highlight it here.

The root cause of the issue this customer saw may have been an old bug introduced in the Oracle 11g days. Back then, DEFERRED_SEGMENT_CREATION became enabled by default. Roy and I have recommended for many years to set DEFERRED_SEGMENT_CREATION = FALSE unless you have a clear reason for keeping it TRUE – for instance, an application which creates tons of empty objects which may never be used unless all of the app’s modules are enabled. We saw Data Pump contention issues with it. And we saw really bad performance issues in OLTP systems when the first row gets inserted: the entire segment creation now happens delayed – and potentially at exactly the wrong time.
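
If you’d like to switch the parameter off and clean up objects which still have no segment, a minimal sketch could look like this – DBMS_SPACE_ADMIN.MATERIALIZE_DEFERRED_SEGMENTS has been available since the 11.2 days, and SCOTT/EMP_EMPTY below are placeholder names:

-- Disable deferred segment creation for newly created objects
ALTER SYSTEM SET deferred_segment_creation = FALSE SCOPE=BOTH;

-- Find tables which still have no segment allocated
SELECT owner, table_name
  FROM dba_tables
 WHERE segment_created = 'NO';

-- Materialize the deferred segments for one table (placeholder names)
EXEC DBMS_SPACE_ADMIN.MATERIALIZE_DEFERRED_SEGMENTS('SCOTT', 'EMP_EMPTY');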

The remedy for the banking customer finally was fixing the inconsistencies in TAB$ and IND$. The PDBs could be upgraded afterwards, and all is good now.

But what revealed the corruption was hcheck.sql, a script you can download from MOS. It checks for known dictionary corruptions and gives advice on what to do – and it detected this one as well. Hence, for your really important databases, and especially before you either plug in a non-CDB or unplug/plug/upgrade, you may want to run hcheck.sql once or twice a year.
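
Running it is straightforward – a sketch, assuming you downloaded hcheck.sql from MOS Note 136697.1 (the exact invocation may vary between script versions):

-- Connect as SYSDBA and run the script; in a Multitenant environment
-- run it in CDB$ROOT and in each PDB you care about
sqlplus / as sysdba
SQL> @hcheck.sql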

–Mike
