There are several pitfalls when you plug in a non-CDB into a CDB environment. I’d like to highlight some of them – and show you potential workarounds as well. This is part of a series of blog posts to make your migration from non-CDB to PDB a bit smoother.

Database Migration from non-CDB to PDB – Various Pitfalls
In all the previous blog posts of this series I tried to explain specific pitfalls and how you can work around them. This article collects the “leftovers”: the minor issues and pitfalls which you may not see too often. None of them is worth a full blog post on its own, or I blogged about it in the past already. This post may grow over time; I will add a date to each headline showing when I blogged about the issue or when I added it here.
Different Block Sizes when you plug in – May 8, 2018
Roy came across this issue first; I have seen it at a large US customer recently as well.
When you plug in your non-CDB and make it a PDB, or when you unplug and plug PDBs between different CDBs, all is fine as long as you use the standard (or at least a common) DB_BLOCK_SIZE across your entire landscape. DB_BLOCK_SIZE=8k is the default.
But as soon as you have a mix, for instance your source being a DWH with 16k blocks which you’d like to plug in, you need to allocate a matching buffer cache in the CDB first:
alter system set DB_16k_CACHE_SIZE=100M scope=both;
Otherwise the plug-in will fail. The plug-in compatibility check will report this ERROR:
CON_ID TYPE      MESSAGE                                              STATUS
------ --------- ---------------------------------------------------- ---------
     1 ERROR     system tablespace block size(16384) does not match   PENDING
                 configured block sizes, plug in is not allowed
     1 WARNING   CDB parameter db_block_size mismatch: Previous 16384 PENDING
                 Current 8192
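For completeness, this is how you would typically run the compatibility check in the target CDB and look at the findings. Treat it as a sketch only: the XML manifest path and the PDB name are placeholders.
SET SERVEROUTPUT ON
DECLARE
  is_compatible BOOLEAN;
BEGIN
  -- Check the unplugged database's XML manifest against this CDB
  is_compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
                     pdb_descr_file => '/tmp/mydb.xml',
                     pdb_name       => 'MYPDB');
  DBMS_OUTPUT.PUT_LINE(CASE WHEN is_compatible THEN 'Compatible' ELSE 'Not compatible' END);
END;
/
-- The detailed findings, including the block size ERROR shown above, end up here
SELECT name, type, message, status
  FROM pdb_plug_in_violations
 WHERE name = 'MYPDB'
 ORDER BY time;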
Please read the detailed blog post for more information.
A Health Check may not be a bad idea – July 29, 2019
Today one of our top banking customers alerted me because their plug-in from a lower version to a higher version (which required an upgrade) failed with an ORA-600 [qespcNpSC:objnotfound]. As this issue is not exclusive to unplug/plug/upgrade but can also happen when you plug in a non-CDB, as I showed in Plugin/Upgrade/Convert and in Clone via NON$CDB, Upgrade, Convert, I’d like to highlight it here.
The root cause for the issue this customer saw may have been an old bug introduced back in the Oracle 11g days. At that time, DEFERRED_SEGMENT_CREATION got enabled by default. Roy and I have recommended for many years to set DEFERRED_SEGMENT_CREATION = FALSE unless you have a clear reason for keeping it TRUE, for instance an application which creates tons of empty objects that may never be used unless all of the app’s modules are enabled. We saw Data Pump contention issues with it. We saw really bad performance issues in OLTP systems when the first row gets inserted, because the entire segment creation work now happens delayed, and potentially at exactly the wrong time.
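If you want to follow that recommendation, the change itself is a one-liner. Keep in mind it only affects objects created from now on; existing deferred segments are not materialized by it.
-- Disable deferred segment creation for newly created objects
ALTER SYSTEM SET deferred_segment_creation = FALSE SCOPE=BOTH;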
The remedy for the banking customer finally was fixing inconsistencies in TAB$ and IND$. The PDBs could be upgraded afterwards, and all is good now.
But what revealed the corruption was hcheck.sql. hcheck.sql is a script you can download from MOS. It checks for known dictionary corruptions and gives advice on what to do, and it detected this one here as well. Hence, especially when you would like to plug in a non-CDB, or when you unplug/plug/upgrade, you may want to run hcheck.sql on your most important databases once or twice a year.
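Running it is straightforward; a sketch, assuming you downloaded the current version of the script from the MOS note below into your working directory:
-- Connect as SYSDBA and run the script in the database (or PDB) you want to verify
-- $ sqlplus / as sysdba
@hcheck.sql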
- MOS Note: 136697.1 – hcheck.sql – Script to Check for Known Problems in Oracle8i, Oracle9i, Oracle10g, Oracle 11g and Oracle 12c
- Parameter Recommendations for Oracle Database 12c – Part II – March 8, 2016
Typical Plugin Issues and Workarounds
- Typical Plugin Issues and Workarounds
- The Compatible Pitfall
- The Time Zone Pitfall
- The Component Pitfall
- The Patch Level Pitfall
- Various Pitfalls
- The Fallback Challenge
- The Minimal Downtime Challenge
Related Posts
- Upgrade, plug in, convert (noncdb_to_pdb.sql) – 11.2.0.4 and higher
- Plugin, upgrade, convert (noncdb_to_pdb.sql) – 12.2.0.1 and higher
- Clone via NON$CDB, upgrade, convert – 12.1.0.2 and higher
–Mike
Mike,
Can you please check on the USE_LARGE_PAGES parameter in 19c? By default, this parameter has an undocumented value of AUTO_ONLY, and I found that an upgrade from even 18c to 19c kept failing with an ORA-27102 error. The server had 190 GB free memory and kernel parameters were configured to use up to 227 GB memory. The database only used 30 GB. There were 16k large pages configured and other databases had exhausted them. In 18c, if that happens, the database would fall back to using standard memory pages. For some reason, 19c did not. That’s when I looked at USE_LARGE_PAGES and found AUTO_ONLY. There is nothing in the documentation or MOS on this setting. Then I changed the value to FALSE and got a successful upgrade, but I am still curious about this undocumented value.
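For reference, this is roughly what I looked at and the workaround I applied; the parameter is not dynamic, so the change needs SCOPE=SPFILE and a restart:
-- Check the current setting and whether it was explicitly set
SELECT name, value, isdefault
  FROM v$parameter
 WHERE name = 'use_large_pages';
-- Workaround I used until the AUTO_ONLY behavior is clarified
ALTER SYSTEM SET use_large_pages = FALSE SCOPE=SPFILE;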
Thanks,
Arun
Hi Arun,
I informed our doc writer – but please open an SR for this one as well. I can confirm that this setting exists, and it seems to be standard on Exadata.
But I don’t know the reason.
Thanks, Mike
SR 3-20689936911
Thanks Arun – I see what the engineer wrote 🙁
In parallel, I informed the doc writers already. It seems that the setting was not documented on purpose, which does not make sense to us as it is a standard parameter and not an underscore one. Stay tuned …
Cheers,
Mike
Mike,
Another question. What is _ipddb_enable=TRUE? I cannot find anything on MOS but it seems to be set in every 18c/19c database by default. The preupgrade checks complain about this parameter. If I am upgrading from 18c to 19c, should I unset this parameter?
Thanks,
Arun
Arun,
I can only tell you what the description says:
_ipddb_enable ==> Enable IPD/DB data collection
It was FALSE in 11.2.0.4, TRUE in all 12.1 versions, and it is FALSE by default in every 12.2 database (incl. 18c and 19c).
Meaning, it shouldn’t be there in your environments. At least, it is not there in mine.
You may need to check with Support if there’s a special use case or why this is TRUE in your environments whereas the default is FALSE since 12.2.0.1.
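If you want to double-check the value and the description in your own environment, the usual query for underscore parameters works (run as SYS; just a sketch):
-- Name, current value and description of the hidden parameter
SELECT a.ksppinm  AS name,
       b.ksppstvl AS value,
       a.ksppdesc AS description
  FROM x$ksppi  a,
       x$ksppcv b
 WHERE a.indx = b.indx
   AND a.ksppinm = '_ipddb_enable';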
Cheers,
Mike
Mike,
What happens to the built-in users from the 11g database that I upgraded and plugged into the 19c database? I noticed that now there are two of each. One set is what was created when the CDB was created; the second set is from the 11g database. One set has INHERITED=YES, the CON_ID of the PDB and the creation date of when they were created in the 11g database. The other set has INHERITED=NO, the CON_ID of CDB$ROOT and the creation date of when the CDB was created. Both sets of users are common users. The SYS/SYSTEM users from the 11g database lose their super powers and passwords. Are these users preserved for their objects/grants etc.?
Thanks,
Arun
Hi Arun,
With “built-in users” do you mean the “Oracle supplied user schemas” such as SYS, SYSTEM, OLAPSYS, WMSYS etc.?
Are you saying you have two of each now in CDB_USERS, one of each for instance with CON_ID=3, and the other one (the twin brother or sister) with CON_ID=1?
Thanks,
Mike
Yes, the Oracle supplied user schemas. I have two SYS, two SYSTEM, two DBSNMP, two AUDSYS, two CTXSYS and so on. One set of Oracle supplied users have CON_ID=3 (the PDB CON_ID) and the other set has CON_ID=1. This is the query I ran:
select * from cdb_users where oracle_maintained='Y' order by username;
Thanks,
Arun
Hi Arun,
I believe this is expected, but please open an SR and check with Oracle Support.
As I’m not responsible for Oracle Multitenant, I can’t give you an exact explanation.
Cheers,
Mike
Thanks Mike.
SR 3-20783525631 : Duplicate Oracle supplied users after plug-in
Thanks Arun – as I was out for a while, did you receive a satisfying explanation yet?
Cheers,
Mike
Mike,
I did not get far with support, but the 12.1 Concepts manual, chapter 18, “Overview of Common and Local Users in a CDB”, has excellent information.
https://docs.oracle.com/database/121/CNCPT/cdblogic.htm#CNCPT89257
What I was able to piece together is that when I plug in a non-CDB as a PDB, its Oracle administrative user accounts turn into common user accounts. We will see these accounts in the CDB and in the PDBs, which explains why they appear multiple times. So, if we have n PDBs plugged into a CDB, we will have n+1 SYS/SYSTEM/DBSNMP etc. users.
These accounts are exempt from the c## or C## naming convention. The password for these accounts becomes the same as the password for these accounts in the CDB, so they lose their passwords from the non-CDB database. We can control the privileges of these accounts for each PDB, but not for the CDB. So, if I give out the password for the SYSTEM account of a PDB, that account will have DBA privileges in the CDB, but can have fewer privileges in any of the PDBs. To locally administer a given PDB, I can instead create local user accounts, which cannot administer any other PDB in the CDB.
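The COMMON and ORACLE_MAINTAINED columns in CDB_USERS show this directly; a small variation of the query I ran earlier:
-- One row per container for every Oracle-maintained (now common) user
SELECT username, common, con_id
  FROM cdb_users
 WHERE oracle_maintained = 'Y'
 ORDER BY username, con_id;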
Thanks,
Arun
Hi Arun,
thanks for the update. I think I need to dig a bit deeper and make my passwords not all “oracle” in my lab environments 😉
Cheers,
Mike
Mike,
There is another pre-upgrade requirement that does not show up anywhere in the database checks. MOS note 2542082.1 mentions that patch 28553832 needs to be applied to the 11.2.0.4 DB home prior to the upgrade. I was under the impression that this patch is more relevant to the 11g clusterware, so I opened SR 3-21108777451. Support has confirmed that this patch needs to be applied to the 11g DB home. The need to apply this patch to 12c GI shows up in the GI pre-upgrade checks. Is this something you can double-check? This will increase the time to upgrade.
Thanks,
Arun
Hi Arun,
thanks for the hint. But we can’t recommend this here: preupgrade checks the database – NOT the GI home. Hence, we can’t even check or foresee if this patch is present in the GI home. BUT I wonder if OraCHK wouldn’t have logged this (I’d assume it would).
Thanks,
Mike
Hi Mike/Arun,
For GI upgrades, CVU will check for patch 28553832. For database homes (11.2.0.4, 12.1.0.2, 12.2.0.1, 18c) we also need this patch before upgrading to 19c. The patch is available for all home versions starting with the April 2019 quarterly patches.
Hi Sebas,
but this is necessary only if the Docker RPM is installed, right?
==> Clusterware processes like OCSSD may report an error if the Docker Engine RPM is installed:
CRS-1726: Process failed to run in real-time priority.
Cheers,
Mike
Right, on Exadata Docker is installed by default, but this patch is required for RAC in general. The patch must be applied to the GI and DB homes before upgrading GI to 19c, and CVU is checking this. So, by the time we are upgrading the DB (after GI), this patch should already be installed in the DB home, otherwise the GI upgrade will fail. I’m not sure if AU should check this again; neither dbua nor preupgrade.jar are doing it.
Thanks for the clarification. I had opened an SR last year and support did ask me to install this patch on both the GI and DB homes, even though GI was 12.1 and the DB was 11g. We have Exadata. So, prior to the upgrade, I am installing this patch. It only takes a few extra minutes. cluvfy only checks for this patch for the 12.1 GI home. For the 11g DB home, nothing checks for this patch: not hcheck, preupgrade.jar, dbupgdiag, nor dbua. But it is always good to get a confirmation from Oracle directly that the patch needs to be applied to the DB home also. Much appreciated.
Hi Mike — I like your info a lot. What do you do for instance memory when migrating a non-CDB into a CDB as a PDB? In other words, how do you figure out how much to increase it in the CDB?
Hi Andrew,
this is very hard to say, especially since it depends heavily on the workloads and such.
Usually you will see a very good ratio of leveraging resources, but again, it depends on the workloads.
What we usually do is add one PDB, then another …
And when you size the SGA, we assume less in sum than for all the databases before – but “less” is a weak factor.
What you need to keep an eye on is the PGA: it may well consume more memory than before, since you now have more sessions on one single database and the CDB “owns” both SGA and PGA.
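As a very rough starting point, you can capture the current footprint of every source non-CDB and use the sum as an upper bound for the consolidated CDB. A sketch only, not a formula:
-- Current SGA size of this source database, in MB
SELECT ROUND(SUM(value)/1024/1024) AS sga_mb
  FROM v$sga;
-- PGA this source database actually allocates, in MB
SELECT name, ROUND(value/1024/1024) AS mb
  FROM v$pgastat
 WHERE name IN ('total PGA allocated', 'maximum PGA allocated');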
Cheers,
Mike