Can you apply Bundle Patches to SE2 databases?

Can you apply Bundle Patches (BP) to Standard Edition (SE2) databases?

This question was raised by Adam (thanks!) in the comments section of the blog. A few days later I read it on the internal mailing lists as well. Adam was referring to this sentence in the BP READMEs since October 2015:

“In this document Oracle Database Home refers to Enterprise Edition. Standard Edition Database software installs should install Database PSU.”

Why was this sentence added?

There were issues where a Proactive Bundle Patch accidentally enabled functionality on SE2 databases – something which was fixed later on.

Is this still valid?

No, it isn’t anymore – and future READMEs will reflect this. So YES, you can apply (and you should IMHO) Proactive Bundle Patches in Oracle Database 12c (12.1 and 12.2) to Standard Edition 2 (SE2) databases.
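As a minimal sketch – not an official procedure – applying a Proactive Bundle Patch to an SE2 home follows the same two steps as on Enterprise Edition: binary patching with OPatch, then the SQL changes via datapatch. The patch directory below is a placeholder:

```
$ cd /u01/patches/<bundle_patch_directory>     # placeholder path
$ $ORACLE_HOME/OPatch/opatch apply

# after restarting the database(s) running from this home:
$ cd $ORACLE_HOME/OPatch
$ ./datapatch -verbose
```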

More information?

SE2 – Some questions, some answers …



Adaptive Execution Plans – not available in Oracle SE2

I received an interesting question on the blog the other day:

“Are Adaptive Plans an Enterprise Edition only feature?”

What does the License Guide say?

The Oracle Database 12.2 License Guide says:


Adaptive Features: Enterprise Edition only – from the Oracle 12.2 License Guide

What does the linked page say?

This is the link the customer came across – also saying that Adaptive Execution Plans are an Enterprise Edition only feature:


Adaptive Execution Plans – Enterprise Edition only

What does this mean for SE2?

That is the question Adam had. Does this impact SE2? Will I get an error or – even worse – a license violation if I have optimizer_adaptive_plans=TRUE – which is the default?

No, none of it will happen.

First of all, the information displayed is correct:
Adaptive Execution Plans are an Enterprise Edition only feature.

But there’s no need to tweak any parameters or fear license issues. The feature is simply turned off in our code for Standard Edition 2 (SE2) regardless of how the parameter optimizer_adaptive_plans has been set.
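A quick way to convince yourself, sketched below: the parameter is visible and settable on SE2, but a cursor will never be flagged as adaptive there. V$SQL.IS_RESOLVED_ADAPTIVE_PLAN exists since Oracle 12.1; the SQL_ID is a placeholder:

```sql
SQL> show parameter optimizer_adaptive_plans

SQL> -- On EE an adaptive cursor shows 'Y' here; on SE2 it never will
SQL> SELECT sql_id, child_number, is_resolved_adaptive_plan
  2    FROM v$sql
  3   WHERE sql_id = '&sql_id';
```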

Thanks Nigel for shedding some light!

Further Information?

SE2 – Some questions, some answers …



noncdb_to_pdb.sql – why does it take so long?

What is noncdb_to_pdb.sql?

The script noncdb_to_pdb.sql gets run only once in the life span of your database. It gets executed when a regular Oracle database (non-CDB) is moved to become part of a Single-/Multitenant deployment as a pluggable database (PDB).

You will never run this script again for this database. The script is a sanity script which cleans up and changes things in a non-CDB, and finally unites it irreversibly with the CDB.
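For context, a minimal sketch of the usual plugin sequence in which noncdb_to_pdb.sql is the final step (paths and the PDB name are placeholders):

```sql
-- In the non-CDB, opened read-only, generate the manifest:
SQL> EXEC DBMS_PDB.DESCRIBE(pdb_descr_file => '/home/oracle/ncdb.xml');

-- In the CDB, plug it in and run the conversion script exactly once:
SQL> CREATE PLUGGABLE DATABASE ncdb USING '/home/oracle/ncdb.xml' NOCOPY TEMPFILE REUSE;
SQL> ALTER SESSION SET CONTAINER = ncdb;
SQL> @?/rdbms/admin/noncdb_to_pdb.sql
```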

How long does it take to complete?

Runtime varies a lot. It depends mainly on the number of objects the script has to adjust and map to the CDB's dictionary – and on the time the forced recompilation takes.

16:21:48 SQL> DECLARE
16:21:48   2     threads pls_integer := &&1;
16:21:48   3  BEGIN
16:21:48   4     utl_recomp.recomp_parallel(threads);
16:21:48   5  END;
16:21:48   6  /

PL/SQL procedure successfully completed.
Elapsed: 00:10:12.67

16:32:01 SQL>
16:32:01 SQL> SELECT dbms_registry_sys.time_stamp('utlrp_end') as timestamp from dual;

COMP_TIMESTAMP UTLRP_END              2017-06-08 16:32:01

1 row selected.

Can you speed it up? Can it be restarted?

Based on my experience you can’t speed it up except for adding CPU power for the recompilation. Therefore test the runtime upfront if you depend on certain downtime restrictions and rules.

In Oracle Database 12.1 the script can't be restarted or resumed. Once it fails, the entire setup has to be restored and the process needs to be restarted from the beginning. That's why a backup is absolutely essential.

Since Oracle Database 12.2 the script is supposed to be resumable – and I did check back with my Support colleagues in Romania who tested this already with positive results. But the overall runtime hasn't been improved.

More information?

Please find a complete upgrade followed by a conversion into a PDB here:

Upgrade to Oracle Database 12.2 and Converting it into a 12.2 PDB (plugin)


Thank you, United Airlines

Well, we all have read a lot of stories about United Airlines in the past months. And my experience with United isn’t that good either.

In the past, UA managed to let my luggage sit for 5 days in London while I was already in Redwood Shores, and I spent roughly 10 hours in their terrible telephone system waiting for anybody to track my luggage. Then UA let my luggage sit for a day and a bit in Frankfurt while I was staying in Pleasanton, CA. And last weekend all my team mates flying in from Mexico to Nashua via Houston, TX, saw their luggage arrive a day later – despite the 3-hour layover they had.

I'm currently at Boston Logan Airport. I was there in time. I rarely fly domestic in the US – but I took the same connection on my way to Orlando last year in August.

This is from last year’s August:


United Airlines UA 363 on Aug 16, 2016

And this is from today:


United Airlines UA 363 on June 6, 2017

Does it ring a bell? Do you see the similarities?

Yes, of course, today there's light rain in Boston. And there's some runway construction going on which, I'd guess, UA learned about just around noon today.

To me this seems to be a generic issue. And I’m not even speaking about serving the most unhealthy food (potato chips!) on the planet as a regular meal in First.

Yes, the world has bigger problems than that. But it still annoys me when I realize that an airline does not serve its customers well, but operates mostly on increasing its margin by all means.


PS: Update a few minutes later – now we are at over 3 hours. I wait for the “canceled” message …


United Airlines – delayed over 3 hrs for a 1.5 hours flight

PPS: Even better, now United is closing the Club at 21:30h and kicking out all passengers waiting for their flights to depart. That is such a wonderful service … amazing … at least the plane which is hopefully going to take me to Washington Dulles is taxiing now at Dulles airport. This will end up in an almost 4-hour delay. United has no replacement jets, no replacement crews, nothing. Hilarious …

New version of preupgrade.jar (build 5) is available

There's a new and improved version of the preupgrade.jar (build 5 – May 2017) available for download from My Oracle Support:

The same note will also get you access to the most recent builds of preupgrd.sql for upgrades to Oracle Database 12.1 and utlu112i.sql for the unlikely event you'll have to upgrade a database to Oracle Database 11.2.

Please always exchange the version installed by default with the version we offer via MOS Note 884522.1, as the downloadable version is usually much newer and improved over the one deployed with the installation image.


MOS Note:884522.1 – Download always the newest preupgrade.jar

The most important changes over the previous version are:

  • Adds UTF-8 character set capabilities
  • Adds a check to see if CONCURRENT STATISTICS are set when RESOURCE MANAGER IS OFF
  • Adds a block of code which will be run if the DEPEND_USR_TABLES check is run after the database has been upgraded
  • A new autofixup function is also added, which will automatically be run by the postupgrade_fixups.sql script
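A typical invocation – assuming you run it with the java shipped in the target home and want the output printed to the terminal:

```
$ $ORACLE_HOME/jdk/bin/java -jar preupgrade.jar TERMINAL TEXT
```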

Find an example on how to use preupgrade.jar here:

Upgrade to Oracle Database 12.2 and Converting it into a 12.2 PDB (plugin)


Alert.log: New timestamp format in Oracle 12.2

Timestamp Format Change

There's an interesting change in the alert.log since Oracle Database 12.2: the format of the timestamps has changed.

If you compare the timestamps in Oracle 12.1 vs Oracle 12.2 you’ll recognize the change immediately:

Oracle 12.1:

Wed Feb 08 15:39:29 2017
Database mounted in Exclusive Mode
Lost write protection disabled
Wed Feb 08 15:39:29 2017
Ping without log force is disabled.
Wed Feb 08 15:39:29 2017
Starting background process TMON

Oracle 12.2:

2017-02-08T15:39:29.000000+01:00
Database mounted in Exclusive Mode
Lost write protection disabled
Ping without log force is disabled:
  instance mounted in exclusive mode.
Endian type of dictionary set to little

Revert to the old format?

In some cases you may wish to revert to the old display format, for instance if you extract information from the alert.log and rely on the old timestamp format. In this case please use the init.ora/spfile parameter UNIFORM_LOG_TIMESTAMP_FORMAT. The default setting is TRUE. Once you switch it to FALSE, the timestamps in the alert.log revert dynamically to the pre-Oracle-12.2 format.

ALTER SYSTEM SET uniform_log_timestamp_format=FALSE SCOPE=BOTH;
Shared IO Pool defaulting to 64MB. Trying to get it from Buffer Cache for process 20948.
Dumping current patch information
Patch Id: 25862693
Patch Description: DATABASE BUNDLE PATCH: (25862693)
Patch Apply Time: 2017-05-19T17:49:59+02:00
Bugs Fixed: 23026585,24336249,24385983,24923215,24929210,24942749,25036474,
Mon May 29 14:37:08 2017
ALTER SYSTEM SET uniform_log_timestamp_format=FALSE SCOPE=BOTH;
Mon May 29 14:37:25 2017
Thread 1 advanced to log sequence 16 (LGWR switch)
  Current log# 1 seq# 16 mem# 0: /u02/oradata/CDB2/redo01.log

Oracle Clusterware, ASM, Grid Infrastructure?

Luckily Anil Nair, our RAC PM, has blogged about the influence of this new setting on the Clusterware and ASM logs:

Further Information


PS: Thanks to Roderick for CC:ing me and for filing a doc bug 26145504 to get this added into the next revision of the Upgrade Guide as Behavior Change.

Does DBCA execute “datapatch” in Oracle 12.2?

Does the Database Configuration Assistant (DBCA) execute “datapatch -verbose” in Oracle 12.2 when you create a new database?

I was curious whether this misbehavior from the previous release has been fixed. Good news: it got fixed with Oracle Database 12.2. DBCA now executes "datapatch -verbose" automatically when you create a new database. In the previous release this did not happen – you had to execute it manually after creating a new database.

DBCA 12c and "datapatch" – things to know

Quick Test Scenario

I applied the most recent May 2017 BP for Oracle (see here: Oracle Bundle Patch DBBP on Linux x86-64 is available) and created a fresh database with DBCA.


Database Configuration Assistant (DBCA) – Oracle

Please make sure you ALWAYS create a CUSTOM DATABASE. If you "create" a DWH or OLTP database, DBCA just copies a precreated database. It will have all options in it, and you'll have to execute "datapatch -verbose" manually.

I did monitor it afterwards with check_patches.sql:


The REGISTRY$HISTORY view does not seem to be updated correctly. This is fixed via bug 25269268 (DBA_REGISTRY_HISTORY NEEDS TO INCLUDE BP/PSU APPLY AND ROLLBACK ROWS).
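Instead of REGISTRY$HISTORY you can rely on DBA_REGISTRY_SQLPATCH, which datapatch maintains directly – presumably what check_patches.sql looks at as well:

```sql
SQL> SELECT patch_id, action, status, description
  2    FROM dba_registry_sqlpatch
  3   ORDER BY action_time;
```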



Is “imp” still supported in Oracle Database 12.2?


Good question we receive quite often:

Is “imp” still supported in Oracle Database 12.2?

For clarification: I don’t talk about “impdp” but about old “imp” (import). And yes, it is still supported in Oracle 12.2.

The old "exp" utility is already desupported – but "imp" is still supported, even in Oracle Database 12.2.0.x. You should be able to import your old dumps into the newest release of the database and migrate even VERY old releases directly to Oracle Database 12.2.
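As a small sketch – user, password and file names are placeholders – importing such an old dump works with the classic imp syntax:

```
$ imp system/<password> file=old_dump.dmp full=y log=imp_old.log
```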

Actually, the biggest leap we heard about, roughly a year ago, was an exp dump taken from Oracle V5 and migrated into an Oracle Single Tenant database. No joke – and it worked!

The only real pitfall we are aware of right now: MOS Note 132904.1 (Compatibility Matrix for Export And Import Between Different Oracle Versions) hasn't been updated recently, for whatever reason.


Issue with PDB Archives in Oracle 12.2 in ASM

There is a fancy new command to unplug a PDB in Oracle Database 12.2:


The nice thing with this command is the file ending '.pdb' instead of '.xml' as you used it in Oracle 12.1 (and the '.xml' option is still available, of course): instead of just creating an XML description file, it zips everything together into a PDB archive.

SQL> show pdbs

SQL> create pluggable database PDB1 admin user adm identified by adm
  2  file_name_convert=('/u02/oradata/CDB2/pdbseed','/u02/oradata/CDB2/pdb1');
Pluggable database created.

SQL> alter pluggable database pdb1 open;
Pluggable database altered.

SQL> alter pluggable database pdb1 close;
Pluggable database altered.

SQL> alter pluggable database pdb1 unplug into '/home/oracle/pdb1.pdb';
Pluggable database altered.

Now let's have a quick look into the created file:

-rw-r--r--. 1 oracle dba   108414 May 19 16:20 pdb1.pdb
$ unzip pdb1.pdb 
Archive:  pdb1.pdb
  inflating: system01.dbf            
  inflating: sysaux01.dbf            
  inflating: undotbs01.dbf           
warning:  stripped absolute path spec from /home/oracle/pdb1.xml
  inflating: home/oracle/pdb1.xml    

Not bad. Actually pretty cool.

Of course the same command fails in Oracle Database 12.1:

SQL> alter pluggable database pdb1 unplug into '/home/oracle/pdb1.pdb';
alter pluggable database pdb1 unplug into '/home/oracle/pdb1.pdb'
ERROR at line 1:
ORA-65125: valid XML file name is required

Unfortunately there’s a significant flaw when your PDB is stored in ASM:
The zip file does not contain the datafiles of your PDB.

Please see:

It is supposed to be fixed with an upcoming patch set.

A really handy feature – but don’t use it when your PDBs are located in ASM.


Issue with 2k and 4k db_block_size – ORA-1450 when upgrading to Oracle Database 12.2

Recently we got alerted by a customer and a colleague from Italy about an issue with the upgrade from Oracle 12.1.0.x to Oracle Database 12.2 if – and only if – your database got created with 2k or 4k db block size.


If your database got created with 2k or 4k db block size, and you attempt an upgrade from any upgrade-supported Oracle version to Oracle Database 12.2, the upgrade – regardless of using DBUA or the command line – will fail with an ORA-1450: maximum key length (1478) exceeded.


In the catupgrd0.log – regardless of using the DBUA or the command line upgrade – you'll get the following error pattern:

SQL> create unique index i_radm_pe1 on sys.radm_pe$(pe_name)
   2  /
create unique index i_radm_pe1 on sys.radm_pe$(pe_name)
ERROR at line 1:
ORA-01450: maximum key length (1478) exceeded

SQL> create index i_radm_pe2 on sys.radm_pe$(pe_obj#, pe_name)
  2  /
create index i_radm_pe2 on sys.radm_pe$(pe_obj#, pe_name)
ERROR at line 1:
ORA-01450: maximum key length (1478) exceeded


Please apply the one-off patch 25976885 (a backport for bug 24714096: HIT ORA-01450 WHEN UPGRADE SI DB FROM 11204 TO 12.2 WITH DB_BLOCK_SIZE 2K).

You may verify your current database block size with these queries:

SQL> show parameter db_block_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_block_size			     integer	 8192

SQL> select tablespace_name, block_size from dba_tablespaces;

TABLESPACE_NAME                BLOCK_SIZE
------------------------------ ----------
SYSTEM				     8192
SYSAUX				     8192
USERS				     8192
TEMP				     8192
UNDOTBS1			     8192

Further Information

Please see MOS Note 2268981.1 (DB Upgrade to 12.2 Fails with Error ORA-01450: Maximum Key Length (3118) Exceeded – create unique index i_radm_pe1 on sys.radm_pe$(pe_name)).

Sorry for the inconvenience – and thanks to Francesco for logging the issue and to Alessandro for alerting our team.


PS: Based on feedback by several people I did update this blog post – it hits everybody upgrading to Oracle Database 12.2 with db_block_size 2k or 4k. And it happens regardless of the (supported for direct upgrade) source version.