Does DBCA execute “datapatch” in Oracle 12.2?

Does the Database Configuration Assistant (DBCA) execute “datapatch -verbose” in Oracle 12.2 when you create a new database?

Does DBCA in Oracle 12.2 execute datapatch when you create a new database? I was curious whether this misbehavior from the previous release had been fixed. Good news: it got fixed with Oracle Database 12.2.0.1. DBCA does execute “datapatch -verbose” automatically now when you create a new database. In the previous release this did not happen – you had to execute it manually after creating a new database.

DBCA 12c and “datapatch.pl” – things to know

Quick Test Scenario

I applied the most recent May 2017 BP for Oracle 12.2.0.1 (see here: Oracle 12.2.0.1 Bundle Patch 12.2.0.1.170516 DBBP on Linux x86-64 is available) and created a fresh database with DBCA.


Database Configuration Assistant (DBCA) – Oracle 12.2.0.1

Please make sure you ALWAYS create a CUSTOM DATABASE. If you “create” a DWH or OLTP database, DBCA just copies a precreated database. It will have all options in it, and you’ll have to execute “datapatch -verbose” manually.
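If you prefer the command line, a custom database can also be created with DBCA in silent mode. This is only a sketch – database name, passwords, and the datafile destination are placeholders:

```shell
# Sketch: create a CUSTOM database silently with DBCA 12.2.
# "New_Database.dbt" is the non-seed template, so the database is
# built from scratch rather than copied from a precreated seed.
# All names and passwords below are placeholders.
dbca -silent -createDatabase \
  -templateName New_Database.dbt \
  -gdbName DB122 -sid DB122 \
  -sysPassword MyPwd_123 -systemPassword MyPwd_123 \
  -datafileDestination /u02/oradata
```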

I did monitor it afterwards with check_patches.sql:

ACTION_TIME	     ACTION	STATUS	   DESCRIPTION				    VERSION	 PATCH_ID BUNDLE_SER
-------------------- ---------- ---------- ---------------------------------------- ---------- ---------- ----------
23-MAY-2017 12:56:52 APPLY	SUCCESS    DATABASE BUNDLE PATCH 12.2.0.1.170516    12.2.0.1	 25862693 DBBP

The REGISTRY$HISTORY does not seem to be updated correctly. This is fixed with Oracle 12.2.0.2 including bug 25269268 (DBA_REGISTRY_HISTORY NEEDS TO INCLUDE BP/PSU APPLY AND ROLLBACK ROWS).
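The script check_patches.sql itself is not shown in this post; a minimal sketch of such a check, assuming it simply reads DBA_REGISTRY_SQLPATCH, could look like this:

```sql
-- Sketch: verify that datapatch recorded the bundle patch apply.
SELECT action_time, action, status, description, patch_id
  FROM dba_registry_sqlpatch
 ORDER BY action_time;
```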

–Mike

 

Is “imp” still supported in Oracle Database 12.2?


Good question we receive quite often:

Is “imp” still supported in Oracle Database 12.2?

For clarification: I don’t talk about “impdp” but about the old “imp” (import). And yes, it is still supported in Oracle 12.2.

The old “exp” got desupported with Oracle Database 11.1.0.6 – but “imp” is still supported, even in Oracle Database 12.2.0.x. You should be able to import your old dumps into the newest release of the database and migrate even VERY old releases directly to Oracle Database 12.2.

Actually, the biggest jump we heard about, roughly a year ago, was an exp dump taken from Oracle V5 and migrated into Oracle 12.1.0.2 Single Tenant. No joke – and it worked!

The only real pitfall we are aware of right now: MOS Note 132904.1 (Compatibility Matrix for Export And Import Between Different Oracle Versions) hasn’t been updated recently, for whatever reason.
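For completeness, importing such an old dump with classic “imp” is still a one-liner – the connect string, dump file, and log name below are just placeholders:

```shell
# Sketch: classic (non-Data-Pump) import of an old "exp" dump
# into an Oracle 12.2 database; all names are placeholders.
imp system/oracle@db122 file=legacy.dmp full=y log=imp_legacy.log
```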

–Mike

Issue with PDB Archives in Oracle 12.2.0.1 in ASM

There is a fancy new command to unplug a PDB in Oracle Database 12.2.0.1:

ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO 'pdb1.pdb';

The nice thing about this command is the file extension of ‘pdb’ instead of ‘xml’ as you used it in Oracle 12.1 (the ‘xml’ option is still available, of course): instead of just creating an XML description file, it zips the description file and the datafiles together into a PDB archive.

SQL> show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
SQL> create pluggable database PDB1 admin user adm identified by adm
  2  file_name_convert=('/u02/oradata/CDB2/pdbseed','/u02/oradata/CDB2/pdb1');
Pluggable database created.

SQL> alter pluggable database pdb1 open;
Pluggable database altered.

SQL> alter pluggable database pdb1 close;
Pluggable database altered.

SQL> alter pluggable database pdb1 unplug into '/home/oracle/pdb1.pdb';
Pluggable database altered.

Now let’s have a quick look into the created file:

-rw-r--r--. 1 oracle dba   108414 May 19 16:20 pdb1.pdb
$ unzip pdb1.pdb 
Archive:  pdb1.pdb
  inflating: system01.dbf            
  inflating: sysaux01.dbf            
  inflating: undotbs01.dbf           
warning:  stripped absolute path spec from /home/oracle/pdb1.xml
  inflating: home/oracle/pdb1.xml    

Not bad. Actually pretty cool.

Of course the same command fails in Oracle Database 12.1.0.2:

SQL> alter pluggable database pdb1 unplug into '/home/oracle/pdb1.pdb';
alter pluggable database pdb1 unplug into '/home/oracle/pdb1.pdb'
                                          *
ERROR at line 1:
ORA-65125: valid XML file name is required

Unfortunately there’s a significant flaw when your PDB is stored in ASM:
The zip file does not contain the datafiles of your PDB.

Please see:

It is supposed to be fixed with patch set 12.2.0.2.

A really handy feature – but don’t use it when your PDBs are located in ASM.
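A quick way to check whether a PDB’s datafiles live in ASM before relying on a .pdb archive – a sketch, assuming the PDB is called PDB1:

```sql
-- Sketch: list the datafiles of PDB1; names beginning with '+'
-- indicate ASM storage, where the .pdb archive issue applies.
SELECT f.name
  FROM v$datafile f
  JOIN v$pdbs p ON p.con_id = f.con_id
 WHERE p.name = 'PDB1';
```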

–Mike

Issue with 2k db_block_size – ORA-1450 when upgrading from Oracle 12.1 to Oracle 12.2.0.1

Recently we got alerted by a customer and a colleague from Italy about an issue with the upgrade from Oracle 12.1.0.x to Oracle 12.2.0.1 if – and only if – your database got created with 2k block size.

Problem

If your database got created with 2k block size, and you attempt an upgrade from Oracle 12.1.0.x to Oracle 12.2.0.1 the upgrade – regardless of DBUA or catctl.pl – will fail with an ORA-1450: maximum key length (1478) exceeded.

Analysis

In the catupgrd0.log – regardless of using the DBUA or the command line upgrade with catctl.pl – you’ll get the following error pattern:

SQL> create unique index i_radm_pe1 on sys.radm_pe$(pe_name)
   2  /
create unique index i_radm_pe1 on sys.radm_pe$(pe_name)
                                      *
ERROR at line 1:
ORA-01450: maximum key length (1478) exceeded

SQL> create index i_radm_pe2 on sys.radm_pe$(pe_obj#, pe_name)
  2  /
create index i_radm_pe2 on sys.radm_pe$(pe_obj#, pe_name)
                               *
ERROR at line 1:
ORA-01450: maximum key length (1478) exceeded

Solution

Please apply one-off patch 24714096, or its backport 25976885 on top of 12.2.0.1, for bug 24714096 (HIT ORA-01450 WHEN UPGRADE SI DB FROM 11204 TO 12.2 WITH DB_BLOCK_SIZE 2K).

You may verify your current database block size with these queries:

SQL> show parameter db_block_size

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
db_block_size			     integer	 8192


SQL> select TABLESPACE_NAME, BLOCK_SIZE from DBA_TABLESPACES order by 2;

TABLESPACE_NAME 	       BLOCK_SIZE
------------------------------ ----------
SYSTEM				     8192
SYSAUX				     8192
USERS				     8192
TEMP				     8192
UNDOTBS1			     8192

Sorry for the inconvenience – and thanks to Francesco for logging the issue and to Alessandro for alerting our team.

–Mike

Oracle 12.2.0.1 Bundle Patch 12.2.0.1.170516 DBBP on Linux x86-64 is available

All credits go to Ricardo Maeda: I knew that we’d release a Bundle Patch for Oracle 12.2.0.1 sometime this week – but I couldn’t find it linked from the usual MOS notes. And please don’t ask me why that is.

Anyhow, with patch 2579308 you’ll get access to the first BP for Oracle Database 12.2.0.1. The first bigger Proactive Bundle Patch will follow in July on the usual schedule – but this one is at least a start.


Patch 2579308 – Oracle 12.2.0.1.170516BP

In addition, get OPatch version 12.2.0.1.7 via patch 6880880.
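Before applying the BP you may want to confirm the OPatch version in the target home – a sketch, assuming ORACLE_HOME points at the 12.2.0.1 home:

```shell
# Sketch: check the installed OPatch version (expecting 12.2.0.1.7
# or newer before applying the BP).
$ORACLE_HOME/OPatch/opatch version
```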

The BP contains:


First 12.2.0.1 Bundle Patch – Contents: Database BP and GI PSU

Obrigado, Ricardo!

–Mike

PERL scripts for large migrations supported on all platforms (except Windows)

This topic fits very well as I present about 100+ TB migrations today at the “Harmony” User Group Conference in Finland.


Finland in May

The question came up whether the PERL scripts for RMAN incrementally rolled forward backups, which we deliver via MOS Note 1389592.1 (11G – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup), are supported for migrations to non-Exadata systems such as Oracle SuperCluster as well.

And yes, now we got an update into the note saying:

Although preferred destination system is Linux (either 64-bit Oracle Linux or a certified version of RedHat Linux), this procedure can be used with other Unix based operating systems. However, any non-Linux operating system must be on 11.2.0.4.

Please be aware that “Unix” is important here:
The scripts are not supported (and won’t work) if you use them with MS Windows platforms.

Another caveat with the note to be aware of:

It does not talk explicitly about “Full Database Migrations”, nor about Full Transportable Export/Import. I’ve had several customers hit this trap: they used the procedure described in the note but ended up with just parts of their database being migrated. Please keep in mind that you will have to move everything in the SYSTEM tablespace (and potentially in SYSAUX, such as AWR) as well – otherwise you’ll end up with your data but miss your grants, roles, synonyms etc. That’s why we highly recommend the Full Transportable Export/Import procedure, as it takes care of all this.

You can download my slide deck about how to use the PERL scripts for a large migration – it’s the talk I gave at Harmony Conference 2017:

–Mike

Can you EXCLUDE tablespaces from Full Transportable Export/Import?

Question: Can you EXCLUDE one or more tablespaces when doing a Full Transportable Export/Import?

First of all, this question came up already twice in real world customer migrations. So it’s not a totally unusual question. In one case a tablespace called USERS got precreated and some data did get stored. In the second case we did use RMAN incremental backups to migrate a very large database (>100TB) and some tablespaces weren’t part of the backup set.

I did brainstorm with Roy – and I dug into my notes from some years ago when the question was raised to me as part of a migration project to Oracle 12.1.0.1.

Roy and I had basically 2 main approaches to validate.

  1. Run expdp with FULL=Y and TRANSPORTABLE=ALWAYS but EXCLUDE some tablespaces
  2. Run expdp with FULL=Y and TRANSPORTABLE=ALWAYS but EXCLUDE some tablespaces during impdp only

In addition, for the customer case where some tablespaces were initially not included in the RMAN incremental backupset strategy, the files for these tablespaces can be copied (and converted) separately. The impdp parameter file will then need to take care of them.
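Such an additional copy-and-convert can be sketched with RMAN on the destination – the file names and the source platform string below are hypothetical:

```shell
# Sketch: convert a datafile that was not part of the incremental
# backupset; paths and platform name are placeholders.
rman target / <<'EOF'
CONVERT DATAFILE '/stage/mike1_01.dbf'
  FROM PLATFORM 'Solaris[tm] OE (64-bit)'
  FORMAT '/u02/oradata/DB12/mike1_01.dbf';
EOF
```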

First of all, here’s my test setup. The two tablespaces MIKE1 and MIKE2 are the ones I will treat separately.

SQL> select status, tablespace_name from dba_tablespaces;

STATUS	  TABLESPACE_NAME
--------- ------------------------------
ONLINE	  SYSTEM
ONLINE	  SYSAUX
ONLINE	  UNDOTBS1
ONLINE	  TEMP
READ ONLY USERS
ONLINE	  MIKE1
ONLINE	  MIKE2

SQL> create directory oradump as '/home/oracle';
SQL> grant read, write on directory oradump to SYSTEM;

SQL> EXEC SYS.DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'MIKE1, MIKE2', incl_constraints => TRUE);
PL/SQL procedure successfully completed.

SQL> SELECT * FROM transport_set_violations;
no rows selected

Case 1 – Run expdp but EXCLUDE some tablespaces

I will use this par file:

directory=oradump
dumpfile=exp.dmp
logfile=exp.log
transportable=always
full=y
metrics=y
version=12
exclude=table_statistics,index_statistics
exclude=tablespace:"IN ('MIKE1', 'MIKE2')"

Now invoking Data Pump with expdp:

$ expdp system/oracle parfile=exp.par

Export: Release 11.2.0.4.0 - Production on Tue May 16 13:17:41 2017
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining
and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_FULL_01":  system/******** parfile=ex.par 
Startup took 1 seconds
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/PLUGTS_FULL/FULL/PLUGTS_TABLESPACE
Processing object type DATABASE_EXPORT/PLUGTS_FULL/PLUGTS_BLK
     Completed 1 PLUGTS_BLK objects in 1 seconds
Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
     Estimated 1 TABLE_DATA objects in 1 seconds

[...]

     Completed 1 DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA objects in 0 seconds
     Completed 30 DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA objects in 1 seconds
     Completed 5 DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA objects in 0 seconds
     Completed 59 DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA objects in 0 seconds
Master table "SYSTEM"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
  /home/oracle/exp.dmp
******************************************************************************
Datafiles required for transportable tablespace USERS:
  /u02/oradata/FTEX/users01.dbf
Job "SYSTEM"."SYS_EXPORT_FULL_01" successfully completed at Tue May 16 13:18:37 2017 elapsed 0 00:00:56

As the next step, I copy the datafile(s) to the destination’s directory and configure my import par file:

directory=oradump
dumpfile=exp.dmp
logfile=imp.log
metrics=y
logtime=all
transport_datafiles='/u02/oradata/DB12/users01.dbf'

And importing into my new destination database with impdp:

$ impdp system/oracle parfile=imp.par

Import: Release 12.1.0.2.0 - Production on Tue May 16 13:52:46 2017
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
16-MAY-17 13:57:08.681: Startup took 0 seconds
16-MAY-17 13:57:09.299: Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
16-MAY-17 13:57:09.544: Source time zone is +02:00 and target time zone is +01:00.
16-MAY-17 13:57:09.546: Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01": system/******** parfile=imp.par
16-MAY-17 13:57:09.596: Processing object type DATABASE_EXPORT/PRE_SYSTEM_IMPCALLOUT/MARKER

[...]

16-MAY-17 13:57:45.160: Processing object type DATABASE_EXPORT/END_PLUGTS_BLK
16-MAY-17 13:57:45.238: Completed 1 PLUGTS_BLK objects in 0 seconds
16-MAY-17 13:57:45.243: Processing object type DATABASE_EXPORT/FINAL_POST_INSTANCE_IMPCALLOUT/MARKER
16-MAY-17 13:57:47.868: Completed 1 MARKER objects in 2 seconds
16-MAY-17 13:57:47.875: Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
16-MAY-17 13:57:48.368: Completed 2 PROCACT_SCHEMA objects in 1 seconds
16-MAY-17 13:57:48.373: Processing object type DATABASE_EXPORT/AUDIT
16-MAY-17 13:57:48.499: Completed 29 AUDIT objects in 0 seconds
16-MAY-17 13:57:48.504: Processing object type DATABASE_EXPORT/POST_SYSTEM_IMPCALLOUT/MARKER
16-MAY-17 13:57:49.501: Completed 1 MARKER objects in 1 seconds
16-MAY-17 13:57:49.552: Completed 30 DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA objects in 0 seconds
16-MAY-17 13:57:49.558: Completed 5 DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA objects in 2348 seconds
16-MAY-17 13:57:49.567: Completed 57 DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA objects in 2349 seconds
16-MAY-17 13:57:49.628: Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" completed with 44 error(s) at Tue May 16 13:57:49 2017 elapsed 0 00:00:41

I silently ignore the errors here; some additional EXCLUDE statements would make this run flawlessly. The main reason for the errors is simply that with an 11.2.0.4 source database, not everything that should be excluded internally actually gets excluded.

Anyhow, Case 1 looks ok.

SQL> select tablespace_name, status from dba_tablespaces;

TABLESPACE_NAME 	       STATUS
------------------------------ ---------
SYSTEM			       ONLINE
SYSAUX			       ONLINE
UNDOTBS1		       ONLINE
TEMP			       ONLINE
USERS			       ONLINE

But there are some details which may cause trouble. Quick look into the log file:

16-MAY-17 13:57:15.553: ORA-39083: Object type USER:"MIKE1" failed to create with error:
ORA-00959: tablespace 'MIKE1' does not exist
Failing sql is:
 CREATE USER "MIKE1" IDENTIFIED BY VALUES 'S:67D1974017CA6517A6A9BA1655182FEA352800F1460501DCD29602A666B3;CF5B00ED3595B543' DEFAULT TABLESPACE "MIKE1" TEMPORARY TABLESPACE "TEMP"
16-MAY-17 13:57:15.553: ORA-39083: Object type USER:"MIKE2" failed to create with error:
ORA-00959: tablespace 'MIKE2' does not exist
Failing sql is:
 CREATE USER "MIKE2" IDENTIFIED BY VALUES 'S:8915BFF218BB5A49958F5C12F4ED161BBB9EC71A54E1FFA471F94002F8C2;335808EE482DFA17' DEFAULT TABLESPACE "MIKE2" TEMPORARY TABLESPACE "TEMP"

Sure: as tablespace MIKE1 does not exist, user MIKE1, who had MIKE1 as default tablespace, can’t be created either. This is ok in my case. But I’d like to point out that you have to pay close attention to such errors.

And of course, the same applies to the objects in these tablespaces as well:

16-MAY-17 13:57:44.958: Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
16-MAY-17 13:57:45.068: ORA-39083: Object type TABLE:"MIKE1"."TEST1" failed to create with error:
ORA-00959: tablespace 'MIKE1' does not exist
Failing sql is:
CREATE TABLE "MIKE1"."TEST1" ("OBJ#" NUMBER NOT NULL ENABLE, "DATAOBJ#" NUMBER, "OWNER#" NUMBER NOT NULL ENABLE, "NAME" VARCHAR2(30 BYTE) NOT NULL ENABLE, "NAMESPACE" NUMBER NOT NULL ENABLE, "SUBNAME" VARCHAR2(30 BYTE), "TYPE#" NUMBER NOT NULL ENABLE, "CTIME" DATE NOT NULL ENABLE, "MTIME" DATE NOT NULL ENABLE, "STIME" DATE NOT NULL ENABLE, "STATUS" NUMBER NOT NULL ENABLE, "R

Be careful!
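One way to avoid the failing CREATE USER calls – a sketch, assuming you still want the users but not their tablespaces – is to precreate them in the destination with an existing default tablespace before running impdp (user names are taken from my test; passwords are placeholders):

```sql
-- Sketch: precreate the affected users so impdp's CREATE USER
-- statements are skipped instead of failing with ORA-00959.
CREATE USER mike1 IDENTIFIED BY "Change_me_1" DEFAULT TABLESPACE users;
CREATE USER mike2 IDENTIFIED BY "Change_me_1" DEFAULT TABLESPACE users;
```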

Case 2 – Run expdp but EXCLUDE some tablespaces during impdp

Parameter files look like this for exp.par:

directory=oradump
dumpfile=exp.dmp
logfile=exp.log
transportable=always
full=y
metrics=y
version=12
exclude=table_statistics,index_statistics

and for imp.par:

directory=oradump
dumpfile=exp.dmp
logfile=imp.log
metrics=y
logtime=all
exclude=tablespace:"IN ('MIKE1', 'MIKE2')"
transport_datafiles='/u02/oradata/DB12/users01.dbf'

But the result unfortunately is different. While the expdp runs smoothly of course, the impdp fails:

$ impdp parfile=imp.par

Import: Release 12.2.0.1.0 - Production on Thu May 16 17:18:54 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
ORA-39002: invalid operation
ORA-39352: Wrong number of TRANSPORT_DATAFILES specified: expected 3, received 1

Case 2 does not work: the dump file records all transportable tablespaces, so impdp expects datafiles for all of them – hence the ORA-39352.

Summary

You can EXCLUDE one or more tablespaces when doing a Full Transportable Export/Import, but you have to be careful. My approach from above is simple and does not have any dependencies. Be aware that the Full Transportable Export/Import will still try to create the users and objects – and will of course fail if their tablespace is not part of the operation.

–Mike

Unified Auditing – is it ON or OFF in Oracle Database 12.2.0.1?

Just a quick update to my blog post from September 2014:

Unified Auditing – is it ON or OFF in Oracle 12c?

Any changes in Oracle Database 12.2.0.1?

Yes. Significant changes happened to Unified Auditing in Oracle Database 12.2.0.1 – mostly internally, as the mechanism used to dump the audit records in Oracle 12.1.0.x when Unified Auditing was on turned out to be very good for write performance, but not so good when you tried to read the data. See my blog post from a few weeks ago on how to deal with this performance implication and a potential patch:

Unified Auditing – Performance Improvements in Oracle 12.1.0.2

But I’d like to find out whether the so-called “Mixed Mode” still exists in Oracle Database 12.2.0.1, meaning you can have the old audit trail enabled while the database still audits some activities via the newer Unified Auditing policies.

Comparison Oracle 12.1.0.2 vs Oracle 12.2.0.1

Quick check in Oracle 12.1.0.2:

SQL> column policy_name format a25
SQL> column user_name format a14
SQL> column enabled format a7
SQL> set line 200
SQL> set pages 1000
SQL> SELECT policy_name, enabled_opt, user_name FROM audit_unified_enabled_policies;

POLICY_NAME		  ENABLED_ USER_NAME
------------------------- -------- --------------
ORA_SECURECONFIG	  BY	   ALL USERS
ORA_LOGON_FAILURES	  BY	   ALL USERS

And now the same query in Oracle 12.2.0.1:

SQL> column policy_name format a25
SQL> column user_name format a14
SQL> column enabled format a7
SQL> set line 200
SQL> set pages 0
SQL> SELECT policy_name, enabled_opt, user_name
  FROM audit_unified_enabled_policies  2  ;
ORA_SECURECONFIG	  BY	   ALL USERS
ORA_LOGON_FAILURES	  BY	   ALL USERS

SQL> set pages 1000
SQL> r
  1  SELECT policy_name, enabled_opt, user_name
  2*   FROM audit_unified_enabled_policies

POLICY_NAME		  ENABLED_ USER_NAME
------------------------- -------- --------------
ORA_SECURECONFIG	  BY	   ALL USERS
ORA_LOGON_FAILURES	  BY	   ALL USERS

No change.

As in Oracle 12.1.0.x, two default Unified Auditing policies are enabled in Oracle Database 12.2.0.1. And I’d still recommend turning them off if you are either going to use the old auditing via audit_trail or don’t want any auditing at all.

SQL> noaudit policy ORA_SECURECONFIG;
Noaudit succeeded.

SQL> noaudit policy ORA_LOGON_FAILURES;
Noaudit succeeded.

SQL> SELECT policy_name, enabled_opt, user_name FROM audit_unified_enabled_policies;
no rows selected

Again, to be clear: I’m not saying that you shouldn’t use the new Unified Auditing. But disable the Mixed Mode. Use the real (and enabled, i.e. linked into your kernel) Unified Auditing instead if you would like to audit in Oracle Database 12.2.0.1. Or stay with the old auditing if it does what you want and expect.
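Whether the real Unified Auditing is linked into your kernel can be checked easily:

```sql
-- TRUE means Unified Auditing is linked into the kernel;
-- FALSE means you are running in Mixed Mode.
SELECT value FROM v$option WHERE parameter = 'Unified Auditing';
```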

–Mike

PSU or BP? Patch Set Update or Bundle Patch?

Well, in my new role as unofficial Junior Product Manager for Patching (just kidding) I get asked once a day (at least!) via email or in customer meetings or workshops: Should we take the PSUs or the BPs?

Should we take the PSUs or the BPs?

PSUs are Patch Set Updates, BPs are (sometimes called: Proactive) Bundle Patches.

And the answer is very simple:

  • If you have an Oracle Engineered System: Take the Bundle Patches for Engineered Systems
  • In all other cases:
    • If you are on Oracle Database 12.1.0.x or newer: Take the Bundle Patches
    • If you are on Oracle Database 11.2.0.4: Take the Patch Set Updates
    • If you are on a release below Oracle Database 11.2.0.4: Upgrade!

Is there an official recommendation or guideline?

Yes, of course, there’s one. Very well explained in MOS Note: 1962125.1 – Overview of Database Patch Delivery Methods:


Recommendation for Oracle 12c: Bundle Patches

What else do you need to know?

  1. Q: Is there a difference between “Bundle Patch” and “Proactive Bundle Patch”?
    A: It’s the same thing.
  2. Q: Can I apply Exadata Bundle Patches on non-Exadata Systems in Oracle 11g?
    A: Even though this is technically possible for Linux, there are certain restrictions. This is only supported in a standby configuration where one part is operated on an Exadata. See the FAQ at the end of MOS Note: 1962125.1 – Overview of Database Patch Delivery Methods
  3. Q: Can I flip from PSUs to BPs?
    A: When you approach a release or patch set upgrade (i.e. Oracle Database 11.2.0.3 to Oracle Database 12.1.0.2, or Oracle Database 12.1.0.1 to Oracle Database 12.1.0.2) you will start from scratch and have the full choice. But in-between a release you’ll have to deinstall at least the sql changes and roll in the new sql changes when you change between PSUs and BPs or vice versa. See my previous blog posts about switching from PSUs to BPs: https://mikedietrichde.com/2016/05/03/can-i-apply-a-bp-on-top-of-a-psu-or-vice-versa/
  4. Q: Why does Oracle still deliver PSUs in Oracle 12c even though it recommends to use the BPs?
    A: I don’t know. I can just assume that some customers insist to get the PSUs having a smaller number of fixes meaning potentially lower effect on their systems. But personally I totally disagree. When you look up the current issues list for Oracle 12.1.0.2 you will find out that many of the fixes are included in the BPs but not in the PSUs. If it would be my choice, I can perfectly (and better) live without PSUs but only getting BPs instead.
  5. Q: I have issues with the patches’ readme – can you explain it to me?
    A: No. Please log an SR. I get the “readme complaint” from almost every customer I see. And I see all the points and agree in most of them. Still, I’m not the owner of patching nor the owner of the readme’s. Telling it me is good – but telling it Oracle Support via an SR, and force a bug to be logged is the much better solution. Please please please, do log SRs when you are not happy with the patches’ readme, when things are unclear or wrongly carried over or whatever. The readmes get written by humans and they will need your feedback to improve the readmes.

–Mike

Can I restart a failed Multitenant Upgrade as well?

A while back I did blog about the new -R option of the parallel upgrade tool catctl.pl in Oracle Database 12.2.

Restarting a failed Database Upgrade with catctl.pl

And in case you do a real Multitenant upgrade and fail – as happened to me today due to “no space left on device” (no audit files could be written anymore) – I tried the -R option as well, based on the recommendation of Joe, our lead catctl.pl developer:

$ $ORACLE_HOME/perl/bin/perl catctl.pl -n 6 -R -l /home/oracle/mike2 catupgrd.sql

Argument list for [catctl.pl]
Run in                c = 0
Do not run in         C = 0
Input Directory       d = 0
Echo OFF              e = 1
Simulate              E = 0
Forced cleanup        F = 0
Log Id                i = 0
Child Process         I = 0
Log Dir               l = /home/oracle/mike2
Priority List Name    L = 0
Upgrade Mode active   M = 0
SQL Process Count     n = 6
SQL PDB Process Count N = 0
Open Mode Normal      o = 0
Start Phase           p = 0
End Phase             P = 0
Reverse Order         r = 0
AutoUpgrade Resume    R = 1
Script                s = 0
Serial Run            S = 0
RO User Tablespaces   T = 0
Display Phases        y = 0
Debug catcon.pm       z = 0
Debug catctl.pl       Z = 0

catctl.pl VERSION: [12.2.0.1.0]
           STATUS: [production]
            BUILD: [RDBMS_12.2.0.1.0_LINUX.X64_170125]


/u01/app/oracle/product/12.2.0.1/rdbms/admin/orahome = [/u01/app/oracle/product/12.2.0.1]
/u01/app/oracle/product/12.2.0.1/bin/orabasehome = [/u01/app/oracle/product/12.2.0.1]
catctlGetOrabase = [/u01/app/oracle/product/12.2.0.1]

Analyzing file /u01/app/oracle/product/12.2.0.1/rdbms/admin/catupgrd.sql

Log file directory = [/home/oracle/mike2]

catcon: ALL catcon-related output will be written to [/home/oracle/mike2/catupgrd_catcon_10640.lst]
catcon: See [/home/oracle/mike2/catupgrd*.log] files for output generated by scripts
catcon: See [/home/oracle/mike2/catupgrd_*.lst] files for spool files, if any

Number of Cpus        = 2
Database Name         = CDB1
DataBase Version      = 12.2.0.1.0
Parallel SQL Process Count (PDB)      = 2
Parallel SQL Process Count (CDB$ROOT) = 6
Concurrent PDB Upgrades               = 3
PDB$SEED Open Mode = [READ ONLY] NO UPGRADE WILL BE PERFORMED
PDB1 Open Mode = [MOUNTED] NO UPGRADE WILL BE PERFORMED
PDB2 Open Mode = [MOUNTED] NO UPGRADE WILL BE PERFORMED
PDB4 Open Mode = [READ WRITE] NO UPGRADE WILL BE PERFORMED
Generated PDB Inclusion:[PDB3 PDB5]
Components in [CDB$ROOT]
    Installed [CATALOG CATPROC XDB]
Not Installed [APEX APS CATJAVA CONTEXT DV EM JAVAVM MGW ODM OLS ORDIM OWM RAC SDO WK XML XOQ]

** Database CDB$ROOT has already been upgraded successfully. **
    Time: 2s

Start processing of PDB3
[/u01/app/oracle/product/12.2.0.1/perl/bin/perl catctl.pl -n 2 -R -l /home/oracle/mike2 -I -i pdb3 -c 'PDB3' catupgrd.sql]

Start processing of PDB5
[/u01/app/oracle/product/12.2.0.1/perl/bin/perl catctl.pl -n 2 -R -l /home/oracle/mike2 -I -i pdb5 -c 'PDB5' catupgrd.sql]

Argument list for [catctl.pl]
Run in                c = PDB3
Do not run in         C = 0
Input Directory       d = 0
Echo OFF              e = 1
Simulate              E = 0
Forced cleanup        F = 0
Log Id                i = pdb3
Child Process         I = 1
Log Dir               l = /home/oracle/mike2
Priority List Name    L = 0
Upgrade Mode active   M = 0
SQL Process Count     n = 2
SQL PDB Process Count N = 0
Open Mode Normal      o = 0
Start Phase           p = 0
End Phase             P = 0
Reverse Order         r = 0
AutoUpgrade Resume    R = 1
Script                s = 0
Serial Run            S = 0
RO User Tablespaces   T = 0
Display Phases        y = 0
Debug catcon.pm       z = 0
Debug catctl.pl       Z = 0

catctl.pl VERSION: [12.2.0.1.0]
           STATUS: [production]
            BUILD: [RDBMS_12.2.0.1.0_LINUX.X64_170125]



Argument list for [catctl.pl]
Run in                c = PDB5
Do not run in         C = 0
Input Directory       d = 0
Echo OFF              e = 1
Simulate              E = 0
Forced cleanup        F = 0
Log Id                i = pdb5
Child Process         I = 1
Log Dir               l = /home/oracle/mike2
Priority List Name    L = 0
Upgrade Mode active   M = 0
SQL Process Count     n = 2
SQL PDB Process Count N = 0
Open Mode Normal      o = 0
Start Phase           p = 0
End Phase             P = 0
Reverse Order         r = 0
AutoUpgrade Resume    R = 1
Script                s = 0
Serial Run            S = 0
RO User Tablespaces   T = 0
Display Phases        y = 0
Debug catcon.pm       z = 0
Debug catctl.pl       Z = 0

catctl.pl VERSION: [12.2.0.1.0]
           STATUS: [production]
            BUILD: [RDBMS_12.2.0.1.0_LINUX.X64_170125]


/u01/app/oracle/product/12.2.0.1/rdbms/admin/orahome = [/u01/app/oracle/product/12.2.0.1]
/u01/app/oracle/product/12.2.0.1/rdbms/admin/orahome = [/u01/app/oracle/product/12.2.0.1]
/u01/app/oracle/product/12.2.0.1/bin/orabasehome = [/u01/app/oracle/product/12.2.0.1]
catctlGetOrabase = [/u01/app/oracle/product/12.2.0.1]

Analyzing file /u01/app/oracle/product/12.2.0.1/rdbms/admin/catupgrd.sql

Log file directory = [/home/oracle/mike2]

/u01/app/oracle/product/12.2.0.1/bin/orabasehome = [/u01/app/oracle/product/12.2.0.1]
catctlGetOrabase = [/u01/app/oracle/product/12.2.0.1]

Analyzing file /u01/app/oracle/product/12.2.0.1/rdbms/admin/catupgrd.sql

Log file directory = [/home/oracle/mike2]

catcon: ALL catcon-related output will be written to [/home/oracle/mike2/catupgrdpdb5_catcon_10837.lst]
catcon: See [/home/oracle/mike2/catupgrdpdb5*.log] files for output generated by scripts
catcon: See [/home/oracle/mike2/catupgrdpdb5_*.lst] files for spool files, if any
catcon: ALL catcon-related output will be written to [/home/oracle/mike2/catupgrdpdb3_catcon_10835.lst]
catcon: See [/home/oracle/mike2/catupgrdpdb3*.log] files for output generated by scripts
catcon: See [/home/oracle/mike2/catupgrdpdb3_*.lst] files for spool files, if any

Number of Cpus        = 2

Number of Cpus        = 2
Database Name         = CDB1
Database Name         = CDB1
DataBase Version      = 12.2.0.1.0
DataBase Version      = 12.2.0.1.0
Generated PDB Inclusion:[PDB5]
CDB$ROOT  Open Mode = [OPEN]
Generated PDB Inclusion:[PDB3]
Components in [PDB5]
    Installed [CATALOG CATPROC XDB]
Not Installed [APEX APS CATJAVA CONTEXT DV EM JAVAVM MGW ODM OLS ORDIM OWM RAC SDO WK XML XOQ]
CDB$ROOT  Open Mode = [OPEN]
Components in [PDB3]
    Installed [CATALOG CATPROC XDB]
Not Installed [APEX APS CATJAVA CONTEXT DV EM JAVAVM MGW ODM OLS ORDIM OWM RAC SDO WK XML XOQ]

*******Upgrade being restarted on database PDB5 from failed phase 40*******

------------------------------------------------------
Phases [40-115]         Start Time:[2017_05_09 15:25:29]
Container Lists Inclusion:[PDB5] Exclusion:[NONE]
------------------------------------------------------

*******Upgrade being restarted on database PDB3 from failed phase 109*******

------------------------------------------------------
Phases [109-115]         Start Time:[2017_05_09 15:25:29]
Container Lists Inclusion:[PDB3] Exclusion:[NONE]
------------------------------------------------------
   Time: 2s
***************   Catproc DataPump   ***************
Serial   Phase #:40   [PDB5] Files:3    Time: 2s
*******************   Migration   ******************
Serial   Phase #:109  [PDB3] Files:1    

And yes, it works!

The upgrade will be restarted for PDB3 and PDB5 exactly at the phase that failed in the first run. The upgrades for CDB$ROOT, PDB$SEED, PDB1, PDB2 and PDB4 have already completed and will be skipped.

–Mike