Data Pump Super Patch for Oracle 19.10 and newer

Just a few days ago we released a Data Pump Super Patch for Oracle 19.10 and newer. “Newer” means that as soon as 19.11.0 becomes available, the merge patch I’m writing about will be available for it right away as well (sorry, this post got delayed a bit). And if you work with Data Pump, you may want to consider this performance patch collection.

Why do you need this patch?

Many customers will know the issue that applying changes with datapatch can take a long time as soon as dpload.sql is involved. This Super Patch fixes almost all of the known issues: we put a lot of performance fixes into this single merge patch. “Merge” patch means that it combines a number of one-off patches. These fixes are mainly about dpload.
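If you want to see when and how often datapatch has already applied SQL changes in your database, a quick look at DBA_REGISTRY_SQLPATCH is a good starting point. Just a sketch, adjust the formatting to your needs:

SQL> set linesize 200
SQL> col description format a60
SQL> select patch_id, action, status, action_time, description
  2    from dba_registry_sqlpatch
  3   order by action_time;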

What is the patch number?

This generic patch can be downloaded from MOS with patch number 32551008. PLEASE USE the new patch 32919937 instead since it fixes an issue we found in the previous one. MOS will reflect this soon as well.
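In case you are unsure whether one of the two is already in your Oracle Home, opatch lspatches gives you a quick answer, for instance:

$ $ORACLE_HOME/OPatch/opatch lspatches | grep -E "32551008|32919937"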

We set the bug flag to “publish” so you can see the details of the bug on MOS as well.

“Generic” means that you will get one patch for all platforms.

What’s in it?

This merge patch contains the following fixes:

  • 29835516 – Improve approach to drop objects: execute in anonymous PLSQL block
  • 30051876 – Move HCS objects out of our files to their own for easier HCS patching
  • 31014419 – Try to drop all Data Pump AQ tables
  • 31054943 – Vast performance improvement of MDAPI’s catmetgrant1/2.sql scripts
  • 31068789 – Conditionalize creating of dummy XDB views in catmetviews.sql, minimize row deletions in catmetinsert.sql
  • 31086613 – Simplify Data Pump backporting by splitting kupcmsg types specs/bodies into new files from the often updated prvtkupc
  • 31113196 – Move creation of ku$xktfbue from catmettypes.sql to catdpb.sql
  • 31207242 – Remove DBMS_OUTPUT.PUT_LINE from catmetgrant1/2.sql
  • 31207542 – Remove unnecessary SET commands from catnodpaq.sql
  • 31207734 – Remove unnecessary SET commands from the catnodp.sql/catnomta.sql scripts
  • 31214625 – Cleanup/rewrite of dpload; This subsumes fix for bug 29284656
  • 31727647 – Don’t always drop MDAPI’s XDB views during upgrade/patch (dpload)
  • 32195077 – Split off grants/inserts in catdpb.sql into new catdpbgi.sql script
  • 32195274 – Cleanup catmetviews/catmet2/catmetx to reduce dependencies and the number of invalid objects
  • 32195313 – CREATE OR REPLACE FORCE instead of DROP/CREATE. Add dbmsplts.sql, prvtplts.plb to dpload.sql (see the small sketch after this list)
  • 32316344 – Add sqlsessstart/end invocations to five of our install scripts
  • 32388966 – dpload concurrency improvement: remove running of dbmspump.sql
  • 32479451 – dpload concurrency improvement: only run catmet2.sql and not all of catdph.sql
  • 32525862 – dpload: fixup version checks to deal with latest db version# change
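To illustrate the idea behind fix 32195313: recreating a view with CREATE OR REPLACE (FORCE) avoids unnecessary invalidations of dependent objects, whereas a DROP/CREATE pair invalidates everything that references the view. A minimal, purely hypothetical sketch (DEMO_T and DEMO_V are not part of the patch):

-- hypothetical demo objects, just to show the principle
SQL> create table demo_t (id number);
SQL> create or replace force view demo_v as select id from demo_t;
-- a later change simply replaces the view in place instead of DROP/CREATE
SQL> create or replace force view demo_v as select id, id*2 as id2 from demo_t;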

All of them are meant to allow better and faster patching of Data Pump / Metadata API.

And here is a short overview of what is included, directly from the developer:

Things you will notice with this patch:

    • Huge performance improvement. The elapsed time to install a patch via the dpload.sql apply script should be 60-90+ percent faster
    • Subsequent invocations can be even faster, at well under a minute
    • The number of objects left invalid has dropped by 50-90% (to about 10 or fewer), and down to zero with subsequent invocations (see the quick check after this list)
    • Contention issues should be reduced, as dpload.sql no longer runs some operations that were unnecessary while applying a patch
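If you want to verify the invalid-object claim in your own environment, a simple count before and after running dpload.sql is enough. Just a sketch:

SQL> select owner, count(*) as invalid_objects
  2    from dba_objects
  3   where status = 'INVALID'
  4   group by owner
  5   order by owner;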

What’s missing?

Actually, there is nothing “missing”. But you may want to add one additional fix which isn’t included here: the fix for bug 32233175. Be aware that this one is not on MOS yet (as of May 2, 2021).

This fix can also help with the elapsed time of the initial execution of dpload.sql in a PDB. Without it, there is an unnecessary invalidation of views in catmetviews.sql which requires all of them to be recompiled, and that can take minutes. But if you want it on top of the Super Patch, it will require a merge with the above patch.

It isn’t mandatory, though. And the fix for bug 32233175 can exist on its own; it does not need the big merge patch as its basis.
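And once you receive the one-off (or the merge), you can let OPatch check for conflicts with the Super Patch in your home before applying anything. A sketch, assuming the one-off gets delivered under its bug number and you unzipped it into a directory of the same name:

$ cd 32233175/
$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./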

Rolling – yes or no?

Surprise, surprise – this patch is rolling and standby-first applicable. See the README – it is correct. Hence, no outage is required in a RAC environment. Technically speaking, this patch is even “hot patchable” as it is only the basis for applying other patches faster and more easily. So you could plop it into the Oracle Home of a running system. The files don’t get executed until the next time you apply a Data Pump patch with datapatch.
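So the flow stays as usual: apply the patch binaries with opatch while the instance stays up, and the SQL portion gets loaded only with the next datapatch run, for example:

$ cd $ORACLE_HOME/OPatch
$ ./datapatch -verbose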

Will it be in 19.11.0 or 19.12.0?

Unfortunately, it won’t be part of 19.11.0 or 19.12.0, but the merge patch on top of 19.11.0 is already available. And the one for 19.12.0 will be available soon after the 19.12.0 RU is out. We hope to include it in one of the next RUs.

Applying 32551008

I downloaded patch 32551008 from My Oracle Support and unzipped it (PLEASE USE patch 32919937 instead):

$ cd 32551008/
[CDB2] oracle@hol:~/32551008
$ $ORACLE_HOME/OPatch/opatch apply
Oracle Interim Patch Installer version 12.2.0.1.23
Copyright (c) 2021, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/oracle/product/19
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/19/oraInst.loc
OPatch version    : 12.2.0.1.23
OUI version       : 12.2.0.7.0
Log file location : /u01/app/oracle/product/19/cfgtoollogs/opatch/opatch2021-04-15_18-17-42PM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   32551008  

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.
Backing up files...
Applying interim patch '32551008' to OH '/u01/app/oracle/product/19'

Patching component oracle.rdbms.dbscripts, 19.0.0.0.0...

Patching component oracle.rdbms, 19.0.0.0.0...
Patch 32551008 successfully applied.
Log file location: /u01/app/oracle/product/19/cfgtoollogs/opatch/opatch2021-04-15_18-17-42PM_1.log

OPatch succeeded.

I didn’t shut down my database instance – and when you check $ORACLE_HOME/rdbms/admin, you will see which files the patch exchanged in this directory:

-rw-r--r--. 1 oracle dba   44334 Mar 29 13:36 prvtkupc_typespec.plb
-rw-r--r--. 1 oracle dba   28090 Mar 29 13:36 prvtkupc_typebody.plb
-rw-r--r--. 1 oracle dba   24749 Mar 29 13:36 prvtkupc.plb
-rw-r--r--. 1 oracle dba   68896 Mar 29 13:36 dpload.sql
-rw-r--r--. 1 oracle dba   63825 Mar 29 13:36 dbmsplts.sql
-rw-r--r--. 1 oracle dba    4433 Mar 29 13:36 catpspec.sql
-rw-r--r--. 1 oracle dba   11951 Mar 29 13:36 catpdeps.sql
-rw-r--r--. 1 oracle dba   76380 Mar 29 13:36 catnomta.sql
-rw-r--r--. 1 oracle dba   23315 Mar 29 13:36 catnodp.sql
-rw-r--r--. 1 oracle dba    1289 Mar 29 13:36 catnodpobs.sql
-rw-r--r--. 1 oracle dba    3576 Mar 29 13:36 catnodp_hcs.sql
-rw-r--r--. 1 oracle dba    3440 Mar 29 13:36 catnodpaq.sql
-rw-r--r--. 1 oracle dba    1464 Mar 29 13:36 catnodpall.sql
-rw-r--r--. 1 oracle dba   13876 Mar 29 13:36 catmetx.sql
-rw-r--r--. 1 oracle dba  546052 Mar 29 13:36 catmetviews.sql
-rw-r--r--. 1 oracle dba   21620 Mar 29 13:36 catmetviews_hcs.sql
-rw-r--r--. 1 oracle dba  389133 Mar 29 13:36 catmettypes.sql
-rw-r--r--. 1 oracle dba   18474 Mar 29 13:36 catmettypes_hcs.sql
-rw-r--r--. 1 oracle dba    1259 Mar 29 13:36 catmetloadxsl.sql
-rw-r--r--. 1 oracle dba 1071276 Mar 29 13:36 catmetinsert.sql
-rw-r--r--. 1 oracle dba    5208 Mar 29 13:36 catmetgrants_hcs.sql
-rw-r--r--. 1 oracle dba   40289 Mar 29 13:36 catmetgrant2.sql
-rw-r--r--. 1 oracle dba   42842 Mar 29 13:36 catmetgrant1.sql
-rw-r--r--. 1 oracle dba    2668 Mar 29 13:36 catmet2.sql
-rw-r--r--. 1 oracle dba   46549 Mar 29 13:36 catdwgrd.sql
-rw-r--r--. 1 oracle dba   28804 Mar 29 13:36 catdpb.sql
-rw-r--r--. 1 oracle dba   29446 Mar 29 13:36 catdpbgi.sql

Pretty straightforward.
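And since this is a normal interim patch, you can take it out again the same way should you ever need to. Just a sketch:

$ $ORACLE_HOME/OPatch/opatch rollback -id 32551008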

Potential Issue 1 – ORA-00001

Some of you reported issues here on the blog, others raised them via SRs (which is the ideal way, in addition to placing a comment here). And this actually helped us a lot to track down a general issue with the patch apply scripts.

This issue is fixed in the new patch 32919937.

In case you see an ORA-00001 “unique constraint violated” error when you apply this patch in a Multitenant environment, such as the one below from the PDB$SEED:

 
813097 22:24:55 SQL> exec DBMS_METADATA_DPBUILD.create_table_export;
813098 BEGIN DBMS_METADATA_DPBUILD.create_table_export; END;
813099 
813100 *
813101 ERROR at line 1:
813102 ORA-31642: the following SQL statement fails:
813103 insert into sys.metaview$ (type, flags, properties, model, version, xmltag,
813104 udt, schema, viewname) values
813105 ('TBL_1',0,6,'ORACLE',1202000000,
813106 NULL,NULL,NULL,NULL)
813107 ORA-06512: at "SYS.DBMS_METADATA_BUILD", line 2873
813108 ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
813109 ORA-06512: at "SYS.DBMS_METADATA_BUILD", line 1590
813110 ORA-00001: unique constraint (SYS.I_METAVIEW$) violated
813111 ORA-06512: at "SYS.DBMS_METADATA_BUILD", line 1573
813112 ORA-06512: at "SYS.DBMS_METADATA_BUILD", line 1721
813113 ORA-06512: at "SYS.DBMS_METADATA_BUILD", line 1721
813114 ORA-06512: at "SYS.DBMS_METADATA_BUILD", line 2847
813115 ORA-06512: at "SYS.DBMS_METADATA_DPBUILD", line 9590
813116 ORA-06512: at line 1
813117 
813118 
813119 Elapsed: 00:00:01.18
813120 22:24:56 SQL> exec DBMS_METADATA_DPBUILD.create_schema_export;
813121 BEGIN DBMS_METADATA_DPBUILD.create_schema_export; END;
813122 
813123 *
813124 ERROR at line 1:
813125 ORA-31642: the following SQL statement fails:
813126 insert into sys.metaview$ (type, flags, properties, model, version, xmltag,
813127 udt, schema, viewname) values
813128 ('SE_2',0,6,'ORACLE',1202000000,
813129 NULL,NULL,NULL,NULL)
813130 ORA-06512: at "SYS.DBMS_METADATA_BUILD", line 2873
813131 ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
813132 ORA-06512: at "SYS.DBMS_METADATA_BUILD", line 1590
813133 ORA-00001: unique constraint (SYS.I_METAVIEW$) violated
813134 ORA-06512: at "SYS.DBMS_METADATA_BUILD", line 1573
813135 ORA-06512: at "SYS.DBMS_METADATA_BUILD", line 1721
813136 ORA-06512: at "SYS.DBMS_METADATA_BUILD", line 1721
813137 ORA-06512: at "SYS.DBMS_METADATA_BUILD", line 2847
813138 ORA-06512: at "SYS.DBMS_METADATA_DPBUILD", line 9810
813139 ORA-06512: at line 1
813140 
813141 
813142 Elapsed: 00:00:00.76

Then this can be ignored as it does no harm and has no consequences. A fix hasn’t been developed yet but will be created.

And here is a bit more technical explanation of why this error happens:

The Metadata API’s meta* dictionary tables are created with sharing=object, which means they are shared across all containers. The only meaningful inserts occur when loading into the CDB$ROOT, but the same insert scripts get run when patching PDBs. DML operations that used to be silently ignored in a PDB can now occur, resulting in the ORA-00001 error being raised.
This is a change in internal behavior in Multitenant from 12.1.0.2 to 12.2.0.1. The log file provided in one of the SRs where this error was reported confirms that the execution of the catmet2.sql script continues to completion even when this error is raised.
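If you want to double-check that datapatch still completed fine despite the ORA-00001, look for anything that did not end in SUCCESS. A sketch, to be run in each container you care about:

SQL> select patch_id, action, status, action_time
  2    from dba_registry_sqlpatch
  3   where status <> 'SUCCESS';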

Potential Issue 2 – Upgrade fails with XDB issue and ORA-7445

I don’t want to turn this blog post into a “Known Issues” note, but since another person reported an issue, let me quickly document it here in case you see something similar.

This customer initially reported a failed database upgrade with AutoUpgrade when this patch is in place:

The following errors occur on autoupgrade:
Oracle XML Database
ORA-00932: inconsistent datatypes: expected BINARY got BLOB
ORA-00932: inconsistent datatypes: expected BINARY got BLOB
ORA-04063: view "XDB.RESOURCE_VIEW" has errors
ORA-04063: view "XDB.RESOURCE_VIEW" has errors
ORA-04063: view "XDB.RESOURCE_VIEW" has errors
ORA-04063: view "XDB.RESOURCE_VIEW" has errors
Datapatch
Error: prereq checks failed!

In the alert log:

2021-04-24T14:12:39.860739+02:00
Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x10] [PC:0x96066FD, kktcrt()+7341] [flags: 0x0, count: 1]
Errors in file /u01/app/oracle/diag/rdbms/abc123/webf/trace/webf_ora_17135.trc (incident=1908096):
ORA-07445: exception encountered: core dump [kktcrt()+7341] [SIGSEGV] [ADDR:0x10] [PC:0x96066FD] [Address not mapped to object] []
Incident details in: /u01/app/oracle/diag/rdbms/abc123/webf/incident/incdir_1908096/webf_ora_17135_i1908096.trc

The log files show this error:

08:40:48 SQL> create or replace trigger xdb.xdb_rv_trig INSTEAD OF insert or delete or update
08:40:48   2  on xdb.resource_view for each row
08:40:48   3  begin
08:40:48   4    if inserting then
08:40:48   5      xdb_rvtrig_pkg.rvtrig_ins(:new.res, :new.any_path);
08:40:48   6
08:40:48   7
08:40:48   8    end if;
08:40:48   9
08:40:48  10    if deleting then
08:40:48  11       xdb_rvtrig_pkg.rvtrig_del(:old.res, :old.any_path);
08:40:48  12
08:40:48  13
08:40:48  14    end if;
08:40:48  15
08:40:48  16    if updating then
08:40:48  17       xdb_rvtrig_pkg.rvtrig_upd(:old.res,    :new.res,
08:40:48  18                                 :old.any_path,   :new.any_path );
08:40:48  19    end if;
08:40:48  20  end;
08:40:48  21  /
create or replace trigger xdb.xdb_rv_trig INSTEAD OF insert or delete or update
                     *
ERROR at line 1:
ORA-03113: end-of-file on communication channel
ERROR:
ORA-03114: not connected to ORACLE


SP2-1519: Unable to write to the error log table sys.registry$error
Process ID: 23451
Session ID: 573 Serial number: 8243

But we found another error just a few seconds before:

08:40:45 SQL> create index xdb.xdbhi_idx on xdb.xdb$resource p (value(p)) indextype is xdb.xdbhi_idxtyp;
create index xdb.xdbhi_idx on xdb.xdb$resource p (value(p)) indextype is xdb.xdbhi_idxtyp
                                                            *
ERROR at line 1:
ORA-00932: inconsistent datatypes: expected BINARY got BLOB


Elapsed: 00:00:00.02
08:40:45 SQL> 
08:40:45 SQL> /*-----------------------------------------------------------------------*/
08:40:45 SQL> /*  VIEWS                                    */
08:40:45 SQL> /*-----------------------------------------------------------------------*/
08:40:45 SQL> create or replace view xdb.resource_view as
08:40:45   2      select value(p) res, abspath(8888) any_path, sys_nc_oid$ resid
08:40:45   3      from xdb.xdb$resource p
08:40:45   4      where under_path(value(p), '/', 8888) = 1 ;
  select value(p) res, abspath(8888) any_path, sys_nc_oid$ resid
                *
ERROR at line 2:
ORA-00932: inconsistent datatypes: expected BINARY got BLOB

And our developer clearly identified this bug:

  • BUG 31823327 – FUSAPPS13:ORA-00932,ORA-24344,ORA-04063 DURING UPGRADE

as the root cause of the above issue. You may want to check whether a one-off is already available on MOS. If not, please request it in case you see the above error sequence during your test runs.
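Before opening an SR, you can also quickly check whether a fix for this bug is already included in your target home. Just a sketch:

$ $ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep 31823327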

Further Links and Information

–Mike
