Just a few days ago we released a Data Pump Super Patch for Oracle 19.10 and newer. “Newer” refers to the fact that once 19.11.0 is available, the merge patch I’m writing about here will be available right away as well (sorry, it got delayed a bit). And if you work with Data Pump, you may want to consider this performance patch collection.
Why do you need this patch?
Many customers will know the issue that applying changes with datapatch can take a long time as soon as dpload.sql is involved. This Super Patch fixes almost all of the known issues: we put a lot of performance fixes into this single merge patch. “Merge” patch means that it combines a number of one-off patches. These fixes mainly concern dpload.
What is the patch number?
This generic patch can be downloaded from MOS as patch number 32551008. PLEASE USE the new patch 32919937 instead, since it fixes an issue we found in the previous one. MOS will reflect this soon as well.
We set the bug flag to “publish” so you can see details of the bug on MOS as well:
“Generic” means that you will get one patch for all platforms.
What’s in it?
This merge patch contains the following fixes:
- 29835516 – Improve approach to drop objects: execute in anonymous PLSQL block
- 30051876 – Move HCS objects out of our files to their own for easier HCS patching
- 31014419 – Try to drop all Data Pump AQ tables
- 31054943 – Vast performance improvement of MDAPI’s catmetgrant1/2.sql scripts
- 31068789 – Conditionalize creating of dummy XDB views in catmetviews.sql, minimize row deletions in catmetinsert.sql
- 31086613 – Simplify Data Pump backporting by splitting kupcmsg types specs/bodies into new files from the often updated prvtkupc
- 31113196 – Move creation of ku$xktfbue from catmettypes.sql to catdpb.sql
- 31207242 – Remove DBMS_OUTPUT.PUT_LINE from catmetgrant1/2.sql
- 31207542 – Remove unnecessary SET commands from catnodpaq.sql
- 31207734 – Remove unnecessary SET commands from catnodp.sql/catnomta.sql scripts
- 31214625 – Cleanup/rewrite of dpload; This subsumes fix for bug 29284656
- 31727647 – Don’t always drop MDAPI’s XDB views during upgrade/patch (dpload)
- 32195077 – Split off grants/inserts in catdpb.sql into new catdpbgi.sql script
- 32195274 – Cleanup catmetviews/catmet2/catmetx to reduce dependencies and the number of invalid objects
- 32195313 – CREATE OR REPLACE FORCE instead of DROP/CREATE. Add dbmsplts.sql, prvtplts.plb to dpload.sql
- 32316344 – Add sqlsessstart/end invocations to five of our install scripts
- 32388966 – dpload concurrency improvement: remove running of dbmspump.sql
- 32479451 – dpload concurrency improvement: only run catmet2.sql and not all of catdph.sql
- 32525862 – dpload: fixup version checks to deal with latest db version# change
All of them are meant to allow better and faster patching of Data Pump / Metadata API.
And here is a short overview on what is included directly from the developer:
Things you will notice with this patch:
- Huge performance improvement. The elapsed time to install a patch via the dpload.sql apply script should run 60-90+ percent faster
- Subsequent invocations can be even faster, at well under a minute
- The number of objects left invalid has dropped by 50-90% (about 10 or fewer), and down to zero on subsequent invocations
- Contention issues should be reduced, as dpload.sql no longer performs some operations that were unnecessary while applying a patch
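If you want to verify the invalid-object improvement after a dpload.sql run, a quick dictionary check along these lines can help. This is only a sketch – the LIKE filters are my assumption about which SYS objects belong to Data Pump / MDAPI, so adjust them to your environment:

```sql
-- Sketch: count SYS objects left invalid after patching.
-- The name filters are assumptions, not an official object list.
SELECT object_type, COUNT(*) AS invalid_count
  FROM dba_objects
 WHERE owner = 'SYS'
   AND status = 'INVALID'
   AND (   object_name LIKE 'KUP%'
        OR object_name LIKE 'DBMS_DATAPUMP%'
        OR object_name LIKE 'DBMS_METADATA%')
 GROUP BY object_type;
```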
Be aware – the following additional patch is not on MOS yet (May 2, 2021):
There is nothing “missing” from the merge itself, but you may want to add one additional fix which isn’t included here: bug 32233175.
This fix can also help with the elapsed time of the initial execution of dpload.sql in a PDB. Without it, there is an unnecessary invalidation of views in catmetviews.sql that requires all of them to be recompiled, which can take minutes. Combining it with the Super Patch will require a merge with the above patch.
But it isn’t mandatory. The fix for bug 32233175 can exist on its own and does not need the big merge patch as its basis.
Rolling – yes or no?
Surprise, surprise – this patch is rolling and standby-first applicable. See the README – it is correct. Hence, no outage is required in a RAC environment. Technically speaking, this patch is even “hot patchable”, since it only lays the groundwork to apply other patches faster and easier. So you could plop it into the Oracle Home of a running system. The files don’t get executed until the next time you apply a Data Pump patch with datapatch.
Will it be in 19.11.0 or 19.12.0?
Unfortunately it won’t be part of 19.11.0 or 19.12.0, but the merge patch on top of 19.11.0 is already available. The one for 19.12.0 will follow soon after the 19.12.0 RU is released. We hope to include it in one of the next RUs.
I downloaded patch 32551008 from My Oracle Support and unzipped it (PLEASE USE patch 32919937 instead):
$ cd 32551008/
[CDB2] oracle@hol:~/32551008
$ $ORACLE_HOME/OPatch/opatch apply
Oracle Interim Patch Installer version 12.2.0.1.23
Copyright (c) 2021, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/19
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/19/oraInst.loc
OPatch version    : 12.2.0.1.23
OUI version       : 12.2.0.7.0
Log file location : /u01/app/oracle/product/19/cfgtoollogs/opatch/opatch2021-04-15_18-17-42PM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   32551008

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Backing up files...
Applying interim patch '32551008' to OH '/u01/app/oracle/product/19'

Patching component oracle.rdbms.dbscripts, 19.0.0.0.0...

Patching component oracle.rdbms, 19.0.0.0.0...
Patch 32551008 successfully applied.
Log file location: /u01/app/oracle/product/19/cfgtoollogs/opatch/opatch2021-04-15_18-17-42PM_1.log

OPatch succeeded.
I didn’t shut down my database instance – and when you check $ORACLE_HOME/rdbms/admin, you will see which files the patch exchanged in this directory:
-rw-r--r--. 1 oracle dba   44334 Mar 29 13:36 prvtkupc_typespec.plb
-rw-r--r--. 1 oracle dba   28090 Mar 29 13:36 prvtkupc_typebody.plb
-rw-r--r--. 1 oracle dba   24749 Mar 29 13:36 prvtkupc.plb
-rw-r--r--. 1 oracle dba   68896 Mar 29 13:36 dpload.sql
-rw-r--r--. 1 oracle dba   63825 Mar 29 13:36 dbmsplts.sql
-rw-r--r--. 1 oracle dba    4433 Mar 29 13:36 catpspec.sql
-rw-r--r--. 1 oracle dba   11951 Mar 29 13:36 catpdeps.sql
-rw-r--r--. 1 oracle dba   76380 Mar 29 13:36 catnomta.sql
-rw-r--r--. 1 oracle dba   23315 Mar 29 13:36 catnodp.sql
-rw-r--r--. 1 oracle dba    1289 Mar 29 13:36 catnodpobs.sql
-rw-r--r--. 1 oracle dba    3576 Mar 29 13:36 catnodp_hcs.sql
-rw-r--r--. 1 oracle dba    3440 Mar 29 13:36 catnodpaq.sql
-rw-r--r--. 1 oracle dba    1464 Mar 29 13:36 catnodpall.sql
-rw-r--r--. 1 oracle dba   13876 Mar 29 13:36 catmetx.sql
-rw-r--r--. 1 oracle dba  546052 Mar 29 13:36 catmetviews.sql
-rw-r--r--. 1 oracle dba   21620 Mar 29 13:36 catmetviews_hcs.sql
-rw-r--r--. 1 oracle dba  389133 Mar 29 13:36 catmettypes.sql
-rw-r--r--. 1 oracle dba   18474 Mar 29 13:36 catmettypes_hcs.sql
-rw-r--r--. 1 oracle dba    1259 Mar 29 13:36 catmetloadxsl.sql
-rw-r--r--. 1 oracle dba 1071276 Mar 29 13:36 catmetinsert.sql
-rw-r--r--. 1 oracle dba    5208 Mar 29 13:36 catmetgrants_hcs.sql
-rw-r--r--. 1 oracle dba   40289 Mar 29 13:36 catmetgrant2.sql
-rw-r--r--. 1 oracle dba   42842 Mar 29 13:36 catmetgrant1.sql
-rw-r--r--. 1 oracle dba    2668 Mar 29 13:36 catmet2.sql
-rw-r--r--. 1 oracle dba   46549 Mar 29 13:36 catdwgrd.sql
-rw-r--r--. 1 oracle dba   28804 Mar 29 13:36 catdpb.sql
-rw-r--r--. 1 oracle dba   29446 Mar 29 13:36 catdpbgi.sql
Pretty straightforward.
Potential Issue 1 – ORA-00001
You reported some issues here on the blog; others were raised via SRs (which is the ideal way, in addition to placing a comment here). This actually helped us a lot to track down a general issue with the patch apply scripts.
This issue is fixed in the new patch 32919937.
In case you see an ORA-00001 “unique constraint violated” error when you apply this patch in a Multitenant environment, such as the one below from PDB$SEED:
813097 22:24:55 SQL> exec DBMS_METADATA_DPBUILD.create_table_export;
813098 BEGIN DBMS_METADATA_DPBUILD.create_table_export; END;
813099
813100 *
813101 ERROR at line 1:
813102 ORA-31642: the following SQL statement fails:
813103 insert into sys.metaview$ (type, flags, properties, model, version, xmltag,
813104 udt, schema, viewname) values
813105 ('TBL_1',0,6,'ORACLE',1202000000,
813106 NULL,NULL,NULL,NULL)
813107 ORA-06512: at "SYS.DBMS_METADATA_BUILD", line 2873
813108 ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
813109 ORA-06512: at "SYS.DBMS_METADATA_BUILD", line 1590
813110 ORA-00001: unique constraint (SYS.I_METAVIEW$) violated
813111 ORA-06512: at "SYS.DBMS_METADATA_BUILD", line 1573
813112 ORA-06512: at "SYS.DBMS_METADATA_BUILD", line 1721
813113 ORA-06512: at "SYS.DBMS_METADATA_BUILD", line 1721
813114 ORA-06512: at "SYS.DBMS_METADATA_BUILD", line 2847
813115 ORA-06512: at "SYS.DBMS_METADATA_DPBUILD", line 9590
813116 ORA-06512: at line 1
813117
813118
813119 Elapsed: 00:00:01.18
813120 22:24:56 SQL> exec DBMS_METADATA_DPBUILD.create_schema_export;
813121 BEGIN DBMS_METADATA_DPBUILD.create_schema_export; END;
813122
813123 *
813124 ERROR at line 1:
813125 ORA-31642: the following SQL statement fails:
813126 insert into sys.metaview$ (type, flags, properties, model, version, xmltag,
813127 udt, schema, viewname) values
813128 ('SE_2',0,6,'ORACLE',1202000000,
813129 NULL,NULL,NULL,NULL)
813130 ORA-06512: at "SYS.DBMS_METADATA_BUILD", line 2873
813131 ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
813132 ORA-06512: at "SYS.DBMS_METADATA_BUILD", line 1590
813133 ORA-00001: unique constraint (SYS.I_METAVIEW$) violated
813134 ORA-06512: at "SYS.DBMS_METADATA_BUILD", line 1573
813135 ORA-06512: at "SYS.DBMS_METADATA_BUILD", line 1721
813136 ORA-06512: at "SYS.DBMS_METADATA_BUILD", line 1721
813137 ORA-06512: at "SYS.DBMS_METADATA_BUILD", line 2847
813138 ORA-06512: at "SYS.DBMS_METADATA_DPBUILD", line 9810
813139 ORA-06512: at line 1
813140
813141
813142 Elapsed: 00:00:00.76
then it can be ignored – it is harmless and has no consequences. A fix hasn’t been developed yet but will be created.
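Since the error-stack lines accompanying this benign pattern can bury a real failure, a small filter can help when you scan a dpload/catmet2 log. This is only a sketch – the sample log text and the filter pattern are illustrative, not from a real run:

```shell
# Sketch: surface only unexpected ORA- errors from a dpload/catmet2 log,
# ignoring the benign ORA-31642/ORA-00001 pattern (and its ORA-06512 stack).
# The sample log below is illustrative, not from a real run.
log='ORA-31642: the following SQL statement fails:
ORA-00001: unique constraint (SYS.I_METAVIEW$) violated
ORA-06512: at "SYS.DBMS_METADATA_BUILD", line 1573
ORA-01722: invalid number'
filtered=$(printf '%s\n' "$log" | grep '^ORA-' | grep -v -e 'ORA-31642' -e 'ORA-00001' -e 'ORA-06512')
printf '%s\n' "$filtered"
```

On a real system you would feed the actual sqlpatch/dpload log file through the same two grep calls instead of the here-string.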
And here is a bit more of a technical explanation of why this error happens:
The MetaData API’s meta* dictionary tables are shared (sharing=object), which means they are shared across all containers. The only meaningful inserts occur when loading into CDB$ROOT, but the same insert scripts get run when patching PDBs. DML operations that used to be silently ignored in a PDB can now occur, resulting in the ORA-00001 error being raised.
This is a change in internal multitenant behavior between releases. The log file provided in one of the SRs where this error was reported confirms that execution of the catmet2.sql script continues to completion even when the error is raised.
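You can see this sharing mode in the dictionary yourself. A sketch (DBA_OBJECTS exposes a SHARING column in 12.2 and later; which meta* tables you check is up to you – the three names below are examples):

```sql
-- Sketch: show how some MDAPI meta* dictionary objects are shared
-- across containers (run in CDB$ROOT).
SELECT object_name, object_type, sharing
  FROM dba_objects
 WHERE owner = 'SYS'
   AND object_name IN ('METAVIEW$', 'METANAMETRANS$', 'METASCRIPT$');
```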
Potential Issue 2 – Upgrade fails with XDB issue and ORA-7445
I don’t want to turn this blog post into a “Known Issues” note but since another person reported an issue, let me quickly document it here in case you see something similar.
This customer initially reported a failed database upgrade with AutoUpgrade when this patch is in place:
The following errors occur on autoupgrade:

Oracle XML Database
ORA-00932: inconsistent datatypes: expected BINARY got BLOB
ORA-00932: inconsistent datatypes: expected BINARY got BLOB
ORA-04063: view "XDB.RESOURCE_VIEW" has errors
ORA-04063: view "XDB.RESOURCE_VIEW" has errors
ORA-04063: view "XDB.RESOURCE_VIEW" has errors
ORA-04063: view "XDB.RESOURCE_VIEW" has errors
Datapatch Error: prereq checks failed!

In the alert log:

2021-04-24T14:12:39.860739+02:00
Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x10] [PC:0x96066FD, kktcrt()+7341] [flags: 0x0, count: 1]
Errors in file /u01/app/oracle/diag/rdbms/abc123/webf/trace/webf_ora_17135.trc (incident=1908096):
ORA-07445: exception encountered: core dump [kktcrt()+7341] [SIGSEGV] [ADDR:0x10] [PC:0x96066FD] [Address not mapped to object]
Incident details in: /u01/app/oracle/diag/rdbms/abc123/webf/incident/incdir_1908096/webf_ora_17135_i1908096.trc
The log files show this error:
08:40:48 SQL> create or replace trigger xdb.xdb_rv_trig INSTEAD OF insert or delete or update
08:40:48   2  on xdb.resource_view for each row
08:40:48   3  begin
08:40:48   4    if inserting then
08:40:48   5      xdb_rvtrig_pkg.rvtrig_ins(:new.res, :new.any_path);
08:40:48   6
08:40:48   7
08:40:48   8    end if;
08:40:48   9
08:40:48  10    if deleting then
08:40:48  11      xdb_rvtrig_pkg.rvtrig_del(:old.res, :old.any_path);
08:40:48  12
08:40:48  13
08:40:48  14    end if;
08:40:48  15
08:40:48  16    if updating then
08:40:48  17      xdb_rvtrig_pkg.rvtrig_upd(:old.res, :new.res,
08:40:48  18                                :old.any_path, :new.any_path );
08:40:48  19    end if;
08:40:48  20  end;
08:40:48  21  /
create or replace trigger xdb.xdb_rv_trig INSTEAD OF insert or delete or update
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
ERROR:
ORA-03114: not connected to ORACLE

SP2-1519: Unable to write to the error log table sys.registry$error
Process ID: 23451
Session ID: 573 Serial number: 8243
But we found another error just a few seconds earlier:
08:40:45 SQL> create index xdb.xdbhi_idx on xdb.xdb$resource p (value(p)) indextype is xdb.xdbhi_idxtyp;
create index xdb.xdbhi_idx on xdb.xdb$resource p (value(p)) indextype is xdb.xdbhi_idxtyp
*
ERROR at line 1:
ORA-00932: inconsistent datatypes: expected BINARY got BLOB

Elapsed: 00:00:00.02
08:40:45 SQL>
08:40:45 SQL> /*-----------------------------------------------------------------------*/
08:40:45 SQL> /* VIEWS */
08:40:45 SQL> /*-----------------------------------------------------------------------*/
08:40:45 SQL> create or replace view xdb.resource_view as
08:40:45   2    select value(p) res, abspath(8888) any_path, sys_nc_oid$ resid
08:40:45   3    from xdb.xdb$resource p
08:40:45   4    where under_path(value(p), '/', 8888) = 1 ;
select value(p) res, abspath(8888) any_path, sys_nc_oid$ resid
*
ERROR at line 2:
ORA-00932: inconsistent datatypes: expected BINARY got BLOB
And our developer clearly identified this bug:
- BUG 31823327 – FUSAPPS13:ORA-00932,ORA-24344,ORA-04063 DURING UPGRADE
as the root cause of the above issue. Please check whether a one-off is already available on MOS. If not, please request it in case you see the above error sequence during your test runs.
Further Links and Information
- Download patch 32919937
- Bug 32551008 – CONSOLIDATED BUG OF IMPROVEMENTS TO DATA PUMP / MDAPI PATCHING PROCEDURES
- MOS Note: 2819284.1 – Data Pump Recommended Proactive Patches For 19.10 and Above
- Does Data Pump import serially into PDBs?
- Database Migration with Data Pump from non-CDB to PDB
Hi Mike. I see this is a GENERIC patch for 19.10. Can I install this on a Windows system? Thanks
in theory, yes – but as Susanne Jahr posted below, it does not work at the moment. We will sort this out.
once again, you are out of luck running on Windows… Windows got Bundle Patch 19.10 with patch number 32062765 (and not, like the RU 19.10, 32218454). So if I run the conflict check on my Windows system, which is patched to 19.10, I get:
“Interim patch 32551008 requires prerequisite patch(es) which are not present in the Oracle Home. Apply prerequisite patch(es) before applying interim patch 32551008.”
Tried to install the patch anyway, now I get:
“Skip patch 32551008 from list of patches to apply: This patch is not needed. After skipping patches with missing components, there are no patches to apply.”
I mean – really?
great feedback – let me send this to the developer right away.
patch is not public yet.
Ah sorry, I overlooked your introductory sentence – waiting for 19.11.
There is a typo in: 3223175.
Thanks Martin – corrected it.
32233175 – looks like it is something unpublished; I can’t find anything on My Oracle Support with this number.
Also, for Data Pump one more important patch is required in my opinion: 32325781.
we switched the bug to PUBLIC – and the 19.11.0 version should be available now as well.
Will there be a new patch for 19.11, or can this be installed when Oracle 19c is patched with the latest RU?
Sorry, missed the text in the beginning ;D
Should be there soon – we are working on it.
you wrote that the merge for 19.11 is already available internally. Is it already released, as 19.11 has been out for a few days now? I did a search on MOS and couldn’t find it.
yes, it was ready to go – but we found a conflict with 19.11.0 which we need to fix first. Please stay tuned. I will update the initial blog post.
Cheers, and sorry for the inconvenience,
Hi Mike. I may be missing it, but it appears that the 32551008 merge patch for 19.11 has not been released on MOS yet. Have you heard when it may be available? Looking forward to this being rolled into a future RU.
Thank you for all you do.
Sorry Michael – it got delayed but should be available by now.
Perhaps mention this patch in doc 555.1? 😉
Good advice – let me check this with the owners.
Hi Mike, very useful post, thanks. However, the additional (not mandatory) patch 32233175 you mention does not seem to be accessible (neither the bug nor the patch).
sorry for the confusion – my fault, the patch is not on MOS yet as it is still in a regression test run.
I will add a comment to the post 🙁
there is a generic version plus a Linux x86-64 version available for 19.11. I am on Linux x86-64 and now I am confused which one to take. Why is there a Linux-specific version if a generic one is available as well?
unfortunately it got delayed due to an internal process issue. I will update the blog post as soon as it is available.
I downloaded the DataPump Super Patch for the 19.11 RU, and updated my Databases to said version.
I normally patch the new Home while it is installed (via ./runInstaller) as you described in one of your previous posts.
The updates worked flawlessly so far on multiple machines, but now we ran into an error while executing datapatch: ORA-22866
To point out: the 19.11 RU could be applied without any problems – just the Data Pump Super Patch seems to be causing problems.
Do you have any ideas about this or is this more of a case for a MOS service request?
Thanks in advance,
the Data Pump Super Patch does not require a datapatch run.
Are you saying:
1) You install 19.3.0, then add 19.11.0 and the Data Pump Super Patch
2) Then you obviously need to run “datapatch” for the 19.11.0 RU
3) And this now results in this error ORA-22866?
Just that I understand you correctly.
I’m hitting the same ORA-22866 issue as Niklas.
In my case, the upgrade from the 19.09 RU to the 19.11 RU went well, as did the datapatch command.
Then I applied the 32551008 Data Pump Super Patch – all good until running datapatch as per the post-installation documentation, which returned an ORA-22866 error. The utlrp.sql recompile went through without any issue and left no invalid objects.
Extract from DBA_REGISTRY_SQLPATCH:
PATCH_ID   ACTION  STATUS       TARGET_VERSION  DESCRIPTION                                                                 SOURCE_BUILD_DESCRIPTION  ACTION_TIME
---------- ------- ------------ --------------- --------------------------------------------------------------------------- ------------------------- ------------------------------
  31771877 APPLY   SUCCESS      19.9.0.0.0      Database Release Update : 19.9.0.0.201020 (31771877)                        Release_Update            13-NOV-20 08.12.35.225999 AM
  32545013 APPLY   SUCCESS      19.11.0.0.0     Database Release Update : 19.11.0.0.210420 (32545013)                       Release_Update            23-APR-21 02.44.12.829371 PM
  32551008 APPLY   WITH ERRORS  19.11.0.0.0     CONSOLIDATED BUG OF IMPROVEMENTS TO DATA PUMP / MDAPI PATCHING PROCEDURES.  Release_Update            03-MAY-21 03.59.01.769372 PM
Any idea ?
actually this patch has no “datapatch” component. So even if you call datapatch, there should be no action.
But I see that it gets an entry in DBA_REGISTRY_SQLPATCH which is kind of strange.
What I did now:
1. I provisioned a fresh 19.3.0 home, applied 19.11.0 to it and the Data Pump Super Patch (and another patch).
2. Then I stopped my 19.9.0 instance and started it in this new home
3. Then I ran “datapatch -verbose” as the other patch required it
DBA_REGISTRY_SQLPATCH is flawless.
Then, in a second attempt, I patched my 19.11.0 home, downloaded and applied the Super Patch to it – and intentionally ran datapatch.
But still, my DBA_REGISTRY_SQLPATCH is flawless afterwards.
$ $ORACLE_HOME/OPatch/datapatch -verbose
SQL Patching tool version 19.11.0.0.0 Production on Wed May 5 00:20:58 2021
Copyright (c) 2012, 2021, Oracle. All rights reserved.
Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_5772_2021_05_05_00_20_58/sqlpatch_invocation.log
Connecting to database...OK
Gathering database info...done
Note: Datapatch will only apply or rollback SQL fixes for PDBs
that are in an open state, no patches will be applied to closed PDBs.
Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
(Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done
Current state of interim SQL patches:
No interim patches found
Current state of release update SQL patches:
19.11.0.0.0 Release_Update 210413004009: Installed
Applied 19.11.0.0.0 Release_Update 210413004009 successfully on 21-APR-21 02.43.40.035316 PM
Applied 19.11.0.0.0 Release_Update 210413004009 successfully on 21-APR-21 02.43.41.532161 PM
Applied 19.11.0.0.0 Release_Update 210413004009 successfully on 21-APR-21 02.43.43.025186 PM
Adding patches to installation queue and performing prereq checks...done
For the following PDBs: CDB$ROOT PDB$SEED PDB1
No interim patches need to be rolled back
No release update patches need to be installed
No interim patches need to be applied
SQL Patching tool complete on Wed May 5 00:21:19 2021
Please open an SR and check with Support, as I can’t reproduce it.
Cheers, and sorry for the inconvenience,
Thanks Mike for your tests, interesting your DBA_REGISTRY_SQLPATCH is flawless.
I guess the “run datapatch” step is in most patch documentation by default, including that one; that’s why I ran it.
I’ll open a CR and keep you updated on the findings as it could help other people.
I checked the README of the patch for 19.10.0 and 19.11.0 – and it has no datapatch step in it. I know that it is standard in many patches – and I agree it shouldn’t do any harm when it is not necessary. But the patch’s README is correct here:
To install the patch, follow these steps:
1. Set your current directory to the directory where the patch is located and then run the OPatch utility by entering the following commands:
$ opatch apply
2. Verify whether the patch has been successfully installed by running the following command:
$ opatch lsinventory
3. Start the services from the Oracle home.
These are the instructions – and no difference between 19.10.0 and 19.11.0.
does this mean that we should wait to apply 32551008 (Linux x86-64) on top of 19.11.0, or that the merge with 32233175 is delayed?
there won’t be a merge with the Super Patch – and no, please don’t wait.
The 322… patch is not available yet – I added a comment to the blog post.
Sorry for the confusion.
Thanks for the clarification. I just realized that patch 32551008 got re-released on May 1 for platform “generic”.
Unfortunately, patch 32551008 has conflicts with patches 32792937 or 32812169 (for 19.11) that contain real business-critical fixes.
This makes 32551008 irrelevant for many production systems. Right where it should bring value.
thanks for the hint. Please simply open an SR and request a merge patch for patches 32551008, 32792937 and 32812169 on top of 19.11.0 on your platform, and upload your “opatch lsinventory” to the SR as well. Unfortunately you’ll find such conflicts in many cases, for obvious reasons: a patch like this can’t include all potential conflict merges right away, or it would never be ready to be released.
Could you please just open such an SR – I checked the bug, and there is (as far as I can see) no such merge requested yet.
upgrade from 18.7.0 to 19.11.0 including this patch 32551008 (and Data Pump patches 30763851, 31650438 and 32691756)
seems to hit this bug:
31504881 – DATAPATCH THROWS ERRORS FOR PDBS WITH MLR 31073862: ORA-00001: UNIQUE CONSTRAINT (SYS.I_METAVIEW$) VIOLATED
Currently open SRs:
SR 3-25946821991 : autoupgrade error: upgrading from 18.7.0 to 19.11.0 stopped.
SR 3-25913443471 : RMAN Catalog Database: datapatch fails at dpload.sql with ORA-1
Hoping that this gets sorted out quickly as we have a huge queue of databases to be upgraded to 19.11.0.
Could it be that the Super Patch has problems when other Data Pump patches are also applied? dpload.sql reports ORA-1 (unique constraint SYS.I_METAVIEW$ violated) when inserting into sys.metaview$.
I’m checking with the developer – I will drop you an email once we have sorted out the issue.
>Potential Issue 1 – ORA-00001
>Then this can be ignored as it has no harm or consequences. A fix hasn’t been developed yet but will be created.
There are not many options to ignore this: the patch is applied successfully in CDB$ROOT but fails in all PDBs, so the PDBs will open only in restricted mode, which makes them pretty much useless.
I guess you can wrap the calls to the failing functions in rdbms/admin/catmet2.sql with an exception handler – when others then if sqlcode = -31642 then … – and that will do the trick.
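The suggestion above, sketched as a hypothetical PL/SQL wrapper – this only illustrates the commenter’s idea, it is not the fix that actually shipped in 32919937:

```sql
-- Hypothetical sketch of the suggested workaround: swallow only the
-- benign re-insert error when the script runs again in a PDB.
BEGIN
  DBMS_METADATA_DPBUILD.create_table_export;
EXCEPTION
  WHEN OTHERS THEN
    IF SQLCODE = -31642 THEN
      NULL;  -- duplicate insert into sys.metaview$, see ORA-00001 above
    ELSE
      RAISE;
    END IF;
END;
/
```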
From this (updated) article I see that you suggest installing patch 32919937 instead of the original 32551008.
From MOS I cannot tell whether 32919937 contains everything fixed in 32551008, whether it contains a single separate fix, or whether both are needed. I’ve seen that 32551008 is no longer available on MOS, so I’m confused.
Actually, I did include 32551008 in our “standard” patch list for 19.11 in May and have already patched tens of DB homes. But yesterday I hit the ORA-22866 issue during datapatch (after installing RU 19.11 and 32551008 on a home that previously had only 19.5 installed), and the only way to solve it was a 32551008 rollback (followed by 35 minutes of waiting for datapatch on a single-tenant SE2 database).
Should I avoid 32551008 in future patching plans? Should I instead add 32919937? Should I install only 32919937?
The situation is very confusing.
please use the 329… one instead of the 325…
It contains everything from 325… plus two additional fixes (and one is for the ORA-1 issue we have seen).
The dev and engineering teams are sorting things out to make sure you don’t download the previous patch anymore.
Unfortunately this is a bit more complicated and may take some additional days.
Thank you Mike
I’m going to replace the 325… patch with the 329… one in our standard patch set.
Obviously the best thing would be to have it included in the next RUs, but at least we got a clear reply from you.
we need customer downloads of it 🙂 This will push it up in the ranks. We are trying to include it into 19.13 (Oct 2021) but it is not confirmed yet.
Hi Mike. And what to do if 325… is already installed. Do we need to replace it with the 329… patch? Thanks Peter
yes, it would be good to refresh it – please see the comments section of this blog post. Customers found issues and we fixed them; those fixes are included in the 329… version.
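A quick way to spot homes that still carry the superseded patch is to grep the inventory. This is only a sketch – the sample text below stands in for the output of "$ORACLE_HOME/OPatch/opatch lsinventory", and the dates are invented for illustration:

```shell
# Sketch: check whether the superseded patch 32551008 is still installed.
# The here-string stands in for real "opatch lsinventory" output.
lsinv='Patch  32545013  : applied on Fri Apr 23 14:30:12 CEST 2021
Patch  32551008  : applied on Mon May 03 15:50:44 CEST 2021'
if printf '%s\n' "$lsinv" | grep -q '32551008'; then
  status='32551008 present - roll it back and apply 32919937'
else
  status='ok'
fi
echo "$status"
```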
I just wanted to share the feedback that patch 28771564 still applies very slowly with datapatch, even with 32919937 on top of 19.12. It still takes ~10-15 minutes for each PDB.
Thanks for the update, Robert!
Wow! I just tested it for patching my EBS database from 19.10 to 19.12 with a few additional patches. Initially my patching time was a little over 2 hours; with this patch I am down to 40-45 minutes.
This is a huge improvement.
thanks for the feedback 🙂
And we are happy that you see this improvement – this was our goal!
Hi Mike, we also ran into the issue Niklas and Stephane mentioned above, with ORA-22866: cannot replace a type with table dependents. I do see an entry for it in dba_registry_sqlpatch. I wanted to see if you were aware of the solution for Niklas and Stephane.
I’m not – please open an SR.
there has been a confusing situation around the Data Pump Recommended Merge Patch(es) for a couple of weeks: “Data Pump Recommended Proactive Patches For 19.10 and Above (Doc ID 2819284.1)” is not accessible, patch 34271488 “Recommended Data Pump Merge for 19.15 – 34059555 34059654 30928455 31725941 33346378” is not downloadable, and links from Doc 555.1 only lead to “Patch 34271488 not found”. Patch 34271472, which is recommended as the successor of 33976098 (both for 19.14), is not downloadable either. Is there something important to consider when using Data Pump 19.14 or 19.15 at the moment?
Best regards, Olaf
yes, I know – there were several issues, and we just recut the BPs. The process to re-release them has started.
Sorry for all the inconvenience – I just learned very late about it, too.
With the July quarterly patching I applied 34294932 “MERGE ON DATABASE RU 19.16.0.0.0 OF 34059555”. However, when I look at Doc ID 2819284.1, it states the recommended patch for 19.16 is 34620690. The document hadn’t been updated when I patched my databases – I opened an SR to get a recommendation on which patch to apply.
However, as I am prepping for my next patch cycle I see both of these patches bundles available:
Patch 34510904: MERGE ON DATABASE RU 19.16.0.0.0 OF 34294932
Patch 34660465: MERGE ON DATABASE RU 19.17.0.0.0 OF 34620690
Is there a recommendation for which one I should apply? I am perplexed by the two patch bundles that seem to do the same thing.
Thank you so much for all the valuable information you post. You have scratched some pretty big itches for us! It has been incredibly helpful.
I deeply apologize – we try to clean this out right now since the MOS note (not owned by us) doesn’t get updated as quickly as we expect it to, and Support is giving mixed and incorrect information. If you have received this via an SR, please share the SR number with me.
This is the patch you need to apply please:
MERGE ON DATABASE RU 19.17.0.0.0 OF 34620690 (Patch 34660465)
Thanks, and sorry for the confusion.
Excellent! I imagined that would be your answer, and I have already begun applying that merge patch in my non-prod databases.
This is the SR I referenced: SR 3-30177439221 : Need new merge patch for data pump bundle
Be careful: patch 34620690 was withdrawn last week; there is a successor coming soon, see Doc 2819284.1.
where do you see that 34620690 had been withdrawn?
There was such an issue on Oct 3 – but the fix available on MOS via the note you link is not withdrawn.
I wanted to let you know that Doc ID 2819284.1 was updated again yesterday and now it says that patch 34734035 will be the recommended patch for 19.17.0.
By the way patch 34734035 is not released yet.
Thanks Martin – the bundle is available since last night:
DPBP on top of 19.17.0
I was at CloudWorld and saw your recommendation for Data Pump patching. I have never applied it in the past.
I am now very confused on how to apply this.
I completed the July 2022 patching last quarter and have p34133642_190000_Linux-x86-64 and p34086870_190000_Linux-x86-64 applied in the environment.
After seeing ( Doc ID 2819284.1 ) I applied datapump bundle 34620690.
Then, when I went to apply the Oracle 19.17 patch from Oct 2022, I got the following error:
There are no patches that can be applied now.
Following patches have conflicts. Please contact Oracle Support and get the merged patch of the patches :
Does this mean I have to roll back 34620690, then apply the 19.17 patch 34419443, and then apply the merge patch 34660465? Also, how will I know which merge patch to get every quarter? I did open an SR, and the recommendation was to roll back 34620690 and apply the 19.17 patch.
I think I am not getting the order of patching right. Can you please clarify how I can patch the database release every quarter and then do the datapump patching.
Thanks in advance
how do you patch? In-place, i.e. into the existing home, or out-of-place, i.e. into a separate new home?
When you patch in-place, you need to roll back the binary one-off and merge patches first, since those will conflict with the new RU you’d like to install. And since the DPBP and any other one-off patch has been built on top of 19.16.0, it must be deinstalled before the merge with the 19.17.0 RU can happen.
Let me put this on the stack of blog-posts-to-write since this is an interesting topic.
When you deploy a new home, i.e. you patch out-of-place, then the new home will have 19.3.0 (the base release) and 19.17.0 (the RU) plus your desired one-offs and merge patches on top (and the Data Pump Bundle Patch is a merge of currently 75 one-off patches).
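The in-place order described above can be sketched as follows. This is a hedged outline using the patch numbers from this thread (34620690 = DPBP on 19.16, 34419443 = the 19.17.0 RU, 34660465 = the DPBP merge for 19.17); with DRY_RUN=echo the commands are only printed, nothing is executed:

```shell
# Sketch of the in-place patching order: roll back the old DPBP first,
# then apply the new RU, then the matching DPBP merge.
# With DRY_RUN=echo the commands are only printed, never run.
DRY_RUN=echo
OPATCH="${ORACLE_HOME:-/u01/app/oracle/product/19}/OPatch/opatch"
$DRY_RUN "$OPATCH" rollback -id 34620690   # 1. remove the DPBP built on 19.16
$DRY_RUN "$OPATCH" apply                   # 2. apply the 19.17.0 RU (34419443) from its patch dir
$DRY_RUN "$OPATCH" apply                   # 3. apply the DPBP merge for 19.17 (34660465)
```

Unset DRY_RUN only when you run this for real, from the respective patch staging directories, with the database services stopped as the README requires.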
Hope this helps – cheers,
Mike, thanks for your reply – this helps me understand the process. One observation, though: the time between the rollback of the Data Pump patch for the previous release (e.g. 19.16) and the release of the Data Pump merge patch for the new release (e.g. 19.17) seems to be at least a few weeks, if not more. In that period we have lost all the bug fixes and improvements for Data Pump. Of course, this only affects in-place patching.
I see your point – and I hope we’ll have it for 19.18.0 at the day of the release of 19.18.0.
There was a significant internal process issue with the automation, which meant the owner needed to build the merge manually – this caused the delay you saw.