Yesterday I blogged about an issue with the highly recommended Data Pump Bundle Patches. Today I would like to shed some light on how to apply the Data Pump Bundle Patch – non-rolling, but online.

Photo by eberhard grossgasteiger on Unsplash
Download the Data Pump Bundle Patch
You can download the correct Data Pump Bundle Patch from MOS Note: 2819284.1 – Data Pump Recommended Proactive Patches For 19.10 and Above. It’s just a few MB, and it is a generic patch with a separate download for the Windows platform. So far so good.
Why the heck …
You may ask yourself: "Oh, another Oracle patch. Why should I apply this one now?". And I can see the question coming whether this is really necessary for an occasional Data Pump user.
The clear answer is: Yes, it is.
It not only fixes a lot of issues – not just for Data Pump itself but also for DBMS_METADATA and Transportable Tablespaces. It also speeds up patching dramatically whenever dpload.sql is involved. We saw patching times of over 1 hour when datapatch got executed with just 4 PDBs – and they went down to 4 minutes with the Bundle Patch in place.
There are a ton of optimizations you'd only recognize if we went through all the patches included.
But overall, if you don't use Data Pump or any of the above-mentioned functionality, and if you don't care about patching duration, you may not need this one. It is optional, not a must. But it is recommended for most customers.
Data Pump Bundle Patch – not RAC-rolling
But when you study the README of any of these Bundle Patches, for instance patch 34734035 for 19.17.0, you will at first spot:
This patch is non-RAC Rolling Installable.
Ouch.
But why is that?
Actually, there is a simple explanation, and I tried to describe it in my previous blog post already. If you ran datapatch from node 1 while somebody starts a Data Pump job on node 2 – regardless of whether it is an export, an import or a transportable tablespace job – this will cause a problem for the datapatch run.
Technically, you could apply the Data Pump Bundle Patch RAC-rolling as long as you prevent any jobs from accessing the affected packages, which include not only DBMS_DATAPUMP but also DBMS_METADATA, for instance.
Now you may ask yourself:
If it is that simple, why don't we do it? Actually, it is not that simple since we can't prevent a job from being started on any other node or even on the same node. To implement a correct solution, we need to pause any running job and restart it afterwards. As of certain releases, this pause is already implemented for regular Data Pump jobs but not for transportable jobs. We are working on this.
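If you want to get a quick feeling for whether Data Pump jobs are active right now, a look at DBA_DATAPUMP_JOBS helps. A minimal sketch – keep in mind it only covers DBMS_DATAPUMP jobs, not every caller of DBMS_METADATA:

-- Minimal sketch: list Data Pump jobs which are not idle
SELECT owner_name, job_name, operation, job_mode, state
  FROM dba_datapump_jobs
 WHERE state NOT IN ('NOT RUNNING');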
A closer look at the contents
Ok, if you feel challenged by now, then let us have a closer look into the contents of the Data Pump Bundle Patch.
├── etc
│   └── config
│       ├── actions.xml
│       └── inventory.xml
├── files
│   ├── rdbms
│   │   ├── admin
│   │   └── xml
│   └── sqlpatch
│       └── 34734035
└── README.txt
At first, you may notice that there are no binary patches included. Hence, nothing needs to be linked in. This is intentional. And it leads to the interesting situation that this bundle patch is not RAC-rolling but in fact can be applied ONLINE to every home, environment and node. It only copies in files which are not linked into the executable.
And this is what you actually can do.
OPatch continues with these patches:   34620690

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Backing up files...
Applying interim patch '34734035' to OH '/u01/app/oracle/product/19'

Patching component oracle.rdbms, 19.0.0.0.0...
Patching component oracle.rdbms.dbscripts, 19.0.0.0.0...
Patch 34734035 successfully applied.
Follow me
Just follow the README and apply the patch to an actively used home. Use “opatch apply” and it will work smoothly. You can do this on every involved node, node by node.
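In case you want a rough sketch of the per-node commands – the Oracle Home and the patch staging directory below are just placeholders for your environment:

# Placeholders – adjust the Oracle Home and the patch staging directory
export ORACLE_HOME=/u01/app/oracle/product/19
cd /stage/34734035
$ORACLE_HOME/OPatch/opatch apply -local
# then repeat the same on the next node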
But datapatch is the crucial part as explained above.
Unfortunately, the README neither explains the possibility to apply the patch online, nor does it tell you anything about the potential pitfall of a datapatch run while a Data Pump job gets executed. It just adds the requirement to execute utlrp.sql in case the patch contains a new version of dpload.sql.
Well, as you can see from the complete file list of the 19.17.0 Data Pump Bundle Patch below, dpload.sql is included. And you can safely assume that every DPBP contains a new version of dpload.sql.
├── etc
│   └── config
│       ├── actions.xml
│       └── inventory.xml
├── files
│   ├── rdbms
│   │   ├── admin
│   │   │   ├── bug_33517865_apply.sql
│   │   │   ├── c18.sql
│   │   │   ├── catdpbgi.sql
│   │   │   ├── catdpb.sql
│   │   │   ├── catdwgrd.sql
│   │   │   ├── catmet2.sql
│   │   │   ├── catmetgrant1.sql
│   │   │   ├── catmetgrant2.sql
│   │   │   ├── catmetgrants_hcs.sql
│   │   │   ├── catmetinsert.sql
│   │   │   ├── catmetloadxsl.sql
│   │   │   ├── catmettypes_hcs.sql
│   │   │   ├── catmettypes.sql
│   │   │   ├── catmetviews_hcs.sql
│   │   │   ├── catmetviews_mig.sql
│   │   │   ├── catmetviews.sql
│   │   │   ├── catmetx.sql
│   │   │   ├── catnodpall.sql
│   │   │   ├── catnodpaq.sql
│   │   │   ├── catnodp_hcs.sql
│   │   │   ├── catnodpobs.sql
│   │   │   ├── catnodp.sql
│   │   │   ├── catnomta.sql
│   │   │   ├── catpdeps.sql
│   │   │   ├── catpspec.sql
│   │   │   ├── dbmsmeta.sql
│   │   │   ├── dbmsmetb.sql
│   │   │   ├── dbmsmetu.sql
│   │   │   ├── dpload.sql
│   │   │   ├── prvtbpci.plb
│   │   │   ├── prvtbpdi.plb
│   │   │   ├── prvtbpd.plb
│   │   │   ├── prvtbpf.plb
│   │   │   ├── prvtbpm.plb
│   │   │   ├── prvtbpui.plb
│   │   │   ├── prvtbpu.plb
│   │   │   ├── prvtbpvi.plb
│   │   │   ├── prvtbpv.plb
│   │   │   ├── prvtdp.plb
│   │   │   ├── prvthpci.plb
│   │   │   ├── prvthpf.plb
│   │   │   ├── prvthpui.plb
│   │   │   ├── prvthpu.plb
│   │   │   ├── prvthpw.plb
│   │   │   ├── prvtkupc.plb
│   │   │   ├── prvtkupc_typebody.plb
│   │   │   ├── prvtkupc_typespec.plb
│   │   │   ├── prvtmet2.plb
│   │   │   ├── prvtmeta.plb
│   │   │   ├── prvtmetb.plb
│   │   │   ├── prvtmetd.plb
│   │   │   ├── prvtmeti.plb
│   │   │   ├── prvtmetu.plb
│   │   │   ├── prvtxdba.plb
│   │   │   └── sqlfiles.xml
│   │   └── xml
│   │       └── xsl
│   └── sqlpatch
│       └── 34734035
│           └── 24992341
└── README.txt
By default, datapatch will recompile up to 300 invalid objects – this is the threshold. And it takes care of Oracle-maintained objects only, while utlrp.sql will also attempt to recompile user objects.
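If you'd like to check whether you are above that threshold, a minimal sketch against DBA_OBJECTS (the ORACLE_MAINTAINED column exists since 12.2) could look like this:

-- Minimal sketch: count invalid Oracle-maintained objects
-- If the count exceeds datapatch's threshold of 300, run utlrp.sql afterwards
SELECT count(*)
  FROM dba_objects
 WHERE status = 'INVALID'
   AND oracle_maintained = 'Y';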
What does the inventory tell you?
In the inventory you can easily identify the Data Pump Bundle Patch as MERGE ON DATABASE RU 19.17.0.0.0 OF 34650250 34660465 24338134 25143018 26565187:
$ $OH19/OPatch/opatch lspatches

34734035;MERGE ON DATABASE RU 19.17.0.0.0 OF 34650250 34660465 24338134 25143018 26565187
34411846;OJVM RELEASE UPDATE: 19.17.0.0.221018 (34411846)
34419443;Database Release Update : 19.17.0.0.221018 (34419443)
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
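The opatch inventory only tells you about the home. Whether datapatch has applied the database-side part of the DPBP, you can check in DBA_REGISTRY_SQLPATCH – a minimal sketch:

-- Minimal sketch: check the SQL patch registry for the 19.17.0 DPBP
SELECT patch_id, action, status, action_time
  FROM dba_registry_sqlpatch
 WHERE patch_id = 34734035
 ORDER BY action_time;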
The risk is very low
In addition, to prevent any sort of issues, we put in some safeguards.
At first, there is a built-in 3-minute timeout before an error gets signaled:
BEGIN ku$_dpload.initial_phase; END;

*
ERROR at line 1:
ORA-20000: Retry dpload.sql script later when Data Pump and Metadata API are
not in use; current users are: pid:11720, user:SYS, machine:<Machine>,
sid:263, module:sqlplus@<ConnectString> (TNS V1-
ORA-06512: at "SYS.KU$_DPLOAD", line 1042
ORA-06512: at line 1
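The error already names the blocking session(s). If you'd like to check upfront who is holding library locks on the affected packages, a sketch against DBA_DDL_LOCKS could look like this – the list of package names is just an example:

-- Minimal sketch: sessions holding DDL/library locks on Data Pump / Metadata API packages
SELECT session_id, owner, name, type, mode_held
  FROM dba_ddl_locks
 WHERE owner = 'SYS'
   AND name IN ('DBMS_DATAPUMP', 'DBMS_METADATA');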
And this is the error you will see – since the bundle patch for 19.14.0 – when you try to execute Data Pump while it gets patched:
ORA-39442: Data Pump software update in progress
In previous versions, this was the error sequence – not telling you exactly what is going (wr)on(g):
Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

ORA-31626: job does not exist
ORA-31637: cannot create job SYS_EXPORT_FULL_01 for user SYSTEM
ORA-06512: at "SYS.KUPV$FT", line 1142
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 1751
ORA-39062: error creating master process DM00
ORA-39107: Master process DM00 violated startup protocol. Master error:
In-place and out-of-place patching
Since I can foresee questions and criticism coming up, let me explain the handling of these bundles when you patch forward, i.e. when you go, for instance, to 19.18.0. Let us assume you are on 19.17.0 and you applied the Data Pump Bundle Patch for 19.17.0 on top of it. There is no difference whether you are on Unix or Windows.
Out-of-place patching
This is the most convenient setup. You deploy a new home with the correct patches including the future 19.18.0 DPBP. Then you stop your database in the previous 19.17.0 home, you may copy some files, you may change the services in the clusterware – and then you start in 19.18.0. Start all PDBs, and then invoke datapatch. That's it. datapatch will sort out everything for you, roll back what needs to be rolled back in the right order, and then apply what needs to be applied.
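As a minimal sketch for a single instance – the home paths are placeholders, ORACLE_SID is assumed to be set already, and RAC environments would use srvctl and the clusterware instead:

# Placeholders – adjust paths to your environment; ORACLE_SID is assumed to be set
export OLD_HOME=/u01/app/oracle/product/19.17
export NEW_HOME=/u01/app/oracle/product/19.18

# Stop the database in the old home
$OLD_HOME/bin/sqlplus / as sysdba <<EOF
shutdown immediate
EOF

# Copy spfile, password file and network config as needed (site-specific)

# Switch to the new home and start the database plus all PDBs
export ORACLE_HOME=$NEW_HOME
export PATH=$ORACLE_HOME/bin:$PATH
sqlplus / as sysdba <<EOF
startup
alter pluggable database all open;
EOF

# datapatch rolls back and applies everything in the right order
$ORACLE_HOME/OPatch/datapatch -verbose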
In-place patching
This is what we don't recommend but what I do – and according to a poll Tim Hall recently did on Twitter, more than 50% of those answering still do: in-place patching into the existing home.
In this case, it becomes a bit more tricky. While the RUs are mergeable – you can apply 19.18.0 to a home which has 19.17.0 in it – the one-off and merge patches need to be rolled back first. For you, this means (you'll find a command-line sketch after the list):
- Roll back the DPBP
- Roll back the MRP
- Roll back other one-offs
- Apply 19.18.0
- Apply OJVM 19.18.0 if JVM is present in your database and you don’t use the mitigation patch
- Apply DPBP for 19.18.0
- Apply MRP for 19.18.0
- Apply other required one-offs for 19.18.0
- Then start your database and all PDBs
- Then invoke datapatch
- Recompile if there were more than 300 invalid objects
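A rough command-line sketch of this in-place sequence – the staging paths and directory names are placeholders, only the 19.17.0 DPBP patch ID is taken from above:

# Placeholders – adjust staging paths and patch IDs to your environment
export ORACLE_HOME=/u01/app/oracle/product/19
export STAGE=/stage                                 # hypothetical staging area

# Roll back the DPBP, MRP and other one-offs first
$ORACLE_HOME/OPatch/opatch rollback -id 34734035    # 19.17.0 DPBP
# ... roll back the MRP and remaining one-offs the same way ...

# Apply the 19.18.0 RU, then OJVM (if needed), DPBP, MRP and one-offs for 19.18.0
( cd $STAGE/ru_19.18.0   && $ORACLE_HOME/OPatch/opatch apply )
( cd $STAGE/dpbp_19.18.0 && $ORACLE_HOME/OPatch/opatch apply )
# ... OJVM, MRP and other required one-offs ...

# Start the database and all PDBs, then run datapatch
$ORACLE_HOME/OPatch/datapatch -verbose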
Life will be easier when you do out-of-place patching.
Summary
You can apply the Data Pump Bundle Patch online, node by node. Of course, you don't have to. But you could. Only datapatch is the crucial part. We put in some safeguards to prevent issues, so you can give it a try.
In addition, we are working hard on the automatic pause-and-restart mechanism. Once this is fully implemented and tested, we can add the Data Pump patches to the regular Release Updates (RUs) as well. For now, they have to stay separate bundle patches.
Further Links and Information
- Blog Post: Data Pump Bundle Patches – You may need to download and apply again
- MOS Note: 2819284.1 – Data Pump Recommended Proactive Patches For 19.10 and Above
- MOS Note: 2912069.1 – ALERT: Data Pump Bundle Patches (DPBPs) From 19.14.0 and on Are Missing Files
–Mike
Hi Mike,
Somehow it is tempting to apply the patch, but OCI does in-place patching of the RU when you use the supplied patching method (via the OCI console). And this out-of-the-box patching does not install the DPBP (!).
So I think that the quarterly RU should just include this DPBP. It is not really a solution to start patching all cloud installations manually – and to continue doing this until eternity.
Hi Paul,
we are working on having the RUs contain the DPBPs as well with the highest priority – but some functionality needs to be backported to 19c, which we are working on right now.
Cheers,
Mike
Hi Mike,
If it is applicable online on all homes in a RAC, wouldn't it be possible to apply it in a standby-first manner as well? Since no binaries are involved, a switchover is unlikely to be affected. Running datapatch after all standby homes are patched will propagate the changes to all standbys.
By design, even on an ADG you can't run a Data Pump job. So you only have to worry about the primary DB not running a Data Pump job.
Cheers,
Asanga
Yes, that should be possible, too – but I see no major benefit in applying it on the standby first.
Do you?
Thanks
Mike
Hi Mike,
We only have RAC databases and I will definitely test an OOP Patching.
But maybe only the dbhome. I'm a little bit skeptical that everything works as expected in a Grid env. (see Doc ID 2419319.1), too. I've had some bad experiences with patching scripts.
Especially, you have to pay attention to the deinstall routine of old Oracle Homes.
But OK, I clearly see the great advantage of OOP if you have to roll back all one-offs and re-apply them after each RU. With in-place patching, this is an unjustifiable effort and practically impossible. For me, this rules out applying any one-offs.
Mike, could you take away the fear from us?
Cheers Peter
Thanks, Peter – and since we already exchange emails, I will get back to you once I receive more information regarding GI patching from my mates in the US.
Cheers
Mike
Hi Mike, You state that this patch (e.g. 34734035 for 19.17) can be installed ONLINE. But the README.txt states:
9. Ensure that you shut down all the services running from the Oracle home.
Note:
– For a Non-RAC environment, shut down all the services running from the Oracle home.
– For a RAC environment, shut down all the services (database, ASM, listeners, nodeapps, and CRS daemons) running from the Oracle home of the node you want to patch. After you patch this node, start the services on this node. Repeat this process for each of the other nodes of the Oracle RAC system. OPatch is used on only one node at a time.
– Please use -local option to apply the patch to the particular node. e.g., opatch apply -local
So, do we need to shut down all Oracle Services (e.g. Databases / Listener) to install this patch?
Thanks
Peter Jansen
Hi Peter,
thanks for this hint – and the README is not correct here.
I will follow up with the README owners.
Cheers,
Mike
Hi Mike,
I re-deployed the DPBP just over a week ago. Had no trouble on over 30 single-instance multi-tenant databases, all non-production.
My experience was a bit different on our production single-tenant RAC databases. For each CDB:
1. Using opatch, rolled back patch 34620690 from the Oracle Home on each compute node.
2. Ran datapatch once to roll back the database portion of the patch.
3. Ran opatch on each node to deploy the patch.
4. Ran datapatch once to deploy database portion of the patch.
There were 2-3 databases that raised the “ORA-20000: Retry dpload.sql script later” error due to inactive SQL Developer sessions holding a lock on DBMS_METADATA. Killed the offending sessions and re-ran datapatch.
Looking at the timeline of rollbacks/applies, using your check_patches_19.sql script, I am concerned that there are rollback entries that are newer than any of the apply entries.
Without using the registry, is there a way to confirm that the datapatch has been deployed?
Also had one database where DBMS_SCHEDULER jobs failed to run due to the METADATA LINK package body DBMS_SNAP_INTERNAL having gone invalid shortly after re-deploying the patch.
BTW, not making any progress with the 2 SRs that I opened for these issues.
Regards,
Doug
Hi Doug,
at first, thanks for the advice – I have not seen SQL Developer jobs zombieing around. But it is very good advice.
Other than the registry, you could check the logs. But this is a painful exercise as I can tell you. The registry should have it right in all cases.
Cheers
Mike
Hi Mike,
I wonder if there is any good technical reason why Edition-Based Redefinition [1] could NOT solve exactly these problems of recompiling packages while they are used by other instances?
As there are no binaries involved at all, the special case of dependencies on an EXTERNAL LANGUAGE C LIBRARY doesn't count.
And if there are good reasons, what's easier than telling the EBR Product Manager about his product's missing features?
Besides these little issues,
thank you for sharing such honest details. They really help to do the best possible implementation without intensive research on internals.
Martin
[1] https://www.oracle.com/a/tech/docs/ebr-technical-deep-dive-overview.pdf
Hi Martin,
HAPPY NEW YEAR.
And in theory, EBR could solve all this – if it worked with SYS-owned objects. But it doesn't. This was planned for 23c, but it is highly complex and requires a lot of code changes (A LOT, trust me). So while it looks very promising, it will require a massive amount of development. We are working on it, but it won't come in 23c.
Cheers
Mike
Hi Mike,
We got an interesting issue with the Data Pump patch for 19.17. When we run several expdp/impdp jobs in parallel, one of the jobs fails if two jobs start at the same second.
With the v19.17 Data Pump patch:
SQL> select name from dba_queues where name like 'KU%';
KUPC$C_1_20221222172631_0
KUPC$C_1_20221222172632_0
KUPC$C_1_20221222172633_0
And we get these error messages:
ORA-31626: job does not exist
ORA-31637: cannot create job for user
ORA-06512: at "SYS.KUPV$FT", line 1142
ORA-06512: at "SYS.KUPV$FT", line 1763
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT_INT", line 1098
ORA-06512: at "SYS.KUPV$FT", line 1653
ORA-06512: at "SYS.KUPV$FT", line 1103
Without the Data Pump patch (or with v19.15 of the patch):
SQL> select name from dba_queues where name like 'KU%';
KUPC$C_1_20221222231025_0
KUPC$C_1_20221222231026_0
KUPC$C_1_20221222231026_1
KUPC$C_1_20221222231026_2
KUPC$C_1_20221222231026_3
As you can see, with the v19.17 patch it gives every job the same name (within the same second). But without the patch (or with v19.15 of the patch), it manages to give every job a unique name.
We have tested on several instances, and all have the same problem. We are running non-CDBs.
We have also downloaded the patch after November 21, 2022, so it should be the newest version of it.
Have you seen this before?
Regards
Kristoffer
Hi Kristoffer,
no, I haven't seen this before. Can you please log an SR and send me the SR number?
Please upload the necessary logs etc so Support can have a look at first.
You can send the SR number either via the blog or via email (mike.dietrich …. at ……. oracle.com).
Thanks,
Mike
Kristoffer,
the support engineer created a bug on our request and assigned it to a developer for further checks.
Bug 35017871 – AQ OBJECTS INVALID AFTER 19.17 DATAPUMP PATCH EXPDP FAILING ORA-31626 ORA-31637
Cheers,
Mike
Hi Mike, sorry to be the voice of doom for Windows again, but this does not work on Windows: the patch number of the Windows BP 19.17 (Windows Database Bundle Patch : 19.17.0.0.221018 (34468114)) is not the same as that of the Linux RU – but unfortunately, that is the one opatch is looking for while trying to install the DP patch. The result is 1) the BP 19.17 is allegedly missing (which is not true, it just looks for the wrong number) and 2) the existing Windows BP is regarded as conflicting with the DP patch:
1) Interim patch 34734035 requires prerequisite patch(es) [34419443] which are not present in the Oracle Home.
2) Conflicts/Supersets for each patch are: Patch : 34734035 Conflict with 34468114
opatch lspatches
34411846;OJVM RELEASE UPDATE: 19.17.0.0.221018 (34411846)
34468114;Windows Database Bundle Patch : 19.17.0.0.221018 (34468114)
It is not the first time this has happened… *sigh* … I think the check needs to be adjusted to look for the correct patch number of the Windows BP.
Maybe next time.
Cheers and still a happy new year!
Susanne
Hi Susanne,
thanks a lot – and Happy New Year to you as well!
Did you have an SR for this?
Just checking – I will certainly forward it to the dev team right away since this is BAD.
Thanks, and SORRY for the inconvenience.
Mike
Hi Susanne,
the owners are informed – we hope that they’ll fix it quickly.
Thanks for spotting this!
Cheers,
Mike
Hi Susanne,
I’ve had a developer check this and he replied back:
Are we sure they downloaded the right patch? It's easy to mix up the generic "every platform but Windows" patch with the Windows-specific patch. I see in inventory.xml for the 19.17 Windows-specific DPBP:
BUG 34468114 – WINDOWS DATABASE BUNDLE PATCH 19.17.0.0.221018
So the assumption is that you accidentally picked the wrong patch – the one for Unix instead of the one for Windows.
Can you please check this?
Thanks,
Mike