Uh … my blog counter tells me I have 57 unanswered comments in the queue at the moment. Forgive me, but July and August so far have been really busy months with some travel activity as well. And another proof that I'm not used to traveling anymore: last week I forgot the power adapter for my Mac at home and only realized it when I pulled out my Mac to do emails. Anyhow, this is a short Monday vacation-time blog post about two recent issues you may need one-off patches for, since otherwise you may get trapped by them as well. Both happen only with more recent RUs.

Data Guard Broker configuration issue when you patch to 19.16.0
This issue was found by a Swiss customer who patched from 19.15.0 to 19.16.0 a few days after the RU got released. All worked fine, except that as soon as they switched the standby environment to 19.16.0 (before invoking datapatch), they received an ORA-16705: internal error in Data Guard broker. Of course, they found this on their test environment first. But disabling and enabling the broker configuration did not help. And recreating the configuration was not an option since they operate over 200 Data Guard environments.
Luckily the MAA and Oracle Support reacted quickly – and a one-off patch is available. As far as I see, the fix may be included in a future RU. Meanwhile, please see:
- Bug 34446152 – Broker: 19.16 onward broker shows "ORA-16705: internal error in Data guard broker"
- MOS Note: 2887535.1 – ORA-16705: Internal Error In Data Guard Broker After Applying Release Update 19.16 Patch
So if you have Data Guard environments, you should apply this one-off on top of your 19.16.0 installation before attempting the patch run to 19.16.0.
Addition:
The fix is confirmed to be included in the 19.18.0 RU in January 2023.
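Until you are on an RU that includes the fix, a minimal sketch of how this could look with out-of-place patching is shown below. The home path and staging directory are placeholders only – please follow the patch readme for the exact steps:

# Assumption: the new 19.16.0 home is installed out-of-place and the
# one-off for bug 34446152 has been downloaded and unzipped to a staging area.
export ORACLE_HOME=/u01/app/oracle/product/19.16.0/dbhome_1   # placeholder path
export PATH=$ORACLE_HOME/OPatch:$PATH

cd /stage/34446152       # placeholder staging directory
opatch apply             # apply the one-off on top of the 19.16.0 home
opatch lspatches         # verify it is listed before you patch your databases to 19.16.0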
Datapatch errors out with prereq: archived patch directory before 19.15.0
This is a really obscure issue. And it took me a bit to understand it – which required the help of our datapatch team (Thanks a lot, Santosh!). Again, in this case a VERY large customer hosting a lot of Oracle database environments made me aware of this issue (Thanks, Amit!!).
Before I explain the issue with my own words, let me point you to:
- MOS Note: 2235541.1 – datapatch -verbose Fails with Error: "Patch xxxxxx: Archived Patch Directory Is Empty"
- Bug 33557344 – 19.x: datapatch fails in out of place patching with prereq: archived patch directory
This problem happens when you patch to 19.12.0, 19.13.0 or 19.14.0, and then later go to 19.15.0 or newer with – and this is important – one-off patches in your source home. So those of you who run a plain 19.14.0 right now, for example, won't be affected. But if you have one-off patches on top, then datapatch will first attempt to roll them back when you jump to a higher home, for instance 19.16.0.
Even though the patch for Bug 33557344 is already included in 19.15.0 and newer, the issue is silently sleeping in your home. So to clarify again: when you upgrade from 12.2.0.1 to 19.15.0, you will never see this issue. And if you have no one-off patches applied, you won't see this issue either.
But since I know that many of you readers are patching experts, I assume that you first patched to 19.12.0, 19.13.0 or 19.14.0, and then needed to apply one-off patches on top as well.
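If you are not sure whether your current home falls into this category, a quick check could look like the sketch below – it only lists what is installed and changes nothing:

# List the interim (one-off) patches installed in the current home ...
$ORACLE_HOME/OPatch/opatch lspatches

# ... and the archived SQL patch directories that datapatch relies on during rollback.
ls -l $ORACLE_HOME/sqlpatch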
Now, let me summarize the various scenarios as far as I understood them – with potential solutions of course where applicable:
- If you are on 19.12.0, 19.13.0 or 19.14.0 with one-off patches, you may see this issue when you patch to a newer RU.
- Solution 1:
Before you patch to 19.15.0 or newer, you roll back the one-off patches in your current home manually with: datapatch -rollback all -force
- Solution 2:
Copy the contents of your source $ORACLE_HOME/sqlpatch to your target $ORACLE_HOME/sqlpatch (see the sketch after this list)
- Solution 3:
If you use cloning to create your new home and then apply, let's say, 19.16.0 to it, you won't see this issue since you cloned the $ORACLE_HOME/sqlpatch as well.
- If you have no one-off patches in your 19.12.0, 19.13.0 or 19.14.0 homes, you won't be affected.
- If you are on 19.11.0 or lower at the moment, even with one-off patches, and you patch to 19.15.0 or newer, you won't see this issue since the fix is already included from 19.15.0 on.
- If you plan to patch to 19.12.0, 19.13.0 or 19.14.0, then please apply the one-off for Bug 33557344 BEFORE you patch or upgrade to 19.12.0, 19.13.0 or 19.14.0. If you do so, you won't see this issue and you don't have to juggle with manual rollbacks or directory tree copies.
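For Solution 2, a minimal sketch could look like this – OLD_HOME and NEW_HOME are placeholders for your actual 19.12.0/19.13.0/19.14.0 source home and your 19.15.0-or-newer target home:

# Placeholder paths – adjust them to your actual homes
OLD_HOME=/u01/app/oracle/product/19.14.0/dbhome_1
NEW_HOME=/u01/app/oracle/product/19.16.0/dbhome_1

# Copy the archived patch directories so datapatch in the new home can
# roll back the one-offs that were applied in the old home
cp -Rp $OLD_HOME/sqlpatch/* $NEW_HOME/sqlpatch/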
–Mike
Hi Mike,
thanks for all your work updating us on such bugs. I noticed the DG broker Bug 34446152 is not listed in the MetaLink note – Oracle Database 19c Important Recommended One-off Patches (Doc ID 555.1).
This is the first place I go when applying patches.
Hi Petar,
I asked to have it included, especially since I know several customers who got trapped by this. But unfortunately I have no influence on this note.
Cheers
Mike
Hi Mike,
I have 19.15.0 with one-off patches and tried to roll back the patches, but it failed. It seems that the 19.15.0 RU is also being rolled back, and that is where the error occurred.
Would it be sufficient to just roll back the one-off patches one by one and leave the 19.15.0 RU in place in order to avoid the problems when going to 19.16.0?
best regards / Curt
oracle@lxoratscb01@seccb19 $ opatch lspatches
33970238;MERGE ON DATABASE RU 19.15.0.0.0 OF 26749785 29213893
26724511;AUTO OPTIMIZER STATS RUN MULTIPLE JOBS DURING MAINTENANCE WINDOWS
29899384;GETTING INTERNAL ERROR ON NATIVE MODE WHEN CREATING PROCEDURE USING JSON . ERROR MSG ->PLS-00801 INTERNAL ERROR [*** ASSERT AT FILE PDYN.C, LINE 7921; UNSUPPORTED MODE 10; PROC1_JSON__JSON000U__P__77004[6, 1]]
30978304;ORA-20000 DURING IMPDP WITH STATS AND THE UNIQUE INDEX FOR THE PK IS NOT CREATED
33121934;IAD E23POD LIBRARY CACHE LOCK / LOAD LOCK / MUTEX X DURING CONNECTION STORM
32455516;ORA-600 [KTSL_ALLOCATE_DISP KCBZ_OBJDCHK] ORA-600 [KCBZIB_6] SECUREFILE DOUBLE ALLOCATION
33806152;Database Release Update : 19.15.0.0.220419 (33806152)
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
OPatch succeeded.
oracle@lxoratscb01@seccb19 $ datapatch -rollback all -force
SQL Patching tool version 19.15.0.0.0 Production on Wed Aug 17 09:43:03 2022
Copyright (c) 2012, 2022, Oracle. All rights reserved.
Log file for this invocation: /opt/oracle/base/cfgtoollogs/sqlpatch/sqlpatch_32179_2022_08_17_09_43_03/sqlpatch_invocation.log
Connecting to database…OK
Gathering database info…done
Note: Datapatch will only apply or rollback SQL fixes for PDBs
that are in an open state, no patches will be applied to closed PDBs.
Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
(Doc ID 1585822.1)
Bootstrapping registry and package to current versions…done
Determining current state…done
Current state of interim SQL patches:
Interim patch 30978304 (ORA-20000 DURING IMPDP WITH STATS AND THE UNIQUE INDEX FOR THE PK IS NOT CREATED):
Binary registry: Unknown as -force or -noqi specified
PDB CDB$ROOT: Applied successfully on 03-JUN-22 09.31.57.509100 AM
PDB PDB$SEED: Applied successfully on 03-JUN-22 09.31.59.525698 AM
PDB SEPCB19: Applied successfully on 03-JUN-22 09.32.01.569577 AM
Interim patch 33588396 (MERGE ON DATABASE RU 19.14.0.0.0 OF 33422777):
Binary registry: Unknown as -force or -noqi specified
PDB CDB$ROOT: Rolled back successfully on 03-JUN-22 09.31.55.785252 AM
PDB PDB$SEED: Rolled back successfully on 03-JUN-22 09.31.57.786085 AM
PDB SEPCB19: Rolled back successfully on 03-JUN-22 09.31.59.809563 AM
Interim patch 33661078 (MERGE ON DATABASE RU 19.14.0.0.0 OF 26749785 29213893):
Binary registry: Unknown as -force or -noqi specified
PDB CDB$ROOT: Rolled back successfully on 03-JUN-22 09.31.55.788937 AM
PDB PDB$SEED: Rolled back successfully on 03-JUN-22 09.31.57.789397 AM
PDB SEPCB19: Rolled back successfully on 03-JUN-22 09.31.59.812946 AM
Interim patch 33970238 (MERGE ON DATABASE RU 19.15.0.0.0 OF 26749785 29213893):
Binary registry: Unknown as -force or -noqi specified
PDB CDB$ROOT: Applied successfully on 03-JUN-22 09.31.57.723806 AM
PDB PDB$SEED: Applied successfully on 03-JUN-22 09.31.59.740464 AM
PDB SEPCB19: Applied successfully on 03-JUN-22 09.32.01.788790 AM
Current state of release update SQL patches:
Binary registry:
Unknown as -force or -noqi specified
PDB CDB$ROOT:
Applied 19.15.0.0.0 Release_Update 220331125408 successfully on 03-JUN-22 09.31.56.386470 AM
PDB PDB$SEED:
Applied 19.15.0.0.0 Release_Update 220331125408 successfully on 03-JUN-22 09.31.58.393719 AM
PDB SEPCB19:
Applied 19.15.0.0.0 Release_Update 220331125408 successfully on 03-JUN-22 09.32.00.410398 AM
Adding patches to installation queue and performing prereq checks…done
Installation queue:
For the following PDBs: CDB$ROOT PDB$SEED SEPCB19
The following interim patches will be rolled back:
30978304 (ORA-20000 DURING IMPDP WITH STATS AND THE UNIQUE INDEX FOR THE PK IS NOT CREATED)
33970238 (MERGE ON DATABASE RU 19.15.0.0.0 OF 26749785 29213893)
Patch 33806152 (Database Release Update : 19.15.0.0.220419 (33806152)):
Rollback from 19.15.0.0.0 Release_Update 220331125408 to 19.1.0.0.0 Feature Release
No interim patches need to be applied
Installing patches…
Patch installation complete. Total patches installed: 9
Validating logfiles…done
Patch 30978304 rollback (pdb CDB$ROOT): SUCCESS
logfile: /opt/oracle/base/cfgtoollogs/sqlpatch/30978304/24722745/30978304_rollback_SECCB19B_CDBROOT_2022Aug17_09_43_11.log (no errors)
Patch 33970238 rollback (pdb CDB$ROOT): SUCCESS
logfile: /opt/oracle/base/cfgtoollogs/sqlpatch/33970238/24723513/33970238_rollback_SECCB19B_CDBROOT_2022Aug17_09_48_59.log (no errors)
Patch 33806152 rollback (pdb CDB$ROOT): WITH ERRORS
logfile: /opt/oracle/base/cfgtoollogs/sqlpatch/33806152/24713297/33806152_rollback_SECCB19B_CDBROOT_2022Aug17_09_49_00.log (errors)
-> Error at line 14390: script rdbms/admin/catbctab.sql
– ORA-00904: “K”.”CERTIFICATE_ID”: invalid identifier
Patch 30978304 rollback (pdb PDB$SEED): SUCCESS
logfile: /opt/oracle/base/cfgtoollogs/sqlpatch/30978304/24722745/30978304_rollback_SECCB19B_PDBSEED_2022Aug17_09_51_40.log (no errors)
Patch 33970238 rollback (pdb PDB$SEED): SUCCESS
logfile: /opt/oracle/base/cfgtoollogs/sqlpatch/33970238/24723513/33970238_rollback_SECCB19B_PDBSEED_2022Aug17_10_07_37.log (no errors)
Patch 33806152 rollback (pdb PDB$SEED): SUCCESS
logfile: /opt/oracle/base/cfgtoollogs/sqlpatch/33806152/24713297/33806152_rollback_SECCB19B_PDBSEED_2022Aug17_10_07_41.log (no errors)
Patch 30978304 rollback (pdb SEPCB19): SUCCESS
logfile: /opt/oracle/base/cfgtoollogs/sqlpatch/30978304/24722745/30978304_rollback_SECCB19B_SEPCB19_2022Aug17_09_51_40.log (no errors)
Patch 33970238 rollback (pdb SEPCB19): SUCCESS
logfile: /opt/oracle/base/cfgtoollogs/sqlpatch/33970238/24723513/33970238_rollback_SECCB19B_SEPCB19_2022Aug17_10_06_27.log (no errors)
Patch 33806152 rollback (pdb SEPCB19): SUCCESS
logfile: /opt/oracle/base/cfgtoollogs/sqlpatch/33806152/24713297/33806152_rollback_SECCB19B_SEPCB19_2022Aug17_10_06_34.log (no errors)
Automatic recompilation incomplete; run utlrp.sql to revalidate.
PDBs: PDB$SEED SEPCB19
Please refer to MOS Note 1609718.1 and/or the invocation log
/opt/oracle/base/cfgtoollogs/sqlpatch/sqlpatch_32179_2022_08_17_09_43_03/sqlpatch_invocation.log
for information on how to resolve the above errors.
Hi Curt,
I am very late with my reply. Did you encounter the same issue with newer RUs, too?
Cheers
Mike
Hey Mike,
Thanks for the great blog.
And I'm also facing this same internal error after applying the 19.16 patches on the 19.15 database. My concern is that I need to take downtime to apply this bug patch, and my client is very concerned about it.
So is it OK if I ignore this error for a while and wait until the next quarterly Oracle patch? Does it impact anything on the currently running database?
Hi Jubin,
in the first case, you could also recreate the DG configuration (I think). But for the 2nd issue, I don’t have a useful w/a.
Cheers
Mike
No, recreating the DG configuration won't help. But patch 34446152 has an online installation option – read the readme.
Thanks for the hint, Alex 🙂
Mike
Hi Mike!
We have Oracle 19.14 installed.
We are installing the p33557344_1914000DBRU patch.
Then we install the database release update: 19.15.0.0.220419 (33806152). Whoa!
Fixes [ 33557344 ] will be reverted.
OPatch continues with these fixes: 33806152.
Why install 33557344_1914000DBRU if it will be immediately rolled back?
Hi Alex,
actually there will be a merge, not a complete removal.
Cheers,
Mike
Hi Mike,
Just a friendly note – we're hitting a potential bug on the Jul 2022 RU where databases on DNFS may have similar issues. Reported to Oracle Support, and it seems fairly new. Some databases crashed if the timeout happens on the mount for restore points.
Bug 34560213 – DNFS ENVIRONMENT REPORT ORA-17500: ODM ERR:SKGNFS OPERATION TIMEDOUT DURING BACKUP AFTER 19.16 PATCHING
Jennifer
Thank you Jennifer!!
The above one you mentioned has been closed as a duplicate of:
BUG 34366627 – DNFS IO HANG DURING STRESS TEST
The inclusion got rejected at the last minute for 19.17.0 as far as I can see, but it has been approved and will be included in 19.18.0. One-off patches seem to be available as well.
Thanks for the heads-up.
Cheers,
Mike