The headline of this blog post may sound a bit cryptic: binary patching is slow because of the inventory. But this actually seems to be one of the main causes when your binary patching gets slower and slower with every patch you apply.
The more you patch, the worse it gets
Well, this is a tough headline. But you may have read my blog posts about my quarterly patching experience. It actually started with the July 2021 RU in my 19c environment when I wrote:
At least in my environment, both checks with the new opatch 25 take an awfully long time. I think I waited 3 minutes for each check to return to the command prompt. And since opatch does the checks again when I call the apply in the next stage, the wait time happens again while it tells me “Verifying environment and performing prerequisite checks…“.
I didn’t investigate much further, as you can see in my blog post from July 2021. But I saw the same behavior in the following patch cycles, October 2021 and January 2022. And of course, in April 2022 it was now three times 5 minutes and more of waiting time.
Now you may say: who cares about five minutes more or less? But keep in mind that I patch only the database home, with no GI patching involved. And furthermore, I have just a toy environment which I patch quarterly.
From customer feedback, such as Peter's, nailed down in:
Bug 33425636 – OPATCHAUTO VER 27 BINARIES APPLY TAKES LONG TIME 19.12 Sev 1 SR
it clearly looked as if the more patches you've applied, the longer it takes.
Which is bad, since we ask you to patch quarterly. It means you suffer more from this effect the more often you patch your database.
A larger development test
Since we now had enough information from various customers, we could discuss this based on a lot of data in development. My mates started a large test with a multi-node RAC environment and a CDB with many PDBs.
By then, customers had reported several effects:
- The more patches you have, the longer the binary patching takes
- The more cluster nodes you have, the longer it takes
- The more databases you run on each cluster node, the longer it takes
- The more PDBs you have, the longer it takes
And even though some of the above statements may seem obvious, the effect in some cases was “interesting”.
Since this blog post is only about binary patching, I will defer the other questions (and hopefully answers) to additional blog posts in this series.
Back to the development test runs: it became clear that we couldn't reproduce the drastic effect for binary patching. Instead, datapatch seemed to have the bigger effect. So while customers such as Peter or Sven had terrible and ever longer patching “experiences”, we internally didn't see this, at least not as severely as in the customer environments.
I’m not alone
Luckily, there are very smart people out there. And some read my blog.
I quoted Balaji Govindu's blog post on LinkedIn before (you need a LinkedIn account to see it). But let me describe what he did, since he saw the same messages in his patch logs that I saw: he edited the file in the inventory that keeps track of the applied patches on the system, $ORACLE_HOME/inventory/ContentsXML/oui-patch.xml. And by doing so, he could drastically reduce the binary patching time.
It becomes obvious when you tail -f the logs during a patch run. opatch is checking for dependencies. And unfortunately, it looks like every additional patch makes the dependency check a factor X longer than before. So it is not just “one single check more” but a multiplier. The more you patch, the worse it gets.
In addition, a colleague from our Support team (thanks, Akash!) made me aware of this bug, filed several months ago, describing the impact of this XML file and the relief when you edit it:
BUG 33606688 – OPATCH APPLY AND ROLLBACK GETTING SLOWER AFTER ONEOFF OR RU PATCH NUMBERS INCREASED
So when you open an SR regarding the patching slowness, you may want to refer to this (unpublished) bug as well.
A quick comparison
So before I show you the workaround, I will test this in my toy environment, our Hands-On Lab. The setup is a 19.14.0 database home with OJVM installed. The target will be 19.15.0. I will do two runs and compare the timings and some information from the logs.
Run 1 – no changes to oui-patch.xml
When I run the checks for space and conflicts first, I already notice a long runtime.
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./
$ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -ph ./
each take several minutes. To be precise, each of the checks in my tiny little playground environment takes 5 minutes 29 seconds. So you need to multiply the wait time by two: these two simple prechecks run for almost 11 minutes together. Keep in mind at this stage that I patch only my database home. No Grid Infrastructure is involved, which would run the exact same checks, with maybe fewer patches.
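If you want to capture exact numbers for your own environment, a small wrapper around the two prereq calls is enough. A minimal sketch in Python; the opatch path and the patch directory are placeholders you would adjust:

```python
import subprocess
import time

def timed_run(cmd):
    """Run a command and return (elapsed seconds, return code)."""
    start = time.perf_counter()
    result = subprocess.run(cmd, capture_output=True, text=True)
    return time.perf_counter() - start, result.returncode

if __name__ == "__main__":
    import os
    # Placeholders: adjust to your ORACLE_HOME and run from the unzipped patch directory.
    opatch = "/u01/app/oracle/product/19/OPatch/opatch"
    if os.path.exists(opatch):
        for check in ("CheckConflictAgainstOHWithDetail", "CheckSystemSpace"):
            elapsed, rc = timed_run([opatch, "prereq", check, "-ph", "."])
            print(f"{check}: {elapsed:.1f}s (rc={rc})")
```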
In the log file which I tailed in parallel, I see the following messages – and then for several minutes … nothing.
[May 9, 2022 6:23:41 PM] [INFO] [OPSR-TIME] Loading raw inventory
[May 9, 2022 6:23:42 PM] [INFO] [OPSR-MEMORY] Loaded all components from inventory. Heap memory in use: 25 (MB)
[May 9, 2022 6:23:42 PM] [INFO] [OPSR-MEMORY] Loaded all one offs from inventory. Heap memory in use: 34 (MB)
[May 9, 2022 6:23:42 PM] [INFO] [OPSR-TIME] Raw inventory loaded successfully
[May 9, 2022 6:23:49 PM] [INFO] [OPSR-TIME] Loading cooked inventory
[May 9, 2022 6:23:49 PM] [INFO] [OPSR-MEMORY] : Loading cooked one offs. Heap memory used 848 (MB)
[May 9, 2022 6:23:56 PM] [INFO] [OPSR-MEMORY] : Loaded cooked oneoffs. Heap memory used : 682 (MB)
[May 9, 2022 6:23:56 PM] [INFO] [OPSR-TIME] Cooked inventory loaded successfully
[May 9, 2022 6:23:56 PM] [INFO] CUP_LOG: Found poh CUP 29517242 is a subset of other poh CUP: 33192793
[May 9, 2022 6:23:56 PM] [INFO] CUP_LOG: Found poh CUP 29517242 is a subset of other poh CUP: 33515361
[May 9, 2022 6:23:56 PM] [INFO] CUP_LOG: Found poh CUP 30557433 is a subset of other poh CUP: 30869156
[May 9, 2022 6:23:56 PM] [INFO] CUP_LOG: Found poh CUP 30557433 is a subset of other poh CUP: 31281355
[May 9, 2022 6:23:56 PM] [INFO] CUP_LOG: Found poh CUP 32218454 is a subset of other poh CUP: 32545013
[May 9, 2022 6:23:56 PM] [INFO] CUP_LOG: Found poh CUP 32218454 is a subset of other poh CUP: 32904851
[May 9, 2022 6:23:56 PM] [INFO] CUP_LOG: Found poh CUP 32218454 is a subset of other poh CUP: 33192793
[May 9, 2022 6:23:56 PM] [INFO] CUP_LOG: Found poh CUP 32218454 is a subset of other poh CUP: 33515361
[May 9, 2022 6:23:56 PM] [INFO] CUP_LOG: Found poh CUP 32545013 is a subset of other poh CUP: 32904851
[May 9, 2022 6:23:56 PM] [INFO] CUP_LOG: Found poh CUP 32545013 is a subset of other poh CUP: 33192793
[May 9, 2022 6:23:56 PM] [INFO] CUP_LOG: Found poh CUP 32545013 is a subset of other poh CUP: 33515361
[May 9, 2022 6:23:56 PM] [INFO] CUP_LOG: Found poh CUP 32904851 is a subset of other poh CUP: 33192793
[May 9, 2022 6:23:56 PM] [INFO] CUP_LOG: Found poh CUP 32904851 is a subset of other poh CUP: 33515361
[May 9, 2022 6:23:56 PM] [INFO] CUP_LOG: Found poh CUP 33192793 is a subset of other poh CUP: 33515361
You may recognize the patch IDs of my previously applied patches. I silently assume that opatch is now trying to sort out dependencies and potential conflicts.
Of course, I can do nothing but wait for it to complete. Unfortunately, these checks run again later when I invoke opatch to apply the patch bundle. And now you can do the math for a 4-node cluster: yes, opatch will repeat this exercise over and over again, node by node.
And please don't ask me why this takes so hilariously long. In an internal discussion a colleague said: “Well, you'd think that if the data is in memory, it should be processed within a few seconds.” Hm …
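My assumption, not a confirmed analysis of the opatch internals: the CUP_LOG lines above compare the inventory entries pairwise (“X is a subset of Y”). If every entry is checked against every other one, the number of comparisons grows quadratically, which would explain a multiplier rather than a constant cost per patch:

```python
from math import comb

# If each of n inventory entries is compared with every other one,
# there are n * (n - 1) / 2 pairwise checks - quadratic, not linear.
for n in (4, 8, 16, 32):
    print(f"{n:>2} patch entries -> {comb(n, 2):>3} pairwise checks")
```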
And it happens again
Now when I apply the patch, lo and behold, opatch does the same checks again by itself. So I wonder why I always follow the README and run the checks upfront when opatch just repeats them. Does it expect my disk to fill up a few minutes later? Or does it suspect that I secretly applied several other patches in between?
Anyhow, you know it already: my log file looks the same as before, with another 5.5 minutes of waiting time when this happens:
$ $ORACLE_HOME/OPatch/opatch apply
Oracle Interim Patch Installer version 12.2.0.1.30
Copyright (c) 2022, Oracle Corporation. All rights reserved.

Oracle Home       : /u01/app/oracle/product/19
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/19/oraInst.loc
OPatch version    : 12.2.0.1.30
OUI version       : 12.2.0.7.0
Log file location : /u01/app/oracle/product/19/cfgtoollogs/opatch/opatch2022-05-09_18-29-45PM_1.log

Verifying environment and performing prerequisite checks...
Here I sit and wait again.
And then, finally, opatch applies the patch bundle to my existing home.
So let me summarize this quickly:
- Precheck manually for conflicts: 5 minutes 29 seconds
- Precheck manually for space: 5 minutes 26 seconds
- Precheck done by opatch during “apply”: 5 minutes 29 seconds
- Raw patch apply duration for 19.15.0:
3 minutes 58 seconds including copying files for rollback
That doesn’t look pleasant. Over 16 minutes waiting for checks versus less than 4 minutes for the actual patching activity.
Something seems to be wrong here, especially since I don't see the same behavior in my 21c or my 18c homes, despite using the exact same opatch version in all three homes, and even though my 18c home has seen almost as many patch bundles as my 19c home.
This is not the end of the story
I’d like to apply the OJVM bundle from April 2022 as well. Hence, I need to run the checks again according to the README.
You may guess what happens. Of course, opatch mows through the same inventory and does the exact same checks again. And they take as long as the previous ones, with one correction: since my source home now contains the database RU 19.15.0 already, there is one more patch bundle than before.
I'll save you the time and reveal the result: 5 minutes 52 seconds now for every check. Times three, since I do a conflict check and a space check, and even if I skipped both and left it to opatch, it would do the same check again during the apply. So over 18 minutes for checks now, or almost 6 minutes at least during the apply run.
The actual patching of the OJVM bundle itself took only 52 seconds.
OK, enough about how long it takes and what potentially went wrong or is not optimal. You came here to read about the workaround and the effect it has. So let us start with some editing work. And keep one important thing in mind: this is NOT an officially supported workaround from Oracle. It is just a hint at how you could improve your patching duration. Always keep the original oui-patch.xml file. And be aware that your saved original won't get updated when you patch with an edited copy of it.
In order to emphasize this, the file even contains a header line telling you:
<!-- Copyright (c) 2022 Oracle Corporation. All Rights Reserved. Do not modify the contents of this file by hand. -->
I guess now it is clear that you won’t convince Oracle Support to approve manual edits.
Run 2 – manual edit of oui-patch.xml
Let me wipe out opatch's memory a bit. Why does opatch need to check for a patch which I already removed from my disk months ago?
My $ORACLE_HOME/inventory/ContentsXML/oui-patch.xml has 1001 lines at the beginning of my little experiment.
You will see that the structure is simple. Each patch bundle is enclosed in an <ONEOFF> ... </ONEOFF> block:
<ONEOFF REF_ID="30125133" UNIQ_ID="23151502" TYPE="upgrade" LANGS="en" ROLLBACK="F"
        XML_INV_LOC="oneoffs/30125133/" ACT_INST_VER="12.2.0.7.0"
        INSTALL_TIME="2019.Oct.16 19:27:15 CEST">
   <DESC>Database Release Update : 19.5.0.0.191015 (30125133)</DESC>
   <REF_LIST>
      ...
   </REF_LIST>
   <BUG_LIST>
      ...
   </BUG_LIST>
   <FILE_LIST/>
   <PROPERTY_LIST/>
</ONEOFF>
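Before editing anything, it helps to see what is actually recorded. A small read-only sketch (so it is safe to run) that lists every <ONEOFF> entry with its patch ID, install time, and description; the path is just an example for my environment:

```python
import xml.etree.ElementTree as ET

def list_oneoffs(path):
    """Return (REF_ID, description, install_time) for each ONEOFF entry."""
    root = ET.parse(path).getroot()  # the root element is ONEOFF_LIST
    entries = []
    for oneoff in root.findall("ONEOFF"):
        desc = oneoff.findtext("DESC", default="")
        entries.append((oneoff.get("REF_ID"), desc, oneoff.get("INSTALL_TIME")))
    return entries

if __name__ == "__main__":
    import os
    # Example path - adjust to your ORACLE_HOME.
    path = "/u01/app/oracle/product/19/inventory/ContentsXML/oui-patch.xml"
    if os.path.exists(path):
        for ref_id, desc, installed in list_oneoffs(path):
            print(f"{ref_id}  {installed}  {desc}")
```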
I will now carefully remove all blocks except the block(s) for the 19.14.0 patches.
A very important annotation, since I received a comment on the blog:
When you have one-off and/or merge patches for this specific RU in your XML file, please leave them in and don't wipe them out. Only remove entries which are older. So I consider “n-1” to be, for instance, 19.14.0 with the OJVM 19.14.0 and with the one-off patches on top of it.
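If you prefer not to edit the file in an editor, the pruning can be scripted, which also enforces the backup. This is a sketch of the same unsupported workaround, not an official tool: the keep list and the path are examples from my environment, and note that ElementTree drops the comment header when rewriting the file, one more reason to keep the original:

```python
import shutil
import time
import xml.etree.ElementTree as ET

def prune_oneoffs(src, keep_ids):
    """Drop every <ONEOFF> whose REF_ID is not in keep_ids; return (tree, removed count)."""
    tree = ET.parse(src)
    root = tree.getroot()  # <ONEOFF_LIST>
    removed = 0
    for oneoff in list(root.findall("ONEOFF")):
        if oneoff.get("REF_ID") not in keep_ids:
            root.remove(oneoff)
            removed += 1
    return tree, removed

if __name__ == "__main__":
    import os
    # Example path and keep list (my 19.14.0 RU, OJVM, and JDK bundle) - adjust to your home.
    path = "/u01/app/oracle/product/19/inventory/ContentsXML/oui-patch.xml"
    keep = {"33515361", "33561310", "33197296"}
    if os.path.exists(path):
        backup = f"{path}.{time.strftime('%Y%m%d_%H%M%S')}.bak"
        shutil.copy2(path, backup)  # always keep the original file!
        tree, removed = prune_oneoffs(path, keep)
        tree.write(path, encoding="UTF-8", xml_declaration=True)
        print(f"Removed {removed} entries, backup in {backup}")
```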
As a result, my file contains only 136 lines and just the 19.14.0 RU and the 19.14.0 OJVM patch bundles, plus the JDK bundle patch:
<?xml version="1.0" encoding="UTF-8"?>
<!-- Copyright (c) 2022 Oracle Corporation. All Rights Reserved. Do not modify the contents of this file by hand. -->
<ONEOFF_LIST>
   <ONEOFF REF_ID="33197296" UNIQ_ID="24445578" TYPE="unknown" LANGS="en" ROLLBACK="F"
           XML_INV_LOC="oneoffs/33197296/" ACT_INST_VER="12.2.0.7.0"
           INSTALL_TIME="2021.Dec.16 09:42:01 CET">
      <DESC>JDK BUNDLE PATCH 19.0.0.0.211019</DESC>
      <REF_LIST>
         <REF NAME="oracle.jdk" VER="1.8.0.201.0"/>
         <REF NAME="oracle.jdk" VER="1.8.0.311.0"/>
      </REF_LIST>
      <BUG_LIST>
         <BUG>33197296</BUG>
      </BUG_LIST>
      <FILE_LIST/>
      <PROPERTY_LIST/>
   </ONEOFF>
   <ONEOFF REF_ID="33515361" UNIQ_ID="24589353" TYPE="upgrade" LANGS="en" ROLLBACK="F"
           XML_INV_LOC="oneoffs/33515361/" ACT_INST_VER="12.2.0.7.0"
           INSTALL_TIME="2022.Jan.19 21:37:55 CET">
      <DESC>Database Release Update : 19.14.0.0.220118 (33515361)</DESC>
      <REF_LIST>
         <REF NAME="oracle.help.ohj" VER="11.1.1.7.0" PATCH_LEVEL="3"/>
         <REF NAME="oracle.perlint" VER="5.28.1.0.0" PATCH_LEVEL="3"/>
         <REF NAME="oracle.rdbms.locator" VER="19.0.0.0.0" PATCH_LEVEL="3"/>
         <REF NAME="oracle.perlint.expat" VER="2.0.1.0.4" PATCH_LEVEL="3"/>
         <REF NAME="oracle.rdbms.rsf" VER="19.0.0.0.0" PATCH_LEVEL="3"/>
         ... here I cut several lines to shorten it only for display - these lines were left in the file I used ...
         <REF NAME="oracle.precomp.lang" VER="19.0.0.0.0" PATCH_LEVEL="3"/>
         <REF NAME="oracle.jdk" VER="1.8.0.201.0" PATCH_LEVEL="3"/>
         <REF NAME="oracle.jdk" VER="1.8.0.311.0" PATCH_LEVEL="3"/>
      </REF_LIST>
      <BUG_LIST>
         <BUG>7391838</BUG><BUG>8460502</BUG><BUG>8476681</BUG><BUG>14570574</BUG><BUG>14735102</BUG><BUG>15931756</BUG>
         <BUG>15959416</BUG><BUG>16662822</BUG><BUG>16664572</BUG><BUG>16750494</BUG><BUG>17275499</BUG><BUG>17395507</BUG>
         <BUG>17428816</BUG><BUG>17468475</BUG><BUG>17777718</BUG><BUG>18534283</BUG><BUG>18697534</BUG><BUG>19080742</BUG>
         <BUG>19138896</BUG><BUG>19697993</BUG><BUG>20007421</BUG><BUG>20083476</BUG><BUG>20313356</BUG><BUG>20319830</BUG>
         <BUG>20479545</BUG><BUG>20867658</BUG><BUG>20922160</BUG><BUG>21119541</BUG><BUG>21232786</BUG><BUG>21245711</BUG>
         <BUG>21374587</BUG><BUG>21528318</BUG><BUG>21629064</BUG>
         ... here I cut over 100 lines of bug numbers only for display - those were left in the file I used ...
         <BUG>33482590</BUG><BUG>33489699</BUG><BUG>33490031</BUG><BUG>33500486</BUG><BUG>33507610</BUG><BUG>33507953</BUG>
         <BUG>33514179</BUG><BUG>33515115</BUG><BUG>33540746</BUG><BUG>33558058</BUG><BUG>33558251</BUG><BUG>33558391</BUG>
         <BUG>33559316</BUG><BUG>33563167</BUG><BUG>33599734</BUG><BUG>33613512</BUG><BUG>33632051</BUG><BUG>33635094</BUG>
         <BUG>33651003</BUG><BUG>33656104</BUG><BUG>33656608</BUG><BUG>33674035</BUG><BUG>33661960</BUG><BUG>30166257</BUG>
         <BUG>33384092</BUG><BUG>33618962</BUG>
      </BUG_LIST>
      <FILE_LIST/>
      <PROPERTY_LIST/>
      <SUPERSEDES>
         <REF_ID REF_ID="33192793" UNIQ_ID="24462514" LANGS="en"/>
      </SUPERSEDES>
   </ONEOFF>
   <ONEOFF REF_ID="33561310" UNIQ_ID="24538862" TYPE="unknown" LANGS="en" ROLLBACK="F"
           XML_INV_LOC="oneoffs/33561310/" ACT_INST_VER="12.2.0.7.0"
           INSTALL_TIME="2022.Jan.19 22:12:28 CET">
      <DESC>OJVM RELEASE UPDATE: 19.14.0.0.220118 (33561310)</DESC>
      <REF_LIST>
         <REF NAME="oracle.javavm.server" VER="19.0.0.0.0"/>
         <REF NAME="oracle.javavm.server.core" VER="19.0.0.0.0"/>
         <REF NAME="oracle.rdbms.dbscripts" VER="19.0.0.0.0"/>
         <REF NAME="oracle.rdbms" VER="19.0.0.0.0"/>
         <REF NAME="oracle.javavm.client" VER="19.0.0.0.0"/>
      </REF_LIST>
      <BUG_LIST>
         <BUG>29445548</BUG><BUG>29254623</BUG><BUG>29540327</BUG><BUG>29774362</BUG><BUG>30134746</BUG><BUG>30160625</BUG>
         <BUG>30534662</BUG><BUG>29512125</BUG><BUG>29942275</BUG><BUG>30855101</BUG><BUG>31306261</BUG><BUG>31359215</BUG>
         <BUG>30895577</BUG><BUG>29224710</BUG><BUG>26716835</BUG><BUG>31668872</BUG><BUG>32165759</BUG><BUG>32069696</BUG>
         <BUG>32032733</BUG><BUG>30889443</BUG><BUG>30674373</BUG><BUG>32167592</BUG><BUG>32523206</BUG><BUG>29415774</BUG>
         <BUG>28777073</BUG><BUG>32124570</BUG><BUG>31247838</BUG><BUG>29540831</BUG><BUG>32892883</BUG><BUG>31776121</BUG>
         <BUG>33223248</BUG><BUG>33563137</BUG><BUG>33184467</BUG><BUG>31844357</BUG>
      </BUG_LIST>
      <FILE_LIST/>
      <PROPERTY_LIST/>
   </ONEOFF>
</ONEOFF_LIST>
The results after wiping out opatch's memory
You’ll be surprised … NOT … I guess.
Applying the database 19.15.0 RU to my environment with all the checks run manually upfront took:
- Precheck manually for conflicts: 9 seconds
- Precheck manually for space: 8 seconds
- Precheck done by opatch during “apply”: 12 seconds
- Raw patch apply duration for 19.15.0:
3 minutes 51 seconds including copying files for rollback
I guess you'll have no further questions.
For the OJVM RU on top, the numbers for the checks went up by one or two seconds: both checks took 10 seconds each.
And the final patch apply took end-to-end 1 minute 11 seconds including everything.
So in sum, patching my environment with the 19.15.0 database RU and the 19.15.0 OJVM took less than 6 minutes. That is roughly the time a single check took before on my system with the 1001-line patch inventory file.
The n-1 discussion … revisited
As you may have read already in the initial blog post of this series, I always get thrown back to the n-1 topic. opatch does not currently offer a way to purge everything older than n-1 patches, i.e. keeping only the current patch information, such as April 2022, and the previous patch bundle, January 2022, while getting rid of everything older. Hence, we help ourselves here.
Basically, the workaround does exactly what an n-1 purging mechanism would do: wipe out opatch's memory and keep only the information about the previous patch.
Now I discussed this with the opatch team as well. And I’ve been told that it is not trivial to implement such a purge behavior. But the team will discuss and consider it.
I'm still very positive that it will come sooner or later.
Just keep in mind, when you use the workaround, to protect your previous oui-patch.xml file.
I'd like to share the results from Sven, since he redid and compared the runs, once without changes to the XML file and once with my proposed changes. Before, his patching times were the worst I have ever seen. Sven is on SPARC Solaris, and the times below are just for 19.15.0 for the database home.
- Test run 1 with opatch 12.2.0.1.29 (Jan 22 opatch)
Time taken to complete the session 146 minutes, 11 seconds
- Test run 2 with opatch 12.2.0.1.30 (Apr 22 opatch)
Time taken to complete the session 111 minutes, 51 seconds
- Test run 3 with opatch 12.2.0.1.30 and oui-patch.xml edited to n-1
Time taken to complete the session 40 minutes, 58 seconds
So you recognize two things:
- There is a dependency on the opatch version (future blog post)
- There is a significant improvement with an n-1 inventory file
Let me summarize this tiny little exercise here.
First of all, it is clear that the manual edit of $ORACLE_HOME/inventory/ContentsXML/oui-patch.xml is not supported. But it is VERY effective. Keep in mind that you will see the problem I described above only with in-place patching. When you do out-of-place patching, as we recommend, you will start with a fresh, vanilla $ORACLE_HOME/inventory/ContentsXML/oui-patch.xml. Hence, this issue won't affect you at all.
But in case you are suffering from long binary patching times, check your logs, even when you are not doing the prechecks beforehand. Especially in RAC environments where more than one node gets patched, and especially in environments where you patch not only the database but also the Grid Infrastructure, you may want to check your opatch logfiles. opatch tells you the location of the log with every call, and you will always find them in $ORACLE_HOME/cfgtoollogs/opatch.
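To spot where the time goes without staring at a tail -f, you can scan an opatch log for long silent gaps between timestamped lines. A sketch assuming the timestamp format shown in my logs above; the log file name is just an example:

```python
import re
from datetime import datetime

# Matches timestamps like "[May 9, 2022 6:23:41 PM]" at the start of opatch log lines.
TS = re.compile(r"\[(\w{3} \d{1,2}, \d{4} \d{1,2}:\d{2}:\d{2} [AP]M)\]")

def find_gaps(lines, threshold_seconds=60):
    """Return (gap seconds, line) wherever consecutive timestamps are further apart than the threshold."""
    gaps, last = [], None
    for line in lines:
        m = TS.search(line)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%b %d, %Y %I:%M:%S %p")
        if last is not None and (ts - last).total_seconds() > threshold_seconds:
            gaps.append(((ts - last).total_seconds(), line.strip()))
        last = ts
    return gaps

if __name__ == "__main__":
    import os
    # Example log path - pick the latest file from $ORACLE_HOME/cfgtoollogs/opatch.
    log = "/u01/app/oracle/product/19/cfgtoollogs/opatch/opatch2022-05-09_18-29-45PM_1.log"
    if os.path.exists(log):
        with open(log) as f:
            for seconds, line in find_gaps(f):
                print(f"{seconds:6.0f}s gap before: {line}")
```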
The result above from my tiny little test environment is impressive: more than 16 minutes patching duration with the large XML file versus 5.5 minutes with the reduced file, even when I don't run the checks beforehand and leave them to opatch. The difference gets even bigger when I run all the recommended checks manually myself.
Of course, development has taken notice of this as well. And I guess, the sooner or later we may see a fix or a purge mechanism. But since this tool is not owned by our team, I can’t promise anything.
Finally, let me please ask you NOT to shoot the messenger. Thanks 🙂
Further Links and Information
- Patching all my environments with the July 2021 Patch Bundles
- Balaji Govindu’s blog post on LinkedIn about Patching Slowness
- Unpublished BUG 33606688 – OPATCH APPLY AND ROLLBACK GETTING SLOWER AFTER ONEOFF OR RU PATCH NUMBERS INCREASED
- Unpublished Bug 33425636 – OPATCHAUTO VER 27 BINARIES APPLY TAKES LONG TIME 19.12 Sev 1 SR
I suspect the same issue is relevant to OPatch for FMW (Patch 28186730). Maybe those two even share the same code paths.
April 2022 security patching on one of the FMW installs took ~4h for 10 patches.
You may be very correct with your assumption.
Thanks for the post. I have the exact same problem on several customers' systems. I am looking forward to a fixed opatch version that handles this automatically, as I cannot go with an unsupported solution on production systems. Furthermore, I have a lot of DB systems in OCI where dbcli/Oracle handles the patching, hence I expect OCI to fix this for me :-).
It worries me a bit that the opatch team cannot fix this issue easily and instead leaves it to users to manipulate files in an unsupported way.
I see your point. I just can’t give an estimate. But maybe the blog post and customer feedback helps!
I really appreciate you using your “muscle” to reach development and discuss these matters. I did not want to shoot the messenger 🙂
I really hope you will continue to put high pressure on development regarding this topic. It does not help much that Oracle releases quarterly patches if it takes 4 hours to apply them. It is a no-go in production, and the result will be no patching at all. It should be a top priority for the opatch team. We are talking about an XML file (?!). I still find it hard to understand why it takes 5-10 minutes for each opatch command (apply, rollback, CheckActiveFilesAndExecutables, etc.) to complete, just because 30 older patches have been applied before.
Unfortunately, Oracle support is hopeless on this (like “Talk to the hand”). You get no useful answer from them.
The opatch development team is working on it. And hopefully in 2023 we will have a solution for it.
Thanks for the awesome work you and the team are doing!
What happens if you have multiple one-offs or interim patches? I assume they should be kept in the inventory?
Thanks a lot, Jonas – this is a very valid point.
I updated the blog post with an annotation in RED.
So if you keep “n-1” in the XML file, let's say the 19.14.0 RU with the 19.14.0 OJVM, and you had one-off patches and/or merge patches on top, then you need to keep them in the XML as well, as otherwise a conflict check may not give the correct warnings when you evaluate 19.15.0.
I tried to clarify this – thanks a lot for the hint!
Thank you a lot for this clarification, Mike. It really helps a lot to shed light on this hidden mechanism that opatch uses, and thanks also for the workaround.
Just to comment on OCI databases, which now have 14 total DB patches including OJVM, and 7 GI patches. Due to this issue, patching now exceeds the 2-hour timeout window that OCI uses; it cancels the patch job and we need to resubmit it. For a 2-node RAC primary and a 2-node RAC DB, it is taking 16 hours to complete the entire patching. This issue needs to be raised at the highest level.
Also, in OCI, Patch 34088989 (ONE-OFF REQUEST FOR DELETE DIR FOR SUPTOOLS/TFA + SUPTOOLS/ORACHK + SUPTOOLS/ORACHK.ZIP FROM DB PSU/BP/RU/RUR) is automatically applied, and it fails as there is a symbolic link for orachk that belongs to root. The simple solution is to run the commands below.
login as opc
ls -lrt /u01/app/oracle/product/19.0.0.0/dbhome_1/suptools/orachk
sudo rm /u01/app/oracle/product/19.0.0.0/dbhome_1/suptools/orachk/orachk
ls -lrt /u01/app/oracle/product/19.0.0.0/dbhome_1/suptools/orachk
And I know that this is an issue in OCI as well. I hope that we'll fix it soon. Hopefully you opened an SR and told Support about it.
Does opatch util cleanup help speed this process up?
I've used it to clean up the patch_storage area to remove unneeded patches.
At least as far as I know, it doesn't clean up the XML file.
But at least it now cleans things up properly for the first time, as customers have confirmed.
Thanks for your nice blogs – very useful!
Did an “opatch util archive” the way you described in the blog https://mikedietrichde.com/2022/01/21/can-i-cleanup-the-previous-patch-bundles-with-opatch/ and inspected the file oui-patch.xml today.
All the archived patches are no longer mentioned in this file. Seems that this is a better solution than manually editing oui-patch.xml?
I will verify this, since there seem to have been changes to the “util” behavior recently.
Me again, I have to correct myself: “opatch util archive” does not update the file oui-patch.xml.
On the production servers, I've got the complete history: starting with 19.4 and all following RUs until 19.14. On my test server, which I inspected first, it starts with 19.10, and the database says “19.4 was the first patch”. But the DB is an RMAN duplicate from production. The test server was built later, with 19.10.
Sorry for the wrong hint.
Thanks Martin 🙂
A bit off topic, but do you know if there are any release notes/changelogs available for opatch?
I can’t hear you … 😉 …
(sorry for my sense of humor)
Every few weeks/months a new version of opatch gets released.
But I didn't find anything where I could check which bugs got fixed, which improvements were made, or which new features got added.
Maybe I just searched the wrong places…?
I wish I had access to it, but I am as surprised as you are 🙁
We raised this internally a few times already …
I also noticed that all the RUs applied during the last years are stored in $ORACLE_HOME/.patch_storage, and old patches are not removed. I know this is needed in case you have to roll back a patch, but
if patches are included in the latest RU or the previous one, why keep all applied RUs older than n-1 here? Maybe they have some dependency on the XML file?
This is a topic we discussed internally a lot.
Unfortunately we don’t own the product.
Thanks Mike. Unfortunately, we use Database Systems on OCI. Patching to 19.15 takes hours to complete. We are getting used to this. Any suggestion how to fix it? Pre-creating a new custom image could be a solution, but this image depends on the availability of the patch (to create the image), and the patch is mostly weeks/months delayed after release.
No, I’m not really impressed with patching on OCI 🙁
Several people reported this problem. You may need to open an SR and get Support on this.
Did another patch tonight, 19.14 to 19.15... it took 3 hours and 11 minutes... Started an SR, because this is total nonsense.
It seems this is not tested at all by Oracle…
Did you receive a useful reply?
My RU patching took hours to complete on DBCS (Database systems).
Oracle Support came up with an explanation that didn't convince me at first: Data Guard was having difficulties keeping up during the patch.
Instead of first patching the primary, I would have to patch the standby first, switch over, patch the other node, and switch back. After that, manually run datapatch in verbose mode on the primary.
To my surprise, this really helped: instead of up to three hours, the patch was now done in 30 minutes, comparable with on-premises installations.
Just wanted to share this with you, to let you know it is solved.
Thanks for the feedback. I learned about this issue being present in DBCS and ExaCS as well, at a patching workshop we delivered at AOUG in Vienna in May. I think this is supposed to vanish soon (if it hasn't already), since the homes should be freshly deployed in DBCS etc. soon.
Cheers, and thanks a lot for the feedback!
No, I did not get a useful reply from Support. They said it had to do with the fact that Data Guard was in place. I should have patched the standby first, switched over, patched the old primary, and switched back. The whole scenario did not convince me that it would speed up anything. I will follow the steps as suggested for the 19.16 patch, and will need to start the next SR by then (with a probability of 99%).
I may need to escalate this somehow by then.
This is not satisfying. I don't see how this should have sped up anything 🙁
Thanks for your reply!
Hi, a bit late to jump on the train, but here is:
I have also been struggling with this problem, both in production and in my own test environment. I found that opatch util archive works, but only in that it frees space; the prereq times will still be the same.
The workaround I now use in my test environment is to deinstall *all* patches in the chain (using opatch nrollback -id patchid1,patchid2,...) so the installation finally gets back to version 19.0 without any patches.
This has to be run in several stages; after every run, the last RU applied will be marked active.
After this, I run opatch nrollback -id patchid1,patchid2,... for the currently applied patches before every install of the new quarterly patch.
It’s tedious, especially the first time when I had to deinstall every RU between 19.3 and 19.15, but it works, and is (probably) supported, since I’m not fiddling around manually with any files.
This saves space in .patch_storage and saves time during the prechecks. It does not, however, remove the history in the sqlpatch directory, but the space used there is not that big, so I am ignoring it.
Hopefully in the first half of 2023 we will finally see a solution for this.
From my ticket “opatchauto V27 binaries apply takes long time”:
Development has mentioned that the fix is included in “Fix By: 12.2.0.1.35DB”.
Don’t know which fix was meant there – datapatch or binary patch speed.
But my last patching experience from RU17 in a RAC was good:
Binary patching time went down from 50 minutes per node to 30 minutes.
Getting rid of the opatch history is the change which is supposed to be in the 35 release. I just double-checked last night whether this is still on track.
@Peter: you already got opatch 12.2.0.1.35?
Currently I can only download 12.2.0.1.33...
He can't, since this version won't be available before spring 2023.
12.2.0.1.35 was released 12 days ago.
I haven’t checked how it’s improved yet.
Since the opatch team does not document a lot, I guess it will be a matter of trying it out to see whether there are improvements.
Thanks for the heads-up!
12.2.0.1.35 did not contain any improvements.
BUG 33606688 documentation says:
“For now, ETA is July 23. If things go well, we can deliver early, but not guaranteed.”
Yes, I know – let’s keep fingers crossed.
We did an out-of-place patching, and when we ran datapatch apply we ran into an error: “32876380 Archived patch directory is empty”.
Well, Doc ID 2235541.1 gave the solution:
Datapatch tried to roll back the same patches.
But as we started with 19.3 and applied 19.16 on it, no information about former patches was in the datapatch directory.
Perhaps this hint might be helpful to other users.
FYI, OPatch 12.2.0.1.36 is no better.
Yes, this is expected.
The feature hopefully will appear this year, spring/summer northern hemisphere time frame.