Several of you mailed me already and commented. And I know that I promised this blog post for weeks, if not longer. So let me try to explain how to speed up your database and GI patching. Since this blog post has a longer history, I’d say far over 6 months, it will be a bit longer than the usual posts. And when I started writing it, I decided to split it up into several pieces to make it more digestible, but also to allow easier finding of certain topics. I hope it may help ease some recent patching pain a bit until relief in the software becomes available.

Photo by Michael Fousert on Unsplash
Some history
No worries: if you are not interested in where I’m coming from with this topic, just skip ahead and scroll down. All others, let me explain a bit what has happened.
I think I started somewhere around the 19.12 patching time frame, maybe 19.13, when I received several emails and comments from customers. Some of them I know very well. And hence I knew and understood from the very first second that these weren’t the usual complaints. Peter, Els, Sven and others: Thanks a lot for your heads-up. And thanks to everybody who was willing to test and dig deeper in the following months.
Those emails or SRs shared with me all had more or less the same common denominator: Patching is awfully slow. Some of the complaints were about GI patching and opatchauto, others more about database patching and datapatch, and some seemed to be affected by both. The problem wasn’t a local one since I received mails and comments from customers basically everywhere. And “awfully slow” in this context meant in one case: more than 12 hours for a two-node RAC with just a few databases on it.
Customers shared SRs with me. And all the SRs left the Support engineers pretty clueless about the root cause. Even worse, in some cases customers had to read comments such as “3 hour patching time aren’t bad, what is your complaint” or were faced with miscalculations of the patching duration. I read a lot of anger in some SRs, but pure frustration in others where experienced DBAs just gave up and closed the SR at some point.
The first bug
Many thanks to Peter who really pushed things forward and convinced Support to open a bug:
Bug 33425636 – OPATCHAUTO VER 27 BINARIES APPLY TAKES LONG TIME 19.12 Sev 1 SR
A lot of the SRs I’ve read in the past months were tied to this bug. The subject of Peter’s SR was simply that opatchauto took significantly longer to patch from 19.12 to 19.13 than it had taken from 19.11 to 19.12. And in the next months, I read similar complaints quite often.
But you are not here to get a history lesson. I guess you read this post because you have seen similar issues. And you are highly interested in a solution.
There is not a single solution
First of all, let me warn you that there is no single solution. So you may need to read further and identify your area(s). In addition, let me tell you that some of the topics will affect only environments where you patch in-place, while others apply to Multitenant only. And of course, it could easily be that you are affected by more than one issue.
What is the difference between in-place and out-of-place patching? With in-place patching I refer to applying a patch bundle or an individual patch into the existing home. This is the classic way of patching Grid Infrastructure. With out-of-place patching I mean that you install the new base release into a new home and apply all the necessary patches there, be it RUs, the OJVM PSU, one-offs, or merges. Then you stop your database instance in home_old, start it in home_new, and once you have done this on all available nodes, you invoke datapatch to apply the necessary changes to the database(s).
The latter process is the recommended approach. But even I myself patch in-place in my tiny testing environments since I don’t have enough space available for a proper dual-home strategy.
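For illustration only, here is a minimal sketch of that out-of-place flow for a single database managed by Oracle Restart or Clusterware. The home paths and the database name MYDB are placeholders, and the steps and ordering will differ for RAC rolling or Data Guard setups:

# Out-of-place patching sketch – paths and the DB name MYDB are placeholders
export OLD_HOME=/u01/app/oracle/product/19.13.0/dbhome_1
export NEW_HOME=/u01/app/oracle/product/19.15.0/dbhome_1

# 1. Install the new base release into NEW_HOME and patch it there
#    (e.g. runInstaller -applyRU ...) while the databases keep running in OLD_HOME

# 2. Stop the database in the old home
$OLD_HOME/bin/srvctl stop database -db MYDB

# 3. Point the database at the new home and start it from there
$NEW_HOME/bin/srvctl modify database -db MYDB -oraclehome $NEW_HOME
$NEW_HOME/bin/srvctl start database -db MYDB

# 4. Apply the SQL changes to the database(s)
export ORACLE_HOME=$NEW_HOME
export ORACLE_SID=MYDB
$ORACLE_HOME/OPatch/datapatch -verbose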
You can now read through the areas where we found issues. It spans from opatch to opatchauto to datapatch. I don’t claim that all issues can be solved this way. And please be assured that development has recognized all of the below topics and is working on fixes. Some are fixed already, others require more changes and may take a bit longer. Where I know workarounds, I have of course mentioned them. But please also be aware that some of the workarounds are not well regarded. Still, they should work quite well.
The n-1 discussion
I’ve had a good number of discussions with the owner of the patching vehicle – and of course with customers as well. My take is that nobody on earth needs more than “n-1” patch bundles on disk. When you move up from 19.13.0 to 19.15.0, you won’t roll back to 19.11.0 anymore. At least I have never seen anybody doing this. And even if you needed to, you could always provide a home with a 19.11.0 install in such ultra-rare cases.
But as you and we all know, opatch carries a history with it unless you use out-of-place patching.
And as of now, there is no way to tell opatch to keep only the n-1 version of patch bundles – the way I configure my Linux environment to keep only the current kernel and the previous one, but not the ones from 2019.
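If you want to see how much history a home is already carrying around, a quick check could look like the following sketch (the home path is a placeholder):

# Placeholder home path – adjust to your environment
export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1

# List the patches currently recorded in the inventory for this home
$ORACLE_HOME/OPatch/opatch lspatches

# Check how much space the rollback history in .patch_storage occupies
du -sh $ORACLE_HOME/.patch_storage
du -sh $ORACLE_HOME/.patch_storage/* | sort -h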
I hope that one fine day we’ll get such a configuration option which allows me to configure the patch history. Let’s see if this will happen. I’m still positive.
Why are there several blog posts?
As I explained above, when I started writing about this topic, I realized that it would become a terribly long blog post. So I split it up, which allows me to publish a separate post for each topic. Since I don’t monetize this blog, I have zero interest in guiding you through several connected pages. But this way it may be easier to find the topic you are interested in via your usual search engine.
So please find the following related blog posts:
<stay tuned – links will be added during the following days with blog posts being available>
Further Links and Information
Read on:
–Mike
Hello Mike,
I found this post and I believe it makes a lot of sense, but I didn’t get to test it. Maybe it’s a point to be corrected by Oracle or investigated:
https://www.linkedin.com/pulse/why-my-oracle-19c-db-patching-slow-balaji-govindu
See here please:
https://mikedietrichde.com/2022/05/10/binary-patching-is-slow-because-of-the-inventory/
This is the longer version of Balaji’s LinkedIn post
Thanks
Mike
Hi Mike,
please don’t forget to update this article with the links to the following posts, like https://mikedietrichde.com/2022/05/10/binary-patching-is-slow-because-of-the-inventory/
Already looking forward to the next follow-up post 🙂
Thanks
I will, no worries 🙂
Thanks
Mike
So to be clear, out-of-place patching sounds like it’s the recommended approach anyway, and all this performance trouble is just one particular reason to avoid in-place patching, out of many. Fair statement?
Yep – that is the case.
BUT unfortunately, some environments (for instance in certain clouds) generally get patched in-place, I guess because of space constraints.
And it doesn’t spare you from datapatch trouble (blog posts to come).
Cheers
Mike
Looking forward to the blog posts. I think it is an area with opportunity for lots of improvements. Based on Doc ID 2853839.1, it seems that out-of-place patching is recommended for GI but in-place is recommended for the database home. And for the database home we then have the OJVM patch. In the case of RAC, I would like to patch all the components (GI, RDBMS, OJVM) affecting services on each node only once, not several times. And one of the problems with out-of-place patching is the extra work required after patching (for example, changing the new home directory in Enterprise Manager for the affected targets).
Hi Rafael,
more is coming soon – I just lack time to write more posts at the moment.
Cheers
Mike
Hi Mike,
thanks for this great blog.
As far as I know, you can do an Out Of Place Patch for GI Home, too (Doc ID 2419319.1).
However, you have to clone the current GI home first. This makes the GI home grow bigger and bigger.
We are considering something like this:
The current GI is running on 19.11.
We create a new home with 19.3 and apply the 19.15 patch.
Then we configure the node to start from the new home.
Is there any supported way to handle this?
Thx
Christian
Hi Christian,
you are right, when you clone the existing home, you don’t cure the initial problem.
Only a fresh install will fix it.
And certainly this should be supported. But please check with Support whether there is a MOS note available describing the best approach.
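Just to illustrate the fresh-install route you describe, a rough sketch could look like the following – under the assumption that the 19c gridSetup.sh options -applyRU and -switchGridHome are available in your version; the paths are placeholders, and the exact procedure should be confirmed with Support and the relevant MOS notes:

# Placeholder paths – confirm the exact procedure with Support / MOS
export NEW_GI_HOME=/u01/app/19.15.0/grid

# 1. As the grid user: unzip the 19.3 base release into the new, empty home
unzip -q /stage/LINUX.X64_193000_grid_home.zip -d $NEW_GI_HOME

# 2. Run gridSetup.sh from the new home, applying the RU during setup
#    and switching the cluster to the new Grid home
cd $NEW_GI_HOME
./gridSetup.sh -switchGridHome -applyRU /stage/19.15_GI_RU

# 3. As root, run root.sh on each node when prompted to restart the
#    stack from the new home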
Cheers
Mike
Hi Mike,
thanks a lot.
Thinking about this approach, I got quite unsure about one thing:
We installed 19.10 and lots of one-offs.
Is there a quick way to find out if all one-offs are included in 19.16?
My first thought was to check which bug IDs are fixed by each one-off. The next step would be to check whether 19.16 covers them.
Is there a better and more reliable way to check? Or should we open an SR just to be sure?
Regards
Christian
Hi Christian,
sorry for replying with such a delay.
Actually the patch advisor in MOS should be able to do this for you.
The other option I was thinking about:
Request a merge patch “on top of 19.16.0” for these one-offs.
Then Support should check and tell you which ones are already included in 19.16.0.
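Just as a rough illustration of your “check the bug IDs” idea (the home paths and the bug number are placeholders): you could list the one-offs in the old home and then check whether their bug numbers show up as fixed in the new 19.16.0 home, for example:

# Placeholder paths – old 19.10 home with one-offs, new 19.16 home
export OLD_HOME=/u01/app/oracle/product/19.10.0/dbhome_1
export NEW_HOME=/u01/app/oracle/product/19.16.0/dbhome_1

# List the one-off patches (and their descriptions) installed in the old home
$OLD_HOME/OPatch/opatch lspatches

# Dump the full list of bug fixes contained in the new home ...
$NEW_HOME/OPatch/opatch lsinventory -bugs_fixed > /tmp/1916_bugs_fixed.txt

# ... and check a specific (placeholder) bug number from one of your one-offs
grep 12345678 /tmp/1916_bugs_fixed.txt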
Cheers
Mike