Oh well, time flies. It is April 2021, and hence I will start patching all my environments with the April 2021 patch bundles. In my case, this means the 19.11.0 and 12.2.0.1 Release Updates. There will be an additional blog post for the OJVM bundles, too.
As usual, an important note upfront: I patch in-place due to space constraints. In the real world, please always patch out-of-place into a separate home. Please see this blog post about how to apply the RU directly when you provision a new home with OUI.
Security Alert April 2021
Find the Security Alert for April 2021 here. And don’t forget to take a look at the Oracle Database Server Risk Matrix for April 2021. This time the risk score for core database products is relatively low – and the usual OJVM issue is rated at “only” 5.3 on a scale of up to 10.0.
But to be frank, the reason why 19.11.0 is so important is the number of fixes included in it. Unfortunately, the list of Fixed Bugs (MOS Note: 2523220.1) hasn’t been updated yet while I’m writing this article.
Database Patch Bundles
You will find the links to the individual patch bundles in MOS Note: 2749094.1 – Critical Patch Update (CPU) Program Apr 2021 Patch Availability Document (PAD). And please note that a patch number in the document does not necessarily mean that your patch is available already. Please find a discussion about this recurring topic here: Why is the release update not available on my platform yet? Especially check Section 2.2 (Post Release Patches).
- Oracle Database 19c
- Database Release Update 19.11.0.0.210420 Patch 32545013 for UNIX
- List of fixes: MOS Note: 2523220.1 (as usual, it does not include any 19.11.0 fixes while I write this blog post unfortunately)
- Oracle Database 12.2.0.1
- Database Apr 2021 Release Update 12.2.0.1.210420 Patch 32507738 for UNIX
- List of fixes: MOS Note: 2245178.1 (also not updated yet)
One of the most important notes, MOS Note: 2118136.2 – Assistant: Download Reference for Oracle Database/GI Update, Revision, PSU, SPU(CPU), Bundle Patches, Patchsets and Base Releases, has been updated already with the links for the April 2021 bundles. Just be aware that some platforms are delayed, and check section 2.2 of MOS Note: 2725756.1. For instance, all ports except Linux, including Windows, currently have a projected availability date of April 30, 2021 for RU 19.11.
Do I need a new OPatch?
First check while the download is progressing: Do I need to refresh my OPatch versions?
- 19.11.0 requires OPatch 12.2.0.1.24 or newer
- 12.2.0.1 April 2021 requires OPatch 12.2.0.1.23 or newer
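While the download is running, you can script this check. A minimal sketch (the helper function is my addition; it assumes GNU `sort` with `-V` support, and 12.2.0.1.24 is the minimum from the 19.11.0 README):

```shell
# Compare the installed OPatch version against a required minimum.
# version_ge succeeds if $1 >= $2 (simple dotted-version compare via sort -V).
version_ge() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

# Ask OPatch for its version; the line looks like "OPatch Version: 12.2.0.1.24"
current=$("$ORACLE_HOME"/OPatch/opatch version 2>/dev/null | awk '/^OPatch Version/ {print $3}')
if version_ge "$current" "12.2.0.1.24"; then
  echo "OPatch $current is sufficient for RU 19.11.0"
else
  echo "OPatch too old or not found - download patch 6880880 first"
fi
```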
In my case I need to update opatch in my 19c home. The 6880880 link from the 19.11.0 Readme brings you directly to the correct download:
Download OPatch via patch 6880880. Wipe out the current OPatch directory in your homes. Once you have unzipped the new OPatch bundle, you can proceed.
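In script form, the refresh could look like this minimal sketch (the zip file name is an example for Linux x86-64; I rename the old OPatch directory instead of wiping it right away, so there is a fallback until the new one is verified):

```shell
# Replace the OPatch directory in a home with a freshly downloaded one.
# $1 = Oracle home, $2 = the downloaded p6880880 zip (name below is an example).
refresh_opatch() {
  cd "$1" || return 1
  # keep the old OPatch around instead of deleting it immediately
  [ -d OPatch ] && mv OPatch "OPatch.bak.$$"
  unzip -q "$2"    # the zip contains a top-level OPatch/ directory
}

# Example call (paths are assumptions):
# refresh_opatch /u01/app/oracle/product/19 /tmp/p6880880_190000_Linux-x86-64.zip
```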
Applying RU 19.11.0 to my 19c home
At first, I unzip the patch into a separate directory and place myself into this directory, ~/32545013. This time my apply run will be a bit more interesting since I have one-off patches for Blockchain Tables and for DBMS_OPTIM_BUNDLE in my 19.10.0 home. But you will see below in the apply step that opatch solves this flawlessly.
And just to be clear, I install the RUs in-place only because I have a toy environment and not much space left. In your real world environments, please always either clone the home and apply the new RU, or install 19.3.0 with the newest RU on top in one pass. See this blog post about how to apply the RU directly when you provision a new home with OUI for more information.
Patch conflict check
$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./

Oracle Interim Patch Installer version 12.2.0.1.24
Copyright (c) 2021, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/oracle/product/19
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/19/oraInst.loc
OPatch version    : 12.2.0.1.24
OUI version       : 12.2.0.7.0
Log file location : /u01/app/oracle/product/19/cfgtoollogs/opatch/opatch2021-04-21_13-47-57PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
In addition, a quick space check:
$ $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -ph ./

Oracle Interim Patch Installer version 12.2.0.1.24
Copyright (c) 2021, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/oracle/product/19
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/19/oraInst.loc
OPatch version    : 12.2.0.1.24
OUI version       : 12.2.0.7.0
Log file location : /u01/app/oracle/product/19/cfgtoollogs/opatch/opatch2021-04-21_14-04-51PM_1.log

Invoking prereq "checksystemspace"
Prereq "checkSystemSpace" passed.

OPatch succeeded.
All looks good, my checks ran fine.
$ $ORACLE_HOME/OPatch/opatch apply

Oracle Interim Patch Installer version 12.2.0.1.24
Copyright (c) 2021, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/19
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/19/oraInst.loc
OPatch version    : 12.2.0.1.24
OUI version       : 12.2.0.7.0
Log file location : /u01/app/oracle/product/19/cfgtoollogs/opatch/opatch2021-04-21_14-08-04PM_1.log

Verifying environment and performing prerequisite checks...

Conflicts/Supersets for each patch are:

Patch : 32545013

        Bug Superset of 32431413
        Super set bugs are:
        32431413

        Bug Superset of 31862593
        Super set bugs are:
        31862593

Patches [ 32431413 31862593 ] will be rolled back.

--------------------------------------------------------------------------------
Start OOP by Prereq process.
Launch OOP...

Oracle Interim Patch Installer version 12.2.0.1.24
Copyright (c) 2021, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/19
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/19/oraInst.loc
OPatch version    : 12.2.0.1.24
OUI version       : 12.2.0.7.0
Log file location : /u01/app/oracle/product/19/cfgtoollogs/opatch/opatch2021-04-21_14-10-26PM_1.log

Verifying environment and performing prerequisite checks...

Conflicts/Supersets for each patch are:

Patch : 32545013

        Bug Superset of 31862593
        Super set bugs are:
        31862593

        Bug Superset of 32431413
        Super set bugs are:
        32431413

Patches [ 32431413 31862593 ] will be rolled back.

OPatch continues with these patches:   32545013

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/19')

Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '32545013' to OH '/u01/app/oracle/product/19'
ApplySession: Optional component(s) [ oracle.network.gsm, 19.0.0.0.0 ] , [ oracle.rdbms.ic, 19.0.0.0.0 ] , [ oracle.rdbms.tg4db2, 19.0.0.0.0 ] , [ oracle.tfa, 19.0.0.0.0 ] , [ oracle.options.olap.api, 19.0.0.0.0 ] , [ oracle.ons.cclient, 19.0.0.0.0 ] , [ oracle.options.olap, 19.0.0.0.0 ] , [ oracle.network.cman, 19.0.0.0.0 ] , [ oracle.oid.client, 19.0.0.0.0 ] , [ oracle.ons.eons.bwcompat, 19.0.0.0.0 ] , [ oracle.net.cman, 19.0.0.0.0 ] , [ oracle.xdk.companion, 19.0.0.0.0 ] , [ oracle.jdk, 1.8.0.191.0 ] not present in the Oracle Home or a higher version is found.

Rolling back interim patch '32431413' from OH '/u01/app/oracle/product/19'
Patching component oracle.rdbms, 19.0.0.0.0...
Patching component oracle.rdbms.rsf, 19.0.0.0.0...
RollbackSession removing interim patch '32431413' from inventory

Rolling back interim patch '31862593' from OH '/u01/app/oracle/product/19'
Patching component oracle.rdbms, 19.0.0.0.0...
RollbackSession removing interim patch '31862593' from inventory

OPatch back to application of the patch '32545013' after auto-rollback.

Patching component oracle.rdbms.rsf, 19.0.0.0.0...
Patching component oracle.rdbms.util, 19.0.0.0.0...
Patching component oracle.rdbms, 19.0.0.0.0...

[... cut and shortened output ...]

Patching component oracle.ovm, 19.0.0.0.0...
Patching component oracle.rdbms.rsf.ic, 19.0.0.0.0...
Patching component oracle.precomp.common, 19.0.0.0.0...
Patching component oracle.precomp.lang, 19.0.0.0.0...
Patching component oracle.jdk, 1.8.0.191.0...
Patch 32545013 successfully applied.
Sub-set patch  has become inactive due to the application of a super-set patch .
Please refer to Doc ID 2161861.1 for any possible further required actions.
OPatch Session completed with warnings.
Log file location: /u01/app/oracle/product/19/cfgtoollogs/opatch/opatch2021-04-21_14-10-26PM_1.log OPatch completed with warnings.
Ok, the two one-offs seem to be included in RU 19.11.0, as no conflict was flagged. But what do I do with the warning at the end of the opatch run about the “Sub-set patch”?
In such situations I know why you all love patching so much. A quick check of 32218454 on MOS reveals: this is the 19.10.0 RU. No idea why this gets flagged as a warning, as it is certainly expected that 19.11.0 invalidates and obsoletes 19.10.0 – especially since RUs are cumulative.
$ $ORACLE_HOME/OPatch/datapatch -verbose
SQL Patching tool version 19.11.0.0.0 Production on Wed Apr 21 14:40:26 2021
Copyright (c) 2012, 2021, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_2302_2021_04_21_14_40_26/sqlpatch_invocation.log

Connecting to database...OK
Gathering database info...done

Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)

Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of interim SQL patches:
  No interim patches found

Current state of release update SQL patches:
  Binary registry:
    19.11.0.0.0 Release_Update 210413004009: Installed
  PDB CDB$ROOT:
    Applied 19.10.0.0.0 Release_Update 210108185017 successfully on 20-JAN-21 01.03.57.749255 AM
  PDB PDB$SEED:
    Applied 19.10.0.0.0 Release_Update 210108185017 successfully on 20-JAN-21 01.03.58.234326 AM
  PDB PDB1:
    Applied 19.10.0.0.0 Release_Update 210108185017 successfully on 20-JAN-21 01.03.58.234326 AM

Adding patches to installation queue and performing prereq checks...done
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED PDB1
    No interim patches need to be rolled back
    Patch 32545013 (Database Release Update : 19.11.0.0.210420 (32545013)):
      Apply from 19.10.0.0.0 Release_Update 210108185017 to 19.11.0.0.0 Release_Update 210413004009
    No interim patches need to be applied

Installing patches...
Patch installation complete.
Total patches installed: 3

Validating logfiles...done
Patch 32545013 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/32545013/24175065/32545013_apply_CDB2_CDBROOT_2021Apr21_14_41_01.log (no errors)
Patch 32545013 apply (pdb PDB$SEED): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/32545013/24175065/32545013_apply_CDB2_PDBSEED_2021Apr21_14_42_22.log (no errors)
Patch 32545013 apply (pdb PDB1): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/32545013/24175065/32545013_apply_CDB2_PDB1_2021Apr21_14_42_22.log (no errors)
SQL Patching tool complete on Wed Apr 21 14:43:47 2021
This worked fine, too. And as always, please ensure that all your PDBs are open read-write when you execute datapatch.
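A minimal SQL sketch of that check (the registry query is my addition; note that dba_registry_sqlpatch shows the current container only, so run it in each container of interest):

```sql
-- make sure no PDB is left closed or read-only before running datapatch
show pdbs
alter pluggable database all open read write;

-- afterwards, verify the RU is recorded in the SQL patch registry
select patch_id, patch_type, action, status, description
  from dba_registry_sqlpatch
 order by action_time;
```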
Applying RU 12.2.0.1.210420 to my 12.2.0.1 home
I won’t copy/paste all steps as the output is similar to the above. And I can still use the opatch from my January 2021 patch apply operation.
- Patch conflict check
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./
- opatch apply
All steps worked fine and flawlessly.
But there may be a bit more to do in addition.
Important Fixes on top?
Please check MOS Note: 555.1 (Oracle Database 19c Important Recommended One-off Patches) from time to time. Currently there are no recommended one-off patches on top of 19.11.0 noted. Even so, the Data Pump Super Patch comes to my mind immediately. It should be available for 19.11.0 soon as well, under patch 32551008.
For Grid Infrastructure 19.11.0 you definitely should apply the fix mentioned in MOS Note: 2774284.1 – ALERT: While applying or after applying 19.11 GI RU, “crsctl start crs -wait” hangs or databases fail to start. While I’m adding this on May 17, 2021, the one-off patch 32847378 for Linux has been made available already.
Enabling optimizer fixes
As you can see in my previous blog post, in 19.10.0 an extra patch was necessary to tell DBMS_OPTIM_BUNDLE about the existing optimizer fixes turned off by default.
But this time all seems to work fine and as expected:
SQL> set serveroutput on;
SQL> execute dbms_optim_bundle.getBugsforBundle;

19.11.0.0.210420DBRU:
 Bug: 32037237, fix_controls: 32037237
 Bug: 30927440, fix_controls: 30927440
 Bug: 31788104, fix_controls: 30822446
 Bug: 24561942, fix_controls: 24561942
 Bug: 31625959, fix_controls: 31625959
 Bug: 31976303, fix_controls: 31579233
 Bug: 29696242, fix_controls: 29696242
 Bug: 31626438, fix_controls: 31626438
 Bug: 30228422, fix_controls: 30228422
 Bug: 32122574, fix_controls: 17295505
 Bug: 29725425, fix_controls: 29725425
 Bug: 30618230, fix_controls: 30618230
 Bug: 30008456, fix_controls: 30008456
 Bug: 30537403, fix_controls: 30537403
 Bug: 30235878, fix_controls: 30235878
 Bug: 30646077, fix_controls: 30646077
 Bug: 29657973, fix_controls: 29657973
 Bug: 30527198, fix_controls: 29712727
 Bug: 20922160, fix_controls: 20922160
 Bug: 30006705, fix_controls: 30006705
 Bug: 29463553, fix_controls: 29463553
 Bug: 30751171, fix_controls: 30751171
 Bug: 31009032, fix_controls: 31009032
 Bug: 30207519, fix_controls: 30063629, 30207519
 Bug: 31517502, fix_controls: 31517502
 Bug: 30617002, fix_controls: 30617002
 Bug: 30483217, fix_controls: 30483217
 Bug: 30235691, fix_controls: 30235691
 Bug: 30568514, fix_controls: 30568514
 Bug: 28414968, fix_controls: 28414968
 Bug: 32014520, fix_controls: 32014520
 Bug: 30249927, fix_controls: 30249927
 Bug: 31580374, fix_controls: 31580374
 Bug: 29590666, fix_controls: 29590666
 Bug: 29435966, fix_controls: 29435966
 Bug: 29867728, fix_controls: 28173995, 29867728
 Bug: 30776676, fix_controls: 30776676
 Bug: 26577716, fix_controls: 26577716
 Bug: 30470947, fix_controls: 30470947
 Bug: 30979701, fix_controls: 30979701
 Bug: 31435308, fix_controls: 30483184, 31001295
 Bug: 31191224, fix_controls: 31191224
 Bug: 31974424, fix_controls: 31974424
 Bug: 29385774, fix_controls: 29385774
 Bug: 28234255, fix_controls: 28234255

PL/SQL procedure successfully completed.
So I can easily enable the fixes:
SQL> execute dbms_optim_bundle.enable_optim_fixes('ON','BOTH', 'YES')
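To double-check, you can cross-check a few of the bug numbers listed above against v$system_fix_control – a quick sketch (the three bug numbers are just samples from the list):

```sql
-- after enable_optim_fixes, the listed fix controls should show VALUE = 1
select bugno, value, sql_feature
  from v$system_fix_control
 where bugno in (32037237, 30927440, 31625959);
```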
Further Links and Information
- Security Alert for April 2021
- Oracle Database Server Risk Matrix for April 2021
- List of Fixed Bugs (MOS Note: 2523220.1)
- MOS Note: 2749094.1 – Critical Patch Update (CPU) Program Apr 2021 Patch Availability Document (PAD)
- MOS Note: 555.1 – Oracle Database 19c Important Recommended One-off Patches
- MOS Note: 2118136.2 – Assistant: Download Reference for Oracle Database/GI Update, Revision, PSU, SPU(CPU), Bundle Patches, Patchsets and Base Releases
- Opatch 6880880
- Patching all my environments with the January 2021 Patch Bundles
- How to apply the RU directly when you provision a new home with OUI
- MOS Note: 2774284.1 – ALERT: While applying or after applying 19.11 GI RU, “crsctl start crs -wait” hangs or databases fail to start
- GI Patch 32847378
Patch 32356628 (SIGNIFICANT INCREASE IN LIBRARY CACHE MUTEX X WAIT TIME AFTER 19c UPGRADE) has to be reinstalled for 19.11, and no correction for bug 32649737 is included in this RU.
Thanks for this hint. As you can see in my recent blog post:
I added your recommendation.
Thanks a lot,
Hey Mike, should the opatch download be for “OPatch 12.2.0.1.24 for DB 19.x releases”? Not “OPatch 12.2.0.1.24 for DB 20.x releases”?
it is the exact same opatch, no worries 🙂
Is there an OJVM patch for 12.2.0.1?
Please find it in:
Check for 12.2.0.1 – and then scroll to the end of the table section listing the 12.2 fixes.
A question regarding GI and DB version compatibility.
We are about to start upgrading our GI to 19. We will use 19.11.0.
DB upgrades will start later in the year, by when I assume 19.12.0 (or even 19.13.0) will have been released.
In the past, the DB version had to be less than or equal to the GI version. Am I correct in recalling that this restriction has now been lifted?
If so (assuming no restrictions to the contrary when released) can we run, say, 19.12.0 DB on 19.11.0 GI?
please see here:
DB version can be higher in the 2nd and 3rd number of the release – but must be equal or lower in the first number.
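The rule above can be sketched as a tiny check (my illustration, not an official tool): only the first (major) number matters for the restriction; the RU level may be higher on the DB side.

```shell
# Succeeds if a DB home of version $2 may run against GI of version $1:
# the DB's major number must be equal to or lower than the GI's major number.
gi_db_compatible() {
  gi_major=${1%%.*}
  db_major=${2%%.*}
  [ "$db_major" -le "$gi_major" ]
}

gi_db_compatible 19.11.0 19.12.0 && echo "19.12.0 DB on 19.11.0 GI: fine"
gi_db_compatible 19.11.0 21.3.0  || echo "21.3.0 DB on 19.11.0 GI: not allowed"
```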
today I’ve applied RU19.11 on GI and DB with newest opatch (V24).
So far all went fine, but my impression is that from RU to RU the update takes longer: RU 19.11 needed 50 minutes per node, RU 19.10 30 minutes, RU 19.9 25 minutes.
Is that caused by the new opatch versions I always used? Did you or other customers make the same experience?
I assume that this has to do with the VERY high number of fixes in 19.11.0.
Do you refer to the “opatchauto” part or to the “datapatch” part?
it was the opatchauto part.
I can only guess that it is because of the higher number of fixes. While 19.10.0 had way more than 1000 fixes, 19.11.0 has even way more than 19.10.0 had, in both cases, for the GI and especially for the DB part. But we would need to look at the logs to determine why it has taken so long.
The other day I provisioned a new home and applied 19.11.0 straight away, plus two one-offs, with -applyRU and -applyOneOffs – and my feeling was that it took significantly longer when the OUI said “applying patches” for the 19.11.0 part. But I didn’t dig deeper.
Just a heads up – we’re using DNFS on Linux x86-64 (Red Hat Linux 7), and after applying the Apr 2021 RU on Oracle 19 and turning DNFS back on, we get an error in sqlplus. If we turn off DNFS, the error message is gone, but that’s not an acceptable solution. We’re working with Oracle Support now.
SQL*Plus: Release 19.0.0.0.0 – Production on Fri May 7 15:28:57 2021
Copyright (c) 1982, 2020, Oracle. All rights reserved.
ORA-17505: ksfdrsz:5 Failed to resize file to size 6 blocks
ORA-17505: ksfdrsz:5 Failed to resize file to size 6 blocks
I couldn’t find any useful MOS note or bug yet. Do you have an SR number you may be able to share with me please?
Hi Mike and Jennifer, I resolved this issue on AIX with relink -all from ORACLE_HOME/bin.
Caution with the relink -all command, it disabled dnfs.
I’d recommend checking if unified auditing is disabled as well, and if any options that were disabled are re-enabled after relinking.
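For reference, re-enabling these after a relink typically looks like the sketch below – the make targets are the ones shipped in $ORACLE_HOME/rdbms/lib/ins_rdbms.mk, but please shut down all instances running from the home first and verify the targets against your version's documentation:

```shell
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on                                        # relink with Direct NFS enabled
make -f ins_rdbms.mk uniaud_on ioracle ORACLE_HOME=$ORACLE_HOME     # relink with unified auditing enabled
```

Whether unified auditing is actually compiled in can then be verified in the database with: select value from v$option where parameter = 'Unified Auditing';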
Thanks for the feedback, Monte!
After some more experimentation, we discovered that (unfortunately) we need to turn off unified auditing to apply the patch successfully.
This sounds not good to me 🙁
we have exactly the same error on SPARC. After disabling dnfs with relink -all we can finish the patch without error.
Please let me know if you are interested in our SR-Number.
yes, please share the SR number with me. I will investigate further.
I am new to Database SysOps.
Is a Database Patch Set Update different from a Critical Patch Update?
please see our first Virtual Classroom Seminar:
or click on the picture and get the slides. Advance them a little bit and you will find it all in graphical representation.
a) CPU – Critical Patch Update – only security and regression fixes, available until 12.1.0.2
b) PSU – similar to CPUs but with a bit more content – available until 12.1.0.2
c) RU – Release Update, available since 12.2.0.1
There are no PSUs or CPUs anymore, neither the name nor the content compilation exists. In previous releases, RUs were called BPs (Bundle Patches).
My daily morning routine: I just looked at the MOS alerts and saw this scary one:
Alert: While applying or after applying 19.11 GI RU, “crsctl start crs -wait” hangs or databases fail to start (Doc ID 2774284.1)
Good for me, I’ve only patched our test env a week ago. But the MOS description notes “The Grid Infrastructure (GI) environment running with 19.11 GI RU in GI HOME may have problems starting databases, especially if the physical standby database is running and active dataguard is NOT being used”
I’m not sure if this is only an Oracle Restart with Data Guard or an Oracle RAC with Data Guard issue, and whether Oracle RAC alone is not affected.
Do you have this information? I thought I’d ask it here because it would be of interest for everybody to know.
Thank you very much!
we emailed already in addition – and the MOS note has been made more precise, I think.
The Apr 2021 RU is generating a few issues for us.
NFS only: We have a large number of DB servers and Oracle_home is installed on NFS/Netapp storage
Direct NFS: Enabled for database files ONLY for performance reasons ( ORACLE_HOME is NOT direct NFS)
Pure auditing : ON
It’s working perfectly with any Oracle 19 and we have Jan RU without issue.
Upon testing installing the Apr 2021 RU in a new ORACLE_HOME, we can’t even get into sqlplus, let alone try to start up any database. The error is:
sqlplus / as sysdba
SQL*Plus: Release 19.0.0.0.0 – Production on Mon May 17 14:19:33 2021
Copyright (c) 1982, 2020, Oracle. All rights reserved.
ORA-17505: ksfdrsz:5 Failed to resize file to size 2 blocks
ORA-17505: ksfdrsz:5 Failed to resize file to size 2 blocks
The fun part is that if I turn off ‘Pure Auditing’, sqlplus is happy; if I turn off ‘Direct NFS’, it works as well. But we can’t accept turning off either option, both of which have been working fine.
What I found is that if I point /oracle/audit (which is currently under NFS) to a local drive , sqlplus is happy.
This behaviour only happens with the Apr 2021 RU, not with any of the previous RUs. I think some integration testing is missing. Upon checking the Oracle documentation, it says Direct NFS is not supported for audit files – hey, but my $ORACLE_HOME is on NFS, not Direct NFS.
Did any other customers complain about this?
There are some other minor issues with this Apr 2021 RU. I tested deinstall, and it gives me a weird JAVA error. I could only remove the Apr 2021 RU ORACLE_HOME by first rolling back the Apr 2021 RU; after that, deinstall works. Something is missing …
can you please share the SR number with me?
I have been having the same issues on AIX 64-bit since the RU 19.11 release. It took me some time to identify that it is a case of only one of the two – dNFS or Unified Auditing – being enabled at a time. I have an SR logged, but the engineer has not replied to any of my posts for 3 days, with me trying to let them know I’d figured this out myself. The ‘chat’ option is greyed out also.
I also had to edit $ORACLE_HOME/rdbms/admin/bundlefcp_DBBP.xml and remove some lines, due to this error:
BEGIN dbms_optim_bundle.enable_optim_fixes('ON','BOTH', 'YES'); END;
ERROR at line 1:
ORA-20002: ORA-20002: ORA-20002: get_bundle_fixes_inmemory_val: bundle bug
30483151 not present in PUBLIC.v$system_fix_control,
ORA-06512: at “SYS.DBMS_OPTIM_BUNDLE”, line 1440
ORA-06512: at line 1
To get deinstall to work, cd to $ORACLE_HOME/suptools and
Our binaries are on NFS, in a separate mount from the datafiles, which are also on NFS.
Looking at a DB running 19.10 on the same host & same binaries mount shows that both the datafiles and the binaries are opened via DNFS.
Yet another host uses local storage for binaries (but datafiles on DNFS) – and it has no issues at all.
I just tried a softlink of the 19.11 $ORACLE_HOME/rdbms/audit to a location on local (non-NFS) storage, but that didn’t work for me. Is that the location you meant by “/oracle/audit”?
I’d (relatively recently) also removed “nosuid” from both the binaries and data NFS mount options in /etc/filesystems – but maybe I should have left it “on” for the binaries mount…
Hope this gets us all further.
this worries me a lot – especially since several people reported problems with dNFS which seems to be related to a wrong “make” command on AIX.
Would you mind sharing the SR number with me, please?
I read your SR now – and I’m quite unhappy.
You did everything right, you uploaded everything I would need to analyze it – and you had to call after 6 days to get an analyst looking at it.
The only thing you could improve:
Please raise the severity from (currently) 3 to either 2 or to 1 (but NON-24×7 please).
Unfortunately I don’t have an AIX box to verify this issue – but please give it the right severity, as I can only assume that this hinders you from going to 19.11.0 on AIX.
Additional info: doing a ‘base’ 19.3 install (before anything is patched in any way), I noted that the Oracle binary size is different on the hosts where the binaries are ‘internal’, as opposed to a binary location on an externally mounted NFS volume. I suspect (but haven’t tried) that enabling both dNFS and pure Unified Auditing would work just fine for 19.3, as it did for RUs 19.4 & 19.10.
With the 19.11 RU, pure Unified Auditing seems to expect that the binaries and datafiles mounts are either _all_ on dNFS – or none are. And that is where the conflict with dNFS is occurring.
this looks so strange – and please, raise the severity of the SR to Sev.1 (but non-24×7 as otherwise the SR will travel the world).
You have waited 16 days now, and I see no sign of a proper analysis on the Support side. And you uploaded everything upfront already.
The only thing you may want to add are the logfiles from the “make” runs.
And this issue seems to be “AIX only”.
To enable the deinstall script to work without having to rollback 32545013 first, cd $ORACLE_HOME/suptools and unzip $ORACLE_HOME/.patch_storage/32545013_Apr_19_2021_08_07_33/files/suptools/tfa.zip
Today, I started testing the Latest Oracle Patch 19.11 and got an error message:
The following actions have failed:
Copy failed from ‘/u01/oracle/software/oracle_patching/patch/32218454/files/sqlpatch/sqlpatch’ to ‘/u01/app/oracle/product/19.3.0/dbhome_1/sqlpatch/sqlpatch’…
Is there any workaround for the issue, or has any other customer reported it?
you need to open an SR, please – I personally haven’t seen or experienced this issue – did you copy in the most recent OPatch already (I’d guess so, but I just wanted to ask)?
Thank you for your quick response. Yes, I copied in the latest one, which is “12.2.0.1.25”, but still the same. I will proceed with an SR and see how it goes.
Have you heard anything about installing 19.3 Grid with -applyRU GI RU 19.10 on Solaris SPARC?
It seems it doesn’t work – it doesn’t apply the RU.
-bash-4.4$ /u01/app/19.0.0/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.25
-bash-4.4$ /u01/app/19.0.0/grid/gridSetup.sh -applyRU /u01/stage/32226239
Preparing the home to patch…
Applying the patch /u01/stage/32226239…
Successfully applied the patch.
The log can be found at: /u01/app/oraInventory/logs/GridSetupActions2021-05-18_11-43-08AM/installerPatchActions_2021-05-18_11-43-08AM.log
Launching Oracle Grid Infrastructure Setup Wizard…
19.11 GI RU for Solaris SPARC not yet available.
I was only able to install the 19.9 GI RU with opatch 12.2.0.1.21, but 19.10 requires .23.
please log an SR – I haven’t heard anything, but I know that there were issues with AIX which required a previous opatch, as the newest one mentioned in the README did not work.
I haven’t tried the install on SPARC – but an SR hopefully leads to a resolution quickly.
While running datapatch after applying the 19.11 DB RU and then the 19.11 OJVM RU, we hit the following oracle error: ORA-04021: timeout occurred while waiting to lock object SYS.SQLSET_ROW, while applying Patch 32545013. Datapatch then added Patch 32545013 to a retry installation queue and it subsequently applied successfully.
I notice the README for the DB RU suggests just “startup” while the README for the OJVM RU suggests “startup upgrade”. Since we apply both patches, should we be using “startup upgrade”? Is it possible that would have prevented the ORA-04021 error? Or, should we apply the DB first, run startup, run datapatch, then apply the OJVM, run startup upgrade, run datapatch again?
I think there are some general issues with 19.10 and 19.11 with ORA-4021 – and yes, a STARTUP UPGRADE may have prevented it, but this shouldn’t be the case anymore in a RAC environment. A lot of people internally are working on this. Usually, another call to datapatch clears it up, but we have seen cases already where you needed another datapatch after you ran datapatch for the 2nd time.
It can happen with the regular RU as well.
To clarify, we are running single instance on prem not RAC. Further, we only ran datapatch the one time, Oracle detected the contention, put the object into some kind of exception queue, and re-ran datapatch apply on its own. To confirm, sounds like you are saying we can continue applying the DB RU and OJVM RU and just run datapatch the one time using STARTUP versus STARTUP UPGRADE. We have been doing that for years with no issues.
even then the issue can happen with the ora-4021 as far as I’m aware. But it is less likely. And yes, you can use STARTUP.
I have applied the 19.11.0 RU on top of 19.6 and got the error below after running datapatch:
Automatic recompilation incomplete; run utlrp.sql to revalidate.
I also found the MDSYS schema (Spatial) having a lot of invalid objects. The component is not in use.
COMP_ID   COMP_NAME   SCHEMA   STATUS    CON_ID
-------   ---------   ------   -------   ------
SDO       Spatial     MDSYS    VALID          1
SDO       Spatial     MDSYS    INVALID        3
SDO       Spatial     MDSYS    INVALID        4
you need to check what’s invalid, please. If a run of utlrp.sql can’t solve it, it needs to be tracked down by Support. If you are certain that you don’t need Spatial, you may be able to remove it. Please see:
See the last part, 12.2 for CDBs.
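If it helps, a quick sketch of the recompile-and-check step mentioned above (filtered on MDSYS since that is where the invalid objects were reported; in a CDB, run it per container):

```sql
@?/rdbms/admin/utlrp.sql

select owner, object_type, count(*) as invalid_count
  from dba_objects
 where status = 'INVALID'
   and owner = 'MDSYS'
 group by owner, object_type;
```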
Hi Mike. 11gR2 existed for so long and matured through to support expiry last December. If we compare it with 12cR2 and now 19c, the terminal release: how do you think the number of bugs looks per version/release? With the shorter turnaround time between releases and the move to RU/RUR, are the number of patches applied – and therefore the number of bugs detected – higher than in the older releases? I wonder if anyone looks at the stats on this at all?
Any insights or comments would be welcome.
please understand that I can’t say anything about bug numbers. What I can tell you is that 11gR2 had the longest lifespan of all database releases. 11.2.0.4 alone was under Extended Support for almost 5 full years. Your target release will be 19c, as it is the terminal release of the 12.2 family. Premier Support runs until end of April 2024; Extended Support, as of now, is added until end of April 2027.
I have a question. I have applied 19.9 patch. Along with the patch, I applied 14 one-off patches, just for the heck of it (management asked for it). All the one-off patches are binary only, no per database part. Now I want to apply 19.11 patch but by creating a separate 19.11 DB home, starting the DB from 19.11 home and running datapatch. I do not want to apply one-off patches to the 19.11 home. Is it possible or am I stuck forever with these one-offs?
you won’t be stuck. You need to invoke datapatch anyway, but as there were no SQL or PL/SQL changes associated with the one-offs, datapatch won’t do anything for those.
So just changing homes did the trick in this case.
I want to apply the 19.11 patch to my 19.9 database. But this database still has COMPATIBLE set to an older value, and I can’t change it now.
Can I continue with patching?
Will it cause any problems for a downgrade plan, if we are asked to downgrade in the future?
Applying a patch bundle has no connection or relation to COMPATIBLE. Hence, leave it as it is.
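A quick way to see the current setting (my addition; the RU does not touch this value):

```sql
show parameter compatible
```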
I’m running a Data Guard 19c setup on GI 19c.
Which steps should be taken to apply an RU?
opatchauto is not an option, as noted in the README. Therefore I think this is the right way:
On both nodes:
1) Run roothas.sh -prepatch
2) Patch GI with opatch
3) Patch OH with opatch
4) Run roothas.sh -postpatch
Then run datapatch -verbose on the primary.
Thx a lot
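In command form, the sequence might look like this sketch for an Oracle Restart setup (paths and the staged patch location are assumptions; the GI RU actually consists of several sub-patches, so check the patch README for the exact opatch apply calls):

```shell
# as root:
$GRID_HOME/crs/install/roothas.sh -prepatch

# as the respective home owner, for each home:
$GRID_HOME/OPatch/opatch apply -oh $GRID_HOME /u01/stage/<patch_dir>
$ORACLE_HOME/OPatch/opatch apply -oh $ORACLE_HOME /u01/stage/<patch_dir>

# as root:
$GRID_HOME/crs/install/roothas.sh -postpatch

# finally, on the primary only:
$ORACLE_HOME/OPatch/datapatch -verbose
```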
I guess this may work, but it will be a lot of manual tasks – especially since you can’t deploy the patch remotely anymore but need to do this yourself, node by node.
The doc is a bit “short” on information:
won’t help you much.
A patch for the 19.11 RU problem of sql*plus not starting (with the text shown below) when dNFS and pure Unified Auditing are both compiled in:
[ORA-17505: ksfdrsz:5 Failed to resize file to size 2 blocks]
…Search for Patch 32965460: ORA-17500 WHEN STARTING THE DATABASE USING “SQLPLUS / AS SYSDBA” , for Bug 32924668 .
Thanks for the information!
I recently applied the Apr 2021 PSU to 11.2.0.4 GRID and DB homes. Both were successful. But afterwards I noticed that 4 folders in the GRID home (bin, jdk, lib, perl) had changed owner from oracle to root. Most of the files within those 4 folders also changed owner from oracle to root.
When I look at an installation of a 12.2.0.1 GRID home, I see that this is normal in 12cR2 – bin, jdk, lib, perl are owned by root.
Do you know if the Apr 2021 PSU is supposed to have the effect of changing the bin, jdk, lib, perl folders (and files) in the 11.2.0.4 GRID home to be owned by root?
(This is a standalone Oracle restart home I am talking about – not RAC.)
I received several similar messages but I haven’t verified it by myself yet.
You may need to check with an SR please.
I am doing research about the compatibility of Oracle Fusion Middleware 12c with Database 19c. I would like to know the possible errors or issues of having these two together, because I searched a lot but didn’t find enough information. Please, can you provide me with links or anything that can help me in my research?
sorry but I can’t help you here unfortunately since I have no knowledge about MW.
I want to rename our 19c GI home from /u01/grid1 to /u01/grid2.
I have copied all files from /u01/grid1 to /u01/grid2.
I want to clone the new GI home (/u01/grid2) using gridSetup.sh, but with the silent option.
Is there a better way, please, considering we are only allowed to use the silent option?
I really can’t tell you since I’m not an expert in this area. Did you check MOS? I would almost swear that there must be a MOS note about this task available.