Patching is fun, isn’t it? And you may have heard already that the April 2018 patch bundles were released on April 17, 2018. So I thought I’d share a little bit of fun with you: a quick guide to patching my databases with the April 2018 PSU, BP and RU. For this exercise I use our Hands-On Lab with Oracle 11.2.0.4, Oracle 12.1.0.2 and Oracle 12.2.0.1 installed.
Prerequisites for patching my databases with the April 2018 PSU, BP and RU
First of all, in my VBox environment, I shut down the databases for each release before patching them – and the central listener, which I use from within the Oracle 12.2.0.1 environment:
. cdb2
lsnrctl stop
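Shutting down each database itself looks roughly like this – just a sketch; . upgr is one of the lab’s environment aliases, and the same applies to FTEX and the 12c databases:
. upgr
sqlplus / as sysdba
SQL> shutdown immediate
SQL> exit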
In addition I will update OPatch in my 12.2 home and use this version for all 12c patch activities on the system. Please download the newest OPatch version 12.2.0.1.13 via patch 6880880 from My Oracle Support. Choose “12.2.0.1.0” in the drop-down list.

Download OPatch 12.2.0.1.13 via My Oracle Support’s patch 6880880
The readme explains that the previous OPatch directory should be backed up and the new OPatch should be unpacked into the destination $ORACLE_HOME. I just heard from Kay Liesenfeld that “removing” the old directory may not be a good idea, as he then had trouble with the most recent GI patches.
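A minimal sketch of that OPatch refresh, assuming the patch 6880880 zip was downloaded to /media/sf_TEMP (the zip file name is an assumption based on the patch number):
cd $ORACLE_HOME
cp -rp OPatch OPatch.bak          # keep a backup copy, don't remove the old directory
unzip -o /media/sf_TEMP/p6880880_122010_Linux-x86-64.zip
OPatch/opatch version             # should now report 12.2.0.1.13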
For the 11g database patching I will use my current OPatch version 11.2.0.3.12.
Furthermore, to identify the correct patches for each release I used MOS Note: 2353306.1 (Critical Patch Update (CPU) Program April 2018 Patch Availability Document (PAD)), as neither the Availability and Known Issues notes nor the Patch Download Assistant had been updated two days after patch release. Go to bullet point 3.1.4 in MOS Note: 2353306.1 for Oracle Database Patch Availability. The other notes should be updated already by the time you read this blog post.
MOS Note: 2118136.2 – Assistant: Download Reference for Oracle Database/GI RU, BP, PSU was not updated by the time I wrote this blog post but is now.
Patching Oracle 11.2.0.4 with the April 2018 Patch Set Update
To patch my Oracle 11.2.0.4 databases I downloaded the Patch Set Update April 2018 as Bundle Patches in Oracle 11g are meant for Engineered Systems only:
The first steps after changing into the patch directory are the checks:
cd /media/sf_TEMP/p27338049_112040_Linux-x86-64/27338049
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./
Oracle Interim Patch Installer version 11.2.0.3.12
Copyright (c) 2018, Oracle Corporation. All rights reserved.
PREREQ session
Oracle Home : /u01/app/oracle/product/11.2.0.4
Central Inventory : /u01/app/oraInventory
from : /u01/app/oracle/product/11.2.0.4/oraInst.loc
OPatch version : 11.2.0.3.12
OUI version : 11.2.0.4.0
Log file location : /u01/app/oracle/product/11.2.0.4/cfgtoollogs/opatch/opatch2018-04-20_14-50-41PM_1.log
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.
Once the check has passed I can apply the patch – and of course, I must ensure that I have shut down my databases (UPGR and FTEX in the hands-on lab) first:
$ $ORACLE_HOME/OPatch/opatch apply
Oracle Interim Patch Installer version 11.2.0.3.12
Copyright (c) 2018, Oracle Corporation. All rights reserved.
Oracle Home : /u01/app/oracle/product/11.2.0.4
Central Inventory : /u01/app/oraInventory
from : /u01/app/oracle/product/11.2.0.4/oraInst.loc
OPatch version : 11.2.0.3.12
OUI version : 11.2.0.4.0
Log file location : /u01/app/oracle/product/11.2.0.4/cfgtoollogs/opatch/opatch2018-04-20_14-54-30PM_1.log
Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 27338049
...
Backing up files...
Applying sub-patch '27338049' to OH '/u01/app/oracle/product/11.2.0.4'
...
Patching component oracle.rdbms, 11.2.0.4.0...
Patching component oracle.rdbms.rman, 11.2.0.4.0...
...
Composite patch 27338049 successfully applied.
Log file location: /u01/app/oracle/product/11.2.0.4/cfgtoollogs/opatch/opatch2018-04-20_14-54-30PM_1.log
OPatch succeeded.
Finally I will have to execute catbundle.sql in all my 11.2.0.4 databases (UPGR and FTEX):
. upgr
cd $ORACLE_HOME/rdbms/admin
sqlplus / as sysdba
@catbundle.sql psu apply
exit
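If you want to double-check afterwards, the bundle registration shows up in DBA_REGISTRY_HISTORY – just an optional sketch, not part of the original run:
sqlplus / as sysdba
SQL> select action_time, action, version, comments from dba_registry_history order by action_time;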
Done!
Patching Oracle 12.1.0.2 with the April 2018 Bundle Patch
For Oracle Database 12.1.0.2 I downloaded the Database Proactive Bundle Patch April 2018:
This patch bundle also contains Clusterware and Client patches – but the database patch is actually 27338029. According to the README.html I execute the conflict check – but in my case using the OPatch 12.2.0.1.13 installed previously:
/u01/app/oracle/product/12.2.0.1/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /media/sf_TEMP/p27486326_121020_Linux-x86-64/27486326/27338029
Afterwards I initiate the space check:
/u01/app/oracle/product/12.2.0.1/OPatch/opatch prereq CheckSystemSpace -phBaseDir /media/sf_TEMP/p27486326_121020_Linux-x86-64/27486326/27338029
Both checks succeed.
Now I am applying the patch:
$ cd /media/sf_TEMP/p27486326_121020_Linux-x86-64/27486326/27338029
$ /u01/app/oracle/product/12.2.0.1/OPatch/opatch apply
Oracle Interim Patch Installer version 12.2.0.1.13
Copyright (c) 2018, Oracle Corporation. All rights reserved.
Oracle Home : /u01/app/oracle/product/12.1.0.2
Central Inventory : /u01/app/oraInventory
from : /u01/app/oracle/product/12.1.0.2/oraInst.loc
OPatch version : 12.2.0.1.13
OUI version : 12.1.0.2.0
Log file location : /u01/app/oracle/product/12.1.0.2/cfgtoollogs/opatch/opatch2018-04-19_20-51-40PM_1.log
Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 27338029
...
Backing up files...
Applying sub-patch '27338029' to OH '/u01/app/oracle/product/12.1.0.2'
ApplySession: Optional component(s) [ oracle.has.crs, 12.1.0.2.0 ] , [ oracle.assistants.asm, 12.1.0.2.0 ] not present in the Oracle Home or a higher version is found.
...
Composite patch 27338029 successfully applied.
Log file location: /u01/app/oracle/product/12.1.0.2/cfgtoollogs/opatch/opatch2018-04-19_20-51-40PM_1.log
OPatch succeeded.
In addition, datapatch needs to be executed now in both existing databases. Hence I start up both databases, make sure the environment is set correctly (in the lab: . db121 and . cdb1) and invoke datapatch:
$ /u01/app/oracle/product/12.2.0.1/OPatch/datapatch -verbose
SQL Patching tool version 12.1.0.2.0 Production on Thu Apr 19 21:00:31 2018
Copyright (c) 2012, 2017, Oracle. All rights reserved.
Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_10973_2018_04_19_21_00_31/sqlpatch_invocation.log
Connecting to database...OK
Bootstrapping registry and package to current versions...done
Determining current state...done
Current state of SQL patches:
Bundle series DBBP: ID 180417 in the binary registry and ID 180116 in the SQL registry
Adding patches to installation queue and performing prereq checks...
Installation queue:
Nothing to roll back
The following patches will be applied:
27338029 (DATABASE BUNDLE PATCH 12.1.0.2.180417)
Installing patches...
Patch installation complete. Total patches installed: 1
Validating logfiles...
Patch 27338029 apply: SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27338029/22055339/27338029_apply_DB12_2018Apr19_21_00_56.log (no errors)
SQL Patching tool complete on Thu Apr 19 21:01:02 2018
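As an optional cross-check (not shown in my run), the applied bundle can also be queried from DBA_REGISTRY_SQLPATCH:
SQL> select patch_id, action, status, bundle_series, bundle_id from dba_registry_sqlpatch order by action_time;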
Once I repeated this for the second database, patching of both 12.1.0.2 databases was done as well.
Patching Oracle 12.2.0.1 with the April 2018 Update
As you may know already, there are no PSUs anymore in Oracle 12.2 and higher. You should patch with the Updates. Therefore I downloaded the Database April 2018 Update first. As my favorite note for downloading the correct patch was only getting updated overnight, I used MOS Note: 2239820.1 – 12.2.0.1 Base Release – Availability and Known Issues to access the patch.
It contains the Database 12.2.0.1 Update April 2018 and the OJVM (which I won’t need). I unpack it to a share, /media/sf_TEMP, and install it from there.
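Unpacking is nothing special – roughly like this, where the zip file name is an assumption based on the patch number:
cd /media/sf_TEMP
unzip p27726453_122010_Linux-x86-64.zip -d p27726453_122010_Linux-x86-64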
$ cd /media/sf_TEMP/p27726453_122010_Linux-x86-64/27726453/27674384
[CDB2] oracle@localhost:/media/sf_TEMP/p27726453_122010_Linux-x86-64/27726453/27674384
Then I call opatch for a conflict check:
$ /u01/app/oracle/product/12.2.0.1/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./
Oracle Interim Patch Installer version 12.2.0.1.13
Copyright (c) 2018, Oracle Corporation. All rights reserved.
PREREQ session
Oracle Home : /u01/app/oracle/product/12.2.0.1
Central Inventory : /u01/app/oraInventory
from : /u01/app/oracle/product/12.2.0.1/oraInst.loc
OPatch version : 12.2.0.1.13
OUI version : 12.2.0.1.4
Log file location : /u01/app/oracle/product/12.2.0.1/cfgtoollogs/opatch/opatch2018-04-19_14-51-43PM_1.log
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.
And finally I apply the patch – I’m still in /media/sf_TEMP/p27726453_122010_Linux-x86-64/27726453/27674384, the directory where the Database 12.2.0.1 Update April 2018 is located.
$ /u01/app/oracle/product/12.2.0.1/OPatch/opatch apply
Oracle Interim Patch Installer version 12.2.0.1.13
Copyright (c) 2018, Oracle Corporation. All rights reserved.
Oracle Home : /u01/app/oracle/product/12.2.0.1
Central Inventory : /u01/app/oraInventory
from : /u01/app/oracle/product/12.2.0.1/oraInst.loc
OPatch version : 12.2.0.1.13
OUI version : 12.2.0.1.4
Log file location : /u01/app/oracle/product/12.2.0.1/cfgtoollogs/opatch/opatch2018-04-19_14-52-04PM_1.log
Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 27674384
...
Backing up files...
Applying interim patch '27674384' to OH '/u01/app/oracle/product/12.2.0.1'
ApplySession: Optional component(s) [ oracle.has.crs, 12.2.0.1.0 ] , [ oracle.oid.client, 12.2.0.1.0 ] , [ oracle.ons.daemon, 12.2.0.1.0 ] , [ oracle.network.cman, 12.2.0.1.0 ] not present in the Oracle Home or a higher version is found.
Patching component oracle.network.rsf, 12.2.0.1.0...
...
Patching component oracle.sdo, 12.2.0.1.0...
Patch 27674384 successfully applied.
Sub-set patch [27105253] has become inactive due to the application of a super-set patch [27674384].
Please refer to Doc ID 2161861.1 for any possible further required actions.
Log file location: /u01/app/oracle/product/12.2.0.1/cfgtoollogs/opatch/opatch2018-04-19_14-52-04PM_1.log
OPatch succeeded.
To finalize the patch application I need to start my database(s) and all PDBs:
SQL> startup
SQL> alter pluggable database all open;
SQL> exit
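Before exiting SQL*Plus you could quickly verify that all PDBs really are open – an optional sketch, not part of my original run:
SQL> select name, open_mode from v$pdbs;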
And then run datapatch:
$ /u01/app/oracle/product/12.2.0.1/OPatch/datapatch -verbose
SQL Patching tool version 12.2.0.1.0 Production on Thu Apr 19 14:55:05 2018
Copyright (c) 2012, 2018, Oracle. All rights reserved.
Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_7777_2018_04_19_14_55_05/sqlpatch_invocation.log
...
For the following PDBs: CDB$ROOT PDB$SEED
The following patches will be rolled back:
25862693 (DATABASE BUNDLE PATCH 12.2.0.1.170516)
The following patches will be applied:
27674384 (DATABASE APR 2018 RELEASE UPDATE 12.2.0.1.180417)
Installing patches...
Patch installation complete. Total patches installed: 4
Validating logfiles...
That’s it.
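If you also want to double-check the binary side, opatch lspatches lists what is installed in the home (just a sketch, output omitted):
$ /u01/app/oracle/product/12.2.0.1/OPatch/opatch lspatches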
At the end of the entire exercise I can finally restart my listener:
. cdb2
lsnrctl start
–Mike
Hi Mike,
“Kay Liesenfeld that “removing” the old directory may not be a good idea as he had trouble with the most recent GI patches then”.
What kind of trouble did he have while GI patching?
Thank you, Peter
Peter,
I can’t say – but Kay mentioned explicitly to me: don’t remove the old OPatch directory, but install the new OPatch into the existing directory.
Cheers,
Mike
Hi Peter, Mike,
gridSetup.sh Fails with this error message:
[INS-42505] The installer has detected that the Oracle Grid Infrastructure home software at (/u01/app/12.2.0.1/grid) is not complete.
This issue hits us while trying to install GI 12.2.0.1 with RU 01/2018 already applied.
MOS Note:
12.2 gridSetup.sh: INS-42505 File missing GRID_HOME/OPatch/opatchdiag (Doc ID 2321749.1)
Regards
Kay.
THANK YOU, Kay!!! 🙂
Cheers,
Mike
Hi Mike,
we patched 12.2 with the April RU, and now I noticed that the description for OJVM in dba_registry_sqlpatch is missing:
select patch_id, action, status, action_time, DESCRIPTION from dba_registry_sqlpatch order by 4 desc;
PATCH_ID ACTION STATUS ACTION_TIME DESCRIPTION
———- ————— ————— ————————————————————————— —————————————————————————————————-
27475613 APPLY SUCCESS 20-APR-18 02.44.04.245488 PM
27674384 APPLY SUCCESS 20-APR-18 02.43.07.598686 PM DATABASE APR 2018 RELEASE UPDATE 12.2.0.1.180417
27105253 ROLLBACK SUCCESS 20-APR-18 11.15.08.705247 AM DATABASE RELEASE UPDATE 12.2.0.1.180116
27001739 ROLLBACK SUCCESS 20-APR-18 11.15.06.859815 AM OJVM RELEASE UPDATE: 12.2.0.1.180116 (27001739)
27105253 APPLY SUCCESS 14-APR-18 11.01.32.817759 AM DATABASE RELEASE UPDATE 12.2.0.1.180116
27001739 APPLY SUCCESS 14-APR-18 11.01.31.754178 AM OJVM RELEASE UPDATE: 12.2.0.1.180116 (27001739)
Same problem with two other patched databases. Bug?
Best Regards, Mika
Mika,
you mean this line here, correct?
“27475613 APPLY SUCCESS 20-APR-18 02.44.04.245488 PM”
where the DESCRIPTION is missing. I’d guess it’s a bug that it doesn’t update the DESCRIPTION column. But 27475613 is the April OJVM.
Cheers,
Mike
Yes, that line is missing the description. Should be a bug.
// Mika
Only fyi – filed bug 27912487 for it.
Cheers,
Mike
Nice, thanx Mike!!!
// Mika
Sorry Mike, found the reason why the description was missing on the OJVM patch.
We have always had OPatch within the same folder where the upgrade is. Now when we patched we had to install the patch with -force, otherwise we got an error. With -force the patches installed successfully, but the description was missing.
I troubleshot it yesterday on my lab machines and found that OPatch should be in the OH folder, and that the patches should be run from that destination. Now I was able to install the patches without -force, and the description is also there for OJVM.
Maybe we have always done it the wrong way with OPatch, but it has worked great and this is the first time we got a problem.
Still, big thanx for the bug# but you can clear that one as resolved, our mistake.
We got a little bit confused since we also got another error. In a container with 8 databases, one of the databases did not patch correctly. We did at last manage to install the patches, but then we were not able to start up the database. So, to cut several hours of troubleshooting short, my colleague found that this database did not have the CTXSYS schema. After we installed CTXSYS, all worked perfectly. Phew, now all is OK.
// Mika
Thanks for the update, Mika!
Cheers,
Mike
I tried to apply the APR 2018 RU and datapatch failed with:
DBD::Oracle::db prepare failed: ORA-00942: table or view does not exist (DBD ERROR: error possibly near indicator at char 50 in 'SELECT patch_id, patch_uid, rowid
FROM sys.dba_registry_sqlpatch_ru_info') [for Statement "SELECT patch_id, patch_uid, rowid
FROM sys.dba_registry_sqlpatch_ru_info"] at /opt/oracle/product/18c/racdb18.1.0/sqlpatch/sqlpatch.pm line 2788.
Line 2788 from sqlpatch.pm:
my $ru_info_query =
  "SELECT patch_id, patch_uid, rowid
   FROM sys.dba_registry_sqlpatch_ru_info";
my $ru_info_stmt = $dbh->prepare($ru_info_query);
The DB does not have this view, as this is the first time it is getting patched after it was created in March under the base release.
Sreedhar,
which version of opatch (which includes datapatch) are you using?
“opatch version” is the command.
Cheers,
Mike
12.2.0.1.13.
What I have found is that sys.dba_registry_sqlpatch_ru_info is missing and therefore datapatch is failing.
Hi Sreedhar,
OMG … and yes, this MUST fail as there’s neither a view called DBA_REGISTRY_SQLPATCH_RU_INFO nor a column in DBA_REGISTRY_SQLPATCH called “RU INFO”.
Did you log an SR? Then I would send it to the appropriate person to take care of it.
Thanks!
Mike
Sreedhar,
we spoke to the person who’s responsible for datapatch. The table should have been created during the datapatch bootstrap.
Can we see the contents of the invocation directory, if possible?
The output from datapatch will show the invocation log. We would need the entire directory.
Thanks,
Mike
I have opened SR #3-17501006071 and most likely a new bug will be filed.
Thanks Sreedhar!
Cheers,
Mike
Hi Mike
Just wondering why you are not using opatchauto to manage the patching process for you and handle the post upgrade datapatch runs?
Type “opatchauto” in MOS and you’ll know why.
Cheers,
Mike
Mike,
I have a general question about applying a PSU along with a version upgrade. The environment is as follows: Oracle 12.1 running on a 2-node RAC (RHEL) with a physical standby on similar hardware. Assume that new hardware is purchased (2 nodes for new prod and 2 nodes for new standby). The plan is to have a standby running on the new hardware in Oracle 12.1. During prod cutover to 12.2, the standby will be activated and upgraded. In this type of scenario, I would like to know when the PSUs would be applied. In other words, would the upgrade to 12.2 need to be done first and then the PSUs applied, or can they be applied to the 12.2 installation before the prod cutover? Here are the steps for the overall upgrade.
1. Prod is running on 2-node RAC and physical standby is also on a 2-node RAC. These are all old hardware.
2. Purchase new hardware similar to the above. Let’s call the servers newsrv1, newsrv2, newstdbysrv1, newstdbysrv2. Install Oracle 12.1 and 12.2 on the new hardware. Should the PSU be installed on 12.2 at this point or not?
3. Create 2 physical standbys on the two sets of new hardware.
4. Prod cutover night – activate the standby on newsrv1,2 and upgrade from 12.1 to 12.2. Standby 2 on the new hardware will be in a Data Guard config with the activated and upgraded database (standby 1) and get upgraded. The original prod and standby databases on the old hardware would be left intact and retired later.
Should the upgrade from 12.1 to 12.2 be done first and then the PSUs applied? Or can the PSU be applied to the 12.2 installation BEFORE the upgrade? I am trying to understand the correct way to do an upgrade from 12.1 to 12.2 on new hardware and also apply PSUs. Thank you!
Raj,
first of all let me HIGHLY recommend the Bundle Patch – DON’T USE THE PSU in 12.1.
Furthermore, in 12.2 I don’t have to spell this out as there are luckily no PSUs anymore. But please go forward with the Updates (RU) and ignore the RURs (Revisions).
Then install the most recent July RU into your future 12.2 home BEFORE upgrading.
The upgrade procedure will consume the patch scripts automatically. You can check afterwards with my “check_patches.sql” script on mikedietrichde.com/scripts.
For the Grid Infrastructure part – which has to be done BEFORE upgrading the databases – you have the advantage of having new hardware. This makes the procedure much simpler.
You install 12.2.0.1 GI on the new cluster, then apply the July 2018 RU to it. Then install your DB home(s) and apply the most recent RU to them. Then you build your standbys, set a Guaranteed Restore Point, then activate and upgrade them. Then flash back and try it again if necessary. And afterwards you say “Hello” to Clusterware, as for a command-line upgrade you’ll have to do the srvctl part manually afterwards.
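Just as a rough sketch of the restore point part (the restore point name is arbitrary, and the commands assume the standby has already been activated as a primary):
SQL> create restore point before_upgrade guarantee flashback database;
-- and if the upgrade attempt needs to be repeated:
SQL> shutdown immediate
SQL> startup mount
SQL> flashback database to restore point before_upgrade;
SQL> alter database open resetlogs;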
But just a hint: Use 18.3.0 Grid Infrastructure right away.
1: this is 12.2.0.2
2: it allows you to have 12.2 OR 18 databases whereas 12.2 GI limits you to 12.2 or below but doesn’t allow 18c databases (more flexibility)
3: you save the time of applying a RU as 18.3.0 delivers the RU patches automatically (less work for you)
And just in addition:
When you set up the standbys you don’t need 12.1 software on the destination machine as long as you make sure you copy your archives over (in ASM, RMAN will do this for you). You will only need 12.1 software on the destination in case you would like to use the Data Guard processes, as 12.1 on the source will only speak to 12.1 on the destination for physical standby matters.
Please see our slide deck “Upgrade / Migrate / Consolidate to 12.2” on mikedietrichde.com/slides – Case 2 (RAC) has the standby example, case 3/2 (Payback – part 2) has the MOS note describing this standby case (for Exadata V1 to X2 but that doesn’t matter).
Cheers,
Mike
Thanks for the excellent suggestion!! Especially the hint about using a higher version of GI.
Raj
Hello Mike,
Thanks for your site – very helpful!
I have a stand-alone 12.1.0.2 GI ASM home (the grid user is the owner) and a 12.1.0.2 DB home, both fresh installs, no databases yet. I cannot find a case in the 180417 DBBP patch instructions covering how to patch this type of GI home (it is non-RAC, but not a DB home). Can you provide insight on the method? Am I correct that this type of home is not directly mentioned in those patch instructions? I followed both:
https://updates.oracle.com/Orion/Services/download?type=readme&aru=22118389
https://support.oracle.com/epmos/faces/DocumentDisplay?id=1591616.1
Thanks,
Dennis
Hi Dennis,
you need to look at table 1-2 in your first link:
https://updates.oracle.com/Orion/Services/download?type=readme&aru=22118389
It lists “GI Home in conjunction with RAC, RACOne, or Single Instance home” three times, in the first three rows.
Then use “opatchauto”.
It should patch both homes, your stand-alone GI home and your database home.
And I know that the readme is not clear – but forgive me, I’ve gotten soooo tired of mentioning this over and over again with no result.
You need to open an SR and check with the support folks. This will increase the SR count for such issues and hopefully one day somebody will listen to us.
Thanks,
Mike
Will this patch work in a container database (CDB/PDB)?
Please use Bundle Patches in 12.1 and Release Updates from 12.2 on.
PSUs were meant for 11g databases and have MUCH less content than BPs.
Cheers,
Mike
PS: And yes, applied the right way, all patches work for non-CDB and CDB environments.
Hi Mike,
In an upgrade from 12.1 to 19c, AutoUpgrade fails in datapatch due to the non-existing “sys.dba_registry_sqlpatch_ru_info”. This problem is discussed in the comments to this blog post. Do you know if there was a bug or if a solution was found? I have SR 3-27852854341 open for this problem, but the support engineer does not seem to be familiar with AutoUpgrade….
Thanks
Curt
Hi Curt,
I’m reading your SR now (I had the past 2.5 weeks off).
Oh … 🙁
I mailed the SR owner.
And I checked the bug DB.
If this is a test environment that you can restore easily, then please try the following workaround from bug 33365780 (the bug is not fixed yet):
Drop table registry$sqlpatch_ru_info in the offending PDB, then run datapatch again:
alter session set container=<pdb_name>;
alter session set "_oracle_script"=TRUE;
drop table registry$sqlpatch_ru_info;
@?/rdbms/admin/catsqlreg.sql
Then execute datapatch.
BUT … the above bug has been triggered via an ORA-600, and not via an ORA-8103. So your issue may be a different one.
The best way to progress would be:
1. Try the workaround and check whether this fixes the issue
2. Update the SR with your findings
3. Have the support engineer collect all required logs (they should already be in the AutoUpgrade zip; maybe some additional datapatch logs are needed)
4. Have the support engineer file a bug for this issue
5. Tell me the bug number
Thanks,
Mike