I was traveling last week and didn't have enough time to download and apply the January 2020 Release Updates and PSUs. Yes, I'm one week late. But nevertheless, I'd like to check whether patching my databases with the January 2020 RUs works fine.

Wörthsee near Munich in January 2020 – Water temperature is 3.4°C
Security Alert January 2020
My usual approach is to start with the Security Alerts for January 2020. It leads me to the January 2020 Critical Patch Advisory. As I’m a database guy, this is the line I’m interested in: Oracle Database Server, versions 11.2.0.4, 12.1.0.2, 12.2.0.1, 18c, 19c. And this link brings me directly to the Risk Matrix for the database products.
You will spot four issues with a CVSS score above 7, two of them in the Core RDBMS, and the usual OJVM vulnerability. Basically, for me this means: you must apply it. Security is the most important topic.
My next click on Database directs me to MOS Note: 2602410.1 – Critical Patch Update (CPU) Program Jan 2020 Patch Availability Document. This is the document containing the links to download the patch bundles. Section 3.1.4 gives the overview of the Database Patch Bundles for each release.
I download the following patch bundles:
- Oracle Database 19c
  - Database Release Update 19.6.0.0.200114 Patch 30557433 for UNIX
  - Readme
- Oracle Database 12.2.0.1
  - Database Jan 2020 Release Update 12.2.0.1.200114 Patch 30593149 for UNIX
  - Readme
- Oracle Database 11.2.0.4 (non-Engineered System => PSU!)
  - Database PSU 11.2.0.4.200114 Patch 30298532 for UNIX
  - Readme
Do I need a new OPatch?
The first check I do once the download is running: do I need to refresh my OPatch versions?
- 19.6.0 requires opatch 12.2.0.1.17 or later
- 12.2.0.1 requires opatch 12.2.0.1.13 or later
- 11.2.0.4 requires opatch 11.2.0.3.20 or later
A quick check with opatch version in all three of my homes shows everything is ok. Otherwise I would have to download and refresh OPatch via patch 6880880. My 11.2 home has OPatch "21", the 19c and the 12.2.0.1 home both use the "17" version. All set.
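For reference, this is the check I mean – a minimal sketch, switching the environment to each home in turn (the path is an example, not my actual one):
$ export ORACLE_HOME=/u01/app/oracle/product/19
$ $ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.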
Applying RU 19.6.0 to my 19c home
First, I apply 19.6.0 to my existing Oracle 19c home (currently at 19.5.0). After unzipping the patch into a separate directory, I run:
- opatch conflict check
$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./
Oracle Interim Patch Installer version 12.2.0.1.17
Copyright (c) 2020, Oracle Corporation. All rights reserved.

PREREQ session

Oracle Home       : /u01/app/oracle/product/19
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/19/oraInst.loc
OPatch version    : 12.2.0.1.17
OUI version       : 12.2.0.7.0
Log file location : /u01/app/oracle/product/19/cfgtoollogs/opatch/opatch2020-01-21_21-04-06PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
[CDB2] oracle@hol:~/30557433
- opatch apply
$ $ORACLE_HOME/OPatch/opatch apply
Oracle Interim Patch Installer version 12.2.0.1.17
Copyright (c) 2020, Oracle Corporation. All rights reserved.

Oracle Home       : /u01/app/oracle/product/19
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/19/oraInst.loc
OPatch version    : 12.2.0.1.17
OUI version       : 12.2.0.7.0
Log file location : /u01/app/oracle/product/19/cfgtoollogs/opatch/opatch2020-01-21_21-05-18PM_1.log

Verifying environment and performing prerequisite checks...
Prerequisite check "CheckSystemSpace" failed.
The details are:
Required amount of space(4575.498MB) is not available.
UtilSession failed: Prerequisite check "CheckSystemSpace" failed.
Log file location: /u01/app/oracle/product/19/cfgtoollogs/opatch/opatch2020-01-21_21-05-18PM_1.log

OPatch failed with error code 73
Uhh … I know that I don't have a lot of disk space available, but why does a 1 GB patch request almost 5 GB of free space? I don't like such surprises. They make patching such a relaxing fun experience.
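In hindsight, a pre-flight space check would have avoided the surprise – a minimal sketch, run from the unzipped patch directory before the apply (the staging path is just an example, not my actual one):
$ cd /home/oracle/patches/30557433
$ $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -ph ./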
I clean up all my .patch_storage directories manually to free space. Altogether I can get rid of 2 GB. And interestingly enough, once I do this, opatch doesn't complain anymore about 4.6 GB not being available. Well …
$ $ORACLE_HOME/OPatch/opatch apply
Oracle Interim Patch Installer version 12.2.0.1.17
Copyright (c) 2020, Oracle Corporation. All rights reserved.

Oracle Home       : /u01/app/oracle/product/19
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/19/oraInst.loc
OPatch version    : 12.2.0.1.17
OUI version       : 12.2.0.7.0
Log file location : /u01/app/oracle/product/19/cfgtoollogs/opatch/opatch2020-01-21_21-12-23PM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 30557433

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/19')

Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '30557433' to OH '/u01/app/oracle/product/19'
ApplySession: Optional component(s) [ oracle.network.gsm, 19.0.0.0.0 ] , [ oracle.rdbms.ic, 19.0.0.0.0 ] , ...

Patching component oracle.xdk, 19.0.0.0.0...
Patching component oracle.xdk.parser.java, 19.0.0.0.0...
Patching component oracle.rdbms.rsf.ic, 19.0.0.0.0...
Patching component oracle.precomp.lang, 19.0.0.0.0...
Patching component oracle.precomp.common, 19.0.0.0.0...
Patching component oracle.jdk, 1.8.0.201.0...
Patch 30557433 successfully applied.
Sub-set patch [30125133] has become inactive due to the application of a super-set patch [30557433].
Please refer to Doc ID 2161861.1 for any possible further required actions.
Log file location: /u01/app/oracle/product/19/cfgtoollogs/opatch/opatch2020-01-21_21-12-23PM_1.log

OPatch succeeded.
The note mentions as well that the JDK will be updated. That's a good thing – but it makes the patch apply take a bit longer, at least in my environment. I get some CPU spikes, but this may have to do with my VBox environment.
As a final action, I start my database and all pluggable databases again. The next step, datapatch, requires my databases and PDBs to be open.
- datapatch
$ $ORACLE_HOME/OPatch/datapatch -verbose
SQL Patching tool version 19.6.0.0.0 Production on Tue Jan 21 21:20:48 2020
Copyright (c) 2012, 2019, Oracle. All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_16010_2020_01_21_21_20_48/sqlpatch_invocation.log

Connecting to database...OK
Gathering database info...done

Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)

Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of interim SQL patches:
No interim patches found

Current state of release update SQL patches:
  Binary registry:
    19.6.0.0.0 Release_Update 191217155004: Installed
  PDB CDB$ROOT:
    Applied 19.5.0.0.0 Release_Update 190909180549 successfully on 16-OCT-19 07.42.14.875068 PM
  PDB PDB$SEED:
    Applied 19.5.0.0.0 Release_Update 190909180549 successfully on 16-OCT-19 07.42.15.686615 PM

Adding patches to installation queue and performing prereq checks...done
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED
    No interim patches need to be rolled back
    Patch 30557433 (Database Release Update : 19.6.0.0.200114 (30557433)):
      Apply from 19.5.0.0.0 Release_Update 190909180549 to 19.6.0.0.0 Release_Update 191217155004
    No interim patches need to be applied

Installing patches...
Patch installation complete. Total patches installed: 2

Validating logfiles...done
Patch 30557433 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/30557433/23305305/30557433_apply_CDB2_CDBROOT_2020Jan21_21_22_01.log (no errors)
Patch 30557433 apply (pdb PDB$SEED): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/30557433/23305305/30557433_apply_CDB2_PDBSEED_2020Jan21_21_22_52.log (no errors)
SQL Patching tool complete on Tue Jan 21 21:23:34 2020
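To double-check the result afterwards, I can query the SQL patch registry – a minimal sketch (this query is just my habit, not part of the patch readme):
SQL> select patch_id, patch_type, action, status, description
  2  from   dba_registry_sqlpatch
  3  order by action_time;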
Applying RU 12.2.0.1.200114 to my 12.2.0.1 home
I won’t copy/paste all steps as the output is similar to the above.
- opatch conflict check
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./
- opatch apply
$ORACLE_HOME/OPatch/opatch apply
Then I will need to start up my databases and all PDBs.
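Since datapatch only patches open PDBs, that step looks roughly like this – a minimal sketch, assuming a single CDB:
$ sqlplus / as sysdba
SQL> startup
SQL> alter pluggable database all open;
SQL> exit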
- datapatch -verbose
The final step is datapatch. It requires the databases and all their PDBs to be up and running.
$ORACLE_HOME/OPatch/datapatch -verbose
Applying PSU 11.2.0.4.200114 to my Oracle 11.2 home
For Oracle 11.2 I can only take the PSU. Bundle Patches (BP) in Oracle 11.2 were only meant for Exadata systems. But the process to apply them is the same as for the later bundles.
Again, I won’t copy/paste all steps as the output is similar to the above.
- opatch conflict check
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./
- opatch apply
$ORACLE_HOME/OPatch/opatch apply
Then I will need to start up my databases.
- catbundle.sql
The final step is catbundle.sql. It requires the databases to be up and running.
cd $ORACLE_HOME/rdbms/admin
sqlplus / as sysdba
@?/rdbms/admin/catbundle.sql psu apply
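To verify the SQL changes of the PSU in 11.2, the registry history can be checked afterwards – a minimal sketch (not part of the original readme steps):
SQL> select action_time, action, version, id, comments
  2  from   dba_registry_history
  3  order by action_time;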
Everything is fine now. No major issues seen. But I received an email from a colleague about some issues patching a RAC environment. I will update you once I find out whether it's just an isolated issue.
Further Information and Links
- Oracle Security Alerts
- January 2020 Critical Patch Advisory
- Oracle Database Server, versions 11.2.0.4, 12.1.0.2, 12.2.0.1, 18c, 19c
- Risk Matrix Database Products
- MOS Note: 2602410.1 – Critical Patch Update (CPU) Program Jan 2020 Patch Availability Document
- Opatch download via patch 6880880
- Patching all my environments with the Oct 2019 RUs
–Mike
Hi Mike,
“But I received an email from a colleague about some issues patching a RAC environment. I will update you once I find out whether it's just an isolated issue.”
I had problems on RAC too:
…
MGTCA-1005 : Could not connect to the GIMR.
Listener refused the connection with the following error:
ORA-12514, TNS:listener does not currently know of service requested in connect descriptor
2020/01/22 11:45:06 CLSRSC-180: An error occurred while executing the command ‘/opt/grid_19.0.0.0/bin/mgmtca applysql’
After fixing the cause of failure Run opatchauto resume
…
It seems to me that the pdb GIMR_DSCREP_10 of the mgmtdb got lost during the RU.
mgmtdb shows:
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
———- —————————— ———- ———-
2 PDB$SEED READ ONLY NO
I’ve recreated the mgmtdb pdb GIMR_DSCREP_10 and started opatchauto resume as workaround:
12.2: How to Create GI Management Repository (Doc ID 2246123.1)
/bin/mgmtca -local
mgmtdb shows now:
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
———- —————————— ———- ———-
2 PDB$SEED READ ONLY NO
3 GIMR_DSCREP_10 READ WRITE NO
sudo /OPatch/opatchauto resume
Best regards
Sven
Hi Sven,
thanks for sharing this – actually I double checked, and the issues my colleague logged are for 12.1.0.2, not for 19c.
I shared your input with the RAC folks – did you open an SR for this or did you fix this by yourself?
In case you didn’t open an SR, do you still have the logs from the rootupgrade.sh run?
Cheers,
Mike
Hi Mike,
I haven't logged an SR because I fixed it myself.
I still have the logs from opatchauto, but I can't find any information in the logs about whether the mgmtdb PDB got lost during the RU or whether it was already missing before the RU.
I don't know if it's worth analyzing the issue further.
Is my 19c issue similar to the issue on 12.1.0.2?
Best regards
Sven
Hi Sven,
I did check with the RAC-PM – he will test your case.
Thanks,
Mike
Hi all, any news with the case? I'm facing the same issue and I can't resolve it.
Thanks for the reminder.
I sent this on – but except for “sorry I missed this one” I haven’t gotten a reply or clarification.
Do you have an SR open? If yes, please let me know the SR number and I’ll use the last reply to remind my colleague 🙂
Thanks a lot,
Mike
Hi,
we used the workaround above at first, but have removed the mgmtdb and upgraded to 19c in the meantime. I don't have an SR for this issue.
Best regards
Sven
The SR number is 3-22702149501 : CLSRSC-180: An error occurred while executing the command ‘/oravol/app/19.6.0.0/bin/mgmtca applysql’.
Hello Mike. Thanks for the info. Can you please elaborate on this sentence?
I clean up all my .patch_storage directories manually to free space.
What is the process you use to determine what can be cleaned up?
Also, did you know that Oracle can do a check for you to see if you have enough free space? Here are the steps from my July 2019 Combo patch:
1. Using vi, create and save a file named July_2019_RU_Combo_Patchlist.txt in the /tmp directory that contains only the following two lines:
/home/hems/oracle_patches/12201/29699168/29757449
/home/hems/oracle_patches/12201/29699168/29774415
2. To run this utility, enter the following command:
opatch prereq CheckSystemSpace -phBaseFile /tmp/July_2019_RU_Combo_Patchlist.txt
Hi Edmond,
something you shouldn’t do: “rm -rf” 😉
It is my lab environment and I know that I created the disk group with not enough space.
You could use "opatch util cleanup" – that would be the official command – but I was too lazy, as it does not clean everything.
And thanks, yes, I know that. But I’d rather rely on the opatch check when it kicks in.
But you are right, I should have checked space with CheckSystemSpace – your approach is the better one.
Cheers and thanks for sharing,
Mike
Answered my own question LOL! Oracle has a Metalink doc on the topic, How To Avoid Disk Full Issues Because OPatch Backups Take Big Amount Of Disk Space. (Doc ID 550522.1)
Oh, I didn’t know this one …
THANKS!!! BIG thanks actually!
Mike
You’re very welcome. Glad to be of service! Thanks for all you do for all of us out here my brother! It’s very much appreciated!
Tschuss!
Hi,
you may notice that “opatch util cleanup” mentioned in this Doc ID is not working.
I opened SR 3-20261324631 : opatch util cleanup does nothing
a few months ago and the support engineer mentioned that this behavior was introduced by: OPatch: Behavior Changes starting in OPatch 12.2.0.1.5 and 11.2.0.3.14 releases (Doc ID 2161861.1)
The manual steps for further cleanup of $ORACLE_HOME/.patch_storage described in Doc ID 550522.1 are working without any issues, though. We are even using a shell script to automate this task.
Best regards
Sven
Thanks a lot, Sven – I’ve had the feeling that “util cleanup” would do something but not enough.
That's why I went for the brute-force approach and wiped it all.
Thanks for your suggestions!!!
Mike
Hi Mike,
I have already patched all my RAC systems in my lab test environment: 12.1, 12.2, 18c and 19c.
I found a bug patching the ORACLE_HOME in a RAC environment with opatchauto Rel 17. It seems that opatchauto tries to re-compile Pro*COBOL and gets an error. Applying the single patch with opatch is no problem.
I have not opened an SR but simply found that a new version of OPatch (Rel 19) is available. With OPatch Rel 19 there is no issue at all.
My suggestion (as you always teach) is to use the latest version of OPatch, Rel 19 of 16 Jan 2020.
RV
Thanks a lot, Roberto!
I think I saw this COBOL issue myself a few months ago. Potentially in a customer's log.
Thanks for the update – very helpful!!!
Cheers,
Mike
Hi Mike,
Is there a reason for the missing "Recommendation" flag on the "GI JAN 2020 RELEASE UPDATE 12.2.0.1.200114 (System Patch) – 30501932" download? Compared to that, GI RU 12.1.0.2 – 30464119 has a "Recommendation" flag.
Cheers Peter
Hi Peter,
seriously … I have no clue 🙁
I can only guess that somebody has forgotten it …
Cheers,
Mike
Hi,
Just to inform all of you that this latest 19.6 RU (patch 30557433) has a bug which is not mentioned among known issues…
Bug 30521071 – "19.5 OJVMRU patch is mandatory to apply prior to 19.5DBRU/19.6DBRU bundle patch" was created.
I've patched a 19.3 oracle_home with this JAN2020 RU, and after creating a new instance out of this patched home there are:
– 256 invalid objects (mdsys objects)
– and here also EM Express does not work.. (on 19.3 it does).
– compiling invalids via @utlrp.sql takes ages…literally 10 min..(on 19.3 it finishes immediately)
The current workaround is to go from 19.3 to 19.4–>19.5–>19.6 applying each RU separately..
..so much for patches being "cumulative"..
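For reference, a quick way to count the invalid objects described here – a minimal generic sketch, not taken from Jure's environment:
SQL> select owner, object_type, count(*)
  2  from   dba_objects
  3  where  status = 'INVALID'
  4  group by owner, object_type
  5  order by owner, object_type;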
Hi Jure,
thanks for sharing – I was not aware and will investigate as soon as time allows.
Yesterday, at OOW London, a customer came by and asked me exactly the same because he had so much trouble. And it worked only when he applied the OJVM patch.
Let me check this …
Thanks for sharing!
Mike
Hi Jure,
can you drop me an email please with your findings.
I tried to reproduce it but everything works fine for me.
What I did:
1. I installed 19.3.0 freshly into a new home
2. I applied _ONLY_ Database RU January 2020 (19.6.0) to this home
3. I didn’t apply any OJVM patches yet
4. I created a new database, ADVANCED => CUSTOM => CDB with 1 PDB
And I configured almost everything.
Java is there as is the JAVAVM and Spatial, too (SDO).
But everything is valid.
Cheers,
Mike
Hi Jure,
could it be that you use one of the precreated Oracle Databases, such as OLTP or DWH???
With these, I can reproduce the behavior – I get an incredibly long recompilation phase, and I get 256 invalid objects in CDB$ROOT already.
I have an idea why this is happening … and I will blog about it.
But nevertheless, let me explain that I NEVER use the seed databases for many many reasons:
https://mikedietrichde.com/2017/07/11/always-create-custom-database/
Thanks for highlighting this issue!
Cheers,
Mike
Hi Mike,
Yes, this is exactly the case. I've created an instance (although a non-CDB single instance) based on the OLTP template ("General purpose or Trans.." option), as you've already figured out.
Thanks for the tip to create custom database..going to recreate my 19c instance(s).. 🙂
One more thing: when the instance is already created & running (out of e.g. a 19.3 home), then the same 19.6 RU (patch 30557433) does not cause the issues described above. The RU apply completes as expected and NO invalids are found afterwards.
Regards!
Hi Jure,
I tried to test everything back and forth. I see the same issue with 19.6.0 – and I found a way to clear this up.
I wrote a blog post last night already and will publish it tomorrow.
Thanks again for alerting me – this is so important as otherwise I wouldn’t have seen this. And I did double-check. There is no MOS note so far explaining this. But it is a really nasty and mean issue.
Thanks,
Mike
I'm seeing this issue too, and I've found dbca to be incredibly slow after applying the January 2020 CPU patch. It took 3 hours to create the database.
See here, Jeremy:
https://mikedietrichde.com/2020/02/18/issues-with-seed-databases-patch-bundles-and-ojvm-in-19c/
Cheers,
Mike
Hi Mike,
yesterday I tried to patch a Single Tenant 12.1.0.2 SIHA database coming from the July DBBP. The instance failed to start up afterwards with the following error:
srvctl start database -d $ORA_DBNAME
PRCR-1079 : Failed to start resource ora.dbname.db
CRS-5017: The resource action “ora.dbname.db start” encountered the following error:
ORA-48189: OS command to create directory failed
Linux-x86_64 Error: 1: Operation not permitted
Additional information: 2
. For details refer to “(:CLSN00107:)” in “/oracle/GRD/base/diag/crs/hostname/crs/trace/ohasd_oraagent_oraplgrd.trc”.
Grid Infrastructure was already on 12.1.0.2.200114
After rolling back to the previous patch level, the instance started as usual! The issue seems to be the database patch component of the Proactive Bundle Patch, and internal tests showed that it only comes up in SIHA environments. Database-only environments are fine.
I opened an SR for that this morning. We will see….
Cheers,
Björn
Bjorn, we have this exact issue
PRCR-1079 : Failed to start resource ora.dbname.db
CRS-5017: The resource action “ora.dbname.db start” encountered the following error:
ORA-48189: OS command to create directory failed
Linux-x86_64 Error: 1: Operation not permitted
Was this resolved?
Regards, Jaap Krips
Please open an SR – Support needs to know about this – and has a solution.
Thanks,
Mike
Hi Mike and Jaap,
I resolved the issue myself yesterday. I played around with different DBBPs and found out that up to and including the October 2019 DBBP everything was fine. January and April had that issue. To find out where exactly the permission problem is (as no output or logfile would give that information), I finally did an strace on a sqlplus session in which I did a startup of that database instance:
# strace -f -s 2048 -o trace -p 925
(where 925 was the pid of my sqlplus session)
# less trace
9291 stat(“/oracle/diag”, {st_mode=S_IFDIR|0775, st_size=4096, …}) = 0
9291 geteuid() = 1203
9291 getegid() = 1001
9291 mkdir(“/oracle/diag”, 0775) = -1 EEXIST (File exists)
9291 chown(“/oracle/diag”, 4294967295, 1000) = -1 EPERM (Operation not permitted)
9291 stat(“/oracle/diag”, {st_mode=S_IFDIR|0775, st_size=4096, …}) = 0
9291 chmod(“/oracle/diag”, 0755) = -1 EPERM (Operation not permitted)
So, in our case, we decided years ago to have only one diagnostic_dest for both ASM and database for easier administration. The diagnostic_dest is owned by the grid user but has 775 permissions to enable the oracle user to write into it via its group membership. Oracle now seems to be forcing the default permissions and ownership on that directory, and if it cannot do that, it lets the whole startup process fail. Maybe they are fixing other problems where people have wrong permissions on their diagnostic_dest, but this will only succeed if the ownership is right, as the database user normally does not have root-like rights.
Jaap: I am assuming you have a similar setup. You could temporarily fix the issue by setting 777 on your diag directory, which isn't a nice solution in terms of security but will enable the database to start up with a shared diagnostic_dest. Otherwise you will have to separate the diagnostic_dest again.
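If you go the "separate the diagnostic_dest again" route, a minimal sketch might look like this (the target path is only an example, not from this setup):
SQL> alter system set diagnostic_dest='/oracle/db_diag' scope=spfile sid='*';
SQL> -- then restart the instance so the ADR is created below the new location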
Mike: If it still is relevant to you: SR 3-22415708331
Cheers,
Björn
Bjorn, brilliant! I changed the database to have a private diag dir, and now it works fine. I also ran the upgrade with OPatch 12.2.0.1.17 – that errored too, but I later read about that here as well. Many thanks! Regards, Jaap
Hi
not sure if you guys hit the bug with the RSnn processes; Oracle recommends applying this patch:
Patch 30794929: RMV0/RMV1 WATING ON GES GENERIC EVENT AFTER APPLYING 12.2.0.1.20200114 DB RU
regards
Hong Yeow
I didn’t hit this one – thanks for letting us know!
Cheers,
Mike
The SR is still open and not solved; even escalating and then raising it to Sev 1 didn't speed things up so far. On the contrary, the new Sev 1 engineer started asking questions right from the beginning, requesting an lsinventory again – and that's where we are right now after nearly two months. Very disappointing, like many critical SRs…
Cheers,
Björn
Hi Björn,
would you mind sharing the SR# with me, please?
Thanks,
Mike
Hi Mike,
A caution on just "clean up all my .patch_storage directories manually to free space": this should not mean just deleting directories, as I found out the hard way.
I encountered a problem with a patch install and attempted to roll it back using opatch rollback -id . It attempted to find the previous version of a file in the .patch_storage folder from an update a year and a half earlier (apparently the last time that particular file had been updated). Unfortunately, that file was no longer there, as we had "cleaned" (i.e. deleted the folder) to make room for the later patches. I ended up having to restore the binaries from an offsite backup instead to correct the bad patch install.
I now either compress the files/folders in place or move the folders in .patch_storage to another location to save space, so, if need be, I can put the correct folders back again for another rollback.
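A minimal sketch of that "move instead of delete" idea – the archive location and keeping only the newest directory in place are assumptions, so test a rollback from the archive before relying on it:
$ ARCHIVE=/backup/patch_storage_archive        # hypothetical archive location
$ mkdir -p "$ARCHIVE"
$ cd $ORACLE_HOME/.patch_storage
$ # move all but the most recently modified patch directory out of the home
$ for d in $(ls -1dt */ | tail -n +2); do mv "$d" "$ARCHIVE/"; done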
Very Sincerely,
Nate
Hi Nate,
I fully agree with you for a prod system – I was just cleaning up in my lab as I had set it up so badly 🙁
Thanks for your advice!
Cheers,
Mike
I have installed the Jan 2020 PSU to both the Oracle Home and the database itself. Is it possible to deinstall or uninstall the Jan 2020 PSU from the database only (i.e. keep the Jan 2020 PSU installed in the Oracle Home at the binary level, but undo the changes made to the database itself)? If it is possible, I would like to start the database back up in its original Oracle Home (the one it ran from before the PSU was applied). Thanks!
Sure – you need to use the rollback option of either datapatch or catbundle.sql (if you really applied a PSU, that would mean either 11.2 or 12.1).
But what is the point of using a newer executable and not having the necessary SQL and PL/SQL changes in your db?
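For the datapatch route, a minimal sketch of such a rollback – hedged, please check the patch readme for the exact options in your release:
$ cd $ORACLE_HOME/OPatch
$ ./datapatch -rollback all -force -verbose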
Cheers,
Mike