What happened so far on my Journey to the Cloud?
- Part I – Push a Button (Dec 3, 2015)
- Part II – Switch On/Off and Remove (Dec 4, 2015)
- You are here! ==> Part III – Patch, patch, patch (Dec 22, 2015)
- Part IV – Clean Up APEX (Jan 19, 2016)
- Part V – TDE is wonderful (Jan 28, 2016)
- Part VI – Patching does not work (Apr 18, 2016)
- Part VII – APEX is in CDB$ROOT again (Dec 21, 2016)
In case you were wondering about two weeks of inactivity: I haven’t stopped my journey. I just had to learn a bit, as my pretty naive approach caused me some trouble. Based on my last experience I had to find out why APEX left so many leftovers when I removed it the friendly way. And I wanted to patch my environment to a more recent PSU since (learning experience!) a new deployment does not come with the most recent PSU applied from scratch. So there’s still plenty of stuff to do.
Patching my DBaaS database with the most recent PSU
If you’ve ever been frustrated while applying a PSU then please read on – I guarantee you’ll smile more than once …
First of all what is my expectation from a cloud environment?
Yes, Push A Button.
Well … with the July 2015 PSU (12.1.0.2.4) this worked just fine, even though it took a bit of time to download the PSU. But it got applied flawlessly by just pushing a button.
But we have December 2015 already. So I would like to apply the October 2015 PSU (12.1.0.2.5) to my environment. And it turns out that currently this is the only one offered for application in the DBaaS environment.
First step: Execute the precheck
See the results …
Oh, it failed. But why? It doesn’t tell me anything about the reason. Was there no space left? A patch conflict? Something else? No idea, as a plain “Error” isn’t very useful.
And what’s next? Live without the PSU, even though I’ve been preaching for ages in our workshops that you MUST apply PSUs on a regular basis? Ok, so I have to sort this out.
Let’s find out
I logged in to my environment via SSH. Then:
Ah, and there’s a psu.n subdirectory – that looks promising. And there’s a file conflictlog in it – that does look even more promising.
It tells me:
```
Invoking prereq "checkconflictagainstohwithdetail"

ZOP-47: The patch(es) has supersets with other patches installed in the Oracle Home (or) among themselves.
ZOP-40: The patch(es) has conflicts with other patches installed in the Oracle Home (or) among themselves.

Prereq "checkConflictAgainstOHWithDetail" failed.

Summary of Conflict Analysis:
There are no patches that can be applied now.

Following patches have conflicts. Please contact Oracle Support and get the merged patch of the patches :
21359755, 21627366

Following patches will be rolled back from Oracle Home on application of the patches in the given list :
20281121

Whole composite patch Conflicts/Supersets are:
Composite Patch : 21359755
        Conflict with 21627366
        Bug Superset of 20281121

Detail Conflicts/Supersets for each patch are:
Sub-Patch : 21359755
        Conflict with 21627366
        Conflict details:
        /u01/app/oracle/product/12.1.0/dbhome_1/lib/libserver12.a:kzan.o
        /u01/app/oracle/product/12.1.0/dbhome_1/lib/libserver12.a:kspt.o
        /u01/app/oracle/product/12.1.0/dbhome_1/lib/libserver12.a:kcb.o
        /u01/app/oracle/product/12.1.0/dbhome_1/lib/libserver12.a:kcrfw.o
        /u01/app/oracle/product/12.1.0/dbhome_1/lib/libserver12.a:kokt.o
        Bug Superset of 20281121
        Super set bugs are:
        20281121

Patch failed with error code 1
```
Ah, my friend “Error” again. Now all is clear, isn’t it? There’s a conflict, as the previous installation deployed in the cloud seems to have gotten some extra treatment – which is good in one way but bad in another, as I will have to solve this now. And the Cloud Console doesn’t offer me anything to solve it.
I’m subscribed to our internal cloud mailing lists. And other people are way smarter than I am, so I found an email linking to an explanation in the official documentation (Known Issues for the Database Cloud As A Service). There are quite a few known issues, and it’s very useful to have such a document. And here we go with the solution to my problem:
- Applying or Prechecking the October 2015 PSU patch on a 12c service instance fails with a precheck error
Ok, I have two options: one in the graphical interface, the other on the command line. I’ll go with the first option, as this is meant to be Push A Button style and not typing on the command line.
So first I click on PATCH in the hamburger menu:
And then I chose the FORCE option.
May the force be with me.
The alternative would have been on the command line using the dbpatchm subcommand of the dbaascli utility:
Before applying the patch, set the value of the ignore_patch_conflict key to 1 in the /var/opt/oracle/patch/dbpatchm.cfg patching configuration file; for example:
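The edit itself is a one-liner. Here is a minimal sketch, demonstrated on a scratch copy of the file (on a real DBaaS VM the file is /var/opt/oracle/patch/dbpatchm.cfg and you would edit it as root before re-running the patch step via the dbpatchm subcommand of dbaascli):

```shell
# Demo on a scratch copy; on the DBaaS VM point cfg at
# /var/opt/oracle/patch/dbpatchm.cfg instead.
cfg=./dbpatchm.cfg
printf 'ignore_patch_conflict=0\n' > "$cfg"   # sample content for the demo

# flip the key to 1 so the conflict precheck is ignored
sed -i 's/^ignore_patch_conflict=.*/ignore_patch_conflict=1/' "$cfg"

grep '^ignore_patch_conflict' "$cfg"
# -> ignore_patch_conflict=1
```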
Et voilà …
It took a bit …
Actually a bit over 30 minutes … but finally …
The most recent PSU 12.1.0.2.5 from October 2015 has been applied to my DBaaS Cloud installation. It wasn’t that complicated – if I had known upfront to hit the magic “FORCE” button 😉
Finally let’s check:
```
COLUMN PATCH_ID    FORMAT 99999999
COLUMN PATCH_UID   FORMAT 99999999
COLUMN VERSION     FORMAT A12
COLUMN STATUS      FORMAT A12
COLUMN DESCRIPTION FORMAT A30

SELECT patch_id, patch_uid, version, status, description
  FROM dba_registry_sqlpatch
 ORDER BY bundle_series;
```
Remove APEX from my database and install it into my PDB
In my previous journey, removing APEX from my cloud database didn’t go quite well. I had leftovers afterwards, mainly from the Multitenant Self Service Provisioning APEX application (owner: C##PDBSS) and from the Database-As-A-Service Monitor.
Removing Multitenant Self Service Provisioning Application
From some internal email conversations (and from the readme.pdf included in the download of the Multitenant Self Service Provisioning Application) I learned that there’s a pdbss_remove.sql script. And a find showed me its unusual location in the cloud deployment:
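For illustration, the kind of search involved (shown here against a scratch tree, since the real path only exists on the DBaaS VM; on the VM you would run find from / as root):

```shell
# Build a scratch tree mirroring where the script turned up in my
# deployment, then locate it with find.
mkdir -p /tmp/demo/var/opt/oracle/log/pdbss/pdbss
touch /tmp/demo/var/opt/oracle/log/pdbss/pdbss/pdbss_remove.sql

find /tmp/demo -name 'pdbss_remove.sql'
# -> /tmp/demo/var/opt/oracle/log/pdbss/pdbss/pdbss_remove.sql
```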
So first of all I connected to my CDB$ROOT and started the removal script:
```
sqlplus / as sysdba

spool /tmp/pdbss_remove.log
@/var/opt/oracle/log/pdbss/pdbss/pdbss_remove.sql
```
It took 1:45 minutes to complete. A spool off is not necessary, as the script exits SQL*Plus at the end.
Then I started the APEX removal script from the $ORACLE_HOME/apex subdirectory:
```
cd $ORACLE_HOME/apex

sqlplus / as sysdba
@apxremov_con.sql
```
Well … but again something seemed to fail, as I ended up with a good bunch of invalid objects.
First I did recompile:
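A sketch of the recompile step, using Oracle’s stock utlrp.sql script (assuming a SYSDBA connection in CDB$ROOT; in a CDB you may want to run it across all containers via catcon.pl):

```sql
-- Recompile invalid objects with the stock recompilation script
@?/rdbms/admin/utlrp.sql

-- then re-check what is still invalid:
SELECT owner, object_name
  FROM dba_objects
 WHERE status = 'INVALID';
```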
But even then these 3 objects remained INVALID after the removal:
```
SQL> select owner, object_name from dba_objects where status='INVALID';

OWNER         OBJECT_NAME
------------- --------------------------
SYS           WWV_DBMS_SQL
FLOWS_FILES   WWV_BIU_FLOW_FILE_OBJECTS
APEX_040200   APEX$_WS_ROWS_T1
```
And furthermore, a good number of APEX user schemas did not get removed either.
```
CON_ID USERNAME
------ --------------------------
     1 APEX_REST_PUBLIC_USER
     1 APEX_PUBLIC_USER
     1 APEX_LISTENER
     1 APEX_040200
     3 APEX_REST_PUBLIC_USER
     3 APEX_PUBLIC_USER
     3 APEX_LISTENER
     3 APEX_040200

8 rows selected.
```
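The listing above presumably comes from a query along these lines (a sketch using the CDB_USERS view, run as SYSDBA in CDB$ROOT):

```sql
-- List APEX-related accounts across all containers
SELECT con_id, username
  FROM cdb_users
 WHERE username LIKE 'APEX%'
 ORDER BY con_id, username;
```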
I will update this blog post as soon as I have news about how to remove APEX flawlessly from the cloud deployment. The issue is under investigation. So for now, better not to remove APEX from the DBaaS cloud deployment.