Well, it is Friday night – and I'd rather be sleeping or doing something cool people tend to do on Friday nights. But instead I'm trying to verify a strange issue with DBMS_OPTIM_BUNDLE and the October 2018 Update for Oracle 12.2.0.1. I'm patching. Can you think of anything better on a Friday night than patching?
Of course I know that I'm not alone. I know at least two customers I work with right now who are doing maintenance jobs tonight as well. And I'm struggling with OPatch … oh OPatch … why is datapatch so stubborn?
My test setup
I have a very simple setup:
- Oracle 12.2.0.1 vanilla install
- Oracle 18.4.0
- April 2018 Update for Oracle 12.2.0.1
- October 2018 Update for Oracle 12.2.0.1
- The newest OPatch in my Oracle 18.4.0 home
- Two 12.2.0.1 databases, DB12 and CDB1
Patching
Then I patch my 12.2.0.1 environment with the April 2018 Update. This is simple and done within minutes:
cd /media/sf_TEMP/p27674384_122010_Linux-x86-64/27674384
$OH18/OPatch/opatch apply
Done.
datapatch -verbose … oops
And now I'd like to invoke datapatch -verbose to apply the SQL changes to the database.
$OH18/OPatch/datapatch -verbose
But this fails with a lovely error message:
SQL Patching tool version 12.2.0.1.0 Production on Fri Dec 14 23:43:19 2018
Copyright (c) 2012, 2018, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_26838_2018_12_14_16_43_19/sqlpatch_invocation.log

Connecting to database...OK
Bootstrapping registry and package to current versions...done
Queryable inventory could not determine the current opatch status.

Execute 'select dbms_sqlpatch.verify_queryable_inventory from dual'
and/or check the invocation log
/u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_26838_2018_12_14_23_43_19/sqlpatch_invocation.log
for the complete error.
Prereq check failed, exiting without installing any patches.

Please refer to MOS Note 1609718.1 and/or the invocation log
/u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_26838_2018_12_14_23_43_19/sqlpatch_invocation.log
for information on how to resolve the above errors.

SQL Patching tool complete on Fri Dec 14 23:43:24 2018
OK, let me check the log and hope for more information:
Bootstrapping registry and package to current versions...done
verify_queryable_inventory returned ORA-20001: Latest xml inventory is not loaded into table
Queryable inventory could not determine the current opatch status.
Execute 'select dbms_sqlpatch.verify_queryable_inventory from dual'
Executing the query gives me the exact same error message.
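For reference, this is roughly what that check looks like in SQL*Plus – the output below is reconstructed from the error quoted above, not a verbatim copy of my session:
SQL> select dbms_sqlpatch.verify_queryable_inventory from dual;

VERIFY_QUERYABLE_INVENTORY
--------------------------------------------------------------------------------
ORA-20001: Latest xml inventory is not loaded into table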
Solution possible?
Let me ask the 'oracle' … pardon … MOS – and MOS returns this note as the first result:
I quickly browse through the note and it dawns on me (even though dawn is still some hours away) that I may not be the only one seeing this error from time to time – and there are multiple causes for it.
But what should I do? I just randomly picked the first possible query and executed it:
select * from OPATCH_XML_INV;
And here I have my error sequence:
ERROR:
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-29400: data cartridge error
KUP-04095: preprocessor command /u01/app/oracle/product/12.2.0.1/QOpatch/qopiprep.bat
encountered error "Inventory load failed... OPatch cannot load inventory for the given Oracle Home.
LsInventorySession failed: Unable to create patchObject
Possible causes are:
"

no rows selected
Well, I especially like the "Possible causes are:" followed by empty lines. But luckily there's this note showing 18 different symptoms.
Minutes later I got stuck with symptom 16 / cause:
1. Using older version of opatch utility.
2. OPatch folder is not in the default location
I can definitely exclude cause 1 as I have the newest OPatch (or at least a much newer one than the one shipped with the April Update). But yes, I call datapatch from my 18c home. The OPatch is exactly identical between 12.2.0.1 and 18c.
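As a side note, a quick way to double-check that both homes really carry the identical OPatch is to compare the version output of each – a trivial check I'm adding here for completeness:
$OH18/OPatch/opatch version
$ORACLE_HOME/OPatch/opatch version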
But could this be the case?
I’m trying:
rm -rf $ORACLE_HOME/OPatch
cp -R $OH18/OPatch $ORACLE_HOME
and then I call datapatch again:
$ $ORACLE_HOME/OPatch/datapatch -verbose
SQL Patching tool version 12.2.0.1.0 Production on Fri Dec 14 23:57:02 2018
Copyright (c) 2012, 2018, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_30289_2018_12_14_23_57_02/sqlpatch_invocation.log

Connecting to database...OK
Bootstrapping registry and package to current versions...done
Determining current state...done
Current state of SQL patches:
Bundle series DBRU:
  ID 180417 in the binary registry and not installed in the SQL registry
Adding patches to installation queue and performing prereq checks...
Installation queue:
  Nothing to roll back
  The following patches will be applied:
    27674384 (DATABASE APR 2018 RELEASE UPDATE 12.2.0.1.180417)
Installing patches...
Patch installation complete.  Total patches installed: 1

Validating logfiles...
Patch 27674384 apply: SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27674384/22098633/27674384_apply_DB12_2018Dec14_23_57_11.log (no errors)
SQL Patching tool complete on Fri Dec 14 23:58:26 2018
Wow! That was the cause. I called datapatch from a different directory. And I could simply copy the entire OPatch directory to my current home – and then it worked.
I hope I don't dream of datapatch tonight … and I still wonder why the tool couldn't tell me the reason for the error straight away – or why datapatch cares where it is called from while opatch itself does not.
Why are you so stubborn, datapatch??
Goodnight! And MOS is sometimes pretty helpful – even on a Friday night.
More Information
- Why you should use the most recent version of OPatch (May 14, 2018)
- How to find out if a PSU (or BP or RU or RUR) has been applied? (Feb 2, 2016)
- July 2018 patches – delays and other issues (Jul 20, 2018)
- Patching my database with the July 2018 PSU, BP and RU (Jul 19, 2018)
- The magic OPatch patch number to download OPatch: 6880880 (see the sketch below).
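Rolling out a newer OPatch is just an unzip of patch 6880880 into the Oracle home – a minimal sketch, assuming the 12.2 Linux x86-64 download name and my staging directory from above:
cd $ORACLE_HOME
unzip -o /media/sf_TEMP/p6880880_122010_Linux-x86-64.zip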
–Mike
Hi Mike,
Thanks for sharing this.
I faced the same issue when applying the July PSU for Oracle 12cR1 (12.1.0.2). I even blogged about it here:
https://geodatamaster.com/2018/07/19/oracle-psu-july-2018-error-ora-20001-latest-xml-inventory-is-not-loaded-into-table/
I think this kind of change should be reflected in the HTML file that comes with the downloaded patches (where the patching procedure is documented) … unfortunately this is not the case.
Best Regards,
Emad
Hi Emad,
thanks for your feedback – and sorry to hear that you’ve had the same “lovely” experience I’ve had.
From the Twitter stream regarding my post I can see that most people package it together when they roll out their new homes and thus don't hit this.
And I agree that such things need to be documented. But I stopped asking for changes in the readmes …
Thanks!
Mike
Hi Mike
Have you seen this issue in 12.2?
Datapatch failed.
select dbms_sqlpatch.verify_queryable_inventory from dual;
ORA-20004: Job configuration failed as nodename=instance name= is not active
I did find bug 27169796 which suggests upgrading to 12.2 (which we are on), but there is still a fix for version 12.2 (which is included in the Jan 2019 PSU as per below).
[oracle@cltsadm01vm01 OPatch]$ ./opatch lsinventory|grep 27169796
27153641, 27161071, 27162405, 27163928, 27165231, 27169796, 27170305
Do you have any idea?
Jack,
no – unfortunately I have no idea. I found the same MOS note you must have found already – please open an SR.
Cheers,
Mike
Thanks a lot. It helped me.
Thanks Mike. When I downloaded the quarterly patches, the OPatch version had not changed from the previous quarter. I've been fighting this error for 2 days. I went through all the scenarios on MOS (waste of time). Your post reminded me to check for an updated OPatch version. Sure enough there's an update now and it works.
Glad that it works – and sorry for the inconvenience.
OPatch can be a pretty painful thing, I know.
Cheers.
Mike
Hi Mike, I was getting ORA-20001: Latest xml inventory is not loaded into table on a Windows 19c server and found, that oracle was blocking the %ORACLE_HOME%\rdbms\log\qopatch_log.log from a previous datapatch run on startup and thus preventing sqlpatch.bat from writing to it. Renaming the file solved the issue.
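A minimal sketch of that workaround – assuming the default log location Armin mentions and that no other process still holds the file:
ren %ORACLE_HOME%\rdbms\log\qopatch_log.log qopatch_log.log.old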
Thanks Armin – and I know, there are sooooo many potential reasons for this strange error.
Thanks for sharing!
Cheers,
Mike
I looked at your post, and it inspired me. I knew that OPatch was correct, so I had to dig to find the issue. In my case, I was running 19c, and it turned out that it was an environment problem from the connecting ssh client. I’m using the Ubuntu extension for Windows, and Oracle’s OPatch did NOT like the settings at all when I ran ./datapatch -verbose.
I tried it from ssh on a Mac, and it worked perfectly. Weirdorama!
Thanks Scott!
Cheers,
Mike
Yes, I am as frustrated as you are. Upgrading 12.1.0.2 to 19c using AutoUpgrade, we almost always hit this error. While your workaround from
https://mikedietrichde.com/2020/07/31/upgrade-fails-with-ora-20001-during-datapatch-run/?unapproved=32719&moderation-hash=0ba86a72a5ecf4894ec71ad6aaca5639#comment-32719
works in most cases, this time it doesn't.
SQL> select * from OPATCH_XML_INV;
ERROR:
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-29400: data cartridge error
KUP-04095: preprocessor command
/opt/oracle/19.0.0.0/dbhome_1/QOpatch/qopiprep.bat encountered error "pipe read
timeout"
AutoUpgrade stalls on this for the whole 2+ hours and gave up, reporting it had exceeded the timeout threshold. And yes, we have referred to the same MOS Note: 1602089.1
And we are looking at this. Can this be set on 12.1.0.2 before the upgrade? Or not?
For Symptom 23:
As a workaround, set the parameters below for the DB and then run datapatch:
*._bug27355984_xt_preproc_timeout=1000
*._enable_ptime_update_for_sys=TRUE
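For reference, one way to set these would be via ALTER SYSTEM with an spfile, followed by a restart before running datapatch – a sketch on my side rather than a verified procedure, and it assumes these underscore parameters actually exist in your version (values taken from the lines above):
alter system set "_bug27355984_xt_preproc_timeout"=1000 scope=spfile;
alter system set "_enable_ptime_update_for_sys"=TRUE scope=spfile;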
P.S.:
Regarding Oracle Support, the response is really depressing and frustrating. We have SRs sitting there for 5+ days with no update of any sort.
Hi Edwin,
first, I would need the SR number. And second, see here:
https://blogs.oracle.com/support/request-support-management-attention-sr-attention
This is the way to get management attention for an SR when you are not happy with the progress.
Apologies, but our team is not "Oracle Support". We are the PMs trying to help as much as our time allows. And we are happy to assist whenever possible. But we can't replace Oracle Support – we help our colleagues when they ask us. Since I haven't seen this issue, and since this isn't an "upgrade issue" but a "datapatch problem", this may be the simple explanation of why we never got involved. You see this problem in AutoUpgrade obviously, but the underlying problem is most likely the patch inventory.
In the workaround you mentioned, you set the timeout very high. It should be fine with 180 seconds (that's already 3 minutes of useless waiting time for reading the patch inventory). Having it at 1000 is almost 17 minutes – that's a lot.
Cheers,
Mike
Hi Mike,
We’re all done with the AutoUpgrade within the business acceptable time limit.
After further reading, I managed to make use of the add_during_upgrade_pfile option to get around the problem, and that seems to work fine. This and setting "_with_subquery"=materialize as per your blog:
https://mikedietrichde.com/2011/05/23/time-zone-upgrade-might-be-slow/
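For reference, here is a minimal sketch of what such a setup could look like – the file name, path, and the config prefix are purely illustrative assumptions, and the values simply follow what is discussed above:
# during_upgrade.ora – hypothetical pfile fragment handed to AutoUpgrade
_bug27355984_xt_preproc_timeout=180
_with_subquery=materialize

# matching entry in the AutoUpgrade config file ("upg1" is just an example prefix)
upg1.add_during_upgrade_pfile=/home/oracle/during_upgrade.ora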