I have encountered this issue multiple times now in my own environment. I searched MOS and tried different things, but I couldn't solve it yet. My upgrade fails with ORA-20001 during the datapatch run – and I'd like to show you how to bring the upgrade to a successful end.

It all starts with an upgrade
In my lab environment I often upgrade two databases in parallel: an 11.2.0.4 database and a 12.2.0.1 database. The 11.2.0.4 always upgrades flawlessly. But the 12.2.0.1 occasionally fails. This is the screen AutoUpgrade is showing me – but this is not an AutoUpgrade problem. It is not even an upgrade problem but a patching problem.
First of all, my source databases are patched with the most recent patch bundles. As I write this, that means the July 2020 patch bundles.
I use AutoUpgrade here simply because it is easier. Why should I type everything myself or click through DBUA screens? Especially as DBUA wouldn't be able to upgrade two databases in parallel.
This is my config file:
global.autoupg_log_dir=/home/oracle/upg_logs
#
# Database number 1
#
upg1.dbname=DB12
upg1.start_time=NOW
upg1.source_home=/u01/app/oracle/product/12.2.0.1
upg1.target_home=/u01/app/oracle/product/19
upg1.sid=DB12
upg1.log_dir=/home/oracle/upg_logs
upg1.upgrade_node=localhost
upg1.target_version=19
upg1.timezone_upg=no
upg1.restoration=no
#
# Database number 2
#
upg2.dbname=FTEX
upg2.start_time=NOW
upg2.source_home=/u01/app/oracle/product/11.2.0.4
upg2.target_home=/u01/app/oracle/product/19
upg2.sid=FTEX
upg2.log_dir=/home/oracle/upg_logs
upg2.upgrade_node=localhost
upg2.target_version=19
upg2.timezone_upg=no
upg2.restoration=no
Nothing unusual.
Upgrade is failing
And while the 11.2.0.4 database upgrades nicely, the 12.2.0.1 fails.
$ java -jar $OH19/rdbms/admin/autoupgrade.jar -config UP19.cfg -mode deploy
AutoUpgrade tool launched with default options
Processing config file ...
+--------------------------------+
| Starting AutoUpgrade execution |
+--------------------------------+
2 databases will be processed
Type 'help' to list console commands
upg> -------------------------------------------------
Errors in database [DB12]
Stage [DBUPGRADE]
Operation [STOPPED]
Status [ERROR]
Info [
Error: UPG-1400
UPGRADE FAILED [DB12]
Cause: Database upgrade failed with errors
For further details, see the log file located at /home/oracle/upg_logs/DB12/103/autoupgrade_20200730_user.log]
-------------------------------------------------
Logs: [/home/oracle/upg_logs/DB12/103/autoupgrade_20200730_user.log]
-------------------------------------------------
Job 102 completed
Let me find out what has happened.
autoupgrade_user.log
As the output suggests, I will start with: /home/oracle/upg_logs/DB12/103/autoupgrade_20200730_user.log
---------+-------------+
|     DB12|UPGRADE [95%]|
+---------+-------------+
2020-07-30 12:24:30.259 ERROR
DATABASE NAME: DB12
         CAUSE: ERROR at Line 756922 in [/home/oracle/upg_logs/DB12/103/dbupgrade/catupgrd20200730114721db120.log]
        REASON: Error: prereq checks failed!
        ACTION: [MANUAL]
       DETAILS:
2020-07-30 12:24:30.287 ERROR Database Upgrade Error in File [/home/oracle/upg_logs/DB12/103/dbupgrade/catupgrd20200730114721db120.log] on Database [/home/oracle/upg_logs/DB12/103/dbupgrade/catupgrd20200730114721db120.log]
2020-07-30 12:24:31.945 ERROR UPGRADE FAILED [DB12]
2020-07-30 12:24:31.945 ERROR Exception Error in Database Upgrade [UPG-1400#UPGRADE FAILED [DB12]]
2020-07-30 12:24:31.945 INFO End Upgrade on Database [DB12]
2020-07-30 12:24:34.524 ERROR UPGRADE FAILED [DB12]
2020-07-30 12:24:34.533 ERROR Exception Error in Database Upgrade [DB12]
2020-07-30 12:24:34.570 ERROR db12 Return status is ERROR
2020-07-30 12:24:34.572 ERROR Dispatcher failed: AutoUpgException [UPG-1400#UPGRADE FAILED [DB12]]
2020-07-30 12:24:34.574 INFO Starting error management routine
2020-07-30 12:24:34.585 INFO Ended error management routine
2020-07-30 12:24:34.590 ERROR Error running dispatcher for job 103
Cause: Database upgrade failed with errors
2020-07-30 12:24:34.590 ERROR Dispatcher failed:
Error: UPG-1400
UPGRADE FAILED [DB12]
Cause: Database upgrade failed with errors
For further details, see the log file located at /home/oracle/upg_logs/DB12/103/autoupgrade_20200730_user.log; aborting job 103 for database DB12
Hm … CAUSE: ERROR at Line 756922 in [/home/oracle/upg_logs/DB12/103/dbupgrade/catupgrd20200730114721db120.log]
The main worker’s catupgr0.log
This is the main worker’s upgrade logfile.
[..]
Serial   Phase #:105  [DB12] Files:1    Time: 3s
Serial   Phase #:106  [DB12] Files:1    Time: 0s
Serial   Phase #:107  [DB12] Files:1    Time: 23s

------------------------------------------------------
Phases [0-107]         End Time:[2020_07_30 12:24:07]
------------------------------------------------------

Grand Total Time: 1869s

*** WARNING: ERRORS FOUND DURING UPGRADE ***

 1. Evaluate the errors found in the upgrade logs
    and determine the proper action.
 2. Rerun the upgrade when the problem is resolved

REASON:
      ERRORS FOUND: During Upgrade
         FILENAME: /home/oracle/upg_logs/DB12/103/dbupgrade/catupgrd20200730114721db120.log AT LINE NUMBER: 754911
------------------------------------------------------
Identifier DATAPATCH_ 20-07-30 12:21:32
SCRIPT     = [/home/oracle/upg_logs/DB12/103/dbupgrade/catupgrd20200730114721db12_datapatch_upgrade.log]
ERROR      = [Error: prereq checks failed! ]
STATEMENT  = []
------------------------------------------------------
LOG FILES: (/home/oracle/upg_logs/DB12/103/dbupgrade/catupgrd20200730114721db12*.log)

Upgrade Summary Report Located in:
/home/oracle/upg_logs/DB12/103/dbupgrade/upg_summary.log

End of Input Commands
------------------------------------------------------

   Start of DataPatch Logs
------------------------------------------------------
stdout from running datapatch to install upgrade SQL patches and PSUs:
SQL Patching tool version 19.8.0.0.0 Production on Thu Jul 30 12:19:29 2020
Copyright (c) 2012, 2020, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_30237_2020_07_30_12_19_29/sqlpatch_invocation.log

Connecting to database...OK
Gathering database info...done
Bootstrapping registry and package to current versions...done

Error: prereq checks failed!
  verify_queryable_inventory returned ORA-20001: Latest xml inventory is not loaded into table

Prereq check failed, exiting without installing any patches.

Please refer to MOS Note 1609718.1 and/or the invocation log
/u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_30237_2020_07_30_12_19_29/sqlpatch_invocation.log
for information on how to resolve the above errors.

SQL Patching tool complete on Thu Jul 30 12:21:31 2020
stderr from running datapatch to install upgrade SQL patches and PSUs:

End of DataPatch Logs
------------------------------------------------------
So the upgrade itself completed successfully. But datapatch had an issue in the post-upgrade phase.
ORA-20001
And this seems to be the problem:
Error: prereq checks failed!
  verify_queryable_inventory returned ORA-20001: Latest xml inventory is not loaded into table
The most famous ORA-20001. On My Oracle Support you will find this note: MOS Note:1602089.1 – Queryable Patch Inventory – Issues/Solutions for ORA-20001: Latest xml inventory is not loaded into table. But I need more information to get an idea of what's wrong in my case.
And again, for the record: I am upgrading to a 19.8.0 home, but I have seen the same issue with 19.7.0, too.
sqlpatch_invocation.log
OK, let's move on to the next log file – /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_30237_2020_07_30_12_19_29/sqlpatch_invocation.log. Maybe it has more details?
[2020-07-30 12:21:29] verify_queryable_inventory returned ORA-20001: Latest xml inventory is not loaded into table
[2020-07-30 12:21:29] Error: prereq checks failed!
[2020-07-30 12:21:29] verify_queryable_inventory returned ORA-20001: Latest xml inventory is not loaded into table
[2020-07-30 12:21:29] Prereq check failed, exiting without installing any patches.
...
[2020-07-30 12:21:29] *** END final state end of patching ***
[2020-07-30 12:21:31] Please refer to MOS Note 1609718.1 and/or the invocation log
[2020-07-30 12:21:31] /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_30237_2020_07_30_12_19_29/sqlpatch_invocation.log
[2020-07-30 12:21:31] for information on how to resolve the above errors.
That is not really helpful.
But there is a sqlpatch_debug.log in the same directory. And it contains this error sequence:
[2020-07-30 12:21:29] QPI: Caught error while querying opatch_xml_inv
[2020-07-30 12:21:29] QPI: in REFRESH_OPATCH_DATA, ERROR code -29913 :
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-29400: data cartridge error
KUP-04095: preprocessor command /u01/app
[2020-07-30 12:21:29] QPI: Releasing lock SUCCESSFUL, qp_result=0 at : 30-JUL-20 12.21.29.797707000 PM +02:00
[2020-07-30 12:21:29] QPI: Releasing lock FAILED, qp_result=4 at : 30-JUL-20 12.21.29.798750000 PM +02:00
[2020-07-30 12:21:29] QPI: in get_pending_activity, ERROR code -20001 :
ORA-20001: Latest xml inventory is not loaded into table
Unfortunately, this doesn't get me any further either, as my error pattern isn't covered in MOS Note:1602089.1.
alert.log?
Maybe the alert.log has more information?
SERVER COMPONENT id=DP_UPG_BGN: timestamp=2020-07-30 12:19:27
2020-07-30T12:19:47.883482+02:00
XDB initialized.
2020-07-30T12:19:49.188762+02:00
QPI: opatch file present, opatch
QPI: qopiprep.bat file present
2020-07-30T12:21:32.945903+02:00
SERVER COMPONENT id=DP_UPG_END: timestamp=2020-07-30 12:21:32
No.
Basically I know that the inventory couldn’t be queried at the end of the upgrade. But my database is upgraded. Hence, no need to restore it. But how do I solve this?
Query the inventory or cleanup metadata
At this stage I wanted to save my upgrade. So I queried the inventory myself – and it takes quite a while, as you can see here:
SQL*Plus: Release 19.0.0.0.0 - Production on Thu Jul 30 15:18:48 2020
Version 19.8.0.0.0

Copyright (c) 1982, 2020, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.8.0.0.0

SQL> set timing on
SQL> select * from OPATCH_XML_INV;

XML_INVENTORY
--------------------------------------------------------------------------------
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <InventoryInstance> <ora

Elapsed: 00:00:35.22
SQL> exit
It may also be a solution to clean up the metadata with:
EXEC dbms_qopatch.clean_metadata();
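If you want to combine both approaches, here is a minimal SQL*Plus sketch of the sequence – it only uses the two calls already shown above (connect with sufficient privileges, e.g. as SYSDBA, and expect the query to take a while again):

SQL> set timing on
SQL> -- clean up the queryable patch inventory metadata
SQL> EXEC dbms_qopatch.clean_metadata();
SQL> -- then query the external table so the XML inventory gets loaded again
SQL> select * from OPATCH_XML_INV;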
Complete the upgrade
AutoUpgrade is still waiting for me – another reason why I don't use the DBUA, which typically is non-resumable. I will now resume my upgrade:
upg> lsj
+----+-------+-----------+---------+--------+--------------+--------+-----------------+
|Job#|DB_NAME|      STAGE|OPERATION|  STATUS|    START_TIME| UPDATED|          MESSAGE|
+----+-------+-----------+---------+--------+--------------+--------+-----------------+
| 102|   FTEX|POSTUPGRADE|  STOPPED|FINISHED|20/07/30 11:47|12:30:11|Completed job 102|
| 103|   DB12|  DBUPGRADE|  STOPPED|   ERROR|20/07/30 11:48|12:24:34|         UPG-1400|
+----+-------+-----------+---------+--------+--------------+--------+-----------------+
Total jobs 2

upg> resume -job 103
Resuming job: [103][DB12]
The output looks promising.
And yes, my database upgrade has now finished successfully.
Alternative Workaround
You may use the following workaround – it increases the timeout for reading the inventory:
alter system set "_xt_preproc_timeout"=180 scope=both;
In this case, I set it to 180 seconds.
Annotation:
In a previous version of this blog post, I recommended setting _bug27355984_xt_preproc_timeout instead of _xt_preproc_timeout. Please use _xt_preproc_timeout, since otherwise the upgrade to Oracle Database 21c and 23c will fail due to this parameter in your spfile.
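If you followed the old recommendation, a minimal sketch of the swap could look like this – assuming the old underscore parameter is still in your spfile (the reset call simply removes it again):

SQL> -- remove the old underscore parameter from the spfile (only if you had set it)
SQL> alter system reset "_bug27355984_xt_preproc_timeout" scope=spfile;
SQL> -- set the recommended parameter instead (180 seconds, as in the example above)
SQL> alter system set "_xt_preproc_timeout"=180 scope=both;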
Summary
The root cause of this issue may have been incorrect metadata from a previous patch run. Let me emphasize again that this is not an upgrade or AutoUpgrade error. It happens when datapatch gets invoked.
I'm very happy that AutoUpgrade and the command-line upgrade with "dbupgrade -R" are both resumable. Otherwise I wouldn't have had a chance to fix this so easily. In case you still use the DBUA, you can still jump in with "dbupgrade -R" and complete the upgrade after cleaning up the metadata.
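For reference, a minimal sketch of the "dbupgrade -R" route – assuming the environment points to the new 19c home from my config file above and the metadata has been cleaned up as described:

$ export ORACLE_HOME=/u01/app/oracle/product/19
$ export ORACLE_SID=DB12
$ export PATH=$ORACLE_HOME/bin:$PATH
$ # rerun the command-line upgrade; -R resumes from the phase where it failed
$ dbupgrade -R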
Why do I write such a blog post? Well, if I hit such an issue with my limited number of upgrades, you may see it one day, too. And the solution is simple. But as I couldn't find the key in MOS, I wanted to show you here how easily you can resume a seemingly failed database upgrade – even when the upgrade itself completed fully.
Further Links and Information
- Opatch … oh Opatch … why is Opatch so stubborn? – Dec 15, 2018
- MOS Note:1602089.1 – Queryable Patch Inventory – Issues/Solutions for ORA-20001: Latest xml inventory is not loaded into table
–Mike
Hi.
I am trying to upgrade from 11.2.0.4 with the last PSU applied to 19c with the latest PSU applied, but I am stuck on an error: the DBMS_STATS package body does not compile.
I haven't found any solution to this error.
Regards.
Please open an SR and check with Oracle Support.
Make sure you are upgrading to 19.12 (I guess you are upgrading to 19c).
If you are using AutoUpgrade, collect the logs with “java -jar autoupgrade.jar -config yourconf.cfg -zip” – add the alert.log and “opatch lsinventory” to avoid ping-pong.
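A minimal sketch of that log collection, assuming your config file is called yourconf.cfg and ORACLE_HOME points to the new home:

$ # package the AutoUpgrade logs
$ java -jar $ORACLE_HOME/rdbms/admin/autoupgrade.jar -config yourconf.cfg -zip
$ # capture the patch inventory to attach to the SR as well
$ $ORACLE_HOME/OPatch/opatch lsinventory > opatch_lsinventory.txt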
Cheers,
Mike
Hi Mike
Hypothetical question here: what if you get this error and you exit out of AutoUpgrade? Do you just re-run AutoUpgrade deploy and it will resume from where you left off? Will it be the same if your AutoUpgrade was aborted due to a server reboot or something like that?
Hi Edwin,
you just restart AU with the same command – it has a memory via the logs and via tracking information within the database.
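For reference, a minimal sketch of such a restart with the config used in this post – the same deploy command as at the beginning, followed by a look at the job list:

$ java -jar $OH19/rdbms/admin/autoupgrade.jar -config UP19.cfg -mode deploy
upg> lsj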
Cheers,
Mike
What about running EXEC dbms_qopatch.clean_metadata(); before the upgrade, is that not an option?
Try it – I don’t think so.
Otherwise let me know please.
Cheers,
Mike
Hi Mike,
I faced the same error UPG-1400, but it references "catconExec failed during bootstrap" (see below). There is no ORA-* error in the log file, so it is difficult to find and focus my search.
Do you have any idea, Mike? Thanks.
ERROR Exception Error in Database Upgrade [UPG-1400#UPGRADE
ERROR Dispatcher failed autoupgexception upg-1316 “UPG-1400”
ERROR Dispatcher failed: AutoUpgException [Database job failed with errors] – AutoUpgDispatcher.run
oracle.upgrade.autoupgrade.utils.errors.UPG1400: AutoUpgException [Database job failed with errors]
REASON: Error: prereq checks failed!
Datapatch Error: prereq checks failed!
Bootstrapping registry and package to current versions…
catconExec failed during bootstrap
Error: prereq checks failed!
catconExec failed during bootstrap
Prereq check failed, exiting without installing any patches.
Hi Djamel,
I see your point – unfortunately, I have no black-magic solution. Please work with Support on an SR.
Thanks
Mike
You are AWESOME 🙂
Thanks 😉
Mike
We got the same error; the above solution didn't work for us (maybe due to HP-UX). An SR was opened, but their solution was not that effective, so we resolved it by making changes in the bat file.
OS: HP-UX
Path: …../19.0.0.0/db/QOpatch
Edit qopiprep.bat and comment out the part where it was trying to set the env:
#LANG_STR=`locale -a | grep -i 'en_us' | grep -i 'utf' | grep '8'`
#if [ $? -eq 0 ] ; then
#  LANG="${LANG_STR}"; #Use a valid string for each supported platform.
#else
#  LANG="en_US.utf8"; #if there was any utf-8 valid string, use default for Lin
#fi;
#unset LANG_STR;
>>> This resolved our issue. We resumed the job and it completed.
>> In our case all the above errors were the same, but we also found that in qopatch_log.log ($ORACLE_HOME/rdbms/log/qopatch_log.log) there was an error:
KUP-04095: preprocessor command /app/oracle/product/19.0.0.0/db/QOpatch/qopiprep.bat encountered error ”
Warning! One or more of your selected locales are not available.
Please invoke the commands “locale” and “locale -a” to verify your
We ran ./datapatch -verbose manually after setting the env by hand (for Linux, LANG is different):

export LANG=en_US.utf8
export LC_CTYPE=en_US.utf8
export LC_ALL=en_US.utf8

It succeeded. So now we know that somehow HP-UX or the script was not able to set the env correctly – that's why we edited the bat file.
Thanks Gurjeet!
And thanks for sharing – plus, sorry for the inconvenience.
This error is really obscure and can have so many reasons …
Mike
Hi Mike, I got stuck on the same error and, after coming here, I decided to follow the logs – it turns out I had a permission problem, don't ask why and how:
Error: prereq checks failed!
Could not open sqlpatch history file /software/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_history.txt for writing
So I corrected the permissions (changed owner:group to the user doing the upgrade), restarted the job, and guess what? Everything finished beautifully:
upg> Job 103 completed
——————- Final Summary ——————–
Number of databases [ 1 ]
Jobs finished [1]
Jobs failed [0]
Jobs restored [0]
Jobs pending [0]
—- Drop GRP at your convenience once you consider it is no longer needed —-
And I'm happy.
Thanks for the blog!
Thanks for the update, William – and glad you could solve it!
Cheers,
Mike
Just sharing my experience:
I just had exactly the same error while upgrading a 12.2 DB to 19.8. Same XML error as yours.
The cause might be different, as I'm using a soft link for the 19c home (/u01 is running out of space…) and it seems 19c does not like a soft link in the path when executing datapatch.
MOS gave a solution which could help a normally running DB overcome this by pointing the OPatch-related directory objects in the database from the normal $ORACLE_HOME/OPatch etc. to the absolute path.
But I don't think it could help, as those directories got created/updated during the upgrade process…
I might just try to put things back in /u01 and do the upgrade then.
Hi Ocean,
thanks for the hint – and I think we have some checks for soft links. But it is hard to cover everything.
Cheers,
Mike