In the previous blog posts you could read how to create the configuration file for the AutoUpgrade tool, adjust it, and tweak the init parameters. Then I described the tool's different modes. In this blog post I'd like to give you some insights into AutoUpgrade: Where do you find all the logfiles?
AutoUpgrade – Step-by-step
- The new AutoUpgrade Utility – Download, documentation and supported versions
- Create and adjust the config file for AutoUpgrade
- Config file for AutoUpgrade – Advanced options
- Config file for AutoUpgrade – Tweaking init parameters
- AutoUpgrade: ANALYZE, FIXUPS, UPGRADE and DEPLOY modes
- AutoUpgrade: Where do you find all the logfiles?
- UPG: The AutoUpgrade Command Line Interface
- Upgrading Multitenant databases with AutoUpgrade
- Moving to a new server with AutoUpgrade
- How to tweak the hidden settings in AutoUpgrade
- AutoUpgrade and Data Guard, RAC, Restart and non-CDB to PDB
- AutoUpgrade and Wallets
The Config File
In the config file you set up earlier, you needed to define a global logging directory and, in addition, one for each database.
global.autoupg_log_dir=/home/oracle/logs
upg1.log_dir=/home/oracle/logs/upgr
That’s it.
You need to specify the global.autoupg_log_dir and upg1.log_dir parameters. Without these defined, the config file won't be accepted by AutoUpgrade.
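For reference, a complete minimal config file could look like the sketch below. The SIDs and paths are illustrative, and upg1.sid, upg1.source_home and upg1.target_home are the usual companion parameters – check the Upgrade Guide for the full parameter list.

```
global.autoupg_log_dir=/home/oracle/logs
upg1.sid=UPGR
upg1.source_home=/u01/app/oracle/product/12.2.0.1
upg1.target_home=/u01/app/oracle/product/19
upg1.log_dir=/home/oracle/logs/upgr
```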
But you may ask yourself: Which files will the tool write, and where do I find the files I’m looking for? And which files does Support want in case something fails?
The Documentation
The log structure is pretty well described in the Database Upgrade 19c Guide. But nevertheless, I will explain a bit more about it here.
The Directory Structure
First of all, the directory structure unfolds under the global.autoupg_log_dir. You could put the log_dir for individual databases in a separate structure, but I strongly recommend against this. Let me give you an overview of the entire logging structure – and then I will describe the different directories and point to the most important logs.
In the tree below, I used the database names DB12 and UPGR to avoid placeholders, and followed the recommended approach of running -mode analyze first, followed by -mode deploy.
/cfgtoollogs
    /upgrade/auto
        /config_files
        /lock
        /status
/DB12
    /100
        /prechecks
    /102
        /preupgrade
        /prechecks
        /grp
        /prefixups
        /drain
        /dbupgrade
        /postchecks
        /postfixups
        /postupgrade
    /temp
        /DB12
            Timezone
/UPGR
    /101
        /prechecks
    /103
        /preupgrade
        /prechecks
        /grp
        /prefixups
        /drain
        /dbupgrade
        /postchecks
        /postfixups
        /postupgrade
    /temp
        /UPGR
            Timezone
/cfgtoollogs
Under this directory you will find a static structure, /upgrade/auto, and then three subdirectories. The autoupgrade.jar tool uses this structure for the tool itself, not for individual database information. From the diagnostic perspective, you will find the most important logfiles in the /upgrade/auto subdirectory:
-rwx------. 1 oracle dba    158 Jul 15 22:24 autoupgrade_err.log
drwx------. 2 oracle dba   4096 Jul 15 23:06 status
-rwx------. 1 oracle dba  12376 Jul 15 23:06 state.html
-rwx------. 1 oracle dba    781 Jul 15 23:06 autoupgrade_user.log
-rwx------. 1 oracle dba 683294 Jul 15 23:06 autoupgrade.log
drwx------. 2 oracle dba     22 Jul 15 23:06 config_files
drwx------. 2 oracle dba      6 Jul 15 23:06 lock
The most important and main log file is the autoupgrade.log. The autoupgrade_user.log contains information from the interaction with you. For instance, when I commented out the global logging directory parameter, the tool complained and exited – and this information from the command prompt gets written to the autoupgrade_user.log. The other subdirectories contain information about the status of each phase and handle locking – and the tool writes a state.html with the current status.

/cfgtoollogs/upgrade/auto/state.html
The ./status subdirectory contains files indicating the success or failure of a phase via .success and .failure file suffixes. You will also find two JSON files there which get updated constantly throughout the upgrade: status.json and progress.json.
If you’d like to monitor the status during an upgrade, please see this blog post:
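If you just want to peek at those JSON files from the command line, here is a quick sketch. The status path and the keys in the mock file are illustrative – inspect your own status.json for the exact structure your AutoUpgrade version writes.

```shell
# Stand-in for /home/oracle/logs/cfgtoollogs/upgrade/auto/status --
# the mock JSON below only illustrates the idea; real keys may differ.
STATUS_DIR="$(mktemp -d)"
printf '{"totalJobs": 1, "finishedJobs": 0}\n' > "$STATUS_DIR/status.json"

# Pretty-print the current state; rerun (or wrap in `watch`) while
# the upgrade is running, as the files are updated continuously.
python3 -m json.tool "$STATUS_DIR/status.json"
```

During a real run you would point STATUS_DIR at the status directory underneath your global.autoupg_log_dir instead of a temporary mock.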
/<database>
Besides the general logs from autoupgrade.jar, you will find individual logs in the subtree for each individual database. It unfolds into subdirectories for each job, simply named with the job number (counting starts at 100), and another one called /temp.
The /temp directory contains temporary initialization parameter files the tool creates for the different phases of the upgrade. It is also used to maintain certain parameters and make sure these are set correctly after the upgrade. cluster_database is a typical example, as it needs to be switched to FALSE during the upgrade, but the tool reverts it to the previous setting after the entire process has completed. Additionally, it contains SQL scripts the tool uses during the upgrade, such as <sid>_objcompare.sql to compare invalid objects before and after.
More important for you may be the job number subdirectories, starting with 100. If you follow the best practice of running -mode analyze first, followed by a -mode deploy upgrade afterwards, you will find two subdirectories, one for each job: 100 and 101.
I made a correction after the initial post: when you run -mode analyze, you will find only a /prechecks subdirectory – for job 100 for DB12, and the same for job 101 for UPGR in my example above. Initially I had listed the same tree as for jobs 102 and 103.
In the root of the job number directory, you can identify the individual autoupgrade.log for this database plus an additional autoupgrade_user.log. The latter contains not only some useful information about this particular database but also the upgrade progress:
2019-07-15 22:45:51.087 INFO [Upgrading] is [59%] completed for [db12]
+---------+-------------+
|CONTAINER|   PERCENTAGE|
+---------+-------------+
|     DB12|UPGRADE [59%]|
+---------+-------------+
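To follow these progress lines during a long-running upgrade, you can simply filter the per-job autoupgrade_user.log. The log path below is an illustrative stand-in, and the sample line mimics the output shown above.

```shell
# Stand-in for .../DB12/102/autoupgrade_user.log with one sample
# progress line copied from a real run.
USER_LOG="$(mktemp)"
echo '2019-07-15 22:45:51.087 INFO [Upgrading] is [59%] completed for [db12]' > "$USER_LOG"

# Filter for the rolling percentage lines; on the real file you would
# use `tail -f` piped into the same grep while the upgrade runs.
grep -E 'is \[[0-9]+%\] completed' "$USER_LOG"
```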
Even more important: in the subdirectories preupgrade, prechecks, grp, prefixups, drain, dbupgrade, postchecks, postfixups and postupgrade you find detailed information about each phase of autoupgrade.jar. I would like to point out only the most important ones, especially in relation to a regular command line upgrade.
/<database>/prechecks
In the /prechecks directory you can identify a preupgrade.log – but it looks different from the output of preupgrade.jar when you call that directly from ?/rdbms/admin. In addition, the tool generates an HTML file as well:

/DB12/102/prechecks/db12_preupgrade.html
/<database>/dbupgrade
In the /dbupgrade directory you will find the standard upgrade logs from each worker and – at the end – an upg_summary.log. These files are identical to the ones you would see when you do a regular command line upgrade with dbupgrade or catctl.pl.
In my example these files are in the /dbupgrade subdirectory:
-rwx------. 1 oracle dba    30637 Jul 15 22:30 phase.log
-rwx------. 1 oracle dba      500 Jul 15 22:30 catupgrd20190715222614db12_catcon_7698.lst
-rwx------. 1 oracle dba        0 Jul 15 22:49 catupgrd20190715222614db12_datapatch_upgrade.err
-rwx------. 1 oracle dba     1304 Jul 15 22:51 catupgrd20190715222614db12_datapatch_upgrade.log
-rwx------. 1 oracle dba     4612 Jul 15 22:51 during_upgrade_pfile_catctl.ora
-rwx------. 1 oracle dba  9206559 Jul 15 22:52 catupgrd20190715222614db121.log
-rwx------. 1 oracle dba  4558045 Jul 15 22:52 catupgrd20190715222614db122.log
-rwx------. 1 oracle dba  6854830 Jul 15 22:52 catupgrd20190715222614db123.log
-rwx------. 1 oracle dba    36860 Jul 15 22:53 catupgrd20190715222614db12_stderr.log
-rwx------. 1 oracle dba      516 Jul 15 22:53 db12_autocompile20190715222614db12_catcon_13694.lst
-rwx------. 1 oracle dba     1912 Jul 15 22:59 db12_autocompile20190715222614db12_stderr.log
-rwx------. 1 oracle dba    11801 Jul 15 22:59 autoupgrade20190715222614db12.log
-rwx------. 1 oracle dba    31024 Jul 15 22:59 db12_autocompile20190715222614db120.log
-rwx------. 1 oracle dba      390 Jul 15 22:59 upg_summary_report.pl
-rwx------. 1 oracle dba     1359 Jul 15 22:59 upg_summary.log
-rwx------. 1 oracle dba 32832308 Jul 15 22:59 catupgrd20190715222614db120.log
-rwx------. 1 oracle dba       46 Jul 15 22:59 upg_summary_report.log
The logs of the four upgrade workers are the catupgrd*db120.log through catupgrd*db123.log files. You can of course tail -f them during the upgrade as well. The worker log ending in 0 belongs to the main worker, which is active in both serial and parallel phases.
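After (or during) the run, you may want a quick check of the worker logs for errors, just as you would after a manual catctl.pl upgrade. The directory and file names below are illustrative stand-ins for a real /dbupgrade tree.

```shell
# Stand-in for .../DB12/102/dbupgrade with two tiny mock worker logs.
DBUP_DIR="$(mktemp -d)"
printf 'Phase 1 done\n' > "$DBUP_DIR/catupgrd20190715222614db120.log"
printf 'ORA-01722: invalid number\n' > "$DBUP_DIR/catupgrd20190715222614db121.log"

# List every worker log that contains an ORA- error
grep -l 'ORA-' "$DBUP_DIR"/catupgrd*.log
```

On a real system, point DBUP_DIR at the job's /dbupgrade subdirectory and also check upg_summary.log for the overall result.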
Interaction with Oracle Support
In case you open an SR, the best approach is to zip the entire log directory and upload it to My Oracle Support (MOS). This way you ensure that all required information is present. The tree contains information about the tool's activities plus about each individual database.
Of course, if you upgraded 10 databases in one run but only one had issues, then you should include only the /cfgtoollogs and the /<database_with_issues> directories.
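A sketch of packaging those two trees for an SR upload, assuming the directory layout shown earlier. The paths are illustrative, and I use tar here – a zip of the same directories works just as well.

```shell
# Stand-in for /home/oracle/logs with the tool tree plus one database tree.
LOG_ROOT="$(mktemp -d)"
mkdir -p "$LOG_ROOT/cfgtoollogs/upgrade/auto" "$LOG_ROOT/DB12/102/dbupgrade"
touch "$LOG_ROOT/cfgtoollogs/upgrade/auto/autoupgrade.log"

# Package only the tool logs and the affected database's tree
tar -C "$LOG_ROOT" -czf "$LOG_ROOT/sr_upload.tar.gz" cfgtoollogs DB12
tar -tzf "$LOG_ROOT/sr_upload.tar.gz"
```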
Also, if you contact us, the Upgrade Development Team, directly, please make sure you opened an SR first. We work closely with our Support engineers – and this way the logs are all in one place.
Further Information and Links
- AutoUpgrade: Refresh Status Information automatically
- Oracle Database 19c Upgrade Guide – Logfile Structure
–Mike
Hello Mike,
I am currently testing the AutoUpgrade utility with my test DB called UPGR12T.
I was able to successfully upgrade it to Oracle 19.4.0.0. After the successful upgrade I restored an Oracle 11.2.0.4 copy of the same test DB UPGR12T (copy was taken prior to the upgrade to Oracle 19.4.0.0).
I changed some parameters within the test DB UPGR12T and wanted to upgrade it again with AutoUpgrade to Oracle 19.4.0.0 using the same config file (and thus also the same log directories). Unfortunately I received the message “Previous execution found loading latest data”.
I know I did a successful upgrade of the test DB UPGR12T, but my current situation is that the test DB UPGR12T is again an Oracle 11.2.0.4 DB after its restore, so I should be able to restart the upgrade. I know that I can restart it if I remove all log directories manually, but in my opinion this should not be the solution for this issue, as there are cases where you would like to keep those logfiles. The AutoUpgrade utility does not seem to have options to administrate the logfiles/log directories.
I opened SR 3-20994397831 for this issue and the solution given was “remove the autoupg_log_dir and retry AutoUpgrade”, but as said, in my opinion this is not a nice solution for this situation.
PS : While creating the SR for the problem type “Database Upgrade” I had to choose one of the options below as best option in order to answer Question 1 :
– Upgrading database to 10gR2 using DBUA
– Upgrading database to 10gR2 using manual method
– Other methods of upgrade / platform migration / Database bit conversion / OS upgrade
– Link to download 10.2 patchsets
You can see that the list of the possible options is outdated as they are all still referring to Oracle 10gR2. I presume this list should be reviewed and should also include the “AutoUpgrade” option.
Greetings,
Chris
Chris,
cleaning out the log directory is one option.
The other is:
java -jar $OH19/rdbms/admin/autoupgrade.jar -config /home/oracle/DB12.cfg -mode analyze -clear_recovery_data
This way the next run gets a new job ID. If your first run was job 100, the next will be 101 – with a new 101 directory.
Does this help?
Mike
PS: Will be on the blog soon as well 😉
Hello Mike,
Thank you very much for your reply! I was not yet aware of the parameter “-clear_recovery_data”, which is not documented in detail within the Oracle Database Upgrade Guide. This parameter was also not proposed by the Support Engineer in the SR which I created for this issue.
After having added “-clear_recovery_data” to the AutoUpgrade command, I was indeed able to continue, as the last job from the previous successful upgrade finally got removed from the job list (details from this job in the console were: Job# 103 – Stage => POSTUPGRADE – Operation => STOPPED – Status => FINISHED). A new job got created (Job# 104 – Stage => PRECHECKS) instead.
I noticed the parameter “-clear_recovery_data” made sure that the files progress.json and status.json were recreated from scratch within the directory “status” from the global.autoupg_log_dir.
However, one question which bothers me now is: wouldn’t executing AutoUpgrade with “-clear_recovery_data” do any harm in case other Oracle DB upgrades are executed in parallel, writing to the same global.autoupg_log_dir and thus also to the same progress.json and status.json files? Wouldn’t this cause the status and progress to be lost for the other upgrades running in parallel at the time AutoUpgrade is executed with “-clear_recovery_data”?
At this moment I configured the global.autoupg_log_dir in a way that it is a generic directory for all upgrades being executed with AutoUpgrade but wouldn’t it then be better to make sure the global.autoupg_log_dir is unique for every Oracle Database to be upgraded instead of having a global.autoupg_log_dir which is used for all Oracle Databases to be upgraded with AutoUpgrade ?
Greetings,
Chris
Thank you very much Mike for this information as this indeed solved my issue. I was not aware about the purpose of the parameter “-clear_recovery_data”.
I have to say there is a lack of information about the parameter “-clear_recovery_data” in the Upgrade Guide. Yes, it is mentioned as a possible parameter under https://docs.oracle.com/en/database/oracle/oracle-database/19/upgrd/autoupgrade-command-line-parameters.html#GUID-37E47B92-642E-4316-877F-2449B4A4B815, but there is no explanation in the Upgrade Guide of what this parameter actually does. Is it possible to adapt the Upgrade Guide in a way that a clear explanation is given for the parameter “-clear_recovery_data”?
Greetings,
Chris
Hi Chris,
I will try to complete the blog post series about AutoUpgrade sometime next week (hopefully).
In between, scroll down here to the end to see how the parameter can be used. And I’ll send your feedback to the team and our doc writer, too.
https://mikedietrichde.com/hands-on-lab-autoupgrade-to-oracle-19c/
Cheers,
Mike
Hello Mike,
Am I correct in stating that DBs can’t be upgraded in parallel with AutoUpgrade in case they all have the parameter “global.autoupg_log_dir” set to the same directory?
In my case I have the parameter “global.autoupg_log_dir” set to the same directory for two DB’s which I am upgrading at the same time.
However in case I launch “autoupgrade” on the second DB, while an upgrade via AutoUpgrade is already running on the other DB, then I receive the following error :
“AutoUpgrade is already running using this configuration. Exiting.”
So does this mean that, in order to upgrade two or more Oracle DBs in parallel with AutoUpgrade, one should make sure that the parameter “global.autoupg_log_dir” is not set to the same directory for all involved DBs being upgraded at the same time?
I had in mind that the parameter “global.autoupg_log_dir” would need to be set, as you also described in this blog, in a way that it is a generic- and unique directory on an Oracle DB server hosting all AutoUpgrade log directories from all Oracle DB’s upgraded with AutoUpgrade on that server. If this setting however does not allow upgrades to be executed in parallel then such a setting has a big drawback.
PS : I also asked the same question in SR 3-20994397831 but I have not yet received an answer on my question.
Greetings,
Chris
Hi Chris,
you can upgrade as many as you want in parallel. The global.autoupg_log_dir is just the “root” of the logs. Every database gets a separate log tree underneath it. When you set it (as I do here: https://mikedietrichde.com/hands-on-lab-autoupgrade-to-oracle-19c/) to the same root for each database, then the following happens:
e.g. global.autoupg_log_dir=/home/oracle/logs
upg1…. same
upg2 … same
Then you’ll get:
/home/oracle/logs
/home/oracle/logs/SID1
/home/oracle/logs/SID2
Actually, I prefer it this way as it avoids any issues with uppercase/lowercase/whatever – the tool does it right 🙂 Trust us 🙂
And no, the tool is really designed to upgrade as many as you want. And I will cover this in one of the upcoming blog posts. Hence, I’m very happy for your comment as it shows me that we may need to explain more.
Cheers,
Mike
And btw, this is one of the many reasons why AutoUpgrade is far superior to DBUA. While DBUA can only upgrade one database at a time from within one home, AutoUpgrade can upgrade hundreds in parallel – if your server can digest that 🙂
Cheers,
Mike
Hello Mike,
FYI, I received the message “AutoUpgrade is already running using this configuration. Exiting.” in the following situation where the same “autoupgrade.jar” file was used at the same time :
– DB “UPGR12T” on Server “DEV2003” => AutoUpgrade running (in mode “analyze”, “deploy”, “fixups” or “upgrade”)
=> Config file UPGR12T_19400.cfg
– DB “DBA12T” on same Server “DEV2003” => Start Autoupgrade (in mode “analyze”, “deploy”, “fixups”, or “upgrade”)
=> Config file DBA12T_19400.cfg
The issue above comes from the fact that there is a limitation that only one “AutoUpgrade” instance can run at a time, and I want to run it twice as I have two separate config files (one for each DB).
I can run the upgrades of the DBs “UPGR12T” and “DBA12T” in parallel in case I would only use one “AutoUpgrade” instance and thus also only one config file (instead of two separate config files for each DB), but then every phase will always be executed on both DBs in parallel (e.g. “analyze”, “deploy”, “fixups”, “upgrade”).
We however have situations where we want to run, on the same server, AutoUpgrade in mode “analyze” for DB1 while AutoUpgrade might already be running in mode “deploy” for another DB, let’s say DB2. This is something which does not seem to be possible with the usage of one AutoUpgrade installation on a server.
According to me the only way to get, at this moment, around the “only one AutoUpgrade can be run at a time” limitation on a server is to :
– Not use a global AutoUpgrade directory on the server where “autoupgrade.jar” is stored- and used for all Oracle DB upgrades on that server
– Store “autoupgrade.jar” separately in different directories for every DB on the same server
– Use an individual AutoUpgrade config file for every DB to be upgraded on the same server
Are my statements correct ?
PS : You can find more detailed information within the SR created for this issue => “SR 3-20994397831”
Greetings,
Chris Smids
Hi Chris,
I see what has happened. And yes, it is intended that you run only one autoupgrade from one Oracle Home on the same server.
The idea of autoupgrade is really to give away control – and remove the “upgrade work” from your shoulders.
I fully understand that people have requirements such as:
– I want to run multiple instances of autoupgrade in parallel for different tasks
– I want more control …
🙂
Sure thing. We see all this. But the intention is to schedule the runs. And you are fully correct – you can’t use the same config file to run just an ANALYZE on database1 while a DEPLOY or UPGRADE is running on database2 on the same server. But the way we designed it is that you run an ANALYZE on your databases during the day to check whether they are ready to be upgraded automatically – and then schedule a DEPLOY for overnight or the weekend, whenever it is convenient.
If you desire to ANALYZE another database in between, you can always take the most recent preupgrade.jar from MOS Note 884522.1 and run it while autoupgrade is processing your databases. But from within the same home, you can have only one autoupgrade running. If we allowed more, we would be asking for mess and trouble – just as a simple example, a config file containing the same database twice. Hence, you should be able to run separate autoupgrade sessions from within separate homes, and with different log directories.
As you are not the first one asking about how to do this, I will explain and show this on the blog by tomorrow.
Cheers,
Mike
Hello Mike,
Once again, thank you very much for your valuable feedback.
Indeed, the introduction of “AutoUpgrade” will also require a change in the way customers handle Oracle DB upgrades today without “AutoUpgrade”.
In our case we will therefore need to change our way of working from a manual execution of Oracle DB upgrades at a scheduled timestamp, via the execution of home-made scripts, towards a “scheduling” approach where upgrades will be executed automatically via AutoUpgrade after they have been carefully planned via the creation of a config file (which also gives you the possibility to start upgrades at different timestamps via the “start_time” parameter).
Of course, the customer also has to provide new scripts in order to build/maintain this config file and to make sure the necessary actions, specific to the setup of the customer, will be executed before or after the upgrade with AutoUpgrade (by specifying the required scripts via the parameters “before_action” and “after_action”).
I am looking forward to your new blog about “AutoUpgrade”.
Greetings,
Chris Smids
Hi Chris,
almost written – will be on the blog tomorrow morning 😉
I can give you a sneak preview: You need only a different global.autoupg_log_dir – then you can even run two in parallel with the same tool from the same home.
Cheers,
Mike
Hello Mike,
I hope it is correct to place the question here.
I am trying to upgrade a newly installed 12.1.0.2 database to a patched Oracle 19 home.
I get
Stage [DBUPGRADE]
Operation [STOPPED]
Status [ERROR]
Info [
Error: UPG-1401
Opening Database XYXYXYXY in upgrade mode failed
Cause: Opening database for upgrade in the target home failed
For further details, see the log file located at /stage/shared/oracle/upg_logs/xyxyxyxy/XYXYXYXY/103/autoupgrade_20200518_user.log]
There I find
DATABASE NAME: XYXYXYXY
CAUSE: ERROR at Line 5 in [Buffer]
REASON: SP2-0751: Unable to connect to Oracle. Exiting SQL*Plus
ACTION: [MANUAL]
DETAILS:
2020-05-18 14:10:41.259 ERROR Database Open Failed for xyxyxyxy ERROR:
ORA-12546: TNS:permission denied
I don't know how to categorize this as an SR.
Any hints appreciated
Hi Norbert,
is the environment set correctly? I’m blindly guessing that the SQL*Plus session doesn’t have the right context and doesn’t see the DB.
ORA-12546: TNS:permission denied
is a strange error. Can you connect with SQL*Plus to your database from within the OS session you start autoupgrade from?
Is this Windows? Then check which OS user you are using.
Do you have special parameters in your sqlnet.ora?
You can open an SR with “autoupgrade” – the folks will route it correctly (I hope).
Thanks,
Mike
Any chance for the next release to include a report based on information in the two JSON files? Or maybe an option to generate html from the JSON files?
I think this is not on the list right now – we have a lot of projects at the moment 🙂
Cheers,
Mike