This is the first of a series of blog posts explaining the new AutoUpgrade utility step by step. You'll find the tool and overview information in The New AutoUpgrade Utility in Oracle 19c.
In this blog post I will explain and demonstrate how you can create a sample config file and adjust it for a standard database upgrade.
You will see how to add additional options and change default settings. In the upcoming blog posts, I will showcase how to change, adjust and tweak init.ora parameters during or after the upgrade.
AutoUpgrade – Step-by-step
- The new AutoUpgrade Utility – Download, documentation and supported versions
- Create and adjust the config file for AutoUpgrade
- Config file for AutoUpgrade – Advanced options
- Config file for AutoUpgrade – Tweaking init parameters
- AutoUpgrade: ANALYZE, FIXUPS, UPGRADE and DEPLOY modes
- AutoUpgrade: Where do you find all the logfiles?
- UPG: The AutoUpgrade Command Line Interface
- Upgrading Multitenant databases with AutoUpgrade
- Moving to a new server with AutoUpgrade
- How to tweak the hidden settings in AutoUpgrade
- AutoUpgrade and Data Guard, RAC, Restart and non-CDB to PDB
- AutoUpgrade and Wallets
Create and adjust the config file for AutoUpgrade
When you start working with the AutoUpgrade utility, the only manual task at the moment for you is the creation and adjustment of the config file. At the moment, the config file is just a sample file.
$ java -jar $OH19/rdbms/admin/autoupgrade.jar -create_sample_file config
Created sample configuration file /home/oracle/sample_config.cfg
The sample file contains three examples. I just take the first one and adjust it based on my needs.
#Global configurations
#Autoupgrade's global directory, non-job logs generated,
#temp files created and other autoupgrade files will be
#send here
global.autoupg_log_dir=/default/current/location
#
# Database number 1
#
upg1.dbname=employee
upg1.start_time=NOW
upg1.source_home=/u01/app/oracle/product/11.2.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.1.0/dbhome_1
upg1.sid=emp
upg1.log_dir=/scratch/auto
upg1.upgrade_node=node1
upg1.target_version=19.1
#upg1.run_utlrp=yes
#upg1.timezone_upg=yes
You can put as many databases into the config file as you need – there is no limit. And you can upgrade from different source homes and to different target homes as well.
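For illustration, a config with two databases coming from different source homes and going into different target homes could look like this – a hedged sketch, all names and paths hypothetical:

```ini
global.autoupg_log_dir=/home/oracle/logs

# Database 1: an 11.2.0.4 database going into one 19c home
upg1.sid=DB112
upg1.source_home=/u01/app/oracle/product/11.2.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.3.0/dbhome_1

# Database 2: a 12.2.0.1 database going into a different 19c home
upg2.sid=DB122
upg2.source_home=/u01/app/oracle/product/12.2.0/dbhome_1
upg2.target_home=/u01/app/oracle/product/19.4.0/dbhome_2
```

Each section simply carries its own paths; there is no special syntax for mixing homes.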
Modifying the sample_config.cfg
First of all, you can choose the name of the config file yourself. You don't have to stay with the name the tool proposes.
Second, there are two types of parameters:
- global – prefixed with global.
- individual – in the sample file those are prefixed with upg1., upg2. and upg3., but you can name the prefix whatever you want.
You need to adjust global.autoupg_log_dir. This is the global logging directory, and I usually use it as my root logging directory.
In the next step I adjust the settings for the database upg1, which in my case is an 11.2.0.4 database from the Hands-On Lab, named UPGR:
#
# Database number 1
#
upg1.dbname=UPGR
upg1.start_time=NOW
upg1.source_home=/u01/app/oracle/product/11.2.0.4
upg1.target_home=/u01/app/oracle/product/19
upg1.sid=UPGR
upg1.log_dir=/home/oracle/logs/upgr
upg1.upgrade_node=hol
upg1.target_version=19
In this example, I set the target_home individually. But you can also make it a global parameter if you upgrade all databases to the same home:
global.target_home=/u01/app/oracle/product/19
Then of course you can remove it from the individual section.
You can do the same with the upgrade_node parameter. It is meant for cases where you define one comprehensive config file for all your databases but differentiate between nodes. For example, with two databases having two different upgrade_node values, each section is executed only where upgrade_node matches the hostname:
upg1.upgrade_node=hol
...
upg2.upgrade_node=boston
In this case, database upg1 will be upgraded only when upgrade_node matches the hostname, and of course the same applies to upg2. In my case all my databases will be upgraded on the same host. But I can't make upgrade_node a global parameter – it is meant to be a local, individual parameter only and has to stay in each section.
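The hostname matching described above can be sketched in plain shell: the snippet below builds a throwaway two-section config (names hypothetical) and prints the section prefixes whose upgrade_node equals a given host name – roughly what AutoUpgrade's node filter does.

```shell
#!/bin/sh
# Throwaway config with two sections (hypothetical names).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
upg1.upgrade_node=hol
upg2.upgrade_node=boston
EOF

# Stand-in for $(hostname); AutoUpgrade compares upgrade_node against it.
host=hol

# Print the prefixes of the sections that would run on this host.
grep "upgrade_node=${host}\$" "$cfg" | cut -d. -f1
rm -f "$cfg"
```

On the host named hol, only upg1 is printed, so only that section would run.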
And finally, as I will upgrade all my databases to Oracle 19c, target_version can be switched into a global parameter as well.
dbname and SID
While I'm writing this, the documentation is incorrect about the definition of dbname. We'll fix this soon. In the meantime, please be aware that dbname must match your database's DB_UNIQUE_NAME, while sid of course must match your database's SID. If you are in doubt, open SQL*Plus and check with "show parameter db_unique_name", or open your /etc/oratab file instead.
[dbname is not needed since AutoUpgrade 19.9.0, as the value will be read from the database]
upg1.dbname=UPGR
...
upg1.sid=UPGR
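To double-check the sid value against /etc/oratab without opening SQL*Plus, a pipeline like the following works – shown here against a throwaway copy of the file, with made-up contents:

```shell
#!/bin/sh
# Throwaway stand-in for /etc/oratab (contents hypothetical).
tab=$(mktemp)
cat > "$tab" <<'EOF'
UPGR:/u01/app/oracle/product/11.2.0.4:N
FTEX:/u01/app/oracle/product/11.2.0.4:N
EOF

# The first colon-separated field of each non-comment line is the SID.
grep -v '^#' "$tab" | cut -d: -f1
rm -f "$tab"
```

Run against the real /etc/oratab, the same pipeline lists every SID registered on the host.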
Current status with only one database
When we look at the current status, my config file looks like this:
global.autoupg_log_dir=/home/oracle/logs
global.target_home=/u01/app/oracle/product/19
global.target_version=19
#
# Database number 1
#
upg1.dbname=UPGR
upg1.start_time=NOW
upg1.source_home=/u01/app/oracle/product/11.2.0.4
upg1.sid=UPGR
upg1.log_dir=/home/oracle/logs/upgr
upg1.upgrade_node=hol.localdomain
Adding additional databases
Now I'm adding two additional databases – FTEX, another Oracle 11.2.0.4 database, and DB12, a 12.2.0.1 database.
global.autoupg_log_dir=/home/oracle/logs
global.target_home=/u01/app/oracle/product/19
global.target_version=19
#
# Database number 1
#
upg1.dbname=UPGR
upg1.start_time=NOW
upg1.source_home=/u01/app/oracle/product/11.2.0.4
upg1.sid=UPGR
upg1.log_dir=/home/oracle/logs/upgr
upg1.upgrade_node=hol.localdomain
#
# Database number 2
#
upg2.dbname=FTEX
upg2.start_time=NOW
upg2.source_home=/u01/app/oracle/product/11.2.0.4
upg2.sid=FTEX
upg2.log_dir=/home/oracle/logs/ftex
upg2.upgrade_node=hol.localdomain
#
# Database number 3
#
upg3.dbname=DB12
upg3.start_time=NOW
upg3.source_home=/u01/app/oracle/product/12.2.0.1
upg3.sid=DB12
upg3.log_dir=/home/oracle/logs/db12
upg3.upgrade_node=hol.localdomain
During the next step, we will add optional parameters and adjust the starting time for each upgrade.
Please find the full list of parameters supported in the config file in the Oracle 19c Database Upgrade Guide.
Hi Mike – have been doing some testing with the 19c autoupgrade feature:
Here are some links ….
Your valued feedback will be greatly appreciated!
Thanks a lot for sharing, Gavin 🙂
Looks very good!!!
The documentation of AutoUpgrade is confusing. It states:
Identifies the DB_UNIQUE_NAME value of the database that you want to upgrade. This name should be registered in the server host file ( for example, /etc/hosts), or on a domain name server. When you are logged in to the server, it should be the name that is returned when you enter the command hostname.
The DB_UNIQUE_NAME is never registered in the hosts file. The hostname command on Linux always returns the name of the server and not the DB_UNIQUE_NAME.
(Optional) Generates a Guaranteed Restore Point (GRP) for database restoration. If you select this option, then both database backup and database restoration must be performed manually by the DBA.
I thought that if I specify restoration=no, then I have to take care of backup and restore myself, but the documentation seems to be ambiguous.
Can you please clarify?
correct – this is not mentioned correctly in the doc.
I will feed this back to the doc guys.
The dbname is what you have as DB_UNIQUE_NAME in your spfile.
And you are right with the GRP as well – restoration=no overrides the GRP creation (meant for SE2 databases and DBs in noarchivelog mode).
If something fails in such cases, it will be your task to restore by yourself. But the doc is not correct here.
I ran into another issue with Autoupgrade. I was following the steps to upgrade from 12.1 to 19c via Data Guard.
The broad steps are:
a) Create a physical standby using Data Guard.
b) Convert physical standby to logical standby.
c) Stop redo apply.
d) Upgrade the logical standby to 19c.
In the last step, when I ran Autoupgrade in analyze mode against the logical standby, it just kept complaining that two datafiles needed recovery. These two datafiles belonged to SYSTEM and SYSAUX tablespaces of the PDB$SEED database which is READ ONLY by default. I searched on MOS if there was any way to make these datafiles not need recovery, but didn’t have much luck. Finally, I just used the DBUA from 19c to upgrade and it performed the upgrade without any issues. Something the Autoupgrade team needs to look at.
I think this is at first a general Multitenant problem. I have seen in other cases as well that the PDB$SEED seems to be in this strange state after duplication for standby. But let me discuss this with the developers.
I’ll try to get back to you asap.
Thanks Mike. I noticed that after upgrade via DBUA, the PDB$SEED SYSTEM and SYSAUX datafiles did not need recovery (v$recover_file returns no rows). Is it possible that, because PDB$SEED is opened READ WRITE during upgrade, the datafile headers get fixed? I could not find any MOS note saying that I can manually open PDB$SEED in READ WRITE mode, even temporarily, to fix this issue.
The only difference I could find that in v$datafile_header, the last_dealloc_change# was NULL in primary database and had some numeric value in the standby database for these two files.
Does Autoupgrade upgrade RAC database? I am asking because it seems that upgrade_node is a mandatory parameter. But I can specify only one upgrade_node for a database. So, will Autoupgrade make sure that all instances of the database are started from upgraded home on all nodes?
you find a workaround to open the PDB$SEED here:
_oracle_script=true is the secret key.
The upgrade of course opens the PDB$SEED read/write as otherwise we wouldn’t be able to upgrade it.
The patch scripts do the same. Key thing is that catcon.pl (or catcon.pm) are used. But you can do this manually as well.
Again, do I get you right:
When you DUPLICATE FOR STANDBY, on the standby the PDB$SEED has SYSTEM and SYSAUX in “needs recovery” mode? So when you run preupgrade.jar on the standby (which is possible), an error will be flagged?
Can you please run these two queries after you setup the standby in the PDB$SEED:
1) SELECT NULL FROM v$recover_file WHERE ((error <> 'OFFLINE NORMAL') or (error is null)) and rownum <= 1
2) SELECT count(1) FROM v$recover_file WHERE ((error <> 'OFFLINE NORMAL') or (error is null)) and rownum <= 1

Regarding RAC, we are working on it. And yes, this is why hostname is there. I default it to "localhost" – and our sample file is not perfect yet. But we are working on it, too. RAC support will come, maybe late this CY, maybe early next CY. At the moment, we set cluster_database=false (or you set it) and you need to do the SRVCTL part by yourself.

Cheers, Mike
I know the _ORACLE_SCRIPT workaround but MOS note specifically says that mere mortals shall not use it..:)
“_ORACLE_SCRIPT”=TRUE PARAMETER Should not be Invoked by Users (Doc ID 2378735.1)
I did the following (every step by the book, Chapter 13 of 12.1 Data Guard):
a) Created a physical standby database from a primary database (12.1.0.2, April 2019 patch).
b) Converted the physical standby database to logical standby.
c) Temporarily stopped redo apply to the logical standby.
d) Upgraded the logical standby to 19c. This is where I ran Autoupgrade (not preupgrade) and hit the error. I do not know where in the entire process these two files, SYSTEM and SYSAUX for PDB$SEED, got flagged as needing recovery. In the primary database these were not showing in need of recovery, only on the logical standby.
I did query the v$recover_file view and these two files were listed in the view. I did not run the specific query you have provided, but the SYSTEM datafile was in SYSOFF state and SYSAUX was OFFLINE.
I do not have the setup anymore as I have already upgraded logical standby to 19c. Let me check if I can repeat the steps. It may take a while.
But DBUA was able to upgrade the logical standby database to 19c even when these files showed as needing recovery. Maybe Autoupgrade needs to pull code from DBUA…
thanks for your reply. And I doubt that we will pull code from DBUA 😉
But the _ORACLE_SCRIPT topic is a mined territory. There are MOS notes explicitly explaining how to use it to tweak things such as “password aging”. In such cases you need _ORACLE_SCRIPT. The Multitenant team made it a dogma in saying “No user shall alter the PDB$SEED” but this does not fly well in some cases – and the MOS notes are an indication that it is needed to alter PDB$SEED sometimes.
Let me check with my DG contacts.
first thanks for your effort to help us mortals. 🙂
Second, I found a little bug, which of course can be ignored, but nevertheless here it is:
– I installed 19.3 on Windows and downloaded the latest version of autoupgrade.jar tool
– then I created the sample_config.cfg file
– one variable is not set correctly:
It seems all backslashes have been removed from the path…
Thanks Dejan, I will share it with the team.
Jira filed – AUPG-1592 is the bug number.
Watch out for one of the next AutoUpgrade drops – the changelog.txt is at the end of the note where you download autoupgrade from.
Thanks for letting us know – cheers,
Oracle docs mention a workflow where source and target database home are on different servers:
However, I can't find details for setting source_home and target_home in the config file.
Do I need to install (if only temporarily) the target_home version on the source server?
I will write the blog post soon – but this is how it works:
1. You run ANALYZE and FIXUPS on source
2. You run UPGRADE on target
Your config file for the source has to have target_version (but doesn't need "target_home").
Your config file for the target has to have:
source_home=/dev/null or /tmp
It needs a path – there’s an enhancement open to allow you to not use source_home for this case.
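Putting the answer above together, the pair of config files for a cross-server upgrade could look roughly like this – a sketch only, with hypothetical paths, and source_home on the target side being just the placeholder path described:

```ini
# config on the SOURCE server (ANALYZE and FIXUPS modes)
global.autoupg_log_dir=/home/oracle/logs
upg1.sid=UPGR
upg1.source_home=/u01/app/oracle/product/11.2.0/dbhome_1
upg1.target_version=19

# config on the TARGET server (UPGRADE mode)
global.autoupg_log_dir=/home/oracle/logs
upg1.sid=UPGR
upg1.source_home=/tmp
upg1.target_home=/u01/app/oracle/product/19
```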
Thanks, Mike. target_version was exactly what I was looking for. -Blair
My question is regarding Autoupgrade, does Autoupgrade perform PDB upgrade only?
Let me explain the scenario: we have a CDB with one user PDB at 12c, and another CDB at 19c with no user PDB. What I'm looking to do is upgrade the user PDB from 12c to 19c by unplugging it from the 12c CDB, plugging it into the 19c CDB, and running catctl.pl with catupgrd.sql for the PDB upgrade, which works fine. But is the same possible with the AutoUpgrade utility? As per my understanding, that's not possible at the moment. Please suggest.
Your blogs are great and give so much useful information. Thanks for that.
we are working on the unplug/plug/upgrade scenario to allow you to move selected PDBs to a higher version and upgrade them. Actually, this is being followed with the highest priority.
Can’t give you a date unfortunately – but it will be available in the near future.
thanks for this great thread.
If the source database (non-multitenant, using OMF) is to be migrated and converted into a PDB, I can't use "target_pdb_copy_option", right?
As renaming of datafiles is not allowed with OMF…
yes, this is correct. And it works regardless of OMF or not. But you can’t rename the files with OMF unfortunately.
It’s quite logical, so “use the brain, admin!” would have helped.
I’m a little bit confused about the parameter “target_pdb_copy_option”.
If the source DB uses OMF, file_name_convert of the files cannot work, I think.
Here an example from my config file:
I would like to get them copied to:
Setting this would not help…
ORA-01276: File has an Oracle Managed Files file name.
the parentheses are the problem – this is a doc bug we hopefully get fixed next week.
It should be:
or this should work, too:
Sorry for the confusion.
Thanks a lot!
Customer using autoupgrade.jar. Source: 12.2 non-CDB, target: 19.8, upgrade on the same host.
The 12.2 non-CDB database has a mixture of OMF and non-OMF datafiles, and the customer needs to convert it to a PDB using AutoUpgrade.
And they want to have non-OMF files in the PDB after the upgrade.
Does AutoUpgrade support the mixture of OMF and non-OMF?
We tried multiple options, but each leads to a different error.
ORA-65005: missing or invalid file name pattern for file –
ORA-01276: Cannot add file
File has an Oracle Managed Files file name.
Option C :
Cause: Could not create pluggable database
2020-08-31 14:53:46.272 ERROR Dispatcher failed:
ORA-01276: Cannot add file
Cause: Could not create pluggable database
For further details, see the log file located at /refresh/home/Logs/test/105/autoupgrade_20200831_user.log; aborting job 105 for database test
I think I explained this to the customer already – it is very tricky to have this mixture supported. And personally, I don't see any good reason to mix OMF and non-OMF naming. There is the ONLINE MOVE of datafiles available since 12.1.0.1 – the customer should either go this or that way. Then all works fine. And this is not an AU misbehavior; the convert string just becomes hellishly complicated in this case.
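For reference, the ONLINE MOVE mentioned above is the ALTER DATABASE MOVE DATAFILE statement, available since Oracle 12.1. A sketch of moving a non-OMF file into an OMF destination, with made-up paths:

```sql
-- With db_create_file_dest set, omitting the TO clause moves the file
-- to an Oracle-managed name in that destination (paths hypothetical).
ALTER SYSTEM SET db_create_file_dest = '/u02/oradata';
ALTER DATABASE MOVE DATAFILE '/u01/oradata/CDB1/users01.dbf';
```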
Regarding dbname, https://docs.oracle.com/en/database/oracle/oracle-database/19/upgrd/autoupgrade-utility-configuration-files.html does not list it. Could it be an obsolete parameter in 19c and only existing in previous versions?
Another question I have is about sid. If my database is RAC, should I set it to the name of the instance running on the local node? I assume I need to set cluster_database=false, stop database, start instance. Right?
yes, this is correct – and thanks for spotting this.
It is not necessary anymore. At least the 19.9 and 19.10 versions of AU don’t need it anymore.
Thanks and kind regards
Good day Mike;
I have just deployed the latest version of autoupgrade.jar.
I am trying to upgrade two single-instance databases running on the same server to 19c (RU 19.9).
My config file looks like this:
Upon running deploy I receive the following error:
Processing config file …
Invalid value for target_base [ORACLE_BASE is not found in the keyP☺↨ßé] – Path should be absolute
It was not possible to validate the directory for target_base of entry upg2
Any idea what I am doing wrong? I have also checked privileges on my ORACLE_BASE..
Your feedback will be greatly appreciated.
just remove the target_base and try it again please.
Tried removing target_base, same result:
Invalid value for target_base [ORACLE_BASE is not found in the keyP☺☼G²] – Path should be absolute
It was not possible to validate the directory for target_base of entry upg2
is this a Grid Infrastructure environment?
Do you have an SR open for it?
Not using Grid or RAC either. No SR currently. Would you mind a 5-10min Teams Session so I can show you what I see?
you please need to work with Oracle Support on this.
You can pass on the SR number if there won’t be a solution.
Hi Mike, I have run autoupgrade many times with no issues on multiple databases. Now all of a sudden I am getting this error when I try to run the analyze:
Unable to determine DB unique name for entry upg1
The parm file I am using looks like:
#18.104.22.168-to-19.0.3 config file
did you open an SR? If not, please do so. I’m pretty certain that the solution is simple – I just can’t remember what it was.
PS: You can shorten your config file to:
#22.214.171.124-to-19.0.3 config file
I have tried the same method for 12c to 19c and it completed successfully, but I see invalid objects present and the timezone is still the same (26).
I ran @?/rdbms/admin/utlrp.sql manually and it was done; the same for the timezone script. If I add the parameters upg1.timezone_upg=yes and upg1.run_utlrp=yes to the config file, why are they not reflected after the upgrade – why do I need to do this manually?
then something went wrong.
Both parameters you have set are YES by default anyways. So this is both to be expected, TZ change and recompilation.
You may need to zip the logs with:
java -jar autoupgrade.jar -config myconfig.cfg -zip
and upload it to support.
I guess there was an error somewhere which led to those post-upgrade actions not happening.
I have a RAC database with RAC DG (DB_NAME and INSTs are the same on DG)
upg1.dbname=FRED_PRI … i.e the DB_UNIQUE_NAME
upg1.sid=FRED1 … is that correct? or should it be the DB_NAME (FRED)
Because AutoUpgrade is only run from node 1, I would have thought you enter the DB_NAME for .sid.
please avoid "dbname" and use only the SID. "dbname" is not needed anymore, which makes life much easier.