Our initial intention with AutoUpgrade as the single tool to upgrade your databases was: you have one big config file for your databases, and you use this file to upgrade your databases unattended. Ideally you run an “analyze” first, then you schedule a “deploy” for overnight, the next weekend, or whenever it is convenient for you. But sometimes people have different needs. In this particular case, several people asked whether you can run multiple AutoUpgrade tools in parallel on the same server. Hence, I’d like to explain the first of several AutoUpgrade tips: running two (or more) sessions in parallel.
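For context, the usual two-step flow looks roughly like this – myconfig.cfg is just a placeholder name, and $OH19 points to the 19c home as in the examples further below:
java -jar $OH19/rdbms/admin/autoupgrade.jar -config myconfig.cfg -mode analyze
java -jar $OH19/rdbms/admin/autoupgrade.jar -config myconfig.cfg -mode deploy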
Does this make sense?
Well, I’m not in the position to ask this question. If you say you have this requirement, then I can either tell you “it is not working” or I can explain how it works. In this case, one of the requirements was to run another “analyze” with AutoUpgrade while AutoUpgrade was already upgrading some databases on the same server. Therefore, it is not a question of “sense” but more a question of how the goal can be achieved.
But the questions are:
What do you have to do in order to make this work? Do you need multiple copies of the autoupgrade tool? Do you need to store them under different names? Or do they need to run from different homes? You’ll find the (surprisingly) simple answer below.
An Example
The best way to show how to do it is a simple example. As usual, I will use the Hands-On Lab environment. I will upgrade the databases FTEX and DB12 in parallel – but with two separate AutoUpgrade sessions in action. The tools are identical.
Config File Database DB12
global.autoupg_log_dir=/home/oracle/upg_logs
#
# Database number 1
#
upg1.dbname=DB12
upg1.start_time=NOW
upg1.source_home=/u01/app/oracle/product/12.2.0.1
upg1.target_home=/u01/app/oracle/product/19
upg1.sid=DB12
upg1.log_dir=/home/oracle/upg_logs
upg1.upgrade_node=localhost
upg1.target_version=19
upg1.timezone_upg=no
upg1.restoration=no
Config File Database FTEX
global.autoupg_log_dir=/home/oracle/logs
#
# Database number 2
#
upg2.dbname=FTEX
upg2.start_time=NOW
upg2.source_home=/u01/app/oracle/product/11.2.0.4
upg2.target_home=/u01/app/oracle/product/19
upg2.sid=FTEX
upg2.log_dir=/home/oracle/upg_logs
upg2.upgrade_node=localhost
upg2.target_version=19
upg2.timezone_upg=no
upg2.restoration=no
Please note that the global.autoupg_log_dir is different for each database. That is the key requirement to make this work.
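If you want to double-check this before starting, a quick grep over both config files shows the setting (file names as used below; the commented lines are the output you should expect from the configs above):
grep "^global.autoupg_log_dir" DB12.cfg FTEX.cfg
# DB12.cfg:global.autoupg_log_dir=/home/oracle/upg_logs
# FTEX.cfg:global.autoupg_log_dir=/home/oracle/logs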
Now I can invoke the upgrades by calling:
java -jar $OH19/rdbms/admin/autoupgrade.jar -config FTEX.cfg -mode deploy
and:
java -jar $OH19/rdbms/admin/autoupgrade.jar -config DB12.cfg -mode deploy
Both upgrades are running fine, even using the same autoupgrade.jar.
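As a side note, if you’d rather start both sessions from a single terminal and let them run unattended, something along these lines should work as well – -noconsole suppresses the interactive console, and the output file names are just examples:
nohup java -jar $OH19/rdbms/admin/autoupgrade.jar -config FTEX.cfg -mode deploy -noconsole > ftex_deploy.log 2>&1 &
nohup java -jar $OH19/rdbms/admin/autoupgrade.jar -config DB12.cfg -mode deploy -noconsole > db12_deploy.log 2>&1 &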
Output Database FTEX
upg> lsj
+----+-------+---------+---------+-------+--------------+--------+--------+------------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME|END_TIME| UPDATED|     MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+--------+------------+
| 100|   FTEX|DBUPGRADE|EXECUTING|RUNNING|19/11/27 15:07|     N/A|15:15:10|18%Upgraded |
+----+-------+---------+---------+-------+--------------+--------+--------+------------+
Total jobs 1
Output Database DB12
upg> lsj
+----+-------+---------+---------+-------+--------------+--------+--------+-----------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME|END_TIME| UPDATED|    MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+--------+-----------+
| 100|   DB12|DBUPGRADE|EXECUTING|RUNNING|19/11/27 15:06|     N/A|15:13:25|8%Upgraded |
+----+-------+---------+---------+-------+--------------+--------+--------+-----------+
Total jobs 1
You may notice that both database upgrades run under the same job number, 100 – each AutoUpgrade session maintains its own job numbering, so there is no conflict.
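If you need more detail than lsj shows, you can also query an individual job from within each console – status -job takes the job number shown above:
upg> status -job 100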
Is it that simple?
Surprisingly, yes, it is. You only have to make sure that you define separate log directories – and that’s it. No copying or renaming of the tool is necessary. I even used the same environment settings above.
Just make sure global.autoupg_log_dir is different in each config file.
That’s it.
More Information
- The new AutoUpgrade Utility – Download, documentation and supported versions
- Create and adjust the config file for AutoUpgrade
- Config file for AutoUpgrade – Advanced options
- Config file for AutoUpgrade – Tweaking init parameters
- AutoUpgrade: ANALYZE, FIXUPS, UPGRADE and DEPLOY modes
- AutoUpgrade: Where do you find all the logfiles?
- UPG: The AutoUpgrade Command Line Interface
- Upgrading Multitenant databases with AutoUpgrade
- Moving to a new server with AutoUpgrade
- How to tweak the hidden settings in AutoUpgrade
- AutoUpgrade and Data Guard, RAC, Restart and non-CDB to PDB
- AutoUpgrade and Wallets
–Mike
Hello,
Thanks for your blog, it helps us greatly with our upgrade campaigns.
I have a little question: is it possible with AutoUpgrade to limit the number of PDBs upgraded in parallel? We have containers with several dozen PDBs, and upgrading all PDBs in parallel puts a heavy load on the machine. (I didn’t see anything about this in the AutoUpgrade doc.)
Hi Pierre,
this is a fair question – and unfortunately, this is not possible in a regular environment. Only if you are able to change CPU_COUNT can you limit the number of PDBs upgraded in parallel: we upgrade “CPU_COUNT / 2” PDBs in parallel, each with 2 workers. So if you lower it, you should see less parallel activity.
But I haven’t tried it myself yet.
We will add parameter options sometime in the future for such cases.
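For illustration only – and, as said, untested – lowering CPU_COUNT before the deploy might look like the sketch below (the value 4 and the SCOPE choice are just examples; remember to set the parameter back afterwards):
sqlplus / as sysdba <<'EOF'
-- untested sketch: reduce CPU_COUNT so fewer PDBs get upgraded in parallel
ALTER SYSTEM SET cpu_count=4 SCOPE=BOTH;
EOF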
Cheers,
Mike
Is it possible to connect to the AutoUpgrade console from another session?
Hi Edwin,
you can connect to the console from each environment but not across environments.
Cheers,
Mike