Thanks to all our customers who participated in our evaluation test for RAC upgrade support with the AutoUpgrade Tool – and to my colleagues, who worked hard to deliver the new version as quickly as possible. Just to make sure: this is not an April Fools’ blog post. As of two days ago, you can download AutoUpgrade – New Version with RAC Database Upgrade Support.

Picture by: Jason Yuen on Unsplash
Where to download it?
As usual, you can download the newest version – in this case 19.8 aka 20200327 – from MOS Note: 2485457.1.
You simply copy the new version into your 19c (or 12.2.0.1 or 18c) Oracle Home and overwrite the previous “autoupgrade.jar”.
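A minimal sketch of the swap, assuming the new jar was downloaded to /tmp and $ORACLE_HOME points to your target home:
$ cd $ORACLE_HOME/rdbms/admin
$ mv autoupgrade.jar autoupgrade.jar.old     # keep the shipped jar as a backup
$ cp /tmp/autoupgrade.jar .
$ java -jar autoupgrade.jar -version         # should report build 20200327 or newer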
What is new – and what are the limits?
Of course, the support for upgrading RAC databases is new in this version. We did support this before already, but with extra manual steps – you could find them in AutoUpgrade – Data Guard, RAC and Restart. These manual steps are not necessary anymore.
Still, there are some things to know, and two limits:
- AutoUpgrade does not upgrade your Grid Infrastructure (GI) and Clusterware component (OCW)
- You must always upgrade GI/OCW first – then you can upgrade your database(s)
- If your SPFILE is managed locally (i.e., in the file system instead of ASM), the tool won’t be able to handle this – see the quick check after this list
- Due to the nature of the MS Windows architecture, we don’t support RAC upgrades on Windows right now with AutoUpgrade’s end-to-end automation
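To see upfront whether the SPFILE limit affects you, check where your SPFILE lives – a quick sketch, assuming a database unique name of mydb:
$ srvctl config database -d mydb | grep -i spfile
Spfile: +DATA/MYDB/PARAMETERFILE/spfile.266.1034678103
An ASM path like the (made-up) one above is fine; a file system path means you need the workaround discussed in the comments below.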
But my colleagues have added all the RAC explanations to MOS Note: 2485457.1 as well. I won’t copy and paste them here, but they include:
- Requirements for using AutoUpgrade with RAC
- AutoUpgrade Process Flow for RAC
- Preparing RAC for use with AutoUpgrade
- Scope Limits
- File System preparation
Known Issues – Time Zone Upgrade
Thanks a lot to Peter Lehmann for pointing me to this brand-new MOS Note: 2575477.1. When upgrading a RAC database to 19c with AutoUpgrade, you may spot this error in the alert.log:
ORA-00603: ORACLE server session terminated by fatal error
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-39701: database must be mounted EXCLUSIVE for UPGRADE or DOWNGRADE
During the postfixups phase, it looks as if the upgrade hangs at the time zone upgrade. You may see this in the autoupgrade.log:
2020-06-23 11:00:00.123 INFO Error opening file [/xxx/xxxxxx/xxx/xxx/xxxxx/xxxx/dbs/inixxxxxx.ora] for reading.
The reason for this “hang” is that AutoUpgrade tries to restart the database, but the database configuration still shows the old RAC home as the Oracle home. This obviously does not work:
$ srvctl config database -d xxxx_xxxxx
Database unique name: xxxx_xxxxx01
Database name: xxxxx
Oracle home: /xxx/xxxxxx/xxx/xxx/xxxxx/xxxx/12.1.0.2/RAC <<<<<<<<<<<<<<
Two workarounds exist:
- Use
prefix.timezone_upg=no
in the config file
- Run
$ srvctl upgrade database -d db-unique-name -o oraclehome
where db-unique-name is the database unique name assigned to it (not the instance name), and oraclehome is the Oracle home location in which the database is being upgraded. Then complete the DST/TZ upgrade with the time zone upgrade scripts.
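For the first workaround, the entry in your AutoUpgrade config file looks like this, assuming a job prefix of upg1:
upg1.timezone_upg=no
Afterwards, you run the DST/TZ upgrade manually with the time zone upgrade scripts (utltz_upg_check.sql and utltz_upg_apply.sql in $ORACLE_HOME/rdbms/admin of your 19c home).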
Further Information and Links
- MOS Note: 2485457.1 – AutoUpgrade Tool Download
- The new AutoUpgrade – Step by Step
- MOS Note: 2575477.1 – AutoUpgrade fails in POSTFIXUP phase of RAC DB Upgrade
–Mike
Hi Mike,
Does it work for Oracle Restart too?
Thanks so much.
Hi Paul,
Restart still requires some extra steps – we are working on this right now.
https://mikedietrichde.com/2019/07/19/autoupgrade-and-data-guard-rac-restart-and-non-cdb-to-pdb/
Cheers,
Mike
Hi Mike
In this link, you mention “You will need to prevent clusterware to control the database while you are upgrading”.
Could you give us more detail about what should be done?
By the way, would you recommend using DBUA to upgrade a database (version 12.1.0.2) with Oracle Restart? Are the additional steps required with AutoUpgrade also necessary for DBUA?
Thank you very much for your invaluable help.
Paul
Paul,
I don’t recommend DBUA for various reasons.
Reg. “You will need to prevent clusterware to control the database while you are upgrading”
This sentence is in: https://mikedietrichde.com/2019/07/19/autoupgrade-and-data-guard-rac-restart-and-non-cdb-to-pdb/
and not in the blog post you placed your comment on. The new version makes this unnecessary.
For Restart, you must do the SRVCTL part yourself, mostly as I documented it here: https://mikedietrichde.com/2019/07/19/autoupgrade-and-data-guard-rac-restart-and-non-cdb-to-pdb/
Soon, AutoUpgrade will do this as well for a Restart environment.
Cheers,
Mike
Hello Mike,
We tested this new version, and it worked just fine on our test env (very fine 😀 ). But while we were doing a massive DB upgrade on 3 hosts of a cluster, autoupgrade reacted as if these weren’t RAC DBs. (We created SR 3-23295194041.)
The only peculiarity of these databases compared to our test environment is that they were all RAC One Node, restricted to one candidate server.
Could the problem we met be linked to the configuration (only one node as candidate server) of our databases?
Cheers,
PBO
Hi Pierre,
I’m off this week – I will check next week when I’m back.
Thanks,
Mike
Hi Pierre,
I see that your SR didn’t get any attention for 4 days.
Please raise the severity and add several updates to the SR.
And, if nothing happens quickly or if the ping-pong starts, you may need to request a management callback within X hours as well.
Sorry for this inconvenience 🙁
Mike
Hi Pierre,
please – this is really important – we can’t do anything without the proper logs from the upgrade runs.
Our developers checked the SR – but you need to provide the logs as I explained here:
https://mikedietrichde.com/2020/04/08/troubleshooting-restoring-and-restarting-autoupgrade/
Thanks,
Mike
Hi Mike,
what is your recommendation if spfile is not in ASM? I couldn’t find any hints.
The best workaround right now is to move the file temporarily into ASM, and then out again.
Otherwise you may see errors – and you will have to move forward from there manually when the error occurs, such as copying files around yourself.
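A sketch of that temporary move, assuming a disk group +DATA and a database unique name mydb:
SQL> create pfile='/tmp/initmydb.ora' from spfile;
SQL> create spfile='+DATA/MYDB/spfilemydb.ora' from pfile='/tmp/initmydb.ora';
$ srvctl modify database -db mydb -spfile +DATA/MYDB/spfilemydb.ora
$ srvctl stop database -db mydb
$ srvctl start database -db mydb
The restart is needed so the instances pick up the SPFILE from its new location.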
Cheers,
Mike
Hello Mike,
This week I plan to upgrade our first cluster to 19c, and I want to keep the downtime as low as possible.
Hence I want to ANALYZE and FIXUP the database one day before the upgrade, to save time on upgrade day.
The ANALYZE report recommends upgrading the time zone file of the DB. Of course I want to do this.
The report writes out: “FixUp Available: YES Severity: WARNING Stage: POSTCHECKS”
Does POSTCHECKS mean that the time zone upgrade, which needs DB downtime, will run after the DB upgrade as a POSTCHECK and POST-FIXUP of the upgrade?
In general, is it a good idea to run FIXUPS during normal office hours? Is there a higher risk of an outage while running fixups?
Thank you Mike!
Hi Peter,
it depends on what the fixups will do.
There are PRE and POST fixups.
In your case, this will be done AFTER the upgrade. But it can have an impact on availability, as this will really change data. And it depends on whether you have TIMESTAMP WITH TIME ZONE data or not.
The time zone part is optional. Expect extra downtime for it, especially as the database will be restarted twice.
The pre-fixups should be safe to execute, especially as you don’t move up from 11.2, as far as I know.
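If you want to check upfront whether you have such data at all, a quick dictionary query helps – just a sketch:
SQL> select owner, table_name, column_name
       from dba_tab_columns
      where data_type like 'TIMESTAMP%WITH TIME ZONE'
        and owner not in ('SYS','SYSTEM');
No rows means the data conversion part of the time zone upgrade has no user data to convert.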
Thanks,
Mike
Hi Mike,
Looks like autoupgrade is not handling RAC database upgrades if they have disabled services. The autoupgrade process tries to disable the database services, but since some services are already disabled, it is failing.
2020-07-03 11:05:40.584 ERROR The dispatcher has failed due to: AutoUpgException [UPG-3100#PRCR-1005 : Resource ora.aracdb.test_svc.svc is already stopped
PRCR-1005 : Resource ora.aracdb.test_svc2.svc is already stopped
Is it possible for autoupgrade to handle this situation in the future?
When we do upgrades, we have some external client dependencies, and we want to make sure the DB is idle before we kick off autoupgrade:
–> Stop the application
–> Stop and Disable Services
–> Set job queue process to zero
–> Restart db
–> Validate client processed all pending work
Once the above steps are completed, we start the autoupgrade.
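For illustration, those steps map to commands like the following (database and service names taken from the error output above):
$ srvctl stop service -d aracdb -s test_svc,test_svc2
$ srvctl disable service -d aracdb -s test_svc,test_svc2
SQL> alter system set job_queue_processes=0 scope=both sid='*';
$ srvctl stop database -d aracdb
$ srvctl start database -d aracdb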
However, in RAC, autoupgrade tries to disable the services again and fails as they are already disabled.
It will be really nice if autoupgrade handles this situation.
Rakesh,
thanks for letting us know – but please open an SR and upload the logs, as otherwise we won’t be able to dig into them.
Cheers,
Mike
Hi Mike,
We have opened the SR.
SR 3-23541876161 : Autoupgrade failing if 11204 RAC database services are Disabled
Please note that if services are disabled in 12.1.0.2 RAC, autoupgrade is able to move forward and upgrade the database. But if services are disabled in 11.2.0.4 RAC, then it fails.
Thanks Rakesh,
I will share it with the team.
Cheers,
Mike
Hi Mike
Thanks for this blog.
I did a test migration and it ran fine!
However, many of our customers use Data Guard, and I really love that feature.
Is there a timeline for when autoupgrade can handle this option?
Cheers
Christian
Hi Christian,
you can use it today already – you can pass pre/post scripts which disable the broker and defer the log transport. If you test this in your env, it will work rock solid.
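A minimal sketch of such a pre script, assuming log_archive_dest_2 is the destination shipping to your standby:
SQL> alter system set dg_broker_start=false scope=both sid='*';
SQL> alter system set log_archive_dest_state_2='DEFER' scope=both sid='*';
The post script then simply re-enables both settings after the upgrade.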
I can’t give you a date – only “soon” 🙂
Cheers,
Mike
Hi Mike,
I tried the 2nd option as a workaround, as I thought I might face this timezone issue; however, it failed with:
Error Details:
Error: UPG-3100
PRCD-1229 : An attempt to access configuration of database prise_jpe2 was rejected because its version 19.0.0.0.0 differs from the program version 12.1.0.2.0. Instead run the program from /u01/ORACLE/baseDB/product/19.3.0.0/homeDB.
Cause: Unable to list RAC services
16:29:10 oracle@dexb501:prise101 % srvctl downgrade database -d prise_jpe2 -o /u01/ORACLE/baseDB/product/12.1.0.2/homeDB_rise -targetversion 12.1.0.2
upg> resume -job 102
I downgraded it and it just worked perfectly. I think in some cases we may face this bug; however, my test shows we should not make changes like srvctl upgrade etc. – just let it go and it upgrades smoothly along with the timezone. However, if we face it, the 1st option is fine (timezone_upg=no).
I have one question, Mike, please help. It looks like AUTOUPGRADE is not yet able to export/import the TDE keys from non-CDB to PDB. I tried it and it failed; I should have exported the wallet keys before starting the non-CDB to PDB upgrade.
For me it looks like we should:
1) Create Container DB
2) Create KEYSTORE there and Open it
3) Backup the keys from NON-CDB
4) Start Autoupgrade/DBUA/Manual to upgrade the database.
5) After creating the PDB from the manifest file, we should import the keys, open the keystore, and then run noncdb_to_pdb.sql to get it done.
Do you have any advice on it or suggestion please?
Regards,
Shah Firdous
Hi Shah,
yes, you are correct – AutoUpgrade does not handle the TDE keystore export/import right now. We are waiting for security clearance by our sec team.
The reason for this is that AU would need to know the password – but it does not. We can deal with upgrades when your wallet/keystore is auto-login – but for the plugin operation this does not help.
Still, we are working on a solution for this.
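Until we have that, the manual route looks roughly like this – a sketch with a hypothetical secret and export file:
SQL> administer key management export encryption keys
     with secret "my_secret" to '/tmp/noncdb_keys.p12'
     identified by "wallet_password";
-- and in the target CDB, after plugging in:
SQL> administer key management import encryption keys
     with secret "my_secret" from '/tmp/noncdb_keys.p12'
     identified by "cdb_wallet_password" with backup;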
Cheers,
Mike
Hi Mike,
Kudos for such a neat tool.
We are using dNFS mounts for database files, and the SPFILE is also located on a dNFS mount, for our RAC 19c databases.
I read above that the tool will not be able to handle the upgrade in this scenario, and I wanted to check whether there is a workaround for us to use autoupgrade for our RAC databases, or whether we have to wait for a future release when this feature will be included.
Hi Nick,
there are multiple workarounds:
a) put the SPFILE temporarily into ASM, and then back to its original location
b) The tool got smarter due to the various setups we see out there – have you tried it?
c) At worst, you will need to do the SRVCTL part at the end yourself and point it to the correct SPFILE
Cheers,
Mike
Thank you Mike for the update.
I have just tried ANALYZE mode and thought of checking before we run with DEPLOY mode.
I’ll plan to run without moving the SPFILE on our test cluster and will update you on the execution.
Thank you,
Nick
Hi Mike,
I wanted to provide an update on our upgrade. We upgraded our RAC database having SPFILE on dNFS location using autoupgrade without any issues. Everything completed smoothly.
Thanks for the feedback, Nick!
Hi Mike.
We are looking to use the AutoUpgrade Tool to upgrade a 2-node RAC database.
Could you help us with a sample config file for this configuration?
Thanks
Hi Vinay,
that is very simple:
upg1.source_home=
upg1.target_home=
upg1.sid=
That’s it.
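A filled-in sketch with hypothetical paths – plus the required global log directory – would be:
global.autoupg_log_dir=/home/oracle/autoupgrade
upg1.source_home=/u01/app/oracle/product/12.2.0.1/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.0.0/dbhome_1
upg1.sid=RACDB1
upg1.sid is the local instance SID on the node where you start AutoUpgrade. Then:
$ java -jar autoupgrade.jar -config rac.cfg -mode deploy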
Thanks,
Mike
Hi Mike,
I just ran across this tool today and tried it on a 2 node RAC cluster. I upgraded from 19.17 to 19.19. While upgrading the database, this tool shuts down INST2 while it works on INST1.
Currently, when I do a point release such as this, I’ve been upgrading INST1 – which includes a shutdown of INST1, a move of the ORACLE_HOME, and then datapatch. Once complete and verified, I do the same on INST2. Is there a switch with this tool to do the same so that users always have access throughout the upgrade process? I ran autoupgrade.jar build.version 22.3.220503.
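For context, my current per-node flow looks roughly like this (database and instance names are illustrative):
$ srvctl stop instance -d mydb -i mydb1
(switch the ORACLE_HOME on node 1 to the patched 19.19 home)
$ srvctl start instance -d mydb -i mydb1
$ $ORACLE_HOME/OPatch/datapatch -verbose    # run once, after all instances are on the new home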
Thanks!
Chris
Hi Chris,
yes, the next release of AU will handle “rolling”. Unfortunately I am so far behind with blog posts.
At the moment, AU does a restart which is not wanted in a RAC env. There are some extras we needed to add for rolling.
Please stay tuned – I will announce the rolling capability as soon as we release it.
Cheers
Mike