In the previous blog post I explained how to create a config file for the AutoUpgrade utility. Now I'd like to show how you can use some important advanced options to have AutoUpgrade perform or skip specific tasks.
I will describe only the most important advanced parameters, which I think can be very useful in your environments. Of course, there are many more, as you can see in the Database Upgrade Guide.
AutoUpgrade – Step-by-step
- The new AutoUpgrade Utility – Download, documentation and supported versions
- Create and adjust the config file for AutoUpgrade
- Config file for AutoUpgrade – Advanced options
- Config file for AutoUpgrade – Tweaking init parameters
- AutoUpgrade: ANALYZE, FIXUPS, UPGRADE and DEPLOY modes
- AutoUpgrade: Where do you find all the logfiles?
- UPG: The AutoUpgrade Command Line Interface
- Upgrading Multitenant databases with AutoUpgrade
- Moving to a new server with AutoUpgrade
- How to tweak the hidden settings in AutoUpgrade
- AutoUpgrade and Data Guard, RAC, Restart and non-CDB to PDB
- AutoUpgrade and Wallets
The full list of (almost) all parameters
You will find the list of almost all parameters, global and local, in the Database Upgrade Guide:
Why do I write "almost"? We have no control over when our documentation colleagues are able to publish a refreshed version of the docs. Hence, it can happen that our downloadable tool has a new parameter which is not yet in the documentation. While I'm writing this, drop_grp_after_upgrade is such an example. It will be in the official documentation as soon as it gets refreshed the next time.
Guaranteed Restore Points – Yes or No?
In order to have a valid fallback in place, AutoUpgrade creates a guaranteed restore point you can flash back to. This is meant for situations where something fails – the tool then allows you to flash the database back. After a successful upgrade, we do not delete the guaranteed restore point by default (see drop_grp_after_upgrade below).
But there are situations where you'd like to override this setting – or where you need to. For instance, if your database is in NOARCHIVELOG mode, you can't create a restore point – and neither can the AutoUpgrade utility. Or you have a different restore plan and hence don't want a restore point. Or you plan to upgrade a Standard Edition database: in SE and SE2, guaranteed restore points are not available. But for SE databases there's no need to use the parameter below – we detect the edition automatically, and autoupgrade.jar does not attempt to set a GRP in an SE database.
Whatever the reason, this is how you disable the creation of a guaranteed restore point with AutoUpgrade:
prefix.restoration=no
Be careful when you use this setting – having a correct and working fallback strategy is key.
Drop the Guaranteed Restore Point automatically
By default, we keep the GRP even after a successful upgrade. This is important, for instance, when you'd like to test certain things but plan to flash back the night after to rehearse another test run.
Since the June 2019 version of the AutoUpgrade tool (20190620) you can override this default behavior by setting:
prefix.drop_grp_after_upgrade=yes
Then the GRP will be dropped by autoupgrade.jar after a successful upgrade.
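For a one-shot upgrade where you want the fallback protection during the run but no leftover GRP afterwards, the two GRP-related settings combine naturally in the config file. This is just a sketch – upg1 is the sample prefix from the earlier config-file post:

```
upg1.restoration=yes
upg1.drop_grp_after_upgrade=yes
```

restoration=yes is the default, so only the second line is strictly needed – but spelling both out makes the intent obvious to the next person reading the config.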
Changing the Start Time
The AutoUpgrade parameter start_time is optional. If you don't set it, the upgrade starts immediately once you invoke the tool. The sample config uses start_time=NOW, which is equivalent to not setting it explicitly.
But you can adjust the start_time as well. The format is DD/MM/YYYY HH24:MI:SS.
For example, you can set it to:
prefix.start_time=15/06/2019 20:00:00
prefix.start_time=1/7/2019 10:30:30
prefix.start_time=01/07/2019 23:59:59
But you can't set it to any of these – they will all fail:
prefix.start_time=06/15/2019 10:00:00
prefix.start_time=12/12/2019 12:00:00am
prefix.start_time=2019/01/19 22:00:00
prefix.start_time=15-06-2019 21:00:00
The default will be:
upg1.start_time=NOW
If you alter it, remember to use the slash "/" as the date separator and specify the day first, then the month. For the time, use the colon ":" and the 24-hour format.
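Since start_time is a local parameter, you can also stagger several databases in one config file – for instance, start one immediately and schedule a second for a maintenance window. A sketch with assumed prefixes and dates:

```
upg1.start_time=NOW
upg2.start_time=15/06/2019 20:00:00
```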
Adding scripts before and after
When we started the beta test, one of the first requirements from customers was the ability to execute a script before and after the upgrade. Usually the database upgrade is only a small portion of a change process: an application server farm needs to be taken down, applications need to be stopped – and everything, of course, needs to be restarted afterwards.
Hence, we built in these two global parameters, which can be used as local parameters as well on a per-DB basis:
global.before_action=/home/oracle/scripts/before.sh
global.after_action=/home/oracle/scripts/after.sh
prefix.before_action=/home/oracle/scripts/before_DB12.sh
prefix.after_action=/home/oracle/scripts/after_DB12.sh
You can call the following script types with it:
- Unix shell (.sh)
- Microsoft Windows batch (.bat, .cmd)
- Microsoft Windows PowerShell (.ps1)
- Oracle SQL file (.sql) – as a local operation only, designated by the prefix
In addition, with a special Y flag after the script, you tell autoupgrade.jar to stop in case the script fails:
prefix.before_action=/home/oracle/scripts/before_DB12.sh Y
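A before_action script could then look like the following minimal sketch. The application name, log path, and stop command are all assumptions – replace them with your own:

```shell
#!/bin/sh
# Minimal sketch of a before_action script (names and paths are assumptions).
# With the trailing "Y" flag in the config file, AutoUpgrade stops the job
# if this script exits with a non-zero status.

LOG=/tmp/before_DB12.log

echo "$(date): stopping application tier before upgrade of DB12" >> "$LOG"

# Placeholder for the real shutdown command,
# e.g. systemctl stop myapp or an application-specific stop script.
stop_app() {
    return 0
}

if ! stop_app; then
    echo "$(date): application stop failed - aborting upgrade" >> "$LOG"
    exit 1    # non-zero exit + "Y" flag = AutoUpgrade stops here
fi

echo "$(date): application tier is down, upgrade may proceed" >> "$LOG"
```

The only contract with AutoUpgrade is the exit status – everything else (logging, what exactly gets stopped) is up to you.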
Remove Underscores and Events
I've just dealt with a strange customer case where it turned out that the issue was caused by an event still set from a problem that happened years ago. Once the customer removed the event, all went fine. For such purposes you can set:
global.remove_underscore_parameters=yes
This is a global-only parameter. If you'd like to tweak parameters on a per-database level, use the parameter options described in the following blog post. By default, we keep underscore parameters and events in the spfile.
Postpone Recompilation
There are situations where you don't want the recompilation after the upgrade to happen automatically. This could be the case when you have too many CPU cores presented to the database, or too much additional load at the same time. See also: utlrp.sql – How to decrease resource consumption with utlprp.sql. For such situations, you can use:
prefix.run_utlrp=no
This is a local parameter you need to set on a per-database level.
Postpone Time Zone Adjustment
Some may also want to postpone or even skip the time zone adjustment. autoupgrade.jar does all the necessary time zone adjustments for the highest time zone file available in the destination home. But there are cases where you may not want this – be it that you'd prefer a specific time zone version everywhere, that you'd like to handle this yourself, or, in some rare cases, that it simply takes too much downtime. Anyway, this is how you avoid the time zone adjustment after the upgrade has been completed:
prefix.timezone_upg=no
This is a local parameter. It needs to be set on a per-database level.
Summary
This is already a long list of parameters you can set when you use autoupgrade.jar. There are many more options; I summarized only the most important non-default ones. You can find the full list in the AutoUpgrade Utility Configuration Files section of the Oracle 19c Upgrade Guide.
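To put it all together, a config file using several of the parameters from this post could look like the following sketch. The log directory, paths, SID, and homes are placeholders – adjust them to your environment:

```
global.autoupg_log_dir=/home/oracle/autoupgrade
global.before_action=/home/oracle/scripts/before.sh
global.after_action=/home/oracle/scripts/after.sh

upg1.sid=DB12
upg1.source_home=/u01/app/oracle/product/12.2.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/19
upg1.start_time=15/06/2019 20:00:00
upg1.drop_grp_after_upgrade=yes
upg1.run_utlrp=no
upg1.timezone_upg=no
```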
–Mike
If you create a restore point in SE, you can simply restore with the EE software and open with RESETLOGS from SE without any issues. This is just a workaround 🙂
Hi Krasimir,
well, you are committing a license violation if you have an SE or SE2 license but use the EE executable to open the database.
Cheers,
Mike
Hello Mike,
This isn't related to licensing, but in general, yes, you are correct. Also, this was just a test for my own knowledge of what Oracle does or doesn't do in the SE case.
It is very interesting why Oracle keeps flashback logs for a restore point in SE: you cannot restore from them and you have to use RMAN to restore the database – so why are the flashback logs there? Oracle should not keep something for nothing, because this can lead to performance issues due to the GRP 🙂
So, a simple question.
Regards,
Krasi
Hi Krasi,
I know what you mean – and yes, I agree with your position.
The only way to address this is via an SR followed by a bug filed against SE2.
As I don't own any of the code or products involved here, that is the only way I can think of.
Thanks,
Mike
Hello Mike,
Thanks for sharing!
Where I work has many databases 12.1 and 12.2 and we are already preparing to migrate to 19c.
I feel more comfortable using DBUA instead of AutoUpgrade, but maybe this is just a matter of familiarity with the tool, I don't know …
If the upgrade is local and only one instance at a time, is there any advantage in using AutoUpgrade instead of DBUA? Are all the options that DBUA offers available in AutoUpgrade?
Piero,
there are a million advantages here.
First of all, the DBUA is not really resumable, whereas "dbupgrade" and "autoupgrade" both are.
Then, we react to issues often overnight or within a few days, whereas many DBUA issues you find on the blog got fixed only months later.
There are other reasons – but if you are comfortable with the DBUA, feel free to use it. My experience is unfortunately that, as soon as you hit a bad situation with DBUA, you will blame the "UPGRADE" as not working correctly. And in most cases this is not the case. I just helped two customers last week with failed DBUA upgrades who both could complete the command line upgrade flawlessly.
Most important, who said that autoupgrade can upgrade only one instance at one time????
That is unfortunately only the case for the DBUA, which can (and always could) upgrade only one database at a time per home, never multiple databases. The command line always could – and of course, AutoUpgrade is designed to upgrade as many databases at the same time on your server as you like, constrained only by your compute power.
Hope this helps – go with the future! AutoUpgrade it is!
Cheers,
Mike
Thank you Mike,
I agree that DBUA has some problems, but every time I used it to upgrade, it always used the latest available version and latest patch in both databases before the upgrade, and this greatly reduces the probability of an error occurring.
But I will also use AutoUpgrade until I get used to the new tool
Hi Piero,
then please always check whether the patch bundle you use includes a new patched version of the DBUA as well.
Cheers,
Mike
Hi Mike,
we are doing upgrades with the DBUA method and use options like the ones below. Can we use the same options in AutoUpgrade? If yes, which parameters do we use in the configuration file? Please advise – it would be a great help to us.
- Select Upgrade Options – select "Enable Parallel Upgrade"; leave "Set User Tablespace to Read Only During Upgrade" unchecked
- Configure Network – deselect the new 18c listener creation
- Configure Management – uncheck "Configure Enterprise Manager (EM) Database Express"; confirm "Register with Enterprise Manager (EM) Cloud Control" is unchecked
Regards,
Satya
Hi Satya,
the tablespace-read-only option is something which never worked well in DBUA imho – I did this manually whenever needed.
AutoUpgrade does not create a new listener, and it won't configure EM Express – but enabling EM Express is a simple call if you need it, as it is embedded by default and needs only the XML listener and the port.
For the registration with EM CC, we are working on this with the EM team.
Cheers,
Mike
Hi Satya, we are working to implement what’s missing.
But actually I wouldn't call EM Express a miss, as this is done via one call.
Furthermore, the Offline Backup procedure never worked correctly in DBUA when I tested it – I would trust the GRP feature of AutoUpgrade more.
Cheers,
Mike
Thank you so much for the quick response – it cleared up my doubts. We are good to go on the live database 🙂
Hi Mike,
I am trying to test aborting an upgrade while it is running and then restoring the database using AutoUpgrade.
The restore process hung – it is trying to downgrade srvctl ("Downgrade srvctl to source home"). Please check and help us.
upg> status -job 106
Progress
———————————–
Start time: 20/05/11 04:23
Elapsed (min): 16
End time: N/A
Last update: 2020-05-11T04:36:47.508
Stage: DBUPGRADE
Operation: EXECUTING
Status: RUNNING
Pending stages: 4
Stage summary:
SETUP <1 min
PREUPGRADE <1 min
PRECHECKS <1 min
GRP <1 min
upg> abort -job 106
Are you sure you want to abort job [106] ? [y|N] y
Abort job: [106][SKAUTORC1]
upg>
upg> status -job
————————————————-
Errors in database [SKAUTORC1]
Stage [DBUPGRADE]
Operation [STOPPED]
Status [ERROR]
Info [
Error: UPG-1419
[Unexpected Exception Error]
Cause: This indicates that the upgrade has been killed, by an abort or restore command.
For further details, see the log file located at /cmshomes/oracle_scripts/AUTOUPG_TEST/AUTOUPG_LOGS_19C/SKAUTORC/SKAUTORC1/106/autoupgrade_20200511_user.log]
————————————————-
Logs: [/cmshomes/oracle_scripts/AUTOUPG_TEST/AUTOUPG_LOGS_19C/SKAUTORC/SKAUTORC1/106/autoupgrade_20200511_user.log]
————————————————-
Unrecognized cmd: status -job
upg> restore -job 106
The job 106[SKAUTORC1] on stage [DBUPGRADE] has the status [ABORTED]
Are you sure you want to restore? all progress will be lost [y/N] y
upg>
upg> status -job 106
Progress
———————————–
Start time: 20/05/11 04:23
Elapsed (min): 21
End time: N/A
Last update: 2020-05-11T04:43:58.682
Stage: GRPRESTORE
Operation: EXECUTING
Status: RUNNING
Pending stages: -1
Stage summary:
SETUP <1 min
PREUPGRADE <1 min
PRECHECKS <1 min
GRP <1 min
PREFIXUPS 3 min
DRAIN 1 min
DBUPGRADE 13 min
GRPRESTORE
upg> status -job 106
Progress
———————————–
Start time: 20/05/11 04:23
Elapsed (min): 1,317
End time: N/A
Last update: 2020-05-11T04:50:36.207
Stage: GRPRESTORE
Operation: STOPPED
Status: ERROR
Pending stages: -1
Stage summary:
SETUP <1 min
PREUPGRADE <1 min
PRECHECKS <1 min
GRP <1 min
PREFIXUPS 3 min
DRAIN 1 min
DBUPGRADE 13 min
GRPRESTORE 1291 min (IN PROGRESS)
Job Logs Locations
———————————–
Logs Base: /cmshomes/oracle_scripts/AUTOUPG_TEST/AUTOUPG_LOGS_19C/SKAUTORC/SKAUTORC1
Job logs: /cmshomes/oracle_scripts/AUTOUPG_TEST/AUTOUPG_LOGS_19C/SKAUTORC/SKAUTORC1/106
Stage logs: /cmshomes/oracle_scripts/AUTOUPG_TEST/AUTOUPG_LOGS_19C/SKAUTORC/SKAUTORC1/106
TimeZone: /cmshomes/oracle_scripts/AUTOUPG_TEST/AUTOUPG_LOGS_19C/SKAUTORC/SKAUTORC1/temp
Additional information
———————————–
Details:
Restoration, phase 7/{0} – Downgrade srvctl to source home
Satya,
please share the SR number with me.
Thanks,
Mike
HI Mike,
Created SR 3-23153131281 please look into this.
Regards,
Satya
Hi Satya,
please work with Oracle Support. Get back to me once you don’t get a solution from Support within a reasonable time frame and after raising the severity if necessary to Sev.1.
Please understand that I can't help everybody – I do my best, but the blog is my hobby and I don't get paid for it 😉 Actually, I pay for it completely myself … but you have a support contract 😉
Cheers,
Mike
Hi Mike!
-mode analyze does not check if the db is in archivelog mode but -mode deploy will fail if archivelog is off.
2020-05-19 15:24:02.719 INFO Analyzing xxx, 77 checks will run using 8 threads
2020-05-19 15:24:14.941 ERROR The following checks have ERROR severity and no fixup is available or
the fixup failed to resolve the issue. Please fix them manually before continuing:
xxx ARCHIVE_MODE_ON
2020-05-19 15:24:15.040 INFO Starting error management routine
2020-05-19 15:24:15.041 INFO Ended error management routine
2020-05-19 15:24:15.044 ERROR Error occurred while running the dispatcher for job 101
Cause: There are checks present in the database with ERROR severity and its fixup is not available, therefore the AutoUpgrade execution cannot continue. This will require a manual fix to the database.
I think this could be checked during the analyze mode so that there are no surprises during the upgrade.
…I duplicated the DB to a restore VM and disabled archivelog so I don’t have to care about backup and recovery dest size.
cheers
Daniel
Hi Daniel,
actually it is very good that you report this, as we already have an internal discussion about whether the chosen approach is good or not.
At least it is not obvious … let me explain.
You run an ANALYZE – and it looks great. All passed, databases are ready to be upgraded.
Then you attempt a DEPLOY and it fails.
I have encountered the same thing several times now. The logic is that the ANALYZE will only fail if the analyze itself fails – but not if it finds something which may cause trouble.
The current approach is that you need to browse through the HTML file in /prechecks – and check for such issues.
Can you please check if the NOARCHIVELOG mode was mentioned there? I’m curious …
I assume you agree with me – ANALYZE finding something should report it. We are discussing internally how to improve this.
But I will copy your comment to the team.
Thanks (as always)!
Mike
Hi,
yes the error is there
CheckName: ARCHIVE_MODE_ON FixUp Available: NO Severity: ERROR Stage: PRECHECKS
beside others *coughcough*
To be honest, I didn't even know that there is a precheck directory with the results.
As I see now it was also mentioned in the log:
2020-05-19 15:17:35.009 WARNING The following checks have ERROR severity and no fixup is available or
the fixup failed to resolve the issue. Please fix them manually before continuing:
xxx ARCHIVE_MODE_ON – PreChecks.executeChecks
It seems that humans are not able to read any further after "successful" appears somewhere on the screen – or at least I'm not …
This is the 2nd time I used autoupgrade.jar and the first time all went smooth.
Additional Feedback:
the best feature for me is resume -job
I needed it very often during the upgrade because, due to lack of space, I installed the Oracle home into a soft-linked directory – which is not a good idea, as I know now.
Additional Question:
How is the restore -job handled in case of an SE2 DB as there is no Flashback?
Thanks for the quick response (as always) 😉
Daniel
Hi Daniel,
we are working already on improving this – and I fully agree with you here.
Regarding the restore job with SE2, the command will tell you that you can’t restore.
upg> restore -job 100
Restoration is inactive based on current configuration for database DB12.
Cheers,
Mike
Hi Mike,
We are currently experimenting with autoupgrade within our test environment. I’m playing with pre/post action scripts but can’t seem to make them work with parameters specifically for each upgrade. I can’t seem to find examples on MOS or on Google on how to specify database name within those scripts. We already tried playing with configuration parameter “env”. (Example: upg1.env=DATABASE:/ARG_U11/, generates error “The provided custom environment data is invalid for entry upg1”). The second thing we tried is to use $ORACLE_SID variable but that also doesn’t seem to work. Do you have any tips for us?
Cheers,
Lorenzo
Hi Lorenzo,
see here for an example:
https://mikedietrichde.com/2020/03/18/autoupgrade-and-the-compatible-parameter/
The ". db19" I call is a shell script which switches my environment and sets ORACLE_SID etc.
Hope this helps – thanks,
Mike
Hi Mike,
Thanks for the information. Is it possible to share the content of your “. db19” shell script please? We can’t seem to propagate the ORACLE_SID in our (pre/post) action scripts.
Lorenzo
Sure – please find it here:
https://mikedietrichde.com/2020/03/18/autoupgrade-and-the-compatible-parameter/
Cheers,
Mike
I have a few questions I would like to find out more about but can't seem to find anything useful. Let me explain my situation first.
We have 100+ DB servers on Windows 2012 (12.1.0.2) that we want to upgrade to 19c. We will be installing Oracle 19c, creating a new listener, and configuring the databases to run against the 19c listener on the servers. My questions are:
1) Can AutoUpgrade be run off the server, i.e., from an upgrade-type server with Oracle 19c installed that can connect to the database and upgrade it remotely?
2) If the above is possible, what happens to the copy of the relevant files from the 12c home to the 19c home, such as the spfile and password file?
3) What happens to the Windows service deletion/re-creation?
With the above explained: when I ran AutoUpgrade on a test server, I unfortunately found the service was not deleted and re-created in the correct home. Is this a manual step?
Thanks
Hakan
Hi Hakan,
AU always runs locally, never remotely.
AU will move the relevant files. Hence, it is very important to use the same OS user for the old and new homes.
AU will take care of oradim as well.
Cheers,
Mike
Hi Mike,
Apologies for the late response. Been busy upgrading our databases using the automation tool!
I have a few questions I wonder if you can help with.
1) On Windows I am finding the SID is never recreated. We have to do this manually. I assume this is OK?
2) I am finding some databases taking 12+ hours to complete, while others take less than 40 minutes. Looking at the catupgrd logs I can see why it took so long. I have logged an SR with Oracle Support, so you are aware.
Looking at one of the catupgrd logs, this is what took the longest, and I'm not sure why:
Elapsed: 00:00:00.08
19:46:45 SQL>
19:46:45 SQL> — First remove XDBFolderListing.xsd and then re-register it with genTables false
19:46:45 SQL> declare
19:46:45 2 c NUMBER;
19:46:45 3 csxxinurl VARCHAR2(2000) := ‘http://xmlns.oracle.com/xdb/XDBFolderListing.xsd’;
19:46:45 4 begin
19:46:45 5 select count(*) into c from xdb.xdb$schema s
19:46:45 6 where s.xmldata.schema_url = csxxinurl;
19:46:45 7
19:46:45 8 if c > 0 then
19:46:45 9 dbms_xmlschema.deleteschema(csxxinurl, dbms_xmlschema.delete_cascade);
19:46:45 10 end if;
19:46:45 11 exception when others then raise;
19:46:45 12 end;
19:46:45 13 /
PL/SQL procedure successfully completed.
Elapsed: 05:41:59.93
01:28:45 SQL>
01:28:45 SQL> declare
01:28:45 2 FLXSD BFILE := dbms_metadata_hack.get_bfile(‘xdbfolderlisting.xsd’);
01:28:45 3 FLURL VARCHAR2(2000) := ‘http://xmlns.oracle.com/xdb/XDBFolderListing.xsd’;
01:28:45 4
01:28:45 5 begin
01:28:45 6 dbms_metadata_hack.cre_dir();
01:28:45 7 xdb.dbms_xmlschema.registerSchema(FLURL, FLXSD, FALSE, TRUE, FALSE, FALSE, FALSE, ‘XDB’);
01:28:45 8 end;
01:28:45 9 /
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.94
01:28:45 SQL>
01:28:45 SQL> — First remove stats.xsd and then re-register it
01:28:45 SQL> declare
01:28:45 2 c NUMBER;
01:28:45 3 inurl VARCHAR2(2000) := ‘http://xmlns.oracle.com/xdb/stats.xsd’;
01:28:45 4 begin
01:28:45 5 select count(*) into c from xdb.xdb$schema s
01:28:45 6 where s.xmldata.schema_url = inurl;
01:28:45 7
01:28:45 8 if c > 0 then
01:28:45 9 dbms_xmlschema.deleteschema(inurl, dbms_xmlschema.delete_cascade);
01:28:45 10 end if;
01:28:45 11 exception when others then raise;
01:28:45 12 end;
01:28:45 13 /
PL/SQL procedure successfully completed.
Elapsed: 04:09:33.02
05:38:18 SQL>
05:38:18 SQL> /* ————————————————————————–*/
05:38:18 SQL> /* register statistics schema
05:38:18 SQL> /* ————————————————————————–*/
05:38:18 SQL> declare
05:38:18 2 STATSXSD BFILE := dbms_metadata_hack.get_bfile(‘stats.xsd’);
05:38:18 3 STATSURL VARCHAR2(2000) := ‘http://xmlns.oracle.com/xdb/stats.xsd’;
05:38:18 4 n integer;
05:38:18 5 begin
05:38:18 6 select count(*) into n from xdb.xdb$schema s
05:38:18 7 where s.xmldata.schema_url = ‘http://xmlns.oracle.com/xdb/stats.xsd’;
05:38:18 8
05:38:18 9 dbms_metadata_hack.cre_dir();
05:38:18 10 if (n = 0) then
05:38:18 11 xdb.dbms_xmlschema.registerSchema(STATSURL, STATSXSD, FALSE, TRUE,
05:38:18 12 FALSE, TRUE, FALSE, ‘XDB’);
05:38:18 13 end if;
05:38:18 14 end;
05:38:18 15 /
3) I notice the password file is not being copied from the 12c home to the new 19c home. The homes are on the same Windows server, so I'm not sure why this would not happen.
Thanks
Hakan
Hi Hakan,
please open an SR and upload the logfiles you zipped together with "java -jar autoupgrade.jar -config MYconfig.cfg -zip".
Without the logs, it is very hard to tell more.
Cheers,
Mike
Hi Mike,
Is it possible to restore a database upgraded from 12.1.0.2 to 19c using a guaranteed restore point (GRP) if the database was upgraded with upg1.timezone_upg=yes in the autoupgrade.cfg file?
I read somewhere that this is not possible if the time zone (TSTZ) data has been changed.
Thank you
Hi Joe,
with GRPs this is absolutely possible and flawless. Only a database DOWNGRADE with catdwgrd/catrelod would require that your old home gets the same time zone patch as your target. But flashback is flawless and goes beyond that.
Cheers,
Mike