Oracle AutoUpgrade between two servers

I'm trying to catch up with missing AutoUpgrade posts, and ideally I should have written this one a while ago. Just yesterday a colleague asked me about it. So it's time now to describe how to use Oracle AutoUpgrade between two servers.

Photo by Alex Motoc on Unsplash

Documentation

The Oracle Database 19c Upgrade Guide has a section about this scenario.

The procedure consists of 5 steps:

  1. Run the -mode analyze step on source server
  2. Run the -mode fixups step on source server
  3. Shut down and restore your database to the target server (not done by AutoUpgrade)
  4. Start your database in STARTUP UPGRADE mode on target server
  5. Run the -mode upgrade step on target server

To avoid failing at some point, I’d like to give you step-by-step instructions here.

My Setup

I use two virtual machines: an OL6.10 one with Oracle 12.1.0.2, and an OL7.6 one with Oracle 19.8.0. I will do the required steps on the 12.1.0.2 database without the presence of Oracle 19c (it is not supported on OL6), then shut down my database and copy it to the new server.

If I’d be interested in minimal downtime for this solution, I’d install Oracle 12.1.0.2 software on my OL7 machine, and setup a physical standby database. But as this blog post is not about Minimal Downtime but instead about how to use AutoUpgrade between two servers, I won’t cover the standby option here.

Preparation

First, I download the most recent AutoUpgrade tool to my source machine.
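Once downloaded, a quick way to confirm which build you are running is the -version flag (shown here with whatever java is on the PATH; as you will see in a minute, you may need a newer JDK to actually run the tool):

$ java -jar autoupgrade.jar -version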

In addition, I need to prepare a config file:

global.autoupg_log_dir=/home/oracle/upg_logs
#
# Database number 1 
#
upg1.dbname=DB12
upg1.start_time=NOW
upg1.source_home=/u01/app/oracle/product/12.1.0.2
upg1.target_home=/u01/app/oracle/product/19
upg1.sid=DB12
upg1.log_dir=/home/oracle/upg_logs
upg1.upgrade_node=localhost
upg1.target_version=19
upg1.restoration=no

You may notice that I specify a target_home which does not exist on this server. It will simply be ignored.

Analyze Mode

To start my upgrade, I need to execute AutoUpgrade on the source server with -mode analyze. Neither my OL6 nor my 12.1.0.2 Java versions are recent enough to work here, but I have an 18c Oracle Home as well. Hence, I will use the JDK installed with 18c. In the worst case, you may need to refresh the Java version in your current home.
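A quick way to see whether a JDK is recent enough (AutoUpgrade needs Java 8 or newer) is simply checking the version; the paths below are from my setup:

$ /u01/app/oracle/product/12.1.0.2/jdk/bin/java -version
$ /u01/app/oracle/product/18/jdk/bin/java -version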

$ /u01/app/oracle/product/18/jdk/bin/java -jar /u01/app/oracle/product/12.1.0.2/rdbms/admin/autoupgradeOL6.jar -config DB12.cfg -mode analyze

AutoUpgrade tool launched with default options
Processing config file ...
+--------------------------------+
| Starting AutoUpgrade execution |
+--------------------------------+
1 databases will be analyzed
Type 'help' to list console commands
upg> Job 100 completed
------------------- Final Summary --------------------
Number of databases            [ 1 ]

Jobs finished successfully     [1]
Jobs failed                    [0]
Jobs pending                   [0]
------------- JOBS FINISHED SUCCESSFULLY -------------
Job 100 for DB12

The analyze run writes a db12_preupgrade.html report file into the ./prechecks subdirectory of the log directory.
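If you want to open the report yourself: with my settings and job number 100, it should end up roughly here (the layout is log_dir/db_name/job_number):

$ ls /home/oracle/upg_logs/DB12/100/prechecks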

In my case, everything the report flags can either be fixed automatically or safely ignored.

Fixups Mode

As the next step, I'm executing the -mode fixups phase. This will make changes to my source database now. Hence, I'd recommend doing this as closely as possible to the downtime window you have scheduled for the upgrade to the new server.

$ /u01/app/oracle/product/18/jdk/bin/java -jar /u01/app/oracle/product/12.1.0.2/rdbms/admin/autoupgradeOL6.jar -config DB12.cfg -mode fixups

AutoUpgrade tool launched with default options
Processing config file ...
+--------------------------------+
| Starting AutoUpgrade execution |
+--------------------------------+
1 databases will be processed
Type 'help' to list console commands

upg> lsj
+----+-------+---------+---------+-------+--------------+--------+-------------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME| UPDATED|      MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+-------------+
| 101|   DB12|PREFIXUPS|EXECUTING|RUNNING|20/07/31 12:13|12:14:08|Remaining 4/4|
+----+-------+---------+---------+-------+--------------+--------+-------------+
Total jobs 1

upg> Job 101 completed
------------------- Final Summary --------------------
Number of databases            [ 1 ]

Jobs finished successfully     [1]
Jobs failed                    [0]
Jobs pending                   [0]
------------- JOBS FINISHED SUCCESSFULLY -------------
Job 101 for DB12

As you may notice, it created a new job 101 – and of course a new directory tree for this job under the log directory you defined.

In the ./prefixups subdirectory you will find the prefixups_db12.log. It lists all the actions AutoUpgrade performs here. In my particular case these are:

BEGIN
   SYS.DBMS_STATS.GATHER_INDEX_STATS('SYS', 'I_OBJ#');
   SYS.DBMS_STATS.GATHER_SCHEMA_STATS('SYS');
   SYS.DBMS_STATS.GATHER_SCHEMA_STATS('SYSTEM');
END;
/

BEGIN
   SYS.DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
END;
/

PURGE DBA_RECYCLEBIN
/

That’s it.

Move the database

Now I can shut down my database and copy it to the new server. In my case, I use the simplest approach: I copy all files, including the redo logs and control files, plus the SPFILE and the password file, to the new machine. I keep the exact same directory structure.
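Just as a sketch of what this looks like in my environment (the target host name here is made up, and scp is only one option; rsync or a storage-level copy work just as well):

SQL> shutdown immediate

$ # all datafiles, redo logs and controlfiles live under /u02/oradata/DB12 in my setup
$ scp -rp /u02/oradata/DB12 oracle@targethost:/u02/oradata/
$ # SPFILE and password file go into the dbs directory of the 19c home on the new server
$ scp -p /u01/app/oracle/product/12.1.0.2/dbs/spfileDB12.ora oracle@targethost:/u01/app/oracle/product/19/dbs/
$ scp -p /u01/app/oracle/product/12.1.0.2/dbs/orapwDB12 oracle@targethost:/u01/app/oracle/product/19/dbs/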

In addition, I also prepare my environment file and add an entry to /etc/oratab.
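For completeness, the oratab entry and a minimal environment file could look like this (db19 is simply the name of the file I source later; adjust the paths to your installation):

# entry in /etc/oratab
DB12:/u01/app/oracle/product/19:N

# minimal environment file ~/db19
export ORACLE_SID=DB12
export ORACLE_HOME=/u01/app/oracle/product/19
export PATH=$ORACLE_HOME/bin:$PATH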

Adjust the config file

A very important step is adjusting the config file. As there is no source home on the target server, I need a small tweak: the source_home now points to an arbitrary existing directory, such as /tmp.

global.autoupg_log_dir=/home/oracle/logs
upg1.dbname=DB12
upg1.start_time=NOW
upg1.source_home=/tmp
upg1.target_home=/u01/app/oracle/product/19
upg1.sid=DB12
upg1.log_dir=/home/oracle/logs
upg1.upgrade_node=localhost
upg1.target_version=19
upg1.restoration=no

In a later version of AutoUpgrade this may not be necessary anymore.

Start in STARTUP UPGRADE

The next step for me is starting my 12.1.0.2 database with the new 19c home in STARTUP UPGRADE mode:

$ . db19
[DB12] oracle@hol:/u02/oradata/DB12

$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jul 31 14:10:10 2020
Version 19.8.0.0.0

Copyright (c) 1982, 2020, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup upgrade
ORACLE instance started.

Total System Global Area 1048575776 bytes
Fixed Size		    8904480 bytes
Variable Size		  268435456 bytes
Database Buffers	  763363328 bytes
Redo Buffers		    7872512 bytes
Database mounted.
Database opened.

Upgrade Mode

You may ask yourself: why am I not running the usual -mode deploy at this stage but -mode upgrade instead? The answer is simple: we don't have a source home on the target server. Hence, we don't need a guaranteed restore point – and we wouldn't even be able to create one. Furthermore, the deploy mode combines analyze, fixups and upgrade with some extras, but we have done the fixups already.

$ java -jar $OH19/rdbms/admin/autoupgrade.jar -config DB12.cfg -mode upgrade

AutoUpgrade tool launched with default options
Processing config file ...
+--------------------------------+
| Starting AutoUpgrade execution |
+--------------------------------+
1 databases will be processed
Type 'help' to list console commands

upg> lsj
+----+-------+---------+---------+-------+--------------+--------+------------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME| UPDATED|     MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+------------+
| 100|   DB12|DBUPGRADE|EXECUTING|RUNNING|20/07/31 14:16|14:19:51|10%Upgraded |
+----+-------+---------+---------+-------+--------------+--------+------------+
Total jobs 1

Here we go.

upg> Job 100 completed
------------------- Final Summary --------------------
Number of databases            [ 1 ]

Jobs finished successfully     [1]
Jobs failed                    [0]
Jobs pending                   [0]
------------- JOBS FINISHED SUCCESSFULLY -------------
Job 100 for DB12

And here is the upgrade summary:

Start of Summary Report
------------------------------------------------------

Oracle Database Release 19 Post-Upgrade Status Tool    07-31-2020 14:38:0
Database Name: DB12

Component                               Current         Full     Elapsed Time
Name                                    Status          Version  HH:MM:SS

Oracle Server                          UPGRADED      19.8.0.0.0  00:12:31
Oracle Workspace Manager               UPGRADED      19.8.0.0.0  00:00:29
Oracle Real Application Clusters     OPTION OFF      19.8.0.0.0  00:00:00
Oracle XML Database                    UPGRADED      19.8.0.0.0  00:01:39
Datapatch                                                        00:02:46
Final Actions                                                    00:03:25
Post Upgrade                                                     00:00:21

Total Upgrade Time: 00:20:09

Database time zone version is 18. It is older than current release time
zone version 32. Time zone upgrade is needed using the DBMS_DST package.

Grand Total Upgrade Time:    [0d:0h:27m:22s]

Flawless!
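One leftover the summary points out: the database still uses time zone file version 18, while 19.8.0 ships version 32. You can let AutoUpgrade take care of this (there is a timezone_upg option for the config file), or handle it later with the DBMS_DST package. A quick check of the current state, just as a sketch:

SQL> SELECT filename, version FROM v$timezone_file;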

Can we plugin?

Just out of curiosity, I wanted to know if the plugin operation works in this case as well.

But this is something I will cover in a separate blog post.

Further Links and Information

–Mike
