I received this question twice within a few days. And I remembered that a little while ago I wrote something on the blog about it. But as this “while” turned out to be 11 years ago, I came to the conclusion that it is time to refresh the topic – and write How long will my database upgrade take? Revisited …

Is there a query?
Before I go into the details, an important fact upfront: There is neither a query nor a tool that gives you a rough estimate or a precise figure of how long your upgrade will take. There is exactly one way to gather this information: you need to test the upgrade on comparable hardware, with a copy of the database you will upgrade later in production.
Testing in an OVM environment with just two CPU cores assigned to it is not a comparable setup when your production database runs on an Exadata with 96 CPU cores.
Progress bars are inherently inaccurate – that's why neither AutoUpgrade nor dbupgrade gives you more than precise information about the current phase. AutoUpgrade tells you the percentage it has completed already – but not in relation to time. And dbupgrade tells you in which of the 100-something phases the upgrade currently is. That's the best we can do.
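If you run AutoUpgrade interactively, its console is the quickest way to peek at the current phase; a minimal sketch (the job number 100 is just a placeholder):

upg> lsj
upg> status -job 100

lsj lists all jobs with their current stage, and status -job shows the details for a single job.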
Why do CDB/PDBs always take longer?
Especially in the case where you have a CDB with only one PDB (single tenant), the upgrade typically takes more than twice as long as a regular non-CDB upgrade. The simple reason: a PDB has its own dictionary. Hence, the PDB needs to be upgraded, too.
When you attempt to upgrade a single tenant, the following things happen:
- Upgrade of the CDB$ROOT with a maximum of 8 parallel workers
- This is exactly the same process as upgrading a non-CDB
- Upgrade of the PDB$SEED and the PDB in parallel
- Each of them will have two workers
Now in addition, the PDB$SEED has a forced, hard-wired recompilation phase by design of the Multitenant team. This leads to the following effect, regardless of whether you have only one or many PDBs: while your PDBs get upgraded with two workers each by default, the PDB$SEED hits the recompilation part. I guess it recompiles with 2x the number of CPU_CORES – and therefore takes CPU capacity away from the other workers upgrading your regular user-created PDBs at the same time. We saw in many real-world projects that PDB$SEED took the longest of all PDBs due to the recompilation, and that it also slowed down PDB upgrades happening in parallel if you are short on CPU capacity.
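If you drive the upgrade manually with dbupgrade (which calls catctl.pl), you can influence these worker numbers yourself. A minimal sketch, assuming you have enough CPU headroom; the numbers are examples only:

# -n caps the total number of parallel processes, -N the SQL processes per PDB
$ORACLE_HOME/bin/dbupgrade -n 32 -N 4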
Instead, you can unplug the PDB, plug it into a CDB of the new release, and then upgrade it. Your upgrade duration is then similar to a non-CDB upgrade, but you'll have to add the time for the unplug/plug operation.
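For reference, the unplug/plug path looks roughly like this (pdb1 and the XML path are placeholders, and the usual pre-upgrade checks still apply):

-- in the source CDB
ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/stage/pdb1.xml';
-- in the CDB of the new release
CREATE PLUGGABLE DATABASE pdb1 USING '/stage/pdb1.xml' NOCOPY TEMPFILE REUSE;
ALTER PLUGGABLE DATABASE pdb1 OPEN UPGRADE;
-- then upgrade just this PDB, for instance with: dbupgrade -c "PDB1"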
Do tools matter?
Regardless of which tool you choose – AutoUpgrade, dbupgrade or the DBUA – the database upgrade process underneath the covers is always exactly the same: catctl.pl utilizes catcon.pm and drives the upgrade in parallel wherever possible. The DBUA had an issue in older releases where it didn't let you choose the parallel degree but picked a wrong one by itself. But I guess this has been fixed in the newest releases (I didn't check).
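Just for completeness, a minimal AutoUpgrade run looks roughly like this; all paths, the SID and the config file name are placeholders for your environment:

global.autoupg_log_dir=/home/oracle/autoupgrade
upg1.source_home=/u01/app/oracle/product/11.2.0.4
upg1.target_home=/u01/app/oracle/product/19
upg1.sid=DB11

java -jar autoupgrade.jar -config DB11.cfg -mode analyze
java -jar autoupgrade.jar -config DB11.cfg -mode deploy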
Does size matter?
Size clearly doesn’t matter for the duration of an upgrade. At least not the size of your database. Whether it is 40TB or just 40GB, the upgrade usually doesn’t touch data.
There are very rare exceptions; for instance, in the past we have seen a situation where SPATIAL data needed a refresh. And of course, then it mattered whether you really used the Spatial option or not. The time zone adjustment is another special candidate. Even though it is optional, you should be aware that the time zone change can add plenty of downtime post upgrade if you have a lot of TIMESTAMP WITH TIME ZONE data in your database. Due to locks, your application data may not be accessible while the apply script runs. So basically this means downtime.
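If you are unsure whether this affects you, a quick look into the dictionary of the source database helps. This sketch only lists the affected columns, it does not count rows:

SELECT owner, table_name, column_name
  FROM dba_tab_columns
 WHERE data_type LIKE 'TIMESTAMP%WITH TIME ZONE'
   AND owner NOT IN ('SYS','SYSTEM');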
But apart from such edge cases, the database upgrade process is independent of the physical size of your database.
Fewer components, faster upgrade
On the blog you'll find a longer series about how to remove components from a database. I'm not a fan of installing everything into a database, for two simple reasons:
- More components mean: More scripts need to be run. And this means: longer upgrade duration
- Most component upgrade scripts run serially and can't even be parallelized with other component upgrades
This is also the reason why you should not use the seed databases but create your own databases, ideally with DBCA templates.
The biggest factor deciding how long an upgrade takes is the number of components. Query DBA_/CDB_REGISTRY to see whether you have just a minimal set of components – or if you have all 17 or 18 or 19 components.
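A quick way to check, run on the source database:

SELECT comp_id, comp_name, version, status
  FROM dba_registry
 ORDER BY comp_id;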
And just as a side note, we removed the APEX upgrade from the standard upgrade process with Oracle Database 12.2.0.1 for two reasons:
- It is an application layer, and hence needs to be verified separately from the database upgrade
- Upgrading APEX upfront saves downtime
Dictionary complexity may matter
If you have millions of objects in the dictionary, then the upgrade may take longer for obvious reasons. Typical examples are EBS databases, which really have a huge dictionary. In this case, a lot of operations can take significantly longer. But there is a good way to deal with this. See below …
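A very rough indicator, run on the source database (there is no hard threshold, this is only meant to give you a feeling):

SELECT count(*) FROM dba_objects;
SELECT count(*) FROM dba_objects WHERE owner = 'SYS';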
Statistics matter!
We have seen really brutal cases where an upgrade took 10 hours. And the key to speeding it up was gathering stats on some cluster indexes or on an XDB table. So as a matter of fact, you should refresh your dictionary statistics before the upgrade. As a standard task, AutoUpgrade does this for you IF you haven't refreshed them within the 7 days before you invoked the upgrade. You may also gather stats directly for the SYS, and maybe the SYSTEM schema, too.
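If you prefer to refresh them yourself instead of leaving it to AutoUpgrade, run this shortly before the upgrade:

EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
-- optionally, in addition:
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SYS');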
But let me show you an example we dealt with two years ago.
This is the output from the database upgrade:
More than 10 hours alone for the OJVM sounds a bit … odd. But once we simply refreshed the dictionary and the fixed objects stats, the upgrade completed in 52 minutes and 42 seconds. That’s 93% faster 😉 And I swear, this case is not made up – it was a real case Roy had to deal with.
How about CPU, IO?
Well, CPU may influence the upgrade a lot when you upgrade many PDBs at the same time. Actually, in this case the CPU power of your machine matters a lot. But for a single PDB, a non-CDB or the CDB$ROOT, the CPU power does not influence the upgrade duration significantly. Of course, when you have only two virtual cores available, then CPU matters. But on a correctly sized system, the number of cores or the clock speed won't have a big influence on the upgrade.
For instance, Roy's Mac has 8 cores – and mine is an outdated old dual-core CPU which can pretend to have 4 cores via hyperthreading. The lab upgrades run twice as fast on his machine as on mine. But in reality we don't compare laptops but database upgrades on real servers.
IO may matter only when you have a super-huge dictionary or hit one of the edge cases above. But a database upgrade is not IO intensive.
We do, however, execute a lot of COMMIT operations during an upgrade. But still, even if your redo logs are on super-fast devices and perfectly sized, you won't see a big impact (unless you copy your database to my laptop).
Can you speed it up?
Yes, you can! Fewer components mean faster upgrades. This is the biggest differentiator, by far actually. Dictionary statistics matter, too.
And there are some edge cases, such as a huge volume of synopses for incremental statistics: between certain releases the underlying partitioning structure in the dictionary got altered, and more synopses meant a longer runtime. You could truncate the synopses and recreate them afterwards. Or just wait a bit …
But again, this is an edge case.
Fewer components mean faster upgrades. And the amount of user data does not matter for the duration of a database upgrade.
It's as simple as that.
Further Information and Links
- How long will your upgrade take? (Sep 8, 2009)
- Remove and Cleanup Components from Oracle 11.2-12.2 (Jul 26, 2017)
- Always create CUSTOM databases (Jul 11, 2017)
- Create your own databases with DBCA templates (Jul 12, 2017)
- Save upgrade downtime, upgrade APEX upfront (Jun 6, 2014)
–Mike
Thanks Mike,
I've read all your AutoUpgrade pages, so this was not really new, but it is nice to see a clear page containing good information to share with management at the customer site. This will help their decision-making.
Kind Regards,
Stuart
Thanks Stuart!
Hi Mike,
While upgrading some of the smaller databases, I noticed that a couple of them took about 1.5+ hours to upgrade from 11g to 19c non-CDB using DBUA. This was a lot longer than the roughly 35-40 minutes for bigger databases. Looking at the alert log, I noticed that in the smaller databases, since the redo log file size was small (50 MB small) and there were only two redo logs per instance, the log switching was consuming a lot of time. So it is better to increase the redo log file size. While there is no magic number, I found that increasing the redo log file size to 128 MB and having 4 redo log files per instance did speed up the upgrade.
I also noticed that with 19c, memory requirements increase. For example, many of our 11g databases were working just fine with a MEMORY_TARGET of 2G. Since the best-practice recommendation now is to use ASMM with Exadata, I split the memory target into SGA_TARGET and PGA_AGGREGATE_TARGET of 2G each, so the memory allocation went up from 2G to 4G. Even then I found major performance issues after the upgrade. Again, there is no magic number, but I found that a minimum of 4G for SGA_TARGET and 4G for PGA_AGGREGATE_TARGET is the least that will keep the database up and running. Going below 4G for either of these memory areas can cause database crashes and other issues like ORA-4031 errors.
I also noticed that if PGA_AGGREGATE_TARGET is below 4G, the create pluggable database statement fails. These are just some of my observations.
Thanks
Arun
Hi Arun,
sorry for getting back to you so late on this.
First of all, the 50 MB redo size is a relic from the dark ages of Oracle when the standard was 50 MB – I think nowadays the default is much bigger. I tried to run some experiments with redo sizes (an "unpublished" draft blog post) but came to no clear recommendation. Clearly though, 50 MB redo logs can lead to a lot of commit waits – you'd see this in v$session_wait during the upgrade.
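If you want to see this yourself while the upgrade runs, a simple snapshot query is enough; this is just an example, not a proper monitoring setup:

SELECT event, count(*)
  FROM v$session_wait
 WHERE wait_time = 0
 GROUP BY event
 ORDER BY count(*) DESC;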
Regarding MEMORY_TARGET – please NEVER use it. Use SGA_TARGET and PGA_AGGREGATE_TARGET instead. It is not a good parameter: the memory operations in the code run differently, you get into trouble as it does not support huge/large pages, and we have seen so many obscure effects with it. You can ask basically every database expert outside of Oracle about it, and they will make the sign of the cross and pull out the garlic as if warding off vampires.
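Switching over is a small change; the 4G values below are only examples taken from your numbers, not a general recommendation:

ALTER SYSTEM RESET memory_target SCOPE=SPFILE;
ALTER SYSTEM RESET memory_max_target SCOPE=SPFILE;
ALTER SYSTEM SET sga_target = 4G SCOPE=SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 4G SCOPE=SPFILE;
-- restart the instance to activate the new settings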
But you are fully right: 19c requires more memory than 11.2.0.4 for sure. It's just impossible to give an estimate without knowledge of the individual system.
And with PGA_AGGREGATE_TARGET and PGA_AGGREGATE_LIMIT there are some "interesting" behavior patterns in 19c …
Cheers,
Mike
Hi Mike,
Another bullet point to consider:
If the target is at least Oracle 19c, and the source database has a huge number of DBMS_JOB jobs, the upgrade can take very long due to the "Changes for Scheduler" introduced in 19c …
(https://docs.oracle.com/en/database/oracle/oracle-database/19/upgrd/behavior-changes-deprecated-desupport-oracle-database.html#GUID-3BAFD95E-4D00-4F0F-BC80-6064F497F878)
Right now I'm facing this issue. The DB upgrade has been stuck in the same stage for 10h45m …
10:47:53 SQL> Rem =======================================================================
10:47:53 SQL> Rem Begin Changes for Scheduler
10:47:53 SQL> Rem =======================================================================
10:47:53 SQL>
10:47:53 SQL> set serveroutput on
10:47:53 SQL> declare
10:47:53 2 broken BOOLEAN;
10:47:53 3 begin
10:47:53 4 for job_map in (select job, powner, next_date, interval#, what, flag
10:47:53 5 from sys.job$) loop
10:47:53 6 if job_map.flag = 1 then
10:47:53 7 broken := TRUE;
10:47:53 8 else
10:47:53 9 broken := FALSE;
10:47:53 10 end if;
10:47:53 11
10:47:53 12 -- Try to create the dbms_job using the reimplementation,
10:47:53 13 -- ignore errors if the dbms_job id is already in use
10:47:53 14 begin
10:47:53 15 sys.dbms_ijob.submit(job => job_map.job,
10:47:53 16 luser => job_map.powner,
10:47:53 17 puser => job_map.powner,
10:47:53 18 cuser => job_map.powner,
10:47:53 19 next_date => job_map.next_date,
10:47:53 20 interval => job_map.interval#,
10:47:53 21 broken => broken,
10:47:53 22 what => job_map.what,
10:47:53 23 nlsenv => '',
10:47:53 24 env => '');
10:47:53 25 commit;
10:47:53 26 exception
10:47:53 27 when DUP_VAL_ON_INDEX then null;
10:47:53 28 when others then
10:47:53 29 -- BUG 28632810: Supress errors migrating jobs that have
10:47:53 30 -- invalid metadata, this could happen if a user manually
10:47:53 31 -- modified the job$ table in the past, since dbms_job
10:47:53 32 -- have not had upgrades for several releases this might
10:47:53 33 -- be the case
10:47:53 34 dbms_output.put_line('WARNING: DBMS_JOB job ' || job_map.job ||
10:47:53 35 ' could not be resubmitted successfully');
10:47:53 36 end;
10:47:53 37 end loop job_map;
10:47:53 38 end;
10:47:53 39 /
SQL>select count(1) from dba_jobs;
COUNT(1)
----------
31078
SQL>select count(1) from dba_scheduler_jobs;
COUNT(1)
----------
31155
SQL>
Issue:
Oracle 19c – BPO up: 1.2h, 1 ins, 0 sn, 0 us, 2.0G sga, 0%db
ID %CPU %DCP LOAD AAS ASC ASI ASW IDL MBPS %FR PGA UTPS RT/X DCTR DWTR
EVENT (C) TOT WAITS TIME(s) AVG_MS PCT WAIT_CLASS
DB CPU 4223 70
db file sequential read 61474 333 5.4 6 User I/O
db file parallel write 3847 171 44.6 3 System I/O
ID SID SPID USR PROG S PGA SQLID/BLOCKER OPN E/T STA STE EVENT/*LA W/T
1 242 14287180 SYS sqlp D 152M 9ca6qk4nt1wpw DEL 1.7h ACT CPU cpu runqu 149u
9ca6qk4nt1wpw
DELETE FROM SYS.SCHEDULER$_DBMSJOB_MAP WHERE JOB_NAME NOT IN (SELECT NAME FROM SYS.OBJ$ WHERE NAME LIKE 'DBMS_JOB$_%')
So, in conclusion, a high number of jobs can slow down the database upgrade due to the "Behavior Changes" introduced in 19c.
Wow – do you have an SR for this?
Or can you at least share the logs with me?
Thanks,
Mike