Incremental Statistics Collection in Oracle – Upgrade Pitfalls

A while back I already blogged about incremental statistics collection in Oracle Database:

Incremental Statistics Collection improved in Oracle 12c

And you’ll find more information in our documentation and in posts by our optimizer folks.

And you can read about a related real-world customer example in this follow-up blog post …

Incremental Statistics Collection in Oracle – A True Story

Database Upgrade

It is important to know that during a database upgrade the underlying tables containing the synopses for incremental statistics collection may be reorganized. Depending on the amount of data, this can take quite a while.

The largest synopsis tables I have seen so far were almost 1 TB in size, at a financial customer in Europe. But I have seen sizes of around 300 GB quite often in the past months.

What happens during the database upgrade?

Incremental Statistics Collection was introduced with Oracle Database 11.1 and has been improved from release to release. But during a database upgrade a reorganization of the synopsis table can happen.

  • Upgrade from Oracle 11.1 or 11.2.0.1 to Oracle 11.2.0.2/11.2.0.3/11.2.0.4:
    • Restructuring of WRI$_OPTSTAT_SYNOPSIS$ to use range-hash partitioning
    • Most of the data movement happens here
    • As two synopsis tables exist for the interim period, this will consume twice the space of the synopsis table during the movement
  • Upgrade from Oracle 11.2.0.2/11.2.0.3/11.2.0.4 to Oracle 12.1.0.x:
    • Restructuring of WRI$_OPTSTAT_SYNOPSIS$ from range-hash to list-hash partitioning
    • There is little data movement in this case, as the move happens with the help of metadata swapping

Which symptoms may you see?

Actually, the symptoms are very simple and obvious:
Phase 1 of the parallel upgrade to Oracle Database 12c takes unusually long. It should normally complete within a few minutes. But in these cases it can literally take hours.

If that happens, check your catupgrd0.log and watch out for long-running statements. It does not necessarily mean that a huge synopsis table is the cause. For instance, one of my German reference customers, DVAG, had leftovers in SYSAUX because of bugs in earlier releases they had worked with.
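While the upgrade is running, you can also try to pinpoint the expensive statements directly in the database. The following query against V$SQL is just a sketch (limiting it to the top five rows is an arbitrary choice – adapt it to your needs); note that ELAPSED_TIME in V$SQL is reported in microseconds:

    SQL> select *
           from (select elapsed_time / 1e6 as elapsed_seconds, sql_text
                   from v$sql
                  order by elapsed_time desc)
          where rownum <= 5;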

But if you spot such results (quoting a colleague here):

“The table WRI$_OPTSTAT_SYNOPSIS$ has 20420 partitions, 344618 subpartitions and 921207 MB size. [..] This transformation step lasts for 6.5 hours, so the whole upgrade process duration has an important impact from this.”

then you should be alerted.

How can you check this upfront?

We haven’t included a check in preupgrd.sql yet. But the following queries will tell you whether you may see issues – a large number as the result is the warning sign:

  • How many tables have incremental stats on?
    SQL> select count(distinct bo#) from sys.wri$_optstat_synopsis_head$;
  • How many partitions does your WRI$_OPTSTAT_SYNOPSIS$ have?
    SQL> select partition_count from dba_part_tables where table_name='WRI$_OPTSTAT_SYNOPSIS$';
  • How large is your synopsis table?
    SQL> select sum(bytes/(1024*1024)) "MB" from dba_segments where segment_name='WRI$_OPTSTAT_SYNOPSIS$';
  • Tables where inc stats are ON?
    SQL> select u.name "OWNER", o.name "TABLE_NAME", p.valchar
    from  sys.OPTSTAT_USER_PREFS$ p
    inner join sys.obj$ o on p.obj#=o.obj#
    inner join sys.user$ u on o.owner#=u.user#
    where p.PNAME = 'INCREMENTAL';
  • Synopsis for tables which don’t exist anymore?
    SQL> select distinct h.bo# from sys.wri$_optstat_synopsis_head$ h where not exists (select 1 from sys.tab$ t where t.obj# = h.bo#);

Especially a large number of monitored tables, and a size in the tens or hundreds of GB, indicate that you may have to plan for a longer upgrade duration.
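If the list of tables with incremental statistics turns out to be longer than expected, you can verify and change the preference per table with DBMS_STATS. The schema and table names below (SCOTT, SALES) are just placeholders:

    SQL> select dbms_stats.get_prefs('INCREMENTAL', 'SCOTT', 'SALES') from dual;

    SQL> exec dbms_stats.set_table_prefs('SCOTT', 'SALES', 'INCREMENTAL', 'FALSE')

Switching the preference to FALSE stops new synopses from being created for that table.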

How do you cure this?

Support sometimes recommends looking at MOS Note 1055547.1 – SYSAUX Grows Because Optimizer Stats History is Not Purged – and asks for a manual purge of stats, for instance:

    SQL> begin
           for i in reverse 1..31 loop
             dbms_stats.purge_stats(sysdate - i);
           end loop;
         end;
         /

But this won’t clean up the synopsis tables – it only purges the history for object statistics. And it may create some noise in your UNDO. So in any case you may be better off generally setting your stats retention to something like 10 days instead of the default of 31 days.
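The retention can be checked and changed with DBMS_STATS as well – for example, to set it to the 10 days mentioned above:

    SQL> select dbms_stats.get_stats_history_retention from dual;

    SQL> exec dbms_stats.alter_stats_history_retention(10)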

First of all, make sure that this patch has been applied to your target before the upgrade – it adds parallel index capabilities, which speed up the rebuild a lot:

Be aware:
Truncating WRI$_OPTSTAT_SYNOPSIS$ and WRI$_OPTSTAT_SYNOPSIS_HEAD$ is strictly not recommended. If you plan to do it the hard way, please check with Oracle Support for their approval first.

Further Information?

Please read on here about a real world customer example …

Incremental Statistics Collection in Oracle – A True Story



3 thoughts on “Incremental Statistics Collection in Oracle – Upgrade Pitfalls”

  1. We were seeing upgrade times of up to 19 hours. We saw the problem with the catuposb.sql file and now use the corrected one. We also cleaned out all the old statistics, but we STILL get stuck on phase #65 of the catupgrd taking 16 hours, which is a real problem with production databases.

  2. Paul,

    please drop me an email with the catupgrd0-n.logs and the output of the preupgrade.log. Or an SR number is fine as well. I’d like to understand what causes you so much trouble.


  3. Pingback: Incremental Statistics Collection in Oracle – A True Story | Upgrade your Database - NOW!
