A while back I already blogged about Incremental Statistics collection in Oracle Database 12.1.0.2.
And you’ll find more information in our documentation and in posts by our optimizer folks:
- Database SQL Tuning Guide 12c: Gathering Incremental Statistics on Partitioned Objects
- Incremental Statistics Maintenance – what statistics will be gathered after DML occurs on the table? (Mar 18, 2012)
  https://blogs.oracle.com/optimizer/entry/incremental_statistics_maintenance_what_statistics
And in this follow-up blog post you can read about a related real-world customer example …
Database Upgrade
It is important to know that during a database upgrade the underlying tables containing the synopses for incremental statistics collection may be reorganized. Depending on the amount of data, this can take quite a while.
The largest synopsis tables I have seen so far were almost 1 TB in size, at a financial customer in Europe. But I have seen sizes around 300 GB quite often in the past months.
What happens during the database upgrade?
Incremental Statistics Collection was introduced with Oracle Database 11.1.0.6 and has been improved from release to release. But during a database upgrade a reorganization of the synopsis table can happen:
- Upgrade from Oracle 11.1.0.6/7 or 11.2.0.1 to Oracle 11.2.0.2/3/4:
  - Restructuring of WRI$_OPTSTAT_SYNOPSIS$ to use range-hash partitioning
  - Most data movement will happen here
  - As two synopsis tables exist for the interim period, this will consume twice the space of the synopsis table during the movement
- Upgrade from Oracle 11.2.0.2/3/4 to Oracle 12.1.0.x:
  - Restructuring of WRI$_OPTSTAT_SYNOPSIS$ from range-hash partitioning to list-hash partitioning
  - There is little data movement in this case as the move happens with the help of metadata swapping
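By the way, if you want to verify which layout your synopsis table currently uses, you can simply ask the dictionary – just a quick sketch against DBA_PART_TABLES:

SQL> select partitioning_type, subpartitioning_type, partition_count
       from dba_part_tables
      where owner = 'SYS' and table_name = 'WRI$_OPTSTAT_SYNOPSIS$';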
Which symptoms may you see?
Actually very simple and obvious symptoms:
Phase 1 of the parallel upgrade to Oracle Database 12c takes unusually long. It should normally complete within a few minutes, but in these cases it can literally take hours.
If that happens, check your catupgrd0.log and watch out for long-running statements. It does not necessarily mean that a huge synopsis table is the cause. For instance, one of my German reference customers, DVAG, had leftovers in SYSAUX because of bugs in earlier releases they had worked with.
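If you want to see live which statement the upgrade is working on while phase 1 is running, a simple sketch against V$SESSION and V$SQL can help (the 600-second threshold and the filter on the synopsis table are just assumptions – adjust them as needed):

SQL> select s.sid, s.serial#, s.last_call_et, q.sql_text
       from v$session s join v$sql q
         on s.sql_id = q.sql_id and s.sql_child_number = q.child_number
      where s.last_call_et > 600
        and upper(q.sql_text) like '%OPTSTAT_SYNOPSIS%';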
But if you spot such results (quoting a colleague here):
“The table WRI$_OPTSTAT_SYNOPSIS$ has 20420 partitions, 344618 subpartitions and 921207 MB size. [..] This transformation step lasts for 6,5 hours, so the whole upgrade process duration has an important impact from this step.”
then you should be alerted.
How can you check this upfront?
We haven’t included a check into the preupgrd.sql yet. But the following queries will tell you whether you may run into issues – the larger the numbers they return, the more likely it is:
- How many tables have incremental stats on?
SQL> select count(distinct bo#) from sys.wri$_optstat_synopsis_head$;
- How many partitions does your WRI$_OPTSTAT_SYNOPSIS$ have?
SQL> select partition_count from dba_part_tables where table_name='WRI$_OPTSTAT_SYNOPSIS$';
- How large is your synopsis table?
SQL> select sum(bytes/(1024*1024)) "MB" from dba_segments where segment_name='WRI$_OPTSTAT_SYNOPSIS$';
- Which tables have incremental statistics switched on?
SQL> select u.name "OWNER", o.name "TABLE_NAME", p.valchar
       from sys.optstat_user_prefs$ p
       inner join sys.obj$ o on p.obj# = o.obj#
       inner join sys.user$ u on o.owner# = u.user#
      where p.pname = 'INCREMENTAL';
- Are there synopses for tables which don’t exist anymore?
SQL> select distinct h.bo# from sys.wri$_optstat_synopsis_head$ h where not exists (select 1 from sys.tab$ t where t.obj# = h.bo#);
Especially a large number of tables with incremental statistics and a synopsis table size in the range of tens or hundreds of GB indicate that you should plan for a longer upgrade duration.
How do you cure this?
Support sometimes recommends MOS Note 1055547.1 – SYSAUX Grows Because Optimizer Stats History is Not Purged – and asks for a manual purge of statistics, for instance:
begin
  for i in reverse 1..31 loop
    dbms_stats.purge_stats(sysdate-i);
  end loop;
end;
/
But this won’t clean up the synopsis tables – it only purges the history for object statistics. And it may create some noise in your UNDO. In any case you may be better off setting your statistics retention to something such as 10 days instead of the default of 31 days.
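If you decide to lower the retention, it could look like this (10 days is just an example value). Check the current retention first – the default is 31 days:

SQL> select dbms_stats.get_stats_history_retention from dual;

Then lower it, for example, to 10 days:

SQL> exec dbms_stats.alter_stats_history_retention(10)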
First of all, make sure that this patch has been applied to your target home before the upgrade – it adds parallel index capabilities which speed up the rebuild a lot:
- Bug 19855835 – Upgrade slow when reorganizing large stats history tables
- Check the most important MOS Note 2034610.1 – Things to Consider to Avoid Poor Performance or Wrong Results on 12.1.0.2 – to see whether the fix is included in your current or a future PSU or BP.
As it turns out, the fix has not been included in a PSU yet, but it is part of BP7 for 12.1.0.2 – and as BPs are cumulative, it has been included since October 2015.
Be aware:
Truncating WRI$_OPTSTAT_SYNOPSIS$ and WRI$_OPTSTAT_SYNOPSIS_HEAD$ is strongly discouraged. If you plan to do it the hard way, please check with Oracle Support for their approval first.
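One alternative worth discussing with Support, if some partitioned tables no longer need incremental statistics, is to switch the preference off per table instead of touching the base tables. Just a sketch with a hypothetical owner and table name – and please verify with Support when the corresponding synopses actually get cleaned up:

SQL> exec dbms_stats.set_table_prefs('APP_OWNER','SALES_FACT','INCREMENTAL','FALSE')

SQL> select dbms_stats.get_prefs('INCREMENTAL','APP_OWNER','SALES_FACT') from dual;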
Further Information?
Please read on here for a real-world customer example …
–Mike
We were getting 11.2.0.4 to 12.1.0.2 upgrade times of up to 19 hours. We saw the problem with the catuposb.sql file and now use the corrected one. We also cleaned out all the old statistics, but we STILL get stuck on Phase #65 of the catupgrd taking 16 hours which is a real problem with production databases.
Paul,
please drop me an email with the catupgrd0-n.logs and the output of the preupgrade.log. Or an SR number is fine as well. I’d like to understand what causes you so much trouble.
Cheers
Mike