Choose your desired time zone version (DST) upgrade

This morning I received a very reasonable question from a customer who has a time zone mismatch between source and target database, and who wants to transport into a PDB on Exadata. Even though this sounds trivial at first sight, unfortunately it isn't. But a bug fix done recently for a German automotive customer comes in very handy here. So let me show you how you can choose your desired time zone version (DST) upgrade.

Photo by Vince Veras on Unsplash

Where does the problem start?

In this particular – and not unusual – case, the customer wants to migrate into a PDB on Exadata in Oracle Database 19c. Now, Oracle 19c by default delivered DST V.32 – and databases created before 19.18.0 always had this DST V.32 version engraved. You can easily check this with:

select value$ "TZ Version" from SYS.PROPS$ where name='DST_PRIMARY_TT_VERSION';

Now, the customer upgraded the DST version in the source to DST V.34 a while ago. And the 19c CDB has been patched several times up to 19.18.0, but still stays on DST V.32 since DST patching is not part of the datapatch run, for several reasons.
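If you'd like to compare this with the highest DST version your Oracle home can offer, a quick check could look like this (a sketch – v$timezone_file shows the file the database currently uses, while DBMS_DST reports the newest version found in ?/oracore/zoneinfo):

```sql
-- DST version the database currently uses
SELECT version FROM v$timezone_file;

-- Highest DST version available in this Oracle home
SELECT dbms_dst.get_latest_timezone_version FROM dual;
```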

Why is this now a problem?

When you want to transport tablespaces from database A to database B, Data Pump right now can't foresee whether there is time zone data in the tablespace. But when you now plug in a tablespace from database A with a different time zone version than database B has, you potentially corrupt your data with time zone related data types. Therefore, Data Pump rejects such attempts right now. I say "right now" because we work on a fix for this.

This is the error the customer received when he attempted to transport tablespaces from the source with DST V.34 to the target being on DST V.32:

ORA-39002: invalid operation
ORA-39405: Oracle Data Pump does not support importing from a source database with TSTZ version 34 into a target database with TSTZ version 32.
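For illustration, a transportable tablespace import along these lines raises exactly this error when the DST versions differ – directory object, dump file and datafile path are just placeholders:

```sh
# Hypothetical transportable tablespace import hitting ORA-39405
impdp system@tgtpdb directory=DATA_PUMP_DIR dumpfile=tts_meta.dmp \
  transport_datafiles='/u02/oradata/CDB1/tgtpdb/users01.dbf'
```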


What is the potential solution?

This looks simple – and I can read your mind already. You are saying: Let’s upgrade the target database to DST V.34.

And you are right.

But when you try it, you'll hit a nice surprise. Run the utltz* scripts you'll find in ?/rdbms/admin, which I described in more detail here. As soon as you do this in 19.18.0 or newer, your database will be DST upgraded to the highest available DST version.
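As a reminder, the usual sequence is just two script calls run as SYSDBA – and exactly this sequence picks the highest available DST version automatically:

```sql
-- Step 1: check prerequisites and estimate the affected TSTZ data
@?/rdbms/admin/utltz_upg_check.sql

-- Step 2: perform the actual DST upgrade (restarts the database)
@?/rdbms/admin/utltz_upg_apply.sql
```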

And since we now ship all DST files from 19.18.0 onward, this means that your target will be patched to DST V.40 or higher.

Now you are in trouble. Your source would need to be patched to the same DST version in order to allow the transport. And usually, the source is the production. Hence, you may not want this approach.

So this is not a solution.


Well … downtime …

Ah, there is another challenge with the above solution. In our case here – and that may not be so unusual – the receiving CDB has additional PDBs already. So simply running the DST upgrade scripts would mean downtime to all of them.

I hear you thinking: But can’t we mix DST versions in various PDBs within one CDB?

And you are truly right. But as soon as you provision a new PDB for your transport, it will be provisioned as a clone of PDB$SEED. And PDB$SEED – unless you did a manual time zone tweak – inherits the DST version of the CDB$ROOT.
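You can verify this easily from CDB$ROOT – this query (a sketch) lists the DST version of every container, including PDB$SEED:

```sql
SELECT con_id, property_value AS dst_version
  FROM cdb_properties
 WHERE property_name = 'DST_PRIMARY_TT_VERSION'
 ORDER BY con_id;
```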

The solution for this case could be to use an intermediate, temporary staging CDB – and create this CDB with the desired time zone version, in our case with DST V.34.

But how do we do this?

There are two options:

- Remove or set aside the DST files newer than V.34 in ?/oracore/zoneinfo of the staging home before creating the CDB – only viable when no other database relies on this home
- Apply the fix for bug 35117839 and set the corresponding event to pin the DST version to V.34

This way you will work around the downtime the DST upgrade would incur on your target production CDB.


Choose your DST version

But what if you want to upgrade the DST version, but instead of going to the newest available one, you'd like to choose a specific DST version? So for instance, as in the above case, you would not want to upgrade DST to V.40 but instead to V.34, the matching DST version of the source. And deleting files is not an option since you may have other databases in the same home relying on those files.

So what can we do?

When you call the standard utltz_upg_check.sql and utltz_upg_apply.sql scripts, utltz_upg_check.sql will always detect the highest available version by default and upgrade the DST version to it. But there is relief.

Bug 35117839 : DST VERSION UPGRADE – UTLTZ_UPG_CHECK.SQL – NEED TO PASS THE VERSION INSTEAD OF PICKING AUTOMATICALLY fixes this and allows you to choose the DST version you will upgrade to. The fix introduces an event which allows you to override this check and tell the scripts to which version they should upgrade the DST level.

alter session set events '35117839 trace name context forever, level 34';

The above command for instance will define DST V.34 to be chosen. But you will need to apply the fix to your home in order to be able to leverage this event.
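Put together, the sequence could look like this – assuming the fix is in place and the event has to be set in the same session that runs the scripts:

```sql
-- Pin the DST upgrade to V.34 (the event level is the desired DST version)
alter session set events '35117839 trace name context forever, level 34';

-- Then run the usual scripts
@?/rdbms/admin/utltz_upg_check.sql
@?/rdbms/admin/utltz_upg_apply.sql
```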

Unfortunately, this does not help you with the PDB case since a new PDB gets provisioned as a clone of PDB$SEED. The only workaround here is to have another CDB created with your desired DST version, provision a new PDB there, and then unplug and plug the PDB into your target CDB.


What is now the final solution?

There are two viable options in this case.

Option no.1 is to upgrade the receiving CDB to the desired DST version. This would require that you’ve applied the patch for bug 35117839, and then set the above event before you upgrade DST. And obviously, you’ll have downtime twice. At first to apply the fix to your home, and then you’ll get another downtime when you invoke the DST upgrade itself.

This is described in MOS Note: 2940812.1 – How to specify the DST version when running utltz_upg_check.sql during DST upgrade?.

Option no.2 is to create a temporary CDB with the desired DST version, then provision a PDB which inherits the DST version of the CDB$ROOT. Once done, either transport into this PDB first and then unplug/plug or clone it into the target CDB. Or unplug/plug at first, and then transport into your target PDB which now has the correct DST version. And of course, it is key that the DST files exist in ?/oracore/zoneinfo when you mix DST versions within one CDB.
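The unplug/plug part of option no.2 could be sketched like this – STAGEPDB and the XML path are placeholders for your environment:

```sql
-- In the temporary staging CDB: unplug after the transport is done
ALTER PLUGGABLE DATABASE stagepdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE stagepdb UNPLUG INTO '/u02/stage/stagepdb.xml';

-- In the target CDB: plug in, reusing the datafiles in place
CREATE PLUGGABLE DATABASE stagepdb USING '/u02/stage/stagepdb.xml'
  NOCOPY TEMPFILE REUSE;
ALTER PLUGGABLE DATABASE stagepdb OPEN;
```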

Especially when downtime is an issue, I would prefer option no.2. But option no.1 is a new route and gives you more flexibility.


Further Links and Information
