ORA-39405 with impdp when you have mixed time zone versions

Today, Gisela, a former teammate from CSS/ACS Support, contacted me regarding an issue seen at a customer: an ORA-39405 with impdp when you have mixed time zone versions. But how can this happen? And what are the potential workarounds? Since some of our hosted environments have gotten into the same situation already, this is worth a blog post and some explanation.

Why does this happen?

You never cared about DST (Daylight Saving Time) files and versions in your database? This isn't unusual, and in many environments it is actually not a problem, I guess. Hence, you may never have seen an ORA-39405 in a Data Pump expdp/impdp run yet. But it may happen one day if you don't pay attention.

The root cause is something we had to do in order to allow smooth transition and migration between environments. Since Oracle Database 19c Release Update (RU) 19.18.0 we ship all available time zone files and deploy them to $ORACLE_HOME/oracore/zoneinfo.
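
You can quickly verify this in your own home; the exact versions you see depend on your RU (the small and large time zone files follow the usual naming pattern):

ls $ORACLE_HOME/oracore/zoneinfo/timez*_*.dat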

Read more about this change here.

This change was not only important to ease your admin tasks. It also prevents a situation we had seen far too often: you move a database or PDB from host 1 to host 2, but the ORACLE_HOME on host 1 had newer time zone patches than the one on host 2, and the move fails. The same problem was even more disturbing when you, as a care-taking DBA, had applied DST patches to your source and then tried to migrate it into OCI, where the recent time zone patches were missing.


But … there can be a downside

Unfortunately, as is often the case with a change, not everybody is happy, no matter how important and needed the change is and how good our intentions are.

During a patching operation we add the files, but we don't update your database. You can trigger the update when you use AutoUpgrade for patching with upg1.timezone_upg=yes explicitly set. But if you use any other approach to lift your database from the old to the new home, the database's DST version remains unchanged.
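
For illustration, a minimal AutoUpgrade config sketch for patching with the time zone upgrade enabled – SID, homes, and the log directory are placeholder values:

global.autoupg_log_dir=/tmp/autoupgrade
upg1.sid=MYDB
upg1.source_home=/u01/app/oracle/product/19.17.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.20.0/dbhome_1
upg1.timezone_upg=yes

You then run it the usual way with java -jar autoupgrade.jar -config <your config file> -mode deploy.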

In contrast, the creation of a new database pulls the newest available DST file from $ORACLE_HOME/oracore/zoneinfo. You can work around this when you use the environment variable ORA_TZFILE and point it to your desired DST file in the earlier mentioned directory. Find this workaround described here.
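
A minimal sketch – version 32 is just an example, and depending on your release and tooling, ORA_TZFILE may expect the plain file name or the full path:

export ORA_TZFILE=timezlrg_32.dat
# or, if your environment requires the full path:
export ORA_TZFILE=$ORACLE_HOME/oracore/zoneinfo/timezlrg_32.dat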

Correction (Feb 1, 2024):

In an earlier version of the blog post I wrote that DBCA (Database Configuration Assistant) ignores the environment variable ORA_TZFILE and does not offer a drop-down list or item to choose the DST version you want your database to be created with. This got tracked under:

  • Bug 35243563 – DBCA NEEDS AN OPTION TO CHOOSE THE DST VERSION A DATABASE GETS CREATED WITH
  • ER 35232202 – ER FOR PROVIDING AN INPUT TO DBCA FOR CHOSING A SPECIFIC VERSION OF TIMEZONE FILE
  • ER 26980562 – DBCA SHOULD PROVIDE AN OPTION FOR CUSTOMER TO CHOOSE THE DST TO BE ENABLED (rejected)

But this is not correct anymore, at least not in Oracle Database 19c with newer RUs.

Thanks to Andreas Groetz, who pointed me to the changed behavior, I learned and verified that you can now set the ORA_TZFILE variable and then call DBCA in GUI mode to have your database freshly created – and it honors the variable's setting. I can't tell you from which RU onward this works, but Andreas tried with 19.18, and there it works already.

In addition, DBCA now seems to have a new silent option parameter:

 dbca -silent -createDatabase -source <dbname> -timezoneVersion <version>

allowing this on the command line as well.

Generally speaking, the technical reason for all these potential pitfalls is that you can't downgrade time zone data. It is a one-way street. When you take an export from a database with an older DST version and import it into a database with a higher DST version, all is fine. But when you try the reverse, the database does not know how to do the conversion. And since there is no downgrade for DST data types available, the import needs to be rejected.
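
Hence, before you move data, it is worth comparing both sides. Besides V$TIMEZONE_FILE, you can also check the DST-related database properties with a quick query:

SELECT property_name, property_value
  FROM database_properties
 WHERE property_name LIKE 'DST_%';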

But our DST experts chimed in here, and options are being evaluated whether this could be made possible as well.


What could go wrong?

At first, you create a database with a 19.17.0 home, and it will have DST V.32. You can always check the DST version your database is operating on with this query:

SELECT * FROM v$timezone_file;

FILENAME                VERSION     CON_ID
-------------------- ---------- ----------
timezlrg_32.dat              32          0

Now, months later, you deploy 19.20.0 and patch your existing database. It will of course still stay on DST V.32 – no change (unless you used AutoUpgrade and advised it explicitly to adjust the DST version as part of the patching process).

But now you create a new database out of your 19.20.0 home. And when you check, you will find out that it has a newer DST version:

SELECT * FROM v$timezone_file;

FILENAME                VERSION     CON_ID
-------------------- ---------- ----------
timezlrg_42.dat              42          0

This is no issue as long as you don't export from the NEW database and import into the OLD one. And of course, the same issue will trouble you when you want to unplug a PDB from the NEW database and plug it into the OLD CDB. Since the DST file is missing there, you will receive a plug-in error, and the PDB will open in restricted mode only until you apply the missing DST patch to your 19.17.0 home.
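
In the unplug/plug case, the reason for the restricted open shows up in PDB_PLUG_IN_VIOLATIONS – a quick check from the CDB root could look like this:

SELECT name, cause, type, message
  FROM pdb_plug_in_violations
 WHERE status <> 'RESOLVED';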

This actually is one of the core reasons why we ship the DST files: the unplug/plug scenario keeps coming up, and of course cloning as well. The DST files are needed in both homes to prevent such issues.

For Data Pump, you will receive the error mentioned in this post's headline: ORA-39405 with impdp when you have mixed time zone versions. This gets explained nicely in MOS Note 2793311.1 – ORA-39405: Oracle Data Pump Does Not Support Importing From A Source Database With TSTZ Version, too.


Potential workarounds

Now the crucial question is: How can you avoid such situations, and what are the potential workarounds for you?

Since I updated the blog post to explain that DBCA indeed does not ignore ORA_TZFILE anymore, I adjusted this section as well.


Workaround 0 – Use an underscore (added June 2, 2024)

This workaround is currently available in Oracle 21c and 23ai only. But we are checking whether it can be added to 19c as well. Stay tuned. And be aware that you use it at your own risk, i.e., you are responsible for making sure that no DST data gets imported. If DST data is included, you could potentially corrupt your data logically.

_datapump_bypass_tstz_check=TRUE

would override the check. The default of the parameter is FALSE.
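
A sketch of setting it, assuming the parameter is handled like any other underscore initialization parameter (whether a restart is required may depend on your release – note the double quotes around the name):

ALTER SYSTEM SET "_datapump_bypass_tstz_check" = TRUE SCOPE = BOTH;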


Workaround 1 – Create databases via DBCA or via script

Set the ORA_TZFILE variable, and either use DBCA or a script to create the databases with the DST setting you desire.
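
A sketch of the idea – database name, template, and DST version 32 are placeholder values, and further mandatory DBCA options (passwords, storage) are omitted for brevity:

export ORA_TZFILE=timezlrg_32.dat

dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName MYDB -sid MYDB \
  -createAsContainerDatabase true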


Workaround 2 – Update your DST version in all databases

I guess this is the most unlikely workaround, even though it would be a good best practice. Whenever you patch, make sure you either use AutoUpgrade with upg1.timezone_upg=yes or trigger the DST upgrade manually afterwards. This of course would require that you patch all your databases more or less at the same time, and not some of them just months later. It would work, but I am unsure whether this workaround really matches reality.
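
For the manual route, this is the usual DBMS_DST flow in a condensed sketch – version 42 is just an example, and you should follow the full documented procedure including the prepare window:

-- 1. Start the upgrade window (database must be started in UPGRADE mode)
SHUTDOWN IMMEDIATE
STARTUP UPGRADE
EXEC DBMS_DST.BEGIN_UPGRADE(new_version => 42);
SHUTDOWN IMMEDIATE
STARTUP

-- 2. Upgrade the TSTZ data and end the window
DECLARE
  l_failures PLS_INTEGER;
BEGIN
  DBMS_DST.UPGRADE_DATABASE(l_failures,
    parallel                  => TRUE,
    log_errors                => TRUE,
    log_errors_table          => 'SYS.DST$ERROR_TABLE',
    log_triggers_table        => 'SYS.DST$TRIGGER_TABLE',
    error_on_overlap_time     => FALSE,
    error_on_nonexisting_time => FALSE);
  DBMS_DST.END_UPGRADE(l_failures);
END;
/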


Workaround 3 – Delete the unwanted files

This is potentially the most efficient workaround, but not ideal, as you can see from the comments below.

This one may work if you are strictly on premises, but it is also one for which you most likely won't receive official blessing from Oracle Support. Still, I have customers using it, and I don't see any technical danger involved with it. But keep in mind: I am not Oracle Support, and I can't give official blessing for any attempt to delete files we deploy into your Oracle Home with a patch.

The idea is simple: you just create a job which runs on a regular basis (or at least after every patch apply) and deletes all newer, unwanted DST files from all $ORACLE_HOME/oracore/zoneinfo directories and their subdirectories, as sketched below. This way, you prevent new databases from being created on newer time zone files. Unfortunately, the big downside here: you will miss the newer time zone files.
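
A shell sketch of such a cleanup – the pinned version 32 is an assumption, so test carefully, and mind the caveat right below:

#!/bin/sh
# Sketch only: pin the DST version by deleting all newer DST files.
PIN=32
find "$ORACLE_HOME/oracore/zoneinfo" -name 'timez*_*.dat' | while read -r f; do
  v=$(basename "$f" .dat)                    # e.g. timezlrg_42
  v=${v##*_}                                 # e.g. 42
  case "$v" in *[!0-9]*) continue ;; esac    # skip non-numeric suffixes
  if [ "$v" -gt "$PIN" ]; then
    rm -f "$f"
  fi
done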

But – and thanks for this feedback – Hansjörg commented that you better NOT use Workaround 3 on an Exadata or Engineered System or in Cloud VMs since the tooling (for instance, odacli or dbcli) will get into trouble.


Workaround 4 – The unofficial way

Well, I had no idea how many people would be tempted to challenge this or find better or smarter ways. Let me just point to Kamil Stawiarski's blog post. Kamil stops the process and injects a new value into the Data Pump master table. Simple, isn't it?

Cool approach – and it works nicely if you have OS access.


Workaround 5 – The insane way

But of course, we have crazy people on our team, too. Rodrigo was faced with the same challenge recently, but in Autonomous Database, where you don't have OS access. So his advanced solution is a bit different. He uses a trigger which creates a job when the master table gets created, and this job tries to update the value in the master table. Once it succeeds, it will stop and exit. The difference here is that Rodrigo has everything (including the import) done with DBMS_DATAPUMP; nothing happens on the command line, everything runs within the database.
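
For context, this is what a minimal DBMS_DATAPUMP import looks like – dump file and directory names are placeholders:

DECLARE
  h         NUMBER;
  job_state VARCHAR2(30);
BEGIN
  h := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA');
  DBMS_DATAPUMP.ADD_FILE(h, filename => 'exp.dmp', directory => 'DATA_PUMP_DIR');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, job_state);
END;
/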

  1. Schema trigger AFTER DDL which fires once a CREATE TABLE for the master table gets issued
  2. Do a metadata_only import to check whether the DST version is lower
  3. Background job started in an infinite loop. This job does an update on the master table until the update succeeds – so you can see many failing update statements. A skeleton of the trigger part follows below.
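
A hypothetical skeleton of step 1 – all names are placeholders, and the actual UPDATE against the master table is deliberately left out since the row and column to patch depend on the Data Pump release:

CREATE OR REPLACE TRIGGER trg_patch_dp_master
AFTER DDL ON SCHEMA
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;  -- DBMS_SCHEDULER commits, which a trigger must not do itself
BEGIN
  IF ora_sysevent = 'CREATE'
     AND ora_dict_obj_type = 'TABLE'
     AND ora_dict_obj_name LIKE 'SYS_IMPORT%'  -- adjust to your Data Pump job name
  THEN
    DBMS_SCHEDULER.CREATE_JOB(
      job_name   => 'PATCH_DP_MASTER',                -- placeholder name
      job_type   => 'PLSQL_BLOCK',
      job_action => 'BEGIN patch_master_table; END;', -- hypothetical procedure looping over the UPDATE
      enabled    => TRUE);
  END IF;
END;
/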

And of course, you want to do this only when you are 100% certain that you have no time zone data in your database.


–Mike
