Perl scripts for large migrations supported on all platforms (except Windows)

This topic fits very well, as I'm presenting today about migrations of 100+ TB databases at the “Harmony” User Group Conference in Finland.

Finland in May

The question came up whether the Perl scripts for RMAN incrementally rolled-forward backups, which we deliver via MOS Note 1389592.1 (11G – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup), are supported for migrations to non-Exadata systems such as Oracle SuperCluster as well.

And yes, the note has now been updated and says:

Although preferred destination system is Linux (either 64-bit Oracle Linux or a certified version of RedHat Linux), this procedure can be used with other Unix based operating systems. However, any non-Linux operating system must be on 11.2.0.4.

Please be aware that “Unix” is important here:
The scripts are not supported (and won't work) on Microsoft Windows platforms.
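
In case you haven't worked with the scripts yet, here is a very rough sketch of the flow – based on the V4 version of the scripts, with made-up tablespace names and paths; the exact options and properties vary between revisions of the note, so please always follow the note itself:

    ## xtt.properties (simplified) -- tablespaces and paths are placeholders
    tablespaces=USERS,DATA01
    platformid=2                  # source platform ID, see V$TRANSPORTABLE_PLATFORM
    src_scratch_location=/u01/stage
    dest_datafile_location=+DATA

    ## on the source: initial backup, then repeated incremental backups
    $ $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup

    ## on the destination: restore, then roll the copies forward each time
    $ $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore

    ## when the downtime window starts: set the tablespaces read-only on the
    ## source, run one final backup/restore pass, then do the Data Pump
    ## (Full) Transportable import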

Another caveat with the note to be aware of:

It does not talk explicitly about “Full Database Migrations” – and not about Full Transportable Export/Import. I've had several customers fall into this trap: they followed the procedure described in the note but ended up with only parts of their database migrated. Please keep in mind that you will have to move everything in the SYSTEM tablespace (and potentially in SYSAUX, such as AWR data) as well; otherwise you'll end up with your data but miss your grants, roles, synonyms etc. That's why we highly recommend the Full Transportable Export/Import procedure, as it takes care of all of this.
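
Just as a minimal sketch of such a run (directory, file and path names are made up – and on an 11.2.0.4 source you must set VERSION=12 to get the full transportable export):

    ## on the 11.2.0.4 source -- the user tablespaces are READ ONLY at this point
    $ expdp system full=y transportable=always version=12 \
          directory=mydir dumpfile=ftex.dmp logfile=ftex_exp.log

    ## on the 12c destination -- after the data files have been transported
    $ impdp system directory=mydir dumpfile=ftex.dmp logfile=ftex_imp.log \
          transport_datafiles='/u02/oradata/MYDB/users01.dbf'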

Please feel free to download my slide deck about how to use the Perl scripts for a large migration – it's the talk I gave at the Harmony Conference 2017:

–Mike

Full Transportable Export/Import – Things to Know

This blog post is an addition to:

Full Transportable Export/Import – Migrating an 11.2.0.4 database to Oracle Database 12c – into the Oracle Cloud

Seth Miller commented on the pitfall of hitting a serious issue during the Data Pump run: Data Pump exits without finishing, and you have to do the entire backup/restore/incremental-roll-forward procedure again. Without any doubt, this is no fun at all.

So let me point out a few things to take into consideration – and some of them, I guess, are not obvious.

Do you need to do the backup/restore/incremental-roll-forward again?

When you let Data Pump do all the manual tasks of Transportable Tablespaces (TTS), this is called Full Transportable Export/Import (FTEX, as I abbreviate it). With regular TTS you set your tablespaces read-write in the destination at the end of the entire process. So whenever something goes wrong during the metadata rebuild, you won't have to restore all your tens of terabytes.
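
For illustration – with regular TTS these switches are manual steps you control yourself (USERS is just an example tablespace):

    -- on the source, before the export:
    SQL> ALTER TABLESPACE users READ ONLY;

    -- on the destination, only after the metadata import succeeded:
    SQL> ALTER TABLESPACE users READ WRITE;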

With FTEX things are a bit different. Data Pump will switch the tablespaces read-write several times during the impdp run. This has to happen. And the job is not restartable right now. This may change in a future release or patch set. We'll see.

What are your options now to prevent a full repeat of all your backup/restore/incremental-roll-forward activities?

FLASHBACK DATABASE unfortunately is not an option: you could set a guaranteed restore point, but as soon as the data file headers have been adjusted, FLASHBACK won't be able to revert this.
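
In other words, even a guaranteed restore point taken upfront won't protect you here:

    -- taken before the impdp run -- looks safe, but isn't in this case
    SQL> CREATE RESTORE POINT before_ftex GUARANTEE FLASHBACK DATABASE;

    -- flashing back (database must be MOUNTED) cannot revert the adjusted
    -- file headers of the plugged-in data files
    SQL> FLASHBACK DATABASE TO RESTORE POINT before_ftex;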

Setting the data files to read-only on the OS level is not an option either, as you might force Data Pump to fail when it needs to write something into a tablespace and receives an OS error because the write operation cannot complete.

Therefore, right now the only valid option I can think of is leveraging storage snapshot technologies in order to restore the files back to their state from before Data Pump ran.
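
What that looks like depends entirely on your storage – just as an example, assuming the transported data files live on a Linux LVM volume (the volume group and volume names are made up):

    ## snapshot the volume holding the data file copies before starting impdp
    $ lvcreate --size 10G --snapshot --name before_ftex /dev/vg_oradata/lv_oradata

    ## if the Data Pump run fails: merge the snapshot back into the origin
    ## (the merge takes effect once the origin volume is reactivated)
    $ lvconvert --merge /dev/vg_oradata/before_ftex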

We are aware of this pitfall, and we are discussing alternatives internally for the future.

Character Sets and Time Zone?

Another topic to pay close attention to is the database's character set and national character set. Please find all the guidelines about character sets and national character sets here:

Transportable Tablespaces – Character Sets – Same same but different?
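
A quick check you can run on both source and destination to compare:

    SQL> SELECT parameter, value
           FROM nls_database_parameters
          WHERE parameter IN ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');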

For time zone files, please make sure you have the same time zone file version in source and destination. As you can't downgrade the time zone file in the destination, usually you'll have to apply the matching time zone patch in the source before initiating the transport. If you don't pay attention, Data Pump will refuse the metadata import of the data in the tablespaces, as otherwise you might end up with corrupted time zone data.
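
Checking the time zone file version on both sides is straightforward:

    -- must return the same version in source and destination
    SQL> SELECT version FROM v$timezone_file;

    -- alternatively:
    SQL> SELECT property_value
           FROM database_properties
          WHERE property_name = 'DST_PRIMARY_TT_VERSION';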

Please find the precise description in the Oracle 12c documentation:

Tablespace Encryption

When your tablespaces are encrypted, you'll have to be a bit more careful when using TTS or FTEX. Even though the FAQ on Oracle.com mentions that there are no issues, there are still some limitations. The documentation is more precise:

  • Limitations on Transportable Tablespace
    • Transportable tablespaces cannot transport encrypted tablespaces.
    • Transportable tablespaces cannot transport tablespaces containing tables with encrypted columns.
  • Limitations on Full Transportable Export/Import
    • You cannot transport an encrypted tablespace to a platform with different endianness.
      To transport an encrypted tablespace to a platform with the same endianness, during export set the ENCRYPTION_PWD_PROMPT export utility parameter to YES, or use the ENCRYPTION_PASSWORD export utility parameter. During import, use the equivalent import utility parameter, and set the value to the same password that was used for the export.
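
Sketched with Data Pump, that last point could look like this (again with made-up names – and note that ENCRYPTION_PWD_PROMPT exists as of Oracle Database 12c; on older sources you'd use ENCRYPTION_PASSWORD instead):

    ## export on the source -- you will be prompted for the encryption password
    $ expdp system full=y transportable=always version=12 \
          directory=mydir dumpfile=ftex_tde.dmp encryption_pwd_prompt=yes

    ## import on the destination -- same endianness, same password
    $ impdp system directory=mydir dumpfile=ftex_tde.dmp \
          transport_datafiles='/u02/oradata/MYDB/secure01.dbf' \
          encryption_pwd_prompt=yes
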
–Mike