PERL scripts for large migrations supported on all platforms (except Windows)

This topic fits very well, as I am presenting today about 100+ TB migrations at the “Harmony” User Group Conference in Finland.

Finland in May – Helsinki, 2017

The question came up whether the PERL scripts for RMAN incrementally rolled forward backups, which we deliver via MOS Note 1389592.1 (11G – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup), are supported for migrations to non-Exadata systems such as Oracle SuperCluster as well.

And yes, the note has now been updated and says:

Although preferred destination system is Linux (either 64-bit Oracle Linux or a certified version of RedHat Linux), this procedure can be used with other Unix based operating systems. However, any non-Linux operating system must be on 11.2.0.4.

Please be aware that “Unix” is the important word here:
The scripts are not supported (and won’t work) if you use them on MS Windows platforms.

Another caveat with the note to be aware of:

It does not talk explicitly about “Full Database Migrations” – and not about Full Transportable Export/Import. I’ve had several customers hit this trap: they used the procedure described in the note but ended up with just parts of their database being migrated. Please keep in mind that you will have to move everything in the SYSTEM tablespace (and potentially in SYSAUX, such as AWR) as well – otherwise you’ll end up with data but will miss your grants, roles, synonyms etc. That’s why we highly recommend the Full Transportable Export/Import procedure, as it takes care of all of this.

You can download my slide deck about how to use the PERL scripts for a large migration – it’s the talk I gave at the Harmony Conference 2017.

–Mike

Can you EXCLUDE tablespaces from Full Transportable Export/Import?

Question: Can you EXCLUDE one or more tablespaces when doing a Full Transportable Export/Import?

First of all, this question has come up twice already in real-world customer migrations, so it’s not a totally unusual question. In one case a tablespace called USERS got precreated and some data had already been stored in it. In the second case we used RMAN incremental backups to migrate a very large database (>100 TB), and some tablespaces weren’t part of the backup set.

I brainstormed with Roy – and I dug into my notes from some years ago, when the question was raised to me as part of a migration project to Oracle 12.1.0.1.

Roy and I had basically two main approaches to validate:

  1. Run expdp with FULL=Y and TRANSPORTABLE=ALWAYS, and EXCLUDE some tablespaces during the export
  2. Run expdp with FULL=Y and TRANSPORTABLE=ALWAYS, but EXCLUDE some tablespaces during the impdp only

In addition, for the customer case where some tablespaces were initially not included in the RMAN incremental backup set strategy, the files for these tablespaces can be copied (and converted) separately. The impdp parameter file will then need to take care of them.
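For the cross-endian case, that copy-and-convert step is an RMAN CONVERT on the destination host. A minimal sketch – the staging path, target path and source platform name are placeholders (pick the platform name from V$TRANSPORTABLE_PLATFORM):

RMAN> CONVERT DATAFILE '/stage/mike1_01.dbf'
        FROM PLATFORM 'Solaris[tm] OE (64-bit)'
        FORMAT '/u02/oradata/DB12/mike1_01.dbf';

The converted file is then listed in TRANSPORT_DATAFILES of the impdp parameter file together with all the others.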

First of all, here’s my test setup. The two tablespaces MIKE1 and MIKE2 are the ones I will treat separately.

SQL> select status, tablespace_name from dba_tablespaces;

STATUS	  TABLESPACE_NAME
--------- ------------------------------
ONLINE	  SYSTEM
ONLINE	  SYSAUX
ONLINE	  UNDOTBS1
ONLINE	  TEMP
READ ONLY USERS
ONLINE	  MIKE1
ONLINE	  MIKE2

SQL> create directory oradump as '/home/oracle';
SQL> grant read, write on directory oradump to SYSTEM;

SQL> EXEC SYS.DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'MIKE1, MIKE2', incl_constraints => TRUE);
PL/SQL procedure successfully completed.

SQL> SELECT * FROM transport_set_violations;
no rows selected

Case 1 – Run expdp but EXCLUDE some tablespaces

I will use this par file:

directory=oradump
dumpfile=exp.dmp
logfile=exp.log
transportable=always
full=y
metrics=y
version=12
exclude=table_statistics,index_statistics
exclude=tablespace:"IN ('MIKE1', 'MIKE2')"
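A quick side note: the quotes and parentheses in the EXCLUDE filter are the reason this belongs in a parameter file. If you passed it directly on the command line you would have to escape it for the shell – a sketch for bash, with otherwise identical parameters:

$ expdp system/oracle directory=oradump dumpfile=exp.dmp logfile=exp.log \
    transportable=always full=y metrics=y version=12 \
    exclude=table_statistics,index_statistics \
    "exclude=tablespace:\"IN ('MIKE1', 'MIKE2')\""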

Now invoking Data Pump with expdp:

$ expdp system/oracle parfile=exp.par

Export: Release 11.2.0.4.0 - Production on Tue May 16 13:17:41 2017
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining
and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_FULL_01":  system/******** parfile=ex.par 
Startup took 1 seconds
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/PLUGTS_FULL/FULL/PLUGTS_TABLESPACE
Processing object type DATABASE_EXPORT/PLUGTS_FULL/PLUGTS_BLK
     Completed 1 PLUGTS_BLK objects in 1 seconds
Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
     Estimated 1 TABLE_DATA objects in 1 seconds

[...]

     Completed 1 DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA objects in 0 seconds
     Completed 30 DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA objects in 1 seconds
     Completed 5 DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA objects in 0 seconds
     Completed 59 DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA objects in 0 seconds
Master table "SYSTEM"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
  /home/oracle/exp.dmp
******************************************************************************
Datafiles required for transportable tablespace USERS:
  /u02/oradata/FTEX/users01.dbf
Job "SYSTEM"."SYS_EXPORT_FULL_01" successfully completed at Tue May 16 13:18:37 2017 elapsed 0 00:00:56

As the next step I copy the datafile(s) to the destination’s directory and configure my import par file:

directory=oradump
dumpfile=exp.dmp
logfile=imp.log
metrics=y
logtime=all
transport_datafiles='/u02/oradata/DB12/users01.dbf'

And importing into my new destination database with impdp:

$ impdp system/oracle parfile=imp.par

Import: Release 12.1.0.2.0 - Production on Tue May 16 13:52:46 2017
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
16-MAY-17 13:57:08.681: Startup took 0 seconds
16-MAY-17 13:57:09.299: Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
16-MAY-17 13:57:09.544: Source time zone is +02:00 and target time zone is +01:00.
16-MAY-17 13:57:09.546: Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01": system/******** parfile=imp.par
16-MAY-17 13:57:09.596: Processing object type DATABASE_EXPORT/PRE_SYSTEM_IMPCALLOUT/MARKER

[...]

16-MAY-17 13:57:45.160: Processing object type DATABASE_EXPORT/END_PLUGTS_BLK
16-MAY-17 13:57:45.238: Completed 1 PLUGTS_BLK objects in 0 seconds
16-MAY-17 13:57:45.243: Processing object type DATABASE_EXPORT/FINAL_POST_INSTANCE_IMPCALLOUT/MARKER
16-MAY-17 13:57:47.868: Completed 1 MARKER objects in 2 seconds
16-MAY-17 13:57:47.875: Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
16-MAY-17 13:57:48.368: Completed 2 PROCACT_SCHEMA objects in 1 seconds
16-MAY-17 13:57:48.373: Processing object type DATABASE_EXPORT/AUDIT
16-MAY-17 13:57:48.499: Completed 29 AUDIT objects in 0 seconds
16-MAY-17 13:57:48.504: Processing object type DATABASE_EXPORT/POST_SYSTEM_IMPCALLOUT/MARKER
16-MAY-17 13:57:49.501: Completed 1 MARKER objects in 1 seconds
16-MAY-17 13:57:49.552: Completed 30 DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA objects in 0 seconds
16-MAY-17 13:57:49.558: Completed 5 DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA objects in 2348 seconds
16-MAY-17 13:57:49.567: Completed 57 DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA objects in 2349 seconds
16-MAY-17 13:57:49.628: Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" completed with 44 error(s) at Tue May 16 13:57:49 2017 elapsed 0 00:00:41

I silently ignore the errors here – some additional EXCLUDE statements would make this run flawlessly. The main reason for the errors is simply that with an 11.2.0.4 source database, not everything that should be excluded gets excluded internally.

Anyhow, Case 1 looks ok.

SQL> select tablespace_name, status from dba_tablespaces;

TABLESPACE_NAME 	       STATUS
------------------------------ ---------
SYSTEM			       ONLINE
SYSAUX			       ONLINE
UNDOTBS1		       ONLINE
TEMP			       ONLINE
USERS			       ONLINE

But there are some details which may cause trouble. A quick look into the log file:

16-MAY-17 13:57:15.553: ORA-39083: Object type USER:"MIKE1" failed to create with error:
ORA-00959: tablespace 'MIKE1' does not exist
Failing sql is:
 CREATE USER "MIKE1" IDENTIFIED BY VALUES 'S:67D1974017CA6517A6A9BA1655182FEA352800F1460501DCD29602A666B3;CF5B00ED3595B543' DEFAULT TABLESPACE "MIKE1" TEMPORARY TABLESPACE "TEMP"
16-MAY-17 13:57:15.553: ORA-39083: Object type USER:"MIKE2" failed to create with error:
ORA-00959: tablespace 'MIKE2' does not exist
Failing sql is:
 CREATE USER "MIKE2" IDENTIFIED BY VALUES 'S:8915BFF218BB5A49958F5C12F4ED161BBB9EC71A54E1FFA471F94002F8C2;335808EE482DFA17' DEFAULT TABLESPACE "MIKE2" TEMPORARY TABLESPACE "TEMP"

Sure – as tablespace MIKE1 does not exist, user MIKE1, whose default tablespace was MIKE1, can’t be created either. This is ok in my case. But I’d like to point out that you have to pay close attention to such errors.

And of course, the same applies to objects in these tablespaces as well:

16-MAY-17 13:57:44.958: Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
16-MAY-17 13:57:45.068: ORA-39083: Object type TABLE:"MIKE1"."TEST1" failed to create with error:
ORA-00959: tablespace 'MIKE1' does not exist
Failing sql is:
CREATE TABLE "MIKE1"."TEST1" ("OBJ#" NUMBER NOT NULL ENABLE, "DATAOBJ#" NUMBER, "OWNER#" NUMBER NOT NULL ENABLE, "NAME" VARCHAR2(30 BYTE) NOT NULL ENABLE, "NAMESPACE" NUMBER NOT NULL ENABLE, "SUBNAME" VARCHAR2(30 BYTE), "TYPE#" NUMBER NOT NULL ENABLE, "CTIME" DATE NOT NULL ENABLE, "MTIME" DATE NOT NULL ENABLE, "STIME" DATE NOT NULL ENABLE, "STATUS" NUMBER NOT NULL ENABLE, "R

Be careful!
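If you need those users in the target anyway, one way to avoid the ORA-00959 failures is to pre-create them with an existing default tablespace before running impdp. A minimal sketch – using USERS as the fallback default tablespace is my assumption:

SQL> create user mike1 identified by "MySecretPw1" default tablespace users temporary tablespace temp;
SQL> create user mike2 identified by "MySecretPw2" default tablespace users temporary tablespace temp;

Data Pump will then just report that the users exist already and continue with their grants and remaining objects.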

Case 2 – Run expdp but EXCLUDE some tablespaces during impdp

The parameter files look like this – first exp.par:

directory=oradump
dumpfile=exp.dmp
logfile=exp.log
transportable=always
full=y
metrics=y
version=12
exclude=table_statistics,index_statistics

and for imp.par:

directory=oradump
dumpfile=exp.dmp
logfile=imp.log
metrics=y
logtime=all
exclude=tablespace:"IN ('MIKE1', 'MIKE2')"
transport_datafiles='/u02/oradata/DB12/users01.dbf'

But unfortunately the result is different. While the expdp of course runs smoothly, the impdp fails:

$ impdp parfile=imp.par

Import: Release 12.2.0.1.0 - Production on Tue May 16 17:18:54 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
ORA-39002: invalid operation
ORA-39352: Wrong number of TRANSPORT_DATAFILES specified: expected 3, received 1

Case 2 does not work. The export’s master table still records all three transportable datafiles (USERS, MIKE1 and MIKE2), so impdp refuses to start when only one of them is supplied via TRANSPORT_DATAFILES.

Summary

You can EXCLUDE one or more tablespaces when doing a Full Transportable Export/Import, but you have to be careful. The approach from Case 1 above is simple and does not have any dependencies. Be aware that the Full Transportable Export/Import will try to create the users and objects – and will of course fail if the tablespace is not part of the operation.

–Mike

Transportable Tablespaces and READ ONLY in Oracle Database 12c

We recently worked with a customer who noticed that they were not able to use transportable tablespaces to connect the same tablespace data files to two databases at the same time, even after setting the tablespaces READ ONLY in SQL*Plus. This is new behavior in 12c, and many customers are not yet aware of this change. Here are the details of what changed, why, and how you might want to deal with it if the changes affect your environment.

What Changed?

Starting in 12.1, Data Pump sets tablespaces read write during the import phase of a transportable tablespace migration. This means that a tablespace cannot be hooked into two different databases at the same time using transportable tablespaces.

Why Was This Change Made?

There were a couple of motivations for this change.

First, as databases have grown, we encountered performance hits when dealing with tablespaces that contain many partitions or subpartitions for either tables or indexes. The reason for this (apologies if this gets too deep) is that we try to ensure that we can reclaim free space in cases where a tablespace is being moved, but where not all tables within that tablespace are part of the operation. For example, you could move a tablespace data file which includes partitions from 5 tables, but you may be interested in only 2 of those tables. The segments used by the other 3 tables would be dead space that we should reclaim.

Prior to 12c, we would reclaim this space by first recording all the segments being exported during the export phase, and then those imported during the import phase. This allowed us to free up all the space for segments that were not imported. This worked, but as bigfile tablespaces grew into the tens of terabytes, performance really suffered. We ran into cases where all that accounting for segments took literally days. In 12c we implemented a different technique where we no longer record the segments on export (this is available as a backport to 11g as well), and then upon import we recompute the bitmap for the tablespace. Recomputing the bitmap means calling DBMS_SPACE_ADMIN.TABLESPACE_REBUILD_BITMAPS, and generally takes a matter of seconds compared to the previous method which could take hours.
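In case you ever need to trigger that recomputation yourself (for instance in the read-only scenario described further down), the call is a one-liner – a sketch using the USERS tablespace as an example:

SQL> exec DBMS_SPACE_ADMIN.TABLESPACE_REBUILD_BITMAPS('USERS');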

Therefore, this change is a very large performance improvement for both export and import of transportable tablespaces when there are large numbers of data segments involved.

The second reason for this change was to enable transportable tablespaces to handle import of a tablespace into a database that used a different version of Timestamp with Timezone (TSTZ) data. Prior to 12c, there were many restrictions on moving TSTZ data between databases. We have progressively been able to relax and eliminate these restrictions over time. To quote from the 11.2.0.4 Database Utilities Guide:

Jobs performed in transportable tablespace mode have the following requirements concerning time zone file versions:

  • If the source is Oracle Database 11g release 2 (11.2.0.2) or later and there are tables in the transportable set that use TIMESTAMP WITH TIMEZONE (TSTZ) columns, then the time zone file version on the target database must exactly match the time zone file version on the source database.

  • If the source is earlier than Oracle Database 11g release 2 (11.2.0.2), then the time zone file version must be the same on the source and target database for all transportable jobs regardless of whether the transportable set uses TSTZ columns.

If these requirements are not met, then the import job aborts before anything is imported. This is because if the import job were allowed to import the objects, there might be inconsistent results when tables with TSTZ columns were read.

Starting in 12.1 we are able to handle TSTZ data when moving a tablespace to a database with a higher timezone version. This is done by opening the tablespace datafile and fixing the TSTZ data using features created in 12c for this purpose. This means that with 12c we can use transportable tablespaces in more scenarios, and that the resulting import will be more complete when there is Timestamp with Timezone data involved.

In summary, the change to opening data files read write during a transportable tablespace import makes the process faster and the technique more broadly applicable.

But Neither of those Benefits Affects Me, and I Like the Old Behavior!

It might be possible for us to allow the old behavior, by implementing a parameter explicitly for that purpose. However, there would be the following shortcomings to using this parameter:

  1. We would not be able to account for the segments imported during the transportable impdp operation, because we would have no list of exported segments against which to compare them. Those segments would be dead space until the user explicitly called the DBMS_SPACE_ADMIN.TABLESPACE_REBUILD_BITMAPS procedure.
  2. If any of the tablespaces being imported contained TSTZ data, then the timezone version of the importing database would have to exactly match that of the exporting database – otherwise we would abort the import.

However, we don’t require a new parameter in order to achieve the goal of preserving the old behavior. If the files are set READ ONLY at the operating system level (or in ASM) then this prevents any attempt to set them read write at the database level. This means that we will not reclaim free space from unused segments, and the TSTZ fix-ups will not be performed. In this case, any tables with TSTZ data will be dropped upon import in cases where the timezone version of the target database is not the same as that of the imported data.
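As a minimal sketch on Linux – reusing the datafile path from the Case 1 example above; for ASM you would set the equivalent permissions there instead:

$ chmod 0440 /u02/oradata/DB12/users01.dbf
$ impdp system/oracle parfile=imp.par

With the file unwritable at the OS level, Data Pump leaves it untouched: no bitmap rebuild and no TSTZ fix-ups happen.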

We (Data Pump development) tend to view the benefits gained by this change as far outweighing the loss of functionality.
However, we do recognize that we have hundreds of thousands of customers who may have varying use cases and priorities for Data Pump, so if there is a desire to provide options as described above, we would be happy to accept enhancement requests for consideration.

Transportable Tablespaces – Character Sets – Same same but different?

All credits go to Don Wolf, an Oracle Advanced Customer Support engineer from Ohio as he dug out this information 🙂 Thanks Don!

Do database character sets have to match EXACTLY for Transportable Tablespaces?

That sounds like a simple question. When you look into our big slide deck, the answer will be a straight “Yes”. No doubts. Regardless of whether you would like to do Transportable Tablespaces or Full Transportable Export/Import, your source’s and your target’s database character sets must be equal. Otherwise Data Pump won’t allow you to process the metadata import.

But Don was wondering about slightly differing information in MOS notes and the documentation.
And he asked me if I could clarify things. I couldn’t.

  • MOS Note:1454872.1
    Transportable Tablespace (TTS) Restrictions and Limitations:
    Details, Reference, and Version Where Applicable

    points to the documentation, and the documentation tells you:
    • The source and the destination databases must use compatible database character sets. That is, one of the following must be true:
      • The database character sets of the source and the target databases are the same.
      • The source database character set is a strict (binary) subset of the target database character set, and the following three conditions are true:
        • The source database is in version 10.1.0.3 or higher.
        • The tablespaces to be transported contain no table columns with character length semantics or the maximum character width is the same in both the source and target database character sets.
        • The tablespaces to be transported contain no columns with the CLOB data type, or the source and the target database character sets are both single-byte or both multibyte.
      • The source database character set is a strict (binary) subset of the target database character set, and the following two conditions are true:
        • The source database is in a version lower than 10.1.0.3.
        • The maximum character width is the same in the source and target database character sets.
  • And furthermore, from the “Choosing a Character Set” section of the Oracle 12.1 Database Globalization Support Guide:
    • Oracle Database does not maintain a list of all subset-superset pairs but it does maintain a list of binary subset-superset pairs that it recognizes in various situations such as checking compatibility of a transportable tablespace or a pluggable database. Table A-11 and Table A-12 list the binary subset-superset pairs recognized by Oracle Database.
    • In those tables the only binary subset-superset pairs involving AL32UTF8 are:
      • UTF8/AL32UTF8
      • US7ASCII/AL32UTF8
  • This is not 100% congruent with the information provided in
    MOS Note 371556.1 – How to Migrate to different Endian Platform Using
    Transportable Tablespaces With RMAN

    which says:
    The source and target database must use the same character set and national character set.

What is correct, what’s not?

First of all, the wording “compatible character sets” seems to have been introduced with Oracle Database 11.2.

In fact, the scope for the target system has become broader with Oracle Database 11.2. The rules below are correct as soon as your target database is Oracle Database 11.2 or a newer release (a quick check query follows after the list):

  • The source and the destination databases must use compatible database character sets. That is, one of the following must be true:
    • The database character sets of the source and the target databases are the same.
    • The source database character set is a strict (binary) subset of the target database character set, and the following three conditions are true:
      • The source database is in version 10.1.0.3 or higher.
      • The tablespaces to be transported contain no table columns with character length semantics or the maximum character width is the same in both the source and target database character sets.
      • The tablespaces to be transported contain no columns with the CLOB data type, or the source and the target database character sets are both single-byte or both multibyte.
    • The source database character set is a strict (binary) subset of the target database character set, and the following two conditions are true:
      • The source database is in a version lower than 10.1.0.3.
      • The maximum character width is the same in the source and target database character sets.
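To see which case applies to you, compare the database character sets of source and target – run this on both databases:

SQL> select parameter, value
       from nls_database_parameters
      where parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');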

–Mike

Data Pump – Exclude Stats differently for TTS and FTEX

Nice little best practice for statistics and Data Pump when doing either Transportable Tablespaces or Full Transportable Export/Import (credits to Roy and Dean Gagne).

Transport Statistics via a Staging Table

First of all, we always recommend excluding statistics when doing a Data Pump export, as the import of such stats takes way longer than transporting them via a stats table. If you are unfamiliar with transporting stats between databases, please see the Oracle Performance Tuning Guide, which has a nice tutorial.

The basic steps to transport statistics from one database to another quickly and efficiently consist of the following (a sketch follows after the list):

  1. Create a staging table in your source database with DBMS_STATS.CREATE_STAT_TABLE
  2. Export your local stats into this staging table by using DBMS_STATS.EXPORT_SCHEMA_STATS
  3. Export the staging table and import it into your destination database with Data Pump
  4. Import the statistics held in the staging table by using DBMS_STATS.IMPORT_SCHEMA_STATS
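Here is a minimal sketch of these four steps – the schema name HR and the staging table name MYSTATS are just assumptions for this example:

-- Steps 1 and 2 on the source: create the staging table and fill it
SQL> exec DBMS_STATS.CREATE_STAT_TABLE(ownname => 'HR', stattab => 'MYSTATS');
SQL> exec DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'HR', stattab => 'MYSTATS');

-- Step 3: move the staging table with Data Pump (table mode)
$ expdp system/oracle directory=oradump dumpfile=stats.dmp tables=HR.MYSTATS
$ impdp system/oracle directory=oradump dumpfile=stats.dmp

-- Step 4 on the target: import the statistics from the staging table
SQL> exec DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'HR', stattab => 'MYSTATS');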

For the regular Data Pump exports we always recommend setting:

EXCLUDE=STATISTICS

to avoid performance penalties during impdp.

But this setting does not have any effect for Transportable Tablespaces or Full Transportable Export/Import.

How to exclude Statistics for TTS and FTEX?

For reasons I don’t know, the metadata heterogeneous object for “transportable” is different from all of the others. Therefore, in order to exclude statistics for Transportable Tablespaces and Full Transportable Export/Import you must set:

EXCLUDE=TABLE_STATISTICS,INDEX_STATISTICS

Note:1389592.1 Public – Speed Up Cross Platform Transportable Tablespaces (xTTS) with RMAN Incremental Backups

As we have already mentioned in our recent workshops, MOS Note:1389592.1 is now public:

  • MOS Note:1389592.1
    Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backups

This technique currently works for migrations to 64-bit Linux and is supported by Oracle Support for Exadata migrations only. It utilizes RMAN incremental backups to reduce the large amount of downtime it takes to copy and convert the datafiles across endianness.

Two requirements must be met to use this new functionality:

  • Oracle 11.2.0.2 plus Exadata BP12 (patch 12982245)
  • One-off patch 13340675 contains the RMAN extension and is currently available on top of BP12 on Oracle 11.2.0.2

Let me know if you have further questions. And note: this restriction to Oracle 11.2.0.2 with BP12 applies only to the intermediate instance you’ll use to roll forward your converted incremental backups. The source might be Oracle 10.2, whereas the target might also be an Oracle 11.2.0.3 installation.