PERL scripts for large migrations supported on all platforms (except Windows)

This topic fits very well, as I am presenting about 100+ TB migrations today at the “Harmony” User Group Conference in Finland.

The question came up whether the PERL scripts for RMAN incrementally rolled forward backups, which we deliver via MOS Note 1389592.1 (11G – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup), are supported for migrations to non-Exadata systems such as Oracle SuperCluster as well.

And yes, the note has now been updated and says:

Although preferred destination system is Linux (either 64-bit Oracle Linux or a certified version of RedHat Linux), this procedure can be used with other Unix based operating systems. However, any non-Linux operating system must be on 11.2.0.4.

Please be aware that “Unix” is important here:
The scripts are not supported (and won’t work) if you use them with MS Windows platforms.

Another caveat with the note to be aware of:

It does not talk explicitly about “Full Database Migrations”, nor about Full Transportable Export/Import. I’ve had several customers hitting this trap: they used the procedure described in the note but ended up with just parts of their database being migrated. Please keep in mind that you will have to move everything in the SYSTEM tablespace (and potentially in SYSAUX, such as AWR) as well, as otherwise you’ll end up with data but will miss your grants, roles, synonyms etc. That’s why we highly recommend the Full Transportable Export/Import procedure, as it takes care of all this.

You can download my slide deck about how to use the PERL scripts for a large migration – it’s the talk I gave at the Harmony Conference 2017:

–Mike

Can you EXCLUDE tablespaces from Full Transportable Export/Import?

Question: Can you EXCLUDE one or more tablespaces when doing a Full Transportable Export/Import?

First of all, this question has already come up twice in real-world customer migrations, so it’s not a totally unusual question. In one case a tablespace called USERS got precreated and some data got stored in it. In the second case we used RMAN incremental backups to migrate a very large database (>100 TB) and some tablespaces weren’t part of the backup set.

I brainstormed with Roy – and I dug into my notes from some years ago when the question was raised to me as part of a migration project to Oracle 12.1.0.1.

Roy and I basically had two main approaches to validate.

  1. Run expdp with FULL=Y and TRANSPORTABLE=ALWAYS but EXCLUDE some tablespaces
  2. Run expdp with FULL=Y and TRANSPORTABLE=ALWAYS but EXCLUDE some tablespaces during impdp only

In addition, for the customer case where some tablespaces were initially not included in the RMAN incremental backup strategy, the files for these tablespaces can be copied (and converted) separately afterwards. The impdp parameter file will need to take care of them.
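
As a rough sketch of such an additional copy-and-convert step (the staging path and the source platform name below are made up for illustration), RMAN on the destination host could convert a copied data file like this:

# staging path and source platform name are illustrative
RMAN> CONVERT DATAFILE '/stage/mike1.dbf'
      FROM PLATFORM 'AIX-Based Systems (64-bit)'
      FORMAT '/u02/oradata/DB12/mike1.dbf';

The converted file then simply gets added to the TRANSPORT_DATAFILES list in the import parameter file.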

First of all, here’s my test setup. The two tablespaces MIKE1 and MIKE2 are the ones I will treat separately.

SQL> select status, tablespace_name from dba_tablespaces;

STATUS	  TABLESPACE_NAME
--------- ------------------------------
ONLINE	  SYSTEM
ONLINE	  SYSAUX
ONLINE	  UNDOTBS1
ONLINE	  TEMP
READ ONLY USERS
ONLINE	  MIKE1
ONLINE	  MIKE2

SQL> create directory oradump as '/home/oracle';
SQL> grant read, write on directory oradump to SYSTEM;

SQL> EXEC SYS.DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'MIKE1, MIKE2', incl_constraints => TRUE);
PL/SQL procedure successfully completed.

SQL> SELECT * FROM transport_set_violations;
no rows selected

Case 1 – Run expdp but EXCLUDE some tablespaces

I will use this par file:

directory=oradump
dumpfile=exp.dmp
logfile=exp.log
transportable=always
full=y
metrics=y
version=12
exclude=table_statistics,index_statistics
exclude=tablespace:"IN ('MIKE1', 'MIKE2')"

Now invoking Data Pump with expdp:

$ expdp system/oracle parfile=exp.par

Export: Release 11.2.0.4.0 - Production on Tue May 16 13:17:41 2017
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining
and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_FULL_01":  system/******** parfile=ex.par 
Startup took 1 seconds
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/PLUGTS_FULL/FULL/PLUGTS_TABLESPACE
Processing object type DATABASE_EXPORT/PLUGTS_FULL/PLUGTS_BLK
     Completed 1 PLUGTS_BLK objects in 1 seconds
Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
     Estimated 1 TABLE_DATA objects in 1 seconds

[...]

     Completed 1 DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA objects in 0 seconds
     Completed 30 DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA objects in 1 seconds
     Completed 5 DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA objects in 0 seconds
     Completed 59 DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA objects in 0 seconds
Master table "SYSTEM"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
  /home/oracle/exp.dmp
******************************************************************************
Datafiles required for transportable tablespace USERS:
  /u02/oradata/FTEX/users01.dbf
Job "SYSTEM"."SYS_EXPORT_FULL_01" successfully completed at Tue May 16 13:18:37 2017 elapsed 0 00:00:56

As the next step I copy the datafile(s) to the destination’s directory and configure my import par file:

directory=oradump
dumpfile=exp.dmp
logfile=imp.log
metrics=y
logtime=all
transport_datafiles='/u02/oradata/DB12/users01.dbf'

And importing into my new destination database with impdp:

$ impdp system/oracle parfile=imp.par

Import: Release 12.1.0.2.0 - Production on Tue May 16 13:52:46 2017
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
16-MAY-17 13:57:08.681: Startup took 0 seconds
16-MAY-17 13:57:09.299: Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
16-MAY-17 13:57:09.544: Source time zone is +02:00 and target time zone is +01:00.
16-MAY-17 13:57:09.546: Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01": system/******** parfile=imp.par
16-MAY-17 13:57:09.596: Processing object type DATABASE_EXPORT/PRE_SYSTEM_IMPCALLOUT/MARKER

[...]

16-MAY-17 13:57:45.160: Processing object type DATABASE_EXPORT/END_PLUGTS_BLK
16-MAY-17 13:57:45.238: Completed 1 PLUGTS_BLK objects in 0 seconds
16-MAY-17 13:57:45.243: Processing object type DATABASE_EXPORT/FINAL_POST_INSTANCE_IMPCALLOUT/MARKER
16-MAY-17 13:57:47.868: Completed 1 MARKER objects in 2 seconds
16-MAY-17 13:57:47.875: Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
16-MAY-17 13:57:48.368: Completed 2 PROCACT_SCHEMA objects in 1 seconds
16-MAY-17 13:57:48.373: Processing object type DATABASE_EXPORT/AUDIT
16-MAY-17 13:57:48.499: Completed 29 AUDIT objects in 0 seconds
16-MAY-17 13:57:48.504: Processing object type DATABASE_EXPORT/POST_SYSTEM_IMPCALLOUT/MARKER
16-MAY-17 13:57:49.501: Completed 1 MARKER objects in 1 seconds
16-MAY-17 13:57:49.552: Completed 30 DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA objects in 0 seconds
16-MAY-17 13:57:49.558: Completed 5 DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA objects in 2348 seconds
16-MAY-17 13:57:49.567: Completed 57 DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA objects in 2349 seconds
16-MAY-17 13:57:49.628: Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" completed with 44 error(s) at Tue May 16 13:57:49 2017 elapsed 0 00:00:41

I silently ignore the errors here; some additional EXCLUDE statements would make this run flawlessly. The main reason for the errors is simply that with an 11.2.0.4 source database not everything gets excluded internally that should be excluded.
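
Just as an illustration (I haven’t verified that this suppresses every single error), an obvious candidate would be to also exclude the schemas living in the skipped tablespaces in the export par file:

# hypothetical addition to exp.par – schema names match the skipped tablespaces
exclude=schema:"IN ('MIKE1', 'MIKE2')"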

Anyhow, Case 1 looks ok.

SQL> select tablespace_name, status from dba_tablespaces;

TABLESPACE_NAME 	       STATUS
------------------------------ ---------
SYSTEM			       ONLINE
SYSAUX			       ONLINE
UNDOTBS1		       ONLINE
TEMP			       ONLINE
USERS			       ONLINE

But there are some details which may cause trouble. A quick look into the log file:

16-MAY-17 13:57:15.553: ORA-39083: Object type USER:"MIKE1" failed to create with error:
ORA-00959: tablespace 'MIKE1' does not exist
Failing sql is:
 CREATE USER "MIKE1" IDENTIFIED BY VALUES 'S:67D1974017CA6517A6A9BA1655182FEA352800F1460501DCD29602A666B3;CF5B00ED3595B543' DEFAULT TABLESPACE "MIKE1" TEMPORARY TABLESPACE "TEMP"
16-MAY-17 13:57:15.553: ORA-39083: Object type USER:"MIKE2" failed to create with error:
ORA-00959: tablespace 'MIKE2' does not exist
Failing sql is:
 CREATE USER "MIKE2" IDENTIFIED BY VALUES 'S:8915BFF218BB5A49958F5C12F4ED161BBB9EC71A54E1FFA471F94002F8C2;335808EE482DFA17' DEFAULT TABLESPACE "MIKE2" TEMPORARY TABLESPACE "TEMP"

Sure, as tablespace MIKE1 does not exist, user MIKE1, whose default tablespace was MIKE1, can’t be created either. This is ok in my case. But I’d like to point out that you have to pay close attention to such errors.

And of course, the same applies to the objects in these tablespaces as well:

16-MAY-17 13:57:44.958: Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
16-MAY-17 13:57:45.068: ORA-39083: Object type TABLE:"MIKE1"."TEST1" failed to create with error:
ORA-00959: tablespace 'MIKE1' does not exist
Failing sql is:
CREATE TABLE "MIKE1"."TEST1" ("OBJ#" NUMBER NOT NULL ENABLE, "DATAOBJ#" NUMBER, "OWNER#" NUMBER NOT NULL ENABLE, "NAME" VARCHAR2(30 BYTE) NOT NULL ENABLE, "NAMESPACE" NUMBER NOT NULL ENABLE, "SUBNAME" VARCHAR2(30 BYTE), "TYPE#" NUMBER NOT NULL ENABLE, "CTIME" DATE NOT NULL ENABLE, "MTIME" DATE NOT NULL ENABLE, "STIME" DATE NOT NULL ENABLE, "STATUS" NUMBER NOT NULL ENABLE, "R

Be careful!

Case 2 – Run expdp but EXCLUDE some tablespaces during impdp

Parameter files look like this for exp.par:

directory=oradump
dumpfile=exp.dmp
logfile=exp.log
transportable=always
full=y
metrics=y
version=12
exclude=table_statistics,index_statistics

and for imp.par:

directory=oradump
dumpfile=exp.dmp
logfile=imp.log
metrics=y
logtime=all
exclude=tablespace:"IN ('MIKE1', 'MIKE2')"
transport_datafiles='/u02/oradata/DB12/users01.dbf'

But the result unfortunately is different. While the expdp runs smoothly of course, the impdp fails:

$ impdp parfile=imp.par

Import: Release 12.2.0.1.0 - Production on Tue May 16 17:18:54 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
ORA-39002: invalid operation
ORA-39352: Wrong number of TRANSPORT_DATAFILES specified: expected 3, received 1

Case 2 does not work.

Summary

You can EXCLUDE one or more tablespaces when doing a Full Transportable Export/Import, but you have to be careful. My approach from above is simple and does not have any dependencies. Be aware that the Full Transportable Export/Import will try to create the users and objects – and will of course fail if their tablespace is not part of the operation.

–Mike

Create a database with NON-DEFAULT Time Zone

One requirement of Transportable Tablespaces (and of course Full Transportable Export/Import as well) is to have identical database character sets AND identical time zone settings.

Problem

The source database has a lower time zone version than the default a new database would get in the destination home.

Let’s assume you’d like to migrate an Oracle 11.2.0.4 database off IBM AIX into Oracle 12.1.0.2 on Exadata, meaning Oracle Linux 6. The source time zone version, if it has never been upgraded, would be TZ V14 – whereas the new Oracle 12.1.0.2 database you’d create would get TZ V18 by default.
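
To check which time zone file version your source database currently uses, this quick query works on Oracle 11.2 and 12c:

SQL> select version from v$timezone_file;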

Solution 1 – Upgrade Time Zone version in Source

You could now “simply” upgrade the time zone version in the source database, meaning you’d apply the time zone patch matching the default time zone of the destination database home (see MOS Note 412160.1 for the list of available time zone patches). Then you would have to adjust the data with the scripts provided in MOS Note 1585343.1.

Or you could go a step further and apply the most recent available time zone patches to both homes – which is what I’d recommend in this case. Then you adjust the source database before initiating the transport steps, including the copy operations.

I have put “simply” in quotation marks on purpose as often this may not be an option. Changing the source’s time zone version will incur downtime and a restart of the database. Furthermore, if somebody has a very high amount of time zone dependent data, it may take a while.

Solution 2 – Create Destination Database with lower Time Zone version

Therefore I’d consider the second solution the more interesting and practical one: create the destination database to transport to with a different time zone version than the default. In Oracle Database 12.1.0.2 the default would be TZ V18.

Step 1: Create database creation scripts with DBCA

I haven’t tried whether it would work to let DBCA create a custom database directly, but I don’t trust Java applications to use my environment variables, so I prefer the script version. Using one of the seed databases in DBCA (DWH, OLTP, GENERAL PURPOSE) does not work, as those all use the default time zone version of the release without an option to override it.

Step 2: Set environment variable ORA_TZFILE

The time zone files are located in $ORACLE_HOME/oracore/zoneinfo. By using the environment variable ORA_TZFILE we can override the default of using the highest numbered time zone file in this directory:

$ export  ORA_TZFILE=/u01/app/oracle/product/12.1.0.2/oracore/zoneinfo/timezone_14.dat
Step 3: Create the database with the “sh” script

DBCA has created the “create database” scripts and one named <SID>.sh. Start the creation by:

$ ./MIKE.sh
Step 4: Verify the correct time zone setting of your new database

The following query reflects the current time zone version of this database:

SQL> select value$ "TZ Version" from SYS.PROPS$ where name='DST_PRIMARY_TT_VERSION';

In my example the result will be 14 now instead of 18 (the default for an Oracle 12.1.0.2 database).

–Mike

Transportable Tablespaces and READ ONLY in Oracle Database 12c

We recently worked with a customer who noticed that they were not able to use transportable tablespaces to connect the same tablespace data files to two databases at the same time, even after setting the tablespaces READ ONLY in SQL*Plus. This is new behavior in 12c, and many customers are not yet aware of this change. Here are the details of what changed, why, and how you might want to deal with it if the changes affect your environment.

What Changed?

Starting in 12.1, Data Pump sets tablespaces read write during the import phase of a transportable tablespace migration. This means that a tablespace cannot be hooked into two different databases at the same time using transportable tablespaces.

Why Was This Change Made?

There were a couple of motivations for this change.

First, as databases have grown, we encountered performance hits when dealing with tablespaces that contain many partitions or subpartitions for either tables or indexes. The reason for this (apologies if this gets too deep) is that we try to ensure that we can reclaim free space in cases where a tablespace is being moved, but where not all tables within that tablespace are part of the operation. For example, you could move a tablespace data file which includes partitions from 5 tables, but you may be interested in only 2 of those tables. The segments used by the other 3 tables would be dead space that we should reclaim.

Prior to 12c, we would reclaim this space by first recording all the segments being exported during the export phase, and then those imported during the import phase. This allowed us to free up all the space for segments that were not imported. This worked, but as bigfile tablespaces grew into the tens of terabytes, performance really suffered. We ran into cases where all that accounting for segments took literally days. In 12c we implemented a different technique where we no longer record the segments on export (this is available as a backport to 11g as well), and then upon import we recompute the bitmap for the tablespace. Recomputing the bitmap means calling DBMS_SPACE_ADMIN.TABLESPACE_REBUILD_BITMAPS, and generally takes a matter of seconds compared to the previous method which could take hours.
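
For reference, recomputing the bitmap manually is a single call – a minimal sketch with a made-up tablespace name:

SQL> exec DBMS_SPACE_ADMIN.TABLESPACE_REBUILD_BITMAPS('USERS');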

Therefore, this change is a very large performance improvement for both export and import of transportable tablespaces when there are large numbers of data segments involved.

The second reason for this change was to enable transportable tablespaces to handle import of a tablespace into a database that used a different version of Timestamp with Timezone (TSTZ) data. Prior to 12c, there were many restrictions on moving TSTZ data between databases. We have progressively been able to relax and eliminate these restrictions over time. To quote from the 11.2.0.4 Database Utilities Guide:

Jobs performed in transportable tablespace mode have the following requirements concerning time zone file versions:

  • If the source is Oracle Database 11g release 2 (11.2.0.2) or later and there are tables in the transportable set that use TIMESTAMP WITH TIMEZONE (TSTZ) columns, then the time zone file version on the target database must exactly match the time zone file version on the source database.

  • If the source is earlier than Oracle Database 11g release 2 (11.2.0.2), then the time zone file version must be the same on the source and target database for all transportable jobs regardless of whether the transportable set uses TSTZ columns.

If these requirements are not met, then the import job aborts before anything is imported. This is because if the import job were allowed to import the objects, there might be inconsistent results when tables with TSTZ columns were read.

Starting in 12.1 we are able to handle TSTZ data when moving a tablespace to a database with a higher timezone version. This is done by opening the tablespace datafile and fixing the TSTZ data using features created in 12c for this purpose. This means that with 12c we can use transportable tablespaces in more scenarios, and that the resulting import will be more complete when there is Timestamp with Timezone data involved.

In summary, the change to opening data files read write during a transportable tablespace import makes the process faster and the technique more broadly applicable.

But Neither of those Benefits Affects Me, and I Like the Old Behavior!

It might be possible for us to allow the old behavior, by implementing a parameter explicitly for that purpose. However, there would be the following shortcomings to using this parameter:

  1. We would not be able to account for the segments imported during the transportable impdp operation, because we would have no list of exported segments against which to compare them. Those segments would be dead space until the user explicitly called the DBMS_SPACE_ADMIN.TABLESPACE_REBUILD_BITMAPS procedure.
  2. If any of the tablespaces being imported contained TSTZ data, then the timezone version of the importing database would have to exactly match that of the exporting database; otherwise we would abort the import.

However, we don’t require a new parameter in order to achieve the goal of preserving the old behavior. If the files are set READ ONLY at the operating system level (or in ASM) then this prevents any attempt to set them read write at the database level. This means that we will not reclaim free space from unused segments, and the TSTZ fix-ups will not be performed. In this case, any tables with TSTZ data will be dropped upon import in cases where the timezone version of the target database is not the same as that of the imported data.
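
A minimal sketch of that OS-level approach (re-using the illustrative data file path from the earlier examples), before running the transportable impdp:

$ chmod 0440 /u02/oradata/DB12/users01.dbf    # path is illustrative
$ impdp system/oracle parfile=imp.par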

We (Data Pump development) tend to view the benefits gained by this change as far outweighing the loss of functionality.
However, we do recognize that we have hundreds of thousands of customers who may have varying use cases and priorities for Data Pump, so if there is a desire to provide options as described above, we would be happy to accept enhancement requests for consideration.

Full Transportable Export/Import – PAR File Examples

Roy and I blogged about Full Transportable Export/Import in the past:

If you haven’t heard of this database feature, it allows you to migrate a full database by using Transportable Tablespaces as a base technology but letting Data Pump do all the manual steps for you in a one-command migration. And if needed, it works with RMAN Incremental Backups as well in order to decrease the downtime for copying larger databases. It works cross-platform, cross-Endianness – and as early as with a source version of Oracle Database 11.2.0.3 – but only towards Oracle Database 12c.

Often there are questions regarding the correct syntax for the parameter files. As I have to look it up from time to time as well I thought I’d share some example par files as there are two different approaches: With or without NETWORK_LINK.

With expdp/impdp – without NETWORK_LINK

expdp.par

directory=oradump
dumpfile=wspmst_ftex_exp.dmp
logfile=wspmst_ftex_exp.log
transportable=always
full=y
metrics=y
logtime=all
exclude=table_statistics,index_statistics

impdp.par

directory=oradump
dumpfile=wspmst_ftex_exp.dmp
logfile=wspmst_ftex_imp.log
metrics=y
logtime=all
TRANSPORT_DATAFILES='/oradata/wspmst/data/WSPMSTCON/3ABC7F153DB55A4DE0539ED0C80A102D/datafile/advsecmon.331.880913493'
TRANSPORT_DATAFILES='/oradata/wspmst/data/WSPMSTCON/3ABC7F153DB55A4DE0539ED0C80A102D/datafile/advsecmon.555.880923697'

Only impdp (no expdp) – using NETWORK_LINK

Two examples here – one dealing with an Oracle 11.2 source database, the other moving directly from an Oracle 12c non-CDB into a pluggable database (PDB).

impdp1.par

impdp mike/passwd@v121
NETWORK_LINK=v112
FULL=Y
TRANSPORTABLE=ALWAYS
VERSION=12
METRICS=Y
exclude=table_statistics,index_statistics
LOGTIME=ALL
LOGFILE=ftex_dir:v112fullimp.log
TRANSPORT_DATAFILES='/oracle/DQ1/sapdata50/ts1.dbf'
TRANSPORT_DATAFILES='/oracle/DQ1/sapdata50/ts2.dbf'

impdp2.par

impdp system/oracle@pdb2
network_link=sourcedb
full=y
transportable=always
metrics=y
exclude=table_statistics,index_statistics
directory=mydir
logfile=pdb2.log
transport_datafiles='/u02/oradata/CDB2/pdb2/users01.dbf'
transport_datafiles='/u02/oradata/CDB2/pdb2/users02.dbf'
transport_datafiles='/u02/oradata/CDB2/pdb2/users03.dbf'

Please note:

  • The VERSION=12 option must be used only when an Oracle 11.2 database is the source, as in this case Data Pump on the 11.2 side must be triggered to access this 12c feature and create an export in a form the 12c Data Pump can understand.
  • There’s no expdp necessary when using Data Pump over a database link (NETWORK_LINK). But in this case you will need the keywords FULL=Y and TRANSPORTABLE=ALWAYS as parameters, as the export portion of Data Pump on the source side will be triggered underneath the covers. And of course the database link itself must exist upfront – see the sketch below.
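
A minimal sketch of creating such a link in the destination database (user, password and TNS alias are made up):

SQL> create database link sourcedb connect to system identified by oracle using 'sourcedb';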

–Mike

Full Transportable Export/Import – Things to Know

This blog post is an addition to:

Full Transportable Export/Import – Migrating an 11.2.0.4 database to Oracle Database 12c – into the Oracle Cloud

Seth Miller commented on the pitfall of hitting a serious issue during the Data Pump run, with Data Pump exiting and not finishing, meaning you’ll have to do the entire backup/restore/incremental-roll-forward thing again. Without any doubt, this is no fun at all.

So let me point out a few things to take into consideration – and some of them are not obvious I guess.

Do you need to backup/restore/incremental-roll-forward again?

When you let Data Pump do all the manual tasks of Transportable Tablespaces (TTS), this is called Full Transportable Export/Import (FTEX, as I abbreviate it). With regular TTS you will take your tablespaces read-write in your destination at the end of the entire process. So whenever something goes wrong during the metadata rebuild, you won’t have to restore all your tens of terabytes.

With FTEX things are a bit different. Data Pump will take the tablespaces read-write several times during the impdp run. This has to happen. And the job is not restartable right now. This may change in a future release or patch set. We’ll see.

What are your options now to prevent a full repeat of all your backup/restore/incremental-roll-forward activities?

FLASHBACK DATABASE unfortunately is not an option: you could set a Guaranteed Restore Point, but as soon as the data file headers get adjusted, FLASHBACK won’t be able to revert this.

Setting the data files to read-only on the OS level is not an option either, as you might force Data Pump to fail when it wants to write something into a tablespace and gets an OS return code for not being able to complete the write operation.

Therefore, right now the only valid option I can think of is leveraging storage snapshot technologies in order to restore the files back to the state from before Data Pump ran.

We are aware of this pitfall and we are discussing alternatives internally for the future.

Character Sets and Time Zone?

Another topic to pay close attention to is the database’s character set and national character set. Please find all the guidelines about character sets and national character sets here:

Transportable Tablespaces – Characters Sets – Same same but different?

For time zone files please make sure you have the same time zone version in source and destination. As you can’t downgrade your time zone file in the destination, you’ll usually have to apply the matching time zone patch in the source before initiating the transport. If you don’t pay attention, Data Pump will deny the metadata import of the data in the tablespaces, as otherwise you might end up with corrupted time zone data.

Please find the precise description in the Oracle 12c documentation:

Tablespace Encryption

When your tablespaces are encrypted you’ll have to be a bit more careful when using TTS or FTEX. Even though the FAQ on Oracle.com mentions that there are no issues, there are still some limitations. The documentation is more precise:

  • Limitations on Transportable Tablespace
    • Transportable tablespaces cannot transport encrypted tablespaces.
    • Transportable tablespaces cannot transport tablespaces containing tables with encrypted columns.
  • Limitations on Full Transportable Export/Import
    • You cannot transport an encrypted tablespace to a platform with different endianness.
      To transport an encrypted tablespace to a platform with the same endianness, during export set the ENCRYPTION_PWD_PROMPT export utility parameter to YES, or use the ENCRYPTION_PASSWORD export utility parameter. During import, use the equivalent import utility parameter, and set the value to the same password that was used for the export.
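
As a small sketch of the export side (directory and dump file name are made up; the parameter itself comes from the documentation quote above), the par file would contain:

# expdp par file fragment – directory and dump file name are illustrative
directory=oradump
dumpfile=ftex_enc.dmp
transportable=always
full=y
encryption_pwd_prompt=yes

Alternatively, set ENCRYPTION_PASSWORD directly; during impdp you then supply the same password via the equivalent parameter.
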
–Mike

Full Transportable Export/Import – Migrating an 11.2.0.4 database to Oracle Database 12c – into the Oracle Cloud

Full Transportable Export/Import – one of the coolest features in Oracle Database 12c 

We blogged about Full Transportable Export/Import a while back. It is – no doubt – one of the coolest features in Oracle Database 12c. And it is part of our Hands-On Lab exercise (Hands On Lab – Upgrade, Migrate, Consolidate to Oracle Database 12c) as well.

It utilizes the technique of Transportable Tablespaces – cross-platform, cross-Endianness, cross-version – but lets Oracle Data Pump do all the “dirty” work of rebuilding everything kept in your SYSTEM and SYSAUX tablespaces, including views, synonyms, public objects, grants, sequences etc.

You can even combine it with RMAN Incremental Backups – of course cross-platform and cross-Endianness as well – in order to decrease the downtime for large databases.
Please find our presentation about the combination of these features here in the Slides Download Center:

In addition: you can use this Oracle Database 12c feature with a source database of Oracle 11.2.0.3 or 11.2.0.4. Just the destination database you are upgrading or migrating to must be at least an Oracle 12.1.0.1 database (or newer of course).

See the feature in action

Roy recorded a 20 minute video demonstrating how to:

  • migrate an Oracle Database 11.2.0.4 database into a Multitenant pluggable database
  • using Full Transportable Export/Import
  • into the Oracle DBaaS Cloud

And of course the feature works also without migrating into a Container database environment – and without migrating to the Oracle DBaaS Cloud.

If you need to migrate a database from Oracle 11.2.0.3 or newer to Oracle Database 12c, Full Transportable Export/Import can become your best friend.

–Mike

Transportable Tablespaces – Characters Sets – Same same but different?

All credits go to Don Wolf, an Oracle Advanced Customer Support engineer from Ohio as he dug out this information 🙂 Thanks Don!

Do database character sets have to match EXACTLY for Transportable Tablespaces?

That sounds like a simple question. When you look into our big slide deck the answer will be a straight “Yes”. No doubts. Regardless of whether you would like to do Transportable Tablespaces or Full Transportable Export/Import, your source’s and your target’s database character sets must be equal. Otherwise Data Pump won’t allow you to process the metadata import.

But Don was wondering about slightly differing information in MOS notes and the documentation, and he asked me if I could clarify things. I couldn’t.

  • MOS Note:1454872.1 – Transportable Tablespace (TTS) Restrictions and Limitations: Details, Reference, and Version Where Applicable – tells you:

  • And the above documentation link then tells you:
    • The source and the destination databases must use compatible database character sets. That is, one of the following must be true:
      • The database character sets of the source and the target databases are the same.
      • The source database character set is a strict (binary) subset of the target database character set, and the following three conditions are true:
        • The source database is in version 10.1.0.3 or higher.
        • The tablespaces to be transported contain no table columns with character length semantics or the maximum character width is the same in both the source and target database character sets.
        • The tablespaces to be transported contain no columns with the CLOB data type, or the source and the target database character sets are both single-byte or both multibyte.
      • The source database character set is a strict (binary) subset of the target database character set, and the following two conditions are true:
        • The source database is in a version lower than 10.1.0.3.
        • The maximum character width is the same in the source and target database character sets.
  • And furthermore, from the “Choosing a Character Set” section of the Oracle 12.1 Database Globalization Support Guide:
    • Oracle Database does not maintain a list of all subset-superset pairs but it does maintain a list of binary subset-superset pairs that it recognizes in various situations such as checking compatibility of a transportable tablespace or a pluggable database. Table A-11 and Table A-12 list the binary subset-superset pairs recognized by Oracle Database.
    • In those tables the only binary subset-superset pairs involving AL32UTF8 are:
      • UTF8/AL32UTF8
      • US7ASCII/AL32UTF8
  • This is not 100% congruent to the information provided in MOS Note 371556.1 – How to Migrate to different Endian Platform Using Transportable Tablespaces With RMAN – which says:
    The source and target database must use the same character set and national character set.

What is correct, what’s not?

First of all, the wording “compatible character sets” seems to have been introduced with Oracle Database 11.2.

In fact, the scope for the target system has become broader in Oracle Database 11.2. These rules are correct as soon as your target database is an Oracle Database 11.2 or newer release.

  • The source and the destination databases must use compatible database character sets. That is, one of the following must be true:
    • The database character sets of the source and the target databases are the same.
    • The source database character set is a strict (binary) subset of the target database character set, and the following three conditions are true:
      • The source database is in version 10.1.0.3 or higher.
      • The tablespaces to be transported contain no table columns with character length semantics or the maximum character width is the same in both the source and target database character sets.
      • The tablespaces to be transported contain no columns with the CLOB data type, or the source and the target database character sets are both single-byte or both multibyte.
    • The source database character set is a strict (binary) subset of the target database character set, and the following two conditions are true:
      • The source database is in a version lower than 10.1.0.3.
      • The maximum character width is the same in the source and target database character sets.
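
To see what you actually have, this quick check displays both character sets – run it in the source and in the destination database:

SQL> select property_name, property_value from database_properties
     where property_name in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');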

–Mike

Data Pump – Exclude Stats differently for TTS and FTEX

Nice little best practice for statistics and Data Pump when doing either Transportable Tablespaces or Full Transportable Export/Import (credits to Roy and Dean Gagne).

Transport Statistics via a Staging Table

First of all, we always recommend excluding statistics when doing a Data Pump export, as the import of such stats takes way longer than transporting them via a stats table. If you are unfamiliar with transporting stats between databases, please see the Oracle Performance Tuning Guide, which has a nice tutorial:

The basic steps to transport statistics from one database to another fast and efficiently are (see the sketch after this list):

  1. Create a staging table in your source database with DBMS_STATS.CREATE_STAT_TABLE
  2. Export your local stats into this staging table by using DBMS_STATS.EXPORT_SCHEMA_STATS
  3. Export the staging table and import it into your destination database with Data Pump
  4. Import the statistics held in the staging table by using DBMS_STATS.IMPORT_SCHEMA_STATS
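
A minimal sketch of these four steps, assuming a schema SCOTT, a staging table MYSTATS and the directory object oradump (all names made up):

-- in the source database: create the staging table and export the schema stats into it
exec DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCOTT', stattab => 'MYSTATS');
exec DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'MYSTATS');

-- move the staging table to the destination database with Data Pump
expdp system/oracle directory=oradump dumpfile=stats.dmp tables=SCOTT.MYSTATS
impdp system/oracle directory=oradump dumpfile=stats.dmp

-- in the destination database: import the stats from the staging table
exec DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'MYSTATS');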

For regular Data Pump exports we always recommend setting:

EXCLUDE=STATISTICS

to avoid performance penalties during impdp.

But this does not affect Transportable Tablespaces and Full Transportable Export/Import.

How to exclude Statistics for TTS and FTEX?

For reasons I don’t know, the metadata heterogeneous object for “transportable” is different from all of the others. Therefore, in order to exclude statistics for Transportable Tablespaces and Full Transportable Export/Import you must set:

EXCLUDE=TABLE_STATISTICS,INDEX_STATISTICS

Full Transportable Export/Import White Paper

One of the really cool and helpful features in Oracle Database 12c is called Full Transportable Export/Import. It combines the basics of transportable tablespaces – of course cross-platform, cross-endianness, cross-version – with Data Pump taking care of all the stuff not stored in tables and indexes, such as views, synonyms, triggers, packages etc. And you can even reduce downtime by combining the feature with incrementally converted RMAN backups.

And the best thing: this will work from Oracle Database 11.2.0.3/11.2.0.4 to Oracle Database 12c – so you don’t have to be on Oracle Database 12c already in order to leverage this cool new feature.

See the brand new White Paper about Full Transportable Export/Import here:
http://www.oracle.com/technetwork/database/enterprise-edition/full-transportable-wp-12c-1973971.pdf

–Mike