We recently worked with a customer who noticed that they were not able to use transportable tablespaces to connect the same tablespace data files to two databases at the same time, even after setting the tablespaces READ ONLY in SQL*Plus. This is new behavior in 12c, and many customers are not yet aware of this change. Here are the details of what changed, why, and how you might want to deal with it if the changes affect your environment.
Starting in 12.1, Data Pump sets tablespaces read write during the import phase of a transportable tablespace migration. This means that a tablespace cannot be plugged into two different databases at the same time using transportable tablespaces.
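For context, a typical transportable tablespace move looks roughly like the sketch below. The tablespace name (tbs1), directory object, and file paths are illustrative, not from any particular environment; it is the final impdp step that, in 12c, sets the tablespace read write.

```shell
# Illustrative transportable tablespace flow (all names and paths are examples).

# 1. On the source database, set the tablespace read only.
sqlplus / as sysdba <<'SQL'
ALTER TABLESPACE tbs1 READ ONLY;
SQL

# 2. Export the tablespace metadata with Data Pump.
expdp system directory=DATA_PUMP_DIR dumpfile=tbs1.dmp \
      transport_tablespaces=tbs1

# 3. Copy tbs1.dmp and the tablespace datafiles to the target host.

# 4. Import on the target. In 12c, this step sets the tablespace read write
#    while it reclaims unused segment space and fixes up TSTZ data.
impdp system directory=DATA_PUMP_DIR dumpfile=tbs1.dmp \
      transport_datafiles='/u01/app/oradata/TRGT/tbs1_01.dbf'
```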
Why Was This Change Made?
There were a couple of motivations for this change.
First, as databases have grown, we encountered performance hits when dealing with tablespaces that contain many partitions or subpartitions for either tables or indexes. The reason for this (apologies if this gets too deep) is that we try to ensure that we can reclaim free space in cases where a tablespace is being moved, but where not all tables within that tablespace are part of the operation. For example, you could move a tablespace data file which includes partitions from 5 tables, but you may be interested in only 2 of those tables. The segments used by the other 3 tables would be dead space that we should reclaim.
Prior to 12c, we reclaimed this space by recording all the segments exported during the export phase and all those imported during the import phase, which allowed us to free the space for any segments that were not imported. This worked, but as bigfile tablespaces grew into the tens of terabytes, performance really suffered; we ran into cases where all that accounting for segments literally took days. In 12c we implemented a different technique: we no longer record the segments on export (this is available as a backport to 11g as well), and upon import we recompute the bitmap for the tablespace. Recomputing the bitmap means calling DBMS_SPACE_ADMIN.TABLESPACE_REBUILD_BITMAPS, which generally takes a matter of seconds, compared to the previous method, which could take hours.
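The same bitmap rebuild can also be invoked by hand when needed. A minimal sketch, in which the tablespace name TBS1 is an assumption:

```shell
# Recompute the space-management bitmaps for a tablespace after a transport.
# TBS1 is an illustrative tablespace name.
sqlplus / as sysdba <<'SQL'
EXEC DBMS_SPACE_ADMIN.TABLESPACE_REBUILD_BITMAPS('TBS1');
SQL
```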
Therefore, this change is a very large performance improvement for both export and import of transportable tablespaces when there are large numbers of data segments involved.
The second reason for this change was to enable transportable tablespaces to handle import of a tablespace into a database that used a different version of Timestamp with Timezone (TSTZ) data. Prior to 12c, there were many restrictions on moving TSTZ data between databases. We have progressively been able to relax and eliminate these restrictions over time. To quote from the Database Utilities Guide:
Starting in 12.1 we are able to handle TSTZ data when moving a tablespace to a database with a higher timezone version. This is done by opening the tablespace datafile and fixing the TSTZ data using features created in 12c for this purpose. This means that with 12c we can use transportable tablespaces in more scenarios, and that the resulting import will be more complete when there is Timestamp with Timezone data involved.
In summary, the change to opening data files read write during a transportable tablespace import makes the process faster and the technique more broadly applicable.
But Neither of Those Benefits Affects Me, and I Like the Old Behavior!
It might be possible for us to allow the old behavior, by implementing a parameter explicitly for that purpose. However, there would be the following shortcomings to using this parameter:
- We would not be able to account for the segments imported during the transportable impdp operation, because we would have no list of exported segments against which to compare them. Those segments would be dead space until the user explicitly called the DBMS_SPACE_ADMIN.TABLESPACE_REBUILD_BITMAPS procedure.
- If any of the tablespaces being imported contained TSTZ data, then either:
  - the timezone version of the importing database would have to exactly match that of the exporting database, or
  - we would abort the import.
However, we don’t require a new parameter in order to preserve the old behavior. If the files are set read-only at the operating system level (or in ASM), then the database cannot set them read write. This means that we will not reclaim free space from unused segments, and the TSTZ fix-ups will not be performed. In this case, any tables with TSTZ data will be dropped upon import if the timezone version of the target database is not the same as that of the imported data.
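A minimal sketch of that workaround: strip every write bit from the datafile at the OS level before running impdp. The path below is illustrative, not an actual Oracle datafile.

```shell
# Stand-in for a transported tablespace datafile (illustrative path).
datafile=/tmp/example_users01.dbf
rm -f "$datafile"
printf 'placeholder datafile contents' > "$datafile"

# Remove all write permission; the importing database can then never reopen
# the file read write, so segment reclamation and TSTZ fix-ups are skipped.
chmod a-w "$datafile"

# The mode string now shows no write bits, e.g. -r--r--r--
ls -l "$datafile" | cut -c1-10
```

With the files protected this way, the import proceeds as it did before 12c: unused segment space stays allocated until you call DBMS_SPACE_ADMIN.TABLESPACE_REBUILD_BITMAPS yourself, and tables with mismatched TSTZ data are dropped.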
We (Data Pump development) tend to view the benefits gained by this change as far outweighing the loss of functionality.
However, we do recognize that we have hundreds of thousands of customers who may have varying use cases and priorities for Data Pump, so if there is a desire to provide options as described above, we would be happy to accept enhancement requests for consideration.