Full Transportable Export/Import – Migrating a database to Oracle Database 12c – into the Oracle Cloud

Full Transportable Export/Import – one of the coolest features in Oracle Database 12c 

We blogged about Full Transportable Export/Import a while back. It is – no doubt – one of the coolest features in Oracle Database 12c. And it is part of our Hands-On Lab exercise (Hands On Lab – Upgrade, Migrate, Consolidate to Oracle Database 12c) as well.

It utilizes the technique of Transportable Tablespaces – cross-platform, cross-endianness, cross-version – but lets Oracle Data Pump do all the “dirty” work of rebuilding everything kept in your SYSTEM and SYSAUX tablespaces, including views, synonyms, public objects, grants, sequences etc.

You can even combine it with RMAN Incremental Backups – of course cross-platform and cross-endianness as well – in order to decrease the downtime for large databases.
Please find our presentation about the combination of these features here in the Slides Download Center:

In addition: You can use this Oracle Database 12c feature with an older source database as well. Just the destination database you are upgrading or migrating to must be at least an Oracle Database 12c database (or newer, of course).
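For illustration, a minimal FTEX run might look like the sketch below. The directory object, passwords, file names and paths are placeholders, not taken from the post; the user tablespaces must be set read-only before the export.

```shell
# Source side: full transportable export.
# VERSION=12 is required when exporting from an 11.2.0.3/11.2.0.4 source.
expdp system/password full=y transportable=always version=12 \
      directory=dp_dir dumpfile=ftex.dmp logfile=ftex_exp.log

# Copy ftex.dmp and all user tablespace data files to the target server,
# then import on the 12c side, pointing at the copied data files:
impdp system/password full=y \
      directory=dp_dir dumpfile=ftex.dmp logfile=ftex_imp.log \
      transport_datafiles='/u02/oradata/TGT12C/users01.dbf'
```

Data Pump then recreates all the metadata (views, synonyms, grants, sequences etc.) while the data itself travels inside the transported data files.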

And please find more information about the PERL scripts for xTTS/FTEX with RMAN Incremental Backups here:

See the feature in action

Roy recorded a 20 minute video demonstrating how to:

  • Migrate an Oracle database into a Multitenant pluggable database using
  • Full Transportable Export/Import
  • into the Oracle DBaaS Cloud

And of course the feature also works without migrating into a Container database environment – and without migrating to the Oracle DBaaS Cloud.

If you need to migrate a database from an older Oracle release to Oracle Database 12c, Full Transportable Export/Import can become your best friend.



28 thoughts on “Full Transportable Export/Import – Migrating a database to Oracle Database 12c – into the Oracle Cloud”

  1. This is a really cool feature and it is the first method I try when migrating customers to 12c. Unfortunately, it has one shortcoming that makes it unusable for all but very small databases.

    If during the metadata import the import fails at any point, all of the data files that were successfully imported prior to the failure must be copied again from the source. The reason is that the data file header is updated with the new database information and any attempt to re-import it will fail.

    I ran into this a number of times where I would copy over 12TB of data files, run the metadata import and have it fail on one of the last steps and have to copy all 12TB of data files over just to have it fail again on something else.

    This same behavior can happen with standard TTS as well but there is much less that can go wrong. There is also the ability to break up the TTS export into multiple exports so that a failure can be limited to just the data files that were included in the export.

    This was all confirmed from MOS with the response, "If you are trying to reimport, then the tablespace has to be dropped and copy/place the read only datafiles again to the Target location. This is because on the first attempt, the datafiles were already attached to the tablespaces".

  2. Seth,

    you touch on one of the weak points of this feature. And you are right: the Data Pump job is not restartable right now. There is a project planned for it – but it’s not there yet.

    And as far as I can see, a GRP (guaranteed restore point) won’t solve this either, as FLASHBACK in my experience won’t flash back an update to the data file header.

    For FULL TRANSPORTABLE we need to put the data files (or tablespaces) into read/write mode several times – something we do not need to do for regular TTS, as there you put them into R/W just at the end when everything has finished.

    Currently the only solution I see is a storage snapshot of your restored files (of course including the inc backups once you deal with incrementally rolled forward backups).

    If you want to discuss this further please feel free to drop me an email. We are aware of this and I think I will write about the caveats as well pointing out a few typical things such as timezone version, encrypted tablespaces, character set mismatches – and R/O on source – and R/W on target.


  3. Thanks for your comment – but as this is an impdp NOT going over the dblink, you won’t need to do it.

    I will publish some examples within the next days.


  4. Does FTEX work with SE databases?

    Yes, but only in the direction into SE, as Transportable Tablespaces works from EE=>SE but not the other way round.


  5. Noooooooooo! 🙂

    Clearly a feature to migrate TO Oracle Database 12c or higher.
    It’s a 12c feature – and we wanted to give people the ability to use it with a source database

    And there is zero reason to migrate to right now. It goes out of Waived Extended Support in 5 months.


  6. It isn’t mentioned anywhere that I’ve seen, but Oracle 12.2 in the Cloud requires encrypted tablespaces. That means you can’t import unencrypted tablespaces, at all.
    The error is:
    ORA-39123: Data Pump transportable tablespace job aborted
    ORA-28427: cannot create, import or restore unencrypted tablespace: _________ in Oracle Cloud

    Of course, if you change the new encrypt_new_tablespaces parameter, you can still create unencrypted tablespaces – you just can’t import them.
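For reference, the parameter can be checked and changed like this (a sketch; ALWAYS, CLOUD_ONLY and DDL are the documented values in 12.2, with CLOUD_ONLY being the default):

```shell
sqlplus / as sysdba <<'EOF'
-- CLOUD_ONLY: tablespaces are transparently encrypted only when the
-- database runs in the Oracle Cloud; DDL honors the CREATE TABLESPACE DDL.
SHOW PARAMETER encrypt_new_tablespaces
ALTER SYSTEM SET encrypt_new_tablespaces = DDL SCOPE = BOTH;
EOF
```

Note that, as the commenter points out, this only affects newly created tablespaces – it does not lift the restriction on importing unencrypted ones.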

  7. Hi Mike,

    we are planning to migrate enterprise databases (sizes up to 6 TB) from Solaris x86 zones (Veritas FS / ZFS) to Solaris SPARC zones (ASM, DB EE) with minimal or no impact/changes for the source database and less downtime. Got the challenge?

    First idea:
    Use of full transportable export/import (FTEX) in combination with incremental RMAN backups, maybe in combination with a standby database to minimize the impact for the source database.

    Second idea:
    If incremental RMAN backups are not possible (technically or in terms of support), we want to use a standby database in an interim Solaris x86 zone for doing the RMAN CONVERT job from Veritas FS / ZFS to ASM. The ASM disks are based on LUNs from SAN, so we would move the LUNs after the copy job to the target Solaris SPARC zone and mount the ASM disk groups on the target (we don’t want to copy the data files over the network). I couldn’t figure out whether ASM disks/disk groups are movable between platforms of different endianness (I know this works for ZFS) and whether it is supported.


  8. Hi Mike,

    I have a question about migrating and upgrading a database from Oracle with APEX 5.0.3 to a standalone PDB in a CDB with default APEX 4.2. Can I use Full Transportable Export/Import to do that? How do I handle APEX? The APEX objects are in the SYSAUX tablespace.

    Thanks a lot !


  9. Hi Sean,

    is APEX installed in the CDB$ROOT or only in the PDB – in source AND in target? Not sure if I’ve got your setup correctly, but to me it sounds like:
    Source: non-CDB with APEX 5.0.3
    Target: CDB with APEX 4.2

    Please see this VERY long blog post about the complications with APEX in the CDB$ROOT:

    So completely apart from Full Transportable you’ll have to make sure:
    – APEX is not in the CDB$ROOT
    – If APEX is in the CDB$ROOT, remove it
    – Then make sure APEX 5.0.3 is installed software-wise in the destination home
    – Try to provision a fresh PDB – and verify that APEX does not appear as a COMP_ID in DBA_REGISTRY
    – Then install APEX 5.0.3 only locally in this PDB

    Then approach your migration with FTEX.

    See here for FTEX limitations:
    (AWR and XDB are not supported).

    Basically, to avoid any issues, I would export the APEX application, EXCLUDE the APEX schema from the FTEX run and import it afterwards. But it ‘should’ work out of the box with FTEX as well.

    If not, please let me know the error pattern and I will check with the APEX and the Data Pump team.
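A sketch of the exclude-and-reimport approach described above. The schema name APEX_050000, directory object, paths and connect string are assumptions (the APEX schema name depends on the installed APEX version):

```shell
# 1) Export the APEX application itself first (e.g. via the APEX UI),
#    then run the FTEX import while skipping the APEX schema:
impdp system/password@pdb1 full=y \
      directory=dp_dir dumpfile=ftex.dmp logfile=ftex_noapex.log \
      exclude=schema:"IN ('APEX_050000')" \
      transport_datafiles='/u02/oradata/PDB1/users01.dbf'
# 2) Import / install the APEX application into the PDB afterwards.
```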


  10. Hi Stefan,

    this is a tricky case.

    Officially, RMAN Inc Backups with our PERL scripts are supported with Linux as a target only. But in theory it should work with all other platforms as well. My recommendation: download the PERL scripts, use my "Inc Backups" presentation from the blog as quick guidance and try it.

    Unfortunately, you can’t mount ASM disk groups from a little-endian platform (x86 Solaris) on a big-endian (SPARC) platform.

    So I fear option 2 is not an option for you.


  11. Pingback: Full Transportable Export/Import – Things to Know | Upgrade your Database - NOW!

  12. Hi Mike, hi Martin,

    I have the following couple of requirements and am currently evaluating the options available for the endian conversion:
    – HP-UX, big-endian, ASM RAC, 3 TB –> OEL, little-endian, ASM RAC
    – HP-UX, big-endian, ASM single instance, 8 TB –> OEL, little-endian, PDB, RAC

    I am evaluating the different options: any restrictions (in terms of compression, encryption, or any special feature usage at the source that would impose a restriction), the amount of downtime, and the costs involved.

    Any help and advice will be much appreciated.

    Thanking you,

    • Vish,

      it all depends on your downtime. Data Pump is always the most straightforward path. And in case it does not fit downtime-wise, you may either evaluate TTS or FTEX (Full Transportable Export/Import) and maybe combine it with RMAN Incremental Backups.


      • Hi Mike,

        Thank you very much for taking the time to respond to my query. Downtime will certainly be an issue, so I considered going with incremental backups – but apparently, as per MetaLink, there is a bug (22570430) that will not allow us to use this functionality to go into a 12c PDB. I think I’ll go with FTEX. I have seen the video that Roy made. It appears quite simple and straightforward. Should I do anything special with ASM, or can I just copy the data file copies from one ASM to another across the network?

        • If you are going to use the RMAN inc backups, please see the PERL scripts in MOS Note 1389592.1 and MOS Note 2005729.1 for the process.
          You can’t simply copy files from ASM on HP to ASM on Linux. You’ll have to either restore a backup including the conversion, or use DBMS_FILE_TRANSFER, which converts while transferring.
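A sketch of the DBMS_FILE_TRANSFER variant mentioned above. The directory objects and the database link SRC_LINK are assumptions; GET_FILE performs the endian conversion implicitly while copying:

```shell
sqlplus / as sysdba <<'EOF'
-- Pull a (read-only) data file from the source over a database link;
-- the transfer converts the endian format on the fly.
BEGIN
  DBMS_FILE_TRANSFER.GET_FILE(
    source_directory_object      => 'SRC_DIR',
    source_file_name             => 'users01.dbf',
    source_database              => 'SRC_LINK',
    destination_directory_object => 'DST_DIR',
    destination_file_name        => 'users01.dbf');
END;
/
EOF
```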


          • Hi Mike,

            Thanks. I have been through those documents and noticed the following:

            Note:1389592.1 :
            NOTE: Neither method (XTTS or XTTS incremental) supports 12c multitenant databases. Enhancement bug 22570430 addresses this limitation.

            Note: 2005729.1:
            This process does not support multitenant databases. Enhancement bug 22570430 addresses this limitation.
            As I am not able to access the bug information I raised an SR and Oracle have come back saying:
            “There is no workaround for that bug but that applies to 12c databases.
            I would recommend to use the document about how to reduce transportable tablespace downtime using Cross Platform Incremental Backup ( Doc ID : 1389592.1 ) and after that UPGRADE the database to 12c.”

            This is a live migration of 7 TB critical database with an available downtime of one weekend. Can I use RMAN Backup / DBMS_FILE_TRANSFER with FTEX?

            Any help is much appreciated. If you believe I am spamming your thread, you could email me at crazyvish at gmail.

            Thank you.

          • Vish,

            actually I see your point – but I don’t think that this restriction will cause you a bigger issue.

            First of all, do you REALLY need to migrate into a Multitenant container database? When people ask me this for terabyte DBs, I have my doubts about where your benefits would be – unless you want to consolidate more databases into one or adopt standard Multitenant processes for everything.

            Second, you migrate your databases directly into a freshly created database with the help of RMAN incremental backups. I would recommend FTEX as the technique, as both MOS notes for the PERL scripts cover only a subset of the database, not a complete database migration – you’ll have to cover everything in SYSTEM (views, synonyms etc.) as well.

            Third, if you REALLY REALLY want to migrate your database into a PDB, then you generate the XML manifest file, plug it in, and then run noncdb_to_pdb.sql. Please see my blog posts here:
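The plug-in steps can be sketched like this (PDB name, manifest path and the NOCOPY choice are illustrative, not prescriptive):

```shell
# In the non-CDB (opened read-only): generate the XML manifest.
sqlplus / as sysdba <<'EOF'
EXEC DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/noncdb.xml');
EOF

# In the target CDB: plug in, convert the dictionary, open the PDB.
sqlplus / as sysdba <<'EOF'
CREATE PLUGGABLE DATABASE mypdb USING '/tmp/noncdb.xml' NOCOPY TEMPFILE REUSE;
ALTER SESSION SET CONTAINER = mypdb;
@?/rdbms/admin/noncdb_to_pdb.sql
ALTER PLUGGABLE DATABASE OPEN;
EOF
```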


  13. Thanks Mike. True, that was my first question as well – why PDB? Apparently senior management made the decision following Oracle’s best practices and to make use of its advanced features if and when required. I am sure they will never ever use it to host another PDB in there :).

    Thank you for your time, much appreciated. I’ll go through those links.


  14. Hi Mike. I am very comfortable with the Tablespace Transport techniques, however I am not knowledgeable about the RMAN scripts needed to reduce downtime for the data file copies. Can you point me to some RMAN script examples?
    – I am migrating from Linux to Linux (non-container)
    – Do I want to RMAN copy only the non-system tablespaces?
    – How do I use multiple incrementals to copy the datafiles?
    Thanks, Bill Parsley

  15. Hi Mike,
    we need to migrate our Standard Edition databases (Linux) to 12.2/18c (AIX 7.2).
    We know that Transportable Tablespaces is possible only from EE to SE and not vice versa.
    What strategy do you advise in order to make a fast migration of these databases (except expdp/impdp)?

    Thank you very much

    • Renato,

      I fear that expdp/impdp is your best option. There’s an Oracle ACS service available – it is called O2O (Oracle to Oracle) – which does a semi-automatic combination of expdp/impdp and CTAS. But it is sold as a service, not as a product. Apart from that, only standard techniques such as moving data over DB links could help you avoid expdp/impdp.


  16. Hi Mike,

    I am migrating a 20 TB database from Solaris to Linux (400 tablespaces and 1000 data files) using XTTS version 4.

    The RMAN backup job failed mid-way (after 30 hours) with RMAN-03009, RMAN-10038. I tried to restart the job (xttdriver.pl --backup) after renaming the logfile – FAILED. The restarted job deleted all the previously backed-up data files and started all over again.

    Is there a way I can restart the failed backup job (level 0) on source from the point of the failure ?


    • Hi Tapan,

      when the files got deleted, I think there’s no way.
      What you “could” have done:
      Check which tablespaces were already copied successfully and exclude them from a second call generating the level-0 backups.

      But this is just based on “guessing”.
      You will need to open an SR and check with Support whether they have a better idea (and especially to diagnose the error – I guess you don’t want it to happen again).

      Cheers and sorry for the inconvenience,
