Full Transportable Export/Import – one of the coolest features in Oracle Database 12c
We blogged about Full Transportable Export/Import a while back. It is – no doubt – one of the coolest features in Oracle Database 12c. And it is part of our Hands-On Lab exercise (Hands On Lab – Upgrade, Migrate, Consolidate to Oracle Database 12c) as well.
It utilizes the technique of Transportable Tablespaces – cross-platform, cross-endianness, cross-version – but lets Oracle Data Pump do all the “dirty” work of rebuilding everything kept in your SYSTEM and SYSAUX tablespaces, including views, synonyms, public objects, grants, sequences and so on.
You can even combine it with RMAN Incremental Backups – of course cross-platform and cross-Endianness as well – in order to decrease the downtime for large databases.
Please find our presentation about the combination of these features in the Slides Download Center.
In addition: you can use this Oracle Database 12c feature with a source database running Oracle 11.2.0.3 or 11.2.0.4. Only the destination database you are upgrading or migrating to must be at least Oracle 12.1.0.1 (or newer, of course).
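As a minimal sketch of the two Data Pump calls (directory object, dump file name and data file paths are placeholders – and the user tablespaces on the source must be set read-only first):

On the 11.2.0.3/11.2.0.4 source:
expdp system full=y transportable=always version=12 directory=dp_dir dumpfile=ftex.dmp logfile=ftexexp.log

On the 12c destination, after copying the dump file and the data files:
impdp system directory=dp_dir dumpfile=ftex.dmp transport_datafiles='/u02/oradata/DB12C/users01.dbf' logfile=fteximp.log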
And please find more information about the PERL scripts for xTTS/FTEX with RMAN Incremental Backups here:
https://mikedietrichde.com/2017/05/18/perl-scripts-for-large-migrations-supported-on-all-platforms-except-windows/
See the feature in action
Roy recorded a 20 minute video demonstrating how to:
- Migrate an Oracle Database 11.2.0.4 database into a Multitenant pluggable database using
- Full Transportable Export/Import
- into the Oracle DBaaS Cloud
And of course the feature also works without migrating into a container database environment – and without migrating to the Oracle DBaaS Cloud.
If you need to migrate a database from Oracle 11.2.0.3 or newer to Oracle Database 12c, Full Transportable Export/Import can become your best friend.
–Mike
This is a really cool feature and it is the first method I try when migrating customers to 12c. Unfortunately, it has one shortcoming that makes it unusable for all but very small databases.
If the metadata import fails at any point, all of the data files that were successfully imported before the failure must be copied again from the source. The reason is that the data file header is updated with the new database information, and any attempt to re-import the file will fail.
I ran into this a number of times: I would copy over 12 TB of data files, run the metadata import, have it fail on one of the last steps, and then have to copy all 12 TB of data files again just to have it fail on something else.
This same behavior can happen with standard TTS as well, but there is much less that can go wrong there. You also have the ability to break up the TTS export into multiple exports so that a failure is limited to just the data files included in that export.
This was all confirmed by MOS with the response, "If you are trying to reimport, then the tablespace has to be dropped and copy/place the read only datafiles again to the Target location. This is because on the first attempt, the datafiles were already attached to the tablespaces".
Seth,
you touch on one of the weak points of this feature. And you are right: the Data Pump job is not restartable right now. There is a project planned for it – but it’s not there yet.
And as far as I can see, a GRP (guaranteed restore point) won’t solve this either, as FLASHBACK in my experience won’t flash back an update to the data file header.
For FULL TRANSPORTABLE we need to put the data files (or tablespaces) into read/write mode several times – something we don’t need to do for regular TTS, where you put them into R/W just at the end once everything has finished.
Currently the only solution I see is a storage snapshot of your restored files (of course including the incremental backups once you deal with incrementally rolled-forward backups).
If you want to discuss this further, please feel free to drop me an email. We are aware of this, and I think I will write about the caveats as well, pointing out a few typical things such as the time zone version, encrypted tablespaces, character set mismatches – and R/O on source and R/W on target (see the sketch below).
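For reference, the toggling involved is plain SQL (USERS standing in for each affected tablespace):

ALTER TABLESPACE users READ ONLY;   -- on the source, before the export/copy
ALTER TABLESPACE users READ WRITE;  -- what a Full Transportable import has to do several times on the target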
Cheers!!
Mike
Hi Mike,
Actually, I had a question on the white paper: Full Transportable Export/Import White Paper
https://blogs.oracle.com/UPGRADE/entry/full_transportable_export_import_white
In the white paper, page 11, step 8, the impdp command does not have the parameter ‘full=y’.
While checking the manual (https://docs.oracle.com/database/121/ADMIN/transport.htm#ADMIN13873), I saw that it specifies the parameter ‘full=y’.
Does it matter?
Thanks
Thanks for your comment – but as this is an impdp NOT going over the dblink, you won’t need it: the dump file was already written by a full=y export, so only the network-mode import needs the parameter on the impdp side.
I will publish some examples within the next days.
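In the meantime, a minimal sketch of the network-mode variant, where full=y does belong on the impdp command line (database link name and file path are placeholders):

impdp system network_link=sourcedb full=y transportable=always version=12 transport_datafiles='/u02/oradata/DB12C/users01.dbf' metrics=y logfile=dp_dir:ftexnet.log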
Cheers
Mike
Hi Mike,
does this feature "Full Transportable Export/Import" work with Oracle Standard Edition?
Regards
Does FTEX work with SE databases?
Yes, but only in the direction into SE, as Transportable Tablespaces works from EE=>SE but not the other way round.
Cheers
Mike
Can you use full transportable export/imp from 12c to 11gr2?
Noooooooooo! 🙂
Clearly a feature to migrate TO Oracle 12.1.0.2 or higher.
It’s a 12c feature – and we wanted to give people the ability to use it with a source database running 11.2.0.3/11.2.0.4.
And there is zero reason to migrate to 11.2.0.4 right now. It goes out of Waived Extended Support in 5 months.
Cheers
Mike
It isn’t mentioned anywhere that I’ve seen, but Oracle 12.2 in the Cloud requires encrypted tablespaces. That means you can’t import unencrypted tablespaces, at all.
The error is:
ORA-39123: Data Pump transportable tablespace job aborted
ORA-28427: cannot create, import or restore unencrypted tablespace: _________ in Oracle Cloud
Of course, if you change the new encrypt_new_tablespaces parameter, you can still create unencrypted tablespaces – you just can’t import them.
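For reference – encrypt_new_tablespaces only controls whether newly created tablespaces get encrypted (values in 12.2: CLOUD_ONLY, which is the default, DDL and ALWAYS); it does not lift the ORA-28427 check for transported data files:

SHOW PARAMETER encrypt_new_tablespaces
ALTER SYSTEM SET encrypt_new_tablespaces = ALWAYS;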
Yep – you are right.
And we are working on this. Please stay tuned for updates in the following weeks on the blog.
Mike
Hi Mike,
we are planning to migrate 11.2.0.3/11.2.0.4 Enterprise Edition databases (sizes up to 6 TB) from Solaris x86 zones (Veritas FS / ZFS) to Solaris SPARC zones (ASM, DB EE 12.1.0.2) with minimal or no impact/changes on the source database and as little downtime as possible. Got the challenge?
First idea:
Use of full transportable export/import (FTEX) in combination with incremental RMAN backups, maybe in combination with a standby database to minimize the impact for the source database.
Second idea:
If incremental RMAN backups are not possible (technically or in terms of support), we want to use a standby database in an interim Solaris x86 zone for doing the RMAN CONVERT job from Veritas FS / ZFS to ASM. The ASM disks are based on LUNs from SAN, so we would move the LUNs after the copy job to the target Solaris SPARC zone and mount the ASM disk groups on the target (we don’t want to copy the data files over the network). I couldn’t figure out whether ASM disks/disk groups are movable between different-endian platforms (I know this for ZFS) and whether it is supported.
Regards,
Stefan
Hi Mike,
I have a question about migrating and upgrading a database from Oracle 11.2.0.3 with APEX 5.0.3 to a 12.1.0.2 CDB standalone PDB with the default APEX 4.2. Can I use Full Transportable Export/Import to do that? How do I handle APEX? It lives in the SYSAUX tablespace.
Thanks a lot !
Sean
Hi Sean,
is APEX installed in the CDB$ROOT or only in the PDB – in source AND in target? Not sure if I’ve got your setup correctly, but to me it sounds like:
Source: 11.2.0.3 non-CDB with APEX 5.0.3
Target: 12.1.0.2 CDB with APEX 4.2
Please see this VERY long blog post about the complications with APEX in the CDB$ROOT:
https://blogs.oracle.com/UPGRADE/entry/apex_in_pdb_does_not
So completely apart from Full Transportable you’ll have to make sure:
– Make sure APEX is not in the CDB$ROOT.
– If APEX is in the CDB$ROOT, remove it.
– Then make sure APEX 5.0.3 is installed software-wise in the destination home.
– Try to provision a fresh PDB – and verify that APEX does not show up as a COMP_ID in DBA_REGISTRY.
– Then install APEX 5.0.3 only locally in this PDB.
Then approach your migration with FTEX.
See here for FTEX limitations:
http://www.oracle.com/technetwork/database/enterprise-edition/full-transportable-wp-12c-1973971.pdf
(AWR and XDB are not supported).
Basically, to avoid any issues, I would export the APEX application, EXCLUDE the APEX schema from the FTEX run and import it afterwards (see the sketch below). But it ‘should’ work out of the box with FTEX as well.
If not, please let me know the error pattern and I will check with the APEX and the Data Pump team.
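A rough sketch of such an exclude, written as a parfile to avoid shell quoting issues (the schema names are the usual APEX 5.0 defaults – please verify them in DBA_USERS first):

full=y
transportable=always
version=12
directory=dp_dir
dumpfile=ftex_noapex.dmp
exclude=SCHEMA:"IN ('APEX_050000','FLOWS_FILES')"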
Cheers
Mike
Hi Stefan,
this is a tricky case.
Officially, RMAN incremental backups with our PERL scripts are supported with Linux as a target only. But in theory it should work with all other platforms as well. My recommendation: download the PERL scripts, use my "Inc Backups" presentation from the blog as a quick guide, and try it.
Unfortunately, you can’t mount ASM disk groups from a little-endian platform (x86 Solaris) on a big-endian (SPARC) platform.
So I fear option 2 is not an option for you.
Cheers
Mike
Hi Mike,
Hi Martin,
I have the following couple of requirements and am currently evaluating the options available for endian conversion.
11.2.0.4, HP-UX, big endian, ASM RAC, 3 TB –> 11.2.0.4, OEL, little endian, ASM RAC
11.2.0.3, HP-UX, big endian, ASM single instance, 8 TB –> 12.1.0.2, OEL, little endian, PDB, RAC
I am evaluating the different options, any restrictions (in terms of compression, encryption and any special feature usage at the source that would impose a restriction), the amount of downtime, and the costs involved.
Any help and advice will be much appreciated.
Thanking you,
Vish.
Vish,
it all depends on your downtime. Data Pump is always the most straightforward path. And in case it does not fit downtime-wise, you may evaluate either TTS or FTEX (Full Transportable Export/Import) and maybe combine it with RMAN Incremental Backups.
Cheers
Mike
Hi Mike,
Thank you very much for taking the time to respond to my query. Downtime will certainly be an issue, so I considered going with incremental backups – but apparently, as per MOS, there is a bug (22570430) that will not allow us to use this functionality to go into a 12c PDB. I think I’ll go with FTEX. I have seen the video that Roy made; it appears quite simple and straightforward. Should I do anything special with ASM, or can I just copy the data file copies from one ASM to another across the network?
If you are going to use the RMAN incremental backups, please see the PERL scripts in MOS Note 1389592.1 and MOS Note 2005729.1 for the process.
You can’t simply copy files from ASM on HP-UX to ASM on Linux. You’ll have to either restore a backup including the conversion, or use DBMS_FILE_TRANSFER, which converts while transferring.
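A minimal DBMS_FILE_TRANSFER sketch for pulling a data file into ASM over a database link (directory objects, link and file names are placeholders):

BEGIN
  DBMS_FILE_TRANSFER.GET_FILE(
    source_directory_object      => 'SRC_DIR',    -- directory object on the source
    source_file_name             => 'users01.dbf',
    source_database              => 'SOURCEDB',   -- database link to the source
    destination_directory_object => 'DEST_DIR',   -- may point into ASM, e.g. +DATA
    destination_file_name        => 'users01.dbf');
END;
/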
Cheers
Mike
Hi Mike,
Thanks. I have been through those documents and noticed the following
Note:1389592.1 :
NOTE: Neither method (XTTS or XTTS incremental) supports 12c multitenant databases. Enhancement bug 22570430 addresses this limitation.
Note: 2005729.1:
This process does not support multitenant databases. Enhancement bug 22570430 addresses this limitation.
As I am not able to access the bug information, I raised an SR and Oracle came back saying:
“There is no workaround for that bug but that applies to 12c databases.
I would recommend to use the document about how to reduce transportable tablespace downtime using Cross Platform Incremental Backup ( Doc ID : 1389592.1 ) and after that UPGRADE the database to 12c.”
This is a live migration of a critical 7 TB database with an available downtime of one weekend. Can I use RMAN backups / DBMS_FILE_TRANSFER with FTEX?
Any help is much appreciated. If you believe I am spamming your thread, you could email me at crazyvish at gmail.
Thank you.
Vish,
actually, I see your point – but I don’t think that this restriction will cause you a bigger issue.
First of all, do you REALLY need to migrate into a Multitenant container database? When people ask me about terabyte DBs, I have my doubts about the benefits unless you want to consolidate more databases into one or adopt standard Multitenant processes for everything.
Second, you migrate your 11.2.0.3 and 11.2.0.4 databases directly into a freshly created 12.1.0.2 database with the help of RMAN incremental backups. I would recommend FTEX as the technique: both MOS notes for the PERL scripts don’t spell out that they cover only a subset – the data file transport – and not a complete database migration. You’ll have to cover everything in SYSTEM (views, synonyms etc.) as well.
Third, if you REALLY REALLY want to migrate your database into a PDB, then you generate the XML manifest file, plug it in, and then run noncdb_to_pdb.sql (a minimal sketch follows the links below). Please see my blog posts here:
https://mikedietrichde.com/2017/03/08/converting-an-12-1-non-cdb-and-plug-it-into-an-12-2-cdb/
https://mikedietrichde.com/2017/06/08/noncdb_to_pdb-sql-take-long/
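A minimal sketch of that plug-in flow (PDB name and paths are placeholders):

-- in the non-CDB, opened read-only:
EXEC DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/noncdb.xml')
-- in the CDB:
CREATE PLUGGABLE DATABASE mypdb USING '/tmp/noncdb.xml' NOCOPY TEMPFILE REUSE;
ALTER SESSION SET CONTAINER = mypdb;
@?/rdbms/admin/noncdb_to_pdb.sql
ALTER PLUGGABLE DATABASE OPEN;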
Cheers
Mike
Thanks Mike. True, that was my first question – why PDB? Apparently the senior management made the decision following the Oracle’s best practices and to make use of its advanced features if and when required. I am sure they will never ever use it to host another PDB in there :).
Thank you for your time, much appreciated. I’ll go through those links.
Cheers,
Vish.
Hi Mike. I am very comfortable with the Tablespace Transport techniques; however, I am not knowledgeable about the RMAN scripts needed to reduce downtime for the data file copies. Can you point me to some RMAN script examples?
– I am migrating from Linux 11.2.0.4 to Linux 12.1.0.2 (non-container)
– Do I want to RMAN copy only the non-system tablespaces?
– How do I use multiple incrementals to copy the datafiles?
Thanks, Bill Parsley
Hi Bill,
I updated the blog post with this link:
https://mikedietrichde.com/2017/05/18/perl-scripts-for-large-migrations-supported-on-all-platforms-except-windows/
This guides you to the MOS notes.
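As a first orientation, the V4 flow is roughly this (tablespace names, locations and the rest go into xtt.properties as described in the MOS notes):

# on the source – every additional run backs up only the changes since the previous one:
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
# ship res.txt and the new backup pieces, then on the destination:
$ORACLE_HOME/perl/bin/perl xttdriver.pl --restore

Multiple incrementals are simply repeated --backup/--restore cycles while the source stays online; only the final cycle happens in the downtime window.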
Let me know if you have further questions – thanks,
Mike
Hi Mike,
we need to migrate our Standard Edition 11.2.0.4.190115 databases (Linux) to 11.2.0.4.190115 / 12.2 / 18c (AIX 7.2).
We know that Transportable Tablespaces is possible only from EE to SE and not vice versa.
What strategy do you advise in order to make a fast migration of these databases (except expdp/impdp)?
Thank you very much
Renato
Renato,
I fear that expdp/impdp is your best option. There’s an Oracle ACS service available – it is called O2O (Oracle to Oracle) – which does a semi-automatic combination of expdp/impdp and CTAS. But it is sold as a service, not as a product. Apart from that, only standard techniques such as moving data over DB links could help you avoid expdp/impdp.
Cheers,
Mike
Hi Mike,
I am migrating a 20 TB database from Solaris to Linux (400 tablespaces and 1000 data files) using XTTS version 4.
The RMAN backup job failed midway (after 30 hours) with RMAN-03009 and RMAN-10038. I tried to restart the job (xttdriver.pl --backup) after renaming the logfile – it FAILED. The restarted job deleted all the previously backed-up data files and started all over again.
Is there a way I can restart the failed level-0 backup job on the source from the point of failure?
Thanks,
Tapan
Hi Tapan,
once the files got deleted, I think there’s no way.
What you “could” have done:
check which tablespaces had already been copied successfully and exclude them from a second call to generate the level-0 backups (see the sketch below).
But this is just based on “guessing”.
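Such a second call could have been driven via the tablespace list in xtt.properties, trimmed down to the ones still missing (names and locations are placeholders):

# xtt.properties – list only the tablespaces that were not yet copied:
tablespaces=TS_SALES,TS_HR
src_scratch_location=/stage/backups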
You will need to open an SR, please, and check with Support whether they have a better idea (and especially to diagnose the error, as I guess you don’t want it to happen again).
Cheers and sorry for the inconvenience,
Mike
Hi Mike,
I have two questions regarding the FTTS procedure mentioned:
1) If there is no need for data file conversion (same platform):
is there some other “xttdriver.pl” option (besides -c) that could transfer the files to the destination ASM location without converting them? Can we just use dbms_file_transfer on the files that “xttdriver.pl -c” has created, in order to place them in the destination ASM?
and
2) If there is no network connection between the two databases, so we cannot just use impdp with the DB link, how does the procedure change?
We obviously need to do the FTTS export (which is omitted in your procedure due to the impdp over the dblink), but at which exact step?
Best regards,
VN
Hi VN,
there’s of course a need for data file conversion when you go between endianness groups, such as from AIX to Linux.
Of course we didn’t include this in the video, as it was “Linux to Linux” 😉
Not sure about your first question.
xttdriver has two techniques: RMAN with conversion, and DBMS_FILE_TRANSFER. DBMS_FILE_TRANSFER has the downside of limiting you to 2 TB files (yes, this is sometimes a downside) but the advantage of not requiring staging space. So yes, you can use DBMS_FILE_TRANSFER, but you must use an earlier version of the PERL scripts:
https://mikedietrichde.com/2019/01/28/different-mos-notes-for-xtts-perl-scripts-use-v4-scripts/
The V4 version does not offer this technique anymore.
And regarding 2), the procedure changes this way:
https://mikedietrichde.com/2016/11/03/full-transportable-exportimport-par-file-examples/
Cheers,
Mike
Hi Mike,
First of all, thank you very much for your prompt response, and please bear with me a little further.
In our scenario we don’t need conversion (source and destination are both Linux), so please tell me your opinion about the following:
Can we:
1) use the v4 script for the preparation phase (data file backup creation in a staging area via RMAN) on the source using “xttdriver.pl -p”, and
2) then use the v3 script on the destination (with the dbms_file_transfer option) and without any conversion, in order to just move the data files into the destination ASM (12.1.0.2)?
And in any case (a perhaps stupid question): would it matter if we used the v4 “xttdriver.pl -c” on the destination side just to move the files into the destination ASM? Would it produce an error because there is no need for conversion, just a copy?
Thank you and best regards,
VN
Hi VN,
I doubt that this combination will work.
Stick with the V3 scripts – they still work but lack some improvements, especially regarding Multitenant.
Using the -c option would just be wasting time, as (as far as I know) it would attempt the conversion and either do it (a waste of time) or fail (not nice).
Cheers,
Mike
Hi Mike,
Thanks again for your response. For the first part, we will stick with the V3 scripts as you mentioned. As for the -c option, it seems (in our test) to work without any error: it just transfers the data files into the destination ASM even though conversion is not needed, and the last step of your procedure (“validate tablespace … check logical”) doesn’t produce any error for the transferred tablespace either. Do you think all of the above reassures us of the validity of this action (-c without the need for conversion)?
Best regards,
VN
Yes, I think so – cheers,
Mike
Hello Mike, and thanks for the nice information.
I have three questions:
1- Can we use the same procedure when migrating the database to the same platform, same endianness (no conversion), e.g. Solaris SPARC to Solaris SPARC? Is there a simplified approach in this case?
2- Can this procedure be used when the source data files are on a normal file system and the destination data files will be in ASM?
3- Can I manually use RMAN incremental backups with FTEX, without the scripts at all?
Thanks
Ahmed
Hi Ahmed,
1) Yes, of course you can. And the simplified approach would be to either restore your database and simply upgrade it or, even better, build up a standby, activate it and upgrade it (you can test this multiple times).
2) Sure – no limitations, regardless of whether you use FTEX or upgrade as I proposed above.
3) No – you’ll have to use the scripts, for several reasons. But as I mentioned before, “upgrade” is the much simpler approach in your case.
Cheers,
Mike
Hi Mike,
I have a requirement to migrate our 11.2.0.2/11.2.0.3 (ASM) standalone databases on Windows and Solaris OE (64-bit) platforms, respectively, to 19c on OEL in the cloud. Both databases are 1.5 TB each. I am considering the Data Pump utilities for this migration as I am familiar with this approach.
Are there any limitations/restrictions on using Data Pump? Can I directly migrate from 11gR2 to 19c using Data Pump? If yes, can you please provide me with a document link for migrating a DB in my environment?
Also, can you please suggest whether FTEX is the better approach, and point me to a document, as I have never used the transportable tablespaces approach before.
Many Thanks in advance,
PV
Hi,
yes, you can use Data Pump if your downtime window allows it.
Please see our slides and the virtual classroom seminar on migration to the cloud, which has explicit recommendations for Data Pump as well.
https://mikedietrichde.com/videos/
https://mikedietrichde.com/slides/#WEB2020
Cheers,
Mike