Roy and I blogged about Full Transportable Export/Import in the past:
- Full Transportable Export/Import White Paper
- Full Transportable Export/Import – Things to Know
- Full Transportable Export/Import – Migrating an 11.2.0.4 database to Oracle Database 12c- into the Oracle Cloud
- Exclude Stats differently for TTS and FTEX
If you haven’t heard of this database feature: it allows you to migrate a full database by using Transportable Tablespaces as the base technology while letting Data Pump do all the manual steps for you in a one-command migration. And if needed, it works with RMAN incremental backups as well in order to decrease the downtime for copying larger databases. It works cross-platform and cross-endianness – and with source versions as early as Oracle Database 11.2.0.3 – but only towards Oracle Database 12c.
Often there are questions regarding the correct syntax for the parameter files. As I have to look it up from time to time as well, I thought I’d share some example par files. There are two different approaches: with or without NETWORK_LINK.
With expdp/impdp – without NETWORK_LINK
expdp.par
directory=oradump
dumpfile=wspmst_ftex_exp.dmp
logfile=wspmst_ftex_exp.log
transportable=always
full=y
metrics=y
logtime=all
parallel=8
exclude=table_statistics,index_statistics
impdp.par
directory=oradump
dumpfile=wspmst_ftex_exp.dmp
logfile=wspmst_ftex_imp.log
metrics=y
logtime=all
TRANSPORT_DATAFILES='/oradata/wspmst/data/WSPMSTCON/3ABC7F153DB55A4DE0539ED0C80A102D/datafile/advsecmon.331.880913493'
TRANSPORT_DATAFILES='/oradata/wspmst/data/WSPMSTCON/3ABC7F153DB55A4DE0539ED0C80A102D/datafile/advsecmon.555.880923697'
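Both par files reference a directory object named oradump. If it doesn’t exist yet, it can be created like this – a minimal sketch, assuming a DBA-privileged session; the file system path is a placeholder:

```sql
-- Create the directory object referenced by DIRECTORY=oradump
-- (run as SYSDBA or a user with CREATE ANY DIRECTORY; path is a placeholder)
CREATE DIRECTORY oradump AS '/u01/app/oracle/dumpdir';

-- Allow the exporting/importing user to read and write dump files there
GRANT READ, WRITE ON DIRECTORY oradump TO system;
```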
Only impdp (no expdp) – using NETWORK_LINK
Two examples here – one dealing with an Oracle 11.2 source database, the other moving directly from an Oracle 12c non-CDB into a pluggable database (PDB).
impdp1.par
impdp mike/passwd@v121 NETWORK_LINK=v112 FULL=Y TRANSPORTABLE=ALWAYS VERSION=12 \
  METRICS=Y EXCLUDE=table_statistics,index_statistics LOGTIME=ALL \
  LOGFILE=ftex_dir:v112fullimp.log \
  TRANSPORT_DATAFILES='/oracle/DQ1/sapdata50/ts1.dbf' \
  TRANSPORT_DATAFILES='/oracle/DQ1/sapdata50/ts2.dbf'
impdp2.par
impdp system/oracle@pdb2 network_link=sourcedb full=y transportable=always \
  metrics=y exclude=table_statistics,index_statistics \
  directory=mydir logfile=pdb2.log \
  transport_datafiles='/u02/oradata/CDB2/pdb2/users01.dbf' \
  transport_datafiles='/u02/oradata/CDB2/pdb2/users02.dbf' \
  transport_datafiles='/u02/oradata/CDB2/pdb2/users03.dbf'
Please note:
- The VERSION=12 option must be used only when an Oracle 11.2 database is the source as in this case Data Pump on the 11.2 side must be triggered to access this 12c feature and create an export in a form the 12c Data Pump can understand.
- There’s no expdp necessary when using Data Pump over a database link (NETWORK_LINK). But in this case you will need the keywords FULL=Y and TRANSPORTABLE=ALWAYS as export parameters, as the export portion of Data Pump on the source side will be triggered under the covers.
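Since NETWORK_LINK refers to a database link in the target database pointing at the source, here is a minimal sketch of creating one – user, password and TNS alias are placeholders:

```sql
-- In the target database: create the link Data Pump will use.
-- The user the link connects as needs full export privileges on the source.
CREATE DATABASE LINK sourcedb
  CONNECT TO system IDENTIFIED BY oracle
  USING 'source_tns_alias';

-- Quick sanity check that the link works before starting impdp
SELECT * FROM dual@sourcedb;
```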
–Mike
Hi Mike,
I’ve used FULL=Y and VERSION=12 during export, but the import is failing at:
Processing object type DATABASE_EXPORT/PASSWORD_VERIFY_FUNCTION
ORA-31684: Object type PASSWORD_VERIFY_FUNCTION already exists
ORA-39083: Object type PROFILE:”PROFILE_CMPLX” failed to create with error:
ORA-07443: function VERIFY_FUNCTION_11G not found
Failing sql is:
CREATE PROFILE “PROFILE_CMPLX” LIMIT COMPOSITE_LIMIT UNLIMITED SESSIONS_PER_USER UNLIMITED CPU_PER_SESSION UNLIMITED CPU_PER_CALL UNLIMITED LOGICAL_READS_PER_SESSION UNLIMITED LOGICAL_READS_PER_CALL UNLIMITED IDLE_TIME UNLIMITED CONNECT
_TIME UNLIMITED PRIVATE_SGA UNLIMITED FAILED_LOGIN_ATTEMPTS 5 PASSWORD_LIFE_TIME 7776000/86400 PASSWORD_REUSE_TIME 15552000/86400 PAS
ORA-39083: Object type PROFILE:”SESSION_LIMIT_FSP” failed to create with error:
ORA-07443: function PASSWD_VARIFICATION_–snipped– not found
The function that exists starts with “PASSWORD”, whereas the function reported as not found starts with “PASSWD”.
Source is 11.2.0.3, target is 12.1.0.2.
The source is used for GoldenGate bi-directional replication. Is there anything else we need to consider during import? The target will also be used for bi-directional replication post cutover.
Regards,
Samad
Hi Samad,
could you please log an SR for this, upload the logs and share the SR number with me?
Cheers,
Mike
Hi Mike,
I noticed that the PARALLEL parameter is not included in the export and import par files. Does this mean a PARALLEL operation is not supported during a full transportable export/import?
Regards,
Ging
Thanks for the hint, Ging.
Since parallel metadata export/import came in late, the PARALLEL clause is not in the examples. But you are right, it will be needed as well. I double-checked: in order to enable parallel metadata operations for FTEX, you need to set PARALLEL=2 or higher.
Cheers,
Mike
Hello, can someone help me? I’m dealing with an ORA-01565 error in identifying one of the data files. I’m new to import/export, and upon import I’m getting an error that the file can’t be found on the system. I’ve transported the files physically and am now attempting the import command, but it isn’t working. Should the files be read from the same path as the import.par file, and should that path be the same on the source and target database?
Hi there,
even though I assume you have solved this issue by now, I would need much more information in order to assist you. This includes the par file, the exact error output, a clear description of your commands, etc.
Thanks,
Mike
Hi Mike, thank you for your contributions. The error that I’m getting is ORA-19722. Can you walk me through restoring a previous version of a data file?
Hi,
as I wrote before, in order to assist you, we would need:
- the par file
- the exact error output (not only a single message)
- a clear description of your commands
- the Data Pump log
But actually it would be much better if you opened an SR.
In case you post all this here, I will have a look of course.
Cheers,
Mike
Hi Mike, first of all, congratulations on your work in the Oracle Database Upgrades and Migrations channel.
My question:
1) Could “Full Transportable Export/Import” be used to transport one TABLESPACE/USER to the destination database, while the other tablespaces remain in the source database?
2) Could “Full Transportable Export/Import Incremental” be used to transport one TABLESPACE/USER to the destination database, while the other tablespaces remain in the source database? I have followed your video, and the example uses only one tablespace: https://www.youtube.com/watch?v=zharMnB02Uw
In this case, which parameters go into the parameter file during import? (full=y …, I suppose, is not correct.)
I am trying this option, but I am having some problems.
Thanks in advance for your answer.
Hi Alfonso,
1) Yes, but during the transport section (i.e. when you copy the file(s)), the tablespace(s) must be read-only in the source. Otherwise you’ll get a fuzzy copy which won’t work. You can use incremental backups to keep the duration of the read-only phase as short as possible in case the files are larger.
2) Yes, this is possible without any problem as long as the tablespace is self-contained (e.g. if table A is in tablespace X but index B(A) is in tablespace Y, this wouldn’t be self-contained; then you need to transport both tablespaces or move index B(A) into tablespace X).
FULL=Y and TRANSPORTABLE=ALWAYS are still correct but you add either INCLUDE or EXCLUDE clauses to define which tablespace is affected.
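Whether a tablespace set is self-contained can be checked up front with the DBMS_TTS package – the tablespace names below are placeholders:

```sql
-- Check whether the listed tablespaces form a self-contained set
-- (the second argument TRUE includes referential constraints in the check)
EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('TS_DATA,TS_INDEX', TRUE);

-- Any rows returned here mean the set is NOT self-contained
SELECT * FROM transport_set_violations;
```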
Cheers,
Mike
Can this method be used to migrate 11.2.0.4 -> 11.2.0.4? I understand that I need VERSION=12 on the expdp in order to be able to specify TRANSPORTABLE=ALWAYS, but then I get ORA-39142: incompatible version number 4.1 from the 11.2.0.4 impdp. If I try a 19.0 impdp, I get UDI-00018: Data Pump client is incompatible with database version 11.2.0.4.0.
I don’t have 12c installed on the target server, but would that version of impdp work in this scenario?
Hi Ron,
no, this method can only be used to go to at least 12.1.0.2. You can’t use it for an 11g => 11g migration since Data Pump in 11g has no features to support the import part.
Thanks
Mike
Hi Mike,
I could successfully do a cross-platform migration using the v4 xtts scripts. But I want to know how we can get the same user credentials from the source, as we have to create the users manually on the target. Is there any way to make the credentials the same as in the source database?
Hi Mansour,
yes.
If you use Full Transportable Export/Import, where Data Pump does the work for you, it will rebuild the credentials.
If you do it manually yourself, you need to use DBMS_METADATA and create scripts which you then execute.
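As a sketch of the manual approach, DBMS_METADATA can generate the CREATE USER statements including the password hash, so the user comes over with the same credentials – the username SCOTT below is a placeholder:

```sql
-- Generate the full CREATE USER DDL, including the password hash
SET LONG 100000
SELECT DBMS_METADATA.GET_DDL('USER', 'SCOTT') FROM dual;

-- Grants are separate objects and need to be extracted as well
-- (these calls raise an error if the user has no grants of that type)
SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT', 'SCOTT') FROM dual;
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT', 'SCOTT') FROM dual;
```

Spool the output into a script and run it on the target before the import, or after it for verification.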
Cheers
Mike