A long time ago, my colleagues published PERL scripts to assist especially with cross-platform Transportable Tablespace migrations. The PERL scripts allow you to utilize incremental backups, and this way you can decrease the downtime of a migration significantly for large databases. But there are different MOS Notes with xTTS PERL scripts available. Which one should you take?
Transportable Tablespaces and Incremental Backups
The biggest pain points in a transportable tablespace migration are usually the size of the database and its complexity. With RMAN Incrementally Rolled Forward Backups you can tackle the size aspect. Instead of having a long downtime to copy terabytes of files from A to B during the read-only phase of the transport (tablespaces have to remain in read-only mode during the copy and metadata export operations), you can leverage incremental backups.
You will always start with a Level-0 image copy backup, which has the same size as your database. But your tablespaces can remain in read-write mode during this backup.
In the following phase you will create Level-1 incremental backups and roll the copies forward with them. Your tablespaces still remain read-write.
When you get your downtime, the tablespaces need to be set to read-only mode before you trigger the final incremental backup and roll forward one last time. But this final incremental backup usually has just a few gigabytes. It completes much faster than a copy operation of terabytes. Then you start the transport phase.
In order to ease this process of RMAN Incremental Backups, even cross-platform and cross-Endianness, we deliver PERL scripts. You can download them from the MOS notes below.
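In a nutshell, driving the scripts boils down to repeating two calls until your downtime window arrives. A simplified sketch of the V4 flow (the exact xtt.properties setup, the file transfer and the final Data Pump transport step are described in MOS Note 2471245.1):

# Phase 1 – initial Level-0 image copies (source, tablespaces read-write)
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
# ... and on the destination, convert/apply what has been copied over:
$ORACLE_HOME/perl/bin/perl xttdriver.pl --restore

# Phase 2 – repeat as often as you like (tablespaces still read-write)
$ORACLE_HOME/perl/bin/perl xttdriver.pl --backup    # Level-1 incremental (source)
$ORACLE_HOME/perl/bin/perl xttdriver.pl --restore   # roll forward (destination)

# Final phase – downtime: set the tablespaces read-only, run one last
# --backup/--restore round, then do the TTS or FTEX metadata transport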
The PERL scripts
You can find two showcase presentations explaining the PERL scripts in a bit more detail in the Slides Download Center on the blog.
The following MOS Notes give you access to PERL scripts supporting the RMAN Incremental Backup path together with either Transportable Tablespaces or Full Transportable Export Import:
- PERL script for Oracle 11g (MOS Note 1389592.1)
- PERL scripts for Oracle 12c
- PERL scripts V4 – NEW and IMPROVED (MOS Note 2471245.1)
- PERL scripts for the ZDLRA
The last MOS Note refers to a special version for the Zero Data Loss Recovery Appliance, which is a big added value if you have a ZDLRA or plan to purchase one: it can become your migration vehicle. See the Migrate to ExaCC slides above for more details.
Which PERL scripts should you use?
Clearly you should use the PERL scripts V4 delivered via MOS Note: 2471245.1.
We keep the previous versions of the scripts on MOS only in case you have already started a project with them. The V3 version got updated with the most recent fixes, but MOS Note: 2471245.1 has the new and improved version of the scripts (V4) which you should use from now on.
To point to the new V4 release, the older notes now carry this information box:
NOTE: Consider using the new release of this procedure, version 4. This version has drastically simplified the steps and procedure. Before proceeding, review:
V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup Note 2471245.1
–Mike
Hi,
Mike, a slightly off-topic question: can we use Full Transportable to migrate a 12.1.0.2 non-PDB to a 12.2.0.1 PDB? Is this covered anywhere in the documentation?
Regards,
Artem
Hi Artem,
yes, you can – and it is covered in our 19c slide deck (see: https://mikedietrichde.com/slides ).
But you raise an idea here for a more detailed blog post about it.
I think the documentation itself does not cover it.
Cheers,
Mike
Hello Mike,
I’m not able to retrieve any MOS note referenced in your article.
All 3 notes state: “Document cannot be displayed…”
Do you know any reason for that?
Regards
Leygonie,
sorry for the inconvenience – the owner put the notes into REVIEW state to implement some changes (I guess).
I can’t access them either right now through the external portal.
They should be visible again soon.
Sorry for the inconvenience 🙁
I dropped the owner an email.
Cheers,
Mike
Hello Mike,
I am currently doing a proof of concept to migrate AIX (Big Endian) databases to Linux x86_64 (Little Endian). Just mentioning Endianness for clarity’s sake. We are running Oracle 12.1.0.2 (July 2018 PSU) for both the source and target.
I have read through the official documentation regarding Transporting Data Across Platforms and also read through most of the MOS notes, including the one you are mentioning above: 2471245.1 – V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup. I also opened various service requests to confirm the right procedure for this, and also for some problems I am having which I mention below.
As our databases range between 3 and 12 TB, we will use this incremental approach with the V4 Perl scripts, as I can’t see any other alternative?
I am currently doing the first tests manually, setting all tablespaces to read-only and working without the Perl scripts, to get a feel for the process, and I am running into problems with self-containment in the Data Pump section. So if my understanding is correct, the Data Pump export contains the metadata required to plug the tablespaces into the destination database? The core banking application uses partitioning for archiving old data, and this seems to be causing all the self-containment issues.
Another thing to note is that most of the business code is PL/SQL so what about the Dictionary?
Below is the rman command and some output produced during logging
backup to platform 'Linux x86 64-bit'
format '//linux_x86_64_d-%d_s-%s.bck'
datapump format '//linux_x86_64_d-%d_s-%s.dmp'
tablespace
;
Running TRANSPORT_SET_CHECK on specified tablespaces
TRANSPORT_SET_CHECK completed successfully
Performing export of metadata for specified tablespaces…
EXPDP> Starting "SYS"."TRANSPORT_EXP_XXXXXXXX_aump":
EXPDP> ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
ORA-39901: Partitioned table x.xxxxxxx is partially contained in the transportable set.
So my question – is there a way to get around this self-containment issue? The software provider will not change this mechanism so that is not an option. Are there any alternatives to this procedure?
Not sure if this is the right medium for asking, but I thought I’d put it in the comments in any case. Maybe you have some ideas on how to tackle this differently?
Regards
Neil
PS. I have generated a pdf from the MOS Note 2471245.1 before it went down 🙂
Neil,
ALL data tablespaces need to be in R/O phase at the same time when you’d like to transport.
Unless you have user objects in the SYSTEM tablespace (or SYSAUX), the “self-contained” check will succeed when you run it on all of the tablespaces you will transport, in one call. Please check this – there’s no way around it.
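For reference, running the check yourself in one call looks roughly like this (the tablespace names are placeholders, of course):

SQL> EXEC SYS.DBMS_TTS.TRANSPORT_SET_CHECK('TS_DATA01,TS_DATA02,TS_IDX01', TRUE);
SQL> SELECT * FROM SYS.TRANSPORT_SET_VIOLATIONS;

If the second query returns no rows, the set is self-contained.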
Cheers,
Mike
Hi Mike
Thanks so much for your reply. All tablespaces are set to R/O and we don’t have objects in SYSTEM or SYSAUX.
If I run dbms_tts.transport_set_check on the tablespace list then there are no violations.
Below is the snippet from the backup to platform 'Linux x86 64-bit' …
Starting backup at 29-JAN-19
Running TRANSPORT_SET_CHECK on specified tablespaces
TRANSPORT_SET_CHECK completed successfully
Performing export of metadata for specified tablespaces…
EXPDP> Starting "SYS"."TRANSPORT_EXP_AT50DB01_aump":
EXPDP> ORA-39123: Data Pump transportable tablespace job aborted
ORA-39187: The transportable set is not self-contained, violation list is
We traced this today and noticed that not all tablespaces are processed, so I have a strong feeling this could be a bug. Information is in SR 3-19259291331. Basically what happens is that not all tablespaces in the list I provide are processed, which in turn causes the ORA-39187.
Just a side note. The tablespace list is huge 🙁 Nothing I can do about it. This is our core banking application.
Regards
Neil
Neil,
how do you specify the tablespaces? Can you put a snippet of your par file with 5-10 tablespaces here please?
Cheers,
Mike
Hello Mike,
Another excellent article – a great bookmark for one of the most important aspects (DB migration).
Like Leygonie mentioned above, the issue of multiple MOS notes not being accessible appears to be universal (??). Any idea if this is being dealt with? I opened SR 3-19262566371 for the same yesterday, but there is no sign of any resolution in sight so far.
Narendra,
I have noticed this as well. And I have seen no official statement regarding it yet.
Please open an SR and ask the support folks what is going on.
This is beyond our control unfortunately.
Cheers,
Mike
Hi Mike
I sent a reply to mike.dietrich@xxxxxxxx.com
Hope that is ok?
Regards
Neil
Sure, this is ok.
Cheers,
Mike
Hi, Mike,
Thanks for the blog – I really hope you have influence on the prioritization of XTTS-related enhancements and bug fixes. I am dealing with an 800 TiB+ database migration and really hope that section size support could be added to the xtts scripts. I filed an enhancement bug 29036164 in mid December, but was told the enhancement is still many months away (because a bug related to using section size in the perl script needs to be fixed first). We have a few bigfiles that are 85 TiB+ and growing. The non-restartable nature of the XTTS script set, compared to regular RMAN, is also not helpful. But V3/V4 are confirmed to handle newly added tablespaces: a restart of the level 0 run (continuing from the completed tablespace backups) is possible – the run will be treated as level 1 for the tablespaces already backed up, while a level 0 copy will be generated for the new ones.
Thanks a lot
Thanks Z.
Uhh … that’s quite a big database.
Would you mind dropping me an email (mike.dietrich …. at ….. oracle.com)?
Then we can discuss this in a bit more detail and can include the responsible people.
Cheers,
Mike
Hello Mike!
V4 script contains this:
In version 3 important features are
# 1. Standby support
…
## allowstandby
## ———
## This will allow the script to be run from standby database.
allowstandby=1
but in MOS Note 2471245.1 I see this:
“It is not supported to execute this procedure against a standby or snapshot standby databases.”
Is that just misleading documentation?
Thanks
Hi Den,
you will need to open an SR for this, unfortunately. Or try it yourself. Or quickly check the PERL script to see whether it contains functionality for this switch.
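For example, something like this should tell you quickly whether the switch exists (assuming the scripts are unpacked in your current directory):

grep -n "allowstandby" xtt.properties xttdriver.pl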
I think the support should be there now – and I guess the MOS note is not in synch with it yet.
Cheers,
Mike
Hi Mike,
It appears that the perl scripts (V4) perform an IMAGE COPY backup of the datafiles. This means that the initial level 0 backup will occupy the same amount of storage as the production database. I was wondering if it is possible to perform a Full Transportable export using a BACKUPSET (NOT an image copy)? I can’t seem to find where this is documented. This is more of a general question, and not specific to the perl scripts in Doc 2471245.1.
Thanks.
Hi Gerry,
no – it works this way for a reason (RMAN backup/recovery internals). The data will occupy the same amount of space once more (at least), even if you can mount NFS from both sides. Unfortunately this is true – but it applies only to the initial step.
Cheers,
Mike
Hi Mike,
Thank you for your response in a timely manner.
I have two more questions and hope you can shed some light:
1. I was wondering if the V4 perl scripts would still work as intended with full transportable tablespace export? (Doc ID 2471245.1)
2. We have a 24×7 production 11g database that we wish to upgrade to 18c. We cannot alter the tablespaces to READ ONLY in production. Therefore, we are looking to use our standby DB to test the upgrade using Full TTS export with RMAN incremental backups. We do understand the reason why we cannot export from a read-only standby DB. However, my question is: is it possible to perform the full transportable export/import method (using incremental backups) on a SNAPSHOT STANDBY? I can’t seem to find documentation on this.
Thanks,
-Gerry
Hi Gerry,
yes, of course the PERL scripts can be used as well when you plan to do FTEX instead of xTTS. Our slides usually cover this (see the “slides” section of the blog – go for the deck with the BRIDGE picture).
And yes again for your second question.
Cheers,
Mike
We have tested the entire XTTS procedure with a standby – level 0, level 1 and the final tablespace metadata dump in snapshot standby mode – and successfully plugged in the tablespaces. However, support voiced in multiple SRs:
“Development’s stand is that this is not supported to run against a standby. We manually give customers the details, if they are insistent, as a courtesy. Technically the use of standby is not fully tested. As this is the case with standby, I would guess their policy will be the same with snapshot standby.”
That does not help us too much when discussing with our client. Mike, if there is any change in that direction, please let us know.
In our case, due to the lack of SECTION SIZE support in the XTTS method and much worse than expected throughput from the source AIX server, we are rethinking the migration method.
Hi Z,
the owners of the xTTS scripts are working on the SECTION size inclusion – but this will still take a while as it needs to be tested as well.
Regarding the “non-support” of using the scripts from a standby, can you send me the SR where this has been voiced?
Thanks,
Mike
Thanks for the update on the section size.
SR 3-18932700561, SR 3-19733208571 and its spin-off SR 3-20425289368 (pending an answer regarding using snapshot standby mode to take the tablespace metadata dump).
Regards
Hi Mike,
Any update on SECTION SIZE in XTTS? It has been over 2 years – has it been done? I have a 100 TB database to be migrated from Solaris to Linux.
Hi Sanjay,
as I’m not the owner of the scripts, you may need to open an SR and ask, please.
Cheers,
Mike
I have been told by my Oracle account person that some RMAN restore enhancements will be done to support the restore of multi-section backups in the 19.14 RU.
DB 19.14 (Jan) is targeted to support RMAN cross-platform restore of multi-section backups.
However, there is no change in the supplied perl scripts (V4), so I am not sure how we will make use of this feature. Any idea about this?
Hi Sanjay,
I seriously can’t tell you since I don’t know the content of 19.14.0 at this point.
And I’m not sure whether the team owning the V4 scripts does any enhancements right now.
Let’s see – but I can’t predict future things unfortunately.
Cheers,
Mike
Hi Mike,
what do you think about the fact that RMAN does not use the block change tracking file when it does an incremental backup for transport? How can I reduce the time while my DB is in read-only state and RMAN scans 30 TB for changes? The amount of changes is small, but the time spent looking for them is large.
Hi Dmitry,
what do you mean with “RMAN does not use the block change tracking file for incremental backups”?
Is a BCT file in use in your database? Then it should be used, as RMAN does not care what you are using the backup for. RMAN creates a level-1 incremental backup, and as far as I am aware, the BCT file should be used. But I may be wrong here. Tell me a bit more please.
Thanks,
Mike
Yes, BCT is used in my db.
test:
backup for transport allow inconsistent incremental level 0 datafile 10; -- 15 sec runtime
backup for transport allow inconsistent incremental level 1 datafile 10; -- 4 sec
backup for transport allow inconsistent incremental level 1 datafile 10; -- 3 sec
backup for transport incremental level 1 datafile 10; -- before this I did alter tablespace read only; 16 sec runtime
select file#, datafile_blocks, blocks_read, incremental_level, completion_time
from v$backup_datafile
where used_change_tracking = 'YES' and file#=10
order by completion_time
FILE#  DATAFILE_BLOCKS  BLOCKS_READ  INCREMENTAL_LEVEL  COMPLETIO
-----  ---------------  -----------  -----------------  ---------
   10           262144       261504                  0  07-AUG-19
   10           262144       116397                  1  07-AUG-19
   10           262144       116397                  1  07-AUG-19
I can’t see a row for the last backup.
It is the same on Linux and Solaris, 12.1 and 12.2.
My comment has been in moderation for 5 days.
Dmitry,
sorry but if you need immediate assistance, please open an SR. I’m not “Oracle Support”.
I travel a lot – and I don’t announce my absence on the blog.
Seriously …
And regarding the issue you see, I agree – but you need to open an SR please.
This needs to be looked into by somebody from Support – and synched with the owners of the PERL scripts.
Feel free to share the SR number with me, then I’ll mail the owners directly.
Cheers,
Mike
Mike,
sorry for my post, but I didn’t know that you are the moderator.
I had opened an SR, and the workaround is an ‘alter system checkpoint’ after ‘alter tablespace read only’.
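So the working sequence now is (the tablespace name is just an example):

SQL> ALTER TABLESPACE ts_data01 READ ONLY;
SQL> ALTER SYSTEM CHECKPOINT;

and only then trigger the final incremental backup.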
Hi Dmitriy,
thanks for the explanation 🙂
Can you share the SR number with me please? I’d like to have a look. Either here via comment or you can use my email (mike.dietrich — at — oracle.com).
Cheers,
Mike
Hello Mike,
thanks a lot for sharing the details. We are doing a POC for migrate/upgrade using the xtts scripts. Our use cases include both homogeneous and heterogeneous platforms.
However, we are facing a challenge while doing an XTTS migration from on-prem Linux to the Cloud. Our source DB also has TDE enabled. The issue happens when we try to roll forward the data with xttdriver.pl --restore: during the L1 restore/roll forward of the backup we hit the following error.
Start rollforward
——————————————————————–
Entering RollForward
After applySetDataFile
Done: applyDataFileTo
Done: applyDataFileTo
Done: RestoreSetPiece
DECLARE
*
ERROR at line 1:
ORA-19583: conversation terminated due to error
ORA-06512: at “SYS.X$DBMS_BACKUP_RESTORE”, line 3144
ORA-19870: error while restoring backup piece
/u01/app/oracle/nfs_backups/xib_07u9o75l_1_1_6
ORA-19913: unable to decrypt backup
ORA-06512: at “SYS.X$DBMS_BACKUP_RESTORE”, line 3138
ORA-06512: at line 42
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Error:
——
/u01/app/oracle/tmp/restore_Aug21_Wed_07_03_17_171//xxttroll_6.sql execution failed
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
What I understand is that our source files are encrypted using TDE keys, and our target DB has its own keys. When we restore the datafiles from the source using xttdriver.pl, the source keys are not present on the target (which is obvious). Is there any way we can bypass/fix the issue so that we can leverage xttdriver.pl for similar migrations?
Can you please advise on the above.
thanks in advance.
Hi Hardik,
thanks for sharing – but I can’t advise here on the blog, as I would need to reproduce this first.
Please open an SR. And let me know if the SR does not get progressed correctly.
Cheers,
Mike
Hi Mike … really enjoy your blog … one of the best out there.
I’m working on a complex migration of about 75 AIX databases (versions 10 through 12) to Linux.
For the smaller databases, Full Transportable Database import works great where the version is >= 11.2.0.3 (FTEX if the version is >= 11.2.0.3, and XTTS if not).
Of course, I’m not suggesting 7 lines of PL/SQL can replace 6700 lines of Perl… to maintain continuity of the migration and determine when recovery has finished, you also need a db link to look at v$datafile_copy, v$backup_piece, v$datafile and v$tablespace, plus some other bits and pieces.
Sorry for the long-winded post, Mike … I’m just trying to establish if WHAT needs to happen has now become simpler than HOW it’s currently done. There comes a point where the effort and cost needed to maintain an original design becomes greater than a complete re-design … isn’t that why you did autoupgrade.jar?
Hi Mark,
I generally would use expdp/impdp for all smaller environments where downtime is not the issue.
Very often the database can go over the weekend – and when everything is set up nicely, especially the network parts, then using NETWORK_LINK or going with a dump file can both be good options. Only for those cases where this won’t work because your downtime constraints are too tight, you may consider the PERL scripts.
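Just as an illustration, a full transportable import pulling everything directly over a database link could look roughly like this (a sketch only – the connect string, link name, directory object and file path are made-up placeholders):

impdp system@pdb_target \
  network_link=SOURCEDB \
  full=y transportable=always version=12 \
  metrics=y logtime=all \
  transport_datafiles='/u02/oradata/CDB2/PDB1/users01.dbf' \
  logfile=mydir:ftex_imp.log

The VERSION=12 parameter is needed when the source is an 11.2.0.4 database.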
Cheers,
Mike
HI Mike,
We are moving data from Solaris to Linux, but we can’t take the database into read-only mode.
We want to use a Snapshot Standby database as we can’t use production…. We have raised an SR, but Oracle is unable to provide any steps to use a Snapshot Standby database for the final step 4.
Do you have any idea how we can use the scripts in our situation?
Any help will be appreciated
Hi Saurabh,
do I understand you correctly: You’d like to do this process with a standby database instead which you convert to a snapshot standby temporarily?
I’m not 100% sure but I think a MIX of using a standby and later the production for the PERL scripts is still not supported.
But certainly, IMHO, you should be able to use your standby for the process, converting it into a snapshot standby. But everything has to be done on the snapshot standby. So even the last step, the final incremental backup, would mean:
1. Stop application on PROD
2. Make sure the last change has been synched to your standby
3. Convert standby into snapshot standby again
4. Set your tablespaces into RO mode
5. Trigger the last run of the PERL scripts
Step 5 must happen on this database – you can’t use PROD, as the scripts (as far as I know, unless this has been changed) work only with ONE database, not with a mix. And of course, you will have downtime, as you need to do all the additional TTS or FTEX work on top.
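Just to illustrate steps 3 and 4 (a rough sketch – the standby must be mounted with recovery stopped for the conversion, and the tablespace names are placeholders):

SQL> ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
SQL> ALTER DATABASE OPEN;
SQL> ALTER TABLESPACE ts_data01 READ ONLY;
SQL> ALTER TABLESPACE ts_data02 READ ONLY;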
Cheers,
Mike
Hi Mike, I have to make a copy of an 11.2.0.3 DB on HP-UX to 12c on Linux. Can I use the V4 procedure, or do I have to go with 11G – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 1389592.1)?
In any case, do I need to apply a last incremental? Can I avoid this? Because I don’t need an exact copy.
Hi Mario,
you need to check with Support. I think it should work. But I seem to remember that officially the source should be 11.2.0.4. Still, I guess if your source is 11.2.0.3 it will work, too.
And no, you can’t avoid applying the last incremental, as otherwise you will have an incomplete backup – and this does not work here.
Cheers,
Mike
Hi Mike,
Thank you for the great blog!
XTTS V4 on a snapshot standby – have you heard any new updates on this? The MOS note still says that the script is not supported on a standby or snapshot standby.
Hi Hari,
I know that several others asked the same thing – and I’m still unsure.
Please open an SR and check with Oracle Support.
Cheers,
Mike
Thank you for the update Mike!
When we converted our standby to a snapshot standby, I noticed that a restore point and a resetlogs are performed. This resetlogs sets the SCN to 0. XTTS V4 relies on SCN numbers (res.txt is one of the files that carries these markers) for the incremental update. That is why you cannot use the standby for the initial backup and then follow with a snapshot standby for the read-only phase (SCN 0). So, your earlier comment about everything (initial, incrementals, read-only incremental and metadata export) having to be done on the snapshot standby makes perfect sense. I am yet to test it, but will update once I finish the testing.
Thanks Hari!
Cheers,
Mike
Hello Mike,
many thanks for the great effort,
is it possible to use xtts V4 with a 10g source on Solaris 10 going into 19c on Red Hat Linux?
Or do we need to use V3?
another question:
we have to do 2 separate imports into the destination:
one for the source DB metadata, and a second for importing the TBS definitions.
Is that the right way, or will one import do everything?
Thanks
Hi Eliane,
you can try – and it may work. But at some point the V4 scripts are not supported with 10g sources anymore, for the simple fact that we can’t check all releases backwards.
But in theory it should work. And if not, the previous scripts are still available.
Cheers,
Mike
Hello Mike, can you please help, or should I raise an SR? When I execute
nohup $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup &
I can see all the .tf files are created, but the script just keeps running and doesn’t exit.
Hi Shaun,
please open an SR – I neither own the scripts nor have I used them with nohup before.
Cheers,
Mike
Hi Mike,
What happens if the restore process gets killed for any reason while the initial restore (after the initial backup) is going on? Since the restore process has already restored some OMF files, will starting the restore again continue from where it was killed, or will it restore the same files again?
Thanks
Hi Sanjay,
then you need to clean up and start again from scratch.
Cheers,
Mike
Hi Mike,
Level 0 restore worked from 12.1.0.2 non-CDB to 12.1.0.2 PDB. While executing the incremental restore, it failed with: ERROR IN CONVERSION ORA-45913: cannot backup file /oradump/rman_tts_scripts/test_dump/eg0mo0n9_1_1 because it belongs to dropped PDB.
Please share your thoughts. Opened SR but no reply yet.
sqlplus -L -s "/ as sysdba" @/oradump/rman_tts_scripts/test_dump_tmp/restore_Feb25_Fri_16_59_45_426//xxttconv_an0mkplh_1_1_6.sql /oradump/rman_tts_scripts/test_dump/an0mkplh_1_1 /oradump/rman_tts_scripts/test_dump 6
ERROR IN CONVERSION ORA-45913: Message 45913 not found; product=RDBMS; facility=ORA; arguments: [/oradump/rman_tts_scripts/test_dump/an0mkplh_1_1]
ORA-19600: input file is backup piece (/oradump/rman_tts_scripts/test_dump/an0mkplh_1_1)
ORA-19601: output file is backup piece (/oradump/rman_tts_scripts/test_dump/xib_an0mkplh_1_1_6)
CONVERTED BACKUP PIECE /oradump/rman_snfdbq_bakcup_omh/xib_an0mkplh_1_1_6
Hi Santosh,
please try it with 19c as the target – not 12.1.0.2.
And if you don’t get a proper response, please increase the Severity up to Sev.1 (but not 24×7).
If that doesn’t help, please call the HUB (this is the Support telephone number for your country).
Then escalate the SR via the HUB and request a management callback.
Sorry that I have no better response. But there could be so many reasons.
Thanks,
Mike
Thank you.
After 2 months, Oracle provided a patch for 12.1.0.2, which helped resolve the issue.
Thanks,
Santosh
Thanks a lot for letting me know, Santosh!
Cheers
Mike
Hi Mike. I’m going to migrate my 100 TB database on 11.2.0.3 / AIX 6.1 to OCI 19c. Can I do that using the xTTS V4 perl scripts?
Hi William,
yes, you can.
Please see our virtual classroom seminar about Migrating Very Large Databases on https://MikeDietrichDE.com/videos
Thanks
Mike
What about dbmigusera.pl not supporting multi-section backups? Multi-section is the default configuration when using the ZDLRA. I can’t use this script because it does not support multi-section backups.
Hi José,
actually, I didn’t refer to the special scripts for the ZDLRA. I know that there are limitations – but I can’t tell you what the current status of enhancements and such is. I only know that some people wanted to stop supporting these special ZDLRA scripts altogether.
Cheers
Mike
Mike, I am curious about the xtt.properties src and dest scratch_locations when using NFS for the backup. It looks like this may have been implemented differently in earlier versions of the perl script. Can you elaborate on how this works in the latest version?
Hi Terence,
we tried to show this in our Virtual Classroom Seminar Episode 12:
https://mikedietrichde.com/videos/
But beyond this, I can’t tell you more details. Unless your source is an older database, please use the V4 scripts.
Cheers,
Mike
Hi Mike,
I am doing a migration from a Solaris server (12c DB) to a Linux server (19c DB) using the V4 XTTS scripts. The database is about 25 TB, and both the source and target databases are non-CDB. In the xtt.properties file there is a parameter called ‘dest_datafile_location’; according to the V4 document, “Only one location is allowed for this parameter.” The problem is, since our database is huge, we have 10 file system mount points storing the datafiles in the source database. In the target database we also have 10 mount points (/oradata01, /oradata02, … /oradata10), each 4 TB. If we only add one location to the dest_datafile_location parameter, this single location will not be big enough to store all the datafiles coming from the source DB. How can we work around this issue? Thanks
Hi Grace,
if you struggle with this, you can run multiple instances of the PERL scripts in parallel, each of them taking care of a subset of files.
This should solve the constraint. You just have to make sure that the subdirectories all have different names, such as “xtts01”, “xtts02” etc.
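Hypothetically, such a setup could look like this – each directory gets its own xtt.properties with its own subset of tablespaces and its own destination (all names below are made up for illustration):

# xtts01/xtt.properties
tablespaces=TS_DATA01,TS_DATA02,TS_DATA03
dest_datafile_location=/oradata01
src_scratch_location=/stage_src/xtts01
dest_scratch_location=/stage_dest/xtts01

# xtts02/xtt.properties
tablespaces=TS_DATA04,TS_DATA05
dest_datafile_location=/oradata02
src_scratch_location=/stage_src/xtts02
dest_scratch_location=/stage_dest/xtts02

Then you run xttdriver.pl separately from each directory.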
Cheers,
Mike
Hi there – if you are migrating a DB of 25 TB to 19c, why are you not using ASM on the target? GI/ASM are free.
Hi,
I am not sure about the nature of your question.
Personally, I don’t have ASM in my VBox environment. As simple as that – and this is why most examples on the blog are done in the file system.
But the PERL scripts support ASM for sure – tons of customers have done this.
Cheers,
Mike
Sorry Mike, it wasn’t a question. It was related to the question from Grace about using xtts to migrate 25 TB to 19c – why she doesn’t use ASM as storage for that big database.
Hi Mike,
What is the reason the dbmigusera.pl perl script for the ZDLRA uses Transportable Tablespaces instead of Full Transportable Export/Import?
And is dbmigusera.pl meant for migrating PDBs from a CDB? In my case the tablespace names are not unique within the CDB.
Hi Nicola,
there is no specific reason – and we are writing a new MOS note right now which will have FTEX as the preferred scenario, especially since the old way is far too work-intensive.
Cheers,
Mike
On step 1 (the first backup at the source), how do we make sure res.txt is generated before the *.tf files have finished transferring to the destination host? When migrating a large database of 30+ TB, the transfer of the *.tf files takes a couple of weeks, and network hiccups happen often, causing the process to fail and the res.txt file not to be generated.
If the logic of xttdriver.pl could be modified to generate res.txt first, before starting the transfer of the *.tf files, we could transfer the *.tf files manually if the transfer process dies.
Hi,
the best is that you try it out in our lab environment – then you can see the process from start to end yourself:
https://apexapps.oracle.com/pls/apex/dbpm/r/livelabs/view-workshop?wid=3741
No installation needed – it works within a browser session and takes 10-12 minutes to provision. Just use the green button and provision it. You only need your SSO user.
Cheers,
Mike