Yesterday I was browsing around for a useful, simple example to test Transportable Tablespaces. A colleague had mailed me the other day with a strange error message: the attempt to import into a PDB in Oracle 19c failed. My first thought: Oh, this is simple. But I failed, too. And even worse, I couldn’t find a single useful note on My Oracle Support (MOS) for ORA-31640, ORA-27037, Linux-x86_64 Error: 2 with Additional information: 7. So I decided to summarize this in Transportable Tablespaces – Example and strange error with a PDB.
You may have noticed that a few techniques describing how to do a database migration from non-CDB to PDB are still missing – Migration with Data Pump is one of them. I will explain the simplest approach of going to Single- or Multitenant. It isn’t the coolest – and it isn’t very fast once your database reaches a significant size. But it is not complex. And it allows you to move even from very old versions directly into an Oracle 19c PDB – regardless of patch levels or source and destination platforms.
High Level Overview
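To give you a rough idea up front: in its most basic form, this approach is just a full export on the source followed by a full import into the precreated PDB. A minimal sketch – the connect strings, the directory object MIG_DIR and the file names are placeholders, and the directory object must exist in both databases:

$ expdp system@source full=y directory=MIG_DIR dumpfile=mig.dmp logfile=mig_exp.log

$ impdp system@pdb19c full=y directory=MIG_DIR dumpfile=mig.dmp logfile=mig_imp.log

Everything else – precreating the PDB, creating the directory objects, moving the dump file – happens around these two commands.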
Again I’ll have to thank my colleague Roland Gräff from the German ACS Support team in Stuttgart for bringing this onto our radar. Roland alerted me a week ago about an issue with exports in Oracle 12.2.0.1 that occurs only when you are on a certain patch level. I summarize the issue here under Data Pump 12.2.0.1 – Wrong Dump File Version – ORA-39142.
In the blog post below you will learn about the actual issue, where and when it happens, and of course how to work around it.
When does it happen?
The issue I will describe below happens only with …Continue reading...
I blogged a few days ago about Long Identifiers in Oracle Database 12.2 and accessing the objects via database links from lower database versions. As this raised a few questions, I realized a bit more clarification may be necessary. One question was about what happens during an export with Data Pump and Long Identifiers. That’s a pretty good question.
Export with Data Pump and Long Identifiers
I’d like to demonstrate the effect with a short example. I’m doing all my tests in a fresh PDB inside an Oracle 18.1.0 CDB from our Hands-On Lab. But you can repeat …Continue reading...
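To sketch what the test looks like (table and file names are made up – the full walk-through is in the post): create an object with an identifier longer than 30 bytes in the 18.1.0 PDB, then export it with a VERSION setting below 12.2 and watch what Data Pump does with it:

SQL> CREATE TABLE sales_history_staging_table_with_a_long_name (id NUMBER);

$ expdp system@pdb1 schemas=APP directory=DP_DIR dumpfile=longid.dmp version=11.2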
We recently worked with a customer who noticed that they were not able to use transportable tablespaces to connect the same tablespace data files to two databases at the same time, even after setting the tablespaces READ ONLY in SQL*Plus. This is new behavior in 12c, and many customers are not yet aware of this change. Here are the details of what changed, why, and how you might want to deal with it if the changes affect your environment.
Starting in 12.1, Data Pump sets tablespaces read write during the import phase of a transportable tablespace …Continue reading...
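For context, the import phase in question is the transportable step – a minimal sketch with made-up names:

$ impdp system@target directory=DP_DIR dumpfile=tts.dmp \
    transport_datafiles='/u02/oradata/TGT/app_ts01.dbf'

While this impdp runs, 12c sets the tablespace read write to update the data file headers – which is exactly why the same files can no longer be plugged into two databases at once. If memory serves, Oracle 18c later added TRANSPORTABLE=KEEP_READ_ONLY on import for this kind of scenario; please verify against the Utilities guide for your release.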
Roy and I blogged about Full Transportable Export/Import in the past:
- Full Transportable Export/Import White Paper
- Full Transportable Export/Import – Things to Know
- Full Transportable Export/Import – Migrating an 11.2.0.4 database to Oracle Database 12c – into the Oracle Cloud
- Exclude Stats differently for TTS and FTEX
If you haven’t heard of this database feature, it allows you to migrate a full database by using Transportable Tablespaces as a base technology but letting Data Pump do all the manual steps for you in a one-command migration. And if needed, it works with RMAN Incremental Backups as well in order to …Continue reading...
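The source-side command is pleasantly short. A minimal sketch with placeholder names (VERSION=12 is required when the source is 11.2.0.3 or 11.2.0.4):

$ expdp system full=y transportable=always version=12 \
    directory=DP_DIR dumpfile=ftex.dmp logfile=ftex_exp.log

Data Pump then drives the transportable tablespace mechanics for you; only copying the data files to the target remains a manual step.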
Now that I am back from OpenWorld, I will hijack the Upgrade blog while Mike is traveling. 🙂
Thank you to everybody who came to our presentations or stopped at the demo booth to chat last week. We had a great many conversations, and we always learn from talking to customers! One of the common questions about Data Pump came in the form, “I have a data pump job that used to run in X minutes, but now takes <multiple of X> minutes. Can you tell me what might be happening?”
Of course with that much information we …Continue reading...
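One tip that belongs here: ask for a log with timing details. A hedged sketch – parameter values are examples – that makes the comparison between the old and the new run much easier:

$ expdp system schemas=APP directory=DP_DIR dumpfile=app.dmp \
    metrics=yes logtime=all logfile=app_exp.log

METRICS=YES adds object counts and elapsed times per object path to the log file, and LOGTIME=ALL (available since 12.1) timestamps every log line.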
Full Transportable Export/Import – one of the coolest features in Oracle Database 12c
We blogged about Full Transportable Export/Import a while back. It is – no doubt – one of the coolest features in Oracle Database 12c. And it is part of our Hands-On Lab exercise (Hands On Lab – Upgrade, Migrate, Consolidate to Oracle Database 12c) as well.
It utilizes the technique of Transportable Tablespaces – cross-platform, cross-endianness, cross-version – but lets Oracle Data Pump do all the “dirty” work of rebuilding everything kept in your SYSTEM and SYSAUX tablespaces, including views, synonyms, public objects, …Continue reading...
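On the target side, the counterpart is a single impdp call. A minimal sketch with made-up paths – the data files have been copied to the target beforehand:

$ impdp system@pdb1 full=y directory=DP_DIR dumpfile=ftex.dmp \
    transport_datafiles='/u02/oradata/PDB1/users01.dbf','/u02/oradata/PDB1/app01.dbf' \
    logfile=ftex_imp.log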
Nice little best practice for statistics and Data Pump when doing either Transportable Tablespaces or Full Transportable Export-Import (credits to Roy and Dean Gagne).
Transport Statistics via a Staging Table
First of all, we always recommend excluding statistics when doing a Data Pump export, as the import of such stats takes way longer than transporting them via a stats table. If you are unfamiliar with transporting stats between databases, please see the Oracle Performance Tuning Guide, which has a nice tutorial:
The basic steps to transport statistics from one database to another fast and …Continue reading...
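In outline, the staging-table approach looks like this (schema and table names are placeholders – a minimal sketch, not the full tutorial):

SQL> EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => 'APP', stattab => 'STATS_STAGE')
SQL> EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'APP', stattab => 'STATS_STAGE')

-- move the table APP.STATS_STAGE to the target database with Data Pump,
-- while the main export runs with exclude=statistics

SQL> EXEC DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'APP', stattab => 'STATS_STAGE')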
One of my dear Oracle ACS colleagues (Danke Thomas!) highlighted this issue to me after one of his lead customers hit this pitfall a week ago.
DBMS_DATAPUMP Import Over NETWORK_LINK fails with ORA-39126 / ORA-31600
KUPW$WORKER.CONFIGURE_METADATA_UNLOAD [ESTIMATE_PHASE] ORA-31600: invalid input value IN ('VIEWS_AS_TABLES/TABLE_DATA') for parameter VALUE in function SET_FILTER
This can be cured with the patch for bug 19501000 – but this patch can conflict with bug 18793246 (EXPDP slow, showing base object lookups during Data Pump export causing a full table scan per object) and therefore may require a merge patch – patch 21253883 is the one to go …Continue reading...
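For reference, a stripped-down version of the kind of DBMS_DATAPUMP network import that runs into this (database link and schema names are made up):

SET SERVEROUTPUT ON
DECLARE
  h  NUMBER;
  js VARCHAR2(30);
BEGIN
  -- schema-mode import pulling directly over the database link SRC_LINK
  h := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA',
                          remote_link => 'SRC_LINK');
  DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''APP'')');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, js);
  DBMS_OUTPUT.PUT_LINE('Job state: ' || js);
END;
/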
Here is a new capability that might be interesting to anybody who is performing a migration using Data Pump. Previously, Data Pump would create indexes one at a time, specifying the PARALLEL keyword for the CREATE INDEX statement to invoke parallel query for index creation. We used to recommend a workaround to create indexes in parallel, which involved a three-step process of importing without indexes, then creating a SQLFILE of the CREATE INDEX statements, and breaking that file into pieces to run in multiple SQL*Plus windows.
Through extensive performance testing we found that it is faster to create multiple indexes in parallel (using a parallel …Continue reading...
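For the record, the old three-step workaround looked roughly like this (all names are placeholders):

# 1) import everything except the indexes
$ impdp system directory=DP_DIR dumpfile=app.dmp exclude=index

# 2) write the CREATE INDEX statements into a script instead of executing them
$ impdp system directory=DP_DIR dumpfile=app.dmp include=index sqlfile=crea_idx.sql

# 3) split crea_idx.sql into several scripts and run them in parallel SQL*Plus sessions

Letting Data Pump create several indexes concurrently removes all of this manual work.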
Ouch … I have to admit that I said in several workshops over the past weeks that a Data Pump export with expdp is per se consistent.
Well … I thought it was … but it’s not. Thanks to a customer who is doing a large Unicode migration at the moment. We were discussing parameters in the expdp par file. And I asked my colleagues after doing some research on MOS. And here are the results of my “research”:
- MOS Note 377218.1 has a nice example showing a data pump export of a partitioned table with …Continue reading...
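The short version of the cure: request read consistency explicitly. A minimal sketch (in 12c the legacy CONSISTENT=Y parameter is mapped to exactly this):

$ expdp system schemas=APP directory=DP_DIR dumpfile=app.dmp \
    flashback_time=systimestamp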
You have never thought about excluding DBMS_SCHEDULER jobs from a Data Pump export? Me neither, but I recently got a copy of an email about such a customer case from Roy, who owns Data Pump as well. And this is the code example from Dean Gagne:
exclude=procobj:"IN (SELECT NAME FROM sys.OBJ$ WHERE TYPE# IN (47,48,66,67,68,69,71,72,74))"
- This will work only on export
- It’s an all-or-nothing approach
Quite interesting, isn’t it?…Continue reading...
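Because of the shell quoting, the exclude is easiest to handle in a parameter file – a minimal sketch with placeholder names:

# exclude_jobs.par
schemas=APP
directory=DP_DIR
dumpfile=app_nojobs.dmp
exclude=procobj:"IN (SELECT NAME FROM sys.OBJ$ WHERE TYPE# IN (47,48,66,67,68,69,71,72,74))"

$ expdp system parfile=exclude_jobs.par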
Interesting question from a customer last week during the Upgrade Workshop in Munich. He gets export dump files from several customers, often without much information describing the contents. So how can he find out what’s in there, what the source character set was, etc.?
This seems to be a simple question, but it cost me a few searches and tests to come back with some (hopefully) useful information.
First attempt: $ strings expdp.dmp > outexpdp.txt
I bet there are better ways to do this but in my case this will give me:
"APP"."SYS_EXPORT_SCHEMA_01" x86_64/Linux 2.4.xx WE8ISO8859P15 LBB EMB GHC JWD… Continue reading...