Create a PDB directly from a stand-alone database?

The documentation offers a well-hidden feature for migrating a database into the world of Oracle Single-/Multitenant:

Remote Cloning with the NON$CDB option.

If you read the documentation, it doesn't say much about this option: neither the requirements, nor the exact syntax, nor an example.

Scroll down to the FROM clause:

FROM NON$CDB@dblink … this option will plug in a stand-alone database and make it a pluggable database. Sounds interesting, so let's try it.

Test 1 – Try to plug in an Oracle database

Well, the documentation doesn't say anything about source release limitations. So I simply tried it with an Oracle database.

  1. Created a database link from my existing CDB pointing to my source database
  2. Started my SOURCEDB in read-only mode
  3. Tried to create a pluggable database from my SOURCEDB – and failed …
    SQL> create pluggable database PDB1 from non$cdb@sourcedb;
    create pluggable database PDB1 from non$cdb@sourcedb
    ERROR at line 1:
    ORA-17627: ORA-28002: the password will expire within 7 days
    ORA-17629: Cannot connect to the remote database server

Test 2 – Try to plug in an Oracle database in the file system

Thanks to Tim Hall – his blog post did the magic trick for me:

First of all, the reason my Test 1 failed is simply that I can't have a user in an Oracle database with the privilege CREATE PLUGGABLE DATABASE – but this is a requirement, as I learned later on.

  1. You'll need a user in SOURCEDB with the privilege CREATE PLUGGABLE DATABASE:
  2. Start SOURCEDB in read-only mode after shutting it down:
  3. Create a database link pointing from the CDB back into the SOURCEDB:
    CREATE DATABASE LINK sourcedblink CONNECT TO sourcedb_user IDENTIFIED BY password USING 'upgr12';
  4. Now create the pluggable database from the stand-alone UPGR12 database:
    CREATE PLUGGABLE DATABASE pdb_upgr12 FROM NON$CDB@sourcedblink;
  5. But when you check the status of the new PDB, you'll realize it is OPEN but only in RESTRICTED mode. Therefore noncdb_to_pdb.sql needs to be run. Connect to the new PDB and start the script:
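Put together, the steps above can be sketched as follows. This is a sketch, not a definitive recipe: the user, link, service, and file-path names (sourcedb_user, sourcedblink, upgr12, pdb_upgr12, the FILE_NAME_CONVERT paths) are placeholders from the example, and your directory layout will differ.

```sql
-- On the stand-alone source (SOURCEDB, service name upgr12):
-- a local user with the CREATE PLUGGABLE DATABASE privilege is required.
CREATE USER sourcedb_user IDENTIFIED BY password;
GRANT CREATE SESSION, CREATE PLUGGABLE DATABASE TO sourcedb_user;

-- Restart the source read-only:
SHUTDOWN IMMEDIATE
STARTUP OPEN READ ONLY

-- On the target CDB (as SYSDBA in CDB$ROOT):
CREATE DATABASE LINK sourcedblink
  CONNECT TO sourcedb_user IDENTIFIED BY password USING 'upgr12';

-- Clone the stand-alone database as a PDB (paths are assumptions):
CREATE PLUGGABLE DATABASE pdb_upgr12 FROM NON$CDB@sourcedblink
  FILE_NAME_CONVERT = ('/oradata/UPGR12/', '/oradata/CDB1/pdb_upgr12/');

-- The new PDB opens in RESTRICTED mode; convert the dictionary and reopen:
ALTER PLUGGABLE DATABASE pdb_upgr12 OPEN;
ALTER SESSION SET CONTAINER = pdb_upgr12;
@?/rdbms/admin/noncdb_to_pdb.sql
ALTER PLUGGABLE DATABASE pdb_upgr12 CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE pdb_upgr12 OPEN;
```

The noncdb_to_pdb.sql run is the long part; it reworks the data dictionary so the former stand-alone database can share the CDB's common objects.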


What do you get from this command? It allows a simple way to plug a stand-alone database into a container database, but the following restrictions apply:

  • Source database has to be:
    • at least Oracle
    • Same Endianness
    • at the same version as the container database
  • Script noncdb_to_pdb.sql needs to be run
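The endianness requirement can be checked up front. A minimal sketch, run on both source and target, using the standard V$TRANSPORTABLE_PLATFORM view:

```sql
-- Show the platform and endian format of the current database;
-- both sides must report the same ENDIAN_FORMAT for the clone to work.
SELECT d.platform_name, tp.endian_format
  FROM v$database d
  JOIN v$transportable_platform tp
    ON tp.platform_name = d.platform_name;
```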

Please see the restrictions and limitations (thanks to Susan Smith from Oracle for digging out the links):

You may have a look at MOS Note 1928653.1 "Example for Cloning PDB from NON-CDB via Dblink" as well [Thanks Krishnakumar for pointing me to this note].

Finally, the only simplification seems to be avoiding the extra step of creating the XML manifest file with DBMS_PDB.DESCRIBE – but apart from that I can't see many other benefits, except that it eases remote cloning within the above restrictions.

And not to forget:
The clone-via-db-link command does not allow you to specify the NOCOPY option. So you will always get a copy of your source database, which may not be desired in every situation …
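For comparison, the manifest-file route mentioned above does support NOCOPY when the source datafiles are already visible to the CDB host. A sketch, with the manifest path and PDB name as assumptions:

```sql
-- On the stand-alone source, opened read-only:
-- describe the database into an XML manifest.
EXEC DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/upgr12.xml');

-- On the target CDB: plug in reusing the datafiles in place (no copy).
CREATE PLUGGABLE DATABASE pdb_upgr12
  USING '/tmp/upgr12.xml'
  NOCOPY TEMPFILE REUSE;
```

So when source and target share storage and you want to avoid duplicating the datafiles, the manifest route may still be the better fit despite the extra step.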



9 thoughts on “Create a PDB directly from a stand-alone database?”

  1. This method has one more benefit – remote cloning.
    In case the non-CDB is on a different server, we can create the PDB using a DB link. No need to copy datafiles to the server where the CDB is located.
    This PDB creation using a dblink was introduced in version

  2. Hi Mike,
    Thank you for the insightful article. We have a source 11g R2 database on a Windows 2012 VM, which would be considered stand-alone. The database contains critical replication via MVs from yet another source DB that runs on the mainframe.
    Our goal is to migrate/upgrade the existing, functioning 11g R2 database from 11g R2 EE to 12c EE Multitenant as a PDB. The 12c EE is running on the same Windows VM in a separate Oracle home.

    Please provide any advice or suggestions on whether we will be able to migrate/upgrade the existing 11g DB to a 12c PDB without first upgrading the 11g DB home to 12c. In other words, we want to upgrade/migrate directly. Another question: will the original 11g EE DB be preserved, or will it become a 12c PDB after migration?

    Thank you

  3. Sorry to say, but you WILL HAVE TO UPGRADE TO 12c FIRST. I would love to give you advice to make this easier, but that's how it has been implemented. You will be able to unplug only ONCE you have upgraded the database first, making the package DBMS_PDB available.

    You could of course use other techniques which don't require an upgrade first (such as Data Pump, Transportable Tablespaces, or Full Transportable Export/Import – see the video on the right side where Roy demonstrates how to migrate a database directly from 11.2 to a 12c PDB in the Cloud).


  4. Hi Mike,
    thanks for the instructive article! This works very nicely with a 12.2 non-CDB source and an 18.3 target CDB.
    However, the upgrade of the new PDB does NOT work if the target CDB has a db_unique_name specified. In this case, the script
    $ORACLE_HOME/perl/bin/perl catctl.pl -c "NEWPDB" -l /o42/app/oracle/admin/MYCDB/script/upgrade catupgrd.sql
    fails with "SP2-0640: Not connected". Running exactly the same configuration with the db_unique_name removed works fine.
    Do you have any idea on that? We need db_unique_name as the CDB is meant to be a dataguard primary.

    Thanks & regards,

      • Hi Mike,
        I think I have found the error – seems to me that the script can’t cope with custom entries in glogin.sql. Error message looks like this:

        SP2-0640: Not connected
        Enter value for cont: @::connect / AS sysdba> 2 SP2-0640: Not connected
        @::connect / AS sysdba> 2 SP2-0640: Not connected
        @::connect / AS sysdba> SP2-0640: Not connected
        @::connect / AS sysdba> @::connect / AS sysdba> 2 SP2-0640: Not connected
        @::connect / AS sysdba> 2 SP2-0640: Not connected
        @::connect / AS sysdba> @::connect / AS sysdba>
        @::connect / AS sysdba> ========== Process Terminated by catcon ==========

        I had some custom entries in glogin.sql to be able to determine which database / container I am logged on to. It was probably a coincidence that I modified glogin at exactly the wrong time… I removed the entries in glogin, and now everything works fine 🙂

        Thanks anyway,

          • Hi Mike,

            it's true – it is a bit complicated, as glogin.sql settings only work for CDB$ROOT but not in the case of "alter session set container=pdb1;", as this is not actually a new login. So basically your SQL prompt shows the root container anyway, no matter which PDB you change into.
            Finding / using glogin.sql was not a problem (or, more specifically, having found it turned out to BE the problem after all :-)).
            As I have many ORACLE_HOMES and a handful of cdbs on that system, I created a central glogin.sql, soft-linked the glogin.sql in every $ORACLE_HOME/sqlplus/admin to that central location and set SQLPATH to that location in my oracle login script. Then I modified the central glogin.sql like this:
            col CONT new_value CONT noprint
            select sys_context('userenv', 'con_name') as CONT from dual;
            set sqlprompt "_USER'@'_CONNECT_IDENTIFIER::&&CONT> "

            Works pretty well as long as the cdb is up and running.
            But you can see in the catupgrd0.log that the catupgrd.sql had trouble exactly with these settings:

            Enter value for cont: @:: connect / AS sysdba> 2 SP2-0640: Not connected

            Thinking about it, I suppose sys_context can't work when connecting to an idle instance – so I will just have to rethink my glogin settings… 🙂

            But thanks for the input – see you in Nürnberg at DOAG conference?
