Upgrade PDBs – Everything At Once (Full CDB Upgrade)

As mentioned before, there are two techniques to upgrade an Oracle Multitenant environment:

In this post I will explain the “Everything At Once” method and describe all the steps. The benefit of this approach is simplicity and ease of maintenance. At an upgrade workshop in Melbourne earlier this year, a DBA from Germany came by during one of the breaks and explained that he takes care of over 100 developer databases – it would ease his life a lot if he could “hit” all of them at the same time with one patch set or PSU, or even upgrade them to a higher release. This is the use case where it makes a lot of sense to leverage this approach. But be aware and don’t over-consolidate, as the pain point is common downtime. If you plan to use this approach, check beforehand whether your application owners can agree on common downtime windows – otherwise you may end up in trouble. Big trouble!

The technique is easy to describe:

  • CDB$ROOT will always get upgraded first – I call this CYCLE 1
    • The “-n” parameter of catctl.pl will define how many parallel workers run the upgrade – 8 is the current maximum
    • The “-M” option decides whether the CDB$ROOT stays in UPGRADE mode throughout the entire upgrade, or becomes available for access after being upgraded – in which case each PDB becomes available for access as well once it has been upgraded. Set “-M” and the CDB$ROOT will stay in UPGRADE mode throughout the entire process
  • Afterwards we can upgrade multiple PDBs (including the PDB$SEED) in parallel at the same time – I call this CYCLE 2,3, …
    • The “-n” parameter (divided by 2) of catctl.pl will determine how many PDBs will be upgraded in parallel
    • Your limiting factor is CPU power and cores
    • The “-N” parameter can alter the number of parallel workers per PDB – default is 2.

A few simple examples will demonstrate how the parameters work hand-in-hand:

  • CDB has (always) one PDB$SEED and 25 PDBs numbered PDB1 .. PDB25
  • That means we’ll have CYCLE 1 (for the CDB$ROOT), plus anything between one additional cycle (CYCLE 2, for all remaining 26 containers in parallel) and 26 additional cycles if you decide to never upgrade two PDBs at the same time
  • catctl.pl -M -n 16 would lead to:
    • CYCLE 2: PDB$SEED, PDB1-PDB7 (-n 16 divided by 2 = 8 PDBs to upgrade in parallel)
    • CYCLE 3: PDB8-PDB15
    • CYCLE 4: PDB16-PDB23
    • CYCLE 5: PDB24 and PDB25
    • Each PDB will be upgraded with 2 workers in parallel as -N default is 2
    • The CDB$ROOT will remain in UPGRADE mode until the last PDB is upgraded due to -M setting
  • catctl.pl -M -n 26 would lead to:
    • CYCLE 2: PDB$SEED, PDB1-PDB12 (-n 26 divided by 2 = 13 PDBs to upgrade in parallel)
    • CYCLE 3: PDB13-PDB25
    • Each PDB will be upgraded with 2 workers in parallel as -N default is 2
    • The CDB$ROOT will remain in UPGRADE mode until the last PDB is upgraded due to -M setting
  • catctl.pl -n 52 -N 1 would lead to:
    • CYCLE 2: PDB$SEED, PDB1-PDB25 (-n 52 divided by 2 = 26 PDBs to upgrade in parallel)
    • Each PDB will be upgraded with 1 worker as -N is 1
    • The CDB$ROOT will be available once upgraded – the same applies to each PDB once its upgrade is finished
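The cycle arithmetic in the examples above can be sketched in a few lines. This is purely a reconstruction for illustration – the actual scheduling happens inside catctl.pl, and the helper name and container list are mine:

```python
# Sketch of the catctl.pl cycle arithmetic (my reconstruction, not Oracle code):
# after the CDB$ROOT (CYCLE 1), containers are upgraded floor(n / 2) at a time.

def upgrade_cycles(containers, n):
    """Group containers into cycles of floor(n/2) parallel upgrades."""
    per_cycle = n // 2
    return [containers[i:i + per_cycle]
            for i in range(0, len(containers), per_cycle)]

# 26 containers: the PDB$SEED plus PDB1 .. PDB25
containers = ["PDB$SEED"] + [f"PDB{i}" for i in range(1, 26)]

# -n 16: 8 containers per cycle -> 4 cycles (CYCLE 2..5 above)
print([len(c) for c in upgrade_cycles(containers, 16)])   # [8, 8, 8, 2]

# -n 26: 13 containers per cycle -> 2 cycles
print([len(c) for c in upgrade_cycles(containers, 26)])   # [13, 13]

# -n 52: all 26 containers in one cycle
print([len(c) for c in upgrade_cycles(containers, 52)])   # [26]
```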

Step-by-step instructions:

Preupgrade Steps

  • Create a guaranteed restore point in order to flashback in case of failure
  • Copy preupgrd.sql and utluppkg.sql from the new Oracle home’s ?/rdbms/admin into the source Oracle home’s ?/rdbms/admin directory. THIS STEP IS EXTREMELY IMPORTANT, as otherwise the preupgrd.sql and utluppkg.sql from the old version will be loaded into the database – but the concept requires that you always use the preupgrd.sql/utluppkg.sql from the higher version. If you miss this step you will see the upgrade (a) hang and (b) produce plenty of errors.
  • Execute preupgrd.sql within the source database – the database needs to be up and running:
    $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -n 1 -d $ORACLE_HOME/rdbms/admin -l /home/oracle/mike -b preupgrd preupgrd.sql

    • It will create 3 files which combine all information for the root, the seed and all PDBs together: one preupgrade.log, one preupgrade_fixups.sql and one postupgrade_fixups.sql. The default location for those files is $ORACLE_HOME/cfgtoollogs/<SID>/preupgrade
    • Verify the preupgrade.log and follow all its advice
  • Execute the preupgrade_fixups.sql while all PDBs are open:
    $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -n 1 -d $ORACLE_HOME/cfgtoollogs/cdbupgr/preupgrade -l /home/oracle/mike -b preupgrade_fixups preupgrade_fixups.sql
  • Copy the init<sid>.ora into the new $ORACLE_HOME/dbs
  • Specific steps for RAC environments:
    • Set cluster_database=false:
      alter system set cluster_database=false scope=spfile;
    • Stop all instances:
      srvctl stop database -d <db_name>


  • Switch to the new $ORACLE_HOME including all necessary env variables
  • Connect with sqlplus:
    sqlplus / as sysdba
  • Bring the CDB$ROOT instance into upgrade mode:
    startup upgrade
  • Bring all PDBs into upgrade mode:
    alter pluggable database all open upgrade;
  • Control it with:
    show pdbs
    The status should be MIGRATE for all PDBs
  • Exit from SQL*Plus and cd to $ORACLE_HOME/rdbms/admin :
    cd $ORACLE_HOME/rdbms/admin
  • Perform the upgrade in parallel:
    $ORACLE_HOME/perl/bin/perl catctl.pl -d $ORACLE_HOME/rdbms/admin -n 16 -M -l /home/oracle/mike catupgrd.sql
  • The important file with timings per PDB, useful for a quick check, is called upg_summary.log and can be found in the log directory specified with -l (here: /home/oracle/mike)

Postupgrade Steps

  • Only in case -M hasn’t been used: the CDB remains open during the upgrade of the PDBs and needs to be shut down manually post upgrade:
    shutdown immediate
  • After a subsequent startup, all PDBs must be opened for recompilation:
    startup
    alter pluggable database all open;
  • Execute the postupgrade_fixups.sql:
    $ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -n 1 -d $ORACLE_HOME/cfgtoollogs/cdbupgr/preupgrade -l /home/oracle/mike -b postupgrade_fixups postupgrade_fixups.sql
  • Exit from SQL*Plus and cd into $ORACLE_HOME/rdbms/admin :
    cd $ORACLE_HOME/rdbms/admin
  • The recompilation is done via catcon.pl using the utlrp.sql script from within ?/rdbms/admin:
    $ORACLE_HOME/perl/bin/perl catcon.pl -n 1 -e -b utlrp -d '''.''' utlrp.sql
    1. utlrp.sql does not parallelize between PDBs
    2. The default parallel degree is: cpu_cores x 2
    3. It takes roughly 45 seconds per PDB PLUS another 30 seconds to initialize XDB – as this happens serially, it takes a while to complete recompilation after the upgrade
  • Drop the guaranteed restore point
  • In RAC environments only:
    • Set cluster_database=true again:
      alter system set cluster_database=true scope=spfile;
    • Start all instances:
      srvctl start database -d <db_name>
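Since utlrp.sql does not parallelize between PDBs, the rough timings quoted above translate into a simple back-of-the-envelope estimate. This is illustrative only – the per-PDB numbers are approximations, real timings depend heavily on hardware and installed options, and the helper name is mine:

```python
# Rough recompilation-time estimate: utlrp.sql runs serially across PDBs,
# so the ~45 s recompilation plus ~30 s XDB initialization per PDB add up.
# Illustrative numbers only -- measure on your own system.

def recompile_estimate_seconds(num_pdbs, recompile=45, xdb_init=30):
    return num_pdbs * (recompile + xdb_init)

minutes = recompile_estimate_seconds(25) / 60
print(f"~{minutes:.0f} minutes for 25 PDBs")  # ~31 minutes for 25 PDBs
```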

Seems to be a lot of work, too? But in fact most of the steps, such as running preupgrd.sql, are applicable to any upgrade. The only notable change is the need to start scripts with catcon.pl – and not directly within SQL*Plus. Please remember that this approach will upgrade as many PDBs in parallel as your CPU power allows.

In my test environment (a very outdated Exadata V1 without Flash, 6-year-old disks, 2 physical CPU sockets) I upgraded 25 PDBs – each roughly 25 GB in size, with user data in them and all options present – in less than 3 hours including recompilation. Please try to repeat this exercise with 25 independent databases consolidated on the same node within the same time 😉 Try it 🙂

Please also see my updated steps, written 2 years later when the next Oracle Database release became available:




18 thoughts on “Upgrade PDBs – Everything At Once (Full CDB Upgrade)”

  1. Hi Mike,

    truly appreciate the article. I think under "catctl.pl -n 52 -N 1 would lead to:", it should be "CYCLE 2: PDB$SEED, PDB1-PDB25" instead of "CYCLE 2: PDB$SEED, PDB1-PDB12".

  2. $ORACLE_HOME/perl/bin/perl catctl.pl -d $ORACLE_HOME/rdbms/admin -n 16 -M -l /tmp/madhav catupgrd.sql

    I tried many times, however it did not work. Will you please suggest?

    Analyzing file /orasw/app/oracle/product/
    Log files in /tmp/madhav
    catcon: ALL catcon-related output will be written to /tmp/madhav/catupgrd_catcon_17179.lst
    catcon: See /tmp/madhav/catupgrd*.log files for output generated by scripts
    catcon: See /tmp/madhav/catupgrd_*.lst files for spool files, if any
    catconInit: database is not open on the default instance

    Unexpected error encountered in catconInit; exiting
    2 at catctl.pl line 2071.

    cat catupgrd_catcon_17179.lst
    catcon: See /tmp/madhav/catupgrd*.log files for output generated by scripts
    catcon: See /tmp/madhav/catupgrd_*.lst files for spool files, if any

    catconInit: start logging catcon output at 2015-03-13 15:28:37


    catconInit: database is not open on the default instance

  3. Hm,

    it’s hard to say what’s going wrong in your case.
    catcon.pl exits with this error:
    "database is not open on the default instance"
    Therefore I wonder in which state your database is – and from which environment you are running this.

    Would you mind opening an SR so Support can double-check with you?


  4. For the catconInit: database is not open on the default instance error that mr ran into:

    I ran into that issue when my ORACLE_SID was set to the RAC database name and not the local instance name when running the following:

    $ORACLE_HOME/perl/bin/perl catctl.pl -c ‘foo’ catupgrd.sql

  5. Geoff,

    can you please give me a few more details?

    In my above example my SID (this is the instance SID per node) is set via the environment. When I’m issuing:

    $ORACLE_HOME/perl/bin/perl catctl.pl -d $ORACLE_HOME/rdbms/admin -n 16 -M -l /home/oracle/mike catupgrd.sql

    it will not give me any issues.


  6. On a Windows 2008 system I am at the part in an 11g to 12c upgrade where I am running this command:

    %ORACLE_HOME%\perl\bin\perl catctl.pl -n 1 catupgrd.sql

    My %ORACLE_HOME% is set to the 12c path.

    Here is the output:
    Microsoft Windows [Version 6.1.7601]
    Copyright (c) 2009 Microsoft Corporation. All rights reserved.

    C:\app\scripts>cd %ORACLE_HOME%\rdbms\admin

    C:\app\oracle\product\\dbhome_1\RDBMS\ADMIN>%ORACLE_HOME%\perl\bin\perl catctl.pl -n 1 catupgrd.s

    Argument list for [catctl.pl]
    SQL Process Count n = 1
    SQL PDB Process Count N = 0
    Input Directory d = 0
    Phase Logging Table t = 0
    Log Dir l = 0
    Script s = 0
    Serial Run S = 0
    Upgrade Mode active M = 0
    Start Phase p = 0
    End Phase P = 0
    Log Id i = 0
    Run in c = 0
    Do not run in C = 0
    Echo OFF e = 1
    No Post Upgrade x = 0
    Reverse Order r = 0
    Open Mode Normal o = 0
    Debug catcon.pm z = 0
    Debug catctl.pl Z = 0
    Display Phases y = 0
    Child Process I = 0

    catctl.pl version:
    Oracle Base = C:\app\oracle

    Analyzing file catupgrd.sql
    Log files in C:/app/oracle/product/
    catcon: ALL catcon-related output will be written to catupgrd_catcon_4288.lst
    catcon: See catupgrd*.log files for output generated by scripts
    catcon: See catupgrd_*.lst files for spool files, if any
    catconInit: database is not open on the default instance

    Unexpected error encountered in catconInit; exiting
    No such file or directory at catctl.pl line 2071.


    The database is open to the default instance (so I believe). I did this via: startup upgrade pfile=C:\TEMP\upg\initMYSID.ora

    In my initMYSID.ora db_name=’MYSID’ etc.

    This is not my first 11g to 12c upgrade. It is the first time running into this error.

    Thanks for your help,


  7. Michael,

    can you connect to the database without password?

    (1) cmd
    (2) set your oracle environment (SID, Base etc)
    (3) > sqlplus / as sysdba
    SQL> startup upgrade pfile=…
    SQL> create spfile from pfile
    SQL> shutdown immediate
    SQL> startup upgrade
    SQL> exit
    (4) cd %ORACLE_HOME%\rdbms\admin
    (5) %ORACLE_HOME%\perl\bin\perl catctl.pl -l c:\tmp catupgrd.sql

    I’m assuming that you have run the preupgrade_fixups.sql and fixed all things signaled by the preupgrade.log and the preupgrade_fixups.sql?

    Does the same error happen when you avoid the "-n 1" to trigger a serial upgrade (why are you doing this? Just wondering …).


  8. Mike thanks for your help!

    No I cannot login using just sqlplus /. I have not used that form since being in a shop that used UNIX\LINUX for Oracle. Databases at this shop are all on Windows (one day to move to UNIX\LINUX).

    I did make the changes you suggested to switch from pfile to spfile. It is working\starting OK there now using the SPFILE.

    I was using the -n option thinking that controlled parallelism and with these initial systems that I was upgrading it might run better this way. I tried it without the -n option and still got the exact error.

    Yes I ran the preupgrade_fixups.sql. I think we’re OK there.

    Other things I tried without success:
    * I also noticed I had the TNS_ADMIN set to 11g path so deleted that. Restarted new console with proper env. Also recreated 12c listener using proper path.
    * Renamed the glogin.sql to zglogin.sql (from a Google search and Oracle KB that recommended that).
    * Played with -l pathing to scripts.

    What else can you recommend?

  9. Michael,

    I think the issue is most likely due to the inability to use OS authentication. catctl.pl does not have an issue with it, but the underlying mechanism seems to struggle. We’ve had the same problem with datapatch when you don’t use OS authentication.

    OS authentication is not limited to Unix – you can use it on Windows as well. Please check if this would be a workaround for you.



  10. Folks,

    Re: catconInit: database is not open on the default instance

    If you have a RAC Multitenant database, then you must have your ORACLE_SID set to the INSTANCE NAME, not the DATABASE NAME, when using catcon.pl. Mike mentioned this in his comments, but it can be easy to miss…

    Two node RAC cluster, database is named ORCL
    Node 1 Instance: ORCL_1, Node 2 Instance: ORCL_2

    If you are on Node 1 then you’ll need to set your ORACLE_SID to orcl_1 and NOT to orcl when you use . oraenv or other techniques to set your environment.

    This will NOT work:

    . oraenv

    catcon.pl will generate the above error.

    This will work:
    . oraenv

    catcon.pl will now work…
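To make the point in this comment concrete, here is a minimal shell sketch. The database name, node number and derived SID are the hypothetical values from the two-node example above:

```shell
# On RAC, ORACLE_SID must be the local INSTANCE name (ORCL_1),
# not the database name (ORCL), before running catcon.pl/catctl.pl.
DB_NAME=ORCL       # database name from the example above
NODE_NUMBER=1      # local node number, e.g. shown by: olsnodes -n
export ORACLE_SID="${DB_NAME}_${NODE_NUMBER}"
echo "$ORACLE_SID"   # ORCL_1
```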


  11. Pingback: Upgrade PDBs – One at a Time (unplug/plug) | Upgrade your Database - NOW!

  12. Pingback: Applying a PSU or BP to a Single-/Multitenant Environment | Upgrade your Database - NOW!

  13. Pingback: PDB unplug/plug/patch with PSUs or BPs | Upgrade your Database - NOW!

  14. Pingback: Upgrade Everything at Once – Multitenant Upgrade from Oracle 12.1 to 12.2 | Upgrade your Database - NOW!

  15. Pingback: Upgrading the PDB$SEED takes always longer than any other container
