What is noncdb_to_pdb.sql?
The script noncdb_to_pdb.sql gets run only once in the life span of your database. It is executed when a regular Oracle database (non-CDB) is moved into a Single-/Multitenant deployment as a pluggable database (PDB). But why does it take so long?
You will never run this script again for this database. It is a sanity script that cleans up and changes things in a non-CDB, and finally and irreversibly unites it with the CDB.
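For orientation, the usual flow around the script looks roughly like this (a sketch only; the file path and the PDB name mydb are placeholders):

```sql
-- In the non-CDB, opened read only, generate the manifest (XML description):
SHUTDOWN IMMEDIATE
STARTUP OPEN READ ONLY
EXEC DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/mydb.xml');
SHUTDOWN IMMEDIATE

-- In the CDB, plug the database in and convert it:
CREATE PLUGGABLE DATABASE mydb USING '/tmp/mydb.xml' COPY;
ALTER SESSION SET CONTAINER = mydb;
@?/rdbms/admin/noncdb_to_pdb.sql
```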
How long does it take to complete?
Runtime varies a lot. It depends mainly on the number of objects the script has to adjust and map to the CDB’s dictionary, and on the time the forced recompilation takes.
16:21:48 SQL> DECLARE
16:21:48 2 threads pls_integer := &&1;
16:21:48 3 BEGIN
16:21:48 4 utl_recomp.recomp_parallel(threads);
16:21:48 5 END;
16:21:48 6 /
PL/SQL procedure successfully completed.
Elapsed: 00:10:12.67
16:32:01 SQL>
16:32:01 SQL> SELECT dbms_registry_sys.time_stamp('utlrp_end') as timestamp from dual;
TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END 2017-06-08 16:32:01
1 row selected.
Can you speed it up? Can it be restarted?
Based on my experience you can’t speed it up, except by adding CPU power for the recompilation. Therefore, test the runtime upfront if you have to meet certain downtime restrictions and rules.
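To get a rough feeling for the workload upfront, you could check the object counts in the source database before the conversion (just a sketch, run as a DBA user):

```sql
-- How many dictionary objects exist, and how many are already invalid:
SELECT status, COUNT(*) AS cnt
FROM   dba_objects
GROUP  BY status;
```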
In Oracle Database 12.1.0.2 the script can’t be restarted or resumed. Once it failed, the entire setup had to be restored and the process restarted from the beginning. That’s why a backup is absolutely essential.
Since Oracle Database 12.2.0.1 the script is supposed to be resumable – and I did check back with my Support colleagues in Romania, who already tested this with positive results. But the overall runtime hasn’t been improved.
Known issue
One known issue I came across (thanks to Franky Weber Faust!): noncdb_to_pdb.sql will fail when Concurrent Statistics Gathering is on.
Error pattern:
15:35:52 SQL> DECLARE
15:35:52   2    threads pls_integer := &&1;
15:35:52   3  BEGIN
15:35:52   4    utl_recomp.recomp_parallel(threads);
15:35:52   5  END;
15:35:52   6  /
DECLARE
*
ERROR at line 1:
ORA-20000: Unable to gather statistics concurrently: Resource manager plan is not active or is not managing CPU usage
ORA-06512: at "SYS.UTL_RECOMP", line 899
ORA-06512: at "SYS.DBMS_STATS", line 36873
ORA-06512: at "SYS.DBMS_STATS", line 4321
ORA-06512: at "SYS.DBMS_STATS", line 36652
ORA-06512: at "SYS.UTL_RECOMP", line 260
ORA-06512: at "SYS.UTL_RECOMP", line 803
ORA-06512: at line 4
This can be solved simply:
SQL> exec DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT','FALSE');
Furthermore, the issue is tracked and solved under bug 26434999.
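Before starting the conversion you could verify the preference and, if needed, switch it off (a sketch of the check plus the fix from above):

```sql
-- Check whether Concurrent Statistics Gathering is enabled:
SELECT DBMS_STATS.GET_PREFS('CONCURRENT') AS concurrent FROM dual;

-- If it is not FALSE/OFF, switch it off for the conversion:
EXEC DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT','FALSE');

-- Optionally restore the previous setting afterwards, e.g.:
-- EXEC DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT','TRUE');
```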
Important Addition – Oracle 19.9.0 – May 5, 2021
Since Oracle 19.9.0 you will find an important improvement with noncdb_to_pdb.sql. We added the fix for bug 25809128 to this Release Update. And since RUs are cumulative, it is of course in 19.10.0, 19.11.0 etc. as well.
This fix leads to fewer objects requiring recompilation. Especially with Oracle EBS databases, where you have a huge dictionary with very many objects, the migration to Multitenant in 19c caused extremely long runtimes, in some cases several hours. With this fix there are significantly fewer objects to recompile, and hence the script completes faster.
You will find the fix listed in MOS Note: 2523220.1 – Database 19 Release Updates and Revisions Bugs Fixed Lists as well.
More information?
Please find a complete upgrade followed by a conversion into a PDB here:
- Oracle Database Upgrade Guide 12.2: Upgrade Scenarios for Non-CDB database to a PDB on a CDB
- MOS Note: 2000882.1 – ORA-01405 while running “noncdb_to_pdb.sql”
- MOS Note: 2523220.1 – Database 19 Release Updates and Revisions Bugs Fixed Lists
- Bug 25809128
–Mike
Hi Mike!
We are trying to convert our 12.2.0.1 non-CDB database to a PDB, and during the execution of noncdb_to_pdb.sql we saw that the session spent almost all the time on recompilation. Tracing that session showed it was actually executing SQL_ID d1s02myktu19h (begin dbms_utility.validate(:1,:2,:3,:4); end;) and waiting on the “single-task message” event.
18:56:41 DOC>#
18:56:41 SQL>
18:56:41 SQL> DECLARE
18:56:41 2 /* threads pls_integer := &&1;*/
18:56:41 3 threads pls_integer := 32;
18:56:41 4 BEGIN
18:56:41 5 utl_recomp.recomp_parallel(threads);
18:56:41 6 END;
18:56:41 7 /
PL/SQL procedure successfully completed.
Elapsed: 00:39:12.86
19:35:54 SQL>
19:35:54 SQL> SELECT dbms_registry_sys.time_stamp('utlrp_end') as timestamp from dual;
TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END 2018-12-10 19:35:54
It seems to me there is some bug here, because 39 minutes for recompilation of objects is too much!
Could you please help us find the cause of the long dbms_utility.validate / “single-task message” waits?
Thank you in advance!
Sergei,
I can’t do this via the blog. And I know how painful SRs are – but PLEASE PLEASE PLEASE open an SR for this and send me the number (ideally via email to firstname.lastname@oracle.com, because then I can follow up with you much more easily). I know that problem from several cases – but so far it is treated as “it does what it does”.
I’m willing to help you – but we need all information collected in an SR first.
Thanks,
Mike
Hi Mike,
Before running the non-CDB to PDB conversion script, should the backup you take be of the CDB database or of the non-CDB, so that you can repeat the script once the error is resolved?
In my case I’m upgrading an EBS 11.2.0.4 Database to 19.3.0
Regards
Edwin
Hi Edwin,
you should have a backup from before the upgrade. When you invoke noncdb_to_pdb.sql, you have plugged in already. So you can’t revert from this stage anyway.
Of course, you could take a backup from the point in time when the upgrade had been finished.
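A guaranteed restore point taken on the non-CDB before the upgrade is one way to cover that earlier stage (a sketch; the restore point name is made up, and this only helps before you plug in):

```sql
-- Before the upgrade (requires a fast recovery area):
CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;

-- To go back (with the database mounted):
-- FLASHBACK DATABASE TO RESTORE POINT before_upgrade;
-- ALTER DATABASE OPEN RESETLOGS;

-- Drop it once the migration succeeded, so it stops retaining flashback logs:
-- DROP RESTORE POINT before_upgrade;
```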
And please take into account what the EBS teams defined as processes and guidelines:
https://mikedietrichde.com/2020/06/30/collection-of-ebs-upgrade-information-for-oracle-database-19c/
Cheers,
Mike
PS: In our experience so far, the script can take easily 3 hours or more for an EBS upgrade. There is a patch available which skips unnecessary operations – but it hasn’t been backported to 19c yet as far as I’m aware (bug 31344170).
Hi Mike.
How can the utlrp.sql script be resumed if it failed during the non-CDB to PDB step?
Hi Mahesh,
you need to rerun the entire noncdb_to_pdb.sql – it is rerunnable since 12.2.0.1.
Otherwise, please open an SR with Support.
Cheers,
Mike
Hi Mike
I have a question for you.
We have non-CDB 12.2.0.1.0 instances and we need to upgrade to 19.10.0.0.0 and to a PDB structure.
Which is the best practice to do this?
Autoupgrade?
dbca?
first upgrade to 19 and then convert from non-CDB to PDB, or vice versa?
Thank you very much
Hi Barbara,
autoupgrade – clearly.
It will upgrade your database at first – and set a guaranteed restore point to protect you from issues happening during the upgrade.
If you’d like to, you could let autoupgrade also plugin your database into a CDB you created.
See here:
https://mikedietrichde.com/2020/05/20/autoupgrade-and-plug-in-to-a-cdb-with-a-single-command/
Cheers,
Mike
Hi Mike, I have done AutoUpgrades on Windows (bumpy, but they ended up working) and I’m now doing it on Solaris.
This time the upgrade completed without problems, and the create PDB step too, but the noncdb_to_pdb.sql conversion script hit an ORA-600 error. Here is an excerpt:
SQL> set wrap ON
SQL> set xmloptimizationcheck OFF
SQL>
SQL> shutdown immediate
pluggable database closed
SQL> alter pluggable database "MYDB" open read write force instances=all;
alter pluggable database "MYDB" open read write force instances=all
*
ERREUR à la ligne 1 :
ORA-00600: internal error code, arguments : [kcftis_internal: enq],
[30dd0a5f], [3], [kpdba.c], [10801], [dbsdrv.c], [11838], [], [], [], [], []
I know I should open an SR, but I just wanted to know if you have seen such an error or blogged about it already.
Thanks Mike
I did a quick search on MOS but I would need to see the entire trace and alert.log.
First question is whether you are upgrading to 19.12.0 or an older RU?
If it is older, try 19.12.0 before you open an SR.
If then the problem persists, please open an SR, upload alert.log, trace file(s), “opatch lsinventory” and especially the log of the noncdb_to_pdb.sql.
In case you did it with AutoUpgrade, it will be in your logging directory of AutoUpgrade.
Cheers,
Mike
Thank you Mike ,
I am auto-upgrading from a 12.1 non-CDB to 19.9 on Solaris with the latest version of the jar.
Source
DB Compatible 12.1.0.0.0/ DB Version 12.1.0.2.0
Operating System SunOS / Edition EE
Destination : 19.8
Config file
=====================
#Global configurations
global.autoupg_log_dir=/export/home/oracle/UPGRADE_TO_19c/UPG_logs
###################
# Database number 1
###################
upg1.dbname=MYDB
upg1.start_time=NOW
upg1.source_home=/usr1/oracle/product/12.1.0.2/db_1
upg1.target_home=/usr1/oracle/product/19.0.0/db_1
upg1.sid=MYDB
upg1.log_dir=/export/home/oracle/UPGRADE_TO_19c/UPG_logs/MYDB
upg1.upgrade_node=server4
upg1.target_version=19.8
upg1.target_cdb=SU14CDB
upg1.run_utlrp=yes
upg1.source_tns_admin_dir=/usr1/oracle/product/12.1.0.2/db_1/network/admin
upg1.timezone_upg=yes
upg1.restoration=yes
All violations are resolved and the PDB is showing open, but the noncdb_to_pdb.sql had run almost everything before reopening the PDB with that ORA-600 error.
SQL> show pdbs
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 MYDB                           READ WRITE NO
Thank you
I don’t see the ORA-600 in your comment, but I guess and hope you did open an SR.
You can try going to a much newer RU, such as 19.12.0, instead of the more than one year old 19.8.0.
Thanks,
Mike
Hi Mike
I was running noncdb_to_pdb.sql in my newly created pluggable database, created from a manifest file, but after over 3 hours I decided to kill it off. I could not see any work being done on the database.
Now, when I come to rerun it I get the following
ORA-600 [kqldsyu1], [2], [0], [3780740], []
ORA-06512 at SYS.DBMS_UTILITY
I have raised an SR with Oracle but trying to see if I can find a way to workaround/fix this issue.
As I used the MOVE command in my PDB creation I would need to carry out a subsequent restore and upgrade to get back into this position. Not insurmountable but I’d rather avoid it if I could.
Were you aware of any ORA-600 related issues with rerunning noncdb_to_pdb ?
Thanks in advance
Hi Allan,
then restore your PDB – I hope you used COPY and not NOCOPY, since we recommend using NOCOPY only in cases where you have a valid backup you can restore from.
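For illustration, the difference at plug-in time (names and paths below are placeholders, not taken from this case):

```sql
-- COPY duplicates the datafiles into the CDB's structure;
-- the source files stay untouched and can be reused:
CREATE PLUGGABLE DATABASE mydb USING '/tmp/mydb.xml' COPY
  FILE_NAME_CONVERT = ('/u01/oradata/MYDB/', '/u01/oradata/CDB1/mydb/');

-- NOCOPY reuses the files in place; there is no way back without a backup:
-- CREATE PLUGGABLE DATABASE mydb USING '/tmp/mydb.xml' NOCOPY;
```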
Second, most likely it was busy with recompilation.
And finally, make sure you upgrade to 19.12 or newer, since the recompilation part of noncdb_to_pdb.sql runs much faster due to an important fix.
Cheers,
Mike
Hi Mike,
As far as I know, Oracle (re)compiles objects when they are needed. Why do we need to compile them during the downtime, while running noncdb_to_pdb?
In one of our test migrations, we had issues compiling invalid objects that use invalid DB links (similar to bug 18593819, but on Linux). As a test, we even compiled one of those objects manually (alter materialized view xxx compile) without success: the session hangs.
So we commented out the call to utlrp in @?/rdbms/admin/loc_to_common3.sql (called by noncdb_to_pdb) and the conversion was successful. Afterwards we manually compiled all objects of the ORACLE_MAINTAINED users.
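(For reference, such a manual recompilation of the Oracle-maintained schemas could look roughly like this; it is just a sketch, not our exact script:)

```sql
-- Recompile only invalid objects in every Oracle-maintained schema:
BEGIN
  FOR u IN (SELECT username
            FROM   dba_users
            WHERE  oracle_maintained = 'Y') LOOP
    DBMS_UTILITY.compile_schema(schema => u.username, compile_all => FALSE);
  END LOOP;
END;
/
```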
After the downtime ended and the app team recreated the DB links, I expect that compilation will work.
What do you think?
Kind regards,
Sabin Ilie.
Hi Sabin,
I think this is the issue we discussed at the DWH GL. Sorry for getting back so late via the blog but I accumulated a huge backlog of comments I tried to clean up. Please drop me an email anyway regarding the topic when your time allows: Mike.Dietrich … oracle.com.
Cheers
Mike