Different block sizes when you plug in?

Here's an interesting case Roy came across a week ago with a customer in the US. The customer received a plug-in violation because of a different DB_BLOCK_SIZE between the source and the destination database. This happened regardless of whether it was an unplug/plug operation or plugging a non-CDB into a CDB. Hence the question came up: can you have different block sizes when you plug in?

It is possible – but you may need a tiny workaround to make it happen.

The documentation, as far as I can see, was not helpful. MOS Note 1935365.1 (Multitenant Unplug/Plug Best Practices) wasn't very helpful either.


You can have databases with different block sizes (DB_BLOCK_SIZE) ranging from 2k to 32k. We generally recommend 8k as the standard, but historically it is not unlikely that you have databases with other block sizes.

But what happens when you'd like to plug in a database (or PDB) with a different DB_BLOCK_SIZE than the destination?

For a quick test I create a new CDB in Oracle 12.2.0.1 with a DB_BLOCK_SIZE of 16k.
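The creation listing itself isn't shown here. Assuming the CDB was created (for example via DBCA with db_block_size=16384 among the initialization parameters), a quick check like this confirms the non-standard block size – a minimal sketch, not part of the original listing:

```sql
-- Confirm the block size the CDB was created with.
-- DB_BLOCK_SIZE is fixed at database creation time and cannot be changed later.
SELECT value
  FROM v$parameter
 WHERE name = 'db_block_size';
-- In this test setup the query returns 16384.
```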

Then I unplug the PDB and do the compatibility check:

alter session set container=cdb$root;
Session altered.

alter pluggable database FRANZ_PDB11 close;
Pluggable database altered.

alter pluggable database FRANZ_PDB11 unplug into '/home/oracle/FRANZ_PDB11.xml';
Pluggable database altered.

This creates the XML manifest file separately. Then I run the compatibility check in the destination CDB:

SET SERVEROUTPUT ON

DECLARE
  compatible CONSTANT VARCHAR2(3) :=
    CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
           pdb_descr_file => '/home/oracle/FRANZ_PDB11.xml',
           pdb_name       => 'FRANZ_PDB11')
      WHEN TRUE THEN 'YES'
      ELSE 'NO'
    END;
BEGIN
  DBMS_OUTPUT.PUT_LINE(compatible);
END;
/

PDB_PLUG_IN_VIOLATIONS

The result is "NO" (as it quite often is with this call). But I can check PDB_PLUG_IN_VIOLATIONS for further details; this view can be purged since Oracle Database 12.2.0.1. And here it tells me:

column message format a50
column status format a9
column type format a9
column con_id format 9

select con_id, type, message, status
  from PDB_PLUG_IN_VIOLATIONS
 where status<>'RESOLVED'
 order by time;

CON_ID TYPE    MESSAGE                                            STATUS
------ ------- -------------------------------------------------- -------
     2 WARNING Database option CATJAVA mismatch: PDB installed    PENDING
               version NULL. CDB installed version 12.2.0.1.0.

     2 WARNING Database option JAVAVM mismatch: PDB installed     PENDING
               version NULL. CDB installed version 12.2.0.1.0.

     2 WARNING Database option OLS mismatch: PDB installed        PENDING
               version NULL. CDB installed version 12.2.0.1.0.

     2 WARNING Database option XML mismatch: PDB installed        PENDING
               version NULL. CDB installed version 12.2.0.1.0.

     1 ERROR   system tablespace block size(16384) does not match PENDING
               configured block sizes, plug in is not allowed

     1 WARNING Database option CATJAVA mismatch: PDB installed    PENDING
               version NULL. CDB installed version 12.2.0.1.0.

     1 WARNING Database option JAVAVM mismatch: PDB installed     PENDING
               version NULL. CDB installed version 12.2.0.1.0.

     1 WARNING Database option OLS mismatch: PDB installed        PENDING
               version NULL. CDB installed version 12.2.0.1.0.

     1 WARNING Database option XML mismatch: PDB installed        PENDING
               version NULL. CDB installed version 12.2.0.1.0.

     1 WARNING CDB parameter db_block_size mismatch: Previous     PENDING
               16384 Current 8192

     1 WARNING CDB parameter pga_aggregate_target mismatch:       PENDING
               Previous 400M Current 120M

In addition to the usual nonsense warnings (see here) you see the really important one above: the ERROR for the SYSTEM tablespace having a non-matching block size. An attempt to plug in will fail without further treatment.

The solution

The solution is quite simple, though not obvious:
In the destination CDB you have to define a DB_nK_CACHE_SIZE parameter matching the DB_BLOCK_SIZE of the database you'd like to plug in.

alter system set DB_16k_CACHE_SIZE=100M scope=both;
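Before attempting the plugin you can verify that the additional buffer cache was actually allocated – a sketch, assuming the destination CDB uses an 8k standard block size (exact component names may vary by version):

```sql
-- Check that a 16k buffer cache now exists next to the default (8k) cache.
-- The non-standard cache must stay configured as long as 16k tablespaces
-- live in this CDB.
SELECT component, current_size/1024/1024 AS size_mb
  FROM v$sga_dynamic_components
 WHERE component LIKE '%buffer cache%';
```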

Afterwards you can plug in. The copy operation happens automatically:

create pluggable database FRANZ_PDB11 using '/home/oracle/FRANZ_PDB11.xml'
file_name_convert=('/u02/oradata/FRANZ/FRANZ_PDB11','/u02/oradata/CDB2/FRANZ_PDB11');
Pluggable database created.

alter pluggable database FRANZ_PDB11 open;
Pluggable database altered.

alter pluggable database franz_pdb11 save state;
Pluggable database altered.

show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 5 FRANZ_PDB11			  READ WRITE NO

Done, and with no errors. That's why I consider this warning in PDB_PLUG_IN_VIOLATIONS useless and obsolete.
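If you'd like to clean up the leftover entries afterwards: as mentioned above, PDB_PLUG_IN_VIOLATIONS can be purged since Oracle Database 12.2.0.1 – a sketch using the documented procedure:

```sql
-- Remove resolved and stale entries for this PDB from PDB_PLUG_IN_VIOLATIONS
EXEC DBMS_PDB.CLEAR_PLUGIN_VIOLATIONS(pdb_name => 'FRANZ_PDB11');
```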

And the Oracle Cloud (OCI-Classic, DBCS, DBaaS)?

As expected, the default setting for a database in Oracle OCI-Classic is 8k:

[oracle@MIKE18C ~]$ sqlplus / as sysdba

SQL*Plus: Release 18.0.0.0.0 - Production on Mon May 7 19:57:21 2018
Version 18.2.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.


Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.2.0.0.0

SQL> show parameter db_block_size

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
db_block_size			     integer	 8192

Nothing unexpected or unusual – just be aware of it when you plug in.

Finally …

At the end of my journey I found the following articles on MOS. But as I didn't attempt to plug in due to the error condition, I hadn't seen the ORA- error – and without the error you have to know what to look for on MOS. Of course I could have found the second note below if I had searched MOS upfront.

–Mike

9 thoughts on “Different block sizes when you plug in?”

  1. Hello Mike,
    just in case, have you seen something similar before? We created a 16k CDB with the April RU 18.2.0.0.180417, but we cannot create PDBs.

    NAME                       TYPE    VALUE
    db_block_size              integer 16384
    enable_pluggable_database  boolean true

    CREATE PLUGGABLE DATABASE czebe ADMIN USER PDBADMIN IDENTIFIED BY ROLES=(CONNECT) file_name_convert=NONE;

    Error starting at line : 1 in command –
    CREATE PLUGGABLE DATABASE czebe ADMIN USER PDBADMIN IDENTIFIED BY ROLES=(CONNECT) file_name_convert=NONE
    Error report –
    ORA-00604: error occurred at recursive SQL level 1
    ORA-29913: error in executing ODCIEXTTABLEFETCH callout
    ORA-29400: data cartridge error
    KUP-04095: preprocessor command /u01/app/oracle/product/18.1.0.0/dbhome_1/QOpatch/qopiprep.bat encountered error “/u01/app/oracle/product/18.1.0.0/dbhome_1/QOpatch/qopiprep.bat: line 67: /u01/app/oracle/product/18.1.0.0/dbhome_1/rdbms/log/stout_36558.txt: Permission denied

    00604. 00000 – “error occurred at recursive SQL level %s”
    *Cause: An error occurred while processing a recursive SQL statement
    (a statement applying to internal dictionary tables).
    *Action: If the situation described in the next error on the stack
    can be corrected, do so; otherwise contact Oracle Support.

    No problems with 8k CDBs and PDBs.

    Best regards,
    Dagmar

  2. Thanks, Mike,
    alter system set DB_16k_CACHE_SIZE fails with ORA-00380: cannot specify db_16k_cache_size since 16K is the standard block size
    I’ll open an SR.

    • Hello Dagmar,

      I did the following:
      18.3.0
      Created a fresh CDB with DBCA (custom database, 16k block size chosen, no PDBs created, character set AL32UTF8).
      It took quite a while to create (over 1 hour!).
      Then:

      SQL*Plus: Release 18.0.0.0.0 - Production on Thu Jul 26 14:51:00 2018
      Version 18.3.0.0.0

      Copyright (c) 1982, 2018, Oracle. All rights reserved.

      Connected to:
      Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
      Version 18.3.0.0.0

      SQL> create pluggable database pdb1 admin user adm identified by adm file_name_convert=('/u02/oradata/HUGO/pdbseed','/u02/oradata/HUGO/pdb1');
      Pluggable database created.

      SQL> alter pluggable database pdb1 open;
      Pluggable database altered.

      SQL> show pdbs

          CON_ID CON_NAME                       OPEN MODE  RESTRICTED
      ---------- ------------------------------ ---------- ----------
               2 PDB$SEED                       READ ONLY  NO
               3 PDB1                           READ WRITE NO

      SQL> show parameter db_block_size

      NAME                                 TYPE        VALUE
      ------------------------------------ ----------- -----
      db_block_size                        integer     16384

      Mike

      • Dagmar,

        can you check if this is true in your case:

        I have seen this when a customer has role separation enabled. If the Linux groups of the GI owner or the RDBMS owner do not have write permissions for $ORACLE_HOME/rdbms/log, this issue appears.
        We already have an open bug 26301387 for this. Most of the symptoms match but, to be completely sure, I would like more information to fully confirm whether it's a duplicate of the above bug.

        The workaround is to grant the required permissions: chmod 775 $ORACLE_HOME/rdbms/log

        Can you let me know if this is the issue?

        Thanks,
        Mike

  3. We do have role separation (user grid for clusterware and ASM, user oracle for databases).
    ll -d $ORACLE_HOME/rdbms/log
    drwxr-xr-x 31 oracle oinstall 20480 Jul 26 15:34 /u01/app/oracle/product/18.1.0.0/dbhome_1/rdbms/log

    chmod 775 $ORACLE_HOME/rdbms/log

    ll -d $ORACLE_HOME/rdbms/log
    drwxrwxr-x 31 oracle oinstall 20480 Jul 26 16:35 /u01/app/oracle/product/18.1.0.0/dbhome_1/rdbms/log

    czbc_exa3> CREATE PLUGGABLE DATABASE czebe ADMIN USER PDBADMIN IDENTIFIED BY… ROLES=(CONNECT) file_name_convert=NONE;

    Pluggable database CZEBE created.

    ===> Thank you very much, Mike, you saved us many days with support!!!

    Strange that it did not happen with 8k block size DBs, but only with 16k – who knows what is different there.
    The bug is apparently not (yet) customer-visible.

    Best regards,
    Dagmar
