Many of you noticed that we delivered a new feature with Oracle Database 19.10.0, the January 2021 Release Update. So I’d like to put together a few interesting things from the upgrade and migration perspective regarding Oracle Database 19.10.0 and Blockchain Tables and COMPATIBLE.
What are Blockchain Tables?
First of all, I don’t want to repeat what other people have blogged and written about already. Just go through the blog posts of Connor McDonald, Tim Hall, and others to learn more about this cool feature.
- Connor McDonald about “Blockchain Tables are here in Oracle 19c”
- Tim Hall on Blockchain Tables in Oracle 21c
But the topic I’d like to shed more light on from the upgrade perspective is the set of tasks necessary if you’d like to enable Blockchain Tables in Oracle 19.10.0.
You need to apply a one-off patch. And you must raise COMPATIBLE – an unusual requirement for a Release Update (RU).
Blockchain One-Off Patch 32431413
So at first you need to download patch 32431413 from My Oracle Support.
- Bug 32431413: 19.10 RU FOR ORACLE IS MISSING QCPLK.O WHICH GETS LINKED INTO LIBGENERIC19.A
Without this patch you will receive an ORA-00901: invalid CREATE command when you try to create a blockchain table. As the README of the patch states, you can install it RAC rolling. In my environment I have no RAC, so I get a downtime anyway.
$ cd 32431413/
[CDB2] oracle@hol:~/32431413
$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./

Oracle Interim Patch Installer version 12.2.0.1.23
Copyright (c) 2021, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/oracle/product/19
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/19/oraInst.loc
OPatch version    : 12.2.0.1.23
OUI version       : 12.2.0.7.0
Log file location : /u01/app/oracle/product/19/cfgtoollogs/opatch/opatch2021-02-17_22-57-31PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
And then I’m applying it – without running datapatch, as this step isn’t mentioned in the README.
$ $ORACLE_HOME/OPatch/opatch apply

Oracle Interim Patch Installer version 12.2.0.1.23
Copyright (c) 2021, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/19
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/19/oraInst.loc
OPatch version    : 12.2.0.1.23
OUI version       : 12.2.0.7.0
Log file location : /u01/app/oracle/product/19/cfgtoollogs/opatch/opatch2021-02-17_23-00-41PM_1.log

Verifying environment and performing prerequisite checks...
--------------------------------------------------------------------------------
Start OOP by Prereq process.
Launch OOP...

Oracle Interim Patch Installer version 12.2.0.1.23
Copyright (c) 2021, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/19
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/oracle/product/19/oraInst.loc
OPatch version    : 12.2.0.1.23
OUI version       : 12.2.0.7.0
Log file location : /u01/app/oracle/product/19/cfgtoollogs/opatch/opatch2021-02-17_23-02-34PM_1.log

Verifying environment and performing prerequisite checks...

OPatch continues with these patches:   32431413

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/19')

Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '32431413' to OH '/u01/app/oracle/product/19'

Patching component oracle.rdbms, 19.0.0.0.0...

Patching component oracle.rdbms.rsf, 19.0.0.0.0...
Patch 32431413 successfully applied.
Log file location: /u01/app/oracle/product/19/cfgtoollogs/opatch/opatch2021-02-17_23-02-34PM_1.log

OPatch succeeded.
Ok, step 1 completed. Now on with step 2.
Sinan, a colleague from Oracle in Germany, sent me a ping one night asking whether I may need to change my recommendation now:
In this blog post I clearly recommended, with emphasis, to never change COMPATIBLE within one release. Whether you are on 19.3.0 or 19.28.0, there is no feature and no reason to change COMPATIBLE to anything other than the default, 19.0.0.
Well, I was quite surprised since I didn’t expect that a feature would ever require raising COMPATIBLE within a release. Even Oracle In-Memory, a really important feature, did not require adjusting COMPATIBLE in a database patch set (!!). You can (and should) have COMPATIBLE in a 12.1.0.2 database set to “12.1.0” – and of course you can use In-Memory with no restrictions.
But this here is different. No Blockchain Tables without raising COMPATIBLE.
At least I can see a very positive aspect here: nobody will accidentally enable the feature, as you need to change COMPATIBLE and then restart your database.
When you plan to do this change as part of a database upgrade, please keep in mind that you can’t downgrade anymore once you have raised COMPATIBLE. Furthermore, you can’t even raise COMPATIBLE while a Guaranteed Restore Point (GRP) is in place – and AutoUpgrade, for instance, creates one by default.
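For reference, the change itself follows the usual pattern for static parameters – a minimal sketch, assuming you use an spfile and have checked for GRPs first:

```sql
-- Guaranteed Restore Points block the COMPATIBLE change - check first
SELECT name FROM v$restore_point WHERE guarantee_flashback_database = 'YES';
-- DROP RESTORE POINT <name>;   -- only if you are sure you want to proceed

-- Raise COMPATIBLE in the spfile (irreversible once the instance restarts!)
ALTER SYSTEM SET COMPATIBLE = '19.10.0' SCOPE = SPFILE;

-- The new setting takes effect only after a restart
SHUTDOWN IMMEDIATE
STARTUP
```

Remember: once the instance comes up with the higher COMPATIBLE, there is no way back except via a restore.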
What is the impact when you raise COMPATIBLE to 19.10.0?
The first pitfall I see is in a Multitenant environment. When you have one CDB with COMPATIBLE=19.10.0 and another one with the default, 19.0.0, you can’t unplug and plug PDBs back and forth anymore. As soon as you plug a PDB sailing in with COMPATIBLE=19.0.0 into the Blockchain CDB with 19.10.0, COMPATIBLE gets raised implicitly. No big deal for now – unless you’d like to unplug this PDB and plan to plug it into the original CDB or any other CDB with a COMPATIBLE lower than 19.10.0. This is what I call the silent COMPATIBLE change in Multitenant.
Hence, I would only consider this change when all your environments are on 19.10.0 already. Then you may be able to raise COMPATIBLE on all of them at the same time – of course with the expected restart of all databases. But then also adjust your DBCA templates to ensure that new databases get created with the exact same COMPATIBLE setting; otherwise you’ll end up with an inconvenient mix again one day.
Let me steal Connor’s example and create a blockchain table – but not in CDB$ROOT, as this would fail with ORA-05729: blockchain table cannot be created in root container.
create blockchain table bc1 (d date, amt number)
no drop until 0 days idle
no delete until 31 days after insert
hashing using "sha2_512" version v1;
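Once the CREATE succeeds, you can verify the retention and hashing settings – a quick check via the dictionary view for blockchain tables (a sketch; run it in the PDB where you created the table):

```sql
-- Inspect the blockchain table's retention and hashing settings
SELECT table_name, row_retention, row_retention_locked,
       table_inactivity_retention, hash_algorithm
  FROM dba_blockchain_tables
 WHERE table_name = 'BC1';
```

This is also a handy way to find out later which schemas already use the feature before you plan an unplug/plug or an upgrade.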
And then a customer commented on the blog already (since I wrote about new parameters in 19.10.0 and a default change) that export of the database fails.
Ok, I’d like to test this – hence, I’ll export from my PDB:
$ expdp system/oracle@PDB1 parfile=x.par

Export: Release 19.0.0.0.0 - Production on Wed Feb 17 23:32:11 2021
Version 19.10.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
ORA-39001: invalid argument value
ORA-39021: Database compatibility version 19.10.0.0.0 is not supported.
Ouch. Even though the user schema I’m exporting has nothing to do with the blockchain table I created, Data Pump does not function anymore.
Why is that? This has to do with:
- Bug 30828205 – EXPDP IS FAILING WITH ORA-39021 WHEN COMPATIBLE IS SET AS 19.4.0
- MOS Note: 2656508.1 – Datapump Job Fails With Error ORA-39021 on 19c
The one-off is not yet available for 19.10.0. As far as I can see, it will be included in the 19.11.0 RU – and Roy has already filed a BLR for it on top of 19.10.0.
Luckily, there is a simple workaround: You set VERSION=19.0.0 in your expdp par file, for instance:
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=dumpfile.dmp
LOGFILE=logfile.log
SCHEMAS=SYSTEM
EXCLUDE=STATISTICS
LOGTIME=ALL
METRICS=YES
FLASHBACK_TIME=SYSTIMESTAMP
VERSION=19.0.0
Then it will work fine.
Export with Blockchain Tables
But what happens if you have a blockchain table in the user schema you are exporting?
I’m doing a simple INSERT into my table:
insert into BC1 values (sysdate,1);
commit;
And then I’m gonna export the entire schema:
$ expdp system/oracle@PDB1 parfile=x.par

Export: Release 19.0.0.0.0 - Production on Wed Feb 17 23:45:36 2021
Version 19.10.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
17-FEB-21 23:45:40.561: Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/********@PDB1 parfile=x.par
17-FEB-21 23:45:41.038: W-1 Startup took 1 seconds
17-FEB-21 23:45:43.588: W-1 Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
17-FEB-21 23:45:44.015: W-1 Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
17-FEB-21 23:45:44.061: W-1      Completed 1 DEFAULT_ROLE objects in 1 seconds
17-FEB-21 23:45:44.411: W-1 Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
17-FEB-21 23:45:44.432: W-1      Completed 1 PROCACT_SCHEMA objects in 0 seconds
17-FEB-21 23:45:49.500: W-1 Processing object type SCHEMA_EXPORT/TABLE/TABLE
17-FEB-21 23:46:01.343: W-1      Completed 2 TABLE objects in 14 seconds
17-FEB-21 23:46:01.734: W-1 Processing object type SCHEMA_EXPORT/TABLE/COMMENT
17-FEB-21 23:46:04.288: W-1 Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
17-FEB-21 23:46:07.620: W-1 . . exported "SYSTEM"."BC1"          9.929 KB       1 rows in 0 seconds using direct_path
17-FEB-21 23:46:07.690: W-1 . . exported "SYSTEM"."T1"           2.585 MB   23427 rows in 0 seconds using direct_path
17-FEB-21 23:46:08.104: W-1      Completed 2 SCHEMA_EXPORT/TABLE/TABLE_DATA objects in 0 seconds
17-FEB-21 23:46:08.971: W-1 Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
17-FEB-21 23:46:08.993: ******************************************************************************
17-FEB-21 23:46:08.993: Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
17-FEB-21 23:46:08.995:   /u01/app/oracle/admin/CDB2/dpdump/BB90AC0C81B81CC6E055000000000001/dumpfile4.dmp
17-FEB-21 23:46:09.030: Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Wed Feb 17 23:46:09 2021 elapsed 0 00:00:32
Works. The Restrictions section in the documentation (you need to scroll down a bit, as the link doesn’t hit the exact place) explains:
Oracle Data Pump Export and Import
Blockchain tables are exported and imported as regular tables, without the system-generated hidden columns.
You don’t need to worry as this is documented and expected.
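You can see these system-generated hidden columns yourself when you select them explicitly – a quick sketch using some of the ORABCTAB_% columns documented for blockchain tables:

```sql
-- Hidden columns are not part of SELECT *, but can be queried by name
SELECT orabctab_inst_id$, orabctab_chain_id$, orabctab_seq_num$,
       orabctab_creation_time$, orabctab_hash$
  FROM bc1;
```

Data Pump skips exactly these columns on export, which is why the table arrives on the target side as a regular table without its chain metadata.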
We updated the documentation just recently to list the currently known restrictions with Blockchain Tables.
I think it is still worth playing with this feature – but in an environment where changing COMPATIBLE has no influence on other environments.
Actually, there may be more issues with Blockchain Tables in Oracle Database 19.10.0 and 19.11.0. For instance, an Oracle ACE Director informed us that there may be an issue with STARTUP UPGRADE once you have a blockchain table either in your non-CDB or in your PDB. And this unfortunately is correct – and fixed, as of now, only in Oracle 21c.
- BUG 32517405 – 19.11: RMAN: DUPLICATE PDB ON TARGET CDB FROM SOURCE PDB + BLOCKCHAIN_TABLE FAILED WITH ORA-05748: DATABASE OPEN IN UPGRADE MODE FAILED DUE TO NONEMPTY SYS.BLOCKCHAIN_TABLE$ TABLE
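If you want to check upfront whether a database could run into this before attempting STARTUP UPGRADE, a simple probe of the dictionary table named in the bug should tell you (a sketch; connect as SYSDBA):

```sql
-- A non-zero count means blockchain tables exist, so STARTUP UPGRADE
-- may fail with ORA-05748 on the affected releases
SELECT COUNT(*) FROM sys.blockchain_table$;
```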
Furthermore, our testing showed that there may be an issue with time zone patching as well when you have a blockchain table. The blockchain table itself isn’t the issue, as you can’t create it with datatype TIMESTAMP WITH TIME ZONE – but the dictionary information created seems to be problematic:
- Bug 32780676 – UPGRADE TSTZ: ERROR CODE -5715: ORA-05715: OPERATION NOT ALLOWED ON THE BLOCKCHAIN OR IMMUTABLE TABLE
Stay tuned, and let us know if you have found similar issues as well.
Further Links and Information
- Patch 32431413 from My Oracle Support
- Connor McDonald about “Blockchain Tables are here in Oracle 19c”
- Tim Hall on Blockchain Tables in Oracle 21c
- Should you change COMPATIBLE when you apply an RU?
- New parameters in 19.10.0 and a default change
- Be aware of the silent COMPATIBLE change in Multitenant
I did database beta testing for the XMLDB team for some 10+ years as a customer, and indeed there are features that become available when COMPATIBLE is set to the correct, wanted “sub-level” (at least in 11g)…
Maybe unwanted, but it has existed for a while already…
Actually, if there is one area where I would have expected such hidden changes, it would have been XDB.
This in fact has never been documented or allowed – and hence, I still stand by my statement (but I read your comment with a lot of interest for sure!).
Hi Mike, I’m migrating my 12c database to 19.10 with patches 24150575 “MERGE ON DATABASE RU 19.10.0.0.0 OF 32325728 32325739” and 24048047 “TRACKING BUG TO UPDATE BUNDLEFCP.XML IN 19.10DBRU”, COMPATIBLE=19.0.0. Schema exports with COMPRESSION=ALL are hanging due to 2 of the tables with BLOB columns – it takes 6 hours, while the same script used in 12c only took 3 hours. Haven’t made much progress with MOS – expdp of those 2 tables alone using ACCESS_METHOD=EXTERNAL_TABLE helps, but it’s just a band-aid. The DB is really not that big, 480G. The parallel option doesn’t help since they have BLOBs. Wondering if you had any insight – not planning to increase my COMPATIBLE unless I have to. We’re stuck in a holding pattern until this gets resolved… thanks.
Support needs to explain to you how to enable tracing – because without a proper Data Pump trace we can’t say anything.
Do you have an SR for this?
One of the common band-aids I’ve seen recently was to use:

EXCLUDE=STATISTICS

as the stats import took too long.
But I have no golden magic key unfortunately.