COMPATIBLE is an almost mystic parameter. It has a default setting for each release. But if you try to find more information about what it really does, you won’t be very happy. And in reply to my previous blog post about whether you need to change COMPATIBLE when you apply an RU, I received the following question: When and how should you change COMPATIBLE?
What does COMPATIBLE do?
To find an answer to this question, I first consulted the documentation. And I found this:
Setting COMPATIBLE ensures that new features do not write data formats or structures to disk that are not compatible with the earlier release, preventing a future downgrade. Features that require a higher value of COMPATIBLE to work correctly may be restricted or disabled to ensure downgrades are possible.
The COMPATIBLE parameter specifies the Oracle version number that the database disk format must be compatible with. The database can be downgraded to the version specified in the COMPATIBLE parameter or any later version.
That is a good explanation. COMPATIBLE determines the data format and structures on disk. This may apply, for instance, to the internal format of the redo logs or the data file headers. In addition, features seem to depend on it. And – this is the most important fact to take note of – once you change it, you won’t be able to downgrade anymore.
And the Database Upgrade guide gives you even more insights.
Furthermore, I queried MOS with terms such as “COMPATIBLE” or “COMPATIBLE features“. But honestly, I found a lot of things – just not what I was looking for. Indeed, I read results from the community pages where people had exactly this question: Which features depend on COMPATIBLE?
When you change COMPATIBLE …
What happens when you change COMPATIBLE? First of all, you can only set it to a higher value. Since Oracle 9i, you can’t revert it to a lower value. The ALTER DATABASE RESET COMPATIBILITY command has not existed for over a decade. Hence, once changed, you are not able to revert to the previous value.
Changing COMPATIBLE requires a restart of the database. Since it is a static parameter, you change it in the spfile first and then bounce the instance:
alter system set COMPATIBLE='19.0.0' scope=spfile;
shutdown immediate
startup
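Once the database is open again, it doesn’t hurt to verify the new setting – just a quick sanity check using standard views (nothing environment-specific here):
show parameter compatible
select name, value from v$parameter where name = 'compatible';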
You can’t adjust COMPATIBLE while the database is up and running. And you can’t have different COMPATIBLE settings on different instances in a RAC environment.
When you change it, it most likely will adjust the structure of the control files, the redo logs and the data file headers. And in the database, you may now have access to new features such as the online move of data files.
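Just as an illustration of such a feature: with a sufficiently high COMPATIBLE setting (12.1.0 or above), you can relocate a data file while the database is open. This is only a sketch – the file names are made up:
alter database move datafile '/u01/oradata/DB1/users01.dbf' to '/u02/oradata/DB1/users01.dbf';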
COMPATIBLE is also used to determine how your database acts to the “outside”, i.e. how it interacts with an application. Having COMPATIBLE set to 12.1.0 in an Oracle 18c environment should allow the database to behave as if it were a 12.1 database.
But still, where is the list of exactly which features depend on COMPATIBLE?
Unfortunately, there is no such list to share. It would be good to have one, but the Oracle database code is a complex thing. We are working on something like that for the next releases. Some features are pretty obvious and well documented. For instance, take the long identifiers we introduced in Oracle 12.2. When COMPATIBLE is <12.2 you can’t create a table named “THIS_IS_MY_TABLE_AND_IT_HAS_A_VERY_LONG_NAME” (44 bytes), but when COMPATIBLE is >=12.2, then you can.
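A minimal sketch of what this looks like in practice (the exact error text may differ slightly between releases):
-- On a 12.2 or newer database with COMPATIBLE still below 12.2:
create table THIS_IS_MY_TABLE_AND_IT_HAS_A_VERY_LONG_NAME (id number);
-- => ORA-00972: identifier is too long
-- After raising COMPATIBLE to 12.2.0 (and restarting), the same statement succeeds.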
When should you change COMPATIBLE?
Now the real question somebody raised after my previous blog post Should you change COMPATIBLE when you apply an RU? was: “When should you change COMPATIBLE?”
And our usual answer is: “Change it 7-10 days after the upgrade IF you can get additional downtime.”
Why is that?
If you change it as part of the upgrade, you won’t be able to downgrade anymore. And actually, the database downgrade is by far the simplest way to go back a few days after you upgraded. Guaranteed Restore Points are your database life insurance for issues happening during an upgrade; the database downgrade is your insurance for issues happening after the upgrade.
Especially customers who don’t (or can’t) test much should at least test the downgrade option.
The downgrade option is simple and fast. It usually is even faster than the database upgrade as we don’t need to remove everything from an upgraded database. But your database will behave as it did before.
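For completeness, and heavily hedged: a very rough outline of a manual non-CDB downgrade, as the Database Upgrade Guide describes it (details and script behavior vary per release – please follow the guide for the real procedure):
-- In the new Oracle Home, with the database started in downgrade mode:
startup downgrade
@?/rdbms/admin/catdwgrd.sql
shutdown immediate
-- Then switch the environment to the old Oracle Home:
startup upgrade
@?/rdbms/admin/catrelod.sql
shutdown immediate
startup
@?/rdbms/admin/utlrp.sql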
Hence, the 7-10 days are an estimate based on experience. And honestly, neither Roy nor I have seen many databases being downgraded in real life after an upgrade. But at least we could convince DBAs and project leads to include the downgrade option as a possible fallback scenario and test it.
Still, remember that you will need additional downtime to change COMPATIBLE afterwards, as the database will need to be restarted. And that extra downtime isn’t available in every project.
And if you ask whether you could run a database for months or years with a lower COMPATIBLE setting, then my answer is: Of course you can. But you will miss a lot of cool features. That is the downside.
Another person on Twitter raised their hand and mentioned that running the database even just for a few days with a lower COMPATIBLE setting may require an extra testing cycle, i.e. database upgraded but with the lower COMPATIBLE setting versus database upgraded with the higher COMPATIBLE setting. While I can see the point, I wouldn’t invest too much into testing with the lower setting (see the optimizer section below).
COMPATIBLE and Database Upgrades
During and after a database upgrade, we don’t change COMPATIBLE. And neither does the DBUA, nor the new AutoUpgrade.
There are two exceptions to this rule:
- COMPATIBLE gets adjusted when a forced COMPATIBLE change is needed. This is the case when the release you are upgrading to does not support the COMPATIBLE setting of the source anymore. Simple example: You operate an Oracle 11.2.0.4 database with COMPATIBLE=10.2.0 – then it will be pushed up. Or another example: You upgrade an Oracle 9.2 database to Oracle 11.1.0.7. Oracle 11g did not support COMPATIBLE=9.2.0 – hence, there had to be an implicit COMPATIBLE adjustment.
- Be aware when operating with Oracle Multitenant across different releases. Once you unplug from a CDB with a lower COMPATIBLE setting and plug into one with a higher setting, an implicit COMPATIBLE change will happen. And it will block your option to plug back into the source again (see the sketch below).
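To picture the Multitenant case, here is a rough sketch – the PDB name and manifest path are made up:
-- Unplug from a CDB running with the lower COMPATIBLE setting:
alter pluggable database PDB1 close immediate;
alter pluggable database PDB1 unplug into '/tmp/pdb1.xml';
-- Plug into a CDB running with a higher COMPATIBLE setting:
create pluggable database PDB1 using '/tmp/pdb1.xml' nocopy;
-- From this point on, PDB1 carries the higher COMPATIBLE and can no longer be plugged back into the source CDB.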
COMPATIBLE and the Optimizer
A very common misunderstanding happens when COMPATIBLE gets mixed up with the Oracle optimizer. Both are independent from each other. As far as I know, an adjustment of COMPATIBLE has no effect on the optimizer and its features. Those are derived from the value of OPTIMIZER_FEATURES_ENABLE. Please see also the SQL Tuning Guide about how to influence the optimizer with the setting of OPTIMIZER_FEATURES_ENABLE.
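If you want to pin the optimizer to the previous release’s behavior while COMPATIBLE is already raised, a hedged sketch – the version string is just an example:
-- Instance-wide:
alter system set optimizer_features_enable='12.1.0.2' scope=both;
-- Or only for a single session, e.g. while testing:
alter session set optimizer_features_enable='12.1.0.2';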
How many digits should you use?
That is another typical question I see from time to time. And my advice is: 3.
Why 3? The default uses 3 numbers, for instance ‘18.0.0‘. And 3 numbers are always enough.
Even a marquee feature such as Oracle In-Memory, which got introduced with the so-called patch set 12.1.0.2, did not require adjusting COMPATIBLE in the fourth digit. If you don’t believe me, please check MOS Note: 1903683.1 – Oracle Database In-Memory Option (DBIM) Basics and Interaction with Data Warehousing Features:
The COMPATIBLE parameter must be set to at least 12.1.0 to enable the feature, and the RDBMS version must be at least 12.1.0.2.0.
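If you want to try that yourself, a minimal sketch on a 12.1.0.2 or newer database is enough – the table name is made up, and the instance needs a restart for the In-Memory pool:
-- A three-digit COMPATIBLE (e.g. 12.1.0) is sufficient for In-Memory:
alter system set inmemory_size=1G scope=spfile;
-- restart the instance, then mark a table for population:
alter table SALES inmemory;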
Hence, 3 numbers are always enough. And as I wrote here, there’s no need to change COMPATIBLE when you apply an RU or RUR.
–Mike
Thanks Mike for this much needed article about this topic, especially as many organizations are upgrading their infrastructure to be in alignment with Oracle Product Life Cycle Support.
To share my personal experience with you and everybody… as we are upgrading and moving on from Oracle 12cR1 to 18c, we keep the compatibility parameter as is (“12.1.0”) to have the ability to “downgrade” if needed (this is kept for a few months until the next scheduled maintenance window). Of course, TEST/QA environments should have the compatibility changed in advance.
Moreover, I tried using the “schema only account” feature in Oracle 18c with “12.1.0” compatibility and it worked fine. On the other hand, when I upgraded a database from 12cR1 to 12cR2 and kept the compatibility parameter at “12.1.0”, I couldn’t implement online tablespace encryption (which is a 12cR2 feature) and the following error was thrown:
ERROR at line 1:
ORA-00406: COMPATIBLE parameter needs to be 12.2.0.0.0 or greater
ORA-00722: Feature “Online Tablespace Conversion”
So any storage-level features will not work with an older compatibility parameter value.
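For context, the kind of statement that raises this error is an online tablespace conversion roughly like the following (the tablespace name is just an example; without OMF you would also need a FILE_NAME_CONVERT clause):
alter tablespace USERS encryption online using 'AES256' encrypt;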
Thanks,
Emad Al-Mousa
Thanks a lot, Emad!
Cheers,
Mike
Hi Mike,
I’m a little skeptical about this recommendation because:
1. 7-10 days of living with a lowered COMPATIBLE parameter doesn’t provide much assurance that you’ll be fine after updating it to a higher value. The simplest example could be that your month-closing processes, which are probably quite different from the everyday workloads, may happen after you’ve increased the COMPATIBLE setting. If these processes touch other code paths in the DB engine and cause any bugs to surface, then there’s no downgrade option anyway.
2. From the testing perspective, this is a little nightmare to handle, i.e. when upgrading from 12.1.0 to 18.5.0, we’d normally test the applications with COMPATIBLE=18.5.0, but now we should upgrade to 18.5.0 and leave COMPATIBLE=12.1.0 for a few days (an untested combination)? This is not OK to me unless the whole testing is done twice (I have yet to meet a developers’ team that would be happy about this), once with each DB release/COMPATIBLE setting combination.
I’d rather support a multi-option recommendation like this:
A) Ensure the quality of testing (database size/workload types/concurrency comparable to prod) is sufficient to be comfortable without the downgrade option, and set the COMPATIBLE parameter to the highest level immediately after the upgrade.
B) Do the whole testing twice, once with the upgraded DB and lower COMPATIBLE setting and the other time with the upgraded DB and the upgraded COMPATIBLE setting, then live with the lower COMPATIBLE setting after the upgrade long enough to see that all types of workloads (i.e. month closing too) complete fine (this could be around 30 days).
C) For less critical databases (ones that can tolerate some crashes and some downtime) I’d apply your recommendation: 7-10 days with lower COMPATIBLE setting after the upgrade, and the testing could be done on the higher COMPATIBLE setting alone.
Maris
Hi Maris,
while I see all your points, let me comment that:
1. This is IMHO usually driven by the optimizer. And the optimizer is independent of COMPATIBLE. In my experience, I haven’t seen issues coming up during testing because COMPATIBLE was kept or increased. After doing database upgrades and migrations for more than 15 years now, I’d say that 99.5% of issues for sure are not bound to COMPATIBLE. And for the remaining 0.5% I’m not 100% sure – but pretty sure.
2. Hence, testing. Yes, others have raised this point as well. But when I look at the things happening instantly when you change COMPATIBLE, I wonder a bit what the extra test results will dig out. Of course, we can argue that a change in redo log structure or an update to the data file header may introduce changes. But I have my honest doubts that you’ll see any impact in testing. Going deeper here, as far as I know, no matter how you set COMPATIBLE, the underscore and regular parameters – as long as they are not bound to a feature which depends directly on COMPATIBLE – will change anyway just with the new executable.
Just to put this into the correct light:
1. I don’t claim my point of view is either the only or the correct one. It is just how I see things.
2. I value the ability for a seamless fallback way more than the hypothetical possibility that something goes wrong or behaves differently because of COMPATIBLE. In my experience, such issues are mostly driven by optimizer behavior changes or new features, as we saw several times in the past. And this is not driven by COMPATIBLE.
But thanks again for your input – I really value your opinion based on real world experience a lot. And it is good that others can read it here as well, as I don’t claim that mine is the correct or only one 🙂
Thanks, Maris!
Hi Mike,
Can you give an example of the kind of scenario where a downgrade would need to be performed? I’m talking about a few hours or days after the upgrade where transactions have happened, so it’s not feasible to restore from the pre-upgrade backup. What type of thing might be going wrong? I’m thinking it must be the sort of thing that won’t be triggered in a week when the COMPATIBLE gets changed up, or it’s not really any advantage. But I can’t think of an example.
And one more question about COMPATIBLE. We’re hitting a couple of bugs in our upgrade testing. In general, does it seem unlikely that keeping COMPATIBLE set at the previous level would be a way to work around bugs?
Thanks for your insight.
Karen
Hi Karen,
an example from the real world:
A customer upgraded to an 11.2 version and tested only with a subset of data due to the huge size of the database.
They went live. A few days later, they ran a once-per-week job on the database. It took so brutally long (and much longer than before) that they couldn’t process loading the trucks waiting in the yard outside. A decision had to be taken: we downgrade. Losing data was not an option of course. Then in test (now with the real set of data) an optimizer bug got discovered. Not their fault of course – but if they’d tested with a real data set, they would have found it during testing already. And downgrading the optimizer parameterization didn’t solve it.
Just an example – luckily I haven’t seen too many databases being really downgraded in real life. But you should be prepared in my opinion.
Regarding your COMPATIBLE question:
I doubt that COMPATIBLE affects the cases where you hit bugs – unless you are using a new feature which depends on COMPATIBLE. But I’d guess you don’t mean that. Hence, a clear “No” – regardless of how you set COMPATIBLE, you will see them.
Cheers,
Mike
There are a few issues here in my mind about the impact of changing compatible at a later date.
The compatible parameter is there to allow a downgrade in case we encounter an optimizer bug. Therefore it is generally agreed that the parameter can be kept at a lower level for a quick rollback where a fix forward isn’t available.
However, the compatible setting introduces new features. I’ll give you an example – Oracle Text indexes. Their functionality changed from 12.1 to 12.2. Therefore, once the compatible value is set, an index rebuild will create a different type of index. This also impacts performance. As others have mentioned, changing the compatible parameter at a later date should not be a replacement for thorough testing. Both steps should be tested.
As you said, the changes that the compatible parameter makes are NOT documented; we do not know what will happen, so how do we know we will not hit a bug? So the method described above, where you just go ahead and set it later, is in my opinion a high-risk strategy.
I agree with the others who suggest full testing with both settings, as they are in effect 2 different versions of the database. Also, work through the new features sections of the documentation to look for test case scenarios.
Hi Sonya,
I see your point – and it would be easier if we documented all embedded changes 🙁
But in order to prevent optimizer changes, it is OFE and not COMPATIBLE that is needed.
It can even be used on the fly without impacting the downgrade option at all.
Therefore, I would favor testing with OFE settings but still leaving COMPATIBLE unchanged for the recommended period of time.
And to figure out OFE impact, SQL Performance Analyzer can be used.
I see your point with TEXT indexes – and we have another “gem” with SPATIAL as well. But this is clearly our fault as these changes are not documented 🙁
Cheers,
Mike
Mike,
If COMPATIBLE is set to 19.0.0, Exachk reports this as FAIL.
FAIL => Database parameter COMPATIBLE should be set to recommended value
instance_vesion = 19.0.0.0.0 and compatible = 19.0.0
This is with the latest version of Exachk, 19.2.0_20190717. I am ignoring this recommendation. Soon Exachk, ORAChk and TFA are going to be merged into AHF, which is currently in beta.
Thanks,
Arun
Thanks Arun – I will pass this feedback on and check with the ExaChk guys.
Cheers,
Mike
Hi Mike,
I attended the 19c session in the Dubai office. I was doing my upgrade to 19c from 11gR2 and all steps were successful, but when I am trying to reconfigure my EBS 12.1.3 application while running AutoConfig on the database node I am getting the below error…
./adbldxml.pl
Starting context file generation for db Tier …
ERROR: Unable to set CLASSPATH
JDBC Driver jars and zips are missing in $ORACLE_HOME/jdbc/lib directory ..
files present there are as below …
simplefan.jar
ojdbc8.jar
ojdbc8dms.jar
ojdbc8_g.jar
ojdbc8dms_g.jar .
Kindly let me know if you can provide any document for E-Business Suite…
Thanks
Aejaz
Hi Aejaz,
this is beyond my expertise – I know only the “EBS Interoperability” notes.
Please check with Oracle Support.
Thanks,
Mike