COMPATIBLE is an almost mystic parameter. It has a default setting for each release. But if you try to find more information about what it really does, you won't be very happy. And in reply to my previous blog post about whether you need to change COMPATIBLE when you apply an RU, I received the following question: When and how should you change COMPATIBLE?
What does COMPATIBLE do?
To find an answer to this question, I first consulted the documentation. And I found this:
Setting COMPATIBLE ensures that new features do not write data formats or structures to disk that are not compatible with the earlier release, preventing a future downgrade. Features that require a higher value of COMPATIBLE to work correctly may be restricted or disabled to ensure downgrades are possible. The COMPATIBLE parameter specifies the Oracle version number that the database disk format must be compatible with. The database can be downgraded to the version specified in the COMPATIBLE parameter or any later version.
That is a good explanation. COMPATIBLE determines the data formats and structures on disk. This may apply, for instance, to the internal format of the redo logs or the data file headers. In addition, features seem to depend on it. And – this is the most important fact to take note of – if you change it, you won't be able to downgrade anymore.
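A quick way to check what your database currently runs with (standard commands, nothing release-specific assumed):

SQL> show parameter compatible
SQL> select name, value from v$parameter where name = 'compatible';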
And the Database Upgrade guide gives you even more insights.
Furthermore, I queried MOS with terms such as "COMPATIBLE" or "COMPATIBLE features". But honestly, I found a lot of things, and not what I was looking for. Indeed, I read results from the community pages where people had exactly this question: Which features depend on COMPATIBLE?
When you change COMPATIBLE …
What happens when you change COMPATIBLE? First of all, you can only set it to a higher value; since Oracle 9i you can't revert it to a lower value. The ALTER DATABASE RESET COMPATIBILITY command has not existed for over a decade. Hence, once changed, you are not able to revert to the previous value.
Changing COMPATIBLE requires a restart of the database:

alter system set COMPATIBLE='19.0.0' scope=spfile;
shutdown immediate
startup mount
You can't adjust COMPATIBLE while the database is up and running. And you can't have different COMPATIBLE settings on different instances in a RAC environment.
When you change it, it most likely will adjust the structure of the controlfiles, the redo logs and the data file headers. And in the database, you may now have access to new features such as the online move of data files.
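The online move of data files, which arrived with Oracle 12.1, is a good example. A minimal sketch (the file paths are made up):

-- Move a data file while the database stays open:
ALTER DATABASE MOVE DATAFILE '/u01/oradata/DB1/users01.dbf'
  TO '/u02/oradata/DB1/users01.dbf';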
COMPATIBLE is also used to determine how your database acts to the "outside", i.e. how it interacts with an application. Having COMPATIBLE set to 12.1.0 in an Oracle 18c environment should allow the database to behave as if it were a 12.1 database.
But still, where is the list of which features exactly depend on COMPATIBLE?
Unfortunately, there is no such list to share. It would be good to have one, but the Oracle database code is a complex thing. We are working on something like that for the next releases. Some features are pretty obvious and well documented. For instance, see the long identifiers we introduced in Oracle 12.2. When COMPATIBLE is <12.2, you can't create a table named "THIS_IS_MY_TABLE_AND_IT_HAS_A_VERY_LONG_NAME" (44 bytes), but when COMPATIBLE is >=12.2, then you can.
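You can verify this yourself with a one-liner:

-- Fails with "ORA-00972: identifier is too long" while COMPATIBLE < 12.2;
-- succeeds (identifiers up to 128 bytes) once COMPATIBLE >= 12.2:
CREATE TABLE this_is_my_table_and_it_has_a_very_long_name (id NUMBER);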
When should you change COMPATIBLE?
Now the real question somebody raised after my previous blog post Should you change COMPATIBLE when you apply an RU? was: "When should you change COMPATIBLE?"
And our usual answer is: “Change it 7-10 days after the upgrade IF you will get additional downtime.”
Why is that?
If you change it as part of the upgrade, you won't be able to downgrade anymore. And actually, the database downgrade is by far the simplest way to go back a few days after you upgraded. Guaranteed Restore Points are your database life insurance for issues happening during an upgrade; the database downgrade is your insurance for issues happening after the upgrade.
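To make the first insurance concrete, a minimal sketch (the restore point name is an assumption):

-- Before the upgrade: insurance for issues DURING the upgrade
CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;
-- Falling back discards everything done since the restore point:
-- SHUTDOWN IMMEDIATE
-- STARTUP MOUNT
-- FLASHBACK DATABASE TO RESTORE POINT before_upgrade;
-- ALTER DATABASE OPEN RESETLOGS;

Since flashing back discards all work since the restore point, it is no option once users are back on the system – which is exactly why the downgrade covers the time after the upgrade.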
Especially customers who don't (or can't) test much should at least test the downgrade option.
The downgrade option is simple and fast. It usually is even faster than the database upgrade, as we don't need to remove everything from an upgraded database. But your database will behave as it did before.
Hence, the 7-10 days are an estimate based on experience. And honestly, neither Roy nor I have seen many databases being downgraded in real life after an upgrade. But at least we could convince DBAs and project leads to include the downgrade option as a possible fallback scenario and test it.
Still, remember that you will need additional downtime to change COMPATIBLE afterwards, as the database will need to be restarted. Not every project will get this extra downtime.
And if you ask whether you could run a database for months or years with a lower COMPATIBLE setting, then my answer is: Of course you can. But you will miss a lot of cool features. That is the downside.
Another person on Twitter raised their hand and mentioned that running the database even just for a few days with a lower COMPATIBLE setting may require an extra testing cycle, i.e. database upgraded but with the lower COMPATIBLE setting versus database upgraded with the higher COMPATIBLE setting. While I can see the point, I wouldn't invest too much into testing with the lower setting (see the optimizer section below).
COMPATIBLE and Database Upgrades
During and after a database upgrade, we don't change COMPATIBLE. And neither does the DBUA or the new AutoUpgrade.
There are two exceptions to this rule:
- COMPATIBLE gets adjusted when a forced COMPATIBLE change is needed. This is the case when the release you are upgrading to does not support the COMPATIBLE setting of the source anymore. Simple example: You operate an Oracle 11.2.0.4 database with COMPATIBLE=10.2.0, then it will be pushed up during the upgrade. Or another example: You upgrade an Oracle 9.2 database to Oracle 11.1.0.7. Oracle 11g did not support COMPATIBLE=9.2.0 – hence, there had to be an implicit COMPATIBLE adjustment.
- Be aware when operating with Oracle Multitenant across different releases. Once you unplug from a CDB with a lower COMPATIBLE setting and plug into one with a higher setting, an implicit COMPATIBLE change will happen. And it will block your option to plug back into the source again.
COMPATIBLE and the Optimizer
A very common misunderstanding happens when COMPATIBLE gets mixed up with the Oracle optimizer. Both are independent of each other. As far as I know, an adjustment of COMPATIBLE has no effect on the optimizer and its features. Those are derived from the value of OPTIMIZER_FEATURES_ENABLE. Please see also the SQL Tuning Guide about how to influence the optimizer with the setting of OPTIMIZER_FEATURES_ENABLE.
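So if the goal is to pin optimizer behavior to a previous release, that is done without touching COMPATIBLE at all; a sketch (the release value is just an example):

-- COMPATIBLE stays untouched; only optimizer behavior is pinned:
ALTER SYSTEM SET optimizer_features_enable = '12.1.0.2' SCOPE = BOTH;
-- or for a single test session only:
ALTER SESSION SET optimizer_features_enable = '12.1.0.2';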
How many digits should you use?
That is another typical question I see from time to time. And my advice is: 3.
Why 3? The default uses 3 numbers, for instance '18.0.0'. And 3 numbers are always enough.
Even a marquee feature such as Oracle In-Memory, which got introduced with the so-called patch set 12.1.0.2, did not require adjusting COMPATIBLE in the fourth digit. If you don't believe me, please check MOS Note: 1903683.1 – Oracle Database In-Memory Option (DBIM) Basics and Interaction with Data Warehousing Features:
The COMPATIBLE parameter must be set to at least 12.1.0 to enable the feature, and the RDBMS version must be at least 12.1.0.2.0.
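In other words, on a 12.1.0.2 database the feature switches on while COMPATIBLE keeps its three-digit default; a sketch (pool size and table name are arbitrary examples):

ALTER SYSTEM SET inmemory_size = 1G SCOPE = SPFILE;  -- restart required
ALTER TABLE sales INMEMORY;                          -- after the restart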
Hence, 3 numbers are always enough. And as I wrote here, there's no need to change COMPATIBLE when you apply an RU or RUR.
–Mike
Thanks Mike for this much needed article about this topic, especially as many organizations are upgrading their infrastructure to be in alignment with the Oracle Product Life Cycle Support.
To share my personal experience with you and everybody… as we are upgrading and moving on from Oracle 12cR1 to 18c, we keep the compatibility parameter as is ("12.1.0") to have the ability to "downgrade" if needed (this is kept for a few months until the next scheduled maintenance window). Of course, TEST/QA environments should have the compatibility changed in advance.
Moreover, I tried using the "schema only account" feature in Oracle 18c with "12.1.0" compatibility and it worked fine. On the other hand, when I upgraded a database from 12cR1 to 12cR2 and kept the compatibility parameter at "12.1.0", I couldn't implement online tablespace encryption (which is a 12cR2 feature) and the following error was thrown:
ERROR at line 1:
ORA-00406: COMPATIBLE parameter needs to be 12.2.0.0.0 or greater
ORA-00722: Feature “Online Tablespace Conversion”
So any storage-level features will not work with an older compatibility parameter value.
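For reference, the 12cR2 statement that trips this check is the online encryption command, something of this form (tablespace name and algorithm are examples):

-- Raises ORA-00406/ORA-00722 while COMPATIBLE is still 12.1.0:
ALTER TABLESPACE users ENCRYPTION ONLINE USING 'AES256' ENCRYPT;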
Thanks,
Emad Al-Mousa
Thanks a lot, Emad!
Cheers,
Mike
Hi Mike,
I’m a little skeptical about this recommendation because:
1. 7-10 days of living with a lowered COMPATIBLE parameter doesn't provide much assurance that you'll be fine after updating it to a higher value. The simplest example could be that your month-closing processes, which are probably quite different from the everyday workloads, may happen after you've increased the COMPATIBLE setting. If these processes touch other code paths in the DB engine and cause any bugs to surface, then there's no downgrade option anyway.
2. From the testing perspective, this is a bit of a nightmare to handle. I.e. when upgrading from 12.1.0 to 18.5.0, we'd normally test the applications with COMPATIBLE=18.5.0, but now we should upgrade to 18.5.0 and leave COMPATIBLE=12.1.0 for a few days (an untested combination)? This is not OK to me unless the whole testing is done twice (I've yet to meet a developers' team that would be happy about this), once with each DB release/COMPATIBLE setting combination.
I’d rather support a multi-option recommendation like this:
A) Ensure the quality of testing (database size/workload types/concurrency comparable to prod) is sufficient to be comfortable without the downgrade option, and set the COMPATIBLE parameter to the highest level immediately after the upgrade.
B) Do the whole testing twice, once with the upgraded DB and lower COMPATIBLE setting and the other time with the upgraded DB and the upgraded COMPATIBLE setting, then live with the lower COMPATIBLE setting after the upgrade long enough to see that all types of workloads (i.e. month closing too) complete fine (this could be around 30 days).
C) For less critical databases (ones that can tolerate some crashes and some downtime) I’d apply your recommendation: 7-10 days with lower COMPATIBLE setting after the upgrade, and the testing could be done on the higher COMPATIBLE setting alone.
Maris
Hi Maris,
while I see all your points, let me comment:
1. This is IMHO usually driven by the optimizer. And the optimizer is independent of COMPATIBLE. In my experience, I haven't seen issues coming up during testing because COMPATIBLE was kept or increased. After doing database upgrades and migrations for more than 15 years now, I'd say that 99.5% of issues for sure are not bound to COMPATIBLE. And for the remaining 0.5% I'm not 100% sure – but pretty sure.
2. Hence, testing. Yes, others have raised this point as well. But when I look at the things happening instantly when you change COMPATIBLE, I wonder a bit what the extra test results will dig out. Of course, we can argue that a change in the redo log structure or an update to the data file headers may introduce change. But I have my honest doubts that you'll see any impact in testing. Going deeper here, as far as I know, no matter how you set COMPATIBLE, the underscore and regular parameters, as long as they are not bound to a feature which depends directly on COMPATIBLE, will change anyway just with the new executable.
Just to put this into the correct light:
1. I don’t claim my point of view is either the only or the correct one. It is just how I see things.
2. I value the ability for a seamless fallback way more than the hypothetical option that something can go wrong or behave differently because of COMPATIBLE. In my experience, this is driven mostly by optimizer behavior changes or new features, as we saw several times in the past. And this is not driven by COMPATIBLE.
But thanks again for your input – I really value your opinion based on real-world experience a lot. And it is good that others can read it here as well, as I don't claim that mine is the correct or only one 🙂
Thanks, Maris!
Thanks a lot Mike and Maris, it is really informative 🙂
Welcome 🙂
Cheers,
Mike
Hi Mike,
Can you give an example of the kind of scenario where a downgrade would need to be performed? I’m talking about a few hours or days after the upgrade where transactions have happened, so it’s not feasible to restore from the pre-upgrade backup. What type of thing might be going wrong? I’m thinking it must be the sort of thing that won’t be triggered in a week when the COMPATIBLE gets changed up, or it’s not really any advantage. But I can’t think of an example.
And one more question about COMPATIBLE. We’re hitting a couple bugs in our upgrade testing. In general, does it seem unlikely that keeping COMPATIBLE set at the previous level would be a way to work around bugs?
Thanks for your insight.
Karen
Hi Karen,
an example from the real world:
Customer upgraded to an 11.2 version and tested only with a subset of data due to the huge size of the database.
They went live. A few days after, they ran a once-per-week job on the database. It took so brutally long (and much longer than before) that they couldn't process loading the trucks waiting in the yard outside. A decision had to be taken: we downgrade. Losing data was not an option of course. Then in test (now with the real set of data) an optimizer bug got discovered. Not their fault of course – but if they'd tested with a real data set, they would have found it during testing already. And downgrading the optimizer parametrization didn't solve it.
Just an example – luckily I haven't seen too many databases being really downgraded in real life. But you should be prepared, in my opinion.
Regarding your COMPATIBLE question:
I doubt that COMPATIBLE affects the cases where you hit bugs – unless you rely on a new feature which depends on COMPATIBLE. But I'd guess you don't mean that. Hence, a clear "No" – regardless of how you set COMPATIBLE, you will see them.
Cheers,
Mike
There are a few issues here in my mind about the impact of changing compatible at a later date.
Compatible parameter is there to allow a downgrade in case we encounter an optimizer bug. Therefore it is generally agreed that the parameter can be kept at a lower level for a quick rollback, where a fix forward isn’t available.
However, the compatible setting introduces new features. I'll give you an example – Oracle Text indexes. Their functionality changes from 12.1 to 12.2. Therefore, when the compatible value is set, an index rebuild will create a different type of index. This also impacts performance. As others have mentioned, changing the compatible parameter at a later date should not be a replacement for thorough testing. Both steps should be tested.
As you said, the changes that the compatible parameter makes are NOT documented; we do not know what will happen, so how do we know we will not hit a bug? So the method described above, where you just go ahead later and set it, is in my opinion a high-risk strategy.
I agree with the others who suggest full testing with both settings, as they are in effect 2 different versions of the database. Also, work through the new features sections of the documentation to look for test case scenarios.
Hi Sonya,
I see your point – and it would be easier if we documented all embedded changes 🙁
But in order to prevent optimizer changes, OFE (OPTIMIZER_FEATURES_ENABLE) and not COMPATIBLE is what's needed.
This can even be used on the fly without impacting the downgrade at all.
Therefore, I would favor testing with OFE settings but still leaving COMPATIBLE alone for the recommended period of time.
And to figure out the OFE impact, SQL Performance Analyzer can be used.
I see your point with TEXT indexes – and we have another "gem" with SPATIAL as well. But this is clearly our fault, as these changes are not documented 🙁
Cheers,
Mike
Mike,
If COMPATIBLE is set to 19.0.0, Exachk reports this as FAIL.
FAIL => Database parameter COMPATIBLE should be set to recommended value
instance_vesion = 19.0.0.0.0 and compatible = 19.0.0
This is with the latest version of Exachk, 19.2.0_20190717. I am ignoring this recommendation. Soon Exachk, ORAchk and TFA are going to be merged into AHF, which is currently in beta.
Thanks,
Arun
Thanks Arun – I will feed this back and check with the Exachk guys.
Cheers,
Mike
Hi Mike,
I attended the 19c session in the Dubai office. I was doing my upgrade to 19c from 11gR2 and all steps were successful, but when I try to reconfigure my EBS 12.1.3 application while running autoconfig on the database node I am getting the below error…
./adbldxml.pl
Starting context file generation for db Tier …
ERROR: Unable to set CLASSPATH
JDBC Driver jars and zips are missing in $ORACLE_HOME/jdbc/lib directory ..
The files present there are as below:
simplefan.jar
ojdbc8.jar
ojdbc8dms.jar
ojdbc8_g.jar
ojdbc8dms_g.jar
Kindly let me know if you can provide any document for E-Business Suite.
Thanks
Aejaz
Hi Aejaz,
this is beyond my expertise – I know only the “EBS Interoperability” notes.
Please check with Oracle Support.
Thanks,
Mike
Hi,
And if I set compatible to a lower version – let's say 18.0.0 – does it mean that all desupported features of Oracle 19c will be available?
No, unfortunately not.
COMPATIBLE turns on kernel features and behavior in the database. But as you deploy a fresh database or upgrade another one to 19c, all the things in 19c will happen, such as the Multimedia and Streams API removal etc.
Cheers,
Mike
Thank you, Mike.
Just to make it clear – my use case (one of many 🙂 ):
upgrade to 19c with compatible = 12.2.0.1.
There are changes in the ALL_, DBA_ and USER_ARGUMENTS views in 18c.
Should I change my code accordingly? Or will I have the old 12.2.0.1 ALL_ARGUMENTS view with more rows, not only top-level items?
I mean – if I've already tested my application with 12.2.0.1 – when I upgrade to 19c, even with compatible = 12.2.0.1, I have all the changes of 18c and 19c, correct?
Thanks
Evgenia
Hi Evgenia,
changes in the dictionary views are documented in the Upgrade Guide – and none of them are controlled by COMPATIBLE. As these views get adjusted during the database upgrade, COMPATIBLE won't have any effect on them. When you use dictionary views in your application, you should make sure that this doesn't raise an error.
Thanks,
Mike
Hi Mike,
It makes sense..
Thank you for the explanation!
Evgenia
Hi Mike,
thanks for the great clarification – it helps a lot to avoid some misunderstandings.
Just one little side note – the parameter COMPATIBLE must be set with scope=spfile, otherwise this error appears:
SQL> alter system set COMPATIBLE='19.3.0';
alter system set COMPATIBLE='19.3.0'
*
ERROR at line 1:
ORA-02095: specified initialization parameter cannot be modified
SQL> alter system set COMPATIBLE='19.3.0' scope=spfile;
SQL> shu immediate
SQL> startup
Cheers!
Dejan
Hi Dejan,
you are … unfortunately 🙂 … VERY right, and I have no idea why I documented this wrongly.
I changed it – thanks!
Take care,
Mike
Hi Mike,
A really good article here.
I have a question that has been bugging me.
Can I set the Compatible parameter when I am installing a database?
For instance: I want to install an 18c database with the Compatible parameter set to 11.2.0 at database creation.
Is it possible to do this, or do I have to install 11g and then upgrade it to 18.9?
Greetings,
Gabriele Rosato
Hi Gabriele,
you install 18.3.0 (the on-prem base release), you apply the 18.9.0 RU to this home, and then you create a database with DBCA. Before you kick it off, DBCA has an "Advanced Parameter" button. Hit it and change COMPATIBLE to the value you want. I guess this will only work out when you do a CUSTOM creation, not the OLTP/DWH prebuilt databases.
Cheers,
Mike
Unfortunately, 19.3 SE on Windows has a wicked bug in apex_rest / http_utl, and we have to downgrade.
The tricky thing here is:
How do you downgrade to a version later than the one specified in the COMPATIBLE parameter?
Currently the compatibility of our 19.3 database is 12.1. So how do we tell the DB to downgrade to 18.3 and not to 12.1?
Is it just a matter of changing COMPATIBLE from 12.1 to 18.3, and then the DB will be downgraded from 19.3 to 18.3? Thanks
Hi Mohamad,
this is very simple – but I confess, not obvious.
Leave COMPATIBLE as it is.
Start the database with STARTUP DOWNGRADE in the 19c home.
Make sure you spool into a file, and that you set "SET SERVEROUT ON TERMOUT ON ECHO ON TIMING ON".
Then invoke catdwgrd.sql.
Once it has completed, start the database with STARTUP UPGRADE in the 18c home.
SET TERMOUT ON ECHO ON TIMING ON
(not SERVEROUT, as this does not work in this stage)
Spool into a file.
Run catrelod.sql from the 18c home. Once it has completed, start the database normally.
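Put together as a script, the flow might look like this (the spool file names are placeholders):

-- In the 19c home:
SPOOL /tmp/catdwgrd.log
SET SERVEROUT ON TERMOUT ON ECHO ON TIMING ON
STARTUP DOWNGRADE
@?/rdbms/admin/catdwgrd.sql
SPOOL OFF
-- Then switch to the 18c home:
SPOOL /tmp/catrelod.log
SET TERMOUT ON ECHO ON TIMING ON
STARTUP UPGRADE
@?/rdbms/admin/catrelod.sql
SPOOL OFF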
Cheers,
Mike
Hi Mike, great article. Per my understanding, any RMAN backups from before the compatibility change will no longer be valid post-change, and a fresh full backup is required. Could you please comment on this?
Hi Ravi,
why shouldn’t you be able to restore a backup you have taken before an upgrade or before a COMPATIBLE change anymore?
I’m not aware of an issue here.
But please, if you know more, shed some light.
Cheers,
Mike
Thanks for your response. Based on your response, is it possible to restore a full backup taken from before this change and roll forward through incremental backups taken after the change to the current state?
Yes, you could do this of course.
Cheers,
Mike
Hi Mike, can you please explain this sentence: "COMPATIBLE is also used to determine how your database acts to the "outside", i.e. how it interacts with an application." We are planning to upgrade our database (v12.2.0 with compatible 11.2.0) to 19c. And our code (Natural and C code) is 20+ years old.
Do you think this will be a concern?
regards
/Ulf
Hi Ulf,
“it depends” 🙂
Actually you need to test.
But regarding the COMPATIBLE setting, it really determines how the database acts. And this applies to OCI (the Call Interface, not cloud) as well.
The biggest issue or change in such cases is usually the SQL*Net client protocol. So you may need to pay extra attention to these things:
https://mikedietrichde.com/2017/04/25/what-happens-to-password_versions-during-an-upgrade-to-oracle-12-2/
https://mikedietrichde.com/2017/04/24/having-some-fun-with-sec_case_sensitive_logon-and-ora-1017/
But generally, I’d expect an application which ran fine with 12.2.0.1 and COMPATIBLE 11.2.0 to work with 19c and COMPATIBLE 11.2.0 as well.
Cheers,
Mike
Hello Mike,
With the recent announcement that the Blockchain tables feature got backported to 19c as of RU 19.10.0, it is good to see that you already made a clear statement with an addition to this blog post regarding your view on the mandatory change needed for the parameter "compatible" in order to be able to make use of it.
Of course it's very nice from Oracle that some features, introduced in an Innovation Release like 20c or 21c, get backported to Oracle 19c. We saw that already happening, for example, with the SQL Macros feature which got backported as of Release Update 19.6.0; however, until now we did not have to touch the parameter "compatible" in order to make use of it.
I guess the change in the parameter "compatible" is needed because Blockchain tables write data formats or structures to disk which are not compatible with an earlier 19c RU.
This means that the documentation about the parameter "COMPATIBLE" might also need to be changed, as it currently states:
"Setting COMPATIBLE ensures that new features do not write data formats or structures to disk that are not compatible with the earlier release"
I suppose that as of now the statement "earlier release", in the definition of the parameter "COMPATIBLE", might need to be changed to "earlier release or earlier Release Update"?
PS: By the way, it seems you want to stick to the year 2020, as your addition says "Addition Jan 29, 2020" ;-). I suppose it should have been "Addition Jan 29, 2021" (unless you have a crystal ball of course ;-)).
Greetings,
Chris
Hahaha – thanks Chris for the 2020/2021 🙂
One of my parallel processes was still stuck in 2020 …
And yes, my recommendation stays unless your entire environment is fully on 19.10.0 – leave COMPATIBLE on the default.
Cheers,
Mike
However, if you’re running with COMPATIBLE=19.10.0, then you can no longer run expdp or impdp.
ORA-39021: Database compatibility version 19.10.0.0.0 is not supported.
You get the same problem if you’re running COMPATIBLE=19.4.0 on a 19.4.0 database.
There is patch 30828205 that you can apply to remove the expdp compatibility restriction, and it’s been produced for 19.0.0, 19.4.0, 19.9.0. But don’t use the later versions, as they can only be applied to that specific release. Use the original 19.0.0 patch.
Apparently the patch has been made permanent in 21.0.0.
Hi Peter,
thanks a lot for this hint – and you are absolutely right.
I will release a blog post this week about it …
Cheers,
Mike
Blog post released.
Patch 30828205 has been rolled into 19.11 (and above) as a permanent fix.
I should have added, that this is documented in Doc ID 2610939.1.
Adding 3 bugs that might happen with COMPATIBLE changes in 19c.
I have encountered the 2nd and 3rd. The 2nd bug caused logical corruption and a big headache.
I am considering the approaches below:
1. Test 2 rounds (old, then new COMPATIBLE) in the UAT env, and advance it in PROD after the upgrade has been in place for a period.
2. Test only with the new COMPATIBLE in the UAT env, and advance it in PROD immediately after the upgrade.
The first one is safe, but a longer time is required and it might not be acceptable to the APP team.
The 2nd one makes a DB downgrade impossible.
Bugs:
1. UPDATE DML Almost Twice as Slow for ROW STORE COMPRESS ADVANCED Table in 12.2 after Changing COMPATIBLE Parameter from 11.2.0.4 To 12.2.0.1 (Doc ID 2604718.1)
2. Bug 32290445 : CN: ORA-00600: [6856], [0], [2] WHILE RECOVERING TRANSACTION AGAIN OLTP COMPRESSION
3. DB compatible changed from 11.2.0.4 to 19.1.0 and the OGG extract process hanged due to Bug 32035893 : Integrated Extract is not moving after changing DB compatible parameter from 12.1.0.2 to 19.0.0
Hello Mike,
is there any difference between compatible=19.0.0 and values other than 19.0.0, like e.g. 19.5.0?
And if not, shouldn't Oracle simply not care about it? I always get into trouble with expdp / impdp, duplicate and such things:
ORA-19690: backup piece release 19.5.0.0.0 incompatible with Oracle release 19.0.0.0.0
This is annoying.
Hi Gerrit,
this has nothing to do with the database's COMPATIBLE but with expdp/impdp. If you'd like to avoid this, use VERSION=19.0.0.0.0 – then you won't receive this error. And technically, there is no difference, except for the fact that the database and its tools react to settings such as 19.4.0 or 19.5.0. This is why we recommend leaving it at the default.
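For example, a full export pinned to the base version might look like this (directory and dump file names are assumptions):

expdp system DIRECTORY=dp_dir DUMPFILE=full.dmp FULL=Y VERSION=19.0.0.0.0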
Cheers,
Mike
Hi Mike, so just to confirm, should COMPATIBLE be set to 19.0.0, or to 19.3.0 (as this is the base release of Oracle 19c)?
Hi Melvyn,
it should always remain at the default of the release – and in the 19c case it will be:
19.0.0
https://docs.oracle.com/en/database/oracle/oracle-database/19/refrn/COMPATIBLE.html
Cheers,
Mike
Hi all, if we are using compatible mode in 12c (set to 11g), does that mean we can use all the deprecated features of 11g in 12c?
Hi Dinesh,
with COMPATIBLE=11.2.0 for instance in a 12c database, you can only use the features which were available at the 11.2 compatible setting level. Whether a feature is deprecated or not does not affect the usability of the feature.
Please read more here:
https://mikedietrichde.com/2016/03/23/what-does-deprecated-mean-and-desupported/
Cheers,
Mike
Howdy All!
We have Compatible = 19.0.0
Looking at the Data Guard Broker manual, page 2-4, it says COMPATIBLE should be 19.1! – "If you want to use the latest features"
Opened a case for clarification, for obvious reasons.
Hi,
this would only be needed if you intend to use the Blockchain tables and Password Rollover features – I am not recommending changing COMPATIBLE in 19c unless you change it in ____ALL_____ your environments.
Cheers
Mike
Hello Mike,
As part of our migration to ExaCS (from 11.2.0.4 on premises to 19c on ExaCS), we are being told that GoldenGate Integrated Extract (and not Classic) must be used in order to have the benefit of reduced downtime for the migration. More specifically, we are being told that the initial load can only happen without downtime provided we use Integrated Extract. However, we are also being told that we are not in a position to use Integrated Extract on our source databases (11.2.0.4 on premises) because our databases have the compatible parameter set to 11.2.0.0. It needs to be 11.2.0.4 (so using the 4th digit).
Is this correct? If yes, does this change the guidance in your above excellent note?
Hi Narendra,
I have never heard of such a requirement – and I have not seen any doc saying that you need to increase COMPATIBLE to 11.2.0.4 to use any feature. But that does not mean that it is not true. A document or MOS note reference would be useful so I could double-check.
Cheers
Mike
Hello Mike,
recently we upgraded our Oracle Enterprise 12.x database to Oracle Enterprise 19c.
We ran in compatibility mode for a week or so with no issues, and then changed to full 19c.
Our software developer now informs us that he needs the 12.x/19c compatibility for another few weeks.
What would be the steps required to achieve this?
Sorry for the bother
J.
Hi J.
there is no way back. The only way would be to export with Data Pump, using the VERSION=12.1.0 or VERSION=12.2.0 parameter, and then reimport into a new database. You can't reset COMPATIBLE.
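A sketch of that route (schema, directory and file names are made up):

# Export from the 19c database in a 12.2-compatible format:
expdp system SCHEMAS=app_user DIRECTORY=dp_dir DUMPFILE=app.dmp VERSION=12.2.0
# Import into the freshly created 12.2 database:
impdp system SCHEMAS=app_user DIRECTORY=dp_dir DUMPFILE=app.dmp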
Thanks,
Mike
Good morning Mike,
thank you for the reply.
Regards
JC
Welcome JC!
Cheers,
Mike
I agree that testing your applications after an upgrade is essential, but only BEFORE you change compatible. 7-10 days seems like a reasonable amount of time to wait before choosing to change compatible, but only if you are sure that all expected workloads have been processed.
Once you do take the plunge and increase compatible, I don't see the point in a further round of testing though, because even if testing shows up problems for some reason, at that point you can't do much about it, as downgrading has been removed as an option.
Hi Mike,
just a short question regarding compatibility parameter.
On Exadata DB-Nodes (19c) following is set:
NAME TYPE VALUE
———————————— ———– ——————————
compatible string 19.0.0.0
On the ASM instances we have:
NAME COMPATIBILITY DATABASE_COMPATIBILITY
———- ——————– ————————-
DATA 19.0.0.0.0 11.2.0.4.0
RECO 19.0.0.0.0 11.2.0.4.0
SPARSE 19.0.0.0.0 12.1.0.2.0
We intend to upgrade the database_compatibility to 19.0.0.0, so:
ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.asm' = '19.0.0';
ALTER DISKGROUP DATA SET ATTRIBUTE 'compatible.rdbms' = '19.0.0';
Now my questions:
1. Is it recommended?
2. What are the prerequisites?
3. Is it possible to change online?
4. If not, what is the correct procedure to update the parameter?
Regards
Detlef
Hi Detlef,
see here:
https://docs.oracle.com/en/database/oracle/oracle-database/19/ostmg/diskgroup-compatibility.html#GUID-5D3AFE24-3BDB-49EE-AC19-8C37D42D2818
This describes the default values set for ASM – and I don't see a reason why you should not be able to change it online.
You will miss some features if compatible is lower – and of course, once fully set to 19c, you can't run an 11.2.0.4 database in this ASM disk group anymore, as far as I am aware.
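And afterwards you can verify the attributes with the standard ASM view:

SELECT name, compatibility, database_compatibility
  FROM v$asm_diskgroup;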
Cheers
Mike
Hello Mike,
first of all, thank you for all the helpful information!
I know this post here is a bit older, but since I think the topic fits, I’ll ask my questions here. I hope that’s ok.
In order to be able to downgrade, we did not increase the value for compatible along with the upgrade for the databases we upgraded to 19c. But now we want to increase the value to 19.0.0 so that certain features can be used by development.
First we implemented this for our test environments; now it's up for the production database. Regarding tests and their scope, we have so far been guided by the opinion that changing the value for compatible does not change the code of the optimizer, and thus no change to execution plans is to be expected. This is different from what would happen if the optimizer_features_enable parameter were changed.
This opinion is based, for example, on the statement by Connor McDonald here: https://asktom.oracle.com/pls/apex/asktom.search?tag=compatible-initora-parameter#9546681600346304695.
Therefore, we have limited ourselves to functional tests, which, by the way, have all been positive.
But after reading this post: https://scomp.ly/EroO4gB?trk=public_post-text I am unsure whether this is correct, because the core statement there is that there is a "version of the optimizer enforced by COMPATIBLE".
What is your opinion on this? Should we expand the scope of our testing to check for possible changes in execution plans?
Cheers,
Nils
Hi Nils,
I know Ernst quite well (the author of the blog post you are quoting) – he is a serious guy.
And since the blog post is written in German, I don’t need a translation.
Still, I am unclear about the finding and the solution he had:
==> (in translation) "that, for the optimizer version enforced via COMPATIBLE, indexes on the tables were missing. With the help of the SQL Tuning Advisor and the Access Advisor, the statement was analyzed further. The advisors' output substantiated this suspicion. After analyzing the suggestions and comparing them with their own findings, the recommended indexes were created."
The first part I read as if certain indexes weren’t considered by the optimizer (which would be a plan change). But the second part reads as if certain indexes had to be created.
Since he stated that both, PROD and TEST/DEV were 19c databases, and hopefully at the exact same patch level, I am not convinced.
IF changing COMPATIBLE for the exact same release with the exact same data, specs, machine etc. has such an effect, then this is a bug to me. Changing the COMPATIBLE parameter does not have such an effect as described, unless it would turn a specific feature on or off in the code. But as of now, this would be news to me.
Hence, as long as I haven’t seen an SR or bug describing this, I stay with my statement (which basically is the same as Connor explained it, too).
Cheers,
Mike