A customer of mine hit an issue recently when upgrading to Oracle Database 12.2.0.1. They did everything correctly but ran into a ton of issues raised by the Data Guard Broker. The rule is: when you upgrade, disable the Data Guard Broker. But I can’t blame the customer, as this “rule” is well hidden in the documentation.
When you upgrade, disable the Data Guard Broker
First of all, the Data Guard Broker is required if you would like to administer your databases in Oracle Enterprise Manager Cloud Control; otherwise you can’t switch over or fail over within Enterprise Manager. But once you approach a database upgrade, please disable the Data Guard Broker for the duration of the upgrade, and re-enable it afterwards.
In an Oracle Data Guard configuration the Broker is always the boss. If you do something the Broker does not like – and you did it via SQL*Plus instead of the Broker’s DGMGRL command-line tool – the Broker will revert your change. Bet on it.
Therefore you must disable the Data Guard Broker when you upgrade your primary database. Otherwise the Broker will interfere with the upgrade and cause trouble.
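For illustration, here is a minimal sketch of the disable/enable sequence. The Broker manual for your release is the authoritative source; this just shows the general shape:

```sql
-- Before the upgrade, from DGMGRL connected to the primary:
DGMGRL> DISABLE CONFIGURATION;

-- Then stop the Broker processes on primary and standby (SQL*Plus):
SQL> ALTER SYSTEM SET DG_BROKER_START=FALSE;

-- After the upgrade, restart the Broker and re-enable the configuration:
SQL> ALTER SYSTEM SET DG_BROKER_START=TRUE;
DGMGRL> ENABLE CONFIGURATION;
```

Keep the Broker configuration files (the *.dat files) safe in the meantime – the re-enable relies on them.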
Where is this documented?
The crucial point is that you most likely won’t find this piece of information in the documentation unless you have studied the Data Guard Broker manual carefully. Reading through the steps to upgrade a database in a physical standby environment, you won’t find a hint.
Thanks to my colleagues from the MAA / Data Guard team, we updated the documentation:
- If you are using the Oracle Data Guard broker to manage your configuration, follow the instructions in the Oracle Data Guard Broker manual for information about removing or disabling the broker configuration.
It will guide you to this section explaining clearly how to disable (and enable afterwards) the Data Guard Broker while you upgrade your primary database.
How to upgrade your database with a physical standby in place?
I won’t replicate the documentation, as the Data Guard documentation is usually very good and precise. But it is important that you don’t skip the part about disabling the Broker.
In brief, a very short summary:
- Disable the Data Guard Broker configuration
- Install the new software on the standby host and patch it with the newest RU
- Install the new software on the primary host as well and patch it with the newest RU
- Download the most recent preupgrade.jar (for instance the September 2017 one) and execute it on your primary
- Shut down everything on the standby and restart the listener etc. with the new environment
- Mount the physical standby and start redo apply
- Upgrade the primary database as explained in the Database Upgrade Guide
- Once the upgrade has completed, the standby has been upgraded via redo apply as well
- Enable the Data Guard Broker configuration
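The standby-side part of the summary above might look like this in SQL*Plus – a sketch, assuming a physical standby started from the new Oracle Home:

```sql
-- On the standby, started from the new Oracle Home:
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```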
An additional comment
I received this comment via LinkedIn, and it’s worth mentioning:
Having performed many such upgrades, I have found the safest method is to defer log shipping from the Primary to stop alerts and keep the Standby shut down until after the Primary is successfully upgraded. Only after upgrade mount the Standby and enable configuration (enable shipping and start redo apply). That way a failed Primary upgrade would not be replicated to the Standby.
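The deferred log shipping the commenter describes can be done like this – a sketch, assuming LOG_ARCHIVE_DEST_2 is the destination pointing to the standby:

```sql
-- On the primary, before the upgrade: stop shipping redo to the standby
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER;

-- After the primary has been upgraded successfully: resume shipping
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
```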
You’re talking about the 12.2 upgrade here. Do you have any information about broker problems with the application of the 171017 bundle patch?
Do you mean general broker issues? No, unfortunately not.
Thank you for all your useful information regarding the Oracle Database, I often refer to your website for assistance in my daily DBA duties.
I am writing to let you know I have been testing a 12c Data Guard upgrade to 18c. We currently have Streams in our architecture, so we are not ready to move to 19c, where Streams is desupported. I followed your instructions and the referenced documentation to disable Data Guard prior to the upgrade. However, the three times I have tried to re-enable Data Guard, following the steps on your site, I get a message from DGMGRL that there is no configuration found. I made sure that DGMGRL was using the 18c binaries. I verified that all the .dat files were available, I tested using the backed-up .dat files, and none of them worked. All three times I had to rebuild the Data Guard configuration. Unfortunately, after filing an SR with Oracle requesting the upgrade documentation from 12c to 18c for Data Guard, it does not exist. The referenced documentation is for the upgrade to 12c. Can you help me?
Do you have an SR open for this?
I can’t diagnose such things just from a description. But I can look into an SR to see and understand what went wrong.
I had one open previously and was referred to an older document for the upgrade to 12c. I work in a highly classified environment, so I am unable to provide trace files or much beyond high-level information. I was hoping to get either a reference to a newer document for the 18c Data Guard upgrade or a referral to a website.
I looked at the SR, and I agree that this looks like a Broker issue.
But I see also that you’ve rebuilt the environment now?
Yes, I have gotten everything to work. One thing I did differently was to start the observer before I tried to re-enable Data Guard. Also, I restored the *.dat files from the backup taken prior to the upgrade, replacing the old ones. I’m not sure which action, or whether both, allowed me to enable the Data Guard configuration.
Thank you for all your useful information. I just thought this piece of information would be useful to the readers: this step needs to be done on the standby, otherwise you will not be able to use srvctl commands:
$ORACLE_HOME/bin/srvctl upgrade database -d <db_unique_name> -o <new_oracle_home>
I am trying to see if you can suggest a solution for my problem at work. Our databases are currently on Oracle 18c with RHEL 6.8. We are planning to upgrade to Oracle 19c with RHEL 8.x. I can’t find an easy way to do this without skipping RHEL 7.x. I can’t upgrade to Oracle 19c first because it is not certified on RHEL 6.8, and I can’t upgrade the OS to RHEL 8 first because Oracle 18c is not certified on RHEL 8. I have standby DBs (without Data Guard Broker) and wonder if you can suggest an upgrade path using the standby DBs. Thanks for your help.
My recommendation would be OL7 instead, for multiple reasons.
First, it is supported for many years. Furthermore, it allows you to operate all the releases you have on the same machine.
Do you get fresh hardware or are you planning to do this on the existing HW?
If the latter is true, this would be my recommendation:
1. Upgrade the standby machine to OL7. As you can’t do an OS upgrade with all the Oracle software in place, you need to wipe it anyway.
2. Bring the standby back to life.
3. Do a switchover
4. Repeat the same on the production host
5. Do a switchover
6. Now upgrade to 19c on PROD
7. Bring back the standby in the new home and have it synched
8. If you want OL8, repeat the same steps another time
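Since this environment runs without the Broker, the switchovers in the steps above would be done via SQL*Plus. A sketch, using the 12c-and-later single-command syntax – the db_unique_name "boston" is a made-up example:

```sql
-- On the primary: verify first, then switch over to the standby
SQL> ALTER DATABASE SWITCHOVER TO boston VERIFY;
SQL> ALTER DATABASE SWITCHOVER TO boston;

-- On the new primary (boston):
SQL> ALTER DATABASE OPEN;

-- On the new standby (the former primary):
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```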
Is this still applicable? From https://docs.oracle.com/en/database/oracle/oracle-database/19/upgrd/non-cdb-to-pdb-upgrade-guidelines-examples.html#GUID-F664B4A1-0E41-4480-8FD0-361C8F41C046 it looks like AutoUpgrade completes Oracle Data Guard upgrades.
The version taking care of the Broker (disabling it, copying the definition files, enabling it again) is actually not “live” yet but will be in a few weeks.
Thanks for your feedback – cheers,
Hi Mike,
Due to an issue during the upgrade to 19c, if we flash back the primary database, start the upgrade again after fixing the issue, and then start the standby, what will be the impact on the standby, given that the primary DB was flashed back once during the upgrade process?
If the incarnation counted up, the new incarnation will be propagated to the standby.
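You can check the incarnation history on both sides with a query like this (a sketch):

```sql
SQL> SELECT INCARNATION#, RESETLOGS_CHANGE#, STATUS
     FROM V$DATABASE_INCARNATION
     ORDER BY INCARNATION#;
```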
We are trying to upgrade a non-CDB 12c database to CDB 19.10 using autoupgrade.jar, which converts the non-CDB to a CDB/PDB after the upgrade, and we have a 12c physical standby in place. As the DBID of the primary changes after the non-CDB to PDB conversion, the original 12c physical standby that we upgraded to 19c after the primary upgrade is not syncing. Is there a way to upgrade the non-CDB 12c primary and the existing physical standby to CDB/PDB using autoupgrade.jar, or do we have to do a manual upgrade and then convert from non-CDB to PDB? We are not using the DG Broker.
Yes, this is unfortunately expected at the moment. When your database is in ASM, an ASM alias file is needed, which adds the ASM aliases for the files in the previous environment. Please see our deck “Virtual Classroom Seminar – Migration to Multitenant”: https://mikedietrichde.com/videos/ – the picture gets you to the slides. There you will find the standby plugin process and the most important MOS notes.
Sorry for the inconvenience – we are working to support this as well. But as it needs actions on the standby site’s ASM, this is not trivial from AutoUpgrade’s perspective.
Sound advice as usual. Reading the references, though, I think this relates to upgrades to 12.2 specifically, from non-12 releases.
We are planning a 12.1.0.2 OCT2020 BP to 19.10 upgrade with Active Data Guard and Maximum Availability. The Broker config is stored in ASM.
Was planning to:
- stop log apply
- make DG MaxPerformance (not sure I can disable the config otherwise)
- make DG MaxAvailability
- mount the standby and apply logs
- open the standby read-only once all redo is applied
Of course, if that’s not needed for 12.1 -> 19.10, that would save me some steps 🙂
Yes, this should work.
Just completed my upgrade after several months’ delay. You don’t need to change from MaxAvailability to MaxPerformance.
The following link is a very good summary.
Always great info, Mike – thank you. This info seems to be specific to upgrading to 12.2; would it still apply to 19c?
We are planning to do 12.1 to 19.10 in the coming months and have a Maximum Availability Active Data Guard environment.
If Data Guard is in MaxAvailability, will it let you disable the config, or would it need changing to MaxPerformance first?
I was planning to do:
- stop log apply, set to MaxPerformance, and disable the DG config as described above
- upgrade prod + srvctl upgrade
- re-enable the DG config
- stop the standby, srvctl upgrade, and mount
- start log apply and, when caught up, open the standby read-only
- change the DG mode back to MaxAvailability
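The protection-mode juggling in a plan like this could look as follows in DGMGRL – a sketch, run against the primary:

```sql
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MaxPerformance;
DGMGRL> DISABLE CONFIGURATION;
-- ... perform the upgrade ...
DGMGRL> ENABLE CONFIGURATION;
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;
```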
Yes, it applies to 19c, too.
Have you tried doing a rollback/restore of the upgrade with Data Guard in place?
Just to clarify “Have you tried doing a rollback/restore of the upgrade with Data Guard in place?”: I am referring to a scenario where the primary and the DR have already been upgraded but with the GRP still in place. My initial thought is to disable Data Guard, roll back the primary with -restore, and then roll back the DR, assuming you have created a GRP on it pre-upgrade.
please see our Fallback Seminar:
And for the demo:
yes, we did – and this is something we will cover with our Virtual Classroom Seminar in November as well.
I have a question about flashback on a physical standby. I find all sorts of information on these blogs about flashing back the primary, but if that is replicating to the standby, how do I flash back the standby database after I flash back the primary database to the pre-upgrade restore point?
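One common approach, sketched below, is to stop redo apply on the standby and flash it back to an SCN at or before the primary’s restore point SCN. The SCN shown is a placeholder; look it up on the primary via V$RESTORE_POINT, and treat the MOS notes on flashing back a standby as the authoritative source:

```sql
-- On the standby: stop redo apply
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-- Flash back to an SCN at or before the primary's restore point SCN
-- (1234567 is a placeholder)
SQL> FLASHBACK DATABASE TO SCN 1234567;

-- Restart redo apply; the standby resynchronizes with the flashed-back primary
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```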
How about the SPFILE on the DR – will that be ‘upgraded’ as well? When the PRIMARY gets upgraded, a new 19c SPFILE is created; will it do the same automagically on the DR 🙂 or is this a manual step?
please see Daniel’s post: