A few days ago I published my quarterly blog post in the series Patching all my environments … and mentioned that datapatch took a bit longer for 19.18.0 than usual. In the alert.log I had nasty error messages and warnings. And Walter (thanks!) immediately commented with some advice.

What are these messages?
Actually, I had seen them before. Since I always create CUSTOM databases with a template, I spotted those warnings for the first time when I was wondering why database creation took so abnormally long with Multitenant. I discovered KGL heap warnings in the alert.log, especially when the PDB$SEED got created.
And now, applying 19.18.0 for my blog post from last week, I had similar messages. When datapatch took quite a bit of time, I started checking the alert.log, an old-fashioned reflex from my Support days. There, I saw these messages and compilations. It looked to me as if compilations which should complete in seconds, or a fraction of a second, took abnormally long.
2023-01-26T22:07:02.512151+01:00
KGL object name :0xaj1zwz5s2vj:ALTER VIEW "SYS"."KU$_FHTABLE_VIEW" COMPILE
2023-01-26T22:07:15.266728+01:00
Memory Notification: Library Cache Object loaded into SGA
Heap size 57437K exceeds notification threshold (51200K)
I complained about it just a little bit.
Relief with Underscores
The good thing about this blog and its readers (you!) is that I learn so many things from you. Of course, it is time-consuming to test things out and write them down. But as the blog is somehow a partial brain dump as well, I like it a lot when you add things, and I can learn and share again. So kudos to Walter from Austria. He pointed me to a MOS note giving some advice.
I’m quoting the MOS note here:
These are warning messages that are not causing process failure. They appear as a result of event messaging mechanism and memory manager introduced starting with 10gR2 database release. As large objects in the shared pool can potentially cause problems this warning threshold was implemented. Items/SQLs which allocate more space than this warning threshold, outputs a warning to the alert log. This warning is only to inform that a given heap object exceeds the defined threshold size and a trace file is generated so that a DBA can check potentially expensive – from shared memory point of view – objects. These are purely warning messages and have no impact on the database functionality, although they are designed to indicate possible tuning opportunities in customers’ applications. The messages do not imply that an ORA-4031 is about to happen immediately unless the size of shared pool is very small.
So in summary, whatever I set to suppress the warnings does not cure the root cause; it only avoids the nasty messages and, especially, the traces being written. As a result, my datapatch run won’t complete faster, but I don’t see the traces anymore.
The two underscores changing the behavior:
- _kgl_large_heap_assert_threshold – maximum heap size before KGL raises an internal error
- _kgl_large_heap_warning_threshold – maximum heap size before KGL writes warnings to the alert log
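A minimal sketch of how such underscore parameters can be changed, assuming you adjust them only on advice of Oracle Support or the MOS note (the values below are just examples, not recommendations):

```sql
-- Example values only; change underscore parameters solely on advice
-- of Oracle Support / the MOS note.
ALTER SYSTEM SET "_kgl_large_heap_warning_threshold" = 83886080 SCOPE = SPFILE;   -- 80 MB
ALTER SYSTEM SET "_kgl_large_heap_assert_threshold" = 1572864000 SCOPE = SPFILE;  -- 1500 MB
-- A restart makes the SPFILE change effective.
```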
What are the defaults, and how can you change them?
At first, let us check the default settings before we tweak the parameters.
- _kgl_large_heap_assert_threshold = 524288000
- _kgl_large_heap_warning_threshold = 52428800
So the first one (…assert…) is set to 500MB, the second one (…warning…) is set to 50MB.
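If you want to verify the defaults in your own environment, a common way is to query the underlying x$ tables as SYS; a sketch (x$ksppi and x$ksppsv are undocumented internals, so treat this as informational only):

```sql
-- Run as SYS: hidden parameter names and their current values
SELECT i.ksppinm  AS parameter,
       v.ksppstvl AS value
  FROM x$ksppi  i,
       x$ksppsv v
 WHERE i.indx = v.indx
   AND i.ksppinm LIKE '#_kgl#_large#_heap%' ESCAPE '#';
```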
For the first one, let me quote the MOS note again:
The default value for _kgl_large_heap_assert_threshold can be too small for objects with a large number of partitions/sub-partitions in which case the value should be increased to avoid the ORA-600 error.
For example setting:
“_kgl_large_heap_assert_threshold” = 1572864000 will raise the threshold for the assert to 1500 MB.
That is important to know and understand. Be aware that with a raised threshold you will potentially see a warning much later. Usually, there is a reason why we ship such settings as defaults. But I did a bit of research and immediately found a related bug where the default got pumped up intentionally:
- 31585319 _KGL_LARGE_HEAP_ASSERT_THRESHOLD MODIFIES TO 1.5G WHEN CREATING CTX INDEX
There are obviously situations where you don’t want to see tons of warnings.
Further Links and Information
- Patching all my environments with the January 2023 Bundle Patches
- MOS Note: 330239.1 – Memory Notification: Library Cache Object loaded into SGA / ORA-600 [KGL-heap-size-exceeded]
–Mike
Hello Mike,
We have a RAC singleton configuration, and we want to run datapatch on all cluster nodes at the same time with the “-pdbs” parameter (the PDBs are open on different cluster nodes). Each PDB has a separate datapatch process, e.g. datapatch -pdbs “pdb1”, then datapatch -pdbs “pdb2”. We are doing this because we have to complete datapatch in a short time frame, and it is easier to figure out on which PDB we faced errors during the datapatch operation. In some cases, when we try to run datapatch, we face the error below:
db selectrow_array failed: ORA-20016: Unable to get the lock : get_opatch_lsinventory : 1
or
verify_queryable_inventory returned ORA-20016: Unable to get the lock : get_pending_activity : 1
Have you faced an error related to “ORA-20016: Unable to get the lock”, connected to get_opatch_lsinventory and get_pending_activity in the DBMS_QOPATCH package?
I tried to find something in the Oracle documentation, but I didn’t find anything related to our case.
Hi Dariusz,
interesting – do you have an SR I can pass on?
If not, no prob – but it would be helpful if you would open one (send me the SR number please) because this way we can see the logs from all nodes.
I know that autoupgrade is able to do the same – leveraging all nodes – but I’d like to investigate this further.
Cheers,
Mike
Hi Mike,
Yes, I have an SR: 3-32575347641.
Regards,
Dariusz
Hi Dariusz,
thanks for sharing the SR. I guess this didn’t lead to the result you’d expected?
This is what the datapatch architect says:
If the customer can make sure he runs datapatch AFTER he has patched the whole RAC (binaries via OPatch or a gold image), then there is a much better way of running datapatch: using “-local_inventory”.
Step 1: Make sure the Oracle home on all nodes is patched (updated to the new binaries).
Step 2: Pick any node, then run ./datapatch -pdbs CDB\$ROOT -local_inventory # finish CDB$ROOT patching first
Step 3: Once Step 2 is successful, they can patch each PDB from any node they like, in any combination they like.
Step 4: For example: ./datapatch -pdbs PDB1 -local_inventory
By default, datapatch will call “DBMS_QOPATCH.get_pending_activity()” to figure out whether all nodes in the RAC are at the same patch level or not. When too many datapatch invocations are started simultaneously, every datapatch command will try to invoke DBMS_QOPATCH and sometimes the invocations coincide and a DBMS_QOPATCH lock might not be obtained.
So the best way to handle such cases is this: if the customer himself can make sure that ALL nodes in the RAC are at the same patch level, then there is no need to invoke DBMS_QOPATCH. To skip calling DBMS_QOPATCH, the user can pass the “-local_inventory” flag. With “-local_inventory”, datapatch invokes ‘OPatch lsinventory’ directly. This way, simultaneous datapatch runs are much smoother.
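The steps above can be sketched as follows, assuming the binaries on all RAC nodes are already at the same patch level (the PDB names and node assignments are placeholders):

```shell
# Run from $ORACLE_HOME/OPatch on any node.
# Step 1 is a precondition: all Oracle homes already patched.

# Step 2: finish CDB$ROOT first
./datapatch -pdbs 'CDB$ROOT' -local_inventory

# Steps 3/4: then patch the PDBs from any node(s), even in parallel
./datapatch -pdbs PDB1 -local_inventory    # e.g. on node 1
./datapatch -pdbs PDB2 -local_inventory    # e.g. on node 2
```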
Please let me know if that helps you – cheers,
Mike