The init.ora/spfile parameter
PGA_AGGREGATE_LIMIT got introduced in Oracle Database 126.96.36.199.
As per documentation in Oracle Database 12.1 it got defined as:
PGA_AGGREGATE_LIMIT specifies a limit on the aggregate PGA memory consumed by the instance.“.
Furthermore the algorithm for its setting got described as:
PGA_AGGREGATE_LIMIT is set to the greater of 2 GB, 200% of
PGA_AGGREGATE_TARGET, and 3 MB times the
PROCESSES parameter. It will be set below 200% of
PGA_AGGREGATE_TARGET if it is larger than 90% of the physical memory size minus the total SGA size, but not below 100% of
Default Value Change in Oracle Database 188.8.131.52
In Oracle Database 184.108.40.206 the default value gets adjusted a bit as it turned out that restricting
PGA_AGGREGATE_LIMIT to a value too low will lead to a significant number of issues.
See the Oracle Database 220.127.116.11 documentation on PGA_AGGREGATE_LIMIT explaining:
MEMORY_TARGET is set, then
PGA_AGGREGATE_LIMIT defaults to the
MEMORY_TARGET is not set, then
PGA_AGGREGATE_LIMIT defaults to 200% of
MEMORY_TARGET is not set, and
PGA_AGGREGATE_TARGET is explicitly set to 0, then the value of
PGA_AGGREGATE_LIMIT is set to 90% of the physical memory size minus the total SGA size.
In all cases, the default
PGA_AGGREGATE_LIMIT is at least 2GB and at least 3MB times the
And please remember, DO NOT use
MEMORY_TARGET as it will lead to issues such as no use of huge pages etc.
What happened in Oracle Database 18.104.22.168 when you set PGA_AGGREGATE_LIMIT too low?
How much is too low? Too low means it has been set lower that 2x the value of
PGA_AGGREGATE_TARGET. And then it got adjusted “silently”, i.e. the value of
PGA_AGGREGATE_LIMIT got adjusted internally after startup to reflect the minimum. Unfortunately this adjusted value does not get displayed by “show parameter”.
Short example in Oracle Database 22.214.171.124:
SQL> show parameter pga NAME TYPE VALUE ----------------------- ------------------------- ----------------------------- pga_aggregate_limit big integer 2G pga_aggregate_target big integer 120M SQL> alter system set pga_aggregate_limit=1G scope=spfile; System altered. SQL> alter system set pga_aggregate_target=1G scope=spfile; System altered.
After restarting the database:
SQL> show parameter pga NAME TYPE VALUE ------------------------------------ ------------ ------------------------------ pga_aggregate_limit big integer 1G pga_aggregate_target big integer 1G
Different behavior in Oracle Database 126.96.36.199
Repeating the same test in Oracle Database 188.8.131.52 leads to
SQL> show parameter pga NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ pga_aggregate_limit big integer 2G pga_aggregate_target big integer 120M SQL> alter system set pga_aggregate_limit=1G scope=spfile; System altered. SQL> alter system set pga_aggregate_target=1G scope=spfile; System altered. SQL> create pfile from spfile; File created. SQL> startup force ORA-00093: pga_aggregate_limit must be between 2048M and 100000G ORA-01078: failure in processing system parameters
PGA_AGGREGATE_LIMIT=2G I can startup my database again.
PGA_AGGREGATE_LIMIT gets enforced as at least 2x the value of
PGA_AGGREGATE_TARGET since Oracle Database 184.108.40.206.
pga_aggregate_limit should be avoided at all with 12.2 as it is still not reliable (https://fritshoogland.wordpress.com/2014/12/15/oracle-database-operating-system-memory-allocation-management-for-pga/) and there is a better solution. Starting with 12.2 SESSION_PGA_LIMIT is the right one: https://twitter.com/OracleSK/status/797464282485194752
are you sure? Frits examinations apply to 12.1 – the behavior has changed in 12.2. I’m not saying that it’s better or worse – but feel free to explain a bit more why you see SESSION_PGA_LIMIT is the by far better option 🙂
i have tested it in 12.2 as well – same behavior with pga_aggregate_limit (just referenced to Frits’s blog post for the test case and explanation of the issue). SESSION_PGA_LIMIT is the by far better option because it really enforces the limit – stops exactly at the set value. 🙂
Can you say anything more about “DO NOT use MEMORY_TARGET as it will lead to issues such as no use of huge pages etc.”? We spin up VMs for small departmental applications which don’t get a lot of hands-on configuration or management, and we’re not concerned about the performance penalty from not using huge pages. Is there anything else to be concerned about?
For example, we have a 12.2 database running on a VM with 2 GB RAM. Using MEMORY_TARGET (AMM), I see the following at startup:
ALTER DATABASE MOUNT
WARNING: pga_aggregate_limit value is too high for the
amount of physical memory on the system
PGA_AGGREGATE_LIMIT is 2048 MB
PGA_AGGREGATE_TARGET is 0 MB.
physical memory size is 1839 MB
limit based on physical memory and SGA usage is 1655 MB
MEMORY_MAX_TARGET is 1280 MB
Using default pga_aggregate_limit of 2048 MB
Should I be concerned about this? (The value of pga_aggregate_limit is the Oracle default, not something I’ve set.)
Thanks very much.
first of all, memory_Target leads also to a different way on how memory is allocated. It makes it much harder for support to diagnose issues such as leaks. And the different mem allocation way is not as good as the “normal” one. I think Frits Hoogland from Enkitec has demonstrated this a while ago – but I may mix up things. Basically I don’t see ZERO benefit in using it but just things which I don’t like. You don’t have to worry – the limit won’t affect you as far as my understanding goes (or only in theory).
Came across a stumbling block during upgrade. It turns out that the PGA_AGGREGATE_LIMIT defaults to higher of:
a) 2xPGA_AGGREGATE_TARGET or
b) 5(MB) x PROCESSES (For RAC). For non-RAC, I think it is 3(MB) x PROCESSES
Any attempt to set PGA_AGGREGATE_LIMIT to lower than these defaults, the database will fail to start. Oracle documentation also advises that this parameter should not be manually modified. Since PROCESSES parameter now plays a big role in controlling how much PGA memory can actually be used, it must be set/increased carefully.
On Exadata, when creating the CDB via DBCA, the PROCESSES defaults to 2240. For smaller databases, I just left it as is. The databases started using way too much PGA memory leading to free memory exhaustion. Since we are using MEMORY_TARGET with 11g databases, we never ran into this issue. With 19c upgrade, I decided to follow the best practices and switched to ASMM and ran into this issue.
I haven’t followed the current allocations lately for 19c. I just realized that some “magic” will be done in the backgroud.
You may please log an SR and double-check with Oracle Support on this.
In 18c it is not set to 2x of target, I tried with 256M target, it is setting the limit to 2GB always.
So this parameter is one of these “famous new” parameters, which are completely useless?
I think it got changed in 19c again – but I stopped checking as there seems to be always some secret magic being involved.
Please check with Oracle Support – they may know more.
It is actually documented. I have tested and the PGA_AGGREGATE_LIMIT works exactly as described in documentation. The minimum is 2GB as documented.
In all cases, the default PGA_AGGREGATE_LIMIT is at least 2GB and at least 3MB times the PROCESSES parameter (and at least 5MB times the PROCESSES parameter for an Oracle RAC instance).
Do not attempt to set PGA_AGGREGATE_LIMIT below its default value, even in a parameter file (pfile), or instance startup will fail. However, PGA_AGGREGATE_LIMIT can be set to 0 either in a parameter file or dynamically after startup. If a value of 0 is specified, it means there is no limit to the aggregate PGA memory consumed by the instance.