This blog is always a reminder to myself as well – just in case you wonder about that simple example and all steps here 😉
Currently I work with an important customer from Switzerland who is upgrading their database landscape to Oracle Database 12.1.0.2. But in some rare cases we see the upgrade failing at different stages, with dying PQ slaves or the database running out of memory. The reason may be that the customer uses only MEMORY_TARGET=1G – yet our preupgrd.sql has nothing to complain about, and from reading the docs all seems to be fine as well (MEMORY_TARGET > 152M).
The temporary workaround used by the customer is to increase MEMORY_TARGET to 4G prior to the upgrade – and to decrease it to the old value afterwards. This works, but it does not yet answer the question whether something is wrong with MEMORY_TARGET or whether our script does not recommend the correct value. I can't answer this at the moment – but in order to collect some information I did the following in my environment:
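The workaround could be scripted roughly like this – a sketch only, it assumes an spfile, and since MEMORY_TARGET cannot be raised beyond MEMORY_MAX_TARGET online, the change is written to the spfile and picked up at the next startup:

```sql
-- Sketch of the temporary workaround (values from the customer case)
ALTER SYSTEM SET memory_target=4G SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP UPGRADE
-- ... perform the upgrade ...
-- afterwards, revert to the previous value:
ALTER SYSTEM SET memory_target=1G SCOPE=SPFILE;
```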
- Simple shell script check_sga.sh to monitor the change in pool sizes while the upgrade is running:
#!/bin/sh
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=$ORACLE_BASE/product/12.1.0.2
if [ "$1" = "" ] ; then
  ORACLE_SID=UPGR
else
  ORACLE_SID=$1
fi
TNS_ADMIN=/u01/app/oracle/product/12.1.0.2/network/admin
NLS_LANG=AMERICAN_AMERICA.AL32UTF8
PATH=$ORACLE_HOME/bin:$PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
CLASSPATH=.:$ORACLE_HOME/jdbc/lib/classes12.jar:$ORACLE_HOME/jdbc/lib/nls_charset12.jar:$ORACLE_HOME/rdbms/jlib/xdb.jar:$ORACLE_HOME/lib/xmlparserv2.jar:$ORACLE_HOME/sqlj/lib/utl_dbws.jar
export ORACLE_BASE ORACLE_HOME ORACLE_SID NLS_LANG CLASSPATH PATH LD_LIBRARY_PATH TNS_ADMIN
sqlplus -s "sys/oracle@upgr as sysdba" @check_sga.sql >> check_sga.txt
- Simple SQL script check_sga.sql, called periodically by the shell script above. There are other views besides the V$MEMORY_DYNAMIC_COMPONENTS I used below to monitor SGA behavior, especially for resize operations: Shared Memory Management Views in Oracle 12c.
SET PAGESIZE 50000
SET LINESIZE 25000
SET NUMWIDTH 5
SET FEEDBACK OFF
set echo off
set heading on
set headsep off
set wrap off
column MB format 999999
column OPERATION format a22
SELECT TO_CHAR(SYSDATE, 'dd-mm-yy hh24:mi:ss') "TIMESTAMP" FROM dual;
select component, current_size/power(1024,2) "MB", last_oper_type "OPERATION"
  from v$memory_dynamic_components
 where current_size != 0;
exit
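To reduce the accumulated check_sga.txt to a time series for a single pool, a small awk filter is enough. This is just a sketch: the sample data below imitates the output format of the SQL script above with made-up numbers, and the patterns assume exactly that format.

```shell
#!/bin/sh
# Create a small sample file that mimics the check_sga.txt format
# (timestamps followed by component/MB/operation rows). Numbers are invented.
cat > /tmp/check_sga_sample.txt <<'EOF'
TIMESTAMP
-----------------
13-11-14 13:12:01
COMPONENT                MB OPERATION
-------------------- ------ ----------------------
shared pool             240 GROW
java pool                24 STATIC
DEFAULT buffer cache    400 SHRINK
TIMESTAMP
-----------------
13-11-14 13:13:01
COMPONENT                MB OPERATION
-------------------- ------ ----------------------
shared pool             244 GROW
java pool               120 GROW
DEFAULT buffer cache    300 SHRINK
EOF

# Remember the last timestamp line seen; print it together with the MB value
# whenever a "java pool" row appears.
awk '/^[0-9]+-[0-9]+-[0-9]+ /{ts=$0} /^java pool/{print ts, $3}' /tmp/check_sga_sample.txt
# prints:
# 13-11-14 13:12:01 24
# 13-11-14 13:13:01 120
```

Swapping the `/^java pool/` pattern for another component name gives the same time series for the shared pool or the buffer cache.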
- Enable the listener to accept remote connections via TCP while the database is in RESTRICTED SESSION mode.
This is from my tnsnames.ora – I had to add the (UR=A) setting:
UPGR =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    )
    (CONNECT_DATA =
      (UR = A)
      (SERVICE_NAME = UPGR)
    )
  )
- Change the script's permissions to make it executable without any issues from cron
$ chmod 777 check_sga.sh
$ chmod +x check_sga.sh
- Add the shell script to crontab for periodic execution every minute ==> * * * * *
$ export EDITOR=vi
$ crontab -e
* * * * * /home/oracle/check_sga.sh
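If cron is not available, or you want to stop monitoring simply with Ctrl-C, a plain foreground loop achieves the same. This is a sketch: INTERVAL and COUNT are assumptions, and the echo merely stands in for the real check_sga.sh call.

```shell
#!/bin/sh
# Hypothetical alternative to cron: run the monitor script at a fixed interval.
INTERVAL=60   # seconds between snapshots
COUNT=3       # number of snapshots; use a large value for a full upgrade run
i=0
while [ $i -lt $COUNT ]; do
  echo "snapshot $i"    # replace with: /home/oracle/check_sga.sh
  i=$((i + 1))
  # sleep "$INTERVAL"   # commented out so this sketch finishes instantly
done
```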
- Start the database in UPGRADE mode with a very tiny memory_target (700MB in my example) and no other memory values specified
*.audit_file_dest='/u01/app/oracle/admin/UPGR/adump'
*.audit_trail='NONE'
*.compatible='12.1.0'
*.control_files='/oradata/UPGR/control01.ctl','/oradata/UPGR/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='UPGR'
*.db_securefile='ALWAYS'
*.diagnostic_dest='/oradata'
*.memory_target=734003200
*.open_cursors=300
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sessions=100
*.undo_tablespace='UNDOTBS1'
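As a side note, the odd-looking memory_target value is simply 700 MB expressed in bytes:

```shell
# 700 MB in bytes, matching *.memory_target=734003200 above
bytes=$((700 * 1024 * 1024))
echo "$bytes"   # prints 734003200
```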
- Monitor the output and see what’s happening
What do we see here:
- The upgrade took roughly 34 minutes – on my laptop, 2 years old, 5400rpm spinning disk, a true high-end system 😉 Inside an Oracle VirtualBox image.
- SGA_TARGET and PGA_TARGET stay constant throughout the upgrade – no shift from one to another part is happening
- When the Java component upgrade happens at 13:13h, there's a huge JAVA_POOL_SIZE allocation, taken away from the Default Buffer Cache.
- The Shared Pool increases slightly during the upgrade – this memory is also shifted away from the Default Buffer Cache.
- I then did the same exercise while recompiling the database objects afterwards – this has nothing to do with the original question, but I was simply curious:
Of course we see plenty of shared pool getting allocated – that is expected. But note also how heavily the buffer cache shrinks with such a small SGA. That's why we always recommend setting DB_CACHE_SIZE in addition, as a minimum barrier that prevents the buffer cache (regardless of whether you use SGA_TARGET or MEMORY_TARGET) from getting too low during regular operation.
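A sketch of how such a floor could look in the init.ora – the values are illustrative assumptions, not a recommendation for any specific system:

```
*.memory_target=734003200
*.db_cache_size=200M      # lower bound for the Default Buffer Cache;
                          # auto-tuning can grow the cache beyond this,
                          # but never shrink it below
```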
And just on the side: The recompilation took only 4 minutes from 13:53h-13:57h – but as there was no load on the database the pools stayed constant afterwards.
I’m pretty sure this all can be done way more sophisticated – but I wanted to have a simple example working quickly in my environment. And it does what I want.
Now the next step will be to verify and compare this with the customer’s database upgrades.