Thanks to everyone who attended our Cool Features – not only for DBAs web seminar yesterday, Jan 14, 2021. As over 500 attendees stayed until the end of the 2-hour session with no less than 16 different features, we assume you enjoyed it. And judging by the questions you asked, it looks like some of the features really caught your attention. On Feb 10 it’s time for the next session for EMEA and India. But beforehand, let’s do a Cool Features Web Seminar Jan 2021 – Quick Recap, especially regarding the questions you had.

Extended VARCHAR2
Regardless of which current Oracle release you use, MAX_STRING_SIZE still defaults to STANDARD. Hence, Extended VARCHAR2 is not on by default for on-prem databases. You will find a different setup in the cloud where, for instance in ADW, it is enabled already: there, MAX_STRING_SIZE=EXTENDED is set.
Furthermore, as I explained, you can enable this on a per-PDB basis since the parameter is PDB modifiable (ISPDB_MODIFIABLE=TRUE in V$PARAMETER). And you can enable it in a PDB without enabling it in the CDB$ROOT before.
SQL> alter session set container=PDB1;

Session altered.

SQL> startup
Pluggable Database opened.

SQL> alter system set max_string_size=extended scope=spfile;

System altered.

SQL> shutdown
Pluggable Database closed.

SQL> startup upgrade
Pluggable Database opened.

SQL> start ?/rdbms/admin/utl32k.sql
[..]
PL/SQL procedure successfully completed.

Commit complete.

Package altered.

Package altered.

Session altered.

SQL> shutdown
Pluggable Database closed.

SQL> startup
Pluggable Database opened.

SQL> alter session set container=CDB$ROOT;

Session altered.

SQL> select con_id, name, value from V$SYSTEM_PARAMETER where name='max_string_size';

    CON_ID NAME                           VALUE
---------- ------------------------------ --------------------
         0 max_string_size                STANDARD
         3 max_string_size                EXTENDED

SQL> show parameter max_stri

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_string_size                      string      STANDARD
The CON_ID=0 output in V$SYSTEM_PARAMETER is a bit strange as there is no CON_ID=0 in a CDB environment.
In addition, somebody asked whether the reduced number of roundtrips for Extended VARCHAR2 compared to CLOB may depend on the SQL*Plus setting LONGCHUNKSIZE. It does not. I verified it today with the maximum LONGCHUNKSIZE setting on my platform – it has no effect on the test results or the number of roundtrips.
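In case you’d like to reproduce the test yourself, here is a minimal sketch of the setup. The table name TEST and the column names C and E match the outputs below; the row count, the padding, and the SQL*Plus settings are my assumptions – and MAX_STRING_SIZE=EXTENDED must already be enabled as described above:

SQL> create table TEST (C CLOB, E VARCHAR2(32767));

Table created.

SQL> insert into TEST
  2  select rpad('x',4000,'x'), rpad('x',4000,'x')
  3  from dual connect by level <= 10;

10 rows created.

SQL> commit;

Commit complete.

SQL> set long 100000
SQL> set longchunksize 100000
SQL> set autotrace traceonly statistics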
Selecting the CLOB column:
SQL> select C from TEST;

10 rows selected.

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        132  consistent gets
         80  physical reads
          0  redo size
      16310  bytes sent via SQL*Net to client
      11890  bytes received via SQL*Net from client
         52  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         10  rows processed
Versus selecting the Extended VARCHAR2 column:
SQL> select E from TEST;

10 rows selected.

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         14  consistent gets
          0  physical reads
          0  redo size
      40721  bytes sent via SQL*Net to client
        380  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         10  rows processed
Please see Franck Pachot’s blog post for further details.
The database environment I used for my tests had 8k DB_BLOCK_SIZE.
Install and Patch in one pass
I blogged about this a while ago already: Install and Patch in one single action with OUI.
But what I missed on the slides was the fact that you need to exchange OPatch as well when you unzip. So the complete sequence of actions is (see the sketch after the list):
- Unzip the base release, e.g. 19.3.0
- Remove the OPatch subdirectory
- Unzip the most recent OPatch – it will recreate the subdirectory you’ve erased
- Unzip the RU into a separate ./patchRU directory
- Optional: Unzip one-off patches into separate ./patch1 … ./patch2 subdirectories
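Put together, and assuming a 19c home in /u01/app/oracle/product/19 with staged zip files in /stage (all paths and the RU patch number NNNNNNNN are placeholders), the whole flow could look like this:

$ export ORACLE_HOME=/u01/app/oracle/product/19
$ cd $ORACLE_HOME

$ unzip -q /stage/LINUX.X64_193000_db_home.zip        # base release 19.3.0
$ rm -rf OPatch                                       # remove the shipped OPatch
$ unzip -q /stage/p6880880_190000_Linux-x86-64.zip    # most recent OPatch
$ unzip -q /stage/pNNNNNNNN_190000_Linux-x86-64.zip -d patchRU

$ ./runInstaller -applyRU $ORACLE_HOME/patchRU/NNNNNNNN

If you unzipped one-off patches as well, you’d add them in the same call with -applyOneOffs $ORACLE_HOME/patch1/NNNNNNNN,$ORACLE_HOME/patch2/NNNNNNNN.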
Unfortunately, I couldn’t find any indication whether any shell other than bash allows the use of -applyRU. So please double-check with Oracle Support.
Read-Only Oracle Homes
I didn’t have this on the slides explicitly but only in the audio. Correct: ROOH is the default in Oracle Database 21c.
Read-Only Oracle Home Default
Read-only Oracle homes, where all configuration data and log files reside outside of the read-only Oracle home, are the default option for Oracle Database installations and upgrades.
Read-only Oracle homes enable an easy, flexible, and software-image based deployment of Oracle software that can automatically and seamlessly be distributed across multiple servers. Read-only Oracle homes also enable patching and updating of Oracle Database without extended downtime, as patching simply means replacing a given set of binaries in a defined location.
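For completeness, enabling a read-only home on 19c (where it is not yet the default) is a single call after a software-only installation, before you create any database. A minimal sketch – the home name OraDB19Home1 and the paths in the output are assumptions from a typical setup:

$ export ORACLE_HOME=/u01/app/oracle/product/19
$ $ORACLE_HOME/bin/roohctl -enable

$ $ORACLE_HOME/bin/orabasehome       # where configuration files and logs go now
/u01/app/oracle/homes/OraDB19Home1

$ $ORACLE_HOME/bin/orabaseconfig
/u01/app/oracle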
Regarding the question whether Oracle E-Business Suite (EBS) supports Read-Only Oracle Homes, I received “super service” from the EBS team, telling me to tell the customer to open an SR. Roy and I are still trying to find out the answer internally. My blunt guess would be that EBS does not support ROOH – but it would be great to get a definite answer. I’ll let the person who asked know as soon as possible.
Another very good question was whether you can revert from a read-only home to a standard home again. I had to smile because there seem to be more undocumented options of roohctl than documented ones. And roohctl -disable is another one:
oracle@hol:/u01/app/oracle/product/19/bin $ roohctl -disable
Disabling Read-Only Oracle home.
Update orabasetab file to disable Read-Only Oracle home.
Orabasetab file has been updated successfully.
Read-Only Oracle home has been disabled successfully.
Check the log file /u01/app/oracle/cfgtoollogs/roohctl/roohctl-210115PM041120.log for more details.
Please be aware of:
- Bug 32602534 – READONLY HOME NOT SUPPORTED ON GRID HOME – ROOHCTL UTILITY
Daniel’s topics
Daniel published a blog post where he answered the open questions for his topics. Many of you seem to be really into the Online Table Move feature. Which is great – and Daniel covered it all here: https://dohdatabase.com/2021/01/15/follow-up-on-cool-features-webinar-jan-2021/
Recording and Slides
The recording is available now – please find it linked from the Videos page. Slides are available already, and we refreshed the upload, now including Fred’s and Jeff’s slides, too.
The Upcoming Events
On February 10 we will deliver Upgrade Internals and so much more for you. And on March 17 we will do a session about Performance Testing for on-prem using cloud resources. We hope to see and hear you again then.
–Mike
The “bytes sent/received via SQL*Net” statistics are a bit strange for your CLOB vs Extended VARCHAR2 test.
Why does the CLOB have fewer bytes sent (compared to the Extended VARCHAR2) and so many bytes received?
I really don’t know – but the test results stay the same, no matter how often I tried.
When will the video be available? I was unable to attend, due to the time difference. Thank you.
Hi Dean,
it is available already – the marketing team just didn’t send out the link 🙁
See https://MikeDietrichDE.com/videos
Cheers,
Mike
Hi Mike, Hope you are safe and well.
We are planning a database migration to a new hardware RAC cluster and DR configuration (similar hardware).
We currently have a 2-node RAC cluster on 12.1.0.2 with Active Data Guard to a standby database on a 2-node RAC cluster in another city (9 databases, approx. 20 TB in total).
We are considering the following:
1. Build new RAC infrastructure on new hardware with Oracle 12.1.0.2 and 19.8 + DR
2. Connect the current production server to the new prod / new DR servers as additional standby databases
3. Shut down the production database. Wait for logs to replicate to DR, then disconnect Data Guard from new prod/DR
4. Switchover to new prod/DR 12.1.0.2
5. Smoke test the existing database on 12.1.0.2
6. Upgrade the 9 databases to PDBs (3 CDBs with 3 PDBs each) for the new production server and again for the new DR server (can this be done once on the new prod server, with the new DR upgraded automatically through Data Guard? Also, can the config file somehow restrict the maximum number of databases to upgrade at once, to prevent overloading of the server?)
7. Bring new production/DR online and test
Backout options:
1. Fix forward any issues
2. Go back to the original hardware from the point of shutdown.
If you have time, could you give your feedback on this plan and advise whether there is an alternative method that would achieve the same outcome?
We are planning to do an initial test on a UAT environment to obtain some upgrade timings.
Thanks in advance.
Best regards,
Robert
Hi Robert,
when you get the new HW, all is fine.
You can build up a standby on the new production HW, and cascade a standby from this standby to the new DR server.
Then you’d do a switchover, and you are on the new hardware, but still with 12.1.0.2 in use.
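Just to illustrate the cascading part, a minimal sketch with made-up DB_UNIQUE_NAMEs (oldprod, newprod, newdr) – the destination numbers and attributes will of course depend on your configuration:

SQL> -- on all databases involved
SQL> alter system set log_archive_config=
  2  'dg_config=(oldprod,newprod,newdr)';

SQL> -- on the NEWPROD standby: cascade the redo on to NEWDR
SQL> alter system set log_archive_dest_2=
  2  'service=newdr async valid_for=(standby_logfiles,standby_role)
  3   db_unique_name=newdr';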
If I understand you correctly, you want to move to the CDB architecture.
Please be aware that this is not only an upgrade, but a migration as well.
So you will upgrade the new PROD to 19c, replicate the same to the standby on the new DR.
Then you will need an extra CDB on PROD with a standby CDB on the new DR hardware.
If you are on ASM, you need to generate the ASM alias list, as otherwise the plugin operation on the DR side won’t find your files.
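To give you an idea what “generating the alias list” means, a rough sketch only – the disk group +DATA, the NEWDR path, and the <PDB_GUID> placeholder must come from your environment and the MOS notes referenced below:

SQL> -- run on the source standby; generates the asmcmd mkalias commands which
SQL> -- point the file names the new standby CDB expects to the existing files
SQL> select 'mkalias ' || name ||
  2  ' +DATA/NEWDR/<PDB_GUID>/DATAFILE/' ||
  3  regexp_substr(name, '[^/]+$') as asmcmd_cmd
  4  from v$datafile;

You’d then run the generated lines in asmcmd on the DR side before the plugin operation.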
Please see our “Multitenant” webinar slides on https://MikeDietrichDE.com/slides from the Virtual Classroom Series. There you will find MOS notes and links to blog posts which explain this plugin operation.
Cheers,
Mike
Hi Mike,
Thanks for your feedback.
Our DBA team was worried about having a config file with 9 databases configured to upgrade. Their concern was the amount of resources being used on the RAC servers causing an overload during the upgrade, and processes blocking each other.
The new production and DR servers have 2 nodes each, with 8 cores and 512 GB memory. An OVM is being built on this platform, with external Nimble storage of ~50 TB for the migration.
Can you advise? Any feedback on this would be much appreciated.
Regards,
Robert
Hi Robert,
right now, AutoUpgrade (AU) does not balance the load between several databases. What it does is delay the start of each additional (n+1) database.
What you would do whenever you realize that too many resources are consumed:
Simply use the “abort” command in AU to stop one or several databases.
Then, once your first upgrades are finished, you’d use “resume” to progress the upgrades you have paused before with “abort”.
AU will continue the upgrades seamlessly.
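For reference, the corresponding AutoUpgrade job console commands – the job number 104 is made up:

upg> lsj
upg> abort -job 104
upg> resume -job 104

lsj lists all jobs with their current stage, abort pauses the chosen one, and resume continues it once resources are free again.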
Cheers,
Mike