I actually missed blogging about this change, but luckily a colleague asked a question the other week about the migration of jobs when you upgrade to Oracle 19c. Let me shed some light on DBMS_JOB – Behavior Change in Oracle 19c.
DBMS_JOB vs DBMS_SCHEDULER
Back in the 10g days, we introduced a new interface to run and monitor jobs in the database: the scheduler. The scheduler with
DBMS_SCHEDULER is way more powerful than the old job interface, and we use it internally a lot as well. But as things go in the real world, the introduction of a better interface doesn’t lead to rapid migrations.
In addition, in case you haven’t seen this, we deprecated the
DBMS_JOB interface with Oracle Database 12.2.0.1.
Conversion during the upgrade
With the upgrade to Oracle Database 19c, there will be a conversion of jobs under the hood of the old
DBMS_JOB interface. No worries – relax!
What happens? And what does that mean for your jobs in DBMS_JOB?
First of all, you can’t prevent or skip the migration under the scheduler’s control. But there is also no reason to do so. During the preupgrade check you may see the following warning:
No panic. There are three clarifications to mention:
- During the 19c upgrade, for each job in DBMS_JOB a corresponding entry will be created with DBMS_SCHEDULER
- The old DBMS_JOB interface still works. But using it will always create a corresponding entry in the scheduler
- The check in preupgrade.jar is only checking for inconsistencies or any issues
What this means is: you can still use
DBMS_JOB, but under the covers we are using
DBMS_SCHEDULER. The internal procedures have been changed. Your calls will work the same way, but
DBMS_JOB is now a legacy interface to the scheduler.
A simple testcase
As I always like to try things out, here’s a very simple test case creating a job with DBMS_JOB in Oracle 12.2.0.1. Then I upgrade the database – and check afterwards.
connect system/oracle
drop user crc cascade;
grant connect, resource, dba to crc identified by crc;
connect crc/crc
alter session set nls_language = american;
alter session set nls_date_format = 'DD-MM-YY HH24:MI:SS';
create table jobtest (a date);
var jobno number
var jobstring varchar2(2000)
-- setup of a simple job inserting a value into a test table every 3 minutes:
execute dbms_job.submit(:jobno, 'BEGIN insert into jobtest values (sysdate); commit; END;', sysdate, interval => 'SYSDATE + 3/1440');

SELECT job, what, next_date, interval FROM dba_jobs WHERE job = :jobno;
This is what I have in my database before the upgrade – and of course this job is not visible in DBA_SCHEDULER_JOBS:
JOB  WHAT                                                      NEXT_DATE          INTERVAL
---  --------------------------------------------------------  -----------------  ----------------
  1  BEGIN insert into jobtest values (sysdate); commit; END;  24-05-19 09:36:32  SYSDATE + 3/1440
But after the upgrade, the information in
DBA_JOBS is still present. And in addition, in
DBA_SCHEDULER_JOBS I have a new entry:
JOB_NAME -------------------- DBMS_JOB$_1
In DBMS_JOB, jobs have no name. Instead they get identified by an ID. For mapping purposes a new dictionary table
scheduler$_dbmsjob_map has been defined.
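If you want to inspect the mapping yourself, a query along these lines should work. Note that the mapping table is an internal dictionary object and its column names are not documented, so treat DBMS_JOB_NUMBER and JOB_NAME here as assumptions:

select m.dbms_job_number, m.job_name
  from sys.scheduler$_dbmsjob_map m
 order by m.dbms_job_number;

This lets you translate an old numeric job ID into the generated DBMS_JOB$_n scheduler job name, and vice versa.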
This works transparently during upgrade and downgrade. And finally, this isn’t really a behavior change as everything happens underneath the covers.
More information and Links?
- Connor McDonald’s blog post: DBMS_JOB – the joy of transactions (May 28, 2019)
- Scheduler Concepts – Oracle 19c Admin Guide
- Database Upgrade Guide 19c – Support for DBMS_JOB
- MOS Note: 2117140.1 – An Example to Convert from DBMS_JOB Jobs to DBMS_SCHEDULER Jobs
- MOS Note: 807159.1 – Sample Code and Select Statements to Build Simple Testcases for Scheduler / Jobs
- DBMS_JOB Deprecation Note in Oracle 12.2
- What does deprecated mean, what does desupported mean?
- MOS Note: 2380601.1 – Database Preupgrade tool check list
The advantage of DBMS_JOB over the scheduler was the transactionality of job creation. Presumably this must have been addressed for DBMS_JOB to have been replaced?
Connor McDonald had the same comment – and investigates this now internally.
I will update the blog post once we have a clarification.
Some information here https://connor-mcdonald.com/2019/05/28/dbms_job-the-joy-of-transactions/
Added it to the blog post in the “links” section, too.
We have seen issues with the migration of DBMS_JOB to DBMS_SCHEDULER.
I deliberately created a DBMS_JOB with no start time and interval on an Oracle 12.2 DB. This job was not reported during the preupgrade run and it was dropped/deleted after the upgrade to Oracle 19c.
preupgrade.jar does not check for incomplete jobs. But we are discussing this now with the owner of DBMS_JOB/SCHEDULER.
We can only rely on queries given to us.
I will follow up with you via email and update the blog post once we have a conclusion.
I had issues during the migration: I have 3 schemas with jobs in them, and one of the jobs is a stats gathering job. After the upgrade, the stats jobs are failing with a privilege error. If I resubmit the stats jobs, they work OK. I’ve checked DBA_SCHEDULER_JOBS and the migrated jobs have job creator = 'SYS', while the resubmitted ones have the actual schema that (re)created them. So the migration works OK, but we should be prepared to resubmit some jobs.
BTW, When DBMS_SCHEDULER is called from DBMS_JOB, the former works transactionally internally.
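A quick way to see this transactionality yourself – this is only a sketch to be verified on your own system: submit a job via DBMS_JOB without committing, then roll back. Both the DBA_JOBS entry and the corresponding scheduler entry (assuming the DBMS_JOB$_n naming pattern shown above) should be gone:

var jobno number
execute dbms_job.submit(:jobno, 'BEGIN null; END;', sysdate, 'SYSDATE + 1/24');
-- no commit issued
rollback;
-- both counts should now be zero:
select count(*) from dba_jobs where job = :jobno;
select count(*) from dba_scheduler_jobs where job_name = 'DBMS_JOB$_' || :jobno;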
Did you grant the CREATE JOB privilege to the user schema who’s supposed to start the gather stats job?
This had been granted to the schema before the upgrade. Other jobs succeeded, the stats job failed, e.g.:
Schema1 (create job granted)
dbms_stat job failure
other job in the same schema succeeds
Probably it has something to do with log_user/priv_user/schema_user. When querying user_jobs:
for migrated jobs, log_user = 'SYS' (probably because the SYS user migrated the DB?)
for resubmitted ones, it is the actual schema
priv_user and schema_user is the actual schema for both migrated and resubmitted jobs.
But I have not gone deeper, because I do not have many jobs, and resubmitting them solves the problem.
BTW, I’ve upgraded my DB from 12.2 to 19.3 on Windows 🙂
I haven’t seen this before, and we would need an SR to dig deeper. If you want, please go forward.
If you are saying “well, it works for me now” I’m good with it, too. And this way it is documented on the blog for others.
Not really nice!
A widely accepted rule is to NOT name a user’s object like a SYS object.
Although it is possible, we should never name a user’s job like sys.package!
The better approach would be to point the developers to Doc ID 2117140.1 and name the jobs in a reasonable way – it’s not much work and avoids confusion.
Very interesting article.
Is the DBMS_JOB package desupported in 19c?
As the blog post says, it is deprecated since 12.2.0.1. There is no desupport announcement, meaning: “We will still fix upcoming issues if necessary”.
Thanks. We have a complex program relying on the DBMS_JOB job ID and we hoped to continue using the job ID, but instead of retrieving it from SYS_CONTEXT('USERENV', 'BG_JOB_ID') to use a procedure like:
a. Get the Scheduler job id from the same call above
b. Get the session id from checking v$scheduler_running_jobs
c. Get the Job name from dba_scheduler_job_run_details (column session_id)
d. Get the DBMS_JOB Id from the new mapping table sys.scheduler$_dbmsjob_map provided by Oracle by checking against the job_name
But we still face a lot of problems. Should we use another parameter for the SYS_CONTEXT call? Or do you see anything else wrong with this approach?
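For illustration, steps b–d above could be collapsed into a single lookup from within the running job session. This is only a sketch: DBA_SCHEDULER_RUNNING_JOBS and its SESSION_ID column are documented, but the DBMS_JOB_NUMBER column name in the internal mapping table is an assumption:

select m.dbms_job_number
  from dba_scheduler_running_jobs r
  join sys.scheduler$_dbmsjob_map m
    on m.job_name = r.job_name
 where r.session_id = sys_context('USERENV', 'SID');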
Please open an SR – and share the SR number with me via email.
Was there an SR opened and/or any further information on this?
You mean whether there was a bug opened for this issue?
I don’t know off the top of my head.
Do you have this problem?
Be careful if your database has a large number of jobs created with DBMS_JOB, because on upgrade to 19c, due to that behavior change, the upgrade can slow down a lot…
SQL>select count(1) from dba_jobs;
SQL>select count(1) from dba_scheduler_jobs;
Oracle 19c – BPO up: 1.2h, 1 ins, 0 sn, 0 us, 2.0G sga, 0%db
ID %CPU %DCP LOAD AAS ASC ASI ASW IDL MBPS %FR PGA UTPS RT/X DCTR DWTR
EVENT (C) TOT WAITS TIME(s) AVG_MS PCT WAIT_CLASS
DB CPU 4223 70
db file sequential read 61474 333 5.4 6 User I/O
db file parallel write 3847 171 44.6 3 System I/O
ID SID SPID USR PROG S PGA SQLID/BLOCKER OPN E/T STA STE EVENT/*LA W/T
1 242 14287180 SYS sqlp D 152M 9ca6qk4nt1wpw DEL 1.7h ACT CPU cpu runqu 149u
DELETE FROM SYS.SCHEDULER$_DBMSJOB_MAP WHERE JOB_NAME NOT IN (SELECT NAME FROM SYS.OBJ$ WHERE NAME LIKE 'DBMS_JOB$_%')
I noticed something else that might be confusing though …
DBMS_JOBS are recreated and migrated to scheduler jobs, that’s OK. But not only DBMS_JOBS… scheduler jobs are recreated too, and that’s not very cool. They now have a new name, and that could be confusing. Our applications are controlled by external companies, and perhaps in their applications they refer to job names, but the job names have all changed. How can we prevent this from happening while migrating? I understand that old jobs have to be migrated to scheduler jobs, but why does it happen to already scheduled jobs?
I am not aware of a change to the name of existing scheduler jobs.
Can you send me an example please? If you have already, I’ll get back to you next week (not working yet).
So there is NO reason to migrate DBMS_JOB jobs to DBA_SCHEDULER_JOBS yourself?
It’s all done by the 19c upgrade?
I was quite unsure because I found a lot of articles on Websites that deal with migration scripts.
But obviously they were written BEFORE 19c was released…!?
Correct – BUT… I have a backlog of blog posts regarding DBMS_JOB as there seem to be a few other “surprises”.
Still, from the upgrade perspective, nothing needs to be done extra.
Can you give a hint? What kind of “surprises”?
There are potential issues with job IDs or with timestamps. When time allows, I will blog a bit more about it.
Could it be that one of the “surprises” regarding timestamps is fixed starting with 19.10? The datatype of the next_date, last_date, … columns in user_jobs / dba_jobs, … is now DATE instead of TIMESTAMP again. Unfortunately I could not find the bug ID in MOS note 2523220.1.
Looking forward to seeing the blog posts regarding DBMS_JOB that you already have in the pipeline.
Best regards, Martin
I have 3 emails about issues buried in my inbox about DBMS_JOB. Still waiting for the days with less workload and time to write the stuff I have in the pipeline.
There are changes in DBA_JOBS from 12c to 19c.
We use materialized views to sync data between databases and rely on dba_jobs.last_date to know if the data is stale.
In 12c, when a job fails, dba_jobs.last_date retains the date of the last successful run and dba_jobs.failures is incremented. When the job completes successfully, dba_jobs.last_date is updated and dba_jobs.failures is set to zero. If a new job never runs successfully, dba_jobs.last_date remains null while dba_jobs.failures increments.
In 19c, when a job fails, dba_jobs.last_date is updated and dba_jobs.failures is incremented. When the job completes successfully, dba_jobs.last_date is updated; however, dba_jobs.failures does not get reset to zero. If a new job never runs successfully, dba_jobs.last_date is updated and dba_jobs.failures increments.
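If you rely on last_date and failures this way, it is worth re-validating your staleness check after the upgrade. A sketch of such a check (the one-hour threshold is a placeholder, and the columns are the documented DBA_JOBS columns):

-- flag MV sync jobs whose last run is older than one hour, or which have failures
select job, what, last_date, failures, broken
  from dba_jobs
 where last_date < sysdate - 1/24
    or failures > 0;

Given the 19c behavior described above, a non-zero failures value alone no longer proves the job is currently failing.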
Thanks Joe – and I have a backlog of additional blog posts with differences.
Thanks for adding this one to my list …
We used AutoUpgrade to upgrade our 12.2 databases to 19c. There’s one “Oracle delivered” job called FILE_WATCHER missing after the upgrade. Is this expected? I know we don’t really use it, but I’m not sure if that’s related to the DBMS_SCHEDULER change.
Did anyone else notice this FILE_WATCHER job missing after the upgrade?
I checked some newly created Oracle 19c DBs and this job exists there. So somewhere along the upgrade, this job went missing. I understand that there’s a MOS note to put it back, but I just want to see whether there’s a bug somewhere.
I don’t even know what the file_watcher program really does – I see that it gets created by $ORACLE_HOME/rdbms/admin/execsch.sql, which gets executed as part of the upgrade.
So please check the upgrade logs and alert.log to see whether there was an error.
I didn’t find an indication that there is an issue with upgrades.
Thanks Mike. I reviewed the upgrade log and confirm that it ran execsch.sql without error. Then the upgrade called the script rdbms/admin/a1202000.sql, which actually drops the program:
09:34:59 SQL> Rem DESCRIPTION
09:34:59 SQL> Rem Additional upgrade script to be run during the upgrade of an
09:34:59 SQL> Rem 12.2.0 database to the new release.
09:34:59 SQL> Rem
09:34:59 SQL> Rem This script is called from catupgrd.sql and a1201000.sql
09:34:59 SQL> Rem *************************************************************************
09:34:59 SQL> Rem BEGIN changes for Scheduler new dbms_ischedfw package
09:34:59 SQL> Rem *************************************************************************
09:34:59 SQL> -- Drop File watcher job and program
09:34:59 SQL> BEGIN
09:34:59 2 dbms_scheduler.disable('SYS.FILE_WATCHER', TRUE);
09:34:59 3 dbms_scheduler.stop_job('SYS.FILE_WATCHER', TRUE);
09:34:59 4 EXCEPTION
09:34:59 5 WHEN others THEN
09:34:59 6 IF sqlcode = -27366 or sqlcode = -27476 THEN
09:34:59 7 NULL; — Supress job not running (27366), “does not exist” (27476)
09:34:59 8 ELSE
09:34:59 9 raise;
09:34:59 10 END IF;
09:34:59 11 END;
09:34:59 12 /
PL/SQL procedure successfully completed.
I’ll continue to look at what is about the a1202000.sql ….
Thanks for pointing out the execsch.sql for me.
After the upgrade to 19c, I can’t use FILE_WATCHERs anymore either. I have the same problem as Jenifer: the FILE_WATCHER job run by the scheduler in the background is gone.
If I run dbms_isched.file_watch_job manually, it works. My question now: can I recreate the FILE_WATCHER job manually using the information inside the script a1202000.sql?
SQL> oradebug setmypid
SQL> oradebug tracefile_name
SQL> exec dbms_ischedfw.file_watch_job ;
PL/SQL procedure successfully completed.
The trace file shows that the file watcher works and it triggers the execution of the job as always. I believe the problem is that it does not run by itself because the mentioned job is gone and not scheduled anymore.
this is a question I can’t answer. You may give it a try but I can’t guarantee it.
Recreating the file watcher program and the file watcher job does not work in my case; the file watcher does not execute anything. As a workaround, I did the following:
1. Create a shell script which runs these SQL commands
2. Create a job in DBMS_SCHEDULER using the filewatcher calendar as repeat_interval (in my case every 30 minutes) and type EXTERNAL_SCRIPT with the credentials of the oracle owner
3. The shell script basically triggers the SQL if it finds the file(s) I am looking for. Obviously, I had to be smart enough to put in a flag so that, if the filewatcher runs again after 20 minutes while the chain it triggered is still running, it does not run again.
Anyway, I opened a case with Oracle Support to verify whether this is a bug or not, but it looks like it is. I will keep you updated.
In my case, as a workaround, I used the following: I created a shell script which runs the commands, and a job that runs it using the calendar schedule expression defined for the filewatcher. The job triggers the SQL script. That way, it works.
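The workaround described above could be sketched roughly like this. All names, the path, and the 30-minute calendar are placeholders; an EXTERNAL_SCRIPT job requires a credential for the OS user:

begin
  -- placeholder credential for the oracle OS owner
  dbms_credential.create_credential(
    credential_name => 'ORACLE_OS_CRED',
    username        => 'oracle',
    password        => '********');
  -- job running the shell script every 30 minutes
  dbms_scheduler.create_job(
    job_name        => 'MY_FILEWATCH_JOB',
    job_type        => 'EXTERNAL_SCRIPT',
    job_action      => '/home/oracle/scripts/check_files.sh',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=30',
    credential_name => 'ORACLE_OS_CRED',
    enabled         => TRUE);
end;
/

The shell script itself then checks for the expected file(s) and calls sqlplus only when they are present.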
Thanks a lot for sharing, Roberto!!!
We’ve noticed this interesting and maybe also important change in jobs defined via the dbms_job interface.
After the upgrade from 12.2 to 19.9, the NLS_ENV attributes of these jobs were CHANGED!
You can check these values with this query
select job,what,nls_env from dba_jobs ;
For example, before the upgrade the value for one of these jobs was
NLS_LANGUAGE='CZECH' NLS_TERRITORY='CZECH REPUBLIC' …
After the upgrade the value has changed to NLS_LANGUAGE='AMERICAN' NLS_TERRITORY='AMERICA' …
(the new values are the same as our database defaults)
SQL results of some jobs can depend on these parameters !
Thanks a lot for this hint. I have a queue of things to blog about regarding what is not exactly the same between DBMS_JOB and the scheduler when you move to 19c. While it works well for 95% of all environments, there is a small chance of a change. And I confess that this is not documented well. Connor McDonald had some blog posts, and I have plans for some as well.
Just want to confirm: having job_queue_processes at a non-zero value is NOT a prerequisite for AutoUpgrade, is that correct? We want to set it to zero because datapatch is reporting a locking issue on one of the databases, as there is a materialized view refresh that runs every 15 minutes. We will be reinstating the original value afterwards.
job_queue_processes must not be set to 0 during an upgrade. And there won’t be any MV refresh during the upgrade since that is disabled in STARTUP UPGRADE mode.
Mike – is there a way to get the dbms_job => scheduler conversion call of the upgrade process as a standalone utility?
We would like to do this ahead of the upgrade to reduce risk:
- Run the script against a cloned environment.
- Generate the drops for the DBMS_JOB jobs.
- Rebuild them as scheduler-only jobs.
- Promote to prod.
All prior to the upgrade.
I raised an SR, but they said no.
You need to do this beforehand via a script (for instance "upg1.before_action=myscript.sh Y").
I fear that you can’t run jobs during the upgrade since STARTUP UPGRADE disables this functionality intentionally.
We set broken=Y and then run AutoUpgrade. The jobs are migrated to the scheduler, which is good. There seems to be some kind of relationship between job and scheduler: if we set broken=Y/N on the job, it disables/enables the scheduler entry. If we enable on the scheduler side, it doesn’t set the next run date properly – it is scheduled in the year 4000. We can get around this by running broken=Y/N again. So now we are in a bit of a dilemma and in some sort of guessing game: if we do a remove on the job, will it remove the scheduler entry as well, and vice versa? Is there no way to break this ‘relationship’ so we can just remove the DBA_JOBS one? I suppose the only way is to recreate the job on the scheduler? That completely negates the fact that AutoUpgrade has just done it for you though 🙁 …
P.S. – running some test atm
this is an area where I do need to do some research by myself. Connor McDonald has some blog posts about it, too.
And once time allows, I need to dig deeper.
Meanwhile, you please need to check with Oracle Support.
We have recently upgraded from Oracle 12c to 19c. We found out DBMS_SCHEDULER uses the session timezone for the start date instead of the DB timezone. The issue here is that my session timezone is EST and the DB timezone is UTC. When I create a job with the start date as SYSDATE, I would like it to run immediately. Instead, the scheduler converts the start time to UTC+4.
Can we not make the scheduler use the DB timezone instead of the session timezone, or have the time conversions done correctly?
dbms_scheduler.stime shows EST time. Thanks in advance,
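One possible mitigation – a sketch to be tested in your environment: give the scheduler an explicit default timezone so that jobs created with a plain DATE start date resolve against it rather than against the session timezone, or pass a TIMESTAMP WITH TIME ZONE as the start date:

begin
  dbms_scheduler.set_scheduler_attribute('default_timezone', 'Etc/UTC');
end;
/
-- or, per job, make the timezone explicit in the start date:
-- start_date => systimestamp at time zone 'Etc/UTC'

Note that DEFAULT_TIMEZONE is a documented scheduler attribute; whether it fixes this particular conversion behavior should be verified with a test job.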
I think this is one of the limitations we learned about later.
Did you open an SR with an example?
If yes, could you please share the SR number with me?
Finally managed to test as below:
On a 12.2.0.1 database, set broken to Y, and the start date changes to the year 4000. So this is not a 19c bug; maybe the job/scheduler team has set this to be the case.
SQL> select * from v$version ;
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 – 64bit Production 0
PL/SQL Release 12.2.0.1.0 – Production 0
CORE 12.2.0.1.0 Production 0
TNS for Solaris: Version 12.2.0.1.0 – Production 0
NLSRTL Version 12.2.0.1.0 – Production 0
5 rows selected.
345 04-JUL-2022 13:56:06
1 row selected.
SQL> exec dbms_job.broken( &job, TRUE);
Enter value for job: 345
PL/SQL procedure successfully completed.
SQL> select what, job, next_date from dba_jobs where job=345 ;
345 01-JAN-4000 00:00:00
1 row selected.
For jobs that AutoUpgrade has migrated to the scheduler, we do want to move forward and just use the scheduler. The migrated jobs that are now on the scheduler are working fine, so we now want to remove the ones in dba_jobs. Doing dbms_job.remove, however, also removes the job from dba_scheduler_jobs, and the same thing happens if we remove from the scheduler side. Not sure if there is any way around this. Same behaviour after the 19c upgrade: when we create a new job via dbms_job, it creates one in the scheduler too. Can’t find a doc saying this is the expected behaviour from 19c onward. Not sure if there is a parameter to turn this behaviour off/on.
I think it is intentional to keep the entry in DBMS_JOB as well for those being migrated over.
And just to be precise, it isn’t AutoUpgrade but the upgrade/migration script run as part of the database upgrade and provided by the team owning the job interface with DBMS_JOB and DBMS_SCHEDULER. I still have several examples in my inbox to test/verify/blog about. As soon as time allows … and meanwhile, please have an SR to check this (which I think you did already).
Thanks for letting me know!!!
Here’s a testing update.
Finally managed to DBCA a generic 19c database. It looks like dbms_job now creates a corresponding job in dba_scheduler_jobs as well. I thought this was just for jobs that had been migrated during AutoUpgrade. So on a new 19c database, a dbms_job.submit also creates a job in dba_scheduler_jobs, and I suppose it is expected that a dbms_job.remove does the remove in dba_scheduler_jobs too. I didn’t know that this was the case. It would be nice if they had allowed dbms_job.remove to accept another parameter (Y or N) so that it doesn’t remove the job in dba_scheduler_jobs, especially for those that have been migrated and are working.
The DBMS_JOB interface is still kept – you can still use it in 19c. But behind the scenes, it uses the scheduler.