Camunda Job Executor stuck with RejectedExecutionException forever

Hi,
Using Camunda 7.19 with the Spring Boot starter, I have a BPMN comprising:

  1. Service Task → Make HTTP Call
  2. Timer
  3. Service Task - External Topic
  4. End

It runs fine for a while, but after some time the JobExecutor stops picking up new jobs and is stuck with the following logs:
{"@timestamp":"2023-11-07T07:43:14.87Z","timestamp":"2023-11-07 13:13:14.870","level":"DEBUG","reqId":"","message":"ENGINE-14011 Job acquisition thread sleeping for 99 millis"}
{"@timestamp":"2023-11-07T07:43:14.974Z","timestamp":"2023-11-07 13:13:14.974","level":"DEBUG","reqId":"","message":"ENGINE-14012 Job acquisition thread woke up"}
{"@timestamp":"2023-11-07T07:43:14.974Z","timestamp":"2023-11-07 13:13:14.974","level":"DEBUG","reqId":"","message":"ENGINE-14012 Job acquisition thread woke up"}
{"@timestamp":"2023-11-07T07:43:14.974Z","timestamp":"2023-11-07 13:13:14.974","level":"DEBUG","reqId":"","message":"ENGINE-14022 Acquired 0 jobs for process engine 'default': []"}
{"@timestamp":"2023-11-07T07:43:14.974Z","timestamp":"2023-11-07 13:13:14.974","level":"DEBUG","reqId":"","message":"ENGINE-14022 Acquired 0 jobs for process engine 'default': []"}
{"@timestamp":"2023-11-07T07:43:30.599Z","timestamp":"2023-11-07 13:13:30.599","level":"DEBUG","reqId":"","message":"ENGINE-14023 Execute jobs for process engine 'default': [4ec3a589-7d34-11ee-9caf-0a928f8b5397]"}
{"@timestamp":"2023-11-07T07:43:30.599Z","timestamp":"2023-11-07 13:13:30.599","level":"DEBUG","reqId":"","message":"ENGINE-14023 Execute jobs for process engine 'default': [4ec3a589-7d34-11ee-9caf-0a928f8b5397]"}

This even happens locally. On a deeper dive, we appear to be getting an exception while submitting tasks to the ThreadPoolExecutor, and the job executor keeps retrying those jobs instead of fetching new ones.

Exception:
java.util.concurrent.RejectedExecutionException: Task org.camunda.bpm.engine.impl.jobexecutor.ExecuteJobsRunnable@38960577 rejected from java.util.concurrent.ThreadPoolExecutor@605380a6[Running, pool size = 10, active threads = 10, queued tasks = 3, completed tasks = 138]
    at java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2065)
    at java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:833)
    at java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1365)
    at org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor.execute(ThreadPoolTaskExecutor.java:360)
    at org.camunda.bpm.engine.spring.components.jobexecutor.SpringJobExecutor.executeJobs(SpringJobExecutor.java:59)
    at org.camunda.bpm.engine.impl.jobexecutor.SequentialJobAcquisitionRunnable.executeJobs(SequentialJobAcquisitionRunnable.java:139)
    at org.camunda.bpm.engine.impl.jobexecutor.SequentialJobAcquisitionRunnable.run(SequentialJobAcquisitionRunnable.java:81)
    at java.base/java.lang.Thread.run(Thread.java:833)
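For what it's worth, the rejection itself looks like plain ThreadPoolExecutor behaviour: Spring's ThreadPoolTaskExecutor keeps the JDK's default AbortPolicy, so once all worker threads are busy and the bounded queue is full, any further execute() call throws. Here is a minimal standalone sketch, nothing Camunda-specific; the pool and queue sizes are just chosen to match the numbers in the stack trace above:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectedExecutionDemo {

    public static void main(String[] args) {
        // 10 worker threads, queue capacity 3, AbortPolicy (the JDK default) --
        // the same shape as the executor in the stack trace above
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                10, 10, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(3),
                new ThreadPoolExecutor.AbortPolicy());

        // Simulates a job whose HTTP call never returns
        Runnable hangingJob = () -> {
            try {
                Thread.sleep(Long.MAX_VALUE);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        // 10 jobs occupy every thread, 3 more fill the queue ...
        for (int i = 0; i < 13; i++) {
            pool.execute(hangingJob);
        }

        // ... and the next submission is rejected, like in our logs
        try {
            pool.execute(hangingJob);
        } catch (RejectedExecutionException e) {
            System.out.println("Rejected: " + e.getMessage());
        }

        pool.shutdownNow();
    }
}
```

I assume raising the pool/queue sizes via the starter's camunda.bpm.job-execution properties would give more headroom, but if the jobs themselves never finish, that should only postpone the rejection.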

It starts picking up jobs again when the server is restarted, but meets the same fate after a few hours.
We also have NodeJS external task client pollers running against the same engine.

Hi @guptaashish327,

I’ve seen a similar effect once, and it turned out that some HTTP calls made from the HTTP connector never returned and filled up the job queue.

No more jobs could be acquired.

Check whether the HTTP calls actually return a response.
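If the call is made from your own delegate code rather than the connect HTTP connector, the simplest safeguard is a hard timeout on every request, so a hanging endpoint can never pin a job executor thread indefinitely. A rough sketch with java.net.http.HttpClient; the class name, URL and timeout values are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

// Hypothetical delegate; the point is the explicit connect/request timeouts
public class MakeHttpCallDelegate implements JavaDelegate {

    private final HttpClient client = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(5))   // fail fast if the endpoint is unreachable
            .build();

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.org/api"))  // placeholder URL
                .timeout(Duration.ofSeconds(30))             // hard cap on the whole request
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // Timing out or failing here lets the job fail, retry and free the thread,
        // instead of blocking a job executor thread forever
        execution.setVariable("statusCode", response.statusCode());
    }
}
```

If you stay on the HTTP connector, the equivalent would be configuring connect/read timeouts on its underlying HTTP client; I don't have that configuration at hand.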

Hope this helps, Ingo