Process #1: a short-running process (a few seconds) that starts with a Message Start Event.
Process #2: a short-running process (a few minutes at most).
There were ~300 messages delivered in a short period of time to the Message Start Event of process #1. Process #1 sends messages to process #2.
Process #1 processed about half of the messages, then process instances started to stack up at the Message Start Event, and the job executor appeared to be stuck/stalled/hanging.
Both process #1 and #2 have a large number of async tasks and multiple parallel branches.
When I restarted the engine, all of the instances waiting at the Message Start Event in process #1 and all waiting tokens in process #2 were processed almost instantly.
There was no error related to messages that I could see in the logs, but if there is a keyword I should search for, I can look back through the logs.
Does anyone have ideas on what would cause the job executor to hang/stall/get stuck?
Thanks!
Edit:
Note that if I look up the job and manually execute it through the REST API, it executes successfully, but then gets stuck at the next wait state.
I've experienced similar behaviour on rare occasions in a dev environment, and once in production on a very old version. I've never tracked down the cause…
A few questions - how are these processes deployed: as part of a process application or via the REST API? (I ask as I'm still testing the behaviour of deployment-aware setups versus deployment via the REST API…)
I've tried unsuccessfully to trap the job executor SQL query when this condition occurs - perhaps you may have better luck. I've not been able to consistently reproduce the behaviour, particularly when I'm logging the DB queries…
I'm afraid the log is not very useful without the SQL statements being logged. The latter would tell us exactly when and how the engine queries for jobs.
Calling up the active job through the REST API for the process instance shows a normal-looking process (a payload you would expect).
If I try to manually execute the job from the Start Event, it will execute the transaction and then wait at the next task (in this case, a service task running http-connector). When I manually execute the second job (the service task), I never receive a response from the server. I am using Postman, and the request just runs “forever” - it never seems to time out or return a server error.
There is a lot of http-connector usage in the process definition. Is there a possible connection with http-connector using up resources? I have my doubts about this, given that the logs show the executor looking for jobs but finding none (even for jobs that are waiting at a start event).
Edit:
Further Context:
Just noticed that if you manually execute a job that is stuck at the Start Event, the execution of the job does not show up in the logs. Is this normal? @thorben
If I try to execute the job a second time (after the job has already completed), I get an InvalidRequestException and the error shows up in the logs.
For the loggers that we activated, this is normal. Another useful logger is org.camunda.bpm.engine.cmd which logs whenever the engine begins and finishes commands. This logger should write something whenever you start executing a job.
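For reference, enabling that logger in a Logback configuration could look like the fragment below. This assumes Logback is the SLF4J backend in use; adjust the syntax if your installation uses a different logging framework:

```xml
<!-- logback.xml fragment: assumes Logback as the logging backend -->
<logger name="org.camunda.bpm.engine.cmd" level="DEBUG"/>
```

With that level active, you should see a log line whenever the engine begins and finishes a command, including manual job execution.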
This (and the exceptions in your log file) sounds very much like the HTTP requests take a long time and therefore consume all of the job executor's threads. In this case, the job acquisition thread slows down and acquires fewer jobs, or no jobs at all, until jobs can be executed again.
Could you check if your HTTP endpoint is generally slow? You can also configure Connect to use a timeout when making requests, so the jobs should fail in this case and free up the execution resources. See HTTP Connector | docs.camunda.org and http://www.baeldung.com/httpclient-timeout (section 4) for an example.
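Camunda's http-connector is backed by Apache HttpClient (which the Baeldung link covers), but the timeout idea itself can be sketched in a self-contained way with the JDK's own HttpClient (Java 11+). The endpoint URL and the timeout values below are illustrative assumptions, not your actual configuration:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.time.Duration;

public class HttpTimeoutSketch {
    public static void main(String[] args) {
        // Bound how long we wait to establish a TCP connection.
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5)) // illustrative value
                .build();

        // Bound how long we wait for the response. Without this, a hung
        // endpoint can hold the calling thread indefinitely - which is how
        // slow HTTP calls can exhaust the job executor's thread pool.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://example.org/endpoint")) // hypothetical endpoint
                .timeout(Duration.ofSeconds(10)) // illustrative value
                .GET()
                .build();

        System.out.println(client.connectTimeout().get());
        System.out.println(request.timeout().get());
    }
}
```

If all executor threads are blocked in such calls, a response timeout lets the jobs fail fast instead, returning the threads to the pool so acquisition can continue.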
So I ran some further tests: running curl or wget directly from the shell of the Camunda server against the HTTP endpoints that http-connector is connecting to produced consistently fast responses.