We’ve set executor_lockTimeInMillis to some value. Our service tasks are implemented as JavaDelegates that hand the work off to our service. Sometimes a task takes longer to execute than lockTimeInMillis allows, and because Camunda doesn’t get a response in time, it starts executing the same task again. The result is the same task running in parallel with itself.
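Roughly, the setup looks like this (a simplified sketch, not our actual code; the delegate, service and class names below are placeholders, and the lock time is set programmatically on the job executor purely for illustration):

```java
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;
import org.camunda.bpm.engine.impl.cfg.StandaloneInMemProcessEngineConfiguration;
import org.camunda.bpm.engine.impl.jobexecutor.DefaultJobExecutor;

// Simplified sketch of our setup; names are placeholders.
public class Setup {

  public ProcessEngine buildEngine() {
    // Job executor with the lock time we keep having to increase.
    DefaultJobExecutor jobExecutor = new DefaultJobExecutor();
    jobExecutor.setLockTimeInMillis(5 * 60 * 1000); // 5 minutes, for example

    StandaloneInMemProcessEngineConfiguration cfg = new StandaloneInMemProcessEngineConfiguration();
    cfg.setJobExecutor(jobExecutor);
    cfg.setJobExecutorActivate(true);
    return cfg.buildProcessEngine();
  }

  // Async service task: the job executor acquires the job, locks it for
  // lockTimeInMillis and then calls this delegate.
  public static class LongRunningDelegate implements JavaDelegate {
    @Override
    public void execute(DelegateExecution execution) throws Exception {
      // Hands the work off to our service; with large data sets this call can
      // outlive the lock, and Camunda re-acquires and re-executes the job.
      new SomeLongRunningService().process(execution.getProcessInstanceId(),
                                           execution.getVariables());
    }
  }

  // Placeholder for the actual service.
  public static class SomeLongRunningService {
    public void process(String processInstanceId, java.util.Map<String, Object> variables) {
      // ...
    }
  }
}
```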
Depending on the data we have stored, some tasks can take quite long to execute, and the only solution we’ve found so far is to set lockTimeInMillis to a ridiculously long time. But what happens when a task comes along that takes even longer than that?
I’d say our biggest issue is that these retries happen indefinitely. If a task has no chance of completing within the configured lockTimeInMillis, Camunda keeps re-dispatching it every time the lock expires: the service task for that process instance never completes, and if the gap between the time the task needs and the lock time is large enough, we end up with an ever-growing number of parallel executions of the same task.
Any ideas how to solve this? Ideally we’d like to dynamically extend the lock for a given task instance from our service, once we know how much work it involves. Also, can we stop Camunda from calling execute after a certain number of retries?
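To illustrate the kind of lock extension we have in mind: the External Task API appears to offer exactly this, but our tasks are JavaDelegates driven by the job executor, so it doesn’t apply to our setup directly. A rough sketch, assuming the engine’s ExternalTaskService (topic name, worker id and durations are placeholders):

```java
import java.util.List;

import org.camunda.bpm.engine.ExternalTaskService;
import org.camunda.bpm.engine.externaltask.LockedExternalTask;

// Sketch of the lock-extension behaviour we'd like, shown with the External
// Task API for comparison; not something we can use as-is with JavaDelegates.
public class LockExtensionSketch {

  private static final String WORKER_ID = "worker-1";

  public void pollAndWork(ExternalTaskService externalTaskService) {
    List<LockedExternalTask> tasks = externalTaskService
        .fetchAndLock(1, WORKER_ID)
        .topic("long-running-topic", 5 * 60 * 1000L) // initial lock
        .execute();

    for (LockedExternalTask task : tasks) {
      // Once we know how much work this instance involves, extend the lock
      // instead of guessing a huge value up front.
      externalTaskService.extendLock(task.getId(), WORKER_ID, 30 * 60 * 1000L);

      // ... do the actual work ...

      externalTaskService.complete(task.getId(), WORKER_ID);
    }
  }
}
```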