OptimisticLockingException even on tasks not modifying data

Hi

OptimisticLocking hits again…

I have a parallel gateway with two async service tasks; I want them to run in parallel so the process finishes as fast as possible.

Both tasks only wait for a few seconds. They don’t modify any variables in the execution context or anything else, just System.out.println and sleep, but I still get an OptimisticLockingException:

10:51:36.546 [taskExecutor-2] WARN org.camunda.bpm.engine.jobexecutor - ENGINE-14006 Exception while executing job 32af0977-ebd6-11e7-9667-0a0027000004:
org.camunda.bpm.engine.OptimisticLockingException: ENGINE-03005 Execution of 'UPDATE ExecutionEntity[32aa4e7e-ebd6-11e7-9667-0a0027000004]' failed. Entity was updated by another transaction concurrently.

10:51:36.562 [taskExecutor-2] WARN org.camunda.bpm.engine.jobexecutor - ENGINE-14006 Exception while executing job 32af0977-ebd6-11e7-9667-0a0027000004:
org.camunda.bpm.engine.OptimisticLockingException: ENGINE-03005 Execution of 'UPDATE ExecutionEntity[32aa4e7e-ebd6-11e7-9667-0a0027000004]' failed. Entity was updated by another transaction concurrently.

I thought that would happen only if different tasks updated the same value, but it seems the JobExecutor raises the exception when marking the tasks as completed.
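That is consistent with how optimistic locking works in general: each row carries a revision column, and every UPDATE is guarded by the revision the transaction read. When two jobs finish concurrently and both touch the same execution row, one UPDATE wins and the other sees a stale revision. A toy, self-contained sketch of that mechanism (all names are illustrative, this is not Camunda internals):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticLockDemo {
    // Stands in for the REV_ column of a shared execution row.
    static final AtomicInteger revision = new AtomicInteger(1);

    // Simulates "UPDATE ... WHERE REV_ = ?": succeeds only if the revision
    // is still the one this transaction read; otherwise the engine would
    // throw an OptimisticLockingException and retry the job.
    static boolean completeTask(int expectedRevision) {
        return revision.compareAndSet(expectedRevision, expectedRevision + 1);
    }

    public static void main(String[] args) {
        int rev = revision.get();           // both jobs read revision 1
        boolean first  = completeTask(rev); // first UPDATE bumps it to 2
        boolean second = completeTask(rev); // second sees a stale revision
        System.out.println("first update ok:  " + first);  // true
        System.out.println("second update ok: " + second); // false -> retry
    }
}
```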

How can I deal with this?
What would be the point of running something in parallel if the OptimisticLockingException then makes the entire task run again, making the execution longer than if it ran sequentially?

Thanks

I’m adding my simple BPMN definition in case I’m missing something in it:

camundaParallel.bpmn (6.5 KB)

Hi - personally I prefer to interpret the BPMN parallel gateway as indicating that the order in which tasks are performed is irrelevant and that they can even be performed concurrently. I don’t really believe it is about parallel processing in the sense of parallel threads…

Hence using a parallel path to improve processing time may be false economy. Consider the additional overhead required to manage the parallel processing: assuming asynchronous continuations, two or more jobs need to be flushed to the database. The job executor then needs to acquire these jobs (a database read), lock them (a database write), and only then perform the actual execution. Afterwards the jobs have to be deleted from the job table and the joining gateway needs to be updated.

Hence it could be that serial execution is faster than parallel execution due to the lower overhead…

P.S. Consider marking the join gateway as async-before to reduce optimistic locking conflicts and thus retries of your tasks…
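For reference, in the BPMN XML that would look roughly like the fragment below (the element id is made up, and the `camunda` namespace must already be declared on the definitions element, as Camunda Modeler does by default):

```xml
<!-- Illustrative fragment only. camunda:asyncBefore="true" gives the join
     its own job, so each incoming branch completes in its own transaction
     instead of both racing to update the same execution row at the join. -->
<parallelGateway id="ParallelGateway_Join" camunda:asyncBefore="true" />
```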

regards

Rob


Thanks for your detailed response. Marking the join gateway as async-before solved the OptimisticLockingException issues.
Now, with this working example and the experience gained, I’ll be able to explain to my client why sequential execution can be better.
