OptimisticLockingException for parallel service tasks

Hi there, I modeled 3 tasks in parallel using a parallel gateway, with Async Before=true and Exclusive=false. When the process ran, it threw an OptimisticLockingException and some of the parallel tasks ran multiple times.

org.camunda.bpm.engine.OptimisticLockingException: ENGINE-03005 Execution of 'UPDATE ExecutionEntity[359ae2fc-0041-11ec-901a-aeb181d0d01e]' failed. Entity was updated by another transaction concurrently.
	at org.camunda.bpm.engine.impl.db.EnginePersistenceLogger.concurrentUpdateDbEntityException(EnginePersistenceLogger.java:135)
	at org.camunda.bpm.engine.impl.db.entitymanager.DbEntityManager.handleConcurrentModification(DbEntityManager.java:411)

Any idea how the tasks can run in parallel without any issues?
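
In case it helps, the model corresponds roughly to this Camunda fluent Model API sketch (an assumption on my part: the async/exclusive flags sit on the three service tasks, and the delegate expressions are placeholders):

    import org.camunda.bpm.model.bpmn.Bpmn;
    import org.camunda.bpm.model.bpmn.BpmnModelInstance;

    public class ParallelTasksModel {

        // Three service tasks between a parallel fork and join, each with
        // asyncBefore = true and exclusive = false.
        public static BpmnModelInstance build() {
            return Bpmn.createExecutableProcess("parallelDemo")
                .startEvent()
                .parallelGateway("fork")
                .serviceTask("task1")
                    .camundaAsyncBefore().camundaExclusive(false)
                    .camundaDelegateExpression("${serviceDelegate1}") // placeholder
                .parallelGateway("join")
                .endEvent()
                .moveToNode("fork")
                .serviceTask("task2")
                    .camundaAsyncBefore().camundaExclusive(false)
                    .camundaDelegateExpression("${serviceDelegate2}") // placeholder
                .connectTo("join")
                .moveToNode("fork")
                .serviceTask("task3")
                    .camundaAsyncBefore().camundaExclusive(false)
                    .camundaDelegateExpression("${serviceDelegate3}") // placeholder
                .connectTo("join")
                .done();
        }
    }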

Hi,

This is the expected behaviour. I assume you have a parallel fork -> service tasks -> join. The optimistic locking exception occurs at the join, as multiple concurrent transactions try to update the same join. Only one of them can succeed; the others fail with the optimistic locking exception, and the corresponding tasks are retried.

The exclusive flag prevents this behaviour, as it enforces that only one parallel execution is running at a time. When you set it to false, this is the behaviour you get…

To avoid rerunning the service tasks, you could also set them to async after. You will still get optimistic locking exceptions; however, their scope will now be just the join, not the service tasks. An exception with a subsequent retry at the join is usually benign.

The real question is why do you want to run parallel, non-exclusive tasks? Often the motivation is greater performance; however, the overhead of synchronising parallel paths often exceeds any performance gains…

The guideline I advocate is to use parallel flows when the order of tasks is not important, rather than treating a parallel path as truly parallel processing…

regards

Rob


Thank you @Webcyberrob for your response.

You are right. Our purpose in running the tasks in parallel is to speed up overall execution time.

I attached a screenshot of my test business process that uses a parallel gateway.

I used Async Before=true & Exclusive=false for the parallel gateway (left/fork), then Async After=true & Exclusive=false for all three parallel service tasks, and no async for the parallel gateway (right/join). This made all the parallel tasks run serially in a single thread, one after another, and I still got a warning about an OptimisticLockingException, but no retries of the parallel tasks.

Is there a way to make the parallel tasks run truly in parallel, without any exceptions and without retries?

Thank you!

Hi,

I would suggest leaving the left fork as part of the preceding flow and just marking your three service tasks as async before and async after, with exclusive set to false.
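
As a rough fluent Model API sketch of those flags (the same settings can be made in the Modeler; the delegate expressions are placeholders):

    import org.camunda.bpm.model.bpmn.Bpmn;
    import org.camunda.bpm.model.bpmn.BpmnModelInstance;

    public class ParallelTasksSuggested {

        // Fork and join stay synchronous; each service task is async before
        // and after, and non-exclusive.
        public static BpmnModelInstance build() {
            return Bpmn.createExecutableProcess("parallelDemoSuggested")
                .startEvent()
                .parallelGateway("fork")
                .serviceTask("task1")
                    .camundaAsyncBefore().camundaAsyncAfter().camundaExclusive(false)
                    .camundaDelegateExpression("${serviceDelegate1}") // placeholder
                .parallelGateway("join")
                .endEvent()
                .moveToNode("fork")
                .serviceTask("task2")
                    .camundaAsyncBefore().camundaAsyncAfter().camundaExclusive(false)
                    .camundaDelegateExpression("${serviceDelegate2}") // placeholder
                .connectTo("join")
                .moveToNode("fork")
                .serviceTask("task3")
                    .camundaAsyncBefore().camundaAsyncAfter().camundaExclusive(false)
                    .camundaDelegateExpression("${serviceDelegate3}") // placeholder
                .connectTo("join")
                .done();
        }
    }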

If you really don't need to wait for all three service tasks to complete, don't even use a join; you could send two of the parallel paths to their own end events. (I prefer symmetric diagrams, e.g. each fork has a corresponding join, hence this is a bit of an anti-pattern…)

You may still find that serial execution is faster in terms of throughput, as parallel execution requires a lot more overhead…

regards

Rob


Thank you so much, Rob! This worked as expected.

This is a dummy use case that I am trying to implement.

I agree with your point about symmetric diagrams, e.g. each fork has a corresponding join, hence skipping the join is a bit of an anti-pattern…

I am thinking of use cases where my service tasks make remote service calls and wait for responses (I/O), and those calls can be parallelized. I will keep your suggestion in mind, do actual load testing, and tune accordingly.

Once again, thank you so much for your valuable time!

Jana

The real question is why do you want to run parallel, non-exclusive tasks?

For example, if one wants to implement a timeout warning on a group of tasks: a timer can fire and send messages or change specific states of external data objects. In that case, true parallelism is needed.

However, when implementing this in business processes, we see OptimisticLockingExceptions being triggered. The timer task therefore interferes with regular processing. This is quite problematic, as the intended parallel timer task is completely independent of the other tasks; the only side effect on the other tasks is the OptimisticLockingExceptions it raises.

We have no idea how to solve this problem. If you have any suggestions, please let me know; they are very welcome!

Hi,

Without a process model it is hard to see what you want to achieve…

Perhaps a non-interrupting event subprocess is what you want… in this case you could use a non-interrupting timer event…

You can nest these inside inline subprocess scopes so they are only active when your process is in particular process states…
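
As a rough sketch using the fluent Model API (the duration, IDs and delegate are placeholders; if the timer should only be armed during part of the process, nest the event subprocess inside an embedded subprocess scope instead):

    import org.camunda.bpm.model.bpmn.Bpmn;
    import org.camunda.bpm.model.bpmn.BpmnModelInstance;
    import org.camunda.bpm.model.bpmn.builder.ProcessBuilder;

    public class TimeoutWarningSketch {

        public static BpmnModelInstance build() {
            ProcessBuilder process = Bpmn.createExecutableProcess("timeoutWarningDemo");

            // Non-interrupting timer event subprocess: fires the warning
            // without cancelling the main flow.
            process.eventSubProcess()
                .startEvent("timeoutTimer")
                    .timerWithDuration("PT10M") // placeholder duration
                    .interrupting(false)
                .serviceTask("sendWarning")
                    .camundaDelegateExpression("${warningDelegate}") // placeholder
                .endEvent();

            // Main flow being monitored (placeholder task).
            return process
                .startEvent()
                .userTask("mainWork")
                .endEvent()
                .done();
        }
    }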

Regards

Rob