we have a problem when fetching external tasks asynchronously:
We fetch external tasks periodically
We execute and complete them asynchronously
The problem is that any call to fetchAndLock that occurs before we complete a task makes the completion fail with an optimistic locking exception. At the moment we work around this by keeping track of running tasks and not calling fetchAndLock while any of them are still in progress, but this typically reduces our ability to parallelize tasks.
In production we could also make the polling interval longer than the average time it takes to complete a task, but this is not a real solution either.
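For reference, here is a minimal sketch (not our actual code) of the pattern described above, using the embedded Java API: a scheduled poller calls fetchAndLock and hands the locked tasks to a thread pool that completes them asynchronously. Worker id, topic name, lock duration and pool sizes are placeholders. If processing takes longer than the lock duration, another fetchAndLock can re-acquire the task and our completion fails.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.camunda.bpm.engine.ExternalTaskService;
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.externaltask.LockedExternalTask;

public class PollingWorker {

  private static final String WORKER_ID = "worker-1";     // placeholder
  private static final String TOPIC = "someTopic";        // placeholder
  private static final long LOCK_DURATION_MS = 10_000L;   // placeholder

  private final ExternalTaskService externalTaskService;
  private final ScheduledExecutorService poller = Executors.newSingleThreadScheduledExecutor();
  private final ExecutorService workers = Executors.newFixedThreadPool(4);

  public PollingWorker(ProcessEngine engine) {
    this.externalTaskService = engine.getExternalTaskService();
  }

  public void start() {
    // poll periodically; every fetched task is processed and completed asynchronously
    poller.scheduleAtFixedRate(this::fetchAndDispatch, 0, 5, TimeUnit.SECONDS);
  }

  private void fetchAndDispatch() {
    List<LockedExternalTask> tasks = externalTaskService
        .fetchAndLock(10, WORKER_ID)
        .topic(TOPIC, LOCK_DURATION_MS)
        .execute();

    for (LockedExternalTask task : tasks) {
      workers.submit(() -> {
        doWork(task);
        // if the lock expired while doWork ran and another poll re-locked the
        // task, this call fails with an OptimisticLockingException
        externalTaskService.complete(task.getId(), WORKER_ID);
      });
    }
  }

  private void doWork(LockedExternalTask task) {
    // placeholder for the actual external work
  }
}
```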
Only include external tasks that are currently not locked
(i.e. they have no lock or it has expired).
Value may only be true, as false matches any external task.
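For illustration, querying only unlocked tasks over REST could look like the sketch below; the base URL http://localhost:8080/engine-rest is an assumption and has to be adapted to the actual deployment.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class NotLockedQuery {

  public static void main(String[] args) throws Exception {
    // GET /external-task?notLocked=true returns only tasks without an active lock
    URL url = new URL("http://localhost:8080/engine-rest/external-task?notLocked=true");
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    connection.setRequestMethod("GET");

    try (BufferedReader reader = new BufferedReader(
        new InputStreamReader(connection.getInputStream(), "UTF-8"))) {
      StringBuilder body = new StringBuilder();
      String line;
      while ((line = reader.readLine()) != null) {
        body.append(line);
      }
      // JSON array of external tasks that have no lock or whose lock has expired
      System.out.println(body.toString());
    }
  }
}
```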
I'm puzzled - I thought the intent of fetch and lock was to enable processing nodes to "reserve" and thus take ownership of unprocessed external tasks. Then, when the task is complete, mark it as complete. In other words, the behaviour is the same as the job executor's acquisition and processing...
Surprisingly there is no "notLocked" on the Java API in 7.5.3. My feeling @StephenOTT is that we should abandon the idea of interacting with the engine in-memory and start interacting only via REST
@Webcyberrob your point is fair, though you obviously need a polling consumer that fetches tasks and completes them. If you do this asynchronously, you might end up fetching the same task twice (thereby increasing its version number in the database), and you will get an exception when you try to complete it.
Just to clarify the behavior: ExternalTaskService#fetchAndLock should never return the same task twice unless the lock timeout has expired in the meantime. The idea of this method is to fetch and lock a task in one transaction. Of course, locking the task may fail with an OptimisticLockingException if there are multiple such requests in parallel. In that case, the exception is caught within the engine and fewer tasks are returned than requested.
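To illustrate these semantics, here is a small sketch against the Java API (topic name, worker ids and the 60 s lock duration are made up): as long as the locks from the first fetch are still active, a second fetch will not return the same tasks.

```java
import java.util.List;

import org.camunda.bpm.engine.ExternalTaskService;
import org.camunda.bpm.engine.externaltask.LockedExternalTask;

public class FetchSemanticsDemo {

  static void demo(ExternalTaskService externalTaskService) {
    // first fetch locks up to 5 tasks for 60 s in a single transaction
    List<LockedExternalTask> first = externalTaskService
        .fetchAndLock(5, "worker-A")
        .topic("someTopic", 60_000L)
        .execute();

    // while those locks are active, a second fetch (even by another worker)
    // does not return the same tasks; if two fetches race for one task, the
    // internal OptimisticLockingException is caught and fewer tasks than
    // requested are returned
    List<LockedExternalTask> second = externalTaskService
        .fetchAndLock(5, "worker-B")
        .topic("someTopic", 60_000L)
        .execute();

    System.out.println("first fetch: " + first.size() + ", second fetch: " + second.size());
  }
}
```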