Multi-instance, collecting outputs

Hi, I am experimenting with a parallel multi-instance user task in Camunda 7.21. I am trying to satisfy the following requirements:

  1. For a given list of approvers, ask each approver (in parallel) for their decision (approved or rejected)
  2. Capture the decision of each user
  3. If any user rejects, complete the multi-instance immediately, cancel the remaining approval requests, and make a note (for later) that the multi-instance was short-circuited.

Here is my model:


multi_instance_collect_part_1.bpmn (7.8 KB)

  • The user task has a form variable and an input variable called “decision” to ensure the decision variable is scoped to the user task only.
  • A task completion listener puts the (local) decision variable into the process-scoped “approvers” variable (a map), using the current task assignee as the key (see the listener sketch after this list).
  • A completion condition is defined based on “decision”, evaluated when each user task completes.
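
For reference, a minimal sketch of what that completion listener does (a sketch only, not the exact class from the model; it assumes the task-scoped variable is called “decision” and the process-scoped map is called “approvers”):

```java
import java.util.HashMap;
import java.util.Map;

import org.camunda.bpm.engine.delegate.DelegateTask;
import org.camunda.bpm.engine.delegate.TaskListener;

public class CollectDecisionListener implements TaskListener {

  @Override
  @SuppressWarnings("unchecked")
  public void notify(DelegateTask task) {
    // Resolve the "decision" variable scoped to this task / activity instance.
    String decision = (String) task.getVariable("decision");

    // Read-update-write the process-scoped map, keyed by the current assignee.
    Map<String, String> approvers =
        (Map<String, String>) task.getExecution().getVariable("approvers");
    if (approvers == null) {
      approvers = new HashMap<>();
    }
    approvers.put(task.getAssignee(), decision);
    task.getExecution().setVariable("approvers", approvers);
  }
}
```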

This works in prototype testing, but I would really appreciate feedback on the reliability of this approach.

I’ve read a lot of posts on multi-instance in Camunda 7, but it is still unclear to me whether this is a valid way to “collect” results safely in a parallel multi-instance. My understanding is that “approvers” would be optimistically locked, so I would have to code for (and retry on) the possibility that two tasks complete concurrently: one of them would “lose” with an optimistic locking exception, and any work in that user’s transaction would be rolled back. Is that correct?

Many thanks !

First of all: thank you for a good question! :slight_smile:

IMO this is not a valid approach, since setting a process-scoped variable from within a parallel multi-instance activity always bears the risk of a race condition.

I don’t have a ready-to-use solution for you, but I’d try event-based subprocesses. The approval task would be part of such a subprocess. If a user rejects, the subprocess would send a message to its parent process, which would then eventually run into a terminate end event. Alternatively, in the case of a rejection, the subprocess could start another subprocess with an interrupting start event.

The key point here is to remove all tokens once a rejection has been detected.

Thanks for taking the time to reply to my question @fml2. I’ve stripped out the requirement for early completion to focus only on the variable scopes/race aspect. I’ve also renamed the “approvers” process variable to “approversWithDecisions” to make its intention clearer (i.e. the keys come from the multi-instance collection and the values are updated with the decision of each approver).

The simplified BPMN is now:

multi_instance_collect_part_3.bpmn (7.0 KB)

I’ve drawn what I think are the activities and the variable scopes below. I’ve also attached 3 screenshots of the variables at 3 different scopes (root scope, multi-instance body scope and instance).

The “decision” variable is local to each “approval request” instance, and my hypothesis is that I can read, update and write the parent (root) process-scoped “approversWithDecisions” in the task completion listener. If both “approval request” task instances happen to execute concurrently, one will fail at the write stage with an optimistic locking exception (because the process variable is versioned).

I know it is possible to target particular scopes when updating a variable, and I am wondering whether I need to explicitly target the root scope when setting “approversWithDecisions” for this to work?
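
To make it concrete, here is roughly the listener I have in mind, with both write styles shown (just a sketch; it assumes “approversWithDecisions” is initialised in the root (process instance) scope before the multi-instance activity starts):

```java
import java.util.Map;

import org.camunda.bpm.engine.delegate.DelegateTask;
import org.camunda.bpm.engine.delegate.TaskListener;

public class WriteDecisionListener implements TaskListener {

  @Override
  @SuppressWarnings("unchecked")
  public void notify(DelegateTask task) {
    Map<String, String> map =
        (Map<String, String>) task.getExecution().getVariable("approversWithDecisions");
    map.put(task.getAssignee(), (String) task.getVariable("decision"));

    // Option A: setVariable() walks up the scope hierarchy and updates the variable
    // in the scope where it is already defined (here: the root scope), so explicit
    // targeting is not needed as long as the variable was created there up front.
    task.getExecution().setVariable("approversWithDecisions", map);

    // Option B: address the process instance (root scope) explicitly.
    // task.getExecution().getProcessEngineServices().getRuntimeService()
    //     .setVariable(task.getExecution().getProcessInstanceId(),
    //         "approversWithDecisions", map);
  }
}
```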

@herrier thank you for the great pictures! I’m not an expert on this topic, but I still don’t quite like the approach. Updating a data structure from a parallel multi-instance task (no matter how it is done) is risky IMO. (By the way: does one have to uncheck the “exclusive” flag in order to get truly parallel execution?) Even if the engine detects a race condition and throws an exception – is this what you want? In your case the user’s decision would not be accepted and a weird (for the user) error message would be displayed. The engine will be happy, but the user (or, in general, the business process) will not.

In the rare cases where I had to collect the output of a multi-instance task, I used sequential execution. If it really has to happen in parallel, I’d probably try this: since you know how many approvals you have and each instance knows its index, you can set variables with different names, e.g. approval_0, approval_1 etc. After the multi-instance activity has finished its work, you can collect all these variables into a list or a map for more comfortable processing afterwards. This way you should not run into race conditions under any circumstances.
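
A rough sketch of that idea (the names “approval_&lt;n&gt;” and “approverList” are only illustrative; “loopCounter” is the index variable the engine provides for each multi-instance instance):

```java
import java.util.ArrayList;
import java.util.List;

import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.DelegateTask;

public class IndexedApprovalCollector {

  // In the task completion listener: write one uniquely named, process-scoped
  // variable per instance, so no two instances ever update the same variable.
  public void onTaskComplete(DelegateTask task) {
    Integer loopCounter = (Integer) task.getVariable("loopCounter");
    String decision = (String) task.getVariable("decision");
    task.getExecution().setVariable("approval_" + loopCounter, decision);
  }

  // In a listener or delegate after the multi-instance activity: gather the pieces
  // into one list, assuming the original input collection is still available.
  public List<String> collectDecisions(DelegateExecution execution) {
    List<?> approverList = (List<?>) execution.getVariable("approverList");
    List<String> decisions = new ArrayList<>();
    for (int i = 0; i < approverList.size(); i++) {
      decisions.add((String) execution.getVariable("approval_" + i));
    }
    return decisions;
  }
}
```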

Hi @herrier,

If the configured history level permits storing variable history, you can perform a historic variable query after the approval task to obtain the count of approvals. An example can be found in the following post.
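
A sketch of that kind of query (assuming a history level that records variable instances and that each completed approval task wrote a “decision” variable):

```java
import java.util.List;

import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.history.HistoricVariableInstance;

public class ApprovalHistoryReader {

  public long countApprovals(ProcessEngine processEngine, String processInstanceId) {
    List<HistoricVariableInstance> decisions = processEngine.getHistoryService()
        .createHistoricVariableInstanceQuery()
        .processInstanceId(processInstanceId)
        .variableName("decision")
        .list();

    // This only reads historic data; no process variable is updated, so the query
    // itself cannot cause optimistic locking conflicts.
    return decisions.stream()
        .filter(v -> "approved".equals(v.getValue()))
        .count();
  }
}
```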

But isn’t it still not 100% race-condition resistant?

It should be completely safe because you are only querying historic data without making any updates to process-instance-scoped variables.

Hey @fml2 - the race condition itself is only dangerous if there is no mechanism to detect conflicting writes, but Camunda 7 does employ such a mechanism - optimistic locking.

The scenario in question has two task completion listeners (transactions) that want to read, update and write a process-scoped variable. If they both happen to read the same version of that variable, modify it, and then attempt to write it, then through its optimistic locking implementation Camunda guarantees that only one of those writes will succeed.

The failing writer then has only two options:

  1. Allow the optimistic locking exception to bubble up, in which case the transaction is rolled back.
  2. Catch the exception and retry the update: re-read the variable, re-apply the change and re-attempt the save (see the retry sketch below).
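
As a sketch of option 2 (assuming the task is completed via the Java API and that the conflict surfaces as an OptimisticLockingException when the engine flushes the completion transaction; retrying the whole complete() call re-runs the listener, which re-reads the current version of the variable):

```java
import org.camunda.bpm.engine.OptimisticLockingException;
import org.camunda.bpm.engine.TaskService;

public class RetryingTaskCompletion {

  public void completeWithRetry(TaskService taskService, String taskId) {
    int attempts = 0;
    while (true) {
      try {
        taskService.complete(taskId); // the completion listener runs inside this call
        return;
      } catch (OptimisticLockingException e) {
        if (++attempts >= 3) {
          throw e; // give up after a few attempts and let the caller deal with it
        }
      }
    }
  }
}
```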

The chance of conflicting writes is very small in this scenario (the transaction executes in a few milliseconds, it is triggered by a human completing a task, and there are only a few approvers), so the retry will almost certainly succeed.

From a consistency perspective this looks ok to me.

This is true, i.e. Camunda will perform a rollback so that the DB will be in a consistent state. But the form submission would fail, so the user could experience some inconvenience. If that’s OK for you, then everything is fine and the implementation will be straightforward.


I’ve understood that the OP wanted to collect the approvals into a single process variable.
