I have a model of two pools interacting with each other:
The first task in the subprocess launches the process defined in the bottom pool via a message with payload (startProcessInstanceByMessage()). Upon finishing, a message is sent back that triggers the receive task “Daten empfangen”. In order to guarantee a timely abortion of this asynchronous external pool, I want to enforce a duration timer on the subprocess.
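For reference, the structure described above could be sketched in BPMN XML roughly as follows. The element IDs, message names, and the PT30S duration are placeholders (not taken from the actual model); the interrupting behavior of the timer comes from cancelActivity="true" on the boundary event:

```xml
<!-- Sketch only: IDs, message names, and the duration are placeholders -->
<subProcess id="SubProcess_1" name="Daten ermitteln">
  <sendTask id="SendTask_1" name="Anfrage senden" messageRef="Message_Request"/>
  <receiveTask id="ReceiveTask_1" name="Daten empfangen" messageRef="Message_Response"/>
  <!-- sequence flows omitted for brevity -->
</subProcess>

<!-- Interrupting timer: cancelActivity="true" (the default) aborts the subprocess -->
<boundaryEvent id="BoundaryTimer_1" attachedToRef="SubProcess_1" cancelActivity="true">
  <timerEventDefinition>
    <timeDuration xsi:type="tFormalExpression">PT30S</timeDuration>
  </timerEventDefinition>
</boundaryEvent>
```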
At first glance – if the bottom pool finishes quickly – everything seems to work. However, on closer inspection (by putting the bottom service task to sleep()), I observe the following behavior:
- Upon reaching the receive task (“Daten empfangen”), the top process instance does not appear in the list of process instances (history) as I would expect. I had been under the impression that a receive task always persists the process instance. However, defining asynchronous continuations (asyncBefore/asyncAfter) between the two tasks in the subprocess does not change this behavior either.
- If the duration of the bottom process exceeds the duration defined for the timer, the subprocess “Daten ermitteln” is NOT interrupted but finishes normally. Only AFTER that does the timer event get triggered.
How can I achieve the desired behavior?
Is anything in the lower pool marked as async?
If I mark the service task in the lower pool as async, the top pool instance is now persisted while the lower one is working. However, the timer event is still only triggered after the bottom pool finishes (and with a large delay of approx. 10 seconds), even if that time exceeds the set timer duration.
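For what it's worth, the async flag on the service task is set in the BPMN XML via a Camunda extension attribute, e.g. (the delegate class name is a placeholder, and the camunda namespace is assumed to be declared on the definitions element):

```xml
<!-- asyncBefore commits the transaction before the task executes,
     so the work runs in a job-executor thread, not the caller's thread -->
<serviceTask id="ServiceTask_1" name="Daten verarbeiten"
             camunda:asyncBefore="true"
             camunda:class="org.example.SleepDelegate"/>
```

Note that both asynchronous continuations and timers are executed as jobs by the job executor, which acquires due jobs from the database at an interval, so some latency between a timer's due date and its actual firing is expected.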
The basic question is: Is my design sensible in terms of BPMN? If so, is this behavior of the engine correct?
I noticed your pattern, specifically the usage of send and receive tasks as a means of collaboration between process pools, and wanted to share my experience, in case it helps.
First, some good references on this type of collaboration – also referred to as “process-aware” systems (Barbara Weber) and process-driven applications (Volker Stiehl).
The send/receive event patterns just don’t work well in a “stock” Camunda installation, because the events supporting this style of collaboration are managed directly within the underlying BPM database (see Camunda’s discussion of sync/async configuration in the context of its effects on DBMS transactions). There are very good reasons for this – they are required to support Camunda’s other features, such as transacted subprocesses (with compensation) and error event handling (transaction rollback).
What we need for this inter-process collaboration pattern (as referenced in process-driven/aware applications) is direct support for the concept of a “persistent event subscription”. And this is only possible via a message-event infrastructure such as an embedded JMS subsystem and/or Camel.
The reason for this messaging-infrastructure requirement is that the requesting and responding pools enter into something like a race condition, whereby the requesting pool’s receive event isn’t ready (the token has not yet arrived at the receive task or intermediate catch event). Any workaround doesn’t really solve the problem. Worse, you’ll likely end up with a transaction fault depending on how many tasks you configure as async to get the timings worked out (which is what I eventually discovered).
A “persistent subscription”, on the other hand, is always ready. Once it receives an event, it makes this information available to interested BPMN event subscribers as they receive the token and become capable of processing the message.
As an alternate construct, it looks a lot like you want to implement an asynchronous task integration. You could consider achieving this using an external task, which would not require the message send or receive tasks at all. See the external task documentation for details.
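A minimal sketch of such an external task (the ID and topic name are placeholders):

```xml
<!-- The engine creates a persistent external task in the "daten-ermitteln" topic
     and waits; the process instance is committed to the database at this point -->
<serviceTask id="ServiceTask_Extern" name="Daten ermitteln"
             camunda:type="external"
             camunda:topic="daten-ermitteln"/>
```

An external worker then polls the engine (e.g. via the REST API’s fetch-and-lock operation), does the work outside any engine transaction, and completes the task. The process instance stays persisted in a wait state the whole time, so an interrupting boundary timer on it behaves as expected.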
Yes, I realize there are more efficient or concise ways to get the job done. The thing is, for demonstration purposes (for a customer) I explicitly need to model it as two interacting pools.
No problem. If that’s the case, I would advocate Sebastian’s suggestion. I would:
In the bottom pool, make the message start event async-after, and the service task async-after as well.
This will ensure that the top pool’s thread is not the thread actually running the bottom pool. Even if you do this, you may still have a race condition in which the top pool’s receive task is not ready before the bottom pool completes and tries to send a message back…
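Assuming the element IDs and class name below as placeholders, the suggested configuration for the bottom pool would look roughly like this:

```xml
<!-- asyncAfter on the message start event: the start transaction commits
     before the bottom pool's work begins, decoupling it from the caller -->
<startEvent id="StartEvent_1" camunda:asyncAfter="true">
  <messageEventDefinition messageRef="Message_Request"/>
</startEvent>

<!-- asyncAfter on the service task: its result is committed before the
     engine attempts to send the reply message back to the top pool -->
<serviceTask id="ServiceTask_1" name="Daten verarbeiten"
             camunda:asyncAfter="true"
             camunda:class="org.example.WorkDelegate"/>
```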
BTW, I re-read your original request…
“In order to guarantee a timely abortion of this async. external pool, I want to enforce a duration timer on the subprocess.”
Hence, as modeled, I would expect your top process to be interrupted by the timer and thus not receive the message sent from the bottom process, as there will no longer be a receive task listening. Note, however, that the bottom process will continue to run to completion. Is this the behavior you are trying to model, or do you want to interrupt the bottom pool if it does not complete within a time period?