How to use message start event (non-interrupting)?


I have a basic example of what I am trying to achieve. There is a message intermediate throw event which sends out a request and receives a synchronous response. This is done by the job “Send Message”. While the “Send Message” job is in progress, the process receives a Kafka event, and the Kafka consumer thread starts to correlate it to the subprocess’ message start event (non-interrupting).
This thread correlates successfully but, oddly, it updates the “Send Message” job with a new child execution id and thus bumps the revision of that job.
Now, when the “Send Message” job finishes and tries to delete itself from the job table, I get an OptimisticLockingException.

org.camunda.bpm.engine.OptimisticLockingException: ENGINE-03005 Execution of 'UPDATE ExecutionEntity[8ec5d35a-8e37-11f0-938a-f20af6900e41]' failed. Entity was updated by another transaction concurrently.
	at org.camunda.bpm.engine.impl.db.EnginePersistenceLogger.concurrentUpdateDbEntityException(EnginePersistenceLogger.java:138) ~[camunda-engine-7.22.0.jar:7.22.0]

I really want to know how to fix this without adding delays to the sending of the Kafka event, so that everything runs smoothly. The non-interrupting start event was supposed to solve this, but it has left me confused.
I have tried every combination of the “Async continuation” property on all elements involved here. I want to know the correct approach to solve this.
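For context, the Kafka-side correlation is essentially a `createMessageCorrelation` call running on the consumer thread. A minimal sketch (the topic, message name, and business-key extraction are simplified placeholders, not my exact code):

```java
import org.camunda.bpm.engine.RuntimeService;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

// Hypothetical listener: correlates an incoming Kafka record to the
// non-interrupting message start event of the event subprocess.
@Component
public class KafkaEventListener {

  private final RuntimeService runtimeService;

  public KafkaEventListener(RuntimeService runtimeService) {
    this.runtimeService = runtimeService;
  }

  @KafkaListener(topics = "some-topic")
  public void onEvent(String payload) {
    // "Message_KafkaEvent" must match the message name on the start event.
    // This runs on the Kafka consumer thread and touches the same
    // execution tree as the in-flight "Send Message" job.
    runtimeService.createMessageCorrelation("Message_KafkaEvent")
        .processInstanceBusinessKey(extractBusinessKey(payload))
        .setVariable("eventPayload", payload)
        .correlate();
  }

  private String extractBusinessKey(String payload) {
    return payload; // placeholder for real payload parsing
  }
}
```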

What is the relationship between the message sent by “Send Message” and the one received by “Received Kafka Event”?

Message Throw Events are supposed to be asynchronous - from Bruce Silver’s BPMN Method and Style:

Send and Receive tasks, or throwing and catching Message events, represent asynchronous communications. As soon as the process sends the message, the flow continues on the outgoing sequence flow. It does not wait for a response message.

Synchronous calls should be represented by Service Tasks - from the same book:

A Service task is an example of synchronous communications. Recall that a Service task represents an automated action. In the BPMN 2.0 metamodel, the Service task actually means an automated request for an action performed by some external system, with receipt of that system’s response. The request and response are really messages, but usually we do not represent them as message flows in the diagram. They are simply implied. The Service task is not complete until it receives the response from the system that performs the action. That is what synchronous means.

In an executable process, synchronous tasks are short-running, completing in milliseconds or seconds. If an automated task is long-running, meaning it takes minutes, hours, or even weeks to complete, it is modeled in BPMN as an asynchronous request, using a Send task or throwing Message event, not a Service task. While this distinction is important for executable processes, it is a good convention to apply to non-executable BPMN as well: If an automated function is long-running, represent it with separate Send and Receive tasks (with message flows).

This is the theory. In real life, there are situations where the synchronous call lasts longer and the asynchronous response arrives before the synchronous part is completed.

I don’t have a 100% solution for the OP, but I’d try setting all activities to “async before” and non-exclusive.
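With the Camunda 7 fluent model API, that configuration would look roughly like this sketch (process id, task id, and delegate class are placeholders):

```java
import org.camunda.bpm.model.bpmn.Bpmn;
import org.camunda.bpm.model.bpmn.BpmnModelInstance;

public class AsyncModelSketch {

  // Sketch: "async before" persists a job in its own transaction before the
  // task runs; non-exclusive means it is not serialized against other jobs
  // of the same process instance.
  public static BpmnModelInstance build() {
    return Bpmn.createExecutableProcess("sampleProcess")
        .startEvent()
        .serviceTask("sendMessage")
          .name("Send Message")
          .camundaAsyncBefore()
          .camundaExclusive(false)
          .camundaClass("com.example.SendMessageDelegate") // placeholder delegate
        .endEvent()
        .done();
  }
}
```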

I understand, and I tried replacing the message throw event with a service task, but the result is the same.
The messages in “Send Message” and “Received kafka event” are not related: “Send Message” sends out to “A”, and the Kafka event for “Received kafka event” is sent by “B” to this service.
Right now, in the simulation I am doing, A and B are in the same program (a Citrus test), where I receive the message sent by “Send Message” and respond back. The immediately following line sends the Kafka event that is correlated to “Received kafka event”.

I tried that and I am still getting the OptimisticLockingException.
I changed “Send Message” and “Received kafka event” to “async before”.

Then you might have hit a general restriction or a flaw in the Camunda design.

A possible solution could be to receive the Kafka event in another process (and not in an event-based subprocess) and then send it from there to the original process.
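A sketch of that relay idea, with a delegate inside the separate Kafka-receiver process forwarding the event to the original process (message and variable names are placeholders, untested):

```java
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

// Runs inside the separate Kafka-receiver process and forwards the
// event to the original process via message correlation.
public class ForwardKafkaEventDelegate implements JavaDelegate {

  @Override
  public void execute(DelegateExecution execution) {
    execution.getProcessEngineServices().getRuntimeService()
        .createMessageCorrelation("Message_KafkaEventForwarded")
        .processInstanceBusinessKey(execution.getProcessBusinessKey())
        .setVariable("eventPayload", execution.getVariable("eventPayload"))
        .correlate();
  }
}
```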

@ksushant881 is the code that processes Message A and sends back Message B running on the same thread?

I quickly ran this scenario in Camunda 8, forcing the code that processed Message A to pause for 4 minutes before completing the task.

Then I manually sent Message B using Postman, passing a proper correlation key. The result was as expected: the token remained at Message A, and the sub-process was started by Message B and finished.
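For reference, the Java-client equivalent of that Postman call would be something like this sketch (gateway address, message name, and correlation key are placeholders):

```java
import io.camunda.zeebe.client.ZeebeClient;

public class PublishMessageB {

  public static void main(String[] args) {
    try (ZeebeClient client = ZeebeClient.newClientBuilder()
        .gatewayAddress("localhost:26500") // placeholder gateway
        .usePlaintext()
        .build()) {
      client.newPublishMessageCommand()
          .messageName("MessageB")
          .correlationKey("order-4711") // must match the start event's correlation key
          .send()
          .join();
    }
  }
}
```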


Then, after 4 minutes, the job that was processing the message terminated and the entire process finished.

However, in order to have full parallelism I had to start more than one worker process instance. When I used only one, everything got stuck waiting for the thread that was processing Message A to finish.
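The worker registration was roughly like the following sketch (job type and maxJobsActive are placeholders); opening several such workers, or raising maxJobsActive, is what gave the parallelism:

```java
import io.camunda.zeebe.client.ZeebeClient;

public class MessageAWorker {

  public static void main(String[] args) {
    ZeebeClient client = ZeebeClient.newClientBuilder()
        .gatewayAddress("localhost:26500")
        .usePlaintext()
        .build();

    client.newWorker()
        .jobType("process-message-a") // placeholder job type
        .handler((jobClient, job) -> {
          // long-running work simulating the 4-minute pause would go here
          jobClient.newCompleteCommand(job.getKey()).send().join();
        })
        .maxJobsActive(5) // several jobs in flight per worker
        .open();

    // In a real application, keep the JVM alive so the worker keeps polling.
  }
}
```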

is the code that processes Message A and sends back Message B running on the same thread?

I did not understand the question. Let me try to put the scenario in a few points:

  1. There are two external systems here, A and B. A receives the message from the “Send Message” component, and B sends the Kafka event that correlates to “Received kafka event”. So I do not understand when you say “and sends back Message B” above.
  2. The goal is to let the service receive the Kafka event at any time without stopping the main flow; that is why I used the non-interrupting event (at least I thought it fits the use case).
  3. Currently I have written a test that mocks both A and B in the same test, where the pseudocode looks like:
citrusRunner.receiveMessageIntendedForA_AndRespondBack(response)
citrusRunner.publishKafkaEventActingAsB(event)
  4. Now, if I put a Thread.sleep(3000) in the above program between the two lines, it works well, but I am preparing for the worst: there may not be a 3-second gap between the two operations.

However, in order to have full parallelism I had to start more than one worker process instance.

More than one worker process instance with the same business key?

When I used only one, everything got stuck waiting for the thread that was processing Message A to finish.

Does that mean you also saw the OptimisticLockingException?

Okay, I see.
The Kafka event can come at any time; that is why I used the message start event (non-interrupting). I think the solution could be to extract the subprocess into a separate process altogether and not even send the event to the original process. There may be some Camunda variables the new process needs from the original process, but I believe they can be accessed easily in code.
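A sketch of that idea, assuming the Kafka listener starts the extracted process directly (message name, business key handling, and variables are placeholders):

```java
import java.util.Map;
import org.camunda.bpm.engine.RuntimeService;

public class KafkaProcessStarter {

  private final RuntimeService runtimeService;

  public KafkaProcessStarter(RuntimeService runtimeService) {
    this.runtimeService = runtimeService;
  }

  public void onKafkaEvent(String businessKey, String payload) {
    // Variables the extracted process needs can be passed in at start;
    // anything else can be read from the original instance in code.
    Map<String, Object> variables = Map.of("eventPayload", payload);
    runtimeService.startProcessInstanceByMessage(
        "Message_KafkaEvent", // start event of the separate process
        businessKey,          // same business key as the original instance
        variables);
  }
}
```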

No, job workers.

Does that mean you also saw the OptimisticLockingException?

I didn’t, but it was Camunda 8. If there is a problem, it could be restricted to Camunda 7.

I’m not familiar with Citrus, but it could also be that you bumped into a scenario that would not happen in real life, because A (receiver) and B (Kafka event sender) would be separate modules.

This is not too much of an edge case, and Camunda would have heard complaints about it if there was indeed a defect.

Anyway, the idea of moving the content of the subprocess to a separate process could be a solution, if you can pass the process variables from one process to another.

Does Camunda 8 work the same as 7 in this regard?

I don’t recall in detail how Camunda 7 implemented Message Throw and Service Task, but from what I’ve seen, it appears that Camunda 8 implements Service Task and Message Throw in a similar (if not exactly the same) way that Camunda 7 implemented External Service Tasks. So much so that Camunda 8 does not seem to have External Service Tasks anymore.

In summary, when you configure a Message Throw or Service Task, you define the job (Job Type) that will handle those requests.

Job workers implement handlers for each Job Type and keep polling Zeebe for jobs ready to be processed. Given this architecture, it appears that concurrency issues may have been mitigated, if not eliminated.
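For illustration, here is a sketch of how the job type is declared on the model side with the Zeebe model API (process id, task id, and job type are placeholders):

```java
import io.camunda.zeebe.model.bpmn.Bpmn;
import io.camunda.zeebe.model.bpmn.BpmnModelInstance;

public class JobTypeModelSketch {

  // The worker registered for "send-message" picks up these jobs by polling.
  public static BpmnModelInstance build() {
    return Bpmn.createExecutableProcess("sampleProcess")
        .startEvent()
        .serviceTask("sendMessage", t -> t.zeebeJobType("send-message"))
        .endEvent()
        .done();
  }
}
```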

Hello @ksushant881, :slight_smile:

Use async continuation on the parent execution

  1. Make the message intermediate throw event async before or async after.
  • This ensures the send-message job is committed in its own transaction before it starts modifying the execution.
  2. Make the subprocess triggered by Kafka async before.
  • This pushes the Kafka correlation into a separate transaction.
  3. Outcome: both threads operate in separate transactions, so they don’t collide on the parent execution; see the sketch below. :slight_smile:
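A sketch of this configuration with the Camunda 7 fluent model API, using a service task for the send step as the OP also tried (ids, message name, and delegate class are placeholder assumptions, untested):

```java
import org.camunda.bpm.model.bpmn.Bpmn;
import org.camunda.bpm.model.bpmn.BpmnModelInstance;
import org.camunda.bpm.model.bpmn.builder.ProcessBuilder;

public class AsyncRecipeSketch {

  public static BpmnModelInstance build() {
    ProcessBuilder process = Bpmn.createExecutableProcess("sampleProcess");

    // Main flow: the send step gets its own transaction via async before.
    BpmnModelInstance model = process
        .startEvent()
        .serviceTask("sendMessage")
          .name("Send Message")
          .camundaAsyncBefore()
          .camundaClass("com.example.SendMessageDelegate")
        .endEvent()
        .done();

    // Event subprocess: async before on the non-interrupting message start
    // event pushes the Kafka correlation into a separate transaction.
    process.eventSubProcess()
        .startEvent("kafkaStart")
          .interrupting(false)
          .message("Message_KafkaEvent")
          .camundaAsyncBefore()
        .endEvent();

    return model;
  }
}
```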