Queuing Inbound messages


Is there a simple way to queue or delay inbound messages in Camunda?

We have a problem regarding messages that are sent to the REST API.

In the problem case, a legacy system sends us two messages. The first message initiates the process, and the second message contains an event that must be correlated to the correct process instance. The correlation keys are in the first message.

The problem happens when the initiating system sends the two messages with a very small delay between them. If our engine is under load, we have not yet processed the first message when the second one arrives, and we get an error:

```
"@timestamp":"2021-10-11T05:56:23.272Z", "log.level":"ERROR",
"message":"ENGINE-16004 Exception while closing command context: Cannot correlate message 'closeEvent': No process definition or execution matches the parameters",
"service.name":"process-engine", "process.thread.name":"http-nio-8080-exec-10",
"log.logger":"org.camunda.bpm.engine.context",
"transaction.id":"3XBrkaoK", "trace.id":"Hwi8ZqAV", "labels.TRACE_ID":"Hwi8ZqAV",
"error.type":"org.camunda.bpm.engine.MismatchingMessageCorrelationException",
"error.message":"Cannot correlate message 'closeEvent': No process definition or execution matches the parameters",
"error.stack_trace":"org.camunda.bpm.engine.MismatchingMessageCorrelationException: Cannot correlate message 'closeEvent': No process definition or execution matches the parameters
    at org.camunda.bpm.engine.impl.cmd.CorrelateMessageCmd.execute(CorrelateMessageCmd.java:88)
    at org.camunda.bpm.engine.impl.cmd.CorrelateMessageCmd.execute(CorrelateMessageCmd.java:42)
    at org.camunda.bpm.engine.impl.interceptor.CommandExecutorImpl.execute(CommandExecutorImpl.java:28)
    at
```

A simple way to fix this would be if we could configure the engine to retry the correlation with a delay, e.g. three times. Is there a way to do this?

We would prefer not to touch the initiating system, as it's legacy.

Hi @JussiL

You’ve come across a very common issue. So common, in fact, that one of the main differences between Camunda Platform and Camunda Cloud at the moment is that we have redesigned message correlation to give messages a time to live, so they don’t run into this race condition… But speaking about Platform, there are some ways of dealing with it…

You could look into where the transactions are being committed. If you add an async-after on the message start event, the engine will complete the message receive and create the process instance as quickly as possible. You could then also add an event subprocess to catch the second message, which becomes active as soon as the first message’s transaction commits.
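A minimal sketch of what that could look like in the process XML (element ids and message references are illustrative, not taken from your model):

```xml
<!-- Sketch: message start event with asynchronous continuation after it,
     so the receive + instance creation commits as early as possible. -->
<startEvent id="msgStart" camunda:asyncAfter="true">
  <messageEventDefinition messageRef="Message_initEvent"/>
</startEvent>

<!-- Event subprocess that subscribes to the second message as soon as
     the instance exists; non-interrupting so the main flow continues. -->
<subProcess id="catchClose" triggeredByEvent="true">
  <startEvent id="closeStart" isInterrupting="false">
    <messageEventDefinition messageRef="Message_closeEvent"/>
  </startEvent>
  <!-- handle the second message here -->
  <endEvent id="closeEnd"/>
</subProcess>
```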

There are other options that are a little more unusual - but I’ve seen them work - such as having a single process that acts as a message queue, which is always available to take messages and can forward them on to the target processes once they’re ready.
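You could also just retry the correlation in whatever component calls the REST API on the engine’s behalf, which is close to the “retry three times with a delay” behaviour you asked for. A minimal sketch, assuming a Java wrapper; the class and method names here are made up, and the `Supplier` stands in for the real `runtimeService.correlateMessage(...)` (or REST) call:

```java
import java.util.function.Supplier;

// Hypothetical retry wrapper (not a Camunda API): retries a correlation
// attempt a few times with a short delay, so a correlation that fails only
// because the first message has not committed yet gets another chance.
public class CorrelationRetry {

    public static <T> T retryCorrelation(Supplier<T> attempt, int maxAttempts, long delayMillis) {
        RuntimeException last = null;
        for (int i = 0; i < maxAttempts; i++) {
            try {
                return attempt.get();       // e.g. runtimeService.correlateMessage(...)
            } catch (RuntimeException e) {  // e.g. MismatchingMessageCorrelationException
                last = e;
                if (i < maxAttempts - 1) {
                    try {
                        Thread.sleep(delayMillis);  // give the first message time to commit
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        throw e;
                    }
                }
            }
        }
        throw last;  // all attempts failed
    }
}
```

Note this only papers over the race; the async-after approach above fixes it at the transaction level.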

Thanks @Niall

I have to look into the event subprocess idea. Will it really work differently from having two intermediate catch events in a sequence?

What I have now is this:

I wonder would this really be different, I guess I have to test it:

Also, we use local correlation keys, because in some use cases we have multiple message nodes waiting within the same process instance.

I guess you can’t use the start event of an event subprocess with local correlation keys?

So I assumed that you have a message start event, but if you’re waiting for the two messages after the process has started, it’s a lot easier :slight_smile:
The trick is to wait for both messages at the same time so there’s no race condition. This will do the trick
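In process XML, that pattern could look roughly like this (a hand-written sketch, not the model from the screenshot; ids and message references are illustrative): a parallel gateway opens both message subscriptions at once, so whichever message arrives first finds a waiting catch event.

```xml
<!-- Fork: both catch events become active simultaneously. -->
<parallelGateway id="fork"/>
<sequenceFlow id="f1" sourceRef="fork" targetRef="catchMsg1"/>
<sequenceFlow id="f2" sourceRef="fork" targetRef="catchMsg2"/>

<intermediateCatchEvent id="catchMsg1">
  <messageEventDefinition messageRef="Message_initEvent"/>
</intermediateCatchEvent>
<intermediateCatchEvent id="catchMsg2">
  <messageEventDefinition messageRef="Message_closeEvent"/>
</intermediateCatchEvent>

<!-- Join: continue only once both messages have been received. -->
<parallelGateway id="join"/>
<sequenceFlow id="f3" sourceRef="catchMsg1" targetRef="join"/>
<sequenceFlow id="f4" sourceRef="catchMsg2" targetRef="join"/>
```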

But we need to store the key from message one to correlate message two.

I tried the event-based approach, but it runs into the same issue. If message two arrives too quickly after message one, the event subprocess is not yet ready to correlate it.

I guess we need to inject some sort of delay into the legacy system after all.

Could you add timer tasks before the inbound message events?


I don’t understand how timers would help. I believe they would just make things worse.