We are migrating from Mesos to Rancher Kubernetes and have deployed our services there. We got rid of Consul for service discovery and are now using Kubernetes service discovery for the Retrofit calls. The process instance is created and is then externally terminated almost immediately.
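For context, this is roughly how a Retrofit client can be pointed at a Kubernetes Service DNS name instead of a Consul-resolved address. It is only a minimal sketch; the service name, namespace, port, and API interface below are assumptions, not our actual code:

```java
import retrofit2.Call;
import retrofit2.Retrofit;
import retrofit2.converter.gson.GsonConverterFactory;
import retrofit2.http.GET;

public class ProductClientFactory {

    // Hypothetical response DTO and API interface, for illustration only.
    static class StatusResponse {
        String status;
    }

    interface ProductPackageApi {
        @GET("api/product-packages/status")
        Call<StatusResponse> getStatus();
    }

    public static ProductPackageApi create() {
        // With Kubernetes service discovery the base URL is simply the Service's
        // cluster DNS name (<service>.<namespace>.svc.cluster.local) instead of
        // an address looked up in Consul.
        Retrofit retrofit = new Retrofit.Builder()
                .baseUrl("http://product-service.default.svc.cluster.local:8080/")
                .addConverterFactory(GsonConverterFactory.create())
                .build();
        return retrofit.create(ProductPackageApi.class);
    }
}
```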
Below is the error.
2024-06-29T10:48:26.469034219Z at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_272]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_272]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_272]
2024-06-29T10:48:26,479 ERROR [org.camunda.bpm.engine.context] [scheduling-1] ENGINE-16004 Exception while closing command context: ENGINE-13031 Cannot correlate a message with name 'ProductPackageStatusUpdatedBySeo' to a single execution. 2 executions match the correlation keys: CorrelationSet [businessKey=null, processInstanceId=2908968370, processDefinitionId=null, correlationKeys=null, localCorrelationKeys=null, tenantId=null, isTenantIdSet=false]
2024-06-29T10:48:26.481940973Z org.camunda.bpm.engine.MismatchingMessageCorrelationException: ENGINE-13031 Cannot correlate a message with name 'ProductPackageStatusUpdatedBySeo' to a single execution. 2 executions match the correlation keys: CorrelationSet [businessKey=null, processInstanceId=2908968370, processDefinitionId=null, correlationKeys=null, localCorrelationKeys=null, tenantId=null, isTenantIdSet=false]
Probably, at the moment you correlate your message, there is more than one execution for that processInstanceId, for example because of active parallel processing (a parallel gateway, among other things). Camunda then finds multiple matching executions and cannot tell which one you actually want to activate with the correlation.
I suggest that when correlating messages you do not rely on the processInstanceId alone, but add further correlation criteria so that Camunda is not left guessing, for example as in the sketch below:
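A minimal sketch of what that could look like with the RuntimeService message correlation API; the business key, variable name, and method names below are examples of additional criteria, not the original poster's code:

```java
import org.camunda.bpm.engine.RuntimeService;

public class ProductPackageCorrelation {

    private final RuntimeService runtimeService;

    public ProductPackageCorrelation(RuntimeService runtimeService) {
        this.runtimeService = runtimeService;
    }

    public void correlateStatusUpdate(String processInstanceId, String packageId) {
        // Narrow the correlation down with more than just the processInstanceId:
        // a business key and/or a process variable filter lets Camunda pick
        // exactly one execution.
        runtimeService.createMessageCorrelation("ProductPackageStatusUpdatedBySeo")
                .processInstanceId(processInstanceId)
                .processInstanceBusinessKey(packageId)                  // example business key
                .processInstanceVariableEquals("packageId", packageId)  // example variable filter
                .correlate();
    }

    public void correlateToAllMatches(String processInstanceId) {
        // Alternatively, if every matching execution really should receive the
        // message, correlateAll() avoids the MismatchingMessageCorrelationException.
        runtimeService.createMessageCorrelation("ProductPackageStatusUpdatedBySeo")
                .processInstanceId(processInstanceId)
                .correlateAll();
    }
}
```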
The issue was resolved after adding the processDefinitionId. But I am running the Camunda process engine server in a pod (deployed in Rancher Kubernetes), and the pod is crashing every 10 minutes because of a memory leak. I can see that an iBatis (MyBatis) configuration object is taking up a lot of memory. Can you please help me find where exactly the memory is going and how to resolve the issue?
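This is not a diagnosis of your setup, but one pattern that commonly makes the MyBatis (iBatis) configuration show up as the biggest heap consumer is building a new ProcessEngine repeatedly, since each engine owns its own MyBatis configuration. A minimal sketch of reusing a single engine, assuming a programmatically built standalone engine (the JDBC settings are placeholders):

```java
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.ProcessEngineConfiguration;

public class EngineHolder {

    // Build the engine exactly once and reuse it; every ProcessEngine carries its
    // own MyBatis (iBatis) configuration, so building engines repeatedly keeps
    // accumulating those objects on the heap.
    private static final ProcessEngine ENGINE = ProcessEngineConfiguration
            .createStandaloneProcessEngineConfiguration()
            .setJdbcUrl("jdbc:postgresql://db:5432/camunda") // hypothetical datasource
            .setJdbcUsername("camunda")
            .setJdbcPassword("camunda")
            .setJdbcDriver("org.postgresql.Driver")
            .setDatabaseSchemaUpdate(ProcessEngineConfiguration.DB_SCHEMA_UPDATE_TRUE)
            .buildProcessEngine();

    public static ProcessEngine get() {
        return ENGINE;
    }

    // Close the engine on shutdown so its resources (MyBatis configuration,
    // job executor threads, connection pool) are released.
    public static void shutdown() {
        ENGINE.close();
    }
}
```

If the engine is already a shared singleton, a heap dump taken shortly before the pod is killed would be the next step to see what is actually retaining the configuration object.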
Thank you, Nathan. I have created a new topic on this. I am going to run the 7.12 process engine on its own as a Docker container and check for the memory leak there, instead of deploying our service, which is built on top of the Camunda 7.12 engine.