Hi, I am new to BPMN workflows and I'm trying to understand how the process engine executes the same workflow (same BPMN definition) in parallel.
Here is my use case:
We have a platform which publishes data to a BPMN process by using process variables; below is a snippet showing how we send data to the BPMN process.
public String initiateProcess(WorkflowRequest workflowRequest) {
    Map<String, Object> variables = new HashMap<>();
    variables.put("request", workflowRequest);
    ProcessInstance processInstance = processEngine.getRuntimeService()
            .startProcessInstanceByKey(processId, variables);
    // the method is declared to return a String, so return the new instance id
    return processInstance.getId();
}
Here processId is the BPMN process definition key.
Now, within the BPMN workflow/definition file, I want to read this variable data, process it, and then publish the processed data to other external systems via REST APIs.
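For reference, reading the variable back inside a service task looks something like the sketch below (the class name is illustrative, and WorkflowRequest must be Serializable to be persisted as an object variable):

import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

public class PublishDelegate implements JavaDelegate { // illustrative name

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        // each process instance carries its own copy of the "request" variable
        WorkflowRequest request = (WorkflowRequest) execution.getVariable("request");
        // ... process the request and publish it to the external system via REST ...
    }
}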
Here the problem is that initiateProcess is triggered very frequently, many times within the application, each time with a different set of workflowRequest data. In other words, we want to publish different sets of data to the same external system via the workflow, many times in quick succession.
If I run the above logic, sometimes the variable value is null in the BPMN service tasks (even though it is not null in the initiateProcess method), and sometimes I get an error saying the variable “request” already exists. I think parallel execution of the same workflow/BPMN process definition is failing. Please help me with how to execute the same workflow/BPMN process definition in parallel. I don't want to run service tasks in parallel; I want to run the entire process definition (BPMN file) in parallel.
I am really looking for an answer on this - can anyone reply? Kindly let me know if you need any further details or if the question is not clear. Thanks in advance.
Process definitions know nothing about one another when they are started. They are completely independent, so I don't think the issue here has anything to do with the engine. Can you give more information about your setup? Camunda version, database, clustered setup?
What version of Camunda are you using? Also, did you create the model using the Camunda Modeler?
How are you deploying the processes - do you simply have them in the resources directory?
In our case, whenever a new BPMN file is uploaded to our Spring Boot application by a user, we store the BPMN file in an external directory and load it for deployment, roughly as in the sketch below. So deployment happens only once, i.e., at the time the file is uploaded to our application.
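A simplified sketch of that deployment logic (the method name is a placeholder; File and InputStream are from java.io):

public void deployUploadedFile(File bpmnFile) throws IOException {
    try (InputStream in = new FileInputStream(bpmnFile)) {
        processEngine.getRepositoryService()
                .createDeployment()
                .name(bpmnFile.getName())
                // the resource name must end with .bpmn (or .bpmn20.xml)
                // for the engine to parse it as a process definition
                .addInputStream(bpmnFile.getName(), in)
                .deploy();
    }
}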
Firstly, you’re using a deprecated tool for modeling. Make sure your models work with the latest version of the standalone Modeler: https://camunda.com/download/modeler/
If you’re deploying this way then you should change
config.setJobExecutorDeploymentAware(true);
to
config.setJobExecutorDeploymentAware(false);
Otherwise you might have problems when restarting Spring Boot.
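If the engine is configured through the Camunda Spring Boot starter, one place to flip that flag is a process engine plugin bean (a sketch; the class and bean names are illustrative, and it assumes the starter's behavior of picking up ProcessEnginePlugin beans):

import org.camunda.bpm.engine.impl.cfg.AbstractProcessEnginePlugin;
import org.camunda.bpm.engine.impl.cfg.ProcessEngineConfigurationImpl;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class EngineTuning {

    @Bean
    public AbstractProcessEnginePlugin deploymentAwarenessPlugin() {
        return new AbstractProcessEnginePlugin() {
            @Override
            public void preInit(ProcessEngineConfigurationImpl config) {
                // disable deployment-aware job execution
                config.setJobExecutorDeploymentAware(false);
            }
        };
    }
}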
There could be a lot of problems created by using the wrong modeler, so let me know if any issues arise from loading your model into the new Modeler.
Hi Niall, I did change the modeler used for modeling our workflow and observed differences in the XML namespaces and tags; below is a sample snippet of the changes.
And as you suggested, we also applied the change in
config.setJobExecutorDeploymentAware(false);
and restarted the application, but it still gives the same error.
The workflow works fine if a single request is sent to it. But if we trigger the same workflow multiple times in quick succession, some requests get processed and others fail with errors about variables being null or already existing.
I tried many other ways to see if this works, but no luck.
It looks like the Camunda engine cannot execute the same workflow definition many times in quick succession.
Sometimes I also see the exceptions below:
Cause: org.apache.ibatis.executor.BatchExecutorException: org.camunda.bpm.engine.impl.persistence.entity.VariableInstanceEntity.insertVariableInstance (batch index #3) failed. 2 prior sub executor(s) completed successfully, but will be rolled back. Cause: org.h2.jdbc.JdbcBatchUpdateException: Referential integrity constraint violation: "ACT_FK_VAR_BYTEARRAY: PUBLIC.ACT_RU_VARIABLE FOREIGN KEY(BYTEARRAY_ID_) REFERENCES PUBLIC.ACT_GE_BYTEARRAY(ID_) ('45')"; SQL statement:
And sometimes I see this other exception as well:
[ERROR] 2019-10-11 01:09:57.261 [Thread-18] context - ENGINE-16004 Exception while closing command context: ENGINE-02004 No outgoing sequence flow for the element with id 'checkEventType' could be selected for continuing the process.
checkEventType is a conditional gateway which always has at least a default flow, so there should be no chance of an empty result at that gateway, but the error says there is no outgoing sequence flow!
I’m very confused by the main issue you’re having - there really shouldn’t be any problem with starting multiple instances of the same process definition on a regularly configured system. So that might not be the underlying problem.
How many instances are you trying to start per second?
Can you please upload your model?
I'm sorry, I'm not getting your question. We are using the script field to hold the business logic, written in Groovy; we load the value of that field in the WorkflowServiceExecutor class (which I have already posted above) and execute the Groovy script.
JavaDelegate will execute the fields, right? I mean, a service task can internally use this interface and execute the scripts/expressions from the injected fields, right?
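Something like this is what I mean (a simplified sketch of Camunda's field injection; our real class contains more logic):

import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.Expression;
import org.camunda.bpm.engine.delegate.JavaDelegate;

public class WorkflowServiceExecutor implements JavaDelegate {

    // injected by the engine from the field defined on the service task
    private Expression script;

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        String scriptSource = (String) script.getValue(execution);
        // ... evaluate the Groovy script ...
    }
}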
Well, I changed the workflow from a service task to a script task, but it behaves in the same manner.
This time I observed that one process instance gets executed repeatedly.
For example: if I invoke the initiateProcess method with a workflowRequest input parameter and set one of its fields, say workflowRequest.eventId = 400, then when I print this eventId in the workflow, 400 should appear only once. But I see the line printed with the same 400 ID three or four times.
And it skips some other events passed from our application to the workflow; for instance, I can see a log line with eventId 401 in the initiateProcess method, but the same never appears in the workflow logs.
It's not only the input data: when I print the process instance ID in the workflow, the same processInstanceId gets printed repeatedly. I don't think this should be the case.
Please attach your current BPMN XML so we can better understand the scenario we are talking about. Even better, share a sample project on GitHub that reproduces the problem.
In your original Java Delegate, the code is not thread-safe. The script engine is cached, so every execution of the service task uses the same script engine instance. You constantly overwrite a global variable of that scripting engine and then evaluate the actual script, so concurrent executions race with one another - which would explain why one instance sees null or another instance's values.
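One way to avoid the race (a sketch, assuming the javax.script Groovy engine; not your exact code) is to keep the cached engine but give each evaluation its own Bindings instead of mutating the engine's global scope:

import javax.script.Bindings;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

import org.camunda.bpm.engine.delegate.DelegateExecution;

public class GroovyEvaluator {

    private static final ScriptEngine ENGINE =
            new ScriptEngineManager().getEngineByName("groovy");

    public static Object evaluate(String scriptSource, DelegateExecution execution)
            throws ScriptException {
        // fresh bindings per invocation: concurrent executions no longer
        // share mutable state on the cached engine
        Bindings bindings = ENGINE.createBindings();
        bindings.put("request", execution.getVariable("request"));
        return ENGINE.eval(scriptSource, bindings);
    }
}

Alternatively, letting the engine run the script itself (a script task, or a script inside the model) sidesteps the problem, since Camunda manages the script bindings per execution.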