Everything works fine, but now I want the parent process to know which projects the user actually reported on. Basically, I want to retain the projects on that list that the user reports on, and remove the ones they don't report on.
I know I could store this information externally, but I have good reasons to keep all my process-related state on the process itself. It seems like a pretty common use case.
The only way I can think of to solve this is to pass the parent process ID into the sub-process and then update the process variable in the "Send to Harvest" message task.
Could you use a script, executed at the parent-process level, that takes the reported time returned from the subprocess and adds that object/value to an array variable in the parent process? I would recommend using a SPIN JSON array for this.
See Pattern Review: DMN Looping for Array Input for a similar scenario where the output of a multi-instance DMN is stored in an array/JSON SPIN variable at the parent-process level.
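To make the "append to a parent-level array" step concrete, here is a minimal sketch of the read-append-write logic such a script would perform. The `execution` object is mocked so the sketch runs standalone, and plain `JSON.parse`/`JSON.stringify` stands in for the SPIN API; the variable names (`reportedTimes`, the result shape) are illustrative, not from the original thread.

```javascript
// Hypothetical stand-in for Camunda's DelegateExecution, so this sketch
// runs standalone; in a real listener the engine hands you the execution.
function makeExecution() {
  var vars = {};
  return {
    getVariable: function (name) { return vars[name]; },
    setVariable: function (name, value) { vars[name] = value; }
  };
}

// Append one subprocess result to a JSON-array process variable.
// Plain JSON parse/stringify keeps the sketch self-contained; inside
// Camunda you would do the same read-append-write step with SPIN.
function collectResult(execution, reportedTime) {
  var json = execution.getVariable("reportedTimes") || "[]";
  var list = JSON.parse(json);
  list.push(reportedTime);
  execution.setVariable("reportedTimes", JSON.stringify(list));
}

var execution = makeExecution();
collectResult(execution, { project: "alpha", hours: 3 });
collectResult(execution, { project: "beta", hours: 5 });
console.log(execution.getVariable("reportedTimes"));
// → [{"project":"alpha","hours":3},{"project":"beta","hours":5}]
```

The key point is that each completion appends to the existing array rather than replacing it, which is what keeps the collected results from clobbering each other.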
It's actually not so simple: when you use a subprocess instead of a DMN task, all of the local subprocess variables are gone by the time the end-event listener fires. I may yet have a workaround, but this is much more difficult than it ought to be.
That step wasn’t obvious from the DMN example, though I think you mentioned it.
I intend this to be an operation used across many processes, and I ended up implementing a delegate listener on the sub-process end event that sets a variable on the DelegateExecution's super execution.
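The shape of that listener can be sketched as follows. Both execution objects are mocked here so the sketch runs standalone; in the engine the end-event listener receives the real execution, and the exact way you reach the super execution depends on your Camunda version. The variable names are hypothetical.

```javascript
// Mocked variable scopes standing in for Camunda execution objects;
// the real ones come from the engine at runtime.
function makeScope() {
  var vars = {};
  return {
    getVariable: function (n) { return vars[n]; },
    setVariable: function (n, v) { vars[n] = v; }
  };
}

var superExecution = makeScope(); // execution of the calling (parent) process
var subExecution = makeScope();   // execution inside the called sub-process
subExecution.getSuperExecution = function () { return superExecution; };

// End-event listener logic: push the sub-process result up to the caller
// before the sub-process scope (and its local variables) is destroyed.
function notify(execution) {
  var result = execution.getVariable("reportedTime"); // hypothetical name
  execution.getSuperExecution().setVariable("lastReportedTime", result);
}

subExecution.setVariable("reportedTime", 7.5);
notify(subExecution);
console.log(superExecution.getVariable("lastReportedTime")); // → 7.5
```

The timing matters: the copy-up has to happen in the end listener, while the sub-process variables still exist.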
@StephenOTT I have a question. According to the Camunda spec, multi-instance doesn't support output mappings: since each instance outputs data to the same variable, different instances may overwrite it.
Is it possible that the end listener on your "Some sub-Process" task reads a variable that has already been overwritten by another instance? If so, the result would not be what we expect.
But the Camunda user guide says "No output mapping for multi-instance constructs": "The engine does not support output mappings for multi-instance constructs. Every instance of the output mapping would overwrite the variables set by the previous instances and the final variable state would become hard to predict."
You are correct that the docs do say that. BUT in my example you will see how we use an end listener with a script to append to a single variable: every time a sequential instance completes, it appends to an existing variable ("adding to the existing list").
@cheppin This statement is valid for embedded sub-processes. If you change the call activity to an embedded sub-process, you will notice that the "Output Parameters" section in the "Input/Output" tab vanishes.
This brings me to my question. I am taking the liberty of asking it in the same thread rather than opening a new one, since the heading is "How to Accomplish Multi-instance I/O Mapping" and it's not specific to call activities.
@StephenOTT Do you have any idea how to achieve this when it's a multi-instance embedded subprocess? For instance, in your pattern process array-input-dmn.bpmn, if you also wanted the output of the individual subprocesses, how would you achieve that? By setting super-execution variables?
BTW, your examples are winners!
@chaitanyajoshi if you want each output of an embedded sub-process, then you follow the same pattern as the DMN array-loop thread. If you want all variables merged into a single variable in the parent, then you need to use a sequential multi-instance. When you are saving your variable in the sub-process, you should be able to access the variables of the main process, so you just need to update the main-process variable with your additional data.
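A minimal sketch of that merge step, with a mocked process-instance scope standing in for `execution` (variable names and the `item * 10` "work" are illustrative):

```javascript
// Mocked process-instance scope; inside the engine, the script in the
// sub-process would call execution.getVariable/setVariable instead.
function makeProcessScope() {
  var vars = {};
  return {
    getVariable: function (n) { return vars[n]; },
    setVariable: function (n, v) { vars[n] = v; }
  };
}

var processScope = makeProcessScope();
processScope.setVariable("allResults", []); // container created up front

// Sequential multi-instance: each iteration appends its local result to
// the main-process list. Safe only because instances run one at a time.
[1, 2, 3].forEach(function (item) {
  var localResult = item * 10; // stand-in for the sub-process's real work
  var list = processScope.getVariable("allResults");
  list.push(localResult);
  processScope.setVariable("allResults", list);
});

console.log(processScope.getVariable("allResults")); // → [10, 20, 30]
```

Note the read-append-write cycle would race under parallel multi-instance, which is why the sequential flavor is required for this pattern.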
There is some weirdness in how embedded sub-processes are executed: executions, instance IDs, etc.
For sure. At present I solved the problem by adding a variable directly to the process instance from the activity in the subprocess (remember, for an embedded sub-process there is only the process instance, not a parent process instance, since there is no parent process per se; embedded sub-processes are just variable scopes with new activity IDs).
This is a serial multi-instance, so the code is straightforward. For a parallel multi-instance process, I would have had to create a container variable before starting the sub-processes and then add the local variable values to that container, as explained in Stephen's template.
One could argue that this variable will be overwritten in the parent process every time it's set. I am aware of this, and it is by design, since I am using the variable as a termination condition. If it were not a termination-condition variable, then the approach would again be to store the values in a container.
@StephenOTT Once again, thanks for your help and directions. And the ParallelUpdateMap plugin for Camunda is a brilliant idea!
The output of a multi-instance activity (e.g. the result of a calculation) can be collected from the instances by defining the outputCollection and the outputElement variables.
outputCollection defines the name of the variable under which the collected output is stored (e.g. results). It is created as a local variable of the multi-instance body and is updated when an instance completes. When the multi-instance body completes, the variable is propagated to its parent scope.
outputElement defines the variable the output of each instance is collected from (e.g. result). It is created as a local variable of the instance and should be updated with the output. When the instance completes, the variable is inserted into the outputCollection at the same index as the inputElement of the inputCollection. So the order of the outputCollection is deterministic and matches the inputCollection, even for parallel multi-instance activities. If the outputElement variable is not updated, null is inserted instead.
If the inputCollection is empty then an empty array is propagated as outputCollection .
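The semantics described above can be simulated in a few lines, which makes the index-alignment and null-fill behavior easy to see (this is an illustration of the described rules, not Zeebe code):

```javascript
// Simulation of the Zeebe semantics described above: each instance's
// outputElement is inserted into outputCollection at the same index as
// its inputElement; an instance that never updates it contributes null.
function collectOutputs(inputCollection, runInstance) {
  var outputCollection = new Array(inputCollection.length).fill(null);
  inputCollection.forEach(function (inputElement, index) {
    var outputElement = runInstance(inputElement); // undefined = not updated
    if (outputElement !== undefined) {
      outputCollection[index] = outputElement;
    }
  });
  return outputCollection; // an empty inputCollection yields []
}

console.log(collectOutputs([2, 5, 8], function (x) {
  return x < 8 ? x * x : undefined; // third instance never sets its output
}));
// → [ 4, 25, null ]
```

Because slots are addressed by input index rather than appended on completion, the result order is stable even when instances run in parallel.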
Very nice mechanism in Zeebe. I wonder if Camunda will follow?