Creating reusable global subprocesses in Camunda BPM with well-defined input/output parameters

Hello,

We have two basic "beginner" questions about implementing reusable subworkflows in Camunda BPMN:

Question 1: Required data mapping between parent processes and global subprocesses

I have read the Camunda documentation about parent processes, embedded subprocesses and global subprocesses (https://camunda.org/bpmn/reference/#activities-call-activity).

If I understand it correctly, the optimal path for creating reusable subcomponents (or subprocesses) in BPMN is to use global subprocesses.

We want to implement a well-defined API for each global subprocess (i.e. well-defined input and output parameters and data types), but we don't know how to implement this correctly and what the best practices are. The main goal is that the parent process passes only the required input parameters to the global subprocess, which returns well-defined output parameters back to the parent process.

In the Camunda documentation, they call this “required data mapping between parent processes and global subprocesses”.

Can you point us to more information about this data mapping between parent processes and global subprocesses? Any best practices? Is there example code somewhere that could help us?

Question 2: Difference between calling a BPMN process via the Camunda REST API or via a global subprocess (call activity)

Instead of using global subprocesses, we also considered implementing a microservice architecture (using HTTP REST API calls) by calling BPMN subworkflows via the Camunda REST API. E.g. we would create a Service Task that launches another BPMN process via the Camunda REST API.
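Roughly, we were thinking of something like the sketch below. It assumes the Camunda Connect http-connector plugin is enabled; the engine-rest URL, the process key `handleAlarm` and the variable `alarmId` are just placeholders:

```xml
<serviceTask id="StartSubWorkflow" name="Start subworkflow via REST">
  <extensionElements>
    <camunda:connector>
      <camunda:connectorId>http-connector</camunda:connectorId>
      <camunda:inputOutput>
        <!-- Camunda REST endpoint that starts a process instance by key -->
        <camunda:inputParameter name="url">http://localhost:8080/engine-rest/process-definition/key/handleAlarm/start</camunda:inputParameter>
        <camunda:inputParameter name="method">POST</camunda:inputParameter>
        <camunda:inputParameter name="headers">
          <camunda:map>
            <camunda:entry key="Content-Type">application/json</camunda:entry>
          </camunda:map>
        </camunda:inputParameter>
        <!-- pass only the variables the subworkflow needs -->
        <camunda:inputParameter name="payload">{"variables": {"alarmId": {"value": "${alarmId}", "type": "String"}}}</camunda:inputParameter>
        <!-- keep the HTTP response for error handling -->
        <camunda:outputParameter name="startResponse">${response}</camunda:outputParameter>
      </camunda:inputOutput>
    </camunda:connector>
  </extensionElements>
</serviceTask>
```

A Java delegate doing the same HTTP call would of course also work; the point is only that the subworkflow is started through the public REST API instead of a call activity.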

What are the (dis)advantages of both approaches (global subprocess vs. calling a process via the external Camunda REST API)? Has anyone implemented a microservice architecture with the Camunda REST API?

Thanks in advance.

Kind regards,
Bart

Personally, I would just build processes that perform the “service” you want to implement and let other processes call them through the Call Activity. That’s what we’re doing.

It has the advantage of using the existing tooling for documentation and implementation, and you can pass data between parent and called processes very easily. Another advantage is that you keep each process limited to a well-defined function, so the processes don't become cumbersome.
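To give you an idea, the data mapping on a Call Activity looks roughly like this (a sketch; the process key and variable names are made up, and the camunda namespace has to be declared on the definitions element):

```xml
<callActivity id="CheckService" name="Check service" calledElement="checkServiceProcess">
  <extensionElements>
    <!-- pass only the variables the called process needs -->
    <camunda:in source="hostName" target="hostName"/>
    <camunda:in source="serviceName" target="serviceName"/>
    <!-- copy the result back into the parent process -->
    <camunda:out source="restartSucceeded" target="restartSucceeded"/>
  </extensionElements>
</callActivity>
```

Because nothing is passed unless you declare it (short of using `<camunda:in variables="all"/>`), these in/out mappings effectively are the "API" of the called process.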

For example, I have a process that takes an alarm, parses it, and decides which workflow to invoke next. Each of these workflows can do any number of things. One of them checks a service on a host and attempts to restart it. If the service doesn't restart, a different process is invoked to cut a ticket for someone to deal with.

These individual service processes can be tied together through the use of DMN tables, which allow for somewhat easier maintenance of large decision matrices, though they're by no means perfect. For example, in my implementation, the parsed alarm message value indicating which service is down is passed to a DMN table whose output is the next process to invoke. This can continue for any number of iterations and is limited only by your ability to manage them.
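As a rough illustration (the decision key, variable names and process keys are invented), such a decision table is just a DMN fragment along these lines:

```xml
<decision id="selectNextWorkflow" name="Select next workflow">
  <decisionTable id="nextWorkflowTable" hitPolicy="FIRST">
    <!-- input: which service the parsed alarm reported as down -->
    <input id="failedServiceInput" label="Failed service">
      <inputExpression typeRef="string">
        <text>failedService</text>
      </inputExpression>
    </input>
    <!-- output: the key of the process to invoke next -->
    <output id="nextProcessOutput" label="Next process key" name="nextProcessKey" typeRef="string"/>
    <rule id="rule1">
      <inputEntry><text>"httpd"</text></inputEntry>
      <outputEntry><text>"restartServiceProcess"</text></outputEntry>
    </rule>
    <rule id="rule2">
      <inputEntry><text>"smtp"</text></inputEntry>
      <outputEntry><text>"createTicketProcess"</text></outputEntry>
    </rule>
  </decisionTable>
</decision>
```

The resulting nextProcessKey can then be referenced from the call activity as an expression, e.g. calledElement="${nextProcessKey}", so the routing logic stays in the table rather than in the diagram.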