I have a use case like this.
A REST service endpoint in Java receives a request, performs some DB operations, and publishes a message to Kafka; a Python program consumes the message from Kafka, processes it, and finishes by sending an email.
I want all of this to be captured as a single task in a Camunda workflow.
I understand that the REST endpoint being hit can be modeled with a message start event.
What standard approach would be a good fit for the rest of the flow?
I wouldn’t model it as a single task.
It might make sense to model it as 3 items:
(Message Start) - This is your REST call
(Call Activity) - This is all the actions, DB Operations, Python Calls, etc
(Message End) - This is the email back saying “Ok, it’s done”
But this is just from the modeling perspective.
From an implementation perspective, the process that this “overview” process calls would have an activity for each function (DB Update, Python Script).
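As a rough illustration of that called sub-process, here is a minimal plain-JavaScript sketch, one async function per activity, run in order. The function names (updateDatabase, runPythonJob, sendEmail) are hypothetical stand-ins; in a real deployment each would be a separate service task with its own worker:

```javascript
// Hypothetical sketch: each activity of the called sub-process as its
// own async function, executed in BPMN order. The bodies are stubs.

async function updateDatabase(vars) {
  // stub: pretend we persisted the request and got back a record id
  return { ...vars, recordId: 42 };
}

async function runPythonJob(vars) {
  // stub: in reality this would publish to Kafka for the Python consumer
  return { ...vars, processed: true };
}

async function sendEmail(vars) {
  // stub: the final activity of the sub-process
  return { ...vars, emailSent: true };
}

// Run the activities in the order the BPMN sub-process would
async function runSubProcess(input) {
  let vars = input;
  for (const activity of [updateDatabase, runPythonJob, sendEmail]) {
    vars = await activity(vars);
  }
  return vars;
}
```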
Hi @mghildiy ,
Capturing multi-service processing in a single task using Camunda 8 is possible, but there are trade-offs depending on how tightly coupled or complex your services are. Here’s a breakdown of how you can approach this, along with best practices and implementation options.
When to Use This:
- The service calls are tightly coupled and logically form a single unit of work.
- You want to simplify the BPMN diagram.
- Failure handling, retrying, or compensation logic is managed internally (not separately modeled in BPMN).
Option 1: Use a Single External Task Worker (Job Worker)
Camunda 8 uses Zeebe job workers. You can implement a worker that handles multiple service invocations in sequence or in parallel within the same worker.
Worker code (JavaScript, using the Node.js Zeebe client; callServiceA/B/C are placeholders for your own service-invocation functions):

const { ZBClient } = require('zeebe-node');
const zbc = new ZBClient(); // connects to the broker (localhost:26500 by default)

zbc.createWorker({
  taskType: 'multi-service-processor',
  taskHandler: async (job) => {
    try {
      // chain the service calls, feeding each result into the next
      const result1 = await callServiceA(job.variables);
      const result2 = await callServiceB(result1);
      const finalResult = await callServiceC(result2);
      return job.complete({ finalResult });
    } catch (err) {
      console.error('Service failure:', err);
      return job.fail('Multi-service task failed');
    }
  },
});
Option 2: Use Script Execution Inside a Connector (e.g., Python, JavaScript, etc.)
If you’re using connectors or a custom connector, you can embed logic that chains multiple services.
This works well for:
- REST connectors with scripting
- Low-code / no-code use cases
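As a sketch of what such embedded connector logic can look like (the endpoints, function names, and fields here are all made up), the script's job is usually to map the output of one call into the input of the next:

```javascript
// Hypothetical connector script: chain two service calls, mapping
// fields between them. fetchOrder/notifyBilling are stand-ins for
// real REST calls (e.g. via fetch or axios).

async function fetchOrder(orderId) {
  // stub response; a real script would call the order service here
  return { id: orderId, total: 99.5, customerEmail: 'user@example.com' };
}

async function notifyBilling(payload) {
  // stub response; a real script would POST to the billing service
  return { invoiced: true, amount: payload.amount };
}

// The connector's single entry point: one task, several calls
async function handle(context) {
  const order = await fetchOrder(context.orderId);
  // map the first response into the second request
  const billing = await notifyBilling({
    amount: order.total,
    email: order.customerEmail,
  });
  return { orderId: order.id, ...billing };
}
```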
Option 3: Call a Micro Orchestration Inside the Worker
Create a separate orchestration layer (e.g., via code or another BPMN model) that handles the multi-step process, and call it as a single task from the main BPMN.
You can do this if the multi-service step becomes too complex to manage inline.
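One way to sketch such a micro-orchestration layer in code (the step names and shapes here are invented, not a Camunda API) is a small runner that executes named steps in order and reports which one failed, so the outer BPMN task stays a single unit while the internals remain traceable:

```javascript
// Hypothetical micro-orchestrator: a list of named steps run in order.
// The outer BPMN task calls runOrchestration() as one unit, but the
// orchestrator still knows which internal step completed or failed.

async function runOrchestration(steps, input) {
  let vars = input;
  const trace = []; // record completed step names for observability
  for (const { name, run } of steps) {
    try {
      vars = await run(vars);
      trace.push(name);
    } catch (err) {
      // surface the failing step to the caller instead of a generic error
      throw new Error(`Step "${name}" failed: ${err.message}`);
    }
  }
  return { vars, trace };
}
```

Each step is just a name plus an async function, so the DB update, Kafka publish, and email steps from the original use case would each become one entry in the list.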
Caveats:
- Error handling becomes internal — BPMN won’t reflect individual failure points.
- Observability is reduced — logs or tracing must expose internal steps.
- Retry logic for substeps must be managed in code (not BPMN).
- Compensation (undoing changes) is harder to model visually.
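For the retry caveat in particular: since a BPMN-level retry would re-run the whole task, each substep typically gets wrapped in its own retry helper. A minimal sketch (the attempt count is arbitrary, and a real version would likely add backoff):

```javascript
// Hypothetical per-substep retry wrapper. BPMN retries re-run the
// entire single task, so substep-level retries must live in code.
async function withRetry(fn, attempts = 3) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err; // a real version would add a backoff delay here
    }
  }
  throw lastErr;
}
```

Each service call in the worker could then be wrapped individually, e.g. `await withRetry(() => callServiceA(vars))`, so one flaky downstream call does not restart the whole chain.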