How to implement synchronous execution in Zeebe

E.g., when a RESTful API request arrives, I will start a Zeebe process instance, and I want to keep the RESTful request pending until some job has been executed.

I think a distributed lock may be a helpful solution, but does Zeebe have official support for this case?
Or does Zeebe have a recommended solution?

Hi @aximo, there are two different patterns for a synchronous REST response wrapping a Zeebe workflow demonstrated here:

Good example.
How do I implement this with the Java client?

Broad question.

In the most general sense: implement the same logic in Java code, using the Zeebe Java client, and some Java web server.

If I were doing it: I already know how to program in Java, syntax-wise, although I would have to look a few things up along the way, but I could get started (that’s how I approached my PR to Bernd’s Kafka connector).

I would have to look into which web server to use (maybe I’d look at netty, or I have used one of the microhttp projects on GitHub before, so I might use that for a POC). I’d probably make a decision on which to use based on what I planned to eventually use, or what I have used in the past. That would replace Express in the Node example. But I wouldn’t waste too much time deciding. I’d want to start with something pretty much immediately.

The EventEmitter pattern/library - not sure how you do that in Java idiomatically, so that would be the next thing I would investigate.

And if I didn’t have time, or the interest to do any of that (like if programming Zeebe in Java had no future for me, so it was not an investment to learn it, or I needed to roll it out fast and had other things to do, or if I were reading “The Four-Hour Work Week” and outsourcing/automating my day job), I would go on and pay someone to fork that repo and do a Java implementation. And then probably read it afterwards to see how they did it.

Here is a Java EventEmitter:
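For a rough idea of the shape, a minimal emitter in plain Java (all class and method names here are mine, not from any particular library) could look like:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Minimal EventEmitter: listeners keyed by event name, fired synchronously.
class EventEmitter {
    private final Map<String, List<Consumer<Object>>> listeners = new ConcurrentHashMap<>();

    // Register a listener for an event name.
    public void on(String event, Consumer<Object> listener) {
        listeners.computeIfAbsent(event, k -> new CopyOnWriteArrayList<>()).add(listener);
    }

    // Invoke every listener registered for this event name.
    public void emit(String event, Object payload) {
        listeners.getOrDefault(event, List.of()).forEach(l -> l.accept(payload));
    }
}
```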

The issue with the “Synchronous HTTP response using an EventEmitter” pattern as a POC is that it relies on a single worker running in the same memory space as the web server.

To actually make this scalable, you would need something that does distributed publish/subscribe to emit the final output from any worker, running on any machine.

But I would do it in-memory on a single worker running in-process with the webserver, just to validate it. Because standing up the distributed pub/sub adds a bunch of complexity.
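To make that in-memory version concrete, here is a sketch of the bridge between the HTTP handler and the worker, in plain Java. The Zeebe-specific calls (creating the instance, the job worker handler) are only indicated in comments, and all names are mine:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

// In-memory bridge between the HTTP handler and a job worker running in
// the same JVM: the worker completes the future, the handler awaits it.
class PendingRequests {
    private final Map<Long, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Called by the HTTP handler after starting the workflow instance with
    // the Zeebe Java client and taking the instance key from the response.
    public String await(long instanceKey, long timeoutSeconds) throws Exception {
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(instanceKey, future);
        try {
            return future.get(timeoutSeconds, TimeUnit.SECONDS);
        } finally {
            pending.remove(instanceKey); // clean up on success or timeout
        }
    }

    // Called from the final job worker's handler with the job's workflow
    // instance key and the result payload.
    public void complete(long instanceKey, String result) {
        CompletableFuture<String> future = pending.get(instanceKey);
        if (future != null) {
            future.complete(result);
        }
    }
}
```

The timeout matters: if the workflow outlives the HTTP request, the handler gets a timeout and can fall back to returning a status instead of a result.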

With the “Callback URL and client-side result polling” pattern you have to deal with caching and expiring the result in a distributed fashion. So it has a different scaling issue.

Redis could be used to solve either of these, I think.

Sorry for the unclear question.

I know your example, but it has some limits. As I understand it, your example only works when the jobs run in the same process that holds the RESTful request; but in the most common cases the jobs/workers are deployed in different processes, so we may need a distributed lock/notification to let the RESTful API respond to its caller.

Maybe Zeebe could add a new feature that allows a worker/client to register a listener: the worker would continuously pull job execution events from the broker and invoke the listener. That would give clients more flexibility in many cases, such as the one above.

I see. You could do this now, by writing an exporter. You could run a process that allows the web server to register interest in an event, and call that web server on a callback channel when it arrives in the exported data.

So it’s pub/sub with a custom exporter.
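The exporter itself would implement Zeebe's Exporter SPI, which I won't reproduce here; this is only a sketch of the web-server-side registry that the exporter (or whatever transport sits between them) would publish into. Plain Java, all names mine:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// One-shot interest registry: the web server registers a callback for a
// workflow instance key, and the exporter side calls publish() when the
// instance-completed record shows up in the exported stream.
class CompletionRegistry {
    private final Map<Long, Consumer<String>> interest = new ConcurrentHashMap<>();

    public void register(long instanceKey, Consumer<String> callback) {
        interest.put(instanceKey, callback);
    }

    public void publish(long instanceKey, String payload) {
        Consumer<String> cb = interest.remove(instanceKey); // one-shot
        if (cb != null) {
            cb.accept(payload);
        }
    }
}
```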

Once you get to this point, though, I think it’s worth going back to the solution architecture to see if there is a more idiomatic way to solve the initial problem.

Because: you have to deal with a fundamental impedance mismatch here. If your HTTP request/response times out before the workflow completes (because of load, latency, or external dependencies), you may want the web server to send some kind of status back to the client rather than just time out.

But, now you have a workflow in-flight, and no way to communicate the outcome. And, unless you are using start message events with idempotent start messages, or don’t need this operation to be idempotent, if the client retries the request you’ll now have a duplicate workflow instance.

Fundamentally: a workflow is an asynchronous, stateful, potentially long-running process, which can be inspected to get progress updates. A REST request, by contrast, is synchronous and one-shot, and it is stateful only until the response is sent.

So I think that a callback to the client, either via a webhook (which means you don’t have to manage caching, but your clients do need a web server), or via a server-side results url and client polling, is a better impedance match. This way you can also expose a status endpoint to get progress.
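Here is a sketch of the server-side piece of that polling variant; the endpoint shapes in the comments and all the names are illustrative, not a fixed API:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Server-side result cache backing a client-polling endpoint, e.g.:
//   POST /process      -> start(requestId); respond 202 plus a results URL
//   GET  /results/{id} -> "DONE" ? 200 with result(id) : 202 "still pending"
class ResultStore {
    private final Set<String> pending = ConcurrentHashMap.newKeySet();
    private final Map<String, String> done = new ConcurrentHashMap<>();

    public void start(String requestId) {
        pending.add(requestId);
    }

    // Called by the final job worker. With a shared store (e.g. Redis with
    // a TTL) this works from any machine and handles result expiry too.
    public void complete(String requestId, String result) {
        done.put(requestId, result);
        pending.remove(requestId);
    }

    public String status(String requestId) {
        if (done.containsKey(requestId)) return "DONE";
        if (pending.contains(requestId)) return "PENDING";
        return "UNKNOWN";
    }

    public String result(String requestId) {
        return done.get(requestId);
    }
}
```

The same `status` method can back the progress endpoint mentioned above.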

If your workflows are always short-lived and guaranteed to complete within a REST request-response, you still have to deal with the potential failure cases. This includes a partition dropping below quorum and accepting requests to start workflows, but not actioning them until quorum is restored. That will definitely mess with a one-shot synchronous HTTP request-response.

Thank you for your advice.

In our plan, we will keep the RESTful API idempotent. We will map the user request to the started workflow instance and record it in a DB/Redis, so when the client retries the RESTful API we will pick up the same workflow instance until we get the job result.

Putting the idempotency into your REST layer is a good idea.

At the moment we have idempotent workflow creation via message start events, but you cannot get a reference to the started workflow this way.

If you start the workflow via createWorkflowInstance, it cannot be guaranteed to be idempotent, but you will get a reference to track the outcome.

So you’ll need to handle idempotency in a layer above createWorkflowInstance, then use that method to start a workflow and get a reference.
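That layer could be as small as this sketch, with the in-memory map standing in for the DB/Redis mapping, and all names mine:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Idempotency layer above createWorkflowInstance: the first call for a
// given request id starts an instance and records its key; retries with
// the same request id return the recorded key instead of starting a
// duplicate instance.
class IdempotentStarter {
    private final Map<String, Long> instanceKeys = new ConcurrentHashMap<>();

    // createInstance wraps the Zeebe client call and returns the workflow
    // instance key from the create-instance response.
    public long startOnce(String requestId, Supplier<Long> createInstance) {
        // computeIfAbsent runs createInstance at most once per request id
        return instanceKeys.computeIfAbsent(requestId, id -> createInstance.get());
    }
}
```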

You could also use idempotent start messages and correlate the output to the request via a correlation key in the state. Although this way, because a start message is a “fire and forget” publish operation, your REST layer cannot tell the client whether it was a duplicate or not, unless it is tracking previous start messages.

At the moment, the correlation key + message id of a start message guarantees idempotency, and the hash of the correlation key is used to load-balance start messages across partitions, but the correlation key is then discarded and not available for correlation at the end.
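As a toy illustration of those dedup semantics (not the broker's actual implementation, and the message names are made up): a start message is accepted only if its (name, correlation key, message id) triple has not been seen before. With the Java client, the publish side corresponds to `newPublishMessageCommand()` with `messageName`, `correlationKey`, and `messageId`.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Toy model of start-message deduplication: accept a message only if its
// (name, correlationKey, messageId) triple is new.
class StartMessageDedup {
    private final Set<String> seen = ConcurrentHashMap.newKeySet();

    public boolean accept(String name, String correlationKey, String messageId) {
        // Set.add returns false if the triple was already present
        return seen.add(name + "|" + correlationKey + "|" + messageId);
    }
}
```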

You need to put a correlation key into the workflow state to have it flow through the system.

If you don’t need to report duplicates back to the client, you could just use idempotent start messages and correlation via a variable in the workflow state.

Tracking them in Redis from the REST layer gives you the best of both worlds, and the most flexibility / resilience.

There is a new solution for this: