Microservice orchestration using Camunda 8 and an API gateway

We use the Camunda 8 workflow engine (Zeebe, self-hosted) to orchestrate our microservice API calls. There are 15-20 microservices in our application architecture. We have written wrappers over our existing microservice APIs and exposed them as Zeebe workers within the microservices' code. These Zeebe workers poll the Zeebe engine to fetch tasks for execution.

Until now, all API requests made by external clients (REST clients, UI code, etc.) have gone through Kong (our API gateway).

We want the same behaviour when the Zeebe worker code is executed, so that the central plugins configured at the API gateway are applied. These central plugins include rate limiting, monitoring, circuit breakers, etc. In short, we want the Zeebe worker code (the wrappers over the API calls) to go through the Kong API gateway.

What architecture suggestions would you have in order to support this requirement?

Thanks

Hi @jgeek1,

Please have a look at the new Camunda Connector SDK: Connector SDK | Camunda Platform 8 Docs

It might fit your use case.

Best regards,
Philipp

Thanks @Philipp_Ossler. How do you imagine a custom connector being helpful in my use case?

Just a rough idea. The connector runtime environment could provide cross-functional features, like monitoring or rate-limiting.

You create a custom connector for each service. The connector can focus on the business logic (i.e., calling your service). The runtime environment encapsulates the cross-functional aspects.
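To make that a bit more concrete, here is a very rough sketch using the Connector SDK (the service name, task type, input variables, and the service call are just placeholders, and the exact API may differ slightly between SDK versions). The connector function contains only the call to your service; the runtime environment it is deployed into can take care of the cross-functional concerns:

```java
import java.util.Map;

import io.camunda.connector.api.annotation.OutboundConnector;
import io.camunda.connector.api.outbound.OutboundConnectorContext;
import io.camunda.connector.api.outbound.OutboundConnectorFunction;

@OutboundConnector(
    name = "Order Service",
    inputVariables = {"orderId", "customerId"},
    type = "my-company:order-service:1") // hypothetical task type
public class OrderServiceConnector implements OutboundConnectorFunction {

  // Hypothetical input variables bound from the BPMN task
  record OrderRequest(String orderId, String customerId) {}

  @Override
  public Object execute(OutboundConnectorContext context) {
    OrderRequest request = context.bindVariables(OrderRequest.class);

    // Business logic only: call the order service here.
    // Cross-functional concerns (monitoring, rate limiting, ...) are left to the
    // connector runtime environment rather than being coded per connector.
    return callOrderService(request);
  }

  private Map<String, Object> callOrderService(OrderRequest request) {
    // Placeholder for the actual service call (e.g. an HTTP or Java client).
    return Map.of("orderId", request.orderId(), "status", "CREATED");
  }
}
```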


I already have these cross-functional features enabled on the Kong API gateway. Won’t I have to re-enable/configure them again in the Connector SDK?

I have no idea about the Kong API gateway.

If this API gateway works for you then you might be able to put it between the Zeebe client and the Zeebe gateway. Does this work for you?

I have no idea about the Kong API gateway.

It is no different from any other standard API gateway.

That might work. I can configure Kong to accept and route gRPC calls, but mapping them to the standard REST API calls is something I am still trying to figure out.

I raised the question here to understand whether there is a solution you have seen among Camunda 8 clients that use the Camunda engine for orchestrating API calls between microservices. Having an API gateway in front of all microservices is a pretty standard practice. More thoughts are welcome.

Thanks.

I think you want to do something like this, @jgeek1

Thanks @GotnOGuts for the diagram. There are a couple of changes that I would suggest:

  1. The request flow from the C8 Gateway to the C8 Worker needs to go through the API Gateway.
  2. I imagine the C8 Worker to be part of the Microservices block, since C8 workers are implemented as methods in the microservices code.

I am assuming that by C8 workers you mean job workers. In our case, job workers are implemented as Java methods in the microservices code, marked with @ZeebeWorker using the annotations provided by this spring-zeebe project.

The above changes are a suggestion - they may not be the right solution. I will paste the problem statement again here:

We use the Camunda 8 engine for orchestrating API calls between microservices. Our job workers are wrapper methods over these REST API calls, implemented as part of the microservices codebase. We have the Kong API gateway in front of all our microservices for cross-functional capabilities. How do we make the API calls made by job workers go through the API gateway?
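For illustration, this is roughly what one of our job workers looks like today (simplified, with made-up names): the @ZeebeWorker method is a thin wrapper that calls the service method in-process, which is why the call never passes through Kong.

```java
import java.util.Map;

import io.camunda.zeebe.client.api.response.ActivatedJob;
import io.camunda.zeebe.client.api.worker.JobClient;
import io.camunda.zeebe.spring.client.annotation.ZeebeWorker;
import org.springframework.stereotype.Component;

// Hypothetical existing service layer of this microservice
interface OrderService {
    void createOrder(Map<String, Object> variables);
}

@Component
public class CreateOrderWorker {

    private final OrderService orderService;

    public CreateOrderWorker(OrderService orderService) {
        this.orderService = orderService;
    }

    // spring-zeebe polls the Zeebe gateway for jobs of this type
    @ZeebeWorker(type = "create-order")
    public void handleCreateOrder(final JobClient client, final ActivatedJob job) {
        // Direct in-process call to the microservice logic -- no HTTP request is made,
        // so Kong's rate-limiting/monitoring plugins never see this execution.
        orderService.createOrder(job.getVariablesAsMap());

        client.newCompleteCommand(job.getKey()).send().join();
    }
}
```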

Please take my comments with a big dose of caution… I’m still starting out with Camunda, so by no means an expert.


The C8 Worker (Job Worker, Camunda 8 Client) will want to reach out over gRPC directly to the C8 Gateway (Camunda Gateway).

This is where the polling of the Camunda engine, to see if there’s work to be done, will originate. If there’s work to be done, then the worker will reach out to the microservice over REST through the Kong gateway to request that the microservice complete the work. The worker will then either poll your microservice to confirm that the work is complete, or wait for the synchronous response from the microservice, and return the result back to the gateway.
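Here’s a rough sketch of what I mean by that (all names and the Kong URL are made up, and I’m assuming a spring-zeebe style worker): the job handler makes an ordinary HTTP call through Kong, so the gateway plugins see it like any other REST client.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import io.camunda.zeebe.client.api.response.ActivatedJob;
import io.camunda.zeebe.client.api.worker.JobClient;
import io.camunda.zeebe.spring.client.annotation.ZeebeWorker;
import org.springframework.stereotype.Component;

@Component
public class CreateOrderRestWorker {

    private final HttpClient http = HttpClient.newHttpClient();

    // Polls the C8 Gateway for "create-order" jobs, then calls the microservice over REST via Kong
    @ZeebeWorker(type = "create-order")
    public void handleCreateOrder(final JobClient client, final ActivatedJob job) {
        try {
            // Placeholder Kong route; in reality this would be your gateway URL for the order service
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://kong-gateway:8000/order-service/orders"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(job.getVariables()))
                    .build();

            HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());

            if (response.statusCode() / 100 == 2) {
                client.newCompleteCommand(job.getKey()).send().join();
            } else {
                client.newFailCommand(job.getKey())
                        .retries(job.getRetries() - 1)
                        .errorMessage("Order service returned HTTP " + response.statusCode())
                        .send()
                        .join();
            }
        } catch (Exception e) {
            client.newFailCommand(job.getKey())
                    .retries(job.getRetries() - 1)
                    .errorMessage(e.getMessage())
                    .send()
                    .join();
        }
    }
}
```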

The way I read your problem statement is that you are wondering how to route the REST API calls made by the job workers through your Kong gateway.

It sounds instead like you have multiple API wrappers around your microservices (the REST API is one of them, the Job Worker is another). In that scenario, if the job worker isn’t actually initiating a REST call, then you can’t really route it through Kong.

I suppose the biggest question is: What exactly is it that you’re trying to rate-limit?

Is it the calls to the microservices, or the calls to Camunda?

Please take my comments with a big dose of caution… I’m still starting out with Camunda, so by no means an expert.

Appreciate your efforts in brainstorming this.

If there’s work to be done, then the worker will reach out to the microservice over REST through the Kong gateway to request that the microservice complete the work.

This is where there is a difference between my understanding and yours. So far we have implemented job workers as part of the microservices code repo. They are not running as a separate process. We could certainly code and deploy them as separate application(s), but that would mean:

  1. For every microservice (15-20) and every REST API, we would have to code and update this worker application.
  2. We would have to expose an SDK for each microservice to make it easier for the worker application to call the REST APIs.

This approach would work, but it becomes a big impediment when microservices are developed in parallel and are constantly changing.

Hence I wanted to check on this forum if there is a simpler way.

Thanks

I’ll try to update the image above to reflect what I’m hearing. But if your worker is simply doing a direct call to the microservice (think of it like a call activity), where the REST interface also calls the same microservice, then there’s not really a good way to integrate an API gateway into that work, since there’s not really an API call.

But it’s possible that I’m still misunderstanding.

there’s not really a good way to integrate an API gateway into that work, since there’s not really an API call.

Practically, in our case, every job worker function is a wrapper over a REST API call. Hence we wanted them to be tracked just as the other REST API calls are tracked on Kong.

If the Job Worker is making a REST call, then route that through the Gateway.
Follow the “Other REST Client” communication path to see the data that you’re seeing today.

If your architecture is more like the following, then you won’t really be able to put the job worker requests to the microservice through the API gateway, since it’s a Java IPC call rather than a REST API call.

That’s part of the detail I was trying to get to… what do these processes actually look like?

There is one important change to the diagram compared to our architecture. We don’t currently have a separate app that calls the microservice function - I am referring to the “Microservice REST interface” block in your diagram.

The microservice REST API code resides in the “C8 Worker” block (app) in the diagram above. The task named “Call microservice function” is an internal Java service method call inside the Java Spring Boot app.
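To make that concrete, here is a simplified sketch of the app’s structure (made-up names, continuing the example from my earlier post): the REST controller and the @ZeebeWorker method live in the same Spring Boot app and both delegate to the same internal service method, so only the controller path is visible to Kong.

```java
import java.util.Map;

import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Same hypothetical service-layer interface used by the @ZeebeWorker method
interface OrderService {
    void createOrder(Map<String, Object> payload);
}

@RestController
public class OrderController {

    private final OrderService orderService;

    public OrderController(OrderService orderService) {
        this.orderService = orderService;
    }

    // Path 1: external REST clients reach this endpoint through Kong, so the gateway plugins apply here...
    @PostMapping("/orders")
    public void createOrder(@RequestBody Map<String, Object> payload) {
        orderService.createOrder(payload);
    }

    // Path 2: ...whereas the @ZeebeWorker method in the same app calls orderService.createOrder(...)
    // directly in-process, which is why those executions never show up in Kong.
}
```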

If you are trying to say that you’ve got your code written as per the following diagram, then you’re not following good practices.

You’re missing the differentiation between logical flow and written code.

This setup still does not allow you to route your C8-scheduled tasks through your gateway, since you aren’t actually making a REST call from your worker.
The only way you can really enforce the pooled limits on your REST calls (including those from the C8 worker as well as any external REST client) would be to set your code up as shown in post 10.

Right

Ok, so basically we should be calling our microservices through a generic REST connector, right?

That really depends on what you want to do.
If you want to be able to set pooled limits on your REST calls (shared metering between external REST clients and C8 Clients), then you need to have your worker call the REST interface. That can be through a generic REST connector, or via a custom REST connector.

Your initial post indicated that your C8 worker was a wrapper around the REST call. If the latest diagram captures what you’re actually doing, then you haven’t made the C8 worker a wrapper around your REST API; you’ve written two interfaces (REST & C8) that both call the same underlying Java function. That’s why you were struggling to tie in your API gateway.

If instead you want to rate-limit your C8 workers, then you can do what was suggested in Philipp’s posts. Note that this will not cause the C8 worker to participate in your REST API rate limit… only how often a C8 worker can talk to the gateway.

Ok. We want to rate-limit our REST API calls. We will go with calling them through the generic REST connector.

This sounds like the standard approach to orchestrating microservice API calls using the Camunda engine when there is an API gateway in the architecture. Is that the right understanding?

The reason I am not entirely confident in my understanding is that “microservice API orchestration” is a core problem that Camunda, as a workflow engine, is trying to solve, and yet there isn’t any official documentation that states the above solution. Any comments?

Thanks.