Dynamically managing multi-tenant process engines in combination with Spring Boot?

How should I configure the Spring Boot part with multi-tenancy (MT) in Camunda?

Our setup:
We have our own ProcessEngine Spring factory that creates several underlying Camunda ProcessEngines (each with its own database) during startup, based on the received tenant configuration. We use “RuntimeContainerDelegate.INSTANCE.get().registerProcessEngine” to register the process engines.
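
Roughly what our factory does per tenant (simplified sketch; the class name and method parameters here are just illustrative, not our actual code):

```java
import javax.sql.DataSource;

import org.camunda.bpm.container.RuntimeContainerDelegate;
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.spring.SpringProcessEngineConfiguration;
import org.springframework.transaction.PlatformTransactionManager;

public class TenantProcessEngineFactory {

  /** Builds one engine per tenant, each against that tenant's own database. */
  public ProcessEngine createAndRegister(String tenantId, DataSource dataSource,
                                          PlatformTransactionManager txManager) {
    SpringProcessEngineConfiguration config = new SpringProcessEngineConfiguration();
    config.setProcessEngineName(tenantId);      // engine name = tenant id
    config.setDataSource(dataSource);           // tenant-specific DataSource
    config.setTransactionManager(txManager);
    config.setDatabaseSchemaUpdate("true");
    config.setJobExecutorActivate(true);

    ProcessEngine engine = config.buildProcessEngine();

    // make the engine known to the Camunda runtime container
    RuntimeContainerDelegate.INSTANCE.get().registerProcessEngine(engine);
    return engine;
  }
}
```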

However, the Camunda class SpringBootProcessApplication contains a default process engine, which is set to the first process engine created through the above call, and it also registers this engine in the afterPropertiesSet() method. That results in an exception, as there is already a ProcessEngine registered with the same name. The exception is easy to solve by overriding the method; however, I am wondering what the correct approach is concerning the (Spring) configuration when using the above dynamic approach, that is: adding and removing your own process engines during runtime.

I read another post about this approach: LINK, but it doesn’t use Spring Boot.
I also see that a lot of beans are started by the Camunda Spring Boot configuration. What should I use or disable when using the above approach?

I think I should extend SpringBootProcessApplication and override the afterPropertiesSet() and destroy() methods to deal with multiple process engines, but is that enough? What more do I need to do? And should I have some default process engine, which I read about in other posts?
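
Roughly what I have in mind (just a sketch, I have not verified whether this is enough; the constructor and the unregister/close handling are my own guesses):

```java
import java.util.List;

import org.camunda.bpm.container.RuntimeContainerDelegate;
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.spring.boot.starter.SpringBootProcessApplication;

public class MultiTenantProcessApplication extends SpringBootProcessApplication {

  private final List<ProcessEngine> tenantEngines;

  public MultiTenantProcessApplication(List<ProcessEngine> tenantEngines) {
    this.tenantEngines = tenantEngines;
  }

  @Override
  public void afterPropertiesSet() throws Exception {
    // Intentionally skip super.afterPropertiesSet(): the default implementation
    // registers the (first) process engine again, which throws because our
    // factory already registered an engine with that name.
  }

  @Override
  public void destroy() throws Exception {
    // Unregister and close every engine our factory created.
    RuntimeContainerDelegate containerDelegate = RuntimeContainerDelegate.INSTANCE.get();
    for (ProcessEngine engine : tenantEngines) {
      containerDelegate.unregisterProcessEngine(engine);
      engine.close();
    }
  }
}
```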

Any advice would be appreciated.

Multi-tenant deployments can be achieved via processes.xml (see the example below). But you will still have the default engine responsible for running all tenants.
Multiple engines were never considered for the starter; the assumption was that one would rather run multiple applications for this use case (following the idea of microservices/engines).
But since I got the request quite often in the past, it might become a feature worth supporting. Could you file an issue in Jira?
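
For reference, a multi-tenant processes.xml looks roughly like this (archive names, tenant ids and resource paths are just placeholders):

```xml
<process-application
    xmlns="http://www.camunda.org/schema/1.0/ProcessApplication">

  <!-- one process archive per tenant, all deployed to the default engine -->
  <process-archive name="tenant1-archive" tenantId="tenant1">
    <resource>processes/tenant1/order.bpmn</resource>
  </process-archive>

  <process-archive name="tenant2-archive" tenantId="tenant2">
    <resource>processes/tenant2/order.bpmn</resource>
  </process-archive>

</process-application>
```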


Multi-tenant deployments can be achieved via processes.xml.

That is not an option for us, as we retrieve the tenant configuration during startup from an external service.

But you will still have the default engine responsible for running all tenants.

What do you mean by this? I mean: I create my process engines myself, so do I still have a default? And what is that default (the first one created)?

Multiple engines were never considered for the starter

OK, I will put a request in Jira, but in the meantime, how can I get it to work with the starter?

Thanks for the quick reply.

@jangalinski What is the benefit of running multiple engines in a single application vs running multiple containers, each with their own app/engine instance?


Think I found a good use case for multiple engines:

… As this is configurable on engine level, you can also work in a mixed setup, when some deployments are shared between all nodes and some are not. You can assign the globally shared process applications to an engine that is not deployment aware and the others to a deployment aware engine, probably both running against the same database. This way, jobs created in the context of the shared process applications will get executed on any cluster node, while the others only get executed on their respective nodes.

You could use multiple engines to set up the type of engine processing you want, where you have some apps with specific code that can execute as a heterogeneous cluster, and still have your common homogeneous cluster engine running.
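
A very rough sketch of what that mixed setup could look like (DataSource wiring omitted; the engine names and the use of StandaloneProcessEngineConfiguration are just illustrative assumptions):

```java
import javax.sql.DataSource;

import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.impl.cfg.StandaloneProcessEngineConfiguration;

public class MixedClusterEngines {

  /** Engine for globally shared deployments: its jobs may run on any node. */
  public ProcessEngine sharedEngine(DataSource dataSource) {
    StandaloneProcessEngineConfiguration config = new StandaloneProcessEngineConfiguration();
    config.setProcessEngineName("shared");
    config.setDataSource(dataSource);             // same database as the node engine
    config.setJobExecutorDeploymentAware(false);  // pick up jobs from any deployment
    config.setJobExecutorActivate(true);
    return config.buildProcessEngine();
  }

  /** Engine for node-specific deployments: jobs only run where the app is deployed. */
  public ProcessEngine nodeEngine(DataSource dataSource) {
    StandaloneProcessEngineConfiguration config = new StandaloneProcessEngineConfiguration();
    config.setProcessEngineName("node-local");
    config.setDataSource(dataSource);             // same database as the shared engine
    config.setJobExecutorDeploymentAware(true);   // only jobs of registered deployments
    config.setJobExecutorActivate(true);
    return config.buildProcessEngine();
  }
}
```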

I get your point … but still, deployment-aware execution already works when I run two Spring Boot Camunda apps against the same database.
I get multi-tenancy, but I still do not have a good use for multiple engines per Spring Boot runtime …
I used an extra engine to separate business and operations logic on the engine level, but that was on JBoss … today, I would just use tenants or multiple nodes.


I am all for not having multiple engines. Was just pointing out a valid / interesting use case for having multiple engines in a single app. :wink:


@StephenOTT here is a related topic I created: Multi-tenancy configuration / ProcessEngine & Datasource per tenant

Our current setup is like @jangalinski mentioned above: multi-tenancy was handled with a discriminator column, and the default process engine was used to manage all tenant requests.
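
Roughly what that shared-engine setup looks like for us (simplified sketch; the process key, resource path and helper class are placeholders, not our actual code):

```java
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.runtime.ProcessInstance;

public class SharedEngineTenancy {

  private final ProcessEngine processEngine; // the single default engine, shared schema

  public SharedEngineTenancy(ProcessEngine processEngine) {
    this.processEngine = processEngine;
  }

  /** Deploy the same model once per tenant; the tenant id acts as the discriminator. */
  public void deployForTenant(String tenantId) {
    processEngine.getRepositoryService()
        .createDeployment()
        .tenantId(tenantId)
        .addClasspathResource("processes/order.bpmn")
        .deploy();
  }

  /** Start an instance of the tenant's own version of the process definition. */
  public ProcessInstance startForTenant(String tenantId) {
    return processEngine.getRuntimeService()
        .createProcessInstanceByKey("order")
        .processDefinitionTenantId(tenantId)
        .execute();
  }
}
```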

We won’t use any XML files to configure process engines.

  • Our current deployment is based on standalone process engines (REST API communication), running two Spring Boot Camunda apps (Amazon EC2 instances) against the same database. It works fine.

  • Now the use case is that we have many tenants, let’s say 50 tenants (tenant per DB / tenant per process engine). In this scenario we can’t run a single process engine per Spring Boot runtime: it would result in 50 EC2 instances, which is more expensive.

  • So we decided to create multiple process engines, one per tenant, in one Spring Boot runtime (rough sketch after this list). Using this approach we can scale the Spring Boot app, which will result in a smaller number of EC2 instances.

  • If we have one process engine per tenant in each Spring Boot app, from a scaling perspective we will end up with a lot of resource usage.
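
Roughly the direction we are thinking of (simplified sketch; per-tenant DataSource creation and registration with the runtime container are omitted, and the class name is just illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.sql.DataSource;

import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.impl.cfg.StandaloneProcessEngineConfiguration;

/** Keeps one engine per tenant, each with its own DataSource, inside one Spring Boot app. */
public class TenantEngineRegistry {

  private final Map<String, ProcessEngine> enginesByTenant = new ConcurrentHashMap<>();

  public void addTenant(String tenantId, DataSource tenantDataSource) {
    StandaloneProcessEngineConfiguration config = new StandaloneProcessEngineConfiguration();
    config.setProcessEngineName(tenantId);
    config.setDataSource(tenantDataSource);     // one database per tenant
    config.setDatabaseSchemaUpdate("true");
    config.setJobExecutorActivate(true);
    enginesByTenant.put(tenantId, config.buildProcessEngine());
  }

  /** Resolve the engine for the tenant of the current request. */
  public ProcessEngine engineFor(String tenantId) {
    ProcessEngine engine = enginesByTenant.get(tenantId);
    if (engine == null) {
      throw new IllegalArgumentException("Unknown tenant: " + tenantId);
    }
    return engine;
  }

  /** Remove a tenant at runtime and shut its engine down. */
  public void removeTenant(String tenantId) {
    ProcessEngine engine = enginesByTenant.remove(tenantId);
    if (engine != null) {
      engine.close();
    }
  }
}
```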

Our use case is quite simple:
Hard client requirement: it’s a financial company that does not have many clients, so not many tenants, but the data has to be separated into separate databases and locations.

@StephenOTT @jangalinski what would be the best practice?

In one app ==> one process engine + one db schema per tenant

(or)

In one app ==> multiple process engines + multiple schemas (one per tenant)

Also consider that we have 50 tenants and need to scale the application for each tenant.

IMO I don’t think there is “a best practice”. It will come down to what data control plane you require.

We often go further and will have a tenant per DB cluster, and then front all of the tenants through a “Services” layer, where a Service is a mapping to a BPMN. Many services can reuse the same BPMN.

So from a scale perspective, each client/tenant can independently scale their DB cluster as needed, and if they have internal tenants, they can scale those into other clusters if they want, or they can use internal tenants of the Camunda DB (column-level tenants).

The Services layer lets you abstract the Camunda engine into a “tool” that can be reused as needed. The BPM DB is treated as a place for work, not for long-term data storage / business data storage. So you can spin up new engines/BPM DBs as needed for whatever scale you require (or just for performance tuning in your environment).
