Deploying Camunda 8 with Helm: Zeebe StatefulSet without Persistent Volume?

Hello Camunda Community,

I’m currently planning to deploy a self-managed Camunda 8 platform using Helm. However, there’s a challenge in our infrastructure: we don’t deploy StatefulSets that require persistent volumes.

While I have external instances of Elasticsearch and PostgreSQL, the default Zeebe StatefulSet requires a persistent volume. The Helm options offer a choice of persistenceType: disk, local, or memory. My understanding is that if we opt for ‘local’ or ‘memory’, data might be lost upon pod restart.
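For reference, this is roughly the setting I mean in the Helm values (a sketch based on my reading of the chart; I’m assuming the key is `zeebe.persistenceType` and that the semantics in the comments are right, so please correct me if not):

```yaml
# values.yaml (sketch) - persistence option for the Zeebe brokers
zeebe:
  # Assumed semantics, to be verified against the chart documentation:
  #   disk   - PersistentVolumeClaim per broker (the default)
  #   local  - node-local storage; data is lost if the pod is rescheduled to another node
  #   memory - in-memory storage; data is lost on every pod restart
  persistenceType: memory
```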

Given our constraints, I’m trying to find out:

  1. What exactly does the Zeebe workflow engine store in its persistent volume?
  2. What are the potential implications or risks of running a Zeebe StatefulSet in a production environment without a persistent volume (with persistenceType: local or memory)?

I hope to get insights from someone who has tackled a similar challenge or has expertise in this area. Thank you in advance for your help!

Hi @abasha, welcome to the forums! Zeebe uses a local RocksDB instance to manage the current state of everything across the brokers. Have a look at this forum thread and this blog post to learn a bit more about how the Zeebe internals work and how/why it persists data.

Thank you for the insights. I understand from the resources you provided that Zeebe’s state in RocksDB contains crucial information about deployed process models and current process instance executions.
If we were to use persistenceType “memory” and encounter a scenario where this state data is lost due to pod restarts, can you elaborate on the specifics of what we stand to lose? Will active workflows be disrupted? And how will this loss impact the system’s ability to continue its operations?
Your clarification on this would be very valuable as we navigate our deployment choices.

Additionally, I have another query regarding the configuration of RocksDB. Is there a possibility to set up RocksDB externally, for example on a separate Linux node, and then configure the Zeebe brokers to use this external RocksDB instance as their database? I’m exploring alternative setups to align with our infrastructure needs. Any insights would be helpful.

@abasha - you would lose everything that isn’t historical data. I’m not sure there’s a way around needing a persistent volume to run Camunda. Another way to think of it would be to imagine that Zeebe is a database; you would want that database persisted in production.

There isn’t a way to connect to an external database. There are some issues with performance and scaling that aren’t easily solvable with an external database. The first couple minutes of this CamundaCon presentation speak a little bit to that.

@nathan.loding first of all, I want to say thank you!
I’m currently looking into deploying the Camunda 8 platform on a Linux node using Docker containers. While going through the documentation, I noted that it specifies the Docker images are optimized for production use only on Linux systems.
However, I’m a bit uncertain about a few things:

  1. The documentation touches on Docker images, but what if one were to deploy these images via Helm in a Kubernetes environment, or simply as standalone Docker containers on a Linux node? Would this still align with the recommended best practices (for production)? Docker | Camunda Platform 8 Docs
  2. For those familiar with the setup, what are the best practices for deploying on a Linux node, especially when integrating components like Zeebe brokers? I intend to use Elasticsearch and PostgreSQL externally. Would it be advantageous to distribute Zeebe components across distinct Linux nodes for performance and reliability?

Any insights or experiences related to the above would greatly assist in shaping my infrastructure strategy. Thanks in advance!

The Docker images are production ready - our Kubernetes install uses the very same images. However, there’s obviously a bit more configuration required on your end to wire all the containers together. We prefer using Helm/k8s to Docker, but it is supported.
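To give a rough idea of what “wiring together” involves, here is a sketch of the kind of environment variables that point Operate at a Zeebe gateway and an Elasticsearch instance. Treat it as an illustration only - the variable names follow the conventions from the Docker guide linked above, but verify them against the docs for your version, and a real setup needs the remaining components, secrets, and networking as well:

```yaml
# Illustration only - not a complete or production-ready configuration.
# Variable names assumed from the Camunda Docker guide; verify per version.
operate:
  image: camunda/operate:latest
  environment:
    # Where Operate reaches the Zeebe gateway
    CAMUNDA_OPERATE_ZEEBE_GATEWAYADDRESS: zeebe:26500
    # Elasticsearch used by Operate itself
    CAMUNDA_OPERATE_ELASTICSEARCH_URL: http://elasticsearch:9200
    # Elasticsearch that the Zeebe exporter writes records to
    CAMUNDA_OPERATE_ZEEBEELASTICSEARCH_URL: http://elasticsearch:9200
```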

I am not sure on the second question as that’s something I haven’t worked with myself yet. I will ask and see what information I can dig up!

Hey @abasha - I talked with some of our engineers, and our Helm charts default to 1 Zeebe broker per k8s pod, so distributing brokers across nodes would make sense for a production environment.
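As a rough illustration (not an official recommendation), spreading the brokers would typically be expressed in the Helm values along these lines - the clusterSize/partitionCount/replicationFactor keys are the chart’s sizing options, while the anti-affinity block is generic Kubernetes, so double-check the exact keys and labels for the chart version you use:

```yaml
# values.yaml (sketch) - one broker per pod, pods spread across nodes
zeebe:
  clusterSize: 3          # number of broker pods
  partitionCount: 3       # partitions distributed over the brokers
  replicationFactor: 3    # each partition kept on every broker
  # Generic podAntiAffinity so the scheduler avoids co-locating brokers;
  # the label selector below is illustrative - check the labels your chart sets.
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/component: zeebe-broker
          topologyKey: kubernetes.io/hostname
```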

Hope that helps!

Thank you, Nathan, for clarifying that. I wanted to double-check something. You mentioned that the Docker images are supported, but I’d like to understand more about the recommended environments. In the documentation, it clearly states:

“DANGER: While the Docker images themselves are supported for production usage, the Docker Compose files are designed to be used by developers to run an environment locally; they are not designed to be used in production. We recommend to use Kubernetes in production.” Docker | Camunda 8 Docs

Given this, is it indeed recommended to run these containers in Docker without the aid of k8s/helm for production purposes? Specifically, if we were to bypass Docker Compose and just deploy the Docker containers, would that be advisable in a production environment? I’d appreciate some more insights on this.

Hi @abasha,

The containers are always built from the same images and they can be used in a production environment.

It is not recommended to wire the containers into a cluster by yourself. With docker-compose, you don’t get the failover for the Zeebe engine that the default Helm setup in Kubernetes gives you. Zeebe is started as just a single instance in Docker, but as three brokers (or even more) in Kubernetes.

Hope this helps, Ingo

Hello Ingo,

I understand that when using docker-compose, the Zeebe engine does not have the failover capabilities it would have in a Kubernetes setup. Given that, to rephrase my understanding: if a node goes down, there will be a period of downtime. However, once the node is back online and the containers are restarted, the system should resume its operations. Most importantly, there wouldn’t be any data loss due to the downtime, and everything would pick up from where it left off. I’m aware that the service will be restarted, but it will retain its configuration, and everything will be as it was before. Is my understanding correct?

We’re contemplating deploying Camunda 8 without the use of Kubernetes, given our infrastructure constraints around stateful applications like StatefulSets. With this context, do you have any suggestions on alternative deployment strategies for Camunda 8?

Moreover, is there a practice to run three Zeebe containers without Helm/k8s, but on separate nodes? We’re aiming to achieve failover capabilities. Any advice or experiences shared around this would be of immense value to us.

Hi @abasha,

You can see a Zeebe cluster as a database for process instances.

Would you ever install a relational database without a file system?

It can be useful for some scenarios (e.g., unit tests), but the risk of losing data without a filesystem increases, in Zeebe just as in relational databases, the longer the process instances/data have to be stored.

Zeebe uses replication and partition distribution for failover: Partitions | Camunda 8 Docs. With this, another node can take over the work of a broken node. When the node is back up, it will become a follower for other nodes. You can read more details here: How to Achieve Geo-redundancy with Zeebe | Camunda
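As a rough sketch of what that looks like at the broker level (whether the brokers run in Kubernetes or as three plain containers on separate Linux nodes), every broker gets the same cluster sizing plus a list of contact points. The host names below are placeholders, and the property names should be checked against the Zeebe configuration docs for your version:

```yaml
# application.yaml per broker (sketch) - node-1/node-2/node-3 are placeholders
zeebe:
  broker:
    cluster:
      nodeId: 0                # unique per broker: 0, 1, 2
      clusterSize: 3           # three brokers in total
      partitionsCount: 3       # partitions distributed across the brokers
      replicationFactor: 3     # a copy of every partition on every broker,
                               # so a surviving broker can take over a failed one's work
      initialContactPoints:
        - node-1:26502
        - node-2:26502
        - node-3:26502
```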

Hope this helps, Ingo


Thank you for your insights.

So, to clarify, it’s feasible to deploy a production-ready Camunda 8 in Docker containers, without k8s/helm, across separate Linux nodes to ensure Zeebe failover, correct?

Also, are there insights or experiences you could share on managing Camunda 8 components like Operate, Tasklist, Optimize, Identity, and Connectors on Linux nodes in containers?

Hi @abasha,

take a look at our deployment recommendations here: Camunda 8 installation overview | Camunda Platform 8 Docs

All options contain links to further details.

Hope this helps, Ingo