Question - how is a partition assigned to a process instance?

We have a cluster with 3 broker nodes and 2 gateway nodes:

Cluster size: 3
Partitions count: 3
Replication factor: 3
Gateway version: 1.2.4
Brokers:
  Broker 0 - address1:26501
    Version: 1.2.4
    Partition 1 : Leader, Healthy
    Partition 2 : Leader, Healthy
    Partition 3 : Follower, Healthy
  Broker 1 - address2:26501
    Version: 1.2.4
    Partition 1 : Follower, Healthy
    Partition 2 : Follower, Healthy
    Partition 3 : Follower, Healthy
  Broker 2 - address3:26501
    Version: 1.2.4
    Partition 1 : Follower, Healthy
    Partition 2 : Follower, Healthy
    Partition 3 : Leader, Healthy

We started a load test and observed that Broker 0 reached 92% CPU utilisation while the other brokers were at around 30%. What could be the reason, and how can we distribute the load evenly?

Hi @ankit_joinwal

It depends on how you start your process instances. Can you share a bit about that?


We start process instances using the Zeebe Java client and send the requests to the gateway.

Hi @ankit_joinwal. I assume you mean that you use this to make the requests that create process instances.

Is that the only way you create process instances? For example, do you have processes with a timer start event or a message start event?

Hi, yes, we are starting instances using zeebe/ZeebeClient.java at develop · camunda-cloud/zeebe · GitHub.

Currently our process definitions do not have timer or message start events.
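
In simplified form, our startup code looks something like this (the gateway address, process id, and variables are placeholders):

```java
import io.camunda.zeebe.client.ZeebeClient;
import io.camunda.zeebe.client.api.response.ProcessInstanceEvent;

import java.util.Map;

public class StartInstance {
  public static void main(String[] args) {
    // Gateway address, process id, and variables below are placeholders
    try (ZeebeClient client = ZeebeClient.newClientBuilder()
        .gatewayAddress("gateway-address:26500")
        .usePlaintext()
        .build()) {

      // Create an instance of the latest deployed version of the process
      ProcessInstanceEvent event = client.newCreateInstanceCommand()
          .bpmnProcessId("my-process")
          .latestVersion()
          .variables(Map.of("orderId", "123"))
          .send()
          .join();

      System.out.println("Started instance " + event.getProcessInstanceKey());
    }
  }
}
```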

Another issue we observed is that after the load test, the memory consumption of the Zeebe brokers does not go back down to normal. When does a Zeebe broker free up memory, and what could be the reason it is not being freed? All our process instances were completed, but the memory was not released.

Hi @ankit_joinwal,

Each process instance in Zeebe exists on a partition. The partition it lives on can depend on many factors, but your statement about how process instances are started greatly simplifies how this works.

Currently our process definitions do not have timer or message start events.

So process instances are only started through the Create Process Instance RPC. This means the gateway applies a round-robin strategy to distribute the process instances over the different partitions. It looks up the broker that is currently the leader for a partition and sends the process instance there. For the next process instance, it does the same for the next partition, and so forth.
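
To illustrate the idea (this is a simplified sketch, not the actual gateway code):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Simplified sketch only; the real gateway logic is more involved
// (e.g. it retries on other partitions when one has no leader).
final class RoundRobinPartitionRouter {

  private final int partitionCount;
  private final AtomicInteger requests = new AtomicInteger();

  RoundRobinPartitionRouter(final int partitionCount) {
    this.partitionCount = partitionCount;
  }

  /** Picks the partition for the next CreateProcessInstance request. */
  int nextPartitionId() {
    // Zeebe partition ids start at 1; floorMod keeps it safe on overflow
    return Math.floorMod(requests.getAndIncrement(), partitionCount) + 1;
  }
}
```

With 3 partitions, the requests cycle 1 → 2 → 3 → 1 → …, so the load is spread evenly over the partitions, but the load per broker depends entirely on which broker leads each partition.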

Looking at your topology:

  • Broker 0 is the leader for partitions 1 and 2
  • Broker 1 is only a follower
  • Broker 2 is the leader for partition 3

It makes sense that most of the processing happens on broker 0 (note that followers also do some work nowadays, as they build up state from the log for faster failover).

Leader distribution is not yet optimized to be uniform. However, experimental settings have recently been added that enable a best-effort strategy for a more uniform distribution using priority election. You can find these experimental settings here: zeebe/dist/src/main/config/broker.standalone.yaml.template at 1.2.4 · camunda/zeebe · GitHub. You can read more about priority election in ZEP-006.
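
As a rough sketch, enabling it would look something like the following. I'm writing the key from memory, so please verify the exact name against the linked template for your version:

```yaml
# Broker configuration sketch. The key name is written from memory of the
# 1.2.x template linked above; please verify it against that file before use.
zeebe:
  broker:
    experimental:
      raft:
        enablePriorityElection: true
```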

Another way to achieve a similar result is to simply restart a broker that is the leader for too many partitions. This triggers a new election for those partitions, in which the followers can become leaders. It may take a few tries, but it can get you to a uniform distribution. Note that this means additional downtime, as failover may take some time.
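
In both cases, you can verify the resulting leader distribution with a topology request from the Java client; it returns the same information as the topology output you posted (the gateway address is a placeholder):

```java
import io.camunda.zeebe.client.ZeebeClient;
import io.camunda.zeebe.client.api.response.Topology;

public class PrintTopology {
  public static void main(String[] args) {
    // Gateway address is a placeholder
    try (ZeebeClient client = ZeebeClient.newClientBuilder()
        .gatewayAddress("gateway-address:26500")
        .usePlaintext()
        .build()) {

      // Print each broker with its partitions and roles (LEADER/FOLLOWER)
      Topology topology = client.newTopologyRequest().send().join();
      topology.getBrokers().forEach(broker -> {
        System.out.println("Broker " + broker.getNodeId() + " - " + broker.getAddress());
        broker.getPartitions().forEach(partition ->
            System.out.println("  Partition " + partition.getPartitionId()
                + " : " + partition.getRole()));
      });
    }
  }
}
```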

Regarding memory consumption, please open a new topic for that.

I hope this helps.

Nico
