Scale broker by adding cluster nodes and trigger rebalancing?

Howdy folks!

Quick question if my current understanding is correct:

  1. We cannot dynamically change the number of partitions
  2. But we can configure Zeebe to have “too many” partitions (let’s say 50 partitions for a 3-node cluster). This will cause some overhead for the current cluster, but it works (see the config sketch after this list).
  3. When we need to scale later, we can make sure more partitions get dedicated hardware. Therefore, we:
  4. Add Zeebe broker nodes, possibly even dynamically (let’s say we go from 3 to 10). This will not directly impact anything, as those new brokers don’t have anything to do yet.
  5. Trigger rebalancing manually, which should lead to those new brokers being utilized as well, as they would also become leaders of some of those partitions.
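For illustration, a minimal sketch of point 2 in the broker’s `application.yaml` (values made up; keys follow Zeebe’s broker configuration layout, so double-check against your version):

```yaml
zeebe:
  broker:
    cluster:
      clusterSize: 3        # number of brokers in the cluster
      partitionsCount: 50   # deliberately "too many" partitions
      replicationFactor: 3  # copies of each partition across brokers
```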

Is there a fundamental flaw in the line of thinking above?

In a busy system, is rebalancing time-consuming (and what does it affect in the meantime, e.g. throughput)? I would imagine that it takes some time to replicate data to those new nodes.

Thanks in advance!
Best
Bernd

Hey @berndruecker

I don’t think this is possible the way you have described it, or at least it is not supported yet; see https://github.com/camunda-cloud/zeebe/issues/4391

Someone please correct me if I’m wrong, maybe @deepthi

Greets
Chris

You are right, @Zelldon

The rebalancing operation redistributes leaders among the existing brokers that are already replicating a partition. It cannot move a partition to a new broker.
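For reference, a sketch of how that leader redistribution can be triggered in recent Zeebe versions via the gateway’s management endpoint (host and port are assumptions for a local setup):

```shell
# Redistributes partition leadership among the brokers that already
# replicate each partition; it does not move partition data.
curl -X POST http://localhost:9600/actuator/rebalance
```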

Regards
Deepthi

OK, thanks. So at the moment you cannot add cluster nodes (only increase the hardware of existing nodes) until we tackle https://github.com/camunda-cloud/zeebe/issues/4391. Got it - thanks for the latest status on it.

Yes, using Kubernetes you can increase the resources of the brokers. One partition can consume about 2.5 vCPUs. I’d recommend making the number of partitions a multiple of the number of cluster nodes.
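As a rough sketch of what that sizing could look like for a broker pod in Kubernetes (all numbers are illustrative assumptions, not recommendations):

```yaml
# Hypothetical per-broker container resources for a 3-broker cluster
# with 6 partitions: roughly 2 leader partitions per broker at
# ~2.5 vCPU each, so about 5 vCPUs per broker.
resources:
  requests:
    cpu: "5"
    memory: "8Gi"
  limits:
    cpu: "5"
    memory: "8Gi"
```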

Would one be able to move a partition manually, e.g. by moving its data from one broker’s disk to another broker’s disk, to match a change in partition layout? Or is the partition layout somehow encoded in the data?