Any way to configure the instance key / job key start value?

We plan to upgrade Zeebe from 0.26.0 to 1.24.0. Since we can't perform a rolling upgrade, we have to set up another Zeebe cluster and develop an adapter between our application and Zeebe.
The current key generation rule implies that two different Zeebe clusters may generate the same instance key / job key, as the key is an auto-incremented number in each partition.

Related code as follows:


public final class DbKeyGenerator implements KeyGeneratorControls {

  private static final long INITIAL_VALUE = 0;

  private final long keyStartValue;
  private final NextValueManager nextValueManager;

  /**
   * Initializes the key state with the corresponding partition id, so that unique keys are
   * generated over all partitions.
   *
   * @param partitionId the partition to determine the key start value
   */
  public DbKeyGenerator(
      final int partitionId, final ZeebeDb zeebeDb, final TransactionContext transactionContext) {
    keyStartValue = Protocol.encodePartitionId(partitionId, INITIAL_VALUE);
    nextValueManager =
        new NextValueManager(keyStartValue, zeebeDb, transactionContext, ZbColumnFamilies.KEY);
  }

  public long nextKey() {
    return nextValueManager.getNextValue(LATEST_KEY);
  }
}

public final class NextValueManager {

  public long getNextValue(final String key) {
    final long previousKey = getCurrentValue(key);
    final long nextKey = previousKey + 1;
    nextValue.wrapLong(nextKey);
    nextValueColumnFamily.put(nextValueKey, nextValue);

    return nextKey;
  }
}

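For context, the partition id is packed into the upper bits of each key. A minimal standalone sketch of that scheme, assuming the 51-bit key/partition split used by Zeebe's Protocol class (the method names mirror Protocol.encodePartitionId / Protocol.decodePartitionId, but this is an illustration, not the actual implementation):

```java
public final class KeyEncodingSketch {

  // Assumption: the lower 51 bits hold the per-partition counter,
  // and the partition id lives in the remaining upper bits.
  static final int KEY_BITS = 51;

  static long encodePartitionId(final int partitionId, final long counter) {
    return ((long) partitionId << KEY_BITS) | counter;
  }

  static int decodePartitionId(final long key) {
    return (int) (key >>> KEY_BITS);
  }

  public static void main(final String[] args) {
    final long key = encodePartitionId(3, 42);
    System.out.println(key);                    // 3 * 2^51 + 42
    System.out.println(decodePartitionId(key)); // 3
  }
}
```

Because the counter restarts at the same INITIAL_VALUE on every cluster, two clusters with the same partition ids produce exactly the same key sequence, which is the conflict described above.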
This may cause conflicts and ambiguity when our application tries to cancel instances or complete jobs, as the following figure shows:


So, is there any way to configure the new Zeebe cluster's DbKeyGenerator.INITIAL_VALUE?
That way we could give the new cluster a much larger key start value to avoid conflicts.


Hey @TangJiong thanks for raising this.

I think this is a good and valid point. Currently it is not possible to configure the initial value. Feel free to open a feature request here with the description above.

Currently it would be necessary to know which instance is part of which cluster. Since your old cluster has probably been running for a while, I would guess its keys have already moved forward a bit? So all lower keys would belong to the new cluster and higher ones to the old. Be aware that the partition id is part of the key, so you could decode the keys to compare them better; see here: zeebe/ at develop · camunda-cloud/zeebe · GitHub
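If you go that route, the adapter could record, per partition, the lowest counter among instances still active on the old cluster at cut-over time, and route incoming keys by comparing their counter part against that threshold. A hypothetical sketch (the 51-bit split is assumed from Zeebe's Protocol class; ClusterRouterSketch and thresholdByPartition are invented names):

```java
import java.util.Map;

public final class ClusterRouterSketch {

  // Assumed key/partition split, as in Zeebe's Protocol class.
  static final int KEY_BITS = 51;

  // Hypothetical snapshot taken at cut-over: per partition, the lowest counter
  // of any instance still active on the OLD cluster.
  final Map<Integer, Long> thresholdByPartition;

  ClusterRouterSketch(final Map<Integer, Long> thresholdByPartition) {
    this.thresholdByPartition = thresholdByPartition;
  }

  /** Returns true if the key should be routed to the old cluster. */
  boolean belongsToOldCluster(final long key) {
    final int partitionId = (int) (key >>> KEY_BITS);
    final long counter = key & ((1L << KEY_BITS) - 1);
    // Heuristic: the old cluster's counters have moved forward, while the new
    // cluster restarts near zero, so high counters are routed to the old
    // cluster. This breaks down once the new cluster's counters catch up,
    // which is why a configurable start value would be the cleaner fix.
    return counter >= thresholdByPartition.getOrDefault(partitionId, Long.MAX_VALUE);
  }
}
```

For example, with a threshold of 1000 on partition 1, key (1L << 51) | 5000 routes to the old cluster and (1L << 51) | 10 to the new one.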

Do you use any exporters? Then you might be able to build up a mapping in your adapter with the exported data.

I hope that helps a bit.


This question has been asked before - but it looks like no feature request was opened. See here: Unique keys per cluster?.

What about adding the cluster generation to the variable payload? That would let your adapter distinguish the target cluster.
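A minimal sketch of that idea, assuming the adapter stamps every instance's variables with a generation number on creation (the variable name zeebeClusterGeneration and the helper class are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public final class GenerationTagSketch {

  // Invented variable name; use whatever your adapter agrees on.
  static final String GENERATION_VAR = "zeebeClusterGeneration";

  /** Adds the cluster generation to the variables before creating an instance. */
  static Map<String, Object> tag(final Map<String, Object> variables, final int generation) {
    final Map<String, Object> tagged = new HashMap<>(variables);
    tagged.put(GENERATION_VAR, generation);
    return tagged;
  }

  /** Reads the generation back, e.g. from an exported or fetched payload. */
  static int generationOf(final Map<String, Object> variables) {
    // Instances created before tagging started are assumed to be generation 1.
    return (int) variables.getOrDefault(GENERATION_VAR, 1);
  }
}
```

With the Zeebe Java client, the tagged map would then be passed along when creating the instance, e.g. via newCreateInstanceCommand()...variables(tagged), and the adapter reads it back to pick the target cluster.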