Elasticsearch shard reaches the maximum shard limit (1000 shards)

We have a self-managed Camunda instance (version 8.5.0) deployed with Docker,

and we are facing an issue with Elasticsearch: the cluster has reached its maximum shard limit (1000 shards), and this is preventing Zeebe from creating new process instances.

Here are the logs for the Elasticsearch and Zeebe components:
Elasticsearch & Zeebe logs: https://drive.google.com/file/d/11_vPtF49z9EHEGqkWeUmfZwF4i3AtLNS/view?usp=drivesdk


Hi @Mohamed_Ahmed_Mohame - there are a few different ways to address this, so it depends a lot on your use case, what data you need, and for how long you need it.

  • You can increase the shard limit in Elasticsearch (as far as I know there is no hard upper bound, as long as your nodes have the resources to support the extra shards)
  • Ensure you are taking backups as needed, so no data you still need is lost
  • Configure data retention policies

I would also recommend reviewing the documentation on sizing your environment: Sizing your environment | Camunda 8 Docs


Hi @Mohamed_Ahmed_Mohame ,

That error means your Elasticsearch cluster has hit the maximum number of shards allowed, which by default is 1000 shards per node. Here’s how you can solve or mitigate this:

Increase the Maximum Shard Limit (Temporary Fix)

You can increase the cluster.max_shards_per_node setting, but this is not recommended for the long term unless you’re sure your hardware can handle it.

PUT /_cluster/settings
{
   "persistent": {
       "cluster.max_shards_per_node": 2000 // or any safe number
   }
}

Use with caution — too many shards can degrade performance significantly.
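
To confirm the change and see how close the cluster actually is to the limit, you can compare the configured limit with the number of shards currently in use (both are standard cluster APIs):

GET _cluster/settings?include_defaults=true&flat_settings=true
GET _cluster/health

The first call shows cluster.max_shards_per_node (under "defaults" if you never changed it), and the health response contains active_shards and active_primary_shards.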

Reduce the Number of Shards

This is the preferred solution.

a. Lower number of shards per index

Indices created on older Elasticsearch versions defaulted to 5 primary shards (since 7.0 the default is 1). For small datasets, more than one shard is usually overkill. You can set fewer shards when creating indices:

PUT /your-index-name
{
     "settings": {
         "number_of_shards": 1,
         "number_of_replicas": 1
      }
}
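
If you also create indices of your own (application logs, custom data, etc.), an index template can apply the same low shard count automatically. This is just a sketch with a made-up template name and index pattern; Camunda's own indices come from templates managed by its components, so avoid overriding those:

PUT _index_template/low-shard-template
{
    "index_patterns": ["your-custom-index-*"],
    "template": {
        "settings": {
            "number_of_shards": 1,
            "number_of_replicas": 1
        }
    }
}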

b. Use index lifecycle management (ILM) and rollover

If you create many time-based indices (e.g., daily logs), use ILM with rollover to avoid creating too many shards.
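
As a rough sketch (policy name and thresholds are only examples), a rollover policy that also deletes old data could look like the one below; for the indices written by the Camunda components, prefer the retention options those components provide rather than attaching a hand-made policy:

PUT _ilm/policy/rollover-and-delete-example
{
    "policy": {
        "phases": {
            "hot": {
                "actions": {
                    "rollover": {
                        "max_age": "7d",
                        "max_size": "50gb"
                    }
                }
            },
            "delete": {
                "min_age": "30d",
                "actions": {
                    "delete": {}
                }
            }
        }
    }
}

Rollover only works when you write through an alias or a data stream, so that new backing indices can be created automatically.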

c. Delete or shrink old indices

Delete old data you don’t need or use the _shrink API to reduce shard count:

POST /your-old-index/_shrink/your-shrunk-index
{
   "settings": {
      "number_of_shards": 1
    }
}

Index must be read-only and all shards must be on the same node before shrinking.
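
Both preconditions can be set with one settings update before calling _shrink; the node name below is just a placeholder for one of your data nodes:

PUT /your-old-index/_settings
{
    "index.blocks.write": true,
    "index.routing.allocation.require._name": "one-of-your-data-nodes"
}

Wait for the relocation to finish (cluster health back to green) before running the shrink.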

Monitor Your Shard Usage

Run this to check your current shard usage:

GET _cat/indices?v&h=index,docs.count,store.size,pri,rep

And per-node shard count (the _cat/allocation API reports how many shards each node holds):

GET _cat/allocation?v
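
To see which indices those shards belong to (handy for spotting lots of small, old indices), you can also list the individual shards:

GET _cat/shards?v&s=index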

Scale Horizontally

If you genuinely need that many indices (and therefore shards), you should:
• Add more nodes to your cluster
• Distribute shards more evenly across the nodes (see the example below)
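
For the second point, Elasticsearch rebalances shards automatically when nodes join, but you can additionally cap how many shards of a single index may sit on one node. A minimal sketch with a placeholder index name; setting the value too low can leave shards unassigned if there aren't enough nodes:

PUT /your-index-name/_settings
{
    "index.routing.allocation.total_shards_per_node": 2
}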
