Hi @Mohamed_Ahmed_Mohame ,
That error means your Elasticsearch cluster has hit its total shard limit, which defaults to 1,000 open shards per data node (the cluster.max_shards_per_node setting, multiplied across your non-frozen data nodes). Here’s how you can solve or mitigate this:
Increase the Maximum Shard Limit (Temporary Fix)
You can raise the cluster.max_shards_per_node setting to any value your hardware can safely handle, but this is a stopgap: it does not fix the underlying oversharding, so it is not recommended as a long-term solution.
PUT /_cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": 2000
  }
}
Use with caution — too many shards can degrade performance significantly.
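Before (or after) raising the limit, it helps to know how close you actually are to it. You can compare the cluster’s active shard count against the configured ceiling with the standard cluster APIs (filter_path below just trims the responses; the limit applies per non-frozen data node, so your headroom is roughly max_shards_per_node times the number of data nodes):

```
GET _cluster/health?filter_path=status,active_shards

GET _cluster/settings?include_defaults=true&filter_path=*.cluster.max_shards_per_node
```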
Reduce the Number of Shards
This is the preferred solution.
a. Lower number of shards per index
In Elasticsearch 6.x and earlier, new indices defaulted to 5 primary shards; since 7.0 the default is 1, though index templates can override it. For small datasets, a single primary shard is usually enough:
PUT /your-index-name
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}
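If you want this to apply to every future index matching a pattern instead of repeating it per request, a composable index template does the job. This is a sketch; the template name, pattern, and priority below are placeholders you should adapt (a high priority may be needed to win over built-in templates):

```
PUT _index_template/small-indices
{
  "index_patterns": ["myapp-*"],
  "priority": 200,
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 1
    }
  }
}
```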
b. Use index lifecycle management (ILM) and rollover
If you create many time-based indices (e.g., daily logs), use ILM with rollover to avoid creating too many shards.
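As a sketch, a minimal ILM policy that rolls over when the write index grows past a size or age threshold, and deletes old indices later, could look like this (the policy name, sizes, and ages are example values, not recommendations; max_primary_shard_size requires Elasticsearch 7.13+):

```
PUT _ilm/policy/logs-rollover
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "50gb",
            "max_age": "30d"
          }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```

With rollover, you keep writing to a single alias while Elasticsearch creates a new backing index only when the thresholds are hit, instead of one index per day regardless of size.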
c. Delete or shrink old indices
Delete old data you don’t need or use the _shrink API to reduce shard count:
POST /your-old-index/_shrink/your-shrunk-index
{
  "settings": {
    "number_of_shards": 1
  }
}
Before shrinking, the index must be made read-only and a copy of every shard must be allocated to the same node; the target shard count must also be a factor of the source index’s shard count.
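You can satisfy both prerequisites with a single settings update before calling _shrink. This follows the documented shrink preparation steps; the node name below is a placeholder for any data node in your cluster:

```
PUT /your-old-index/_settings
{
  "settings": {
    "index.blocks.write": true,
    "index.routing.allocation.require._name": "node-1"
  }
}
```

Wait for the cluster health to go green (all shard copies relocated) before issuing the shrink request, and remove the allocation requirement from the target index afterwards.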
Monitor Your Shard Usage
Run this to check your current shard usage:
GET _cat/indices?v&h=index,docs.count,store.size,pri,rep
And shard count per node:
GET _cat/allocation?v&h=node,shards,disk.percent
Scale Horizontally
If you genuinely need a large number of indices (and therefore shards), you should:
• Add more nodes to your cluster
• Distribute shards more evenly