Camunda Operate Self Managed

I am using Camunda 8 Self-Managed and getting the following error from the APIs on my self-managed Operate dashboard. Could anyone guide me as to what the issue could be?
{
  "timestamp": "2025-04-28T06:36:17.788+00:00",
  "status": 500,
  "error": "Internal Server Error",
  "message": "Elasticsearch exception [type=search_phase_execution_exception, reason=all shards failed]",
  "path": "/api/incidents/byProcess"
}

Hi @Awais_Sabir. Based on the error message: Elasticsearch exception [type=search_phase_execution_exception, reason=all shards failed]

Operate tried to query Elasticsearch (to fetch incidents via /api/incidents/byProcess), but the query failed on all shards.

Possible causes:

  • Elasticsearch is down or not reachable from Operate.
  • Index corruption or missing index that Operate expects.
  • Elasticsearch cluster health is red (some shards are unassigned or failed).
  • Mapping or schema mismatch in the Operate-related Elasticsearch indices.
  • Authentication issues (if you have security enabled between Operate and Elasticsearch).
  • Resource problems — e.g., Elasticsearch is out of disk, memory, or CPU.

Check Elasticsearch cluster health:

curl -X GET "http://<your-elasticsearch-host>:9200/_cluster/health?pretty"

If it shows status: red, at least one primary shard is unassigned or failed, and queries against the affected indices will fail.
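If the cluster is red (or yellow), these follow-up queries (using the same <your-elasticsearch-host> placeholder) usually narrow down which indices are affected and why their shards are unassigned:

curl -X GET "http://<your-elasticsearch-host>:9200/_cat/indices?v&health=red"

curl -X GET "http://<your-elasticsearch-host>:9200/_cluster/allocation/explain?pretty"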

  • Check Elasticsearch logs (very important):

Look for errors such as shard failures, disk watermark / disk full warnings, circuit breaker exceptions, or missing indices.
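If you deployed with the Camunda Helm chart on Kubernetes, something along these lines should surface the Elasticsearch logs (the namespace and pod name below are assumptions; adjust them to your deployment):

kubectl get pods -n camunda | grep elasticsearch

kubectl logs <elasticsearch-pod-name> -n camunda --tail=200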

  • Check the indices used by Operate:
    Operate stores data in Elasticsearch indices like:
  operate-*
  zeebe-*

You can list indices:

curl -X GET "http://<your-elasticsearch-host>:9200/_cat/indices?v"
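To focus on Operate's data specifically, the same _cat APIs can be filtered by index pattern (again with the placeholder host), which also shows whether any of those shards are unassigned:

curl -X GET "http://<your-elasticsearch-host>:9200/_cat/indices/operate-*?v"

curl -X GET "http://<your-elasticsearch-host>:9200/_cat/shards/operate-*?v"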

  • If you recently upgraded Camunda or reinstalled, Operate might expect indices that don’t exist.
  • The same can happen if Elasticsearch data was wiped or reset incorrectly.
  • Check disk usage on Elasticsearch nodes.
  • Check memory/heap usage (see the commands after this list).
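For the disk and heap checks, the _cat APIs give a quick per-node overview (same placeholder host). Elasticsearch starts blocking writes once a node crosses its disk watermarks, so a high disk.used_percent is worth ruling out:

curl -X GET "http://<your-elasticsearch-host>:9200/_cat/allocation?v"

curl -X GET "http://<your-elasticsearch-host>:9200/_cat/nodes?v&h=name,heap.percent,ram.percent,disk.used_percent"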