For a few days now we have been seeing warnings in the logs related to Elasticsearch. We recently migrated from Camunda 8.3.4 to 8.5.7, which may be related.
We run Camunda self-hosted in a Kubernetes cluster. In the Elasticsearch pod we see warnings similar to these:
[WARN ][r.suppressed ] [camunda-elasticsearch-master-1] path: /operate-batch-operation-1.0.0_/search, params: {typed_keys=true, max_concurrent_shard_requests=5, index=operate-batch-operation-1.0.0, request_cache=false, search_type=query_then_fetch, batched_reduce_size=512}, status: 503 Failed to execute phase [query], all shards failed; shardFailures {[na][operate-batch-operation-1.0.0_][0]: org.elasticsearch.action.NoShardAvailableActionException
and
[WARN ][r.suppressed ] [camunda-elasticsearch-master-1] path: /operate-list-view-8.3.0_/search, params: {typed_keys=true, max_concurrent_shard_requests=5, index=operate-list-view-8.3.0, request_cache=false, search_type=query_then_fetch, batched_reduce_size=512}, status: 503 Failed to execute phase [query], all shards failed; shardFailures {[na][operate-list-view-8.3.0_][0]: org.elasticsearch.action.NoShardAvailableActionException
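For completeness, this is a minimal sketch of how we would check the shard allocation for one of the affected indices, assuming the Elasticsearch HTTP API is reachable on localhost:9200 (e.g. via kubectl port-forward); the host and the index name taken from the warning above are just illustrative:

```python
# Sketch: inspect why shards of the old Operate index are unassigned.
# Assumes Elasticsearch is reachable at http://localhost:9200, e.g. after
# `kubectl port-forward svc/camunda-elasticsearch 9200:9200`.
import json
import requests

ES = "http://localhost:9200"

# List the Operate indices with their health; "red" means unassigned primary shards.
print(requests.get(f"{ES}/_cat/indices/operate-*?v&h=health,status,index,pri,rep").text)

# Ask Elasticsearch directly why shard 0 of the affected index is not allocated.
explain = requests.get(
    f"{ES}/_cluster/allocation/explain",
    json={"index": "operate-batch-operation-1.0.0_", "shard": 0, "primary": True},
)
print(json.dumps(explain.json(), indent=2))
```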
The status of Zeebe, Operate, and Elasticsearch is green, and we have not encountered any problems at runtime so far. The setup on our test system looks like this:
Could you please give us a hint as to what could cause these warnings and how we can resolve them?