C8run 8.8 won't start because Elasticsearch cluster health status changed from [RED] to [YELLOW]

I am using c8run 8.8 on Windows. c8run was working fine, but after stopping it with c8run.exe stop and restarting my machine, Elasticsearch won't start when I run c8run.exe start, and I get the error below:

[2025-10-15T20:21:47,619][INFO ][o.e.c.r.a.AllocationService] [LPNCGOLY909909] current.health="YELLOW" message="Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[camunda-role-8.8.0_][0], [tasklist-task-8.8.0_][0]]])." previous.health="RED" reason="shards started [[camunda-role-8.8.0_][0], [tasklist-task-8.8.0_][0]]"

So I tried the cluster allocation explain API as follows:

GET /_cluster/allocation/explain

response was:

{
  "note": "No shard was specified in the explain API request, so this response explains a randomly chosen unassigned shard. There may be other unassigned shards in this cluster which cannot be assigned for different reasons. It may not be possible to assign this shard until one of the other shards is assigned correctly. To explain the allocation of other shards (whether assigned or unassigned) you must specify the target shard in the request to this API. See https://www.elastic.co/guide/en/elasticsearch/reference/8.17/cluster-allocation-explain.html for more information.",
  "index": "operate-decision-instance-8.3.0_",
  "shard": 0,
  "primary": false,
  "current_state": "unassigned",
  "unassigned_info": {
    "reason": "CLUSTER_RECOVERED",
    "at": "2025-10-15T17:21:41.254Z",
    "last_allocation_status": "no_attempt"
  },
  "can_allocate": "no",
  "allocate_explanation": "Elasticsearch isn't allowed to allocate this shard to any of the nodes in the cluster. Choose a node to which you expect this shard to be allocated, find this node in the node-by-node explanation, and address the reasons which prevent Elasticsearch from allocating this shard there.",
  "node_allocation_decisions": [
    {
      "node_id": "M-APbVXXSX-lFk11BNfk_w",
      "node_name": "LPNCGOLY909909",
      "transport_address": "127.0.0.1:9300",
      "node_attributes": {
        "transform.config_version": "10.0.0",
        "xpack.installed": "true",
        "ml.config_version": "12.0.0"
      },
      "roles": [
        "data",
        "data_cold",
        "data_content",
        "data_frozen",
        "data_hot",
        "data_warm",
        "ingest",
        "master",
        "ml",
        "remote_cluster_client",
        "transform"
      ],
      "node_decision": "no",
      "deciders": [
        {
          "decider": "same_shard",
          "decision": "NO",
          "explanation": "a copy of this shard is already allocated to this node [[operate-decision-instance-8.3.0_][0], node[M-APbVXXSX-lFk11BNfk_w], [P], s[STARTED], a[id=vlbhbIvSSJaFdrla5Kz9jA], failed_attempts[0]]"
        }
      ]
    }
  ]
}
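As the note in that response points out, the explain API can also be pointed at a specific shard. A minimal sketch of such a request, reusing the index from the response above (any of the other unassigned replicas could be substituted):

GET /_cluster/allocation/explain
{
  "index": "operate-decision-instance-8.3.0_",
  "shard": 0,
  "primary": false
}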

The Elasticsearch cluster health is as follows:

{
  "cluster_name": "elasticsearch",
  "status": "yellow",
  "timed_out": false,
  "number_of_nodes": 1,
  "number_of_data_nodes": 1,
  "active_primary_shards": 46,
  "active_shards": 46,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 34,
  "unassigned_primary_shards": 0,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 57.49999999999999
}
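To list all of the unassigned shards and their reasons at once (rather than the single random one the explain API picks), the cat shards API can be used, for example:

GET /_cat/shards?v&h=index,shard,prirep,state,unassigned.reason&s=state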

The only way to fix this issue and make Camunda run is by changing the number of replicas as follows:

curl --location --request PUT 'http://localhost:9200/_all/_settings' \
--header 'Content-Type: application/json' \
--data '{
  "index": {
    "number_of_replicas": 0
  }
}'
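After that call the replicas are dropped and the cluster health can be re-checked; it should report green once all primaries are active, for example:

GET /_cluster/health?wait_for_status=green&timeout=30s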
Please advise why I am facing this issue, and whether there is any configuration I can add to fix it permanently.

I could reproduce the issue. With multiple Elasticsearch nodes, having more than 0 replicas makes sense, but for a single node it adds no value. As the Elasticsearch documentation explains, a replica can never be allocated to the same node as its primary (the same_shard decider in your explain output), which is why the status stays YELLOW.

The solution is the same as what you already applied, and it is permanent for me. I restarted a couple of times after modifying the replica value to 0:

  1. Start the server

  2. Change the replica value from 1 to 0.

  3. Stop and Start the server after that.

After setting number_of_replicas to 0, Camunda starts, but I can see the following entries in the Elasticsearch logs, and after stopping c8run and trying to start it again, the same issue happens again:

[2025-10-16T08:43:39,917][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [operate-decision-8.3.0_]
[2025-10-16T08:43:40,284][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [camunda-web-session-8.8.0_]
[2025-10-16T08:43:40,294][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [camunda-tenant-8.8.0_]
[2025-10-16T08:43:40,304][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [camunda-authorization-8.8.0_]
[2025-10-16T08:43:40,315][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [tasklist-form-8.4.0_]
[2025-10-16T08:43:40,327][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [operate-metric-8.3.0_]
[2025-10-16T08:43:40,340][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [camunda-mapping-rule-8.8.0_]
[2025-10-16T08:43:40,350][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [tasklist-import-position-8.2.0_]
[2025-10-16T08:43:40,359][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [operate-decision-requirements-8.3.0_]
[2025-10-16T08:43:40,369][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [camunda-group-8.8.0_]
[2025-10-16T08:43:40,628][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [operate-process-8.3.0_]
[2025-10-16T08:43:40,637][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [camunda-role-8.8.0_]
[2025-10-16T08:43:40,649][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [camunda-user-8.8.0_]
[2025-10-16T08:43:40,900][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [operate-import-position-8.3.0_]
[2025-10-16T08:43:40,906][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [tasklist-metric-8.3.0_]
[2025-10-16T08:43:40,911][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [operate-decision-instance-8.3.0_]
[2025-10-16T08:43:40,919][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [camunda-usage-metric-8.8.0_, camunda-usage-metric-8.8.0_2025-10-15]
[2025-10-16T08:43:40,930][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [camunda-correlated-message-subscription-8.8.0_]
[2025-10-16T08:43:40,938][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [camunda-usage-metric-tu-8.8.0_]
[2025-10-16T08:43:40,945][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [operate-event-8.3.0_]
[2025-10-16T08:43:41,171][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [operate-variable-8.3.0_]
[2025-10-16T08:43:41,180][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [operate-sequence-flow-8.3.0_]
[2025-10-16T08:43:41,187][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [tasklist-task-8.8.0_]
[2025-10-16T08:43:41,196][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [tasklist-draft-task-variable-8.3.0_]
[2025-10-16T08:43:41,404][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [operate-batch-operation-1.0.0_]
[2025-10-16T08:43:41,413][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [operate-operation-8.4.1_]
[2025-10-16T08:43:41,420][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [operate-message-8.5.0_]
[2025-10-16T08:43:41,425][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [operate-flownode-instance-8.3.1_]
[2025-10-16T08:43:41,432][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [tasklist-task-variable-8.3.0_]
[2025-10-16T08:43:41,438][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [operate-post-importer-queue-8.3.0_]
[2025-10-16T08:43:41,629][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [operate-job-8.6.0_]
[2025-10-16T08:43:41,861][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [operate-incident-8.3.1_]
[2025-10-16T08:43:41,870][INFO ][o.e.c.m.MetadataUpdateSettingsService] [LPNCGOLY909909] updating number_of_replicas to [1] for indices [operate-list-view-8.3.0_]

@devmsaleh - another thing to remember is that C8 Run is not meant to be a permanent fixture. Sometimes it might be best to delete the local data and start fresh. Because it runs directly in your user space and not in a container, it is subject to a bit more volatility. In other words, one of our first troubleshooting steps will always be: delete the data, start C8 Run again, and check whether you still have issues.

If you want a more stable environment, I would recommend using our Docker environment.

Deleting the data and starting C8 Run again works perfectly fine, but for now the best solution for me is to update number_of_replicas to 0 right after Elasticsearch starts, and that works very well. However, when c8run starts it sets number_of_replicas back to 1. Why does it do that, and how can I disable it?

@nathan.loding you have to consider that there are plenty of customers in the Middle East who want to adopt Camunda but have challenges with Docker adoption. It would be better for them to work with c8run if there were a production version of it.

From the documentation I found the following about **replica count changes** (`number-of-replicas` and per-index overrides):

  • For newer versions (8.8+), changes are applied to existing indices on the next application restart; their settings are updated in place.

  • They are also written to the index templates so that newly created indices inherit the updated replica configuration.
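Based on that, a permanent fix would presumably be to set the replica count to 0 in the application configuration itself, so c8run stops resetting it to 1 on every restart. A rough sketch of what I understand that could look like on Windows, using environment-variable overrides before starting; the exact property/variable names below are my assumption from the Operate/Tasklist/exporter index settings and would need to be verified against the 8.8 configuration docs:

:: hypothetical overrides - verify the exact keys against the Camunda 8.8 configuration docs
set CAMUNDA_OPERATE_ELASTICSEARCH_NUMBEROFREPLICAS=0
set CAMUNDA_TASKLIST_ELASTICSEARCH_NUMBEROFREPLICAS=0
set ZEEBE_BROKER_EXPORTERS_CAMUNDAEXPORTER_ARGS_INDEX_NUMBEROFREPLICAS=0
c8run.exe start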

@devmsaleh - there are no current plans to make C8 Run a production-ready product, for a variety of reasons. (Of course, that could change, and if it does then how C8 Run works will be rearchitected to support production deployments.) We offer it because we know not everyone can run Docker, and C8 Run is a fantastic way to get started with local development. It is meant to be a semi-transient local development sandbox, and it works wonderfully for that.


@nathan.loding how can I force c8run/Elasticsearch to use number-of-replicas=0?

Because when I update the number to 0 via the Elasticsearch APIs, after c8run starts it sets it back to 1, and then I face the same original issue again.

So the situation right now is that each time I start c8run, I have to manually update number-of-replicas using the Elasticsearch APIs.
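That manual step could at least be scripted, for example something like the following on Windows (a sketch only; the fixed delay is a crude substitute for actually waiting until Elasticsearch and the c8run components are up):

c8run.exe start
:: wait a while so c8run has finished applying its own index settings
timeout /t 60
curl -s -X PUT "http://localhost:9200/_all/_settings" -H "Content-Type: application/json" -d "{\"index\":{\"number_of_replicas\":0}}"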

quick update on this from the sidelines: there’s a bug currently being worked on that will resolve your issue: C8Run doesn't restart after stopping · Issue #39665 · camunda/camunda · GitHub

hth, v.


Thank you @vobu47 for the update.

Thanks for the update