Operate fails to connect to Elasticsearch on http://camunda-elasticsearch:9200

I am trying to install Camunda 8 components on my local machine using Helm. I have documented the setup steps in detail at https://medium.com/@pramod.rajane/camunda-8-self-managed-on-localhost-using-helm-2d4371f62571

In short, I am installing the Operate, Zeebe, Zeebe Gateway, and Elasticsearch components; all of them are up except Operate.

When I checked the Operate pod logs, I see it failing to connect to the Elasticsearch instance at http://camunda-elasticsearch:9200 with a connection refused exception.

I have no clue what is missing. What else should I check?

FYI: the Elasticsearch component is up, and I verified index listing from inside the pod via kubectl exec -it.
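For reference, the index listing check was along these lines (the exact pod name may differ in your cluster):

```shell
# List indices from inside the Elasticsearch pod
# (pod name taken from `kubectl get pods`; adjust to yours)
kubectl exec -it camunda-elasticsearch-master-0 -- curl "localhost:9200/_cat/indices?v"
```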

Below is my Operate pod log:

C:\Users\Pramod>kubectl logs -f camunda-operate-6f66f75599-hqjjp
Operate is a tool in Camunda 8 stack to provide visibility into all your processes and resolve any incident from a centralized platform. More information can be found on https://camunda.com/platform/operate/

License
The licensing information for Operate available at: https://docs.camunda.io/docs/reference/licenses/

Login with the credentials:
  User:     demo
  Password: demo

   ___   _   __  __ _   _ _  _ ___   _      ___  ___ ___ ___    _ _____ ___
  / __| /_\ |  \/  | | | | \| |   \ /_\    / _ \| _ \ __| _ \  /_\_   _| __|
 | (__ / _ \| |\/| | |_| | .` | |) / _ \  | (_) |  _/ _||   / / _ \| | | _|
  \___/_/ \_\_|  |_|\___/|_|\_|___/_/ \_\  \___/|_| |___|_|_\/_/ \_\_| |___|

                                                           8.7.21


[2025-12-22 11:11:23.395] [main] INFO
        io.camunda.application.StandaloneOperate - Starting StandaloneOperate v8.7.21 using Java 21.0.9 with PID 8 (/usr/local/operate/lib/camunda-zeebe-8.7.21.jar started by camunda in /usr/local/operate)
[2025-12-22 11:11:23.412] [main] INFO
        io.camunda.application.StandaloneOperate - The following 3 profiles are active: "operate", "standalone", "auth"
[2025-12-22 11:11:38.866] [main] WARN
        org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'io.camunda.operate.webapp.security.CustomMethodSecurityExpressionHandler' of type [io.camunda.operate.webapp.security.CustomMethodSecurityExpressionHandler] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying). Is this bean getting eagerly injected/applied to a currently created BeanPostProcessor [metricsConfiguration]? Check the corresponding BeanPostProcessor declaration and its dependencies/advisors. If this bean does not have to be post-processed, declare it with ROLE_INFRASTRUCTURE.
[2025-12-22 11:11:38.878] [main] WARN
        org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'io.camunda.operate.webapp.security.MethodSecurityConfig' of type [io.camunda.operate.webapp.security.MethodSecurityConfig$$SpringCGLIB$$0] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying). Is this bean getting eagerly injected/applied to a currently created BeanPostProcessor [metricsConfiguration]? Check the corresponding BeanPostProcessor declaration and its dependencies/advisors. If this bean does not have to be post-processed, declare it with ROLE_INFRASTRUCTURE.
[2025-12-22 11:11:40.782] [main] INFO
        org.springframework.boot.web.embedded.tomcat.TomcatWebServer - Tomcat initialized with port 8080 (http)
[2025-12-22 11:11:43.304] [main] INFO
        org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext - Root WebApplicationContext: initialization completed in 19164 ms
[2025-12-22 11:11:46.621] [netty-messaging-event-epoll-server-0] INFO
        io.atomix.cluster.messaging.impl.NettyMessagingService - TCP server listening for connections on 10.244.2.77:26502
[2025-12-22 11:11:47.515] [netty-messaging-event-epoll-server-0] INFO
        io.atomix.cluster.messaging.impl.NettyMessagingService - Started messaging service bound to [10.244.2.77:26502], advertising 10.244.2.77:26502, and using plaintext
[2025-12-22 11:11:48.271] [netty-unicast-event-nio-client-0] INFO
        io.atomix.cluster.messaging.impl.NettyUnicastService - Started plaintext unicast service bound to 0.0.0.0:26502, advertising 10.244.2.77:26502
[2025-12-22 11:11:48.294] [atomix-cluster-0] INFO
        io.atomix.cluster.discovery.BootstrapDiscoveryProvider - Local node Node{id=camunda-operate-6f66f75599-hqjjp, address=10.244.2.77:26502} joined the bootstrap service
[2025-12-22 11:11:48.478] [atomix-cluster-0] INFO
        io.atomix.cluster.protocol.swim - Started
[2025-12-22 11:11:48.485] [atomix-cluster-0] INFO
        io.atomix.cluster.impl.DefaultClusterMembershipService - Started cluster membership service for member Member{id=camunda-operate-6f66f75599-hqjjp, address=10.244.2.77:26502, properties={}}
[2025-12-22 11:11:48.487] [atomix-cluster-0] INFO
        io.atomix.cluster.messaging.impl.DefaultClusterCommunicationService - Started
[2025-12-22 11:11:48.570] [atomix-cluster-0] INFO
        io.atomix.cluster.messaging.impl.DefaultClusterEventService - Started
[2025-12-22 11:11:49.702] [main] WARN
        io.camunda.search.connect.es.ElasticsearchConnector - Username and/or password for are empty. Basic authentication for elasticsearch is not used.
[2025-12-22 11:11:54.105] [main] INFO
        io.camunda.operate.connect.OperateDateTimeFormatter - rfc3339ApiDateFormat is set to false, operate API will format datetimes in the existing format
[2025-12-22 11:11:55.287] [main] WARN
        io.camunda.service.license.CamundaLicense - No license detected when one is expected. Please provide a license through the CAMUNDA_LICENSE_KEY environment variable.
[2025-12-22 11:11:55.713] [main] WARN
        io.camunda.operate.connect.ElasticsearchConnector - Username and/or password for are empty. Basic authentication for elasticsearch is not used.
[2025-12-22 11:11:59.478] [main] WARN
        io.camunda.operate.connect.ElasticsearchConnector - Elasticsearch cluster is not accessible
[2025-12-22 11:12:01.003] [main] INFO
        io.camunda.operate.schema.SchemaStartup - SchemaStartup started.
[2025-12-22 11:12:01.004] [main] INFO
        io.camunda.operate.schema.SchemaStartup - SchemaStartup: validate index versions.
[2025-12-22 11:12:06.171] [main] INFO
        io.camunda.operate.schema.SchemaStartup - SchemaStartup: validate index mappings.
[2025-12-22 11:12:09.880] [main] INFO
        io.camunda.operate.schema.IndexSchemaValidator - Index name: variable. Field deletion is requested, will be ignored. Fields: [IndexMappingProperty{name='city', typeDefinition={type=text, fields={keyword={ignore_above=256, type=keyword}}}}, IndexMappingProperty{name='surname', typeDefinition={type=text, fields={keyword={ignore_above=256, type=keyword}}}}]
[2025-12-22 11:12:10.767] [main] INFO
        io.camunda.operate.schema.SchemaStartup - SchemaStartup: schema won't be created, it either already exist, or schema creation is disabled in configuration.
[2025-12-22 11:12:11.386] [main] INFO
        io.camunda.operate.schema.SchemaStartup - SchemaStartup: migrate schema.
[2025-12-22 11:12:11.702] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 1.1.0-0_variable_script.json
[2025-12-22 11:12:13.575] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 1.2.0-0_list-view_script.json
[2025-12-22 11:12:13.971] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 1.2.0-1_incident_script.json
[2025-12-22 11:12:13.982] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 1.2.0-2_user_script.json
[2025-12-22 11:12:13.988] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 1.2.0-3_user_script.json
[2025-12-22 11:12:13.991] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 1.2.0-4_user_script.json
[2025-12-22 11:12:14.078] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 1.2.0-5_user_delete-fields_script.json
[2025-12-22 11:12:14.085] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 1.3.0-0_incident_pending.json
[2025-12-22 11:12:14.088] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 8.1.0-0_list-view_pendingIncident.json
[2025-12-22 11:12:14.090] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 8.1.0-1_list-remove_fields.json
[2025-12-22 11:12:14.167] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 8.2.0-0_add_sequence_import.json
[2025-12-22 11:12:14.169] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 8.2.0-1_add_incident_bpmnProcessId.json
[2025-12-22 11:12:14.187] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 8.3.0-0_import-position_add_post-importer.json
[2025-12-22 11:12:14.189] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 8.3.0-10_sequence-flow_tenantId.json
[2025-12-22 11:12:14.192] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 8.3.0-11_variable_tenantId.json
[2025-12-22 11:12:14.267] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 8.3.0-12_metric_tenantId.json
[2025-12-22 11:12:14.270] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 8.3.0-1_post-importer-queue.json
[2025-12-22 11:12:14.284] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 8.3.0-2_incident_delete-fields.json
[2025-12-22 11:12:14.286] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 8.3.0-3_decision-requirements_tenantId.json
[2025-12-22 11:12:14.289] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 8.3.0-4_decision_tenantId.json
[2025-12-22 11:12:14.291] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 8.3.0-5_process_tenantId.json
[2025-12-22 11:12:14.293] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 8.3.0-6_decision-instance_tenantId.json
[2025-12-22 11:12:14.368] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 8.3.0-7_event_tenantId.json
[2025-12-22 11:12:14.371] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 8.3.0-8_flownode-instance_tenantId.json
[2025-12-22 11:12:14.375] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 8.3.0-9_list-view_tenantId.json
[2025-12-22 11:12:14.387] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 8.3.1-0_incident_delete-fields.json
[2025-12-22 11:12:14.390] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 8.3.1-1_incident_tenantId.json
[2025-12-22 11:12:23.067] [migration_1] INFO
        io.camunda.operate.schema.migration.Migrator - Check if index decision needs to migrate.
[2025-12-22 11:12:23.067] [migration_2] INFO
        io.camunda.operate.schema.migration.Migrator - Check if index decision-requirements needs to migrate.
[2025-12-22 11:12:31.086] [migration_2] INFO
        io.camunda.operate.schema.migration.Migrator - No migration needed for post-importer-queue, no previous indices found and no data initializer.
[2025-12-22 11:12:31.678] [migration_5] INFO
        io.camunda.operate.schema.migration.Migrator - No migration needed for user-task, no previous indices found and no data initializer.
[2025-12-22 11:12:31.775] [migration_3] INFO
        io.camunda.operate.schema.migration.Migrator - No migration needed for operation, no previous indices found and no data initializer.
[2025-12-22 11:12:32.471] [migration_4] INFO
        io.camunda.operate.schema.migration.Migrator - No migration needed for sequence-flow, no previous indices found and no data initializer.
[2025-12-22 11:12:32.681] [migration_1] INFO
        io.camunda.operate.schema.migration.Migrator - No migration needed for variable, no previous indices found and no data initializer.
[2025-12-22 11:12:32.769] [main] INFO
        io.camunda.operate.schema.SchemaStartup - SchemaStartup finished.
[2025-12-22 11:12:34.088] [main] INFO
        io.camunda.operate.WebappModuleConfiguration - Starting module: webapp
[2025-12-22 11:12:37.885] [main] INFO
        io.camunda.operate.management.ModelMetricProvider - Register BPMN/DMN model metrics.
[2025-12-22 11:12:39.878] [main] WARN
        io.camunda.operate.connect.ElasticsearchConnector - Username and/or password for are empty. Basic authentication for elasticsearch is not used.
[2025-12-22 11:12:51.285] [main] WARN
        io.camunda.operate.util.RetryOperation - Retry Operation Connect to Elasticsearch cluster [elasticsearch] at http://camunda-elasticsearch:9200 failed: java.util.concurrent.ExecutionException: java.net.ConnectException: Connection refused
org.elasticsearch.ElasticsearchException: java.util.concurrent.ExecutionException: java.net.ConnectException: Connection refused
        at org.elasticsearch.client.RestHighLevelClient.performClientRequest(RestHighLevelClient.java:2695) ~[elasticsearch-rest-high-level-client-7.17.29.jar:8.13.4]
        at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:2171) ~[elasticsearch-rest-high-level-client-7.17.29.jar:8.13.4]
        at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:2137) ~[elasticsearch-rest-high-level-client-7.17.29.jar:8.13.4]
        at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:2105) ~[elasticsearch-rest-high-level-client-7.17.29.jar:8.13.4]

Hi @pramod_Rajane,

Thanks for sharing the detailed logs and your Medium article! The “Connection refused” error from Operate to http://camunda-elasticsearch:9200 indicates a networking issue where the HTTP request never reaches the Elasticsearch service.

Let’s troubleshoot this systematically:

1. Verify Elasticsearch Service and Pods

First, check if Elasticsearch is actually running and accessible:

# Check ES pods
kubectl get pods | grep elasticsearch

# Check ES services  
kubectl get svc | grep elasticsearch

# Verify ES health (if accessible)
kubectl exec -it <elasticsearch-pod-name> -- curl -X GET "localhost:9200/_cluster/health?pretty"

2. Confirm Service Name and Port

The error shows Operate trying to connect to http://camunda-elasticsearch:9200. Verify that:

  • A service named camunda-elasticsearch exists
  • It’s listening on port 9200/TCP

If the service name or port differs, you’ll need to adjust your Helm values to match.

3. Test Connectivity from Operate Pod

Test the connection directly from the Operate pod:

kubectl exec -it camunda-operate-6f66f75599-hqjjp -- curl -v http://camunda-elasticsearch:9200

If this also fails with “connection refused”, the issue is at the Kubernetes networking/service level, not with Operate’s configuration.
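A quick way to tell a DNS problem apart from a service problem is a lookup from a throwaway pod (this assumes the busybox image can be pulled in your cluster):

```shell
# Resolve the service name from a temporary pod; --rm deletes the pod afterwards
kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup camunda-elasticsearch
```

If the name resolves but the connection is still refused, the problem is on the Elasticsearch side (not listening, not ready) rather than in service discovery.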

4. Check Your Helm Configuration

Based on your setup, please share:

  • Your values.yaml file (with any secrets redacted)
  • Output of kubectl get svc | grep elasticsearch

This will help identify if you’re using:

  • Internal Elasticsearch (deployed with the chart)
  • External Elasticsearch (requires different configuration)

5. Common Configuration Issues

If you’re using external Elasticsearch, ensure your values.yaml includes:

global:
  elasticsearch:
    enabled: true
    external: true
    url:
      protocol: http
      host: <your-es-host>
      port: 9200
elasticsearch:
  enabled: false

If you’re using internal Elasticsearch (ECK), verify the Elasticsearch custom resource is healthy:

kubectl get elasticsearch

Next Steps

Could you please share:

  1. Output of kubectl get svc | grep elasticsearch
  2. Output of kubectl get pods | grep elasticsearch
  3. Your Helm values.yaml (redacted)

This will help pinpoint whether it’s a service discovery issue, configuration mismatch, or Elasticsearch availability problem.


@zee-bot Thanks for replying. Below are some details.

I do not have a service/pod named exactly ‘elasticsearch’; the names are slightly different.

C:\Users\Pramod\Developement\workspace\app\helm>kubectl get svc
NAME                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                        AGE
camunda-elasticsearch             ClusterIP   10.102.28.33     <none>        9200/TCP,9300/TCP              29s
camunda-elasticsearch-master-hl   ClusterIP   None             <none>        9200/TCP,9300/TCP              29s
camunda-operate                   ClusterIP   10.108.220.78    <none>        80/TCP,9600/TCP                29s
camunda-zeebe                     ClusterIP   None             <none>        9600/TCP,26502/TCP,26501/TCP   29s
camunda-zeebe-gateway             ClusterIP   10.104.114.229   <none>        9600/TCP,26500/TCP,8080/TCP    29s

C:\Users\Pramod\Developement\workspace\app\helm>kubectl get pod
NAME                                     READY   STATUS    RESTARTS        AGE
camunda-elasticsearch-master-0           1/1     Running   1 (2m35s ago)   6m57s
camunda-operate-57f5795d98-xnz7w         0/1     Running   1 (66s ago)     6m57s
camunda-zeebe-0                          1/1     Running   0               6m57s
camunda-zeebe-gateway-5984586c9b-pmdzt   1/1     Running   0               6m57s

Please find the Helm configuration attached.

values.yaml (1.8 KB)

Perfect! Now I can see the issue. Looking at your service output, you have the correct service camunda-elasticsearch running on port 9200, which is exactly what Operate should be connecting to.

The Issue

Your setup looks correct from a networking perspective:

  • :white_check_mark: Service camunda-elasticsearch exists on 10.102.28.33:9200
  • :white_check_mark: Pod camunda-elasticsearch-master-0 is running
  • :white_check_mark: Operate is trying to connect to http://camunda-elasticsearch:9200 (correct service name)

However, I notice your Operate pod is showing 0/1 Ready and has had 1 restart, which suggests it’s still failing.

Troubleshooting Steps

1. Test Connectivity from Operate Pod

Let’s verify if the networking is working:

kubectl exec -it camunda-operate-57f5795d98-xnz7w -- curl -v http://camunda-elasticsearch:9200

If this works, the issue might be with Operate’s configuration or timing.

2. Check Elasticsearch Health

Test if Elasticsearch is actually ready to accept connections:

kubectl exec -it camunda-elasticsearch-master-0 -- curl -X GET "localhost:9200/_cluster/health?pretty"

3. Check Your Values.yaml Configuration

I see you mentioned attaching your values.yaml but I can’t access the attachment. Could you paste the relevant Elasticsearch configuration from your values.yaml? Specifically:

global:
  elasticsearch:
    # your configuration here

elasticsearch:
  # your configuration here

4. Common Issues and Solutions

Based on the logs, here are potential issues:

A. Elasticsearch not fully ready: Even though the pod is running, Elasticsearch might still be initializing. Check the ES logs:

kubectl logs camunda-elasticsearch-master-0

B. Resource constraints: Elasticsearch needs sufficient memory. Check if the pod has enough resources:

kubectl describe pod camunda-elasticsearch-master-0

C. Timing issue: Operate might be starting before Elasticsearch is fully ready. This usually resolves itself with retries.

5. Quick Test

Try this quick connectivity test:

# Test from your local machine (port-forward)
kubectl port-forward svc/camunda-elasticsearch 9200:9200

# In another terminal
curl http://localhost:9200

If this works, the issue is likely timing-related and should resolve as Elasticsearch fully starts up.

Expected Behavior

Your configuration appears correct - Camunda components should connect to the service camunda-elasticsearch (not directly to the pod camunda-elasticsearch-master-0), which is exactly what’s happening.

Could you please share:

  1. Your values.yaml elasticsearch configuration
  2. Output of kubectl logs camunda-elasticsearch-master-0
  3. Result of the connectivity test from the Operate pod

This will help identify if it’s a configuration, resource, or timing issue.


Please find the configuration from values.yaml below:


#Global configuration
global:
  elasticsearch:
    disableExporter: true
    enabled: true
  url:
    protocol: http
    host: camunda-elasticsearch
    port: 9200

  identity:
    auth:
      enabled: false

#Elasticsearch configuration
elasticsearch:
  enabled: true
  master:
    replicaCount: 1
    persistence:
      size: 3Gi

#Operate configuration
operate:
  enabled: true
  configuration: |-
    camunda:
      operate:
        elasticsearch:
          clusterName: elastic
          host: camunda-elasticsearch
          port: 9200
          prefix: zeebe-record-operate
  readinessProbe:
    enabled: true
    initialDelaySeconds: 180
  migration:
    enabled: false
  image:
    repository: camunda/operate
    tag: 8.7.21

#Zeebe configuration
zeebe:
  enabled: true
  clusterSize: 1
  partitionCount: 1
  replicationFactor: 1
  pvcSize: 2Gi
  configuration: |-
    zeebe:
      broker:
        exporters:
          elasticsearch:
            className: "io.camunda.zeebe.exporter.ElasticsearchExporter"
            args:
              url: "http://camunda-elasticsearch:9200"
              index:
                prefix: "zeebe-record"
  resources:
    requests:
      cpu: 400m
      memory: 800Mi
    limits:
      cpu: 400m
      memory: 800Mi
  image:
    repository: camunda/zeebe
    tag: 8.7.21

#Zeebe gateway configuration
zeebeGateway:
  enabled: true
  replicas: 1
  resources:
    requests:
      cpu: 400m
      memory: 800Mi
    limits:
      cpu: 400m
      memory: 800Mi
  image:
    repository: camunda/zeebe
    tag: 8.7.21

identity:
  enabled: false

identityKeycloak:
  enabled: false

connectors:
  enabled: false
  inbound:
    mode: disabled

optimize:
  enabled: false

tasklist:
  enabled: false

Found the issue! :bullseye:

The Problem

Your values.yaml has a configuration error in the global Elasticsearch section. You have:

global:
  elasticsearch:
    disableExporter: true
    enabled: true
  url:                    # ❌ WRONG - this should be nested under elasticsearch
    protocol: http
    host: camunda-elasticsearch
    port: 9200

The url section is incorrectly placed at the global level instead of under global.elasticsearch.url.

The Fix

Update your values.yaml to use the correct structure:

global:
  elasticsearch:
    enabled: true
    url:                    # ✅ CORRECT - nested under elasticsearch
      protocol: http
      host: camunda-elasticsearch
      port: 9200
  identity:
    auth:
      enabled: false

Remove the disableExporter: true line as it’s deprecated in newer chart versions and conflicts with enabled: true.

Complete Corrected Configuration

Here’s your corrected values.yaml:

# Global configuration
global:
  elasticsearch:
    enabled: true
    url:
      protocol: http
      host: camunda-elasticsearch
      port: 9200
  identity:
    auth:
      enabled: false

# Elasticsearch configuration
elasticsearch:
  enabled: true
  master:
    replicaCount: 1
    persistence:
      size: 3Gi

# Operate configuration
operate:
  enabled: true
  configuration: |-
    camunda:
      operate:
        elasticsearch:
          clusterName: elastic
          host: camunda-elasticsearch
          port: 9200
          prefix: zeebe-record-operate
  readinessProbe:
    enabled: true
    initialDelaySeconds: 180
  migration:
    enabled: false
  image:
    repository: camunda/operate
    tag: 8.7.21

# Rest of your configuration remains the same...

Why This Fixes It

The Helm chart templates expect the Elasticsearch URL configuration to be at global.elasticsearch.url.*, not global.url.*. With the incorrect structure:

  1. The chart couldn’t find the proper Elasticsearch configuration
  2. Operate fell back to trying to connect to the default service name
  3. But the connection details weren’t properly configured
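You can verify what the chart actually renders without touching the cluster. This is a sketch, assuming you installed from the camunda/camunda-platform chart and have the repo added locally:

```shell
# Render the manifests locally and inspect where the Elasticsearch URL ends up
helm template camunda camunda/camunda-platform -f values.yaml | grep -i -A 2 "elasticsearch"
```

Comparing this output before and after moving `url` under `global.elasticsearch` makes the effect of the indentation fix visible.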

Next Steps

  1. Update your values.yaml with the corrected structure
  2. Upgrade your Helm deployment:
    helm upgrade camunda camunda/camunda-platform -f values.yaml
    
  3. Wait for the pods to restart and check if Operate connects successfully

The connection should work now since your service camunda-elasticsearch is running correctly - it was just a configuration parsing issue.


Thanks, I corrected the configuration, but there is still a connection issue.
Below is the corrected values.yaml:


#Global configuration
global:
  elasticsearch:
    disableExporter: true
    enabled: true
    url:
      protocol: http
      host: camunda-elasticsearch
      port: 9200

  identity:
    auth:
      enabled: false

#Elasticsearch configuration
elasticsearch:
  enabled: true
  master:
    replicaCount: 1
    persistence:
      size: 3Gi

#Operate configuration
operate:
  enabled: true
  configuration: |-
    camunda:
      operate:
        elasticsearch:
          clusterName: elastic
          host: camunda-elasticsearch
          port: 9200
          prefix: zeebe-record-operate
  readinessProbe:
    enabled: true
    initialDelaySeconds: 180
  migration:
    enabled: false
  image:
    repository: camunda/operate
    tag: 8.7.21

#Zeebe configuration
zeebe:
  enabled: true
  clusterSize: 1
  partitionCount: 1
  replicationFactor: 1
  pvcSize: 2Gi
  configuration: |-
    zeebe:
      broker:
        exporters:
          elasticsearch:
            className: "io.camunda.zeebe.exporter.ElasticsearchExporter"
            args:
              url: "http://camunda-elasticsearch:9200"
              index:
                prefix: "zeebe-record"
  resources:
    requests:
      cpu: 400m
      memory: 800Mi
    limits:
      cpu: 400m
      memory: 800Mi
  image:
    repository: camunda/zeebe
    tag: 8.7.21

#Zeebe gateway configuration
zeebeGateway:
  enabled: true
  replicas: 1
  resources:
    requests:
      cpu: 400m
      memory: 800Mi
    limits:
      cpu: 400m
      memory: 800Mi
  image:
    repository: camunda/zeebe
    tag: 8.7.21

identity:
  enabled: false

identityKeycloak:
  enabled: false

connectors:
  enabled: false
  inbound:
    mode: disabled

optimize:
  enabled: false

tasklist:
  enabled: false

Output of command - kubectl exec -it camunda-elasticsearch-master-0 -- curl -X GET "localhost:9200/_cluster/health?pretty"
Defaulted container “elasticsearch” out of: elasticsearch, sysctl (init), copy-default-plugins (init)
{
  "cluster_name" : "elastic",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 36,
  "active_shards" : 36,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 1,
  "unassigned_primary_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 97.2972972972973
}

Output of command - kubectl exec -it camunda-operate-57f5795d98-hws7d -- curl -v http://camunda-elasticsearch:9200
OCI runtime exec failed: exec failed: unable to start container process: exec: “curl”: executable file not found in $PATH: unknown
command terminated with exit code 126

Great progress! I can see two important things:

:white_check_mark: Good News

  1. Elasticsearch is healthy: The cluster status is “yellow” which is normal for a single-node setup (yellow just means some replica shards can’t be assigned, but all primary shards are active)
  2. Elasticsearch is accessible: The health check worked from inside the pod
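If you want to clear the yellow status on a single-node cluster (purely cosmetic, and unrelated to the connection problem), you can drop the replica requirement. A sketch, applied to all indices:

```shell
# Set replicas to 0 on every index so a single-node cluster reports "green"
kubectl exec -it camunda-elasticsearch-master-0 -- curl -X PUT \
  "localhost:9200/_all/_settings" \
  -H "Content-Type: application/json" \
  -d '{"index":{"number_of_replicas":0}}'
```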

:cross_mark: Configuration Issue Found

You still have a conflicting configuration that’s likely causing the problem:

global:
  elasticsearch:
    disableExporter: true    # ❌ This disables the exporter
    enabled: true           # ✅ This enables elasticsearch

The disableExporter: true is deprecated and conflicts with enabled: true. This creates ambiguity in how Zeebe and Operate should connect to Elasticsearch.

The Fix

Remove the disableExporter line completely from your values.yaml:

global:
  elasticsearch:
    enabled: true           # This is all you need
    url:
      protocol: http
      host: camunda-elasticsearch
      port: 9200

Alternative Connectivity Test

Since curl isn’t available in the Operate pod, let’s test connectivity differently:

# Test with wget (if available)
kubectl exec -it camunda-operate-57f5795d98-hws7d -- wget -qO- http://camunda-elasticsearch:9200

# Or test with netcat to check if port is reachable
kubectl exec -it camunda-operate-57f5795d98-hws7d -- nc -zv camunda-elasticsearch 9200

Complete Corrected Configuration

Here’s your values.yaml with the fix:

global:
  elasticsearch:
    enabled: true                    # ✅ Only this line needed
    url:
      protocol: http
      host: camunda-elasticsearch
      port: 9200
  identity:
    auth:
      enabled: false

# Rest of your configuration stays the same...
elasticsearch:
  enabled: true
  master:
    replicaCount: 1
    persistence:
      size: 3Gi

operate:
  enabled: true
  # ... rest of operate config

Deploy the Fix

  1. Update your values.yaml to remove disableExporter: true
  2. Upgrade your deployment:
    helm upgrade camunda camunda/camunda-platform -f values.yaml
    
  3. Wait for pods to restart and check Operate logs

This should resolve the connection issue since the conflicting exporter configuration was likely preventing proper Elasticsearch connectivity setup.


Unfortunately, I still see the connection issue from the Operate pod to Elasticsearch. Nevertheless, wget from the Operate pod can reach Elasticsearch. Below are the details.

kubectl exec -it camunda-operate-57f5795d98-bdvll -- wget -qO- http://camunda-elasticsearch:9200/
{
  "name" : "camunda-elasticsearch-master-0",
  "cluster_name" : "elastic",
  "cluster_uuid" : "9GcsdFSCSDOqw8lmyw3wRA",
  "version" : {
    "number" : "8.17.4",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "c63c7f5f8ce7d2e4805b7b3d842e7e792d84dda1",
    "build_date" : "2025-03-20T15:39:59.811110136Z",
    "build_snapshot" : false,
    "lucene_version" : "9.12.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}

Excellent! :bullseye: Now we’re getting to the root cause. The fact that wget works from the Operate pod confirms that:

:white_check_mark: Network connectivity is fine
:white_check_mark: Elasticsearch is accessible and responding
:white_check_mark: Service discovery is working

But I notice a potential version compatibility issue: You’re running Camunda 8.7.21 with Elasticsearch 8.17.4.

The Real Issue: Configuration Precedence

Looking at your values.yaml, you have conflicting Elasticsearch configurations:

  1. Global config (used by Helm templates):

    global:
      elasticsearch:
        url:
          host: camunda-elasticsearch
          port: 9200
    
  2. Operate-specific config (overrides global):

    operate:
      configuration: |-
        camunda:
          operate:
            elasticsearch:
              host: camunda-elasticsearch  # This overrides global!
              port: 9200
    
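One way to see which of the two configurations actually reached the container is to inspect its environment (pod name taken from the output above; adjust to your deployment):

```shell
# Spring Boot maps environment variables onto the application config;
# check what Elasticsearch-related variables the chart injected
kubectl exec -it camunda-operate-57f5795d98-bdvll -- env | grep -i elastic
```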

Diagnosis Step

Let’s check what configuration Operate is actually using:

kubectl get configmap camunda-operate-configuration -o yaml

This will show the actual configuration that Operate is reading.

The Fix

Since network connectivity works, the issue is likely in the application-level configuration. Try this simplified approach:

Remove the explicit Operate elasticsearch configuration and let it use the global settings:

global:
  elasticsearch:
    enabled: true
    url:
      protocol: http
      host: camunda-elasticsearch
      port: 9200

operate:
  enabled: true
  # Remove the elasticsearch configuration block entirely
  readinessProbe:
    enabled: true
    initialDelaySeconds: 180
  migration:
    enabled: false
  image:
    repository: camunda/operate
    tag: 8.7.21

Alternative: Fix the Operate Configuration

If you want to keep the explicit configuration, ensure it matches exactly:

operate:
  configuration: |-
    camunda:
      operate:
        elasticsearch:
          url: "http://camunda-elasticsearch:9200"
          # Remove individual host/port if using url
        zeebeElasticsearch:
          url: "http://camunda-elasticsearch:9200"
          prefix: "zeebe-record"

Version Compatibility Note

Your Elasticsearch version (8.17.4) might be too new for Camunda 8.7.21. The supported versions for Camunda 8.7 typically require Elasticsearch 8.13+, but there might be upper limits. Consider using a more conservative ES version like 8.15 or 8.16 if the configuration fixes don’t work.

Next Steps

  1. Check the actual Operate configuration: kubectl get configmap camunda-operate-configuration -o yaml
  2. Try the simplified values.yaml (removing explicit Operate ES config)
  3. Redeploy and check if Operate connects successfully

Let me know what the ConfigMap shows - that will tell us exactly what configuration Operate is trying to use!

kubectl get configmap camunda-operate-configuration -o yaml
apiVersion: v1
data:
  application.yaml: |
    camunda:
      operate:
        elasticsearch:
          url: "http://camunda-elasticsearch:9200"
        zeebeElasticsearch:
          url: "http://camunda-elasticsearch:9200"
          prefix: "zeebe-record"
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: camunda
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2025-12-25T21:28:16Z"
  labels:
    app: camunda-platform
    app.kubernetes.io/component: operate
    app.kubernetes.io/instance: camunda
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: camunda-platform
    app.kubernetes.io/part-of: camunda-platform
    app.kubernetes.io/version: 8.7.21
    helm.sh/chart: camunda-platform-12.7.4
  name: camunda-operate-configuration
  namespace: default
  resourceVersion: "129302"
  uid: 6ec0dec7-6415-45db-b57a-dc194fb06fa3

Perfect! The ConfigMap shows that Operate is configured correctly with the right URLs:

camunda:
  operate:
    elasticsearch:
      url: "http://camunda-elasticsearch:9200"
    zeebeElasticsearch:
      url: "http://camunda-elasticsearch:9200"

Since wget works from the Operate pod but the Java application fails, this points to a Java-specific connectivity issue rather than network problems.

Likely Root Cause: Elasticsearch Version Compatibility

The issue is most likely the Elasticsearch version mismatch:

  • Your setup: Camunda 8.7.21 + Elasticsearch 8.17.4
  • Expected: Camunda 8.7.21 typically works with Elasticsearch 8.13-8.16

Elasticsearch 8.17.4 is quite new and may have compatibility issues with the Elasticsearch Java client used by Camunda 8.7.21.
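The version check itself is just a tuple comparison on the parsed version strings. A minimal sketch; note the range boundaries below are illustrative assumptions, not the official Camunda 8.7 support matrix (always verify against the Camunda supported-environments docs):

```python
# Sketch: check an Elasticsearch version against an assumed supported range.
# The bounds here (8.13.0 .. 8.16.x) are illustrative, not authoritative.

def parse_version(v: str) -> tuple[int, ...]:
    """Turn '8.17.4' into (8, 17, 4) so tuple comparison orders correctly."""
    return tuple(int(part) for part in v.split("."))

def in_supported_range(version: str, low: str, high: str) -> bool:
    return parse_version(low) <= parse_version(version) <= parse_version(high)

print(in_supported_range("8.17.4", "8.13.0", "8.16.99"))  # False -> too new
print(in_supported_range("8.15.0", "8.13.0", "8.16.99"))  # True
```

Comparing parsed tuples avoids the classic string-comparison trap where "8.9" sorts after "8.13".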

Solution 1: Downgrade Elasticsearch (Recommended)

Update your values.yaml to use a compatible Elasticsearch version:

elasticsearch:
  enabled: true
  image:
    tag: "8.15.0"  # Use a more conservative version
  master:
    replicaCount: 1
    persistence:
      size: 3Gi

Solution 2: Check Java Connection Settings

The Java Elasticsearch client might be having issues that wget doesn’t encounter. Check if there are any Java-specific connection settings needed.
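One way to narrow this down is to reproduce the same health request with a plain HTTP client and compare behaviour with both wget and the Java client. A stdlib-only sketch; the hostname in the comment is the service name from this thread, and the timeout value is illustrative:

```python
import json
import urllib.request

def cluster_health(base_url: str, timeout: float = 10.0) -> dict:
    """GET /_cluster/health and decode the JSON body, like a bare HTTP client."""
    req = urllib.request.Request(
        base_url + "/_cluster/health",
        headers={"Accept": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())

# From inside the cluster this would be:
#   cluster_health("http://camunda-elasticsearch:9200")
# and you would compare the returned "status" with what the Java client sees.
```

If this succeeds where the Java client fails, the problem is in client configuration (timeouts, proxies, TLS settings) rather than the network path.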

Let’s also check the full Operate logs to see if there are more specific error details:

kubectl logs camunda-operate-57f5795d98-bdvll --tail=100

Solution 3: Temporary Workaround - Increase Timeouts

If it’s a timing issue, you can try increasing Operate’s connection timeouts:

operate:
  configuration: |-
    camunda:
      operate:
        elasticsearch:
          url: "http://camunda-elasticsearch:9200"
          socketTimeout: 30000
          connectTimeout: 10000
        zeebeElasticsearch:
          url: "http://camunda-elasticsearch:9200"
          prefix: "zeebe-record"
          socketTimeout: 30000
          connectTimeout: 10000

Debugging Steps

  1. Check Elasticsearch logs for any errors when Operate tries to connect:

    kubectl logs camunda-elasticsearch-master-0 --tail=50
    
  2. Verify Elasticsearch is fully ready (not just running):

    kubectl exec -it camunda-elasticsearch-master-0 -- curl -X GET "localhost:9200/_cluster/health?wait_for_status=green&timeout=30s"
    

Recommended Action

I strongly recommend downgrading Elasticsearch to version 8.15.0 or 8.16.0 as this is the most likely cause of the Java connection issues. The fact that wget works but the Java client fails is a classic symptom of version compatibility problems.

Try the Elasticsearch downgrade first, then let me know if Operate connects successfully!

I downgraded the Elasticsearch image to 8.14.3, and now Elasticsearch fails to start with the error 'Could not load codec 'Lucene912'. Did you forget to add lucene-backward-codecs.jar?'

Below is the log trace:

[2025-12-26T12:07:44,064][INFO ][o.e.e.NodeEnvironment    ] [camunda-elasticsearch-master-0] heap size [1gb], compressed ordinary object pointers [true]
[2025-12-26T12:07:44,104][ERROR][o.e.b.Elasticsearch      ] [camunda-elasticsearch-master-0] fatal exception while booting Elasticsearch java.lang.IllegalArgumentException: Could not load codec 'Lucene912'. Did you forget to add lucene-backward-codecs.jar?
        at org.apache.lucene.core@9.10.0/org.apache.lucene.index.SegmentInfos.readCodec(SegmentInfos.java:520)
        at org.apache.lucene.core@9.10.0/org.apache.lucene.index.SegmentInfos.parseSegmentInfos(SegmentInfos.java:409)
        at org.apache.lucene.core@9.10.0/org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:368)
        at org.apache.lucene.core@9.10.0/org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:304)
        at org.apache.lucene.core@9.10.0/org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:88)
        at org.apache.lucene.core@9.10.0/org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:77)
        at org.apache.lucene.core@9.10.0/org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:820)
        at org.apache.lucene.core@9.10.0/org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:109)
        at org.apache.lucene.core@9.10.0/org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:67)
        at org.apache.lucene.core@9.10.0/org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:60)
        at org.elasticsearch.server@8.14.3/org.elasticsearch.gateway.PersistedClusterStateService.nodeMetadata(PersistedClusterStateService.java:354)
        at org.elasticsearch.server@8.14.3/org.elasticsearch.env.NodeEnvironment.loadNodeMetadata(NodeEnvironment.java:611)
        at org.elasticsearch.server@8.14.3/org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:334)
        at org.elasticsearch.server@8.14.3/org.elasticsearch.node.NodeConstruction.validateSettings(NodeConstruction.java:509)
        at org.elasticsearch.server@8.14.3/org.elasticsearch.node.NodeConstruction.prepareConstruction(NodeConstruction.java:256)
        at org.elasticsearch.server@8.14.3/org.elasticsearch.node.Node.<init>(Node.java:192)
        at org.elasticsearch.server@8.14.3/org.elasticsearch.bootstrap.Elasticsearch$2.<init>(Elasticsearch.java:240)
        at org.elasticsearch.server@8.14.3/org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:240)
        at org.elasticsearch.server@8.14.3/org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:75)
        Suppressed: org.apache.lucene.index.CorruptIndexException: checksum passed (65423dae). possibly transient resource issue, or a Lucene or JVM bug (resource=BufferedChecksumIndexInput(NIOFSIndexInput(path="/bitnami/elasticsearch/data/_state/segments_4s")))
                at org.apache.lucene.core@9.10.0/org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:501)
                at org.apache.lucene.core@9.10.0/org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:375)
                ... 16 more
Caused by: java.lang.IllegalArgumentException: An SPI class of type org.apache.lucene.codecs.Codec with name 'Lucene912' does not exist.  You need to add the corresponding JAR file supporting this SPI to your classpath.  The current classpath supports the following names: [Elasticsearch814, Lucene99, Lucene70, Lucene80, Lucene84, Lucene86, Lucene87, Lucene90, Lucene91, Lucene92, Lucene94, Lucene95, BWCLucene70Codec, Lucene62, Lucene60, SimpleText]
        at org.apache.lucene.core@9.10.0/org.apache.lucene.util.NamedSPILoader.lookup(NamedSPILoader.java:113)
        at org.apache.lucene.core@9.10.0/org.apache.lucene.codecs.Codec.forName(Codec.java:118)
        at org.apache.lucene.core@9.10.0/org.apache.lucene.index.SegmentInfos.readCodec(SegmentInfos.java:516)
        ... 18 more

Ah! This is a classic Elasticsearch downgrade issue. The error occurs because:

  1. Elasticsearch 8.17.4 created indices using the Lucene 9.12 codec (Lucene912)
  2. Elasticsearch 8.14.3 only supports up to Lucene 9.10 codecs
  3. The existing data on your persistent volume is incompatible with the older version

The Problem

You cannot directly downgrade Elasticsearch when there’s existing data because newer versions use newer Lucene codecs that older versions cannot read. This is a fundamental Lucene/Elasticsearch limitation, not specific to Camunda.
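The incompatibility is visible in the codec names from the stack trace: the codec name encodes the Lucene version that wrote the segment, and a node cannot open segments newer than its bundled Lucene. A simplified sketch (the name-parsing rule is inferred from Lucene's naming scheme, and this deliberately ignores the backward-codecs that let a node read *older* formats, which is the direction that does work):

```python
import re

def codec_lucene_version(codec: str):
    """Extract (major, minor) from a codec name like 'Lucene912' -> (9, 12)."""
    m = re.fullmatch(r"Lucene(\d)(\d+)", codec)
    if not m:
        return None
    return (int(m.group(1)), int(m.group(2)))

def can_read(codec: str, bundled_lucene: tuple) -> bool:
    """A node can only open segments written with a codec no newer than its
    own bundled Lucene version (simplified model)."""
    v = codec_lucene_version(codec)
    return v is not None and v <= bundled_lucene

# ES 8.14.3 bundles Lucene 9.10 (per the stack trace above), but the data
# directory holds segments written by Lucene 9.12 (from ES 8.17.4):
print(can_read("Lucene912", (9, 10)))  # False -> the fatal boot error
print(can_read("Lucene99", (9, 10)))   # True  -> older segments open fine
```

This is why upgrades are one-way: old nodes can never be taught to read codecs that did not exist when they were built.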

Solution Options

Option 1: Clean Start (Recommended for Testing)

Since you’re in a development/testing environment, the simplest approach is to delete the persistent volume data and start fresh:

# Delete the deployment
helm uninstall camunda

# Delete the persistent volume claim (this removes all ES data)
kubectl delete pvc elasticsearch-master-camunda-elasticsearch-master-0

# Reinstall with ES 8.15.0
helm install camunda camunda/camunda-platform -f values.yaml

Update your values.yaml to use a compatible ES version:

elasticsearch:
  enabled: true
  image:
    tag: "8.15.0"  # Compatible with Camunda 8.7.21
  master:
    replicaCount: 1
    persistence:
      size: 3Gi

Option 2: Go Back to ES 8.17.4 (If You Need the Data)

If you have important data, revert to the original ES version:

elasticsearch:
  enabled: true
  image:
    tag: "8.17.4"  # Back to original version
  master:
    replicaCount: 1
    persistence:
      size: 3Gi

Then we can try other approaches to fix the Camunda connectivity issue.

Option 3: Use External Elasticsearch

Consider using an external Elasticsearch service where you have more control over versions:

global:
  elasticsearch:
    enabled: true
    external: true
    url:
      protocol: http
      host: <external-es-host>
      port: 9200

elasticsearch:
  enabled: false

Recommendation

For your development setup, I recommend Option 1 (clean start with ES 8.15.0). This will:

  • ✅ Avoid version compatibility issues
  • ✅ Give you a known working ES version with Camunda 8.7.21
  • ✅ Eliminate the Lucene codec problem

The data loss is acceptable since you’re in a development environment and can recreate any test data.

Which option would you prefer to try?

I tried option 1, changing the ES version to 8.15.0. Here is the output:
kubectl exec -it camunda-operate-695c8f584d-n9dx6 -- wget -qO- http://camunda-elasticsearch:9200
{
  "name" : "camunda-elasticsearch-master-0",
  "cluster_name" : "elastic",
  "cluster_uuid" : "exvMIasgRT60k9-5pwlWEA",
  "version" : {
    "number" : "8.15.0",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "1a77947f34deddb41af25e6f0ddb8e830159c179",
    "build_date" : "2024-08-05T10:05:34.233336849Z",
    "build_snapshot" : false,
    "lucene_version" : "9.11.1",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}

However, the connection issue from Operate to ES still exists.

Great! Now we have Elasticsearch 8.15.0 running and accessible. Since wget works but Operate still can’t connect, let’s dig deeper into the Java application-level issue.

Next Debugging Steps

1. Get the Full Operate Logs

Let’s see the exact error Operate is encountering:

kubectl logs camunda-operate-695c8f584d-n9dx6 --tail=100

This will show us the specific Java exception and connection details.

2. Check Operate’s Actual Configuration

Verify what configuration Operate is using:

kubectl get configmap camunda-operate-configuration -o yaml

3. Test with Verbose Connection Details

Let’s test the connection with more verbose output to see what’s happening:

# Test with curl to see HTTP response headers
kubectl exec -it camunda-operate-695c8f584d-n9dx6 -- curl -v -H "Content-Type: application/json" http://camunda-elasticsearch:9200/_cluster/health

Potential Java-Specific Issues

Since wget works but the Java client fails, this could be:

A. Java DNS Resolution Issue

The Java application might be resolving the hostname differently than the shell commands.

B. HTTP Client Configuration Issue

The Elasticsearch Java client might have different timeout or connection pool settings.

C. Application Startup Timing

Operate might be trying to connect before Elasticsearch is fully ready.
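If timing is the suspect, the standard remedy is to poll readiness with retries before giving up. A generic sketch of that pattern; `fake_probe` below is a stand-in for a real GET on `/_cluster/health`:

```python
import time

def wait_until_ready(probe, attempts: int = 5, delay: float = 0.01) -> bool:
    """Call `probe` until it returns True or the attempts run out."""
    for _ in range(attempts):
        if probe():
            return True
        time.sleep(delay)  # back off between failed attempts
    return False

# Simulated probe that only succeeds on the third call, standing in for
# an Elasticsearch health check that goes green after startup:
calls = {"n": 0}
def fake_probe():
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_until_ready(fake_probe))  # True, on the third attempt
```

This is effectively what the `initialDelaySeconds` and readiness-probe settings in the Helm values are doing at the Kubernetes level.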

Temporary Workaround - Increase Timeouts

Try adding explicit timeout settings to your Operate configuration:

operate:
  configuration: |-
    camunda:
      operate:
        elasticsearch:
          url: "http://camunda-elasticsearch:9200"
          socketTimeout: 60000
          connectTimeout: 30000
        zeebeElasticsearch:
          url: "http://camunda-elasticsearch:9200"
          prefix: "zeebe-record"
          socketTimeout: 60000
          connectTimeout: 30000
    logging:
      level:
        io.camunda.operate: DEBUG
        org.elasticsearch: DEBUG

This will:

  • Increase connection and socket timeouts
  • Enable debug logging for Elasticsearch connections

Alternative Test - Direct IP Connection

Let’s also test if it’s a DNS issue by trying the direct service IP:

# Get the service IP
kubectl get svc camunda-elasticsearch

# Test from Operate pod using the IP directly
kubectl exec -it camunda-operate-695c8f584d-n9dx6 -- wget -qO- http://<SERVICE_IP>:9200

Please share:

  1. The full Operate logs (kubectl logs camunda-operate-695c8f584d-n9dx6 --tail=100)
  2. The current ConfigMap (kubectl get configmap camunda-operate-configuration -o yaml)

This will help identify whether it’s a timeout, DNS, HTTP client configuration, or another Java-specific issue.

Operate is now able to connect and gets responses back from ES; however, it is not finding some indices.
When I check the indices present on the ES node, I see only the Operate indices created.

I have no name!@camunda-elasticsearch-master-0:/$ curl http://localhost:9200/_cat/indices?v
health status index                                     uuid                   pri rep docs.count docs.deleted store.size pri.store.size dataset.size
green  open   operate-operation-8.4.1_                  IVMTSOXgTr6x2fHcnWnS6Q   1   0          0            0       249b           249b         249b
green  open   operate-job-8.6.0_                        tFmP7RSPSP2C0JVGO4IkcQ   1   0          0            0       249b           249b         249b
green  open   operate-metric-8.3.0_                     I73ZaY7rS4uiL0gJ4w7xPg   1   0          0            0       249b           249b         249b
green  open   operate-import-position-8.3.0_            fY-7BEUgTxmekXDTLTcIaQ   1   0          0            0       249b           249b         249b
green  open   operate-migration-steps-repository-1.1.0_ 9tbVgABoTOCDbxO8NtKBoQ   1   0         27            1     29.6kb         29.6kb       29.6kb
green  open   operate-decision-8.3.0_                   QbpSgfLaRrmsx6omEm24MA   1   0          0            0       249b           249b         249b
green  open   operate-flownode-instance-8.3.1_          ACkQaSoSTviW5ZTIvYtTJQ   1   0          0            0       249b           249b         249b
green  open   operate-event-8.3.0_                      NB4skJ2_R2K8m2bYciM7bA   1   0          0            0       249b           249b         249b
green  open   operate-incident-8.3.1_                   Dl50FgecSpu-vL1578CbGA   1   0          0            0       249b           249b         249b
green  open   operate-variable-8.3.0_                   a9jUEX68T2KgggjoaioOQw   1   0          0            0       249b           249b         249b
green  open   operate-message-8.5.0_                    cB2ciZd_RhSOp5Lh25mOsg   1   0          0            0       249b           249b         249b
green  open   operate-decision-requirements-8.3.0_      ytTiz12kTuKMqzYzW9oetA   1   0          0            0       249b           249b         249b
green  open   operate-batch-operation-1.0.0_            1Z5mY1k6SXqH05ugXqk9UQ   1   0          0            0       249b           249b         249b
green  open   operate-web-session-1.1.0_                Wlgaqea5R8i6mJ-TLzyicg   1   0          0            0       249b           249b         249b
green  open   operate-list-view-8.3.0_                  9mSkeuhaQJWfsyVLxjt43Q   1   0          0            0       249b           249b         249b
green  open   operate-metadata-8.8.0_                   FRzWLoDbS4yPP6n5vtjgfA   1   0          1            0      4.6kb          4.6kb        4.6kb
green  open   operate-post-importer-queue-8.3.0_        VaMsNEV9RlqcofEH0wLo-g   1   0          0            0       249b           249b         249b
green  open   operate-user-task-8.5.0_                  bd-gsm1KSu6cm_KHoJxsRg   1   0          0            0       249b           249b         249b
green  open   operate-user-1.2.0_                       5HZNAzI9TqGvEzW95ksO9Q   1   0          3            0      6.8kb          6.8kb        6.8kb
green  open   operate-process-8.3.0_                    -DN42bthTjK8ThzuzpVo8w   1   0          0            0       249b           249b         249b
green  open   operate-decision-instance-8.3.0_          xuMSvavlQIqcPx05qeYV4g   1   0          0            0       249b           249b         249b
green  open   operate-sequence-flow-8.3.0_              ZgxfE-6vT16K5WVzhuweqw   1   0          0            0       249b           249b         249b

Below is the Operate log:

Operate is a tool in Camunda 8 stack to provide visibility into all your processes and resolve any incident from a centralized platform. More information can be found on https://camunda.com/platform/operate/

License
The licensing information for Operate available at: https://docs.camunda.io/docs/reference/licenses/

Login with the credentials:
  User:     demo
  Password: demo

   ___   _   __  __ _   _ _  _ ___   _      ___  ___ ___ ___    _ _____ ___
  / __| /_\ |  \/  | | | | \| |   \ /_\    / _ \| _ \ __| _ \  /_\_   _| __|
 | (__ / _ \| |\/| | |_| | .` | |) / _ \  | (_) |  _/ _||   / / _ \| | | _|
  \___/_/ \_\_|  |_|\___/|_|\_|___/_/ \_\  \___/|_| |___|_|_\/_/ \_\_| |___|

                                                           8.7.21


[2025-12-26 14:10:09.649] [main] INFO
        io.camunda.application.StandaloneOperate - Starting StandaloneOperate v8.7.21 using Java 21.0.9 with PID 7 (/usr/local/operate/lib/camunda-zeebe-8.7.21.jar started by camunda in /usr/local/operate)
[2025-12-26 14:10:09.658] [main] INFO
        io.camunda.application.StandaloneOperate - The following 3 profiles are active: "operate", "standalone", "auth"
[2025-12-26 14:10:15.881] [main] WARN
        org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'io.camunda.operate.webapp.security.CustomMethodSecurityExpressionHandler' of type [io.camunda.operate.webapp.security.CustomMethodSecurityExpressionHandler] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying). Is this bean getting eagerly injected/applied to a currently created BeanPostProcessor [metricsConfiguration]? Check the corresponding BeanPostProcessor declaration and its dependencies/advisors. If this bean does not have to be post-processed, declare it with ROLE_INFRASTRUCTURE.
[2025-12-26 14:10:15.885] [main] WARN
        org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'io.camunda.operate.webapp.security.MethodSecurityConfig' of type [io.camunda.operate.webapp.security.MethodSecurityConfig$$SpringCGLIB$$0] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying). Is this bean getting eagerly injected/applied to a currently created BeanPostProcessor [metricsConfiguration]? Check the corresponding BeanPostProcessor declaration and its dependencies/advisors. If this bean does not have to be post-processed, declare it with ROLE_INFRASTRUCTURE.
[2025-12-26 14:10:16.589] [main] INFO
        org.springframework.boot.web.embedded.tomcat.TomcatWebServer - Tomcat initialized with port 8080 (http)
[2025-12-26 14:10:16.858] [main] INFO
        org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext - Root WebApplicationContext: initialization completed in 6884 ms
[2025-12-26 14:10:17.973] [netty-messaging-event-epoll-server-0] INFO
        io.atomix.cluster.messaging.impl.NettyMessagingService - TCP server listening for connections on 10.244.0.39:26502
[2025-12-26 14:10:18.130] [netty-messaging-event-epoll-server-0] INFO
        io.atomix.cluster.messaging.impl.NettyMessagingService - Started messaging service bound to [10.244.0.39:26502], advertising 10.244.0.39:26502, and using plaintext
[2025-12-26 14:10:18.183] [netty-unicast-event-nio-client-0] INFO
        io.atomix.cluster.messaging.impl.NettyUnicastService - Started plaintext unicast service bound to 0.0.0.0:26502, advertising 10.244.0.39:26502
[2025-12-26 14:10:18.222] [atomix-cluster-0] INFO
        io.atomix.cluster.discovery.BootstrapDiscoveryProvider - Local node Node{id=camunda-operate-7d9598c5d5-xshpv, address=10.244.0.39:26502} joined the bootstrap service
[2025-12-26 14:10:18.243] [atomix-cluster-0] INFO
        io.atomix.cluster.protocol.swim - Started
[2025-12-26 14:10:18.244] [atomix-cluster-0] INFO
        io.atomix.cluster.impl.DefaultClusterMembershipService - Started cluster membership service for member Member{id=camunda-operate-7d9598c5d5-xshpv, address=10.244.0.39:26502, properties={}}
[2025-12-26 14:10:18.245] [atomix-cluster-0] INFO
        io.atomix.cluster.messaging.impl.DefaultClusterCommunicationService - Started
[2025-12-26 14:10:18.247] [atomix-cluster-0] INFO
        io.atomix.cluster.messaging.impl.DefaultClusterEventService - Started
[2025-12-26 14:10:18.363] [main] WARN
        io.camunda.search.connect.es.ElasticsearchConnector - Username and/or password for are empty. Basic authentication for elasticsearch is not used.
[2025-12-26 14:10:18.832] [main] INFO
        io.camunda.operate.connect.OperateDateTimeFormatter - rfc3339ApiDateFormat is set to false, operate API will format datetimes in the existing format
[2025-12-26 14:10:19.139] [main] WARN
        io.camunda.service.license.CamundaLicense - No license detected when one is expected. Please provide a license through the CAMUNDA_LICENSE_KEY environment variable.
[2025-12-26 14:10:19.273] [main] DEBUG
        io.camunda.operate.connect.ElasticsearchConnector - Creating Elasticsearch connection...
[2025-12-26 14:10:19.279] [main] WARN
        io.camunda.operate.connect.ElasticsearchConnector - Username and/or password for are empty. Basic authentication for elasticsearch is not used.
[2025-12-26 14:10:19.904] [elasticsearch-rest-client-1-thread-2] DEBUG
        org.elasticsearch.client.RestClient - request [GET http://camunda-elasticsearch:9200/] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:19.947] [main] DEBUG
        org.elasticsearch.client.RestClient - request [GET http://camunda-elasticsearch:9200/_cluster/health?master_timeout=30s&level=cluster&timeout=30s] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:19.954] [main] WARN
        io.camunda.operate.connect.ElasticsearchConnector - Elasticsearch cluster is not accessible
[2025-12-26 14:10:20.145] [main] INFO
        io.camunda.operate.schema.SchemaStartup - SchemaStartup started.
[2025-12-26 14:10:20.753] [main] DEBUG
        io.camunda.operate.schema.IndexSchemaValidator - Getting all indices for operate-job*
[2025-12-26 14:10:20.771] [main] DEBUG
        org.elasticsearch.client.RestClient - request [GET http://camunda-elasticsearch:9200/operate-job*?master_timeout=30s] returned [HTTP/1.1 200 OK]
        io.camunda.operate.schema.IndexSchemaValidator - Index fields are up to date. Index name: decision.
[2025-12-26 14:10:21.031] [main] DEBUG
        io.camunda.operate.schema.IndexSchemaValidator - Index fields are up to date. Index name: decision-requirements.
[2025-12-26 14:10:21.033] [main] DEBUG
        io.camunda.operate.schema.IndexSchemaValidator - Index fields are up to date. Index name: import-position.
[2025-12-26 14:10:21.103] [main] DEBUG
        org.elasticsearch.client.RestClient - request [GET http://camunda-elasticsearch:9200/operate*?master_timeout=30s] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:21.136] [main] DEBUG
        org.elasticsearch.client.RestClient - request [GET http://camunda-elasticsearch:9200/_alias/operate*] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:21.139] [main] INFO
        io.camunda.operate.schema.SchemaStartup - SchemaStartup: schema won't be created, it either already exist, or schema creation is disabled in configuration.
[2025-12-26 14:10:21.148] [main] DEBUG
        org.elasticsearch.client.RestClient - request [HEAD http://camunda-elasticsearch:9200/operate-metadata-8.8.0_] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:21.172] [main] DEBUG
        org.elasticsearch.client.RestClient - request [GET http://camunda-elasticsearch:9200/operate-metadata-8.8.0_/_doc/schema-version] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:21.182] [main] INFO
        io.camunda.operate.schema.SchemaStartup - SchemaStartup: migrate schema.
[2025-12-26 14:10:21.186] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 1.1.0-0_variable_script.json
[2025-12-26 14:10:21.306] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 1.2.0-0_list-view_script.json
[2025-12-26 14:10:21.310] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 1.2.0-1_incident_script.json
[2025-12-26 14:10:21.311] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 1.2.0-2_user_script.json
[2025-12-26 14:10:21.313] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 1.2.0-3_user_script.json
[2025-12-26 14:10:21.314] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 1.2.0-4_user_script.json
[2025-12-26 14:10:21.315] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 1.2.0-5_user_delete-fields_script.json
[2025-12-26 14:10:21.316] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 1.3.0-0_incident_pending.json
[2025-12-26 14:10:21.318] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 8.1.0-0_list-view_pendingIncident.json
[2025-12-26 14:10:21.321] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 8.1.0-1_list-remove_fields.json
[2025-12-26 14:10:21.324] [main] INFO
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Read step 8.2.0-0_add_sequence_import.json
[2025-12-26 14:10:21.354] [main] DEBUG
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Find all steps from Elasticsearch at http://camunda-elasticsearch:9200
[2025-12-26 14:10:21.455] [main] DEBUG
        org.elasticsearch.client.RestClient - request [POST http://camunda-elasticsearch:9200/operate-migration-steps-repository-1.1.0_/_search?typed_keys=true&max_concurrent_shard_requests=5&ignore_unavailable=true&expand_wildcards=open&allow_no_indices=true&ignore_throttled=false&search_type=query_then_fetch&batched_reduce_size=512] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:21.497] [main] DEBUG
        org.elasticsearch.client.RestClient - request [POST http://camunda-elasticsearch:9200/operate-migration-steps-repository-1.1.0_/_search?typed_keys=true&max_concurrent_shard_requests=5&ignore_unavailable=true&expand_wildcards=open&allow_no_indices=true&ignore_throttled=false&scroll=60000ms&search_type=query_then_fetch&batched_reduce_size=512] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:21.518] [main] DEBUG
        org.elasticsearch.client.RestClient - request [POST http://camunda-elasticsearch:9200/_search/scroll] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:21.535] [main] DEBUG
        org.elasticsearch.client.RestClient - request [POST http://camunda-elasticsearch:9200/_search/scroll] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:21.547] [main] DEBUG
        org.elasticsearch.client.RestClient - request [POST http://camunda-elasticsearch:9200/_search/scroll] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:21.555] [main] DEBUG
        org.elasticsearch.client.RestClient - request [DELETE http://camunda-elasticsearch:9200/_search/scroll] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:21.566] [main] DEBUG
        org.elasticsearch.client.RestClient - request [GET http://camunda-elasticsearch:9200/operate-migration-steps-repository-1.1.0_?master_timeout=30s] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:21.576] [main] DEBUG
        org.elasticsearch.client.RestClient - request [POST http://camunda-elasticsearch:9200/operate-migration-steps-repository-1.1.0_/_refresh] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:21.590] [migration_1] INFO
        io.camunda.operate.schema.migration.Migrator - Check if index decision needs to migrate.
[2025-12-26 14:10:21.594] [migration_5] INFO
        io.camunda.operate.schema.migration.Migrator - Check if index metric needs to migrate.
[2025-12-26 14:10:21.618] [migration_1] DEBUG
        io.camunda.operate.schema.IndexSchemaValidator - Getting all indices for operate-decision*
[2025-12-26 14:10:21.593] [migration_4] INFO
        io.camunda.operate.schema.migration.Migrator - Check if index metadata needs to migrate.
[2025-12-26 14:10:21.619] [migration_4] DEBUG
        io.camunda.operate.schema.IndexSchemaValidator - Getting all indices for operate-metadata*
[2025-12-26 14:10:21.591] [migration_2] INFO
        io.camunda.operate.schema.migration.Migrator - Check if index decision-requirements needs to migrate.
[2025-12-26 14:10:21.591] [migration_3] INFO
        io.camunda.operate.schema.migration.Migrator - Check if index import-position needs to migrate.
[2025-12-26 14:10:21.619] [migration_2] DEBUG
        io.camunda.operate.schema.IndexSchemaValidator - Getting all indices for operate-decision-requirements*
[2025-12-26 14:10:21.620] [migration_3] DEBUG
        io.camunda.operate.schema.IndexSchemaValidator - Getting all indices for operate-import-position*
[2025-12-26 14:10:21.628] [migration_5] DEBUG
        io.camunda.operate.schema.IndexSchemaValidator - Getting all indices for operate-metric*
        org.elasticsearch.client.RestClient - request [POST http://camunda-elasticsearch:9200/operate-migration-steps-repository-1.1.0_/_search?typed_keys=true&max_concurrent_shard_requests=5&ignore_unavailable=true&expand_wildcards=open&allow_no_indices=true&ignore_throttled=false&search_type=query_then_fetch&batched_reduce_size=512] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:21.872] [migration_5] DEBUG
        org.elasticsearch.client.RestClient - request [POST http://camunda-elasticsearch:9200/operate-migration-steps-repository-1.1.0_/_search?typed_keys=true&max_concurrent_shard_requests=5&ignore_unavailable=true&expand_wildcards=open&allow_no_indices=true&ignore_throttled=false&search_type=query_then_fetch&batched_reduce_size=512] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:21.878] [migration_2] DEBUG
        org.elasticsearch.client.RestClient - request [POST http://camunda-elasticsearch:9200/operate-migration-steps-repository-1.1.0_/_search?typed_keys=true&max_concurrent_shard_requests=5&ignore_unavailable=true&expand_wildcards=open&allow_no_indices=true&ignore_throttled=false&search_type=query_then_fetch&batched_reduce_size=512] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:21.883] [migration_1] DEBUG
        org.elasticsearch.client.RestClient - request [POST http://camunda-elasticsearch:9200/operate-migration-steps-repository-1.1.0_/_search?typed_keys=true&max_concurrent_shard_requests=5&ignore_unavailable=true&expand_wildcards=open&allow_no_indices=true&ignore_throttled=false&search_type=query_then_fetch&batched_reduce_size=512] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:21.886] [migration_4] DEBUG
        org.elasticsearch.client.RestClient - request [GET http://camunda-elasticsearch:9200/operate-batch-operation*?master_timeout=30s] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:21.888] [migration_4] DEBUG
        io.camunda.operate.schema.migration.Migrator - Check if index flownode-instance needs to migrate.
[2025-12-26 14:10:21.944] [migration_2] DEBUG
        io.camunda.operate.schema.IndexSchemaValidator - Getting all indices for operate-flownode-instance*
[2025-12-26 14:10:21.949] [migration_3] DEBUG
        org.elasticsearch.client.RestClient - request [POST http://camunda-elasticsearch:9200/operate-migration-steps-repository-1.1.0_/_search?typed_keys=true&max_concurrent_shard_requests=5&ignore_unavailable=true&expand_wildcards=open&allow_no_indices=true&ignore_throttled=false&search_type=query_then_fetch&batched_reduce_size=512] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:21.967] [migration_5] DEBUG
        org.elasticsearch.client.RestClient - request [DELETE http://camunda-elasticsearch:9200/_search/scroll] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:21.968] [migration_5] INFO
        io.camunda.operate.schema.migration.Migrator - No migration needed for process, no previous indices found and no data initializer.
        io.camunda.operate.schema.IndexSchemaValidator - Getting all indices for operate-job*
[2025-12-26 14:10:22.006] [migration_2] DEBUG
        org.elasticsearch.client.RestClient - request [POST http://camunda-elasticsearch:9200/operate-migration-steps-repository-1.1.0_/_search?typed_keys=true&max_concurrent_shard_requests=5&ignore_unavailable=true&expand_wildcards=open&allow_no_indices=true&ignore_throttled=false&search_type=query_then_fetch&batched_reduce_size=512] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:22.018] [migration_5] DEBUG
        org.elasticsearch.client.RestClient - request [GET http://camunda-elasticsearch:9200/operate-incident*?master_timeout=30s] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:22.019] [migration_5] DEBUG
        io.camunda.operate.schema.migration.elasticsearch.ElasticsearchStepsRepository - Find 'not applied steps' for index incident from Elasticsearch at http://camunda-elasticsearch:9200
[2025-12-26 14:10:22.022] [migration_4] DEBUG
[2025-12-26 14:10:22.070] [migration_5] DEBUG
[2025-12-26 14:10:22.181] [migration_1] DEBUG
        org.elasticsearch.client.RestClient - request [DELETE http://camunda-elasticsearch:9200/_search/scroll] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:22.182] [migration_1] INFO
        io.camunda.operate.schema.migration.Migrator - No migration needed for list-view, no previous indices found and no data initializer.
[2025-12-26 14:10:22.182] [migration_1] INFO
        io.camunda.operate.schema.migration.Migrator - Check if index user-task needs to migrate.
[2025-12-26 14:10:22.182] [migration_1] DEBUG
        io.camunda.operate.schema.IndexSchemaValidator - Getting all indices for operate-user-task*
[2025-12-26 14:10:22.186] [migration_5] DEBUG
        org.elasticsearch.client.RestClient - request [DELETE http://camunda-elasticsearch:9200/_search/scroll] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:22.187] [migration_5] INFO
        io.camunda.operate.schema.migration.Migrator - No migration needed for incident, no previous indices found and no data initializer.
[2025-12-26 14:10:22.192] [migration_5] INFO
[2025-12-26 14:10:22.242] [migration_1] DEBUG
        org.elasticsearch.client.RestClient - request [POST http://camunda-elasticsearch:9200/operate-migration-steps-repository-1.1.0_/_search?typed_keys=true&max_concurrent_shard_requests=5&ignore_unavailable=true&expand_wildcards=open&allow_no_indices=true&ignore_throttled=false&scroll=60000ms&search_type=query_then_fetch&batched_reduce_size=512] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:22.244] [migration_1] INFO
        io.camunda.operate.schema.migration.Migrator - No migration needed for user-task, no previous indices found and no data initializer.
[2025-12-26 14:10:22.245] [migration_5] DEBUG
        org.elasticsearch.client.RestClient - request [POST http://camunda-elasticsearch:9200/operate-migration-steps-repository-1.1.0_/_search?typed_keys=true&max_concurrent_shard_requests=5&ignore_unavailable=true&expand_wildcards=open&allow_no_indices=true&ignore_throttled=false&search_type=query_then_fetch&batched_reduce_size=512] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:22.246] [migration_3] DEBUG
        org.elasticsearch.client.RestClient - request [POST http://camunda-elasticsearch:9200/operate-migration-steps-repository-1.1.0_/_search?typed_keys=true&max_concurrent_shard_requests=5&ignore_unavailable=true&expand_wildcards=open&allow_no_indices=true&ignore_throttled=false&scroll=60000ms&search_type=query_then_fetch&batched_reduce_size=512] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:22.255] [migration_3] DEBUG
        org.elasticsearch.client.RestClient - request [POST http://camunda-elasticsearch:9200/_search/scroll] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:22.262] [migration_3] DEBUG
        org.elasticsearch.client.RestClient - request [DELETE http://camunda-elasticsearch:9200/_search/scroll] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:22.263] [migration_3] INFO
        io.camunda.operate.schema.migration.Migrator - No migration needed for sequence-flow, no previous indices found and no data initializer.
[2025-12-26 14:10:22.279] [migration_5] DEBUG
        org.elasticsearch.client.RestClient - request [POST http://camunda-elasticsearch:9200/operate-migration-steps-repository-1.1.0_/_search?typed_keys=true&max_concurrent_shard_requests=5&ignore_unavailable=true&expand_wildcards=open&allow_no_indices=true&ignore_throttled=false&scroll=60000ms&search_type=query_then_fetch&batched_reduce_size=512] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:22.295] [migration_5] DEBUG
        org.elasticsearch.client.RestClient - request [POST http://camunda-elasticsearch:9200/_search/scroll] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:22.301] [migration_5] DEBUG
        org.elasticsearch.client.RestClient - request [DELETE http://camunda-elasticsearch:9200/_search/scroll] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:22.302] [migration_5] INFO
        io.camunda.operate.schema.migration.Migrator - No migration needed for variable, no previous indices found and no data initializer.
[2025-12-26 14:10:22.302] [main] INFO
        io.camunda.operate.schema.SchemaStartup - SchemaStartup finished.
[2025-12-26 14:10:22.320] [main] INFO
        io.camunda.operate.WebappModuleConfiguration - Starting module: webapp
[2025-12-26 14:10:22.380] [main] INFO
        io.camunda.operate.management.ModelMetricProvider - Register BPMN/DMN model metrics.
[2025-12-26 14:10:22.436] [main] DEBUG
        io.camunda.operate.connect.ElasticsearchConnector - Creating Elasticsearch connection...
[2025-12-26 14:10:22.437] [main] WARN
        io.camunda.operate.connect.ElasticsearchConnector - Username and/or password for are empty. Basic authentication for elasticsearch is not used.
[2025-12-26 14:10:22.447] [elasticsearch-rest-client-2-thread-2] DEBUG
        org.elasticsearch.client.RestClient - request [GET http://camunda-elasticsearch:9200/] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:22.452] [main] DEBUG
        org.elasticsearch.client.RestClient - request [GET http://camunda-elasticsearch:9200/_cluster/health?master_timeout=30s&level=cluster&timeout=30s] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:22.453] [main] WARN
        io.camunda.operate.connect.ElasticsearchConnector - Elasticsearch cluster is not accessible
[2025-12-26 14:10:22.462] [main] INFO
        io.camunda.operate.ArchiverModuleConfiguration - Starting module: archiver
[2025-12-26 14:10:22.482] [main] INFO
        io.camunda.operate.zeebe.ZeebeConnector - Use plaintext connection to zeebe
[2025-12-26 14:10:23.194] [main] INFO
        io.camunda.operate.ImportModuleConfiguration - Starting module: importer
[2025-12-26 14:10:23.205] [main] DEBUG
        io.camunda.operate.connect.ElasticsearchConnector - Creating ElasticsearchClient ...
[2025-12-26 14:10:23.207] [main] WARN
        io.camunda.operate.connect.ElasticsearchConnector - Username and/or password for are empty. Basic authentication for elasticsearch is not used.
[2025-12-26 14:10:23.317] [main] DEBUG
        org.elasticsearch.client.RestClient - request [GET http://camunda-elasticsearch:9200/_cluster/health] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:23.365] [main] INFO
        io.camunda.operate.connect.ElasticsearchConnector - Elasticsearch cluster health: Green
[2025-12-26 14:10:23.365] [main] WARN
        io.camunda.operate.connect.ElasticsearchConnector - Elasticsearch cluster is not accessible
[2025-12-26 14:10:23.817] [main] INFO
        io.camunda.operate.webapp.StartupBean - INIT: Create users in elasticsearch if not exists ...
[2025-12-26 14:10:23.836] [main] DEBUG
        org.elasticsearch.client.RestClient - request [POST http://camunda-elasticsearch:9200/operate-user-1.2.0_alias/_search?typed_keys=true&max_concurrent_shard_requests=5&search_type=query_then_fetch&batched_reduce_size=512] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:23.852] [main] DEBUG
        org.elasticsearch.client.RestClient - request [POST http://camunda-elasticsearch:9200/operate-user-1.2.0_alias/_search?typed_keys=true&max_concurrent_shard_requests=5&search_type=query_then_fetch&batched_reduce_size=512] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:23.864] [main] DEBUG
        org.elasticsearch.client.RestClient - request [POST http://camunda-elasticsearch:9200/operate-user-1.2.0_alias/_search?typed_keys=true&max_concurrent_shard_requests=5&search_type=query_then_fetch&batched_reduce_size=512] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:23.865] [main] INFO
        io.camunda.operate.webapp.StartupBean - INIT: Start operation executor...
[2025-12-26 14:10:23.866] [main] INFO
        io.camunda.operate.webapp.StartupBean - INIT: DONE
[2025-12-26 14:10:23.883] [Thread-13] DEBUG
        org.elasticsearch.client.RestClient - request [POST http://camunda-elasticsearch:9200/operate-operation-8.4.1_/_search?typed_keys=true&max_concurrent_shard_requests=5&search_type=query_then_fetch&batched_reduce_size=512] returned [HTTP/1.1 200 OK]
[2025-12-26 14:10:23.896] [Thread-13] DEBUG
        io.camunda.operate.store.elasticsearch.ElasticsearchBatchRequest - Execute batchRequest with 0 requests and refresh
[2025-12-26 14:10:23.897] [Thread-13] DEBUG
        io.camunda.operate.webapp.elasticsearch.writer.BatchOperationWriter - 0 operations locked
[2025-12-26 14:10:24.999] [main] INFO
        io.camunda.operate.archiver.Archiver - INIT: Start archiving data...
[2025-12-26 14:10:25.000] [main] DEBUG
        io.camunda.operate.zeebe.PartitionHolder - Requesting partition ids
[2025-12-26 14:10:25.276] [main] WARN
        io.camunda.operate.zeebe.PartitionHolder - Error occurred when requesting partition ids from Zeebe client: io exception
io.camunda.zeebe.client.api.command.ClientStatusException: io exception
        at io.camunda.zeebe.client.impl.ZeebeClientFutureImpl.transformExecutionException(ZeebeClientFutureImpl.java:122) ~[zeebe-client-java-8.7.21.jar:8.7.21]
        at io.camunda.zeebe.client.impl.ZeebeClientFutureImpl.join(ZeebeClientFutureImpl.java:54) ~[zeebe-client-java-8.7.21.jar:8.7.21]
        at io.camunda.webapps.zeebe.StandalonePartitionSupplier.getPartitionsCount(StandalonePartitionSupplier.java:23) ~[camunda-zeebe-8.7.21.jar:8.7.21]
        at io.camunda.operate.zeebe.PartitionHolder.getPartitionIdsFromZeebe(PartitionHolder.java:109) ~[operate-common-8.7.21.jar:8.7.21]
        at io.camunda.operate.zeebe.PartitionHolder.getPartitionIdsWithWaitingTimeAndRetries(PartitionHolder.java:67) ~[operate-common-8.7.21.jar:8.7.21]
        at io.camunda.operate.zeebe.PartitionHolder.getPartitionIds(PartitionHolder.java:50) ~[operate-common-8.7.21.jar:8.7.21]
        at io.camunda.operate.archiver.Archiver.startArchiving(Archiver.java:49) ~[operate-archiver-8.7.21.jar:8.7.21]
        at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103) ~[?:?]
        at java.base/java.lang.reflect.Method.invoke(Method.java:580) ~[?:?]

:tada: Great progress! Your Camunda Operate is now successfully connecting to Elasticsearch and functioning properly.

:white_check_mark: What’s Working Now

Looking at your logs, I can see that:

  1. Elasticsearch connection is successful:

    • request [GET http://camunda-elasticsearch:9200/] returned [HTTP/1.1 200 OK]
    • Elasticsearch cluster health: Green
  2. Operate is fully operational:

    • Schema migration completed successfully
    • All Operate indices are created and healthy
    • User management is working
    • Web interface is accessible at port 8080
  3. The only remaining issue is that Zeebe is not running, which is why you see:

    • Only Operate indices (no Zeebe record indices)
    • Error occurred when requesting partition ids from Zeebe client: io exception
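If you have not already exposed the web interface, a simple port-forward makes Operate reachable on localhost (the service name here is an assumption based on your pod name; adjust it to whatever `kubectl get svc` reports):

```shell
# Forward local port 8080 to the Operate service inside the cluster
# (service name "camunda-operate" is assumed from the pod name above)
kubectl port-forward svc/camunda-operate 8080:80
# Then open http://localhost:8080 and log in with demo/demo
```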

:wrench: The Solution That Fixed It

The key fixes that resolved your Elasticsearch connectivity issue were:

  1. Corrected the global Elasticsearch configuration structure:

    global:
      elasticsearch:
        enabled: true
        url:                    # ✅ Properly nested under elasticsearch
          protocol: http
          host: camunda-elasticsearch
          port: 9200
    
  2. Removed the conflicting disableExporter: true setting

  3. Used a compatible Elasticsearch version (8.15.0) instead of the newer 8.17.4
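As a quick sanity check, you can verify both reachability and the running Elasticsearch version from inside the cluster with a throwaway curl pod (the image and service name below are assumptions, not output from your setup):

```shell
# Run a one-off pod that curls the Elasticsearch service, then deletes itself
kubectl run es-check --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://camunda-elasticsearch:9200
# The JSON response should report the cluster name and a "version.number"
# matching the image tag you deployed (e.g. 8.15.0)
```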

:rocket: Next Steps

To complete your setup, you need to ensure Zeebe is running:

# Check if Zeebe pods are running
kubectl get pods | grep zeebe

# If not running, check Zeebe logs
kubectl logs <zeebe-pod-name>

Once Zeebe is running, it will:

  • Create the zeebe-record-* indices in Elasticsearch
  • Allow Operate to display process instances and workflow data
  • Enable the full Camunda 8 functionality
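To confirm the exporter is actually writing, you can list the Zeebe indices directly; the Elasticsearch pod name below is an assumption based on typical chart naming, so substitute your own:

```shell
# List Zeebe exporter indices once the broker has exported its first records
kubectl exec -it camunda-elasticsearch-master-0 -- \
  curl -s "http://localhost:9200/_cat/indices/zeebe-record-*?v"
```

An empty result simply means no records have been exported yet.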

:books: Key Learnings

  • Configuration precedence: Global Helm values vs component-specific configuration
  • Version compatibility: Matching Camunda and Elasticsearch versions
  • Network connectivity: Distinguishing between network issues and application-level problems
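When debugging precedence issues like this, it helps to inspect what Helm actually applied; a sketch, with the release and chart names assumed:

```shell
# Show the user-supplied values in effect for the release
helm get values camunda
# Render the chart locally and check where the Elasticsearch URL ends up
helm template camunda camunda/camunda-platform -f values.yaml \
  | grep -i -A 3 elasticsearch
```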

Your Operate is now ready to monitor and manage your Camunda processes once Zeebe is running!

The error snippet from the Zeebe log is below:

[2025-12-27 05:50:03.345] [atomix-cluster-heartbeat-sender] WARN
        io.atomix.cluster.protocol.swim.probe - 0 - Failed to probe camunda-operate-85f856b696-qdlgc
java.util.concurrent.TimeoutException: Request atomix-membership-probe to 10.244.0.51:26502 timed out in PT0.1S
        at io.atomix.cluster.messaging.impl.NettyMessagingService.lambda$sendAndReceive$4(NettyMessagingService.java:288) ~[zeebe-atomix-cluster-8.7.21.jar:8.7.21]
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) ~[?:?]
        at java.base/java.util.concurrent.FutureTask.run(Unknown Source) ~[?:?]
        at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source) ~[?:?]
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) ~[?:?]
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) ~[?:?]
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.128.Final.jar:4.1.128.Final]
        at java.base/java.lang.Thread.run(Unknown Source) [?:?]
[2025-12-27 05:50:03.743] [zb-actors-2] [{actor-name=ZeebePartition-1, actor-scheduler=Broker-0, partitionId=1}] INFO
        io.camunda.zeebe.broker.system - Registered component Exporter-1:Exporter-1
[2025-12-27 05:50:03.745] [zb-actors-2] [{actor-name=ZeebePartition-1, actor-scheduler=Broker-0, partitionId=1}] INFO
        io.camunda.zeebe.broker.system - Transition to LEADER on term 5 - transitioning BackupApiRequestHandler
[2025-12-27 05:50:03.851] [zb-actors-0] [{actor-name=ZeebePartition-1, actor-scheduler=Broker-0, partitionId=1}] INFO
        io.camunda.zeebe.broker.system - Transition to LEADER on term 5 - transitioning Admin API
[2025-12-27 05:50:03.947] [zb-actors-0] [{actor-name=ZeebePartition-1, actor-scheduler=Broker-0, partitionId=1}] INFO
        io.camunda.zeebe.broker.system - Transition to LEADER on term 5 completed
[2025-12-27 05:50:04.048] [zb-actors-0] [{actor-name=ZeebePartition-1, actor-scheduler=Broker-0, partitionId=1}] INFO
        io.camunda.zeebe.broker.system - ZeebePartitionHealth-1 recovered, marking it as healthy
[2025-12-27 05:50:04.143] [zb-actors-2] [{actor-name=HealthCheckService, actor-scheduler=Broker-0}] INFO
        io.camunda.zeebe.broker.system - Partition-1 recovered, marking it as healthy
[2025-12-27 05:50:04.144] [zb-actors-2] [{actor-name=HealthCheckService, actor-scheduler=Broker-0}] INFO
        io.camunda.zeebe.broker.system - Partition-1 recovered, marking it as healthy
[2025-12-27 05:50:04.451] [atomix-cluster-heartbeat-sender] WARN
        io.atomix.cluster.protocol.swim.probe - 0 - Failed to probe camunda-zeebe-gateway-5984586c9b-7zxlv
java.util.concurrent.TimeoutException: Request atomix-membership-probe to 10.244.0.52:26502 timed out in PT0.1S
        at io.atomix.cluster.messaging.impl.NettyMessagingService.lambda$sendAndReceive$4(NettyMessagingService.java:288) ~[zeebe-atomix-cluster-8.7.21.jar:8.7.21]
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) ~[?:?]
        at java.base/java.util.concurrent.FutureTask.run(Unknown Source) ~[?:?]
        at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source) ~[?:?]
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) ~[?:?]
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) ~[?:?]
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.128.Final.jar:4.1.128.Final]
        at java.base/java.lang.Thread.run(Unknown Source) [?:?]
[2025-12-27 05:50:06.142] [main] INFO
        org.springframework.boot.web.embedded.tomcat.TomcatWebServer - Tomcat started on port 8080 (http) with context path '/'
[2025-12-27 05:50:06.257] [zb-fs-workers-2] [{actor-name=Exporter-1, actor-scheduler=Broker-0, partitionId=1}] INFO
        io.camunda.zeebe.broker.exporter.elasticsearch - Exporter opened
[2025-12-27 05:50:07.446] [main] INFO
        org.springframework.boot.web.embedded.tomcat.TomcatWebServer - Tomcat initialized with port 9600 (http)
[2025-12-27 05:50:07.744] [main] INFO
        org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext - Root WebApplicationContext: initialization completed in 1587 ms
[2025-12-27 05:50:08.059] [main] INFO
        org.springframework.boot.actuate.endpoint.web.EndpointLinksResolver - Exposing 15 endpoints beneath base path '/actuator'
[2025-12-27 05:50:09.150] [main] INFO
        org.springframework.boot.web.embedded.tomcat.TomcatWebServer - Tomcat started on port 9600 (http) with context path '/'
[2025-12-27 05:50:09.343] [main] INFO
        io.camunda.application.StandaloneBroker - Started StandaloneBroker in 65.892 seconds (process running for 72.472)
[2025-12-27 05:50:10.248] [main] WARN
        org.springdoc.core.events.SpringDocAppInitializer - SpringDoc /v3/api-docs endpoint is enabled by default. To disable it in production, set the property 'springdoc.api-docs.enabled=false'
[2025-12-27 05:50:10.248] [main] WARN
        org.springdoc.core.events.SpringDocAppInitializer - SpringDoc /swagger-ui.html endpoint is enabled by default. To disable it in production, set the property 'springdoc.swagger-ui.enabled=false'
[2025-12-27 05:50:13.441] [zb-actors-0] [{actor-name=ClusterConfigGossip, actor-scheduler=Broker-0}] WARN
        io.camunda.zeebe.dynamic.config.gossip.ClusterConfigurationGossiper - Failed to sync with camunda-operate-85f856b696-qdlgc
java.util.concurrent.CompletionException: io.atomix.cluster.messaging.MessagingException$NoRemoteHandler: No remote message handler registered for this message, subject cluster-topology-sync
        at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(Unknown Source) ~[?:?]
        at java.base/java.util.concurrent.CompletableFuture.completeThrowable(Unknown Source) ~[?:?]
        at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown Source) ~[?:?]
        at java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source) ~[?:?]
        at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(Unknown Source) ~[?:?]
        at io.atomix.cluster.messaging.impl.NettyMessagingService.lambda$executeOnPooledConnection$25(NettyMessagingService.java:696) ~[zeebe-atomix-cluster-8.7.21.jar:8.7.21]
        at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:31) ~[guava-33.3.1-jre.jar:?]
        at io.atomix.cluster.messaging.impl.NettyMessagingService.lambda$executeOnPooledConnection$26(NettyMessagingService.java:694) ~[zeebe-atomix-cluster-8.7.21.jar:8.7.21]
        at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(Unknown Source) ~[?:?]
        at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(Unknown Source) ~[?:?]
        at java.base/java.util.concurrent.CompletableFuture.postComplete(Unknown Source) ~[?:?]
        at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(Unknown Source) ~[?:?]
        at io.atomix.cluster.messaging.impl.AbstractClientConnection.dispatch(AbstractClientConnection.java:45) ~[zeebe-atomix-cluster-8.7.21.jar:8.7.21]
        at io.atomix.cluster.messaging.impl.AbstractClientConnection.dispatch(AbstractClientConnection.java:29) ~[zeebe-atomix-cluster-8.7.21.jar:8.7.21]
        at io.atomix.cluster.messaging.impl.NettyMessagingService$MessageDispatcher.channelRead0(NettyMessagingService.java:1255) ~[zeebe-atomix-cluster-8.7.21.jar:8.7.21]
        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) ~[netty-transport-4.1.128.Final.jar:4.1.128.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) ~[netty-transport-4.1.128.Final.jar:4.1.128.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.128.Final.jar:4.1.128.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[netty-transport-4.1.128.Final.jar:4.1.128.Final]
        at io.atomix.cluster.messaging.impl.NettyMessagingService$HeartBeatHandler.channelRead(NettyMessagingService.java:951) ~[zeebe-atomix-cluster-8.7.21.jar:8.7.21]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) ~[netty-transport-4.1.128.Final.jar:4.1.128.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.128.Final.jar:4.1.128.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[netty-transport-4.1.128.Final.jar:4.1.128.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) ~[netty-codec-4.1.128.Final.jar:4.1.128.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) ~[netty-codec-4.1.128.Final.jar:4.1.128.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) ~[netty-transport-4.1.128.Final.jar:4.1.128.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.128.Final.jar:4.1.128.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[netty-transport-4.1.128.Final.jar:4.1.128.Final]
        at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:289) ~[netty-handler-4.1.128.Final.jar:4.1.128.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) ~[netty-transport-4.1.128.Final.jar:4.1.128.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.128.Final.jar:4.1.128.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[netty-transport-4.1.128.Final.jar:4.1.128.Final]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1357) ~[netty-transport-4.1.128.Final.jar:4.1.128.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) ~[netty-transport-4.1.128.Final.jar:4.1.128.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[netty-transport-4.1.128.Final.jar:4.1.128.Final]
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:868) ~[netty-transport-4.1.128.Final.jar:4.1.128.Final]
        at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:799) ~[netty-transport-classes-epoll-4.1.128.Final.jar:4.1.128.Final]
        at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:501) ~[netty-transport-classes-epoll-4.1.128.Final.jar:4.1.128.Final]
        at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:399) ~[netty-transport-classes-epoll-4.1.128.Final.jar:4.1.128.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:998) ~[netty-common-4.1.128.Final.jar:4.1.128.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.128.Final.jar:4.1.128.Final]
        at java.base/java.lang.Thread.run(Unknown Source) ~[?:?]
Caused by: io.atomix.cluster.messaging.MessagingException$NoRemoteHandler: No remote message handler registered for this message, subject cluster-topology-sync
[2025-12-27 05:50:17.249] [http-nio-0.0.0.0-9600-exec-1] INFO
        org.springframework.web.servlet.DispatcherServlet - Initializing Servlet 'dispatcherServletRegistration'
[2025-12-27 05:50:17.251] [http-nio-0.0.0.0-9600-exec-1] INFO
        org.springframework.web.servlet.DispatcherServlet - Completed initialization in 1 ms

values.yaml

#Global configuration
global:
  elasticsearch:
    enabled: true
    url:
      protocol: http
      host: camunda-elasticsearch
      port: 9200

  identity:
    auth:
      enabled: false

#Elasticsearch configuration
elasticsearch:
  enabled: true
  image:
    tag: "8.15.0"
  master:
    replicaCount: 1
    persistence:
      size: 3Gi

#Operate configuration
operate:
  enabled: true
  configuration: |-
    camunda:
      operate:
        elasticsearch:
          url: "http://camunda-elasticsearch:9200"
          socketTimeout: 60000
          connectTimeout: 30000
    logging:
      level:
        io.camunda.operate: DEBUG
        org.elasticsearch: DEBUG
  readinessProbe:
    enabled: true
    initialDelaySeconds: 180
  migration:
    enabled: false
  image:
    repository: camunda/operate
    tag: 8.7.21

#Zeebe configuration
zeebe:
  enabled: true
  clusterSize: 1
  partitionCount: 1
  replicationFactor: 1
  pvcSize: 2Gi
  resources:
    requests:
      cpu: 400m
      memory: 800Mi
    limits:
      cpu: 400m
      memory: 800Mi
  image:
    repository: camunda/zeebe
    tag: 8.7.21

#Zeebe gateway configuration
zeebeGateway:
  enabled: true
  replicas: 1
  resources:
    requests:
      cpu: 400m
      memory: 800Mi
    limits:
      cpu: 400m
      memory: 800Mi
  image:
    repository: camunda/zeebe
    tag: 8.7.21

identity:
  enabled: false

identityKeycloak:
  enabled: false

connectors:
  enabled: false
  inbound:
    mode: disabled

optimize:
  enabled: false

tasklist:
  enabled: false
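For reference, a values file like the one above would typically be applied as follows (the release and chart names are assumptions):

```shell
# Install or upgrade the release with the values file shown above
helm upgrade --install camunda camunda/camunda-platform -f values.yaml
# Watch the pods come up
kubectl get pods -w
```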

Any update on the above comment?