Camunda 8.7 exporter state UNRECOGNIZED

I’m trying to install Camunda 8.7 in Kubernetes using Helm chart version 12.5.0.
We want to use our external Elasticsearch, so I’ve added the following to our values file according to the documentation:

global:
  elasticsearch:
    enabled: true
    external: true
    auth:
      username: elastic
      password: pass
    url:
      protocol: http
      host: elastic.example.com
      port: 443

elasticsearch:
  enabled: false

(using our own host etc., of course)
But when the Zeebe pod starts up, I first see this in the logs:

echo 'No exporters available.'

then, after the application starts, I get a stack trace with:

io.camunda.zeebe.broker.system - Aborting startup process due to exception during step Cluster Topology Manager
java.lang.IllegalStateException: Unknown exporter state UNRECOGNIZED

Is there something else I need to add to the values file?

Hi there! :wave:

I can see you’re encountering an issue with Camunda 8.7 and external Elasticsearch configuration. The “Unknown exporter state UNRECOGNIZED” error combined with “No exporters available” suggests there’s a configuration issue with the exporter setup.

Classification: Problem :wrench:

Based on your configuration and the error messages, here are the most likely causes and solutions:

1. Password Propagation Issue

There’s a known issue where passwords aren’t always correctly propagated from Helm values to the Zeebe exporter configuration. Try using a Kubernetes secret instead:

global:
  elasticsearch:
    enabled: true
    external: true
    auth:
      username: elastic
      existingSecret: elasticsearch-credentials
      existingSecretKey: password
    url:
      protocol: http
      host: elastic.example.com
      port: 443

elasticsearch:
  enabled: false

Create the secret:

kubectl create secret generic elasticsearch-credentials --from-literal=password=your-actual-password

2. Protocol Configuration

I notice you’re using protocol: http with port: 443. Port 443 is typically used for HTTPS. Make sure your protocol matches your actual Elasticsearch setup:

  • If using HTTPS: protocol: https with port: 443
  • If using HTTP: protocol: http with port: 80 or 9200
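As a quick sanity check of the protocol/port combination (substituting your real host and credentials, which are placeholders here), you can probe the endpoint directly; a reachable cluster answers with its cluster info as JSON:

```shell
# Hypothetical host/credentials - substitute your own values.
# A successful request returns the Elasticsearch cluster-info JSON.
curl -u elastic:pass https://elastic.example.com:443

# If the HTTPS request fails with an SSL error but this one works,
# your endpoint is plain HTTP and the protocol value must match:
curl -u elastic:pass http://elastic.example.com:9200
```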

3. Verify Generated Configuration

Check if the exporter configuration is properly generated in the Zeebe ConfigMap:

kubectl get configmap <zeebe-configmap-name> -o yaml

Look for the exporters section - it should contain the Elasticsearch exporter configuration with your credentials.
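For reference, a correctly rendered exporters section should look roughly like this (host, prefix, and username here are illustrative, not your actual values):

```yaml
zeebe:
  broker:
    exporters:
      elasticsearch:
        className: "io.camunda.zeebe.exporter.ElasticsearchExporter"
        args:
          url: "https://elastic.example.com:443"
          index:
            prefix: "zeebe-record"
          authentication:
            username: "elastic"
            # the password is typically injected via an environment
            # variable from the secret, not rendered into the ConfigMap
```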

4. TLS Configuration

If your Elasticsearch uses TLS, you may need to add TLS configuration:

global:
  elasticsearch:
    enabled: true
    external: true
    tls:
      enabled: true
      existingSecret: <your-tls-secret>  # if using custom certificates
    auth:
      username: elastic
      existingSecret: elasticsearch-credentials
      existingSecretKey: password
    url:
      protocol: https
      host: elastic.example.com
      port: 443
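The TLS secret referenced above is expected to hold your CA certificate. Assuming your CA file is named ca.crt and your chart version expects that key name (check the docs for your exact version — some setups expect a JKS truststore instead), it can be created with:

```shell
# Hypothetical secret and file names - the key the chart looks for
# may vary by chart version (commonly ca.crt or a JKS truststore).
kubectl create secret generic your-tls-secret \
  --from-file=ca.crt=./ca.crt
```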

Next Steps:

  1. Try the Kubernetes secret approach first
  2. Verify your protocol/port combination matches your Elasticsearch setup
  3. Check the generated Zeebe ConfigMap for the exporter configuration
  4. Review Zeebe logs for any additional error details

Could you please share:

  • The exact Elasticsearch setup you’re using (HTTP vs HTTPS, port configuration)
  • The generated Zeebe ConfigMap content (you can redact sensitive information)
  • Any additional error logs from the Zeebe pod

This will help us pinpoint the exact cause of the issue.

Did you solve your problem?
I have the exact same problem with an almost identical setup (Kubernetes, Elasticsearch within Kubernetes) and get the same “No exporters available” message.

I enabled TLS in the global ES config and I’m using Kubernetes secrets:

elasticsearch:
    enabled: true
    external: true
    tls:
      enabled: true
      existingSecret: elastic-jks # Secret with ca.crt for Elasticsearch
    auth:
      username: ${namespace_name}
      existingSecret: ${namespace_name}
      existingSecretKey: elasticsearch-instance-secret
    url:
      protocol: ${elastic_service_protocol} # https
      host: ${elastic_service_url}          # 
      port: ${elastic_service_port}         # 9200
    prefix: ${namespace_name}-zeebe

All other apps like Tasklist, Operate, etc. can write their indices to the ES cluster, but Zeebe cannot.

The ES service is configured to listen on port 9200:

Name:                     elasticsearch
Namespace:                namespace-elastic01
Labels:                   app.kubernetes.io/component=master
                          app.kubernetes.io/instance=elasticsearch
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=elasticsearch
                          app.kubernetes.io/version=8.16.1
                          helm.sh/chart=elasticsearch-21.4.0
Annotations:              meta.helm.sh/release-name: elasticsearch
                          meta.helm.sh/release-namespace: namespace-elastic01
Selector:                 app.kubernetes.io/component=master,app.kubernetes.io/instance=elasticsearch,app.kubernetes.io/name=elasticsearch
Type:                     ClusterIP
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       1.1.1.1 # changed here due to sensitivity
IPs:                      1.1.1.1 # changed here due to sensitivity
Port:                     tcp-rest-api  9200/TCP
TargetPort:               rest-api/TCP
Endpoints:                10.244.1.199:9200,10.244.2.225:9200,10.244.3.89:9200
Port:                     tcp-transport  9300/TCP
TargetPort:               9300/TCP
Endpoints:                10.244.1.199:9300,10.244.2.225:9300,10.244.3.89:9300
Session Affinity:         None
Internal Traffic Policy:  Cluster
Events:                   <none>

Is this port config correct?

The Zeebe configmap looks like this:

apiVersion: v1
data:
  application.yaml: |
    zeebe:
      broker:
        exporters:
          elasticsearch:
            className: "io.camunda.zeebe.exporter.ElasticsearchExporter"
            args:
              authentication:
                username: "user01"
              url: "https://es.namespace-elastic01.svc.cluster.local:9200"
              index:
                prefix: "namespace01-zeebe"
        gateway:
          enable: true
          network:
            port: 26500
          security:
            enabled: false
            authentication:
              mode: none
        network:
          host: 0.0.0.0
          commandApi:
            port: 26501
          internalApi:
            port: 26502
          monitoringApi:
            port: "9600"
        cluster:
          clusterSize: "3"
          replicationFactor: "3"
          partitionsCount: "3"
          clusterName: namespace01-zeebe
        threads:
          cpuThreadCount: "3"
          ioThreadCount: "3"
        data:
          snapshotPeriod: "5m"
          disk:
            freeSpace:
              processing: "2GB"
              replication: "1GB"

    # Camunda Database configuration
    camunda.database:
      type: elasticsearch
      # Cluster name
      clusterName: elasticsearch
      username: "user01"
      # Elasticsearch full url
      url: "https://es.namespace-elastic01.svc.cluster.local:9200"
  broker-log4j2.xml: ""
  startup.sh: |
    #!/usr/bin/env bash
    set -eux -o pipefail

    export ZEEBE_BROKER_CLUSTER_NODEID=${ZEEBE_BROKER_CLUSTER_NODEID:-$[${K8S_NAME##*-} * 1 + 0]}

    if [ "$(ls -A /exporters/)" ]; then
      mkdir -p /usr/local/zeebe/exporters/
      cp -a /exporters/*.jar /usr/local/zeebe/exporters/
    else
      echo "No exporters available."
    fi

    if [ "${ZEEBE_RESTORE}" = "true" ]; then
      exec /usr/local/zeebe/bin/restore --backupId=${ZEEBE_RESTORE_FROM_BACKUP_ID}
    else
      exec /usr/local/zeebe/bin/broker
    fi
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: namespace01
    meta.helm.sh/release-namespace: namespace01
  labels:
    app: camunda-platform
    app.kubernetes.io/component: zeebe-broker
    app.kubernetes.io/instance: namespace01
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: camunda-platform
    app.kubernetes.io/part-of: camunda-platform
    app.kubernetes.io/version: 8.7.10
    helm.sh/chart: camunda-platform-12.4.0
  name: namespace01-zeebe-configuration
  namespace: namespace01
  resourceVersion: "69889002"

I solved this problem by adding JVM options (via JAVA_OPTS) pointing to a truststore in JKS format that holds the Elasticsearch self-signed certificate plus a copy of the JDK’s cacerts (all the official CA certificates), together with the truststore password. It was also helpful to deploy a BPMN file to the instance to see it working and to force the remaining errors to surface.
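For anyone hitting the same TLS trust issue, here is a sketch of that workaround. File names, the alias, and the password are placeholders, and the standard cacerts password shown is the JDK default; adjust paths for your JDK layout:

```shell
# Start from the JDK's bundled CA store so official certificates are kept
# (default cacerts password is "changeit" on standard JDK installs).
cp "$JAVA_HOME/lib/security/cacerts" elastic-truststore.jks

# Import the Elasticsearch self-signed CA into the copy.
keytool -importcert -noprompt \
  -keystore elastic-truststore.jks \
  -storepass changeit \
  -alias elastic-ca \
  -file ca.crt
```

Then mount the truststore into the Zeebe pod (e.g. from a secret) and point the JVM at it via JAVA_OPTS, along the lines of `-Djavax.net.ssl.trustStore=/path/elastic-truststore.jks -Djavax.net.ssl.trustStorePassword=changeit`.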


@Cris_Ron - is there any feedback or suggestions you have on how to make the documentation more helpful here?