Camunda 8.7 exporter state UNRECOGNIZED

I’m trying to install Camunda 8.7 on Kubernetes using Helm chart version 12.5.0.
We want to use our own external Elasticsearch, so I’ve added the following to our values file according to the documentation:

global:
  elasticsearch:
    enabled: true
    external: true
    auth:
      username: elastic
      password: pass
    url:
      protocol: http
      host: elastic.example.com
      port: 443

elasticsearch:
  enabled: false

(using our own host etc ofc)
But when the zeebe pod starts up, in the logs, I first get

  • echo ‘No exporters available.’

then after the application starts I get a stack trace with:
io.camunda.zeebe.broker.system - Aborting startup process due to exception during step Cluster Topology Manager
java.lang.IllegalStateException: Unknown exporter state UNRECOGNIZED

Is there something else I need to add to the values file?

Hi there! :wave:

I can see you’re encountering an issue with Camunda 8.7 and external Elasticsearch configuration. The “Unknown exporter state UNRECOGNIZED” error combined with “No exporters available” suggests there’s a configuration issue with the exporter setup.

Based on your configuration and the error messages, here are the most likely causes and solutions:

1. Password Propagation Issue

There’s a known issue where passwords aren’t always correctly propagated from Helm values to the Zeebe exporter configuration. Try using a Kubernetes secret instead:

global:
  elasticsearch:
    enabled: true
    external: true
    auth:
      username: elastic
      existingSecret: elasticsearch-credentials
      existingSecretKey: password
    url:
      protocol: http
      host: elastic.example.com
      port: 443

elasticsearch:
  enabled: false

Create the secret:

kubectl create secret generic elasticsearch-credentials --from-literal=password=your-actual-password
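Before installing the chart, it can be worth sanity-checking what the chart will actually read from that secret (this uses the secret name from the example above; substitute your own):

```shell
# Decode the password key of the credentials secret. The output must
# match your actual Elasticsearch password exactly, with no trailing
# newline (use --from-literal, not --from-file, to avoid stray newlines).
kubectl get secret elasticsearch-credentials \
  -o jsonpath='{.data.password}' | base64 -d
```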

2. Protocol Configuration

I notice you’re using protocol: http with port: 443. Port 443 is typically used for HTTPS. Make sure your protocol matches your actual Elasticsearch setup:

  • If using HTTPS: protocol: https with port: 443
  • If using HTTP: protocol: http with port: 80 or 9200
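A quick way to confirm which protocol your endpoint actually speaks is to probe it directly. The host, port, and credentials below are the placeholders from the example values, not real ones; substitute your own:

```shell
# If this succeeds, the endpoint speaks HTTPS; -k skips certificate
# verification in case a self-signed CA is in use.
curl -sk -u elastic:pass https://elastic.example.com:443

# If it fails with an SSL/TLS error, try plain HTTP on the default
# Elasticsearch port instead:
curl -s -u elastic:pass http://elastic.example.com:9200
```

A healthy Elasticsearch root endpoint responds with a JSON document containing the cluster name and version; whichever probe returns that is the protocol/port pair your values file should use.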

3. Verify Generated Configuration

Check if the exporter configuration is properly generated in the Zeebe ConfigMap:

kubectl get configmap <zeebe-configmap-name> -o yaml

Look for the exporters section - it should contain the Elasticsearch exporter configuration with your credentials.
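One way to pull out just that section (the ConfigMap name follows the pattern `<release>-zeebe-configuration`; the `sed` range assumes the rendered layout where `gateway:` follows `exporters:`, as in typical chart output):

```shell
# Extract the rendered application.yaml from the Zeebe ConfigMap and
# print only the exporters block. Note the escaped dot in the jsonpath
# key, which is required because the data key itself contains a dot.
kubectl get configmap <release>-zeebe-configuration \
  -o jsonpath='{.data.application\.yaml}' | sed -n '/exporters:/,/gateway:/p'
```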

4. TLS Configuration

If your Elasticsearch uses TLS, you may need to add TLS configuration:

global:
  elasticsearch:
    enabled: true
    external: true
    tls:
      enabled: true
      existingSecret: <your-tls-secret>  # if using custom certificates
    auth:
      username: elastic
      existingSecret: elasticsearch-credentials
      existingSecretKey: password
    url:
      protocol: https
      host: elastic.example.com
      port: 443

Next Steps:

  1. Try the Kubernetes secret approach first
  2. Verify your protocol/port combination matches your Elasticsearch setup
  3. Check the generated Zeebe ConfigMap for the exporter configuration
  4. Review Zeebe logs for any additional error details

Could you please share:

  • The exact Elasticsearch setup you’re using (HTTP vs HTTPS, port configuration)
  • The generated Zeebe ConfigMap content (you can redact sensitive information)
  • Any additional error logs from the Zeebe pod

This will help us pinpoint the exact cause of the issue.

Did you solve your problem?
I have the exact same problem with an almost identical setup (Kubernetes, Elasticsearch within the cluster) and am getting the same error “No exporters available”.

I enabled TLS in the global ES config and I’m using Kubernetes secrets:

elasticsearch:
    enabled: true
    external: true
    tls:
      enabled: true
      existingSecret: elastic-jks # Secret with ca.crt for Elasticsearch
    auth:
      username: ${namespace_name}
      existingSecret: ${namespace_name}
      existingSecretKey: elasticsearch-instance-secret
    url:
      protocol: ${elastic_service_protocol} # https
      host: ${elastic_service_url}          # 
      port: ${elastic_service_port}         # 9200
    prefix: ${namespace_name}-zeebe

All other apps like Tasklist, Operate… can write their indices to the ES cluster, but Zeebe cannot.

The ES service is configured to listen on port 9200:

Name:                     elasticsearch
Namespace:                namespace-elastic01
Labels:                   app.kubernetes.io/component=master
                          app.kubernetes.io/instance=elasticsearch
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=elasticsearch
                          app.kubernetes.io/version=8.16.1
                          helm.sh/chart=elasticsearch-21.4.0
Annotations:              meta.helm.sh/release-name: elasticsearch
                          meta.helm.sh/release-namespace: namespace-elastic01
Selector:                 app.kubernetes.io/component=master,app.kubernetes.io/instance=elasticsearch,app.kubernetes.io/name=elasticsearch
Type:                     ClusterIP
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       1.1.1.1 # changed here due to sensitivity
IPs:                      1.1.1.1 # changed here due to sensitivity
Port:                     tcp-rest-api  9200/TCP
TargetPort:               rest-api/TCP
Endpoints:                10.244.1.199:9200,10.244.2.225:9200,10.244.3.89:9200
Port:                     tcp-transport  9300/TCP
TargetPort:               9300/TCP
Endpoints:                10.244.1.199:9300,10.244.2.225:9300,10.244.3.89:9300
Session Affinity:         None
Internal Traffic Policy:  Cluster
Events:                   <none>

Is this port config correct?

The Zeebe configmap looks like this:

apiVersion: v1
data:
  application.yaml: |
    zeebe:
      broker:
        exporters:
          elasticsearch:
            className: "io.camunda.zeebe.exporter.ElasticsearchExporter"
            args:
              authentication:
                username: "user01"
              url: "https://es.namespace-elastic01.svc.cluster.local:9200"
              index:
                prefix: "namespace01-zeebe"
        gateway:
          enable: true
          network:
            port: 26500
          security:
            enabled: false
            authentication:
              mode: none
        network:
          host: 0.0.0.0
          commandApi:
            port: 26501
          internalApi:
            port: 26502
          monitoringApi:
            port: "9600"
        cluster:
          clusterSize: "3"
          replicationFactor: "3"
          partitionsCount: "3"
          clusterName: namespace01-zeebe
        threads:
          cpuThreadCount: "3"
          ioThreadCount: "3"
        data:
          snapshotPeriod: "5m"
          disk:
            freeSpace:
              processing: "2GB"
              replication: "1GB"

    # Camunda Database configuration
    camunda.database:
      type: elasticsearch
      # Cluster name
      clusterName: elasticsearch
      username: "user01"
      # Elasticsearch full url
      url: "https://es.namespace-elastic01.svc.cluster.local:9200"
  broker-log4j2.xml: ""
  startup.sh: |
    #!/usr/bin/env bash
    set -eux -o pipefail

    export ZEEBE_BROKER_CLUSTER_NODEID=${ZEEBE_BROKER_CLUSTER_NODEID:-$[${K8S_NAME##*-} * 1 + 0]}

    if [ "$(ls -A /exporters/)" ]; then
      mkdir -p /usr/local/zeebe/exporters/
      cp -a /exporters/*.jar /usr/local/zeebe/exporters/
    else
      echo "No exporters available."
    fi

    if [ "${ZEEBE_RESTORE}" = "true" ]; then
      exec /usr/local/zeebe/bin/restore --backupId=${ZEEBE_RESTORE_FROM_BACKUP_ID}
    else
      exec /usr/local/zeebe/bin/broker
    fi
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: namespace01
    meta.helm.sh/release-namespace: namespace01
  labels:
    app: camunda-platform
    app.kubernetes.io/component: zeebe-broker
    app.kubernetes.io/instance: namespace01
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: camunda-platform
    app.kubernetes.io/part-of: camunda-platform
    app.kubernetes.io/version: 8.7.10
    helm.sh/chart: camunda-platform-12.4.0
  name: namespace01-zeebe-configuration
  namespace: namespace01
  resourceVersion: "69889002"

I solved this problem by adding to JAVA_OPTIONS a truststore that holds the Elasticsearch self-signed certificate plus a copy of cacerts (the official CA certificates), in JKS format, along with its password. It was also helpful to deploy a BPMN file to the instance to see it working and to force out the last errors.
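For anyone hitting the same TLS issue, here is a sketch of that truststore approach. The file names, alias, and password are illustrative, not from the thread, and the exact mount path and env var wiring depend on your chart values:

```shell
# Start from the JVM's bundled cacerts so the official public CAs keep
# working, then import the Elasticsearch self-signed CA on top.
cp "$JAVA_HOME/lib/security/cacerts" elastic-truststore.jks
keytool -importcert -noprompt \
  -keystore elastic-truststore.jks -storepass changeit \
  -alias elastic-ca -file elasticsearch-ca.crt

# Mount the resulting truststore into the Zeebe pod (e.g. from a
# secret) and point the JVM at it via JAVA_OPTIONS, for example:
#   -Djavax.net.ssl.trustStore=/path/to/elastic-truststore.jks
#   -Djavax.net.ssl.trustStorePassword=changeit
```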

@Cris_Ron - is there any feedback or suggestions you have on how to make the documentation more helpful here?

@nathan.loding Hi, as a new user of Camunda I can say that the Camunda documentation wasn’t helpful during the whole process of configuring and installing Camunda on Kubernetes with Entra ID authentication.
There is some basic information that helps you get a rough grasp of what to do, but a great many small configuration elements were missing: the correct and full configuration of Entra ID app registrations and their permissions, the correct configuration of environment variables for authentication, and key information about authenticating to Elasticsearch (the mandatory truststore config for Zeebe). There are also env variables and config elements that are documented nowhere, which makes it difficult to tell whether ChatGPT is telling the truth or hallucinating again^^
Generally speaking, I would like a complete and working config YAML file for Entra ID, with documented config elements and values that say exactly whether each element is required or can be set differently/optionally.
I’m currently trying to install Camunda 8.8 and the documentation seems even worse than 8.7; so many things changed and nothing works again, even after following the upgrade guide^^

Hi @Cris_Ron - thanks for the candid feedback. I’ve shared this with our documentation team, as well as the team that manages the Helm charts and configurations. To answer two other points you made:

  • I don’t know if the trust store config is mandatory in all environments; there are so many different possible environment configurations out there that it is impossible to have a guide for all of them. If it is mandatory at all times, I’ve asked the teams to update the docs.

  • Regarding an example Helm values.yaml file, you can view an example configuration with EntraID here.

Hi @nathan.loding , thx for caring and forwarding my feedback.

I saw this example file already and it helped for the basic config, but there were still small configs to be done before the integration was fully working, especially for Camunda 8.7 and the Entra ID app registration configs. Also, there are several different places where e.g. Identity is described with YAML file examples, but they often show different config values, which makes it difficult to know which values are actually necessary for my environment.

8.8 seems simplified regarding Entra ID config tho.

The reason I was installing 8.7 was that 8.8-alpha was giving me a lot of headaches. But since 8.7 was so different in configuration and also gave me headaches, I waited for the real 8.8 release and moved on from there, so this question will probably remain an unsolved mystery.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.