Camunda 8.8: setting elasticsearch user/password

Hi, I’m trying to install Camunda 8.8-alpha4.1 (yes, I’ve been trying since that was the latest alpha; yes, I am questioning my life choices). I’m using the Helm chart, deploying to our Kubernetes cluster.
Keycloak, PostgreSQL, and Identity run fine now, but I have an issue with the camunda-platform-zeebe pods not being able to connect to Elasticsearch.

Early in the logs I get a warning:

[main] WARN io.camunda.search.connect.es.ElasticsearchConnector - Username and/or password for are empty. Basic authentication for elasticsearch is not used.

Then I get a stack trace starting with:

co.elastic.clients.elasticsearch._types.ElasticsearchException: [es/cluster.health] failed: [security_exception] missing authentication credentials for REST request [/_cluster/health]

This is the elasticsearch part of my values file:

global:
  elasticsearch:
    enabled: true
    external: true
    tls:
      enabled: true
      existingSecret: tls-secret
    url:
      protocol: https
      host: elkd.sumsum.co
      port: 443
    prefix: bv-loc-zeebe-record
    auth:
      username: zeebe-elast
      password: myPassword

And this is the generated camunda-platform-core-configmap (relevant zeebe part):

zeebe:
  host: 0.0.0.0
  log:
    level: "info"

  broker:
    # zeebe.broker.gateway
    gateway:
      enable: true
      network:
        host: 0.0.0.0
        port: 26500
      multitenancy:
        enabled: true

    # zeebe.broker.network
    network:
      advertisedHost: "${K8S_NAME}.${K8S_SERVICE_NAME}"
      host: 0.0.0.0
      commandApi:
        port: 26501
      internalApi:
        port: 26502

    # zeebe.broker.cluster
    cluster:
      # The value of "nodeId" is set via the "ZEEBE_BROKER_CLUSTER_NODEID" env var,
      # as it depends on the Pod name, which cannot be templated at installation time.
      # nodeId:
      initialContactPoints:
        - camunda-platform-zeebe-0.${K8S_SERVICE_NAME}:26502
        - camunda-platform-zeebe-1.${K8S_SERVICE_NAME}:26502
        - camunda-platform-zeebe-2.${K8S_SERVICE_NAME}:26502
      clusterSize: "3"
      replicationFactor: "3"
      partitionsCount: "3"
      clusterName: camunda-platform-zeebe

    # zeebe.broker.data
    data:
      snapshotPeriod: 5m
      freeSpace:
        processing: 2GB
        replication: 3GB

    # zeebe.broker.threads
    threads:
      cpuThreadCount: "3"
      ioThreadCount: "3"

    # zeebe.broker.exporters
    exporters:
      elasticsearch:
        className: "io.camunda.zeebe.exporter.ElasticsearchExporter"
        args:
          authentication:
            username: "zeebe-elast"
          url: "https://elkd.sumsum.co:443"
          index:
            prefix: "bv-loc-zeebe-record"
      CamundaExporter:
        className: "io.camunda.exporter.CamundaExporter"
        args:
          connect:
            type: elasticsearch
            url: "https://elkd.sumsum.co:443"
            username: "zeebe-elast"
          history:
            elsRolloverDateFormat: "date"
            rolloverInterval: "1d"
            rolloverBatchSize: 100
            waitPeriodBeforeArchiving: "1h"
            delayBetweenRuns: 2000
            maxDelayBetweenRuns: 60000

I’ve tried to find somewhere to add ES environment variables, but Zeebe is hidden under core in the config.

Any pointers?

Based on your configuration and the generated configmap, I can see that the username is being correctly propagated to both the ElasticsearchExporter and CamundaExporter, but the password is missing from the authentication configuration. This appears to be an issue with the Camunda 8.8-alpha4.1 Helm chart.

Your configuration looks correct according to the documentation. The issue is that while the username (zeebe-elast) is appearing in the generated configmap, the password is not being included in the authentication args for either exporter.

Potential Solutions:

1. Try Using a Kubernetes Secret (Recommended)

Instead of putting the password directly in the values file, try referencing a Kubernetes secret:

global:
  elasticsearch:
    enabled: true
    external: true
    tls:
      enabled: true
      existingSecret: tls-secret
    url:
      protocol: https
      host: elkd.sumsum.co
      port: 443
    prefix: bv-loc-zeebe-record
    auth:
      username: zeebe-elast
      existingSecret: "elasticsearch-credentials"
      existingSecretKey: "password"

Create the secret:

kubectl create secret generic elasticsearch-credentials --from-literal=password=myPassword
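To sanity-check that the secret holds what you expect, you can read it back (kubectl lines shown as comments); secret values live base64-encoded under `.data`, so the round trip below mirrors what `kubectl get secret ... -o jsonpath` plus `base64 -d` does:

```shell
# Read the stored password back out of the secret:
#
#   kubectl get secret elasticsearch-credentials \
#     -o jsonpath='{.data.password}' | base64 -d
#
# Same round trip done locally (secret values are base64-encoded in .data):
encoded=$(printf '%s' 'myPassword' | base64)
printf '%s' "$encoded" | base64 -d   # prints: myPassword
```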

2. Override Zeebe Configuration Manually

As a workaround, you can explicitly override the Zeebe exporter configuration:

zeebe:
  configuration: |
    zeebe:
      broker:
        exporters:
          elasticsearch:
            className: "io.camunda.zeebe.exporter.ElasticsearchExporter"
            args:
              url: "https://elkd.sumsum.co:443"
              index:
                prefix: "bv-loc-zeebe-record"
              authentication:
                username: "zeebe-elast"
                password: "myPassword"
          CamundaExporter:
            className: "io.camunda.exporter.CamundaExporter"
            args:
              connect:
                type: elasticsearch
                url: "https://elkd.sumsum.co:443"
                username: "zeebe-elast"
                password: "myPassword"

3. Check for Known Issues

Since you’re using an alpha version (8.8-alpha4.1), this could be a known issue. I recommend:

  1. Checking the GitHub issues: Camunda Platform Issues
  2. Trying a newer alpha version if available
  3. Filing a bug report if this issue isn’t already reported

Verification Steps:

After implementing one of the solutions above:

  1. Redeploy your Helm chart
  2. Check the generated configmap again to verify the password is now included
  3. Monitor the Zeebe pod logs to confirm the authentication warning is gone
  4. Verify that Zeebe can successfully connect to Elasticsearch
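Steps 2 and 3 can be checked without redeploying by rendering the chart locally; a sketch (release and chart names are placeholders, adjust to yours):

```shell
# Render the chart locally and check whether the password reaches the
# exporter args (release/chart names below are placeholders):
#
#   helm template camunda camunda/camunda-platform -f values.yaml \
#     | grep -B1 -A2 'authentication:'
#
# A healthy render should contain both fields; the grep below demonstrates
# the check against a sample of the expected shape:
expected='authentication:
  username: "zeebe-elast"
  password: "..."'
printf '%s\n' "$expected" | grep -q 'password:' && echo "password rendered"
```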

Additional Notes:

  • Your TLS configuration and other Elasticsearch settings look correct
  • The issue seems specific to password propagation in the alpha version
  • Using Kubernetes secrets is generally a better security practice than hardcoding passwords in values files

Would you like me to help you implement any of these solutions, or do you need assistance with creating the Kubernetes secret?

I started with using a secret (existingSecret, existingSecretKey) and got the same error; from there I switched to using password directly, just in case there was some error with the secret.

Looking at the configmap.yaml template, password doesn’t exist in the configmap:

        # zeebe.broker.exporters
        exporters:
        {{- if and (not .Values.global.elasticsearch.disableExporter) .Values.global.elasticsearch.enabled }}
          elasticsearch:
            className: "io.camunda.zeebe.exporter.ElasticsearchExporter"
            args:
              {{- if .Values.global.elasticsearch.external }}
              authentication:
                username: {{ .Values.global.elasticsearch.auth.username | quote }}
              {{- end }}
              url: {{ include "camundaPlatform.elasticsearchURL" . | quote }}
              index:
                prefix: {{ .Values.global.elasticsearch.prefix | quote }}
              {{- if .Values.core.history.retention.enabled }}
              retention:
                enabled: true
                minimumAge: {{ .Values.core.history.retention.minimumAge | quote }}
                policyName: {{ .Values.core.history.retention.policyName | quote }}
              {{- end }}

I do not want to do option 2. Our goal is to have the password in a secret

Yes, unfortunately the configmap.yaml for core in the Helm chart doesn’t have any mapping for the password:

(same template snippet as above)

I’ve tried setting it directly in the generated configmap in Kubernetes, but that didn’t help.
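Patching the rendered configmap by hand tends not to stick (the chart overwrites it on upgrade, and the pods must restart to pick it up). One way to keep the password in a secret despite the template gap is Zeebe’s environment-variable overrides: any broker setting can be supplied as an env var, so `zeebe.broker.exporters.elasticsearch.args.authentication.password` maps to `ZEEBE_BROKER_EXPORTERS_ELASTICSEARCH_ARGS_AUTHENTICATION_PASSWORD`. Whether the 8.8 chart exposes extra env vars under `core.env` is an assumption on my part; check the chart’s values reference. A sketch:

```yaml
core:
  env:
    - name: ZEEBE_BROKER_EXPORTERS_ELASTICSEARCH_ARGS_AUTHENTICATION_PASSWORD
      valueFrom:
        secretKeyRef:
          name: elasticsearch-credentials   # secret holding the ES password
          key: password
    # The CamundaExporter takes the password under args.connect, so the
    # corresponding env var name would be (assumption, derived from the key):
    - name: ZEEBE_BROKER_EXPORTERS_CAMUNDAEXPORTER_ARGS_CONNECT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: elasticsearch-credentials
          key: password
```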

Are you using Docker Compose or the Helm chart? Could you please share the link to where you downloaded the files?

I downloaded the files from GitHub - camunda/camunda-platform-helm: Camunda Platform 8 Self-Managed Helm charts

and when I install the Helm chart I go via the repo https://helm.camunda.io

Can you share the values file? (remove any sensitive data if needed)

This is the values file (cleaned):
values-yupp.yaml (18.1 KB)

It seems to work fine with version 8.7 using a secret. Could be a regression defect.