Trying to expose Zeebe outside the cluster

Hi, I run Zeebe on my Kubernetes cluster and everything works with port-forward. I am trying to change it to an ingress. I prepared an ingress, read some examples, and tried a setup similar to the one described here: ingress-nginx/docs/examples/grpc at master · kubernetes/ingress-nginx · GitHub

When I tried to deploy something via the Modeler, I got “Should point to a running zeebe cluster”.

I tried to verify that everything is running via grpcurl, but it looks like reflection is not enabled (I have no idea how to turn it on),

but even “Failed to list services: server does not support the reflection API” is a kind of success for me, since at least something answered. I have no idea how to move further.
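For reference, grpcurl can still call the gateway without reflection if you point it at gateway.proto from the Zeebe sources (the gateway-protocol module). A sketch, assuming the proto file is in the current directory; the host name is a placeholder:

grpcurl -import-path . -proto gateway.proto <your-zeebe-host>:443 gateway_protocol.Gateway/Topology
# add -insecure to skip certificate verification, or -plaintext for a non-TLS port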

I was trying to find a guide to check whether my environment variables could be the problem,
but didn't find anything. Any tips?

        - name: ZEEBE_GATEWAY_NETWORK_HOST
          value: 0.0.0.0
        - name: ZEEBE_GATEWAY_NETWORK_PORT
          value: "26500"
        - name: ZEEBE_GATEWAY_CLUSTER_HOST

If you are in dev, create a localhost Service with an ingress that points to your gateway Deployment. Something like this should work; it's what I use to expose Tasklist for local dev.

apiVersion: v1
kind: Service
metadata:
  name: gateway
  namespace: default
  labels:
    app: gateway
spec:
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 26500
      protocol: TCP
      targetPort: 26500
  selector:
    app: gateway
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - hostname: localhost
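Once that Service is up, a quick sanity check with zbctl should work (a sketch; it assumes the local gateway is reachable on localhost:26500 without TLS):

zbctl --address localhost:26500 --insecure status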

You mean on localhost? No, I am not running it on localhost, but I think my ingress and service configuration is okay. I tried the setup from ingress-nginx/docs/examples/grpc at master · kubernetes/ingress-nginx · GitHub and it was working, so I used it for my Zeebe.

I can telnet to the URL on port 443, but when I try to deploy something via the Modeler I get the “Should point to a running zeebe cluster” error.

#my service
apiVersion: v1
kind: Service
metadata:
  name: "zeebe-cluster-zeebe-gateway"
  namespace: zeebe
  labels:
    app.kubernetes.io/name: zeebe-cluster
    app.kubernetes.io/instance: zeebe-cluster
    helm.sh/chart: zeebe-cluster-0.1.0-SNAPSHOT
    app.kubernetes.io/version: "0.23.4"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: gateway
spec:
  type: ClusterIP
  ports:
    - port: 9600
      protocol: TCP
      name: http
    - port: 26500
      protocol: TCP
      name: gateway
  selector:
    app.kubernetes.io/name: zeebe-cluster
    app.kubernetes.io/instance: zeebe-cluster
    helm.sh/chart: zeebe-cluster-0.1.0-SNAPSHOT
    app.kubernetes.io/version: "0.23.4"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: gateway

#my ingress 
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: "zeebe-cluster-zeebe"
  namespace: zeebe
  labels:
    app.kubernetes.io/name: zeebe-cluster
    app.kubernetes.io/instance: zeebe-cluster
    helm.sh/chart: zeebe-cluster-0.1.0-SNAPSHOT
    app.kubernetes.io/version: "0.23.4"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: gateway
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/server-alias: "<....URL.....>"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  rules:
  - host: "<....URL.....>"  
    http:
      paths:
      -  backend:
           serviceName: "zeebe-cluster-zeebe-gateway"
           servicePort: gateway
  tls:
    - hosts:
      - "<....URL.....>"
      secretName: "zeebe-cluster-cert"
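To rule out the ingress, the Service can also be checked directly with a port-forward; a sketch, using the names from the manifests above:

kubectl -n zeebe port-forward svc/zeebe-cluster-zeebe-gateway 26500:26500
# in a second shell, talk to the gateway without TLS
zbctl --address localhost:26500 --insecure status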

So I think this is correct, but maybe the problem is in the Zeebe configuration. I tried to add ZEEBE_ADVERTISED_HOST but it didn't help.

As far as I know the Modeller cannot work with HTTPS (I tried and failed), so in my nginx I opened a second port for the Modeller and use plain HTTP there.

# Zeebe Modeller
server {
   listen 2721 http2;   # plaintext HTTP/2 (h2c), no TLS, so the Modeller can connect

   access_log /var/log/nginx/zeebe-modeller-access.log upstream_time;
   error_log /var/log/nginx/zeebe-modeller-error.log;

   # only this client IP may reach the Modeller port
   allow x.x.x.x;
   deny all;

   location / {
       grpc_pass grpc://zeebe:26500;   # forward gRPC to the gateway upstream
   }
}
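Before pointing the Modeller at that port, it can be checked from one of the allowed IPs with zbctl (a sketch; the host is a placeholder, --insecure disables TLS on the client side):

zbctl --address <nginx-host>:2721 --insecure status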

So it looks like I also needed to expose a plain HTTP port.
I tried it, but I still get the same message, “Should point to a running zeebe cluster”.
Still not sure whether the problem is in my Zeebe configuration or in my Kubernetes configuration :confused:

OK, so it looks like the problem was only with the Modeler. I tried to deploy the BPMN via zbctl and it works.
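For reference, the zbctl calls through the TLS ingress look roughly like this (host and file name are placeholders):

zbctl --address <your-zeebe-host>:443 status
zbctl --address <your-zeebe-host>:443 deploy my-workflow.bpmn
# add --certPath <ca.crt> if the ingress uses a self-signed certificate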

I am using shell scripts to upload a bunch of models through a REST API endpoint that connects to Zeebe over HTTPS; the Modeler is only for development purposes.

So I thought everything was working. But when I tried to call one of my test workflows I got this error :confused:

Caused by: io.grpc.StatusRuntimeException: UNAVAILABLE: Network closed for unknown reason

I am still searching the web for a similar problem; the first hit mentioned a mismatch where one side expects TLS but gets plain text. So in your solution, are you talking to the Zeebe gateway through HTTPS?

I use an nginx load balancer with HTTPS and the gateways with plain HTTP. The REST client connects to nginx and can run any task on any gateway in the cluster. The workers connect to the gateways directly; I do not use nginx for the workers. Actually, workers and gateways can work over HTTPS too, but I turned it off because Operate cannot connect to Zeebe over HTTPS.

something like:

version: "2"

services:
  zeebe-nginx:
    image: nginx:1.19.6
    container_name: zeebe-nginx-$ENVIRONMENT
    restart: always
    ports:
      - "$CLUSTER_GATE_PORT:443"
      - "$ELASTIC_PORT:9200"
    expose:
      - 443
    depends_on:
      - gateway
    networks:
      zeebe_network:
        ipv4_address: $CLUSTER_NETWORK.2
        aliases:
          - elasticsearch        
    volumes:
      - ./cfg/zeebe.conf:/etc/nginx/conf.d/default.conf
      - ./cfg/ssl/zeebe.crt:/ssl/zeebe.crt:ro
      - ./cfg/ssl/zeebe.key:/ssl/zeebe.key:ro
      - ./cfg/ssl/dhparam.pem:/ssl/dhparam.pem:ro
      - /etc/localtime:/etc/localtime:ro

  gateway:
    image: camunda/zeebe:1.0.0-alpha5
    container_name: zeebe-cluster-gateway-$ENVIRONMENT
    restart: always
    depends_on:
      - node0
    ports:
      - "$MODELLER_PORT:26500"
    expose:
      - 26500
    networks:
      zeebe_network:
        ipv4_address: $CLUSTER_NETWORK.3
    environment:
      - ZEEBE_LOG_LEVEL=debug
      - ZEEBE_STANDALONE_GATEWAY=true
      - ZEEBE_GATEWAY_CLUSTER_CONTACTPOINT=node0:26502
      - ZEEBE_GATEWAY_CLUSTER_MEMBERID=gateway
      - JAVA_OPTS=-Xmx4g -XX:MaxRAMPercentage=25.0 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/usr/local/zeebe/data -XX:ErrorFile=/usr/local/zeebe/data/zeebe_error%p.log -XX:+ExitOnOutOfMemoryError
    volumes:
      - ./cfg/gateway.yaml:/usr/local/zeebe/config/application.yaml:ro
      - /etc/localtime:/etc/localtime:ro

  gateway2:
    image: camunda/zeebe:1.0.0-alpha5
    container_name: zeebe-cluster-gateway2-$ENVIRONMENT
    restart: always
    depends_on:
      - node2
    expose:
      - 26500
    networks:
      zeebe_network:
        ipv4_address: $CLUSTER_NETWORK.4
    environment:
      - ZEEBE_LOG_LEVEL=debug
      - ZEEBE_STANDALONE_GATEWAY=true
      - ZEEBE_GATEWAY_CLUSTER_CONTACTPOINT=node2:26502
      - ZEEBE_GATEWAY_CLUSTER_MEMBERID=gateway2
      - JAVA_OPTS=-Xmx4g -XX:MaxRAMPercentage=25.0 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/usr/local/zeebe/data -XX:ErrorFile=/usr/local/zeebe/data/zeebe_error%p.log -XX:+ExitOnOutOfMemoryError
    volumes:
      - ./cfg/gateway.yaml:/usr/local/zeebe/config/application.yaml:ro
      - /etc/localtime:/etc/localtime:ro

  node0:
    image: camunda/zeebe:1.0.0-alpha5
    container_name: zeebe-cluster-node0-$ENVIRONMENT
    restart: always
    networks:
      zeebe_network:
        ipv4_address: $CLUSTER_NETWORK.5
    environment:
      - ZEEBE_LOG_LEVEL=debug
      - ZEEBE_BROKER_CLUSTER_NODEID=0
      - ZEEBE_BROKER_GATEWAY_CLUSTER_HOST=gateway
      - JAVA_OPTS=-Xmx4g -XX:MaxRAMPercentage=25.0 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/usr/local/zeebe/data -XX:ErrorFile=/usr/local/zeebe/data/zeebe_error%p.log -XX:+ExitOnOutOfMemoryError
    volumes:
      - ./db/node0:/usr/local/zeebe/data
      - ./cfg/broker.yaml:/usr/local/zeebe/config/application.yaml:ro
      - /etc/localtime:/etc/localtime:ro

  node1:
    image: camunda/zeebe:1.0.0-alpha5
    container_name: zeebe-cluster-node1-$ENVIRONMENT
    restart: always
    depends_on:
      - node0
    networks:
      zeebe_network:
        ipv4_address: $CLUSTER_NETWORK.6
    environment:
      - ZEEBE_LOG_LEVEL=debug
      - ZEEBE_BROKER_CLUSTER_NODEID=1
      - ZEEBE_BROKER_GATEWAY_CLUSTER_HOST=gateway
      - JAVA_OPTS=-Xmx4g -XX:MaxRAMPercentage=25.0 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/usr/local/zeebe/data -XX:ErrorFile=/usr/local/zeebe/data/zeebe_error%p.log -XX:+ExitOnOutOfMemoryError
    volumes:
      - ./db/node1:/usr/local/zeebe/data
      - ./cfg/broker.yaml:/usr/local/zeebe/config/application.yaml:ro
      - /etc/localtime:/etc/localtime:ro

  node2:
    image: camunda/zeebe:1.0.0-alpha5
    container_name: zeebe-cluster-node2-$ENVIRONMENT
    restart: always
    depends_on:
      - node1
    networks:
      zeebe_network:
        ipv4_address: $CLUSTER_NETWORK.7
    environment:
      - ZEEBE_LOG_LEVEL=debug
      - ZEEBE_BROKER_CLUSTER_NODEID=2
      - ZEEBE_BROKER_GATEWAY_CLUSTER_HOST=gateway
      - JAVA_OPTS=-Xmx4g -XX:MaxRAMPercentage=25.0 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/usr/local/zeebe/data -XX:ErrorFile=/usr/local/zeebe/data/zeebe_error%p.log -XX:+ExitOnOutOfMemoryError
    volumes:
      - ./db/node2:/usr/local/zeebe/data
      - ./cfg/broker.yaml:/usr/local/zeebe/config/application.yaml:ro
      - /etc/localtime:/etc/localtime:ro

  operate:
    image: camunda/operate:1.0.0-alpha5
    container_name: zeebe-cluster-operate-$ENVIRONMENT
    restart: always
    depends_on:
      - es01
    ports:
      - "$OPERATE_PORT:8080"
    expose:
      - 8080
    networks:
      zeebe_network:
        ipv4_address: $CLUSTER_NETWORK.8
    volumes:
      - ./cfg/operate.yml:/usr/local/operate/config/application.yml:ro
      - /etc/localtime:/etc/localtime:ro

  es01:
    image: elastic/elasticsearch:7.11.1
    container_name: zeebe-cluster-es01-$ENVIRONMENT
    restart: always
    expose:
      - 9200
    networks:
      zeebe_network:
        ipv4_address: $CLUSTER_NETWORK.9
    environment:
      - node.name=es01
      - cluster.name=elasticsearch
      - discovery.seed_hosts=es02,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - cluster.routing.allocation.disk.threshold_enabled=false
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./cfg/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
      - ./elastic/es01:/usr/share/elasticsearch/data
      - /etc/localtime:/etc/localtime:ro

  es02:
    image: elastic/elasticsearch:7.11.1
    container_name: zeebe-cluster-es02-$ENVIRONMENT
    restart: always
    depends_on:
      - es01
    expose:
      - 9200
    networks:
      zeebe_network:
        ipv4_address: $CLUSTER_NETWORK.10
    environment:
      - node.name=es02
      - cluster.name=elasticsearch
      - discovery.seed_hosts=es01,es03
      - cluster.initial_master_nodes=es01,es02,es03
      - cluster.routing.allocation.disk.threshold_enabled=false
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./cfg/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
      - ./elastic/es02:/usr/share/elasticsearch/data
      - /etc/localtime:/etc/localtime:ro

  es03:
    image: elastic/elasticsearch:7.11.1
    container_name: zeebe-cluster-es03-$ENVIRONMENT
    restart: always
    depends_on:
      - es02
    expose:
      - 9200
    networks:
      zeebe_network:
        ipv4_address: $CLUSTER_NETWORK.11
    environment:
      - node.name=es03
      - cluster.name=elasticsearch
      - discovery.seed_hosts=es01,es02
      - cluster.initial_master_nodes=es01,es02,es03
      - cluster.routing.allocation.disk.threshold_enabled=false
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./cfg/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
      - ./elastic/es03:/usr/share/elasticsearch/data
      - /etc/localtime:/etc/localtime:ro

  kibana:
    image: kibana:7.10.1
    container_name: zeebe-cluster-kibana-$ENVIRONMENT
    depends_on:
      - es01
    restart: always
    ports:
      - "$KIBANA_PORT:5601"
    expose:
      - 5601
    networks:
      zeebe_network:
        ipv4_address: $CLUSTER_NETWORK.12
    volumes:
      - ./cfg/kibana.yml:/usr/share/kibana/config/kibana.yml:ro
      - /etc/localtime:/etc/localtime:ro
    environment:
      ELASTICSEARCH_URL:    http://es01:9200
      ELASTICSEARCH_HOSTS: '["http://es01:9200","http://es02:9200","http://es03:9200"]'

  script-worker1:
    image: camunda/zeebe-script-worker:0.8.0
    container_name: zeebe-script-worker1-$ENVIRONMENT
    restart: always
    depends_on:
      - gateway
    networks:
      zeebe_network:
        ipv4_address: $CLUSTER_NETWORK.13
    environment:
      - zeebe.client.broker.contactPoint=gateway:26500
      - zeebe.client.worker.defaultName=script-worker1
      - zeebe.client.job.timeout=10000
    volumes:
      - /etc/localtime:/etc/localtime:ro

  script-worker2:
    image: camunda/zeebe-script-worker:0.8.0
    container_name: zeebe-script-worker2-$ENVIRONMENT
    restart: always
    depends_on:
      - gateway2
    environment:
      - zeebe.client.broker.contactPoint=gateway2:26500
      - zeebe.client.worker.defaultName=script-worker2
      - zeebe.client.job.timeout=10000
    networks:
      zeebe_network:
        ipv4_address: $CLUSTER_NETWORK.14
    volumes:
      - /etc/localtime:/etc/localtime:ro

  dmn-worker1:
    image: camunda/zeebe-dmn-worker:0.5.0
    container_name: zeebe-dmn-worker1-$ENVIRONMENT
    depends_on:
      - gateway
    environment:
      - zeebe.client.broker.contactPoint=gateway:26500
      - zeebe.client.worker.defaultName=dmn-worker1
      - zeebe.client.worker.dmn.repository=/dmn-repo
      - zeebe.client.job.timeout=10000
    networks:
      zeebe_network:
        ipv4_address: $CLUSTER_NETWORK.15
    volumes:
      - ./dmn:/dmn-repo
      - /etc/localtime:/etc/localtime:ro

  dmn-worker2:
    image: camunda/zeebe-dmn-worker:0.5.0
    container_name: zeebe-dmn-worker2-$ENVIRONMENT
    depends_on:
      - gateway2
    environment:
      - zeebe.client.broker.contactPoint=gateway2:26500
      - zeebe.client.worker.defaultName=dmn-worker2
      - zeebe.client.worker.dmn.repository=/dmn-repo
      - zeebe.client.job.timeout=10000
    networks:
      zeebe_network:
        ipv4_address: $CLUSTER_NETWORK.16
    volumes:
      - ./dmn:/dmn-repo
      - /etc/localtime:/etc/localtime:ro

  worker1:
    image: maximmonin/zeebe-cluster-worker
    container_name: zeebe-cluster-worker1-$ENVIRONMENT
    restart: always
    depends_on:
      - gateway
    network_mode: host
    environment:
      - ENVIRONMENT
      - SERVER
      - LogLevel=INFO
      - ZeebeUrl=gateway:26500
      - RedisUrls=$REDIS_PORTS
      - RedisPass=$REDIS_PASSWORD
      - ResponseTimeout=50000
      - LongPolling=60000
      - JobsToActivate=200
      - TaskType=InternalService
      - workerId=worker1
      # keep redis file cache in Hours
      - redisCacheHours=1
      # rotate logfile days
      - maxLogDays=14
      # rotate error logfile days
      - maxLogErrDays=60
    volumes:
      - ./workers/node/server.js:/app/server.js:ro
      - ./workers/js:/app/js
      #- ./cfg/ssl/zeebe.crt:/ssl/zeebe.crt:ro
      #- ./cfg/ssl/zeebe.key:/ssl/zeebe.key:ro
      #- ./cfg/ssl/rootcamundaCA.crt:/ssl/camundaCA.crt:ro
      - /etc/localtime:/etc/localtime:ro
    extra_hosts:
      - "gateway:$CLUSTER_NETWORK.3"
      - "camunda:$CAMUNDA"
      - "camunda2:$CAMUNDA2"

  worker2:
    image: maximmonin/zeebe-cluster-worker
    container_name: zeebe-cluster-worker2-$ENVIRONMENT
    restart: always
    depends_on:
      - gateway2
    network_mode: host
    environment:
      - ENVIRONMENT
      - SERVER
      - LogLevel=INFO
      - ZeebeUrl=gateway2:26500
      - RedisUrls=$REDIS_PORTS
      - RedisPass=$REDIS_PASSWORD
      - ResponseTimeout=50000
      - LongPolling=60000
      - JobsToActivate=200
      - TaskType=InternalService
      - workerId=worker2
      # keep redis file cache in Hours
      - redisCacheHours=1
      # rotate logfile days
      - maxLogDays=14
      # rotate error logfile days
      - maxLogErrDays=60
    volumes:
      - ./workers/node/server.js:/app/server.js:ro
      - ./workers/js:/app/js
      #- ./cfg/ssl/zeebe.crt:/ssl/zeebe.crt:ro
      #- ./cfg/ssl/zeebe.key:/ssl/zeebe.key:ro
      #- ./cfg/ssl/rootcamundaCA.crt:/ssl/camundaCA.crt:ro
      - /etc/localtime:/etc/localtime:ro
    extra_hosts:
      - "gateway2:$CLUSTER_NETWORK.4"
      - "camunda:$CAMUNDA"
      - "camunda2:$CAMUNDA2"

networks:
  zeebe_network:
    name: zeebe-cluster-$ENVIRONMENT
    driver: bridge
    driver_opts:
      com.docker.network.enable_ipv6: "false"
      com.docker.network.bridge.name: zeebe_$ENVIRONMENT
    ipam:
      driver: default
      config:
        - subnet: $CLUSTER_NETWORK.0/24
          gateway: $CLUSTER_NETWORK.1
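If you suspect the TLS/plain-text mismatch you mentioned, you can test both variants explicitly with grpcurl; a sketch (host and ports are placeholders, gateway.proto comes from the Zeebe sources):

# TLS endpoint (nginx terminates TLS on 443)
grpcurl -insecure -import-path . -proto gateway.proto <host>:443 gateway_protocol.Gateway/Topology
# plain-text endpoint (a gateway port without TLS)
grpcurl -plaintext -import-path . -proto gateway.proto <host>:26500 gateway_protocol.Gateway/Topology
# if one call succeeds and the other fails with a connection reset,
# the client and the endpoint disagree about TLS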

Thanks for that, but I still struggle.
To summarize: I am trying to expose the gateway with an ingress.
I am able to upload BPMN via zbctl.
I can connect and check the status via zbctl.
But when I try to call my gateway from Java and put some data there, I get

Caused by: io.grpc.StatusRuntimeException: UNAVAILABLE: Network closed for unknown reason

I am getting kind of desperate, so I will paste my configuration; maybe someone will spot an error :confused:

---
# Source: zeebe-cluster/templates/configmap.yaml
kind: ConfigMap
metadata:
  name: zeebe-cluster
  namespace: id00a1-zeebe
  labels:
    app.kubernetes.io/name: zeebe-cluster
    app.kubernetes.io/instance: zeebe-cluster
    helm.sh/chart: zeebe-cluster-0.1.0-SNAPSHOT
    app.kubernetes.io/version: "0.23.4"
    app.kubernetes.io/managed-by: Helm
apiVersion: v1
data:
  startup.sh: |
    #!/usr/bin/env bash
    set -eux -o pipefail

    export ZEEBE_BROKER_NETWORK_ADVERTISEDHOST=${ZEEBE_BROKER_NETWORK_ADVERTISEDHOST:-$(hostname -f)}
    export ZEEBE_BROKER_CLUSTER_NODEID=${ZEEBE_BROKER_CLUSTER_NODEID:-${K8S_POD_NAME##*-}}

    # As the number of replicas or the DNS is not obtainable from the downward API yet,
    # defined them here based on conventions
    export ZEEBE_BROKER_CLUSTER_CLUSTERSIZE=${ZEEBE_BROKER_CLUSTER_CLUSTERSIZE:-1}
    contactPointPrefix=${K8S_POD_NAME%-*}
    contactPoints=${ZEEBE_BROKER_CLUSTER_INITIALCONTACTPOINTS:-""}
    if [[ -z "${contactPoints}" ]]; then
      for ((i=0; i<${ZEEBE_BROKER_CLUSTER_CLUSTERSIZE}; i++))
      do
        contactPoints="${contactPoints},${contactPointPrefix}-$i.$(hostname -d):26502"
      done

      export ZEEBE_BROKER_CLUSTER_INITIALCONTACTPOINTS="${contactPoints}"
    fi
    
    if [ "$(ls -A /exporters/)" ]; then
      mkdir /usr/local/zeebe/exporters/
      cp -a /exporters/*.jar /usr/local/zeebe/exporters/
    else  
      echo "No exporters available."
    fi

    exec /usr/local/zeebe/bin/broker

  application.yaml: |

  broker-log4j2.xml: |

  gateway-log4j2.xml: |
---
# Source: zeebe-cluster/templates/gateway-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: "zeebe-cluster-zeebe-gateway"
  namespace: id00a1-zeebe
  labels:
    app.kubernetes.io/name: zeebe-cluster
    app.kubernetes.io/instance: zeebe-cluster
    helm.sh/chart: zeebe-cluster-0.1.0-SNAPSHOT
    app.kubernetes.io/version: "0.23.4"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: gateway
spec:
  type: ClusterIP
#  type: NodePort
  ports:
    - port: 9600
      protocol: TCP
      name: http
    - port: 26500
      protocol: TCP
      name: gateway
  selector:
    app.kubernetes.io/name: zeebe-cluster
    app.kubernetes.io/instance: zeebe-cluster
    helm.sh/chart: zeebe-cluster-0.1.0-SNAPSHOT
    app.kubernetes.io/version: "0.23.4"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: gateway
---
# Source: zeebe-cluster/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: "zeebe-cluster-zeebe"
  namespace: id00a1-zeebe
  labels:
    app.kubernetes.io/name: zeebe-cluster
    app.kubernetes.io/instance: zeebe-cluster
    helm.sh/chart: zeebe-cluster-0.1.0-SNAPSHOT
    app.kubernetes.io/version: "0.23.4"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: broker
    app: zeebe
  annotations:
    null    
spec:
  clusterIP: None
  publishNotReadyAddresses: true
  type: ClusterIP
  #type: NodePort
  ports:
    - port: 9600
      protocol: TCP
      name: http  
    - port: 26502
      protocol: TCP
      name: internal
    - port: 26501
      protocol: TCP
      name: command
  selector:
    app.kubernetes.io/name: zeebe-cluster
    app.kubernetes.io/instance: zeebe-cluster
    helm.sh/chart: zeebe-cluster-0.1.0-SNAPSHOT
    app.kubernetes.io/version: "0.23.4"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: broker
---
# Source: zeebe-cluster/templates/gateway-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "zeebe-cluster-zeebe-gateway"
  namespace: id00a1-zeebe
  labels:
    app.kubernetes.io/name: zeebe-cluster
    app.kubernetes.io/instance: zeebe-cluster
    helm.sh/chart: zeebe-cluster-0.1.0-SNAPSHOT
    app.kubernetes.io/version: "0.23.4"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: zeebe-cluster
      app.kubernetes.io/instance: zeebe-cluster
      helm.sh/chart: zeebe-cluster-0.1.0-SNAPSHOT
      app.kubernetes.io/version: "0.23.4"
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/component: gateway
  template:
    metadata:
      labels:
        app.kubernetes.io/name: zeebe-cluster
        app.kubernetes.io/instance: zeebe-cluster
        helm.sh/chart: zeebe-cluster-0.1.0-SNAPSHOT
        app.kubernetes.io/version: "0.23.4"
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: gateway
    spec:
      containers:
        - name: zeebe-cluster
          image: "zeebe:0.25.0"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9600
              name: http
            - containerPort: 26500
              name: gateway
            - containerPort: 26502
              name: internal
          env:
            - name: ZEEBE_STANDALONE_GATEWAY
              value: "true"
            - name: ZEEBE_GATEWAY_CLUSTER_CLUSTERNAME
              value: zeebe-cluster-zeebe
            - name: ZEEBE_GATEWAY_CLUSTER_MEMBERID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: ZEEBE_LOG_LEVEL
              value: "debug"
            - name: JAVA_TOOL_OPTIONS
              value: "-XX:MaxRAMPercentage=25.0 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/usr/local/zeebe/data -XX:ErrorFile=/usr/local/zeebe/data/zeebe_error%p.log -XX:+ExitOnOutOfMemoryError"
            - name: ZEEBE_GATEWAY_CLUSTER_CONTACTPOINT
              value: zeebe-cluster-zeebe:26502
            - name: ZEEBE_GATEWAY_NETWORK_HOST
              value: 0.0.0.0
            - name: ZEEBE_GATEWAY_NETWORK_PORT
              value: "26500"
            - name: ZEEBE_GATEWAY_CLUSTER_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: ZEEBE_GATEWAY_CLUSTER_PORT
              value: "26502"
            - name: ZEEBE_GATEWAY_MONITORING_HOST
              value: 0.0.0.0
            - name: ZEEBE_GATEWAY_MONITORING_PORT
              value: "9600"
            - name: ZEEBE_GATEWAY_SECURITY_ENABLED
              value: "false"  
          resources:
            limits:
              cpu: 1000m
              memory: 1Gi
            requests:
              cpu: 1000m
              memory: 1Gi
          volumeMounts:
          securityContext:
            null
          readinessProbe:
            tcpSocket:
              port: gateway
            initialDelaySeconds: 20
            periodSeconds: 5
      volumes:
        - name: config
          configMap:
            name: zeebe-cluster
            defaultMode: 0744
---
# Source: zeebe-cluster/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: "zeebe-cluster-zeebe"
  namespace: id00a1-zeebe
  labels:
    app.kubernetes.io/name: zeebe-cluster
    app.kubernetes.io/instance: zeebe-cluster
    helm.sh/chart: zeebe-cluster-0.1.0-SNAPSHOT
    app.kubernetes.io/version: "0.23.4"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: broker
    app: zeebe
  annotations:   
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: zeebe-cluster
      app.kubernetes.io/instance: zeebe-cluster
      helm.sh/chart: zeebe-cluster-0.1.0-SNAPSHOT
      app.kubernetes.io/version: "0.23.4"
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/component: broker
  serviceName: "zeebe-cluster-zeebe"
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app.kubernetes.io/name: zeebe-cluster
        app.kubernetes.io/instance: zeebe-cluster
        helm.sh/chart: zeebe-cluster-0.1.0-SNAPSHOT
        app.kubernetes.io/version: "0.23.4"
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: broker
      annotations:   
    spec:
      initContainers:    
      containers:
      - name: zeebe-cluster
        image: "zeebe:0.25.0"
        imagePullPolicy: IfNotPresent
        env:
        - name: ZEEBE_BROKER_CLUSTER_CLUSTERNAME
          value: zeebe-cluster-zeebe
        - name: ZEEBE_LOG_LEVEL
          value: "debug"
        - name: ZEEBE_BROKER_CLUSTER_PARTITIONSCOUNT
          value: "1"
        - name: ZEEBE_BROKER_CLUSTER_CLUSTERSIZE
          value: "1"
        - name: ZEEBE_BROKER_CLUSTER_REPLICATIONFACTOR
          value: "1"
        - name: ZEEBE_BROKER_THREADS_CPUTHREADCOUNT
          value: "2"
        - name: ZEEBE_BROKER_THREADS_IOTHREADCOUNT
          value: "2"
        - name: ZEEBE_BROKER_GATEWAY_ENABLE
          value: "false"
        - name: ZEEBE_BROKER_EXPORTERS_ELASTICSEARCH_CLASSNAME
          value: "io.zeebe.exporter.ElasticsearchExporter"
        - name: ZEEBE_BROKER_EXPORTERS_ELASTICSEARCH_ARGS_URL
          value: "ELASTIC_URL:9200"
        - name: ZEEBE_BROKER_NETWORK_COMMANDAPI_PORT
          value: "26501"
        - name: ZEEBE_BROKER_NETWORK_INTERNALAPI_PORT
          value: "26502"
        - name: ZEEBE_BROKER_NETWORK_MONITORINGAPI_PORT
          value: "9600"         
        - name: ZEEBE_BROKER_GATEWAY_MONITORING_ENABLED
          value: "true"            
        - name: K8S_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name              
        - name: JAVA_TOOL_OPTIONS
          value: "-XX:MaxRAMPercentage=25.0 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/usr/local/zeebe/data -XX:ErrorFile=/usr/local/zeebe/data/zeebe_error%p.log -XX:+ExitOnOutOfMemoryError"
        ports:
        - containerPort: 9600
          name: http
        - containerPort: 26501
          name: command
        - containerPort: 26502
          name: internal
        readinessProbe:
          httpGet:
            path: /ready
            port: 9600
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
            limits:
              cpu: 1000m
              memory: 2Gi
            requests:
              cpu: 1000m
              memory: 2Gi
        volumeMounts:
        - name: config
          mountPath: /usr/local/zeebe/config/application.yaml
          subPath: application.yaml
        - name: config
          mountPath: /usr/local/bin/startup.sh
          subPath: startup.sh
        - name: data
          mountPath: /usr/local/zeebe/data
        - name: exporters
          mountPath: /exporters
        securityContext:
          null
      volumes:
      - name: config
        configMap:
          name: zeebe-cluster
          defaultMode: 0744
      - name: data
        emptyDir:
          sizeLimit: "10Gi"
      - name: exporters
        emptyDir: {}
---
#gateway ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: "zeebe-cluster-gateway"
  namespace: id00a1-zeebe
  labels:
    app.kubernetes.io/name: zeebe-cluster
    app.kubernetes.io/instance: zeebe-cluster
    helm.sh/chart: zeebe-cluster-0.1.0-SNAPSHOT
    app.kubernetes.io/version: "0.23.4"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: gateway
  annotations:
    kubernetes.io/ingress.class: "nginx"
    #ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/server-alias: "URL1"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    cert-manager.io/cluster-issuer: "ingress"
spec:
  rules:
  - host: "URL1"  
    http:
      paths:
      -  backend:
           serviceName: "zeebe-cluster-zeebe-gateway"
           servicePort: gateway                   
  tls:
    - hosts:
      - "URL1"
      secretName: "zeebe-cluster-cert" 
---
#cluster ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: "zeebe-cluster"
  namespace: id00a1-zeebe
  labels:
    app.kubernetes.io/name: zeebe-cluster
    app.kubernetes.io/instance: zeebe-cluster
    helm.sh/chart: zeebe-cluster-0.1.0-SNAPSHOT
    app.kubernetes.io/version: "0.23.4"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: broker
  annotations:
    kubernetes.io/ingress.class: "nginx"
    #ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/server-alias: "URL2"
    cert-manager.io/cluster-issuer: "ingress"
spec:
  rules:
  - host: "URL2"  
    http:
      paths:
      -  backend:
           serviceName: "zeebe-cluster-zeebe"
           servicePort: http                   
  tls:
    - hosts:
      - "URL2"
      secretName: "zeebe-cluster-cert"