Pods camunda-platform-zeebe do not start in Kubernetes

Hello everyone. I installed Camunda Platform with Helm, but I'm having problems with the pods.

helm repo add camunda https://helm.camunda.io
helm repo update
helm install camunda-platform camunda/camunda-platform

But when I list the pods, I see that three of them are stuck in Pending:

camunda-platform-zeebe-0                          0/1     Pending            0          100m
camunda-platform-zeebe-1                          0/1     Pending            0          100m
camunda-platform-zeebe-2                          0/1     Pending            0          100m

I checked the pod description and saw this message:

 Warning  FailedScheduling  109m  default-scheduler  0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.

I checked the PVC data-camunda-platform-zeebe-0:

Normal  FailedBinding  4m43s (x482 over 124m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set

This is despite having created a PV (pv-volume.yaml) beforehand:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-camunda-platform-zeebe-0
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 300Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

My PV:

NAME                            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
data-camunda-platform-zeebe-0   300Gi      RWO            Retain           Available           manual                  107m

Please tell me, how do I deal with this?

Hi @Hadagan , and welcome to the forum!

@jgeek1 will likely have a bunch of good comments to add on this, but my understanding is that in your values.yaml you need to specify pvcStorageClassName (try setting it to "manual" to match your PV definition).
Since you don't have a default StorageClass set in your cluster, the Helm chart can't create a PVC that binds using the chart's default pvcStorageClassName of "".
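A minimal values.yaml sketch of that suggestion (the zeebe.pvcStorageClassName key path is assumed here; double-check it against the chart's values reference):

```yaml
# values.yaml -- sketch; assumes the chart exposes zeebe.pvcStorageClassName
zeebe:
  pvcStorageClassName: manual   # must match spec.storageClassName on your PV
```

Then install with `helm install camunda-platform camunda/camunda-platform -f values.yaml`.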

@GotnOGuts Thanks for the advice. I created the PVCs, PVs, and StorageClasses separately and then ran helm. I'll leave the configs here in case someone needs them.
PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: camunda-platform
    app.kubernetes.io/component: zeebe-broker
    app.kubernetes.io/instance: camunda-platform
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: zeebe
    app.kubernetes.io/part-of: camunda-platform
  name: data-camunda-platform-zeebe-0
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: data-camunda-platform-zeebe-0
  resources:
    requests:
      storage: 32Gi

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: camunda-platform
    app.kubernetes.io/component: zeebe-broker
    app.kubernetes.io/instance: camunda-platform
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: zeebe
    app.kubernetes.io/part-of: camunda-platform
  name: data-camunda-platform-zeebe-1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: data-camunda-platform-zeebe-1
  resources:
    requests:
      storage: 32Gi

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: camunda-platform
    app.kubernetes.io/component: zeebe-broker
    app.kubernetes.io/instance: camunda-platform
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: zeebe
    app.kubernetes.io/part-of: camunda-platform
  name: data-camunda-platform-zeebe-2
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: data-camunda-platform-zeebe-2
  resources:
    requests:
      storage: 32Gi

PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-camunda-platform-zeebe-0
  labels:
#    type: local
    app: camunda-platform
    app.kubernetes.io/component: zeebe-broker
    app.kubernetes.io/instance: camunda-platform
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: zeebe
    app.kubernetes.io/part-of: camunda-platform
spec:
  storageClassName: data-camunda-platform-zeebe-0
  capacity:
    storage: 32Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data-camunda-00"

---

apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-camunda-platform-zeebe-1
  labels:
#    type: local
    app: camunda-platform
    app.kubernetes.io/component: zeebe-broker
    app.kubernetes.io/instance: camunda-platform
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: zeebe
    app.kubernetes.io/part-of: camunda-platform
spec:
  storageClassName: data-camunda-platform-zeebe-1
  capacity:
    storage: 32Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data-camunda-01"

---

apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-camunda-platform-zeebe-2
  labels:
#    type: local
    app: camunda-platform
    app.kubernetes.io/component: zeebe-broker
    app.kubernetes.io/instance: camunda-platform
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: zeebe
    app.kubernetes.io/part-of: camunda-platform
spec:
  storageClassName: data-camunda-platform-zeebe-2
  capacity:
    storage: 32Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data-camunda-02"

SC (StorageClass):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: data-camunda-platform-zeebe-0
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

---

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: data-camunda-platform-zeebe-1
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

---

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: data-camunda-platform-zeebe-2
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

@Hadagan @GotnOGuts - I have never needed to create PVs and PVCs manually. I have also run tests on AWS that use SSDs, and I don't configure pvcStorageClassName there either. I just configure pvcSize, and Helm automatically creates the required PVs and PVCs.
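For context, the only storage-related setting I use is something like this (sketch; the zeebe.pvcSize key path is assumed, so verify it against the chart's values reference):

```yaml
# values.yaml -- sketch; assumes zeebe.pvcSize is the chart's key for the PVC size
zeebe:
  pvcSize: 32Gi
```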

I didn’t quite catch why the separate manifest files above are needed.

@jgeek1 Of course I’m not an expert, but perhaps the reason is that I’m doing all of this locally in Oracle VM VirtualBox.

@Hadagan - I don’t think a local deployment should matter. What happens if you don’t specify pvcStorageClassName and don’t define any manifests for the PVs or PVCs? Does it not create them?

I expect that AWS has a default StorageClass set, so the empty storage class name set by the Helm chart falls back to that default; the PVC can then bind based on size alone (and the PV is dynamically provisioned from the default class).
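You can check with `kubectl get storageclass`, which marks the default class with "(default)". The marker itself is just an annotation; on EKS the default is typically the gp2 class, which looks roughly like this (sketch):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # makes this the cluster default
provisioner: kubernetes.io/aws-ebs
```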