Hi,
I’m trying to set up a local Zeebe environment using the zeebe-full Helm chart.
Below is my Kubernetes cluster (using minikube):
Creating virtualbox VM (CPUs=2, Memory=2200MB, Disk=20000MB)
multinode-demo | Ready | v1.18.3
multinode-demo-m02 | Ready | v1.18.3
multinode-demo-m03 | Ready | v1.18.3
After installing it with
helm install zeebe-full zeebe/zeebe-full
I noticed many pods in an error or pending state:
elasticsearch-master-0 | CrashLoopBackOff
elasticsearch-master-1 | Pending
elasticsearch-master-2 | Pending
zeebe-full-nginx-ingress-controller-5874cff79f-hbgqv | Running
zeebe-full-nginx-ingress-default-backend-6d79746479-cnv46 | Running
zeebe-full-operate-bc779cb76-ccrvb | Running
zeebe-full-zeebe-0 | Pending
zeebe-full-zeebe-1 | Pending
zeebe-full-zeebe-2 | Pending
zeebe-full-zeebe-gateway-76744877cc-7q8pd | Running
Zeebe gateway pod logs:
++ hostname -i
+ export ZEEBE_HOST=10.244.2.4
+ ZEEBE_HOST=10.244.2.4
+ '[' true = true ']'
+ export ZEEBE_GATEWAY_CLUSTER_HOST=10.244.2.4
+ ZEEBE_GATEWAY_CLUSTER_HOST=10.244.2.4
+ exec /usr/local/zeebe/bin/gateway
2020-07-02 10:26:49.270 [atomix-0] WARN io.atomix.primitive.partition.impl.DefaultPartitionGroupMembershipService - Failed to locate management group via bootstrap nodes. Please ensure partition groups are configured either locally or remotely and the node is able to reach partition group members.
2020-07-02 10:26:50.283 [atomix-partition-group-membership-service-0] WARN io.atomix.primitive.partition.impl.DefaultPartitionGroupMembershipService - Failed to locate management group via bootstrap nodes. Please ensure partition groups are configured either locally or remotely and the node is able to reach partition group members.
All the other containers’ logs show:
The selected container has not logged any messages yet.
What am I missing?
Thanks in advance
@user0409 I think your Kubernetes cluster doesn’t have enough resources to start all the pods. That is probably why the gateway is failing…
Can you share the output of kubectl get pods?
Note that you can describe the pods that are not ready to see why they are not starting, with kubectl describe pod <pod id>.
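For example (the pod names below are just the ones from your list):
kubectl get pods
# describe a pending/crashing pod to see events and scheduling errors
kubectl describe pod elasticsearch-master-0
kubectl describe pod zeebe-full-zeebe-0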
Thanks @salaboy, I noticed that, so I’m trying to increase the resources.
What would a good starting size be, in terms of resources and Kubernetes cluster nodes?
@user0409 if it is a dev or testing cluster you could lower the number of brokers and the amount of resources requested.
For example, put something like the following in the values file used to deploy it, and then pass that file to helm install as shown below. We are using this with zeebe-cluster, not with zeebe-full, but it should be similar.
zeebe-cluster:
  clusterSize: "2"
  replicationFactor: "2"
  resources:
    requests:
      memory: 512Mi
      cpu: 200m
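Then deploy with that file, something like this (assuming you saved it as values.yaml):
# -f points helm at the custom values file above
helm install zeebe-full zeebe/zeebe-full -f values.yaml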
@lucasledesma be careful with that cpu: 200m, which means only 0.2 CPU requested for the pod… that will really slow things down.
Thanks @lucasledesma
I’m trying to set up a performance-test environment as close as possible to a production one.
Hi @salaboy,
I’ve increased the k8s resources, running a cluster with:
6 CPU, 12 GB RAM
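A cluster of roughly that size can be created with something like the following minikube command; the exact flags here are an assumption and with 3 nodes --cpus and --memory apply per node, so adjust as needed:
# flags are an assumption; adjust to your environment and minikube version
minikube start --driver=virtualbox --nodes=3 --cpus=2 --memory=4g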
I’m getting this error:
running "VolumeBinding" filter plugin for pod "elasticsearch-master-0": pod has unbound immediate PersistentVolumeClaims
To solve this I’m increasing the Persistent Volumes (last set to 150Gi) and removing/recreating them, but 2 Elasticsearch pods are still Pending.
What would be a good starting size? And how can I define it during the helm installation?
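These standard kubectl commands help to see why the claims stay unbound (the claim name is a placeholder):
kubectl get pvc
# shows the events explaining why a claim is not bound
kubectl describe pvc <claim-name>
kubectl get pv
kubectl get storageclass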
Thanks in advance
@user0409 did you change the Elasticsearch configuration? Maybe try with this: https://github.com/zeebe-io/zeebe-version-stream-helm/blob/master/zeebe-version-stream/values.yaml#L32 or, if you don’t really need 3 Elasticsearch nodes for development, try with one, as configured in the linked values file.
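A minimal sketch of what that could look like in the values file; the key names assume the elastic/elasticsearch subchart, so double-check them against the linked values.yaml:
elasticsearch:
  # single-node Elasticsearch for development; key names assumed, verify against the linked values file
  replicas: 1
  minimumMasterNodes: 1
  volumeClaimTemplate:
    resources:
      requests:
        storage: 10Gi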
@salaboy This is very useful, thanks!
@lucasledesma you are welcome! I will be trying to create different profiles with different setups, to make sure that people don’t actually need to worry about these details.
Hi @lucasledesma
I’ll try to set it up using the information you posted.
Thanks a lot