Hi, we are migrating from TIBCO BPM to Camunda 8 Self-Managed. As part of this we have installed Kubernetes on Red Hat Linux boxes with 1 master node and 2 worker nodes, and on top of this I am installing Camunda 8.
The master and worker nodes have the following resources after the Kubernetes and Camunda 8 installation.
Master node:
# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg00-root 4.0G 104M 3.9G 3% /
# free -h
total used free shared buff/cache available
Mem: 7.5Gi 2.2Gi 973Mi 330Mi 4.4Gi 4.7Gi
Swap: 0B 0B 0B
1st worker node:
# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg00-root 4.0G 95M 3.9G 3% /
Issue
After installing Camunda, not all Camunda pods are getting up and running.
The Elasticsearch pods are not reaching the ready state due to the issue below:
0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 Insufficient cpu. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
So I made the master node also work as a worker node. After making this change I no longer see the "1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane}" issue, but I am still getting "Insufficient cpu. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod".
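If the dev nodes cannot be resized right away, one common workaround is to shrink the resource requests in the Helm chart values so the pods fit on the small nodes. The following is a minimal sketch, assuming the Camunda 8 Helm chart with its bundled Elasticsearch subchart; the exact key paths vary between chart versions, so verify them with `helm show values camunda/camunda-platform` before applying:

```yaml
# values-dev.yaml -- hypothetical dev-only override file; key paths are
# assumptions, check your chart version with `helm show values`.
elasticsearch:
  master:
    replicaCount: 1        # a single Elasticsearch node is usually enough for dev
    resources:
      requests:
        cpu: 500m          # request less CPU than the production default
        memory: 1Gi
      limits:
        cpu: "1"
        memory: 2Gi
```

This would then be applied with something like `helm upgrade --install camunda camunda/camunda-platform -f values-dev.yaml -n camunda`. Lowering requests only helps the scheduler place the pods; the components still need enough real CPU and RAM to run.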
Can someone help with the minimum sizing (CPUs/RAM/disk) required for Camunda 8 Self-Managed on a Kubernetes setup on Red Hat Linux boxes? Please note that this is for a dev environment.
I have seen the "Sizing your environment" link, but what I currently need is for a Camunda 8 Self-Managed installation on Kubernetes in a dev environment. In dev we create very few process instances, so I need to know the minimum requirements for all nodes. Please help with this.
Thank you so much for the quick reply.
It's not a cloud environment; it's on-premises. We are using Docker Compose in the local environment, but before going to UAT we planned to set up Kubernetes in one development environment, and then set it up in the UAT environment.
So a minimum of 4 CPUs is required? With 4 CPUs, will all Camunda pods run without any issues? And what about the other resources? I need to ask the infra team about this.
Below are the current resources we have. Please let me know about the other resources as well.
Master node:
# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg00-root 4.0G 104M 3.9G 3% /
# free -h
total used free shared buff/cache available
Mem: 7.5Gi 2.2Gi 973Mi 330Mi 4.4Gi 4.7Gi
Swap: 0B 0B 0B
1st worker node:
# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg00-root 4.0G 95M 3.9G 3% /
If you are going to build environments like dev and UAT, do you know how many process instances/definitions you are going to create? There are a few rules involved.
As you mentioned you are running Docker, observe the memory usage of your processes and then define your memory limits. There is no upfront hard number; each environment is different.
You can start with 8 GB for the worker nodes, watch your environment without deploying any processes, and then, depending on your processes/worker nodes, increase the memory for the worker nodes.
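To see where the CPU and memory actually go before and after deploying processes, the standard kubectl inspection commands can help. This sketch assumes the metrics-server add-on is installed (needed for `kubectl top`) and that Camunda is deployed in a namespace called `camunda`; adjust to your setup:

```shell
# Per-node CPU/memory usage (requires the metrics-server add-on)
kubectl top nodes

# Per-pod usage in the Camunda namespace ("camunda" is an assumed name)
kubectl top pods -n camunda

# What the scheduler sees: allocatable capacity vs. already-requested resources
kubectl describe nodes | grep -A 8 "Allocated resources"
```

Comparing "Allocated resources" against each node's allocatable CPU is the quickest way to understand an "Insufficient cpu" scheduling message: the sum of pending requests simply exceeds what the nodes offer.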
Are you using a vendor Kubernetes distribution or a native Kubernetes cluster?
We are using a native Kubernetes cluster, set up using kubeadm. By 8 GB for the worker nodes, do you mean RAM? If you don't mind, can you please give me the minimum resources (CPUs/RAM/disk, etc.) required based on the high-level requirements below?
Currently in TIBCO BPM I have a total of 15 workflows: 1 main workflow that internally calls the remaining 14 workflows based on conditions. Not all 14 workflows are called for every main process; it calls around 4 to 5 workflows from the main workflow. If I create 10 claims, then 10 workflow instances will be created, and each workflow will again call 4 to 5 subprocesses. So in dev, 30 to 40 claims will be created per day.