How to determine the approximate memory usage of a workflow execution?

We are using Camunda 8.2.12, with Hazelcast configured as one of the exporters in Zeebe.

To determine the memory allocation for the Zeebe broker pods and the Hazelcast cluster, I am trying to figure out how much state a workflow execution needs on average. We could execute a few processes in our test environment to measure the memory usage and then size these components accordingly.

I have Operate deployed in our test environment, but I do not see any such metric being captured. Can Optimize help us determine this memory usage?


Hey @jgeek1, I am facing a similar situation where I need to measure the memory usage of some of our processes to better understand the workflows. Did you find a solution to this?

Hi @Divyansh_Garg - I didn’t find a solution. As a temporary workaround, I estimate the in-memory size of the payload we run with, then start multiple process instances using the sample attached to this issue. Observing the broker’s memory usage while they run gives a rough idea of how much memory our workflows would take at runtime.
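To illustrate the payload-estimation part of that workaround, here is a minimal sketch in Python. It uses the serialized JSON size of a sample variable set as a rough proxy for per-instance state (Zeebe actually stores variables in RocksDB in a binary encoding, so this is only an approximation), and extrapolates with a hypothetical overhead factor that you would tune against observed broker memory. The function names, the sample payload, and the `overhead_factor` value are all my own assumptions, not anything from Camunda:

```python
import json


def estimate_payload_bytes(variables: dict) -> int:
    """Rough proxy for one instance's variable state: serialized JSON size.

    Zeebe persists variables in RocksDB using a binary encoding, so the
    real footprint differs, but it scales similarly with payload size.
    """
    return len(json.dumps(variables).encode("utf-8"))


def estimate_total_bytes(variables: dict,
                         concurrent_instances: int,
                         overhead_factor: float = 3.0) -> int:
    """Extrapolate to many concurrent instances.

    overhead_factor is a guessed multiplier for element-instance state,
    indexes, and exporter queues; refine it by comparing against the
    broker memory you actually observe in your test environment.
    """
    per_instance = estimate_payload_bytes(variables)
    return int(per_instance * concurrent_instances * overhead_factor)


# Hypothetical sample payload, standing in for the one attached to this issue.
sample = {"orderId": "A-1001", "items": [{"sku": "X", "qty": 2}] * 10}
print(estimate_payload_bytes(sample), "bytes per instance (serialized)")
print(estimate_total_bytes(sample, concurrent_instances=10_000), "bytes total (estimated)")
```

Running this against your real payloads before and after a load test lets you calibrate the overhead factor, which is more useful than the raw JSON size on its own.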