Zeebe Persistence

Hi people! I have some questions. We have an on-premise Zeebe cluster with Elasticsearch. The problem is that we have to clean the machines too often because the disk storage gets completely full. We don’t need to preserve the information from the BPMN processes. We found that persistence can be disabled, but we need to know what happens if we deactivate it. For example, if we restart all the Zeebe cluster pods, will we lose the BPMN topology? And do we need to redeploy it in that case?

Kind regards and thanks !!

We found that persistence can be disabled

Assuming you are referring to the retention policy for Zeebe data: there are no issues with this config. It is meant for automatic clean-up of old data.

For Elasticsearch we have restricted it by specifying a storage limit. I assume Kubernetes would handle it by cleaning up old data when the disk is full. How do you handle it?
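For context, the storage limit mentioned above is usually just the size of the Elasticsearch data PVC. A minimal sketch, assuming the stock elastic/elasticsearch Helm chart is used (the 30Gi value is purely illustrative):

volumeClaimTemplate:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 30Gi   # per-node data volume size requested for the Elasticsearch PVC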

Hi @jgeek1, I’m also looking for the same info.
I see it is disabled by default. If I enable it, do I need to do any other configuration? Does this delete data from Zeebe storage after the retention period?

yes you would need to configure the retention period
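For reference, here is roughly where those settings sit in the broker configuration. This is an untested sketch that assumes the broker is configured via application.yaml with the standard Elasticsearch exporter; the URL is a placeholder for your own Elasticsearch endpoint. As far as I understand, enabling retention makes the exporter manage an index lifecycle (ILM) policy in Elasticsearch that removes old record indices:

zeebe:
  broker:
    exporters:
      elasticsearch:
        className: io.camunda.zeebe.exporter.ElasticsearchExporter
        args:
          url: http://elasticsearch:9200   # placeholder
          retention:
            enabled: true       # let the exporter manage an ILM policy for its indices
            minimumAge: 1d      # indices older than this become eligible for deletion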

That’s what it’s supposed to do. We haven’t tested it ourselves, but let me know if you try it with a 1-day retention.

I’ll try it with the configuration below and report back.
retention:
  enabled: true
  minimumAge: 1d

Another question: will this configuration delete data on the PVC volume, or data in the Zeebe broker’s memory?

In my use case, the Zeebe broker’s memory usage goes up to 80% while processing data, and it does not come down after sitting idle. Any workaround for this?

ok let me know

Only the PVC volume.

That’s a different topic, but we haven’t observed this in production yet. You could try taking heap dumps and report it on the Zeebe forum along with other metrics from Prometheus + Grafana.
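If it helps, one way to grab a heap dump from a broker pod is sketched below. The pod name and file path are assumptions for illustration, and jcmd must be available in the image:

# hypothetical pod name; PID 1 is assumed to be the broker JVM
kubectl exec -it camunda-zeebe-0 -- jcmd 1 GC.heap_dump /usr/local/zeebe/data/zeebe-heap.hprof
# copy the dump out for analysis (e.g. with Eclipse MAT)
kubectl cp camunda-zeebe-0:/usr/local/zeebe/data/zeebe-heap.hprof ./zeebe-heap.hprof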

Hello. How can I apply this in docker compose?
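Not tested here, but a sketch of how the same retention settings could be passed as environment variables in a docker-compose.yml. The service name, image tag, and Elasticsearch URL are assumptions; the variable names follow Zeebe’s convention of mapping nested configuration keys to underscore-separated env vars:

services:
  zeebe:
    image: camunda/zeebe:8.3.0   # illustrative tag
    environment:
      - ZEEBE_BROKER_EXPORTERS_ELASTICSEARCH_CLASSNAME=io.camunda.zeebe.exporter.ElasticsearchExporter
      - ZEEBE_BROKER_EXPORTERS_ELASTICSEARCH_ARGS_URL=http://elasticsearch:9200
      - ZEEBE_BROKER_EXPORTERS_ELASTICSEARCH_ARGS_RETENTION_ENABLED=true
      - ZEEBE_BROKER_EXPORTERS_ELASTICSEARCH_ARGS_RETENTION_MINIMUMAGE=1d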
