@sbirrer you can always create an event exporter (such as the one mentioned by “The Real @jwulf”) and then back those events up in whatever storage you prefer for disaster scenarios. Does that make sense? If you already have a defined procedure in your company, you can follow the same approach by connecting the Zeebe cluster to those tools using a Custom Exporter.
@walt-liuzw that is absolutely true, because docker compose will automatically delete the storage associated with the broker containers. If you want to keep using the data, you will need to configure a volume that doesn’t get deleted.
@walt-liuzw that is the default behaviour for docker compose: it automatically removes all the storage for the containers. You need to configure a volume so the brokers can store their data in a directory on the host system.
That still throws an exception after restarting docker-compose:
2019-12-18 16:23:14.0430 ERROR Status(StatusCode=NotFound, Detail="Command rejected with code 'CREATE': Expected to find workflow definition with process ID 'demo-purchase-order', but none found")
@walt-liuzw that is a feature… but the problem is docker compose getting rid of your persistent storage. If you run a broker outside docker you will see that the process definitions and instances don’t go away.
If you are running a cluster like this via docker compose, you need to provide a volume for each of the brokers. You have one persistent volume mounted to the gateway, and you are running three brokers.
Do the same thing for the three broker nodes - create a volume and mount it.
The gateway doesn’t need a persistent volume. The brokers do.
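A minimal sketch of what that could look like in docker-compose. The service names, image, and data path are assumptions for illustration; adjust them to your actual setup. Note that named volumes survive `docker-compose down` but are removed by `docker-compose down -v`:

```yaml
# Hypothetical fragment: one named volume per broker so the data
# directory (/usr/local/zeebe/data) survives container removal.
services:
  zeebe-broker-0:
    image: camunda/zeebe
    volumes:
      - zeebe_data_0:/usr/local/zeebe/data
  zeebe-broker-1:
    image: camunda/zeebe
    volumes:
      - zeebe_data_1:/usr/local/zeebe/data
  zeebe-broker-2:
    image: camunda/zeebe
    volumes:
      - zeebe_data_2:/usr/local/zeebe/data

volumes:
  zeebe_data_0:
  zeebe_data_1:
  zeebe_data_2: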
Hello, we are using the persistent volume concept, and the data is preserved across docker compose up & down. However, we have a problem pretty close to the original question on this thread, i.e. how to back up and restore the workflow status, instances, variables, etc., for whatever reason it is needed, e.g. disaster recovery, server migration, software upgrade, etc.
I think it is hard to avoid some kind of backup-and-restore process in our production setup for any kind of unexpected failure. We have to retain the ‘exact’ state as it was, without affecting our end users or requiring them to ‘re-do’ certain processes again.
We tried to back up the data folder (/usr/local/zeebe/data) from the container as a zip, but we have no idea how to restore the zipped files onto a new container/volume. We are not even sure whether this kind of backup and restore is workable at all.
So what is the best practice for Zeebe broker backup & restore in a production environment?
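A file-level backup of the data directory is possible in principle if the broker is stopped first so the files are consistent. A minimal sketch, assuming the broker’s data volume is mounted at `./zeebe-data` on the host (the path and the placeholder file are illustrative stand-ins, not real broker files):

```shell
# Stand-in for the broker's data directory on the host; in a real setup
# this would be the host directory mounted at /usr/local/zeebe/data.
mkdir -p zeebe-data
echo "raft-log" > zeebe-data/segment-0.log   # placeholder for broker data

# Backup: archive the whole data directory while the broker is stopped.
tar czf zeebe-backup.tar.gz zeebe-data

# Restore: extract into a fresh location, then mount that directory as
# the data volume of the replacement broker before starting it.
mkdir -p restored
tar xzf zeebe-backup.tar.gz -C restored
cat restored/zeebe-data/segment-0.log   # prints: raft-log
```

The key design point is that the broker must not be writing while you archive; otherwise the log segments and snapshots in the archive may be mutually inconsistent.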
Use a replication factor of 3 or more, across different hosts, and you get automatic backup: Raft copies the logs between the replicas of each partition, and leadership for each partition changes dynamically, so every broker can act as leader or follower. I remember Cassandra uses almost the same method.
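The replication approach could be sketched as a compose fragment like the one below. The service name and environment variable names are assumptions; variable names differ across Zeebe versions, so check the docs for yours:

```yaml
# Hypothetical fragment: a 3-broker cluster with replication factor 3,
# so each partition keeps a Raft replica on every broker.
services:
  zeebe-0:
    image: camunda/zeebe
    environment:
      - ZEEBE_BROKER_CLUSTER_CLUSTERSIZE=3
      - ZEEBE_BROKER_CLUSTER_REPLICATIONFACTOR=3
      - ZEEBE_BROKER_CLUSTER_PARTITIONSCOUNT=1
  # zeebe-1 and zeebe-2 configured the same way, ideally on different hosts
```

Note this protects against losing a broker, not against losing the whole cluster, so it complements rather than replaces an off-cluster backup.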
As a second method, I think we can stop the cluster and take a snapshot of the data, then later stop the cluster again and restore that snapshot.