Quickly growing database issue

Camunda provides you with quite a few options, all described here: History and Audit Event Log

1st: reduce the history detail level
2nd: implement your own history level, so that only what you really need gets logged (a sketch of both options follows below)
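
For illustration, here is a minimal Java sketch of both options, assuming the standard ProcessEngineConfiguration API and the engine's internal HistoryLevel SPI (package org.camunda.bpm.engine.impl.history). The level name "lean", its id and the selected event types are placeholders you would replace with your own audit requirements:

```java
import java.util.Collections;
import org.camunda.bpm.engine.ProcessEngineConfiguration;
import org.camunda.bpm.engine.impl.cfg.StandaloneProcessEngineConfiguration;
import org.camunda.bpm.engine.impl.history.HistoryLevel;
import org.camunda.bpm.engine.impl.history.event.HistoryEventType;
import org.camunda.bpm.engine.impl.history.event.HistoryEventTypes;

public class HistoryLevelSetup {

  // Option 1: lower the built-in history level ("full" -> "activity", "audit" or "none").
  public static ProcessEngineConfiguration reducedHistory() {
    return new StandaloneProcessEngineConfiguration()
        .setHistory(ProcessEngineConfiguration.HISTORY_ACTIVITY);
  }

  // Option 2: a custom history level that only produces the events you really need
  // (here: process instance start/end only -- adjust to your audit requirements).
  public static class LeanHistoryLevel implements HistoryLevel {

    @Override
    public int getId() {
      return 42; // must not clash with the ids of the built-in levels
    }

    @Override
    public String getName() {
      return "lean"; // referenced by name when activating the level
    }

    @Override
    public boolean isHistoryEventProduced(HistoryEventType eventType, Object entity) {
      return eventType == HistoryEventTypes.PROCESS_INSTANCE_START
          || eventType == HistoryEventTypes.PROCESS_INSTANCE_END;
    }
  }

  public static ProcessEngineConfiguration customHistory() {
    StandaloneProcessEngineConfiguration config = new StandaloneProcessEngineConfiguration();
    config.setCustomHistoryLevels(
        Collections.<HistoryLevel>singletonList(new LeanHistoryLevel()));
    config.setHistory("lean"); // activate the custom level by its name
    return config;
  }
}
```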

In both cases, reading the data via the HistoryService is retained. Most other options (e.g. a custom history backend) would most likely make you lose that capability.
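
To illustrate what "retained" means, a query like the following keeps working as long as the corresponding history events are still written to the history tables (a minimal sketch; obtain the HistoryService however you bootstrap the engine):

```java
import java.util.List;
import org.camunda.bpm.engine.HistoryService;
import org.camunda.bpm.engine.history.HistoricProcessInstance;

public class HistoryQueryExample {

  // Reads finished process instances through the public HistoryService API.
  public static List<HistoricProcessInstance> finishedInstances(HistoryService historyService) {
    return historyService.createHistoricProcessInstanceQuery()
        .finished()
        .orderByProcessInstanceEndTime().desc()
        .list();
  }
}
```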

Anyway, didn’t you run some tests and calculations to predict the impact of the database requirement you have to meet? If you have such a requirement, you should also have a retention interval (e.g. 3 months, 1 year, 10 years, …) so that you can actually do the math on database capacity and the performance impact.
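
A back-of-the-envelope estimate could look like the sketch below. Every number in it is an assumption; replace them with your own measurements (rows written to the ACT_HI_* tables per finished instance at your history level, average row size including indexes, instances per day, retention period):

```java
public class HistorySizeEstimate {

  public static void main(String[] args) {
    long instancesPerDay  = 10_000; // assumed finished process instances per day
    long historyRowsPerPi = 50;     // assumed ACT_HI_* rows per instance at the chosen history level
    long avgBytesPerRow   = 500;    // assumed average row size incl. indexes
    long retentionDays    = 365;    // retention requirement, e.g. 1 year

    long totalRows  = instancesPerDay * historyRowsPerPi * retentionDays;
    double totalGiB = totalRows * avgBytesPerRow / (1024.0 * 1024.0 * 1024.0);

    System.out.printf("~%d history rows, ~%.0f GiB over %d days%n",
        totalRows, totalGiB, retentionDays);
  }
}
```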

I’d also highly recommend either using auto-partitioning techniques or doing manual/periodic partitioning of the history tables to maintain BPM performance over time. However, I can’t recommend anything MySQL-specific here because I’m not that deep into MySQL.

Generally speaking, the history levels provided by Camunda BPM are fairly generic and give you a good start. In most cases, though, they need to be reviewed against your specific auditing requirements, which usually means changes on multiple levels (processes, Camunda BPM configuration, database, (virtual) hardware?) either to fulfill those requirements or to optimize.

If you need to track “everything”, also to retain data for future use cases, you should really look into history event tracking that pushes data into a different backend (which is a better fit for historic/time-series data). Depending on your requirements, this might be a better fit for you: JSON History Provider (usecase: ElasticSearch indexing)
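
As a rough sketch of what such a backend can look like (not the linked JSON History Provider itself): a custom HistoryEventHandler that serializes each event and ships it to an external store. The JsonShipper interface below is purely hypothetical; also note that if you replace the default database handler entirely, the HistoryService queries mentioned above will no longer return data.

```java
import java.util.List;
import org.camunda.bpm.engine.impl.history.event.HistoryEvent;
import org.camunda.bpm.engine.impl.history.handler.HistoryEventHandler;

// Ships history events to an external store instead of (or in addition to)
// the relational ACT_HI_* tables.
public class ExternalHistoryEventHandler implements HistoryEventHandler {

  // Hypothetical sink abstraction for whatever client you use
  // (Elasticsearch, Kafka, a log pipeline, ...).
  public interface JsonShipper {
    void ship(String index, String id, HistoryEvent event);
  }

  private final JsonShipper shipper;

  public ExternalHistoryEventHandler(JsonShipper shipper) {
    this.shipper = shipper;
  }

  @Override
  public void handleEvent(HistoryEvent historyEvent) {
    // route each event, keyed by the process instance it belongs to
    shipper.ship("camunda-history", historyEvent.getProcessInstanceId(), historyEvent);
  }

  @Override
  public void handleEvents(List<HistoryEvent> historyEvents) {
    for (HistoryEvent event : historyEvents) {
      handleEvent(event);
    }
  }
}
```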