Tune Camunda For Better Performance

Hey guys,

I need a little help with performance optimization for Camunda.

Currently, we have 170,000 running process instances, 969 deployments, and 226 process definitions. As the number of running process instances has grown, the response times of our REST APIs have taken a significant hit.

For additional context, we are using Camunda 7.11 with MySQL 5.7 as the primary database.

We tried resolving the performance problem by deleting the historical data and setting the history level to none. Unfortunately, this did not improve performance at all; response times remained the same. The Camunda documentation highlights reducing the history level and deleting historical data as the key tuning steps, but this didn't work for us, and we are not sure where to look for other tuning opportunities.
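For reference, here is roughly how the history level gets set in configuration. This is a sketch assuming the Camunda Spring Boot starter; if you run a shared engine, the equivalent property goes in the process engine configuration instead:

```yaml
# application.yaml (assumes the Camunda Spring Boot starter)
camunda:
  bpm:
    history-level: none   # stop writing new rows to the ACT_HI_* tables
```

Note that this only stops new history from being written; existing rows in the ACT_HI_* tables still need to be cleaned up separately.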

We checked the metrics for Camunda and our database and found the following:

MySQL CPU utilization fluctuates between 40 and 70 percent during peak traffic. MySQL write throughput peaks at 14.69 MB with an average of 5 MB, and read throughput peaks at 14.5 MB with an average of 1 MB.

For Camunda, the maximum CPU utilization is 12 percent and the average is below 1 percent. Memory utilization fluctuates between 3 and 4 percent, with 4 percent being the maximum.

Metrics related to the historical data: we had 1.6 million historic process instances, and after deletion we have around 200k. We have set the history level to none, so no new data is added to the history tables.

Based on the above metrics and numbers, could someone please point out how we could tune Camunda to improve the response times of the APIs?

Do you only have one Camunda Engine or have you got a cluster?

Hi @Navroze_Bomanji

With this part, are you referring to deleting historical data on the engine side only or did you also configure history cleanup in Optimize?

Hi @Niall, yes, we are running Camunda on Kubernetes, with 5 Camunda pods on EKS.

Hi @Helene, not exactly; we wrote a script that fetches the historic process instances from the database and deletes them using this API. We have also set the history level to none, so from now on no new historic process instances will be added.
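A minimal sketch of what such a cleanup script can look like, assuming Camunda 7's REST API at the default `/engine-rest` path and its `GET /history/process-instance` and `POST /history/process-instance/delete` endpoints; the base URL, batch size, and `finishedBefore` filter are illustrative placeholders:

```python
import json
import urllib.request

CAMUNDA_BASE = "http://localhost:8080/engine-rest"  # assumption: default REST path

def build_delete_payload(instance_ids, reason):
    """Build the request body for POST /history/process-instance/delete."""
    return {
        "historicProcessInstanceIds": list(instance_ids),
        "deleteReason": reason,
    }

def delete_finished_instances(finished_before, batch_size=500):
    """Fetch one batch of finished historic instances and delete them.

    Returns the number of instances submitted for deletion (0 = nothing left).
    """
    query = (f"{CAMUNDA_BASE}/history/process-instance"
             f"?finished=true&finishedBefore={finished_before}"
             f"&maxResults={batch_size}")
    with urllib.request.urlopen(query) as resp:
        instances = json.load(resp)
    if not instances:
        return 0
    payload = build_delete_payload([i["id"] for i in instances], "cleanup")
    req = urllib.request.Request(
        f"{CAMUNDA_BASE}/history/process-instance/delete",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST")
    urllib.request.urlopen(req)  # returns a batch that runs asynchronously
    return len(instances)
```

The delete endpoint creates an asynchronous batch, so the script would loop (with a delay) until the query returns no more instances.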

Have you done anything to change the default behavior of the job executor on these nodes?

No, we haven’t. Are there some system-level metrics that we can use to identify what changes need to be made on the job executor?

It’s often different for each setup because it depends on so many factors, but I’m pretty certain that you’ll get a LOT of performance improvement by changing the default settings.

I suggest you read through this section of the docs so that you understand what kinds of issues could be causing your performance problems, and also the ramifications of changing the settings.
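For anyone following along, the job executor settings in question live in the engine configuration. A sketch, assuming a shared-engine setup with a `bpm-platform.xml` (the values shown are illustrative, not recommendations):

```xml
<!-- bpm-platform.xml fragment (assumed shared-engine setup) -->
<job-executor>
  <job-acquisition name="default">
    <properties>
      <!-- number of jobs locked per acquisition cycle -->
      <property name="maxJobsPerAcquisition">10</property>
      <!-- idle wait between acquisition attempts, in milliseconds -->
      <property name="waitTimeInMillis">5000</property>
      <!-- how long an acquired job stays locked, in milliseconds -->
      <property name="lockTimeInMillis">300000</property>
    </properties>
  </job-acquisition>
</job-executor>
```

With multiple nodes competing for the same jobs, acquisition settings like these interact (e.g. larger acquisition batches reduce database round trips but increase lock contention), which is why the docs recommend understanding the trade-offs before changing them.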

Hi @Niall, we checked the total number of jobs currently running in Camunda and found only one. We verified this by querying the ACT_RU_JOB table directly. I am not sure that tuning the job executor would provide any performance boost in this case, unless I am missing something critical.

Well, how many jobs does the job executor acquire per request?
Also, what is the average wait time for jobs that are ready to be executed?