Intermediate timer triggered every 10 sec causes extreme CPU load on ElasticSearch from Optimize

I have a process that sends REST requests (to the Camunda REST API, though that probably doesn't matter) every 10 seconds, using an intermediate timer event.
Running even 10 instances of this process in parallel causes a CONSTANT high CPU load on ElasticSearch from Optimize (if I stop the Optimize pod, the ElasticSearch CPU load drops back to practically zero): around 500m per kubectl top po. 100 such instances in parallel would overload the system (8 CPU cores, 8 GB for ElasticSearch), and Optimize would start to lag behind with the information on its dashboards.

What is causing such a high CONSTANT CPU load on ElasticSearch from Optimize?
Except for instances of this process, there is nothing running in my on-prem Camunda installation.

Can I disable export to Optimize for a given ProcessDefinition to mitigate the issue?

Currently I'm on Camunda 8.7.2 / Optimize 8.7.1, but I've had this problem since I started with Camunda 8.4.
test_load.bpmn (6.6 KB)

As per the Camunda 8 architecture, Zeebe exports data to ElasticSearch, and Tasklist/Operate/Optimize consume data from ElasticSearch.

Did you tune your environment?
Did you check your container logs during the high CPU periods?

It looks like you are running Camunda using docker-compose. Try tuning the following Zeebe environment variables and observe the results.

ZEEBE_ELASTICSEARCH_NUMBER_OF_SHARDS=5
ZEEBE_ELASTICSEARCH_NUMBER_OF_REPLICAS=1
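
For a Kubernetes/Helm setup, the equivalent would be extra environment variables on the Zeebe broker. A minimal sketch in values.yaml, assuming the exporter argument variable names from the Zeebe Elasticsearch exporter docs (verify them against your chart and Zeebe version):

```yaml
# Sketch: passing the exporter index settings to the Zeebe broker via the
# Camunda 8 Helm chart. The variable names are assumed from the Zeebe
# Elasticsearch exporter documentation; check them for your version.
zeebe:
  env:
    - name: ZEEBE_BROKER_EXPORTERS_ELASTICSEARCH_ARGS_INDEX_NUMBEROFSHARDS
      value: "5"
    - name: ZEEBE_BROKER_EXPORTERS_ELASTICSEARCH_ARGS_INDEX_NUMBEROFREPLICAS
      value: "1"
```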

You can also go through this blog post to learn more about Camunda 8 tuning:
https://camunda.com/blog/2025/01/performance-tuning-camunda-8/

I'm running my Camunda instance in Kubernetes using the provided Helm charts.
I don't think I'm having problems with the export from Zeebe to ES; otherwise the information in Operate & Tasklist would lag behind too, but it doesn't. The only problem is with Optimize.

In the Optimize log there are recurring errors:

07:49:50.952 [ImportJobExecutor-pool-ZeebeProcessInstanceImportService-0] ERROR i.c.o.s.i.j.ProcessInstanceDatabaseImportJob - Error while executing import to database
io.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were 3 failures while performing bulk on Zeebe process instances.
If you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit) or enabling the skipping of documents that have reached this limit during import (import.skipDataAfterNestedDocLimitReached). See Optimize documentation for details. Message: Update document_parsing_exception [1:2150277] The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting. , Update document_parsing_exception [1:2150084] The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting. , Update document_parsing_exception [1:2150221] The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.
        at io.camunda.optimize.service.db.es.OptimizeElasticsearchClient.doBulkRequestWithoutRetries(OptimizeElasticsearchClient.java:923)
        at io.camunda.optimize.service.db.es.OptimizeElasticsearchClient.doBulkRequest(OptimizeElasticsearchClient.java:912)
        at io.camunda.optimize.service.db.es.OptimizeElasticsearchClient.executeImportRequestsAsBulk(OptimizeElasticsearchClient.java:612)
        at io.camunda.optimize.service.importing.job.ProcessInstanceDatabaseImportJob.persistEntities(ProcessInstanceDatabaseImportJob.java:41)
        at io.camunda.optimize.service.importing.DatabaseImportJob.executeImport(DatabaseImportJob.java:57)
        at io.camunda.optimize.service.importing.DatabaseImportJob.run(DatabaseImportJob.java:39)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        at java.base/java.lang.Thread.run(Thread.java:1583)
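
The error message itself names two Optimize settings that address this. A minimal sketch of how they could look in Optimize's environment-config.yaml (values are illustrative, not recommendations; see the Optimize documentation before changing them):

```yaml
# Sketch based on the two settings named in the error message above.
es:
  settings:
    index:
      # Raise the nested object limit carefully; higher values increase
      # memory pressure on ElasticSearch.
      nested_documents_limit: 100000
import:
  # Alternatively, skip documents that have reached the nested doc limit
  # during import instead of failing the bulk request.
  skipDataAfterNestedDocLimitReached: true
```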

How can I monitor the number of records not exported without using Grafana?
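
One option, sketched under the assumption of the default Zeebe monitoring port (9600); the service name is a placeholder and metric names vary across versions, so grep broadly:

```sh
# Scrape the broker's Prometheus endpoint directly, without Grafana.
# "camunda-zeebe" is a placeholder for your actual Zeebe service name.
kubectl port-forward svc/camunda-zeebe 9600:9600 &
sleep 2
curl -s http://localhost:9600/actuator/prometheus | grep -iE 'exporter|position'
```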

I increased index.mapping.nested_objects.limit from 10k to 1000k and the errors in the Optimize log are gone, but the CPU load on ElasticSearch from just 10 instances of the process (one outgoing REST request every 10 seconds each) stays the same, around 500m, and it increases proportionally with the number of running instances: 20 instances cause a constant load of around 1000m on ES, 50 instances around 2500m, and so on.
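
For reference, a sketch of how that limit can be raised on live indices via the ElasticSearch settings API (it is a dynamic index setting); the index pattern is an assumption, so list your indices first to confirm the actual Optimize index names:

```sh
# Sketch: raising index.mapping.nested_objects.limit on existing indices.
# The pattern "optimize-process-instance-*" is an assumption; run
# GET _cat/indices against your cluster to confirm the real names.
curl -X PUT "http://localhost:9200/optimize-process-instance-*/_settings" \
  -H "Content-Type: application/json" \
  -d '{"index.mapping.nested_objects.limit": 1000000}'
```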

So: is there a way to exclude a certain process definition from processing in Optimize, if even such a simple polling process is so processor-heavy?
I configured historyCleanup.processDataCleanup.perProcessDefinitionConfig.${key}.ttl = P1D, but it doesn't prevent the high CPU load on ES, because Optimize imports these process instances anyway.
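
For reference, a minimal sketch of that cleanup configuration in Optimize's environment-config.yaml; the key test_load is a placeholder for the real process definition key:

```yaml
# Sketch of the per-definition history cleanup referenced above.
# "test_load" is a placeholder process definition key.
historyCleanup:
  processDataCleanup:
    enabled: true
    perProcessDefinitionConfig:
      test_load:
        ttl: 'P1D'
```

As noted, this only deletes already-imported data once the TTL expires; it does not stop Optimize from importing the instances in the first place.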

Would you please share your chart (values) file? Did you tune it, or are you using the default values provided by Camunda? How many systems are you running for your Self-Managed setup?