We are using licensed Camunda 8 in prod, but for the last few weeks we have observed that Operate is loading data slowly. We use Operate to monitor prod processing. Even after processing is completed (and we verified this in the client application logs), Operate still shows the instances as processing; it takes an extra hour or two for all the data to appear.
Is there a way we can improve this?
One suggestion was to add more Operate pods with just the exporter enabled, but I was not able to find any resources related to that.
Hi @Bittu - are you experiencing this in SaaS or in a self-managed installation? The reason is that Operate (and the other components, like Optimize and Tasklist) uses a cold, read-only data store for its operations; the data needs to be exported from Zeebe and then imported/indexed into that data store. There are options available to adjust how that process works, but it depends on whether you are using SaaS or self-managed.
If you are using self-managed, I would recommend checking out the available configuration options here. If you are using SaaS, because you have a license, your best - and fastest - option is to contact the Camunda support team.
(Edit - I should clarify that typically the export delay is in minutes, not hours, so there’s definitely room for improvement here!)
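As one example of what those options look like, here is a sketch of importer tuning knobs for a self-managed Operate pod (the names follow the env-var mapping of the `camunda.operate.importer.*` properties, and the values are illustrative, not recommendations; please verify both against the configuration reference for your Operate version):

```yaml
# Sketch only: importer tuning via environment variables on the Operate deployment.
# Property names are assumed from the Operate importer configuration docs,
# mapped to env-var form; double-check them for your version.
env:
  - name: CAMUNDA_OPERATE_IMPORTER_THREADSCOUNT        # parallel import threads
    value: "5"
  - name: CAMUNDA_OPERATE_IMPORTER_READERTHREADSCOUNT  # readers pulling from the Zeebe record indices
    value: "5"
```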
Hi Nathan,
We have self-managed, and we are in contact with the Camunda team and waiting for their response. Meanwhile, we wanted to try a few things in dev, since we are seeing this in prod. The Camunda team suggested adding an env variable in Operate related to the post importer ignoring missing data, but I am not sure whether this will help. We are also exploring separating the exporter, importer, and webapp from Operate and running multiple importers, but we are not sure whether this will resolve our issue.
@Bittu - this is likely a situation where there are multiple possible causes, and therefore multiple possible solutions, and finding the right one will require some troubleshooting and testing. It depends on environmental factors (vCPUs, I/O throughput, RAM, etc.) as well as process factors (how many PIs at one time, how complex they are, how many tasks/jobs, how many variables, etc.).
We think so too, but the issue is that everything else is working fine. There is no issue in processing, and pod resources are not exhausted; the same is true for Operate. The only issue is that when we have a higher load, Operate becomes slow, and it takes around an hour or two extra to show everything after all instances have completed.
Hi @nathan.loding
To resolve this issue, we are thinking of adding 2 importer components, 1 archiver, and 1 webapp component, but we are not sure how to do it. Can you please help us out here? There is not much documentation available.
Importer and archiver | Camunda 8 Docs
Here is what we are doing, but it's not working.
We have one values.yaml for the deployment in OpenShift. We didn't change anything there and created a new YAML file with our modifications (this was suggested by the Camunda team so that we don't change the default values in the original file). We are using both files for the deployment. This is our Operate component:
```yaml
operate:
  image:
    repository: docker.io/camunda-operate
  env:
    - name: CAMUNDA_OPERATE_ELASTICSEARCH_USERNAME
      value: "some username here"
    - name: CAMUNDA_OPERATE_ELASTICSEARCH_PASSWORD
      valueFrom:
        secretKeyRef:
          name: elasticsearch-es-elastic-user
          key: elastic
    - name: CAMUNDA_OPERATE_ZEEBEELASTICSEARCH_USERNAME
      value: "elastic"
    - name: CAMUNDA_OPERATE_ZEEBEELASTICSEARCH_PASSWORD
      valueFrom:
        secretKeyRef:
          name: elasticsearch-es-elastic-user
          key: elastic
    - name: CAMUNDA_OPERATE_IMPORTER_POSTIMPORTERIGNOREMISSINGDATA
      value: "true"
    - name: JAVA_OPTS
      value: >
        -Djavax.net.ssl.trustStore=location to jks
        -Djavax.net.ssl.trustStorePassword=some password here
  extraVolumes:
    - name: elasticsearch-truststore
      secret:
        secretName: some secret name
    - name: zeebe-client-config-path
      emptyDir: {}
  extraVolumeMounts:
    - name: elasticsearch-truststore
      mountPath: location to truststore
    - name: zeebe-client-config-path
      mountPath: /.camunda
  retention:
    enabled: true
    minimumAge: 1d
```
This is working, but when I try to add more components, it fails to pull the image.
```yaml
operateWebapp:
  archiverEnabled: false
  importerEnabled: false
  image:
    repository: docker.io/camunda-operate
  env:
    - name: CAMUNDA_OPERATE_IMPORTER_ENABLED
      value: "false"
    - name: CAMUNDA_OPERATE_ARCHIVER_ENABLED
      value: "false"
    - name: CAMUNDA_OPERATE_WEBAPP_ENABLED
      value: "true"
    - name: CAMUNDA_OPERATE_ENTERPRISE
      value: "true"
    - name: CAMUNDA_OPERATE_ELASTICSEARCH_USERNAME
      value: "elastic"
    - name: CAMUNDA_OPERATE_ELASTICSEARCH_PASSWORD
      valueFrom:
        secretKeyRef:
          name: elasticsearch-es-elastic-user
          key: elastic
    - name: CAMUNDA_OPERATE_ZEEBEELASTICSEARCH_USERNAME
      value: "elastic"
    - name: CAMUNDA_OPERATE_ZEEBEELASTICSEARCH_PASSWORD
      valueFrom:
        secretKeyRef:
          name: elasticsearch-es-elastic-user
          key: elastic
    - name: CAMUNDA_OPERATE_IMPORTER_POSTIMPORTERIGNOREMISSINGDATA
      value: "true"
    - name: JAVA_OPTS
      value: >
        -Djavax.net.ssl.trustStore=
        -Djavax.net.ssl.trustStorePassword=
  extraVolumes:
    - name: elasticsearch-truststore
      secret:
        secretName:
    - name: zeebe-client-config-path
      emptyDir: {}
  extraVolumeMounts:
    - name: elasticsearch-truststore
      mountPath:
    - name: zeebe-client-config-path
      mountPath: /.camunda
  retention:
    enabled: true
    minimumAge: 1d
operateImporter1:
  archiverEnabled: false
  webappEnabled: false
  clusterNode:
    nodeCount: 2
    currentNodeId: 0
  image:
    repository: docker.io/camunda-operate
  env:
    - name: CAMUNDA_OPERATE_IMPORTER_ENABLED
      value: "true"
    - name: CAMUNDA_OPERATE_ARCHIVER_ENABLED
      value: "false"
    - name: CAMUNDA_OPERATE_WEBAPP_ENABLED
      value: "false"
    - name: CAMUNDA_OPERATE_CLUSTER_NODE_COUNT
      value: "2"
    - name: CAMUNDA_OPERATE_CLUSTER_NODE_CURRENT_NODE_ID
      value: "0"
    - name: CAMUNDA_OPERATE_ENTERPRISE
      value: "true"
    - name: CAMUNDA_OPERATE_ELASTICSEARCH_USERNAME
      value: "elastic"
    - name: CAMUNDA_OPERATE_ELASTICSEARCH_PASSWORD
      valueFrom:
        secretKeyRef:
          name: elasticsearch-es-elastic-user
          key: elastic
    - name: CAMUNDA_OPERATE_ZEEBEELASTICSEARCH_USERNAME
      value: "elastic"
    - name: CAMUNDA_OPERATE_ZEEBEELASTICSEARCH_PASSWORD
      valueFrom:
        secretKeyRef:
          name: elasticsearch-es-elastic-user
          key: elastic
    - name: CAMUNDA_OPERATE_IMPORTER_POSTIMPORTERIGNOREMISSINGDATA
      value: "true"
    - name: JAVA_OPTS
      value: >
        -Djavax.net.ssl.trustStore=
        -Djavax.net.ssl.trustStorePassword=
  extraVolumes:
    - name: elasticsearch-truststore
      secret:
        secretName: camunda-elasticsearch-truststore-secret
    - name: zeebe-client-config-path
      emptyDir: {}
  extraVolumeMounts:
    - name: elasticsearch-truststore
      mountPath:
    - name: zeebe-client-config-path
      mountPath: /.camunda
  retention:
    enabled: true
    minimumAge: 1d
operateImporter2:
  archiverEnabled: false
  webappEnabled: false
  clusterNode:
    nodeCount: 2
    currentNodeId: 1
  image:
    repository: docker.io/camunda-operate
  env:
    - name: CAMUNDA_OPERATE_IMPORTER_ENABLED
      value: "true"
    - name: CAMUNDA_OPERATE_ARCHIVER_ENABLED
      value: "false"
    - name: CAMUNDA_OPERATE_WEBAPP_ENABLED
      value: "false"
    - name: CAMUNDA_OPERATE_CLUSTER_NODE_COUNT
      value: "2"
    - name: CAMUNDA_OPERATE_CLUSTER_NODE_CURRENT_NODE_ID
      value: "1"
    - name: CAMUNDA_OPERATE_ENTERPRISE
      value: "true"
    - name: CAMUNDA_OPERATE_ELASTICSEARCH_USERNAME
      value: "elastic"
    - name: CAMUNDA_OPERATE_ELASTICSEARCH_PASSWORD
      valueFrom:
        secretKeyRef:
          name: elasticsearch-es-elastic-user
          key: elastic
    - name: CAMUNDA_OPERATE_ZEEBEELASTICSEARCH_USERNAME
      value: "elastic"
    - name: CAMUNDA_OPERATE_ZEEBEELASTICSEARCH_PASSWORD
      valueFrom:
        secretKeyRef:
          name: elasticsearch-es-elastic-user
          key: elastic
    - name: CAMUNDA_OPERATE_IMPORTER_POSTIMPORTERIGNOREMISSINGDATA
      value: "true"
    - name: JAVA_OPTS
      value: >
        -Djavax.net.ssl.trustStore=/usr/local/share/ca-certificates/truststore.jks
        -Djavax.net.ssl.trustStorePassword=changeit
  extraVolumes:
    - name: elasticsearch-truststore
      secret:
        secretName: camunda-elasticsearch-truststore-secret
    - name: zeebe-client-config-path
      emptyDir: {}
  extraVolumeMounts:
    - name: elasticsearch-truststore
      mountPath: /usr/local/share/ca-certificates/
    - name: zeebe-client-config-path
      mountPath: /.camunda
  retention:
    enabled: true
    minimumAge: 1d
operateArchiver:
  webappEnabled: false
  importerEnabled: false
  image:
    repository: docker.io/camunda-operate
  env:
    - name: CAMUNDA_OPERATE_IMPORTER_ENABLED
      value: "false"
    - name: CAMUNDA_OPERATE_ARCHIVER_ENABLED
      value: "true"
    - name: CAMUNDA_OPERATE_WEBAPP_ENABLED
      value: "false"
    - name: CAMUNDA_OPERATE_ENTERPRISE
      value: "true"
    - name: CAMUNDA_OPERATE_ELASTICSEARCH_USERNAME
      value: "elastic"
    - name: CAMUNDA_OPERATE_ELASTICSEARCH_PASSWORD
      valueFrom:
        secretKeyRef:
          name: elasticsearch-es-elastic-user
          key: elastic
    - name: CAMUNDA_OPERATE_ZEEBEELASTICSEARCH_USERNAME
      value: "elastic"
    - name: CAMUNDA_OPERATE_ZEEBEELASTICSEARCH_PASSWORD
      valueFrom:
        secretKeyRef:
          name: elasticsearch-es-elastic-user
          key: elastic
    - name: CAMUNDA_OPERATE_IMPORTER_POSTIMPORTERIGNOREMISSINGDATA
      value: "true"
    - name: JAVA_OPTS
      value: >
        -Djavax.net.ssl.trustStore=
        -Djavax.net.ssl.trustStorePassword=
  extraVolumes:
    - name: elasticsearch-truststore
      secret:
        secretName:
    - name: zeebe-client-config-path
      emptyDir: {}
  extraVolumeMounts:
    - name: elasticsearch-truststore
      mountPath:
    - name: zeebe-client-config-path
      mountPath: /.camunda
  retention:
    enabled: true
    minimumAge: 1d
```
Am I adding it correctly? I even tried adding just one importer and disabling everything else, just to test, but in the logs I can still see that all the components are loading: webapp, importer, archiver.
@Bittu - I admit, this isn't something I'm familiar with. Some of it still depends on your environment, and OpenShift has some quirks of its own. I would recommend opening a support ticket for this issue.
Hi @nathan.loding
We have raised a support ticket and they are looking into it. Meanwhile, we are trying a few things out.
I found this in the importer and archiver section of the Operate docs:
| Configuration parameter | Description | Default value |
|---|---|---|
| `camunda.operate.clusterNode.partitionIds` | Array of Zeebe partition ids this Importer (or Archiver) node must be responsible for. | Empty array, meaning all partitions' data is loaded. |
| `camunda.operate.clusterNode.nodeCount` | Total amount of Importer (or Archiver) nodes in the cluster. | 1 |
This is the link to the page:
Importer and archiver | Camunda 8 Docs
Can we directly add this env variable, CAMUNDA_OPERATE_CLUSTERNODE_NODECOUNT=2, to increase the importer node count? Or do we need to do some other configuration as well?
@Bittu - again, it's not something I'm familiar with, but I believe you need the additional configuration you were already attempting. "Each single importer/archiver node must be configured using the following configuration parameters" is what the docs say, so while nodeCount defines the number of nodes, it doesn't configure each individual node.
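If it helps, the pattern the docs describe can be sketched as one env-var pair per importer deployment. The names below are the env-var mapping of the `camunda.operate.clusterNode.*` properties (this mapping is my assumption; verify it against your Operate version):

```yaml
# Sketch only: every importer deployment gets the SAME nodeCount
# and its own UNIQUE, zero-based currentNodeId.

# In the deployment for importer node 0:
- name: CAMUNDA_OPERATE_CLUSTERNODE_NODECOUNT
  value: "2"
- name: CAMUNDA_OPERATE_CLUSTERNODE_CURRENTNODEID
  value: "0"

# In the deployment for importer node 1:
# - name: CAMUNDA_OPERATE_CLUSTERNODE_NODECOUNT
#   value: "2"
# - name: CAMUNDA_OPERATE_CLUSTERNODE_CURRENTNODEID
#   value: "1"
```

In other words, setting nodeCount alone only declares how many importer nodes exist; without a distinct currentNodeId on each pod, the nodes have no way to split the partitions between them.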