Job executor taking a long time to complete

Hi Team,

Version: Camunda BPM 7.12
Setup: Spring Boot
Database: MariaDB 10.3
Environment: OpenShift
No changes to the default configuration.

We noticed that sometimes Camunda job execution takes a long time (around 2 minutes) to commit when we use the ‘async-before’ option on service tasks.

We have three Spring Boot Camunda apps running against one Camunda DB (as depicted in the attached diagram).
[architecture diagram attached] opal_deal_eval_processing_user_rule_process.bpmn (8.1 KB)

Below are the Spring data-source related properties used in our Spring Boot apps; we have mostly not altered the default Camunda properties.

Attaching the BPMN files used.

Please let me know if any more details are needed.

Thanks,
Prabhakar

How exactly are you getting this measurement?

Hi @Niall,
We are invoking Camunda message correlation from a helper class, as shown below, and simply comparing the time before and after the call returns to calculate the elapsed time.
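Roughly, the helper looks like this (a simplified sketch; the message name, class name and logging here are placeholders, not our actual code):

```java
import org.camunda.bpm.engine.RuntimeService;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

@Component
public class CorrelationHelper {

    private static final Logger LOG = LoggerFactory.getLogger(CorrelationHelper.class);

    private final RuntimeService runtimeService;

    public CorrelationHelper(RuntimeService runtimeService) {
        this.runtimeService = runtimeService;
    }

    public void correlate(String businessKey) {
        long start = System.currentTimeMillis();

        // The call returns after the engine has progressed the process in the
        // calling (client) thread up to the next wait state or async continuation
        // and committed that work.
        runtimeService.createMessageCorrelation("DealEvaluationMessage") // hypothetical message name
            .processInstanceBusinessKey(businessKey)
            .correlateWithResult();

        long elapsed = System.currentTimeMillis() - start;
        LOG.info("Message correlation for businessKey {} took {} ms", businessKey, elapsed);
    }
}
```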

In each service task we have a logger, and through that we found out which service task took the longest to complete.

It really depends on how you’ve configured the wait states in your process and also on what exactly your code is doing.
When you’re sending a message to the engine, it’s the client thread that’s progressing the state, not the engine’s threads.

If you have a wait state before your service task, it creates a job for the engine to pick up. You could be counting the time the job waits to be picked up as well as the execution time. In that case, changing the job executor settings so that more jobs get picked up, or giving the executor a bigger thread pool, might help.
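With the Spring Boot starter those settings live under the camunda.bpm.job-execution properties in application.yaml, something like this (the values are just an example to show the shape, not a recommendation for your load):

```yaml
camunda:
  bpm:
    job-execution:
      enabled: true
      # how many jobs a single acquisition cycle fetches from the database
      max-jobs-per-acquisition: 10
      # thread pool that actually executes the acquired jobs
      core-pool-size: 10
      max-pool-size: 25
      # in-memory queue of acquired jobs waiting for a free thread
      queue-capacity: 10
```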

Hi @Niall,

Thanks for your quick reply.
Currently we are using the default settings for the job executor:
max-jobs-per-acquisition - 3
max-pool-size - 10
queue-capacity - 3

We will try increasing these settings. Please clarify the points below.

  1. Do we need to re-create the Camunda DB for these settings to take effect?
  2. Do we need to specify these settings in all the Camunda apps, or only in whichever app needs them?

Thanks,
Prabhakar

Hi @Prabhakar_Mariyappan

No need to recreate the database. Just restarting the application should work seamlessly.

If it’s a homogeneous cluster setup, then you need to set these properties on all the nodes in the cluster. If it’s a heterogeneous cluster, then set them only on the nodes on which job execution is enabled.
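For example, in a heterogeneous setup the nodes that should not run jobs can switch the executor off entirely via the starter property (sketch):

```yaml
# application.yaml on a node that should not execute jobs
camunda:
  bpm:
    job-execution:
      enabled: false
```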

Thanks @aravindhrs.

We are expecting a load of around half a million transactions a day. Is there any relation between the params below (a specific combination that should be used together)?

max-jobs-per-acquisition - 10
max-pool-size - 25
queue-capacity - 10

Or can we specify values for these props independently, according to the load?

Thanks again!