Not able to perform concurrent DMN tasks

Hi team, I want to perform DMN tasks for 1600 records at once. Doing it all in one go takes 1 min 20 seconds, so I decided to split the records into batches of 200 and run them in parallel, but to my dismay this also takes the same time. Then I read online and turned asynchronous before to true on the subprocess, and the time dropped to 45 seconds. I also tried unchecking exclusive, but the time stays the same. Is there any way to do this task in under 20 seconds? I have 1600 text descriptions coming in and I am checking them in the rule engine using the contains function, with 30 rules. Is there any way to run this in parallel, as I see a single batch only takes 5-10 seconds?
Here is the image:


Here is the BPMN file:
mediaCheck.bpmn (12.3 KB)
Thanks for your support.

Hi @Aryan_Agarwal - nice question. If you are expecting your embedded subprocess to truly run in parallel (using different job executor threads), then I think you need to adjust the “async” configuration of both the embedded sub-process and the DMN MediaCheck.

Change 1 - Processing Each Batch

The before asynchronous continuation you have configured only introduces a wait state before the embedded sub-process multi-instance; it doesn’t result in parallel creation and execution of each instance of your sub-process. If you want to keep this checkpoint that is fine, but I’d leave exclusive enabled on it, so as not to confuse things.

To get true parallel batches you need to enable the multi-instance asynchronous before and make it non-exclusive. This makes it possible for the job executor to run batches concurrently on different threads.


[screenshot: embedded sub-process multi-instance configuration, before … change to … after]
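In the BPMN XML that configuration ends up as camunda:asyncBefore and camunda:exclusive attributes on the multiInstanceLoopCharacteristics of the sub-process. The snippet below is only a rough sketch of the idea; the ids, collection and element variable names are made up for illustration and not taken from mediaCheck.bpmn:

```xml
<!-- Hypothetical excerpt; ids and variable names are examples, not from mediaCheck.bpmn -->
<bpmn:subProcess id="ProcessBatchSubProcess" name="Process Batch">
  <!-- Parallel multi-instance over the batches. asyncBefore creates a job per batch
       instance, and exclusive="false" lets the job executor pick those jobs up on
       different threads at the same time. -->
  <bpmn:multiInstanceLoopCharacteristics
      isSequential="false"
      camunda:asyncBefore="true"
      camunda:exclusive="false"
      camunda:collection="${batches}"
      camunda:elementVariable="batch" />
  <!-- ... DMN MediaCheck and anything else that processes one batch ... -->
</bpmn:subProcess>
```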

Change 2 - DMN MediaCheck

Remove the async before; it won’t help and only introduces the extra overhead of another write to the DB. After change 1 you already have a thread dedicated to running the contents of one batch. Because you are running sequentially through each element of the batch, I’d just let that single thread rip through and not introduce any continuations on the DMN MediaCheck multi-instance.

But I would enable the after asynchronous continuation with exclusive enabled. This gives a cleaner synchronisation at the end of each batch and should more or less eliminate the OptimisticLockingExceptions (and retries) you would otherwise encounter as each batch finishes and rushes to update the internal state (loopCounter, etc.) of the enclosing embedded sub-process multi-instance!


[screenshot: DMN MediaCheck asynchronous continuation configuration, before … change to … after]
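Again, purely to illustrate where that setting lands in the XML: my reading of the advice is asynchronous after plus exclusive on the MediaCheck activity itself (the multi-instance body), while the sequential per-element loop stays free of continuations. The id, decision reference and variable names below are assumptions, not taken from your model:

```xml
<!-- Hypothetical excerpt; id, decisionRef and variables are examples only -->
<bpmn:businessRuleTask id="MediaCheckTask" name="DMN MediaCheck"
    camunda:decisionRef="mediaCheck"
    camunda:asyncAfter="true"
    camunda:exclusive="true">
  <!-- Sequential multi-instance over the descriptions in one batch.
       No async flags here: one job executor thread runs straight through the batch. -->
  <bpmn:multiInstanceLoopCharacteristics
      isSequential="true"
      camunda:collection="${batch}"
      camunda:elementVariable="description" />
</bpmn:businessRuleTask>
```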

I’ve done some testing locally and these changes gave me the best results, but it’s always a case of doing structured testing with different scenarios.

The other variables are the machine architecture, CPU/cores, the JVM you use, and also how you configure the Camunda JobExecutor - in particular the number of threads you allocate to it.
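For example, if you are on the shared engine (e.g. the Tomcat distribution), the job executor thread pool is sized in bpm-platform.xml; the Spring Boot starter has equivalent camunda.bpm.job-execution.* properties. The values below are just an illustrative starting point, not a recommendation:

```xml
<!-- bpm-platform.xml, job executor thread pool (example values only) -->
<job-executor>
  <job-acquisition name="default" />
  <!-- more worker threads allow more of the non-exclusive batch jobs to run concurrently -->
  <core-threads>5</core-threads>
  <max-threads>10</max-threads>
  <queue-size>5</queue-size>
</job-executor>
```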

Hopefully this advice is accurate; let me know if it improves anything! Good luck!


Thanks a lot, that was brilliant!