Service orchestration with subprocesses / How to trigger job acquisition explicitly

Hi, I have some questions and would be really happy if someone could help me with some hints. :slight_smile:

Background

We want to orchestrate the execution of tasks between different applications with Camunda BPM. Imagine multiple applications, each with its own workflow-engine (like the hybrid approach described here), on top of a shared database.

We are in a Spring-Boot environment.

Example

There are two applications: a parent application whose workflow-engine deploys the parent-process.bpmn, and a second application whose workflow-engine deploys the child-process.bpmn. The parent-process.bpmn executes a ServiceTask (in this case just logging), then triggers the "Call Subprocess" CallActivity (which does its work) and then executes another ServiceTask. (With the asynchronous boundaries set correctly, this works like a charm.)

Because both applications are deployment-aware, the child process is only executed when it gets picked up by the JobExecutor of the child workflow-engine. So the child is continuously polling the database for unlocked jobs, and it can take quite a long time until the subprocess gets picked up.

Questions

  1. Is there anything in this ecosystem to eliminate the necessity of this polling? In some cases it is just not fast enough (imagine the surrounding ServiceTasks were UserTasks; no user wants to wait that long).

What we have in mind is something like an event handling mechanism. An event that is thrown when the process instance is pausing the execution. This event can then get picked up by any other responsible workflow-engine to continue the process instance execution. (E.g. with a message bus)
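A minimal, engine-agnostic sketch of this idea (all class and method names here are hypothetical, not Camunda API): the engine that pauses an instance publishes the job id to a shared bus, and the responsible engine blocks on the bus instead of polling on a fixed interval.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the proposed hand-off: instead of each engine
// polling the database, the engine that pauses a process instance
// publishes the job id, and the responsible engine wakes immediately.
public class EventDrivenHandoff {

    // Stands in for a message bus (a queue/topic in a real broker).
    private final BlockingQueue<String> bus = new LinkedBlockingQueue<>();

    // Producer side: called when a process instance reaches an async
    // continuation owned by another engine.
    public void publishNewJob(String jobId) {
        bus.add(jobId);
    }

    // Consumer side: replaces the fixed-interval polling loop. Returns
    // the job id to acquire, or null if nothing arrived in time.
    public String awaitNewJob(long timeoutMillis) throws InterruptedException {
        return bus.poll(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        EventDrivenHandoff handoff = new EventDrivenHandoff();
        handoff.publishNewJob("job-42");
        System.out.println(handoff.awaitNewJob(100)); // prints job-42
    }
}
```

In a real setup the BlockingQueue would be a broker (RabbitMQ, Kafka, ...) and the consumer side would feed the received ids into the engine's job acquisition.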

The camunda-bpm-reactor seemed to go into this direction, but I read somewhere in this forum that it is not under development anymore.

  2. Is there any way to trigger the job acquisition explicitly? The only possibility I found is the ManagementService, but its intention seems very different.

  3. Is it a bad idea to model this without MessageEvents? We model it without them because, in our opinion, this is not business-related but architecture-related.

So to reiterate: when the ParentProcess creates an instance of the ChildProcess, the ChildApp takes too long to pick up the job from the shared DB?

If this is correct, you can change the polling frequency. These guys do a great job of explaining the various concepts of clustering and the "catch-22s":

What you seem to be describing is essentially a heterogeneous cluster. By tweaking the job executor settings as described in the video, you can get faster pickup.
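For a Camunda 7 engine, the relevant knobs are the JobExecutor's wait time and maximum idle backoff (the debug log later in this thread shows the defaults of roughly 5 s and 60 s). A hedged sketch of an engine plugin that shortens them, assuming the Spring Boot starter picks up ProcessEnginePlugin beans; the class name is made up, the setters are from the Camunda 7 Java API:

```java
import org.camunda.bpm.engine.impl.cfg.AbstractProcessEnginePlugin;
import org.camunda.bpm.engine.impl.cfg.ProcessEngineConfigurationImpl;
import org.springframework.stereotype.Component;

// Sketch only: tightens the acquisition loop so foreign jobs in a
// heterogeneous cluster are picked up faster, at the cost of more
// frequent database polling.
@Component
public class FastAcquisitionPlugin extends AbstractProcessEnginePlugin {

    @Override
    public void postInit(ProcessEngineConfigurationImpl configuration) {
        // Default is 5000 ms between acquisition cycles.
        configuration.getJobExecutor().setWaitTimeInMillis(1000);
        // The idle backoff grows up to 60000 ms by default; cap it at 5 s.
        configuration.getJobExecutor().setMaxWait(5000);
    }
}
```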


Thank you Stephen, this will help for most cases I think.

But we still wonder whether there is a possibility to actively trigger the JobAcquisition thread.
For example, it is possible to execute the AcquireJobsCmd with the CommandExecutor, but it seems intended for internal use:

    @Autowired
    private JobExecutor jobExecutor;

    @Autowired
    private CommandExecutor commandExecutor;

    // Runs a single acquisition cycle through the engine's command pipeline.
    // Note: AcquireJobsCmd lives in an impl package, i.e. internal API.
    @GetMapping(value = "/acquire")
    public void acquire() {
        AcquiredJobs acquiredJobs =
                this.commandExecutor.execute(new AcquireJobsCmd(this.jobExecutor));
        // acquiredJobs only *acquires* (locks) jobs; it does not execute them.
    }

The documentation in that regard is not very detailed, so I wonder what kind of problems we would buy ourselves with this. Is there a recommended way to trigger the job acquisition actively?

Are you trying to use UserTasks to create a page flow (like a wizard)?

In the example in your original post you mention user tasks in sequential order; in this pattern you would have synchronous steps, so there would be no need to manually execute a job, because execution continues automatically when you move from UserTask1 to UserTask2.

Why do you believe you need to start the job executor manually?

Hi @Robin,

you don’t need to trigger the job executor actively.

When you set the logging level of "org.camunda.bpm.engine.jobexecutor" to debug, you can follow the job acquisition and execution in the console (or your log file).

Every time a new job is inserted, the engine will pick it up immediately:

09:23:54.179 [JobExecutor[org.camunda.bpm.engine.spring.components.jobexecutor.SpringJobExecutor]] DEBUG org.camunda.bpm.engine.jobexecutor - ENGINE-14012 Job acquisition thread woke up
09:23:54.181 [JobExecutor[org.camunda.bpm.engine.spring.components.jobexecutor.SpringJobExecutor]] DEBUG org.camunda.bpm.engine.jobexecutor - ENGINE-14022 Acquired 0 jobs for process engine 'default': []
09:23:54.182 [JobExecutor[org.camunda.bpm.engine.spring.components.jobexecutor.SpringJobExecutor]] DEBUG org.camunda.bpm.engine.jobexecutor - ENGINE-14011 Job acquisition thread sleeping for 59997 millis
09:24:00.283 [http-nio-8081-exec-2] DEBUG org.camunda.bpm.engine.jobexecutor - ENGINE-14017 Notifying Job Executor of new job
09:24:00.284 [JobExecutor[org.camunda.bpm.engine.spring.components.jobexecutor.SpringJobExecutor]] DEBUG org.camunda.bpm.engine.jobexecutor - ENGINE-14012 Job acquisition thread woke up
09:24:00.318 [JobExecutor[org.camunda.bpm.engine.spring.components.jobexecutor.SpringJobExecutor]] DEBUG org.camunda.bpm.engine.jobexecutor - ENGINE-14022 Acquired 1 jobs for process engine 'default': [[13d9926b-7550-11e9-b6c4-00155d23094d]]
09:24:00.318 [JobExecutor[org.camunda.bpm.engine.spring.components.jobexecutor.SpringJobExecutor]] DEBUG org.camunda.bpm.engine.jobexecutor - ENGINE-14023 Execute jobs for process engine 'default': [13d9926b-7550-11e9-b6c4-00155d23094d]
09:24:00.322 [JobExecutor[org.camunda.bpm.engine.spring.components.jobexecutor.SpringJobExecutor]] DEBUG org.camunda.bpm.engine.jobexecutor - ENGINE-14022 Acquired 0 jobs for process engine 'default': []
09:24:00.322 [JobExecutor[org.camunda.bpm.engine.spring.components.jobexecutor.SpringJobExecutor]] DEBUG org.camunda.bpm.engine.jobexecutor - ENGINE-14011 Job acquisition thread sleeping for 4998 millis

Hope this helps, Ingo

Hi @Ingo_Richtsmeier,

that the job acquisition is logged at debug level is a very nice hint. But I think in your scenario there are no distributed applications, because the JobExecutor thread is always the same, while in our environment there are multiple workflow-engines picking up asynchronous jobs. With a single workflow-engine the job acquisition indeed starts immediately.

But this line looks very interesting, because it reads as if there were a possibility to notify the job executor of new jobs:

09:24:00.283 [http-nio-8081-exec-2] DEBUG org.camunda.bpm.engine.jobexecutor - ENGINE-14017 Notifying Job Executor of new job
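That log line comes from the engine's internal notification hook. If you really want to wake the acquisition thread yourself, a hedged sketch: this uses `JobExecutor.jobWasAdded()`, which lives in an impl package, so it is internal API without stability guarantees, and the `/wake-up` endpoint is purely hypothetical.

```java
import org.camunda.bpm.engine.impl.jobexecutor.JobExecutor;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Sketch only: lets a peer application poke this engine after it has
// committed a job that this engine is responsible for executing.
@RestController
public class JobWakeupController {

    @Autowired
    private JobExecutor jobExecutor;

    // Wakes the acquisition thread immediately instead of waiting for
    // the next poll interval. Internal API, use at your own risk.
    @GetMapping("/wake-up")
    public void wakeUp() {
        jobExecutor.jobWasAdded();
    }
}
```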

We currently do not have a specific scenario where we need it; we are rather evaluating what the different workflow-engines can and cannot do. And in a possible scenario where a lot of different applications (read: microservices) talk to each other, we think the latency will increase drastically with the database-polling approach, so we thought about alternatives.

Flowable, for example, seems to have a message-queue-based async executor with the following feature (but sadly no deployment awareness :frowning:):

When a new async job is created by the engine, a message is put on a message queue (in a transaction committed transaction listener, so we’re sure the job entry is in the database) containing the job identifier. A message consumer then takes this job identifier to fetch the job, and execute the job.

Which sounds neat, so we just wondered if in the camunda ecosystem something like that exists as well.

If it is not allowed to post about features of other workflow engines here, please tell me and I will fix this post.

You can recreate that Flowable feature with a transaction listener on the DB/command executor.


I will look into it. Thank you for your help. :slight_smile:

Take a look at this as an example:

You can register a transaction listener on the COMMITTED state of the transaction, look in the entities that are part of the transaction for a Job entity (or whatever specific type of job), and then pass that to an event bus / message queue.

That example above is attached on a per-process-instance basis, but you should be able to use an engine plugin to attach the listener as part of the engine's configuration init (the same listener, just attached at a different point).
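A hedged sketch of the committed-transaction hand-off using the engine's internal TransactionListener API (impl package, so no public-API guarantees; the class name and the bus-publishing method are placeholders for your own broker client):

```java
import org.camunda.bpm.engine.impl.cfg.TransactionState;
import org.camunda.bpm.engine.impl.interceptor.CommandContext;

// Sketch only: fires after the job insert is committed, so a consumer
// that receives the id is guaranteed to find the row in the database.
public class JobCommittedNotifier {

    // Placeholder for a real broker client (RabbitTemplate, KafkaProducer, ...).
    private void publishToBus(String jobId) {
        System.out.println("new job committed: " + jobId);
    }

    // Call this from wherever you detect the new Job entity in the
    // current command's transaction.
    public void onJobInserted(CommandContext commandContext, String jobId) {
        commandContext.getTransactionContext().addTransactionListener(
            TransactionState.COMMITTED,
            ctx -> publishToBus(jobId));
    }
}
```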


But with this I can only announce that a new job has been committed.

Isn’t the missing part still that I need to trigger the job acquisition on the other application?

Yes. But you asked for the same feature that Flowable mentions in their docs:

When a new async job is created by the engine, a message is put on a message queue (in a transaction committed transaction listener, so we’re sure the job entry is in the database) containing the job identifier. A message consumer then takes this job identifier to fetch the job, and execute the job.

In Flowable you would still need to set up a message consumer to take the job and trigger the execution.

You will also have race scenarios where the job executor picks up the job before you get the chance to execute it, assuming you leave the job executor daemon active.