Fetch and Lock External Task for a specific process ID

Hi, I am processing external tasks in a separate microservice, which gets notified (through Kafka) by the Camunda engine whenever an external task needs to be processed.
Currently there is no way for me to fetch and lock a task filtered by a particular process instance ID (I am using the REST API), since the fetch-and-lock endpoint does not accept such a filter as a parameter.
This causes issues when I try to fetch and process external tasks that are on the same topic but belong to different process instances. Because this happens in parallel, the fetch and lock ends up locking all of the tasks.
The way I have implemented it is to fetch the tasks, look for the one I am interested in, and then unlock the other tasks (putting them back).
If a lot of workers are simultaneously trying to do this, some of the processes do not find the service task of interest.
The way I am trying to get around this issue is a retry mechanism: after some random sleep time, do a fetch and lock again and look for the task I am interested in (sketched at the end of this post).

Is there any way I can solve this, given that the REST API offers no way to fetch and lock a specific task?
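For reference, here is a minimal sketch of that workaround, assuming Java 11's HttpClient plus Jackson for JSON; the engine base URL, worker id, topic, lock duration, and retry limit are all placeholders:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ThreadLocalRandom;

public class FetchLockWorkaround {

    static final String ENGINE = "http://localhost:8080/engine-rest"; // assumed base URL
    static final HttpClient HTTP = HttpClient.newHttpClient();
    static final ObjectMapper JSON = new ObjectMapper();

    static String post(String path, String body) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(ENGINE + path))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        return HTTP.send(req, HttpResponse.BodyHandlers.ofString()).body();
    }

    /** Lock a batch on the topic, keep the task of the wanted instance, unlock the rest. */
    static String fetchForInstance(String workerId, String topic, String instanceId) throws Exception {
        String request = "{\"workerId\":\"" + workerId + "\",\"maxTasks\":10,"
                + "\"topics\":[{\"topicName\":\"" + topic + "\",\"lockDuration\":10000}]}";
        JsonNode tasks = JSON.readTree(post("/external-task/fetchAndLock", request));
        String found = null;
        for (JsonNode task : tasks) {
            if (found == null && instanceId.equals(task.path("processInstanceId").asText())) {
                found = task.path("id").asText();                 // keep our lock on this one
            } else {
                post("/external-task/" + task.path("id").asText() + "/unlock", ""); // put it back
            }
        }
        return found; // null if the instance's task was not in this batch
    }

    /** Retry with a random sleep, as described above. */
    static String fetchWithRetry(String workerId, String topic, String instanceId) throws Exception {
        for (int attempt = 0; attempt < 5; attempt++) {           // retry limit is arbitrary
            String taskId = fetchForInstance(workerId, topic, instanceId);
            if (taskId != null) return taskId;
            Thread.sleep(500 + ThreadLocalRandom.current().nextLong(1500)); // random backoff
        }
        return null;
    }
}
```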

Why don’t you use different topics for this case?


The service task is the same service task, called in the same process but from different instances of it. I have two processes: process 1 creates multiple instances of process 2 based on certain variable values. When process 2 starts, the service task in question is called by each of the newly created process 2 instances.
So changing the topic is not feasible here. If you mean setting the topic dynamically, I don't think that is a very clean solution either.
I believe the REST API should be extended to allow locking a particular instance of the service task belonging to a particular topic.

What’s the reasoning and use case behind your design with notifications via Kafka where you would need this functionality? Why not have work distributed via regular fetch-and-lock requests along with appropriate backoff to avoid hammering the database with requests?
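For illustration, a minimal sketch of such a backoff loop; the fetch and processing callbacks, initial interval, and cap are all placeholders:

```java
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Supplier;

// Poll with exponential backoff: wait longer while fetch-and-lock returns
// nothing, reset once work appears. `fetch` stands in for a call to the
// fetchAndLock endpoint and `process` for the actual task handling.
public class BackoffPoller {

    public static <T> void poll(Supplier<List<T>> fetch, Consumer<T> process)
            throws InterruptedException {
        long backoffMs = 500;             // assumed initial wait
        final long maxBackoffMs = 30_000; // assumed cap
        while (!Thread.currentThread().isInterrupted()) {
            List<T> tasks = fetch.get();
            if (tasks.isEmpty()) {
                Thread.sleep(backoffMs);  // nothing to do: back off
                backoffMs = Math.min(backoffMs * 2, maxBackoffMs);
            } else {
                backoffMs = 500;          // work arrived: reset the interval
                tasks.forEach(process);
            }
        }
    }
}
```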

This is to enable a custom UI that is driven by the workflow (which acts more like an orchestrator).
The service tasks are executed in a microservice that is cloud-enabled and can be scaled up if need be.

Wouldn’t it be better to model the tasks as user tasks? They can be fetched by process instance ID and activity ID.

These do not need any user input. They are purely backend tasks.

I believe what Ingo may be saying is that a user task will give you the semantics you are after. A user task does not necessarily have to use a form or have any input. Hence, in your case, you could consider treating a system as a process user.

You can 'lock' a task by claiming it, and you can complete the task, again using the REST API, to mark the task as complete.
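For example, against the Task REST API (the base URL and the "system" user id are placeholders; the task id would be parsed from the JSON the query returns):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch of the user-task approach: query by process instance and activity,
// claim the task for a "system" user, then complete it.
public class SystemUserTaskClient {

    static final String ENGINE = "http://localhost:8080/engine-rest"; // assumed base URL
    static final HttpClient HTTP = HttpClient.newHttpClient();

    static String send(HttpRequest req) throws Exception {
        return HTTP.send(req, HttpResponse.BodyHandlers.ofString()).body();
    }

    static HttpRequest post(String path, String json) {
        return HttpRequest.newBuilder(URI.create(ENGINE + path))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
    }

    /** Find the user task of one process instance at one activity. */
    static String queryTasks(String processInstanceId, String activityId) throws Exception {
        return send(HttpRequest.newBuilder(URI.create(ENGINE
                + "/task?processInstanceId=" + processInstanceId
                + "&taskDefinitionKey=" + activityId)).GET().build());
    }

    /** "Lock" the task by claiming it, then mark it as complete. */
    static void claimAndComplete(String taskId) throws Exception {
        send(post("/task/" + taskId + "/claim", "{\"userId\":\"system-worker\"}")); // placeholder user
        send(post("/task/" + taskId + "/complete", "{\"variables\":{}}"));
    }
}
```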

regards

Rob

One more thing I wanted to point out: the rate at which service tasks are created in our processes is very low. If they were processed in a batch, the batch size would almost always be 1 because of the slow creation rate, so the load on the database would boil down to the same as if I processed them one at a time.
In my opinion, the REST API should offer both options (fetch and lock in a batch as well as a single particular instance) and let the user choose based on the use case. This would offer more flexibility.
