Hello,
At the moment I am using the Camunda FetchAndLock API. I have a C# "worker" implementation for every unique topic name (external BPMN task); each worker polls for "work" with a FetchAndLock request, with a timer between polls. So at the moment I have a 1:1 relation between my workers and the different external tasks in the different BPMN schemes I am using.
Side note: I am using the external tasks in a "generic" manner, so the same external task can be used in different BPMNs with the same code-behind logic.
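Roughly, each of these per-topic workers looks like the sketch below (heavily simplified; the class name, base URL, lock duration and poll delay are just placeholders for illustration):

```csharp
// Simplified sketch of one of my current per-topic workers (names/values are placeholders).
// Each worker polls a single topic via POST /external-task/fetchAndLock and sleeps between polls.
using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;

public class SingleTopicWorker
{
    private static readonly HttpClient Http = new HttpClient();

    public async Task RunAsync(string camundaBaseUrl, string topicName, CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            var request = new
            {
                workerId = $"worker-{topicName}",
                maxTasks = 10,
                topics = new[]
                {
                    new { topicName, lockDuration = 30000 }
                }
            };

            var response = await Http.PostAsync(
                $"{camundaBaseUrl}/external-task/fetchAndLock",
                new StringContent(JsonSerializer.Serialize(request), Encoding.UTF8, "application/json"),
                ct);

            var lockedTasksJson = await response.Content.ReadAsStringAsync();
            // ... deserialize lockedTasksJson, run this topic's code-behind logic,
            // then POST /external-task/{id}/complete for each locked task ...

            // The timer between polls.
            await Task.Delay(TimeSpan.FromSeconds(5), ct);
        }
    }
}
```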
I am thinking about making some adjustments, maybe going as far as having only one such C# "worker" that polls for work from all of the active external tasks across my BPMN schemes and active workflows with the FetchAndLock API.
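What I have in mind is essentially the same request, but with one topic entry per external task, something like this (sketch only; the topic names and numbers are made up):

```csharp
// Sketch of the "1 worker : all topics" idea - one fetchAndLock request listing every
// external-task topic I use across my BPMN schemes (names and values below are made up).
var request = new
{
    workerId = "generic-worker-1",
    maxTasks = 50,
    topics = new[]
    {
        new { topicName = "send-email",      lockDuration = 30000 },
        new { topicName = "charge-payment",  lockDuration = 30000 },
        new { topicName = "generate-report", lockDuration = 30000 }
        // ... one entry per external-task topic ...
    }
};
// POSTed to {camundaBaseUrl}/external-task/fetchAndLock as before; the worker would then
// dispatch each locked task to the right code-behind handler based on its topicName.
```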
I wanted to ask a few questions about this:
- Is there a limit to how many different external tasks (topics) I can ask for in a single FetchAndLock request?
- I read that the `usePriority` parameter in the FetchAndLock API can be set to decide how external tasks are fetched: by their priority or arbitrarily (I've put a small sketch of how I understand that parameter would be set after these questions). Is there some mechanism to prevent external-task starvation? For example, let's say I launch 10,000 instances of the few BPMN workflow schemes I am working with. Could an active external task end up waiting minutes or hours before it is picked up, because the FetchAndLock request picks tasks arbitrarily? Or even worse, if I keep launching more and more workflow instances, could that active external task be "forgotten" (starved), so that its workflow instance does not move forward for hours or even days?
- I was wondering what a better approach for using the FetchAndLock API might be. Is it one C# worker per external task (1:1), one worker for all of the tasks (1:all), or maybe some sort of hybrid, for example separating the external tasks logically into groups and having one worker poll each such group?
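For reference, this is how I understand `usePriority` would be switched on in the request body (sketch only, same made-up values as above):

```csharp
// Sketch only - same fetchAndLock payload as above, with priority-based fetching turned on.
var request = new
{
    workerId = "generic-worker-1",
    maxTasks = 50,
    usePriority = true,   // my understanding: fetch higher-priority external tasks first
    topics = new[]
    {
        new { topicName = "send-email", lockDuration = 30000 }  // example topic
    }
};
```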
Thank you in advance!