We’d like to limit the number of instances that can perform an activity concurrently.
We need a limit at activity- or task-level so we don’t overload an interface. We’ve already set a global limit for the thread pool and we’d prefer not to lower it for the sake of one or two tasks.
After some research, it doesn't seem to be possible "off-the-shelf".
(For example, a maximum-instances setting on an activity that is checked during job acquisition and compared against how many jobs of that activity are already locked, or something like that.)
The general consensus seems to be either to limit at another layer or to add extra steps to the process, neither of which would be an ideal solution for us.
Another approach you could consider is to make this an external task. That way you can limit the number of concurrent executions by limiting the number of external workers…
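A minimal sketch of that idea using the Camunda external task client; the topic name, REST endpoint, and the single-task limit are assumptions rather than anything from the process at hand:

```java
import org.camunda.bpm.client.ExternalTaskClient;

public class InterfaceWorker {

    public static void main(String[] args) {
        // One client that fetches at most one task per poll; running exactly
        // one such worker caps the concurrency for this topic at one.
        ExternalTaskClient client = ExternalTaskClient.create()
            .baseUrl("http://localhost:8080/engine-rest") // assumed REST endpoint
            .maxTasks(1)                                  // fetch at most one task at a time
            .asyncResponseTimeout(10000)                  // long polling
            .build();

        client.subscribe("interface-call")                // assumed topic name
            .lockDuration(60000)
            .handler((externalTask, externalTaskService) -> {
                // call the rate-limited interface here
                externalTaskService.complete(externalTask);
            })
            .open();
    }
}
```

The effective limit is the number of worker processes times the tasks each handles in parallel, so it is controlled entirely outside the engine's thread pool.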
In addition, you could configure an additional, co-located logical engine with its own job executor, which runs a single repeating process that polls as the external worker…
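If you go that route, a rough sketch of bootstrapping such a second engine with a deliberately small job executor pool could look like the following; it relies on engine `impl` classes, and the engine name, datasource, and pool sizes are assumptions:

```java
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.ProcessEngineConfiguration;
import org.camunda.bpm.engine.impl.cfg.ProcessEngineConfigurationImpl;
import org.camunda.bpm.engine.impl.jobexecutor.DefaultJobExecutor;

public class PollerEngineBootstrap {

    public static ProcessEngine buildPollerEngine() {
        // Dedicated job executor with a single worker thread, so the polling
        // process on this engine never runs more than one job concurrently.
        DefaultJobExecutor jobExecutor = new DefaultJobExecutor();
        jobExecutor.setCorePoolSize(1);
        jobExecutor.setMaxPoolSize(1);

        ProcessEngineConfigurationImpl config = (ProcessEngineConfigurationImpl)
            ProcessEngineConfiguration.createStandaloneProcessEngineConfiguration()
                .setProcessEngineName("pollerEngine")      // assumed engine name
                .setJdbcUrl("jdbc:h2:./poller-db")         // assumed datasource
                .setJobExecutorActivate(true);

        config.setJobExecutor(jobExecutor);
        return config.buildProcessEngine();
    }
}
```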
Another method I've seen, if you are calling the external system via a remote procedure call using the Apache HTTP client library, is to configure the maximum number of concurrent client connections. In that case, requests issued by the engine's threads of control are queued inside the HTTP library. This limits concurrent requests to the external service, but there is a risk that blocked threads time out. It also ties up engine resources, so it can impact overall throughput in the engine…
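For reference, limiting the connections looks roughly like this with Apache HttpClient 4.x; the limit of five connections is an arbitrary assumption:

```java
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class LimitedHttpClientFactory {

    public static CloseableHttpClient create() {
        // The pooling connection manager caps how many requests can be in
        // flight at once; additional engine threads block until a connection
        // is free, which is where the timeout risk mentioned above comes from.
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setMaxTotal(5);            // at most 5 concurrent connections overall
        cm.setDefaultMaxPerRoute(5);  // and at most 5 per target host

        return HttpClients.custom()
            .setConnectionManager(cm)
            .build();
    }
}
```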
Thanks for the suggestions. They would work, but we were looking for a more integrated way.
The conditional event example seems like what we're looking for, provided it works across process instances. If a script is used to evaluate the condition, does the event still only listen for changes to a process variable? If so, we probably can't use a conditional event unless there is some kind of global process variable the event could listen to.
I read that it's possible to trigger all the event subscriptions, so it might still work. However, conditional events may not be feasible in our case because of race conditions.
You could also remove the “set interface check variable” script and turn it into a Start listener script on the “Interface is Free” conditional.
The idea is that this design does essentially the same thing as a modified Job Executor that checks whether the interface is free and available to be used:
When you reach a point where the interface is going to be used, a conditional event makes the process wait. When that conditional is first met, an instance of the Event Sub Process is activated and we check whether the interface is in use (this could be a runtime query on the BPM engine that looks at how many instances of the activity definition are currently active). If the interface is free, we update the parent variable so the "Interface is Free" conditional fires and we execute the code against the interface.
If the interface is busy, we wait some period of time N (in this design, 2 min) and then check the interface again.
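The busy check and the variable update could, for example, live in a delegate along these lines; the activity id, variable name, and limit are assumptions:

```java
import org.camunda.bpm.engine.RuntimeService;
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

public class InterfaceFreeCheckDelegate implements JavaDelegate {

    private static final long MAX_CONCURRENT = 1; // assumed limit

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        RuntimeService runtimeService =
            execution.getProcessEngineServices().getRuntimeService();

        // Count executions currently sitting in the activity that uses the interface.
        long active = runtimeService.createExecutionQuery()
            .activityId("callInterfaceTask")  // assumed activity id
            .active()
            .count();

        // Setting the variable is what lets the "Interface is Free" conditional fire.
        if (active < MAX_CONCURRENT) {
            execution.setVariable("interfaceFree", true);
        }
    }
}
```

Note that the count and the variable update are not one atomic step, which is exactly where the race conditions mentioned earlier can creep in.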
You could also just wrap the conditional, the execution against the interface, and the Event Sub Process into its own BPMN process and invoke it as a Call Activity.
This design would be non-blocking, as it is basically doing the same checks and steps as a Job Executor that checked whether the interface is in use.
Here's an example where I use a Java Semaphore in conjunction with the job executor, so this version does not tie up job executor threads.
The sample uses Groovy and basically creates a static semaphore, so the concurrency limit applies at the granularity of a classloader. If you have a cluster, you will need to handle the semaphore accordingly.
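A rough Java sketch of the idea, assuming the class name, the permit count, and the fail-and-retry behaviour (none of which come from the original sample):

```java
import java.util.concurrent.Semaphore;

import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

public class ThrottledInterfaceDelegate implements JavaDelegate {

    // Static, so the permit count is shared by everything loaded through
    // this classloader (i.e. it only limits a single JVM / node).
    private static final Semaphore PERMITS = new Semaphore(2); // assumed limit of 2

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        // Try to take a permit without blocking; if none is free, fail the job
        // so the job executor thread is released and the job is retried later,
        // instead of the thread waiting on the semaphore.
        if (!PERMITS.tryAcquire()) {
            throw new RuntimeException("Interface busy, retrying later");
        }
        try {
            // call the rate-limited interface here
        } finally {
            PERMITS.release();
        }
    }
}
```

For this to stay non-blocking, the activity would need to be asynchronous (so it runs as a job) and have a failed-job retry time cycle configured, so a busy interface leads to a later retry rather than an immediate incident.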