Hi all, I’m a bit confused about how the various timing parameters in the Java client interact: asyncResponseTimeout, maxTasks, backoffStrategy, lockDuration, autoFetching, and so on.
Our application has a rather low throughput of external tasks, say a few per minute or less. Whenever an external task is due, however, it should be picked up as quickly as possible, and the typical execution time of the task handler is short, a few hundred milliseconds.
What we currently see is that some tasks are picked up by the external client immediately, while others take several tens of seconds to start, causing unwanted delay.
Any recommendations on how to set these parameters, or any ideas what might cause such delays? (We are quite sure the delay is not caused by high load or garbage collection, because both systems, Camunda and the external client, are essentially idle. Also, there’s no firewall in between that might interfere with TCP traffic.)
You should activate long polling with an asyncResponseTimeout of maybe two minutes (value 120000) and decrease the backoff to 0. Then the client will look for a new task immediately after the last one is completed.
Set maxTasks to as many tasks as you expect to be available and to be completed within one fetch cycle. The external task client runs the handlers sequentially.
The lockDuration should be the longest expected completion time.
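Roughly, the configuration could look like the sketch below. This is only a sketch, assuming the camunda-external-task-client Java library; the base URL, topic name and the concrete numbers are placeholders you need to adapt, and the inline BackoffStrategy is just one way to get a backoff of 0:

```java
import java.util.List;

import org.camunda.bpm.client.ExternalTaskClient;
import org.camunda.bpm.client.backoff.BackoffStrategy;
import org.camunda.bpm.client.task.ExternalTask;

public class LowLatencyWorker {

  public static void main(String[] args) {
    ExternalTaskClient client = ExternalTaskClient.create()
        .baseUrl("http://localhost:8080/engine-rest") // placeholder, adapt to your engine
        .maxTasks(10)                                 // as many tasks as you expect per fetch
        .asyncResponseTimeout(120_000)                // long polling: request is held open for up to 2 minutes
        .backoffStrategy(new BackoffStrategy() {      // "backoff of 0": never wait between fetch cycles
          @Override
          public void reconfigure(List<ExternalTask> externalTasks) {
            // nothing to adapt
          }

          @Override
          public long calculateBackoffTime() {
            return 0L;
          }
        })
        .build();

    client.subscribe("my-topic")                      // placeholder topic name
        .lockDuration(20_000)                         // longest expected completion time of the handler
        .handler((externalTask, externalTaskService) -> {
          // do the actual work here, then complete the task
          externalTaskService.complete(externalTask);
        })
        .open();
  }
}
```

With asyncResponseTimeout set, the fetch-and-lock request is held open on the server side, so a task that becomes available while the request is pending is delivered almost immediately.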
Ingo, thank you so much, that’s very helpful. We’ll test your suggestions.
As I understand it now, the principal cycle of the external task client works something like this:
If asyncResponseTimeout is NOT set (i.e., no long polling):
1. Client connects to the server.
2. Receives at most maxTasks tasks.
3. Handles these tasks (sequentially? in parallel?).
4. Requests more tasks immediately? Or closes the connection and goes to 5 or 1?
5. If no more tasks are available from the server, closes the connection and backs off as configured.
6. After the backoff timeout, goto 1.
So, if you used NO long polling and NO backoff strategy together, the client would go into an inefficient busy loop?
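To make this part of the question concrete, here is roughly the loop I picture the client running internally in that case. This is just my mental model written as runnable pseudo-code, not the real client implementation; fetchAndLock(), handle() and nextBackoff() are made-up placeholders:

```java
import java.util.Collections;
import java.util.List;

// My mental model of the cycle WITHOUT long polling -- not the real client internals.
public class PollingCycleSketch {

  static final int MAX_TASKS = 10;

  public static void main(String[] args) throws InterruptedException {
    long backoff = 0;
    while (true) {
      List<String> tasks = fetchAndLock(MAX_TASKS); // steps 1+2: connect, receive at most maxTasks
      for (String task : tasks) {                   // step 3: handle them (sequentially?)
        handle(task);
      }
      if (tasks.isEmpty()) {                        // step 5: nothing was available
        backoff = nextBackoff(backoff);             //         back off as configured ...
        Thread.sleep(backoff);                      // step 6: ... and start over afterwards
      } else {
        backoff = 0;                                // step 4: fetch again right away?
      }
    }
  }

  // Placeholders -- in reality this would be the REST fetch-and-lock call and the task handler.
  static List<String> fetchAndLock(int maxTasks) { return Collections.emptyList(); }
  static void handle(String task) { }
  static long nextBackoff(long current) { return Math.min(Math.max(current * 2, 500), 60_000); }
}
```

If nextBackoff() always returned 0 (i.e., no backoff strategy), this would degenerate into exactly the busy loop I am worried about.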
If asyncResponseTimeout IS set (i.e., long polling enabled):
1. Client connects to the server.
2. Receives at most maxTasks tasks.
3. Handles these tasks (sequentially? in parallel?).
4. Requests more tasks immediately? Or does long polling always imply that tasks are pushed to the client?
5. If no more tasks are available from the server, waits for asyncResponseTimeout for more tasks.
6. After asyncResponseTimeout with no more tasks (or at asyncResponseTimeout after the initial connect?), closes the connection and backs off as configured.
7. After the backoff timeout, goto 1.
So, with long polling enabled and a null backoff strategy, on an idle system the client would re-establish the connection at constant intervals of asyncResponseTimeout?
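Again, just to make the question concrete, here is the long-polling variant as I imagine it (same disclaimer: made-up pseudo-code with a fictitious fetchAndLockBlocking() helper, not the real client internals):

```java
import java.util.Collections;
import java.util.List;

// My mental model of the cycle WITH long polling -- not the real client internals.
public class LongPollingCycleSketch {

  static final int MAX_TASKS = 10;
  static final long ASYNC_RESPONSE_TIMEOUT = 120_000; // ms

  public static void main(String[] args) {
    while (true) {
      // Steps 1, 2 and 5: the server holds the request open for up to
      // asyncResponseTimeout ms if no task is available ("long polling").
      List<String> tasks = fetchAndLockBlocking(MAX_TASKS, ASYNC_RESPONSE_TIMEOUT);
      for (String task : tasks) {                    // step 3: handle what came back
        handle(task);
      }
      // Steps 6+7: with a null backoff strategy there is no sleep here, so on an
      // idle system a fresh request goes out roughly every asyncResponseTimeout ms.
    }
  }

  // Placeholders for the blocking fetch-and-lock call and the task handler.
  static List<String> fetchAndLockBlocking(int maxTasks, long timeoutMs) { return Collections.emptyList(); }
  static void handle(String task) { }
}
```

Since there is no sleep between iterations with a null backoff strategy, on an idle system this would open a fresh connection roughly every asyncResponseTimeout milliseconds.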
Is this correct? If we can confirm this, we might come up with a good wording and make a pull request to enhance the Javadoc.
The next tasks are fetched immediately. The backoff is increased if no external tasks can be fetched.
Correct.
Yes.
Yes.
The team accepted my pull request with improvements to the Javadoc. Have a look at the master branch on GitHub. You can clone the project and build the latest version into your private Maven repo to work with it.