Context Deadline Exceeded error for the start process REST API

Hi,
When we have bulk orders (say 10k orders arriving at once), Camunda throws

“context deadline exceeded (Client.Timeout exceeded while awaiting headers)”

for the start process REST API.

We are not sure where the issue is. Can this be fixed by changing any configuration or environment variable?

Kindly provide your inputs.

Thank you in advance.

Removed response - response was written from a C8 perspective, missing that this was posted as a C7 question

Hi @GotnOGuts
Although 10k requests arrive simultaneously, each one hits the Tomcat server independently. Yes, we expect the result so we know if there is a failure at any point (failures are handled in the process). We need this so we can retry from outside.

When you say 9 seconds, what does that 9s represent in the configuration? Can it be increased?

Sorry - I re-read your post, and noticed that this is posted in the C7 section. I had written my response from a C8 lens. Please disregard my prior answer.

Hello my friend!

There are several approaches we can try to improve this…

For example, if you use HttpClient, we can increase the timeout in your configuration bean, as in my example below:

import java.net.http.HttpClient;
import java.time.Duration;

// 60-second limit for establishing the connection
HttpClient client = HttpClient.newBuilder()
    .connectTimeout(Duration.ofSeconds(60))
    .build();
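Note that connectTimeout only covers establishing the TCP connection; the "awaiting headers" phase is governed by a per-request timeout instead. A minimal sketch of setting one with java.net.http (the engine URL and process key below are placeholders, not your actual endpoint):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.time.Duration;

public class TimeoutExample {
    public static void main(String[] args) {
        // Client-level setting: time allowed to establish the connection.
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(60))
                .build();

        // Request-level setting: time allowed to receive the response,
        // which is what "awaiting headers" style timeouts correspond to.
        // The URI is a placeholder for your Camunda REST endpoint.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/engine-rest/process-definition/key/myProcess/start"))
                .timeout(Duration.ofSeconds(60))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{}"))
                .build();

        // Print the configured per-request timeout in seconds.
        System.out.println(request.timeout().orElse(Duration.ZERO).getSeconds());
    }
}
```

The request is only built here, not sent, so the snippet runs without a live server.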

If your Camunda application is running on Spring Boot, you can increase the server's timeout by configuring application.properties. Note that server.servlet.session.timeout only controls the HTTP session; for the request/connection timeout of the embedded Tomcat, use:

server.tomcat.connection-timeout=60s

You can also increase your Connection Pool, for example:

spring.datasource.hikari.maximum-pool-size=50
spring.datasource.hikari.minimum-idle=10
spring.datasource.hikari.idle-timeout=30000
spring.datasource.hikari.connection-timeout=60000

Another thing you can do is increase the number of threads in the job executor:

camunda.bpm.job-execution.core-pool-size=50
camunda.bpm.job-execution.max-pool-size=100
camunda.bpm.job-execution.queue-capacity=10
camunda.bpm.job-execution.max-jobs-per-acquisition=10

And last but not least, it would be worth splitting these requests into smaller chunks using batch processing… Spring Batch is a great tool for this, and with it you can use chunk-oriented processing (the chunk() method, if I'm not mistaken) to divide the requests into batches of 100, for example.

Once the batch service is ready, you create a controller to call it, so you don't overload your server.
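To illustrate the chunking idea without pulling in the full Spring Batch machinery, here is a plain-Java sketch that splits 10,000 order IDs into batches of 100; the startProcessBatch call mentioned in the comment is a hypothetical stand-in for whatever submits one batch to the engine:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class BatchSplitter {
    // Split the full list into sub-lists of at most batchSize elements.
    static <T> List<List<T>> partition(List<T> items, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            batches.add(items.subList(i, Math.min(i + batchSize, items.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> orderIds = IntStream.rangeClosed(1, 10_000)
                .boxed()
                .collect(Collectors.toList());
        List<List<Integer>> batches = partition(orderIds, 100);
        // Each batch would then be sent to the start-process endpoint in turn,
        // e.g. by a hypothetical startProcessBatch(batch) call, instead of
        // firing all 10k requests at once.
        System.out.println(batches.size());        // number of batches
        System.out.println(batches.get(0).size()); // size of the first batch
    }
}
```

Throttling the submission rate this way keeps the server from seeing all 10k starts in the same instant.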

But these are just a few approaches we can take…

In addition to the ones I mentioned above, you can also talk to your infrastructure team about configuring auto-scaling and load balancers, or about implementing queues such as Amazon SQS (excellent for those who keep their entire infrastructure in AWS)…

Anyway, I hope I helped with some ideas.

William Robert Alves


Hi @WilliamR.Alves
Thank you for the details.
As far as I know, these properties only work with a Spring Boot project or with the Run image (say
camunda/camunda-bpm-platform:run-latest), but we use the image camunda/camunda-bpm-platform:latest. So the question is how to set these properties in docker-compose.

Hi \o

You can go into the configuration folder and edit the production.yml or default.yml file, and configure it the way you want. As described in the official docs, this file accepts all of the Spring Boot starter configuration properties, that is, the same ones available in application.properties.
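As an illustration, assuming the Run distribution (where these files live under the configuration folder), the pool and job executor properties from earlier in the thread map to YAML in default.yml roughly like this; the values mirror the examples above and are assumptions, not tuned recommendations:

```yaml
# Sketch of configuration/default.yml for Camunda Run
spring:
  datasource:
    hikari:
      maximum-pool-size: 50
      minimum-idle: 10
      idle-timeout: 30000
      connection-timeout: 60000
camunda:
  bpm:
    job-execution:
      core-pool-size: 50
      max-pool-size: 100
```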


Below I will leave the link to the official documentation so you can check more about this:

Hope this helps.

William Robert Alves


@WilliamR.Alves
Let me try…