Timeout when starting process instances with local Docker Compose

Hi everyone,
I am running into an issue with the local Docker Compose setup that seems to be related to the configuration.

I use the core Compose template from the official GitHub repository, with no changes made to it. I am using this setup for local development.

My application is a plain Java SE application using 'io.camunda:zeebe-client-java:8.0.3' as the client. I am setting up the Zeebe client like this:

ZeebeClient.newClientBuilder().gatewayAddress("localhost:26500").usePlaintext().build();

Using this client, I can perform most tasks without problems, such as running a topology request, registering a worker, or setting process variables.

However, when I try to start a process instance via the client, I get a connection timeout. This example code produces the error:

  client.newCreateInstanceCommand()
          .bpmnProcessId("my-process-id")
          .latestVersion()
          .withResult()
          .send()
          .whenComplete((result, error) -> {
              if (error != null) {
                  error.printStackTrace();
              }
              System.out.println("finished");
          });

After a few seconds, I get the following stack trace:

io.grpc.StatusRuntimeException: DEADLINE_EXCEEDED: Time out between gateway and broker: Request ProtocolRequest{id=1515, subject=command-api-1, sender=172.21.0.3:26502, payload=byte[]{length=143, hash=1153935640}} to 172.21.0.3:26501 timed out in PT10S
	at io.grpc.Status.asRuntimeException(Status.java:535)
	at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:478)
	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:562)
	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:743)
	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:722)
	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:833)
finished

However, the requested process instance is indeed started; I can view it in Camunda Operate and it is running just fine. That tells me that communication works in the direction worker → cluster, but not the reverse. I can raise the timeout to something like 50 seconds, but the outcome is the same.

Does anyone have a clue why this is happening? I need the response from Zeebe to get the process instance key, and I also need the error to be null when everything works as expected.

Thanks in advance, Janek

Hello @janekberg ,

You are using withResult(), which causes your command to stay open until the process instance has finished.

Just remove this line and you should be good to go.
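For illustration, here is a sketch of the same command without withResult(), reusing the client variable and process id from your snippet. The response then resolves as soon as the instance is created, and it still carries the process instance key:

```java
client.newCreateInstanceCommand()
        .bpmnProcessId("my-process-id")
        .latestVersion()
        .send()
        .whenComplete((event, error) -> {
            if (error != null) {
                error.printStackTrace();
                return;
            }
            // The creation response already contains the key;
            // no need to wait for the instance to finish.
            System.out.println("started instance " + event.getProcessInstanceKey());
        });
```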

Jonathan

Building on what @jonathan.lukas said:

withResult() will time out after 10 seconds by default. So if the process takes longer than that to complete, you will get a timeout error. You can extend the timeout with requestTimeout() in the builder.
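For example, a sketch that keeps withResult() but raises the request timeout (assuming the client variable from the original snippet, an `import java.time.Duration;`, and that the process completes within a minute):

```java
client.newCreateInstanceCommand()
        .bpmnProcessId("my-process-id")
        .latestVersion()
        .withResult()
        .requestTimeout(Duration.ofMinutes(1)) // raise the 10 s default
        .send()
        .whenComplete((result, error) -> {
            if (error != null) {
                error.printStackTrace();
            } else {
                // the result is available once the instance completes
                System.out.println("variables: " + result.getVariablesAsMap());
            }
        });
```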

Josh

Hi @jonathan.lukas and @jwulf,

thanks for your fast replies. I was under the assumption that withResult() was necessary to obtain the process instance key in the callback; now I see this is not the case. I obviously didn't read the docs carefully, sorry about that.

Everything works as expected now 🙂

@jonathan.lukas My use case is to get the response back so I can read the process variables and map them to the caller. I think that's why we have the withResult() function, but it's not working in 8.2.0: I am getting the same request-timeout error between gateway and broker.

So can anyone please help with this?
Thanks in advance.

Please ask a new question rather than necro-posting. You can always refer to this thread in your new question with a link.
