"Failed to activated jobs for worker" warnings after upgrading Zeebe to 1.x


After upgrading Zeebe from 0.x to 1.x, I had to make a change in the Zeebe client build. After the 1.x upgrade, the client is created with:

return ZeebeClient.newClientBuilder()
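For context, a minimal sketch of how a client builder typically looks after the 1.x API change (the gateway address and plaintext setting below are placeholders I chose for illustration, not our actual configuration):

```java
import io.camunda.zeebe.client.ZeebeClient;

public class ZeebeClientFactory {

    // Sketch of a Zeebe 1.x client; "localhost:26500" is a placeholder
    // for the real gateway endpoint.
    public static ZeebeClient create() {
        return ZeebeClient.newClientBuilder()
                .gatewayAddress("localhost:26500") // 1.x name for the gateway contact point
                .usePlaintext()                    // no TLS, e.g. for local development
                .build();
    }
}
```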

Now the logs are full of warnings stating:

```
03:51:58.627  WARN [          grpc-default-executor-486] i.c.zeebe.client.job.poller  : Failed to activated jobs for worker notify-ams-failure and job
io.grpc.StatusRuntimeException: INTERNAL: Panic! This is a bug!
    at io.grpc.Status.asRuntimeException(…)
    at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(…)
    at io.grpc.internal.ClientCallImpl.closeObserver(…)
    at io.grpc.internal.ClientCallImpl.access$300(…)
    at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(…)
    at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(…)
    at …
    at …
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(…)
    at java.base/java.util.concurrent.ThreadPoolExecutor$…
    at java.base/…
Caused by: java.lang.NoSuchMethodError: 'void io.netty.buffer.PooledByteBufAllocator.<init>(boolean, int, int, int, int, int, int, boolean)'
    at io.grpc.netty.Utils.createByteBufAllocator(…)
    at io.grpc.netty.Utils.access$000(…)
    at io.grpc.netty.Utils$ByteBufAllocatorPreferDirectHolder.<clinit>(…)
    at io.grpc.netty.Utils.getByteBufAllocator(…)
    at io.grpc.netty.NettyClientTransport.start(…)
    at io.grpc.internal.ForwardingConnectionClientTransport.start(…)
    at io.grpc.internal.ForwardingConnectionClientTransport.start(…)
    at io.grpc.internal.InternalSubchannel.startNewTransport(…)
    at io.grpc.internal.InternalSubchannel.access$400(…)
    at io.grpc.internal.InternalSubchannel$…
    at io.grpc.SynchronizationContext.drain(…)
```

Any insight on what could be going wrong?


Hi @SubhamPramanik,

This sounds like a dependency problem with Netty. Please share the build descriptor (i.e. the pom.xml) of your project.

Best regards,

Hi @philipp.ossler,

I thought so as well, but we're not really using Netty directly in our services.
Here's the pom.xml of the service where the worker connection is failing: channel-pom
And another pom.xml where things are working fine after the Zeebe 1.x upgrade (at least we see no warnings): bulk-processor-pom

Additional note: we're using Apache Camel with Undertow.


Thanks for sharing. I recommend generating the Maven dependency tree. It should show if there is a conflict in the Netty dependency.
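To illustrate, the tree can be generated and filtered to Netty artifacts with the standard maven-dependency-plugin (run this in each service's module; the filter pattern is just the Netty group id):

```shell
# Print the dependency tree, limited to Netty artifacts, to see which
# version is resolved and which dependency pulls it in.
mvn dependency:tree -Dincludes=io.netty

# Verbose mode additionally shows conflicting versions that Maven omitted.
mvn dependency:tree -Dverbose -Dincludes=io.netty
```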

I took a look at the dependency tree and I don't see a conflict, but I can see that the failing service is running Netty v4.1.43 while the working service is using v4.1.65.

Would the patch version difference matter?

Here's the dependency tree for reference:
Failing service: channel-dep-tree
Working service: bulk-processor-dep-tree

> Would the patch version difference matter?

Usually it should not, but we have seen this kind of dependency conflict with Netty before (here).

You could either update all dependencies to their latest versions. It seems that the old version comes from *.

Or, import a fixed Netty version.
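A sketch of what that could look like in the pom.xml, assuming the netty-bom import approach (the version shown matches the one the working service resolves to):

```xml
<!-- Pin a single Netty version for all transitive Netty artifacts.
     4.1.65.Final is the version the working service resolves to. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.netty</groupId>
      <artifactId>netty-bom</artifactId>
      <version>4.1.65.Final</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Because dependencyManagement entries take precedence over versions inherited transitively, this forces every Netty module onto the same release.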


Thanks for sharing. Last night I added Netty v4.1.65 to dependencyManagement, and it seems to have fixed the issue. The error was indeed caused by the lower version.

In another service, where we don't use any Azure/AWS dependencies and Zeebe and gRPC are the only consumers of Netty, it was also pulling in v4.1.43. So I'm not entirely sure what is causing them to use the lower version.
Here's the dependency tree for that service: gsma-dep-tree

Thank you!


It seems that the Netty version comes in together with:

--> org.springframework.boot:spring-boot-starter:2.3.0.RELEASE
--> org.springframework.boot:spring-boot-dependencies:2.3.0.RELEASE
--> io.netty:netty-bom:4.1.49.Final

Ah nice catch. Completely missed that. Thanks for your help!