2025-04-28T22:28:39.013Z INFO 1 --- [ scheduling-1] i.c.c.r.i.state.ProcessStateStoreImpl : Detected changes in process elements
2025-04-28T22:28:39.013Z INFO 1 --- [ scheduling-1] i.c.c.r.i.state.ProcessStateStoreImpl : . 1 newly deployed
2025-04-28T22:28:39.014Z INFO 1 --- [ scheduling-1] i.c.c.r.i.state.ProcessStateStoreImpl : . Process: c8-sdk-demo, version: 7 for tenant: <default>
2025-04-28T22:28:39.014Z INFO 1 --- [ scheduling-1] i.c.c.r.i.state.ProcessStateStoreImpl : . 0 replaced with new version
2025-04-28T22:28:39.014Z INFO 1 --- [ scheduling-1] i.c.c.r.i.state.ProcessStateStoreImpl : . 0 deleted
2025-04-28T22:28:39.015Z INFO 1 --- [ scheduling-1] i.c.c.r.i.state.ProcessStateStoreImpl : Activating newly deployed process definition: c8-sdk-demo
2025-04-28T22:28:39.603Z ERROR 1 --- [pool-5-thread-1] i.c.c.r.i.e.BatchExecutableProcessor : Failed to create executable
java.util.NoSuchElementException: Connector inbound-azureServiceBus-connector is not registered
at io.camunda.connector.runtime.core.inbound.DefaultInboundConnectorFactory.lambda$getInstance$1(DefaultInboundConnectorFactory.java:67)
at java.base/java.util.Optional.orElseThrow(Unknown Source)
at io.camunda.connector.runtime.core.inbound.DefaultInboundConnectorFactory.getInstance(DefaultInboundConnectorFactory.java:66)
at io.camunda.connector.runtime.core.inbound.DefaultInboundConnectorFactory.getInstance(DefaultInboundConnectorFactory.java:40)
at io.camunda.connector.runtime.inbound.executable.BatchExecutableProcessor.activateSingle(BatchExecutableProcessor.java:155)
at io.camunda.connector.runtime.inbound.executable.BatchExecutableProcessor.activateBatch(BatchExecutableProcessor.java:98)
at io.camunda.connector.runtime.inbound.executable.InboundExecutableRegistryImpl.handleActivated(InboundExecutableRegistryImpl.java:149)
at io.camunda.connector.runtime.inbound.executable.InboundExecutableRegistryImpl.handleEvent(InboundExecutableRegistryImpl.java:92)
at io.camunda.connector.runtime.inbound.executable.InboundExecutableRegistryImpl.lambda$startEventProcessing$0(InboundExecutableRegistryImpl.java:76)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.base/java.util.concurrent.FutureTask.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
Pre-requisites:

- Linux
- JDK 21
- Docker

Clone the repository:

```shell
cd ~
mkdir projects && cd projects
git clone git@github.com:AndriyKalashnykov/Cognizant-Camunda-Connectors.git
cd ~/projects/Cognizant-Camunda-Connectors
```

Build the connector JAR:

```shell
./build.sh
```

Build the local Docker image `azure-servicebus-connector:latest`:

```shell
./docker-build.sh
```

Start the Camunda 8.7 cluster:

```shell
cd ~/projects/Cognizant-Camunda-Connectors/camunda-local
docker compose up -d
```

Observe the error `Connector inbound-azureServiceBus-connector is not registered` in the logs of the connectors container:
Hi @AndriyK - I did not have time to go through all of the files in the repo, but I suspect the problem is the jar file needs to be in `/opt/app`, not in a subfolder. From the configs, it looks like you are adding it a few folders deep; try `COPY ./azure-servicebus-connector/target/azure-servicebus-connector-3.0.0.jar /opt/app/`
Thanks for pointing it out. It was a “copy-paste” typo introduced when I was cleaning the project up for others to see.
So I tried that before, and tried both options below just now; unfortunately, same result: connector .... is not registered
@AndriyK - have you tried using the camunda/connectors image, which doesn’t contain any out-of-the-box connectors? I might try that next.
I am also not a Java expert, but I noticed the class names in the resources/META-INF/services folder do not match the application; might need an update there as well.
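For context on why a mismatch there matters: the connector runtime discovers connector classes via Java's `ServiceLoader`, which reads provider-configuration files under `META-INF/services`. The file must be named after the SPI interface, and its content must be the fully-qualified name of the implementing class. A sketch of what such a file looks like for an inbound connector (the interface name is the one used by recent Camunda connector SDK versions and may differ in yours; the implementation class name below is purely illustrative):

```
# src/main/resources/META-INF/services/io.camunda.connector.api.inbound.InboundConnectorExecutable
# Content: fully-qualified name of the connector class.
# "AzureServiceBusExecutable" is a hypothetical name; it must match the real class.
com.cognizant.connector.azureservicebus.AzureServiceBusExecutable
```

If the class listed here does not exist (for example, after a package rename), registration fails silently and the runtime later reports the connector type as not registered.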
@AndriyK - I was typing a different response, and I think I found your issue. The connector is using the spring-boot-start-camunda-connectors package, which is a runtime for the connector, not the connector SDK itself. When building the jar, I suspect it’s being packaged as the runtime not as a connector that registers with the runtime.
@nathan.loding I think you may be onto something there. I’ve tried several permutations of those packages and none of them worked. (If you look at Cognizant’s original pom.xml, there were literally NO Camunda connector related packages.)
I’m in the position to recommend this product to a customer, and so far I’ve spent four days on what seemed like a pretty simple task: “Just add a custom Inbound Connector” to a Self-Hosted Camunda.
If anybody can actually make it work, great; so far I have nothing to report back to the prospective customer.
I appreciate the frustration, but I can’t speak for the quality of a third party repository. Altering another project can introduce unique situations that we can’t account for in the documentation.
Did you try dropping the Spring packages and use io.camunda.connector.connector-core? Or try starting the project on its own without Docker? When using spring-boot-starter-camunda-connectors, you can launch that application like any other Spring Boot project to start the connector.
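To make the suggestion concrete: the dependency swap would look roughly like the fragment below. The version number is illustrative (it should track your cluster version), and `provided` scope is one reasonable choice when the external connectors image supplies the SDK at runtime; treat this as a sketch, not the definitive pom.xml.

```xml
<!-- Sketch: depend on the connector SDK only, instead of
     spring-boot-starter-camunda-connectors (the embedded runtime).
     The external runtime (the connectors Docker image) loads the JAR. -->
<dependency>
  <groupId>io.camunda.connector</groupId>
  <artifactId>connector-core</artifactId>
  <version>8.7.0</version> <!-- illustrative; match your runtime version -->
  <scope>provided</scope>
</dependency>
```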
If you have the time to share some details, I’d love to hear about your journey over those days. There’s always room for improvement! Did you start by forking Cognizant’s repo and have you wrestled with it since? Did you try writing one from scratch? Have you taken the Camunda Academy course about building a connector (it is for an outbound connector, but the general structure and requirements are the same)?
@nathan.loding I’ve been working with Andriy on a separate email thread. I provided inbound connector examples that I’ve published to the marketplace. Inbound and outbound connectors work a bit differently and taking an outbound connector and swapping annotations to turn it into an inbound connector isn’t enough. I made a few minor tweaks to get it to run on 8.7. This should get Andriy back on track.
@nathan.loding I honestly do not understand why you generate random suggestions instead of actually trying to help by understanding the issue; yes, that probably takes more time. Anyway.
I’ve created a PR hoping it could save someone time if they want just to deploy Custom Inbound Connector instead of trying to become full-time Camunda’s Java developer.
However, when the connector runs in Docker Compose it produces tons of ERROR and WARNING messages. If that’s by design, it will overwhelm Kubernetes one day. Unless it’s there to fill that “room for improvement”.
Cheers.
@Beagler We were talking only about Inbound Connectors, as I mentioned before. I found Cognizant’s Inbound Connector for Azure Service Bus, i.e. exactly what we were looking for, but unfortunately it was not ready for packaging: it was missing a Spring Boot dependency.
@AndriyK - I appreciate that you’re having issues with Cognizant’s code, but this is a community forum, not a priority support channel or free consulting work. The suggestions I’ve given are all based on the information you provided, knowing that the connector you forked is a third party connector that is not officially supported by Camunda. Please keep the conversations productive.
After chatting with @Beagler, I time boxed 30 minutes to fork Cognizant’s repository, and here’s what I discovered:
Connectors need to bundle all of their dependencies when you are using them with an external runtime (like inside Docker), so I added the maven-assembly-plugin to the inbound connector pom.xml
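A minimal sketch of that plugin configuration, using the standard `jar-with-dependencies` descriptor (which produces the `-jar-with-dependencies.jar` artifact referenced in the Dockerfile below); the phase binding shown is the usual default and your build may need adjustments:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptorRefs>
      <!-- bundles all compile/runtime dependencies into one fat JAR -->
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
  </configuration>
  <executions>
    <execution>
      <id>make-assembly</id>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```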
I took your Dockerfile and modified it to include the JAR with deps:
```dockerfile
FROM camunda/connectors-bundle:8.7.0 AS runtime
COPY ./azure-connectors/azure-servicebus-connector/target/azure-servicebus-connector-3.0.0-jar-with-dependencies.jar /opt/app/
```
As noted in the README, you need to add the Zeebe credentials. I added the environment variables to the Dockerfile to keep this setup simple. (To get the credentials, I added a new client in my SaaS cluster and copied the values from the “Environment Variables” tab.) (Also of note, the “tons of ERRORS” you are seeing are related to these missing credentials. The runtime doesn’t know how to connect to your Camunda cluster. And, as mentioned, the need for adding the credentials is listed in the README.)
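For a SaaS cluster, the credential section of such a Dockerfile could look like the sketch below. The variable names are illustrative; the authoritative names and values are the ones shown on the API client’s “Environment Variables” tab in Console, and they may differ between runtime versions.

```dockerfile
# Illustrative only: copy the real names/values from the API client's
# "Environment Variables" tab in Camunda Console.
FROM camunda/connectors-bundle:8.7.0 AS runtime
COPY ./azure-connectors/azure-servicebus-connector/target/azure-servicebus-connector-3.0.0-jar-with-dependencies.jar /opt/app/
ENV ZEEBE_CLIENT_CLOUD_CLUSTER-ID=<your-cluster-id>
ENV ZEEBE_CLIENT_CLOUD_CLIENT-ID=<your-client-id>
ENV ZEEBE_CLIENT_CLOUD_CLIENT-SECRET=<your-client-secret>
ENV ZEEBE_CLIENT_CLOUD_REGION=<your-region>
```

Hardcoding secrets in a Dockerfile is only acceptable for quick local testing; for anything shared, pass them at runtime instead.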
I built the code, which had many failing tests, so I had to skip the tests. Then I built the image and started it, and got an extremely large number of errors from Cognizant’s code, not from Camunda. Several of them appear to be related to log4j.
At that point I stopped. Here’s what I learned:
As I suspected originally (and this is one of the things you call “random suggestions”), your initial configuration used the Runtime rather than the SDK, which was incorrect and prevented the JAR from automatically registering.
Cognizant appears to have intended these connectors to be dependencies in another project, rather than connectors that are functional in the runtime. The README says “In order to deploy this project in runtime environment follow below steps … Add following dependency … artifactId: cts-camunda-connectors”. I do not know why it was architected this way, what code changes would be needed to change it, or how they intended it to be consumed in another project. I suspect Cognizant’s internal usage is to start a new Spring Boot project and run it from there, not inside a container, but that is just speculation.
I recommend opening an issue in the Cognizant repository to see if they have any advice. Alternately, you could clone the inbound template, copy the code from Cognizant’s connector, and then make any edits necessary for it to work in the new project. This eliminates questions about how Cognizant intended for this connector to be consumed.
I do not believe spring-boot-starter is needed in that template, but I will make sure the engineering team sees that PR and reviews it.
As I mentioned in a previous reply, which you seem to have read only partially, I got both connectors (connector-template-inbound and Cognizant-Camunda-Connectors) working and deployed… by adding the missing dependency, and they do generate a TON of ERROR and WARNING messages. If you look at those projects’ READMEs you’ll see that I’m using Camunda’s stock docker-compose-core.yaml to deploy the custom Docker image:
Please do tell me which environment variables are missing?
And why do I need to add them to the Dockerfile?
Let’s just think: if I were using the stock connectors image, `camunda/connectors-bundle:${CAMUNDA_CONNECTORS_VERSION}`, would those variables still be missing?
I’m merely sharing my findings, trying to fix the issues I see and hoping to improve the product where applicable.
Peace.
@AndriyK - I followed the same process you stated, without adding the dependency, and the images started without any issues and the connectors are functional.
I am not certain why you needed to add that dependency, but the process outlined in the documentation and repositories is a functional workflow. I am trying to assist, and I have read all of your comments. Let’s work together to find the root cause of what is happening.
You, of course, do not need to add environment variables to the Dockerfile. I did that to make it simpler and faster for me to test, as noted in my reply. You can provide those variables through other means (for instance, docker-compose if you use it, or an application.properties file). The variables are missing from both camunda/connectors and camunda/connectors-bundle images, because the variables are dependent on your deployment and can’t be hardcoded.
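For a self-hosted Compose setup, supplying the variables in docker-compose instead of the Dockerfile could look roughly like this. The service names, variable names, and plaintext setting are illustrative and must match your actual compose file and runtime version; Camunda’s stock compose files are the authoritative reference.

```yaml
# Illustrative fragment for a self-hosted (plaintext) deployment.
services:
  connectors:
    image: azure-servicebus-connector:latest
    environment:
      - ZEEBE_CLIENT_BROKER_GATEWAY-ADDRESS=zeebe:26500
      - ZEEBE_CLIENT_SECURITY_PLAINTEXT=true
      - CAMUNDA_OPERATE_CLIENT_URL=http://operate:8080
    depends_on:
      - zeebe
      - operate
```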
If you start with a clean clone of the connector-template-inbound repository and follow the steps below, do you still get errors or does it work?
The logs contain a series of messages as they boot up, which are expected while the Docker network and images initialize. And then everything is running.
Perhaps some of the confusion is due to some of the messages you see during the startup phase? The connectors image depends on Operate and Zeebe being up, which in turn depend on Elasticsearch being up. If you start Elasticsearch, wait, then start Zeebe; wait, then start Operate; wait, then start Connectors, you won’t see those messages. This is normal in this sort of environment. For a production deployment, we recommend you adjust the log levels to whatever is appropriate for your monitoring and observability policies.
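That staged startup can be automated in Compose itself with `depends_on` conditions backed by healthchecks, so the connectors container only starts once its dependencies are healthy. A sketch (image tags and healthcheck commands are illustrative and should match your setup):

```yaml
# Sketch: gate connector startup on healthy dependencies to avoid
# the retry/WARN noise during boot. Healthcheck details are illustrative.
services:
  connectors:
    image: azure-servicebus-connector:latest
    depends_on:
      zeebe:
        condition: service_healthy
      operate:
        condition: service_healthy
  operate:
    image: camunda/operate:8.7.0
    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://localhost:8080/actuator/health || exit 1"]
      interval: 10s
      retries: 12
```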