Trouble getting BPMN Copilot to work

I’m using Camunda 8.8 Self-Managed and wanted to try out the BPMN Copilot functionality. This is just a dev environment for me, so I use Docker Compose and added the necessary env variables (as described in this documentation: Copilot | Camunda 8 Docs). The documentation doesn’t state it clearly, but I added the env variables to the web-modeler-restapi container, and the Copilot feature inside the Modeler became enabled (so I assume I picked the correct container). A sketch of how the variables sit in my docker-compose.yaml follows the list below.

#Camunda Copilot
      FEATURE_AI_ENABLED: true
      RESTAPI_BPMN_COPILOT_DEFAULT_LLM_PROVIDER: OLLAMA
      RESTAPI_FEEL_COPILOT_DEFAULT_LLM_PROVIDER: OLLAMA
      RESTAPI_FORM_COPILOT_DEFAULT_LLM_PROVIDER: OLLAMA
      RESTAPI_COPILOT_OLLAMA_DEFAULT_MODEL_ID: "llama3.3:70b"
      RESTAPI_COPILOT_OLLAMA_BASE_URL: https://internal.url.to.working.server
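
For context, this is roughly where the block above lives in my docker-compose.yaml (a trimmed sketch; the service name is the one I described above, and all unrelated settings are omitted):

  web-modeler-restapi:
    environment:
      # ... existing Web Modeler restapi variables ...
      # Camunda Copilot (same variables as listed above)
      FEATURE_AI_ENABLED: true
      RESTAPI_BPMN_COPILOT_DEFAULT_LLM_PROVIDER: OLLAMA
      RESTAPI_COPILOT_OLLAMA_DEFAULT_MODEL_ID: "llama3.3:70b"
      RESTAPI_COPILOT_OLLAMA_BASE_URL: https://internal.url.to.working.server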

But when I type in the Copilot chat, I get no answer back, and I see the following in the container’s logging:

2025-10-20 09:35:03.539  INFO 1 --- [opilotExecutor1] [963fc827-14a8-45b9-a86d-3d506e9c452b] i.c.c.c.impl.base.BpmnCopilotClientImpl  : Invoking LLM model with request BpmnInvokeRequest{messages=[BpmnInternalMessage[role=USER, prompt={my prompt}, bpmnXml={my process XML}]], copilotLlmConfiguration=null}

So my question is: what is missing such that the log entry says “copilotLlmConfiguration=null”?

Some additional information: I added the two debug env variables RESTAPI_BPMN_COPILOT_LOG_REQUEST and RESTAPI_BPMN_COPILOT_LOG_RESPONSE (set as shown below).
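
They sit in the same environment block, roughly like this (only the variable names are from the docs; that they accept plain boolean values is my assumption):

      RESTAPI_BPMN_COPILOT_LOG_REQUEST: true
      RESTAPI_BPMN_COPILOT_LOG_RESPONSE: true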

Here is the request logging:

HTTP request:
- method: POST
- url: https://internal.url.to.working.server/api/chat
- headers:
- body: {
  "model" : "llama3.3:70b",
  "messages" : [ {
    "role" : "system",
    "content" : "the system prompt from Camunda"
}, {
    "role" : "user",
    "content" : "<user_prompt>my prompt</user_prompt><user_bpmn_code>some nearly empty bpmn</user_bpmn_code>"
  } ],
  "options" : {
    "temperature" : 0.3,
    "top_k" : 64,
    "top_p" : 0.95,
    "num_predict" : 8192,
    "stop" : [ ]
  },
  "stream" : true,
  "tools" : [ ]
}

Here is the response logging:

HTTP response:
- status code: 200
- headers: [:status: 200], [content-type: application/x-ndjson], [date: Mon, 20 Oct 2025 15:40:44 GMT], [strict-transport-security: max-age=31536000; includeSubDomains], [x-envoy-upstream-service-time: 695]
- body: null

When I use Postman from my local PC with the same request content, I get an answer that looks correct and contains what I would expect in the response body (in my case a “call_create”).

So from my side everything looks right, but I’m getting no answers in the Copilot chat.

Hi @DanielP,

Thanks for sharing the detailed configuration and logs! This is an interesting setup you’re working with.

Based on your description, it appears you’re trying to configure BPMN Copilot with Ollama in a Self-Managed environment. While the Copilot feature is showing up in the Modeler UI (which suggests that FEATURE_AI_ENABLED: true is being picked up), the copilotLlmConfiguration=null in the logs indicates that the LLM configuration isn’t being properly loaded or recognized.

This seems like a specific configuration issue that might require deeper investigation into how the environment variables are being processed by the web-modeler-restapi container, or there might be additional configuration steps needed that aren’t immediately apparent from the standard documentation.

I’m going to escalate this to one of our experts who can provide more detailed guidance on the Copilot configuration in Self-Managed environments, particularly around the Ollama integration and troubleshooting the null configuration issue.

They should be able to help you identify what might be missing or misconfigured in your setup.

Thanks for your patience!

I also tried to get Copilot working with a non-private LLM, using the standard Gemini API as an OpenAI-compatible endpoint (the Gemini documentation says this should be possible).

With that configuration, Copilot still doesn’t work.

The Camunda documentation says I have to set RESTAPI_COPILOT_OPENAI_ENDPOINT to the endpoint of the OpenAI-compatible server, in my case the Gemini URL (spaces added here only to prevent link rendering; they are not in the actual env value): https:// generativelanguage.googleapis .com/v1beta/openai/

I also set the corresponding OpenAI configuration in Camunda (model ID and bearer token). The configuration is accepted when the container starts (if I alter the name of the endpoint env variable, I get a startup error). A trimmed sketch of this attempt follows below.
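
Roughly like this in the web-modeler-restapi environment (a sketch; only RESTAPI_COPILOT_OPENAI_ENDPOINT is a variable name taken from the docs, the model ID and bearer token settings are indicated by a comment rather than repeating their exact variable names here):

      # Camunda Copilot via an OpenAI-compatible endpoint (Gemini)
      FEATURE_AI_ENABLED: true
      RESTAPI_COPILOT_OPENAI_ENDPOINT: https://generativelanguage.googleapis.com/v1beta/openai/
      # plus the OpenAI model ID and bearer token variables as described in the Camunda docs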

When I then try the BPMN Copilot, I again get the Copilot request log entry, plus an error:

i.c.m.s.copilot.CopilotServiceAdapter    : Failed to invoke BPMN copilot

java.lang.IllegalStateException: apiKey must be provided when no custom endpoint is set

So even though the container start recognizes the endpoint env variable, the request is not sent correctly. And when I set RESTAPI_FEELCOPILOT_API_KEY, Camunda just sends the API request to the standard OpenAI URL (as expected).

Long story short: the OpenAI-compatible URL is not used when the Copilot request is sent.

Hi @DanielP, welcome to the forums! I was just going to ask if you were using a privately hosted model or not, but you just answered that. How are you running Camunda 8.8 (Docker, local k8s cluster, C8 Run, etc.)?

Hi @nathan.loding,
Regarding the models, I have already tried both: a privately hosted model (which answers my curl requests) and a public one.

My Camunda platform runs via Docker Compose on a single host. I’m using the Camunda 8.8.0 release from 14.10.2025.

Hi @nathan.loding,
Do you have any further insight on this topic?

Regards, Daniel