I’m using Camunda 8.8 Self-Managed and wanted to try out the BPMN Copilot functionality. For me it’s just a dev environment, so I use Docker Compose and added the necessary env variables (as described in this documentation: Copilot | Camunda 8 Docs). It wasn’t clearly stated where they belong, but I added the env variables to the web-modeler-restapi container, and the Copilot feature within the Modeler was enabled (so I think I used the correct container).
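Roughly what that looks like in my compose file (a minimal sketch; the Copilot-specific LLM variables for my private Ollama instance follow the linked docs and are only hinted at as a placeholder comment here):

```yaml
services:
  web-modeler-restapi:
    # ...image, ports, depends_on, etc. unchanged...
    environment:
      # enables the Copilot/AI features in the Modeler UI
      FEATURE_AI_ENABLED: "true"
      # ...plus the Copilot LLM configuration variables from the Copilot docs
      # (endpoint, model, credentials for the private Ollama instance)
```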
When I use Postman from my local PC with the same request content, I get an answer that looks right and that I would expect as the content of the response body (in my case a “call_create”).
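For reference, the kind of request I replayed (a sketch only; it assumes Ollama’s OpenAI-compatible chat completions endpoint on its default port, and the model name and message content are placeholders for the real values taken from the Copilot request):

```bash
# Replay the Copilot request content against the local LLM directly
# (assumes Ollama's OpenAI-compatible API on its default port 11434;
#  model name and message content are placeholders for the real values)
curl -s http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3.1",
        "messages": [
          {"role": "user", "content": "<request content from the Copilot request>"}
        ]
      }'
```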
So the model itself responds as expected, but through the Copilot in the Modeler I’m getting no answers.
Thanks for sharing the detailed configuration and logs! This is an interesting setup you’re working with.
Based on your description, it appears you’re trying to configure BPMN Copilot with Ollama in a Self-Managed environment. While the Copilot feature is showing up in the modeler UI (which suggests the FEATURE_AI_ENABLED: true is working), the copilotLlmConfiguration=null in the logs indicates that the LLM configuration isn’t being properly loaded or recognized.
This seems like a specific configuration issue that might require deeper investigation into how the environment variables are being processed by the web-modeler-restapi container, or there might be additional configuration steps needed that aren’t immediately apparent from the standard documentation.
I’m going to escalate this to one of our experts who can provide more detailed guidance on the Copilot configuration in Self-Managed environments, particularly around the Ollama integration and troubleshooting the null configuration issue.
They should be able to help you identify what might be missing or misconfigured in your setup.
I also tried to get Copilot working with a non-private LLM and wanted to use the standard Gemini API as an OpenAI-compatible LLM (the Gemini documentation says that should be possible).
With Copilot configured that way, it still doesn’t work.
The Camunda documentation says I have to set RESTAPI_COPILOT_OPENAI_ENDPOINT to the endpoint of the OpenAI-compatible server, in my case the Gemini URL (spaces added to prevent link generation, not present in the actual env variable): https:// generativelanguage.googleapis .com/v1beta/openai/
And the corresponding OpenAI configuration for Camunda (model ID and bearer token). The configuration is accepted when the container starts (when I alter the name of the env variable for the URL, I get an error on startup).
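For illustration, roughly how that part of my compose file looks (a sketch: RESTAPI_COPILOT_OPENAI_ENDPOINT is the variable named in the docs, while the model and API key variable names below are only illustrative placeholders for whatever the documentation actually calls them):

```yaml
services:
  web-modeler-restapi:
    environment:
      FEATURE_AI_ENABLED: "true"
      # endpoint of the OpenAI-compatible server, here Gemini's compatibility layer
      RESTAPI_COPILOT_OPENAI_ENDPOINT: "https://generativelanguage.googleapis.com/v1beta/openai/"
      # model ID and bearer token -- the variable names below are placeholders,
      # the real names come from the Camunda Copilot documentation
      RESTAPI_COPILOT_OPENAI_MODEL: "gemini-2.0-flash"
      RESTAPI_COPILOT_OPENAI_API_KEY: "<gemini-api-key>"
```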
When I then try the BPMN Copilot, I also get the Copilot request log entry, followed by an error:
i.c.m.s.copilot.CopilotServiceAdapter : Failed to invoke BPMN copilot
java.lang.IllegalStateException: apiKey must be provided when no custom endpoint is set
So even though the container start recognized that the env variable for the URL was set, the request is not sent correctly. And when I additionally set RESTAPI_FEELCOPILOT_API_KEY, Camunda just sends the request to the standard OpenAI URL (as expected).
Long story short: the OpenAI-compatible URL is not used when the Copilot request is sent.
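To rule out a networking problem, a quick check like this from inside the restapi container can be used to confirm that the Gemini endpoint itself is reachable with the key (a sketch; it assumes curl is available in the image, Docker Compose as the runtime, and placeholder key and model values):

```bash
# Call the OpenAI-compatible Gemini endpoint from inside the web-modeler-restapi container
# (assumes curl exists in the image; replace the key and model with real values)
docker compose exec web-modeler-restapi \
  curl -s https://generativelanguage.googleapis.com/v1beta/openai/chat/completions \
    -H "Authorization: Bearer <gemini-api-key>" \
    -H "Content-Type: application/json" \
    -d '{"model": "gemini-2.0-flash", "messages": [{"role": "user", "content": "ping"}]}'
```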
Hi @DanielP, welcome to the forums! I was just going to ask if you were using a privately hosted model or not, but you just answered that. How are you running Camunda 8.8 (Docker, local k8s cluster, C8 Run, etc.)?