Operate cannot store Zeebe credentials

I’m trying to deploy a self-managed version of Camunda Platform to OpenShift with the restricted SCC.
I followed the installation requirements and used the post-renderer mentioned here:
camunda-platform-helm/charts/camunda-platform/openshift at main · camunda/camunda-platform-helm · GitHub.

Operate acquires a connection to the Zeebe cluster and authenticates using the secrets provided by the Helm charts, but storing the token fails due to insufficient file permissions:

Caused by: java.nio.file.AccessDeniedException: /.camunda
at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:90)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
at java.base/sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:389)
at java.base/java.nio.file.Files.createDirectory(Files.java:690)
at java.base/java.nio.file.Files.createAndCheckIsDirectory(Files.java:797)
at java.base/java.nio.file.Files.createDirectories(Files.java:783)
at io.camunda.zeebe.client.impl.oauth.OAuthCredentialsCache.ensureCacheFileExists(OAuthCredentialsCache.java:107)

By default, the home directory in the Operate container is set to “HOME=/”, so the Zeebe client tries to create its credentials cache at /.camunda, which the arbitrary UID assigned by the restricted SCC cannot write to.
The workaround that solved the issue for me:

operate:
  env:
    - name: HOME
      value: /tmp
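
As far as I can tell, the Zeebe Java client also lets you move just the credentials cache via the ZEEBE_CLIENT_CONFIG_PATH environment variable (it points to the cache file, default $HOME/.camunda/credentials). So an alternative along these lines might work as well, assuming Operate’s embedded Zeebe client picks the variable up from the pod environment (untested sketch):

operate:
  env:
    - name: ZEEBE_CLIENT_CONFIG_PATH   # path of the credentials cache file, instead of $HOME/.camunda/credentials
      value: /tmp/zeebe-credentials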

→ I’m not very familiar with these issues, but I think the Camunda containers should work without this workaround. Maybe the home directory needs to be set to a folder that an arbitrary (random) user has write access to.
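
Until the image is changed, another option might be to keep the default HOME and instead mount a writable emptyDir at the default cache location. This is only a sketch and assumes the chart exposes extraVolumes/extraVolumeMounts for Operate (I haven’t verified that against the chart version in use):

operate:
  extraVolumes:
    - name: camunda-credentials-cache
      emptyDir: {}                 # emptyDir is typically writable by the pod's arbitrary UID
  extraVolumeMounts:
    - name: camunda-credentials-cache
      mountPath: /.camunda         # default cache directory used by the Zeebe client when HOME=/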
