Identity is showing 502 bad gateway error post login in self-managed instance

Hello Community,

I recently installed Camunda 8 (8.2.13) on AWS EKS. All URLs open properly except Identity. When accessing Identity, I get a 502 bad gateway error after login. I have tried multiple options but haven't been able to resolve it yet. I didn't encounter this issue with earlier versions of the Helm charts when installing on Azure. Kindly help me identify the root cause and solve the problem.

What is your YAML configuration for Helm? I have attached the values file that I am using.
camunda-values.yaml (4.0 KB)

What environment are you running C8 on? AWS EKS

Are all pods running? Yes

Hi @arijit.chanda, thanks for opening a new topic! I did a little bit of research, and it looks like it might be an issue with the header size being too large for the ingress and being truncated, causing a 502 error. You can read more about the issue here: [TASK] Upgrade keycloak for 8.3 release of camunda-platform-helm · Issue #849 · camunda/camunda-platform-helm · GitHub

It looks like it should be resolved with the 8.3 release next month, but also might be fixable with the Helm values shared in the second comment in that issue. Are you able to try that?
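From that issue, the two relevant settings are a larger proxy buffer on the ingress and telling Keycloak that it sits behind a TLS-terminating proxy. A rough sketch of how they might look in the values file (the exact key nesting depends on your chart version, so treat the paths below as assumptions rather than a definitive layout):

```yaml
# Sketch only -- key paths assume the camunda-platform chart's 8.2.x layout.
identity:
  keycloak:
    # Keycloak is behind a TLS-terminating reverse proxy (the ingress)
    proxy: edge

global:
  ingress:
    annotations:
      # ingress-nginx: enlarge the buffer used for upstream response headers,
      # so large Keycloak headers/cookies are not truncated into a 502
      nginx.ingress.kubernetes.io/proxy-buffer-size: "64k"
```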

Thanks @nathan.loding for taking the time to investigate the issue. Per the suggestions in the link, I added the two additional configurations below to my values file, but no luck; I am still getting the same error. I also tried this latest configuration with a fresh installation, but hit the same issue.

ingress annotation (proxy buffer size): “64K”

keycloak proxy: edge

Hello community,

It would be really helpful if anyone could suggest an alternate solution to resolve this issue.

@arijit.chanda - can you confirm that you are using the Nginx ingress controller?

@arijit.chanda - wanted to circle back on this again. First, the annotation provided above only works with the ingress-nginx controller; if you are using a different ingress controller, you’ll want to find the corresponding annotation in their documentation.

Second, in talking with some of our support team, they said they’ve had to increase the buffer to 128K in some situations. Since 64K didn’t work, perhaps try 128K and see if you have better results?
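If it helps to double-check the syntax, the change is just the same ingress-nginx annotation with a larger value; sketched here with an assumed key path (your chart version may nest the annotations differently):

```yaml
global:
  ingress:
    annotations:
      # bump the upstream response-header buffer from 64k to 128k
      nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
```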

Let me know if that works or not!

Thanks @nathan.loding for spending time on the issue once again. I somehow missed your earlier response.

I am using nginx controller for my installation.

It worked with the ingress annotation and a 64K buffer size. I had misconfigured the annotation earlier.

Do I need to maintain these extra configurations for upcoming releases as well, i.e., for next month's 8.3 release?

Thank you once again for solving my problem. You are awesome; this helps me a lot.


I’m glad it worked! I am not sure what the default configuration will be for the 8.3 release, as there’s still active development and testing happening. Leaving the settings in place won’t cause any issues, so I would wait until 8.3 is released before deciding whether to remove them.


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.