Error during SSL handshake

Hello,

I am working with a very simple Camunda/Spring Boot application that is nothing more than the bare-bones project generated by start.camunda.com with two minor changes.

Using openssl, I was able to successfully add my X.509 certificate to the Java keystore file specified in my application.yaml.
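For reference, the commands I ran looked roughly like this (the file names, alias, and password below are placeholders, not my real values):

```shell
# Bundle the certificate and private key into a PKCS12 keystore
# (file names, alias, and password are placeholders).
openssl pkcs12 -export \
  -in mycert.crt -inkey mykey.pem \
  -name myalias -out keystore.p12 \
  -passout pass:mySecret

# Optionally convert the PKCS12 bundle into a JKS keystore.
keytool -importkeystore \
  -srckeystore keystore.p12 -srcstoretype pkcs12 \
  -destkeystore keystore.jks -deststoretype jks \
  -srcstorepass mySecret -deststorepass mySecret
```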

When camunda starts up there are several messages in the log indicating that port 8443 is being used for https.

2022-03-14 13:12:35.249 DEBUG 9 --- [           main] o.apache.tomcat.util.IntrospectionUtils  : IntrospectionUtils: setProperty(class org.apache.coyote.http11.Http11NioProtocol port=8443)
2022-03-14 13:12:35.936  INFO 9 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 8443 (https)
2022-03-14 13:13:51.057  INFO 9 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8443 (https) with context path ''
2022-03-14 13:13:51.139 DEBUG 9 --- [o-8443-Acceptor] o.apache.tomcat.util.threads.LimitLatch  : Counting up[https-jsse-nio-8443-Acceptor] latch=0

When I attempt to access the login I get the following error (HTTP 400):

Bad Request
This combination of host and port requires TLS.

In the server log I see the corresponding error:

2022-03-14 13:40:59.419 DEBUG 9 --- [nio-8443-exec-2] org.apache.tomcat.util.net.NioEndpoint   : Error during SSL handshake

java.io.IOException: Found an plain text HTTP request on what should be an encrypted TLS connection
        at org.apache.tomcat.util.net.SecureNioChannel.processSNI(SecureNioChannel.java:301) ~[tomcat-embed-core-9.0.52.jar!/:na]
        at org.apache.tomcat.util.net.SecureNioChannel.handshake(SecureNioChannel.java:154) ~[tomcat-embed-core-9.0.52.jar!/:na]
        at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1702) ~[tomcat-embed-core-9.0.52.jar!/:na]
        at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) ~[tomcat-embed-core-9.0.52.jar!/:na]
        at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191) ~[tomcat-embed-core-9.0.52.jar!/:na]
        at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659) ~[tomcat-embed-core-9.0.52.jar!/:na]
        at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) ~[tomcat-embed-core-9.0.52.jar!/:na]
        at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]

2022-03-14 13:40:59.420 DEBUG 9 --- [nio-8443-exec-2] o.apache.coyote.http11.Http11Processor   : Socket: [org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper@7de277b9:org.apache.tomcat.util.net.SecureNioChannel@2ba08c6:java.nio.channels.SocketChannel[connected local=/10.56.34.50:8443 remote=/10.56.32.63:50300]], Status in: [CONNECT_FAIL], State out: [CLOSED]

application.yaml:

spring.datasource.url: jdbc:h2:file:./camunda-h2-database

camunda.bpm.admin-user:
  id: demoX2
  password: demoXX

server:
  ssl:
    key-store: /camunda/configuration/keystore/keystore.jks
    key-store-password: mySecret
    key-store-type: pkcs12
    key-password: mySecret
    key-alias: 1
    enabled: true
  port: 8443

camunda:
  bpm:
    webapp:
      application-path: /support-the-monarchy/

logging:
  level.root: DEBUG
  file.name: logs/camunda-bpm-run.log
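As a sanity check, the TLS listener itself can be tested directly against the container (this assumes a self-signed certificate, hence `-k`):

```shell
# -k skips certificate verification (self-signed cert); -v prints the TLS handshake.
curl -vk https://localhost:8443/
```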

Am I missing something obvious???

I spent the day searching the intarWebz as well as setting up the project to run locally.

When I ran the project locally I was able to reproduce the error by entering http://localhost:8443/ in my browser.

Bad Request
This combination of host and port requires TLS.

I have changed my ingress definition to include the following annotation:

nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"

Doing so results in a 404 error.

Whitelabel Error Page
This application has no explicit mapping for /error, so you are seeing this as a fallback.

Mon Mar 14 22:37:10 UTC 2022
There was an unexpected error (type=Not Found, status=404).

Looking at the logs I see this:

2022-03-14 21:53:18.836 DEBUG 8 --- [nio-8443-exec-4] o.apache.catalina.valves.RemoteIpValve   : Incoming request /dev03/poc-aks-camunda-run with originalRemoteAddr [10.56.32.63], originalRemoteHost=[10.56.32.63], originalSecure=[true], originalScheme=[https], originalServerName=[aks-docs-dev-api.autolendingapps.net], originalServerPort=[443] will be seen as newRemoteAddr=[10.80.118.127], newRemoteHost=[10.80.118.127], newSecure=[true], newScheme=[https], newServerName=[aks-docs-dev-api.autolendingapps.net], newServerPort=[443]

Why is the request going to port 443? I thought I had set up Tomcat to use port 8443, and the logs (shown in my first post) confirm that it is listening on 8443.

Ingress file:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-poc-simple-camunda-app
  namespace: dev03
  annotations:
    BuildNumber: $(Build.BuildNumber)
    SourceBranchName:  $(Build.SourceBranchName)
    kubernetes.io/ingress.class: api  
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  labels:
    app: poc-poc-simple-camunda-app
    role: api
    target: release
spec:
  rules:
  - host: aks-my-hostname-here.net
    http:
      paths:
      - path: /dev/poc-simple-camunda-app(/|$)(.*)
        pathType: Prefix
        backend:
          service: 
            name: poc-poc-simple-camunda-app
            port:
              number: 8443 # also tried using 443
  tls:
  - secretName: aks-myhostnameherenet

I have spent a fair amount of time on this as well, so I feel your pain. :slight_smile:

First of all, if you try to go to an http address that does not have a re-director to https you will get that first error. One solution is to make sure you have a web server listening on port 80 that redirects to your port.
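As a minimal sketch of such a redirect (the server name is a placeholder, and I haven’t tested this exact snippet):

```nginx
server {
    listen 80;
    server_name example.com;  # placeholder hostname
    # Redirect all plain-HTTP traffic to the HTTPS listener.
    return 301 https://$host:8443$request_uri;
}
```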

I see that you tried that with nginx, but it looks like there’s no explicit port mapping to redirect from :443 to :8443, so that fails as well.

Try putting the following in your nginx configuration:

location /route/ {
        proxy_pass  http://127.0.0.1:8443;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

That should redirect to the correct port. My nginx is super rusty, so I could be wrong, but I think that should work.

Best Regards,
dg

Hello @davidgs,

Thanks for the reply.
If I am understanding you and I edit the nginx config, won’t all requests from :443 be re-routed to :8443? So every pod that uses the nginx controller will now have this mapping? If that is the case, then our existing .NET Core apps will break.
Sorry if this is a noobtacular question; I just got into Docker, K8s and AKS in mid-November and I want to be explicitly clear.

Let me know when you get a minute!!!

I have absolutely no knowledge of the K8s side of things. But since you’re defining a host for the camunda server, you should be able to have the proxy respond only for that hostname.

dg

Thanks for the quick reply.

I found this link:
https://docs.nginx.com/nginx-ingress-controller/configuration/ingress-resources/advanced-configuration-with-annotations/

Is that what you meant by your example? If so, where are the variables $host, $remote_addr, etc. defined, or are they globally available…?

Hello @davidgs ,

I modified my ingress file as follows:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-poc-simple-camunda-app
  namespace: dev
  annotations:
    BuildNumber: $(Build.BuildNumber)
    SourceBranchName:  $(Build.SourceBranchName)
    kubernetes.io/ingress.class: api
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    ingress.kubernetes.io/service-upstream: "true"
    nginx.org/server-snippets: |
      location / { # also tried /dev/poc-simple-camunda-app/
        proxy_pass https://127.0.0.1:8443;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }
    nginx.org/location-snippets: |
      add_header my-test-header test-valueXX;
  labels:
    app: poc-simple-camunda-app
    role: api
    target: release
spec:
  defaultBackend:
    service:
      name: poc-simple-camunda-app
      port:
        number: 8443
  rules:
  - host: aks-my-hostname-here.net
    http:
      paths:
      - path: /dev/poc-simple-camunda-app(/|$)(.*)
        pathType: Prefix
        backend:
          service: 
            name: poc-simple-camunda-app
            port:
              number: 8443
  tls:
  - secretName: aks-myhostnameherenet

Still getting a 404 error.
The only thing that works is when I Bash into the pod and run curl:

bash-5.0$ curl -k -I https://127.0.0.1:8443
HTTP/1.1 302
Location: https://127.0.0.1:8443/dev/poc-simple-camunda-app/app/
Content-Language: en-US
Transfer-Encoding: chunked
Date: Wed, 16 Mar 2022 22:32:32 GMT

bash-5.0$ curl -I https://aks-my-hostname-here.net/dev/poc-simple-camunda-app
HTTP/2 404
date: Wed, 16 Mar 2022 22:38:06 GMT
content-type: application/json
vary: Origin
vary: Access-Control-Request-Method
vary: Access-Control-Request-Headers
strict-transport-security: max-age=15724800; includeSubDomains

Hoping you see something obvious…

Do you really want the ingress pod to try to connect to itself to reach Camunda?

Think of your ingress node much like a distinct VM running as a reverse proxy on the edge of your network. Everything in K8s is running on its own distinct network, so you have to set up a webserver at the edge of that network to allow any other network to come in. Your proxy_pass directive probably wants to be something like https://camunda-dev.dev.cluster.local:8443 depending on the service name, namespaces, and other setup.
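For example (the service name and namespace here are guesses on my part):

```nginx
# Point proxy_pass at the Service's in-cluster DNS name, not 127.0.0.1.
# "poc-simple-camunda-app" and "dev03" are guesses based on your ingress file.
proxy_pass https://poc-simple-camunda-app.dev03.svc.cluster.local:8443;
```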

(Sorry, I’m reading and following along, because I want to set up similar on a TrueNAS SCALE box which will be k3s)

Hello @GotnOGuts ,

I modified my ingress definition to try to follow your suggestion:

    nginx.org/location-snippets: |
      add_header my-test-header test-valueXX;
    nginx.org/server-snippets: |
      location /dev/poc-simple-camunda-app/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://aks-my-hostname-here.net:8443/dev/poc-simple-camunda-app/;
      }
# also tried

    nginx.org/location-snippets: |
      add_header my-test-header test-valueXX;
    nginx.org/server-snippets: |
      location /dev/poc-simple-camunda-app/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://aks-my-hostname-here.net:8443/;
      }

Still getting the 404…!!!

Just want to add that in my application.yaml file I have the application-path set to:

camunda:
  bpm:
    webapp:
      application-path: /dev/poc-simple-camunda-app/

From your Kube control plane, what are the relevant svc and pod records?
I’m going to assume that aks-my-hostname-here.net is the world-resolvable address, not the cluster-resolvable address (but I have NO experience with aks, so may easily be on the wrong path)

Internet <-Firewall-> Inhouse Network <-IngressController-> Kube Network

Looks like AKS uses 10.244.0 as its network range…

So your reverse proxy will need to point to that address (much better to use the in-cluster DNS and use node-names!)
Here’s a guide that looks pretty complete.

This error is saying “Don’t do http://10.56.34.50:8443, do https://10.56.34.50:8443”.

Hello @GotnOGuts ,

Yes, I corrected the HTTP 400 error (Bad Request) by using the annotation:

nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"

I worked on this problem a little bit today but had to work on another project as well. I did a quick look at the service & pod definition.

My camunda POC is only available on the company’s internal network.
The .NET core microservices I’ve set up did not have any of these issues. Much easier to set up TLS and access.

Basically, you’ll need to figure out what allows curl to work from the ingress node, not from the Camunda node. As long as you work with them as if they were different VMs behind a firewall, you should be able to get it to work

One more thought…
Usually Ingress Controllers, Services, and Pods are deployed into different namespaces.

Can your ingress controller actually resolve poc-simple-camunda-app service?
From your Ingress Controller, does https://poc-simple-camunda-app:8443/dev/poc-simple-camunda-app actually return anything (other than a 404?)
What do the http logs on the poc-simple-camunda-app pod say?
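One way to test that (the ingress controller namespace, pod name, and service DNS name below are all guesses/placeholders) is to run curl from inside the ingress controller pod:

```shell
# Find the ingress controller pod (namespace is a guess).
kubectl get pods -n ingress-nginx

# Run curl from inside it against the service's cluster DNS name.
kubectl exec -n ingress-nginx <ingress-pod-name> -- \
  curl -k -I https://poc-simple-camunda-app.dev03.svc.cluster.local:8443/dev/poc-simple-camunda-app/app/
```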

If I enter either of the following in a browser or Postman, I get the 404 error:

https://aks-my-hostname-here.net/dev/poc-simple-camunda-app
https://aks-my-hostname-here.net/dev/poc-simple-camunda-app/app

But if I add the trailing slash:

https://aks-my-hostname-here.net/dev/poc-simple-camunda-app/app/

I get the camunda login screen.
Not sure why that is happening…I don’t understand how Tomcat routes requests. I am trying to get up to speed on that as quickly as possible…

What do you get when you do:
kubectl get svc -A
kubectl get pods -A

One of those should give you the hint that you need to get it connected.

Sorry - I missed this line!
the trailing / is a function of NGINX reverse proxying!

You do have it working; you just need to redirect to the full address with the trailing /.
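For example, something like this in the server snippet might do it (an untested sketch on my part):

```nginx
# Redirect the bare path to the trailing-slash form (sketch, untested).
location = /dev/poc-simple-camunda-app/app {
    return 301 /dev/poc-simple-camunda-app/app/;
}
```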

Hi,

I follow what you are saying about the NGINX trailing slash but I’m not sure what you mean by redirecting.

If you look at my ingress file above you can see that I am just taking the request coming in on port 443 and moving it over to port 8443.
I don’t use any nginx target-rewrite annotations.

I also have the application path set in my application.yaml file for camunda:

camunda:
  bpm:
    webapp:
      application-path: /dev/poc-simple-camunda-app/

So, I’m not sure where the redirect should be taking place.

Try removing the trailing slash from the application-path
From my recollection, if it’s there, then it has to be present on the URL as well.
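That is, something like:

```yaml
camunda:
  bpm:
    webapp:
      application-path: /dev/poc-simple-camunda-app
```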

Hi,

I tried this on Tuesday and it made no difference. Still the same results…
I’ve been meaning to post sooner but I got distracted by some Maven madness setting up a build task with an Azure DevOps YAML pipeline.