I am working with a very simple Camunda/Spring Boot application that is nothing more than the bare-bones project generated by start.camunda.com with two minor changes.
Using openssl, I was able to successfully add my X.509 certificate to the Java keystore file specified in my application.yaml.
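The TLS part of my application.yaml is just the standard Spring Boot SSL settings, roughly like this (keystore path, type, alias, and password here are placeholders):

    server:
      port: 8443
      ssl:
        enabled: true
        key-store: file:/certs/keystore.p12    # placeholder: wherever the keystore lives in the container
        key-store-type: PKCS12                 # placeholder: could also be JKS
        key-store-password: changeit           # placeholder
        key-alias: camunda                     # placeholder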
When camunda starts up there are several messages in the log indicating that port 8443 is being used for https.
2022-03-14 13:12:35.249 DEBUG 9 --- [ main] o.apache.tomcat.util.IntrospectionUtils : IntrospectionUtils: setProperty(class org.apache.coyote.http11.Http11NioProtocol port=8443)
2022-03-14 13:12:35.936 INFO 9 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8443 (https)
2022-03-14 13:13:51.057 INFO 9 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8443 (https) with context path ''
2022-03-14 13:13:51.139 DEBUG 9 --- [o-8443-Acceptor] o.apache.tomcat.util.threads.LimitLatch : Counting up[https-jsse-nio-8443-Acceptor] latch=0
When I attempt to access the login I get the following error (HTTP 400):
Bad Request
This combination of host and port requires TLS.
At the same time, the application log shows the following:
2022-03-14 13:40:59.419 DEBUG 9 --- [nio-8443-exec-2] org.apache.tomcat.util.net.NioEndpoint : Error during SSL handshake
java.io.IOException: Found an plain text HTTP request on what should be an encrypted TLS connection
at org.apache.tomcat.util.net.SecureNioChannel.processSNI(SecureNioChannel.java:301) ~[tomcat-embed-core-9.0.52.jar!/:na]
at org.apache.tomcat.util.net.SecureNioChannel.handshake(SecureNioChannel.java:154) ~[tomcat-embed-core-9.0.52.jar!/:na]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1702) ~[tomcat-embed-core-9.0.52.jar!/:na]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) ~[tomcat-embed-core-9.0.52.jar!/:na]
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191) ~[tomcat-embed-core-9.0.52.jar!/:na]
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659) ~[tomcat-embed-core-9.0.52.jar!/:na]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) ~[tomcat-embed-core-9.0.52.jar!/:na]
at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]
2022-03-14 13:40:59.420 DEBUG 9 --- [nio-8443-exec-2] o.apache.coyote.http11.Http11Processor : Socket: [org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper@7de277b9:org.apache.tomcat.util.net.SecureNioChannel@2ba08c6:java.nio.channels.SocketChannel[connected local=/10.56.34.50:8443 remote=/10.56.32.63:50300]], Status in: [CONNECT_FAIL], State out: [CLOSED]
Later, when I come in through the proxy instead, I get a different error:
Whitelabel Error Page
This application has no explicit mapping for /error, so you are seeing this as a fallback.
Mon Mar 14 22:37:10 UTC 2022
There was an unexpected error (type=Not Found, status=404).
Looking at the logs I see this:
2022-03-14 21:53:18.836 DEBUG 8 --- [nio-8443-exec-4] o.apache.catalina.valves.RemoteIpValve : Incoming request /dev03/poc-aks-camunda-run with originalRemoteAddr [10.56.32.63], originalRemoteHost=[10.56.32.63], originalSecure=[true], originalScheme=[https], originalServerName=[aks-docs-dev-api.autolendingapps.net], originalServerPort=[443] will be seen as newRemoteAddr=[10.80.118.127], newRemoteHost=[10.80.118.127], newSecure=[true], newScheme=[https], newServerName=[aks-docs-dev-api.autolendingapps.net], newServerPort=[443]
Why is the request going to port 443? I thought I had set up Tomcat to use port 8443, and the logs (shown in my first post) confirm it is listening on 8443.
I have spent a fair amount of time on this as well, so I feel your pain.
First of all, if you go to an http address that has no redirect to https, you will get that first error. One solution is to make sure you have a web server listening on port 80 that redirects to your https port.
I see that you tried that with nginx, but it looks like there’s no explicit port-mapping to redirect from :443 to :8443 so that fails as well.
Try putting something like the following in your nginx configuration (the server name, certificate paths, and upstream address are placeholders you'll need to adjust):
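    # redirect plain HTTP to HTTPS
    server {
        listen 80;
        server_name aks-docs-dev-api.autolendingapps.net;    # placeholder: your external hostname
        return 301 https://$host$request_uri;
    }

    # terminate TLS and proxy to the Camunda container on 8443
    server {
        listen 443 ssl;
        server_name aks-docs-dev-api.autolendingapps.net;    # placeholder: your external hostname
        ssl_certificate     /etc/nginx/certs/tls.crt;        # placeholder
        ssl_certificate_key /etc/nginx/certs/tls.key;        # placeholder

        location / {
            proxy_pass https://poc-simple-camunda-app:8443;  # placeholder: whatever hostname reaches your Camunda container
            proxy_set_header Host              $host;
            proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }

The important part is that the proxy_pass target is an address the proxy can actually reach for the Camunda container, and that it speaks https to it since Tomcat is only listening for TLS on 8443.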
Thanks for the reply.
If I am understanding you and I edit the nginx config, won’t all requests from :443 be re-routed to :8443? So every pod that uses the nginx controller will now have this mapping? If that is the case, then our existing .NET Core apps will break.
Sorry if this is a noobtacular question; I just got into Docker, K8s, and AKS in mid-November, and I want to be explicitly clear.
I have absolutely no knowledge of the K8s side of things. But since you're defining a host for the Camunda server, couldn't you have the proxy respond only for that hostname?
Do you really want the ingress pod to try to connect to itself to reach Camunda?
Think of your ingress node much like a distinct VM running as a reverse proxy on the edge of your network. Everything in K8s is running on its own distinct network, so you have to set up a webserver at the edge of that network to allow any other network to come in. Your proxy_pass directive probably wants to be something like https://camunda-dev.dev.cluster.local:8443 depending on the service name, namespaces, and other setup.
(Sorry, I’m reading and following along, because I want to set up similar on a TrueNAS SCALE box which will be k3s)
From your Kube control plane, what are the relevant svc and pod records?
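For example (the namespace here is a guess based on the request path in your logs, so adjust it):

    kubectl get svc -n dev03 -o wide        # services with their cluster IPs and ports
    kubectl get pods -n dev03 -o wide       # pods with the node and pod IP they run on
    kubectl get ingress -n dev03            # what the ingress controller is actually routing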
I’m going to assume that aks-my-hostname-here.net is the world-resolvable address, not the cluster-resolvable address (but I have NO experience with aks, so may easily be on the wrong path)
Internet <-Firewall-> Inhouse Network <-IngressController-> Kube Network
Looks like AKS uses 10.244.0.0/16 as its default pod network range…
So your reverse proxy will need to point to an address in that range (much better to use the in-cluster DNS and service names!)
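The in-cluster DNS name normally follows the pattern <service>.<namespace>.svc.cluster.local, so the proxy target would look something like this (service and namespace names are placeholders):

    https://<camunda-service>.<camunda-namespace>.svc.cluster.local:8443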
Here’s a guide that looks pretty complete.
I worked on this problem a little bit today but had to work on another project as well. I did a quick look at the service & pod definition.
My camunda POC is only available on the company’s internal network.
The .NET Core microservices I've set up did not have any of these issues; TLS and access were much easier to set up.
Basically, you'll need to figure out what allows curl to work from the ingress node, not from the Camunda node. As long as you treat them as if they were different VMs behind a firewall, you should be able to get it to work.
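For example, something like this against one of the ingress controller pods (pod name, namespaces, and context path are placeholders, and it assumes curl is present in the controller image):

    kubectl exec -it -n ingress-nginx <ingress-controller-pod> -- \
      curl -vk https://<camunda-service>.<camunda-namespace>.svc.cluster.local:8443/<context-path>/

The -k flag skips certificate verification, which you'll usually need with a self-signed cert.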
One more thought…
Usually Ingress Controllers, Services, and Pods are deployed into different namespaces.
Can your ingress controller actually resolve the poc-simple-camunda-app service?
From your Ingress Controller, does https://poc-simple-camunda-app:8443/dev/poc-simple-camunda-app actually return anything (other than a 404)?
What do the http logs on the poc-simple-camunda-app pod say?
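A quick way to sanity-check that the in-cluster name resolves at all is a throwaway pod (namespace is a placeholder):

    kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- \
      nslookup poc-simple-camunda-app.<camunda-namespace>.svc.cluster.local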
I get the camunda login screen.
Not sure why that is happening…I don’t understand how Tomcat routes requests. I am trying to get up to speed on that as quickly as possible…
I follow what you are saying about the NGINX trailing slash but I’m not sure what you mean by redirecting.
If you look at my ingress file above you can see that I am just taking the request coming in on port 443 and moving it over to port 8443.
I don’t use any nginx target-rewrite annotations.
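For anyone following along, a minimal nginx-ingress resource of the kind I'm describing would look roughly like this (the name, namespace, and TLS secret are placeholders; the backend-protocol annotation is what makes the controller speak HTTPS to the pod on 8443):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: poc-aks-camunda-run                # placeholder
      namespace: dev03                         # placeholder
      annotations:
        # proxy to the pod over HTTPS instead of the default HTTP
        nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    spec:
      ingressClassName: nginx
      tls:
        - hosts:
            - aks-docs-dev-api.autolendingapps.net
          secretName: aks-docs-dev-api-tls     # placeholder
      rules:
        - host: aks-docs-dev-api.autolendingapps.net
          http:
            paths:
              - path: /dev03/poc-aks-camunda-run
                pathType: Prefix
                backend:
                  service:
                    name: poc-simple-camunda-app
                    port:
                      number: 8443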
I also have the application path set in my application.yaml file for camunda:
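Roughly like this (this sketch assumes the Camunda webapp application-path property, and the value is only a guess based on the request path in the logs above):

    camunda:
      bpm:
        webapp:
          application-path: /dev03/poc-aks-camunda-run   # guess: matches the path in the RemoteIpValve log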
I tried this on Tuesday and it made no difference. Still the same results…
I’ve been meaning to post sooner but I got distracted by some Maven madness setting up a build task with an Azure DevOps YAML pipeline.