Proxy Protocol with LKE, Nginx-Ingress NodeBalancer TLS - ERR_SSL_PROTOCOL_ERROR
I have more or less followed the steps outlined in https://www.linode.com/community/questions/20620/how-can-i-use-proxy-protocol-with-linode-kubernetes-engine.
I require SSL for my apps, so I installed cert-manager, generated a secret, created a Let's Encrypt issuer, and so on. When proxy_protocol is disabled on my NodeBalancer, everything works fine: TLS terminates and I am served the correct page over a successful SSL connection.
When I enable proxy_protocol, however, I get ERR_SSL_PROTOCOL_ERROR on my ingress hosts. One of them is https://skyyhosting.com if you want to take a look.
When hitting the endpoint from Postman I get
GET https://skyyhosting.com/secure/5f7626837a587125cb2f572c Error: write EPROTO 4620207552:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../../vendor/node/deps/openssl/openssl/ssl/record/ssl3_record.c:252:
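The same failure reproduces outside Postman; for anyone following along, this is roughly the command-line equivalent (run against the live host, so output will vary):

```shell
# Attempt a plain TLS handshake against the ingress host. When the backend
# answers with something other than TLS, openssl reports the same
# "wrong version number" error that Postman surfaces.
openssl s_client -connect skyyhosting.com:443 -servername skyyhosting.com
```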
I have tried everything I can think of: changing the service ports around, various Ingress annotations, Ingress ConfigMap entries, etc.
Here are my configs and the running NodeBalancer:
kubectl describe service nginx-ingress
Name: nginx-ingress-nginx-ingress
Namespace: default
Labels: app.kubernetes.io/instance=nginx-ingress
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=nginx-ingress-nginx-ingress
helm.sh/chart=nginx-ingress-0.6.1
Annotations: meta.helm.sh/release-name: nginx-ingress
meta.helm.sh/release-namespace: default
service.beta.kubernetes.io/linode-loadbalancer-proxy-protocol: v2
Selector: app=nginx-ingress-nginx-ingress
Type: LoadBalancer
IP: 10.128.115.132
LoadBalancer Ingress: 45.79.60.198
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 30601/TCP
Endpoints: 10.2.2.216:80
Port: https 443/TCP
TargetPort: 80/TCP
NodePort: https 31204/TCP
Endpoints: 10.2.2.216:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
☝️ For Port: 443 above, I have tried setting its TargetPort to 443 (which the service handles) and it made no difference
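For context, the proxy-protocol part of that Service written as a manifest fragment (reconstructed from the describe output above; not the full spec):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-nginx-ingress
  namespace: default
  annotations:
    # Tells the Linode CCM to enable PROXY protocol v2 on the NodeBalancer
    service.beta.kubernetes.io/linode-loadbalancer-proxy-protocol: "v2"
spec:
  type: LoadBalancer
```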
kubectl describe ing
Name: sniphy-dev-ingress
Namespace: default
Address: 45.79.60.198
Default backend: default-http-backend:80 (<none>)
TLS:
sniphy-dev-dashboard-api terminates dev-api.sniphy.com,skyyhosting.com
Rules:
Host Path Backends
---- ---- --------
dev-api.sniphy.com
sniphy-dashboard-dev:443 (10.2.0.174:80,10.2.2.210:80)
skyyhosting.com
skyy-hosting-service:443 (10.2.0.178:443,10.2.2.222:443)
Annotations:
cert-manager.io/issuer: letsencrypt-prod
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{"cert-manager.io/issuer":"letsencrypt-prod","kubernetes.io/ingress.class":"nginx"},"name":"sniphy-dev-ingress","namespace":"default"},"spec":{"rules":[{"host":"dev-api.sniphy.com","http":{"paths":[{"backend":{"serviceName":"sniphy-dashboard-dev","servicePort":443}}]}},{"host":"skyyhosting.com","http":{"paths":[{"backend":{"serviceName":"skyy-hosting-service","servicePort":443}}]}}],"tls":[{"hosts":["dev-api.sniphy.com","skyyhosting.com"],"secretName":"sniphy-dev-dashboard-api"}]}}
kubernetes.io/ingress.class: nginx
Events: <none>
kubectl describe secret sniphy-dev-dashboard-api
Name: sniphy-dev-dashboard-api
Namespace: default
Labels: <none>
Annotations: cert-manager.io/alt-names: dev-api.sniphy.com,skyyhosting.com
cert-manager.io/certificate-name: sniphy-dev-dashboard-api
cert-manager.io/common-name: dev-api.sniphy.com
cert-manager.io/ip-sans:
cert-manager.io/issuer-group: cert-manager.io
cert-manager.io/issuer-kind: Issuer
cert-manager.io/issuer-name: letsencrypt-prod
cert-manager.io/uri-sans:
Type: kubernetes.io/tls
Data
====
tls.crt: 3591 bytes
tls.key: 1675 bytes
kubectl describe service skyy
Name: skyy-hosting-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"service.beta.kubernetes.io/external-traffic":"OnlyLocal"},"name":"skyy-hos...
service.beta.kubernetes.io/external-traffic: OnlyLocal
Selector: environment=dev,tier=hosting
Type: NodePort
IP: 10.128.109.158
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 32572/TCP
Endpoints: 10.2.0.178:443,10.2.2.222:443
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 31216/TCP
Endpoints: 10.2.0.178:80,10.2.2.222:80
Session Affinity: None
External Traffic Policy: Local
Events: <none>
kubectl describe deployment skyy
Name: skyy-hosting
Namespace: default
CreationTimestamp: Tue, 13 Oct 2020 01:20:00 -0400
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 3
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"skyy-hosting","namespace":"default"},"spec":{"replicas":2...
Selector: environment=dev,tier=hosting
Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: environment=dev
tier=hosting
Containers:
sniphy-skyy-hosting:
Image: giohperez/sniphy-skyy-hosting:latest
Port: 443/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: skyy-hosting-65c8774775 (2/2 replicas created)
Events: <none>
Ingress ConfigMap
Name: nginx-ingress-nginx-ingress
Namespace: default
Labels: app=nginx-ingress-nginx-ingress
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","data":{"compute-full-forwarded-for":"true","use-forwarded-headers":"true","use-proxy-protocol":"true"},"kind":"ConfigM...
Data
====
use-forwarded-headers:
----
true
use-proxy-protocol:
----
true
compute-full-forwarded-for:
----
true
Events: <none>
I have tried use-proxy-protocol on its own, as well as combinations of other settings (use-real-ip, etc.), and nothing works.
Here is a list of all the things I tried (the commented-out entries were tried and reverted):
# proxy_protocol: "True"
# real-ip-header: "X-Forwarded-For" and "proxy_protocol"
# set-real-ip-from: "45.79.60.198" and "192.168.255.0/24"
# compute-full-forwarded-for: "true"
use-proxy-protocol: "true"
# use-forwarded-headers: "true"
# enable-real-ip: "true"
# proxy-real-ip-cidr: "45.79.60.198" and "0.0.0.0"
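For reference, the currently active settings expressed as a manifest (reconstructed from the describe output above):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-nginx-ingress
  namespace: default
data:
  use-proxy-protocol: "true"
  use-forwarded-headers: "true"
  compute-full-forwarded-for: "true"
```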
2 Replies
As you mentioned, you have port 443 of your ingress-nginx service directed at port 80 of ingress-nginx. The ERR_SSL_PROTOCOL_ERROR you see in a browser is the result of receiving an HTML response to a TLS handshake. You can confirm this in Wireshark with the filter ip.addr == 45.79.60.198
Can you switch the targetPort to 443 and then we can go from there?
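For reference, the ports section of the Service should look roughly like this (a sketch showing only the relevant fields):

```yaml
# Relevant fragment of the nginx-ingress Service spec; the https targetPort
# must point at the controller's TLS listener, not the plaintext one.
ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443   # previously 80, which sent TLS handshakes to the HTTP listener
```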
OK, I have switched the targetPort to 443:
Name: nginx-ingress-nginx-ingress
Namespace: default
Labels: app.kubernetes.io/instance=nginx-ingress
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=nginx-ingress-nginx-ingress
helm.sh/chart=nginx-ingress-0.6.1
Annotations: meta.helm.sh/release-name: nginx-ingress
meta.helm.sh/release-namespace: default
service.beta.kubernetes.io/linode-loadbalancer-proxy-protocol: v1
Selector: app=nginx-ingress-nginx-ingress
Type: LoadBalancer
IP: 10.128.115.132
LoadBalancer Ingress: 45.79.60.198
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 30601/TCP
Endpoints: 10.2.2.216:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 31204/TCP
Endpoints: 10.2.2.216:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Looking in Wireshark, I see the same problem, even with the new TargetPort.
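To narrow down whether nginx itself is happy with a PROXY header, I can also hit the NodePort directly, bypassing the NodeBalancer (node IP is a placeholder here; --haproxy-protocol needs curl >= 7.60):

```shell
# Send a request with a PROXY protocol v1 preamble straight to the
# controller's HTTPS NodePort. <node-ip> is a placeholder for one of the
# cluster nodes' public IPs; -k skips certificate verification since the
# cert won't match a raw IP.
curl -vk --haproxy-protocol https://<node-ip>:31204/

# The same request without --haproxy-protocol should fail the handshake
# if use-proxy-protocol is active in nginx.
curl -vk https://<node-ip>:31204/
```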
BTW, I added my Ingress ConfigMap to the original post since I forgot it.