LKE: Deployment with secrets using volumes
In my Helm chart I have an existing Deployment that uses a PVC, and that part works well. However, I'm having difficulty defining volumes that use Secrets.
My Deployment looks like this (irrelevant info redacted):
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      ...
      volumes:
        - name: {{ include "validator.fullname" . }}-data
          persistentVolumeClaim:
            claimName: {{ include "validator.fullname" . }}
        - name: {{ include "validator.fullname" . }}-keys
          secret:
            secretName: {{ include "<release>.fullname" . }}-keys
        - name: {{ include "validator.fullname" . }}-passwords
          secret:
            secretName: {{ include "validator.fullname" . }}-passwords
Inside spec.template.spec.containers[0]:
volumeMounts:
  - mountPath: {{ .Values.dataDir }}
    name: {{ include "validator.fullname" . }}-data
  - mountPath: {{ .Values.keyPath }}
    name: {{ include "validator.fullname" . }}-keys
  - mountPath: {{ .Values.passwordPath }}
    name: {{ include "validator.fullname" . }}-passwords
When I describe the pod, I see the following errors:
Events:
  Type     Reason       Age                 From     Message
  ----     ------       ----                ----     -------
  Warning  FailedMount  12m (x12 over 20m)  kubelet  MountVolume.SetUp failed for volume "validator-keys" : secret "validator-keys" not found
  Warning  FailedMount  10m (x13 over 20m)  kubelet  MountVolume.SetUp failed for volume "validator-passwords" : secret "validator-passwords" not found
  Warning  FailedMount  5m17s               kubelet  Unable to attach or mount volumes: unmounted volumes=[validator-keys validator-passwords], unattached volumes=[validator-keys validator-passwords kube-api-access-h6lms validator-data]: timed out waiting for the condition
Output of kubectl get secrets -n <ns>:
NAME                  TYPE    DATA  AGE
validator-keys        Opaque  2     33m
validator-passwords   Opaque  2     34m
So the secrets are there and named correctly. Could it be that this simply isn't supported by LKE?
1 Reply
@wfc Mounting secrets as volumes is a core k8s feature; without it, things like ServiceAccounts would break (their tokens are Secrets mounted as volumes). I don't have any problems with this myself, so it must be something in your specific configuration.
Just guessing, but I'd check how the rendered Deployment actually looks in the cluster before assuming Helm is doing what you expect. Looking at the secretName: {{ include "<release>.fullname" . }}-keys line, I see an obvious problem: <release> is probably not what you want.
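To compare what Helm actually produced against the Secrets you created, you could inspect the live object and the rendered manifest. A rough sketch, assuming the release is installed in <ns> (the deployment and release names below are placeholders, not from your post):

  # See what secretName ended up in the live Deployment
  kubectl -n <ns> get deployment <deployment-name> -o yaml | grep -A 2 secretName

  # See what Helm rendered for the installed release
  helm -n <ns> get manifest <release-name> | grep -A 2 secretName

If that <release> placeholder really is in your template, the rendered secretName won't match validator-keys. A minimal sketch of the volumes block with the helper made consistent, assuming validator.fullname is the helper the rest of the chart uses:

      volumes:
        - name: {{ include "validator.fullname" . }}-data
          persistentVolumeClaim:
            claimName: {{ include "validator.fullname" . }}
        - name: {{ include "validator.fullname" . }}-keys
          secret:
            secretName: {{ include "validator.fullname" . }}-keys
        - name: {{ include "validator.fullname" . }}-passwords
          secret:
            secretName: {{ include "validator.fullname" . }}-passwords

It's also worth double-checking that the Secrets live in the same namespace the release is deployed to, since a pod can only mount Secrets from its own namespace.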