How do I rebind a PV when a PVC gets reinstated?
TL;DR: A PersistentVolumeClaim cannot rebind to its previously bound PersistentVolume after the PVC is deleted and redeployed, even though the PVC manifest is unchanged.
Let's say I have a PV named my-pv and a PVC named my-pvc.
When I deploy them together with no existing PV, the result is OK and my-pvc is bound to my-pv.
But when I delete my-pvc (either directly or by deleting the namespace it resides in) and then redeploy it, my-pvc gets stuck in Pending with the message: volume "my-pv" already bound to a different claim.
Oddly though, my-pv seems to detect and accept the newly redeployed my-pvc.
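If it helps to see why, the stale binding is visible on the PV itself. Something like this (assuming the names above) prints the UID of the deleted claim, which is still recorded in spec.claimRef and no longer matches the UID of the recreated PVC:

    kubectl get pv my-pv -o jsonpath='{.spec.claimRef.uid}'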
linode-blockstorage-csi-driver: 0.4.0
Linode PAT secret: already installed and verified to work.
Kubernetes: 1.19.8
Helm: 3
My current workaround:
- Edit the PV in kubectl and delete the entire claimRef (a non-interactive one-liner is sketched after this list).
- The PV then transitions from Released to Available.
- The new instance of the PVC binds.
- I don't think this is the right way to do it.
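For reference, the same workaround as a single command instead of an interactive edit; this assumes the PV is named my-pv as in the manifest below:

    # Remove the stale claimRef so the PV goes back to Available
    kubectl patch pv my-pv --type=json -p='[{"op": "remove", "path": "/spec/claimRef"}]'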
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 20Gi
  csi:
    driver: linodebs.csi.linode.com
    fsType: ext4
    volumeHandle: <MY_VOLUME_ID>-<MY_VOLUME_LABEL>
  persistentVolumeReclaimPolicy: Retain
  storageClassName: linode-block-storage-retain
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: linode-block-storage-retain
  volumeMode: Filesystem
  volumeName: my-pv
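For completeness, here is how I reproduce it with the manifest above (the filename is just a placeholder):

    # Initial deploy: my-pvc binds to my-pv as expected
    kubectl apply -f pv-and-pvc.yaml
    # Delete only the claim; my-pv transitions to Released and keeps the old claim's UID
    kubectl delete pvc my-pvc
    # Re-apply the identical claim; it now sticks in Pending
    kubectl apply -f pv-and-pvc.yaml
    # Events show: volume "my-pv" already bound to a different claim
    kubectl describe pvc my-pvc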
3 Replies
Hi there -
I looked into this with our LKE team, and we were able to reproduce the issue you're having, so it looks like something we'll need to fix on our end.
We tested your workaround as well, and it worked for us. I recommend using it until the fix is implemented.
FYI, this is still an active issue, and it would be awesome to get it resolved. It has caused me a LOT of headaches with Helm upgrades for services that use PVCs.
In case it helps with identifying the problem, my Grafana installation in particular hits this issue every single time I try to update it. I'm using the bitnami/grafana Helm chart, current version 8.1.1-debian-10-r0.