✓ Solved

PVC Always Pending

I'm having an issue where all of my PVCs are stuck in a Pending state. I'm sure it's something I'm doing, but I can't put my finger on it. Here is my YAML for reference.

Manifest

apiVersion: v1
kind: Pod
metadata:
  name: howdfamily-ldap
  labels:
    app: ldap
spec:
  containers:
    - name: howdfamily-ldap
      image: osixia/openldap:1.5.0
      ports:
        - containerPort: 389
          name: ldap
        - containerPort: 636
          name: ldaps
      env:
        - name: LDAP_ORGANISATION
          value: "Howd Family"
        - name: LDAP_DOMAIN
          value: "howd.family"
        - name: LDAP_ADMIN_PASSWORD
          value: "REDACTED"
      volumeMounts:
        - mountPath: "/var/lib/ldap"
          name: howdfamily-var-lib-ldap
        - mountPath: "/etc/ldap/slapd.d"
          name: howdfamily-etc-ldap-slapd
  volumes:
    - name: howdfamily-var-lib-ldap
      persistentVolumeClaim:
        claimName: howdfamily-var-lib-ldap
    - name: howdfamily-etc-ldap-slapd
      persistentVolumeClaim:
        claimName: howdfamily-etc-ldap-slapd

howdfamily-var-lib-ldap

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: howdfamily-var-lib-ldap
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: howdfamily-var-lib-ldap

howdfamily-etc-ldap-slapd

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: howdfamily-etc-ldap-slapd
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: howdfamily-etc-ldap-slapd

kubectl get pvc
NAME                        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS                AGE
howdfamily-etc-ldap-slapd   Pending                                      howdfamily-etc-ldap-slapd   23m
howdfamily-var-lib-ldap     Pending                                      howdfamily-var-lib-ldap     23m

5 Replies

✓ Best Answer

I figured it out. Looks like it was my Linode firewall. Which ports and URLs need to be added to allow Block Storage?

If your cluster is managed by LKE, I think I see where the issue is. In your manifests, each PVC sets storageClassName to a class that doesn't exist on the cluster, so no provisioner ever acts on the claims; I think that's where the confusion lies.

You can think of StorageClasses as different profiles for different types of storage. On LKE, only two persistent StorageClasses are available: linode-block-storage and linode-block-storage-retain. Both create Block Storage Volumes for persistent storage, but the linode-block-storage-retain class will retain the associated Block Storage Volume and its data when the PersistentVolumeClaim is deleted.
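To confirm exactly which classes a cluster offers (kubectl marks the default one with "(default)" next to its name), you can list them directly:

kubectl get storageclass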

I'd try changing the storageClassName field to one of the two options listed above to see if that helps. I'd also suggest checking out our Deploying Persistent Volume Claims with the Linode Block Storage CSI Driver guide, which goes a bit deeper into how the Linode Block Storage CSI Driver works on your LKE cluster.
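As a sketch, here's your first claim with that one-line change applied (swap in linode-block-storage instead if you'd rather the Volume be deleted along with the claim):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: howdfamily-var-lib-ldap
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: linode-block-storage-retain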

Thanks for the reply. I'll give it a try with linode-block-storage-retain and report back. I will also try with linode-block-storage.

So I changed the storage class and waited a bit. They're still Pending. This is the output from kubectl describe pvc:

waiting for a volume to be created, either by external provisioner "linodebs.csi.linode.com" or manually created by system administrator

Any idea why this would happen?
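In case anyone else lands here with the same event: it generally means the external provisioner never completed the volume-create call against the Linode API. A few checks that may help narrow it down (the pod and container names below are the Linode CSI driver's usual defaults; verify them with the first command before relying on them):

# Confirm the Linode CSI driver pods are running in kube-system
kubectl get pods -n kube-system | grep csi-linode

# Re-read the full event history on the stuck claim
kubectl describe pvc howdfamily-var-lib-ldap

# Check the provisioner's logs for API errors, e.g. calls blocked by a firewall
kubectl logs -n kube-system csi-linode-controller-0 -c csi-provisioner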
