Create a PersistentVolume on LKE with a different fstype

We're deploying all (or at least most) of our backend services to LKE, coming from a hand-rolled (non-containerized) environment. One of the components we need is MongoDB. For maximum performance MongoDB runs best on an XFS filesystem (see https://dzone.com/articles/xfs-vs-ext4-comparing-mongodb-performance-on-aws-e) and prints a startup warning when running on a different (ext4) filesystem.

I'm installing MongoDB using this helm chart https://github.com/helm/charts/tree/master/stable/mongodb-replicaset
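
For reference, I'm installing it roughly like this (the persistentVolume.storageClass value name is from my reading of the chart's values.yaml, and the release name and size are just what I picked, so adjust as needed):

helm install mongodb stable/mongodb-replicaset \
  --set persistentVolume.storageClass=linode-block-storage-retain-xfs \
  --set persistentVolume.size=50Gi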

The chart allows setting a custom StorageClass, so I created the following one:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linode-block-storage-retain-xfs
  annotations:
    lke.linode.com/caplke-version: v1.17.9-001
parameters:
  fstype: xfs
provisioner: linodebs.csi.linode.com
reclaimPolicy: Retain
volumeBindingMode: Immediate
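
For completeness, this is how I created and checked the class (the filename is just what I happened to use locally):

kubectl apply -f storageclass-xfs.yaml
kubectl get storageclass linode-block-storage-retain-xfs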

The fstype: xfs part was cargo-culted from some other source.

When the StatefulSet tries to create the Pods, it runs into a problem:

MountVolume.MountDevice failed for volume "pvc-c8da784c3d6a4e5a" : rpc error: code = Internal desc = Failed to format and mount device from ("/dev/disk/by-id/scsi-0Linode_Volume_pvcc8da784c3d6a4e5a") to ("/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-c8da784c3d6a4e5a/globalmount") with fstype ("xfs") and options ([]): executable file not found in $PATH
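
(For context, I'm pulling that message from the Pod's events via kubectl describe; the Pod name below is just an example, yours will be whatever name the StatefulSet generated:)

kubectl describe pod mongodb-replicaset-0
kubectl get events --sort-by=.lastTimestamp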

It seems the CSI driver (?) is trying to format the Volume (I'm not sure who is actually responsible for formatting it, since I'm not really into the hardcore Kubernetes/CSI internals yet :)), but it fails to find the right tool (mkfs.xfs, probably)…

Is there a nice way to get support for XFS PersistentVolumes? Am I just cargo-culting the fstype part of the StorageClass incorrectly? Or would this be something that can/will be included in a subsequent Linode CSI release?

There are some csi-linode- Pods running on the cluster. Would (manually) installing mkfs.xfs into these Pods (do they run on the nodes or the master?) or onto the Nodes themselves be a possible workaround?
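
For what it's worth, this is roughly how I intended to check whether mkfs.xfs is even available in those Pods (I'm assuming they live in kube-system and that the image ships a shell; the pod name is a placeholder taken from kubectl get pods):

kubectl -n kube-system get pods | grep csi-linode
kubectl -n kube-system exec csi-linode-node-xxxxx -- sh -c 'command -v mkfs.xfs || echo "mkfs.xfs not found"'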

2 Replies

I had the same thought you did, that installing mkfs.xfs might be a possible workaround for the error you encountered. After reaching out to our Kubernetes team, they let me know that our CSI driver does not currently support raw block mode. This means the only filesystem available is ext4.

Raw block mode is something we are planning to add in the future. You can watch this issue on GitHub to get notified when we add raw block mode to our CSI driver.

This is now resolved. linode-blockstorage-csi-driver version v0.8.4 and higher should now support using xfs. Example:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linode-block-storage-retain-xfs
parameters:
  fstype: xfs
provisioner: linodebs.csi.linode.com
reclaimPolicy: Retain
volumeBindingMode: Immediate
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: csi-example-pod
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: csi-example-pod
  replicas: 1
  template:
    metadata:
      labels:
        app: csi-example-pod
    spec:
      containers:
      - name: csi-example-pod
        image: busybox
        volumeMounts:
        - mountPath: "/data"
          name: csi-example-volume
        command: [ "sleep", "1000000" ]
      volumes:
      - name: csi-example-volume
        persistentVolumeClaim:
          claimName: csi-example-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-example-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: linode-block-storage-retain-xfs
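
Once the example Pod is running, a quick way to confirm the volume actually came up as xfs (using the Deployment above; busybox's mount output includes the filesystem type):

kubectl get pvc csi-example-pvc
kubectl exec deploy/csi-example-pod -- sh -c 'mount | grep /data'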
