Hello,
We have an AWS EKS cluster running Kubernetes version 1.24 with Amazon EBS CSI Driver version v1.19.0. I'm currently trying to use AWS EBS storage as a PV, following the instructions in this link: https://repost.aws/knowledge-center/eks-persistent-storage.
[Successful]
- I'm able to create the EBS StorageClass resource with the following:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp2
  csi.storage.k8s.io/fstype: ext4
```
- I'm able to create the PVC object as well:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
  namespace: ebs-test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 1Gi
```
- Given the PVC above, I'm now trying to use the storage from a pod, but I'm getting this error:
Normal SuccessfulAttachVolume 3m14s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-******"
Warning FailedMount 73s kubelet Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[persistent-storage kube-api-access-9qtds]: timed out waiting for the condition
Warning FailedMount 64s (x9 over 3m12s) kubelet MountVolume.MountDevice failed for volume "pvc-******" : rpc error: code = Internal desc = could not format "/dev/nvme1n1" and mount it at "/var/lib/kubelet/plugins/kubernetes.io/csi/ebs.csi.aws.com/********/globalmount": format of disk "/dev/nvme1n1" failed: type:("ext4") target:("/var/lib/kubelet/plugins/kubernetes.io/csi/ebs.csi.aws.com/bb41e5ca7e3b6a43537cee9ad3461e53770a21b6ca86fde5c33d55319b185003/globalmount") options:("defaults") errcode:(exit status 1) output:(mke2fs 1.42.9 (28-Dec-2013)
ext2fs_check_if_mount: Can't check if filesystem is mounted due to missing mtab file while determining whether /dev/nvme1n1 is mounted.
/dev/nvme1n1: Device or resource busy while setting up superblock
)
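For reference, the pod spec consuming the claim looks roughly like the following (the container image, command, and mount path here are illustrative placeholders rather than my exact spec; the pod name, volume name, and namespace match the events above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: ebs-test
spec:
  containers:
    - name: app
      image: public.ecr.aws/amazonlinux/amazonlinux:2   # placeholder image
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: persistent-storage   # volume name from the FailedMount events
          mountPath: /data           # placeholder mount path
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: ebs-claim         # the PVC created above
```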
Any guidance would be appreciated here.
Also, FYI, this is the AMI I'm using on my EKS cluster nodes:
amazon-eks-node-1.24-v20230513
(The error events above are from the output of `kubectl describe po app`.)
Thanks.

Hmm... So it doesn't depend on the manifest, then.
Have you tried this troubleshooting guide? https://repost.aws/knowledge-center/eks-troubleshoot-ebs-volume-mounts
FYI: I can't find any related issues on GitHub either:
https://github.com/kubernetes-sigs/aws-ebs-csi-driver/issues
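Since the failure is "Device or resource busy" during `mkfs`, it may also be worth SSHing into the node and checking whether `/dev/nvme1n1` is already mounted or held open by another process. A rough set of checks (this assumes shell access to the node; the device name is taken from your error output):

```shell
# Show the device's filesystem (if any) and current mount point
lsblk -f /dev/nvme1n1

# Check whether the device is already mounted somewhere
mount | grep nvme1n1

# See if any process is holding the device open
sudo fuser -v /dev/nvme1n1

# Look for kernel-level errors around the time of the failure
dmesg | grep -i nvme1n1
```

If `lsblk` or `mount` shows the device is already in use, that would explain why the CSI node plugin cannot format it.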