3 answers
Hi. I am not sure whether the manifests you are using are correct. Can you follow the "Test the Amazon EBS CSI driver" section of the following link and share the result? (You will have to delete the existing StorageClass and PVC before doing this.) https://repost.aws/knowledge-center/eks-persistent-storage
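For reference, the test in that section amounts to applying the driver's sample dynamic-provisioning manifests and watching the PVC bind. A rough sketch, assuming the samples still live under examples/kubernetes/dynamic-provisioning in the driver repo (in those samples the PVC is named ebs-claim and the pod app):

# Remove the existing StorageClass and PVC first (substitute your names)
kubectl delete pvc <your-pvc>
kubectl delete sc <your-sc>

# Apply the sample StorageClass, PVC, and test pod from the driver repo
git clone https://github.com/kubernetes-sigs/aws-ebs-csi-driver.git
kubectl apply -f aws-ebs-csi-driver/examples/kubernetes/dynamic-provisioning/manifests/

# The PVC should go from Pending to Bound once the pod is scheduled
kubectl get pvc ebs-claim
kubectl describe po app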
Hi, I created it without using this line; my manifest is below:
csi.storage.k8s.io/fstype: ext4
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: 'true'
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
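One thing to keep in mind with this class: volumeBindingMode: WaitForFirstConsumer means a PVC will sit in Pending until a pod actually consumes it, so creating the PVC alone proves nothing. A minimal PVC-plus-pod sketch against it (the names gp3-claim and gp3-test are illustrative, not from this thread):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gp3-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp3
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: gp3-test
spec:
  containers:
    - name: app
      image: public.ecr.aws/amazonlinux/amazonlinux:2
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: gp3-claim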
answered a year ago
Update: Thank you, guys, for your input in trying to find the resolution here! Much appreciated!
The issue was related to the EKS cluster nodes having iSCSI enabled. For anyone running into a similar issue, you can check the details here: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/issues/1417
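For anyone who wants to check whether their nodes are in the same situation before digging into that issue, a quick look at the worker node (via SSH or SSM; exact service names can vary by AMI) is a reasonable first step:

# On the worker node: check whether iSCSI/multipath daemons are running
systemctl status iscsid multipathd
# Alternatively, just look for the processes
ps aux | grep -E 'iscsid|multipathd'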
answered a year ago
Also, FYI, this is the AMI that I'm using on my EKS cluster nodes:
amazon-eks-node-1.24-v20230513
Below are the results. Output of the pod status, described with:
kubectl describe po app
Thanks. Hmm, so it doesn't depend on the manifest.
Have you tried this troubleshooting guide? https://repost.aws/knowledge-center/eks-troubleshoot-ebs-volume-mounts
FYI: I can't find any related issues on GitHub:
https://github.com/kubernetes-sigs/aws-ebs-csi-driver/issues
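Alongside that guide, it is usually worth confirming that the driver pods themselves are healthy and reading the PVC's events; a couple of generic checks (the label selector assumes the standard add-on/Helm install of the driver):

# Controller and node pods for the EBS CSI driver should all be Running
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-ebs-csi-driver

# The Events section of the PVC often names the exact provisioning failure
kubectl describe pvc <your-pvc>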