3 Answers
Hi. I am not sure whether the manifests you are using are correct. Can you follow the "Test the Amazon EBS CSI driver" section of the following link and share the result? (You have to delete the existing StorageClass and PVC before doing this.) https://repost.aws/knowledge-center/eks-persistent-storage
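For reference, that section roughly walks through the driver's dynamic-provisioning example. The commands below are a sketch from memory of that guide (the example directory name and the placeholder resource names may differ in your setup; check the linked article):

# remove the existing StorageClass and PVC first (substitute your actual names)
kubectl delete pvc <your-pvc>
kubectl delete storageclass gp3

# deploy the driver's dynamic-provisioning example
git clone https://github.com/kubernetes-sigs/aws-ebs-csi-driver.git
cd aws-ebs-csi-driver/examples/kubernetes/dynamic-provisioning/
kubectl apply -f manifests/

# watch the test pod start and check that the PVC binds and a PV is created
kubectl get pods --watch
kubectl get pvc,pv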
Hi, I created it without using this line (csi.storage.k8s.io/fstype: ext4); my manifest is below:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: 'true'
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
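For what it's worth, applying and checking it looks like this (the file name here is just an example):

kubectl apply -f gp3-storageclass.yaml
kubectl get storageclass gp3

Note that with volumeBindingMode: WaitForFirstConsumer the PVC stays in Pending until a pod that uses it is scheduled, which is expected.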
Answered a year ago
Update: Thank you, guys, for your input in trying to find a resolution here! Much appreciated!
The issue was related to the EKS clusters having iSCSI enabled. For anyone running into a similar issue, you can check the details here: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/issues/1417
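If it helps anyone hitting this later, one way to check whether a node has iSCSI/multipath services active (my assumption about what "iSCSI enabled" means here, based on the linked issue; the service names assume a systemd-based AMI):

# run on the worker node, e.g. over SSM or SSH
systemctl status iscsid multipathd

# list block devices to see how the attached EBS volume shows up
lsblk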
Answered a year ago
Also FYI, this is the AMI that I'm using on my EKS cluster nodes: amazon-eks-node-1.24-v20230513
Below are the results (output of the pod status from kubectl describe po app):
Thanks. Hmm, so it doesn't depend on the manifest. Have you tried this troubleshooting guide? https://repost.aws/knowledge-center/eks-troubleshoot-ebs-volume-mounts (a few of its first checks are sketched below)
FYI: I can't find any related issues on GitHub: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/issues
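In case it's useful, these are the kind of first checks that guide suggests (the namespace, labels, and deployment/container names below are the usual defaults for the EBS CSI driver add-on; verify them against the linked guide):

# confirm the EBS CSI driver pods are running
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-ebs-csi-driver

# look at events on the stuck PVC and pod
kubectl describe pvc <your-pvc>
kubectl describe pod app

# the controller logs often show the real provisioning/attach error
kubectl logs deployment/ebs-csi-controller -n kube-system -c ebs-plugin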