3 Answers
Hi. I am not sure whether the manifests you are using are correct. Can you follow the "Test the Amazon EBS CSI driver" section of the following link and share the result? (You have to delete the existing sc and pvc before doing this.) https://repost.aws/knowledge-center/eks-persistent-storage
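Roughly, the test in that section applies the driver's dynamic provisioning example. A sketch of the objects is below; the names ebs-sc, ebs-claim and app follow that example and are only illustrative, so please use the exact manifests from the link:

# StorageClass used by the guide's test (dynamic provisioning example)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
---
# Claim that triggers dynamic provisioning through the StorageClass above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 4Gi
---
# Pod that mounts the claim and keeps writing to it, so you can check the mount
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: ebs-claim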
Hi, I created it without using this line:
csi.storage.k8s.io/fstype: ext4
My manifest is below:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: 'true'
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
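For reference, a claim that consumes this class would look roughly like the sketch below (ebs-gp3-claim is just a placeholder name). Because volumeBindingMode is WaitForFirstConsumer, the EBS volume should only get provisioned once a pod that mounts the claim is scheduled, and with csi.storage.k8s.io/fstype omitted the driver should fall back to its default filesystem (ext4):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-gp3-claim   # placeholder name, for illustration only
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp3  # the StorageClass defined above
  resources:
    requests:
      storage: 10Gi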
answered a year ago
Update: Thank you, everyone, for your input in trying to find the resolution here! Much appreciated!
The issue was related to the EKS cluster nodes having iSCSI enabled. For those running into a similar issue, you can check the details here: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/issues/1417
answered a year ago
Also, FYI, this is the AMI that I'm using on my EKS cluster nodes:
amazon-eks-node-1.24-v20230513
Below are the results:
Output of the pod status, described with
kubectl describe po app
Thanks.
Hmm, so it doesn't depend on the manifest.
Have you tried this troubleshooting guide? https://repost.aws/knowledge-center/eks-troubleshoot-ebs-volume-mounts
FYI: I can't find any related issues on GitHub:
https://github.com/kubernetes-sigs/aws-ebs-csi-driver/issues