Lost node/pod/container access from CLI, nodes show Unknown status in console, EKSClusterRoleLatest missing


Overnight, I lost access to my pods/containers via my k9s tool. As far as I know, no changes were made. I can still see resources - nodes, pods, containers - listed in my namespace, but I can no longer view logs or shell into the containers. The nodes for this node group show Unknown status in the EKS console.
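For reference, this is roughly how the symptom presents from the CLI (pod, namespace, and node names below are placeholders): listing still works because it only talks to the API server, while logs/exec fail because the API server proxies those requests to the kubelet on the node, which is unreachable when the node is Unknown.

```
# Listing works: it only needs the API server.
kubectl get nodes -o wide
kubectl get pods -n <namespace>

# Logs and exec fail: the API server proxies these to the kubelet
# on the node, which is unreachable when the node is Unknown/NotReady.
kubectl logs <pod-name> -n <namespace>
kubectl exec -it <pod-name> -n <namespace> -- /bin/sh

# Node conditions report "Unknown" when the kubelet stops posting status.
kubectl describe node <node-name>
```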

I tried updating the Kubernetes version in the console and got the following error: [error screenshot]

When I try to check the cluster's IAM access role, EKSClusterRoleLatest fails to resolve: [screenshots]
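For anyone hitting the same thing, here is a rough way to verify this from the AWS CLI (the cluster name is a placeholder): check which role ARN the cluster points at, then ask IAM whether that role still exists.

```
# Which IAM role does the cluster think it has?
aws eks describe-cluster --name <cluster-name> \
  --query 'cluster.roleArn' --output text

# Does that role still exist? Returns NoSuchEntity if it was deleted.
aws iam get-role --role-name EKSClusterRoleLatest

# EKS also surfaces detected problems under the cluster's health issues.
aws eks describe-cluster --name <cluster-name> \
  --query 'cluster.health.issues'
```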

My user's permissions appear to be fine for kubectl commands:

```
➜ ~ kubectl auth can-i get pods/exec
yes
➜ ~ kubectl auth can-i create pods/exec
yes
```

It seems some of the service accounts are having problems.

  • Update: creating my own EKSClusterRoleLatest with the EKS cluster role permissions restored access (a rough sketch of the steps is below). I believe this role is likely standard/populated by AWS and not typically a user-created role. I'm still unsure how that role was lost and why my cluster was still pointed at it.
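A minimal sketch of recreating the role from the CLI, assuming the standard EKS service trust policy and the managed policy that EKS cluster roles normally carry. The role name follows the question; adjust for your account.

```
# Trust policy allowing the EKS service to assume the role.
cat > eks-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Recreate the role the cluster's roleArn points at.
aws iam create-role \
  --role-name EKSClusterRoleLatest \
  --assume-role-policy-document file://eks-trust-policy.json

# Attach the AWS-managed policy used by EKS cluster roles.
aws iam attach-role-policy \
  --role-name EKSClusterRoleLatest \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
```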
