Lost node/pod/container access from the CLI, nodes show Unknown status in console, EKSClusterRoleLatest missing

Overnight, I lost access to my pods/containers via my k9s tool. As far as I know, no changes were made. I can still see the resources - nodes, pods, containers - listed in my namespace, but I can no longer read logs or shell into the containers. In the EKS service console, the nodes for this node group show Unknown status.
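
For reference, this is roughly how I'd verify the node status and the cluster's configured IAM role from the CLI (my-cluster below is a placeholder for the actual cluster name):

```
# Confirm from kubectl's side whether the nodes are NotReady/Unknown
kubectl get nodes -o wide

# Check which IAM role the cluster is configured to use
aws eks describe-cluster --name my-cluster --query "cluster.roleArn" --output text
```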

I tried updating the Kubernetes version in the console and got an error (screenshot not included here).

When I try to inspect the cluster's IAM access role, EKSClusterRoleLatest fails to resolve in the console (screenshots not included here).
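
A quick way to check whether that role still exists in IAM at all is something like the following; it returns the role if present and fails with NoSuchEntity if it was deleted:

```
aws iam get-role --role-name EKSClusterRoleLatest
```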

My user appears to have sufficient permissions for kubectl commands:

```
➜ ~ kubectl auth can-i get pods/exec
yes
➜ ~ kubectl auth can-i create pods/exec
yes
```

It seems some of the service accounts are having problems.
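
Checks along these lines should show whether the service accounts and the IAM-to-RBAC mapping are intact (my-namespace is a placeholder; the aws-auth ConfigMap applies to clusters using that mapping method):

```
# List service accounts in the affected namespace
kubectl get serviceaccounts -n my-namespace

# Inspect the aws-auth ConfigMap, which maps IAM identities to Kubernetes RBAC
kubectl describe configmap aws-auth -n kube-system
```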

  • Update: creating my own EKSClusterRoleLatest role with the standard EKS cluster role permissions restored access (sketch below). I believe this role is normally standard/populated by AWS rather than something a user creates. I'm not sure how that role was lost, or why my cluster was still pointed at it.
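
Recreating the role looked roughly like this; the trust policy and the AWS-managed AmazonEKSClusterPolicy below are the standard pieces of an EKS cluster role, and the file name trust-policy.json is just a placeholder:

```
# Trust policy that lets the EKS service assume the role
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Recreate the role under the same name the cluster's roleArn still references
aws iam create-role \
  --role-name EKSClusterRoleLatest \
  --assume-role-policy-document file://trust-policy.json

# Attach the AWS-managed policy used by the standard EKS cluster role
aws iam attach-role-policy \
  --role-name EKSClusterRoleLatest \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
```

Since an EKS cluster's role ARN can't be changed after the cluster is created, recreating a role with the exact same name is what lets the existing cluster pick it back up.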
