Lost Node-Pod-Container Access from CLI, Nodes show Unknown Status in Console, EKSClusterRoleLatest missing


Overnight, I lost access to my pods/containers via the k9s tool. As far as I know, no changes were made. I can still see the resources (nodes, pods, containers) listed in my namespace, but I can no longer view logs or shell into the containers. In the EKS service console, the nodes for this node group show Unknown status.
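For context, these are the basic checks that apply at this point. This is a minimal sketch, assuming the AWS CLI is configured and using my-cluster as a placeholder cluster name:

```bash
# Node status as reported by the API server
kubectl get nodes -o wide

# Conditions and recent events for a node stuck in Unknown/NotReady
kubectl describe node <node-name>

# The IAM role ARN the cluster control plane is configured with
aws eks describe-cluster --name my-cluster \
  --query 'cluster.roleArn' --output text
```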

I tried updating the Kubernetes version in the console and got an error: [screenshot of the EKS console error]

When I try to inspect the cluster's IAM access role, EKSClusterRoleLatest fails to resolve: [screenshots of the IAM console showing the role not found]
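To confirm from the CLI whether the role the cluster points at still exists in IAM, something like the following should work (a sketch; EKSClusterRoleLatest is the role name shown in the console above):

```bash
# Does the role the cluster references actually exist?
aws iam get-role --role-name EKSClusterRoleLatest

# If it resolves, check which policies are attached to it
aws iam list-attached-role-policies --role-name EKSClusterRoleLatest
```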

My user appears to have the necessary permissions for kubectl commands:

➜ ~ kubectl auth can-i get pods/exec
yes
➜ ~ kubectl auth can-i create pods/exec
yes

It seems some of the service accounts are having problems.
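One way to check this is to look at the aws-auth ConfigMap (which maps IAM roles, including the node role, to Kubernetes identities) and at the service accounts themselves. A rough sketch, with <namespace> as a placeholder:

```bash
# If the node role mapping here is missing or wrong, nodes can drop to NotReady/Unknown
kubectl -n kube-system get configmap aws-auth -o yaml

# Confirm the service accounts in the affected namespace still exist
kubectl get serviceaccounts -n <namespace>

# Look for recent auth/token-related events in the namespace
kubectl get events -n <namespace> --sort-by=.lastTimestamp
```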

  • Update: creating my own IAM role named EKSClusterRoleLatest with the standard EKS cluster role permissions restored access (see the sketch below). I had believed this role was created and managed by AWS rather than being a user-created role. I'm still not sure how the role was lost or why my cluster was still pointing at it.
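For anyone hitting the same thing, this is roughly what recreating the role looks like with the AWS CLI. It is a sketch under the assumption that the cluster role only needs the AWS-managed AmazonEKSClusterPolicy and the standard eks.amazonaws.com trust relationship; adjust if your cluster relied on additional policies:

```bash
# Trust policy allowing the EKS service to assume the role
cat > eks-cluster-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Recreate the role under the exact name the cluster still references
aws iam create-role \
  --role-name EKSClusterRoleLatest \
  --assume-role-policy-document file://eks-cluster-trust.json

# Attach the AWS-managed policy for EKS cluster roles
aws iam attach-role-policy \
  --role-name EKSClusterRoleLatest \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
```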
