The IAM user or role that creates the EKS cluster becomes the cluster's system administrator.
I am going to assume your Terraform AWS provider is using its own IAM access key or role in the account. If you use the same role/access keys that Terraform uses, I'm 99% sure you will see everything that's missing.
Even if you’re an IAM full administrator in the AWS account you will not be able to see the cluster fully.
You need to grant access to other IAM principals as the system administrator. That should resolve your issue.
https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html
From the docs: when you create an Amazon EKS cluster, the IAM principal that creates the cluster is automatically granted system:masters permissions in the cluster's role-based access control (RBAC) configuration in the Amazon EKS control plane. This principal doesn't appear in any visible configuration, so make sure to keep track of which principal originally created the cluster. To grant additional IAM principals the ability to interact with your cluster, edit the aws-auth ConfigMap within Kubernetes and create a Kubernetes rolebinding or clusterrolebinding with the name of a group that you specify in the aws-auth ConfigMap.
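For reference, here is a rough sketch of what a mapRoles entry granting admin access could look like. The account ID and role names below are placeholders, not values from this thread; adapt them to your setup. As the cluster creator, you can edit the ConfigMap with `kubectl edit configmap aws-auth -n kube-system`:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Keep any existing entries, e.g. the node instance role mapping
    - rolearn: arn:aws:iam::111122223333:role/my-node-instance-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    # Added: map an additional IAM role to the system:masters group
    - rolearn: arn:aws:iam::111122223333:role/my-admin-role
      username: my-admin-role
      groups:
        - system:masters
```

Be careful not to remove the existing node role mapping when editing, or your worker nodes will lose the ability to join the cluster.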
Thanks! After adding a mapRoles entry in aws-auth that grants system:masters to my IAM role, the nodes now show up in the AWS console.
Please accept this answer if it resolved your issue. It helps me and others with the same issue.