By default, only the identity that created an EKS cluster can create Kubernetes resources inside it. So you have two options:
- Create the EKS cluster using your OIDC IAM role and then create Kubernetes resources (secrets, etc.) with that same identity
- Modify the "aws-auth" ConfigMap and add your Terraform Cloud role there, as in the example below: https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html#aws-auth-users
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::555555555555:role/devel-worker-nodes-NodeInstanceRole-74RF4UBDUKL6
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: arn:aws:iam::555555555555:role/TerraformCloudRole
      username: ops-user
      groups:
        - system:masters
The idea is to manually create an EKS cluster and then install the controller through Terraform Cloud. I was facing the same issue with the controller: when I tried to create a secret, it threw an unauthorized error.
Run kubectl edit cm aws-auth -n kube-system and add your new role into mapRoles as in the example above. After that, Terraform Cloud will be able to create Secrets and other objects in Kubernetes.
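Alternatively, the same mapping can be managed from Terraform itself rather than edited by hand. A minimal sketch using the kubernetes_config_map_v1_data resource from the hashicorp/kubernetes provider; the role ARN and username are reused from the ConfigMap example above, and force = true is needed because EKS also manages fields in aws-auth:

resource "kubernetes_config_map_v1_data" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode([
      {
        # Role and username reused from the ConfigMap example above
        rolearn  = "arn:aws:iam::555555555555:role/TerraformCloudRole"
        username = "ops-user"
        groups   = ["system:masters"]
      }
    ])
  }

  # Take ownership of the mapRoles field, which EKS also manages
  force = true
}

Note that this replaces the whole mapRoles key, so any existing entries (such as the node instance role) must be included in the list as well.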
You need to pass cluster_ca_certificate and token to the Kubernetes provider for Terraform.
Here is one example: https://github.com/hashicorp/terraform-provider-kubernetes/blob/main/_examples/eks/kubernetes-config/main.tf
resource "aws_eks_cluster" "k8s-acc" {
name = var.cluster_name
version = var.kubernetes_version
role_arn = aws_iam_role.k8s-acc-cluster.arn
vpc_config {
subnet_ids = aws_subnet.k8s-acc.*.id
}
# Ensure that IAM Role permissions are created before and deleted after EKS Cluster handling.
# Otherwise, EKS will not be able to properly delete EKS managed EC2 infrastructure such as Security Groups.
depends_on = [
aws_iam_role_policy_attachment.k8s-acc-AmazonEKSClusterPolicy,
aws_iam_role_policy_attachment.k8s-acc-AmazonEKSVPCResourceController,
]
}
provider "kubernetes" {
host = data.aws_eks_cluster.target_eks.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.target_eks.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.target_eks_auth.token
}
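The provider block above references data sources that are not shown in the snippet. A minimal sketch of how they might be declared, assuming they read from the aws_eks_cluster resource above:

# Look up connection details and a short-lived auth token for the cluster
data "aws_eks_cluster" "target_eks" {
  name = aws_eks_cluster.k8s-acc.name
}

data "aws_eks_cluster_auth" "target_eks_auth" {
  name = aws_eks_cluster.k8s-acc.name
}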
Or here is another example using the public EKS module:
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 19.13.1"
cluster_name = local.eks_cluster_name
cluster_version = "1.26"
cluster_endpoint_public_access = false
cluster_addons = {
coredns = {
most_recent = true
}
kube-proxy = {
most_recent = true
}
vpc-cni = {
most_recent = true
before_compute = true
}
aws-ebs-csi-driver = {
most_recent = true
}
}
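With v19 of this module, the aws-auth ConfigMap can also be managed through the module itself instead of kubectl. A minimal sketch of the extra inputs, assuming the local.deploy_role referenced in the provider block below; these lines would go inside the module "eks" block above:

  # v19.x inputs for managing the aws-auth ConfigMap from the module
  manage_aws_auth_configmap = true

  aws_auth_roles = [
    {
      rolearn  = local.deploy_role # assumed: the role Terraform Cloud assumes
      username = "ops-user"
      groups   = ["system:masters"]
    }
  ]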
provider "kubernetes" {
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
# This requires the awscli to be installed locally where Terraform is executed
args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name, "--role-arn", local.deploy_role]
}
}
resource "kubernetes_secret" "example" {
....
}
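For completeness, a minimal sketch of what that secret resource might look like; the name, namespace, and data keys are illustrative only, not from the original answer:

resource "kubernetes_secret" "example" {
  metadata {
    name      = "example-secret" # illustrative name
    namespace = "default"
  }

  # data values are plain strings here; the provider base64-encodes them
  data = {
    username = "admin"
    password = "change-me"
  }

  type = "Opaque"
}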
Thanks for the answer. I have already configured ca_certificate and token.
Please accept the answer if it was useful for you.