
Creating an EKS v1.33 cluster with the AWS AL2023 AMI: the cluster gets created, but no nodes join it


From the AWS console I am trying to create an EKS v1.33 cluster using the AWS AL2023 AMI. The cluster gets created, but no nodes join it. I have also tried creating the EKS v1.33 cluster using Terraform, with the same result. I read somewhere that the AL2023 AMIs for v1.33 don't work straight away, so I am probably missing some configuration needed for the nodes to join the cluster. Can someone detail what configuration I need to add to the generic AMI or to my Terraform, so that my Terraform plan can deploy an EKS v1.33 cluster regardless of region or AWS account? To be clear: I can deploy an EKS v1.32 cluster using the AL2 AMI (provided by AWS) straight out of the box.

asked 2 months ago · 533 views
2 Answers

Unlike Amazon Linux 2, Amazon Linux 2023 does not use the bootstrap.sh script for EKS node bootstrapping. Instead, it uses a tool called nodeadm, which requires specially formatted EC2 user data. Here is an example of user data for an AL2023 EKS node:

MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="//"

--//
Content-Type: application/node.eks.aws

---
apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    apiServerEndpoint: <replace-me>
    certificateAuthority: <replace-me>
    cidr: <replace-me>        # e.g. 10.100.0.0/16 (the cluster's service CIDR)
    name: my-cluster
  kubelet:
    config:
      clusterDNS:
        - <replace-me>        # e.g. 10.100.0.10

--//--
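As a sketch, the placeholders above can also be filled programmatically. The helper below assumes you have already looked up the cluster's endpoint, certificate authority data, and service CIDR (for example via `aws eks describe-cluster`); the concrete values in the usage example are hypothetical.

```python
# Sketch: render nodeadm user data for an AL2023 EKS node.
# The cluster values passed in are assumptions here; in practice you would
# pull them from `aws eks describe-cluster` (endpoint,
# certificateAuthority.data, kubernetesNetworkConfig.serviceIpv4Cidr).

USER_DATA_TEMPLATE = """\
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="//"

--//
Content-Type: application/node.eks.aws

---
apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    apiServerEndpoint: {endpoint}
    certificateAuthority: {ca_data}
    cidr: {service_cidr}
    name: {cluster_name}

--//--
"""

def render_user_data(cluster_name, endpoint, ca_data, service_cidr):
    """Fill the nodeadm NodeConfig template with cluster attributes."""
    return USER_DATA_TEMPLATE.format(
        cluster_name=cluster_name,
        endpoint=endpoint,
        ca_data=ca_data,
        service_cidr=service_cidr,
    )

if __name__ == "__main__":
    print(render_user_data(
        "my-cluster",
        "https://EXAMPLE.gr7.us-east-1.eks.amazonaws.com",  # hypothetical endpoint
        "LS0tLS1CRUdJTi4uLg==",                             # base64 CA bundle
        "10.100.0.0/16",
    ))
```

In a launch template, this string would then be base64-encoded and supplied as the instance user data.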
AWS
answered 9 days ago

When your EKS nodes fail to join a cluster, there are several common causes to investigate. Based on your experience with v1.33 clusters and AL2023 AMIs, here are the key configurations to check:

  1. Bootstrap Script Configuration: For AL2 AMIs, the most common issue is that the UserData of the worker node must contain the bootstrap script with the correct EKS cluster name (note that AL2023 AMIs use nodeadm instead of bootstrap.sh). Ensure the bootstrap script in your UserData is configured with the exact cluster name:
#!/bin/bash
/etc/eks/bootstrap.sh your-cluster-name
  2. IAM Role Configuration: Verify that you've created proper access entries for the node's IAM role ARN (not the instance profile ARN). For self-managed nodes, ensure the node IAM role ARN is specified as the principal ARN in your access entry.

  3. ClusterName Parameter: In your CloudFormation template or Terraform configuration, ensure the ClusterName parameter exactly matches your EKS cluster name. An incorrect value will result in improper configuration of the node's /var/lib/kubelet/kubeconfig file.

  4. AMI Compatibility: While you mentioned AL2023 AMIs for v1.33 might have issues, ensure you're using the latest recommended AMI for your specific EKS version. You can reference the AMI ID from the SSM Parameter Store:

/aws/service/eks/optimized-ami/1.33/amazon-linux-2023/x86_64/standard/recommended/image_id
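Since the parameter name follows a predictable pattern, a small helper can build it for any cluster version. This is a sketch; the AL2023 parameters are arch-qualified (an "x86_64/standard" or "arm64/standard" segment), and resolving the name to an actual AMI ID still requires an SSM get-parameter call (e.g. via boto3), which is omitted here.

```python
def al2023_ami_parameter(k8s_version: str, arch: str = "x86_64") -> str:
    """Build the SSM parameter name for the recommended AL2023 EKS AMI.

    arch is "x86_64" for Intel/AMD instances or "arm64" for Graviton.
    The returned name is what you pass to SSM get-parameter (or use as
    a data source in Terraform) to resolve the actual AMI ID.
    """
    if arch not in ("x86_64", "arm64"):
        raise ValueError(f"unsupported arch: {arch}")
    return (f"/aws/service/eks/optimized-ami/{k8s_version}/"
            f"amazon-linux-2023/{arch}/standard/recommended/image_id")
```

For example, `al2023_ami_parameter("1.33")` yields the parameter name for the current x86_64 AL2023 AMI for EKS 1.33.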
  5. Launch Template Configuration: Check that your launch template includes all necessary configurations, including security groups that allow proper communication between nodes and the control plane.

  6. Network Configuration: Ensure your VPC, subnets, and security groups are properly configured to allow communication between the nodes and the EKS control plane.

For your Terraform configuration specifically, make sure you're:

  • Using the correct AMI ID for EKS 1.33 with AL2023
  • Properly configuring the bootstrap script in the user_data section
  • Setting the correct IAM roles and policies
  • Configuring proper security groups

If you continue to face issues, you can run the AWS Systems Manager Automation runbook for troubleshooting EKS worker nodes, which can help identify specific configuration problems.
Sources
EKS node creation failed | AWS re:Post
aws eks upgreade to 1.32 issue and node can't join in | AWS re:Post
Troubleshoot problems with Amazon EKS clusters and nodes - Amazon EKS

answered 2 months ago
