This issue is likely related to compatibility problems between the upgraded EKS cluster and the node group configuration. Here are some steps to troubleshoot and resolve the problem:
- Check CNI plugin compatibility: Ensure that the installed version of the Amazon VPC CNI plugin is compatible with Kubernetes 1.30. Verify the installed version and upgrade if necessary; a version-check example follows this list.
- Verify the kube-proxy version: Make sure kube-proxy has been upgraded to a version compatible with Kubernetes 1.30 (v1.30.0-eksbuild.3 or later); see the kube-proxy example after this list.
- Examine node logs: Check the logs on the new nodes that are failing to join the cluster, looking for error messages related to network plugin initialization or API server connectivity (the examples after this list include a way to pull logs without SSH).
- Check network connectivity: Verify that the security groups and network ACLs allow proper communication between the control plane and the nodes, and that all necessary ports, especially 443 (HTTPS), are open for inbound and outbound traffic (see the security-group example after this list).
- Confirm DNS resolution: Confirm that DNS resolution for the API endpoint is working correctly on the new nodes (see the DNS example after this list).
- Review the launch template configuration: Since the failing node group was created from a launch template, review the template's configuration. Ensure it is up to date and compatible with the new Kubernetes version (see the launch-template example after this list).
- Update the node group: Try updating the node group version using the AWS CLI or the Management Console. This may trigger a re-evaluation of the node group's status; an example for tracking the update's progress follows this list:
aws eks update-nodegroup-version --cluster-name your-cluster-name --nodegroup-name abc-node-group --kubernetes-version 1.30
- Delete and recreate: If updating doesn't work, consider deleting the problematic node group and recreating it with a configuration that is correct for Kubernetes 1.30.
- Check service quotas: Ensure that you haven't hit any service quotas that might prevent new nodes from being created or joining the cluster (see the quota example after this list).
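
For the CNI compatibility check, here is a minimal sketch; your-cluster-name is a placeholder, and the add-on version should be picked from whatever describe-addon-versions actually lists for 1.30:

```bash
# Show the VPC CNI image tag currently deployed (the tag is the plugin version).
kubectl describe daemonset aws-node -n kube-system | grep amazon-k8s-cni

# List VPC CNI add-on versions that EKS publishes for Kubernetes 1.30.
aws eks describe-addon-versions --addon-name vpc-cni --kubernetes-version 1.30 \
  --query 'addons[].addonVersions[].addonVersion'

# If the CNI is managed as an EKS add-on, update it to a compatible version.
aws eks update-addon --cluster-name your-cluster-name --addon-name vpc-cni \
  --addon-version <version-from-the-list-above> --resolve-conflicts PRESERVE
```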
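A similar check for kube-proxy, again with your-cluster-name as a placeholder:

```bash
# Print the kube-proxy image tag running on the cluster.
kubectl get daemonset kube-proxy -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}'

# List kube-proxy add-on versions published for Kubernetes 1.30, then update.
aws eks describe-addon-versions --addon-name kube-proxy --kubernetes-version 1.30 \
  --query 'addons[].addonVersions[].addonVersion'
aws eks update-addon --cluster-name your-cluster-name --addon-name kube-proxy \
  --addon-version v1.30.0-eksbuild.3
```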
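For the node-log step, when nodes terminate before you can SSH in, two places may still hold the failure reason; the instance ID below is a placeholder:

```bash
# The node group's health field records why recent nodes failed to join.
aws eks describe-nodegroup --cluster-name your-cluster-name \
  --nodegroup-name abc-node-group --query 'nodegroup.health'

# For an instance that is still running (or only very recently terminated),
# the console output can sometimes be fetched without SSH.
aws ec2 get-console-output --instance-id i-0123456789abcdef0 --latest
```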
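A sketch for the network check, assuming the nodes use the cluster security group (the security group ID below is a placeholder):

```bash
# Find the cluster security group shared by the control plane and nodes.
aws eks describe-cluster --name your-cluster-name \
  --query 'cluster.resourcesVpcConfig.clusterSecurityGroupId'

# Inspect its inbound rules; port 443 must be allowed between the control
# plane and the nodes.
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
  --query 'SecurityGroups[].IpPermissions'
```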
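A quick DNS check that can be run from any host in the cluster's VPC; the endpoint hostname below is a made-up example of the form EKS uses:

```bash
# Look up the cluster's API server endpoint.
aws eks describe-cluster --name your-cluster-name --query 'cluster.endpoint'

# Confirm the endpoint hostname resolves (strip the https:// prefix).
nslookup ABCDEF1234567890ABCDEF1234567890.gr7.us-west-2.eks.amazonaws.com
```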
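For the launch-template step, this compares what the node group actually launches against the available template versions (the template ID is a placeholder):

```bash
# Show which launch template and version the node group is pinned to.
aws eks describe-nodegroup --cluster-name your-cluster-name \
  --nodegroup-name abc-node-group --query 'nodegroup.launchTemplate'

# List all versions of that template to spot configuration drift.
aws ec2 describe-launch-template-versions --launch-template-id lt-0123456789abcdef0
```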
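After running the update-nodegroup-version command above, its progress and any failure details can be followed like this (the update ID is a placeholder returned by list-updates):

```bash
# List update operations recorded against the node group.
aws eks list-updates --name your-cluster-name --nodegroup-name abc-node-group

# Describe one update; failed updates include an errors array with details.
aws eks describe-update --name your-cluster-name --nodegroup-name abc-node-group \
  --update-id 12345678-1234-1234-1234-123456789012
```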
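For the quota check, one common limit to inspect is the On-Demand vCPU quota; the quota code below is the one published for standard On-Demand instances, but verify it in your own account:

```bash
# Running On-Demand Standard instances vCPU quota (code L-1216C47A).
aws service-quotas get-service-quota --service-code ec2 --quota-code L-1216C47A \
  --query 'Quota.Value'
```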
If these steps don't resolve the issue, you may need to contact AWS Support for further assistance, as there could be underlying problems specific to your cluster configuration.
Remember to take necessary precautions to maintain cluster stability and minimize downtime when making changes to your node groups.
Sources
- Upgrade of AWS EKS Node group failed with 'CNI plugin not initialized' | AWS re:Post
- Node group stuck at "Create failed" status | AWS re:Post
- EKS problem after upgrade from 1.23 | AWS re:Post
- Update a managed node group for your cluster - Amazon EKS

I followed the above steps and still have the same problem. FYI, the node group's nodes are in a terminated state, so I'm not even able to connect via SSH to view the logs. Is there any way to view the logs generated at upgrade time?