A timeout during the AWS Load Balancer Controller deployment can be caused by several factors, most commonly permissions or configuration problems.
First, let's consider the IAM permissions. Even though you've verified that the IAM roles and OIDC providers are in place, the specific error suggests there might be an issue with the controller's ability to access necessary resources. The AWS Load Balancer Controller requires permission on the "leases" resource in the coordination.k8s.io API group, which it uses for leader election. Without that permission, you would typically see a "Forbidden" error in the logs where the controller cannot get the "leases" resource.
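A quick way to confirm whether a permission problem (or any other startup error) is the cause is to read the controller's own logs; the command below assumes the default deployment name and namespace used by the official chart and manifests:
kubectl logs -n kube-system deployment/aws-load-balancer-controller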
Since you're experiencing a timeout rather than a permission denied error, this suggests the controller might be unable to communicate with the Kubernetes API server or other AWS services. This could be due to one of the following (the commands after the list can help narrow it down):
- Network connectivity issues between the controller pods and the Kubernetes API server
- Misconfigured AWS credentials or expired access keys
- An incorrect cluster endpoint URL in the controller configuration
- Issues with the kubeconfig file configuration
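One way to see which of these applies (again assuming the default deployment name) is to look at the deployment's conditions and recent cluster events:
kubectl -n kube-system describe deployment aws-load-balancer-controller
kubectl -n kube-system get events --sort-by=.lastTimestamp
A rollout that times out because the pods never become Ready shows up in the deployment conditions, while credential or connectivity problems usually appear as errors in the pod events or logs.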
To troubleshoot this issue:
- Verify that the aws-load-balancer-controller service account has the correct IAM role attached with the AWSLoadBalancerControllerIAMPolicy (see the commands after this list)
- Check whether the controller pods are being created but failing to start; examine the pod logs for more specific error messages
- Ensure that the bootstrap script in the UserData of your worker nodes contains the correct EKS cluster name, as this is critical for nodes to join the cluster properly
- Verify that your AWS credentials are valid and not expired
- Check that the controller deployment has the correct configuration, including the cluster name parameter
- If you're using a managed node group, verify its configuration in the EKS console
- Try redeploying the controller with a fresh installation (a sketch follows the list):
  - Delete the existing deployment
  - Recreate the IAM service account
  - Reinstall the controller using Helm or manifests
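As a rough sketch of these checks and the fresh installation, assuming the service account was created with eksctl and the controller is installed from the eks-charts Helm repository (replace <cluster-name> and <account-id> with your own values):
# 1. Confirm the service account carries the eks.amazonaws.com/role-arn annotation
kubectl get sa aws-load-balancer-controller -n kube-system -o yaml
# 2. Check pod status and logs
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller
kubectl logs -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller
# 3. Fresh installation: remove the old release, recreate the IAM service account, reinstall
helm uninstall aws-load-balancer-controller -n kube-system
eksctl create iamserviceaccount \
  --cluster <cluster-name> \
  --namespace kube-system \
  --name aws-load-balancer-controller \
  --attach-policy-arn arn:aws:iam::<account-id>:policy/AWSLoadBalancerControllerIAMPolicy \
  --override-existing-serviceaccounts \
  --approve
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=<cluster-name> \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller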
After making these changes, verify the installation by checking the deployment status:
kubectl get deployment -n kube-system aws-load-balancer-controller
A successful deployment should show the pods as READY (either 1/1 or 2/2 depending on whether you deployed using manifests or Helm).
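If everything is healthy, the output looks roughly like this (names and ages will differ in your cluster):
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
aws-load-balancer-controller   2/2     2            2           84s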
Sources
- Resolve leader election issues with the AWS Load Balancer Controller | AWS re:Post
- Troubleshoot IAM role issues in Amazon EKS | AWS re:Post
- EKS node creation failed | AWS re:Post
- Unpacking the Cluster Networking for Amazon EKS Hybrid Nodes | AWS re:Post
- Install AWS Load Balancer Controller with manifests - Amazon EKS
This could be a problem with admission webhooks.
Unlike regular deployments, the controller creates ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects.
Check whether the webhooks exist and are causing a circular dependency:
kubectl get validatingwebhookconfigurations aws-load-balancer-webhook -o yaml
kubectl get mutatingwebhookconfigurations aws-load-balancer-webhook -o yaml
If the webhook configuration exists but the controller pods aren't running, Kubernetes may be waiting for webhook validation on its own pod creation, creating a deadlock.
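If that is what is happening, one common workaround (a sketch, not an official procedure) is to delete the webhook configurations so the pods can start, then reinstall or upgrade the Helm release so the chart recreates the webhooks and their certificates:
kubectl delete validatingwebhookconfiguration aws-load-balancer-webhook
kubectl delete mutatingwebhookconfiguration aws-load-balancer-webhook
helm upgrade --install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=<cluster-name> \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller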
I have those; I deleted them and the pods are still not being created. I will investigate the webhook issue further. Thanks

The AI answer was not helpful, as it does not go into further detail, and most of the explanation does not account for only the aws-load-balancer-controller failing to deploy. I'm deploying it using Helm, with the latest version of the Helm chart and controller (helm 1.13.4 / controller v2.13.4). One of the clusters is running Kubernetes 1.30 and the other 1.31.