Failed build model due to NoCredentialProviders: no valid providers in chain. Deprecated. For verbose messaging see aws.Config.CredentialsChainVerboseErrors


I am running an observability workload (Jaeger Query) using a Kubernetes Deployment, Service, and Ingress in an AWS EKS cluster. In my non-prod environment it runs without any issues, but in the PROD environment, when I applied the same manifests to the EKS cluster, the Kubernetes Ingress component reported the error below:

"Failed build model due to NoCredentialProviders: no valid providers in chain. Deprecated. For verbose messaging see aws.Config.CredentialsChainVerboseErrors"

I also observed that in the non-prod environment, when I applied the Kubernetes Ingress manifest to the EKS cluster, an ALB was created and I could see it in the Load Balancers section of the EC2 console.

Please note that I am using SSO login and am not providing any access key or secret key in the manifest.

Below is the ServiceAccount manifest of the aws-load-balancer-controller:

apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
  annotations:
    meta.helm.sh/release-name: aws-load-balancer-controller
    meta.helm.sh/release-namespace: kube-system
  creationTimestamp: "2024-08-29T00:12:17Z"
  labels:
    app.kubernetes.io/instance: aws-load-balancer-controller
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: aws-load-balancer-controller
    app.kubernetes.io/version: v2.8.2
    helm.sh/chart: aws-load-balancer-controller-1.8.2
  name: aws-load-balancer-controller
  namespace: kube-system
  resourceVersion: "560473"
  uid: c3693edd-5270-42eb-94f1-6a3af669e697

Logs of the AWS Load Balancer Controller:

Found 2 pods, using pod/aws-load-balancer-controller-6f476768c5-ptglx
{"level":"info","ts":"2024-08-29T00:13:24Z","msg":"version","GitVersion":"v2.8.2","GitCommit":"f39ae43121c3f4de0129dda483c10b17a687491d","BuildDate":"2024-08-09T20:18:06+0000"}
{"level":"info","ts":"2024-08-29T00:13:24Z","logger":"setup","msg":"adding health check for controller"}
{"level":"info","ts":"2024-08-29T00:13:24Z","logger":"setup","msg":"adding readiness check for webhook"}
{"level":"info","ts":"2024-08-29T00:13:24Z","logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/mutate-v1-pod"}
{"level":"info","ts":"2024-08-29T00:13:24Z","logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/mutate-v1-service"}
{"level":"info","ts":"2024-08-29T00:13:24Z","logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/validate-elbv2-k8s-aws-v1beta1-ingressclassparams"}
{"level":"info","ts":"2024-08-29T00:13:24Z","logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/mutate-elbv2-k8s-aws-v1beta1-targetgroupbinding"}
{"level":"info","ts":"2024-08-29T00:13:24Z","logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/validate-elbv2-k8s-aws-v1beta1-targetgroupbinding"}
{"level":"info","ts":"2024-08-29T00:13:24Z","logger":"controller-runtime.webhook","msg":"Registering webhook","path":"/validate-networking-v1-ingress"}
{"level":"info","ts":"2024-08-29T00:13:24Z","logger":"setup","msg":"starting podInfo repo"}
{"level":"info","ts":"2024-08-29T00:13:26Z","logger":"controller-runtime.metrics","msg":"Starting metrics server"}
{"level":"info","ts":"2024-08-29T00:13:26Z","logger":"controller-runtime.webhook","msg":"Starting webhook server"}
{"level":"info","ts":"2024-08-29T00:13:26Z","logger":"controller-runtime.metrics","msg":"Serving metrics server","bindAddress":":8080","secure":false}
{"level":"info","ts":"2024-08-29T00:13:26Z","msg":"starting server","name":"health probe","addr":"[::]:61779"}
{"level":"info","ts":"2024-08-29T00:13:26Z","logger":"controller-runtime.certwatcher","msg":"Updated current TLS certificate"}
{"level":"info","ts":"2024-08-29T00:13:26Z","logger":"controller-runtime.webhook","msg":"Serving webhook server","host":"","port":9443}
{"level":"info","ts":"2024-08-29T00:13:26Z","logger":"controller-runtime.certwatcher","msg":"Starting certificate watcher"}
I0829 00:13:26.476885       1 leaderelection.go:250] attempting to acquire leader lease kube-system/aws-load-balancer-controller-leader...

Atif
asked a month ago · 55 views

2 Answers
Accepted Answer

It seems the main issue in your production environment is related to AWS credentials. The error message suggests that the AWS Load Balancer Controller can't find valid credentials to authenticate with AWS services. Let's break down the situation and explore some potential solutions:

Error Analysis: The error "NoCredentialProviders: no valid providers in chain" typically occurs when the AWS SDK can't find any credentials to use. This could be due to several reasons:

  1. Missing or incorrectly configured IAM role for the EKS cluster
  2. Issues with the AWS Load Balancer Controller's service account
  3. Differences in AWS authentication setup between non-prod and prod environments

Comparison with Non-Prod: In your non-prod environment, everything works fine and an ALB is created automatically. This suggests that the AWS Load Balancer Controller has the necessary permissions in that environment.

SSO Login: You mentioned using SSO login. While this is good for user authentication, the AWS Load Balancer Controller needs to authenticate as a service, not as a user. It typically uses IAM roles for this purpose.

Potential Solutions:

  • Check IAM Role: Ensure that your EKS cluster has an appropriate IAM role attached with the necessary permissions for the AWS Load Balancer Controller. This role should have permissions to manage ALBs, target groups, security groups, and related EC2 resources.
  • IRSA (IAM Roles for Service Accounts): Implement IRSA for the AWS Load Balancer Controller. This allows you to assign an IAM role directly to the Kubernetes service account used by the controller:
    1. Create an IAM role with the necessary permissions and a trust policy that allows your cluster's OIDC provider to assume it.
    2. Modify the service account to use this IAM role (see the sketch after this list).
    3. Update the AWS Load Balancer Controller deployment to use the modified service account.

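For reference, here is a minimal sketch of what an IRSA-enabled service account looks like. The account ID and role name in the annotation are placeholders; substitute the ARN of the IAM role you created for the controller:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-load-balancer-controller
  namespace: kube-system
  annotations:
    # Placeholder ARN - replace with the IAM role created for the controller
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/AmazonEKSLoadBalancerControllerRole
automountServiceAccountToken: true

Note that the ServiceAccount manifest you posted has no eks.amazonaws.com/role-arn annotation, which is consistent with the NoCredentialProviders error: without it, the controller pod has no IAM role to assume. Since the controller was installed with Helm (per the managed-by label), you can also set this annotation through the chart's serviceAccount.annotations value instead of editing the service account directly.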
Please let me know if this doesn't help, and we can troubleshoot further.

answered a month ago

It was an IAM role issue. After applying the IAM role and restarting the aws-load-balancer-controller, we were able to resolve the issue.

Atif
answered a month ago
