
How can I automate the configuration of HTTP proxy for Amazon EKS containerd nodes?


I want to automate the HTTP proxy configuration for Amazon Elastic Kubernetes Service (Amazon EKS) nodes with containerd runtime.

Short description

You can automate HTTP proxy configuration for Amazon EKS nodes with a custom launch template that includes proxy settings in the user data. The configuration approach varies based on your AMI family: Amazon Linux 2, Amazon Linux 2023, or Bottlerocket.

Note: For Amazon EKS clusters version 1.24 and later, containerd is the default container runtime.

Resolution

To configure your managed node group with HTTP proxy settings, create a custom launch template with your Amazon Machine Image (AMI) ID. Then, configure the appropriate settings for your HTTP proxy and the environment values of your cluster.

Choose the configuration approach based on your AMI family:

Configure HTTP proxy for Amazon Linux 2 nodes

Create the launch template

  1. Open the Amazon Elastic Compute Cloud (Amazon EC2) console.
  2. In the navigation pane, choose Launch Templates.
  3. Choose Create launch template.
  4. For Launch template name, enter a name for your template.
  5. For Application and OS Images (Amazon Machine Image), choose your Amazon Linux 2 AMI ID.
  6. Configure the following options:
    For Instance type, choose your required instance type.
    For Key pair name, choose your Amazon EC2 SSH key pair.
    For Security groups, choose your security groups.
  7. Expand Advanced details.
  8. For User data, enter the following configuration:
    MIME-Version: 1.0
    Content-Type: multipart/mixed; boundary="==BOUNDARY=="
    
    --==BOUNDARY==
    Content-Type: text/cloud-boothook; charset="us-ascii"
    
    #Set the proxy hostname and port
    PROXY=XXXXXXX:3128
    TOKEN=$(curl -X PUT "http://[IP_ADDRESS]/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    MAC=$(curl -H "X-aws-ec2-metadata-token: $TOKEN" -v -s http://[IP_ADDRESS]/latest/meta-data/mac/)
    VPC_CIDR=$(curl -H "X-aws-ec2-metadata-token: $TOKEN" -v -s http://[IP_ADDRESS]/latest/meta-data/network/interfaces/macs/$MAC/vpc-ipv4-cidr-blocks | xargs | tr ' ' ',')
    
    #Create the containerd and sandbox-image systemd directory
    mkdir -p /etc/systemd/system/containerd.service.d
    mkdir -p /etc/systemd/system/sandbox-image.service.d
    
    #[Optional] Configure yum to use the proxy
    cloud-init-per instance yum_proxy_config cat << EOF >> /etc/yum.conf
    proxy=http://$PROXY
    EOF
    
    #Set the proxy for future processes, and use as an include file
    cloud-init-per instance proxy_config cat << EOF >> /etc/environment
    http_proxy=http://$PROXY
    https_proxy=http://$PROXY
    HTTP_PROXY=http://$PROXY
    HTTPS_PROXY=http://$PROXY
    no_proxy=$VPC_CIDR,[IP_ADDRESS],[IP_ADDRESS],[IP_ADDRESS],.internal,.eks.amazonaws.com
    NO_PROXY=$VPC_CIDR,[IP_ADDRESS],[IP_ADDRESS],[IP_ADDRESS],.internal,.eks.amazonaws.com
    EOF
    
    #Configure Containerd with the proxy
    cloud-init-per instance containerd_proxy_config tee <<EOF /etc/systemd/system/containerd.service.d/http-proxy.conf >/dev/null
    [Service]
    EnvironmentFile=/etc/environment
    EOF
    
    #Configure sandbox-image with the proxy
    cloud-init-per instance sandbox-image_proxy_config tee <<EOF /etc/systemd/system/sandbox-image.service.d/http-proxy.conf >/dev/null
    [Service]
    EnvironmentFile=/etc/environment
    EOF
    
    #Configure the kubelet with the proxy
    cloud-init-per instance kubelet_proxy_config tee <<EOF /etc/systemd/system/kubelet.service.d/proxy.conf >/dev/null
    [Service]
    EnvironmentFile=/etc/environment
    EOF
    
    cloud-init-per instance reload_daemon systemctl daemon-reload
    
    --==BOUNDARY==
    Content-Type: text/x-shellscript; charset="us-ascii"
    
    #!/bin/bash
    set -o xtrace
    
    #Set the proxy variables before running the bootstrap.sh script
    set -a
    source /etc/environment
    
    #Run the bootstrap.sh script
    B64_CLUSTER_CA=YOUR_CLUSTER_CA
    API_SERVER_URL=API_SERVER_ENDPOINT
    
    /etc/eks/bootstrap.sh EKS_CLUSTER_NAME --b64-cluster-ca $B64_CLUSTER_CA --apiserver-endpoint $API_SERVER_URL
    
    --==BOUNDARY==--
    Note: Replace XXXXXXX:3128 with your proxy hostname and port. Replace YOUR_CLUSTER_CA with your cluster certificate authority (CA). Replace API_SERVER_ENDPOINT with your cluster's API server endpoint. Replace EKS_CLUSTER_NAME with your cluster name.
  9. Choose Create launch template.
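The boothook above builds the VPC portion of no_proxy by joining all of the VPC's IPv4 CIDR blocks with commas. As a minimal, self-contained sketch of that transformation (the sample CIDR values are hypothetical stand-ins for the instance metadata response):

```shell
# Hypothetical IMDS response: one VPC CIDR block per line.
cidr_blocks="192.168.0.0/16
10.0.0.0/16"

# Same pipeline as the boothook: collapse whitespace, then join with commas.
VPC_CIDR=$(echo "$cidr_blocks" | xargs | tr ' ' ',')
echo "$VPC_CIDR"
```

With the sample input this prints 192.168.0.0/16,10.0.0.0/16, the comma-separated form that the no_proxy variable expects.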

Verify the configuration

After you create your managed node group with the launch template, verify the proxy configuration:

  1. To check the status of your nodes, run the following command:

    kubectl get nodes -o wide
  2. To verify the proxy environment variables are set, connect to a node and run the following command:

    systemctl show containerd | grep Environment

The output shows the proxy environment variables configured for containerd.
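In particular, because the drop-in file references /etc/environment, the output should include a line similar to the following (exact property names vary by systemd version; this sample line is illustrative):

```
EnvironmentFiles=/etc/environment (ignore_errors=no)
```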

Configure HTTP proxy for Amazon Linux 2023 nodes

Note: If you receive errors when you run AWS Command Line Interface (AWS CLI) commands, then see Troubleshooting errors for the AWS CLI. Also, make sure that you're using the most recent AWS CLI version.

Amazon Linux 2023 introduces a new node initialization process, nodeadm, that uses a YAML configuration schema. nodeadm runs in two phases: config and run. The nodeadm-config phase runs before cloud-init, and nodeadm-run runs after cloud-init.

During the nodeadm-config phase, the system calls the Amazon EC2 service to retrieve instance details. To prevent the Amazon EC2 call before you configure the proxy settings, use the InstanceIdNodeName feature gate in nodeadm.

Create a worker node IAM role

  1. Create a new worker node AWS Identity and Access Management (IAM) role with the required policies.

  2. Use one of the following options to grant the worker node IAM role the appropriate access:

    Option 1: Create an EKS Access Entry
    To create an Access Entry of type EC2, run the following create-access-entry AWS CLI command:

    aws eks create-access-entry --cluster-name EKS_CLUSTER_NAME --principal-arn WORKER_NODE_IAM_ROLE_ARN --type EC2

    Note: Replace EKS_CLUSTER_NAME with the name of your cluster. Replace WORKER_NODE_IAM_ROLE_ARN with your worker node IAM role ARN.

    Option 2: Update the aws-auth ConfigMap
    Add the following configuration to your aws-auth ConfigMap in YAML format:

    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: ROLE_ARN
      username: system:node:{{SessionName}}

    Note: Replace ROLE_ARN with your worker node IAM role ARN. This configuration grants the necessary Kubernetes RBAC permissions to the worker node IAM role.
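For context, the entry above belongs under the mapRoles key of the aws-auth ConfigMap. A complete ConfigMap with that entry might look like the following sketch (ROLE_ARN is still a placeholder):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: ROLE_ARN
      username: system:node:{{SessionName}}
```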

Create the launch template

  1. Open the Amazon EC2 console.
  2. In the navigation pane, choose Launch Templates.
  3. Choose Create launch template.
  4. For Launch template name, enter a name for your template.
  5. For Application and OS Images (Amazon Machine Image), choose your Amazon Linux 2023 AMI ID.
  6. Configure the following options:
    For Instance type, choose your required instance type.
    For Key pair name, choose your Amazon EC2 SSH key pair.
    For Security groups, choose your security groups.
  7. Expand Advanced details.
  8. For User data, enter the following configuration:
    MIME-Version: 1.0
    Content-Type: multipart/mixed; boundary="==BOUNDARY=="
    
    --==BOUNDARY==
    Content-Type: text/cloud-boothook; charset="us-ascii"
    
    #!/bin/bash
    
    #Set the proxy hostname and port
    PROXY=XXXXXXX:3128
    TOKEN=$(curl -X PUT "http://[IP_ADDRESS]/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    MAC=$(curl -H "X-aws-ec2-metadata-token: $TOKEN" -v -s http://[IP_ADDRESS]/latest/meta-data/mac/)
    VPC_CIDR=$(curl -H "X-aws-ec2-metadata-token: $TOKEN" -v -s http://[IP_ADDRESS]/latest/meta-data/network/interfaces/macs/$MAC/vpc-ipv4-cidr-blocks | xargs | tr ' ' ',')
    
    #[Optional] Configure yum to use the proxy
    cloud-init-per instance yum_proxy_config cat << EOF >> /etc/yum.conf
    proxy=http://$PROXY
    EOF
    
    #Set the proxy for future processes, and use as an include file
    cloud-init-per instance proxy_config cat << EOF >> /etc/environment
    http_proxy=http://$PROXY
    https_proxy=http://$PROXY
    HTTP_PROXY=http://$PROXY
    HTTPS_PROXY=http://$PROXY
    no_proxy=$VPC_CIDR,[IP_ADDRESS],[IP_ADDRESS],[IP_ADDRESS],.internal,.eks.amazonaws.com
    NO_PROXY=$VPC_CIDR,[IP_ADDRESS],[IP_ADDRESS],[IP_ADDRESS],.internal,.eks.amazonaws.com
    EOF
    
    #Configure Containerd with the proxy
    cloud-init-per instance containerd_proxy_config tee <<EOF /etc/systemd/system/containerd.service.d/http-proxy.conf >/dev/null
    [Service]
    EnvironmentFile=/etc/environment
    EOF
    
    #Configure the kubelet with the proxy
    cloud-init-per instance kubelet_proxy_config tee <<EOF /etc/systemd/system/kubelet.service.d/proxy.conf >/dev/null
    [Service]
    EnvironmentFile=/etc/environment
    EOF
    
    cloud-init-per instance reload_daemon systemctl daemon-reload
    
    --==BOUNDARY==
    Content-Type: application/node.eks.aws
    
    ---
    apiVersion: node.eks.aws/v1alpha1
    kind: NodeConfig
    spec:
      featureGates:
        InstanceIdNodeName: true
      cluster:
        name: EKS_CLUSTER_NAME
        apiServerEndpoint: API_SERVER_ENDPOINT
        certificateAuthority: YOUR_CLUSTER_CA
        cidr: KUBERNETES_SERVICE_CIDR_RANGE
    
    --==BOUNDARY==--
    Note: Replace XXXXXXX:3128 with your proxy hostname and port. Replace YOUR_CLUSTER_CA with your cluster certificate authority (CA). Replace API_SERVER_ENDPOINT with your cluster's API server endpoint. Replace EKS_CLUSTER_NAME with your cluster name. Replace KUBERNETES_SERVICE_CIDR_RANGE with your cluster's service CIDR range. If the service CIDR isn't supplied in the user data, then the nodeadm-config phase fails.
  9. Choose Create launch template.
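To find the value for KUBERNETES_SERVICE_CIDR_RANGE, you can query the cluster with `aws eks describe-cluster --name EKS_CLUSTER_NAME --query 'cluster.kubernetesNetworkConfig.serviceIpv4Cidr' --output text`. The following sketch extracts the same field from a canned response body so that it runs without credentials (the CIDR value is a hypothetical example):

```shell
# Canned fragment of an `aws eks describe-cluster` response; the real call
# needs AWS credentials and an existing cluster.
response='{"cluster":{"kubernetesNetworkConfig":{"serviceIpv4Cidr":"10.100.0.0/16"}}}'

# Pull out serviceIpv4Cidr with grep and cut (no jq dependency).
SERVICE_CIDR=$(echo "$response" | grep -o '"serviceIpv4Cidr":"[^"]*"' | cut -d'"' -f4)
echo "$SERVICE_CIDR"
```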

Verify the configuration

After you create your managed node group with the launch template, verify the proxy configuration:

  1. To check the status of your nodes, run the following command:

    kubectl get nodes -o wide
  2. To verify the proxy environment variables are set, connect to a node and run the following command:

    systemctl show containerd | grep Environment

The output shows the proxy environment variables configured for containerd.

Configure HTTP proxy for Bottlerocket nodes

Create the launch template

  1. Open the Amazon EC2 console.

  2. In the navigation pane, choose Launch Templates.

  3. Choose Create launch template.

  4. For Launch template name, enter a name for your template.

  5. For Application and OS Images (Amazon Machine Image), choose your Bottlerocket AMI ID.

  6. Configure the following options:
    For Instance type, choose your required instance type.
    For Key pair name, choose your Amazon EC2 SSH key pair.
    For Security groups, choose your security groups.

  7. Expand Advanced details.

  8. For User data, enter the following configuration:

    [settings.kubernetes]
    "cluster-name" = "EKS_CLUSTER_NAME"
    "api-server" = "API_SERVER_ENDPOINT"
    "cluster-certificate" = "YOUR_CLUSTER_CA"
    
    [settings.network]
    no-proxy = ["VPC_CIDR_RANGE","[IP_ADDRESS]","[IP_ADDRESS]","[IP_ADDRESS]",".internal",".eks.amazonaws.com"]
    https-proxy = "XXXXXXX:3128"

    Note: Replace XXXXXXX:3128 with your proxy hostname and port. Replace YOUR_CLUSTER_CA with your cluster certificate authority (CA). Replace API_SERVER_ENDPOINT with your cluster's API server endpoint. Replace EKS_CLUSTER_NAME with your cluster name. Replace VPC_CIDR_RANGE with your VPC CIDR. Bottlerocket automatically configures the proxy settings for the containerd and kubelet services.

    To add self-signed certificates to your Bottlerocket instances, use the following configuration in your user data section:

    [settings.pki.proxy-bundle]
    data="ENCODED_CA_DATA"
    trusted=true

    Note: Replace ENCODED_CA_DATA with the base64-encoded certificate data.

    To encode your certificate file, run the following command:

    base64 -i FILE -w0
  9. Choose Create launch template.
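As a sketch of the encoding step, the following creates a hypothetical certificate file, encodes it without line wrapping (the form that settings.pki.proxy-bundle expects), and decodes it again to confirm the round trip; the file name and contents are placeholders for illustration:

```shell
# Hypothetical certificate file for illustration only.
printf 'example certificate data' > /tmp/example-ca.pem

# Encode without line wrapping (-w0), as required for the user data value.
ENCODED_CA_DATA=$(base64 -w0 < /tmp/example-ca.pem)
echo "$ENCODED_CA_DATA"

# Round-trip to confirm the encoding is reversible.
echo "$ENCODED_CA_DATA" | base64 -d
```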

Verify the configuration

After you create your managed node group with the launch template, verify the proxy configuration:

  1. To check the status of your nodes, run the following command:

    kubectl get nodes -o wide
  2. To verify the proxy settings, connect to a Bottlerocket node using AWS Systems Manager Session Manager and run the following command:

    apiclient get settings.network

The output shows the proxy settings configured for the node.

Create the managed node group

After you create your launch template, create a new managed node group that uses the custom launch template.

For more information about creating managed node groups with launch templates, see Customize managed nodes with launch templates.
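For example, with the AWS CLI, a node group that uses the custom launch template might be created as follows. This is a sketch: the uppercase values and subnet IDs are placeholders, and the launch template can also be referenced by id instead of name.

```shell
aws eks create-nodegroup \
  --cluster-name EKS_CLUSTER_NAME \
  --nodegroup-name NODEGROUP_NAME \
  --node-role WORKER_NODE_IAM_ROLE_ARN \
  --subnets subnet-0aaaaaaaaaaaaaaaa subnet-0bbbbbbbbbbbbbbbb \
  --launch-template name=LAUNCH_TEMPLATE_NAME,version=1
```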

Configure proxy for fully private clusters

Important: When you use EKS Pod Identity associations with a proxy configuration, you must also include [IP_ADDRESS] (IPv4) or [[IP_ADDRESS]] (IPv6) in your no_proxy/NO_PROXY environment variables.

Amazon EKS clusters with private API server endpoint access, private subnets, and no internet access require additional VPC endpoints. If you use the preceding configuration to build a cluster, then you must create and add VPC endpoints for the following services:

  • Amazon EC2
  • Amazon Elastic Container Registry to pull container images
  • Amazon Elastic Load Balancing for Application Load Balancers and Network Load Balancers
  • Amazon CloudWatch Logs
  • AWS Security Token Service when you use IAM roles for service accounts
  • Amazon EKS Auth when you use Pod Identity associations
  • Amazon EKS

After you create these endpoints, configure the NO_PROXY and no_proxy variables in your Amazon EC2 instance launch template user data. Include the public endpoint subdomains specific to your AWS Region and services.

For example:

For Amazon Simple Storage Service (Amazon S3):

  • If your bucket is in us-east-1, then add: .s3.us-east-1.amazonaws.com
  • If your bucket is in eu-west-1, then add: .s3.eu-west-1.amazonaws.com

For Amazon EKS when you use private endpoint access:

  • If your cluster is in us-east-1, then add: .us-east-1.eks.amazonaws.com
  • If your cluster is in eu-west-1, then add: .eu-west-1.eks.amazonaws.com

Note: Replace the region identifiers with the AWS Region where you have deployed your resources, and add these endpoints to both NO_PROXY and no_proxy variables. Depending on your cluster's workload and add-ons, add other services to your proxy configuration.
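As a sketch, you can assemble the variable in your user data from a single Region variable so that the regional endpoint subdomains stay consistent; all of the values below are hypothetical examples:

```shell
# Hypothetical values; substitute your own Region and CIDR ranges.
AWS_REGION="us-east-1"
VPC_CIDR="192.168.0.0/16"
SERVICE_CIDR="10.100.0.0/16"

# Base entries, then the Region-specific S3 and EKS endpoint subdomains.
NO_PROXY="${SERVICE_CIDR},${VPC_CIDR},.internal,.eks.amazonaws.com"
NO_PROXY="${NO_PROXY},.s3.${AWS_REGION}.amazonaws.com,.${AWS_REGION}.eks.amazonaws.com"
no_proxy="$NO_PROXY"

echo "$NO_PROXY"
```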

Configure proxy for public clusters

Note: If you have a different configuration, then these steps are optional.

If you route traffic from the cluster to the internet through an HTTP proxy and your Amazon EKS endpoint is public, then complete the following task.

Create a ConfigMap to configure the environment values:

apiVersion: v1
kind: ConfigMap
metadata:
  name: proxy-environment-variables
  namespace: kube-system

data:
  HTTP_PROXY: http://XXXXXXX:3128
  HTTPS_PROXY: http://XXXXXXX:3128
  NO_PROXY: KUBERNETES_SERVICE_CIDR_RANGE,[IP_ADDRESS],[IP_ADDRESS],VPC_CIDR_RANGE,[IP_ADDRESS],.internal,.eks.amazonaws.com,ec2.us-east-1.amazonaws.com
  no_proxy: KUBERNETES_SERVICE_CIDR_RANGE,[IP_ADDRESS],[IP_ADDRESS],VPC_CIDR_RANGE,[IP_ADDRESS],.internal,.eks.amazonaws.com,ec2.us-east-1.amazonaws.com

Note: Replace KUBERNETES_SERVICE_CIDR_RANGE and VPC_CIDR_RANGE with the values for your CIDR ranges. After you create the VPC endpoints (Amazon EKS and Amazon EC2), add AWS service endpoints to NO_PROXY and no_proxy.

Apply the ConfigMap:

kubectl apply -f proxy-configmap.yaml

Configure aws-node and kube-proxy

To set your HTTP proxy configuration to aws-node and kube-proxy, run the following commands:

kubectl patch -n kube-system -p '{ "spec": {"template":{ "spec": { "containers": [ { "name": "aws-node", "envFrom": [ { "configMapRef": {"name": "proxy-environment-variables"} } ] } ] } } } }' daemonset aws-node

kubectl patch -n kube-system -p '{ "spec": {"template":{ "spec": { "containers": [ { "name": "kube-proxy", "envFrom": [ { "configMapRef": {"name": "proxy-environment-variables"} } ] } ] } } } }' daemonset kube-proxy
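Because a malformed patch document can leave these daemonsets in a broken state (see the comments below this article), it can help to sanity-check the JSON payload locally before applying it. This sketch validates the aws-node payload with python3's json.tool, which is assumed to be available:

```shell
# The aws-node patch payload from the command above, checked for well-formedness.
patch='{"spec":{"template":{"spec":{"containers":[{"name":"aws-node","envFrom":[{"configMapRef":{"name":"proxy-environment-variables"}}]}]}}}}'
echo "$patch" | python3 -m json.tool > /dev/null && echo "patch is valid JSON"

# After patching, you can confirm the reference landed (needs cluster access):
# kubectl -n kube-system get daemonset aws-node \
#   -o jsonpath='{.spec.template.spec.containers[0].envFrom}'
```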

Verify the proxy configuration

To verify the proxy configuration is working correctly, complete the following steps:

  1. To check the status of your nodes, run the following command:

    kubectl get nodes -o wide
  2. To test pod connectivity through the proxy, run the following commands:

    kubectl run test-pod --image=amazonlinux:2 --restart=Never -- sleep 300
    kubectl get pods -A
  3. Check your proxy log for additional information on your nodes' connectivity. The logs should show successful connections (TCP_TUNNEL/200) to container registry endpoints.
    Example output:

    192.168.100.114 TCP_TUNNEL/200 6230 CONNECT registry-1.docker.io:443 - HIER_DIRECT/XX.XX.XX.XX -
    192.168.100.114 TCP_TUNNEL/200 10359 CONNECT auth.docker.io:443 - HIER_DIRECT/XX.XX.XX.XX -
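The log format above matches a Squid-style access log. As a sketch of how to scan it, the following writes two hypothetical lines to a temporary file and counts successful tunnels versus denials; on a real proxy host you would point grep at the actual access log path instead:

```shell
# Two hypothetical log lines: one successful CONNECT tunnel, one denial.
cat > /tmp/proxy-access.log <<'EOF'
192.168.100.114 TCP_TUNNEL/200 6230 CONNECT registry-1.docker.io:443 - HIER_DIRECT/XX.XX.XX.XX -
192.168.100.114 TCP_DENIED/403 3617 CONNECT auth.docker.io:443 - HIER_NONE/- -
EOF

# Successful tunnels indicate traffic is flowing through the proxy;
# denials usually point at proxy ACLs or missing no_proxy entries.
grep -c 'TCP_TUNNEL/200' /tmp/proxy-access.log
grep -c 'TCP_DENIED' /tmp/proxy-access.log
```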

Related information

Upgrade from Amazon Linux 2 to Amazon Linux 2023

Create self-managed Bottlerocket nodes

Grant IAM users access to Kubernetes with EKS access entries

AWS OFFICIAL · Updated 2 months ago
13 Comments

Keep in mind calls to the metadata service may require a token: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html

replied 3 years ago

Thank you for your comment. We'll review and update the Knowledge Center article as needed.

AWS
MODERATOR
replied 3 years ago

There is a minor formatting error in the user data; it should include the following at the very top:

MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==BOUNDARY=="
AWS
replied 2 years ago

Under what circumstances and use cases would it be necessary or recommended to configure HTTP proxy settings for aws-node and kube-proxy within an Amazon Elastic Kubernetes Service (EKS) cluster? What are the best practices and considerations for implementing such configurations, and what potential benefits or scenarios might warrant this approach?

I do understand that kubelet/containerd need to pull images from an external network, but what about the aws-node and kube-proxy components?

replied 2 years ago

Thank you for your comment. We'll review and update the Knowledge Center article as needed.

AWS
MODERATOR
replied 2 years ago

Why doesn't the vpc & kube-proxy have the ability to configure this through the add-on instead of having to patch the daemonset manually post install?

replied 2 years ago

Thank you for your comment. We'll review and update the Knowledge Center article as needed.

AWS
MODERATOR
replied 2 years ago

If you are using EKS Pod Identities to pass IAM roles into your workloads, you must add the IP used by the Pod Identity Agent to the no_proxy/NO_PROXY environment variables for the pods, see below. Otherwise, the requests would be proxied and wouldn't make it to the eks-pod-identity-agent DaemonSet pods.

NO_PROXY: KUBERNETES_SERVICE_CIDR_RANGE,localhost,127.0.0.1,VPC_CIDR_RANGE,169.254.169.254,.internal,169.254.170.23

IPv4: 169.254.170.23

IPv6: [fd00:ec2::23]

AWS
replied a year ago

For this to work in a private subnet, VPC endpoints for several AWS services need to be created in the VPC. These services include eks-api, eks-auth, ec2, sts, logs, and Elastic Load Balancing for the ALB ingress controller. Without these private endpoints I could not use AWS services, nor could I create EBS volumes to mount to my containers, and the logs were frustratingly vague.

For reference this problem is mentioned here on this article: https://repost.aws/knowledge-center/eks-http-proxy-configuration-automation

replied a year ago

This no longer works for AL2023 nodes. This is the error in the log:

!!!!!!!!!!
!!!!!!!!!! ERROR: bootstrap.sh has been removed from AL2023-based EKS AMIs.
!!!!!!!!!!
!!!!!!!!!! EKS nodes are now initialized by nodeadm.
!!!!!!!!!!
!!!!!!!!!! To migrate your user data, see:
!!!!!!!!!!
!!!!!!!!!!     https://awslabs.github.io/amazon-eks-ami/nodeadm/
!!!!!!!!!!

It would be nice if this document would be updated to include the configuration for this new setup.

replied 6 months ago

Thank you for your comment. We'll review and update the Knowledge Center article as needed.

AWS
MODERATOR
replied 6 months ago

Hello again.

Not only is this document wrong for EKS v1.33, it turns out that if you are using v1.33 it is not currently possible to configure these managed EKS nodes behind a proxy.

This took me 2 weeks to track down. It should be documented here.

https://github.com/awslabs/amazon-eks-ami/issues/2128

Please add a note here letting people know that 1.33 will not work with this process, so other people don't have to waste time troubleshooting an unsupported setup chasing "ghosts".

For anyone that wants to use kube version 1.33 but is stuck behind a proxy, the AL2 images for kube v1.32 will work until AWS fixes the nodeadm tool to support proxies.

replied 5 months ago

Hello, it's me again.

Following the instructions in the section titled "Configure the proxy setting for aws-node and kube-proxy" completely broke my EKS nodes and resulted in many hours of disappointment and troubleshooting. This configuration directly breaks the daemonsets it's applied to, and to make my nodes work again I had to roll back these changes.

When I built another cluster, I skipped this section. It seems the user data and private links are enough to get the cluster working behind a proxy. Adding this configuration results in broken nodes and completely broken aws-node/kube-proxy daemonsets.

replied 5 months ago