How do I automate the HTTP proxy configuration for Amazon EKS worker nodes with Docker?

I want to automate the HTTP proxy configuration for Amazon Elastic Kubernetes Service (Amazon EKS) worker nodes with user data.

Resolution

Note: The following resolution applies only to nodes where the underlying runtime is Docker and doesn't apply to nodes with a containerd runtime. For nodes with a containerd runtime, see How can I automate the configuration of HTTP proxy for Amazon EKS containerd nodes?

To set up a proxy on worker nodes, you must configure the necessary components of your Amazon EKS cluster to communicate through the proxy. These components include the kubelet systemd service, the kube-proxy and aws-node pods, and yum.

To automate the proxy configuration for worker nodes with a Docker runtime, complete the following steps:

  1. Find your cluster's IP address CIDR block:

    $ kubectl get service kubernetes -o jsonpath='{.spec.clusterIP}'; echo

    Note: The preceding command returns either 10.100.0.1 or 172.20.0.1, which means that the cluster IP address CIDR block is 10.100.0.0/16 or 172.20.0.0/16, respectively.
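    You can also derive the CIDR block from the command's output in shell. The following is a minimal sketch that assumes the default /16 service range that Amazon EKS uses; the CLUSTER_IP value is a placeholder for the kubectl output:

```shell
# Derive the /16 service CIDR from the cluster IP address
# (assumes the default /16 service range that Amazon EKS uses).
CLUSTER_IP="10.100.0.1"                  # substitute the kubectl output
CLUSTER_CIDR="${CLUSTER_IP%.*.*}.0.0/16" # strip the last two octets
echo "$CLUSTER_CIDR"                     # prints 10.100.0.0/16
```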

  2. Based on the command's output, create a ConfigMap file that's named proxy-env-vars-config.yaml.
    If the output has an IP address from the range 172.20.x.x, then use the following ConfigMap structure:

    apiVersion: v1
    kind: ConfigMap
    metadata:
     name: proxy-environment-variables
     namespace: kube-system
    data:
     HTTP_PROXY: http://customer.proxy.host:proxy_port
     HTTPS_PROXY: http://customer.proxy.host:proxy_port
     NO_PROXY: 172.20.0.0/16,localhost,127.0.0.1,VPC_CIDR_RANGE,169.254.169.254,.internal,s3.amazonaws.com,.s3.us-east-1.amazonaws.com,api.ecr.us-east-1.amazonaws.com,dkr.ecr.us-east-1.amazonaws.com,ec2.us-east-1.amazonaws.com

    Note: Replace VPC_CIDR_RANGE with the IPv4 address CIDR block of your cluster's virtual private cloud (VPC).
    If the output has an IP address from the range 10.100.x.x, then use the following ConfigMap structure:

    apiVersion: v1
    kind: ConfigMap
    metadata:
     name: proxy-environment-variables
     namespace: kube-system
    data:
     HTTP_PROXY: http://customer.proxy.host:proxy_port
     HTTPS_PROXY: http://customer.proxy.host:proxy_port
     NO_PROXY: 10.100.0.0/16,localhost,127.0.0.1,VPC_CIDR_RANGE,169.254.169.254,.internal,s3.amazonaws.com,.s3.us-east-1.amazonaws.com,api.ecr.us-east-1.amazonaws.com,dkr.ecr.us-east-1.amazonaws.com,ec2.us-east-1.amazonaws.com

    Note: Replace VPC_CIDR_RANGE with the IPv4 address CIDR block of your cluster's VPC.
    Amazon EKS clusters with private API server endpoint access, private subnets, and no internet access require additional endpoints. If you use the preceding configuration to build a cluster, then you must create and add endpoints for the following services:
    Amazon Elastic Container Registry (Amazon ECR)
    Amazon Simple Storage Service (Amazon S3)
    Amazon Elastic Compute Cloud (Amazon EC2)
    Amazon Virtual Private Cloud (Amazon VPC)
    Important: You must add the public endpoint subdomain to the NO_PROXY variable. For example, add the .s3.us-east-1.amazonaws.com domain for Amazon S3 in the us-east-1 AWS Region. If you activate endpoint private access for your Amazon EKS cluster, then you must add the Amazon EKS endpoint to the NO_PROXY variable. For example, add the .us-east-1.eks.amazonaws.com domain for your Amazon EKS cluster in the us-east-1 AWS Region.
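    Because the NO_PROXY value repeats the same Region-specific endpoints, you can assemble it from its parts to reduce copy-and-paste errors. The following is an illustrative sketch; REGION, VPC_CIDR_RANGE, and CLUSTER_CIDR are placeholder values that you must replace with your own:

```shell
# Assemble the NO_PROXY value from placeholder parts (illustrative only;
# replace REGION, VPC_CIDR_RANGE, and CLUSTER_CIDR with your own values).
REGION="us-east-1"
VPC_CIDR_RANGE="192.168.0.0/16"
CLUSTER_CIDR="10.100.0.0/16"
NO_PROXY="$CLUSTER_CIDR,localhost,127.0.0.1,$VPC_CIDR_RANGE,169.254.169.254,.internal"
NO_PROXY="$NO_PROXY,s3.amazonaws.com,.s3.$REGION.amazonaws.com"
NO_PROXY="$NO_PROXY,api.ecr.$REGION.amazonaws.com,dkr.ecr.$REGION.amazonaws.com,ec2.$REGION.amazonaws.com"
echo "$NO_PROXY"
```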

  3. Verify that the NO_PROXY variable in configmap/proxy-environment-variables that kube-proxy and aws-node pods use includes the Kubernetes cluster IP address space. For example, 10.100.0.0/16 is used in the preceding code example for the ConfigMap file where the IP address range is from 10.100.x.x.
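    One way to script this check is sketched below. The commented kubectl line requires cluster access, so a sample value stands in for illustration:

```shell
# Fetch the live value with (requires cluster access):
#   NO_PROXY_VAL=$(kubectl get configmap proxy-environment-variables \
#     -n kube-system -o jsonpath='{.data.NO_PROXY}')
NO_PROXY_VAL="10.100.0.0/16,localhost,127.0.0.1"  # sample value for illustration
CLUSTER_CIDR="10.100.0.0/16"
# Surround both strings with commas so that only a whole entry matches.
case ",$NO_PROXY_VAL," in
  *",$CLUSTER_CIDR,"*) echo "cluster CIDR present in NO_PROXY" ;;
  *) echo "cluster CIDR missing from NO_PROXY" >&2; exit 1 ;;
esac
```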

  4. Apply the ConfigMap:

    $ kubectl apply -f /path/to/yaml/proxy-env-vars-config.yaml
  5. To configure the Docker daemon and kubelet, include user data in your worker nodes:

    Content-Type: multipart/mixed; boundary="==BOUNDARY=="
    MIME-Version:  1.0
    
    --==BOUNDARY==
    Content-Type: text/cloud-boothook; charset="us-ascii"
    
    #Set the proxy hostname and port
    PROXY="proxy.local:3128"
    MAC=$(curl -s http://169.254.169.254/latest/meta-data/mac/)
    VPC_CIDR=$(curl -s http://169.254.169.254/latest/meta-data/network/interfaces/macs/$MAC/vpc-ipv4-cidr-blocks | xargs | tr ' ' ',')
    
    #Create the docker systemd directory
    mkdir -p /etc/systemd/system/docker.service.d
    
    #Configure yum to use the proxy
    cloud-init-per instance yum_proxy_config cat << EOF >> /etc/yum.conf
    proxy=http://$PROXY
    EOF
    
    #Set the proxy for future processes, and use as an include file
    cloud-init-per instance proxy_config cat << EOF >> /etc/environment
    http_proxy=http://$PROXY
    https_proxy=http://$PROXY
    HTTP_PROXY=http://$PROXY
    HTTPS_PROXY=http://$PROXY
    no_proxy=$VPC_CIDR,localhost,127.0.0.1,169.254.169.254,.internal,s3.amazonaws.com,.s3.us-east-1.amazonaws.com,api.ecr.us-east-1.amazonaws.com,dkr.ecr.us-east-1.amazonaws.com,ec2.us-east-1.amazonaws.com
    NO_PROXY=$VPC_CIDR,localhost,127.0.0.1,169.254.169.254,.internal,s3.amazonaws.com,.s3.us-east-1.amazonaws.com,api.ecr.us-east-1.amazonaws.com,dkr.ecr.us-east-1.amazonaws.com,ec2.us-east-1.amazonaws.com
    EOF
    
    #Configure docker with the proxy
    cloud-init-per instance docker_proxy_config tee <<EOF /etc/systemd/system/docker.service.d/proxy.conf >/dev/null
    [Service]
    EnvironmentFile=/etc/environment
    EOF
    
    #Configure the kubelet with the proxy
    cloud-init-per instance kubelet_proxy_config tee <<EOF /etc/systemd/system/kubelet.service.d/proxy.conf >/dev/null
    [Service]
    EnvironmentFile=/etc/environment
    EOF
    
    #Reload the daemon and restart docker to reflect proxy configuration at launch of instance
    cloud-init-per instance reload_daemon systemctl daemon-reload 
    cloud-init-per instance enable_docker systemctl enable --now --no-block docker
    
    --==BOUNDARY==
    Content-Type:text/x-shellscript; charset="us-ascii"
    
    #!/bin/bash
    set -o xtrace
    
    #Set the proxy variables before running the bootstrap.sh script
    set -a
    source /etc/environment
    
    /etc/eks/bootstrap.sh ${ClusterName} ${BootstrapArguments}
    
    # Use the cfn-signal only if the node is created through an AWS CloudFormation stack and needs to signal back to an AWS CloudFormation resource (CFN_RESOURCE_LOGICAL_NAME) that waits for a signal from this EC2 instance to progress through either:
    # - CreationPolicy https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-creationpolicy.html
    # - UpdatePolicy https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html
    # cfn-signal will signal back to AWS CloudFormation using https transport, so set the proxy for an HTTPS connection to AWS CloudFormation
    /opt/aws/bin/cfn-signal \
        --exit-code $? \
        --stack ${AWS::StackName} \
        --resource CFN_RESOURCE_LOGICAL_NAME \
        --region ${AWS::Region} \
        --https-proxy $HTTPS_PROXY
    
    --==BOUNDARY==--

    Important: Before you start the Docker daemon and kubelet, you must update or create yum, Docker, and kubelet configuration files.
    For more information about how to use an AWS CloudFormation template to include user data in worker nodes, see Create self-managed Amazon Linux nodes.
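    The xargs | tr ' ' ',' pipeline in the boothook above collapses the metadata response, which lists one CIDR block per line, into the comma-separated form that no_proxy expects. The following local illustration uses sample metadata output instead of a live instance metadata call:

```shell
# Sample of what the vpc-ipv4-cidr-blocks metadata path can return
# (one CIDR block per line when the VPC has multiple CIDR blocks):
SAMPLE_OUTPUT="10.0.0.0/16
192.168.0.0/24"
# xargs joins the lines with spaces; tr converts the spaces to commas.
VPC_CIDR=$(printf '%s' "$SAMPLE_OUTPUT" | xargs | tr ' ' ',')
echo "$VPC_CIDR"   # prints 10.0.0.0/16,192.168.0.0/24
```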

  6. To update the aws-node and kube-proxy pods, run the following commands:

    $ kubectl patch -n kube-system -p '{ "spec": {"template": { "spec": { "containers": [ { "name": "aws-node", "envFrom": [ { "configMapRef": {"name": "proxy-environment-variables"} } ] } ] } } } }' daemonset aws-node
    $ kubectl patch -n kube-system -p '{ "spec": {"template":{ "spec": { "containers": [ { "name": "kube-proxy", "envFrom": [ { "configMapRef": {"name": "proxy-environment-variables"} } ] } ] } } } }' daemonset kube-proxy

    If you change the ConfigMap, then apply the updates, and set the ConfigMap in the pods again:

    $ kubectl set env daemonset/kube-proxy --namespace=kube-system --from=configmap/proxy-environment-variables --containers='*'
    $ kubectl set env daemonset/aws-node --namespace=kube-system --from=configmap/proxy-environment-variables --containers='*'

    Important: When you update kube-proxy or aws-node, you must reapply any YAML modifications that you made. To restore the default configuration, run the eksctl utils update-kube-proxy or eksctl utils update-aws-node command.
    If the proxy loses connectivity to the API server, then the proxy becomes a single point of failure and can result in unpredictable cluster behavior. To prevent this issue, run your proxy behind a service discovery namespace or load balancer.
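    Because the patch payloads in step 6 are easy to break when you edit them, you can validate the JSON locally before applying it. This is a sketch that assumes python3 is available on your workstation:

```shell
# Validate the aws-node patch payload locally before applying it
# (no cluster access needed; python3 -m json.tool parses the JSON).
PATCH='{ "spec": {"template": { "spec": { "containers": [ { "name": "aws-node", "envFrom": [ { "configMapRef": {"name": "proxy-environment-variables"} } ] } ] } } } }'
printf '%s' "$PATCH" | python3 -m json.tool > /dev/null && echo "patch JSON is valid"
```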

  7. Check that the proxy variables are used in the kube-proxy and aws-node pods:

    $ kubectl describe pod kube-proxy-xxxx -n kube-system

    Example output:

    Environment:
     HTTPS_PROXY: <set to the key 'HTTPS_PROXY' of config map 'proxy-environment-variables'> Optional: false
     HTTP_PROXY: <set to the key 'HTTP_PROXY' of config map 'proxy-environment-variables'> Optional: false

    If you don't use AWS PrivateLink, then verify access to API endpoints through a proxy server for Amazon EC2, Amazon ECR, and Amazon S3.

AWS OFFICIAL