
How do I create multiple node groups for Amazon EKS nodes with eksctl?

4 minute read

I want to create multiple node groups for Amazon Elastic Kubernetes Service (Amazon EKS) nodes with eksctl.

Resolution

You can create a node group with eksctl and default parameters. Or, create one with custom parameters and a configuration file for multiple node groups.

Note: To install the latest version of eksctl, see Installation options for eksctl.

To confirm that eksctl is configured and installed on your local machine, open your preferred terminal or command prompt and run the following command:

eksctl version

Then, choose one of the following resolutions based on the type of parameters you want to use.

Create a node group with default parameters

  1. To create an additional node group with default parameters, run this command:

    eksctl create nodegroup --cluster=CLUSTER_NAME --name=NODEGROUP_NAME --region REGION_NAME

    Note: Replace CLUSTER_NAME with your cluster name, NODEGROUP_NAME with your node group name, and REGION_NAME with your AWS Region.

    The following are the default parameters:

    Instance type: m5.large
    AMI: latest Amazon EKS optimized AMI
    Nodes desired capacity: 2
    Nodes minimum capacity: 2
    Nodes maximum capacity: 2

    Note: By default, new node groups inherit the Kubernetes version from the control plane. To specify a different version, use the --version flag, for example, --version=1.27. To use the latest version of Kubernetes, run the command with --version=latest.
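As a sketch, the note above can be combined into a single command. The cluster and node group names here are placeholders, and the call is guarded so that it's a no-op on a machine without eksctl installed:

```shell
# Placeholder names; replace with your own.
CLUSTER_NAME=my-cluster
NODEGROUP_NAME=ng-k8s-127
K8S_VERSION=1.27   # or "latest" for the newest supported version

# Create a node group pinned to a specific Kubernetes version
# instead of inheriting the control-plane version:
command -v eksctl >/dev/null && eksctl create nodegroup \
  --cluster="${CLUSTER_NAME}" \
  --name="${NODEGROUP_NAME}" \
  --version="${K8S_VERSION}" || true
```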

  2. To confirm that the new node groups are attached to the cluster and verify that the nodes have joined the cluster, run the following commands:

    kubectl get nodes
    eksctl get nodegroups --cluster CLUSTER_NAME --region REGION_NAME

    Note: Replace CLUSTER_NAME with your cluster name and REGION_NAME with your AWS Region.

  3. In the output, confirm that the node group status is ACTIVE and the node status is READY.
    Example node group status:

    eksctl get nodegroups --cluster yourClusterName --region yourRegionName
    CLUSTER    NODEGROUP       STATUS  CREATED                MIN SIZE  MAX SIZE  DESIRED CAPACITY  INSTANCE TYPE  IMAGE ID     ASG NAME                      TYPE  
    clusterName example-workers ACTIVE  2023-10-28T14:30:00Z  2         2         2                 m5.large      AL2_x86_64   eks-example-workers-11223344  managed

    Example node status:

    kubectl get nodes
    NAME                                          STATUS  ROLES  AGE  VERSION
    ip-192-168-100-101.us-west-2.compute.internal Ready   <none> 4h   v1.27.1-eks-1
    ip-192-168-100-102.us-west-2.compute.internal Ready   <none> 4h   v1.27.1-eks-1
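If you want to check readiness in a script rather than by eye, you can count the Ready nodes in the kubectl output. This sketch parses the sample output shown above; in practice you would pipe `kubectl get nodes --no-headers` into the same awk filter:

```shell
# Sample output mirroring the example above (two Ready nodes).
SAMPLE='ip-192-168-100-101.us-west-2.compute.internal Ready <none> 4h v1.27.1-eks-1
ip-192-168-100-102.us-west-2.compute.internal Ready <none> 4h v1.27.1-eks-1'

# Count lines whose STATUS column (field 2) is "Ready".
READY_COUNT=$(printf '%s\n' "$SAMPLE" | awk '$2 == "Ready" { n++ } END { print n+0 }')
echo "$READY_COUNT"   # prints 2, matching the desired capacity of 2
```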

Create a node group with custom parameters

  1. Define the parameters for the new node group in a configuration file. For example:

    kind: ClusterConfig
    apiVersion: eksctl.io/v1alpha5
    metadata:
        name: CLUSTER_NAME
        region: REGION_NAME
    nodeGroups:
      - name: NODEGROUP_NAME
        availabilityZones: ["AVAILABILITY_ZONE"]
        desiredCapacity: 3
        instanceType: m5.large
        iam:
          instanceProfileARN: "arn:aws:iam::444455556666:instance-profile/eks-nodes-base-role" #Attaching IAM role
          instanceRoleARN: "arn:aws:iam::444455556666:role/eks-nodes-base-role"
        privateNetworking: true
        securityGroups:
          withShared: true
          withLocal: true
          attachIDs: ['SECURITY_GROUP_ID']
        ssh:
          publicKeyName: 'KEY-PAIR-NAME'
        kubeletExtraConfig:
            kubeReserved:
                cpu: "300m"
                memory: "300Mi"
                ephemeral-storage: "1Gi"
            kubeReservedCgroup: "/kube-reserved"
            systemReserved:
                cpu: "300m"
                memory: "300Mi"
                ephemeral-storage: "1Gi"
        tags:
          'environment': 'development'
      - name: ng-2-builders # Example of a node group that uses 50% Spot Instances and 50% On-Demand Instances
        minSize: 2
        maxSize: 5
        instancesDistribution:
          maxPrice: 0.017
          instanceTypes: ["t3.small", "t3.medium"] # At least two instance types should be specified
          onDemandBaseCapacity: 0
          onDemandPercentageAboveBaseCapacity: 50
          spotInstancePools: 2
        tags:
          'environment': 'production'

    Note: Replace CLUSTER_NAME with your cluster name, REGION_NAME with your AWS Region, NODEGROUP_NAME with your node group name, SECURITY_GROUP_ID with your security group ID, KEY-PAIR-NAME with your key pair name, and AVAILABILITY_ZONE with your Availability Zone.

    For more information on supported parameters and node group types, see Nodegroups.
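Before creating anything, you can preview what a configuration file expands to. Recent eksctl versions support a --dry-run flag that prints the fully expanded config without deploying resources (confirm that your installed version supports it). The following sketch writes a minimal config with placeholder names, then runs the dry run only if eksctl is installed:

```shell
# Minimal placeholder config; replace the names with your own.
cat > nodegroup-config.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster    # placeholder cluster name
  region: us-west-2   # placeholder AWS Region
nodeGroups:
  - name: ng-sketch
    instanceType: m5.large
    desiredCapacity: 3
EOF

# Print the expanded config without creating resources
# (no-op if eksctl is not installed):
command -v eksctl >/dev/null && \
  eksctl create nodegroup --config-file=nodegroup-config.yaml --dry-run || true
```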

  2. To create an additional node group with the configuration file, run the following command:

    eksctl create nodegroup --config-file=CONFIG_FILE

    Note: Replace CONFIG_FILE with your configuration file name.

  3. (Optional) The command in step 2 deploys an AWS CloudFormation stack to create resources for the EKS node group. To check the stack status, open the AWS CloudFormation console, and confirm that the selected AWS Region is the same as the cluster's.
    After the stack reaches the CREATE_COMPLETE state, the eksctl command exits successfully.
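If you prefer the AWS CLI to the console, you can query the stack status directly. This sketch assumes eksctl's usual stack-naming convention, eksctl-<cluster>-nodegroup-<nodegroup>, and uses placeholder names; the query is guarded so it's a no-op without the AWS CLI and credentials:

```shell
# Placeholder names; replace with your own.
CLUSTER_NAME=my-cluster
NODEGROUP_NAME=ng-2-builders

# Assumed naming convention for the stack eksctl deploys.
STACK_NAME="eksctl-${CLUSTER_NAME}-nodegroup-${NODEGROUP_NAME}"
echo "${STACK_NAME}"

# Query the stack status (requires AWS CLI credentials):
command -v aws >/dev/null && aws cloudformation describe-stacks \
  --stack-name "${STACK_NAME}" \
  --query 'Stacks[0].StackStatus' --output text || true
```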

  4. To confirm that the new node groups are attached to the cluster and to verify that the nodes joined the cluster, run the following commands:

    kubectl get nodes
    eksctl get nodegroups --cluster CLUSTER_NAME --region REGION_NAME

    Note: Replace CLUSTER_NAME with your cluster name and REGION_NAME with your AWS Region.

    In the output, confirm that the node group status is ACTIVE and the node status is READY.

    Example node group status:

    eksctl get nodegroups --cluster yourClusterName --region yourRegionName
    CLUSTER    NODEGROUP       STATUS  CREATED                MIN SIZE  MAX SIZE  DESIRED CAPACITY  INSTANCE TYPE  IMAGE ID     ASG NAME                      TYPE  
    clusterName example-workers ACTIVE  2023-10-28T14:30:00Z  2         2         3                 m5.large      AL2_x86_64   eks-example-workers-11223344  managed

    Example node status:

    kubectl get nodes
    NAME                                          STATUS  ROLES  AGE  VERSION
    ip-192-168-100-101.us-west-2.compute.internal Ready   <none> 4h   v1.27.1-eks-1
    ip-192-168-100-102.us-west-2.compute.internal Ready   <none> 4h   v1.27.1-eks-1
    ip-192-168-100-103.us-west-2.compute.internal Ready   <none> 4h   v1.27.1-eks-1
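To list only the nodes that belong to one node group, you can filter on the label that eksctl applies to unmanaged node group nodes (alpha.eksctl.io/nodegroup-name; managed node groups use the eks.amazonaws.com/nodegroup label instead). The node group name here is a placeholder, and the call is a no-op without kubectl:

```shell
# Placeholder node group name; replace with your own.
NODEGROUP_NAME=ng-2-builders

# List only the nodes from this node group (unmanaged node groups):
command -v kubectl >/dev/null && kubectl get nodes \
  -l "alpha.eksctl.io/nodegroup-name=${NODEGROUP_NAME}" || true
```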
AWS OFFICIAL · Updated 4 months ago