
How do I use Amazon EBS Multi-Attach to attach the same volume to multiple workloads in Amazon EKS?


I want to use Amazon Elastic Block Store (Amazon EBS) Multi-Attach for multiple workloads across multiple clusters in Amazon Elastic Kubernetes Service (Amazon EKS).

Short description

Amazon EBS Multi-Attach allows you to attach a single Provisioned IOPS SSD (io1 or io2) volume to multiple instances in the same Availability Zone. You can use Multi-Attach to share persistent storage across multiple workloads in different Amazon EKS clusters.

Resolution

Important: Standard file systems such as XFS and EXT4 aren't designed to be accessed simultaneously by multiple servers. Use a clustered file system to ensure data resiliency and reliability for your production workloads.

Before you begin, make sure the Amazon EBS CSI driver is installed in the required Amazon EKS clusters.

For more information about installing the Amazon EBS CSI driver, see Use Kubernetes volume storage with Amazon EBS.

Note: Multi-Attach enabled volumes can be attached to up to 16 Linux instances built on the Nitro System that are in the same Availability Zone.

To use Amazon EBS Multi-Attach to attach the same volume to multiple workloads across multiple clusters, complete the following steps:

Provision an Amazon EBS volume

Run the following create-volume AWS CLI command:

aws ec2 create-volume --volume-type io2 --multi-attach-enabled --size 10 --iops 2000 --region example-region --availability-zone example-az --tag-specifications 'ResourceType=volume,Tags=[{Key=purpose,Value=prod},{Key=Name,Value=multi-attach-eks}]'

Note: Replace example-region with your required AWS Region. Replace example-az with your required Availability Zone.

Important: You can turn on Amazon EBS Multi-Attach for an io2 volume after it's created, but only while the volume isn't attached to any instance. For io1 volumes, you must turn on Multi-Attach when you create the volume.
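If you already created an io2 volume without Multi-Attach, the following modify-volume sketch shows how to turn it on afterward. The volume ID is a placeholder, and the command succeeds only while the volume isn't attached to any instance:

```shell
# Turn on Multi-Attach for an existing, unattached io2 volume.
# Replace vol-1234567890abcdef0 with your volume ID and
# example-region with your AWS Region.
aws ec2 modify-volume \
  --volume-id vol-1234567890abcdef0 \
  --multi-attach-enabled \
  --region example-region
```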

Retrieve the volume ID

Run the following describe-volumes AWS CLI command:

aws ec2 describe-volumes --filters "Name=tag:Name,Values=multi-attach-eks*" --query "Volumes[*].{ID:VolumeId}" --region example-region

Note: Replace example-region with the required AWS Region.
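Optionally, you can capture the volume ID in a shell variable so that you can paste it into the volumeHandle fields of the PersistentVolume manifests later. This sketch assumes the same tag filter and Region placeholder as the preceding command:

```shell
# Store the first matching volume ID in a shell variable.
# --output text strips the JSON formatting so the variable
# contains only the raw ID (for example, vol-0abc123...).
VOLUME_ID=$(aws ec2 describe-volumes \
  --filters "Name=tag:Name,Values=multi-attach-eks*" \
  --query "Volumes[0].VolumeId" \
  --output text \
  --region example-region)
echo "$VOLUME_ID"
```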

Create a storage class

Create a storage class manifest with the following configuration:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: io2
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: io2
  iops: "2000"

Apply the storage class:

kubectl apply -f storageclass.yaml
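To confirm that the storage class was created with the expected settings, you can list it. The output should show ebs.csi.aws.com as the provisioner and WaitForFirstConsumer as the volume binding mode:

```shell
# Verify the io2 storage class that was just applied.
kubectl get storageclass io2
```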

Provision a persistent workload in cluster A

Create the following manifest named workloadA.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pv-claim-name-a
spec:
  storageClassName: io2
  volumeName: example-pv-name-a
  accessModes:
    - ReadWriteMany
  volumeMode: Block
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod-a
spec:
  containers:
    - name: example-pod-container-name
      image: centos:6.6
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) on pod A >> /data/out.txt; sleep 15; done"]
      volumeDevices:
        - name: example-volume-device-name
          devicePath: "/dev/xvda"
  volumes:
    - name: example-volume-device-name
      persistentVolumeClaim:
        claimName: example-pv-claim-name-a
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv-name-a
spec:
  storageClassName: io2
  volumeMode: Block
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  csi:
    driver: ebs.csi.aws.com
    fsType: ext4
    volumeHandle: example-preceding-volume-id
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.ebs.csi.aws.com/zone
              operator: In
              values:
                - example-az

Note: Replace all example strings in the manifest with your required values.
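Switch the kubectl context to cluster A, and then deploy the workload. The context name is a placeholder; use the same pattern as for cluster B:

```shell
# Replace example-clusterA-context with your cluster A context.
kubectl config use-context example-clusterA-context
kubectl apply -f workloadA.yaml
```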

Use the same volume ID to create another workload in cluster B

Create the following manifest named workloadB.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pv-claim-name-b
spec:
  storageClassName: io2
  volumeName: example-pv-name-b
  accessModes:
    - ReadWriteMany
  volumeMode: Block
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod-b
spec:
  containers:
    - name: example-pod-container-name
      image: centos:6.6
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) on pod B >> /data/out.txt; sleep 15; done"]
      volumeDevices:
        - name: example-volume-device-name
          devicePath: "/dev/xvda"
  volumes:
    - name: example-volume-device-name
      persistentVolumeClaim:
        claimName: example-pv-claim-name-b
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv-name-b
spec:
  storageClassName: io2
  volumeMode: Block
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  csi:
    driver: ebs.csi.aws.com
    fsType: ext4
    volumeHandle: example-preceding-volume-id
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.ebs.csi.aws.com/zone
              operator: In
              values:
                - example-az

Note: Replace all example strings with your required values.

Switch the kubectl context to cluster B, and then deploy the workload:

kubectl config use-context example-clusterB-context
kubectl apply -f workloadB.yaml

Note: Replace example-clusterB-context with your cluster B context.

Verify that the pods are running and writing data

Authenticate to each cluster, switch to its kubectl context, and then run the following command:

kubectl get pods

Example output for cluster A:

NAME                         READY   STATUS    RESTARTS   AGE
example-pod-a                1/1     Running   0          18m

Example output for cluster B:

NAME                         READY   STATUS    RESTARTS   AGE
example-pod-b                1/1     Running   0          3m13s

Note: Because the workloads use volumeMode: Block, the shared volume is exposed inside each pod as the raw block device /dev/xvda. The /data/out.txt file in this example is on each container's local file system and shows only that the pod is running and writing data.

For example-pod-a, run the following command to view the content that the pod writes:

kubectl exec -it example-pod-a -- cat /data/out.txt

Example output:

Sun Sep 22 12:39:04 UTC 2024 on pod A
Sun Sep 22 12:39:19 UTC 2024 on pod A
Sun Sep 22 12:39:34 UTC 2024 on pod A

For example-pod-b, run the following command to view the content that the pod writes:

kubectl exec -it example-pod-b -- cat /data/out.txt

Example output:

Sun Sep 22 12:39:04 UTC 2024 on pod B
Sun Sep 22 12:39:19 UTC 2024 on pod B
Sun Sep 22 12:39:34 UTC 2024 on pod B
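To confirm that both pods see the same underlying device, you can write a marker to the raw device from one pod and read it back from the other. This is only a sketch: it assumes the container image provides dd, and writing directly to the device destroys any data on it, so use it only on a test volume:

```shell
# In cluster A: write a small marker to the start of the shared device.
kubectl exec -it example-pod-a -- \
  sh -c 'echo shared-marker | dd of=/dev/xvda bs=512 count=1'

# In cluster B (after switching contexts): read the first sector back.
# The output should contain the shared-marker text written by pod A.
kubectl exec -it example-pod-b -- dd if=/dev/xvda bs=512 count=1
```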

Related information

Attach an EBS volume to multiple EC2 instances using Multi-Attach

Enable Multi-Attach for an Amazon EBS volume

Use Kubernetes volume storage with Amazon EBS

What is Amazon Elastic File System?