1. Verify EFS Configuration
EFS File System: Ensure the EFS file system is in the "available" state and is accessible from other environments, as you have verified with WSL.
Mount Targets: Verify that the EFS file system has mount targets in the same VPC and subnets as your ECS tasks.
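If it helps, these checks can be done from the CLI (a minimal sketch; fs-xxxxxxxx is a placeholder for your file system ID):
# Confirm the file system is in the "available" lifecycle state
aws efs describe-file-systems --file-system-id fs-xxxxxxxx
# List the mount targets and the subnets/AZs they live in
aws efs describe-mount-targets --file-system-id fs-xxxxxxxx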
2. ECS Task Definition Configuration
Mount Point Configuration:
Ensure you have added the EFS volume configuration in your ECS task definition.
Check the volumes section for the correct EFS configuration:
"volumes": [
{
"name": "my-efs-volume",
"efsVolumeConfiguration": {
"fileSystemId": "your-efs-file-system-id",
"rootDirectory": "/"
}
}
]
Ensure the mountPoints configuration in your container definition is correct:
"containerDefinitions": [
{
"name": "my-container",
"mountPoints": [
{
"sourceVolume": "my-efs-volume",
"containerPath": "/path/in/container"
}
]
}
]
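If the task definition was edited by hand, it can be worth confirming what ECS actually registered (a sketch; the family name my-task is a placeholder):
# Print the volumes and mountPoints of the latest registered revision
aws ecs describe-task-definition --task-definition my-task \
  --query 'taskDefinition.{volumes:volumes,mountPoints:containerDefinitions[].mountPoints}'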
3. Security Group and IAM Role
Security Group:
The security group associated with your EFS mount targets should allow inbound traffic on port 2049 (NFS).
The security group associated with your ECS tasks should allow outbound traffic to the EFS security group on port 2049.
IAM Role:
Ensure the IAM task execution role has the necessary permissions to access the EFS file system. Attach the policy AmazonElasticFileSystemClientReadWriteAccess to the task execution role.
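If you manage the role from the CLI, attaching that managed policy might look like this (a sketch; ecsTaskExecutionRole is a placeholder for the role your task definition references):
# Attach the EFS read/write client managed policy to the role used by the task
aws iam attach-role-policy \
  --role-name ecsTaskExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonElasticFileSystemClientReadWriteAccess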
4. Subnet and VPC Configuration
Subnet:
Ensure the ECS tasks are running in subnets that have access to the EFS mount targets.
Subnets should be part of the same VPC as the EFS mount targets.
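One way to confirm where a task actually landed is to inspect its network attachment (a sketch; the cluster name and task ID are placeholders):
# Show the ENI attachment details (including subnetId) for a running task
aws ecs describe-tasks --cluster my-cluster --tasks <task-id> \
  --query 'tasks[].attachments[].details'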
5. Task Logs and Errors
ECS Task Logs:
Check the logs for your ECS task for any error messages related to mounting the EFS volume. You can view logs in the CloudWatch Logs for your ECS tasks.
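If the task definition uses the awslogs driver, the logs can be tailed from the CLI (a sketch; the log group name /ecs/my-task is a placeholder):
# Follow the container logs and watch for EFS/NFS mount errors at task startup
aws logs tail /ecs/my-task --follow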
Hello,
To troubleshoot issues with mounting EFS to ECS tasks running on Fargate, follow these steps:
1. Check Security Groups: Ensure the security groups associated with your ECS task and EFS are correctly configured to allow NFS traffic. The security group should allow inbound traffic on port 2049 (NFS).
2. Correct EFS Mount Target: Verify that your EFS file system has mount targets in the same VPC and subnet as your ECS tasks.
3. Task Execution Role: Ensure that the ECS task execution role has the necessary permissions to access EFS. Add the elasticfilesystem:* permissions to your task execution IAM role.
4. EFS Mount Point Configuration: Double-check your ECS task definition's EFS volume configuration. Ensure you have specified the correct fileSystemId and rootDirectory.
5. Fargate Platform Version: Ensure you are using a Fargate platform version that supports EFS (1.4.0 or later); see the sketch after this list.
6. Logging: Enable and check the container logs for any error messages related to EFS mounting.
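A sketch of launching a task with the platform version pinned (cluster, task definition, subnet and security group IDs are all placeholders):
# Run the task on Fargate platform version 1.4.0, which added EFS support
aws ecs run-task \
  --cluster my-cluster \
  --task-definition my-task \
  --launch-type FARGATE \
  --platform-version 1.4.0 \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-xxxxxxxx],securityGroups=[sg-xxxxxxxx],assignPublicIp=ENABLED}"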
These are some ways you can resolve the issue.
Verify EFS and ECS Configuration
EFS File System and Mount Target:
- Ensure that your EFS file system has the correct mount targets in the same VPC and subnets as your ECS tasks.
- Verify that the security groups associated with your EFS mount targets allow inbound traffic on the NFS port (2049) from the ECS task security group.
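For reference, an ingress rule like that can be added from the CLI (a sketch with placeholder IDs: sg-efs for the mount-target security group, sg-ecs-tasks for the ECS task security group):
# Allow NFS (TCP 2049) into the EFS mount-target security group from the ECS task security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-efs \
  --protocol tcp \
  --port 2049 \
  --source-group sg-ecs-tasks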
ECS Task Definition:
- Verify that your ECS task definition includes the correct EFS volume configuration.
- Ensure that the mount point configuration matches the EFS volume configuration.
Verify Network Configuration
- Security Groups: Ensure that the security group attached to your ECS tasks allows outbound traffic to the EFS mount target security group on port 2049.
- Subnets: Ensure that your ECS tasks are running in subnets that have network connectivity to the EFS mount targets.
Verify Mounting Inside the Container
Shell into the Running Container:
- Use AWS ECS Exec to get a shell into the running container and verify the mount point (an example invocation is sketched below).
- Run the following command to list the contents of the mount directory:
ls -la /mnt/efs
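A sketch of the ECS Exec invocation (cluster, task ID and container name are placeholders; ECS Exec must be enabled on the task):
# Open an interactive shell inside the running container
aws ecs execute-command \
  --cluster my-cluster \
  --task <task-id> \
  --container my-container \
  --interactive \
  --command "/bin/sh"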
Thank you for that, Thanniru.
I can confirm ECS and EFS have a security group attached to allow traffic.
I have attached the policy you noted (I missed that one), but the connection via ECS is still having issues.
I have simulated the connection and that works.
Task stopped at: 2024-06-10T13:18:43.400Z
ResourceInitializationError: unable to pull secrets or registry auth: execution resource retrieval failed: unable to retrieve ecr registry auth: service call has been retried 3 time(s): RequestError: send request failed caused by: Post "https://api.ecr.eu-west-2.amazonaws.com/": dial tcp 3.8.169.45:443: i/o timeout. Please check your task network configuration.
My Volume:
"volumes": [
{
"name": "propfile_mnt",
"efsVolumeConfiguration": {
"fileSystemId": "REDACTED",
"rootDirectory": "/"
}
}
],
My mount points:
"mountPoints": [
{
"sourceVolume": "propfile_mnt",
"containerPath": "/efs/propfiles",
"readOnly": false
}
],
OK, so after getting ls -la [mount path]
running, I can confirm the mount is mounting in the container.
What has been seen is that the EFS mount I did locally had disconnected when I created the required files.
I don't seem to be able to remount with the rw option.
How can I get this last bit sorted?
This is resolved.
It turns out WSL does not allow EFS rw mounting.
I mounted the EFS volume to an Ubuntu Server and was able to get the files over that I needed.
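For anyone who lands here with the same problem, a standard read-write NFS mount of an EFS file system from a Linux host looks roughly like this (a sketch; the file system ID and region in the DNS name are placeholders, and the nfs-common package must be installed):
# Mount the EFS file system read-write over NFSv4.1 with the AWS-recommended options
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 \
  -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,rw \
  fs-xxxxxxxx.efs.eu-west-2.amazonaws.com:/ /mnt/efs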