I'm assuming the reason for so many file systems is isolation of files. Have you taken a look at Access Points for EFS? They might let you use fewer file systems while still isolating use cases: https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html
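As a rough illustration, here is a minimal boto3 sketch that creates one access point per tenant on a shared file system. The file system ID, tenant name, and POSIX IDs are placeholders, not anything from your setup:

```python
import boto3

efs = boto3.client("efs")

# Hypothetical values -- substitute your own file system ID and tenant layout.
FILE_SYSTEM_ID = "fs-0123456789abcdef0"
TENANT = "tenant-a"
POSIX_ID = 1001  # one UID/GID per tenant keeps files isolated at the POSIX level

# The access point pins clients to the tenant's own root directory, so a
# client mounting through it sees only /tenants/tenant-a, not the whole volume.
response = efs.create_access_point(
    FileSystemId=FILE_SYSTEM_ID,
    PosixUser={"Uid": POSIX_ID, "Gid": POSIX_ID},
    RootDirectory={
        "Path": f"/tenants/{TENANT}",
        # EFS creates the directory on first use with this owner and mode.
        "CreationInfo": {
            "OwnerUid": POSIX_ID,
            "OwnerGid": POSIX_ID,
            "Permissions": "750",
        },
    },
    Tags=[{"Key": "tenant", "Value": TENANT}],
)
print(response["AccessPointArn"])
```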
To my understanding you are correct: if you use three Availability Zones per file system, you'll be limited to 133 file systems before hitting the mount target quota (400 per VPC), which cannot be increased. You might want to consider whether Amazon FSx for NetApp ONTAP better meets your requirements.
Thanks for the suggestion, but Amazon FSx for NetApp ONTAP doesn't seem to fit the use case based on price.
Noting: FSx seems to have a default quota of 100 file systems per AWS account. You can request quota increases, though it is not clear up to what limit, so this may be more restrictive than the 133 EFS file systems per VPC.
Yes, you're correct. My plan is to isolate compute and storage for each customer via Lambda and EFS.
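In case it helps others reading this, here is a sketch of attaching one of those access points to a Lambda function. The function name, ARN, and mount path are placeholders, and the function must already be configured for a VPC whose subnets can reach a mount target:

```python
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical names -- replace with your function and the tenant's access point ARN.
FUNCTION_NAME = "tenant-a-worker"
ACCESS_POINT_ARN = (
    "arn:aws:elasticfilesystem:us-east-1:123456789012"
    ":access-point/fsap-0123456789abcdef0"
)

# Lambda mounts EFS through the access point, so the function only ever
# sees that tenant's root directory. LocalMountPath must live under /mnt.
lambda_client.update_function_configuration(
    FunctionName=FUNCTION_NAME,
    FileSystemConfigs=[
        {"Arn": ACCESS_POINT_ARN, "LocalMountPath": "/mnt/tenant-data"}
    ],
)
```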
Using Access Points does seem like a possible solution. I was using an IAM policy to restrict access to the file system, and it looks like I can do something similar with access points. This does have a few drawbacks though:
- Access points are limited to 120 per file system, so this only helps if you add logic to track which access point on which EFS file system serves each tenant (see the sketch after this list).
- The alternative way to scale would be to create more VPCs so that you can have more EFS file systems with mount targets, but connecting them would require a complex VPC peering setup.
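For what it's worth, the bookkeeping for the first drawback can stay fairly simple if tenants are assigned sequentially: a stable tenant index maps directly to a (file system, access point) slot. A sketch under that assumption, using the quota numbers cited in this thread and a hypothetical list of file system IDs:

```python
# Quotas cited in this thread: 120 access points per EFS file system,
# 133 file systems per VPC (3 AZs x 133 = 399 mount targets < 400).
ACCESS_POINTS_PER_FS = 120
MAX_FILE_SYSTEMS = 133

# Hypothetical ordered list of file system IDs, grown on demand.
file_systems = ["fs-0123456789abcdef0", "fs-0fedcba9876543210"]

def slot_for_tenant(tenant_index: int) -> tuple[int, int]:
    """Map a stable, sequential tenant index to (file system slot, access point slot)."""
    fs_index, ap_index = divmod(tenant_index, ACCESS_POINTS_PER_FS)
    if fs_index >= MAX_FILE_SYSTEMS:
        raise RuntimeError("VPC is full; shard tenants into another VPC or region")
    return fs_index, ap_index

# Example: tenant #250 lands on the third file system, access point slot 10.
print(slot_for_tenant(250))  # (2, 10)
```

Under those quotas a single VPC tops out at 120 x 133 = 15,960 tenants, which is when the second drawback (more VPCs plus peering) would kick in.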