1 Answer
To address the scalability and fault tolerance of a large, dynamically growing file on FSx for OpenZFS:
- FSx for OpenZFS provides fully managed, elastic SSD storage that scales up to 512 TiB per file system. Storage capacity can be increased on demand, and throughput capacity is provisioned separately, so it can be scaled independently of storage.
- For fault tolerance, FSx replicates your data across different physical storage infrastructure within an Availability Zone. It also continuously monitors for hardware failures and automatically replaces failed components.
- You can take snapshots of the file system for point-in-time recovery, and you can copy automatic or user-initiated backups across AWS accounts and Regions for disaster recovery.
- For higher availability, you can choose a Multi-AZ deployment, which keeps a standby file server in a second Availability Zone and fails over automatically if the primary AZ becomes unavailable (see the provisioning sketch after this list).
- The OpenZFS file system itself provides data compression, block-level checksumming, and self-healing, which help ensure the integrity and durability of very large files.
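As a concrete illustration of the Multi-AZ and snapshot points above, here is a minimal sketch using the AWS SDK for Python (boto3). The subnet IDs, capacity values, and snapshot name are placeholder assumptions, not values from this thread; pick values that fit your VPC and workload.

```python
import time

import boto3

fsx = boto3.client("fsx")

# Provision a Multi-AZ FSx for OpenZFS file system. Subnet IDs and capacity
# values below are placeholders.
fs = fsx.create_file_system(
    FileSystemType="OPENZFS",
    StorageCapacity=1024,  # GiB of SSD storage; can be grown later on demand
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],  # two AZs for MULTI_AZ_1
    OpenZFSConfiguration={
        "DeploymentType": "MULTI_AZ_1",     # standby file server in a second AZ
        "PreferredSubnetId": "subnet-aaaa1111",
        "ThroughputCapacity": 320,          # MB/s, provisioned separately from storage
        "AutomaticBackupRetentionDays": 7,  # daily backups, copyable across Regions
    },
)
fs_id = fs["FileSystem"]["FileSystemId"]

# Wait until the file system is AVAILABLE before snapshotting.
while fsx.describe_file_systems(FileSystemIds=[fs_id])["FileSystems"][0]["Lifecycle"] != "AVAILABLE":
    time.sleep(30)

# A user-initiated snapshot of the root volume gives a point-in-time recovery point.
root_volume_id = fs["FileSystem"]["OpenZFSConfiguration"]["RootVolumeId"]
snapshot = fsx.create_snapshot(Name="baseline", VolumeId=root_volume_id)
print(fs_id, snapshot["Snapshot"]["SnapshotId"])
```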
@Giovanni The OpenZFS file system will be NFS-mounted on each VM/container in an elastic pool of VMs/containers. But will the NFS server providing this export on the FSx side be able to scale itself elastically and handle read/write requests from millions of users concurrently hitting these VMs/containers?
In other words, while I'm sure OpenZFS will scale storage-capacity-wise, I'm not sure whether it will also auto-scale with compute and network load.
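On the throughput question: FSx for OpenZFS does not resize throughput automatically under load, but the provisioned throughput capacity of an existing file system can be changed in place through the UpdateFileSystem API call, whether triggered manually or by your own alarm-driven automation. A minimal boto3 sketch, with a placeholder file system ID:

```python
import boto3

fsx = boto3.client("fsx")

# Raise the provisioned throughput of an existing file system as the client
# pool grows. The file system ID is a placeholder; valid ThroughputCapacity
# values depend on the deployment type.
fsx.update_file_system(
    FileSystemId="fs-0123456789abcdef0",
    OpenZFSConfiguration={"ThroughputCapacity": 640},  # MB/s
)
```

Scaling out the client side is independent of this: each additional VM/container simply NFS-mounts the same export, while all clients share the file system's provisioned throughput.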