1 Answer
Hello Stephen,
You're right: pausing applications for backups isn't ideal for 24/7 operations. Here's how AWS Backup handles potential inconsistencies with EFS:
AWS Backup for Amazon EFS may encounter inconsistencies if the file system is modified during a backup. These inconsistencies (like duplicated or missing data) are specific to that snapshot and won't be automatically resolved.
To mitigate this:
- Schedule backups during low-activity periods if possible (see the scheduling sketch after this list).
- Regularly restore and validate backups to ensure data integrity (a restore/validation sketch follows the link below).
- Implement application-level strategies to minimize modifications during backups.
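As a minimal sketch of the first point, the backup plan below pins the backup window to a known quiet period. It assumes boto3 credentials are already configured; the plan name, vault name, role ARN, file-system ARN, and the 03:00 UTC cron expression are all placeholders to adapt to your environment:

```python
# Minimal sketch (boto3): an AWS Backup plan whose daily window falls in an
# assumed low-activity period. All names and ARNs below are placeholders.
import boto3

backup = boto3.client("backup")

# Daily rule starting at 03:00 UTC, assumed to be the quietest period.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "efs-low-activity-plan",          # hypothetical name
        "Rules": [
            {
                "RuleName": "daily-efs",
                "TargetBackupVaultName": "Default",          # assumes the default vault exists
                "ScheduleExpression": "cron(0 3 * * ? *)",   # 03:00 UTC daily -- adjust to your quiet window
                "StartWindowMinutes": 60,                    # backup must start within an hour of the schedule
                "CompletionWindowMinutes": 360,
                "Lifecycle": {"DeleteAfterDays": 35},
            }
        ],
    }
)

# Attach the EFS file system to the plan (file-system ARN and role ARN are placeholders).
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "efs-selection",
        "IamRoleArn": "arn:aws:iam::123456789012:role/AWSBackupDefaultServiceRole",
        "Resources": [
            "arn:aws:elasticfilesystem:us-east-1:123456789012:file-system/fs-0123456789abcdef0"
        ],
    },
)
```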
For a step-by-step walkthrough, see: https://aws.amazon.com/getting-started/hands-on/amazon-efs-backup-and-restore-using-aws-backup/
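For the second point, a lightweight validation pass could restore the newest recovery point to a fresh file system and then spot-check it (for example, by mounting it and comparing checksums of a sample of files against production). This is only a sketch: the ARNs are placeholders, and the Metadata keys for EFS restores should be confirmed against the current AWS Backup documentation:

```python
# Minimal sketch (boto3): restore the most recent recovery point to a brand-new
# EFS file system so it can be mounted and spot-checked without touching the
# source. ARNs are placeholders; Metadata keys may need adjusting (e.g. if the
# source file system is encrypted).
import time
import boto3

backup = boto3.client("backup")

EFS_ARN = "arn:aws:elasticfilesystem:us-east-1:123456789012:file-system/fs-0123456789abcdef0"
ROLE_ARN = "arn:aws:iam::123456789012:role/AWSBackupDefaultServiceRole"

# Pick the newest recovery point for the file system.
points = backup.list_recovery_points_by_resource(ResourceArn=EFS_ARN)["RecoveryPoints"]
latest = max(points, key=lambda p: p["CreationDate"])

# Restore into a new file system ("newFileSystem": "true") rather than in place.
job = backup.start_restore_job(
    RecoveryPointArn=latest["RecoveryPointArn"],
    IamRoleArn=ROLE_ARN,
    Metadata={
        "file-system-id": EFS_ARN.rsplit("/", 1)[-1],
        "newFileSystem": "true",
        "Encrypted": "false",
        "CreationToken": f"validate-{int(time.time())}",
        "PerformanceMode": "generalPurpose",
    },
)

# Poll until the restore finishes, then mount the new file system and compare
# checksums of a sample of files against the production copy.
while True:
    status = backup.describe_restore_job(RestoreJobId=job["RestoreJobId"])["Status"]
    if status in ("COMPLETED", "ABORTED", "FAILED"):
        print("restore job finished with status:", status)
        break
    time.sleep(60)
```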
Thanks, I'm aware of these, but I'd like a slightly more nuanced answer if possible.
Let me be clear: the 24/7 activity I'm referring to consists of long-running batch-processing jobs run by analysis engines, often spanning multiple days, with roughly equally weighted read and write operations. We don't want to introduce substantial dead periods or miss regular backups.
You mention that the inconsistencies are "specific to that snapshot". Does that mean the next snapshot will contain a correct backup of the affected changes (assuming no further modifications), or will the data still be in the failed state when restoring from future backups?
Note that "regularly restoring and validating backups" only demonstrates the issue; it doesn't actually resolve anything. I'm looking for solutions that resolve it after the issue has occurred.
The other options would essentially entail duplicating and verifying substantial volumes of data. We can do this, but I'm wondering if there's a lighter-touch approach.