Yes, the initial spike of API cost is expected.
AWS Backup supports the following S3 metadata: tags, access control lists (ACLs), user-defined metadata, original creation date, and version ID. You can restore all backed-up data and metadata except the original creation date, version ID, storage class, and ETags.
For buckets with more than 300 million objects:
- Continuous backups are recommended.
- If the backup lifecycle is planned for more than 35 days, you can also enable snapshot backups for the bucket in the same vault in which your continuous backups are stored.
Please refer to Best practices and cost considerations for S3 backups: https://docs.aws.amazon.com/aws-backup/latest/devguide/s3-backups.html#bestpractices-costoptimization
To avoid recurring API costs for larger S3 buckets, S3 continuous backups are recommended.
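To make the continuous-plus-snapshot setup above concrete, here is a minimal sketch of an AWS Backup plan that pairs a continuous backup rule (retention capped at 35 days) with a longer-retention snapshot rule targeting the same vault. The plan, rule, and vault names are placeholders, and the boto3 call is shown only in comments since it requires AWS credentials:

```python
# Sketch: AWS Backup plan combining a continuous backup rule with a
# periodic snapshot rule stored in the same backup vault.
# "s3-backup-plan" and "s3-backup-vault" are placeholder names.

continuous_rule = {
    "RuleName": "s3-continuous",
    "TargetBackupVaultName": "s3-backup-vault",  # placeholder vault
    "EnableContinuousBackup": True,
    # Continuous (point-in-time restore) backups support at most
    # 35 days of retention.
    "Lifecycle": {"DeleteAfterDays": 35},
}

snapshot_rule = {
    "RuleName": "s3-snapshot",
    "TargetBackupVaultName": "s3-backup-vault",  # same vault as above
    "ScheduleExpression": "cron(0 5 ? * * *)",   # daily at 05:00 UTC
    "Lifecycle": {"DeleteAfterDays": 365},       # longer-term retention
}

backup_plan = {
    "BackupPlanName": "s3-backup-plan",
    "Rules": [continuous_rule, snapshot_rule],
}

# With boto3 this would be submitted as:
#   import boto3
#   backup = boto3.client("backup")
#   backup.create_backup_plan(BackupPlan=backup_plan)
```

Keeping both rules in one plan targeting one vault matches the guidance above: continuous backups cover the recent 35-day window, while snapshots cover retention beyond it.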
Also, note that using AWS KMS, CloudTrail, Amazon CloudWatch, and Amazon GuardDuty as part of your backup strategy can result in additional costs beyond S3 bucket data storage. To reduce these costs, disable CloudTrail data events for the bucket and exclude AWS KMS events from your trails.
Both tag-based resource selection and static resource assignment for S3 backups work the same way when scanning the S3 objects and taking backups via API calls.
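To illustrate the two selection modes, here is a sketch of the two backup-selection shapes, one with explicit bucket ARNs (static assignment) and one with a tag condition. The account ID, role, bucket ARN, and tag key/value are placeholders; either dict would be attached to a plan with `create_backup_selection`:

```python
# Sketch: static resource assignment vs. tag-based selection for the
# same backup plan. ARNs, role, and tag values are placeholders.

static_selection = {
    "SelectionName": "s3-static",
    "IamRoleArn": "arn:aws:iam::123456789012:role/aws-backup-role",
    # Explicit bucket ARNs: AWS Backup protects exactly these buckets.
    "Resources": ["arn:aws:s3:::example-bucket"],
}

tag_based_selection = {
    "SelectionName": "s3-tag-based",
    "IamRoleArn": "arn:aws:iam::123456789012:role/aws-backup-role",
    # Tag condition: any supported resource tagged backup=true is selected.
    "ListOfTags": [
        {
            "ConditionType": "STRINGEQUALS",
            "ConditionKey": "backup",
            "ConditionValue": "true",
        }
    ],
}

# With boto3, either selection would be attached to a plan via:
#   import boto3
#   backup = boto3.client("backup")
#   backup.create_backup_selection(
#       BackupPlanId="<plan-id>", BackupSelection=static_selection)
```

As noted above, the selection mode only changes how buckets are matched to the plan; once a bucket is selected, the per-object enumeration and metadata API calls during backup are the same.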

Thank you @ARK for sharing the useful information; it is really helpful. However, I want to know about smaller buckets: if we switch from tag-based resource selection to static resource assignment for S3 backups, will AWS Backup still incur the same level of ReadObjectTagging and GetObjectAcl API calls during object enumeration? Or are there any optimization techniques, beyond using continuous backups and lifecycle rules, that can reduce these per-object metadata scan costs for very large buckets?