To view the cost per bucket, you need to use the tagging feature; based on those tags, you can filter your buckets in Cost Explorer. The steps are:
- Go to Cost Explorer -> Choose Date Range in right pane
- Granularity -> Monthly
- Dimension -> Usage Type
- Service -> S3
- Tag -> Select Tag to filter the bucket -> choose tag value
- Apply
This shows you when each type of usage occurred and how much it cost.
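The same Cost Explorer filter can be expressed programmatically. Below is a minimal sketch that builds the request parameters for the Cost Explorer `get_cost_and_usage` API, mirroring the console steps above (monthly granularity, S3 service, usage-type grouping, tag filter). The tag key `BucketName` and the bucket value are assumptions; substitute whatever cost allocation tag you have activated.

```python
def build_s3_cost_request(tag_key, tag_value, start, end):
    """Build kwargs for ce.get_cost_and_usage mirroring the console steps:
    monthly granularity, S3 service only, grouped by usage type, filtered
    by a cost allocation tag."""
    return {
        "TimePeriod": {"Start": start, "End": end},
        "Granularity": "MONTHLY",
        "Metrics": ["UnblendedCost"],
        "GroupBy": [{"Type": "DIMENSION", "Key": "USAGE_TYPE"}],
        "Filter": {
            "And": [
                {"Dimensions": {"Key": "SERVICE",
                                "Values": ["Amazon Simple Storage Service"]}},
                # Tag key/value are placeholders -- use your own tag here.
                {"Tags": {"Key": tag_key, "Values": [tag_value]}},
            ]
        },
    }

# Usage (requires AWS credentials and the Cost Explorer API enabled):
# import boto3
# ce = boto3.client("ce")
# resp = ce.get_cost_and_usage(**build_s3_cost_request(
#     "BucketName", "my-bucket", "2024-01-01", "2024-02-01"))
```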
If you want to set up an alert, consider setting up AWS Budgets and configuring an alarm on it, which will notify you if you cross the defined usage threshold. Follow the Well-Architected Lab instructions here for setting up a budget alert based on your usage.
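As a sketch of what such a budget alert looks like in code, here are the parameters for a monthly cost budget with an email notification, built as plain dicts in the shape boto3's `budgets.create_budget` call expects. The budget name, limit, threshold, and email are placeholders.

```python
def build_budget_request(account_id, name, limit_usd, threshold_pct, email):
    """Build kwargs for budgets.create_budget: a monthly cost budget that
    emails `email` once actual spend exceeds `threshold_pct` percent of
    the limit. All concrete values are placeholders."""
    return {
        "AccountId": account_id,
        "Budget": {
            "BudgetName": name,
            "BudgetLimit": {"Amount": str(limit_usd), "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
        "NotificationsWithSubscribers": [{
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": threshold_pct,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [{"SubscriptionType": "EMAIL", "Address": email}],
        }],
    }

# Usage (requires AWS credentials):
# import boto3
# boto3.client("budgets").create_budget(
#     **build_budget_request("123456789012", "s3-monthly-budget",
#                            25, 80, "me@example.com"))
```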
If you want to calculate or estimate S3 usage costs, refer to the S3 pricing page and the AWS Pricing Calculator.
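For a quick back-of-the-envelope estimate, you can sum the main S3 cost components yourself. The rates below are illustrative placeholders only; always check the current S3 pricing page for your Region and storage class.

```python
# Illustrative rates only -- NOT authoritative; check the S3 pricing page.
STORAGE_PER_GB_MONTH = 0.023   # assumed S3 Standard storage rate (USD)
TRANSFER_OUT_PER_GB = 0.09     # assumed internet data-transfer-out rate (USD)
PUT_PER_1000 = 0.005           # assumed PUT/COPY/POST/LIST request rate (USD)
GET_PER_1000 = 0.0004          # assumed GET/SELECT request rate (USD)

def estimate_monthly_cost(storage_gb, transfer_out_gb, puts, gets):
    """Sum the main S3 cost components for one month (USD)."""
    return round(
        storage_gb * STORAGE_PER_GB_MONTH
        + transfer_out_gb * TRANSFER_OUT_PER_GB
        + (puts / 1000) * PUT_PER_1000
        + (gets / 1000) * GET_PER_1000,
        2,
    )

# e.g. 500 GB stored, 100 GB transferred out, 1M PUTs, 10M GETs
# estimate_monthly_cost(500, 100, 1_000_000, 10_000_000)
```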
Hope you find this helpful.
Comment here if you have additional questions; happy to help.
Abhishek
To identify the buckets that are responsible for high data transfer, check your S3 usage report. The report helps you review the operation, Region, and time when the data transfer occurred.
- Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
- In the title bar, choose your user name or account ID, and then choose **Billing Dashboard**.
- In the navigation pane, choose **Cost & usage reports**.
- Under **AWS Usage Report**, choose **Create a Usage Report**.
- For **Services**, choose **Amazon Simple Storage Service**.
- For **Download Usage Report**, choose the following settings:
- Usage Types – For a detailed explanation of Amazon S3 usage types, see Understanding your AWS billing and usage reports for Amazon S3.
- Operation – For a detailed explanation of Amazon S3 operations, see Tracking Operations in Your Usage Reports.
- Time Period – The time period that you want the report to cover.
- Report Granularity – Whether you want the report to include subtotals by the hour, by the day, or by the month.
- Choose the **Download** format and follow the prompts to open or save the report.
Review the S3 server access logs that are associated with the buckets that are responsible for high data transfer charges. This helps you to view detailed information about the requests. You can query the server access logs using Amazon Athena to get information on a specific date and time, operations, and requesters. For example, run the following query to see the amount of data that was transferred through a certain IP address during a specific time period:
```sql
SELECT SUM(bytessent) AS uploadtotal,
       SUM(objectsize) AS downloadtotal,
       SUM(bytessent + objectsize) AS total
FROM s3_access_logs_db.mybucket_logs
WHERE remoteIP = '1.2.3.4'
  AND parse_datetime(requestdatetime, 'dd/MMM/yyyy:HH:mm:ss Z')
      BETWEEN parse_datetime('2021-07-01', 'yyyy-MM-dd')
          AND parse_datetime('2021-08-01', 'yyyy-MM-dd');
```
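The same aggregation can be sketched in plain Python over already-parsed access-log records, which makes the query's logic explicit: `bytessent` counts bytes the server sent (downloads from the bucket's point of view of the requester) and `objectsize` the object size, filtered by requester IP and time window. Parsing the raw log lines into dicts is left out for brevity; the field names below are assumptions matching the query's columns.

```python
from datetime import datetime

def transfer_totals(records, remote_ip, start, end):
    """Sum transfer bytes for one requester IP within [start, end).
    Each record is a dict with remote_ip, request_time (S3 access-log
    timestamp format), bytes_sent, and object_size."""
    upload = download = 0
    for r in records:
        t = datetime.strptime(r["request_time"], "%d/%b/%Y:%H:%M:%S %z")
        if r["remote_ip"] == remote_ip and start <= t.date() < end:
            upload += r["bytes_sent"] or 0
            download += r["object_size"] or 0
    return {"uploadtotal": upload, "downloadtotal": download,
            "total": upload + download}

# Usage with a sample record:
# from datetime import date
# recs = [{"remote_ip": "1.2.3.4",
#          "request_time": "15/Jul/2021:10:00:00 +0000",
#          "bytes_sent": 512, "object_size": 2048}]
# transfer_totals(recs, "1.2.3.4", date(2021, 7, 1), date(2021, 8, 1))
```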
It might work for small workloads, but it won't work very well at scale. With this method, you have to tag each individual bucket with a tag key such as BucketName, and the tag value would be different for each bucket, which is not an efficient use of tagging. Imagine thousands of buckets across hundreds of accounts: the governance complexity of this method makes it hard to maintain. The better option is to use the Cost and Usage Report (see the response from Dave Connelly AWS about it) to track cost and usage for individual resources/buckets.