"Global-DataScanned-Bytes" is a metric used by AWS CloudWatch Logs Insights to track the amount of log data scanned by your queries across all log groups in all AWS regions. It's not well-documented by AWS, hence the limited information available.
To figure out where this usage is coming from and to confirm your theory that the dashboard is causing the high usage, you can take the following steps:
- Review Queries and Dashboards: Look at any queries or dashboards you've created using CloudWatch Logs Insights. Specifically, check if any queries are scanning large amounts of log data, especially if they are run frequently or on a schedule.
- Check Query Efficiency: Ensure that your queries are optimized to minimize the amount of log data scanned. Use filters and conditions to narrow down the scope of your queries, and limit the time range to only what is necessary (see the first example after this list).
- Review Dashboard Widgets: Look at each widget on your dashboard and consider whether it might be causing high data scanning. Widgets that display large amounts of log data or refresh frequently can contribute to increased usage.
- Check Query Execution Frequency: If your dashboard widgets or queries are set to refresh frequently, this could result in higher data scanning. Consider adjusting the refresh interval or optimizing the queries to reduce scanning (see the dashboard example after this list).
- Check Query Statistics: Each Logs Insights query reports statistics such as bytes scanned, records matched, and records scanned. Review these in the console or through the GetQueryResults API to see which queries scan the most data (see the statistics example after this list).
- Review Query History: CloudWatch Logs Insights keeps a history of recently run queries. Going through it (in the console or with the DescribeQueries API) can help you identify which queries, widgets, or scheduled jobs are the source of high data scanning.
- Experiment with Filters: Experiment with different filters and query conditions to see how they affect data scanning. Try to isolate the impact of specific filters or conditions on the amount of data scanned.
By following these steps and carefully reviewing your queries, dashboards, and usage patterns, you should be able to identify the source of high data scanning and take appropriate actions to optimize your usage and stay within the CloudWatch free tier limits.
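For the query-efficiency point above, here is a minimal sketch of a narrowly scoped Logs Insights query run from the CLI. The log group name and the filter pattern are placeholders; the idea is to restrict the time range, filter early, and cap the number of results returned.
# Run a narrowly scoped Logs Insights query over the last hour
# (the log group name and the filter pattern are placeholders)
aws logs start-query \
    --log-group-name "/aws/lambda/example-function" \
    --start-time $(( $(date +%s) - 3600 )) \
    --end-time $(date +%s) \
    --query-string 'fields @timestamp, @message | filter @message like /ERROR/ | limit 20'
# start-query returns a queryId; use it to fetch the results
aws logs get-query-results --query-id "<queryId returned above>"
Narrowing --start-time/--end-time usually has the biggest effect, because the amount of data scanned grows with how much log data falls inside the queried time window.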
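For the dashboard-related points, you can dump a dashboard's definition and extract the Logs Insights queries its widgets run. This is only a sketch: "MyDashboard" is a placeholder name, and it assumes the Logs Insights widgets appear in the dashboard body with type "log".
# List your dashboards, then inspect one for Logs Insights widgets
aws cloudwatch list-dashboards
aws cloudwatch get-dashboard --dashboard-name "MyDashboard" \
    | jq -r '.DashboardBody | fromjson | .widgets[] | select(.type == "log") | .properties.query'
Each time the dashboard refreshes (or someone opens it), these queries run again over their configured time range, so a few heavy widgets can account for a large share of the scanned bytes.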
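For the query-statistics and query-history points, CloudWatch Logs can list your recent queries and report how much data each one scanned. The query ID below is a placeholder taken from the output of the first command.
# List recently completed Logs Insights queries
aws logs describe-queries --status Complete --max-results 10
# For any queryId from the list, the statistics include bytesScanned,
# recordsMatched and recordsScanned
aws logs get-query-results --query-id "<queryId from the list>" | jq '.statistics'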
Hi, as explained by Mustafa, "Global-DataScanned-Bytes" is the amount of data queried by Logs insights queries (the "DataScanned-Bytes" usage type) across all regions (the "Global" prefix). Mustafa also shared a great list of steps to look into your usage. I would just like to add that the usage explanation is documented on https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_billing.html
Here is additional information.
You can track "Global-DataScanned-Bytes" in Cost Explorer (not in CloudWatch Metrics).
In Cost Explorer, open the [Report parameters] panel and, in the [Filter] section, select “Service: CloudWatch” and “Usage type: xxxx-DataScanned-Bytes”.
Alternatively, you can run the following command in the AWS CLI. (Note: adjust hyphens and quotes as needed to prevent character-encoding issues, depending on your environment.)
📌 e.g., time period = July 1 - 31, 2024 (the End date in the command is exclusive), target usage types = USE1-DataScanned-Bytes and USW2-DataScanned-Bytes (N. Virginia and Oregon)
aws ce get-cost-and-usage \
    --time-period Start=2024-07-01,End=2024-08-01 \
    --granularity DAILY \
    --metrics "UsageQuantity" \
    --filter '{"Dimensions":{"Key":"USAGE_TYPE","Values":["USE1-DataScanned-Bytes", "USW2-DataScanned-Bytes"]}}' \
    --output json \
    | jq '.ResultsByTime[] | "Date: " + .TimePeriod.Start + " " + .Total.UsageQuantity.Amount + " GB"'
The output looks like this:
"Date: 2024-07-01 1.4960333733 GB"
"Date: 2024-07-02 0 GB"
"Date: 2024-07-03 0.8519507069 GB"
"Date: 2024-07-04 0.1059339764 GB"
"Date: 2024-07-05 0 GB"
"Date: 2024-07-06 0 GB"
:
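If you are not sure in advance which Regions (usage types) contribute, a variation of the same command groups the results by usage type instead of filtering on specific ones. This is only a sketch; the jq expression keeps any usage type whose name contains "DataScanned-Bytes", which would also surface a "Global-DataScanned-Bytes" line item if your account reports one.
aws ce get-cost-and-usage \
    --time-period Start=2024-07-01,End=2024-08-01 \
    --granularity MONTHLY \
    --metrics "UsageQuantity" \
    --group-by '[{"Type":"DIMENSION","Key":"USAGE_TYPE"}]' \
    --output json \
    | jq -r '.ResultsByTime[].Groups[] | select(.Keys[0] | test("DataScanned-Bytes")) | .Keys[0] + ": " + .Metrics.UsageQuantity.Amount + " GB"'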
just submitted an answer that will solve the issue (hopefully)