I do not believe that the metric filter is priced by data points ingested. CloudWatch metrics are priced on two dimensions. The first is the cost of calling the PutMetricData API, which we don't have to worry about with a metric filter, since the metric is published from the log group. The second is the cost of metric storage (the custom metric charge).
The nice thing about a metric filter is that the metric costs the same regardless of how many matching log events are ingested. Custom metric charges are pro-rated hourly: if the filter pattern has no match during an hour, and no default value is applied to the filter, no metric is published for that hour and you don't pay for it. If at least one event matches in every hour of the month, you pay the full $0.30 custom metric charge.
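That pro-rated model can be sketched in a few lines of Python. The $0.30/metric/month price and the 730-hour month are assumptions for illustration (us-east-1, first pricing tier):

```python
# Sketch of CloudWatch's pro-rated custom-metric billing.
# Assumptions: $0.30/metric/month (us-east-1, first tier), 730-hour month.
PRICE_PER_METRIC_MONTH = 0.30
HOURS_PER_MONTH = 730

def metric_cost(hours_with_data: int) -> float:
    """Cost of one custom metric, billed only for hours it was published."""
    return PRICE_PER_METRIC_MONTH * hours_with_data / HOURS_PER_MONTH

full_month = metric_cost(730)  # matches every hour -> the full $0.30
half_month = metric_cost(365)  # matches in half the hours -> ~$0.15
```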
Example
For example, if I have a single metric filter that creates a single metric (no dimensions yet), that filter would cost me at most $0.30 per month (using us-east-1 pricing), since it only creates one metric. The filter could match 1,000 or 100,000,000 events during that time and the metric cost would be the same. You would still pay for CloudWatch Logs ingestion, of course. You can think of a metric filter (and the embedded metric format) as a way to swap the cost of calling the PutMetricData API for the cost of CloudWatch Logs ingestion.
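As a rough illustration of that trade-off, here is a back-of-the-envelope comparison. The prices are assumptions (us-east-1: $0.30 per custom metric per month, and the commonly quoted $0.01 per 1,000 PutMetricData requests — check the current pricing page), and one data point per request is the unbatched worst case:

```python
# Worst-case comparison: publishing 100,000,000 data points to one metric.
# All prices are assumptions for illustration (us-east-1).
events = 100_000_000

# Via a metric filter: only the custom metric charge; event volume is free
# on the metric side (Logs ingestion is paid separately).
filter_cost = 0.30

# Via unbatched PutMetricData calls: API requests plus the same metric charge.
api_cost = events / 1_000 * 0.01 + 0.30
```

The gap shrinks if you batch data points per request, but the metric-filter path stays flat no matter the volume.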
Now for dimensions: when we add a metric filter with dimension support, it can create many metrics. Let's look at the maximum number of metrics the filter you mentioned would generate:
I have a log statement that looks like this: { "foo": "x", "bar": "y", "baz": "z" }. One metric filter on the word "foo" with one dimension: Bar = $.bar. I know that my logs could produce up to a maximum of 16 different values for bar.
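A filter like that could be set up with boto3's put_metric_filter. The sketch below only builds the request rather than calling AWS, and the log group, namespace, and metric names are placeholders:

```python
# Building (not sending) a put_metric_filter request, boto3-style.
# Log group, filter, namespace, and metric names are placeholders.
params = {
    "logGroupName": "/my/app/logs",
    "filterName": "foo-with-bar-dimension",
    "filterPattern": '{ $.foo = "x" }',  # match JSON events where foo == "x"
    "metricTransformations": [
        {
            "metricName": "FooMatches",
            "metricNamespace": "MyApp",
            "metricValue": "1",              # count each matching event
            "dimensions": {"Bar": "$.bar"},  # one metric per bar value
            # no defaultValue: hours without a match publish (and bill) nothing
        }
    ],
}
# To apply it: boto3.client("logs").put_metric_filter(**params)
```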
In this case you are looking at a maximum of 16 different metrics. Note, however, that if the pattern does not match for a given dimension value during an hour, and you do not have a default value set, no metric is published for that dimension combination and you are not charged for it that hour.
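Under the pro-rated model, the worst case for those 16 metrics is easy to bound (again assuming the $0.30/metric/month us-east-1 price):

```python
# Worst case: every one of the 16 bar values matches in every hour of the month.
PRICE_PER_METRIC_MONTH = 0.30  # assumed us-east-1 price
bar_values = 16
max_monthly_cost = bar_values * PRICE_PER_METRIC_MONTH  # up to $4.80/month
```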
Now when we add a second dimension:
If I added a dimension of Baz = $.baz and there are a maximum of up to 4 different values for baz, does that mean I multiply the previous amount by 4?
Correct, so this would give you a maximum of 64 metrics, one for each unique combination of namespace, metric name, and dimension key-value pairs. In this case I would suggest not setting a default value on the filter, because if there is no match for a given combination during an hour, the metric will not be published and you will not be charged for it. If you don't want gaps in your time series, you can always use the FILL() metric math expression.
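The two-dimension worst case works out the same way (again assuming $0.30/metric/month). The FILL() note refers to CloudWatch metric math, e.g. FILL(m1, 0), which plugs gaps at query time without publishing extra data points:

```python
# Worst case with two dimensions: every bar x baz combination is published
# every hour. Price is an assumed us-east-1 figure.
PRICE_PER_METRIC_MONTH = 0.30
bar_values, baz_values = 16, 4
unique_metrics = bar_values * baz_values                     # 64 combinations
max_monthly_cost = unique_metrics * PRICE_PER_METRIC_MONTH   # up to $19.20
```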
To clarify the pricing model: CloudWatch metric filters are not charged per metric data point ingested. You pay for CloudWatch Logs ingestion on the log events themselves, plus a custom metric charge for each unique metric (metric name plus dimension value combination) the filter publishes, pro-rated hourly. The number of metrics you enter in the pricing calculator is the number of unique name-and-dimension combinations you expect to be published, not the number of matching log events.
In your example, at 1.6 events per second the log group receives approximately 4,147,200 (1.6 × 60 × 60 × 24 × 30) log events per month. That volume drives the Logs ingestion cost, but it does not change the metric cost.
For the dimension "Bar", since there are up to 16 different values, you can have up to 16 unique metrics (the combination of the metric name and the "Bar" value).
For the second dimension "Baz", with up to 4 different values, you can have up to 64 unique combinations of metric name, "Bar" value, and "Baz" value (16 × 4). Adding this dimension therefore raises the worst-case number of custom metrics, and the additional cost, by a factor of 4.
To estimate the cost, you would need the number of unique metrics you expect to be published each hour and your monthly log ingestion volume; you can plug both into the CloudWatch pricing calculator.
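Putting the pieces together into a rough worst-case estimate: the numbers below assume us-east-1 prices ($0.50/GB Logs ingestion, $0.30/metric/month) and a made-up average event size of 200 bytes, so treat them as an illustration only:

```python
# Rough worst-case monthly estimate. All prices and the 200-byte average
# event size are assumptions for illustration (us-east-1).
events_per_month = int(1.6 * 60 * 60 * 24 * 30)  # 4,147,200 events
avg_event_bytes = 200                            # made-up event size

ingest_gb = events_per_month * avg_event_bytes / 1e9
ingest_cost = ingest_gb * 0.50                   # Logs ingestion

unique_metrics = 16 * 4                          # bar x baz combinations
metric_cost = unique_metrics * 0.30              # worst-case metric charge

total = ingest_cost + metric_cost
```

The split makes the first answer's point concrete: at this volume almost all of the cost is the 64 custom metrics, not the event count.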
Thanks, this is very helpful. It matches my assumption, which is concerning regarding the cost.