It seems that this does not work correctly (it does not correctly recognize the type). If I remove the filter for the type, I receive poorly paginated results and I don't understand why: @message 2022-06-07T08:01:26.897Z 65985471-edd9-44ba-99f8-8e9f0a5d4fa6 INFO saving evse statuses to database...
The type info is in the third place, but the query (removing the filter on error and warn) puts "statuses" as the type and "to database..." as the info.
If your fields always have these two formats, then you can extract the type (ERROR or WARN) as the third space-separated field. I used this and rewrote your first query to show you an alternative way of doing this that keeps the type in a single column, and how you can still use it to create multiple series for a time chart (always useful to know a few more tricks!)
parse @message "* * * *" as time, request, type, info
| filter type in ["ERROR", "WARN"]
| stats sum(type="ERROR") as error, sum(type="WARN") as warning by bin(15m)
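To sanity-check what the four-wildcard parse does before running it, the same split can be sketched in Python (this is my own illustration, not part of Logs Insights): the `* * * *` pattern corresponds to splitting the message on the first three spaces, so the third field is the type and everything after it is the info.

```python
# Sketch: how the "* * * *" parse pattern maps onto the sample event.
# Splitting on the first three spaces yields time, request ID, type,
# and the remainder as the info text.
message = ("2022-06-07T08:01:26.897Z "
           "65985471-edd9-44ba-99f8-8e9f0a5d4fa6 "
           "INFO saving evse statuses to database...")

time, request, type_, info = message.split(" ", 3)

print(type_)  # INFO
print(info)   # saving evse statuses to database...
```

Note that `type_` comes out as the whole third token, which is why the type ends up in a single column rather than being spread across two.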
You asked to see a table with columns of time, requestID, type, error message, log, and logstream. Basing this off the above query, we can get all of this (apart from the error message) using the following:
parse @message "* * * *" as time, request, type, info
| filter type in ["ERROR", "WARN"]
| display @timestamp, @requestId, type, @log, @logStream
The last part is getting the error message. You have two formats of messages here, so we will need two capture patterns. I'm assuming your JSON event is ingested with the JSON fields extracted and you therefore have a field called errorMessage. The trick here is to use coalesce to get a JSON error message if it exists, and if not, take the info field from the first format of your events. You can then add this new field into the display command.
parse @message "* * * *" as time, request, type, info
| filter type in ["ERROR", "WARN"]
| fields coalesce(errorMessage, info) as msg
| display @timestamp, @requestId, type, msg, @log, @logStream
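The fallback behaviour of coalesce can be illustrated with a small Python sketch (my own illustration; it assumes errorMessage is simply absent for the plain-text events): coalesce returns the first non-null argument, so JSON events keep their extracted errorMessage and plain-text events fall back to the parsed info field.

```python
# Sketch of what coalesce(errorMessage, info) does: return the first
# argument that is present (non-null).
def coalesce(*values):
    return next((v for v in values if v is not None), None)

# JSON event: errorMessage was extracted, so it wins.
print(coalesce("Connection refused", "saving evse statuses..."))

# Plain-text event: no errorMessage, so fall back to the info field.
print(coalesce(None, "saving evse statuses to database..."))
```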
If you want to see more of the Logs Insights syntax, see https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_QuerySyntax.html.
A quick note on field names: field names don't have to start with @, and in fact it is better if they don't. The CloudWatch discovered fields do start with @, which helps you distinguish between them and the fields you create. The exception is that discovered fields from JSON do not start with @. You can read more at https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_AnalyzeLogData-discoverable-fields.html.
I may well have the parse pattern wrong since I didn't have your data to work with. Check the pattern against your log events and modify accordingly. (What did the event look like that gave you the values noted above for the type and info fields?)
Once you get the right data into the parsed fields, though, I think this should work.