One approach is to use a scheduled EventBridge rule to invoke the Lambdas automatically every 24 hours, creating an S3 export task for the most recent day's log data. A sketch of the handler is shown below.
However, another way to ensure that log data is continually archived from CloudWatch to S3, without losing data that falls outside the Lambdas' export windows, is to add a subscription filter to each log group you wish to archive and deliver the events to a Kinesis Data Firehose delivery stream with an S3 destination.
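A minimal sketch of that scheduled export, assuming a Lambda written in Python with boto3 and triggered by a `rate(1 day)` EventBridge rule. The log group name, bucket name, and prefix are placeholders, not values from the original question:

```python
# Sketch only: exports the previous 24 hours of a single log group to S3.
# LOG_GROUP and DESTINATION_BUCKET are placeholder names.
import time
import boto3

logs = boto3.client("logs")

LOG_GROUP = "/my/app/logs"                    # placeholder log group
DESTINATION_BUCKET = "my-log-archive-bucket"  # placeholder S3 bucket


def handler(event, context):
    # CloudWatch Logs export tasks take timestamps in milliseconds since epoch.
    now_ms = int(time.time() * 1000)
    one_day_ms = 24 * 60 * 60 * 1000

    response = logs.create_export_task(
        taskName=f"daily-export-{now_ms}",
        logGroupName=LOG_GROUP,
        fromTime=now_ms - one_day_ms,
        to=now_ms,
        destination=DESTINATION_BUCKET,
        destinationPrefix="cloudwatch-exports",
    )
    return {"taskId": response["taskId"]}
```

Keep in mind that export tasks run per log group, so multiple log groups would need multiple tasks (and only a limited number can run concurrently per account).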
[CW Log Group Subscription Filter] -> [Kinesis Firehose] -> [S3]
By leaving the subscription filter's filter pattern empty (match everything), all log data ingested into the log group is forwarded to S3 via the Kinesis Firehose delivery stream, with no need to run the Lambdas to export data. Note that only logs ingested after the subscription filter is created are sent to S3; any logs ingested before that point would still need to be exported. A sketch of the filter setup follows below.
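A minimal sketch of creating that subscription filter with boto3. The log group name, delivery stream ARN, and IAM role ARN are placeholders; the role must allow CloudWatch Logs to put records to the Firehose stream, and the stream itself would be configured separately with an S3 destination:

```python
# Sketch only: wires a log group to an existing Firehose delivery stream.
# All names and ARNs below are placeholders.
import boto3

logs = boto3.client("logs")

logs.put_subscription_filter(
    logGroupName="/my/app/logs",   # placeholder log group
    filterName="archive-all-to-s3",
    filterPattern="",              # empty pattern matches every log event
    destinationArn="arn:aws:firehose:us-east-1:123456789012:deliverystream/log-archive-stream",
    roleArn="arn:aws:iam::123456789012:role/CWLtoFirehoseRole",
)
```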
Resources for configuring this can be found here: