Hello.
If your EC2 instance is registered as a Systems Manager managed instance, you can use Systems Manager together with EventBridge Scheduler to run the AWS CLI command that uploads logs to S3 on a schedule.
Specifically, configure EventBridge Scheduler to invoke the SSM SendCommand API.
https://docs.aws.amazon.com/scheduler/latest/UserGuide/what-is-scheduler.html
https://docs.aws.amazon.com/systems-manager/latest/userguide/run-command.html
In EventBridge Scheduler, you select the API you want to execute periodically and then set the parameters for that API.
Since we are using the SSM SendCommand API here, you can use the parameters listed in the document below.
https://docs.aws.amazon.com/ja_jp/systems-manager/latest/APIReference/API_SendCommand.html#systemsmanager-SendCommand-request-InstanceIds
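As a sketch, such a schedule could also be created from the AWS CLI using EventBridge Scheduler's universal target for SSM SendCommand. The role ARN, instance ID, bucket, and log path below are placeholder assumptions; the scheduler's role must be allowed to call ssm:SendCommand.

```shell
# Sketch only: all ARNs, IDs, and paths are placeholder assumptions.
TARGET='{
  "Arn": "arn:aws:scheduler:::aws-sdk:ssm:sendCommand",
  "RoleArn": "arn:aws:iam::123456789012:role/scheduler-ssm-role",
  "Input": "{\"DocumentName\": \"AWS-RunShellScript\", \"InstanceIds\": [\"i-0123456789abcdef0\"], \"Parameters\": {\"commands\": [\"aws s3 cp /var/log/myapp s3://my-log-bucket/ --recursive\"]}}"
}'

# Shown with echo so the sketch is safe to paste; remove the echo to create it.
echo aws scheduler create-schedule \
  --name upload-logs-hourly \
  --schedule-expression "rate(1 hour)" \
  --flexible-time-window Mode=OFF \
  --target "$TARGET"
```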
If the instance is not registered as a Systems Manager managed instance, you can instead use crond on the EC2 instance's OS to run the AWS CLI periodically.
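For the crond route, a minimal sketch of a crontab entry might look like this (the bucket name and log directory are assumptions, and the instance profile still needs s3:PutObject on the bucket):

```shell
# Hypothetical hourly upload entry; paths and bucket name are assumptions.
CRON_ENTRY='0 * * * * aws s3 cp /var/log/myapp s3://my-log-bucket/logs/ --recursive >> /var/log/s3-upload.log 2>&1'

# Append it to the current user's crontab (shown, not executed):
echo "(crontab -l 2>/dev/null; echo \"$CRON_ENTRY\") | crontab -"
```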
- Kinesis Agent: Install the Kinesis Agent on a Linux server to stream log files to an S3 bucket. The agent monitors files on the machine and sends new data to a Kinesis Data Firehose delivery stream, which then deposits the log files in the S3 bucket. Make sure the data format is parsed into the structure you need beforehand.
- CloudWatch Agent: Set up the CloudWatch agent to stream logs to CloudWatch Logs, then create an EventBridge rule that triggers a Lambda function to export the logs to S3.
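For the CloudWatch agent route, the files to tail are declared in the agent's configuration file. A minimal sketch of the logs section follows; the file path, log group name, and stream name are assumptions (the `{instance_id}` placeholder is expanded by the agent):

```shell
# Hypothetical fragment of the agent config, e.g. under
# /opt/aws/amazon-cloudwatch-agent/etc/ — path and names are assumptions.
AGENT_CONFIG='{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/myapp/*.log",
            "log_group_name": "myapp-logs",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}'
echo "$AGENT_CONFIG"
```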
This is a valid way to trigger the transfer, but be mindful that if you just copy or move a log file to S3, every process with an open handle to that file will keep writing to the same file. You'll need a rotation process. It can be the source application writing to per-day files, naturally ceasing to write to a file at the start of a new day, or alternatively an external process renaming the old file and signalling the apps to open a new one. Otherwise, you'll either overwrite the logs in S3 and lose log data, or keep accumulating data on disk.
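One common way to get that rotation, sketched below, is a logrotate rule with copytruncate, which copies the file and then truncates it in place so writers keep their open handle (the path and retention counts are assumptions; note that copytruncate can lose lines written during the copy window):

```shell
# Hypothetical /etc/logrotate.d/myapp fragment; path and counts are assumptions.
LOGROTATE_RULE='/var/log/myapp/app.log {
    daily
    rotate 7
    copytruncate
    dateext
    compress
}'
echo "$LOGROTATE_RULE"
```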
Hi,
Thanks for your response.
I need to scale this solution to manage multiple EC2 instances. Specifically, I want to copy log files from a directory on each EC2 instance to an S3 bucket. The desired S3 bucket structure should have the instanceId as the root folder, with logs organized by year/month/day/time.
Can the SSM approach be scaled to achieve this?
It is also possible to target multiple EC2 instances. The "InstanceIds" parameter is a list, so it can contain multiple instance IDs. Alternatively, you can use the "Targets" parameter, which selects the instances to run the command on based on the tags set on the EC2 instances. https://docs.aws.amazon.com/ja_jp/systems-manager/latest/APIReference/API_SendCommand.html
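A sketch of SendCommand using "Targets" instead of "InstanceIds" (the tag key/value, bucket, and log path are placeholder assumptions):

```shell
# Sketch: run the upload on every instance tagged Env=prod.
# Tag, bucket, and log path are placeholder assumptions.
SEND_CMD="aws ssm send-command \
  --document-name AWS-RunShellScript \
  --targets Key=tag:Env,Values=prod \
  --parameters 'commands=[\"aws s3 cp /var/log/myapp s3://my-log-bucket/ --recursive\"]'"
echo "$SEND_CMD"
```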
The example above only runs the "aws s3 cp" command, but it is also possible to execute a shell script placed on the EC2 instance, so folder names and the like can be generated within the script.
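For the instanceId/year/month/day/time layout asked about above, such a script might look like the following sketch. The bucket and log directory are assumptions; the instance ID is read from IMDSv2 and falls back to "unknown" when run off-instance.

```shell
#!/bin/sh
BUCKET="my-log-bucket"     # assumption: replace with your bucket
LOG_DIR="/var/log/myapp"   # assumption: replace with your log directory

# Instance ID from IMDSv2; short timeouts so this degrades gracefully off-instance.
TOKEN=$(curl -sf -m 2 -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 60" 2>/dev/null)
INSTANCE_ID=$(curl -sf -m 2 -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/instance-id" 2>/dev/null)
[ -n "$INSTANCE_ID" ] || INSTANCE_ID="unknown"

# Build the instanceId/year/month/day/time key prefix.
PREFIX="$INSTANCE_ID/$(date -u +%Y/%m/%d/%H%M%S)"

# Shown with echo so the sketch is safe to run; remove the echo to upload.
echo aws s3 cp "$LOG_DIR" "s3://$BUCKET/$PREFIX/" --recursive
```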