
How do I send my container logs to multiple destinations in Amazon ECS on AWS Fargate?


I want my application container that runs on AWS Fargate to forward logs to multiple destinations, such as Amazon CloudWatch, Amazon Data Firehose, or Splunk.

Short description

An Amazon Elastic Container Service (Amazon ECS) task definition allows you to specify only a single log configuration object for a given container. This limit means that you can forward logs to only a single destination. To forward logs to multiple destinations in Amazon ECS on Fargate, you can use FireLens.

Note: FireLens works with both Fluent Bit and Fluentd log forwarders. This resolution uses Fluent Bit because Fluent Bit is more resource-efficient than Fluentd.

Resolution

Note: If you receive errors when you run AWS Command Line Interface (AWS CLI) commands, then see Troubleshoot AWS CLI errors. Also, make sure that you're using the most recent AWS CLI version.

Before you begin, make sure that you understand:

  • To generate the Fluent Bit output definition, FireLens uses the key-value pairs specified as options in the logConfiguration object from the ECS task definition. The destination where FireLens routes the logs is specified in the [OUTPUT] definition section of a Fluent Bit configuration file. For more information, see Output on the Fluent Bit website. If you do not specify any options, no output is generated.
  • FireLens creates a configuration file on your behalf, but you can also specify a custom configuration file. You can host this configuration file in Amazon Simple Storage Service (Amazon S3). Or, create a custom Fluent Bit Docker image with the custom output configuration file added to the image.
  • Amazon ECS on Fargate can pull a custom configuration file from Amazon S3, unless you require custom plugins for Fluent Bit. If you need custom plugins, build a custom Fluent Bit image that includes them instead.
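When you need only one destination, the key-value pairs described above can live directly in the task definition instead of a separate file. As an illustration (the log group name, region, and prefix here are placeholders), FireLens turns each key under options into a line of a single generated [OUTPUT] section:

```json
{
    "logConfiguration": {
        "logDriver": "awsfirelens",
        "options": {
            "Name": "cloudwatch",
            "region": "us-east-1",
            "log_group_name": "example-single-destination",
            "log_stream_prefix": "app-",
            "auto_create_group": "true"
        }
    }
}
```

Because only one logConfiguration object is allowed per container, a custom configuration file such as the one in the next section is what enables multiple [OUTPUT] sections.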

Create a Fluent Bit custom output configuration file and upload it to Amazon S3

Create a custom Fluent Bit configuration file called logDestinations.conf with your choice of [OUTPUT] definitions. The following example defines outputs for CloudWatch, Amazon Data Firehose, and Splunk. The Match directive routes your data: when Amazon ECS populates the Fluent Bit configuration, it tags each input stream with an auto-generated name that's prefixed with the container name. Because the example container's name starts with service, the example uses the Match pattern service*:

[OUTPUT]   
    Name                firehose   
    Match               service*
    region              us-west-2
    delivery_stream     nginx-stream  
[OUTPUT]
    Name                cloudwatch
    Match               service*
    region              us-east-1
    log_group_name      firelens-nginx-container
    log_stream_prefix   from-fluent-bit
    auto_create_group   true   
[OUTPUT]
    Name                splunk
    Match               service*
    Host                127.0.0.1
    Splunk_Token        xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx
    Splunk_Send_Raw     On

Note: Different destinations require different fields to be specified in the [OUTPUT] definition. For examples, see amazon-ecs-firelens-examples on the GitHub website.
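The fan-out implied by the repeated Match service* patterns above can be sketched locally. This is only an approximation: Python's fnmatch stands in for Fluent Bit's tag wildcard, and the task ID suffix in the tag is made up.

```python
from fnmatch import fnmatch

# Minimal sketch: parse the [OUTPUT] sections of a Fluent Bit config and
# show that a single FireLens-generated tag fans out to every matching
# output. The config text mirrors the example above, trimmed to the
# fields that matter for routing.
config = """
[OUTPUT]
    Name    firehose
    Match   service*
[OUTPUT]
    Name    cloudwatch
    Match   service*
[OUTPUT]
    Name    splunk
    Match   service*
"""

outputs = []
current = None
for line in config.splitlines():
    line = line.strip()
    if line == "[OUTPUT]":
        current = {}
        outputs.append(current)
    elif current is not None and line:
        key, value = line.split(None, 1)
        current[key] = value

# FireLens tags streams as "<container name>-firelens-<task id>";
# the task ID suffix here is hypothetical.
tag = "service_web_app-firelens-dd56a0b1"
matches = [o["Name"] for o in outputs if fnmatch(tag, o["Match"])]
print(matches)  # prints ['firehose', 'cloudwatch', 'splunk']
```

Every output whose Match pattern covers the tag receives a copy of the logs, which is how one container's stream reaches all three destinations.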

To upload this file to an Amazon S3 bucket that you control, run the AWS CLI cp command:

aws s3 cp logDestinations.conf s3://example-bucket/logDestinations.conf

Create IAM permissions

Create AWS Identity and Access Management (IAM) permissions that allow your task role to retrieve the configuration file from Amazon S3 and to route your logs to your chosen destinations. For example, if your destination is Amazon Data Firehose, then you must give the task permission to call the firehose:PutRecordBatch API.

Note: Fluent Bit supports plugins as log destinations. Destinations such as CloudWatch and Kinesis require permissions that include logs:CreateLogGroup, logs:CreateLogStream, logs:DescribeLogStreams, logs:PutLogEvents, and kinesis:PutRecords. For more information, see Permissions for CloudWatch and Kinesis on the GitHub website.

The example IAM policy grants the task role s3:GetObject access to the configuration file. It's also a best practice to grant the permissions required to send the log router's own output to a separate CloudWatch log group.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ConfigurationFileAccess",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::example-bucket/logDestinations.conf"
    },
    {
      "Sid": "CloudWatchLogGroupPermissions",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:logs:us-east-1:555555555555:log-group:firelens-log-router"
    },
    {
      "Sid": "CloudWatchLogStreamPermissions",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:logs:us-east-1:555555555555:log-group:firelens-log-router:log-stream:*"
    }
  ]
}

Note: For more information about how to create a task IAM role and required permissions, see Amazon ECS task execution IAM role.

The following example task role policy grants the required permissions for the CloudWatch and Amazon Data Firehose outputs used in the example destinations:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DataFirehosePermissions",
      "Action": [
        "firehose:PutRecordBatch"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:firehose:us-west-2:555555555555:deliverystream/nginx-stream"
    },
    {
      "Sid": "CloudWatchLogGroupPermissions",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:logs:us-east-1:555555555555:log-group:firelens-nginx-container"
    },
    {
      "Sid": "CloudWatchLogStreamPermissions",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:logs:us-east-1:555555555555:log-group:firelens-nginx-container:log-stream:*"
    }
  ]
}

Create the task definition

In your task definition, add the log router as an additional essential container that uses the AWS for Fluent Bit image. The example runs the official nginx container as the application. Use the AWS CLI or the Amazon ECS console to create a complete task definition:

{
    "family": "firelens-example-task",
    "taskRoleArn": "arn:aws:iam::012345678901:role/exampleTaskRole",
    "executionRoleArn": "arn:aws:iam::444455556666:role/exampleTaskExecutionRole",
    "networkMode": "awsvpc",
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "256",
    "memory": "512",
    "containerDefinitions": [
        {
            "essential": true,
            "image": "public.ecr.aws/docker/library/nginx:latest",
            "name": "service_web_app",
            "logConfiguration": {
                "logDriver": "awsfirelens"
            }
        },
        {
            "essential": true,
            "image": "public.ecr.aws/aws-observability/aws-for-fluent-bit:stable",
            "name": "log_router",
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "firelens-log-router",
                    "awslogs-region": "us-east-1",
                    "awslogs-create-group": "true",
                    "awslogs-stream-prefix": "firelens-router-logs"
                }
            },
            "firelensConfiguration": {
                "type": "fluentbit",
                "options": {
                    "config-file-type": "s3",
                    "config-file-value": "arn:aws:s3:::example-bucket/logDestinations.conf"
                }
            }
        }
    ]
}
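Amazon ECS rejects a task definition that declares a firelensConfiguration when no container uses the awsfirelens log driver. A small local pre-flight check can catch this before you register the task definition; the following is a sketch, and the helper name is illustrative:

```python
# Sketch: validate the FireLens pairing rule locally before calling
# register-task-definition. ECS requires that when any container has a
# firelensConfiguration, at least one container uses the awsfirelens
# log driver.
def check_firelens(task_def: dict) -> bool:
    containers = task_def.get("containerDefinitions", [])
    has_router = any("firelensConfiguration" in c for c in containers)
    has_firelens_driver = any(
        c.get("logConfiguration", {}).get("logDriver") == "awsfirelens"
        for c in containers
    )
    # Only invalid when a router is declared but nothing routes to it.
    return (not has_router) or has_firelens_driver

# Mirrors the structure of the task definition above.
task_def = {
    "containerDefinitions": [
        {"name": "service_web_app",
         "logConfiguration": {"logDriver": "awsfirelens"}},
        {"name": "log_router",
         "firelensConfiguration": {"type": "fluentbit"}},
    ]
}
print(check_firelens(task_def))  # prints True
```

If the check returns False, ECS returns a ClientException at registration time rather than at run time, so fixing the application container's logDriver is the remedy.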

Note: To troubleshoot issues, the example configuration directs the output of the Fluent Bit container to another CloudWatch log group under a different prefix.

AWS OFFICIAL · Updated 3 months ago
7 Comments

Hello!

It is not necessary to create your own image; you can connect the config file from S3.

"config-file-type": "s3", "config-file-value": "arn:aws:s3:::yourbucket/yourdirectory/extra.conf"

replied 2 years ago

Thank you for your comment. We'll review and update the Knowledge Center article as needed.

AWS
MODERATOR
replied 2 years ago

I didn't find any mention of the logConfiguration setup at the container definition level with a logDriver of awsfirelens. Something like:

 "logConfiguration": {
                "logDriver": "awsfirelens",
                "options": {
                    "compress": "gzip",
                    "provider": "ecs",
                    "dd_service": "prefix-service-service",
                    "Host": "http-intake.logs.datadoghq.com",
                    "TLS": "on",
                    "dd_source": "python-grpc",
                    "dd_tags": "env:dev, prefix-service-dev",
                    "Name": "datadog"
                },
                "secretOptions": [
                    {
                        "name": "apikey",
                        "valueFrom": "arn:aws:secretsmanager:us-east-2:12121212121:secret:datadog_dev:dd_api_key::"
                    }
                ]
            }

Full details for the issue I am facing: https://stackoverflow.com/questions/78632920/aws-ecs-fargate-send-logs-to-multiple-destinations-aws-s3-and-datadog

replied 2 years ago

Thank you for your comment. We'll review and update the Knowledge Center article as needed.

AWS
MODERATOR
replied 2 years ago

The article didn't mention the logConfiguration setup at the container definition level with a logDriver of awsfirelens. I was getting this error:

Error: failed creating ECS Task Definition (prefix-service-dev): ClientException: When a firelensConfiguration object is specified, at least one container has to be configured with the awsfirelens log driver.

replied 2 years ago

I used this article to forward logs to New Relic and CloudWatch, and it worked after making some changes. Main container logConfiguration:

"logConfiguration": {
                "logDriver": "awsfirelens"
}

You also need to add the New Relic plugin to the Docker image and specify the path to it. Dockerfile:

FROM amazon/aws-for-fluent-bit:latest
ADD logDestinations.conf /logDestinations.conf
ADD out_newrelic-linux-amd64-1.19.2.so /out_newrelic-linux-amd64-1.19.2.so
ADD plugins.conf /plugins.conf

logDestinations.conf

[SERVICE]
   Plugins_File /plugins.conf

[OUTPUT]
    name      nrlogs
    match     *
    api_key   YOUR_API_KEY_HERE

[OUTPUT]
    Name cloudwatch_logs
    Match   *
    region us-east-1
    log_group_name fluent-bit-cloudwatch
    log_stream_prefix from-fluent-bit-
    auto_create_group On

plugins.conf

[PLUGINS]
   Path /out_newrelic-linux-amd64-1.19.2.so

The rest of the sidecar configuration was the same as described in the article.

Reference document: New Relic Fluent Bit plugin download page.

replied 2 years ago

Thank you for your comment. We'll review and update the Knowledge Center article as needed.

AWS
EXPERT
replied 2 years ago