awslogs docker driver cannot get credentials on an on-premises server


I'm trying to use the awslogs driver on a dockerised application running on a non-AWS server (or locally for debugging). In the docker-compose.yaml file I pass the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, which are correctly set in the container (I verified). However, the container fails to start with this error:

Error response from daemon: failed to create task for container: failed to initialize logging driver: failed to create Cloudwatch log stream: operation error CloudWatch Logs: CreateLogStream, get identity: get credentials: failed to refresh cached credentials, no EC2 IMDS role found, operation error ec2imds: GetMetadata, request canceled, context deadline exceeded.

From my local terminal, with the same AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, I can send logs to AWS CloudWatch. What could be the issue?

version: '3.8'
services:
  api:
    image: etiket_app
    volumes:
      - ~/.aws:/root/.aws
    environment:
       - AWS_ACCESS_KEY_ID=...
       - AWS_SECRET_ACCESS_KEY=...
       - AWS_DEFAULT_REGION=...
    logging:
      driver: awslogs
      options:
        awslogs-group: "test_group"
        awslogs-region: "eu-north-1"
        awslogs-stream: "test_stream"
Alberto
asked 10 days ago · 111 views
2 Answers

Hello.

How about passing the environment variables to the Docker daemon itself? The awslogs driver runs in the daemon, so it cannot see variables set in the container.
https://www.linkedin.com/pulse/docker-logging-driver-awslogs-ubuntu-shahzad-masud
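On a Linux host with systemd, one common way to do this is a drop-in override for the docker service (the file name below is arbitrary and the values are placeholders; the [Service]/Environment= keys are standard systemd conventions):

```ini
# /etc/systemd/system/docker.service.d/aws-credentials.conf
# Create with `sudo systemctl edit docker`, then apply with:
#   sudo systemctl daemon-reload && sudo systemctl restart docker
[Service]
Environment="AWS_ACCESS_KEY_ID=AKIA..."
Environment="AWS_SECRET_ACCESS_KEY=..."
Environment="AWS_REGION=eu-north-1"
```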

EXPERT
answered 10 days ago
  • Hi Riku, I'm currently developing on macOS. Do you know how to pass environment variables to the Docker daemon on macOS? The link you shared only applies to Linux.


The issue you are facing with the awslogs driver on your on-premises server is likely due to the way the Docker daemon is handling the AWS credentials.

The Docker documentation states that the AWS credentials must be provided to the Docker daemon itself, in one of the following ways:

  1. Using the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN environment variables.
  2. Using the default AWS shared credentials file of the user running the daemon (i.e., ~/.aws/credentials of the root user on the host).
  3. If running the Docker daemon on an Amazon EC2 instance, using the Amazon EC2 instance profile. - Not your case
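For option 2, the file must exist for the user that runs the Docker daemon on the host (usually root), not inside the container. A minimal file looks like this (placeholder values):

```ini
# /root/.aws/credentials on the host running the Docker daemon
[default]
aws_access_key_id = AKIA...
aws_secret_access_key = ...
```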

In your case, you are passing the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables to the container, but the awslogs driver runs inside the Docker daemon, and the daemon does not see the container's environment. That is why the credential lookup falls through and ends with the daemon probing for an EC2 instance profile.

The possible reasons for this could be:

  1. EC2 IMDS role: The error message mentions "no EC2 IMDS role found". The EC2 instance profile is the last step in the SDK's default credential chain, so the daemon only reaches it after the environment-variable and shared-file lookups have failed. On an on-premises server there is no metadata service to answer, so this final lookup times out.

  2. Credential caching: The message also says "failed to refresh cached credentials". This is the SDK's generic wrapper around the failed lookup above rather than a separate caching problem, although network issues can produce the same symptom.
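The fall-through behaviour can be sketched with a simplified model of the SDK's default credential chain (the order follows the AWS SDK documentation; the function below is illustrative, not actual SDK code). The key point is that the chain is evaluated in the daemon process, not in the container:

```python
def resolve_credential_source(env: dict, shared_file_exists: bool,
                              imds_available: bool) -> str:
    """Simplified, illustrative model of the AWS SDK default credential
    chain -- not actual SDK code. The inputs describe the *daemon's*
    environment and home directory, not the container's."""
    if env.get("AWS_ACCESS_KEY_ID") and env.get("AWS_SECRET_ACCESS_KEY"):
        return "environment variables"
    if shared_file_exists:
        return "shared credentials file (~/.aws/credentials)"
    if imds_available:
        return "EC2 instance profile (IMDS)"
    return "no credentials found -> 'no EC2 IMDS role found'"

# On an on-premises host where only the container (not the daemon)
# has the variables, the daemon's view is empty:
print(resolve_credential_source(env={}, shared_file_exists=False,
                                imds_available=False))
```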

To resolve this issue, you can try the following:

  1. Ensure correct credentials: Double-check that the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables contain valid credentials. You already mentioned this, so it is just a sanity check.

  2. Disable the IMDS lookup: You can stop the SDK from probing the metadata service by setting AWS_EC2_METADATA_DISABLED=true. Like the credentials themselves, this variable must be visible to the Docker daemon; setting it only in the container's environment has no effect on the logging driver.

  3. Use the shared credentials file: Instead of environment variables, place the credentials in the shared credentials file of the user running the Docker daemon on the host (typically /root/.aws/credentials). Note that mounting ~/.aws into the container does not help here, because it is the daemon, not the container, that reads the file.

With the credentials in /root/.aws/credentials on the host, your docker-compose.yaml only needs the logging configuration:

version: '3.8'
services:
  api:
    image: etiket_app
    logging:
      driver: awslogs
      options:
        awslogs-group: "test_group"
        awslogs-region: "eu-north-1"
        awslogs-stream: "test_stream"

The daemon will pick up the credentials from the host file; the volume mount and the container-level AWS variables are not needed for logging.

If the issue persists, also check network connectivity from the host running the Docker daemon. The "context deadline exceeded" part of your error is the IMDS probe timing out, which is expected off EC2, but the daemon also needs outbound access to the CloudWatch Logs endpoint for your region.
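As a quick way to see what the daemon runs into, you can probe the metadata endpoint yourself. This is just a stdlib sketch, not the SDK's actual probe: on EC2 the link-local address answers almost instantly, while on any other machine the attempt fails or hangs until the timeout, which is what surfaces as "context deadline exceeded":

```python
import socket

def imds_reachable(timeout: float = 1.0) -> bool:
    """Try to open a TCP connection to the EC2 instance metadata
    service (IMDS). Returns False when the endpoint refuses the
    connection or the attempt times out -- the situation on any
    non-EC2 host."""
    try:
        with socket.create_connection(("169.254.169.254", 80),
                                      timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("IMDS reachable:", imds_reachable())
```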

Let me know if this helps you!

AWS
answered 9 days ago
  • Thanks for your answer! I checked the following:

    • correct credentials
    • AWS_EC2_METADATA_DISABLED=true
    • ~/.aws:/root/.aws (also tried with the profile name set to AmazonCloudWatchAgent)

    But still getting the same error
