
EC2 + IPv6-Only + Docker + awslogs driver to CloudWatch (not working)


TL;DR: Docker awslogs driver failing with "i/o timeout" to CloudWatch Logs over IPv6 on EC2

Problem Description:

We are encountering a persistent "i/o timeout" error when our Docker container, running on an EC2 instance with public IPv6-only connectivity (no public IPv4), attempts to send logs to CloudWatch Logs using the awslogs driver. The error consistently shows an attempt to dial tcp <IPv4_ADDRESS>:443: i/o timeout, meaning the awslogs driver is still trying to connect over IPv4 even though the environment is configured for IPv6. We want to avoid a CloudWatch Logs VPC endpoint over private IPv4 because of the extra VPC endpoint fees.

Environment Details:

  • EC2 Instance: Amazon Linux 2, configured with only private IPv4 and public IPv6 addresses. No public IPv4 is assigned.
  • Docker Version: the stock Docker package shipped with Amazon Linux 2 (latest available at the time)
  • AWS Region: us-west-2
  • CloudWatch Logs: Public endpoint for us-west-2 is logs.us-west-2.amazonaws.com. This service is confirmed to support IPv6.

Steps Taken and Observations:

  1. Initial Setup: The webapp-service Docker container is configured to use the awslogs driver.
  2. VPC Endpoint Removal: We initially attempted to use a CloudWatch Logs VPC endpoint, but due to cost considerations, this was removed. The current goal is to utilize the EC2 instance's public IPv6 connectivity to reach the public CloudWatch Logs service endpoint.
  3. Docker Daemon IPv6 Configuration:
    • We ensured /etc/docker/daemon.json on the EC2 instance is configured as follows:
      {
        "ipv6": true,
        "fixed-cidr-v6": "fd00::/80"
      }
    • This configuration is verified to be correctly applied on the remote host.
    • An aggressive Docker daemon reset (disable, stop, remove all networks and containers, then re-enable and start) is performed to ensure a clean state and proper application of daemon.json changes.
    • The Docker daemon is confirmed to be active after these steps.
  4. docker run Command: The container is launched with the following command (simplified for clarity):
    /usr/bin/docker run --name webapp-service \
      -p 8080:8080 \
      -e ENVIRONMENT_NAME=myapp-dev \
      -e AWS_REGION=us-west-2 \
      -e AWS_EC2_METADATA_SERVICE_ENDPOINT_MODE=IPv6 \
      --log-driver=awslogs \
      --log-opt awslogs-group=/ec2/myapp-dev-webapp-service-service-v1 \
      --log-opt awslogs-region=us-west-2 \
      webapp-service:latest
    • We explicitly added -e AWS_EC2_METADATA_SERVICE_ENDPOINT_MODE=IPv6 to encourage the AWS SDK (used by awslogs) to prefer IPv6.
  5. Error: Despite all these configurations, the docker run command fails with:
    docker: Error response from daemon: failed to create task for container: failed to initialize logging driver: failed to create Cloudwatch log stream: operation error CloudWatch Logs: CreateLogStream, exceeded maximum number of attempts, 3, https response error StatusCode: 0, RequestID: , request send failed, Post "https://logs.us-west-2.amazonaws.com/": dial tcp 44.234.123.82:443: i/o timeout.
    
    The dial tcp <IPv4_ADDRESS>:443 clearly shows an IPv4 connection attempt, which times out because the EC2 instance lacks public IPv4.
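As a first diagnostic (a sketch; dig ships in the bind-utils package on Amazon Linux 2), one can check whether the endpoint publishes AAAA records at all:

```shell
# IPv4 records -- returns addresses like the 44.234.x.x seen in the error:
dig +short A logs.us-west-2.amazonaws.com

# IPv6 records -- if this prints nothing, the endpoint publishes no AAAA
# record, so the dial target can only ever be IPv4:
dig +short AAAA logs.us-west-2.amazonaws.com

# Resolution as seen by the host's libc (the awslogs driver runs inside
# dockerd on the host network, not inside the container):
getent ahostsv6 logs.us-west-2.amazonaws.com
```

If the AAAA query returns nothing, the default endpoint simply has no IPv6 address, which would explain the IPv4 dial regardless of any client-side preference settings.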

Question:

How can we force the Docker awslogs driver (or the underlying AWS SDK it uses) to resolve the logs.us-west-2.amazonaws.com endpoint to its IPv6 address and use IPv6 for communication, given that the Docker host has public IPv6 connectivity and the Docker daemon is configured for IPv6?

Are there any specific awslogs driver options, Docker network configurations, or AWS SDK environment variables (beyond AWS_EC2_METADATA_SERVICE_ENDPOINT_MODE=IPv6) that need to be set to ensure IPv6 preference for CloudWatch Logs communication from a Docker container on an IPv6-only public EC2 instance?

Any guidance on how to diagnose why the awslogs driver is still attempting IPv4 connections would be greatly appreciated.


1 Answer

The issue you're experiencing is related to how the Docker awslogs driver resolves and connects to AWS service endpoints in an IPv6-only environment. Based on your error message showing "dial tcp 44.234.123.82:443: i/o timeout", the driver is still attempting to use IPv4 despite your IPv6-only configuration.

Here are several approaches to resolve this issue:

  1. Enable DNS64 and NAT64 on your subnet:
  • DNS64 allows the Amazon Route 53 Resolver to synthesize IPv6 addresses for IPv4-only services
  • This works in conjunction with NAT64 to translate between IPv6 and IPv4 protocols
  • You can enable DNS64 through the VPC console (select your subnet and choose Actions > Edit subnet settings) or via AWS CLI

The DNS64/NAT64 approach lets IPv6-only clients reach IPv4-only endpoints, but it adds NAT gateway charges. You can avoid those charges entirely by using AWS dual-stack endpoints, which natively support IPv6.
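If you do go the DNS64/NAT64 route, the setup can be sketched with the AWS CLI as follows (subnet, route table, and NAT gateway IDs are placeholders; NAT64 also requires a NAT gateway, which must sit in a subnet that still has IPv4 connectivity):

```shell
# Enable DNS64 on the IPv6-only subnet so the Route 53 Resolver
# synthesizes AAAA records (64:ff9b::/96-prefixed) for IPv4-only hosts.
aws ec2 modify-subnet-attribute \
    --subnet-id <subnet-id> \
    --enable-dns64

# Route the well-known NAT64 prefix through a NAT gateway.
aws ec2 create-route \
    --route-table-id <route-table-id> \
    --destination-ipv6-cidr-block 64:ff9b::/96 \
    --nat-gateway-id <nat-gateway-id>
```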

Recommended NAT64-Free Solution:

  1. Use Dual-Stack CloudWatch Logs Endpoint:
  • Replace the default endpoint logs.us-west-2.amazonaws.com with the IPv6-capable endpoint: logs.us-west-2.api.aws
  • Update your Docker awslogs driver configuration to specify this endpoint:
--log-opt awslogs-endpoint=https://logs.us-west-2.api.aws

This ensures the driver resolves and connects via IPv6 directly.
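Applied to the docker run command from the question, the only change is one extra --log-opt (everything else is copied from the question; treat this as a sketch):

```shell
/usr/bin/docker run --name webapp-service \
  -p 8080:8080 \
  -e ENVIRONMENT_NAME=myapp-dev \
  -e AWS_REGION=us-west-2 \
  --log-driver=awslogs \
  --log-opt awslogs-group=/ec2/myapp-dev-webapp-service-service-v1 \
  --log-opt awslogs-region=us-west-2 \
  --log-opt awslogs-endpoint=https://logs.us-west-2.api.aws \
  webapp-service:latest
```

Because the awslogs driver runs in the Docker daemon on the host network, this endpoint override is the setting that actually changes the dial target; the container's own network configuration is not involved.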

  2. Ensure Docker Daemon IPv6 Configuration:
  • Confirm /etc/docker/daemon.json includes:
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00::/80"
}
  • Restart Docker after changes.
  3. IMDSv2 Hop Limit:
  • Set the Instance Metadata Service hop limit to 2 so that requests originating inside containers (one extra network hop through the Docker bridge) can still reach IMDSv2:
aws ec2 modify-instance-metadata-options \
    --instance-id <instance-id> \
    --http-put-response-hop-limit 2
  4. AWS Credentials (Optional):
  • If the container cannot reach IMDSv2 over IPv6, you can supply AWS credentials directly to the Docker daemon via a systemd drop-in file (for example /etc/systemd/system/docker.service.d/aws-env.conf; the file name is illustrative):
[Service]
Environment="AWS_EC2_METADATA_DISABLED=true"
Environment="AWS_EC2_METADATA_V1_DISABLED=true"
Environment="AWS_ACCESS_KEY_ID=<YOUR_ACCESS_KEY>"
Environment="AWS_SECRET_ACCESS_KEY=<YOUR_SECRET_KEY>"
  • Then reload and restart Docker:
systemctl daemon-reload
systemctl restart docker
  5. Verification:
  • Test IPv6 DNS resolution:

dig AAAA logs.us-west-2.api.aws

  • Test connectivity from host:

curl -6 https://logs.us-west-2.api.aws

  • Test connectivity from container:

docker run --rm busybox ping6 -c 3 logs.us-west-2.api.aws

If these succeed, your container logs should now flow to CloudWatch Logs over native IPv6, without requiring NAT64.

Conclusion: For this use case, the dual-stack endpoint (awslogs-endpoint=https://logs.us-west-2.api.aws) is the most effective fix, since it avoids NAT64 charges entirely; DNS64/NAT64 remains a fallback for AWS services that do not yet offer dual-stack endpoints.

By switching to the dual-stack endpoint and confirming IPv6 connectivity, your Docker containers can send logs to CloudWatch Logs over IPv6-only connections. This approach avoids NAT64-related charges while fully supporting your IPv6-only EC2 environment.


answered 3 months ago by AWS Support Engineer · revised 3 months ago
  • I'd like to avoid NAT64 charges, which are actually more expensive than a Public IPv4.
