The issue you're experiencing is related to how the Docker awslogs driver resolves and connects to AWS service endpoints in an IPv6-only environment. Based on your error message showing "dial tcp 44.234.123.82:443: i/o timeout", the driver is still attempting to use IPv4 despite your IPv6-only configuration.
Here are several approaches to resolve this issue:
- Enable DNS64 and NAT64 on your subnet:
- DNS64 allows the Amazon Route 53 Resolver to synthesize IPv6 addresses for IPv4-only services
- This works in conjunction with NAT64 to translate between IPv6 and IPv4 protocols
- You can enable DNS64 through the VPC console (select your subnet and choose Actions > Edit subnet settings) or with the AWS CLI (aws ec2 modify-subnet-attribute --subnet-id <subnet-id> --enable-dns64)
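To make the mechanism concrete: DNS64 synthesizes an IPv6 address by embedding the IPv4 address into the well-known 64:ff9b::/96 prefix (RFC 6052). A minimal sketch of that mapping (the helper name is illustrative):

```python
import ipaddress

# DNS64 embeds an IPv4 address in the well-known 64:ff9b::/96 prefix (RFC 6052)
def synthesize_nat64(ipv4: str) -> ipaddress.IPv6Address:
    prefix = int(ipaddress.IPv6Address("64:ff9b::"))
    return ipaddress.IPv6Address(prefix | int(ipaddress.IPv4Address(ipv4)))

# The IPv4 endpoint from the error message becomes a routable IPv6 address
print(synthesize_nat64("44.234.123.82"))  # 64:ff9b::2cea:7b52
```

NAT64 then translates traffic sent to that synthesized address back to IPv4, which is where the data-processing charges come from.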
The DNS64/NAT64 path lets IPv6-only clients reach IPv4-only endpoints, but it routes traffic through a NAT gateway and incurs the associated data-processing charges. You can avoid this entirely by using AWS dual-stack endpoints, which support IPv6 natively.
Recommended NAT64-free solution:
- Use Dual-Stack CloudWatch Logs Endpoint:
- Replace the default endpoint logs.us-west-2.amazonaws.com with the IPv6-capable endpoint: logs.us-west-2.api.aws
- Update your Docker awslogs driver configuration to specify this endpoint:
--log-opt awslogs-endpoint=https://logs.us-west-2.api.aws
This ensures the driver resolves and connects via IPv6 directly.
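Putting the log options together, a full docker run invocation might look like the following sketch (the log group name and image are placeholders):

```shell
docker run -d \
  --log-driver awslogs \
  --log-opt awslogs-region=us-west-2 \
  --log-opt awslogs-group=my-app-logs \
  --log-opt awslogs-create-group=true \
  --log-opt awslogs-endpoint=https://logs.us-west-2.api.aws \
  nginx
```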
- Ensure Docker Daemon IPv6 Configuration:
- Confirm /etc/docker/daemon.json includes:
{
"ipv6": true,
"fixed-cidr-v6": "fd00::/80"
}
- Restart Docker after changes.
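After restarting, you can sanity-check that the daemon applied the setting (a sketch; assumes the default bridge network):

```shell
sudo systemctl restart docker
# Should print "true" once "ipv6": true is active in daemon.json
docker network inspect bridge --format '{{.EnableIPv6}}'
```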
- IMDSv2 Hop Limit:
- Set the Instance Metadata Service hop limit to 2 to allow containerized access:
aws ec2 modify-instance-metadata-options \
--instance-id <instance-id> \
--http-put-response-hop-limit 2
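To confirm the hop-limit change worked, you can run the standard IMDSv2 token flow from inside a container (the curlimages/curl image is just an example of an image that ships curl):

```shell
docker run --rm --entrypoint sh curlimages/curl -c '
  TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
            -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
  curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
       http://169.254.169.254/latest/meta-data/instance-id'
```

If this prints the instance ID, containers can reach IMDSv2 and the awslogs driver can obtain credentials from the instance role.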
- AWS Credentials (optional fallback):
- If the daemon cannot reach IMDSv2 over IPv6, supply static AWS credentials via a systemd drop-in for the Docker service (for example /etc/systemd/system/docker.service.d/aws-credentials.conf); the AWS_EC2_METADATA_* variables stop the SDK from retrying the unreachable metadata service:
[Service]
Environment="AWS_EC2_METADATA_DISABLED=true"
Environment="AWS_EC2_METADATA_V1_DISABLED=true"
Environment="AWS_ACCESS_KEY_ID=<YOUR_ACCESS_KEY>"
Environment="AWS_SECRET_ACCESS_KEY=<YOUR_SECRET_KEY>"
- Then reload and restart Docker:
systemctl daemon-reload
systemctl restart docker
- Verification:
- Test IPv6 DNS resolution:
dig AAAA logs.us-west-2.api.aws
- Test connectivity from host:
curl -6 https://logs.us-west-2.api.aws
- Test connectivity from container:
docker run --rm busybox ping6 logs.us-west-2.api.aws
If these succeed, your container logs should now flow to CloudWatch Logs over native IPv6, without requiring NAT64.
Conclusion: For an IPv6-only environment, switching to the dual-stack endpoint is the preferred fix, with DNS64/NAT64 as a fallback for AWS services that do not yet offer IPv6-capable endpoints. By using the dual-stack endpoint and confirming IPv6 connectivity, your Docker containers can send logs to CloudWatch Logs over IPv6 while avoiding NAT64-related charges.