
Migrating Container Images from Google Artifact Registry to Amazon ECR: A Step-by-Step Guide

8 minute read
Content level: Intermediate

This article demonstrates a simple and cost-effective approach for bulk migration of container images from Google Artifact Registry to Amazon Elastic Container Registry (ECR).

Introduction

When organizations migrate their containerized applications and microservices architectures, ensuring a smooth and secure transfer of container images between registries is essential to the overall migration process. While numerous methods are available, this article demonstrates a simple and cost-effective approach for bulk migration of container images from Google Artifact Registry to Amazon Elastic Container Registry (ECR).

Google Cloud customers who wish to migrate their data to another cloud provider or on-premises infrastructure can take advantage of free network data transfer when moving data out of Google Cloud [1].

Important: Google Container Registry was shut down on March 18, 2025. Google recommends transitioning projects with active Container Registry usage to Artifact Registry repositories [2]. This article assumes Google Artifact Registry as the source for container images.

Prerequisites

Before beginning the migration process, ensure you have:

  • An AWS account with permissions to launch Amazon EC2 instances and create Amazon ECR repositories
  • A Google Cloud project with the container images to be migrated stored in Artifact Registry
  • Permissions in the Google Cloud project to list and pull those images (for example, the Artifact Registry Reader role)

Migration Steps

Launch and Connect to an Amazon EC2 Instance

  1. Follow the Get started with Amazon EC2 guide to launch an Amazon Linux instance. Select Amazon Linux 2023 for the Amazon Machine Image (AMI) and t3.medium for the Instance type.

  2. Connect to your Amazon EC2 instance using Session Manager. You can also connect to the instance using other available options.

  3. Attach the AmazonEC2ContainerRegistryFullAccess policy to the IAM role associated with your EC2 instance. This provides the necessary permissions to interact with Amazon ECR.

    (Figure: IAM Role Permissions)
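If you prefer the command line, the policy attachment in step 3 can be sketched with the AWS CLI. The role name below is a placeholder; substitute the IAM role associated with your instance profile:

```shell
# Hypothetical role name -- replace with the IAM role attached to your EC2 instance.
ROLE_NAME="ec2-migration-role"

# AWS-managed policy granting full Amazon ECR access.
POLICY_ARN="arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess"

aws iam attach-role-policy --role-name "$ROLE_NAME" --policy-arn "$POLICY_ARN"

# Verify the attachment.
aws iam list-attached-role-policies --role-name "$ROLE_NAME"
```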

Install Docker on Amazon EC2 Instance

  1. After connecting to the EC2 instance using Session Manager, run sudo yum update -y to update installed packages and the package cache.

  2. Run sudo yum install -y docker to install Docker.

  3. Run sudo service docker start to start the Docker service.

  4. Add your user to the docker group by running sudo usermod -aG docker $(whoami).

Important: You must terminate your session and reconnect to the EC2 instance for these changes to take effect.
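For copy-paste convenience, the four steps above can be run as a single sequence; the final check, run after you reconnect, confirms your user is in the docker group:

```shell
sudo yum update -y                    # 1. update installed packages
sudo yum install -y docker            # 2. install Docker
sudo service docker start             # 3. start the Docker service
sudo usermod -aG docker "$(whoami)"   # 4. add the current user to the docker group

# After terminating the session and reconnecting, verify group membership:
id -nG | tr ' ' '\n' | grep -x docker
```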

Install Google Cloud CLI on Amazon EC2 Instance

  1. Run cd ~ to switch to the home directory.

  2. Follow these instructions to install Google Cloud CLI on your Amazon EC2 Linux instance. The Google Cloud CLI includes the gcloud, gsutil, and bq command-line tools [3].

  3. Initialize the Google Cloud CLI by running ./google-cloud-sdk/bin/gcloud init. During initialization, follow the prompts to set your default region and Google Cloud project for this migration.

  4. Run source ~/.bashrc to refresh your terminal session so that the gcloud command is available on your PATH.
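Concretely, the download-and-install flow for a 64-bit Linux instance looks like the sketch below. The tarball URL follows Google's published naming pattern at the time of writing; check the install page for the current archive:

```shell
cd ~
# Archive URL from Google's Linux install instructions (x86_64 build).
GCLOUD_TARBALL_URL="https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-linux-x86_64.tar.gz"

curl -O "$GCLOUD_TARBALL_URL"
tar -xf google-cloud-cli-linux-x86_64.tar.gz

# --quiet accepts defaults; --path-update true appends gcloud to PATH in ~/.bashrc.
./google-cloud-sdk/install.sh --quiet --path-update true
```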

Migration Script

  1. Create the migration script by saving the following code as gar-to-ecr-migration.sh using a text editor like nano or vim.

    Note: This script defaults to migrating images from the gcr.io repository. If your images are stored in a different location (such as us.gcr.io, eu.gcr.io, or asia.gcr.io), update the GCP_REPOSITORY variable accordingly.

    #!/bin/bash
    
    # =============================================================================
    # GAR to ECR Migration Script
    # This script migrates Docker images from Google Artifact Registry (GAR) 
    # to Amazon Elastic Container Registry (ECR)
    # =============================================================================
    
    # AWS Configuration - Automatically detect AWS account and region
    AWS_ACCOUNT=$(aws sts get-caller-identity --query Account --output text)  # Get current AWS account ID
    TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")  # Get IMDSv2 session token
    AWS_REGION=$(curl -s -H "X-aws-ec2-metadata-token: ${TOKEN}" http://169.254.169.254/latest/meta-data/placement/region)  # Get current AWS region from EC2 metadata
    ECR_DOMAIN="${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com"  # Construct ECR registry URL
    
    # Google Configuration - Get current GCP project and set repository
    GCP_PROJECT=$(gcloud config get-value project)  # Get currently configured GCP project
    GCP_REPOSITORY="gcr.io"  # GAR repository URL - change to "us.gcr.io", "eu.gcr.io", or "asia.gcr.io" if needed
    
    # Display current configuration for user verification
    echo "=== Configuration Verification ==="
    echo "AWS Account ID: ${AWS_ACCOUNT}"
    echo "AWS Region: ${AWS_REGION}"
    echo "GCP Project: ${GCP_PROJECT}"
    echo "ECR Domain: ${ECR_DOMAIN}"
    echo "GAR Repository: ${GCP_REPOSITORY}/${GCP_PROJECT}"
    echo "=================================="
    
    # Validate that all required configuration values are present
    if [ -z "$AWS_ACCOUNT" ] || [ -z "$AWS_REGION" ] || [ -z "$GCP_PROJECT" ]; then
        echo "❌ Error: Missing required configuration"
        echo "Prerequisites:"
        echo "  - AWS CLI configured and authenticated (run: aws configure)"
        echo "  - GCP CLI configured and authenticated (run: gcloud auth login)"
        echo "  - Docker installed and running"
        exit 1
    fi
    
    # Function to create ECR repository if it doesn't exist
    # This prevents errors when trying to push to non-existent repositories
    create_ecr_repo() {
        local repo_name=$1
        echo "  📦 Checking if ECR repository '${repo_name}' exists..."
        # Try to describe the repository; if it fails, create it
        aws ecr describe-repositories --repository-names "${repo_name}" --region "${AWS_REGION}" >/dev/null 2>&1 || {
            echo "  ➕ Creating ECR repository: ${repo_name}"
            aws ecr create-repository --repository-name "${repo_name}" --region "${AWS_REGION}"
        }
    }
    
    # Step 1: Authenticate with AWS ECR
    echo "🔐 Authenticating with AWS ECR..."
    # Get ECR login token and authenticate Docker with ECR
    aws ecr get-login-password --region ${AWS_REGION} | \
        docker login --username AWS --password-stdin ${ECR_DOMAIN}
    
    # Step 2: Authenticate with Google Artifact Registry
    echo "🔐 Authenticating with Google Artifact Registry..."
    # Configure Docker to use gcloud as credential helper for GAR
    gcloud auth configure-docker
    
    # Step 3: Get list of all images from GAR
    echo "📋 Fetching list of images from GAR repository: ${GCP_REPOSITORY}/${GCP_PROJECT}..."
    images=$(gcloud container images list --repository=${GCP_REPOSITORY}/${GCP_PROJECT} --format="get(name)")
    
    # Step 4: Process each image found in GAR
    for image in $images; do
        echo ""
        echo "🔄 Processing image: ${image}"
        
        # Get all tags for the current image (excluding untagged images)
        echo "  📌 Fetching tags for ${image}..."
        tags=$(gcloud container images list-tags ${image} --format="get(tags)" --filter="tags:*" | tr ';' '\n')
    
        # Extract repository name by removing the GAR prefix
        # Example: gcr.io/my-project/my-app -> my-app
        repo_name=$(echo ${image} | sed "s|${GCP_REPOSITORY}/${GCP_PROJECT}/||g")
        echo "  📂 Repository name: ${repo_name}"
    
        # Create corresponding ECR repository if it doesn't exist
        create_ecr_repo "${repo_name}"
    
        # Process each tag for the current image
        for tag in $tags; do
            if [ ! -z "$tag" ]; then
                echo ""
                echo "  🏷️  Processing tag: ${tag}"
                echo "    Source: ${image}:${tag}"
                echo "    Target: ${ECR_DOMAIN}/${repo_name}:${tag}"
    
                # Step 4a: Pull image from GAR to local Docker
                echo "    ⬇️  Pulling from GAR..."
                docker pull ${image}:${tag}
    
                # Step 4b: Tag the image for ECR destination
                echo "    🏷️  Tagging for ECR..."
                docker tag ${image}:${tag} ${ECR_DOMAIN}/${repo_name}:${tag}
    
                # Step 4c: Push image to ECR
                echo "    ⬆️  Pushing to ECR..."
                docker push ${ECR_DOMAIN}/${repo_name}:${tag}
    
                # Step 4d: Clean up local images to save disk space
                echo "    🧹 Cleaning up local images..."
                docker rmi ${image}:${tag} 2>/dev/null || true
                docker rmi ${ECR_DOMAIN}/${repo_name}:${tag} 2>/dev/null || true
    
                echo "    ✅ Successfully migrated ${image}:${tag}"
            fi
        done
    done
    
    echo ""
    echo "🎉 Migration complete!"
    echo ""
    echo "📊 Summary:"
    echo "  - All images have been migrated from GAR to ECR"
    echo "  - ECR repositories created as needed"
    echo "  - Local Docker images cleaned up to save space"
    
  2. Run chmod +x gar-to-ecr-migration.sh to make it executable.

  3. Run ./gar-to-ecr-migration.sh to initiate the migration process.
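After the script finishes, it is worth diffing the two registries' repository lists. The sketch below assumes gcloud and the AWS CLI are still authenticated; compare_lists is a small helper introduced here for illustration, not part of the migration script:

```shell
# Live listings (uncomment once authenticated; GCP_PROJECT as in the migration script):
# gar_repos=$(gcloud container images list --repository="gcr.io/${GCP_PROJECT}" --format="get(name)" | sed "s|gcr.io/${GCP_PROJECT}/||")
# ecr_repos=$(aws ecr describe-repositories --query "repositories[].repositoryName" --output text | tr '\t' '\n')

# Helper: print entries present in the first newline-separated list
# but missing from the second.
compare_lists() {
    echo "$1" | sort | while read -r item; do
        echo "$2" | grep -qx "$item" || echo "$item"
    done
}

# Sample data standing in for the live listings above:
missing=$(compare_lists "$(printf 'app-a\napp-b\napp-c')" "$(printf 'app-a\napp-c')")
echo "Missing from ECR: ${missing:-none}"   # -> Missing from ECR: app-b
```

An empty result means every source repository has a counterpart in ECR; tags can be compared the same way per repository.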

Clean Up

To avoid ongoing AWS charges, make sure to terminate the EC2 instance you launched for this migration.

  • Go to the EC2 Console
  • Select the instance you created for migration
  • Choose Instance State, then Terminate instance
  • Confirm termination when prompted
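The same clean-up can be done from the command line; the instance ID below is a placeholder for the one you launched:

```shell
# Hypothetical instance ID -- substitute the ID of your migration instance.
INSTANCE_ID="i-0123456789abcdef0"

aws ec2 terminate-instances --instance-ids "$INSTANCE_ID"

# Optionally block until termination completes:
aws ec2 wait instance-terminated --instance-ids "$INSTANCE_ID"
```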

Important: Make sure your container images are successfully migrated and working in your new environment before deleting any resources.

Next Steps

After completing the migration, take these additional steps to ensure a smooth transition:

  1. Update deployment configurations to reference the new ECR image URLs
  2. Test your applications thoroughly with the migrated ECR images
  3. Implement ECR lifecycle policies to optimize storage costs and manage image retention
  4. Update CI/CD pipelines to push new images directly to ECR
  5. Document the new image locations for your team
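As an example of step 3, a lifecycle policy can cap how many images each repository retains. The sketch below keeps the 10 most recent images and expires the rest; the repository name and retention count are illustrative values to adapt:

```shell
# Illustrative values -- adjust the repository name and count for your needs.
REPO_NAME="my-app"
LIFECYCLE_POLICY='{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Keep only the 10 most recent images",
      "selection": {
        "tagStatus": "any",
        "countType": "imageCountMoreThan",
        "countNumber": 10
      },
      "action": { "type": "expire" }
    }
  ]
}'

aws ecr put-lifecycle-policy \
    --repository-name "$REPO_NAME" \
    --lifecycle-policy-text "$LIFECYCLE_POLICY"
```

Applying the same policy across all migrated repositories can be scripted with a loop over aws ecr describe-repositories output.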

Conclusion

Migrating container images from Google Artifact Registry to Amazon ECR can be accomplished efficiently and securely using the automated approach outlined in this guide. This migration strategy offers several key benefits:

  • Cost-effective: Leverages Google Cloud's free egress policy for customers migrating away from their services
  • Automated: Reduces manual effort and potential errors through scripted migration
  • Comprehensive: Handles bulk migration of multiple images and tags simultaneously

By following these steps, development teams can ensure their containerized applications continue operating seamlessly in their new AWS environment while minimizing downtime and migration costs. The provided script serves as a foundation that can be customized for specific organizational needs and extended for more complex migration scenarios.

References

[1] Removing data transfer fees when moving off Google Cloud

[2] Google Container Registry deprecation

[3] Install the gcloud CLI