Migrating Container Images from Google Artifact Registry to Amazon ECR: A Step-by-Step Guide
This article demonstrates a simple and cost-effective approach for bulk migration of container images from Google Artifact Registry to Amazon Elastic Container Registry (ECR).
Introduction
When organizations migrate their containerized applications and microservices architectures, ensuring a smooth and secure transfer of container images between registries is essential to the overall migration process. While numerous methods are available, this article demonstrates a simple and cost-effective approach for bulk migration of container images from Google Artifact Registry to Amazon Elastic Container Registry (ECR).
Google Cloud customers who wish to migrate their data to another cloud provider or on-premises infrastructure can take advantage of free network data transfer when moving data out of Google Cloud [1].
Important: As of March 18, 2025, Google Container Registry has been shut down. Google recommends transitioning projects with active Container Registry usage to Artifact Registry repositories [2]. This article assumes Google Artifact Registry as the source for container images.
Prerequisites
Before beginning the migration process, ensure you have:
- An active AWS account (if you don't have one, follow the Setting Up Your Environment tutorial)
- Container images stored in Google Artifact Registry
- Appropriate permissions in both Google Cloud and AWS environments
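Before launching anything, it can help to confirm that your credentials in both clouds resolve to the account and project you expect. A minimal check, assuming the AWS CLI and the gcloud CLI are available wherever you run it:

```bash
# Confirm which AWS account your credentials resolve to
aws sts get-caller-identity --query Account --output text

# Confirm which Google Cloud account and project gcloud is configured for
gcloud auth list
gcloud config get-value project
```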
Migration Steps
Launch and Connect to an Amazon EC2 Instance
- Follow the Get started with Amazon EC2 guide to launch an Amazon Linux instance. Select `Amazon Linux 2023` for the Amazon Machine Image (AMI) and `t3.medium` for the Instance type.
- Connect to your Amazon EC2 instance using Session Manager. You can also connect to the instance using other available options.
- Attach the AmazonEC2ContainerRegistryFullAccess policy to the IAM role associated with your EC2 instance. This provides the necessary permissions to interact with Amazon ECR.
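If you prefer the AWS CLI to the console for the last step, the managed policy can be attached to the instance role directly. This is a sketch only; `ec2-migration-role` is a placeholder for whatever role is attached to your instance:

```bash
# Attach the AmazonEC2ContainerRegistryFullAccess managed policy to the instance role
aws iam attach-role-policy \
  --role-name ec2-migration-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess

# Verify the policy now appears on the role
aws iam list-attached-role-policies --role-name ec2-migration-role
```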
Install Docker on Amazon EC2 Instance
- After connecting to the EC2 instance using Session Manager, run `sudo yum update -y` to update installed packages and the package cache.
- Run `sudo yum install -y docker` to install Docker.
- Run `sudo service docker start` to start the Docker service.
- Add your user to the docker group by running `sudo usermod -aG docker $(whoami)`.
Important: You must terminate your session and reconnect to the EC2 instance for these changes to take effect.
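For convenience, the same installation steps collected into a single block (these are exactly the commands from the list above):

```bash
# Update packages and install Docker on Amazon Linux 2023
sudo yum update -y
sudo yum install -y docker

# Start the Docker service and allow the current user to run docker without sudo
sudo service docker start
sudo usermod -aG docker $(whoami)
# Remember to disconnect and reconnect for the group change to take effect
```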
Install Google Cloud CLI on Amazon EC2 Instance
- Run `cd ~` to switch to the home directory.
- Follow these instructions to install the Google Cloud CLI on your Amazon EC2 Linux instance. The Google Cloud CLI includes the `gcloud`, `gsutil`, and `bq` command-line tools [3].
- Initialize the Google Cloud CLI by running `./google-cloud-sdk/bin/gcloud init`. During initialization, follow the prompts to set your default region and Google Cloud project for this migration.
- After installation, run `source ~/.bashrc` to refresh your terminal session and enable `gcloud` commands.
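As a rough sketch of what the linked installation instructions involve on a 64-bit Linux instance (the download URL and archive name can change over time, so treat Google's documentation as authoritative):

```bash
# Download and unpack the Google Cloud CLI into the home directory
cd ~
curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-linux-x86_64.tar.gz
tar -xf google-cloud-cli-linux-x86_64.tar.gz

# Run the installer; it can optionally add gcloud to your PATH via ~/.bashrc
./google-cloud-sdk/install.sh
source ~/.bashrc

# Initialize the CLI and authenticate against your Google Cloud project
./google-cloud-sdk/bin/gcloud init
```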
Migration Script
- Create the migration script by saving the following code as `gar-to-ecr-migration.sh` using a text editor like `nano` or `vim`.

Note: This script defaults to migrating images from the `gcr.io` repository. If your images are stored in a different location (such as `us.gcr.io`, `eu.gcr.io`, or `asia.gcr.io`), update the `GCP_REPOSITORY` variable accordingly.

```bash
#!/bin/bash
# =============================================================================
# GAR to ECR Migration Script
# This script migrates Docker images from Google Artifact Registry (GAR)
# to Amazon Elastic Container Registry (ECR)
# =============================================================================

# AWS Configuration - Automatically detect AWS account and region
AWS_ACCOUNT=$(aws sts get-caller-identity --query Account --output text)   # Get current AWS account ID
TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")   # Get EC2 metadata token for security
AWS_REGION=$(curl -H "X-aws-ec2-metadata-token: ${TOKEN}" http://169.254.169.254/latest/meta-data/placement/region)   # Get current AWS region from EC2 metadata
ECR_DOMAIN="${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com"   # Construct ECR registry URL

# Google Configuration - Get current GCP project and set repository
GCP_PROJECT=$(gcloud config get-value project)   # Get currently configured GCP project
GCP_REPOSITORY="gcr.io"   # GAR repository URL - change to "us.gcr.io", "eu.gcr.io", or "asia.gcr.io" if needed

# Display current configuration for user verification
echo "=== Configuration Verification ==="
echo "AWS Account ID: ${AWS_ACCOUNT}"
echo "AWS Region: ${AWS_REGION}"
echo "GCP Project: ${GCP_PROJECT}"
echo "ECR Domain: ${ECR_DOMAIN}"
echo "GAR Repository: ${GCP_REPOSITORY}/${GCP_PROJECT}"
echo "=================================="

# Validate that all required configuration values are present
if [ -z "$AWS_ACCOUNT" ] || [ -z "$AWS_REGION" ] || [ -z "$GCP_PROJECT" ]; then
    echo "❌ Error: Missing required configuration"
    echo "Prerequisites:"
    echo "  - AWS CLI configured and authenticated (run: aws configure)"
    echo "  - GCP CLI configured and authenticated (run: gcloud auth login)"
    echo "  - Docker installed and running"
    exit 1
fi

# Function to create ECR repository if it doesn't exist
# This prevents errors when trying to push to non-existent repositories
create_ecr_repo() {
    local repo_name=$1
    echo "  📦 Checking if ECR repository '${repo_name}' exists..."
    # Try to describe the repository; if it fails, create it
    aws ecr describe-repositories --repository-names "${repo_name}" 2>/dev/null || {
        echo "  ➕ Creating ECR repository: ${repo_name}"
        aws ecr create-repository --repository-name "${repo_name}"
    }
}

# Step 1: Authenticate with AWS ECR
echo "🔐 Authenticating with AWS ECR..."
# Get ECR login token and authenticate Docker with ECR
aws ecr get-login-password --region ${AWS_REGION} | \
    docker login --username AWS --password-stdin ${ECR_DOMAIN}

# Step 2: Authenticate with Google Artifact Registry
echo "🔐 Authenticating with Google Artifact Registry..."
# Configure Docker to use gcloud as credential helper for GAR
gcloud auth configure-docker

# Step 3: Get list of all images from GAR
echo "📋 Fetching list of images from GAR repository: ${GCP_REPOSITORY}/${GCP_PROJECT}..."
images=$(gcloud container images list --repository=${GCP_REPOSITORY}/${GCP_PROJECT} --format="get(name)")

# Step 4: Process each image found in GAR
for image in $images; do
    echo ""
    echo "🔄 Processing image: ${image}"

    # Get all tags for the current image (excluding untagged images)
    echo "  📌 Fetching tags for ${image}..."
    tags=$(gcloud container images list-tags ${image} --format="get(tags)" --filter="tags:*" | tr ';' '\n')

    # Extract repository name by removing the GAR prefix
    # Example: gcr.io/my-project/my-app -> my-app
    repo_name=$(echo ${image} | sed "s|${GCP_REPOSITORY}/${GCP_PROJECT}/||g")
    echo "  📂 Repository name: ${repo_name}"

    # Create corresponding ECR repository if it doesn't exist
    create_ecr_repo "${repo_name}"

    # Process each tag for the current image
    for tag in $tags; do
        if [ ! -z "$tag" ]; then
            echo ""
            echo "  🏷️ Processing tag: ${tag}"
            echo "    Source: ${image}:${tag}"
            echo "    Target: ${ECR_DOMAIN}/${repo_name}:${tag}"

            # Step 4a: Pull image from GAR to local Docker
            echo "  ⬇️ Pulling from GAR..."
            docker pull ${image}:${tag}

            # Step 4b: Tag the image for ECR destination
            echo "  🏷️ Tagging for ECR..."
            docker tag ${image}:${tag} ${ECR_DOMAIN}/${repo_name}:${tag}

            # Step 4c: Push image to ECR
            echo "  ⬆️ Pushing to ECR..."
            docker push ${ECR_DOMAIN}/${repo_name}:${tag}

            # Step 4d: Clean up local images to save disk space
            echo "  🧹 Cleaning up local images..."
            docker rmi ${image}:${tag} 2>/dev/null || true
            docker rmi ${ECR_DOMAIN}/${repo_name}:${tag} 2>/dev/null || true

            echo "  ✅ Successfully migrated ${image}:${tag}"
        fi
    done
done

echo ""
echo "🎉 Migration complete!"
echo ""
echo "📊 Summary:"
echo "  - All images have been migrated from GAR to ECR"
echo "  - ECR repositories created as needed"
echo "  - Local Docker images cleaned up to save space"
```
- Run `chmod +x gar-to-ecr-migration.sh` to make it executable.
- Run `./gar-to-ecr-migration.sh` to initiate the migration process.
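Once the script reports completion, it is worth spot-checking the target registry before tearing anything down. A minimal verification sketch; `my-app` is a placeholder repository name:

```bash
# List the repositories that now exist in the target ECR registry
aws ecr describe-repositories --query "repositories[].repositoryName" --output table

# List the image tags that landed in one of the migrated repositories
aws ecr list-images --repository-name my-app --query "imageIds[].imageTag" --output table
```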
Clean Up
To avoid ongoing AWS charges, make sure to Terminate the EC2 Instance.
- Go to the EC2 Console
- Select the instance you created for migration
- Choose Instance State → Terminate instance
- Confirm termination when prompted
Important: Make sure your container images are successfully migrated and working in your new environment before deleting any resources.
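If you prefer the AWS CLI over the console for this clean-up, the same termination looks roughly like the following; the instance ID is a placeholder:

```bash
# Terminate the migration instance (replace the placeholder instance ID)
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0

# Optionally confirm the instance reaches the 'terminated' state
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query "Reservations[].Instances[].State.Name" --output text
```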
Next Steps
After completing the migration, take these additional steps to ensure a smooth transition:
- Update deployment configurations to reference the new ECR image URLs
- Test your applications thoroughly with the migrated ECR images
- Implement ECR lifecycle policies to optimize storage costs and manage image retention (a sketch follows this list)
- Update CI/CD pipelines to push new images directly to ECR
- Document the new image locations for your team
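As an illustration of the lifecycle-policy item above, here is a minimal sketch that expires all but the most recent images in a repository; the repository name and retention count are placeholders, not recommendations:

```bash
# Keep only the 20 most recent images in the repository; older images expire.
# 'my-app' is a placeholder repository name.
aws ecr put-lifecycle-policy \
  --repository-name my-app \
  --lifecycle-policy-text '{
    "rules": [
      {
        "rulePriority": 1,
        "description": "Expire all but the 20 most recent images",
        "selection": {
          "tagStatus": "any",
          "countType": "imageCountMoreThan",
          "countNumber": 20
        },
        "action": { "type": "expire" }
      }
    ]
  }'
```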
Conclusion
Migrating container images from Google Artifact Registry to Amazon ECR can be accomplished efficiently and securely using the automated approach outlined in this guide. This migration strategy offers several key benefits:
- Cost-effective: Leverages Google Cloud's free egress policy for customers migrating away from its services
- Automated: Reduces manual effort and potential errors through scripted migration
- Comprehensive: Handles bulk migration of multiple images and tags simultaneously
By following these steps, development teams can ensure their containerized applications continue operating seamlessly in their new AWS environment while minimizing downtime and migration costs. The provided script serves as a foundation that can be customized for specific organizational needs and extended for more complex migration scenarios.
References
[1] Removing data transfer fees when moving off Google Cloud