How do I migrate databases from Amazon EC2 instances or on-premises VMs to Amazon RDS for SQL Server?
I want to migrate databases from an Amazon Elastic Compute Cloud (Amazon EC2) instance or on-premises Microsoft SQL Server instance to my Amazon Relational Database Service (Amazon RDS) for Microsoft SQL Server instance.
Short description
Use one of the following methods to migrate your database:
- Native SQL Server backup and restore
- AWS Database Migration Service (AWS DMS)
- Transactional replication
- Backup Package (.bacpac) file
- Import data with the SQL Server Generate and Publish Script Wizard
- Import data with the SQL Server Import and Export Wizard
- Import data with the SQL Server Bulk Copy Program utility
Migration time and downtime depend on the migration method that you use. For more details, see Best practices for migrating to Amazon RDS for SQL Server.
Resolution
Native SQL Server backup and restore
You can use native backup and restore to migrate a SQL Server database from an on-premises or Amazon EC2 instance to an Amazon RDS for SQL Server instance. This method is slower than methods that use replication, but it can have fewer limitations.
Complete the following steps:
1. Create an Amazon Simple Storage Service (Amazon S3) bucket to store the backup from the source instance. The Amazon S3 bucket must be in the same AWS Region as the Amazon RDS instance.
2. Create an AWS Identity and Access Management (IAM) role to access the bucket.
3. Add the SQLSERVER_BACKUP_RESTORE option to the option group that's associated with the Amazon RDS for SQL Server instance.
4. Create a backup from the source instance (on-premises or Amazon EC2), and then copy it to the Amazon S3 bucket that you created in step 1.
5. Run the following script to restore the backup to the Amazon RDS for SQL Server instance:
exec msdb.dbo.rds_restore_database @restore_db_name='database_name', @s3_arn_to_restore_from='arn:aws:s3:::bucket_name/file_name_and_extension';
Note: Replace database_name with the name of your database, and replace bucket_name and file_name_and_extension with the names of your bucket and backup file.
6. Run the following script to track the status of the restore task. If the task fails, then check the task_info column and the Amazon RDS Troubleshooting guide:
exec msdb.dbo.rds_task_status;
Note: You can also back up and restore differential backups, and you can restore log backups to restore the database to a point in time.
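As a quick sketch of steps 5 and 6, the helper below builds the two T-SQL statements with your database, bucket, and file names substituted in. The helper function names are illustrative, not part of any RDS API; only the generated msdb.dbo.rds_restore_database and rds_task_status calls come from the procedure above.

```python
# Sketch: build the T-SQL statements used in steps 5 and 6.
# The helper functions are illustrative; only the generated
# msdb.dbo.rds_restore_database / rds_task_status calls are real.

def restore_statement(db_name: str, bucket: str, key: str) -> str:
    """Return the T-SQL that restores a native backup from Amazon S3."""
    arn = f"arn:aws:s3:::{bucket}/{key}"
    return (
        "exec msdb.dbo.rds_restore_database "
        f"@restore_db_name='{db_name}', "
        f"@s3_arn_to_restore_from='{arn}';"
    )

def task_status_statement() -> str:
    """Return the T-SQL that reports the status of RDS tasks."""
    return "exec msdb.dbo.rds_task_status;"

print(restore_statement("mydb", "my-bucket", "mydb.bak"))
```

Run the generated statements on the RDS instance with your usual client, for example sqlcmd or SQL Server Management Studio.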
AWS Database Migration Service (AWS DMS)
Complete the following steps:
- Verify the prerequisites and limitations of using SQL Server as a source or target for AWS DMS.
- Create an AWS DMS replication instance.
- Use AWS DMS to create source and target endpoints.
- Create a migration task.
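When you create the migration task, AWS DMS expects a table-mapping document that selects the schemas and tables to migrate. The following is a minimal sketch of such a document, built as a Python dictionary and serialized to JSON; the dbo schema name and the wildcard table selection are assumptions for illustration, so adjust the object-locator to match your source database.

```python
import json

# Sketch: a minimal AWS DMS table-mapping document that selects every
# table in the dbo schema. The schema name is an assumption; adjust
# the object-locator to match your source database.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-dbo",
            "object-locator": {"schema-name": "dbo", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

print(json.dumps(table_mappings, indent=2))
```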
Transactional replication
You can set up transactional replication from an on-premises or Amazon EC2 SQL Server instance to an Amazon RDS for SQL Server instance. However, the Amazon RDS for SQL Server instance can be only a Subscriber, and the on-premises or Amazon EC2 SQL Server instance can be only the Publisher or Distributor with a push subscription.
Note: This method depends on the EC2 hostname, which can change in RDS for SQL Server. Because of this limitation, we recommend that you use this method only for a one-time migration of your data to RDS for SQL Server.
Backup Package (.bacpac) file
The .bacpac file consists of the database metadata and the data compressed into a single file. It's a best practice to use a .bacpac file for databases that are smaller than 200 GB. You can use Export/Import in SQL Server Management Studio or the SqlPackage.exe command line utility to create a .bacpac file. For more information about the .bacpac file, see Migrate SQL Server databases from an Azure SQL database to Amazon RDS for SQL Server using bacpac method.
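As a sketch of the command line route, the helper below assembles SqlPackage.exe invocations for exporting a .bacpac from the source and importing it into the RDS instance. The server names, database names, and file paths are placeholders; pass the resulting lists to subprocess.run on a machine where SqlPackage is installed.

```python
# Sketch: assemble SqlPackage.exe command lines for the .bacpac method.
# Server names, database names, and paths are placeholders.

def export_command(server: str, database: str, bacpac_path: str) -> list[str]:
    """Build the command that exports the source database to a .bacpac file."""
    return [
        "SqlPackage.exe",
        "/Action:Export",
        f"/SourceServerName:{server}",
        f"/SourceDatabaseName:{database}",
        f"/TargetFile:{bacpac_path}",
    ]

def import_command(server: str, database: str, bacpac_path: str) -> list[str]:
    """Build the command that imports the .bacpac file into the RDS instance."""
    return [
        "SqlPackage.exe",
        "/Action:Import",
        f"/SourceFile:{bacpac_path}",
        f"/TargetServerName:{server}",
        f"/TargetDatabaseName:{database}",
    ]

print(export_command("localhost", "mydb", r"C:\mydb.bacpac"))
```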
SQL Server Generate and Publish Script Wizard
If your database is smaller than 1 GB, then you can use the Generate and Publish Script Wizard. For larger databases, you can use the Import and Export Wizard or Bulk copy methods to script the schema of the database. For more information about the Generate and Publish Script Wizard, see How to: Generate a script (SQL Server Management Studio) in the Microsoft SQL Server documentation.
Note: On the Set Scripting Options page, make sure that you select Save scripts to a specific location, and then choose Advanced. The Advanced setting provides additional options for including or excluding objects in the table during import and export.
SQL Server Import and Export Wizard
The Import and Export Wizard creates an integration package. The integration package is used to copy data from your on-premises or Amazon EC2 SQL Server database to the Amazon RDS for SQL Server instance. You can filter the specific tables you want to copy to the Amazon RDS instance.
Before you run the Import and Export Wizard, verify that the wizard can reach the Amazon RDS instance over your network, and configure the permissions for your security group. For more information, see What permissions do I need? in the Microsoft SQL Server documentation.
For more details on the Import and Export Wizard, see How to: Run the SQL Server Import and Export Wizard in the Microsoft SQL Server documentation.
Note: When you run the Import and Export Wizard, choose the following options for the Destination Amazon RDS for SQL Server instance:
- For Server Name, enter the name of the endpoint for the Amazon RDS database instance.
- For Authentication mode, choose SQL Server Authentication.
- For Username and Password, enter the credentials of the master user that you created for the Amazon RDS instance.
SQL Server Bulk Copy Program utility
The Bulk Copy Program (bcp) is a command line utility that's used to bulk copy data between SQL Server instances. You can use the bcp utility to import large sets of data to a SQL Server instance or export to a file.
The following are examples of the IN and OUT commands:
OUT: Use this command to export or dump the records from a table into a file:
bcp dbname.schema_name.table_name out C:\table_name.txt -n -S localhost -U username -P password -b 10000
The previous command includes the following options:
- -n: specifies that the bulk copy uses the native data types of the data to be copied.
- -S: specifies the SQL Server instance that the bcp utility connects to.
- -U: specifies the SQL Server login (username) that's used to connect to the instance.
- -P: specifies the password for the user specified by -U.
- -b: specifies the number of rows per batch of imported data.
IN: Use this command to import all records from the dump file to the table. You must create the table before you run the bcp command.
bcp dbname.schema_name.table_name in C:\table_name.txt -n -S endpoint,port -U master_user_name -P master_user_password -b 10000
For more information, see bcp utility in the Microsoft SQL Server documentation.
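The two bcp command lines above can be sketched as a small builder. The endpoint, port, table name, and credentials below are placeholders; pass the resulting list to subprocess.run on a machine where the bcp utility is installed.

```python
# Sketch: build the bcp OUT (export) and IN (import) command lines
# shown above. Endpoint, credentials, and paths are placeholders.

def bcp_command(direction: str, table: str, data_file: str,
                server: str, user: str, password: str,
                batch_size: int = 10000) -> list[str]:
    """Return a bcp argument list; direction is 'out' (export) or 'in' (import)."""
    if direction not in ("out", "in"):
        raise ValueError("direction must be 'out' or 'in'")
    return [
        "bcp", table, direction, data_file,
        "-n",                    # use native data types
        "-S", server,            # SQL Server instance (endpoint,port for RDS)
        "-U", user,              # SQL Server login
        "-P", password,          # password for that login
        "-b", str(batch_size),   # rows per batch
    ]

print(bcp_command("out", "dbname.schema_name.table_name",
                  r"C:\table_name.txt", "localhost",
                  "username", "password"))
```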
