I want to import data from my external database into an Amazon Relational Database Service (Amazon RDS) instance with minimal downtime.
Resolution
Prepare your data import
Note: When you import data, the downtime that occurs depends on the size of the source database and the database engine type.
For all Amazon RDS data import and export operations, take the following actions:
- Use compression and multiple threads to load and unload data in parallel where applicable. If you're loading a large amount of data in parallel, then make sure that the client machine has sufficient resources before you import the data.
- Set the backup retention period for your target Amazon RDS DB instance to 0 to turn off automated backups and improve performance for large data loads. Then, reboot your RDS DB instance to apply the change.
Note: It's a best practice to not turn off backups for production instances. When you turn off backups, you prevent point-in-time recovery and delete the existing automated snapshots of the DB instance. You can't recover those deleted snapshots.
- After the data load completes, turn on automated backups. It's a best practice to create a DB snapshot at critical stages of the data load. This allows you to restore your instance from a snapshot, if required.
To reduce the write overhead that synchronous replication to the standby causes during data loads, turn off Multi-AZ.
Note: It's a best practice to use Multi-AZ for all production RDS DB instances. Make sure that you turn on Multi-AZ after the data load is complete.
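The backup retention and Multi-AZ changes above can be made with the AWS CLI. The following is a minimal sketch; the instance identifier and the 7-day retention period are example values, so replace them with your own:

```shell
# Example DB instance identifier; replace with your own.
DB_INSTANCE=mydbinstance

# Before the load: turn off automated backups and Multi-AZ.
aws rds modify-db-instance \
    --db-instance-identifier "$DB_INSTANCE" \
    --backup-retention-period 0 \
    --no-multi-az \
    --apply-immediately

# ... perform the data load ...

# After the load: turn automated backups and Multi-AZ back on.
aws rds modify-db-instance \
    --db-instance-identifier "$DB_INSTANCE" \
    --backup-retention-period 7 \
    --multi-az \
    --apply-immediately
```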
To import data from on-premises environments, use AWS Database Migration Service (AWS DMS). AWS DMS supports homogeneous and heterogeneous migrations with minimal downtime.
Import data to Amazon RDS for MySQL
To import data from your external MySQL database, see Importing data from an external MySQL database to an Amazon RDS for MySQL DB instance.
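As a sketch of one common approach, you can use mysqldump with compression to copy the source database and then load it into the RDS endpoint. The hostnames, user, and database name below are example values:

```shell
# Dump the source database in a consistent snapshot and compress the output.
mysqldump --host source.example.com --user admin -p \
    --single-transaction --routines --triggers mydb \
    | gzip > mydb.sql.gz

# Load the dump into the RDS for MySQL instance endpoint.
gunzip -c mydb.sql.gz \
    | mysql --host mydbinstance.abc123.us-east-1.rds.amazonaws.com \
            --user admin -p mydb
```

The --single-transaction option keeps the dump consistent without locking InnoDB tables, which helps minimize downtime on the source.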
Import data to Amazon RDS for MariaDB
To import data from your external MariaDB database, see Importing data from an external MariaDB database to an Amazon RDS for MariaDB DB instance.
Import data to Amazon RDS for PostgreSQL
To import data from your external PostgreSQL database, use either Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Simple Storage Service (Amazon S3).
Note: You can use Amazon S3 to import data only if your database runs PostgreSQL version 10.7 or later.
For more information, see pg_dump, psql, and COPY on the PostgreSQL website.
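A minimal sketch of a dump-and-load with pg_dump and psql follows; the hostnames, user, and database name are example values:

```shell
# Dump the source database as plain SQL and compress it.
pg_dump -h source.example.com -U postgres mydb | gzip > mydb.sql.gz

# Load the dump into the RDS for PostgreSQL instance endpoint with psql.
gunzip -c mydb.sql.gz \
    | psql -h mydbinstance.abc123.us-east-1.rds.amazonaws.com \
           -U postgres -d mydb
```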
Import data to Amazon RDS for Oracle
To import data from an external Oracle database, use Oracle SQL Developer, Oracle Data Pump, or AWS DMS.
For large databases, use a database link to perform a data pump export and import to the directory that's defined on the instance. If you want to specify export parameters, then you must use database links. For more information, see Creating database links, Oracle Data Pump Export, Oracle Data Pump Import, and About Oracle Data Pump Export parameters on the Oracle website.
To use Oracle Data Pump with Amazon S3, see Importing data with Oracle Data Pump and an S3 bucket. To use Oracle Data Pump with a database link, see Importing data with Oracle Data Pump and a database link.
For more information about Oracle Data Pump, see Overview of Oracle Data Pump. For more information about the impdp command, see Starting Oracle Data Pump Import on the Oracle website.
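As a sketch, an impdp invocation over a database link looks like the following. The connect alias, database link name, and schema name are example values that you must replace with your own:

```shell
# Import directly from the source over a database link (NETWORK_LINK),
# so no dump file needs to be transferred manually.
impdp admin@rds_tns_alias \
    DIRECTORY=DATA_PUMP_DIR \
    NETWORK_LINK=to_source_db \
    SCHEMAS=myschema \
    LOGFILE=import.log
```

With NETWORK_LINK, the import reads from the source database through the link and writes only the log file to the directory defined on the instance.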
Import data to Amazon RDS for SQL Server
To import data from an external SQL Server database, use native backup and restore.
Note: Make sure to review the limitations and recommendations for native backup and restore.
You can use SQL Server Management Studio, Transact-SQL, or PowerShell to create a full database backup. For more information, see Create a full database backup on the SQL Server website.
Then, use the rds_restore_database stored procedure to restore to Amazon RDS for SQL Server.
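A sketch of the restore with sqlcmd follows; the endpoint, credentials, database name, and S3 ARN are example values:

```shell
# Start the restore from a .bak file that was uploaded to Amazon S3.
sqlcmd -S mydbinstance.abc123.us-east-1.rds.amazonaws.com -U admin -P 'password' -Q "
exec msdb.dbo.rds_restore_database
    @restore_db_name='mydb',
    @s3_arn_to_restore_from='arn:aws:s3:::mybucket/mydb.bak';"

# Check the progress of the restore task.
sqlcmd -S mydbinstance.abc123.us-east-1.rds.amazonaws.com -U admin -P 'password' -Q "
exec msdb.dbo.rds_task_status @db_name='mydb';"
```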
Import data to Aurora MySQL-Compatible
To import data from an external MySQL database to Amazon Aurora MySQL-Compatible Edition, use logical migration or physical migration.
Note: If you choose physical migration, then review the limitations and considerations.
You can use native tools such as mysqldump or mydumper to create a copy of your MySQL database. Also, use Aurora MySQL-Compatible binary log replication to reduce downtime.
Note: For all migration options, make sure that you convert all source tables to an InnoDB storage engine with dynamic row format. This accelerates the speed of your migration. For more information, see Migrating data to an Aurora MySQL-Compatible DB cluster.
For large database migrations, use mydumper and myloader to perform multi-threaded dumps and imports.
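A minimal sketch of a multi-threaded dump and load with mydumper and myloader follows; the hosts, user, database name, thread count, and directory are example values:

```shell
# Multi-threaded, compressed dump from the source database.
mydumper --host source.example.com --user admin --ask-password \
    --database mydb --threads 8 --compress --outputdir /tmp/mydb-dump

# Multi-threaded load into the Aurora MySQL-Compatible cluster endpoint.
myloader --host mycluster.cluster-abc123.us-east-1.rds.amazonaws.com \
    --user admin --ask-password --database mydb \
    --threads 8 --directory /tmp/mydb-dump
```

Tune the thread count to the client machine's resources, per the guidance on parallel loads earlier in this article.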
Import data to Aurora PostgreSQL-Compatible
To import data from an external PostgreSQL database, use either AWS DMS or Amazon S3.
Note: You can use Amazon S3 to import data only if your database runs Aurora PostgreSQL-Compatible Edition version 10.14, 11.9, 12.4, or later.
To create a copy of the source database, use native tools such as pg_dump and pg_restore.
To import the data to your Aurora PostgreSQL-Compatible DB instance, use psql or pg_restore to load the dump that you created.
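A sketch of a custom-format dump and parallel restore with pg_dump and pg_restore follows; the hostnames, user, database name, and job count are example values:

```shell
# Custom-format dump (-Fc) so that pg_restore can load it with parallel jobs.
pg_dump -Fc -h source.example.com -U postgres -f mydb.dump mydb

# Restore into the Aurora cluster endpoint with 4 parallel jobs.
pg_restore -h mycluster.cluster-abc123.us-east-1.rds.amazonaws.com \
    -U postgres -d mydb --no-owner -j 4 mydb.dump
```

The -j option restores multiple tables concurrently, which follows the parallel-load guidance earlier in this article.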
Related information
Best practices for Amazon RDS
Exporting data from a MySQL DB instance by using replication
Best practices for migrating PostgreSQL databases to Amazon RDS and Amazon Aurora