
Migration from MySQL to Aurora by using the same DMS task.


We are planning to migrate our databases to Aurora, and we would like to know what the impact on our pipeline would be. We are currently running a DMS task (full load + ongoing replication) that migrates data from MySQL to an S3 bucket (landing bucket), and from there we move the data to another S3 bucket (raw bucket) using DataSync and Glue. From the raw bucket, data is ingested into Snowflake using pipes, streams, and tasks that process the CDC records. We would therefore like to know the best practice to follow so that nothing in our pipeline breaks and the integrity of our data is ensured.

1 Answer

Your question is ambiguous: 1) do you want to change the replication destination from S3 to Aurora MySQL and then continue the rest of the process as is, or 2) do you want to keep this process and set up a separate process to migrate the data to Aurora?

I will provide separate answers for the two use cases:

Use Case 1: Change the destination from S3 to Aurora MySQL and continue the rest of the process as is.

In this scenario, you want to modify your existing pipeline to replicate data directly from MySQL to Aurora MySQL, and then continue the subsequent steps (DataSync, Glue, and Snowflake ingestion) as they are currently configured. Here are the steps:

  1. Create an Aurora MySQL-Compatible Edition Cluster:

    • Provision a new Aurora MySQL-Compatible Edition cluster with the desired specifications and configurations.
    • Ensure that the Aurora cluster is compatible with your existing MySQL version and configurations.
  2. Update the DMS Task:

    • In your existing DMS task, update the target endpoint to point to the newly created Aurora MySQL-Compatible Edition cluster instead of the S3 bucket.
    • Keep the existing source endpoint (MySQL) and the migration task settings (Full Load + CDC) as they are.
    • Test the updated DMS task to ensure that data is being replicated correctly from MySQL to the Aurora cluster.
  3. Validate Data in Aurora Cluster:

    • Perform data validation to ensure that the data in the Aurora cluster matches the data in your source MySQL database.
  4. Continue with Existing Pipeline:

    • Since the rest of your pipeline (DataSync, Glue, and Snowflake ingestion) is configured to read data from the S3 bucket, you can continue using these components as they are currently set up.
    • DataSync will continue to move data from the landing bucket to the raw bucket.
    • Glue jobs and Snowflake ingestion processes will continue to read data from the raw bucket.
  5. Cutover:

    • After successful validation, cut over to the new pipeline: use the Aurora cluster as the source for DMS (for example, a task that replicates from Aurora to the landing S3 bucket), so the downstream components keep receiving data.
    • Decommission the old DMS task that was replicating data to the S3 bucket.

In this use case, the only change you need to make is to update the DMS task to replicate data directly to the Aurora cluster instead of the S3 bucket. The rest of your pipeline components (DataSync, Glue, and Snowflake ingestion) can continue to operate as they are currently configured, reading data from the S3 buckets.
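As a rough sketch, the endpoint/task change in step 2 could look like the following with Python and boto3. All identifiers, ARNs, and the helper function are placeholders, not resources from your account. Note that a DMS task's endpoints cannot be changed after creation, so "updating the target endpoint" in practice means creating a new Aurora target endpoint and a new task that reuses your existing MySQL source endpoint:

```python
import json

# Hypothetical helper: build the parameters for a DMS task that replicates
# from the existing MySQL source endpoint to a new Aurora target endpoint.
# All ARNs and identifiers below are placeholders.
def build_aurora_task_params(source_arn, target_arn, instance_arn):
    # Include every schema and table; narrow this to your real scope.
    table_mappings = {
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }
    return {
        "ReplicationTaskIdentifier": "mysql-to-aurora",
        "SourceEndpointArn": source_arn,
        "TargetEndpointArn": target_arn,
        "ReplicationInstanceArn": instance_arn,
        "MigrationType": "full-load-and-cdc",  # full load + ongoing replication
        "TableMappings": json.dumps(table_mappings),
    }

params = build_aurora_task_params(
    "arn:aws:dms:eu-west-1:123456789012:endpoint:SOURCE",
    "arn:aws:dms:eu-west-1:123456789012:endpoint:TARGET",
    "arn:aws:dms:eu-west-1:123456789012:rep:INSTANCE",
)
# With live credentials, this dict would be passed as:
#   boto3.client("dms").create_replication_task(**params)
print(params["MigrationType"])  # → full-load-and-cdc
```

The same task settings you use today (full load + CDC) carry over unchanged; only the target endpoint differs.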

Use Case 2: Keep the existing process and set up a separate process to migrate data to Aurora.

In this scenario, you want to keep your existing pipeline intact and set up a separate process to migrate data from MySQL to Aurora MySQL. A single DMS task cannot replicate to two destinations, so this requires a second task; depending on your replication instance utilization, either increase the capacity of the existing replication instance or set up another replication instance for the new DMS task.
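To decide between scaling up and adding a replication instance, you would look at the instance's CloudWatch metrics (namespace "AWS/DMS", e.g. CPUUtilization and FreeableMemory). A minimal decision helper might look like this; the 60% CPU and 2 GiB memory thresholds are purely illustrative, not AWS guidance:

```python
# Hypothetical rule of thumb: if the existing replication instance is
# already busy, provision a second instance for the new task instead of
# sharing it. Inputs would come from CloudWatch (namespace "AWS/DMS").
def needs_second_instance(cpu_utilization_pct, freeable_memory_bytes):
    # Thresholds are illustrative assumptions, not AWS recommendations.
    return cpu_utilization_pct > 60.0 or freeable_memory_bytes < 2 * 1024**3

print(needs_second_instance(75.0, 4 * 1024**3))  # → True (CPU already high)
print(needs_second_instance(30.0, 8 * 1024**3))  # → False
```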

Here are the steps:

  1. Create an Aurora MySQL-Compatible Edition Cluster:

    • Provision a new Aurora MySQL-Compatible Edition cluster with the desired specifications and configurations.
    • Ensure that the Aurora cluster is compatible with your existing MySQL version and configurations.
  2. Set up a New DMS Task for Migration:

    • Create a new DMS task specifically for migrating data from MySQL to the Aurora cluster.
    • Configure the source endpoint as your existing MySQL database and the target endpoint as the newly created Aurora cluster.
    • Set up the migration task to perform a full load and ongoing replication (CDC) from MySQL to Aurora.
  3. Validate Data in Aurora Cluster:

    • Perform data validation to ensure that the data in the Aurora cluster matches the data in your source MySQL database.
  4. Keep Existing Pipeline Running:

    • Your existing pipeline (DMS task to S3, DataSync, Glue, and Snowflake ingestion) can continue to run as it is currently configured.
    • This pipeline will continue to process data from MySQL and ingest it into Snowflake.
  5. Cutover to Aurora:

    • Once you have successfully validated the data in the Aurora cluster, you can cut over to using Aurora as your primary data source.
    • Update your Glue jobs and Snowflake ingestion processes to read data from the Aurora cluster instead of the S3 bucket or MySQL database.
    • Decommission the old DMS task that was replicating data to the S3 bucket and the components that were reading from the S3 bucket.

In this use case, you set up a separate migration process (a new DMS task) to replicate data from MySQL to Aurora MySQL. Your existing pipeline (DMS to S3, DataSync, Glue, and Snowflake ingestion) can continue to run in parallel until you have successfully validated the data in the Aurora cluster. Once validated, you can cut over to using Aurora as the primary data source for your Glue jobs and Snowflake ingestion processes, and decommission the old components that were reading from the S3 bucket or MySQL database.

This approach allows you to migrate to Aurora without disrupting your existing pipeline, but it does require setting up and maintaining a separate migration process until the cutover is complete.
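For the validation step in either use case, DMS can compare source and target rows for you if you enable data validation in the task settings, and `describe_table_statistics` reports a per-table validation state. A sketch of what a pre-cutover check might look like (the sample data and helper are hypothetical):

```python
# Task-settings fragment that turns on DMS data validation; it would be
# passed as ReplicationTaskSettings (JSON) when the task is created.
task_settings = {"ValidationSettings": {"EnableValidation": True}}

# Hypothetical check over the TableStatistics list returned by
# dms.describe_table_statistics(): every table should reach
# ValidationState == "Validated" before you cut over.
def unvalidated_tables(table_statistics):
    return [t["TableName"] for t in table_statistics
            if t.get("ValidationState") != "Validated"]

sample = [
    {"TableName": "orders", "ValidationState": "Validated"},
    {"TableName": "customers", "ValidationState": "Pending records"},
]
print(unvalidated_tables(sample))  # → ['customers']
```

Only when this list is empty (and CDC latency is near zero) would you proceed with the cutover and decommissioning described above.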

AWS
answered 5 months ago
