Questions tagged with AWS Database Migration Service
How do you change the MASTER_HEARTBEAT_PERIOD on an Aurora MySQL version 2 (with MySQL 5.7 compatibility) RDS database?
AWS is forcing us to upgrade our RDS databases from MySQL 5.6 to 5.7 before February 28, 2023. MySQL 5.7 automatically decreases `slave_net_timeout` from 3600 sec to 60 sec, which we agree with. However, the `MASTER_HEARTBEAT_PERIOD` remained at 1800 sec, which causes replication problems because it should always be half of `slave_net_timeout` ([MySQL documentation](https://dev.mysql.com/doc/mysql-installation-excerpt/5.7/en/upgrading-from-previous-series.html)).

We tried the following command: `CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD = 30;` but it returns an error: `Error Code: 1227. Access denied; you need (at least one of) the SUPER privilege(s) for this operation`. The problem is that AWS explicitly doesn't give users SUPER privileges ([AWS documentation](https://aws.amazon.com/premiumsupport/knowledge-center/rds-mysql-functions/)).

How do we change the `MASTER_HEARTBEAT_PERIOD` if it requires SUPER privileges and AWS doesn't give them to us?
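For reference, the effective values can be confirmed read-only (no SUPER needed), since `performance_schema.replication_connection_configuration` exposes the heartbeat interval on MySQL 5.7-compatible instances. A minimal check, assuming access to the replica:

```sql
-- Current network timeout on the replica
SHOW VARIABLES LIKE 'slave_net_timeout';

-- Heartbeat period currently configured for the replication channel
SELECT CHANNEL_NAME, HEARTBEAT_INTERVAL
FROM performance_schema.replication_connection_configuration;
```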
Weird DMS Error: "Replication Task Settings document error: Array element's unique identifier was not found, please provide Id for each and every element"
Trying to set up DMS between RDS MySQL and Redshift. If I try to edit the Table Mappings from the GUI, I always get a weird error (one that has zero Google results, except one spam page): `Replication Task Settings document error: Array element's unique identifier was not found, please provide Id for each and every element`. No idea what that means. Every rule has a unique name and ID. Not the biggest issue in the world, as it's usually okay if I make a new one, but it does make it hard to iterate. Any idea what the issue here could be? Thanks!
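Despite surfacing while editing the table mappings, the message names the *task settings* document, whose arrays identify elements by an `Id` key; a settings document in which any array element lost its `Id` (for example under `Logging.LogComponents`) could plausibly produce exactly this complaint. A well-formed fragment for comparison (values are illustrative, not from the original task):

```json
"Logging": {
  "EnableLogging": true,
  "LogComponents": [
    { "Id": "SOURCE_CAPTURE", "Severity": "LOGGER_SEVERITY_DEFAULT" },
    { "Id": "TARGET_APPLY", "Severity": "LOGGER_SEVERITY_DEFAULT" }
  ]
}
```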
DMS instance selection: any documentation or guidelines?
**Background:** We use DMS as the tool to export DB CDC changes to S3 and then ETL the data into an S3 data lake. The flow is as below: MySQL -> DMS -> S3 -> Glue -> S3 data lake. The DMS job is of the type "Migrate existing data and replicate ongoing changes".

1. Currently we have around 20 tables of varying row counts and row sizes, ranging from a few hundred rows to a million.
2. The DMS job runs on a t3.medium replication instance.
3. The volume is expected to grow in the future.

We have been tasked with adding a few more tables to the above process. We tried adding these tables to the existing DMS job, which fails, citing a memory issue.

**Question 1:** Is there a defined guideline on how to select DMS instance sizes, or is it done on a trial-and-error basis? I did some research and came across https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.Types.html; however, this just has an overall description and not a detailed instance-selection process. Can you please help by directing us to the right documentation, if any?

**Question 2:** Is it good practice to distribute the load across multiple DMS tasks on varied instance types? For example:

* Max 25 tables with < 1,000 rows - dms.t3.medium
* Max 2 tables with < 1 million rows - dms.r5.large
* Max 2 tables with > 1 million rows - dms.r5.xlarge
Need to move or clone an MS SQL RDS instance with DB data to another region
Hi, I have created an initial RDS instance with MS SQL and DB data, but it's too slow for me in the California region. I am planning to migrate the RDS instance and its DB to the Mumbai region using a CloudFormation template. Can anyone help me with the steps?
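Note that a CloudFormation template only recreates the instance configuration, not the data. A common path is snapshot, cross-region copy, restore; below is a minimal boto3 sketch under the assumption that the source is us-west-1 (N. California) and the target is ap-south-1 (Mumbai), with hypothetical identifiers:

```python
import boto3

# The copy is issued from the *destination* region (Mumbai).
rds = boto3.client("rds", region_name="ap-south-1")

# Hypothetical ARN of a manual snapshot taken in the source region.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="arn:aws:rds:us-west-1:123456789012:snapshot:mssql-snap",
    TargetDBSnapshotIdentifier="mssql-snap-mumbai",
    SourceRegion="us-west-1",  # lets boto3 presign the cross-region copy
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="mssql-snap-mumbai")

# Restore a new instance in Mumbai from the copied snapshot.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="mssql-mumbai",
    DBSnapshotIdentifier="mssql-snap-mumbai",
    DBInstanceClass="db.m5.large",  # assumption: match to your workload
)
```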
On-prem Oracle to RDS Oracle SE2 migration and replication
Hi, I need to migrate and set up bi-directional replication from on-prem Oracle to Oracle RDS. I have used SCT for the DMS migration assessment. The challenge I am facing is how to start the migration using AWS DMS. There are multiple schemas that I need to migrate and replicate. Can I use DMS for this task? DMS cannot create schemas in Oracle; do I create all required schemas first and then create the migration tasks? Thank you, /AJ
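Since DMS can create tables but not the schemas (users) themselves on an Oracle target, pre-creating each schema on the RDS instance before starting the tasks is the usual route. A minimal sketch, with hypothetical names, run as the RDS master user:

```sql
-- In Oracle, a schema is a user; create one per source schema.
CREATE USER app_schema IDENTIFIED BY "ChangeMe_123"
  DEFAULT TABLESPACE users
  QUOTA UNLIMITED ON users;

-- Minimal privileges so objects can be created and loaded into it.
GRANT CREATE SESSION, CREATE TABLE TO app_schema;
```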
AWS DMS connected successfully to Redshift and created tables but is not loading data; it connects to S3 but fails when uploading files to Redshift
Any help, please? The log is below:

    [FILE_FACTORY ]E: Failed to upload file '/rdsdbdata/data/tasks/KXHGVEZM7WRYI636BOJOQYTCOIGFGS6AOGE3MNY/cloud/1/LOAD00000001.gzip' to bucket 'dms-in2tox7r7fgrwfjrshecwnp6s4y2uvsre63afzy' as 'redshift-FGKQA6KH2TIZ5J6G4T3NMVU7REZGRBNUYSGQI3Y/KXHGVEZM7WRYI636BOJOQYTCOIGFGS6AOGE3MNY/1/LOAD00000001.csv', status = 4 (FAILED)  (transfer_client.cpp:514)
    2022-11-17T14:59:38 [FILE_FACTORY ]E: Failed to upload </rdsdbdata/data/tasks/KXHGVEZM7WRYI636BOJOQYTCOIGFGS6AOGE3MNY/cloud/1/LOAD00000001.gzip> to <dms-in2tox7r7fgrwfjrshecwnp6s4y2uvsre63afzy/redshift-FGKQA6KH2TIZ5J6G4T3NMVU7REZGRBNUYSGQI3Y/KXHGVEZM7WRYI636BOJOQYTCOIGFGS6AOGE3MNY/1/LOAD00000001.csv>  (at_s3_ff.c:429)
    2022-11-17T14:59:38 [FILE_FACTORY ]E: Failed to write entire file (second trial)  (at_universal_fs_object.c:631)
    2022-11-17T14:59:38 [FILE_FACTORY ]E: Write entire file failed: source = '/rdsdbdata/data/tasks/KXHGVEZM7WRYI636BOJOQYTCOIGFGS6AOGE3MNY/cloud/1/LOAD00000001.gzip' target = 'dms-in2tox7r7fgrwfjrshecwnp6s4y2uvsre63afzy/redshift-FGKQA6KH2TIZ5J6G4T3NMVU7REZGRBNUYSGQI3Y/KXHGVEZM7WRYI636BOJOQYTCOIGFGS6AOGE3MNY/1/LOAD00000001.csv' open type = 3  (at_universal_fs_object.c:316)
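Errors like this frequently come down to the Redshift endpoint's service role lacking write access to the intermediate S3 bucket DMS stages files in. Assuming permissions are indeed the cause here, the role attached to the Redshift endpoint (conventionally `dms-access-for-endpoint`) would need a policy along these lines, with the bucket name taken from the log:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::dms-in2tox7r7fgrwfjrshecwnp6s4y2uvsre63afzy/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::dms-in2tox7r7fgrwfjrshecwnp6s4y2uvsre63afzy"
    }
  ]
}
```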
Question about replication tasks' target table preparation mode options
There is a replication setup in which an identical table from N source databases is replicated into a single, larger table at the target. Each source database has a replication task set to full-load-and-cdc, and within that task the data is pushed into one larger table at the target endpoint, with a database-identifier field appended to identify the source database. If the task's target table preparation mode is set to Truncate, is there a way to override that behavior so it only removes the records belonging to a particular source database (keyed on its database identifier), or is there some other combination or trick?
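I am not aware of a built-in way to make Truncate partition-aware, but one workaround is to set the preparation mode to DO_NOTHING and clear just the affected source's rows yourself before reloading. A sketch, assuming a hypothetical `source_db_id` identifier column and table name:

```sql
-- Run before restarting the task for source database 'tenant_42'
-- (table and column names are hypothetical).
DELETE FROM consolidated.orders
WHERE source_db_id = 'tenant_42';
```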
Is this a good way to contend with a point-of-origin restore for an N-tier replication task?
Here is an n-tier data migration orchestration that we are working with. I apologize for the verbosity; I am attempting to describe a sizable migration in a few sentences.

**Migration**

> (on-prem-sql-server-dms-task) ----> (s3 B1) ----T1----> (s3 B2) ----> (RDS sql-server-task (N-1 table replication))

*on-prem-sql-server-db-dms-task (1 to 1)* - Each task's source is a single database in a multi-tenant, multi-database, on-prem, Direct Connect SQL Server instance. For each participating database, a new full-load/continuous-sync DMS task will be created. It is important to note that there is a transformation that adds the source's database identifier to every table from CDC. The target (s3 B1) will have a folder for each database in the replication task. If 150 databases are involved, there will be 150 tasks here and 150 database dbo/cdc folders in (s3 B1).

*s3 B1* - Target of the task above.

*T1* - Transformation 1: a Lambda triggered on s3 B1 put/post. This Lambda adds a unique identifier to every incoming row for tracking and, more importantly, copies the dbo/cdc file over to the corresponding database location in (s3 B2).

*s3 B2* - Staging bucket read by the new full-load/continuous-sync DMS task described below.

*RDS sql-server-task (and possibly Aurora PostgreSQL)* - There is a task for each source database, with a transformation that pushes the data from each table, for each database, into one large table at the target endpoint. That is why adding the database field is key in step one. It is effectively an N-to-1 table replication.

**Question**

This will likely be running in dev/test/stage/prod environments, and the dev/test/stage databases are frequently restored from prod. In order to contend with database restores at the point of origin, I am hopeful that the following steps will be workable. Does anyone see any undocumented gotchas that may be encountered?

*Origin database restore contention*

1. Make sure that all replication tasks of type (on-prem-sql-server-dms-task) and (many-to-1 RDS sql-server) have StartReplicationTaskType set to reload-target.
2. Expose a fire-and-forget API endpoint that internal tooling can call to perform the following steps:
   * Use the AWS DMS API's StopReplicationTask on (on-prem-sql-server-dms-task).
   * Use the AWS DMS API's StopReplicationTask on (RDS sql-server-task).
   * For each table in the RDS SQL Server, remove the records related to the source database, or mask them and delete them later.
   * Use the AWS DMS API's StartReplicationTask on (on-prem-sql-server-dms-task).
   * Use the AWS DMS API's StartReplicationTask on (RDS sql-server-task).
3. Test?!

**I would like to solve this without the API part; however, I can't figure out how to trigger a Lambda or some other process to stop the tasks, delete the data, and restart the tasks when a restore event occurs at the source. That would be the ideal solution.**
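For illustration, a minimal boto3 sketch of the stop/clean/restart cycle in step 2, with hypothetical task ARNs and region (the delete-or-mask step between the two phases is omitted, as it is target-specific):

```python
import boto3

dms = boto3.client("dms", region_name="us-east-1")  # assumption: region

# Hypothetical ARNs for one source database's pair of tasks.
TASKS = [
    "arn:aws:dms:us-east-1:123456789012:task:ONPREMTOS3TASK",
    "arn:aws:dms:us-east-1:123456789012:task:S3TORDSTASK",
]

# Stop both tasks and wait until they are fully stopped.
for arn in TASKS:
    dms.stop_replication_task(ReplicationTaskArn=arn)
for arn in TASKS:
    dms.get_waiter("replication_task_stopped").wait(
        Filters=[{"Name": "replication-task-arn", "Values": [arn]}]
    )

# ... delete or mask the restored source's rows in the target here ...

# reload-target re-runs the full load instead of resuming CDC, which is
# what a restored origin database requires.
for arn in TASKS:
    dms.start_replication_task(
        ReplicationTaskArn=arn,
        StartReplicationTaskType="reload-target",
    )
```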
Split one table into multiple tables using AWS DMS?
Giving an example to explain my scenario: File A has 100 attributes and File B has 300 attributes. The attributes in Files A and B can come from multiple tables. These files are present in S3. We are going to import Files A and B into RDS Postgres, which will then have Table A (generated from File A, 100 attributes) and Table B (generated from File B, 300 attributes). We now want to split Tables A and B into multiple smaller tables using AWS DMS: source Postgres (Tables A and B) --> AWS DMS --> target Postgres (Tables C, D, E, F). Is this possible? Any advice on how to implement this scenario? Please help.
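As far as I know, a single selection rule can't fan one source table out to several targets, but running one task per target table can work: each task selects the same source table, renames it to the target name, and removes the columns the smaller table shouldn't carry. A sketch of one such task's mapping, with hypothetical schema, table, and column names:

```json
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "select-table-a",
      "object-locator": { "schema-name": "public", "table-name": "table_a" },
      "rule-action": "include"
    },
    {
      "rule-type": "transformation",
      "rule-id": "2",
      "rule-name": "rename-to-table-c",
      "rule-target": "table",
      "object-locator": { "schema-name": "public", "table-name": "table_a" },
      "rule-action": "rename",
      "value": "table_c"
    },
    {
      "rule-type": "transformation",
      "rule-id": "3",
      "rule-name": "drop-unneeded-column",
      "rule-target": "column",
      "object-locator": {
        "schema-name": "public",
        "table-name": "table_a",
        "column-name": "attr_not_in_c"
      },
      "rule-action": "remove-column"
    }
  ]
}
```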
Troubleshooting "Unable to get table definition for table '0'" caused by a DDL command (can't find the DDL in the logs)
Hello, I'm currently using DMS and got the following message: `2022-10-30T09:40:03 [SOURCE_CAPTURE ]E: Unable to get table definition for table '0' (oracle_endpoint_capture.c:759)`. I found the following info: https://community.qlik.com/t5/Official-Support-Articles/SOURCE-CAPTURE-E-Unable-to-get-table-definition-for-table-0/ta-p/1809603. I was looking toward enabling source capture for DDL commands in SOURCE_CAPTURE; specifically, I'm looking for a way to enable tracing only for DDL commands (traceDdlChanges?) so as to dump the exact command that failed. Activating a full debug trace for SOURCE_CAPTURE is not an option at all; it is far too verbose for an event that may happen only very occasionally. I couldn't find any further information in https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.Logging.html, https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.DDLHandling.html, or https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.ErrorHandling.html. Thanks
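For what it's worth, `traceDdlChanges` appears in Qlik's documentation for the underlying replication engine but not in the AWS task-settings pages, so a DDL-only trace may simply not be exposed. One source-side workaround is to record DDL on the Oracle database itself with a DDL trigger, assuming you can create one there; extending it with `ora_sql_txt` would capture the full statement text. A sketch with hypothetical names:

```sql
-- Audit table for DDL issued against the source.
CREATE TABLE ddl_audit (
  ddl_time   TIMESTAMP DEFAULT SYSTIMESTAMP,
  ora_user   VARCHAR2(128),
  event_type VARCHAR2(30),
  obj_owner  VARCHAR2(128),
  obj_name   VARCHAR2(128)
);

-- Fires for every DDL statement database-wide and records its metadata.
CREATE OR REPLACE TRIGGER trg_ddl_audit
AFTER DDL ON DATABASE
BEGIN
  INSERT INTO ddl_audit (ora_user, event_type, obj_owner, obj_name)
  VALUES (ora_login_user, ora_sysevent, ora_dict_obj_owner, ora_dict_obj_name);
END;
/
```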
What happens when a source endpoint database that is part of a data migration task is restored?
Using AWS DMS, there are source endpoints (on-prem SQL Servers) and associated target endpoints (S3). Could someone with experience tell me what happens when a source endpoint database (SQL Server) gets restored and CDC is re-enabled on that database? For the associated data migration task, I hope that DMS will see this as requiring a new snapshot/initial load (in transactional-replication parlance). If not, how does one contend with this? These restores can happen to databases on multiple servers and on a frequent basis. Normally I would test this out; however, our dev accounts are locked down with limited console access, everything has to be scripted in Terraform, and it would take some time to get a "quick test" up and running.
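As far as I know, DMS does not detect a source restore on its own; a fresh initial load has to be requested explicitly, either by restarting the task with the reload-target start type or by reloading individual tables. A small boto3 sketch of the per-table variant, with hypothetical names:

```python
import boto3

dms = boto3.client("dms", region_name="us-east-1")  # assumption: region

# Triggers a fresh full load of just this table, without restarting
# the whole task (ARN, schema, and table are hypothetical).
dms.reload_tables(
    ReplicationTaskArn="arn:aws:dms:us-east-1:123456789012:task:EXAMPLETASK",
    TablesToReload=[{"SchemaName": "dbo", "TableName": "orders"}],
    ReloadOption="data-reload",
)
```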
AWS DMS, MongoDB source to PostgreSQL target: changing a data type in a transformation rule
I am running a DMS migration task:

* Source: MongoDB
* Target: PostgreSQL

One of the integer fields in Mongo has 5% of its values stored as strings and the rest as integers. After migration to Postgres, the string values come across as-is, and all the integer values in that column become NULL. I also tried adding a transformation rule changing that column's data type to int2, int4, and string, but still only the 5% of the data arrives with the changed data type; the rest of the values, which were integer type in MongoDB, are NULL. How can I convert all values to integer type, or all to string type? I do not want to lose any data.
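One pragmatic approach, assuming the column is allowed to land in Postgres as text first: migrate it as a string so nothing is dropped, then normalize it in Postgres after the load (or keep it as text if the string values must survive verbatim). A sketch with hypothetical table and column names:

```sql
-- Strings containing no digits become NULL rather than raising an error;
-- adjust the pattern if signs or decimals are possible.
ALTER TABLE my_collection
  ALTER COLUMN mixed_field TYPE integer
  USING NULLIF(regexp_replace(mixed_field, '[^0-9-]', '', 'g'), '')::integer;
```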