How do I resolve errors when I use native backup and restore for my Amazon RDS for SQL Server DB instance?
I want to resolve errors that occur when I back up and restore my Amazon Relational Database Service (Amazon RDS) for Microsoft SQL Server DB instance.
Short description
To resolve errors that occur when you use native backup and restore for your Amazon RDS for SQL Server DB instance, try the following:
- Increase the space on the DB instance.
- Give access permissions to the AWS Identity and Access Management (IAM) role for the SQLSERVER_BACKUP_RESTORE option.
- Give AWS Key Management Service (AWS KMS) permissions to the IAM role for the option group.
- Give permissions to the IAM policy or bucket policy for cross-account backups.
- Import the Transparent Data Encryption (TDE) certificate.
- Specify the correct Windows drive letters.
- Set MAXTRANSFERSIZE for the restore to a value equal to or greater than the size indicated in the error message.
- Reduce the backup file size to transfer the file to Amazon Simple Storage Service (Amazon S3).
Resolution
Note: If you receive errors when you run AWS Command Line Interface (AWS CLI) commands, then see Troubleshooting errors for the AWS CLI. Also, make sure that you're using the most recent AWS CLI version.
Increase the space on the DB instance
Your DB instance might have insufficient space when you restore the backup from an Amazon Elastic Compute Cloud (Amazon EC2) or on-premises instance. Amazon RDS then stops the task.
Example log output:
[2022-04-07 05:21:22.317] Aborted the task because of a task failure or a concurrent RESTORE_DB request.
[2022-04-07 05:21:22.437] Task has been aborted
[2022-04-07 05:21:22.440] There is not enough space on the disk to perform restore database operation.
To resolve this issue, you can increase the available storage on the DB instance. Or, reduce the transaction log file size on the DB instance.
Increase available storage
Complete the following steps:
- Run the following query on the EC2 or on-premises instance to check the size of the database data file and transaction log file:
Note: Replace DB_NAME with the name of your database.

SELECT DB_NAME(database_id) AS DatabaseName,
       Name AS Logical_Name,
       Physical_Name,
       (size*8)/1024/1024 AS SizeGB
FROM sys.master_files
WHERE DB_NAME(database_id) = 'DB_NAME'
GO

Database size = (DB_Name size + DB_Name_Log size)
- Compare the EC2 or on-premises instance database size with the available storage on the DB instance.
- Increase the available storage on the DB instance, and then restore the database.
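The arithmetic behind this comparison can be sketched in a few lines. This is an illustrative helper of my own (the page counts and instance storage figures are assumed example values), not AWS tooling: sys.master_files reports size in 8 KB pages, so (size * 8) / 1024 / 1024 yields gigabytes.

```python
# Sketch of the size arithmetic behind the query above (assumed example values).
# sys.master_files reports 'size' in 8 KB pages, so (size * 8) / 1024 / 1024 is GB.

def file_size_gb(pages: int) -> float:
    """Convert a sys.master_files page count (8 KB pages) to gigabytes."""
    return pages * 8 / 1024 / 1024

def required_storage_gb(data_file_pages: int, log_file_pages: int) -> float:
    """Database size = data file size + transaction log file size."""
    return file_size_gb(data_file_pages) + file_size_gb(log_file_pages)

# Hypothetical source database: a 40 GB data file and a 10 GB log file.
data_pages = 40 * 1024 * 1024 // 8   # pages in a 40 GB data file
log_pages = 10 * 1024 * 1024 // 8    # pages in a 10 GB log file
needed = required_storage_gb(data_pages, log_pages)
print(f"Required storage: {needed:.0f} GB")  # Required storage: 50 GB

# Compare with the free storage on the target DB instance (assumed figures).
free_gb = 60 - 20  # allocated minus used
print("Enough space" if needed <= free_gb else "Increase storage or shrink the log")
```

If the required size exceeds the free storage, either increase the instance storage or shrink the transaction log before you restore.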
Reduce the transaction log file size
Complete the following steps:
- To reduce the current transaction log file size on the EC2 or on-premises instance, run the following command:
Note: Replace FileName with the name of your data or transaction log file and FileSizeMB with the target file size in megabytes.

DBCC SHRINKFILE (FileName, FileSizeMB)

- Back up the database.
Give access permissions to the IAM role for the SQLSERVER_BACKUP_RESTORE option
If you have insufficient permissions for the IAM role that's associated with the SQLSERVER_BACKUP_RESTORE option, then Amazon RDS stops the task.
Example log output:
[2020-12-15 08:56:22.143] Aborted the task because of a task failure or a concurrent RESTORE_DB request.
[2020-12-15 08:56:22.213] Task has been aborted
[2020-12-15 08:56:22.217] Access Denied
-or-
[2022-07-16 16:08:22.067] Task execution has started.
[2022-07-16 16:08:22.143] Aborted the task because of a task failure or an overlap with your preferred backup window for RDS automated backup.
[2022-07-16 16:08:22.147] Task has been aborted
[2022-07-16 16:08:22.150] Access Denied
To resolve this issue, complete the following steps:
- Run the following command to verify that the S3 bucket and the folder prefix are correct in the restore query:

  exec msdb.dbo.rds_restore_database
  @restore_db_name='database_name',
  @s3_arn_to_restore_from='arn:aws:s3:::bucket_name/file_name_and_extension';

- Add the following statement to the IAM permissions policy:

  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "s3:ListBucket",
          "s3:GetBucketLocation"
        ],
        "Resource": "arn:aws:s3:::bucket_name"
      },
      {
        "Effect": "Allow",
        "Action": [
          "s3:GetObjectAttributes",
          "s3:GetObject",
          "s3:PutObject",
          "s3:ListMultipartUploadParts",
          "s3:AbortMultipartUpload"
        ],
        "Resource": "arn:aws:s3:::bucket_name/*"
      }
    ]
  }

  Note: In the preceding policy, replace arn:aws:s3:::bucket_name with the Amazon Resource Name (ARN) of your S3 bucket.
- Add the policy to the role that's associated with the SQLSERVER_BACKUP_RESTORE option.
- Verify that the SQLSERVER_BACKUP_RESTORE option is in the option group that's associated with the DB instance.
For more information, see How do I perform a native backup of my SQL Server database to Amazon RDS and restore from Amazon S3?
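Before you attach the policy, a local sanity check can catch a missing action. The sketch below is a hypothetical helper of my own, not an AWS API: it only compares action names from Allow statements against the list in the policy above, and it ignores Deny statements, wildcards, and Resource matching.

```python
import json

# Actions that native backup and restore needs, per the policy statements above.
REQUIRED_ACTIONS = {
    "s3:ListBucket", "s3:GetBucketLocation",
    "s3:GetObjectAttributes", "s3:GetObject", "s3:PutObject",
    "s3:ListMultipartUploadParts", "s3:AbortMultipartUpload",
}

def allowed_actions(policy_json: str) -> set:
    """Collect every action that Allow statements in a policy document grant."""
    actions = set()
    for stmt in json.loads(policy_json).get("Statement", []):
        if stmt.get("Effect") == "Allow":
            acts = stmt.get("Action", [])
            actions.update([acts] if isinstance(acts, str) else acts)
    return actions

def missing_actions(policy_json: str) -> set:
    """Return the required actions that the policy does not grant."""
    return REQUIRED_ACTIONS - allowed_actions(policy_json)

# Hypothetical policy that forgot s3:PutObject:
policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow",
         "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
         "Resource": "arn:aws:s3:::bucket_name"},
        {"Effect": "Allow",
         "Action": ["s3:GetObjectAttributes", "s3:GetObject",
                    "s3:ListMultipartUploadParts", "s3:AbortMultipartUpload"],
         "Resource": "arn:aws:s3:::bucket_name/*"},
    ],
})
print(missing_actions(policy))  # {'s3:PutObject'}
```

For a real evaluation that accounts for resources, conditions, and Deny statements, use the IAM policy simulator instead.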
Give AWS KMS permissions to the IAM role for the option group
RDS for SQL Server native backup and restore can encrypt and decrypt backup files on the client side. If the policy for the IAM role that's associated with the option group lacks permissions for the AWS KMS key, then the backup or restore task fails.
Example log output:
[2025-12-12 01:34:22.217] Aborted the task because of a task failure or an overlap with your preferred backup window for RDS automated backup.
[2025-12-12 01:34:22.223] Task has been aborted
[2025-12-12 01:34:22.230] User: arn:aws:sts::0123456789:assumed-role/<your_role_name>/RDS-SqlServerBackupRestore is not authorized to perform: kms:DescribeKey on resource: arn:aws:kms:ap-northeast-1:0123456789:key/<your_kms_key_id> because no identity-based policy allows the kms:DescribeKey action
To resolve this issue, add the following statements to the IAM policy that's associated with the option group:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccessToKey",
      "Effect": "Allow",
      "Action": [
        "kms:DescribeKey",
        "kms:GenerateDataKey",
        "kms:Encrypt",
        "kms:Decrypt"
      ],
      "Resource": "arn:aws:kms:us-east-1:0123456789:key/key-id"
    },
    {
      "Sid": "AllowAccessToS3",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::PUT-BUCKET-NAME"
    },
    {
      "Sid": "GetS3Info",
      "Effect": "Allow",
      "Action": [
        "s3:GetObjectAttributes",
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload"
      ],
      "Resource": "arn:aws:s3:::PUT-BUCKET-NAME/*"
    }
  ]
}
Give permissions to the IAM policy or bucket policy for cross-account backups
When you restore a database backup from one AWS account to a different account, Amazon RDS might stop the task because of insufficient permissions. For example, you store the backup in an S3 bucket in account A, and restore to an Amazon RDS DB instance in account B.
Either the policy for the IAM role that's associated with the option group or the bucket policy that's associated with the S3 bucket lacks permissions.
Example log output:
[2022-02-03 15:57:22.180] Aborted the task because of a task failure or a concurrent RESTORE_DB request.
[2022-02-03 15:57:22.260] Task has been aborted
[2022-02-03 15:57:22.263] Error making request with Error Code Forbidden and Http Status Code Forbidden. No further error information was returned by the service.
To resolve this issue, complete the following steps:
- Add the following statement to the IAM policy that's associated with the option group in account B:

  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "s3:ListBucket",
          "s3:GetBucketLocation"
        ],
        "Resource": "arn:aws:s3:::name_of_bucket_present_in_Account_A"
      },
      {
        "Effect": "Allow",
        "Action": [
          "s3:GetObject",
          "s3:PutObject",
          "s3:ListMultipartUploadParts",
          "s3:AbortMultipartUpload"
        ],
        "Resource": "arn:aws:s3:::name_of_bucket_present_in_Account_A/*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "kms:DescribeKey",
          "kms:GenerateDataKey",
          "kms:Decrypt",
          "kms:Encrypt",
          "kms:ReEncryptTo",
          "kms:ReEncryptFrom"
        ],
        "Resource": "arn:aws: PUT THE NAME OF THE KEY HERE"
      }
    ]
  }
- Add the following statement to the bucket policy that's associated with the S3 bucket in account A:

  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "Permission to cross account",
        "Effect": "Allow",
        "Principal": {
          "AWS": [
            "arn:aws:iam::AWS-ACCOUNT-ID-OF-RDS:role/service-role/PUT-ROLE-NAME"
          ]
        },
        "Action": [
          "s3:ListBucket",
          "s3:GetBucketLocation"
        ],
        "Resource": [
          "arn:aws:s3:::PUT-BUCKET-NAME"
        ]
      },
      {
        "Sid": "Permission to cross account on object level",
        "Effect": "Allow",
        "Principal": {
          "AWS": [
            "arn:aws:iam::AWS-ACCOUNT-ID-OF-RDS:role/service-role/PUT-ROLE-NAME"
          ]
        },
        "Action": [
          "s3:GetObject",
          "s3:PutObject",
          "s3:ListMultipartUploadParts",
          "s3:AbortMultipartUpload"
        ],
        "Resource": [
          "arn:aws:s3:::PUT-BUCKET-NAME/*"
        ]
      }
    ]
  }
For more information, see Importing and exporting SQL Server databases using native backup and restore and Bucket owner granting cross-account permission to objects it does not own.
Import the TDE certificate
If you restore a database backup but didn't import the TDE certificate to the destination server, then the task stops. For example, you try to restore a database that uses TDE from an EC2 or on-premises instance to an RDS for SQL Server DB instance.
Example log output:
[2022-06-15 11:55:22.280] Cannot find server certificate with thumbprint '########'.
[2022-06-15 11:55:22.280] RESTORE FILELIST is terminating abnormally.
[2022-06-15 11:55:22.300] Aborted the task because of a task failure or a concurrent RESTORE_DB request.
[2022-06-15 11:55:22.333] Task has been aborted
[2022-06-15 11:55:22.337] Empty restore file list result retrieved.
To resolve this issue, import the TDE certificate to the destination server.
To prevent this issue, use one of the following workarounds.
You back up the database from an on-premises or EC2 instance, but the target RDS for SQL Server DB instance is a Multi-AZ deployment
Complete the following steps:
- Create a backup of the EC2 or on-premises database with TDE turned on.
- Restore the backup as a new database within your on-premises server.
- Run the following command to turn off encryption on the new database:

  USE master;
  GO
  ALTER DATABASE Databasename SET ENCRYPTION OFF;
  GO

  Note: Replace Databasename with the name of your database.
- Run the following command to drop the Database Encryption Key (DEK) on the new database:

  USE Databasename;
  GO
  DROP DATABASE ENCRYPTION KEY;
  GO

  Note: Replace Databasename with the name of your database.
- Perform a native SQL Server backup, and then restore the backup to the DB instance.
You back up a database from an RDS for SQL Server DB instance that's encrypted with TDE
Complete the following steps:
- Use a DB snapshot from the RDS for SQL Server instance to restore to a new DB instance.
  Note: If you change the edition of the DB instance, then see Microsoft SQL Server considerations.
- Turn off TDE for the new DB instance.
- Perform a native SQL Server backup, and then restore the backup to the DB instance.
- Turn on TDE for the new DB instance.
Specify the correct Windows drive letters
RDS for SQL Server can restore a database to additional storage volumes. When you specify an incorrect Windows drive letter for a storage volume, the restore operation fails.
Example query and error message:
-- Native restore query
EXEC msdb.dbo.rds_restore_database
@restore_db_name='my_database',
@s3_arn_to_restore_from='arn:aws:s3:::<your_bucket_name>/my_database.bak',
@data_file_volume='Y:', -- incorrect drive letter
@log_file_volume='Z:'; -- incorrect drive letter

-- Error message
Msg 50000, Level 16, State 1, Procedure msdb.dbo.rds_restore_database, Line 122
Volume for data files is unavailable. Choose from available volumes.
To resolve this issue, check the Windows drive letters. Complete the following steps:
- List the additional storage volumes that are attached to your DB instance with the describe-db-instances command:

  aws rds describe-db-instances \
      --db-instance-identifier your-db-instance-id \
      --query 'DBInstances[].AdditionalStorageVolumes[].VolumeName' \
      --output text

  Note: Replace your-db-instance-id with the identifier of your DB instance.
  Note: If you can't retrieve the volume names, update the AWS CLI to the latest version. For more information about storage volumes, see Considerations for using additional storage volumes with RDS for SQL Server.
- Fix your query to specify the correct Windows drive letters for @data_file_volume and @log_file_volume.
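You can approximate the engine's check locally before you rerun the restore: compare the drive letters in your query against the volume names that describe-db-instances returned. This is a hypothetical helper with assumed volume names, not an AWS API call:

```python
def invalid_volumes(available: set, data_file_volume: str, log_file_volume: str) -> set:
    """Return the requested drive letters that aren't attached to the DB instance."""
    return {v for v in (data_file_volume, log_file_volume) if v not in available}

# Hypothetical output of the describe-db-instances query above:
available_volumes = {"D:", "E:"}

# The failing query from the example requested Y: and Z:.
bad = invalid_volumes(available_volumes, "Y:", "Z:")
print(bad)  # drive letters to correct before running rds_restore_database
```

An empty result means the @data_file_volume and @log_file_volume values should pass the availability check.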
Set MAXTRANSFERSIZE for the restore to a value equal to or greater than the size indicated in the error message
A MAXTRANSFERSIZE error occurs when the backup contains FILESTREAM or In-Memory OLTP filegroups, and you used an incorrect MAXTRANSFERSIZE during restore.
Note: RDS for SQL Server doesn't support the FILESTREAM feature.
If you specify MAXTRANSFERSIZE explicitly, you may encounter an error: "RESTORE requires MAXTRANSFERSIZE=<required_size> but <your_specified_size> was specified."
Example query and log output:
-- Query
EXEC msdb.dbo.rds_restore_database
@restore_db_name='my_database',
@s3_arn_to_restore_from='arn:aws:s3:::<your_bucket_name>/my_database.bak',
@max_transfer_size=65536; -- MAXTRANSFERSIZE specified explicitly

-- Error message
[2025-12-11 07:26:22.320] Task execution has started.
[2025-12-11 07:26:22.520] RESTORE requires MAXTRANSFERSIZE=4194304 but 65536 was specified. RESTORE DATABASE is terminating abnormally.
To resolve this issue, don't specify MAXTRANSFERSIZE:
EXEC msdb.dbo.rds_restore_database
@restore_db_name='my_database',
@s3_arn_to_restore_from='arn:aws:s3:::<your_bucket_name>/my_database.bak';
Or, specify a value equal to or greater than the size indicated in the error message:
EXEC msdb.dbo.rds_restore_database
@restore_db_name='my_database',
@s3_arn_to_restore_from='arn:aws:s3:::<your_bucket_name>/my_database.bak',
@max_transfer_size=4194304;
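If you automate retries, you can pull the required value straight from the task log instead of hard-coding it. A minimal sketch of my own, assuming the log line format shown in the example above:

```python
import re

def required_max_transfer_size(log_line: str):
    """Extract the MAXTRANSFERSIZE that a failed restore reports, or None."""
    m = re.search(r"RESTORE requires MAXTRANSFERSIZE=(\d+)", log_line)
    return int(m.group(1)) if m else None

# Log text taken from the example output above.
log = ("RESTORE requires MAXTRANSFERSIZE=4194304 but 65536 was specified. "
       "RESTORE DATABASE is terminating abnormally.")
size = required_max_transfer_size(log)
print(size)  # 4194304 -> pass this to @max_transfer_size, or omit the parameter
```

The extracted value is the minimum to pass as @max_transfer_size on the retry; omitting the parameter entirely also works, as shown above.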
Reduce the backup file size to transfer the file to Amazon S3
This issue occurs when the backup file is too large to transfer in a single Amazon S3 multipart upload. Amazon S3 divides the file into parts, and the upload fails when it requires more than the maximum of 10,000 parts.
Example log output:
[2022-04-21 16:45:04.597] reviews_consumer/reviews_consumer_PostUpdate_042122.bak: Completed processing 100% of S3 chunks.
[2022-04-21 16:47:05.427] Write on "####" failed: 995(The I/O operation has been aborted because of either a thread exit or an application request.) A nonrecoverable I/O error occurred on file "XXXX:" 995(The I/O operation has been aborted because of either a thread exit or an application request.). BACKUP DATABASE is terminating abnormally.
[2022-04-21 16:47:22.033] Unable to write chunks to S3 as S3 processing has been aborted.
[2022-04-21 16:47:22.040] reviews_consumer/reviews_consumer_PostUpdate_042122.bak: Aborting S3 upload, waiting for S3 workers to clean up and exit
[2022-04-21 16:47:22.053] Aborted the task because of a task failure or an overlap with your preferred backup window for RDS automated backup.
[2022-04-21 16:47:22.060] reviews_consumer/reviews_consumer_PostUpdate_042122.bak: Aborting S3 upload, waiting for S3 workers to clean up and exit
[2022-04-21 16:47:22.067] S3 write stream upload failed. Encountered an error while uploading an S3 chunk: Part number must be an integer between 1 and 10000, inclusive
S3 write stream upload failed. Encountered an error while uploading an S3 chunk: Part number must be an integer between 1 and 10000, inclusive
S3 write stream upload failed. Encountered an error while uploading an S3 chunk: Part number must be an integer between 1 and 10000, inclusive
S3 write stream upload failed. Encountered an error while uploading an S3 chunk: Part number must be an integer between 1 and 10000, inclusive
To resolve this issue, turn on database backup compression to reduce the backup size so that Amazon S3 can receive the file.
To turn on backup compression, run the following command:
exec rdsadmin..rds_set_configuration 'S3 backup compression', 'true';
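The arithmetic behind the 10,000-part limit shows why compression helps. In the sketch below, the part size and backup sizes are assumptions for illustration (RDS doesn't document the chunk size it uses); only the 10,000-part maximum is a documented Amazon S3 limit:

```python
import math

S3_MAX_PARTS = 10_000  # S3 multipart uploads allow part numbers 1-10000

def parts_needed(object_size_bytes: int, part_size_bytes: int) -> int:
    """Number of multipart-upload parts required for an object of this size."""
    return math.ceil(object_size_bytes / part_size_bytes)

part_size = 64 * 1024 * 1024  # assumed 64 MiB chunk size
backup = 700 * 1024**3        # hypothetical 700 GiB uncompressed backup
print(parts_needed(backup, part_size))  # 11200 -> exceeds 10,000, upload aborts

compressed = backup // 2      # compression roughly halves the file (illustrative)
print(parts_needed(compressed, part_size))  # 5600 -> within the limit
```

Any reduction that brings the part count to 10,000 or fewer lets the upload complete, which is why turning on backup compression resolves the error.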