Why can't my Amazon FSx for Lustre data repository association export files to Amazon S3?


I want to troubleshoot why my Amazon FSx for Lustre data repository association (DRA) can't export files to Amazon Simple Storage Service (Amazon S3).


Check for file system configuration issues

  • Check that the FSx for Lustre data repository association is correctly configured and in the AVAILABLE lifecycle state.
  • Be sure that the file system is correctly mounted on your client.
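You can verify both checks from a Linux client with a sketch like the following, assuming the AWS CLI is installed and configured. The file system ID and mount point are placeholders:

```shell
# Placeholder values -- replace with your own file system ID and mount point.
FS_ID="fs-01234567890abcdef"
MOUNT_POINT="/mnt/fsx"

# Confirm that the client has a Lustre file system mounted at the given path.
check_mount() {
  grep -qs "$1 lustre" /proc/mounts && echo "mounted" || echo "not mounted"
}

# Review each DRA's lifecycle state (for example, AVAILABLE or MISCONFIGURED).
# This call requires AWS credentials, so it isn't run automatically here.
describe_dra() {
  aws fsx describe-data-repository-associations \
    --filters "Name=file-system-id,Values=$1" \
    --query 'Associations[].{Id:AssociationId,State:Lifecycle,Path:FileSystemPath}'
}

check_mount "$MOUNT_POINT"   # prints "mounted" or "not mounted"
```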

Check for export configuration issues

Check the export configuration between the FSx for Lustre data repository and the Amazon S3 bucket. When you create a DRA, make sure that you select the correct export options. Verify that file and directory names are stored in a UTF-8-compatible format so that FSx for Lustre can export the data to your S3 bucket. Also, Amazon S3 object keys have a maximum length of 1,024 bytes, and FSx for Lustre doesn't export files with a corresponding S3 object key that's longer than 1,024 bytes.
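For the key-length limit, the following minimal sketch lists files whose relative path (the future S3 object key) exceeds 1,024 bytes. The assumption that the object key equals the path relative to the DRA directory, and the example directory, are placeholders:

```shell
# List files under a DRA directory whose S3 object key would exceed 1,024
# bytes, assuming the key is the path relative to the DRA directory.
find_long_keys() {
  local mount_dir="$1"
  # %P prints each path relative to the starting directory; LC_ALL=C makes
  # awk count bytes rather than multibyte characters.
  find "$mount_dir" -type f -printf '%P\n' | LC_ALL=C awk 'length($0) > 1024'
}

# Example usage (placeholder path):
# find_long_keys /mnt/fsx/dir1
```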

Review errors and logs

Check the FSx for Lustre logs and S3 bucket logs for errors or warnings that are related to the sync operation. These logs can help you identify issues that you must address. For more information, see Logging with Amazon CloudWatch Logs. Also, check the AgeOfOldestQueuedMessage metric for the file system in CloudWatch. This metric helps you identify how long export operations from the file system to S3 have been delayed.

Note: To investigate further, check the error and failure logs that correspond to the time when AgeOfOldestQueuedMessage started to grow.
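One way to inspect this metric is with the AWS CLI, as in the following sketch. The file system ID is a placeholder, and the Publisher=AutoExport dimension is an assumption for the export queue; verify the dimensions against the metrics that your file system actually publishes:

```shell
# Fetch the maximum AgeOfOldestQueuedMessage (in seconds) over the last
# 24 hours for a file system. Requires the AWS CLI with valid credentials.
get_queue_age() {
  local fs_id="$1"   # placeholder, for example fs-01234567890abcdef
  aws cloudwatch get-metric-statistics \
    --namespace AWS/FSx \
    --metric-name AgeOfOldestQueuedMessage \
    --dimensions Name=FileSystemId,Value="$fs_id" \
                 Name=Publisher,Value=AutoExport \
    --start-time "$(date -u -d '24 hours ago' +%Y-%m-%dT%H:%M:%SZ)" \
    --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    --period 3600 \
    --statistics Maximum
}
```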

If you turned on logging for the FSx for Lustre file system, then you can find the corresponding logs in the /aws/fsx/lustre log group in CloudWatch. If you have information on files that failed to export to S3, then search the logs for the relative file path, for example dir1/file.txt. Data repository task failures and automatic export failures each have a corresponding JSON object entry in the log group. You can find the specific errorCode in these entries. For more information on these log messages and their root causes, see Data repository event logs.
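You can also search the log group from the command line with a sketch like the following, assuming the default /aws/fsx/lustre log group name. The dir1/file.txt path is the example from above:

```shell
# Search a CloudWatch Logs group for entries that mention a relative file
# path. Requires the AWS CLI with valid credentials.
search_export_logs() {
  local file_path="$1"                     # for example dir1/file.txt
  local log_group="${2:-/aws/fsx/lustre}"  # assumed default log group name
  aws logs filter-log-events \
    --log-group-name "$log_group" \
    --filter-pattern "\"$file_path\""
}

# Example usage:
# search_export_logs dir1/file.txt
```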

Check for permission issues

Confirm that the AWS Identity and Access Management (IAM) role that's associated with the FSx for Lustre data repository has the permissions that are necessary to access the S3 bucket. The role must be able to perform the required actions on the S3 bucket, such as listing, reading, and writing objects. If the IAM role doesn't have the correct permissions, then the sync operation fails.

To check and modify the permissions for the IAM role that's associated with the FSx for Lustre data repository, complete the following steps:

  1. Open the IAM console.
  2. In the navigation pane, choose Roles.
  3. Under Roles, search for an IAM role that's similar to AWSServiceRoleForFSxS3Access_fs-01234567890. This is the IAM role that's associated with the FSx for Lustre data repository.
  4. Choose the IAM role.
  5. Choose the Permissions tab to review the permissions that are associated with the role.
  6. Expand the attached customer inline policy. Then, review the policy to make sure that the role has the necessary permissions to access the S3 bucket. At minimum, this role must have the s3:ListBucket, s3:GetObject, and s3:PutObject permissions.
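The inline policy from step 6 might look similar to the following minimal sketch. The bucket name is a placeholder, and a real service role typically includes additional actions (for example, multipart upload and delete permissions):

```shell
# Write a minimal example policy to a temporary file for review. The bucket
# name amzn-s3-demo-bucket is a placeholder -- substitute your own bucket.
policy_file=$(mktemp)
cat > "$policy_file" <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
    }
  ]
}
EOF
```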

The S3 bucket policy must also allow access from the IAM role that's associated with the FSx for Lustre data repository. Check the S3 bucket policy in the Amazon S3 console and, if necessary, modify the policy to allow access from the IAM role.

Important: You must correctly configure the AWS Key Management Service (AWS KMS) permissions if either of the following conditions applies:

  • You have a cross-account setup, such as a file system and S3 bucket that are in different AWS accounts.
  • You use AWS KMS for your Amazon S3 bucket.

For more information, see Linking your file system to an S3 bucket.

Check for file locations

Confirm that the files are located within the DRA namespaces. Files that aren't in these namespaces are skipped. For example, if the DRA namespace is /ns1/dir1/, then a file such as /ns1/file.txt is skipped.
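The namespace rule can be sketched as a simple prefix check, where the DRA path and file paths are the example placeholders from above:

```shell
# Report whether a file path falls under a DRA namespace (FileSystemPath).
# Files outside every DRA namespace are skipped during export.
in_dra_namespace() {
  local dra_path="$1"   # for example /ns1/dir1/
  local file_path="$2"  # for example /ns1/file.txt
  case "$file_path" in
    "$dra_path"*) echo "exported" ;;
    *)            echo "skipped" ;;
  esac
}

in_dra_namespace /ns1/dir1/ /ns1/dir1/file.txt   # prints "exported"
in_dra_namespace /ns1/dir1/ /ns1/file.txt        # prints "skipped"
```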

AWS OFFICIAL · Updated a year ago