Recovering and syncing files from S3 Glacier Flexible Retrieval

Hello everyone.

I have two AWS accounts, account A and account B. Account A has 30 TB of data archived in S3 Glacier Flexible Retrieval (5 buckets). I need to move this data to account B in S3 Standard storage.

I'm trying this:

  1. First, restore the objects from Glacier with this command (one bucket at a time):
aws s3 ls s3://bucket1 --recursive | awk '{print $4}' | xargs -L 1 aws s3api restore-object --restore-request '{"Days":10,"GlacierJobParameters":{"Tier":"Standard"}}' --bucket bucket1 --key
  2. Sync the restored bucket between the accounts with this command. I've created the restored-bucket1 bucket on account B, and it has all the required policies applied:
aws s3 sync s3://bucket1 s3://restored-bucket1
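As an aside, the awk '{print $4}' pipeline above silently truncates any key that contains spaces, because awk splits on whitespace. A more robust sketch (the bucket name and 10-day restore window are just example values) queries the keys directly:

```shell
# Sketch: restore every Glacier object in a bucket. `aws s3api
# list-objects-v2` paginates automatically in CLI v2, and --output text
# emits tab-separated keys, so splitting on tabs preserves keys that
# contain spaces.
bucket=bucket1
aws s3api list-objects-v2 --bucket "$bucket" \
    --query "Contents[?StorageClass=='GLACIER'].Key" --output text |
  tr '\t' '\n' |
  while IFS= read -r key; do
    aws s3api restore-object \
      --bucket "$bucket" \
      --key "$key" \
      --restore-request '{"Days":10,"GlacierJobParameters":{"Tier":"Standard"}}'
  done
```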

Even though the S3 console shows that the object has been restored:

Object: Amazon S3 > Buckets > Bucket1 > file1.png
Restoration status
Completed
Restoration expiry date
January 14, 2023, 21:00:00 (UTC-03:00)

I still get the error:

warning: Skipping file s3://bucket1/file1.png. Object is of storage class GLACIER. Unable to perform download operations on GLACIER objects. You must restore the object to be able to perform the operation. See aws s3 download help for additional parameter options to ignore or force these transfers.

I have all the policies that allow file transfers between those accounts set up and working. I can sync other data that is not in Glacier with no issues.

Could anyone help me?

3 Answers
Accepted Answer

The command you want is aws s3 sync s3://bucketname1 s3://bucketname2 --force-glacier-transfer --storage-class STANDARD

There is a known issue here: https://github.com/aws/aws-cli/issues/1699

Basically, even after you do a restore, the CLI still sees the object's storage class as GLACIER. The command above forces the sync anyway and won't initiate a new restore.
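As a sanity check before syncing, the restore status can be read from the object's Restore metadata. A sketch, with bucket and key names taken from the question as examples:

```shell
# Sketch: verify the temporary restored copy is ready before syncing.
# head-object exposes the x-amz-restore header as the "Restore" field;
# ongoing-request="false" means the restore has completed.
restore=$(aws s3api head-object --bucket bucket1 --key file1.png \
            --query Restore --output text)
case "$restore" in
  *'ongoing-request="false"'*)
    aws s3 sync s3://bucket1 s3://restored-bucket1 \
      --force-glacier-transfer --storage-class STANDARD ;;
  *)
    echo "restore still in progress or not requested: $restore" ;;
esac
```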

micah (AWS)
answered a year ago
reviewed 10 months ago by an Expert; reviewed a year ago by Matt-B (AWS Expert)
  • Hey micah,

    Thanks for the message. The issue you linked helped me with other minor problems related to this transfer, and I used --force-glacier-transfer as you mentioned.


Try checking this page (https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/sync.html) and experimenting with different parameters. Maybe you should provide --storage-class GLACIER or pass --force-glacier-transfer.

In general, I do not think moving 30 TB using s3 sync is the best option. If your buckets are in the same Region, I would suggest trying Amazon S3 Batch Operations: you create a job, specify the source, destination, and maybe something else, and AWS runs the job for you. I have done it a couple of times and it worked just fine.
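For what it's worth, a Batch Operations copy job created from the CLI might look roughly like the sketch below. The account ID, role ARN, manifest location, ETag, and bucket names are all placeholders; the manifest (a CSV of bucket,key pairs) would typically come from an S3 Inventory report or be generated from a listing.

```shell
# Sketch of an S3 Batch Operations copy job (all identifiers are
# placeholders). The job copies every object listed in the manifest
# to the target bucket as STANDARD storage and writes a completion
# report for all tasks.
aws s3control create-job \
  --account-id 111122223333 \
  --operation '{"S3PutObjectCopy":{"TargetResource":"arn:aws:s3:::restored-bucket1","StorageClass":"STANDARD"}}' \
  --manifest '{"Spec":{"Format":"S3BatchOperations_CSV_20180820","Fields":["Bucket","Key"]},"Location":{"ObjectArn":"arn:aws:s3:::manifest-bucket/manifest.csv","ETag":"example-etag"}}' \
  --report '{"Bucket":"arn:aws:s3:::report-bucket","Format":"Report_CSV_20180820","Enabled":true,"ReportScope":"AllTasks"}' \
  --priority 10 \
  --role-arn arn:aws:iam::111122223333:role/batch-ops-role
```

Note that restoring the objects from Glacier first is still required; Batch Operations replaces only the sync step.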

Max
answered a year ago
  • Hey MaxBorysov,

    Thanks for the message. I used --force-glacier-transfer and it worked.


Hi there,

Can you try adding the --force-glacier-transfer option to the CLI command?

--force-glacier-transfer Forces a transfer request on all Glacier objects in a sync or recursive copy. [1]

[1] https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/sync.html

Matt-B (AWS Expert)
answered a year ago
  • Hey Matt-B,

    Thanks for the message. I added --force-glacier-transfer and it did the trick.
