Recovering and syncing files from S3 Glacier Flexible Retrieval


Hello everyone.

I have two AWS accounts, Account A and Account B. Account A has 30 TB of data archived in S3 Glacier Flexible Retrieval (5 buckets). I need to move this data to Account B in S3 Standard storage.

I'm trying this:

  1. First, restore the objects from Glacier with this command (one bucket at a time):
aws s3 ls s3://bucket1 --recursive | awk '{print $4}' | xargs -L 1 aws s3api restore-object --restore-request '{"Days":10,"GlacierJobParameters":{"Tier":"Standard"}}' --bucket bucket1 --key
  2. Sync the restored data between the accounts with this command. I've created the restored-bucket1 bucket on Account B and applied all the policies it needs.
aws s3 sync s3://bucket1 s3://restored-bucket1
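As an aside, the `awk '{print $4}'` pipeline in step 1 truncates any key that contains spaces. A sketch of a more robust restore loop, assuming the bucket name `bucket1` from the question and listing keys via `list-objects-v2` instead of parsing `aws s3 ls` output:

```shell
# Restore every Glacier object in the bucket; handles keys containing spaces.
BUCKET=bucket1   # source bucket from the question

aws s3api list-objects-v2 --bucket "$BUCKET" \
  --query 'Contents[?StorageClass==`GLACIER`].Key' --output text |
tr '\t' '\n' |
while IFS= read -r key; do
  aws s3api restore-object \
    --bucket "$BUCKET" \
    --key "$key" \
    --restore-request '{"Days":10,"GlacierJobParameters":{"Tier":"Standard"}}'
done
```

The CLI paginates `list-objects-v2` automatically, so this also works for buckets with more than 1,000 objects.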

Even though the S3 console shows that the information has been restored:

Object: Amazon S3 > Buckets > Bucket1 > file1.png
Restoration status
Completed
Restoration expiry date
January 14, 2023, 21:00:00 (UTC-03:00)

I still get the error:

warning: Skipping file s3://bucket1/file1.png . Object is of storage class GLACIER. Unable to perform download operations on GLACIER objects. You must restore the object to be able to perform the operation. See aws s3 download help for additional parameter options to ignore or force these transfers.

I have all the policies that allow file transfer between those accounts set up and running ok. I can sync other information that is not on Glacier with no issues.

Could anyone help me?

3 Answers

Accepted Answer

The command you want is: aws s3 sync s3://bucketname1 s3://bucketname2 --force-glacier-transfer --storage-class STANDARD

There is a known issue here: https://github.com/aws/aws-cli/issues/1699

Basically, even after a restore completes, the object's storage class still reads GLACIER, so the CLI thinks it's an archived object. The command above forces the sync anyway and won't initiate a new restore.
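You can see what the CLI sees by inspecting the object directly; once a restore completes, head-object reports a Restore field while the storage class stays GLACIER (bucket and key taken from the question):

```shell
# Check restore state without downloading the object.
aws s3api head-object --bucket bucket1 --key file1.png
# For a restored-but-still-archived object, the response includes
# something like:
#   "Restore": "ongoing-request=\"false\", expiry-date=\"...\"",
#   "StorageClass": "GLACIER"
```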

micah (AWS)
Answered 1 year ago
Reviewed by an Expert 10 months ago
Reviewed by Matt-B (AWS Expert) 1 year ago
  • Hey micah

    Thanks for the message. The article you mentioned helped me with other minor issues related to this transfer, and I used --force-glacier-transfer as you also suggested.


Try checking this page (https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/sync.html) and experimenting with different parameters. Maybe you should provide --storage-class GLACIER or pass --force-glacier-transfer.

In general, I don't think moving 30 TB using s3 sync is the best option. If your buckets are in the same region, I would suggest trying Amazon S3 Batch Operations. You create a job, specify the source, destination, and a few other settings, and AWS runs the job for you. I've done it a couple of times and it worked just fine.
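A rough sketch of the Batch Operations route via the CLI. The account ID, role name, manifest location, and ETag below are all placeholders; the manifest is a CSV of bucket,key lines you would generate first, and the IAM role must allow the copy:

```shell
# Hypothetical values -- substitute your own account ID, role, and manifest.
ACCOUNT_ID=111122223333
ROLE_ARN=arn:aws:iam::${ACCOUNT_ID}:role/batch-ops-copy-role   # assumed role name

aws s3control create-job \
  --account-id "$ACCOUNT_ID" \
  --operation '{"S3PutObjectCopy":{"TargetResource":"arn:aws:s3:::restored-bucket1","StorageClass":"STANDARD"}}' \
  --manifest '{"Spec":{"Format":"S3BatchOperations_CSV_20180820","Fields":["Bucket","Key"]},"Location":{"ObjectArn":"arn:aws:s3:::bucket1/manifest.csv","ETag":"MANIFEST_ETAG"}}' \
  --report '{"Bucket":"arn:aws:s3:::restored-bucket1","Format":"Report_CSV_20180820","Enabled":true,"ReportScope":"AllTasks","Prefix":"batch-report"}' \
  --priority 10 \
  --role-arn "$ROLE_ARN" \
  --no-confirmation-required
```

The completion report written under batch-report/ lets you verify every object was copied before letting the restores expire.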

Max
Answered 1 year ago
  • Hey MaxBorysov

    Thanks for the message. I used --force-glacier-transfer and it worked.


Hi there,

Can you try adding the --force-glacier-transfer option to the CLI command?

--force-glacier-transfer Forces a transfer request on all Glacier objects in a sync or recursive copy. [1]

[1] https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/sync.html

Matt-B (AWS Expert)
Answered 1 year ago
  • Hey Matt-B

    Thanks for the message. I added --force-glacier-transfer and it did the trick.
