Recovering and syncing files from S3 Glacier Flexible Retrieval

1

Hello everyone.

I have two AWS accounts, account A and account B. Account A has 30 TB of data archived in S3 Glacier Flexible Retrieval (5 buckets). I need to move this data to account B into S3 Standard storage.

I'm trying this:

  1. First, restore the objects from Glacier with this command, one bucket at a time (a more robust variant is sketched after this list):
aws s3 ls s3://bucket1 --recursive | awk '{print $4}' | xargs -L 1 aws s3api restore-object --restore-request '{"Days":10,"GlacierJobParameters":{"Tier":"Standard"}}' --bucket bucket1 --key
  2. Sync the restored bucket between the accounts with this command. I've created the restored-bucket1 bucket in account B and applied all the required policies to it.
aws s3 sync s3://bucket1 s3://restored-bucket1
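
For reference, since awk '{print $4}' truncates keys that contain spaces, a more robust sketch of the restore step looks something like this (same bucket1 and Standard-tier/10-day request as above; it assumes jq is installed, and objects with a restore already in progress will return a RestoreAlreadyInProgress error that can be ignored):

aws s3api list-objects-v2 \
    --bucket bucket1 \
    --query 'Contents[].Key' \
    --output json |
jq -r '.[]' |
while IFS= read -r key; do
    # One restore request per object, with the same parameters as above
    aws s3api restore-object \
        --bucket bucket1 \
        --key "$key" \
        --restore-request '{"Days":10,"GlacierJobParameters":{"Tier":"Standard"}}'
done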

Even though the S3 console shows that the object has been restored:

Object: Amazon S3 > Buckets > Bucket1 > file1.png
Restoration status
Completed
Restoration expiry date
January 14, 2023, 21:00:00 (UTC-03:00)

I still get the error:

warning: Skipping file s3://bucket1/file1.png . Object is of storage class GLACIER. Unable to perform download operations on GLACIER objects. You must restore the object to be able to perform the operation. See aws s3 download help for additional parameter options to ignore or force these transfers.

I have all the policies that allow transfers between those accounts set up and working correctly. I can sync other data that is not in Glacier with no issues.

Could anyone help me?

3 Answers
2
Accepted Answer

The command you want is aws s3 sync s3://bucketname1 s3://bucketname2 --force-glacier-transfer --storage-class STANDARD

There is a known issue here: https://github.com/aws/aws-cli/issues/1699

Basically, when you do a restore, the CLI still thinks it's an archived object. The above command will force it to do the sync anyway, and won't initiate a new restore.
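
If you want to double-check that state before syncing, a head-object call on one of the restored objects should show it (bucket and key below are the ones from your question; the response is abridged and the exact expiry date will differ):

aws s3api head-object --bucket bucket1 --key file1.png

# Abridged response for a completed restore -- the storage class is still
# reported as GLACIER, which is why a plain "aws s3 sync" skips the object:
#   "Restore": "ongoing-request=\"false\", expiry-date=\"Sat, 14 Jan 2023 00:00:00 GMT\"",
#   "StorageClass": "GLACIER"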

micah (AWS)
Answered a year ago
Reviewed by an Expert 10 months ago
Reviewed by Matt-B (AWS, Expert) a year ago
  • Hey micah

    Thanks for the message. The article you mentioned helped me with other minor issues related to this transfer. And I used --force-glacier-transfer as you also mentioned.

2

Check this page (https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/sync.html) and experiment with the different parameters. Maybe you should provide --storage-class GLACIER or pass --force-glacier-transfer.

In general, I do not think moving 30 TB using s3 sync is the best option. If your buckets are in the same region, I would suggest trying Amazon S3 Batch Operations: you create a job, specify the source, the destination, and a few other settings, and AWS runs the job for you. I have done it a couple of times and it worked just fine.
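
If you go that route, the copy job can also be created from the CLI. This is only a rough sketch: the account ID, role ARN, manifest/report buckets and ETag below are placeholders, and the IAM role must be assumable by batchoperations.s3.amazonaws.com with read access to the source bucket and write access to the destination bucket (plus a matching bucket policy if the destination is in the other account):

aws s3control create-job \
    --account-id 111122223333 \
    --operation '{"S3PutObjectCopy":{"TargetResource":"arn:aws:s3:::restored-bucket1","StorageClass":"STANDARD"}}' \
    --manifest '{"Spec":{"Format":"S3BatchOperations_CSV_20180820","Fields":["Bucket","Key"]},"Location":{"ObjectArn":"arn:aws:s3:::manifest-bucket/manifest.csv","ETag":"example-etag"}}' \
    --report '{"Bucket":"arn:aws:s3:::report-bucket","Format":"Report_CSV_20180820","Enabled":true,"Prefix":"batch-reports","ReportScope":"FailedTasksOnly"}' \
    --priority 10 \
    --role-arn arn:aws:iam::111122223333:role/s3-batch-copy-role \
    --description "Copy restored Glacier objects to account B" \
    --no-confirmation-required

The manifest is just a CSV of bucket,key lines listing the objects to copy (an S3 Inventory report also works), and the completion report is written to the report bucket.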

Max
Answered a year ago
  • Hey MaxBorysov

    Thanks for the message. I used --force-glacier-transfer and it worked.

1

Hi there,

Can you try adding the --force-glacier-transfer option to the CLI command?

--force-glacier-transfer Forces a transfer request on all Glacier objects in a sync or recursive copy. [1]

[1] https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/sync.html

Matt-B (AWS, Expert)
Answered a year ago
  • Hey Matt-B

    Thanks for the message. I added --force-glacier-transfer and it did the trick.
