
How can I use the AWS CLI to restore an Amazon S3 object from the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage class?


I archived an Amazon Simple Storage Service (Amazon S3) object to the Amazon S3 Glacier Flexible Retrieval or Amazon S3 Glacier Deep Archive storage class. I want to use the AWS Command Line Interface (AWS CLI) to restore the object.

Resolution

Note: If you receive errors when you run AWS Command Line Interface (AWS CLI) commands, then see Troubleshooting errors for the AWS CLI. Also, make sure that you're using the most recent version of the AWS CLI.

Initiate a restore request

Run the following restore-object command:

aws s3api restore-object --bucket awsexamplebucket --key dir1/example.obj --restore-request '{"Days":25,"GlacierJobParameters":{"Tier":"Standard"}}'

Note: Replace the example values with your bucket, object, and restore request values.

Because data retrieval charges are based on the amount of data retrieved and the number of requests, confirm that the parameters of your restore request are correct before you run the command.

The retrieval request creates a temporary copy of your object in the S3 Standard storage class and keeps the archived object. The preceding example command requests to restore the object for 25 days.

You can make the following modifications to the preceding command:

  • To restore a specific object version in a versioned bucket, include the --version-id option, and then specify the version ID.
  • For the S3 Glacier Flexible Retrieval storage class, you can use the Expedited, Standard, or Bulk retrieval options. For the S3 Glacier Deep Archive storage class, you can use only the Standard or Bulk retrieval options.
  • If the JSON syntax results in an error on a Windows client, then replace the restore request with the following syntax:
    --restore-request Days=25,GlacierJobParameters={"Tier"="Standard"}

Note: For objects that you store in S3 Glacier Instant Retrieval, the data retrieval is immediate and you don't need to use the restore operation. For more information, see Amazon S3 storage classes.
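Putting the preceding options together, the following sketch restores a specific object version with the Bulk retrieval tier. The bucket name, key, and version ID are placeholder values; replace them with your own.

```shell
#!/bin/sh
# Placeholder values -- replace with your own bucket, key, and version ID.
BUCKET="awsexamplebucket"
KEY="dir1/example.obj"
VERSION_ID="EXAMPLEVERSIONID"   # hypothetical version ID

# Bulk is the lowest-cost retrieval tier and is supported by both the
# S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage classes.
RESTORE_REQUEST='{"Days":10,"GlacierJobParameters":{"Tier":"Bulk"}}'

# Run the restore only if the AWS CLI is installed:
if command -v aws >/dev/null 2>&1; then
  aws s3api restore-object \
    --bucket "$BUCKET" \
    --key "$KEY" \
    --version-id "$VERSION_ID" \
    --restore-request "$RESTORE_REQUEST"
fi
```

Keeping the restore request in a variable makes it easier to reuse the same parameters across many objects, for example in a loop over the output of list-objects-v2.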

Monitor the status of your restore request

Run the following head-object command:

aws s3api head-object --bucket awsexamplebucket --key dir1/example.obj

If the restore is still in progress after you run the command, then you receive a response that's similar to the following message:

{  
    "Restore": "ongoing-request=\"true\"",  
    ...  
    "StorageClass": "GLACIER | DEEP_ARCHIVE",  
    "Metadata": {}  
}

After the restore is complete, you receive a response that's similar to the following message:

{  
    "Restore": "ongoing-request=\"false\", expiry-date=\"Sun, 13 Aug 2017 00:00:00 GMT\"",  
    ...  
    "StorageClass": "GLACIER | DEEP_ARCHIVE",  
    "Metadata": {}  
}

In the response that you receive after the restore is complete, note the expiry-date value. You can access the temporary object in the S3 Standard storage class until that date. The temporary copy exists alongside the archived object, which remains in the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage class. After the expiry date passes, Amazon S3 removes the temporary object. To keep the object in S3 Standard, change the object's storage class before the temporary copy expires. If the expiry date has already passed, then initiate a new restore request before you change the storage class.
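The status check can be scripted by parsing the Restore field of the head-object response. The following sketch hard-codes a sample response so that the parsing logic is shown without calling Amazon S3; in practice, you would capture the output of the head-object command shown above.

```shell
#!/bin/sh
# Sketch: decide whether a restore has finished by inspecting the "Restore"
# field of head-object output. The sample response below is hard-coded.
restore_done() {
  # $1 is the JSON output of `aws s3api head-object`.
  # The Restore field contains a literal \" around true/false.
  case "$1" in
    *'ongoing-request=\"false\"'*) return 0 ;;  # restore complete
    *) return 1 ;;                              # in progress, or no Restore field
  esac
}

# In practice, replace this sample with:
#   SAMPLE=$(aws s3api head-object --bucket awsexamplebucket --key dir1/example.obj)
SAMPLE='{ "Restore": "ongoing-request=\"false\", expiry-date=\"Sun, 13 Aug 2017 00:00:00 GMT\"" }'

if restore_done "$SAMPLE"; then
  echo "restore complete"
else
  echo "restore still in progress"
fi
```

You can call a function like this in a polling loop with a sleep interval, because S3 Glacier restores can take minutes to hours depending on the retrieval tier.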

Change the object's storage class to Amazon S3 Standard

To change the object's storage class to Amazon S3 Standard, use copy. You can overwrite the object, or copy the object into another location.

Warning: If you're using version 1.x of the AWS CLI, then set the multipart threshold to 5 GB before you copy an object. Otherwise, you lose the object's user metadata when the object is larger than the AWS CLI's multipart threshold. To keep user metadata for objects that are larger than 5 GB, use version 2.x of the AWS CLI.

(Optional) To increase the multipart threshold of the AWS CLI, run the following configure command:

aws configure set default.s3.multipart_threshold 5GB

To overwrite the object with the Amazon S3 Standard storage class, run the following cp command:

aws s3 cp s3://awsexamplebucket/dir1/example.obj s3://awsexamplebucket/dir1/example.obj --storage-class STANDARD

To perform a recursive copy for an entire prefix and overwrite objects with the Amazon S3 Standard storage class, run the following cp command:

aws s3 cp s3://awsexamplebucket/dir1/ s3://awsexamplebucket/dir1/ --storage-class STANDARD --recursive --force-glacier-transfer

Note: Objects that you archive to S3 Glacier Flexible Retrieval have a minimum storage duration of 90 days. Objects that you archive to S3 Glacier Deep Archive have a minimum storage duration of 180 days. If you overwrite objects in either storage class before the minimum storage duration elapses, then you're charged for the entire minimum duration.

To copy the object into another location, run the following cp command:

aws s3 cp s3://awsexamplebucket/dir1/example.obj s3://awsexamplebucket/dir2/example2.obj

Note: For suspended buckets or buckets with versioning turned on, the preceding step creates additional copies of objects. The additional objects also incur storage costs. To avoid storage costs, remove the non-current versions that are still in the Amazon S3 Glacier storage class. Or, create an S3 Lifecycle expiration rule.
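As a sketch of the lifecycle approach mentioned in the preceding note, the following configuration permanently removes noncurrent versions 30 days after they become noncurrent. The rule ID, prefix, and the 30-day value are placeholder choices, not recommendations from this article.

```shell
#!/bin/sh
# Sketch: a lifecycle configuration that expires noncurrent object versions
# so that the extra copies created by the copy step don't accrue storage
# costs indefinitely. Rule ID, prefix, and day count are placeholders.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-noncurrent-dir1",
      "Filter": { "Prefix": "dir1/" },
      "Status": "Enabled",
      "NoncurrentVersionExpiration": { "NoncurrentDays": 30 }
    }
  ]
}
EOF

# Apply the configuration only if the AWS CLI is installed:
if command -v aws >/dev/null 2>&1; then
  aws s3api put-bucket-lifecycle-configuration \
    --bucket awsexamplebucket \
    --lifecycle-configuration file://lifecycle.json
fi
```

Note that put-bucket-lifecycle-configuration replaces the bucket's entire existing lifecycle configuration, so merge this rule with any rules that the bucket already has before you apply it.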

Related information

How do I restore a large volume of Amazon S3 objects that are in the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage class?

How do I use the restore tiers in the Amazon S3 console to restore archived objects from Amazon S3 Glacier storage class?

Restoring an archived object
