To upload a file larger than 110 GB to Amazon S3 using the AWS CLI, you can use the multipart upload feature. There are some object size limits to be aware of:
- The maximum object size that can be uploaded in a single PUT operation is 5 GB. For objects larger than 100 MB, AWS recommends using multipart upload.
- The maximum size of an individual part in a multipart upload is 5 GB, so a file larger than 110 GB must be split into multiple parts that are uploaded separately.
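S3 also caps a multipart upload at 10,000 parts, so the part size and the 5 GB part limit interact. A quick back-of-the-envelope check in plain Python (illustrative only, not part of the AWS CLI) shows what part sizes work for a 110 GB file:

```python
import math

GiB = 1024 ** 3
MiB = 1024 ** 2

file_size = 110 * GiB      # the file from the question
max_parts = 10_000         # S3 limit: parts per multipart upload
max_part_size = 5 * GiB    # S3 limit: size of a single part

# Smallest part size that keeps the upload within 10,000 parts
min_part_size = math.ceil(file_size / max_parts)
print(min_part_size / MiB)   # ~11.3 MiB, far below the 5 GiB part limit

# With 64 MiB parts the upload needs a comfortable number of parts
parts_64mib = math.ceil(file_size / (64 * MiB))
print(parts_64mib)           # 1760 parts
```

Any part size between roughly 12 MiB and 5 GiB therefore works for this file; the CLI picks or adjusts one for you automatically.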
To perform a multipart upload using the AWS CLI, create the bucket (if it does not already exist) and copy the file:
aws s3 mb s3://your-bucket-name
aws s3 cp /path/to/large/file.zip s3://your-bucket-name/file.zip
- The high-level aws s3 cp command enables multipart upload automatically once the file size exceeds the CLI's multipart threshold (8 MB by default); no extra flag is needed.
- The part size is set through CLI configuration rather than a command-line option, and each part must be 5 GB or smaller. For example:
aws configure set default.s3.multipart_chunksize 64MB
While the transfer runs, aws s3 cp prints its own progress. After it completes, you can verify the object with the aws s3 ls command:
aws s3 ls s3://your-bucket-name/file.zip --human-readable --summarize
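Another way to verify the result is the object's ETag: for a multipart upload, S3 computes it as the MD5 of the concatenated per-part MD5 digests, suffixed with the part count. A minimal sketch in plain Python (the function name here is made up for illustration):

```python
import hashlib

def multipart_etag(data: bytes, part_size: int) -> str:
    """Compute the ETag S3 assigns to a multipart object:
    MD5 of the concatenated per-part MD5 digests, plus '-<part count>'."""
    digests = [
        hashlib.md5(data[i:i + part_size]).digest()
        for i in range(0, len(data), part_size)
    ]
    return hashlib.md5(b"".join(digests)).hexdigest() + f"-{len(digests)}"

# Example: a 20-byte payload with an 8-byte part size yields 3 parts
print(multipart_etag(b"x" * 20, 8))
```

Comparing such a locally computed value against the ETag reported by the service only works when you use the same part size as the original upload and the object is not encrypted with SSE-KMS.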
aws s3 cp has no resume option; if the upload is interrupted, re-running the same command restarts it from the beginning. To resume from the last successful part, use the low-level aws s3api multipart commands instead: list-parts shows which parts already succeeded, upload-part uploads the missing ones, and complete-multipart-upload finishes the object (abort-multipart-upload discards an incomplete upload).
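The resume logic behind that low-level flow can be sketched as: ask list-parts which part numbers S3 already holds, then upload only the rest. A minimal illustration in plain Python (remaining_parts is a hypothetical helper, not an AWS API):

```python
import math

def remaining_parts(file_size: int, part_size: int, completed: set[int]) -> list[int]:
    """Part numbers (1-based, as S3 numbers them) still to upload after an
    interruption, given the part numbers already reported by list-parts."""
    total = math.ceil(file_size / part_size)
    return [n for n in range(1, total + 1) if n not in completed]

# 100 MiB file with 16 MiB parts -> 7 parts; parts 1-4 already uploaded
print(remaining_parts(100 * 1024**2, 16 * 1024**2, {1, 2, 3, 4}))  # [5, 6, 7]
```

Each remaining part number is then passed to upload-part with the same upload ID until the list is empty, at which point complete-multipart-upload assembles the object.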
For more detailed information on uploading large files to Amazon S3, you can refer to the Amazon S3 documentation.
You need to use multipart upload, as there is a 5 GB limit on a single upload.
You can use the high-level aws s3 cp command instead of the low-level aws s3api put-object command to have the multipart upload done automatically.
Please refer to this link for more information on the exact usage.
For server-side encryption with customer-provided keys (SSE-C), you add the --sse-c and --sse-c-key parameters to the aws s3 cp command.