Hi,
To get a summary of the objects within a bucket, run the following command:
aws s3 ls --summarize --human-readable --recursive s3://<bucket-name>/
This should give an output similar to the following:
2021-10-07 21:32:57 452 Bytes foo/bar/car/petrol
2021-10-07 21:32:57 896 Bytes foo/bar/truck/diesel
2021-10-07 21:32:57 189 Bytes foo/bar/hybrid/battery
2021-10-07 21:32:57 398 Bytes vehicles.txt
Total Objects: 4
Total Size: 1.9 KiB
By comparing the number of files and the total size between your on-premises file system and S3, you can verify that everything was transferred correctly.
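For the local side of that comparison, a minimal sketch (assuming a Linux shell with GNU coreutils, and /data/export as a placeholder for the directory you uploaded) could be:

# Count the files under the local directory
find /data/export -type f | wc -l
# Total size of those files (apparent size, to match the byte counts S3 reports)
du -sh --apparent-size /data/export

You can then compare these numbers against the Total Objects and Total Size lines from the aws s3 ls output above.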
References:
[1] https://aws.amazon.com/blogs/storage/find-out-the-size-of-your-amazon-s3-buckets/
[2] https://docs.aws.amazon.com/cli/latest/reference/s3/ls.html
Thanks,
Atul
Hi,
I agree with IBAtulAnand's answer for a quick check.
A second level would be to compare a list of files by name and size on both sides (local and S3).
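As a rough sketch of that comparison (assuming a Linux shell with GNU find, object keys without spaces, and placeholder names for the bucket and local directory):

# Local side: one "size relative-path" line per file, sorted
(cd /data/export && find . -type f -printf '%s %P\n' | sort) > local-files.txt
# S3 side: in the recursive listing, column 3 is the size in bytes and column 4 is the key
aws s3 ls --recursive s3://<bucket-name>/ | awk '{print $3, $4}' | sort > s3-files.txt
# Any difference points to a missing or differently sized object
diff local-files.txt s3-files.txt

This assumes the local directory was uploaded to the root of the bucket, so that relative paths and object keys line up.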
But the most thorough check is to compute a checksum for each file (MD5, or something stronger if you prefer) on both the local side and S3, and verify that they match for a given file name. This confirms to your client that the files not only have the same size individually, but also that their content is identical bit for bit, i.e. that the transfer utility did not alter any data while keeping the size the same.
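As a sketch of that check for a single file (the bucket and key names below are placeholders): for an object uploaded in a single part and not encrypted with SSE-KMS, the S3 ETag is the MD5 hex digest of the content, so it can be compared against a local md5sum. For multipart uploads the ETag is not a plain MD5, so you would need a different approach there (for example, S3's additional checksum features).

# MD5 of the local file
md5sum /data/export/vehicles.txt
# ETag of the corresponding object (returned wrapped in double quotes)
aws s3api head-object --bucket <bucket-name> --key vehicles.txt --query ETag --output text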
Best,
Didier