Check out: https://docs.aws.amazon.com/emr/latest/ReleaseGuide/UsingEMR_s3distcp.html
Using S3DistCp, you can efficiently copy large amounts of data from Amazon S3 into HDFS, where it can be processed by subsequent steps in your Amazon EMR cluster. You can also use S3DistCp to copy data between Amazon S3 buckets or from HDFS to Amazon S3. S3DistCp is more scalable and efficient than plain copies when moving large numbers of objects in parallel across buckets and across AWS accounts.
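On an existing EMR cluster, S3DistCp is typically submitted as a step via command-runner.jar. A minimal sketch — the cluster ID and bucket names below are placeholders you would replace with your own:

```bash
# Submit S3DistCp as an EMR step (cluster ID and bucket names are placeholders)
aws emr add-steps \
  --cluster-id j-XXXXXXXXXXXXX \
  --steps 'Type=CUSTOM_JAR,Name=S3DistCpStep,Jar=command-runner.jar,Args=[s3-dist-cp,--src,s3://source-bucket/data/,--dest,s3://destination-bucket/data/]'
```

The step runs on the cluster itself, so the copy parallelizes across the cluster's nodes rather than a single machine.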
You can parallelize your copying process with the AWS CLI (aws s3 sync) using --exclude and --include filters. For example, if all your files are in one folder, you might split the work by the first letter of the filename, or some other scheme you know will divide the objects into roughly even parts.
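As a rough sketch of that split — the bucket names are placeholders, and the character-class patterns assume the AWS CLI's fnmatch-style filter rules, where a later --include overrides the initial --exclude:

```bash
# Two sync processes working on disjoint slices of the same bucket,
# split by the first character of the key name
aws s3 sync s3://my-bucket/ s3://my-other-bucket/ --exclude "*" --include "[a-m]*" &
aws s3 sync s3://my-bucket/ s3://my-other-bucket/ --exclude "*" --include "[n-z]*" &
wait  # block until both background syncs finish
```

Because each process only matches keys starting in its own letter range, the two never copy the same object twice.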
Performance is better when the key prefixes are well distributed, so throughput tends to be higher when all files sit in the "top folder" rather than under a single subfolder.
I used this method to transfer 280,000 images for someone, and as I recall each aws s3 sync process needed about a third of a CPU, so a 4-core server could run around 10 processes in parallel.