Choosing the right directory structure to optimize read throughput
We are trying to determine how to organize our S3 bucket to optimize read operations for this use case. A daily job writes a few million files (single-digit millions, each under 1 MB) to the bucket. Reads are spread throughout the day as parallel requests, each fetching a different single file. We are deciding between three options:
1. The write job creates a new directory for each daily run in the bucket, and distributes the files within that daily directory into 36 sub-directories, hashed by the last character of the alphanumeric file name (26 letters + 10 digits).
2. The bucket contains 36 top-level directories, and each of those contains a new directory for each daily run.
3. Manually create 36 partitions and find a way to randomly distribute the data across them (if at all possible).
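For illustration, option 1's key layout can be sketched as below. This is a minimal, hypothetical example (the date format, file names, and helper names are my own assumptions, not part of the question); it only shows how the last-character hash maps a file into one of the 36 sub-directories under a daily prefix.

```python
import string

# 26 lowercase letters + 10 digits = 36 possible shard directories.
ALPHANUMERIC = set(string.ascii_lowercase + string.digits)

def shard_prefix(file_name: str) -> str:
    """Return the shard directory: the last alphanumeric character, lowercased."""
    for ch in reversed(file_name):
        if ch.lower() in ALPHANUMERIC:
            return ch.lower()
    raise ValueError(f"no alphanumeric character in {file_name!r}")

def daily_key(run_date: str, file_name: str) -> str:
    """Option 1 layout: <run_date>/<shard>/<file_name>."""
    return f"{run_date}/{shard_prefix(file_name)}/{file_name}"

print(daily_key("2024-01-15", "report-a7"))  # -> 2024-01-15/7/report-a7
```

Note that S3 has no real directories, only key prefixes, so the choice between options 1 and 2 is really a choice of where the varying characters sit in the key.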