How can I explicitly specify the size of the output files or the number of files?


Situation: If I specify only the partition clause, the output is split into many files, each less than 1 MB in size (~40 files).

What I am thinking of: I want to explicitly specify the output file size or the number of files when registering data with CTAS or INSERT INTO.

I have read this article: https://aws.amazon.com/premiumsupport/knowledge-center/set-file-number-size-ctas-athena/

Problem: Using the bucketing method (as described in the above article) lets me specify the number of files or the file size. However, the article also says, "Note: The INSERT INTO statement isn't supported on bucketed tables." I would like to register data daily with Athena's INSERT INTO.

Question: What is the best way to build a partitioned data mart without compromising query efficiency? Is it best to register the data with Glue and save it as a single file?

Asked 2 years ago · 1702 views
1 Answer
Accepted Answer

Hello,

Yes, you are right that INSERT INTO is not yet supported for bucketed tables. For your use case, where you want to specify the number of buckets/file sizes, Athena bucketing would be appropriate, but with the drawback that you cannot use INSERT INTO to load new incoming data.
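If it helps, here is a minimal sketch of a bucketed CTAS submitted through boto3. The database, table, and column names, bucket count, Region, and S3 paths are placeholders (not from your post), so adjust them to your data:

```python
# Minimal sketch, assuming boto3 is configured; all names and paths below are placeholders.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# CTAS that partitions by date and buckets by a high-cardinality column.
# bucket_count controls how many files are written per partition, so it
# indirectly controls file size: fewer buckets -> larger files.
ctas = """
CREATE TABLE datamart.daily_sales
WITH (
    format = 'PARQUET',
    external_location = 's3://example-bucket/datamart/daily_sales/',
    partitioned_by = ARRAY['dt'],
    bucketed_by = ARRAY['customer_id'],
    bucket_count = 4
) AS
SELECT customer_id, amount, dt   -- partition column must come last
FROM source_db.raw_sales
"""

athena.start_query_execution(
    QueryString=ctas,
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-query-results/"},
)
```

Because INSERT INTO cannot target this bucketed table, you would either re-run a CTAS for new data or load into a non-bucketed table and compact the small files afterwards, as below.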

However, I can recommend using the S3DistCp utility on Amazon EMR to merge small files into ~128 MB objects to solve your small-file problem. You can use it to combine smaller files into larger objects, and you can also use S3DistCp to move large amounts of data in an optimized fashion from HDFS to Amazon S3, Amazon S3 to Amazon S3, and Amazon S3 to HDFS.
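As an illustration only, such a compaction step could be submitted to an existing EMR cluster like this; the cluster ID, paths, group pattern, and target size are assumptions you would replace:

```python
# Minimal sketch, assuming an existing EMR cluster; IDs, paths, and sizes are placeholders.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

emr.add_job_flow_steps(
    JobFlowId="j-XXXXXXXXXXXXX",  # hypothetical cluster ID
    Steps=[
        {
            "Name": "compact-small-athena-files",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": [
                    "s3-dist-cp",
                    "--src=s3://example-bucket/datamart/daily_sales/dt=2021-01-01/",
                    "--dest=s3://example-bucket/datamart_compacted/daily_sales/dt=2021-01-01/",
                    "--groupBy=.*(part-)\\d+.*",  # merge all part files matched by the capture group
                    "--targetSize=128",           # aim for ~128 MiB output files
                ],
            },
        }
    ],
)
```

A step like this can be scheduled after each daily INSERT INTO load, writing the compacted copy to a separate prefix that your data mart table points to.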

REFERENCES:

https://docs.aws.amazon.com/emr/latest/ReleaseGuide/UsingEMR_s3distcp.html

https://aws.amazon.com/blogs/big-data/seven-tips-for-using-s3distcp-on-amazon-emr-to-move-data-efficiently-between-hdfs-and-amazon-s3/

AWS
Support Engineer
Answered 2 years ago
