How can I explicitly specify the size of the output files, or the number of files, when writing data with Athena?


Situation: If I specify only the partition clause, the data is split into many small files. Each file is less than 1 MB (~40 files).

What I am thinking of: I want to explicitly specify the output file size or the number of files when loading data with CTAS or INSERT INTO.
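For context, the daily load I have in mind looks roughly like this (a minimal sketch; the table, column, and partition names are placeholders, not my real schema):

INSERT INTO datamart_sales
SELECT customer_id, amount, dt      -- partition column (dt) listed last
FROM staging_sales
WHERE dt = '2021-06-01';            -- one partition loaded per day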

I have read this article: https://aws.amazon.com/premiumsupport/knowledge-center/set-file-number-size-ctas-athena/

Problem: Using the bucketing method (as described in the article above) would let me specify the number of files or the file size. However, the article also says, "Note: The INSERT INTO statement isn't supported on bucketed tables." I would like to load data daily with Athena's INSERT INTO.
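For reference, the bucketed CTAS described in the article would look roughly like this (a sketch only; the table, column, and S3 location names are placeholders):

CREATE TABLE datamart_sales_bucketed
WITH (
  format = 'PARQUET',
  external_location = 's3://my-bucket/datamart_sales_bucketed/',
  partitioned_by = ARRAY['dt'],          -- partition columns must come last in the SELECT
  bucketed_by = ARRAY['customer_id'],    -- column used to distribute rows into buckets
  bucket_count = 5                       -- roughly controls the number of files per partition
) AS
SELECT customer_id, amount, dt
FROM staging_sales;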

Question: What is the best way to build a partitioned data mart without compromising query efficiency? Is it best to load the data with Glue and save it as a single file?

Asked 2 years ago · 1701 views

1 Answer

Accepted Answer

Hello,

Yes, you are right that INSERT INTO is not yet supported on bucketed tables. For your use case of specifying the number of files or the file size, Athena bucketing would be appropriate, but with the drawback that you cannot use INSERT INTO to load new incoming data.

However, I can recommend using the S3DistCp utility on Amazon EMR to merge small files into objects of roughly 128 MB, which solves your small-file problem. You can use it to combine smaller files into larger objects. You can also use S3DistCp to move large amounts of data in an optimized fashion from HDFS to Amazon S3, Amazon S3 to Amazon S3, and Amazon S3 to HDFS.
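For example, a compaction step run on an EMR cluster could look roughly like this (a sketch only; the bucket, paths, and grouping regex are placeholders, and it assumes output file names like part-0001 in a format that can safely be concatenated):

s3-dist-cp \
  --src  s3://my-bucket/datamart_sales/dt=2021-06-01/ \
  --dest s3://my-bucket/datamart_sales_merged/dt=2021-06-01/ \
  --groupBy '.*(part-).*' \
  --targetSize=128          # target output size in MiB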

REFERENCES:

https://docs.aws.amazon.com/emr/latest/ReleaseGuide/UsingEMR_s3distcp.html

https://aws.amazon.com/blogs/big-data/seven-tips-for-using-s3distcp-on-amazon-emr-to-move-data-efficiently-between-hdfs-and-amazon-s3/

AWS Support Engineer
Answered 2 years ago
