Redshift UNLOAD parquet file size


My customer has a Redshift cluster of 2 to 4 dc2.8xlarge nodes, and they want to export data to Parquet at an optimal size of roughly 1 GB per file using the option (MAXFILESIZE AS 1GB). But the engine exported a total of about 500 MB of data split across 64 files (sizes ranging from 5 MB to 25 MB).

My question:

  1. How can we control the size per parquet file?
  2. How does Redshift determine the optimal file size?
asked 4 years ago, 1,648 views
1 Answer
Accepted Answer

By default, the UNLOAD command writes one file per slice. For a 4-node dc2.8xlarge cluster that is 64 files (4 nodes × 16 slices per node). This default keeps all slices working in parallel. When unloading to Parquet, Redshift writes data in row groups of roughly 32 MB; for smaller data volumes that do not fill a 32 MB chunk, it produces correspondingly smaller files. Writing multiple files is more efficient than writing a single one, because in the single-file case Redshift has to gather the data from the table and produce one file serially, which does not exploit the parallelism of the compute nodes.
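
Applying this to the numbers in the question: 4 nodes × 16 slices per node = 64 output files, and roughly 500 MB of data spread over 64 files averages about 8 MB per file, which is consistent with the observed 5 MB to 25 MB file sizes.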

One way to generate files of a fixed target size is to use the UNLOAD options PARALLEL OFF and MAXFILESIZE 1 GB.
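
For example, a minimal sketch of such an UNLOAD (the table name, S3 prefix, and IAM role below are placeholders, not values from the question):

    UNLOAD ('SELECT * FROM my_schema.my_table')    -- placeholder table
    TO 's3://my-bucket/unload/my_table_'           -- placeholder S3 prefix
    IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftUnloadRole'  -- placeholder role
    FORMAT AS PARQUET
    PARALLEL OFF            -- single writer instead of one file per slice
    MAXFILESIZE AS 1 GB;    -- start a new file once the current one reaches ~1 GB

Note that PARALLEL OFF gives up the per-slice parallel writes described above, so the unload itself may take longer for large tables.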

AWS
answered 4 years ago
EXPERT
verified 2 months ago
