Redshift UNLOAD parquet file size

My customer has a Redshift cluster of 2-4 dc2.8xlarge nodes, and they want to export data to Parquet at the optimal size (~1 GB per file) using the MAXFILESIZE AS 1GB option. But the engine somehow exported a total of 500 MB of data split across 64 files (each ranging from 5 MB to 25 MB).

My questions:

  1. How can we control the size per parquet file?
  2. How does Redshift determine the optimal file size?
asked 4 years ago · 1,648 views
1 Answer
Accepted Answer

In its default configuration, the UNLOAD command writes one file per slice. A 4-node dc2.8xlarge cluster has 64 slices (4 nodes × 16 slices per node), which is why 64 files were produced; this default keeps all slices working in parallel. When unloading in Parquet format, Redshift aims for row groups of about 32 MB, so when the total data volume is small, each slice writes a file well below that. Note that MAXFILESIZE is an upper bound on file size, not a target, so it cannot make small files larger. Multiple files are also more effective than a single file: to produce a single file, Redshift must combine the data from the table and write it through one stream, which is less effective for the parallel compute nodes.
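For illustration, here is a minimal sketch of the default parallel unload described above; the table name, S3 prefix, and IAM role ARN are placeholders, not values from the question:

    UNLOAD ('SELECT * FROM my_table')
    TO 's3://my-bucket/exports/my_table_'
    IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
    FORMAT AS PARQUET
    MAXFILESIZE AS 1 GB;
    -- Placeholder names. On a 4-node dc2.8xlarge cluster this writes one file
    -- per slice (64 files); with ~500 MB of data that is roughly 500 / 64 ≈ 8 MB
    -- per file on average, which matches the 5-25 MB range observed.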

One solution to generate files of a roughly fixed size is to combine the UNLOAD options PARALLEL OFF and MAXFILESIZE 1 GB, as sketched below.
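A minimal sketch of that combination, again with placeholder names; with PARALLEL OFF, Redshift writes the data serially and starts a new file each time the current one reaches the MAXFILESIZE limit:

    UNLOAD ('SELECT * FROM my_table')
    TO 's3://my-bucket/exports/my_table_'
    IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
    FORMAT AS PARQUET
    PARALLEL OFF
    MAXFILESIZE AS 1 GB;
    -- Placeholder names. Files roll over at ~1 GB, so every file except the
    -- last is close to the target size.

The trade-off is speed: a serial unload writes through a single stream, so it is slower than the default parallel unload for large tables.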

answered 4 years ago
