1 Answer
That number is normally larger than the final output (e.g. 2x), because shuffle data is compressed row by row, while Parquet's columnar compression is much more efficient.
A gap this big must mean your data has many columns with repeated values. See the sketch below for an illustration.
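As a rough illustration, here is a minimal PySpark sketch (bucket, paths, and compression choices are hypothetical) that writes the same DataFrame in both formats so you can compare the resulting sizes on S3. Columns with many repeated or null values encode very compactly in Parquet thanks to dictionary and run-length encoding:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("format-size-comparison").getOrCreate()

# Hypothetical source; substitute your own table or path.
df = spark.read.parquet("s3://my-bucket/source/")

# Row-oriented output: every value is serialized per row, so gzip
# compresses each record's mixed values together.
df.write.mode("overwrite").option("compression", "gzip").csv("s3://my-bucket/out-csv/")

# Columnar output: values of one column are stored together, so repeated
# and null values collapse via dictionary/RLE encoding before compression.
df.write.mode("overwrite").option("compression", "snappy").parquet("s3://my-bucket/out-parquet/")
```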
Can you please explain a bit more? I don't have any repeated values as such, but a few values are null.
How can we optimize this? Without repartition, I tried writing to S3 in CSV format and it came to 500 GB of data.
Avoid the shuffle if you can; otherwise don't worry too much about the amount, since the transfer is quite fast. One way to avoid it is sketched below.
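If the repartition is only there to control the number or size of output files, a shuffle-free alternative is coalesce, which merges existing partitions instead of redistributing every row. A sketch, with partition count and paths as placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.read.parquet("s3://my-bucket/source/")  # hypothetical input

# repartition(n) shuffles the entire dataset across the cluster;
# coalesce(n) just merges existing partitions on each executor, so there
# is no shuffle write, at the cost of possibly uneven output file sizes.
df.coalesce(200).write.mode("overwrite").parquet("s3://my-bucket/out/")
```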
The data is skewed, so I am using repartition to distribute it evenly, which is resulting in huge shuffle writes. Even without repartition it takes around 1 hour to complete with G.2X workers and 60 DPUs.
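If skew is the underlying problem, one alternative to a manual repartition is Spark 3's adaptive query execution (available on Glue 3.0 and later), which can split oversized shuffle partitions at runtime. A sketch of the relevant settings, leaving the size thresholds at their defaults:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    # Re-plan stages at runtime based on actual shuffle statistics.
    .config("spark.sql.adaptive.enabled", "true")
    # Split skewed shuffle partitions automatically during joins.
    .config("spark.sql.adaptive.skewJoin.enabled", "true")
    # Merge small shuffle partitions instead of hand-tuning repartition(n).
    .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
    .getOrCreate()
)
```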
The Parquet data is being read directly from Glue Catalog tables.
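For context, reading such a table inside a Glue job typically looks like the following (the database and table names are placeholders):

```python
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Hypothetical catalog entries; the table's location points at Parquet files on S3.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="my_database",
    table_name="my_table",
)
df = dyf.toDF()  # convert to a Spark DataFrame for the repartition/write steps
```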