Delete old parquet files of overwritten Iceberg table


I am trying to write a PySpark dataframe to S3 and the AWS Glue Data Catalog in the Iceberg format, using pyspark.sql.DataFrameWriterV2 with the createOrReplace function. When I write the same dataframe twice in a row, every parquet file on S3 exists twice in each partition, with slightly different names (hashes). However, when I read the table with SQL, I get the expected number of rows, which matches the number of rows in the dataframe. Is there a way to automatically delete the overwritten/superseded parquet files?
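
Roughly, the write looks like this (a minimal sketch with placeholder names; the Iceberg/Glue catalog wiring is assumed to be configured on the Spark session, and "glue", "db.my_table", the S3 path, and "event_date" are placeholders):

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# Assumes the Iceberg catalog "glue" is already configured via spark-defaults / --conf.
spark = SparkSession.builder.appName("iceberg-write").getOrCreate()

df = spark.read.parquet("s3://my-bucket/input/")  # hypothetical source data

(
    df.writeTo("glue.db.my_table")       # DataFrameWriterV2 entry point
      .using("iceberg")
      .partitionedBy(col("event_date"))  # hypothetical partition column
      .createOrReplace()                 # running this twice leaves two sets of parquet files on S3
)
```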

Asked 4 months ago · 605 views
1 Answer

That's normal Iceberg behavior: it keeps the old files as snapshots so you can still read the data as it was in the past.
In the table configuration you can tell it when snapshots should expire, or you can force expiry yourself, see: https://iceberg.apache.org/docs/latest/maintenance/ https://iceberg.apache.org/docs/latest/configuration/
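
For example, a hedged sketch of forcing expiry from PySpark with the Spark procedures described in the maintenance docs (the catalog name "glue" and table "db.my_table" are placeholders, and the cutoff timestamp is just an example):

```python
# Expire snapshots older than the cutoff; Iceberg deletes the data files that
# are no longer referenced by any remaining snapshot.
spark.sql("""
    CALL glue.system.expire_snapshots(
        table => 'db.my_table',
        older_than => TIMESTAMP '2024-01-01 00:00:00',
        retain_last => 1
    )
""")

# Files on S3 that are not referenced by any table metadata at all
# (e.g. left behind by failed writes) can be cleaned up separately:
spark.sql("CALL glue.system.remove_orphan_files(table => 'db.my_table')")
```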

AWS
Expert
Answered 4 months ago
  • Thanks for your answer, Gonzalo!

    My problem is that I need to physically delete data older than 10 years due to legal regulations. The age is defined by the value of a certain column in my tables, because I processed a large chunk of old data at once, so the time of writing does not correspond to the real age of the data. Do you have a hint on how to implement that? My impression is that the snapshot expiry mechanism only works on the physical age of the objects on S3.
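
    For illustration, one possible approach (a hedged sketch under stated assumptions, not a verified recipe): issue a row-level DELETE filtered on the business date column, then expire the snapshots so the parquet files that still contain the deleted rows are physically removed. "glue.db.my_table" and "event_date" are placeholder names.

```python
from datetime import datetime, timezone

# Delete rows whose business date is older than 10 years.
spark.sql("""
    DELETE FROM glue.db.my_table
    WHERE event_date < current_date() - INTERVAL 10 YEARS
""")

# The DELETE only produces a new snapshot; the old parquet files stay on S3
# (and remain readable via time travel) until every snapshot that references
# them has been expired.
cutoff = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
spark.sql(f"""
    CALL glue.system.expire_snapshots(
        table => 'db.my_table',
        older_than => TIMESTAMP '{cutoff}',
        retain_last => 1
    )
""")
```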
