Delete old parquet files of overwritten Iceberg table

I am trying to write a PySpark DataFrame to S3 and the AWS Glue Data Catalog in the Iceberg format, using pyspark.sql.DataFrameWriterV2 with the createOrReplace function. When I write the same DataFrame twice in a row, every partition on S3 contains all Parquet files twice, with slightly different (hashed) names. However, when I query the table with SQL, I get the expected number of rows, which matches the number of rows in the DataFrame. Is there a way to automatically delete the overwritten/superseded Parquet files?
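For reference, the write looks roughly like this (catalog, database, table, and column names are placeholders for my actual setup):

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()

# Example DataFrame; in reality this comes from my pipeline.
df = (
    spark.createDataFrame(
        [(1, "2015-03-01", "a"), (2, "2024-06-01", "b")],
        ["id", "event_date", "value"],
    )
    .withColumn("event_date", col("event_date").cast("date"))
)

# DataFrameWriterV2: createOrReplace() replaces the table contents, yet the
# Parquet files from the previous write remain on S3 in each partition.
(
    df.writeTo("glue_catalog.my_db.my_table")
    .using("iceberg")
    .partitionedBy(col("event_date"))
    .createOrReplace()
)
```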

Asked 4 months ago · 605 views

1 Answer

That's normal Iceberg behavior: it keeps the old files so you can still read the data as it was in the past (time travel).
In the table configuration you can set when snapshots expire, or you can expire them manually; see: https://iceberg.apache.org/docs/latest/maintenance/ https://iceberg.apache.org/docs/latest/configuration/
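For example, expiring snapshots removes the Parquet files that are no longer referenced by any remaining snapshot. A rough sketch, assuming the Spark session has the Iceberg SQL extensions enabled and the catalog is named glue_catalog (adjust to your own names):

```python
# Expire all snapshots older than the given timestamp, keeping at least the
# most recent one. Iceberg then deletes the data files that are no longer
# referenced by any remaining snapshot.
spark.sql("""
    CALL glue_catalog.system.expire_snapshots(
        table => 'my_db.my_table',
        older_than => TIMESTAMP '2024-01-01 00:00:00',
        retain_last => 1
    )
""")
```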

AWS
Expert
Answered 4 months ago
  • Thanks for your answer, Gonzalo!

    My problem is that I need to physically delete data older than 10 years due to legal regulations. The age is defined by the value of a certain column in my tables, because I processed a large chunk of old data at once, so the time of writing does not correspond to the real age of the data. Do you have a hint on how to implement that? My impression is that the snapshot expiration mechanism only works on the physical age of the objects on S3. (A possible approach is sketched below.)
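One possible approach, sketched here with assumed table and column names (event_date stands for the column that defines the real age of the data): first delete the affected rows by column value, then expire the old snapshots so the deleted rows and their Parquet files are physically removed from S3.

```python
from datetime import datetime, timezone

# Delete rows whose data is older than 10 years, based on a column value
# rather than the physical write time. "event_date" is a placeholder column.
spark.sql("""
    DELETE FROM glue_catalog.my_db.my_table
    WHERE event_date < add_months(current_date(), -12 * 10)
""")

# The deleted rows still exist in older snapshots, so expire everything up to
# "now", keeping only the latest snapshot. Iceberg then removes the Parquet
# files that are no longer referenced by any remaining snapshot.
cutoff = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
spark.sql(f"""
    CALL glue_catalog.system.expire_snapshots(
        table => 'my_db.my_table',
        older_than => TIMESTAMP '{cutoff}',
        retain_last => 1
    )
""")
```

Note that expiring snapshots permanently removes the ability to time travel to them, which is exactly what a legal deletion requirement calls for, but make sure no consumers still rely on the older snapshots.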
