Delete old parquet files of overwritten Iceberg table


I am trying to write a PySpark dataframe to S3 and the AWS Glue Data Catalog in the Iceberg format, using pyspark.sql.DataFrameWriterV2 with the createOrReplace function. When I write the same dataframe twice in a row, every parquet file on S3 exists twice with slightly different names (hashes) in each partition. However, when I read the table with SQL, I get the expected number of rows, which matches the number of rows in the dataframe. Is there a way to automatically delete the overwritten/superseded parquet files?
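
For context, this is roughly the write pattern, sketched with placeholder catalog, database, table and bucket names (not my real ones) and assuming the Iceberg and AWS jars are on the Spark classpath:

    # Sketch of the write pattern described above; all names are placeholders.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .config("spark.sql.catalog.glue", "org.apache.iceberg.spark.SparkCatalog")
        .config("spark.sql.catalog.glue.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog")
        .config("spark.sql.catalog.glue.warehouse", "s3://my-bucket/warehouse/")
        .getOrCreate()
    )

    df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

    # Each createOrReplace() writes a new set of parquet files and a new table
    # snapshot; the files of the previous snapshot remain on S3.
    df.writeTo("glue.my_db.my_table").using("iceberg").createOrReplace()
    df.writeTo("glue.my_db.my_table").using("iceberg").createOrReplace()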

asked 4 months ago, 606 views
1 Answer

That's normal Iceberg behavior: it keeps the old files so you can still read the data as it was in the past.
In the configuration you can tell it when snapshots expire, or you can force expiration yourself, see: https://iceberg.apache.org/docs/latest/maintenance/ https://iceberg.apache.org/docs/latest/configuration/
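
For example, you can force it from Spark with the expire_snapshots procedure. This is a sketch assuming your Iceberg catalog is registered as "glue", the Iceberg SQL extensions are enabled, and the table/timestamp values are placeholders:

    from pyspark.sql import SparkSession

    # Reuses a session that already has the Iceberg catalog "glue" and the
    # Iceberg SQL extensions configured (placeholder names).
    spark = SparkSession.builder.getOrCreate()

    # Expire snapshots older than the given timestamp; the parquet files that
    # are no longer referenced by any remaining snapshot are deleted from S3.
    spark.sql("""
        CALL glue.system.expire_snapshots(
            table => 'my_db.my_table',
            older_than => TIMESTAMP '2023-01-01 00:00:00',
            retain_last => 1
        )
    """)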

AWS
EXPERT
answered 4 months ago
  • Thanks for your answer, Gonzalo!

    My problem is that I need to physically delete data older than 10 years due to legal regulations. The age is defined by the value of a certain column in my tables, because I processed a large chunk of old data at once, so the time of writing does not correspond to the real age of the data. Do you have a hint on how to implement that? My impression is that the snapshot expiry mechanism only works on the physical age of the objects on S3.
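
    One possible approach, sketched under the assumption that the real age is carried in a column (event_date below is a placeholder) and that the Iceberg SQL extensions are enabled: delete the affected rows by that column, then expire the superseded snapshots so the underlying files are actually removed from S3.

        from pyspark.sql import SparkSession

        # Reuses a session with the Iceberg catalog "glue" and the Iceberg SQL
        # extensions configured (placeholder names).
        spark = SparkSession.builder.getOrCreate()

        # Delete rows whose business date is older than 10 years.
        spark.sql("""
            DELETE FROM glue.my_db.my_table
            WHERE event_date < date_sub(current_date(), 365 * 10)
        """)

        # The delete only creates a new snapshot; the old files stay referenced
        # by previous snapshots until those are expired.
        spark.sql("""
            CALL glue.system.expire_snapshots(
                table => 'my_db.my_table',
                older_than => TIMESTAMP '2024-01-01 00:00:00',
                retain_last => 1
            )
        """)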
