AWS Glue Job - Job Bookmark - Read from JDBC RDS Postgres instance and write Parquet files into S3 on a daily basis


Hi. I'm developing a pipeline in AWS using AWS Glue.

Context: I'm extracting data from an AWS RDS Postgres instance. This instance is the production database of a mobile app (OLTP).

Goal: Extract both historical and incremental data. I'm setting up Glue PySpark jobs to extract not only the historical data but also the daily deltas, writing Parquet files into daily folders in an S3 bucket.

So far: I've already scheduled a Crawler that updates the mapped tables and schemas weekly.
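
For reference, here's roughly what the job skeleton looks like so far; the database, table, and bucket names below are placeholders:

```python
import sys
from datetime import date

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glueContext = GlueContext(sc)
job = Job(glueContext)
job.init(args["JOB_NAME"], args)  # run with --job-bookmark-option job-bookmark-enable

# Read the table the crawler discovered; transformation_ctx is the name
# the bookmark state is tracked against.
dyf = glueContext.create_dynamic_frame.from_catalog(
    database="my_rds_db",        # placeholder catalog database
    table_name="public_users",   # placeholder table
    transformation_ctx="dyf",
)

# Write Parquet into a daily folder on S3.
glueContext.write_dynamic_frame.from_options(
    frame=dyf,
    connection_type="s3",
    connection_options={"path": f"s3://my-bucket/users/dt={date.today().isoformat()}/"},
    format="parquet",
)

job.commit()  # persists the bookmark state for the next run
```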

Most of the RDS tables have these 3 fields:

"id" "created_at" "updated_at"

The app's backend inserts new rows into RDS ("created_at" == "updated_at"), but it also updates previously inserted rows without changing the ID (so "created_at" < "updated_at").

On the first job run with the bookmark enabled, it will grab the historical data. On the second run (let's say T+1), will the bookmark catch the updated rows?

Besides this, is there any other consideration or advice worth sharing?

Thank you in advance.

1 Answer

By default the bookmark keys on the primary key, so it won't detect updates. But if you always set the updated_at column to the current timestamp when making changes, and you specify updated_at as jobBookmarkKeys (see the documentation), then the next run will retrieve the updated rows as well as the new ones.
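
As a minimal sketch of what that looks like in the job script (the database and table names here are placeholders):

```python
# Pass the bookmark keys as additional_options on the catalog read.
# With updated_at as the key, rows whose updated_at is greater than the
# last committed bookmark value are picked up on the next run.
dyf = glueContext.create_dynamic_frame.from_catalog(
    database="my_rds_db",        # placeholder
    table_name="public_users",   # placeholder
    transformation_ctx="dyf",
    additional_options={
        "jobBookmarkKeys": ["updated_at"],
        "jobBookmarkKeysSortOrder": "asc",
    },
)
```

Note that user-defined bookmark keys are expected to change monotonically (in the direction given by jobBookmarkKeysSortOrder), so this relies on updated_at always being set on every insert and update.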

AWS EXPERT
answered 1 year ago
