Hi. I'm developing a data pipeline in AWS using AWS Glue.
Context:
I'm extracting data from an AWS RDS PostgreSQL instance. This instance is the production database of a mobile app (OLTP).
Goal:
Extract historical and incremental data: set up Glue PySpark jobs that extract not only the historical data but also the "daily deltas", writing Parquet files into daily folders on an S3 bucket.
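For reference, this is roughly the job skeleton I have in mind. All names here (my_catalog_db, my_table, my-bucket) are placeholders, so treat it as a sketch rather than the actual job:

```python
import sys
from datetime import date

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the table through the Data Catalog; transformation_ctx is what lets
# job bookmarks track state for this particular read.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="my_catalog_db",   # placeholder
    table_name="my_table",      # placeholder
    transformation_ctx="read_my_table",
)

# Write Parquet into a daily folder, e.g. s3://my-bucket/my_table/2024-01-31/
output_path = f"s3://my-bucket/my_table/{date.today().isoformat()}/"
glue_context.write_dynamic_frame.from_options(
    frame=dyf,
    connection_type="s3",
    connection_options={"path": output_path},
    format="parquet",
    transformation_ctx="write_my_table",
)

# Commit so the bookmark state is persisted for the next run.
job.commit()
```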
Up to now:
I've already scheduled a Crawler; it maps the tables and updates the schema weekly.
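For completeness, the crawler setup looks roughly like this, expressed in boto3 for illustration (the connection name, database name, role ARN, and JDBC path are all placeholders):

```python
import boto3

glue = boto3.client("glue")

# Placeholder names throughout; the real setup was done via the console.
glue.create_crawler(
    Name="rds-weekly-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",
    DatabaseName="my_catalog_db",
    Targets={
        "JdbcTargets": [
            {"ConnectionName": "my-rds-connection", "Path": "appdb/public/%"}
        ]
    },
    # Run every Monday at 03:00 UTC.
    Schedule="cron(0 3 ? * MON *)",
)
```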
Most of the RDS tables have these 3 fields:
"id"
"created_at"
"updated_at"
The app's backend inserts new rows into RDS (so "created_at" == "updated_at"), but it also updates previously inserted rows in place without changing the "id" (so "created_at" < "updated_at").
On the first job run with bookmarks enabled, the job will grab the historical data. On the second run (let's say T+1), will the bookmark catch the updated rows as well as the newly inserted ones?
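In case it's relevant: I've seen that the read can be pointed at a specific bookmark column through additional_options, so I'm wondering whether something like the following (keying the bookmark on "updated_at") would be the way to pick up updates. Same placeholder names as above:

```python
# glue_context as in the job skeleton above; database/table names are
# placeholders. jobBookmarkKeys / jobBookmarkKeysSortOrder are the documented
# options for overriding the default bookmark column on a JDBC source.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="my_catalog_db",
    table_name="my_table",
    transformation_ctx="read_my_table",
    additional_options={
        "jobBookmarkKeys": ["updated_at"],
        "jobBookmarkKeysSortOrder": "asc",
    },
)
```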
Besides this, are there any other considerations or advice worth sharing?
Thank you in advance.