2 Answers
You cannot read just the new columns; for that you would need a columnar format such as Parquet.
Also, incremental ingestion normally refers to loading new files. For that you could use Glue job bookmarks (running a Glue job instead of Spectrum), or put new files in different folders (partitions) and tell Spectrum to load just those.
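To illustrate the partition-folder approach: each new batch of files lands in its own `col=value/` prefix in S3, and you register that prefix as a partition of the Spectrum external table so queries can target only the new data. Below is a minimal sketch; the schema, table, column, and bucket names are placeholders, not anything from the original question.

```python
from datetime import date

def add_partition_ddl(schema, table, partition_col, partition_value, s3_prefix):
    """Build the DDL that registers one new S3 folder as a Spectrum partition.

    All names (schema, table, bucket) are placeholders -- substitute your
    own external schema and S3 layout.
    """
    location = f"{s3_prefix}/{partition_col}={partition_value}/"
    return (
        f"ALTER TABLE {schema}.{table} "
        f"ADD IF NOT EXISTS PARTITION ({partition_col}='{partition_value}') "
        f"LOCATION '{location}'"
    )

# Example: register today's folder so Spectrum scans only the new files
ddl = add_partition_ddl(
    "spectrum", "events", "dt", date.today().isoformat(),
    "s3://my-bucket/events",
)
print(ddl)
```

You would run the generated statement against Redshift (for example via the Redshift Data API or a SQL client) after each new folder lands.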
Have you configured an ETL job to merge data? https://github.com/sinemozturk/INCREMENTAL-DATA-LOADING-FROM-AWS-S3-BUCKET-TO-REDSHIFT-BY-USING-AWS-GLUE-ETL-JOB
We want to explore options to load the data with an AWS Glue job without flattening the JSON, in order to reduce the billing.
How can we dynamically change the partition values so that we can automate this job?
If you mean filtering partitions, you would need to build your query with the values you need, for instance using the current date for date-related columns.
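The "build your query with the values you need" step can be sketched as a small helper that substitutes the current date into the partition predicate, so a scheduled job always scans only the most recent partitions. The schema, table, and column names below are hypothetical examples, not taken from the question.

```python
from datetime import date, timedelta

def build_incremental_query(schema, table, partition_col, days_back=1):
    """Build a Spectrum query that scans only recent date partitions.

    Because the filter is on the partition column, Spectrum reads only the
    matching S3 folders rather than the whole table. All identifiers here
    are placeholders.
    """
    since = date.today() - timedelta(days=days_back)
    return (
        f"SELECT * FROM {schema}.{table} "
        f"WHERE {partition_col} >= '{since.isoformat()}'"
    )

# A scheduled job can call this on each run to pick up only new partitions
sql = build_incremental_query("spectrum", "events", "dt")
print(sql)
```

Regenerating the SQL at run time (rather than hard-coding dates) is what makes the job automatable from a scheduler such as EventBridge or a Glue trigger.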