1 Answer
Not sure how big the data set is, but here are some options:
- A simple Lambda plus an S3 UNLOAD from a Redshift scheduled query can do it: unload the data to S3, then load from S3 into DynamoDB. This blog has sample code for the bulk CSV ingestion step: https://aws.amazon.com/blogs/database/implementing-bulk-csv-ingestion-to-amazon-dynamodb/. You could also use AWS Data Pipeline. (See the Lambda sketch after this list.)
- You can also use the Glue connection type for DynamoDB ("connectionType": "dynamodb" with the ETL connector as the sink); reference: https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-connect.html#aws-glue-programming-etl-connect-dynamodb. (See the Glue sketch below.)
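Here is a minimal sketch of the first option, assuming the Redshift scheduled query UNLOADs CSV with a header row (e.g. `UNLOAD ('SELECT ...') TO 's3://...' IAM_ROLE '...' CSV HEADER`) and an S3 put event triggers the Lambda. The table name and field handling are placeholders, not from the original question:

```python
import csv
import io

import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")

TABLE_NAME = "my-target-table"  # assumption: replace with your DynamoDB table


def lambda_handler(event, context):
    # An S3 put event fires when the Redshift UNLOAD writes the CSV object
    s3_info = event["Records"][0]["s3"]
    bucket = s3_info["bucket"]["name"]
    key = s3_info["object"]["key"]

    # Read the whole CSV into memory; fine for small files,
    # use paginated/streamed reads for large unloads
    text = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    reader = csv.DictReader(io.StringIO(text))

    table = dynamodb.Table(TABLE_NAME)
    count = 0
    # batch_writer groups rows into BatchWriteItem calls and
    # retries unprocessed items automatically
    with table.batch_writer() as batch:
        for row in reader:
            # Note: CSV values arrive as strings; convert types here if needed
            batch.put_item(Item=row)
            count += 1

    return {"rows_loaded": count}
```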
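And a minimal Glue (PySpark) sketch for the second option, assuming the Redshift source table is registered in the Glue Data Catalog; the database, table, and DynamoDB table names are placeholders. The "dynamodb" sink options are the ones documented in the Glue page linked above:

```python
import sys

from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME", "TempDir"])
glue_context = GlueContext(SparkContext.getOrCreate())

# Source: a Redshift table registered in the Glue Data Catalog;
# Glue stages the Redshift data in S3 via TempDir
source = glue_context.create_dynamic_frame.from_catalog(
    database="my_catalog_db",          # assumption
    table_name="my_redshift_table",    # assumption
    redshift_tmp_dir=args["TempDir"],
)

# Sink: DynamoDB via connectionType "dynamodb"
glue_context.write_dynamic_frame_from_options(
    frame=source,
    connection_type="dynamodb",
    connection_options={
        "dynamodb.output.tableName": "my-target-table",  # assumption
        "dynamodb.throughput.write.percent": "1.0",
    },
)
```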
Thanks