1 Answer
Not sure how big the data set is, but here are some options:
- A scheduled Redshift UNLOAD to S3, then a simple Lambda function to load the data from S3 into DynamoDB. This blog post includes sample code: https://aws.amazon.com/blogs/database/implementing-bulk-csv-ingestion-to-amazon-dynamodb/. You can also use AWS Data Pipeline for the S3-to-DynamoDB step.
- You can also use AWS Glue with the DynamoDB connection type (`"connectionType": "dynamodb"`) as the ETL sink, reference: https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-connect.html#aws-glue-programming-etl-connect-dynamodb
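The S3-to-DynamoDB step in the first option can be sketched in Python with boto3. This is a minimal sketch, not the blog post's implementation: the table, bucket, and column names are placeholders, and it assumes the UNLOAD was run with the HEADER option so the CSV carries a header row.

```python
import csv
import io

def csv_to_items(csv_text, numeric_columns=()):
    """Parse UNLOADed CSV (with header row) into DynamoDB-ready item dicts.

    Columns listed in numeric_columns are converted to int; everything
    else stays a string.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    items = []
    for row in reader:
        items.append(
            {k: (int(v) if k in numeric_columns else v) for k, v in row.items()}
        )
    return items

def load_to_dynamodb(table_name, items):
    """Batch-write items; batch_writer handles the 25-item batch limit and retries."""
    import boto3  # imported lazily so csv_to_items can be tested without AWS deps
    table = boto3.resource("dynamodb").Table(table_name)
    with table.batch_writer() as batch:
        for item in items:
            batch.put_item(Item=item)

# Example: UNLOAD ('select user_id, email from users')
#          TO 's3://my-bucket/export/' ... HEADER
# would produce CSV like this (placeholder data):
sample = "user_id,email\n1,a@example.com\n2,b@example.com\n"
items = csv_to_items(sample, numeric_columns={"user_id"})
# load_to_dynamodb("my-table", items)  # call this inside the Lambda handler
```

In a real Lambda you would trigger on the S3 `ObjectCreated` event, read the object body with boto3, and pass it through the two functions above; the linked blog post shows a fuller version of that flow.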
Thanks