How to pass an S3 file (table) to DynamoDB as part of an ETL job


Hi, I am building an ETL job which should start by taking a file from S3 and copying/transferring it to DynamoDB. I tried to build it on the ETL canvas, but I was not able to find the option to pass the file to DynamoDB. How can I do this?

Previously I imported my S3 files to DynamoDB manually, and created a crawler to make the data visible (Data Catalog) to the ETL job. But now I want to do this as part of the ETL process, to avoid any manual human intervention.

I am not an expert on AWS Glue, so any advice will be greatly appreciated.

Thanks

Alejandro

Asked 1 year ago · Viewed 508 times
2 Answers

I believe you know how to use Glue to import data to DynamoDB, but you are concerned about manual intervention.

To avoid any manual intervention, you can use AWS Glue triggers. When fired, a trigger can start specified jobs and crawlers; a trigger fires on demand, on a schedule, or on a combination of events. This removes the need for you to manually crawl your S3 data, so the complete solution is handled by AWS Glue.
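As a minimal sketch of the scheduling approach above, the boto3 `create_trigger` call below sets up a scheduled trigger that fires a crawler and a job. The crawler and job names are hypothetical placeholders; if the job must run strictly after the crawler finishes, a second trigger of type `CONDITIONAL` with a predicate on the crawler's success is the usual pattern.

```python
# Hypothetical names -- replace with your own crawler and job.
CRAWLER_NAME = "s3-source-crawler"
JOB_NAME = "s3-to-dynamodb-job"


def scheduled_trigger_request(name, schedule, crawler_name, job_name):
    """Build the kwargs for glue.create_trigger(): a scheduled trigger
    that starts a crawler and a job on the same cron schedule."""
    return {
        "Name": name,
        "Type": "SCHEDULED",
        "Schedule": schedule,  # cron syntax, evaluated in UTC
        "Actions": [
            {"CrawlerName": crawler_name},
            {"JobName": job_name},
        ],
        "StartOnCreation": True,
    }


if __name__ == "__main__":
    import boto3

    glue = boto3.client("glue")
    glue.create_trigger(
        **scheduled_trigger_request(
            "nightly-s3-to-ddb",
            "cron(0 2 * * ? *)",  # every day at 02:00 UTC
            CRAWLER_NAME,
            JOB_NAME,
        )
    )
```

Note that both actions in a scheduled trigger start together; the trigger only removes the manual step of kicking them off.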

Moreover, if you are importing into new tables, I suggest using the DynamoDB Import from S3 feature; note, however, that it does not currently support importing into existing tables.
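The Import from S3 feature is exposed through the DynamoDB `ImportTable` API. The sketch below builds a request for a CSV import into a new table; the bucket, prefix, table name, and `id` key attribute are all assumptions to replace with your own values.

```python
def import_table_request(bucket, key_prefix, table_name):
    """Build kwargs for dynamodb.import_table(): bulk-load CSV files
    from S3 into a *new* table (the API cannot target existing tables)."""
    return {
        "S3BucketSource": {"S3Bucket": bucket, "S3KeyPrefix": key_prefix},
        "InputFormat": "CSV",
        "TableCreationParameters": {
            "TableName": table_name,
            # Assumed partition key -- adjust to match your data.
            "AttributeDefinitions": [
                {"AttributeName": "id", "AttributeType": "S"},
            ],
            "KeySchema": [{"AttributeName": "id", "KeyType": "HASH"}],
            "BillingMode": "PAY_PER_REQUEST",
        },
    }


if __name__ == "__main__":
    import boto3

    dynamodb = boto3.client("dynamodb")
    dynamodb.import_table(
        **import_table_request("my-bucket", "exports/", "my-new-table")
    )
```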

AWS (Expert)
Answered 1 year ago

The visual jobs don't have a DynamoDB target yet. What you can do is run a crawler on DynamoDB so the table is linked with the Data Catalog, and then create a visual Glue job that reads from S3 and saves to that catalog table.
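Alternatively, a script-mode Glue job can bypass the visual editor's missing DynamoDB target entirely by writing with the `dynamodb` connection type. The sketch below reads a CSV file from S3 and writes it straight to a table; the S3 path and table name are assumptions, and the script only runs inside a Glue job environment where the `awsglue` libraries are available.

```python
# Options for the DynamoDB sink; the table name is an assumed placeholder.
DDB_WRITE_OPTIONS = {
    "dynamodb.output.tableName": "my-ddb-table",
    "dynamodb.throughput.write.percent": "0.5",  # share of write capacity to use
}


def main():
    import sys

    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read the source file from S3 (CSV with a header row assumed).
    source = glue_context.create_dynamic_frame.from_options(
        connection_type="s3",
        connection_options={"paths": ["s3://my-bucket/input/"]},
        format="csv",
        format_options={"withHeader": True},
    )

    # Write directly to DynamoDB -- no catalog table needed on the sink side.
    glue_context.write_dynamic_frame.from_options(
        frame=source,
        connection_type="dynamodb",
        connection_options=DDB_WRITE_OPTIONS,
    )
    job.commit()


if __name__ == "__main__":
    main()
```

With this approach the crawler on DynamoDB is optional; it is still useful if downstream jobs need to read the table through the Data Catalog.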

AWS (Expert)
Answered 1 year ago
