How to pass an S3 file (table) to DynamoDB as part of an ETL job

0

Hi, I am building an ETL job which should start by taking a file from S3 and copying/transferring it to DynamoDB. I tried to build it on the ETL canvas, but I was not able to find an option to write the file to DynamoDB. How can I do this?

Until now I have imported my S3 files to DynamoDB manually, and created a crawler to make the data visible (in the Data Catalog) to the ETL job. But I want this step to be part of the ETL process itself, to avoid any manual intervention.

I am not an expert on AWS Glue, so any advice will be greatly appreciated.

Thanks

Alejandro

2 Answers
1

I believe you know how to use Glue to import data to DynamoDB, but you are concerned about the manual intervention.

To avoid manual intervention, you can use AWS Glue triggers. When fired, a trigger can start specified jobs and crawlers. A trigger fires on demand, on a schedule, or based on a combination of events. This removes the need for you to crawl your S3 data by hand; instead, the complete workflow is handled by AWS Glue.
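As a minimal sketch of that idea, the trigger below is defined through the Glue `CreateTrigger` API via boto3. The trigger name, schedule, and job name are hypothetical placeholders, not names from this thread:

```python
# Request body for the AWS Glue CreateTrigger API.
# All names here are hypothetical -- substitute your own job name.
TRIGGER_DEFINITION = {
    "Name": "nightly-s3-to-ddb",
    "Type": "SCHEDULED",
    "Schedule": "cron(0 2 * * ? *)",  # every day at 02:00 UTC
    "Actions": [{"JobName": "s3-to-dynamodb-job"}],
    "StartOnCreation": True,
}


def create_trigger(definition=TRIGGER_DEFINITION):
    """Create the Glue trigger; needs AWS credentials with glue:CreateTrigger."""
    import boto3  # imported here so the module loads without the AWS SDK

    glue = boto3.client("glue")
    return glue.create_trigger(**definition)


if __name__ == "__main__":
    create_trigger()
```

A `CONDITIONAL` trigger with a `Predicate` on a crawler's `SUCCEEDED` state would chain the crawler and the job, so the whole pipeline runs unattended.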

Moreover, if you are importing into new tables, I suggest using the DynamoDB Import from S3 feature; note, however, that it does not currently support importing into existing tables.
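For reference, Import from S3 is exposed as the DynamoDB `ImportTable` API. A sketch of the request, assuming a hypothetical bucket, CSV input, and an `orders` table keyed on `order_id`:

```python
# Request body for the DynamoDB ImportTable API (Import from S3).
# Bucket, prefix, and table names are hypothetical examples.
IMPORT_REQUEST = {
    "S3BucketSource": {
        "S3Bucket": "my-etl-bucket",
        "S3KeyPrefix": "exports/orders/",
    },
    "InputFormat": "CSV",  # DYNAMODB_JSON and ION are also supported
    "TableCreationParameters": {
        "TableName": "orders",
        "KeySchema": [{"AttributeName": "order_id", "KeyType": "HASH"}],
        "AttributeDefinitions": [
            {"AttributeName": "order_id", "AttributeType": "S"}
        ],
        "BillingMode": "PAY_PER_REQUEST",
    },
}


def start_import(request=IMPORT_REQUEST):
    """Start the import; the new table is created as part of the import."""
    import boto3  # imported here so the module loads without the AWS SDK

    ddb = boto3.client("dynamodb")
    return ddb.import_table(**request)
```

Because the import always creates the table, this path only fits the "new table" case the answer describes.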

AWS, Expert
answered 1 year ago
-1

The visual jobs don't have a DynamoDB target yet. What you can do is run a crawler on DynamoDB, so the table is linked with the Data Catalog, and then create a visual Glue job that reads from S3 and writes to that catalog table.
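In script form (rather than the visual editor), the same read-from-catalog, write-to-DynamoDB step looks roughly like the sketch below. The database and table names are hypothetical; `awsglue` and `pyspark` are only available inside the Glue runtime:

```python
# Catalog source and DynamoDB sink settings -- hypothetical names.
SOURCE = {"database": "etl_db", "table_name": "s3_orders"}
SINK_OPTIONS = {
    "dynamodb.output.tableName": "orders",
    # Fraction of the table's write capacity the job may consume.
    "dynamodb.throughput.write.percent": "0.5",
}


def run_job():
    """Read the crawled S3 table from the Data Catalog and write to DynamoDB."""
    from awsglue.context import GlueContext
    from pyspark.context import SparkContext

    glue_context = GlueContext(SparkContext.getOrCreate())
    frame = glue_context.create_dynamic_frame.from_catalog(**SOURCE)
    glue_context.write_dynamic_frame_from_options(
        frame=frame,
        connection_type="dynamodb",
        connection_options=SINK_OPTIONS,
    )
```

This uses Glue's DynamoDB write connector directly, so no DynamoDB crawler is needed for the sink table when writing by script.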

AWS, Expert
answered 1 year ago
