How to pass an S3 file (table) to DynamoDB as part of an ETL job

0

Hi, I am building an ETL job which should start by taking a file from S3 and copying/transferring it to DynamoDB. I tried to build it on the ETL canvas but I was not able to find an option to pass the file to DynamoDB. How can I do this?

Manually, I used to import my S3 files to DynamoDB and then create a crawler to make the data visible (in the Data Catalog) to the ETL job. But now I want to do this step as part of the ETL process, to avoid any manual human intervention.

I am not an expert on AWS Glue, so any advice will be greatly appreciated.

Thanks

Alejandro

2 Answers
1

I believe you know how to use Glue to import data to DynamoDB, but you are concerned about manual intervention.

To avoid any manual intervention, you can use AWS Glue triggers. When fired, a trigger can start specified jobs and crawlers, and a trigger can fire on demand, on a schedule, or based on a combination of events. This removes the need for you to manually crawl your S3 data; the complete flow is handled by AWS Glue.
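For example, here is a minimal boto3 sketch of chaining a scheduled crawl and the ETL job with triggers; the crawler name "s3-source-crawler", the job name "s3-to-dynamodb-job", and the schedule are placeholders for whatever you already have:

```python
# Minimal sketch: a scheduled trigger runs the crawler, and a conditional
# trigger starts the ETL job once the crawl succeeds. Names are placeholders.
import boto3

glue = boto3.client("glue")

# Run the crawler on a schedule so the Data Catalog stays current.
glue.create_trigger(
    Name="nightly-crawl",
    Type="SCHEDULED",
    Schedule="cron(0 2 * * ? *)",  # every day at 02:00 UTC
    Actions=[{"CrawlerName": "s3-source-crawler"}],
    StartOnCreation=True,
)

# Start the ETL job automatically once the crawl finishes successfully.
glue.create_trigger(
    Name="run-etl-after-crawl",
    Type="CONDITIONAL",
    Predicate={
        "Conditions": [
            {
                "LogicalOperator": "EQUALS",
                "CrawlerName": "s3-source-crawler",
                "CrawlState": "SUCCEEDED",
            }
        ]
    },
    Actions=[{"JobName": "s3-to-dynamodb-job"}],
    StartOnCreation=True,
)
```

With the conditional trigger in place, the job only runs after a successful crawl, so the whole chain executes without anyone touching the console.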

Moreover, if you are importing into new tables, I suggest using the DynamoDB Import from S3 feature; however, it does not currently support importing into existing tables.
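If that fits your case, the import can also be started programmatically. A rough boto3 sketch, where the bucket, prefix, table name, and key schema are all placeholders:

```python
# Rough sketch of DynamoDB Import from S3. The import always creates a
# brand-new table; the TableName below must not already exist.
import boto3

dynamodb = boto3.client("dynamodb")

response = dynamodb.import_table(
    S3BucketSource={
        "S3Bucket": "my-source-bucket",    # placeholder
        "S3KeyPrefix": "exports/orders/",  # placeholder
    },
    InputFormat="CSV",
    InputFormatOptions={"Csv": {"Delimiter": ","}},
    TableCreationParameters={
        "TableName": "orders",             # placeholder, created by the import
        "AttributeDefinitions": [
            {"AttributeName": "order_id", "AttributeType": "S"}
        ],
        "KeySchema": [
            {"AttributeName": "order_id", "KeyType": "HASH"}
        ],
        "BillingMode": "PAY_PER_REQUEST",
    },
)
print(response["ImportTableDescription"]["ImportArn"])
```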

AWS
EXPERT
answered a year ago
-1

The visual jobs don't have a DynamoDB target yet. What you can do is run a crawler on the DynamoDB table, so the table is linked to the Data Catalog, and then create a visual Glue job that reads from S3 and writes to that catalog table.
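If it helps, the crawler part can be created with boto3 as well. A rough sketch, where the crawler name, IAM role ARN, database, and DynamoDB table name are placeholders:

```python
# Rough sketch: crawl the DynamoDB table so it appears in the Data Catalog.
# Adjust the role and names to your own environment.
import boto3

glue = boto3.client("glue")

glue.create_crawler(
    Name="ddb-target-crawler",                                # placeholder
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",    # placeholder
    DatabaseName="my_db",                                     # placeholder
    Targets={"DynamoDBTargets": [{"Path": "my_ddb_table"}]},  # placeholder
)
glue.start_crawler(Name="ddb-target-crawler")
```

Once the crawl finishes, the DynamoDB table shows up as a Data Catalog table that you can select as the target in the visual job.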

AWS
EXPERT
answered a year ago
