How to save a file while running an ETL job in a Fargate task?

I have Python code for an ETL job that I plan to run as a Fargate task. I store a Docker image in ECR and pull it to run my task, but I need to temporarily save a text file while that container is running. I know that in Lambda you can save a file to a temporary folder such as /tmp/somefile.txt. Since Lambda and Fargate tasks both run on EC2 under the hood, I assume it works the same way?

Asked 2 years ago · Viewed 1818 times
2 Answers

Yes, you are right. You can store files temporarily in the /tmp folder just as you would on any EC2 instance, but with default settings the task's ephemeral storage cannot exceed 20 GiB. See the Fargate task storage documentation for details.
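From the ETL code's point of view this is ordinary file I/O; here is a minimal sketch (the path and contents simply mirror the question, nothing Fargate-specific is required):

```python
# Minimal sketch: writing a scratch file to the task's ephemeral storage.
# /tmp/somefile.txt mirrors the path from the question; any writable path
# in the container works the same way.
from pathlib import Path

scratch = Path("/tmp/somefile.txt")

# Write an intermediate ETL artifact.
scratch.write_text("row_count=12345\n")

# Read it back later in the same task run.
print(scratch.read_text())
```

Note that the file disappears when the task stops.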

Also, if you need the files to persist beyond the life of the task, you could mount an EFS file system.
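For illustration, these are the two pieces an ECS task definition needs for an EFS mount, assuming the file system already exists; the file system ID, volume name, and mount path below are placeholders:

```python
# Sketch of the task-definition pieces for an EFS mount. These dicts plug
# into the `volumes` parameter of ecs.register_task_definition and the
# `mountPoints` field of the container definition, respectively.
volumes = [
    {
        "name": "etl-data",  # placeholder volume name
        "efsVolumeConfiguration": {
            "fileSystemId": "fs-0123456789abcdef0",  # placeholder EFS ID
            "transitEncryption": "ENABLED",
        },
    }
]

mount_points = [
    {
        "sourceVolume": "etl-data",
        "containerPath": "/mnt/efs",  # files written here outlive the task
        "readOnly": False,
    }
]
```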

AWS Support Engineer
Answered 2 years ago

To add to the answer above: you can definitely store files locally on the filesystem, much as you would with Lambda or EC2. Your image plus local content can use up to 20 GiB "for free" with every task. You can go up to 200 GiB of that NVMe goodness, but you pay for the storage above 20 GiB, and you have to declare it at the task definition level; it doesn't magically add storage for you.
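As a sketch of that task-definition setting using boto3, where the family name, image URI, role ARN, and CPU/memory values are placeholders and ephemeralStorage is the relevant parameter:

```python
# Sketch: registering a Fargate task definition with expanded ephemeral
# storage via boto3. Names, image URI, and role ARN are placeholders.
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="etl-job",  # placeholder family name
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="1024",
    memory="2048",
    executionRoleArn="arn:aws:iam::123456789012:role/etl-exec",  # placeholder
    containerDefinitions=[
        {
            "name": "etl",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/etl:latest",  # placeholder
            "essential": True,
        }
    ],
    # Valid range is 21-200 GiB; only the amount above 20 GiB is billed.
    ephemeralStorage={"sizeInGiB": 100},
)
```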

But instead of EFS I'd recommend something a little more flexible and use S3: if you need any form of automation (e.g. triggering a Lambda when that temporary file is created or updated), that's easy to do with S3, whereas EFS won't give you that out of the box. It very much depends on your I/O pattern, though.
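A minimal sketch of that pattern, assuming the task role has s3:PutObject on the bucket; the bucket and key names below are made up:

```python
# Sketch: copying the scratch file to S3 before the task exits, so an
# s3:ObjectCreated notification can trigger downstream automation.
import boto3

s3 = boto3.client("s3")

s3.upload_file(
    Filename="/tmp/somefile.txt",
    Bucket="my-etl-scratch-bucket",  # placeholder bucket
    Key="etl-runs/somefile.txt",     # placeholder key
)
```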

EFS also requires a bit more infrastructure work; not too much, but substantially more than S3.

Answered 2 years ago
