1 Answer
I would look into Lambda functions. You would have two S3 buckets, one for the large files and one for the small files. One function is triggered by uploads to the first bucket: it reads the file, splits it into multiple smaller files, and saves those in the second bucket. The second function is triggered by objects arriving in the second bucket and runs the analysis on each small file.
This assumes that a large file fits within a Lambda function's memory and storage limits, that splitting a large file takes less than the 15-minute Lambda timeout, and that analyzing a small file does too.
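A minimal sketch of the first (splitter) function, assuming the files are line-oriented text. The bucket name `my-small-files-bucket`, the chunk size, and the key naming scheme are all illustrative choices, not anything prescribed by AWS:

```python
CHUNK_LINES = 10_000  # assumed chunk size; tune so each piece analyzes well under 15 min
SMALL_BUCKET = "my-small-files-bucket"  # hypothetical name of the second bucket

def split_lines(text, chunk_lines=CHUNK_LINES):
    """Split text into pieces of at most chunk_lines lines each."""
    lines = text.splitlines(keepends=True)
    return ["".join(lines[i:i + chunk_lines])
            for i in range(0, len(lines), chunk_lines)]

def handler(event, context):
    """Entry point, triggered by s3:ObjectCreated on the large-files bucket."""
    import boto3  # provided by the Lambda runtime

    s3 = boto3.client("s3")
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        # Write each piece to the small-files bucket; each put triggers the analyzer.
        for i, chunk in enumerate(split_lines(body)):
            s3.put_object(Bucket=SMALL_BUCKET,
                          Key=f"{key}.part-{i:05d}",
                          Body=chunk.encode("utf-8"))
```

The second function would be wired the same way (an `s3:ObjectCreated` trigger on `my-small-files-bucket`) with the analysis logic in its handler. For very large inputs, streaming the object rather than reading it whole would reduce memory pressure.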