1 Answer
Hi,
given your use case, yes, batch transform jobs are the way to go: you accumulate your input, start the model, run the inferences, and stop the model. Since your inferences are infrequent, it's important to stop the engine once the current set of inferences is done, to stay as frugal and cost-efficient as possible.
Question: to further reduce your costs, could you infer less frequently than every hour? Say, 4 times a day?
To achieve this level of efficiency, you should build a fully automated MLOps pipeline: see https://github.com/aws-samples/amazon-sagemaker-safe-deployment-pipeline for a full example with code.
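To make the "accumulate, start, infer, stop" pattern concrete, here is a minimal sketch of launching a SageMaker batch transform job with boto3. The bucket, prefix, model name, and instance type are placeholders for illustration, not values from this thread; batch transform spins instances up for the job and tears them down when it finishes, so there is nothing to stop manually.

```python
from datetime import datetime, timezone


def build_transform_job_request(model_name, input_s3_uri, output_s3_uri,
                                instance_type="ml.m5.large"):
    """Build the keyword arguments for sagemaker.create_transform_job().

    The job name embeds a UTC timestamp so repeated scheduled runs
    (e.g. 4 times a day) don't collide on the job-name uniqueness rule.
    """
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d-%H-%M-%S")
    return {
        "TransformJobName": f"{model_name}-batch-{stamp}",
        "ModelName": model_name,  # a SageMaker model you created beforehand
        "TransformInput": {
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": input_s3_uri,  # prefix holding the accumulated input
                }
            },
            "ContentType": "text/csv",  # adjust to your payload format
        },
        "TransformOutput": {"S3OutputPath": output_s3_uri},
        "TransformResources": {
            "InstanceType": instance_type,
            "InstanceCount": 1,
        },
    }


# With AWS credentials configured, you would then submit the job:
# import boto3
# sm = boto3.client("sagemaker")
# sm.create_transform_job(**build_transform_job_request(
#     "my-model", "s3://my-bucket/batch-in/", "s3://my-bucket/batch-out/"))
```

For the "4 times a day" variant, the same call can be wrapped in a Lambda function triggered by an EventBridge schedule rule, so no instance runs between batches.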
Best,
Didier
Hi Didier, thank you for your answer! Documentation for our particular use case seems scarce: as I understand it, batch transform jobs don't natively support multi-model endpoints. What do you think would be a good workaround for this? Best