1 Answer
Hi Sarath,
- Create the model in the SageMaker console or with the CreateModel API, specifying the inference container image that matches the model's framework, along with the S3 location of the model artifacts, including the inference code.
- Create a batch transform job in the SageMaker console or with the CreateTransformJob API. Depending on the dataset volume, you can parallelise predictions across multiple instances and use the MultiRecord batch strategy to speed up batch inference.
- Start the transform job.
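The steps above can be sketched as request parameters for the CreateModel and CreateTransformJob APIs. This is a minimal illustration only: the model name, role ARN, image URI, and S3 paths below are placeholders you would replace with your own values.

```python
# Hypothetical names and paths -- substitute your own account, Region,
# inference image, role, and S3 locations.
model_name = "my-batch-model"

# Step 1: CreateModel -- point at the framework inference image and the
# S3 model.tar.gz that bundles the model artifacts and inference code.
create_model_args = dict(
    ModelName=model_name,
    PrimaryContainer={
        "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference-image:latest",
        "ModelDataUrl": "s3://my-bucket/model/model.tar.gz",
    },
    ExecutionRoleArn="arn:aws:iam::123456789012:role/MySageMakerRole",
)

# Step 2: CreateTransformJob -- parallelise across instances and use the
# MultiRecord strategy to batch several records into each request.
create_transform_job_args = dict(
    TransformJobName="my-batch-transform-job",
    ModelName=model_name,
    TransformInput={
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://my-bucket/input/",
            }
        },
        "ContentType": "text/csv",
        "SplitType": "Line",  # split input files into individual records
    },
    TransformOutput={"S3OutputPath": "s3://my-bucket/output/"},
    TransformResources={"InstanceType": "ml.m5.xlarge", "InstanceCount": 2},
    BatchStrategy="MultiRecord",
    MaxPayloadInMB=6,
)

# Step 3: submitting these requests creates the model and starts the job:
# import boto3
# sm = boto3.client("sagemaker")
# sm.create_model(**create_model_args)
# sm.create_transform_job(**create_transform_job_args)
```

Increasing `InstanceCount` spreads the input objects across instances, while `SplitType="Line"` plus `BatchStrategy="MultiRecord"` lets SageMaker pack multiple records into each invocation, which is where most of the throughput gain comes from on large datasets.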
Check an example here.
Answered 10 months ago