How does inference work in a multi-model endpoint in SageMaker?


Based on the docs here, https://docs.aws.amazon.com/sagemaker/latest/dg/create-multi-model-endpoint.html, I created a multi-model endpoint and invoked it as documented here: https://docs.aws.amazon.com/sagemaker/latest/dg/invoke-multi-model-endpoint.html. I'm getting an Invalid model exception with the message "model version is not defined". My setup: I have created two models, say modelOne.tar.gz and modelTwo.tar.gz, and both have their own custom inference.py script with the directory structure shown below.
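For context, here is roughly how the endpoint itself was created (a minimal sketch; the image URI, role ARN, bucket, and resource names are placeholders, not my actual values):

import boto3

sm_client = boto3.client("sagemaker")

# A multi-model endpoint points at an S3 prefix; both modelOne.tar.gz
# and modelTwo.tar.gz are uploaded under that prefix.
sm_client.create_model(
    ModelName="my-multi-model",
    ExecutionRoleArn="arn:aws:iam::123456789012:role/my-sagemaker-role",
    PrimaryContainer={
        "Image": "<pytorch-inference-image-uri>",   # placeholder
        "Mode": "MultiModel",                       # multi-model hosting
        "ModelDataUrl": "s3://my-bucket/models/",   # prefix, not a single tar.gz
    },
)

sm_client.create_endpoint_config(
    EndpointConfigName="my-mme-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "my-multi-model",
        "InstanceType": "ml.m5.xlarge",
        "InitialInstanceCount": 1,
    }],
)

sm_client.create_endpoint(
    EndpointName="my-multi-model-endpoint",
    EndpointConfigName="my-mme-config",
)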

When we send a request to a multi-model endpoint, does SageMaker uncompress the tar.gz file specified in the request? In my case both models have the same directory structure and the same model.pth filename inside their tar files; is it getting mixed up and unsure which one to invoke?

inference.py

import torch
import os

def model_fn(model_dir, context):
    # model_dir is the local directory where SageMaker extracted the tar.gz;
    # Your_Model is a placeholder for the actual model class.
    model = Your_Model()
    with open(os.path.join(model_dir, 'model.pth'), 'rb') as f:
        model.load_state_dict(torch.load(f))
    return model
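The snippet above only shows model_fn; for completeness, the other handler hooks look roughly like this (a sketch, not my exact code: the CSV parsing is an assumption based on the text/csv ContentType used in the invocation below, and the serving toolkit falls back to defaults for any hook that is omitted):

import torch

def input_fn(request_body, request_content_type):
    # Parse the text/csv payload from invoke_endpoint into a float tensor.
    if request_content_type == "text/csv":
        body = request_body.decode("utf-8") if isinstance(request_body, bytes) else request_body
        rows = [[float(v) for v in line.split(",")]
                for line in body.strip().split("\n")]
        return torch.tensor(rows, dtype=torch.float32)
    raise ValueError("Unsupported content type: " + request_content_type)

def predict_fn(input_data, model):
    # Forward pass without gradient tracking.
    with torch.no_grad():
        return model(input_data)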

directory structure

model.tar.gz/
|- model.pth
|- code/
   |- inference.py
   |- requirements.txt
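Each tar.gz was built with model.pth and code/ at the archive root, e.g. (a sketch of the packaging step):

import tarfile

# Archive contents match the layout above: model.pth and code/ at the root.
with tarfile.open("modelOne.tar.gz", "w:gz") as tar:
    tar.add("model.pth")
    tar.add("code")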
invocation

import boto3

runtime_sagemaker_client = boto3.client("sagemaker-runtime")

# TargetModel is the archive's S3 key relative to the endpoint's ModelDataUrl prefix.
response = runtime_sagemaker_client.invoke_endpoint(
    EndpointName="my-multi-model-endpoint",
    ContentType="text/csv",
    TargetModel="modelOne.tar.gz",
    Body=body,  # CSV payload
)
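To rule out a path mismatch, the keys under the endpoint's ModelDataUrl prefix can be listed and compared against TargetModel (bucket and prefix here are placeholders matching the setup sketch above):

import boto3

s3 = boto3.client("s3")

# TargetModel must equal the object key relative to the ModelDataUrl prefix,
# e.g. "modelOne.tar.gz" for s3://my-bucket/models/modelOne.tar.gz.
resp = s3.list_objects_v2(Bucket="my-bucket", Prefix="models/")
for obj in resp.get("Contents", []):
    print(obj["Key"])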