How does inference work in a multi-model endpoint in SageMaker?


Based on the docs here, https://docs.aws.amazon.com/sagemaker/latest/dg/create-multi-model-endpoint.html, I created a multi-model endpoint and invoked it as documented here: https://docs.aws.amazon.com/sagemaker/latest/dg/invoke-multi-model-endpoint.html. I'm getting an Invalid model exception with the message "model version is not defined". My setup: I have created two models, say modelOne.tar.gz and modelTwo.tar.gz, and each model has its own custom script/inference.py file with the directory structure shown below.
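For context, the endpoint was created roughly along these lines (a minimal sketch following the create-multi-model-endpoint doc; the bucket, role ARN, and container image URI are placeholders):

import boto3

sm_client = boto3.client("sagemaker")

# Both modelOne.tar.gz and modelTwo.tar.gz sit under this S3 prefix
# (placeholder bucket/prefix).
model_data_prefix = "s3://my-bucket/multi-model/"

sm_client.create_model(
    ModelName="my-multi-model",
    ExecutionRoleArn="arn:aws:iam::123456789012:role/MySageMakerRole",  # placeholder
    PrimaryContainer={
        "Image": "<pytorch-inference-container-uri>",  # placeholder framework image
        "Mode": "MultiModel",                          # enables multi-model hosting
        "ModelDataUrl": model_data_prefix,             # S3 prefix, not a single .tar.gz
    },
)

sm_client.create_endpoint_config(
    EndpointConfigName="my-multi-model-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "my-multi-model",
        "InstanceType": "ml.m5.xlarge",
        "InitialInstanceCount": 1,
    }],
)

sm_client.create_endpoint(
    EndpointName="my-multi-model-endpoint",
    EndpointConfigName="my-multi-model-config",
)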

When we send a request to a multi-model endpoint, does SageMaker uncompress the tar.gz file specified in the request? In my case both models have the same directory structure and the same model.pth file name inside their tar files; is it getting mixed up and unsure which one to invoke?
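As a sanity check, listing each archive's contents before upload shows whether the two layouts really are identical (a minimal sketch using Python's tarfile; the local file names are assumptions):

import tarfile

# Print the member paths of each archive to confirm both follow the
# expected layout (model.pth at the root, inference.py under code/).
for archive in ("modelOne.tar.gz", "modelTwo.tar.gz"):  # local copies (assumption)
    with tarfile.open(archive, "r:gz") as tar:
        print(archive, "->", tar.getnames())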

inference.py

import torch
import os

# Your_Model is a placeholder for the actual torch.nn.Module subclass.
def model_fn(model_dir, context):
    # SageMaker downloads and extracts the targeted .tar.gz into model_dir
    # before this hook is called.
    model = Your_Model()
    with open(os.path.join(model_dir, 'model.pth'), 'rb') as f:
        model.load_state_dict(torch.load(f))
    return model

directory structure

model.tar.gz/
|- model.pth
|- code/
  |- inference.py
  |- requirements.txt  
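The archives are built along these lines (a sketch using Python's tarfile; the paths follow the layout above):

import tarfile

# Build the archive with model.pth at the root and the inference code
# under code/, matching the layout above.
with tarfile.open("modelOne.tar.gz", "w:gz") as tar:
    tar.add("model.pth", arcname="model.pth")
    tar.add("code/inference.py", arcname="code/inference.py")
    tar.add("code/requirements.txt", arcname="code/requirements.txt")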

invoke request

response = runtime_sagemaker_client.invoke_endpoint(
    EndpointName="my-multi-model-endpoint",
    ContentType="text/csv",
    TargetModel="modelOne.tar.gz",
    Body=body,
)
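For completeness, the failure surfaces client-side like this (a sketch catching botocore's generic ClientError rather than any service-specific exception class):

import botocore.exceptions

try:
    response = runtime_sagemaker_client.invoke_endpoint(
        EndpointName="my-multi-model-endpoint",
        ContentType="text/csv",
        TargetModel="modelOne.tar.gz",
        Body=body,
    )
    print(response["Body"].read().decode("utf-8"))
except botocore.exceptions.ClientError as err:
    # This is where the "model version is not defined" message shows up.
    print(err.response["Error"]["Code"], err.response["Error"]["Message"])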