Hi!
In short, our SageMaker scikit-learn container does not currently support model-specific inference scripts.
The entry_point script you reference in the MultiDataModel object is used as the inference script for all models. If you add logging to that script, you will be able to see the log lines in CloudWatch Logs.
If you have pre/post-processing that must run only for a particular model, you will need to write it all into one shared inference.py. Then, when invoking the endpoint, add an extra attribute to the payload and have that same script read the attribute so it knows which pre/post-processing branch to run.
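That dispatch pattern can be sketched as a shared input_fn. This is a minimal illustration, not the original poster's code: the "model" and "data" payload fields and the preprocess_* helpers are hypothetical names, and the per-model bodies are placeholders.

```python
import json

# Hypothetical per-model preprocessing steps; the bodies are placeholders.
def preprocess_cluster(features):
    return features  # cluster-specific preprocessing would go here

def preprocess_pca(features):
    return features  # PCA-specific preprocessing would go here

PREPROCESSORS = {
    "cluster": preprocess_cluster,
    "pca": preprocess_pca,
}

def input_fn(request_body, request_content_type):
    """Shared input_fn: the caller embeds a 'model' attribute in the
    JSON payload so one script can branch per model."""
    if request_content_type == "application/json":
        payload = json.loads(request_body)
        handler = PREPROCESSORS[payload["model"]]
        return handler(payload["data"])
    raise ValueError(f"Unsupported content type: {request_content_type}")
```

The same lookup-table idea extends to an output_fn for model-specific post-processing.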
One thing to note: although you reference a model object in the MultiDataModel, i.e.
mme = MultiDataModel(
    name='model',
    model_data_prefix=model_data_prefix,
    model=cluster_model,
    sagemaker_session=sagemaker_session,
)
the only information extracted from that model object is the image_uri and entry_point, which are needed during endpoint deployment.
None of the model.tar.gz archives under model_data_prefix should contain an inference.py; that confuses the container and forces it back to the default handlers, so you may get a ModelError.
You can try the following:
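To check an artifact before uploading it, you can list the archive's members with the standard library. This is an assumed convenience helper, not part of the SageMaker SDK:

```python
import io
import tarfile

def artifact_members(tar_bytes):
    """Return the member names of a model.tar.gz held in memory."""
    with tarfile.open(fileobj=io.BytesIO(tar_bytes), mode="r:gz") as tar:
        return [m.name for m in tar.getmembers()]

def has_inference_script(tar_bytes):
    """True if the archive bundles an inference.py, which would make
    the container fall back to its default handlers (ModelError)."""
    return any(name.endswith("inference.py") for name in artifact_members(tar_bytes))
```

For an artifact on disk, pass `open("model.tar.gz", "rb").read()` and repackage the archive without inference.py if the check comes back True.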
cluster_model = SKLearnModel(
    model_data=cluster_artifact,
    role=role,
    entry_point="scripts/cluster_inference.py",
    sagemaker_session=sagemaker_session,
)

pca_model = SKLearnModel(
    model_data=pca_artifact,
    role=role,
    entry_point="scripts/pca_inference.py",
    sagemaker_session=sagemaker_session,
)

mme = MultiDataModel(
    name='model',
    model_data_prefix=model_data_prefix,  # make sure the directory under this prefix is empty, i.e. no models in this location
    model=cluster_model,
    sagemaker_session=sagemaker_session,
)

list(mme.list_models())  # this should be empty

mme.add_model(model_data_source=cluster_artifact, model_data_path='cluster_artifact.tar.gz')  # make sure the model artifact doesn't contain inference.py
mme.add_model(model_data_source=pca_artifact, model_data_path='pca_artifact.tar.gz')  # make sure the model artifact doesn't contain inference.py

list(mme.list_models())  # there should be two models listed now; the model_data_prefix location should also hold two model artifacts

output_cluster = predictor.predict(data='<your-data>', target_model='cluster_artifact.tar.gz')
print(output_cluster)  # this should work, since the endpoint uses cluster_inference.py as the shared inference script

output_pca = predictor.predict(data='<your-data>', target_model='pca_artifact.tar.gz')
print(output_pca)  # this might fail, since the endpoint is still using cluster_inference.py; merge this model's inference logic into cluster_inference.py to make it work
I know this approach is not ideal: whenever you add a new model with its own pre/post-processing, you have to redeploy the endpoint for the new script to take effect.
Incidentally, we just added support for model-specific inference scripts to our TensorFlow container: https://github.com/aws/deep-learning-containers/pull/2680
You can request the same feature for our scikit-learn container here: https://github.com/aws/sagemaker-scikit-learn-container/issues