Understanding the Usage of suggested_constraints() and suggest_baseline() in SageMaker Model Monitoring with Pipelines


I'm currently working with a SageMaker pipeline setup that uses QualityCheckStep() to run ModelQualityCheck and DataQualityCheck jobs. These jobs accept parameters such as register_new_baseline and skip_check. After the pipeline runs, the model is registered and ready to be approved. Once it is approved, a Lambda function is triggered that deploys the model to an endpoint and creates the monitoring schedules.
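For context, the data-quality step in such a pipeline might look roughly like the sketch below. The role ARN, bucket paths, model package group name, and parameter names are all placeholders, not the actual setup:

```python
# Sketch of a DataQualityCheck step inside a SageMaker pipeline.
# Role ARN, S3 paths, and the model package group name are placeholders.
from sagemaker.model_monitor.dataset_format import DatasetFormat
from sagemaker.workflow.check_job_config import CheckJobConfig
from sagemaker.workflow.parameters import ParameterBoolean
from sagemaker.workflow.quality_check_step import (
    DataQualityCheckConfig,
    QualityCheckStep,
)

# Pipeline parameters controlling baseline registration behavior.
skip_check = ParameterBoolean(name="SkipDataQualityCheck", default_value=False)
register_new_baseline = ParameterBoolean(
    name="RegisterNewDataQualityBaseline", default_value=False
)

check_job_config = CheckJobConfig(
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

data_quality_check_config = DataQualityCheckConfig(
    baseline_dataset="s3://my-bucket/train/train.csv",  # placeholder
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://my-bucket/baselines/data-quality",  # placeholder
)

data_quality_check_step = QualityCheckStep(
    name="DataQualityCheck",
    check_job_config=check_job_config,
    quality_check_config=data_quality_check_config,
    skip_check=skip_check,
    register_new_baseline=register_new_baseline,
    model_package_group_name="my-model-group",  # placeholder
)
```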

I'm using ModelQualityMonitor() and DefaultModelMonitor() in the Lambda function to configure the monitoring schedules for model and data quality monitoring, but both of these accept baseline statistics and constraints. In the documentation examples I have seen, the suggest_baseline() and suggested_constraints() methods are used to run a baselining job and retrieve its results from the latest baselining job on the monitor object. This doesn't seem to work in the scenario where the baselining jobs are run in the SageMaker pipeline and the monitoring schedules are set up afterwards, since the monitor created in the Lambda has no record of those jobs.
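For reference, this is roughly how I'm creating the schedules from the Lambda, passing the baseline artifacts explicitly rather than relying on suggested_constraints()/latest_baselining_job. Endpoint name, role, S3 URIs, problem type, and the inference attribute are illustrative placeholders:

```python
# Sketch of creating monitoring schedules from a Lambda handler.
# All names, ARNs, and S3 URIs below are placeholders.
from sagemaker.model_monitor import (
    CronExpressionGenerator,
    DefaultModelMonitor,
    EndpointInput,
    ModelQualityMonitor,
)

ROLE = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

# Data quality: takes both baseline statistics and constraints.
data_monitor = DefaultModelMonitor(
    role=ROLE,
    instance_count=1,
    instance_type="ml.m5.xlarge",
)
data_monitor.create_monitoring_schedule(
    monitor_schedule_name="my-endpoint-data-quality",  # placeholder
    endpoint_input="my-endpoint",  # placeholder
    output_s3_uri="s3://my-bucket/monitoring/data-quality",  # placeholder
    statistics="s3://my-bucket/baselines/data-quality/statistics.json",
    constraints="s3://my-bucket/baselines/data-quality/constraints.json",
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)

# Model quality: needs ground truth and the prediction location in the payload.
model_monitor = ModelQualityMonitor(
    role=ROLE,
    instance_count=1,
    instance_type="ml.m5.xlarge",
)
model_monitor.create_monitoring_schedule(
    monitor_schedule_name="my-endpoint-model-quality",  # placeholder
    endpoint_input=EndpointInput(
        endpoint_name="my-endpoint",  # placeholder
        destination="/opt/ml/processing/input_data",
        inference_attribute="0",  # where the prediction lives; placeholder
    ),
    ground_truth_input="s3://my-bucket/ground-truth",  # placeholder
    problem_type="BinaryClassification",  # placeholder
    constraints="s3://my-bucket/baselines/model-quality/constraints.json",
    output_s3_uri="s3://my-bucket/monitoring/model-quality",  # placeholder
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
```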

To get around this, I've located the baseline jobs associated with the pipeline execution that built the model, and I pass their S3 URIs when creating the monitoring schedules from the Lambda. But the resulting constraints seem very tight and I'm constantly getting violations, which makes me wonder whether the issue is that I'm not using the provided suggest_baseline() and suggested_constraints() methods.
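One detail worth pinning down when wiring this up by hand: a baselining job writes statistics.json and constraints.json directly under its configured output prefix, so a small helper (hypothetical, plain Python, assuming that standard layout) can derive the artifact URIs to pass to the monitors:

```python
def baseline_artifact_uris(baseline_output_s3_uri: str) -> dict:
    """Derive statistics/constraints S3 URIs from a baseline job's output
    prefix. Assumes the standard Model Monitor layout, where the job writes
    statistics.json and constraints.json directly under its output location.
    """
    prefix = baseline_output_s3_uri.rstrip("/")
    return {
        "statistics": f"{prefix}/statistics.json",
        "constraints": f"{prefix}/constraints.json",
    }

uris = baseline_artifact_uris("s3://my-bucket/baselines/data-quality/")
print(uris["constraints"])
# s3://my-bucket/baselines/data-quality/constraints.json
```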

Is there a different approach I should be taking? I also tried creating the schedules without providing baselines, which worked for the ModelQualityMonitor, but then I get no violations at all, regardless of what I send to the endpoint.

Any insights or guidance on these aspects would be greatly appreciated!

lhobbs
Asked 10 months ago · 151 views
No answers
