Hi there,
Thanks for using AWS SageMaker.
Have you checked out the parameterised pipeline notebook -> https://github.com/aws/amazon-sagemaker-examples/blob/main/sagemaker-pipeline-parameterization/evaluate.py
You will see that you can create virtually any metric by using a processing job to evaluate the model. The evaluation report produced by that evaluate-model step (a processing job) is just a JSON file containing your metrics, and once it has been created you can run it through the accuracy step.
You can basically make your own metrics.
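To make this concrete, here is a minimal sketch of such an evaluation script run as a processing job. The file names, input/output paths and metric choices are illustrative assumptions, not a required contract:

# Minimal sketch of a custom evaluation script executed as a SageMaker processing job.
# File names, paths and metric choices below are assumptions for illustration.
import json
import pathlib

import joblib
import pandas as pd
from sklearn.metrics import accuracy_score, f1_score

if __name__ == "__main__":
    # Processing jobs mount their inputs and outputs under /opt/ml/processing/.
    model = joblib.load("/opt/ml/processing/model/model.joblib")
    test = pd.read_csv("/opt/ml/processing/test/test.csv")
    y_true, y_pred = test.iloc[:, 0], model.predict(test.iloc[:, 1:])

    # Any metric you can compute can go into the report; the keys just need to
    # match whatever JSON path the downstream accuracy/condition step reads.
    report_dict = {
        "binary_classification_metrics": {
            "accuracy": {"value": accuracy_score(y_true, y_pred), "standard_deviation": "NaN"},
            "f1": {"value": f1_score(y_true, y_pred), "standard_deviation": "NaN"},
        }
    }

    output_dir = pathlib.Path("/opt/ml/processing/evaluation")
    output_dir.mkdir(parents=True, exist_ok=True)
    (output_dir / "evaluation.json").write_text(json.dumps(report_dict))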
Hope this information helps.
Since I have limited visibility into your setup, to understand the issue in more depth I'd recommend you reach out to AWS Support by creating a support case[+] so that an engineer can investigate further and help you resolve the issue.
[+] Open a support case with AWS using the link: https://console.aws.amazon.com/support/home?#/case/create
Thanks, but the example you linked is the reason I'm asking the question: the JSON has a specific format.
# Available metrics to add to model: https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-model-quality-metrics.html
report_dict = {
    "binary_classification_metrics": {
        "accuracy": {"value": accuracy, "standard_deviation": "NaN"},
        "precision": {"value": precision, "standard_deviation": "NaN"},
        ...
In particular, there's a top-level key ("binary_classification_metrics") and then specific metrics for that class of problem. And the comment specifically mentions "available metrics", which implies that only the specific metrics/problem types (i.e. regression, binary or multi-class classification) mentioned on the page you provided are supported?
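For reference, this is roughly how a SageMaker Pipelines condition step typically consumes such a report: it reads a value out of the JSON by path via a PropertyFile and JsonGet. The step name, output name and 0.8 threshold below are illustrative assumptions:

# Sketch of how a pipeline condition step can read a custom metric from the report.
# Step names, the output name and the 0.8 threshold are illustrative assumptions;
# only the json_path has to match the keys written into evaluation.json.
from sagemaker.workflow.condition_step import ConditionStep
from sagemaker.workflow.conditions import ConditionGreaterThanOrEqualTo
from sagemaker.workflow.functions import JsonGet
from sagemaker.workflow.properties import PropertyFile

# Declare which processing output holds evaluation.json; this PropertyFile must
# also be passed to the evaluation ProcessingStep via property_files=[...] and
# output_name must match that step's ProcessingOutput name.
evaluation_report = PropertyFile(
    name="EvaluationReport",
    output_name="evaluation",
    path="evaluation.json",
)

# Pull a value out of the report by JSON path at pipeline execution time.
accuracy = JsonGet(
    step_name="EvaluateModel",  # name of the evaluation ProcessingStep
    property_file=evaluation_report,
    json_path="binary_classification_metrics.accuracy.value",
)

step_cond = ConditionStep(
    name="CheckAccuracy",
    conditions=[ConditionGreaterThanOrEqualTo(left=accuracy, right=0.8)],
    if_steps=[],    # e.g. register/deploy steps go here
    else_steps=[],
)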