XGBoost hyperparameter tuning: worse metrics than RandomizedSearchCV


I have implemented hyperparameter tuning using the SageMaker XGBoost container (version 1.5-1) for a binary classifier with a validation:f1 objective, loading data from a CSV file. Compared to a simple RandomizedSearchCV over the same parameter space (XGBoost 1.5.1 or 1.6.2), the metrics are significantly worse with SageMaker (0.42 vs 0.59 for the best model). I am trying to understand where such a discrepancy could come from. Have you observed anything like this, and how could I resolve it? Thanks in advance.
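For reference, a minimal sketch of the two setups being compared. The role ARN, S3 paths, instance type, and parameter ranges below are hypothetical placeholders; only the container version (1.5-1) and the objective metric name (validation:f1) follow the question.

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.tuner import (ContinuousParameter, HyperparameterTuner,
                             IntegerParameter)

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerRole"  # hypothetical role

# Built-in XGBoost container, version 1.5-1 as in the question
image_uri = sagemaker.image_uris.retrieve(
    "xgboost", session.boto_region_name, version="1.5-1"
)

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="binary:logistic", num_round=200)

# Hypothetical search space, mirrored in the RandomizedSearchCV run below
tuner = HyperparameterTuner(
    estimator,
    objective_metric_name="validation:f1",
    hyperparameter_ranges={
        "eta": ContinuousParameter(0.01, 0.3),
        "max_depth": IntegerParameter(3, 10),
        "subsample": ContinuousParameter(0.5, 1.0),
    },
    objective_type="Maximize",
    max_jobs=20,
    max_parallel_jobs=2,
)

# CSV input: the built-in algorithm expects the label in the first
# column and no header row
train = TrainingInput("s3://my-bucket/train.csv", content_type="text/csv")
validation = TrainingInput("s3://my-bucket/validation.csv",
                           content_type="text/csv")
tuner.fit({"train": train, "validation": validation})
```

```python
from scipy.stats import randint, uniform
from sklearn.model_selection import RandomizedSearchCV
from xgboost import XGBClassifier

# Same (hypothetical) ranges as the SageMaker tuning job above
search = RandomizedSearchCV(
    XGBClassifier(n_estimators=200, objective="binary:logistic"),
    param_distributions={
        "learning_rate": uniform(0.01, 0.29),  # [0.01, 0.30]
        "max_depth": randint(3, 11),           # 3..10
        "subsample": uniform(0.5, 0.5),        # [0.5, 1.0]
    },
    n_iter=20,
    scoring="f1",
    cv=5,
)
search.fit(X_train, y_train)  # X_train, y_train assumed loaded from the same CSV
```

Note that even with identical search spaces the two objectives are not computed identically: SageMaker's validation:f1 is measured once on the supplied validation channel, while scoring="f1" in RandomizedSearchCV averages over cross-validation folds, which alone can shift the reported numbers.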

1 Answer

Hello,

Thank you for using AWS SageMaker.

To identify the root cause of this issue, we would need to review the job configuration, internal logs, and other factors that could have led to the difference you observed when comparing the tuning jobs. As this medium is not appropriate for sharing job details and logs, we encourage you to reach out to AWS Support by opening a case so that engineers can look into your setup and help you resolve the issue. You can open a support case using this link: https://console.aws.amazon.com/support/home?#/case/create

AWS
Support Engineer
answered 1 year ago
