XGBoost hyperparameter tuning: worse metrics than RandomizedSearchCV


I have implemented hyperparameter tuning with the SageMaker XGBoost container (version 1.5-1) for a binary classifier, using a validation:f1 objective and loading data from a CSV file. Compared to a simple RandomizedSearchCV exploring the same parameter space (XGBoost 1.5.1 or 1.6.2), the metrics from SageMaker are significantly worse (best-model F1 of 0.42 vs. 0.59). I am trying to understand where such a discrepancy could come from. Have you observed anything like this, and how could I resolve it? Thanks in advance.
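For reference, here is a minimal sketch of the two setups being compared. The bucket names, file layout, IAM role, instance type, and parameter ranges below are placeholders, not my exact configuration:

```python
# --- Local baseline: RandomizedSearchCV over xgboost.XGBClassifier, scored on F1 ---
import pandas as pd
from scipy.stats import randint, uniform
from sklearn.model_selection import RandomizedSearchCV
from xgboost import XGBClassifier

df = pd.read_csv("train.csv")                      # assumed layout: label in column "y"
X, y = df.drop(columns=["y"]), df["y"]

search = RandomizedSearchCV(
    XGBClassifier(objective="binary:logistic", eval_metric="logloss"),
    param_distributions={
        "max_depth": randint(3, 10),               # randint upper bound is exclusive
        "learning_rate": uniform(0.01, 0.3),       # uniform(loc, scale) -> [0.01, 0.31]
        "subsample": uniform(0.5, 0.5),            # [0.5, 1.0]
    },
    n_iter=50,
    scoring="f1",                                  # same objective as the tuning job
    cv=5,
    random_state=0,
)
search.fit(X, y)
print(search.best_score_, search.best_params_)

# --- SageMaker tuning job with the built-in XGBoost 1.5-1 container ---
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.tuner import ContinuousParameter, IntegerParameter, HyperparameterTuner

session = sagemaker.Session()
image = sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.5-1")

estimator = Estimator(
    image_uri=image,
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/output",                   # placeholder path
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="binary:logistic", num_round=100)

tuner = HyperparameterTuner(
    estimator,
    objective_metric_name="validation:f1",        # the F1:validation objective above
    objective_type="Maximize",
    hyperparameter_ranges={
        "max_depth": IntegerParameter(3, 10),
        "eta": ContinuousParameter(0.01, 0.3),
        "subsample": ContinuousParameter(0.5, 1.0),
    },
    max_jobs=50,
    max_parallel_jobs=2,
)
# Note: the built-in container expects CSV input with the label in the first
# column and no header row, unlike the pandas layout used locally above.
tuner.fit({
    "train": TrainingInput("s3://my-bucket/train.csv", content_type="text/csv"),
    "validation": TrainingInput("s3://my-bucket/validation.csv", content_type="text/csv"),
})
```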

1 Answer

Hello,

Thank you for using AWS SageMaker.

To identify the root cause of this issue, we would need to review the tuning job configuration, the internal logs, and other factors that could have led to the difference you observed between the two tuning jobs. As this forum is not an appropriate place to share job details and logs, we encourage you to open a case with AWS Support so that our engineers can investigate and help you resolve the issue. You can open a support case using this link: https://console.aws.amazon.com/support/home?#/case/create

AWS
Support Engineer
answered 1 year ago
