XGBoost hyperparameter tuning: worse metrics than RandomizedSearchCV


I have implemented hyperparameter tuning using the 1.5-1 SageMaker XGBoost container for a binary classifier with an F1:validation objective, loading data from a CSV file. Compared to a simple RandomizedSearchCV over the same parameter space (XGBoost 1.5.1 or 1.6.2), the metrics are significantly worse with SageMaker (0.42 vs. 0.59 for the best model). I am trying to understand where such a discrepancy could come from. Have you observed anything similar, and how could I resolve it? Thanks in advance.

1 Answer

Hello,

Thank you for using AWS SageMaker.

To identify the root cause of this issue, we would need to review the tuning job configuration and internal logs, along with other factors that could explain the difference you observed when comparing the two tuning approaches. Since this forum is not the appropriate place to share job details and logs, we encourage you to open a case with AWS Support so that an engineer can investigate and help you resolve the issue. You can open a support case at: https://console.aws.amazon.com/support/home?#/case/create

AWS
Support Engineer
Answered a year ago
