XGBoost hyperparameter tuning: worse metrics than RandomizedSearchCV


I have implemented hyperparameter tuning using the 1.5-1 SageMaker XGBoost container for a binary classifier with an F1:validation objective, loading data from a CSV file. Compared to a simple RandomizedSearchCV over the same parameter space (open-source XGBoost 1.5.1 or 1.6.2), the metrics are significantly worse with SageMaker (0.42 vs. 0.59 for the best model). I am trying to understand where such a discrepancy could come from. Have you observed anything like this, and how could I resolve it? Thanks in advance.
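For reference, here is a minimal sketch of the two setups being compared (not my actual code): the file names, label column, S3 paths, role ARN, and parameter ranges are placeholders, and `validation:f1` is the metric name SageMaker typically uses for an F1-on-validation objective.

```python
# Minimal sketch of the comparison. All paths, the role ARN, and the
# parameter ranges below are placeholders, not the real job configuration.
import pandas as pd
from scipy.stats import randint, uniform
from sklearn.model_selection import RandomizedSearchCV
from xgboost import XGBClassifier

import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.tuner import ContinuousParameter, HyperparameterTuner, IntegerParameter

# --- Local baseline: RandomizedSearchCV with open-source XGBoost ---
df = pd.read_csv("train.csv")                    # placeholder file
X, y = df.drop(columns=["label"]), df["label"]   # placeholder label column

param_dist = {
    "max_depth": randint(3, 10),
    "learning_rate": uniform(0.01, 0.3),   # same parameter as "eta" on the SageMaker side
    "subsample": uniform(0.5, 0.5),
}
search = RandomizedSearchCV(
    XGBClassifier(n_estimators=200, objective="binary:logistic"),
    param_distributions=param_dist,
    n_iter=20,
    scoring="f1",   # cross-validated F1 at the default 0.5 threshold
    cv=5,
)
search.fit(X, y)

# --- SageMaker: HyperparameterTuner with the built-in XGBoost 1.5-1 container ---
session = sagemaker.Session()
role = "arn:aws:iam::111111111111:role/SageMakerRole"   # placeholder
image_uri = sagemaker.image_uris.retrieve(
    "xgboost", region=session.boto_region_name, version="1.5-1"
)
estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/output/",               # placeholder
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="binary:logistic", num_round=200)

tuner = HyperparameterTuner(
    estimator,
    objective_metric_name="validation:f1",   # F1 measured on the validation channel
    hyperparameter_ranges={
        "max_depth": IntegerParameter(3, 10),
        "eta": ContinuousParameter(0.01, 0.3),
        "subsample": ContinuousParameter(0.5, 1.0),
    },
    objective_type="Maximize",
    strategy="Random",   # to match RandomizedSearchCV; the SageMaker default is Bayesian
    max_jobs=20,
    max_parallel_jobs=2,
)
tuner.fit({
    "train": TrainingInput("s3://my-bucket/train/", content_type="text/csv"),        # placeholder
    "validation": TrainingInput("s3://my-bucket/validation/", content_type="text/csv"),
})
```

One difference I am aware of in a comparison like this: RandomizedSearchCV reports cross-validated F1 on the training data, while the tuning job reports F1 on the held-out validation channel, so the two best scores are not computed on the same split; I am not sure whether that alone explains a gap of this size.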

1 Answer

Hello,

Thank you for using AWS SageMaker.

To better understand the root cause of this issue, we would need to review the job configuration, the internal logs, and other factors that could have led to the difference you observed while comparing the tuning jobs. As this medium is not appropriate for sharing job details and logs, we encourage you to reach out to AWS Support by opening a case so that an engineer can look into your jobs and help you resolve the issue. You can open a support case with AWS using this link: https://console.aws.amazon.com/support/home?#/case/create

AWS
SUPPORT ENGINEER
answered a year ago
