XGBoost hyperparameter tuning: worse metrics than RandomizedSearchCV


I have implemented hyperparameter tuning with the SageMaker XGBoost 1.5-1 container for a binary classifier, using a validation:f1 objective and loading the data from a CSV file. Compared to a simple RandomizedSearchCV over the same parameter space (open-source XGBoost 1.5.1 or 1.6.2), the metrics from SageMaker are significantly worse (0.42 vs. 0.59 F1 for the best model). I am trying to understand where such a discrepancy could come from. Have you observed anything like this, and how could I resolve it? Thanks in advance.
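
For context, here is a minimal sketch of the two setups being compared. It assumes the built-in SageMaker XGBoost 1.5-1 container and a CSV dataset on S3; the bucket paths, instance type, parameter ranges, job counts, and the eval_metric setting below are illustrative assumptions, not my exact configuration:

```python
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.tuner import (
    ContinuousParameter,
    HyperparameterTuner,
    IntegerParameter,
)

session = sagemaker.Session()
role = sagemaker.get_execution_role()

# Built-in XGBoost 1.5-1 container mentioned above.
container = image_uris.retrieve("xgboost", session.boto_region_name, version="1.5-1")

xgb = Estimator(
    image_uri=container,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",             # placeholder instance type
    output_path="s3://my-bucket/xgb-output",  # placeholder path
    sagemaker_session=session,
)
xgb.set_hyperparameters(
    objective="binary:logistic",
    num_round=200,
    eval_metric="f1",  # assumption: set so the container emits validation:f1
)

# Illustrative ranges; the same space is mirrored in RandomizedSearchCV below.
ranges = {
    "eta": ContinuousParameter(0.01, 0.3),
    "max_depth": IntegerParameter(3, 10),
    "subsample": ContinuousParameter(0.5, 1.0),
}

tuner = HyperparameterTuner(
    estimator=xgb,
    objective_metric_name="validation:f1",
    objective_type="Maximize",
    hyperparameter_ranges=ranges,
    max_jobs=20,
    max_parallel_jobs=2,
)
tuner.fit({
    "train": TrainingInput("s3://my-bucket/train.csv", content_type="text/csv"),
    "validation": TrainingInput("s3://my-bucket/validation.csv", content_type="text/csv"),
})

# Local baseline with the open-source xgboost package (1.5.1 / 1.6.2).
from scipy.stats import randint, uniform
from sklearn.model_selection import RandomizedSearchCV
from xgboost import XGBClassifier

search = RandomizedSearchCV(
    XGBClassifier(objective="binary:logistic", n_estimators=200),
    param_distributions={
        "learning_rate": uniform(0.01, 0.29),  # [0.01, 0.3]
        "max_depth": randint(3, 11),           # 3..10
        "subsample": uniform(0.5, 0.5),        # [0.5, 1.0]
    },
    n_iter=20,
    scoring="f1",
    cv=5,
)
# search.fit(X_train, y_train)  # X_train / y_train loaded from the same CSV
```

Note that in this sketch the two searches are not scored identically: the SageMaker tuner evaluates F1 on the single validation channel, while RandomizedSearchCV reports the mean F1 over cv=5 folds.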

1 Answer

Hello,

Thank you for using AWS SageMaker.

To identify the root cause of this issue, we would need to review the tuning job configuration and internal logs, along with other factors that could have led to the difference you observed when comparing the two tuning runs. As this medium is not appropriate for sharing job details and logs, we encourage you to open a case with AWS Support so that the engineers can investigate and help you resolve the issue. You can open a support case using this link: https://console.aws.amazon.com/support/home?#/case/create

answered a year ago by AWS SUPPORT ENGINEER
