Questions about backtesting and the possibility to set more than one backtest window in the AWS Forecast console

When I use AWS Forecast through the console (with metadata, related time series and a target variable, with daily periodicity, using data from a 30-month interval for hundreds of item_ids), in the 10-day predictions I systematically obtain (for 10 different cut points) higher MAPEs in the validation data than in the train/test data used to optimize the model. Is it possible that the trained model is overfitting, or is this related to the fact that training through the console allows only one backtest? If it is the latter, how can I select more backtests through the console?

LMC_arg
asked 3 months ago · 109 views
2 Answers
Accepted Answer

Hello,

Kindly note that it is hard to say why you are obtaining higher MAPEs in the validation data than in the train/test data used to optimize the model, because Forecast ensembles multiple models on the backend. I would suggest referring to the "model explainability" feature to check how the related data influence a forecast.

[+] https://docs.aws.amazon.com/forecast/latest/dg/forecast-explainability.html
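If you prefer to script this, the same Explainability feature can also be created against an existing predictor with the SDK. The sketch below is only an illustration; the region, resource name, and ARN are placeholders, not values from your account.

```python
import boto3

# Minimal sketch: create an Explainability resource for an existing predictor.
# The region, name, and ARN below are placeholders.
forecast = boto3.client("forecast", region_name="us-east-1")

response = forecast.create_explainability(
    ExplainabilityName="demand-predictor-explainability",  # placeholder name
    ResourceArn=(
        "arn:aws:forecast:us-east-1:123456789012:predictor/demand-predictor"  # placeholder ARN
    ),
    ExplainabilityConfig={
        "TimeSeriesGranularity": "ALL",  # aggregate impact scores over all items
        "TimePointGranularity": "ALL",   # aggregate over all time points
    },
    EnableVisualization=True,            # makes the impact scores viewable in the console
)
print(response["ExplainabilityArn"])
```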


Also, this does not look related to the number of backtest windows. However, it is not possible to say why you are seeing a higher MAPE without your data. Overfitting may be a potential cause, but for a specific investigation into your issue and recommendations for next steps, please reach out to AWS Support [1] (Forecast) with your issue or use case described in detail, and we would be happy to assist you further.
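In case you do still want to evaluate more than one backtest window, that setting is, as far as I know, exposed through the CreatePredictor API (EvaluationParameters) rather than the console's auto-predictor flow. A minimal boto3 sketch follows; the names and ARNs are placeholders and the algorithm choice is only an example.

```python
import boto3

# Hedged sketch: train a predictor with several backtest windows via the
# legacy CreatePredictor API. Names and ARNs are placeholders.
forecast = boto3.client("forecast", region_name="us-east-1")

response = forecast.create_predictor(
    PredictorName="demand-predictor-3-backtests",             # placeholder name
    AlgorithmArn="arn:aws:forecast:::algorithm/Deep_AR_Plus",  # example algorithm
    ForecastHorizon=10,                                        # 10-day horizon, as in the question
    EvaluationParameters={
        "NumberOfBacktestWindows": 3,  # more than one backtest window
        "BackTestWindowOffset": 10,    # must be at least the forecast horizon
    },
    FeaturizationConfig={"ForecastFrequency": "D"},            # daily data
    InputDataConfig={
        "DatasetGroupArn": (
            "arn:aws:forecast:us-east-1:123456789012:dataset-group/demand"  # placeholder ARN
        ),
    },
)
print(response["PredictorArn"])
```

With more than one window, the accuracy metrics reported for the predictor are averaged across the backtest windows, which should give a more stable estimate than a single split.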

References:

[1] Creating support cases and case management - https://docs.aws.amazon.com/awssupport/latest/user/case-management.html#creating-a-support-case

AWS
answered 3 months ago

Thank you very much for your answer. Regarding the "explainability" tool, we have used it as described in the documentation and have obtained three different results across our tests:

  1. When we load only the target, metadata and holidays, the explainability job is created and shows that the holiday and country variables have explanatory power (weights greater than zero: 0.45 and 0.20).
  2. When we additionally load the related (exogenous) variables, in some cases the job reaches the status "CREATE_FAILED", and in other cases the variables appear on screen but with weights equal to zero, including holidays and country.

Why could this be happening? Thank you very much for your answer.
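For reference, the failure reason for a job that ends in CREATE_FAILED should be readable from the resource itself via DescribeExplainability, which returns both a status and a message. A minimal boto3 sketch, with a placeholder ARN:

```python
import boto3

# Sketch: read the status and failure message of an Explainability job.
# The ARN below is a placeholder.
forecast = boto3.client("forecast", region_name="us-east-1")

desc = forecast.describe_explainability(
    ExplainabilityArn="arn:aws:forecast:us-east-1:123456789012:explainability/demand-explainability"
)
print(desc["Status"])           # e.g. ACTIVE or CREATE_FAILED
print(desc.get("Message", ""))  # failure reason when the job did not complete
```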

LMC_arg
answered 3 months ago
