Disappointing Forecast accuracy metrics - is it the data or the training?


We want to improve sales forecasting for the products we manufacture, using the Amazon Forecast console with almost 5 years of per-product (SKU) sales data. Initial experiments across the whole product range produced disappointing accuracy metrics, which I attributed to variable product lifecycles and too much intermittent data. So I have gradually refined the TTS (target time series) dataset: aggregating each SKU across all sales channels and to monthly frequency, and removing products that were discontinued early, or introduced late, in the period. After a few iterations, the TTS data now includes only 33 SKUs that have sales data in all, or almost all, of the 58 months (3 of which I've kept as a holdout). I have also created metadata for these SKUs covering material, product family, design type, and selling price. On the last test I also included a related time series for "covid impact" during 2020.
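For context, the aggregation step described above (each SKU summed across channels to monthly frequency) can be sketched in pandas. The column names and sample rows here are assumptions for illustration, not the actual dataset:

```python
import pandas as pd

# Hypothetical raw sales export: one row per SKU, channel, and date.
raw = pd.DataFrame({
    "sku": ["A1", "A1", "A1", "B2"],
    "channel": ["web", "retail", "web", "web"],
    "date": pd.to_datetime(["2020-01-05", "2020-01-12", "2020-02-03", "2020-01-20"]),
    "units": [10, 4, 7, 3],
})

# Aggregate each SKU across all channels to monthly frequency,
# matching the TTS preparation described in the question.
monthly = (
    raw.groupby(["sku", pd.Grouper(key="date", freq="MS")])["units"]
    .sum()
    .reset_index()
)
print(monthly)
```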

However, the system's confidence in its own model has remained very low: average wQL of 0.63 (with little variance across p50, p70, and p90), RMSE of 84.2, MAPE of 1.1, etc.
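To help interpret those numbers, here is a rough sketch of the weighted quantile loss (wQL) that Amazon Forecast reports, normalised by total actual demand. This is an illustrative approximation of the published formula, not Forecast's internal code; a wQL near 0.63 means the summed quantile errors are a large fraction of total demand:

```python
import numpy as np

def weighted_quantile_loss(actuals, forecasts, tau):
    """Approximate wQL[tau]: twice the summed pinball loss at
    quantile tau, divided by the total absolute actual demand."""
    actuals = np.asarray(actuals, dtype=float)
    forecasts = np.asarray(forecasts, dtype=float)
    over = np.maximum(actuals - forecasts, 0.0)   # amount under-forecast
    under = np.maximum(forecasts - actuals, 0.0)  # amount over-forecast
    return 2.0 * np.sum(tau * over + (1.0 - tau) * under) / np.sum(np.abs(actuals))

# A perfect forecast gives wQL = 0.
print(weighted_quantile_loss([10, 20, 30], [10, 20, 30], 0.5))  # 0.0
```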

As a beginner at this, I've run out of ideas for providing a dataset with the best chance of success. What other strategies do I have to create and train a better model? Or should I focus on the TTS data, and are there ways to identify whether it is somehow unsuitable for forecasting? I am specifically confused about the methods the system is using, as I cannot see anywhere to configure or review this.
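On the question of whether the TTS data itself is hard to forecast, one common quick check is to look at each SKU's demand volatility and intermittency. This is a generic diagnostic sketch (not an Amazon Forecast feature): a high coefficient of variation or a large share of zero-demand months suggests a series most models will struggle with:

```python
import numpy as np

def series_diagnostics(values):
    """Quick checks on one SKU's monthly demand series:
    - cv: coefficient of variation (std / mean); higher = more volatile
    - zero_share: fraction of months with zero sales; higher = more intermittent
    """
    values = np.asarray(values, dtype=float)
    mean = values.mean()
    cv = values.std() / mean if mean > 0 else float("inf")
    zero_share = float(np.mean(values == 0))
    return {"cv": round(cv, 2), "zero_share": round(zero_share, 2)}

smooth = [100, 110, 95, 105, 98, 102]  # steady demand, easy to forecast
lumpy = [0, 0, 300, 0, 5, 0]           # intermittent demand, hard to forecast
print(series_diagnostics(smooth))
print(series_diagnostics(lumpy))
```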

I believe our goal should be achievable, and I'm keen to learn and work at this. I'd really appreciate any pointers to get there!

Thanks

Stewrat
asked 5 months ago · 199 views
No Answers
