I want to forecast future demand based on 69 million historical demand records in a CSV file. What is the best practice?


I have historical demand data: 18 GB in CSV format, 69M records, 30 columns.

I'm exploring the SageMaker options and see several: Amazon Forecast, SageMaker Studio, Canvas, Training Jobs, and a plain Jupyter notebook instance. I believe all of them could theoretically be used, but I'm not sure which one can actually handle such a huge dataset without taking forever.

I think I heard that some of these can only support a few million records. I'd like to know the best approach for forecasting future demand with such a large number of data points.

Should I use Spark? Can someone lay out how to do this?

Asked 8 months ago · Viewed 361 times
1 Answer
Accepted Answer

Hi,

For such large datasets, SageMaker Data Wrangler seems quite appropriate for preparing the data. In https://aws.amazon.com/blogs/machine-learning/process-larger-and-wider-datasets-with-amazon-sagemaker-data-wrangler/ it is benchmarked on a dataset of around 100 GB with 80 million rows and 300 columns.
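
If you prefer a code-driven route over the Data Wrangler UI (and since you mentioned Spark), a SageMaker Processing job running managed Spark can do the same kind of large-scale preparation. Here is a minimal sketch only; the bucket name, role ARN, and the preprocess.py script are placeholders, not something from your setup:

```python
# Sketch: distributed preprocessing of the 18 GB CSV with a SageMaker Spark Processing job.
# Bucket, role, and preprocess.py are illustrative placeholders.
from sagemaker.spark.processing import PySparkProcessor

spark_processor = PySparkProcessor(
    base_job_name="demand-prep",
    framework_version="3.1",          # Spark version of the managed container
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # replace with your execution role
    instance_type="ml.m5.4xlarge",
    instance_count=4,                 # Spark spreads the 69M rows across these instances
)

spark_processor.run(
    submit_app="preprocess.py",       # your PySpark script: read the CSV, clean/aggregate, write Parquet
    arguments=[
        "--input", "s3://my-demand-bucket/raw/demand.csv",
        "--output", "s3://my-demand-bucket/prepared/",
    ],
)
```

Writing the prepared output as Parquet instead of CSV typically makes the downstream training reads much cheaper, since the columns can be read selectively.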

Regarding the training of large models with Amazon SageMaker, see this video: https://www.youtube.com/watch?v=XKLIhIeDSCY
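
For the forecasting model itself, one option that scales well beyond what a notebook kernel can hold in memory is launching a training job with a built-in algorithm such as DeepAR (my suggestion, not something specific to the video). A rough sketch, assuming the prepared series have already been converted to DeepAR's JSON Lines format and uploaded to S3; paths, role, and hyperparameter values are placeholders:

```python
# Sketch: launching a SageMaker training job with the built-in DeepAR forecasting algorithm.
# S3 paths, role ARN, and hyperparameter values are illustrative placeholders.
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
region = session.boto_region_name

# Resolve the DeepAR container image for the current region.
image_uri = image_uris.retrieve("forecasting-deepar", region)

estimator = Estimator(
    image_uri=image_uri,
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # replace with your execution role
    instance_count=1,
    instance_type="ml.c5.4xlarge",
    output_path="s3://my-demand-bucket/deepar/output/",
    sagemaker_session=session,
)

estimator.set_hyperparameters(
    time_freq="D",              # daily demand; adjust to your series granularity
    context_length=30,          # how much history the model looks at per prediction
    prediction_length=30,       # forecast horizon
    epochs=100,
)

estimator.fit({
    "train": "s3://my-demand-bucket/deepar/train/",
    "test": "s3://my-demand-bucket/deepar/test/",
})
```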

Also, regarding the training of your model, this post helps you choose the best data source for your training job: https://aws.amazon.com/blogs/machine-learning/choose-the-best-data-source-for-your-amazon-sagemaker-training-job/
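
As a small illustration of what that post covers: the data source and input mode are set on the training channel, and for a large dataset already in S3, FastFile mode streams the objects on demand instead of copying all 18 GB to the training instance before the job starts. A hedged sketch, with a placeholder S3 prefix and channel name that depend on your algorithm:

```python
# Sketch: choosing the data source / input mode for a training channel.
# FastFile streams from S3 on demand; "File" downloads everything first, "Pipe" streams sequentially.
from sagemaker.inputs import TrainingInput

train_input = TrainingInput(
    s3_data="s3://my-demand-bucket/prepared/train/",  # placeholder prefix
    content_type="text/csv",       # match whatever format your algorithm expects
    input_mode="FastFile",
)

# estimator is whichever Estimator you configured; channel names depend on the algorithm.
estimator.fit({"train": train_input})
```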

Best,

Didier

AWS Expert
Answered 8 months ago
