As of today, it's not possible to train a machine learning model with SageMaker on a reserved instance that is already up and running; SageMaker always provisions a new training instance. The service team is working on this, but unfortunately I don't have an ETA for when the feature will be released.
Local Mode is supported for frameworks images (TensorFlow, MXNet, Chainer, PyTorch, and Scikit-Learn) and images you supply yourself.
Using the SageMaker Python SDK — sagemaker 2.72.3 documentation
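As a rough illustration of Local Mode with the Python SDK linked above (the script name, role ARN, framework version, and data path are placeholders, not taken from this thread):

```python
# Minimal Local Mode sketch: runs the framework container on the current
# machine instead of provisioning a SageMaker training instance.
# Assumes Docker is installed locally and a training script "train.py" exists.
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train.py",                                   # hypothetical training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",      # placeholder IAM role ARN
    framework_version="1.12",
    py_version="py38",
    instance_count=1,
    instance_type="local",        # "local" (or "local_gpu") triggers Local Mode
)

# Train against local files instead of S3; nothing is provisioned in AWS.
estimator.fit({"training": "file://./data"})
```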
If you simply want to train built-in algorithm models faster, check the EC2 instance recommendations that the SageMaker documentation gives for each algorithm.
For example: BlazingText instance recommendations, DeepAR instance recommendations.
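A sketch of launching a built-in algorithm (BlazingText here) on an instance type of the kind the algorithm documentation recommends; the role ARN, bucket paths, version, and instance type are illustrative placeholders:

```python
# Built-in algorithm training with an explicitly chosen instance type.
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
region = session.boto_region_name

# Resolve the BlazingText built-in algorithm image for this region.
image_uri = image_uris.retrieve("blazingtext", region, version="1")

estimator = Estimator(
    image_uri=image_uri,
    role="arn:aws:iam::123456789012:role/SageMakerRole",   # placeholder role ARN
    instance_count=1,
    instance_type="ml.p3.2xlarge",   # single-GPU type often suggested for BlazingText
    output_path="s3://my-bucket/blazingtext-output",
    sagemaker_session=session,
)

estimator.set_hyperparameters(mode="skipgram")
estimator.fit({"train": "s3://my-bucket/blazingtext-train"})
```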
If the algorithm supports it, you can also try Pipe mode or FastFile mode, which stream training data from Amazon S3 instead of downloading it first and so shorten training job startup time. See Accelerate-model-training-using-faster-pipe-mode-on-amazon-sagemaker. A short sketch follows.
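A small sketch of switching a training channel to FastFile mode, assuming an SDK-based estimator like the ones above; the S3 path is a placeholder:

```python
# Use FastFile (or Pipe) input mode for a training channel so data is
# streamed from S3 rather than fully copied to the instance before training.
from sagemaker.inputs import TrainingInput

train_input = TrainingInput(
    s3_data="s3://my-bucket/training-data",   # placeholder S3 prefix
    input_mode="FastFile",                    # or "Pipe", if the algorithm supports it
)

estimator.fit({"train": train_input})
```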
ML Instance
Amazon EC2 Inf1 instances are available in 4 sizes, providing up to 16 Inferentia chips, 96 vCPUs, 192 GB of memory, 100 Gbps of networking bandwidth, and 19 Gbps of Elastic Block Store (EBS) bandwidth. These instances are purchasable On-Demand, as Reserved Instances, as Spot Instances, or as part of Savings Plans, and are now available in 21 regions globally.
This is very helpful. Thanks for getting back to me.
Regards, Stefan