To deploy a model on Inf1 and Inf2 instances, you need to compile the model with the AWS Neuron SDK. This documentation page has the up-to-date list of supported models for AWS Inferentia2, AWS Inferentia, and AWS Trainium.
If you want to deploy Stable Diffusion on AWS Inferentia2, please see this blogpost for a full walkthrough.
Hope this helps.
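As a rough sketch of what the Neuron compilation step looks like, assuming a PyTorch model and the `torch_neuronx` tracing API used for Inf2/Trn1 (Inf1 uses the older `torch_neuron` package instead); this must run on an instance with the Neuron SDK and compiler installed, and the model and input shape here are placeholders:

```python
import torch
import torch_neuronx  # AWS Neuron SDK for Inf2/Trn1 (pre-installed on Neuron DLAMIs)

# Placeholder: any traceable PyTorch model in eval mode
model = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU())
model.eval()

# Example input matching the shape the compiled model will be served with;
# Neuron compiles for fixed input shapes, so pick your production shape here.
example_input = torch.rand(1, 128)

# Compile (trace) the model for the NeuronCore; this invokes neuronx-cc
# under the hood and can take several minutes for large models.
traced = torch_neuronx.trace(model, example_input)

# Save the compiled artifact; load it later with torch.jit.load on an
# Inf2/Trn1 instance to run inference on the accelerator.
torch.jit.save(traced, "model_neuron.pt")
```

For large generative models (such as Stable Diffusion or LLMs) you would typically use the higher-level `transformers-neuronx` or Optimum Neuron libraries rather than tracing the model yourself.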
Some of the JumpStart models support Trainium and Inferentia instances. You'll notice this in the model description. Sometimes the model will say "Neuron" in the title, such as with Llama 2 and 3.
You can search for "neuron" in the JumpStart Studio page. I'm doing this now and it shows me 17 models that support this, including Llama3.
Each of these models varies in which modes it supports: training, hosting, and evaluation.
Once you've selected the model and the mode, you'll be prompted to select the instance type. For Neuron models, only Trainium and Inferentia instances will be offered, in a variety of sizes.
You can do the same search on our product documentation page here.
To work with JumpStart models in the Python SDK, including for Llama 3, check out the steps here.
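As a minimal sketch of deploying a Neuron JumpStart model through the SageMaker Python SDK, assuming valid AWS credentials, the required service quotas for Inf2 instances, and an illustrative Llama 3 model ID (check the JumpStart catalog for the exact Neuron-enabled model ID and supported instance types):

```python
from sagemaker.jumpstart.model import JumpStartModel

# Illustrative model ID; look up the exact Neuron-enabled ID in JumpStart.
model = JumpStartModel(
    model_id="meta-textgeneration-llama-3-8b",
    instance_type="ml.inf2.24xlarge",  # a Neuron (Inferentia2) instance
)

# Gated models such as Llama require accepting the EULA at deploy time.
predictor = model.deploy(accept_eula=True)

# Invoke the endpoint, then clean up to stop incurring charges.
response = predictor.predict({"inputs": "What is AWS Inferentia?"})
print(response)

predictor.delete_model()
predictor.delete_endpoint()
```

Because JumpStart's Neuron images ship precompiled model artifacts, no manual `torch_neuronx` compilation step is needed with this path.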
This is not correct: you can deploy some supported models from prebuilt JumpStart images that are precompiled for Trainium and Inferentia.