With Inf1, you were indeed able to compile models on machines other than inf1 instances, which made development and deployment more flexible: developers could test and refine their models on CPU-based machines before deploying to inf1 instances.
However, as AWS iterates on and improves its services, workflows and capabilities can change. The restriction you mention on inf2 instances may be part of such a change, driven by technical requirements, performance optimization, security considerations, or other factors.
Check out https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-trace.html#torch-neuronx-trace-api - this should work on a CPU instance, though you will need to install all of the Neuron SDK components (including the runtime library) on that instance for compilation to work correctly.
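As a rough sketch, compiling on a CPU-only instance with `torch_neuronx.trace` can look like the following. This assumes the `torch-neuronx` package and the rest of the Neuron SDK (compiler and runtime library) are installed; the model and input shapes here are placeholders, not anything specific to your setup:

```python
import torch
import torch_neuronx  # part of the AWS Neuron SDK; requires the neuronx-cc compiler installed

# Placeholder model -- substitute your own trained model
model = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 2),
)
model.eval()

# Example input matching the shape the compiled model will be served with
example_input = torch.rand(1, 4)

# torch_neuronx.trace compiles the model ahead of time for Inf2/Trn1.
# Compilation itself does not need a Neuron device attached, so this
# step can run on a CPU instance with the SDK components installed.
neuron_model = torch_neuronx.trace(model, example_input)

# Save the compiled artifact for deployment on an inf2 instance
neuron_model.save("model_neuronx.pt")
```

On the inf2 instance you would then load the saved artifact (e.g. with `torch.jit.load("model_neuronx.pt")`) and run inference as usual.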