I have found the solution to make this happen.
In your app.py file, and in the file where you call torch.jit.load (assuming it is different from app.py), set the following environment variable:
import os
os.environ['NEURON_RT_NUM_CORES'] = '1'
This tells each gunicorn child process to use one Neuron Core, so you can run X workers, where X is the number of Neuron Cores on the device you are running your code on.
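As a minimal sketch of the ordering this implies (the model path is hypothetical, and the torch calls are shown as comments since they require a compiled Neuron model): the variable must be set before the Neuron runtime initializes, i.e. before the model is loaded.

```python
import os

# Pin this worker process to a single Neuron Core. This must happen
# before the Neuron runtime starts, i.e. before the model is loaded.
os.environ['NEURON_RT_NUM_CORES'] = '1'

# Hypothetical model path -- replace with your compiled Neuron model:
#   import torch, torch_neuron
#   model = torch.jit.load('model_neuron.pt')
```

With this in place, gunicorn can spawn one worker per available core and each worker claims exactly one.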
Inferentia is compatible with FastAPI. The error suggests the program is asking to allocate more cores than are available. As an example, let's assume the instance is an inf1.6xlarge, which has 16 Neuron Cores. Your gunicorn command should be:
gunicorn main-fastapi-demo:app --workers 4 --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:8001
In your server code, main-fastapi-demo.py, make sure you set the environment variable after your import statements:
import os

NUM_CORES = 4
os.environ['NEURON_RT_NUM_CORES'] = str(NUM_CORES)
Taken together, this means you will invoke four gunicorn workers, and each worker gets four Neuron Cores, so a total of 4 x 4 = 16 Neuron Cores is allocated to your server process.
You may mix and match these parameters: it doesn't have to be 4 x 4; it could be 8 x 2 or 2 x 8. The best combination is determined by benchmarking.
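The constraint is simply that workers times cores-per-worker should equal the total Neuron Cores on the instance. A quick sketch (the core count of 16 assumes an inf1.6xlarge, per the example above) enumerating the valid splits:

```python
TOTAL_NEURON_CORES = 16  # e.g. an inf1.6xlarge

# Enumerate (workers, cores_per_worker) splits that exactly
# cover the available Neuron Cores.
combos = [(w, TOTAL_NEURON_CORES // w)
          for w in range(1, TOTAL_NEURON_CORES + 1)
          if TOTAL_NEURON_CORES % w == 0]
print(combos)  # → [(1, 16), (2, 8), (4, 4), (8, 2), (16, 1)]
```

Each tuple is a candidate configuration to benchmark: the first number goes to gunicorn's --workers flag, the second to NEURON_RT_NUM_CORES.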