1 Answer
Hi,
Do you have `instance_type = "local_gpu"`? When `instance_type` is set to `"local"`, the container may run on CPU instead of GPU.
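For example, deploying in local mode would look roughly like this (a minimal sketch, assuming the SageMaker Python SDK and a PyTorch model; the S3 path, role ARN, and versions are placeholders):

```python
from sagemaker.pytorch import PyTorchModel

# Placeholder model artifact and IAM role; substitute your own.
model = PyTorchModel(
    model_data="s3://my-bucket/model.tar.gz",
    role="arn:aws:iam::123456789012:role/MySageMakerRole",
    entry_point="inference.py",
    framework_version="2.1",
    py_version="py310",
)

# "local_gpu" runs the serving container on this machine's GPUs;
# plain "local" runs it on CPU only.
predictor = model.deploy(initial_instance_count=1, instance_type="local_gpu")
```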
Best,
Didier
As I said earlier, everything is in place: the model is loaded onto the GPUs, and all of them show VRAM in use. But when I start inference, only the first GPU does any processing; the compute on the other GPUs sits idle. Please refer to the second image ("GPU-Util") attached to the question.
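For reference, memory allocated on a GPU only shows where the weights live; it does not show which device executes the forward pass, which is why VRAM can be used everywhere while only `cuda:0` does work. A check along these lines makes that split visible (a minimal sketch, assuming PyTorch; the loop over devices is illustrative):

```python
import torch

# Report per-GPU allocated memory. Weights sharded across GPUs show
# memory on every device even when only cuda:0 runs the forward pass.
for i in range(torch.cuda.device_count()):
    name = torch.cuda.get_device_properties(i).name
    mib = torch.cuda.memory_allocated(i) / 1024**2
    print(f"cuda:{i} ({name}): {mib:.0f} MiB allocated")
```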
Thanks, Didier.