Hi Max,
My job is running with this config: "--conf spark.executor.cores=1 --conf spark.executor.memory=2g --conf spark.driver.cores=1 --conf spark.driver.memory=2g --conf spark.executor.instances=1", and the ServiceQuotaLimit is 16 vCPUs. I am not able to understand how this adds up to 16. I need to understand that in order to calculate what limit I should request.
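To show where my confusion comes from, here is my rough arithmetic for the static footprint of that config (assuming one vCPU per requested core, which may be the wrong assumption):

```python
# Static vCPU footprint implied by the submitted config
# (assumption: vCPUs consumed == cores requested).
executor_instances = 1  # spark.executor.instances
executor_cores = 1      # spark.executor.cores
driver_cores = 1        # spark.driver.cores

static_vcpus = driver_cores + executor_instances * executor_cores
print(static_vcpus)  # 2 -- nowhere near the 16 vCPU ServiceQuotaLimit
```

So by this reasoning the job should need only 2 vCPUs, which is why hitting a 16 vCPU limit surprised me.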
From the documentation, it seems that spark.dynamicAllocation.enabled is true by default, and the default value of spark.dynamicAllocation.maxExecutors is infinite (for release 6.10.0 and higher): https://docs.aws.amazon.com/emr/latest/EMR-Serverless-UserGuide/jobs-spark.html. So our job was creating a high number of workers. We are going to disable this option and see if we still hit the vCPU limits.
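In case it helps, a sketch of the extra submit parameters we plan to try (spark.dynamicAllocation.enabled and spark.dynamicAllocation.maxExecutors are standard Spark properties; the specific values here are just our choice, not a recommendation):

```
# Either disable dynamic allocation entirely...
--conf spark.dynamicAllocation.enabled=false --conf spark.executor.instances=1

# ...or keep it on but cap how far it can scale:
--conf spark.dynamicAllocation.maxExecutors=4
```

With a cap in place, the worst-case vCPU usage becomes predictable, which also makes it easier to decide what quota to request.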
You can read more about vCPU account limits here: https://aws.amazon.com/blogs/compute/preview-vcpu-based-instance-limits/
To request an increase: first determine how many vCPUs you need, then open a support case and ask for a limit increase to that number of vCPUs. Follow the process described in the EC2 Knowledge Center article on vCPU limit increases.
Disabling dynamicAllocation worked for us.