Hi Max,
My job is running with this configuration: "--conf spark.executor.cores=1 --conf spark.executor.memory=2g --conf spark.driver.cores=1 --conf spark.driver.memory=2g --conf spark.executor.instances=1", and the ServiceQuotaLimit is 16 vCPUs. I don't understand how this adds up to 16, and I need to understand that so I can calculate what limit to request.
From the documentation, it seems that spark.dynamicAllocation.enabled is true by default, and the default value of spark.dynamicAllocation.maxExecutors is infinity (for EMR release 6.10.0 and higher): https://docs.aws.amazon.com/emr/latest/EMR-Serverless-UserGuide/jobs-spark.html. So our job was creating a large number of workers. We are going to disable this option and see if we still hit the vCPU limit.
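To see how a small per-executor config can still reach a 16-vCPU quota once dynamic allocation scales out, here is a back-of-the-envelope sketch. The helper and the accounting formula (driver cores plus cores summed over all executors) are assumptions for illustration, not an official AWS calculation:

```python
# Hypothetical helper: estimate the vCPUs a Spark job asks for, assuming
# usage = driver cores + (executor cores * number of executors).
def estimated_vcpus(driver_cores: int, executor_cores: int, executor_instances: int) -> int:
    return driver_cores + executor_cores * executor_instances

# With the config from the question and dynamic allocation disabled:
print(estimated_vcpus(driver_cores=1, executor_cores=1, executor_instances=1))   # 2

# If dynamic allocation grows the job to, say, 15 executors, it reaches
# the 16-vCPU quota: 1 driver core + 15 * 1 executor core = 16.
print(estimated_vcpus(driver_cores=1, executor_cores=1, executor_instances=15))  # 16
```

The point is that spark.executor.instances is only a starting count when dynamic allocation is on; the executor count (and therefore vCPU usage) can grow well past it.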
You can read more about vCPU account limits here: https://aws.amazon.com/blogs/compute/preview-vcpu-based-instance-limits/
To request an increase, first determine how many vCPUs you need, then open a support case and ask for a limit increase to that number of vCPUs. Follow the process described in the EC2 Knowledge Center article on vCPU limit increases.
Disabling dynamicAllocation worked for us
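For reference, a minimal sketch of the submit-time flags involved (the values here are illustrative; these are standard Spark properties, adjust them to your workload):

```shell
# Option 1: disable dynamic allocation entirely (fixed executor count)
--conf spark.dynamicAllocation.enabled=false \
--conf spark.executor.instances=1

# Option 2: keep dynamic allocation but cap how far it can scale out
--conf spark.dynamicAllocation.enabled=true \
--conf spark.dynamicAllocation.maxExecutors=4
```

Capping maxExecutors keeps the elasticity of dynamic allocation while bounding the worst-case vCPU usage, which may be preferable to disabling it outright.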