Running more than 1 executor per worker in AWS Glue


Hello. By default, Glue runs one executor per worker. I want to run more executors per worker. I have set the following Spark configuration in the Glue job parameters, but it didn't work.

--conf spark.executor.instances=10

Let's say I have 5 G.2X workers. In that case, it starts 4 executors, because 1 worker is reserved for the driver. I can see all 4 executors listed in the Spark UI, but the configuration above does not increase the number of executors at all.

I'm getting the following warning in the driver logs. It seems like glue.ExecutorTaskManagement is controlling the number of executors.

WARN [allocator] glue.ExecutorTaskManagement (Logging.scala:logWarning(69)): executor task creation failed for executor 5, restarting within 15 secs. restart reason: Executor task resource limit has been temporarily hit

Any help would be appreciated. Thanks!

  • If you run the "Standard" node type, it will run 2 executors per worker, but as Fabrizio says, don't try to change those settings on Glue; it's meant to be managed for you
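The counts described above can be summarized in a small helper. This is a sketch based only on the numbers in this thread: the G.1X/G.2X case matches the questioner's observation (5 workers → 4 executors), while the Standard-type formula assumes the driver also consumes one of the two executor slots, which is an assumption, not a documented rule.

```python
def default_executors(worker_type: str, num_workers: int) -> int:
    """Approximate default executor count for an AWS Glue job.

    Sketch derived from observed behavior in this thread, not an
    official AWS formula.
    """
    if worker_type in ("G.1X", "G.2X"):
        # One executor per worker; one worker is reserved for the driver.
        return num_workers - 1
    if worker_type == "Standard":
        # Two executor slots per worker; assume the driver takes one slot.
        return 2 * num_workers - 1
    raise ValueError(f"unknown worker type: {worker_type}")


# The scenario from the question: 5 G.2X workers yield 4 executors.
print(default_executors("G.2X", 5))
```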

Asked 1 year ago · 1862 views
1 Answer

Hi,

AWS Glue is a serverless, fully managed service and is pre-optimized. While the --conf parameter allows you to change some of the Spark configuration, it should not be used unless documented somewhere, as for example in the migration guide from Glue 2.0 to Glue 3.0 (or 4.0).

Most configuration changes, like the one you are trying to pass, will not be taken into consideration. If you need to scale your job further, increase the number of workers to increase the number of executors.
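Concretely, scaling by workers can be done per run by overriding the worker settings in the Glue StartJobRun request rather than with Spark --conf flags. A minimal sketch of the request body, where the job name is a placeholder:

```json
{
  "JobName": "my-glue-job",
  "WorkerType": "G.2X",
  "NumberOfWorkers": 10
}
```

With 10 G.2X workers, you would expect 9 executors once one worker is reserved for the driver.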

In case you really need the flexibility to choose the number of executors, the memory configuration, and even different instance types, I would suggest looking at EMR (on EKS or Serverless) to run your Spark code.
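On EMR Serverless, by contrast, executor-level Spark settings are passed through. A sketch of a job-run request where the application ID, role ARN, script path, and the chosen executor values are all placeholders:

```json
{
  "applicationId": "<application-id>",
  "executionRoleArn": "<job-execution-role-arn>",
  "jobDriver": {
    "sparkSubmit": {
      "entryPoint": "s3://my-bucket/scripts/my_job.py",
      "sparkSubmitParameters": "--conf spark.executor.instances=10 --conf spark.executor.memory=8g"
    }
  }
}
```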

Hope this helps,

AWS
EXPERT
Answered 1 year ago
