Running more than 1 executor per worker in AWS Glue


Hello, By default Glue runs one executor per worker. I want to run more executors per worker. I have set the following Spark configuration in the Glue job parameters, but it didn't work.

--conf : spark.executor.instances=10

Let's say I have 5 G.2X workers. In that case it starts 4 executors, because one worker is reserved for the driver. I can see all 4 executors listed in the Spark UI, but the configuration above does not increase the executor count at all.
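The arithmetic above can be sketched as a small helper. This is only an illustration of the default allocation described in the question (one executor per G.2X worker, one worker's resources reserved for the driver); the `executors_per_worker` parameter is an assumption to also cover the "Standard" worker type mentioned below.

```python
def default_executors(num_workers: int, executors_per_worker: int = 1) -> int:
    """Executors a Glue job gets by default: one worker is reserved for the driver.

    executors_per_worker is 1 for G.1X/G.2X; the "Standard" worker type
    reportedly runs 2 executors per worker (see the comment below).
    """
    return (num_workers - 1) * executors_per_worker

print(default_executors(5))      # 5 G.2X workers -> 4 executors
print(default_executors(5, 2))   # 5 "Standard" workers -> 8 executors
```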

I'm getting the following warning in the driver logs. It seems glue.ExecutorTaskManagement is controlling the number of executors.

WARN [allocator] glue.ExecutorTaskManagement (Logging.scala:logWarning(69)): executor task creation failed for executor 5, restarting within 15 secs. restart reason: Executor task resource limit has been temporarily hit

Any help would be appreciated. Thanks!

  • If you run the "Standard" worker type it will run 2 executors per worker, but as Fabrizio says, don't try to change those settings on Glue; it's meant to be managed for you

asked a year ago · 1716 views
1 Answer

Hi,

AWS Glue is a serverless, fully managed service that comes pre-optimized. While the --conf parameter lets you change some Spark configuration, it should not be used unless documented, for example in the migration guides from Glue 2.0 to Glue 3.0 (or 4.0).

Most configuration changes, such as the one you are trying to pass, will not be taken into consideration. If you need to scale your job further, increase the number of workers, which in turn increases the number of executors.
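As a minimal sketch of scaling by workers rather than Spark conf, the snippet below builds a request you could pass to boto3's `glue.update_job` API. The job name, role ARN, and script location are placeholders, and the actual call is left commented out so nothing touches a real account.

```python
def scale_out(job_name: str, workers: int) -> dict:
    """Build an update_job request that adds workers (and hence executors)."""
    return {
        "JobName": job_name,
        "JobUpdate": {
            # Role and Command are required in JobUpdate; values are placeholders.
            "Role": "arn:aws:iam::123456789012:role/GlueJobRole",
            "Command": {"Name": "glueetl", "ScriptLocation": "s3://my-bucket/script.py"},
            "WorkerType": "G.2X",
            "NumberOfWorkers": workers,
        },
    }

request = scale_out("my-etl-job", 10)  # hypothetical job name
# import boto3
# boto3.client("glue").update_job(**request)  # run with real credentials
```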

If you really need the flexibility to choose the number of executors, the memory configuration, or even different instance types, I would suggest looking at EMR (on EKS or Serverless) to run your Spark code.
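On EMR Serverless, for instance, you can pass executor settings directly as spark-submit configuration. The sketch below assembles the `sparkSubmitParameters` string for boto3's `emr-serverless` `start_job_run` call; the entry point and application id are placeholders, and the call itself is commented out.

```python
# Executor settings that Glue ignores but EMR Serverless honors.
conf = {
    "spark.executor.instances": "10",
    "spark.executor.memory": "8g",
    "spark.executor.cores": "4",
}
spark_submit_parameters = " ".join(f"--conf {k}={v}" for k, v in conf.items())

job_driver = {
    "sparkSubmit": {
        "entryPoint": "s3://my-bucket/script.py",  # placeholder
        "sparkSubmitParameters": spark_submit_parameters,
    }
}
# import boto3
# boto3.client("emr-serverless").start_job_run(
#     applicationId="00placeholder",          # placeholder application id
#     executionRoleArn="arn:aws:iam::123456789012:role/EmrJobRole",  # placeholder
#     jobDriver=job_driver,
# )
print(spark_submit_parameters)
```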

hope this helps,

AWS EXPERT · answered a year ago
