2 Answers
2
You can check the worker nodes to make sure they have enough resources to run the job. The warning message in the logs suggests that the worker nodes may not have enough resources to execute it; see the sketch below for one way to inspect the job's configured capacity.
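A minimal sketch of checking the job's worker configuration with boto3 (the job name here is hypothetical; substitute your own):

```python
import boto3

glue = boto3.client("glue")

# "my-glue-job" is a placeholder job name.
job = glue.get_job(JobName="my-glue-job")["Job"]

# WorkerType and NumberOfWorkers (or MaxCapacity on older job types)
# determine how many executors the job can actually get.
print(job.get("WorkerType"), job.get("NumberOfWorkers"), job.get("MaxCapacity"))
```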
answered a year ago
0
The driver logs should tell you a bit about what the driver is doing; the warning may not be the actual cause.
It would be good to enable Spark UI logs and view them in a Spark History Server to check what the driver is doing. Has the driver started a Spark job/stage but isn't getting resources?
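For example, you can turn on Spark UI event logging when starting a run by passing the `--enable-spark-ui` and `--spark-event-logs-path` job parameters (the job name and S3 path below are placeholders, just a sketch of the idea):

```python
import boto3

glue = boto3.client("glue")

# Persist Spark event logs to S3 so they can be viewed in a History Server.
glue.start_job_run(
    JobName="my-glue-job",  # placeholder job name
    Arguments={
        "--enable-spark-ui": "true",
        "--spark-event-logs-path": "s3://my-bucket/sparkui-logs/",  # placeholder path
    },
)
```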
The most likely cause is that the job has multiple stages and the first stage is reading the data with a single task; the Spark UI will show you that. Also check this: https://docs.aws.amazon.com/glue/latest/dg/run-jdbc-parallel-read-job.html
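A minimal sketch of the parallel JDBC read described in that doc, using `hashfield`/`hashpartitions` to split the read across multiple tasks (the database, table, and column names are hypothetical):

```python
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# hashfield picks a column to partition on; hashpartitions controls how many
# parallel read tasks Glue uses instead of a single-task read.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="my_database",        # placeholder catalog database
    table_name="my_jdbc_table",    # placeholder catalog table
    additional_options={"hashfield": "customer_id", "hashpartitions": "10"},
)
print(dyf.count())
```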