AWS EMR [System Error] Fail to delete the temp folder

In AWS EMR, I encountered the following error message when running a PySpark job that runs successfully on my local machine.

[System Error] Fail to delete the temp folder

Is there a way to troubleshoot this? Is this a permissions issue with the temp folder that EMR uses, and is that folder shared across all jobs?

Asked 5 months ago · Viewed 235 times
1 Answer
Accepted Answer

Hello,

Yes, it looks like either a permissions issue or the temp files might be in use.

  1. Check whether you are opening the pyspark shell as the hadoop user in EMR, or try sudo pyspark as the hadoop user.
  2. Check whether spark-shell works without issue instead of the pyspark shell.
  3. Set spark.local.dir to a different local directory on the primary node to see if this fixes the issue (see the sketch after this list).
  4. Restart the Spark service (sudo systemctl restart spark-history-server.service).
  5. Set the log level to debug (rootLogger.level = debug) in the log4j file /etc/spark/conf/log4j2.properties and retry the pyspark shell (see the sketch after this list). This might give more insight into the issue.
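For step 3, a minimal sketch of pointing spark.local.dir at a different local directory on the primary node; /mnt1/spark-tmp is just an assumed example path, use whatever local volume has space on your cluster:

  # create an alternative scratch directory owned by the hadoop user (assumed path)
  sudo mkdir -p /mnt1/spark-tmp
  sudo chown hadoop:hadoop /mnt1/spark-tmp
  # launch the pyspark shell with spark.local.dir overridden for this session
  pyspark --conf spark.local.dir=/mnt1/spark-tmp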
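For step 5, one way to switch the root log level to debug, assuming the default rootLogger.level line is present in /etc/spark/conf/log4j2.properties:

  # set rootLogger.level = debug, keeping a backup of the original file
  sudo sed -i.bak 's/^rootLogger.level *=.*/rootLogger.level = debug/' /etc/spark/conf/log4j2.properties
  # then retry the pyspark shell and inspect the debug output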
AWS
Support Engineer
Answered 5 months ago
