AWS EMR [System Error] Fail to delete the temp folder


In AWS EMR, I encountered the following error message when running a PySpark job that runs successfully on my local machine.

[System Error] Fail to delete the temp folder

Is there a way to troubleshoot this? Is this a permissions issue with the temp folder that EMR uses, which is shared across all jobs?

Asked 5 months ago · 235 views
1 Answer
Accepted Answer

Hello,

Yes, it looks like either a permissions issue, or the temp files might be in use by another process.
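
As a quick first check, a short probe like the following (a sketch, assuming the default /tmp scratch location; substitute your spark.local.dir if it points elsewhere) confirms whether the current user can both create and delete files there:

    import tempfile

    # Quick probe: can the current user create and then delete a file in
    # the directory Spark uses for scratch space? /tmp is an assumption;
    # substitute your spark.local.dir if it is set elsewhere.
    try:
        with tempfile.NamedTemporaryFile(dir="/tmp") as f:
            f.write(b"probe")
        print("temp dir is writable and deletable")
    except OSError as e:
        print(f"temp dir problem: {e}")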

  1. Check whether you are opening the pyspark shell as the hadoop user on EMR, or try sudo pyspark as the hadoop user.
  2. Check whether spark-shell works without issue instead of the pyspark shell.
  3. Point spark.local.dir to a different local directory on the primary node to see if this fixes the issue (see the sketch after this list).
  4. Restart the Spark service (sudo systemctl restart spark-history-server.service).
  5. Set the log level to debug (rootLogger.level = debug) in the Log4j configuration file /etc/spark/conf/log4j2.properties and retry the pyspark shell (see the snippet after this list). This might give more insight into the issue.
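
For step 3, here is a minimal sketch of pointing spark.local.dir at a different directory from PySpark. The path /mnt/spark-tmp is a hypothetical example; use any local directory on the primary node that the hadoop user can write to. Note that under YARN the executors take their scratch directories from yarn.nodemanager.local-dirs, so this setting mainly affects the driver:

    from pyspark.sql import SparkSession

    # /mnt/spark-tmp is a hypothetical path; use any local directory
    # the hadoop user can write to.
    spark = (
        SparkSession.builder
        .appName("local-dir-test")
        .config("spark.local.dir", "/mnt/spark-tmp")
        .getOrCreate()
    )

    # Run a small job so Spark actually creates scratch files.
    print(spark.range(1000).selectExpr("sum(id)").collect())
    spark.stop()

The same setting can also be passed when launching the shell, e.g. pyspark --conf spark.local.dir=/mnt/spark-tmp.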
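
For step 5, as an alternative to editing /etc/spark/conf/log4j2.properties, the standard SparkContext API can raise the verbosity for the current session only:

    # Session-only debug logging; no file edit or service restart needed.
    spark.sparkContext.setLogLevel("DEBUG")

The file-based change (rootLogger.level = debug) persists across sessions and also captures driver startup logs, which a session-level call made after startup cannot.
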
AWS
Support Engineer
Answered 5 months ago
