1 Answer
Hello,

Yes, it looks like this is either a permission issue or the tmp files are in use.

- Check whether the pyspark shell opens when you are logged in as the hadoop user on the EMR primary node, or try `sudo pyspark` as the hadoop user.
- Check whether `spark-shell` works without issue instead of the pyspark shell.
- Point `spark.local.dir` to a different local directory on the primary node to see if this fixes the issue.
- Restart the Spark service (`sudo systemctl restart spark-history-server.service`).
- Set the log level to debug (`rootLogger.level = debug` in the log4j file `/etc/spark/conf/log4j2.properties`) and retry the pyspark shell. This might give more insight into the issue.
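The steps above can be sketched as shell commands to run on the EMR primary node. This is a minimal sketch: `spark.local.dir` and `rootLogger.level` are standard Spark/Log4j2 settings, but the directory `/mnt/spark-local` is only an example path chosen for illustration, not something required by EMR.

```shell
# 1. Try the Scala shell first, to see whether the failure is pyspark-specific.
spark-shell

# 2. Point spark.local.dir at a different writable local directory.
#    /mnt/spark-local is an example path; any local directory the hadoop
#    user can write to will do.
sudo mkdir -p /mnt/spark-local
sudo chown hadoop:hadoop /mnt/spark-local
pyspark --conf spark.local.dir=/mnt/spark-local

# 3. Restart the Spark history server.
sudo systemctl restart spark-history-server.service

# 4. Raise the root log level to debug in the Log4j2 config, then retry.
sudo sed -i 's/^rootLogger.level.*/rootLogger.level = debug/' \
    /etc/spark/conf/log4j2.properties
pyspark
```

With the debug log level set, the pyspark startup output should show which directory or file operation is failing, which narrows the problem down to permissions or files in use.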
Related content
- Asked 6 years ago
- AWS OFFICIAL | Updated 1 year ago