Unable to create spark context in jupyterhub

0

When creating the SparkContext, it keeps failing with a fatal error even though I have restarted the kernel multiple times. As I am a data analyst, I am not sure what else to check in terms of resources.

Sri
Asked 8 months ago · 214 views
1 Answer
4
Accepted Answer

Hello,

Since you are getting a fatal error when creating the SparkContext, the issue is most likely in acquiring resources such as YARN memory or CPU. Please check the following (sketches of both checks are included after this list):

- Check the current memory and CPU availability in CloudWatch and in the YARN ResourceManager UI/logs. If applications that are already running have taken over the cluster's resources, you may not be able to start a new application from the Jupyter notebook.
- Also check HDFS utilization and the MR healthy node status in the CloudWatch metrics. If no nodes are available to take the containers, a new application may fail to initiate.
- Based on the resource checks above, add more nodes, use auto scaling, or add additional EBS volumes if you find a resource constraint on the EMR nodes.

Here is the EMR best practice for reference.
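To make the first check concrete, here is a minimal boto3 sketch that reads the `YARNMemoryAvailablePercentage` metric for an EMR cluster from CloudWatch. The cluster id `j-XXXXXXXXXXXXX` is a placeholder, and the one-hour window and five-minute period are illustrative assumptions:

```python
# Sketch: check recent YARN memory headroom for an EMR cluster via CloudWatch.
# Assumes AWS credentials/region are already configured for boto3.
import datetime

import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.datetime.utcnow()

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/ElasticMapReduce",
    MetricName="YARNMemoryAvailablePercentage",
    Dimensions=[{"Name": "JobFlowId", "Value": "j-XXXXXXXXXXXXX"}],  # placeholder cluster id
    StartTime=now - datetime.timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)

# Values near 0 mean running applications have consumed the YARN memory,
# which would explain a new SparkContext failing to start.
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f'{point["Average"]:.1f}% YARN memory available')
```

Once you have confirmed there is headroom, a SparkContext with small, explicit resource requests is more likely to be accepted by YARN on a busy cluster. This is a sketch for a plain PySpark kernel; the app name and the memory/core/instance values are assumptions to tune, not recommendations:

```python
# Sketch: request a deliberately small YARN footprint so the application
# can start even when other jobs hold most of the cluster.
from pyspark import SparkConf, SparkContext

conf = (
    SparkConf()
    .setAppName("jupyterhub-smoke-test")   # hypothetical app name
    .setMaster("yarn")
    .set("spark.driver.memory", "1g")
    .set("spark.executor.memory", "2g")    # keep below the free YARN memory
    .set("spark.executor.cores", "1")
    .set("spark.executor.instances", "2")  # small footprint to start with
)

sc = SparkContext(conf=conf)
print(sc.applicationId)  # if this prints, YARN accepted the application
```

If the notebook uses Sparkmagic/Livy rather than a local PySpark kernel, the equivalent settings go in a `%%configure` cell instead.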

AWS
Support Engineer
Answered 8 months ago
  • Thanks a ton!!! This is helpful.
