Installing additional libraries to PySpark kernel


JupyterHub on Amazon EMR comes with a default PySpark kernel. How can I install additional libraries on this kernel (e.g. numpy)? I've tried following the instructions at https://aws.amazon.com/blogs/big-data/install-python-libraries-on-a-running-cluster-with-emr-notebooks/. However, I cannot install simple libraries like pandas:

sc.install_pypi_package("pandas==0.25.1")

An error was encountered:
No module named 'six'
Traceback (most recent call last):
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/context.py", line 1108, in install_pypi_package
    pypi_package = self._validate_package(pypi_package)
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/context.py", line 1173, in _validate_package
    import six
ModuleNotFoundError: No module named 'six'

And I get the same error if I try to install six: sc.install_pypi_package("six")
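Before retrying, it can help to confirm from a notebook cell that the interpreter backing the kernel really lacks six, since sc.install_pypi_package imports it internally. This is a generic standard-library sketch, not EMR-specific:

```python
# Check whether a module is importable in the interpreter backing
# this kernel, without actually importing it.
import importlib.util

def has_module(name):
    # find_spec returns None when a top-level module cannot be found.
    return importlib.util.find_spec(name) is not None

print(has_module("six"))
```

If this prints False, installing libraries at the cluster level (as the answers below describe) is needed before sc.install_pypi_package can work.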

Asked a year ago · 1,686 views

2 Answers

One possible cause of this issue is that the PySpark kernel does not have access to the required Python libraries. To install additional libraries on the PySpark kernel, you need to ensure that they are available on the EMR cluster. Here are some steps you can take:

1. Install the libraries on the EMR cluster using pip. For example:

!pip install pandas==0.25.1

2. Make sure the libraries are available on all the worker nodes in the EMR cluster. You can do this by pointing the PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON variables in the spark-env.sh file on the worker nodes at a Python interpreter that has the libraries installed.

3. Restart the PySpark kernel in JupyterHub to make the libraries available to the PySpark kernel.
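The steps above can be sketched as follows. The interpreter path and config location are typical EMR defaults and are assumptions to verify on your cluster; the commands would need to run on every node (e.g. via a bootstrap action or SSH). six is installed explicitly because the kernel reported it missing:

```shell
# Install the libraries into the system Python 3 on each node
# (versions shown match the question; adjust as needed):
sudo python3 -m pip install six pandas==0.25.1

# In /etc/spark/conf/spark-env.sh, point Spark at that interpreter:
export PYSPARK_PYTHON=/usr/bin/python3
export PYSPARK_DRIVER_PYTHON=/usr/bin/python3
```

With six available cluster-wide, sc.install_pypi_package should also stop failing on the missing-module import.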

SeanSi
Answered a year ago

For a complete guide on how to install additional kernels and libraries on EMR JupyterHub, please read the documentation page here.

AWS
EXPERT
Answered a year ago
