1 Answer
Hello,
Assuming your file employees.csv is present in the local path C:/Docker/jupyter_workspace, I can see that you expect it to be mounted at /home/glue_user/workspace/jupyter_workspace/ inside the Docker container using the command below.
docker run -it -p 8888:8888 -p 4040:4040 -e DISABLE_SSL="true" -v C:/Docker/jupyter_workspace:/home/glue_user/workspace/jupyter_workspace/ --name glue_jupyter amazon/aws-glue-libs:glue_libs_3.0.0_image_01 /home/glue_user/jupyter/jupyter_start.sh
However, when you try to read the file with something like the following:
df = spark.read.csv("employees.csv")
As per the error message, Spark appears to be looking for the file in /home/glue_user/workspace/ instead.
So, could you try using the full path of the file, or something like the following?
df = spark.read.csv("jupyter_workspace/employees.csv")
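To make the path resolution concrete, here is a small sketch (plain Python, no Spark required) of why the bare filename fails and which paths work. The container paths are taken from the docker run command above; the header=True argument in the commented Spark call is just an illustrative option, not something from the original question.

```python
from pathlib import Path

# Inside the container, the notebook's working directory is /home/glue_user/workspace/,
# while the host folder is mounted one level below it (see the docker run command above).
workdir = Path("/home/glue_user/workspace")
mounted = workdir / "jupyter_workspace"

# A bare relative name like "employees.csv" resolves against the working directory,
# which is why Spark reports the file as missing there:
wrong = workdir / "employees.csv"
print(wrong)  # /home/glue_user/workspace/employees.csv

# Either a path relative to the working directory, or the absolute container path, works:
good_relative = "jupyter_workspace/employees.csv"
good_absolute = str(mounted / "employees.csv")
print(good_absolute)  # /home/glue_user/workspace/jupyter_workspace/employees.csv

# In the notebook you would then call (requires the Glue container's SparkSession):
# df = spark.read.csv(good_absolute, header=True)
```

The key point is that the Windows path C:/Docker/jupyter_workspace only exists on the host; inside the container you must always use the mount target path.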
Hello Chiranjeevi, thanks for the reply. Yes, I resolved it the same way you mentioned. It was my mistake: even after mounting my directory into the working directory, I was passing the Windows path rather than the path inside the container.