_$folder$ while writing to S3 from a Glue PySpark job

0

Hi

I have written a few Glue jobs and have not faced this situation before, but it has suddenly started appearing for a new job I wrote. I am using the code below to write data to S3. The S3 path is "s3://...."

unionData_df.repartition(1).write.mode("overwrite").parquet(test_path)

In my test environment, when I first ran the Glue job, it created an empty file with the suffix _$folder$. The same happened in Prod. My other jobs do not have this problem.

Why is it creating this file? How can I avoid it? Any pointers on why it happens for this job but not the others? What should I be checking? Note: I think the file gets created the first time the prefix/folder is created. Some blog posts suggest changing the S3 path to s3a, but I am not sure if that is the right thing to do.

asked a year ago · 1,465 views
2 Answers
1
Accepted Answer

This is done by Hadoop when the target folder does not exist. The _$folder$ object is just a placeholder created by mkdir calls; the actual folder only comes into existence once the first file is written into it. The other jobs where this is not happening are probably writing to folders that already exist. These files should not cause a problem.
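If you still want to get rid of them, one option is to delete the markers after the job finishes. A minimal sketch, assuming boto3 is available in the job; the bucket and prefix below are placeholders, not values from the original post:

import boto3

# Placeholder values; point these at the bucket and parent prefix your job writes to.
bucket = "my-bucket"
prefix = "path/to/"

s3 = boto3.client("s3")

# Walk the keys under the prefix and delete any "_$folder$" placeholder objects.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        if obj["Key"].endswith("_$folder$"):
            s3.delete_object(Bucket=bucket, Key=obj["Key"])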

AWS
venky81
answered a year ago
AWS
EXPERT
reviewed a year ago
0

This happens because of the S3 path scheme you use when writing.

s3:// vs s3a://

s3:// will create the folder placeholder; s3a:// will not.

They both have their ups and downs, and it is generally recommended to stick with s3://. A sketch follows.
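For illustration, the scheme is just the start of the path string passed to the writer. This sketch reuses unionData_df from the question; the bucket name and output prefix are placeholders:

# Placeholder output paths; only the scheme differs.
path_s3 = "s3://my-bucket/output/"    # default scheme; may leave a _$folder$ marker when the folder is new
path_s3a = "s3a://my-bucket/output/"  # per the note above, does not create the marker

# Same write call as in the question, just with the chosen path scheme.
unionData_df.repartition(1).write.mode("overwrite").parquet(path_s3)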

answered a year ago
