Same Glue job runs differently when using the sample method


I have a CSV file with 5 million rows in S3 and ran a crawler on it. My Glue job applies custom transformations. The issue is that when I use create_dynamic_frame_from_catalog(), the job runs very slowly, whereas when I use create_sample_dynamic_frame_from_catalog() with the sample limit set to 5 million, it runs much faster. Why is this happening? I want to speed up the job without using the sample method.

1 Answer

Hello. Based on the documentation, which parameters did you configure for the create_sample_dynamic_frame_from_catalog() function? num is the parameter that defines the maximum number of records to be fetched.
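
If it helps, here is a minimal sketch of the two read calls side by side. The database and table names are placeholders for whatever your crawler registered, and the surrounding boilerplate is just the standard Glue script setup:

```python
import sys
from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())

# Full read: loads the entire table registered by the crawler.
full_dyf = glue_context.create_dynamic_frame_from_catalog(
    database="my_database",         # placeholder
    table_name="my_csv_table",      # placeholder
    transformation_ctx="full_read",
)

# Sampled read: num caps the maximum number of records fetched.
sample_dyf = glue_context.create_sample_dynamic_frame_from_catalog(
    database="my_database",         # placeholder
    table_name="my_csv_table",      # placeholder
    num=5000000,                    # maximum records to fetch
    transformation_ctx="sample_read",
)
```

Note that create_sample_dynamic_frame_from_catalog() is only available on recent Glue versions, so check the GlueVersion of your job if the call is not recognized.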

Overall, regarding job performance, there are a couple of strategies, such as changing the WorkerType and NumberOfWorkers parameters on the job (see the sketch below).
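
You can change these in the console under the job's details, or programmatically. As a rough sketch with boto3, assuming a placeholder job name and purely illustrative worker type and count values:

```python
import boto3

glue = boto3.client("glue")

# Fetch the current definition so the update preserves the required fields.
job = glue.get_job(JobName="my-glue-job")["Job"]  # placeholder job name

glue.update_job(
    JobName="my-glue-job",
    JobUpdate={
        "Role": job["Role"],
        "Command": job["Command"],
        "GlueVersion": job.get("GlueVersion", "3.0"),
        "WorkerType": "G.2X",      # illustrative: larger workers
        "NumberOfWorkers": 20,     # illustrative: more parallelism
    },
)
```

Scaling out only helps if the input is splittable enough to keep all workers busy; the blog post below covers partitioning strategies for that.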

This blog post is also handy: https://aws.amazon.com/blogs/big-data/best-practices-to-scale-apache-spark-jobs-and-partition-data-with-aws-glue/

AWS
answered 2 years ago
AWS
EXPERT
Chris_G
reviewed 2 years ago
