AWS Glue error writing > 50,000 records to Redshift


Trying to write records from an S3 text file to Redshift. The job runs when the record count is around 10,000, but it runs long and the connection eventually times out when writing the entire file (50K records).

df.write.format("jdbc").
option("url", redshiftclusterurl).
option("dbtable", dbtable).
option("user", username).
option("password", password).
option("connectTimeout", "120000").
mode('append').save()

AWS
Asked 9 months ago · Viewed 188 times

1 Answer

It might be that you have too many partitions, so Spark is trying to open too many connections, which Redshift might not accept. You can use the numPartitions option to control this parallelism (or just repartition the data); see the sketch below.
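For example, a minimal sketch based on your snippet (the partition count of 10 is an assumption; tune it against your cluster's connection limits):

# Cap write parallelism: Spark coalesces to at most numPartitions before
# writing, so Redshift sees a bounded number of JDBC connections.
df.write.format("jdbc") \
    .option("url", redshiftclusterurl) \
    .option("dbtable", dbtable) \
    .option("user", username) \
    .option("password", password) \
    .option("numPartitions", "10") \
    .mode("append") \
    .save()

Alternatively, df.repartition(10).write... achieves the same effect by reshaping the DataFrame itself before the write.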

Please note that using JDBC on Redshift becomes inefficient as the data grows; the Glue connector will scale much better because it stages the data in S3 and loads it with a COPY command (you can convert your DataFrame to a DynamicFrame); see the sketch below: https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-connect-redshift-home.html#aws-glue-programming-etl-connect-redshift-write
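A rough sketch of that route, assuming a Glue job where glueContext is already initialized (the connection name and S3 temp path below are hypothetical placeholders):

from awsglue.dynamicframe import DynamicFrame

# Convert the Spark DataFrame into a Glue DynamicFrame.
dyf = DynamicFrame.fromDF(df, glueContext, "dyf")

# Write through the Glue Redshift connector; it stages rows in S3 and
# issues a COPY rather than row-by-row JDBC inserts.
glueContext.write_dynamic_frame.from_options(
    frame=dyf,
    connection_type="redshift",
    connection_options={
        "dbtable": dbtable,
        "connectionName": "my-redshift-connection",  # hypothetical Glue connection
        "redshiftTmpDir": "s3://my-temp-bucket/redshift-tmp/",  # hypothetical path
        "useConnectionProperties": "true",
    },
)

Credentials then come from the Glue connection rather than being passed inline, and the S3 temp directory must be writable by the job's IAM role.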

AWS EXPERT
Answered 9 months ago
