Writing into Redshift with Glue 4.0 fails due to string lengths

I'm writing into Redshift and realized that Glue 4.0 is probably optimizing the column sizes. Summary of the error:

py4j.protocol.Py4JJavaError: An error occurred while calling o236.pyWriteDynamicFrame.
: java.sql.SQLException: 
Error (code 1204) while loading data into Redshift: "String length exceeds DDL length"
Table name: "PUBLIC"."table_name"
Column name: column_a
Column type: varchar(256)

In previous Glue versions, string columns were always created as varchar(65535), but with Glue 4.0 my tables are created with varchar(256), and writing into some columns fails with this error. Will this also happen with other data types? How can I solve this within Glue 4.0?
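
For reference, a quick way to see which columns are affected is to check the longest value per string column on the Spark DataFrame before the write. The sketch below is only illustrative: it uses a small sample DataFrame in place of the job's real DynamicFrame output (dyf.toDF()), and the column names are placeholders.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Stand-in for dyf.toDF() in the Glue job; column_a holds a 1000-character string.
df = spark.createDataFrame([("x" * 1000, 1)], ["column_a", "column_b"])

# Longest value per string column; anything above 256 will not fit into a
# varchar(256) column in Redshift.
string_cols = [f.name for f in df.schema.fields if f.dataType.simpleString() == "string"]
df.select([F.max(F.length(F.col(c))).alias(c) for c in string_cols]).show()

Note that Redshift varchar lengths are measured in bytes, so multi-byte characters count as more than one; if that matters for your data, the byte count is what needs to fit in the DDL length.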

Asked 1 year ago · Viewed 910 times
1 Answer

The closest answer I've found concerns the new Redshift driver for Spark, under 'Configuring the maximum size of string columns': https://docs.databricks.com/external-data/amazon-redshift.html#language-python

But that documentation is written for plain Spark. How can I translate it to a Glue job?
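
I haven't verified this end to end, but one way to carry that Spark setting into a Glue 4.0 job is to set the connector's maxlength column metadata on the Spark DataFrame and then convert back to a DynamicFrame before writing. In the sketch below the connection name, database, table, and S3 temp dir are placeholders, and the sample DataFrame stands in for the job's real data.

from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame

sc = SparkContext.getOrCreate()
glueContext = GlueContext(sc)
spark = glueContext.spark_session

# Example data with a value longer than the default varchar(256).
df = spark.createDataFrame([("x" * 1000,)], ["column_a"])

# Attach the connector's "maxlength" column metadata so the table is created
# with varchar(65535) for this column instead of varchar(256).
df = df.withColumn(
    "column_a",
    df["column_a"].alias("column_a", metadata={"maxlength": 65535}),
)

# Convert back to a DynamicFrame and write through the usual Glue sink.
# The connection name, database, table, and temp dir below are placeholders.
dyf = DynamicFrame.fromDF(df, glueContext, "dyf")
glueContext.write_dynamic_frame.from_jdbc_conf(
    frame=dyf,
    catalog_connection="my-redshift-connection",
    connection_options={"dbtable": "public.table_name", "database": "dev"},
    redshift_tmp_dir="s3://my-temp-bucket/redshift-temp/",
)

If the metadata does not survive the conversion back to a DynamicFrame in your job, writing the Spark DataFrame directly with the Redshift connector (as shown on the Databricks page) is a fallback. Creating the target table yourself with the desired varchar sizes also avoids relying on the auto-generated DDL.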

Answered 1 year ago
