Writing into Redshift with Glue 4.0 fails due to string lengths


I'm writing into Redshift and realized that Glue 4.0 is probably optimizing the column sizes. Summary of the error:

py4j.protocol.Py4JJavaError: An error occurred while calling o236.pyWriteDynamicFrame.
: java.sql.SQLException: 
Error (code 1204) while loading data into Redshift: "String length exceeds DDL length"
Table name: "PUBLIC"."table_name"
Column name: column_a
Column type: varchar(256)

In previous Glue versions, string columns were always created as varchar(65535), but now my tables are created with varchar(256), and writes into some columns fail with this error. Will this also happen with other data types? How can I solve this within Glue 4.0?
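
For reference, this is roughly how the write is done (a sketch only; the connection name, database, table, and S3 temp path below are placeholders, not my actual job values):

# Rough sketch of the write that hits the error; names and paths are placeholders.
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

dyf = ...  # the DynamicFrame built earlier in the job

glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=dyf,
    catalog_connection="my-redshift-connection",
    connection_options={"dbtable": "public.table_name", "database": "dev"},
    redshift_tmp_dir="s3://my-temp-bucket/redshift-tmp/",
)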

Asked 1 year ago · 910 views
1 Answer

The closest answer I've found concerns the new Redshift driver for Spark, under 'Configuring the maximum size of string columns': https://docs.databricks.com/external-data/amazon-redshift.html#language-python

But this is with respect to Spark. How can I translate it to Glue?
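
One way it might translate to a Glue 4.0 job (a sketch, not something I've verified end to end): convert the DynamicFrame to a Spark DataFrame, attach the 'maxlength' column metadata described in that doc to every string column, then write as usual. Whether the metadata survives the conversion back to a DynamicFrame is an assumption to verify; if it does not, the fallback would be to write the DataFrame directly with the Spark Redshift data source as in the Databricks doc. Connection name, table, and S3 path below are placeholders.

# Sketch only: widen every string column via the "maxlength" column metadata
# before writing, so the connector creates varchar(65535) instead of varchar(256).
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from pyspark.context import SparkContext
from pyspark.sql.functions import col
from pyspark.sql.types import StringType

glue_context = GlueContext(SparkContext.getOrCreate())

dyf = ...  # the DynamicFrame you are about to write
df = dyf.toDF()

# Attach the metadata to every string column.
for field in df.schema.fields:
    if isinstance(field.dataType, StringType):
        df = df.withColumn(
            field.name,
            col(field.name).alias(field.name, metadata={"maxlength": 65535}),
        )

# Assumption: the column metadata survives the round-trip back to a DynamicFrame.
dyf_wide = DynamicFrame.fromDF(df, glue_context, "dyf_wide")
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=dyf_wide,
    catalog_connection="my-redshift-connection",
    connection_options={"dbtable": "public.table_name", "database": "dev"},
    redshift_tmp_dir="s3://my-temp-bucket/redshift-tmp/",
)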

Answered 1 year ago
