Redshift SUPER datatype not enough to store JSON data type column from Postgres


We are encountering an issue where we're using the SUPER datatype. The column in the Parquet file we receive has a maximum length of 192K characters. How should we handle this data? Are there alternative datatypes that can accommodate values this large?

msve
Asked 1 month ago · 235 views
2 Answers

Redshift COPY, as well as Glue/Athena, cannot process an embedded JSON string inside a Parquet column, no matter what data type you assign that column in the Parquet schema. If the JSON string is over 65K characters, you will not be able to get it into a Redshift SUPER column, neither through a plain COPY nor through Spectrum. If you can't change the way the files are being written to S3, use a Lambda to reprocess the Parquet into JSON in a different S3 folder, then ingest it from there (see the sketch below).
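A minimal sketch of that Lambda reprocessing idea, assuming an S3 ObjectCreated trigger and the AWS SDK for pandas (awswrangler) Lambda layer; the bucket, prefix, and destination names here are placeholders, not anything from the original question:

```python
# Hypothetical Lambda: convert an incoming Parquet object to newline-delimited
# JSON so the oversized string column can later be COPY'd into a SUPER column.
# Bucket names and prefixes are placeholders.
import urllib.parse

import awswrangler as wr  # AWS SDK for pandas, available as a managed Lambda layer

DEST_PREFIX = "s3://my-bucket/reprocessed-json/"  # assumed target folder for COPY


def handler(event, context):
    # Triggered by an S3 "ObjectCreated" event on the raw Parquet folder.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])

    # Read the Parquet file; the embedded JSON stays a plain string column here.
    df = wr.s3.read_parquet(path=f"s3://{bucket}/{key}")

    # Write newline-delimited JSON to the reprocessed folder.
    out_path = f"{DEST_PREFIX}{key.rsplit('/', 1)[-1]}.json"
    wr.s3.to_json(df, path=out_path, orient="records", lines=True)

    return {"written": out_path}
```

From the reprocessed folder, a COPY with a JSON format option (for example FORMAT JSON 'auto' into a table whose wide column is SUPER) should then be able to load the rows; the exact COPY options depend on the rest of the table's schema.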

Answered 2 days ago

Is the Parquet file the one you are ingesting? One option would be to keep the file as Parquet and read it via Redshift Spectrum: https://docs.aws.amazon.com/redshift/latest/dg/c-spectrum-external-tables.html. You could then query it joined with all the other data in Redshift without having to make alterations to the file itself (see the sketch below).
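A rough sketch of the Spectrum route, using the redshift_connector driver to run the DDL and a join; the cluster endpoint, credentials, IAM role ARN, Glue database, S3 path, and all table/column names (including the local dim_customer table) are hypothetical placeholders:

```python
# Hypothetical sketch: expose the Parquet files as a Spectrum external table
# and query them alongside local Redshift tables, leaving the files untouched.
import redshift_connector

conn = redshift_connector.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    database="dev",
    user="awsuser",
    password="...",
)
conn.autocommit = True  # CREATE EXTERNAL TABLE cannot run inside a transaction
cur = conn.cursor()

# External schema backed by the Glue Data Catalog.
cur.execute("""
    CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum_ext
    FROM DATA CATALOG
    DATABASE 'my_glue_db'
    IAM_ROLE 'arn:aws:iam::111122223333:role/MySpectrumRole'
    CREATE EXTERNAL DATABASE IF NOT EXISTS
""")

# External table pointing at the Parquet folder; the wide JSON column is
# declared as a plain varchar here, so length limits still apply.
cur.execute("""
    CREATE EXTERNAL TABLE spectrum_ext.raw_events (
        event_id   bigint,
        payload    varchar(65535)
    )
    STORED AS PARQUET
    LOCATION 's3://my-bucket/incoming-parquet/'
""")

# Query the external data joined with a local Redshift table, no file changes needed.
cur.execute("""
    SELECT e.event_id, json_parse(e.payload) AS payload_super, d.customer_name
    FROM spectrum_ext.raw_events e
    JOIN dim_customer d ON d.customer_id = e.event_id
""")
print(cur.fetchmany(5))
```

Note that the external table's varchar length cap still applies when the embedded JSON exceeds 65K characters, as the other answer points out.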

AWS
evaleah
Answered 1 month ago
Expert
Reviewed 1 month ago
