Redshift SUPER data type not enough to store a JSON column from Postgres

0

We are encountering an issue where we're using the SUPER data type. The JSON column in the Parquet file we receive has a maximum length of 192 KB. How should we handle this data? Are there alternative data types we can use to accommodate such large values?

msve
asked 2 months ago · 238 views
2 Answers
0

Redshift COPY, as well as Glue/Athena, cannot process an embedded JSON string within a Parquet column, no matter what data type you give that column in the Parquet schema. If the JSON string is over 65K characters, you will not be able to get it into a Redshift SUPER column, neither through a vanilla COPY nor through Spectrum. If you can't change the way the files are being written to S3, use a Lambda to reprocess the Parquet into JSON in a different S3 folder, then ingest it from there (a sketch follows below).
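
A minimal sketch of that Lambda reprocessing step, assuming the function is triggered by S3 ObjectCreated events and packaged with a pandas/pyarrow layer; the output prefix and the `payload` column name are placeholders, not anything specified in this thread:

```python
import json
import os
import urllib.parse

import boto3
import pandas as pd  # needs a Lambda layer bundling pandas + pyarrow

s3 = boto3.client("s3")

OUTPUT_PREFIX = "reprocessed-json/"   # hypothetical destination folder
JSON_COLUMN = "payload"               # hypothetical name of the embedded-JSON column


def handler(event, context):
    # Fired by an S3 ObjectCreated notification for each new Parquet file.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

    # Pull the Parquet file into Lambda's scratch space and read it.
    local_path = os.path.join("/tmp", os.path.basename(key))
    s3.download_file(bucket, key, local_path)
    df = pd.read_parquet(local_path)

    # Parse the embedded JSON string so it is re-emitted as nested JSON
    # rather than as an escaped string value.
    df[JSON_COLUMN] = df[JSON_COLUMN].map(
        lambda v: json.loads(v) if isinstance(v, str) else v
    )

    # Write the rows back out as JSON Lines (one object per row).
    body = df.to_json(orient="records", lines=True)
    out_key = OUTPUT_PREFIX + os.path.basename(key).replace(".parquet", ".json")
    s3.put_object(Bucket=bucket, Key=out_key, Body=body.encode("utf-8"))
    return {"written": f"s3://{bucket}/{out_key}"}
```

You would then point the Redshift COPY at the reprocessed JSON prefix with FORMAT JSON instead of FORMAT PARQUET so the large value can land in the SUPER column.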

answered 3 days ago
-1

Is the Parquet file the one you are ingesting? One option would be to keep the file as Parquet and read it via Redshift Spectrum: https://docs.aws.amazon.com/redshift/latest/dg/c-spectrum-external-tables.html. You could then query it joined with all the other data in Redshift and not have to make alterations to the file itself.
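
For reference, a hedged sketch of creating such an external table through the Redshift Data API (boto3's `redshift-data` client); the cluster, database, schema, role ARN, column names, and S3 location are all placeholders. Note that the external column is still declared as VARCHAR, so the 65K-character ceiling mentioned in the other answer applies here too.

```python
import boto3

client = boto3.client("redshift-data")

# Placeholder identifiers -- substitute your own cluster, database, and user.
CLUSTER = "my-cluster"
DATABASE = "dev"
DB_USER = "awsuser"

# An external schema backed by the Glue Data Catalog, plus an external
# table over the Parquet files so Spectrum can query them in place.
DDL = """
CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum_schema
FROM DATA CATALOG DATABASE 'spectrum_db'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-spectrum-role'
CREATE EXTERNAL DATABASE IF NOT EXISTS;

CREATE EXTERNAL TABLE spectrum_schema.events (
    id BIGINT,
    payload VARCHAR(65535)
)
STORED AS PARQUET
LOCATION 's3://my-bucket/parquet-prefix/';
"""

# The Data API runs one statement per call, so split the DDL on semicolons.
for statement in (s.strip() for s in DDL.split(";")):
    if statement:
        client.execute_statement(
            ClusterIdentifier=CLUSTER,
            Database=DATABASE,
            DbUser=DB_USER,
            Sql=statement,
        )
```

Queries can then join spectrum_schema.events with local Redshift tables without touching the source files.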

AWS
evaleah
answered a month ago
EXPERT
reviewed a month ago
