Redshift SUPER data type not enough to store a JSON column from Postgres

0

We are encountering an issue with the SUPER data type. The JSON column in the Parquet file we receive has a maximum length of 192K characters. How should we handle this data? Are there alternative data types we can use to accommodate values of this size?

msve
asked a month ago · 235 views
2 answers
0

Redshift COPY, like Glue/Athena, cannot process an embedded JSON string within a Parquet column, no matter what data type you give that column in the Parquet schema. If the JSON string is over 65K characters, you will not be able to get it into a Redshift SUPER column, neither through a plain COPY nor through Spectrum. If you can't change how the files are written to S3, use a Lambda function to reprocess the Parquet into JSON in a different S3 folder, then ingest it from there.
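A minimal sketch of such a reprocessing Lambda, assuming an S3 put trigger on the Parquet prefix, that pyarrow is available to the function (for example through a layer), and purely hypothetical bucket, prefix, and column names (payload is a stand-in for the column holding the embedded JSON string):

```python
import json
from urllib.parse import unquote_plus

import boto3
import pyarrow.parquet as pq

# Hypothetical names -- adjust to your own layout.
TARGET_BUCKET = "my-raw-bucket"
TARGET_PREFIX = "reprocessed-json/"
JSON_COLUMN = "payload"  # Parquet column holding the embedded JSON string

s3 = boto3.client("s3")


def handler(event, context):
    # Triggered by an S3 put event on the Parquet folder.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])

        # Download the Parquet object to Lambda's /tmp scratch space.
        local_path = "/tmp/input.parquet"
        s3.download_file(bucket, key, local_path)

        rows = pq.read_table(local_path).to_pylist()

        # Emit newline-delimited JSON. Parsing the embedded JSON string here
        # means it lands in Redshift as a nested structure rather than one
        # long scalar string.
        lines = []
        for row in rows:
            value = row.get(JSON_COLUMN)
            if isinstance(value, str):
                row[JSON_COLUMN] = json.loads(value)
            lines.append(json.dumps(row, default=str))

        out_key = TARGET_PREFIX + key.rsplit("/", 1)[-1].replace(".parquet", ".json")
        s3.put_object(
            Bucket=TARGET_BUCKET,
            Key=out_key,
            Body="\n".join(lines).encode("utf-8"),
        )
```

From the reprocessed folder you could then COPY into a table with a SUPER column using FORMAT JSON (the noshred option keeps each document in a single SUPER value); verify the exact COPY options against the Redshift documentation for your case.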

answered 2 days ago
-1

Is the Parquet file the one you are ingesting? One option would be to keep the file as Parquet and read it via Redshift Spectrum: https://docs.aws.amazon.com/redshift/latest/dg/c-spectrum-external-tables.html. You could then query it joined with all the other data in Redshift without having to make alterations to the file itself.
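A minimal sketch of the Spectrum route, using the Redshift Data API from Python; every identifier (spectrum_schema, spectrum_db, the IAM role ARN, the cluster name, and the events table with its payload column) is a placeholder. Note the caveat from the other answer: external-table string columns are still capped at VARCHAR(65535), so a 192K embedded JSON string may not survive this path either.

```python
import boto3

# All identifiers below are placeholders -- replace with your own.
client = boto3.client("redshift-data")

CREATE_SCHEMA = """
CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum_schema
FROM DATA CATALOG
DATABASE 'spectrum_db'
IAM_ROLE 'arn:aws:iam::123456789012:role/MySpectrumRole'
CREATE EXTERNAL DATABASE IF NOT EXISTS;
"""

CREATE_TABLE = """
CREATE EXTERNAL TABLE spectrum_schema.events (
    id BIGINT,
    payload VARCHAR(65535)  -- assumed name for the large embedded-JSON column
)
STORED AS PARQUET
LOCATION 's3://my-raw-bucket/parquet/';
"""

for sql in (CREATE_SCHEMA, CREATE_TABLE):
    client.execute_statement(
        ClusterIdentifier="my-cluster",  # use WorkgroupName=... for Serverless
        Database="dev",
        DbUser="awsuser",
        Sql=sql,
    )
```

Once the external table exists, spectrum_schema.events can be joined with local Redshift tables in ordinary SQL, which is the "query it joined with all the other data" part of this answer.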

AWS
evaleah
answered a month ago
EXPERT
verified a month ago
