Memory error loading a medium-sized table with Spectrum into Redshift Serverless

Hi there, I need help. We are loading tables from the Glue Data Catalog into Redshift Serverless with 8 RPUs. Everything works fine with small tables, but when we try a table with 1.5 million records (about 300 MB in Parquet), Redshift throws a memory error:

ERROR: Insufficient memory to run query: consider increasing compute size or re-write the query

We are already filtering on partitions, so we cannot rewrite the query; the query itself is fine. If we increase from 8 to 16 RPUs, it works perfectly.
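For reference, this is roughly what our load looks like (schema, table, and column names below are placeholders, not our real objects):

```sql
-- External schema mapped to the Glue Data Catalog database (placeholder names).
CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum_src
FROM DATA CATALOG
DATABASE 'our_glue_db'
IAM_ROLE default;

-- Load only the partition we need into a local Redshift table.
INSERT INTO staging.events_local
SELECT event_id, event_ts, payload
FROM spectrum_src.events_parquet
WHERE partition_date = '2024-01-15';  -- partition filter, so only part of the data is scanned
```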

So we don't understand why loading a 300 MB table with 1.5 million rows into an 8 RPU (8 × 16 GB RAM) workgroup fails, while loading 300 million rows (around 40 GB) from the TPC/TICKIT demo tables on S3 used in the workshops works fine.

We read that on provisioned (non-Serverless) Redshift we can control how much RAM a slice can use, but we can't figure out how to manage the memory of the 8 RPU Serverless workgroup.
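For example, on a provisioned cluster a heavy statement can be given more of its WLM queue's memory by taking extra slots, roughly like this (a sketch only; as far as we can tell there is no equivalent knob in Serverless):

```sql
-- Provisioned Redshift only: claim 3 of the queue's WLM slots (and their memory)
-- for this session before running the heavy load.
SET wlm_query_slot_count TO 3;

INSERT INTO staging.events_local
SELECT event_id, event_ts, payload
FROM spectrum_src.events_parquet
WHERE partition_date = '2024-01-15';

-- Return the extra slots to the queue.
RESET wlm_query_slot_count;
```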

Does Spectrum have limits in Redshift Serverless? Are there any settings we can tune?
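In case it is useful, this is roughly how we have been pulling the error details out of the Serverless monitoring views (a sketch; we are assuming SYS_QUERY_HISTORY exposes the error text this way):

```sql
-- Recent failed queries with the reported error message.
SELECT query_id,
       start_time,
       elapsed_time,
       error_message,
       LEFT(query_text, 120) AS query_snippet
FROM sys_query_history
WHERE status = 'failed'
  AND start_time > DATEADD(day, -1, GETDATE())
ORDER BY start_time DESC
LIMIT 20;
```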

Thanks

rmis
asked 9 months ago · 406 views
1 Answer

Hello,

I would suggest opening a case with our Redshift Support team so they can look into this further.

AWS
SUPPORT ENGINEER
answered 9 months ago
