Memory error loading a medium-sized table with Spectrum into Redshift Serverless


Hi there, I need help. We are loading tables from the Glue Data Catalog into Redshift Serverless with 8 RPUs. Everything works fine with small tables, but when we try a table with 1.5 million records (about 300 MB in Parquet), Redshift throws a memory error:

ERROR: Insufficient memory to run query: consider increasing compute size or re-write the query

We are already filtering on partitions, so there is nothing left to rewrite; the query itself is fine. If we increase from 8 to 16 RPUs, it works perfectly.
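For illustration, the load is essentially of this shape, here submitted through the Redshift Data API (the workgroup, database, schema, table, and partition names are placeholders, not our real ones):

```python
import time
import boto3

# Placeholder names -- the real workload uses an external (Spectrum) schema
# defined on the Glue Data Catalog and a partition filter in the WHERE clause.
SQL = """
INSERT INTO public.target_table
SELECT *
FROM spectrum_schema.source_table
WHERE partition_date = '2024-01-01';
"""

client = boto3.client("redshift-data")

# Submit the statement against the 8 RPU Serverless workgroup.
resp = client.execute_statement(
    WorkgroupName="my-serverless-workgroup",  # placeholder
    Database="dev",                            # placeholder
    Sql=SQL,
)

# Poll until the statement finishes; on failure the Error field carries the
# "Insufficient memory to run query" message we are seeing.
while True:
    desc = client.describe_statement(Id=resp["Id"])
    if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(2)

print(desc["Status"], desc.get("Error", ""))
```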

So we don't understand why loading a 300 MB table with 1.5M rows into an 8 RPU workgroup (8 × 16 GB RAM) fails, while loading 300M rows (about 40 GB) from the TPC/TICKIT demo tables used in the workshops, also from S3, works fine.

We read that on provisioned (managed) Redshift you can control how much RAM a slice can use, but we can't figure out how to manage the ~100 GB of RAM of the 8 RPU Serverless workgroup.
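For now the only workaround we have is bumping the workgroup capacity, roughly like this (a sketch against the redshift-serverless API; the workgroup name is a placeholder):

```python
import boto3

client = boto3.client("redshift-serverless")
WORKGROUP = "my-serverless-workgroup"  # placeholder

# Inspect the current base capacity (in RPUs) of the workgroup.
workgroup = client.get_workgroup(workgroupName=WORKGROUP)["workgroup"]
print("baseCapacity:", workgroup["baseCapacity"])

# Doubling to 16 RPUs makes the load succeed, but we would rather understand
# (and if possible control) the memory behaviour at 8 RPUs than scale up.
client.update_workgroup(workgroupName=WORKGROUP, baseCapacity=16)
```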

Does Spectrum have limits in Redshift Serverless? Are there any configuration options we can use?
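In case it helps with diagnosis, this is roughly how we pull the error text for the failing statement afterwards (a sketch assuming the SYS_QUERY_HISTORY monitoring view and these column names are available in Serverless; adjust if they differ):

```python
import boto3

client = boto3.client("redshift-data")

# List recent failed statements with their error text from the monitoring view.
resp = client.execute_statement(
    WorkgroupName="my-serverless-workgroup",  # placeholder
    Database="dev",                            # placeholder
    Sql="""
        SELECT query_id, status, error_message, query_text
        FROM sys_query_history
        WHERE status = 'failed'
        ORDER BY start_time DESC
        LIMIT 10;
    """,
)
# Results can be fetched later with get_statement_result(Id=resp["Id"]).
print("statement id:", resp["Id"])
```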

Thanks

rmis
Asked 9 months ago · 420 views

1 Answer

Hello,

I would suggest opening a case with our Redshift Support team so they can look into this further.

AWS
Support Engineer
Answered 9 months ago
