Memory error loading a medium-sized table with Spectrum into Redshift Serverless


Hi there, I need help. We are loading tables from the Glue Data Catalog into Redshift Serverless with 8 RPUs. Everything works fine with small tables, but when we try a table with 1.5 million records (about 300 MB in Parquet), Redshift throws this memory error:

ERROR: Insufficient memory to run query: consider increasing compute size or re-write the query

We are already filtering on partitions, so there is nothing to rewrite; the query itself is fine. If we increase the workgroup from 8 to 16 RPUs, it works perfectly.
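For reference, the load pattern looks roughly like this (a minimal sketch; the schema, table, and partition names are placeholders, not our real catalog objects):

```sql
-- External schema mapped to the Glue Data Catalog database
CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum_ext
FROM DATA CATALOG
DATABASE 'my_glue_database'
IAM_ROLE default;

-- Load into a local Redshift table, filtering on the partition column
INSERT INTO staging.my_table
SELECT *
FROM spectrum_ext.my_table
WHERE partition_date = '2024-01-01';
```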

So we don't understand why loading a 300 MB table with 1.5 million rows into an 8 RPU workgroup (8 × 16 GB RAM) fails, while loading about 300 million rows and 40 GB from the TPC/TICKIT demo tables in S3 (from the workshops) works fine.

We read that in provisioned (managed) Redshift we can control how much RAM a slice can use, but we can't figure out how to manage the 100 GB of RAM of the 8 RPU Serverless workgroup.
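On a provisioned cluster, our understanding is that you would do something like this to give one query a bigger share of the WLM queue's memory (just a sketch of what we mean by controlling memory per query, not something we have found for Serverless):

```sql
-- Provisioned Redshift WLM: temporarily claim more query slots, which gives
-- this session's next query a larger share of the queue's memory
SET wlm_query_slot_count TO 4;

INSERT INTO staging.my_table
SELECT *
FROM spectrum_ext.my_table
WHERE partition_date = '2024-01-01';

RESET wlm_query_slot_count;
```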

Does Spectrum have limits in Redshift Serverless? Are there any configuration options we can use?
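For what it's worth, we can check what the Spectrum part of the query actually scans with something like this (assuming we are reading the SYS_EXTERNAL_QUERY_DETAIL system view correctly):

```sql
-- Recent external (Spectrum) query steps: partitions qualified, files and bytes scanned
SELECT query_id,
       total_partitions,
       qualified_partitions,
       scanned_files,
       returned_rows,
       returned_bytes,
       file_format
FROM sys_external_query_detail
ORDER BY start_time DESC
LIMIT 20;
```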

Thanks

rmis
Asked 9 months ago · 420 views
1 Answer

Hello,

I would suggest opening up a case with our Redshift Support team to check further on this.

AWS
Support Engineer
Answered 9 months ago
