Glue PySpark create table in Redshift


I'm following the documentation from:

  1. https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-spark-redshift-readwrite.html
  2. https://github.com/spark-redshift-community/spark-redshift

My code: (screenshot in original post)

Logs: (screenshot in original post)

I am getting these timeout messages until the job reaches its timeout threshold and fails. Is the IP address shown in the log my Redshift Serverless cluster's internal address?

Am I missing something?

I would appreciate any help.

  • Timeouts are one of the symptoms of missing permissions. Check whether the IAM user/role running the Glue job has access to Redshift or not (see the sketch below).
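
For a quick check, something like this may help (a minimal boto3 sketch; "my-glue-job" is a placeholder for the actual job name). It looks up the role the job runs as and lists its attached managed policies, so you can confirm it has Redshift and S3 access:

import boto3

# Placeholder job name -- replace with the actual Glue job name.
JOB_NAME = "my-glue-job"

glue = boto3.client("glue")
iam = boto3.client("iam")

# The Glue job definition holds the name or ARN of the IAM role it runs as.
role_arn = glue.get_job(JobName=JOB_NAME)["Job"]["Role"]
role_name = role_arn.split("/")[-1]

# List the managed policies attached to that role; look for Redshift and S3 access.
for policy in iam.list_attached_role_policies(RoleName=role_name)["AttachedPolicies"]:
    print(policy["PolicyName"], policy["PolicyArn"])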

asked a year ago · 580 views
1 Answer

Timeouts could be due to multiple reasons. The most probable reason, in my opinion, is that the security group rules are not allowing the Spark job (your Glue job, or an EMR cluster) to reach the Redshift cluster. Check the security group and network ACL rules for the resource you are trying to access. Make sure the rules allow inbound and outbound traffic for the appropriate protocols and port ranges (Redshift listens on TCP port 5439 by default).
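If it helps, here is a minimal boto3 sketch of that check (the security group ID is a placeholder for whatever is attached to your Redshift Serverless workgroup). It prints the inbound rules so you can confirm TCP 5439 is allowed from the Glue job's security group or subnet range:

import boto3

# Placeholder: the security group attached to the Redshift Serverless workgroup.
REDSHIFT_SG_ID = "sg-0123456789abcdef0"

ec2 = boto3.client("ec2")
sg = ec2.describe_security_groups(GroupIds=[REDSHIFT_SG_ID])["SecurityGroups"][0]

# Print every inbound rule; you want one covering TCP 5439 from the Glue
# connection's security group (or its subnet CIDR).
for rule in sg["IpPermissions"]:
    print(
        rule.get("IpProtocol"),
        rule.get("FromPort"),
        rule.get("ToPort"),
        [r["CidrIp"] for r in rule.get("IpRanges", [])],
        [g["GroupId"] for g in rule.get("UserIdGroupPairs", [])],
    )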

To get the endpoint (and from it the IP address) of a Redshift Serverless workgroup, you can use the GetWorkgroup action of the Amazon Redshift Serverless API; the DescribeClusters action of the Amazon Redshift API only covers provisioned clusters.

Here is an example of how to use the GetWorkgroup action to get the endpoint address of a Redshift Serverless workgroup using the AWS CLI:

aws redshift-serverless get-workgroup --workgroup-name my-serverless-workgroup --query 'workgroup.endpoint.address'
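
The same lookup from Python/boto3, if that is more convenient inside a Glue job (a sketch; the workgroup name is a placeholder):

import boto3

# Placeholder workgroup name -- replace with your Redshift Serverless workgroup.
rs = boto3.client("redshift-serverless")
workgroup = rs.get_workgroup(workgroupName="my-serverless-workgroup")["workgroup"]

# Inside the workgroup's VPC this hostname should resolve to private IPs,
# which may be what you are seeing in the Glue job logs.
print(workgroup["endpoint"]["address"], workgroup["endpoint"]["port"])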

Let me know if you still face any issues.

AWS
answered a year ago
