Glue PySpark create table in Redshift


I'm following the documentation from:

  1. https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-spark-redshift-readwrite.html
  2. https://github.com/spark-redshift-community/spark-redshift

My code: (attached as a screenshot)
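In case the screenshot is hard to read, the job is roughly equivalent to this sketch (all identifiers, ARNs and S3 paths below are placeholders, not my real values):

from awsglue.context import GlueContext
from pyspark.context import SparkContext

sc = SparkContext.getOrCreate()
glue_context = GlueContext(sc)
spark = glue_context.spark_session

# Read some input data and write it to Redshift through the spark-redshift connector
df = spark.read.parquet("s3://my-bucket/input/")

(df.write
    .format("io.github.spark_redshift_community.spark.redshift")
    .option("url", "jdbc:redshift://my-workgroup.123456789012.eu-west-1.redshift-serverless.amazonaws.com:5439/dev")
    .option("dbtable", "public.my_table")
    .option("tempdir", "s3://my-bucket/temp/")
    .option("aws_iam_role", "arn:aws:iam::123456789012:role/my-redshift-copy-role")
    .mode("overwrite")
    .save())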

Logs: (attached as a screenshot)

I am getting these timeout messages until the job reaches its timeout threshold and fails. Is the IP address in the logs my internal Redshift Serverless address?

Am I missing something?

I would appreciate any help.

  • Timeouts are one of the symptoms of permission issues. Check whether the IAM user/role running the Glue job has access to Redshift, for example by listing the role's attached policies as in the sketch below.
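A minimal sketch of that check with boto3, assuming you know the name of the role the Glue job runs with (the role name is a placeholder):

import boto3

iam = boto3.client("iam")
role_name = "my-glue-job-role"  # placeholder: the IAM role attached to the Glue job

# Managed policies attached to the role
for policy in iam.list_attached_role_policies(RoleName=role_name)["AttachedPolicies"]:
    print(policy["PolicyName"], policy["PolicyArn"])

# Inline policies defined directly on the role
for name in iam.list_role_policies(RoleName=role_name)["PolicyNames"]:
    print("inline:", name)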

posted a year ago · 580 views
1 Answer

Timeouts could be due to multiple reasons. The most probable reason, in my opinion, is that the security group rules are not allowing the Glue job to reach the Redshift cluster. Check the security group and network ACL rules for the resource you are trying to access, and make sure that they allow inbound and outbound traffic for the appropriate protocols and port ranges (Redshift listens on TCP port 5439 by default).
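For example, a quick way to inspect the inbound rules on the Redshift side with boto3 (the security group ID below is a placeholder for the group attached to your Redshift Serverless workgroup):

import boto3

ec2 = boto3.client("ec2")
redshift_sg_id = "sg-0123456789abcdef0"  # placeholder

# Print each inbound rule: port range, protocol, CIDR ranges and referenced security groups
sg = ec2.describe_security_groups(GroupIds=[redshift_sg_id])["SecurityGroups"][0]
for rule in sg["IpPermissions"]:
    print(rule.get("FromPort"), rule.get("ToPort"), rule.get("IpProtocol"),
          [r["CidrIp"] for r in rule.get("IpRanges", [])],
          [g["GroupId"] for g in rule.get("UserIdGroupPairs", [])])

You are looking for an inbound rule on TCP port 5439 that covers the security group (or subnet CIDR) used by the Glue connection.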

Regarding the IP in the logs: for Redshift Serverless, the DescribeClusters action of the Amazon Redshift API only covers provisioned clusters; the endpoint of a Serverless workgroup is returned by the GetWorkgroup action of the Amazon Redshift Serverless API.

Here is an example of how to get the endpoint address of a Redshift Serverless workgroup using the AWS CLI:

aws redshift-serverless get-workgroup --workgroup-name my-workgroup --query 'workgroup.endpoint.address'
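To compare that endpoint with the IP you see in the timeout messages, you can resolve it from the same network, for example (the hostname below is a placeholder):

import socket

endpoint = "my-workgroup.123456789012.eu-west-1.redshift-serverless.amazonaws.com"  # placeholder
print(socket.gethostbyname(endpoint))  # compare with the IP in the Glue job logs

If the resolved address matches the IP in the logs, the job is reaching the right host and the timeouts point at security group or routing rules rather than a wrong endpoint.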

Let me know if you still face any issues.

AWS
answered a year ago
