Glue Pyspark create table in Redshift


I'm following the documentation from:

  1. https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-spark-redshift-readwrite.html
  2. https://github.com/spark-redshift-community/spark-redshift

My code: (posted as a screenshot; not reproduced here)
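Since the original code was only a screenshot, here is a minimal sketch of what a Glue PySpark write through the spark-redshift-community connector typically looks like. The JDBC URL, IAM role ARN, S3 temp path, and table name below are placeholders, not values from the post:

```python
def redshift_write_options(jdbc_url, iam_role_arn, tempdir, table):
    """Assemble the option map the spark-redshift connector expects.

    Every argument value used in this example is a placeholder --
    substitute your own endpoint, role, bucket, and table name.
    """
    return {
        "url": jdbc_url,                # Redshift JDBC endpoint
        "dbtable": table,               # target table to create/append to
        "tempdir": tempdir,             # S3 staging area for COPY/UNLOAD
        "aws_iam_role": iam_role_arn,   # role Redshift assumes to read S3
    }

if __name__ == "__main__":
    # Hypothetical resources -- replace with your own.
    opts = redshift_write_options(
        jdbc_url=("jdbc:redshift://my-workgroup.123456789012.us-east-1"
                  ".redshift-serverless.amazonaws.com:5439/dev"),
        iam_role_arn="arn:aws:iam::123456789012:role/MyRedshiftRole",
        tempdir="s3://my-temp-bucket/redshift/",
        table="public.my_table",
    )
    # Inside the Glue job, df is a DataFrame you have already built:
    # (df.write.format("io.github.spark_redshift_community.spark.redshift")
    #    .options(**opts)
    #    .mode("append")
    #    .save())
```

If the options are right but the job still hangs, the write is usually blocked at the network layer rather than in Spark, which matches the timeout symptom described here.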

Logs: (posted as a screenshot; not reproduced here)

I keep getting these timeout messages until the job reaches its timeout threshold and fails. Is the IP in the log my internal Redshift Serverless address?

Am I missing something?

I would appreciate any help.

  • Timeouts are one symptom of missing permissions. Check whether the IAM user/role running the Glue job has access to Redshift.

Asked 1 year ago · Viewed 580 times
1 Answer

Timeouts can have multiple causes. The most likely one, in my opinion, is security group rules not allowing the Glue job to reach the Redshift endpoint. Check the security group and network ACL rules for the resource you are trying to access, and make sure they allow inbound and outbound traffic for the appropriate protocols and port ranges.
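As a concrete illustration of that check, the sketch below includes a small helper that, given inbound rules in the shape returned by `aws ec2 describe-security-groups` (the `IpPermissions` list), reports whether Redshift's default port 5439 is open to a given source CIDR. The rule data is made up for the example, and CIDR matching is exact-string for simplicity:

```python
def allows_port(ip_permissions, port, cidr):
    """Return True if any inbound rule covers `port` for source `cidr`.

    `ip_permissions` mirrors the IpPermissions structure returned by
    ec2 describe-security-groups. Matching the CIDR by exact string is a
    simplification; a real check should compare networks properly.
    """
    for rule in ip_permissions:
        proto = rule.get("IpProtocol")
        proto_ok = proto in ("tcp", "-1")          # "-1" means all traffic
        port_ok = (proto == "-1"
                   or rule.get("FromPort", 0) <= port <= rule.get("ToPort", 65535))
        cidr_ok = any(r.get("CidrIp") == cidr for r in rule.get("IpRanges", []))
        if proto_ok and port_ok and cidr_ok:
            return True
    return False

# Made-up rule set: inbound TCP 5439 open to one subnet only.
rules = [{"IpProtocol": "tcp", "FromPort": 5439, "ToPort": 5439,
          "IpRanges": [{"CidrIp": "10.0.1.0/24"}]}]
print(allows_port(rules, 5439, "10.0.1.0/24"))   # True
print(allows_port(rules, 5439, "10.0.9.0/24"))   # False
```

For a Glue job, the security group attached to the Glue connection must also allow a self-referencing inbound rule on all TCP ports, which is easy to miss and produces exactly this kind of silent timeout.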

To get the endpoint address of a Redshift Serverless workgroup, use the GetWorkgroup action of the Redshift Serverless API (the DescribeClusters action only covers provisioned clusters). You can then compare that address against the IP you see in the logs.

Here is an example using the AWS CLI:

aws redshift-serverless get-workgroup --workgroup-name my-workgroup --query 'workgroup.endpoint.address'

Let me know if you still face any issues.

AWS
Answered 1 year ago
