Connect to Redshift (in private Subnet) using Lambda


I have a Redshift cluster in a private subnet and I am using the psycopg2 library to connect to it from Lambda. I have deployed the Lambda inside the VPC and configured the subnets and security group appropriately.

When I use the following code to connect,

import boto3
import psycopg2

try:
    # Fetch temporary database credentials for the cluster via the Redshift API
    client = boto3.client('redshift')
    creds = client.get_cluster_credentials(
        DbUser=REDSHIFT_USER,
        DbName=REDSHIFT_DATABASE,
        ClusterIdentifier=REDSHIFT_CLUSTER,
        DurationSeconds=3600)
except Exception as ERROR:
    print("Credentials Issue: " + str(ERROR))
    raise

try:
    # Connect to the cluster endpoint using the temporary credentials
    conn = psycopg2.connect(
        dbname=REDSHIFT_DATABASE,
        user=creds['DbUser'],
        password=creds['DbPassword'],
        port=REDSHIFT_PORT,
        host=REDSHIFT_ENDPOINT)
except Exception as ERROR:
    print("Connection Issue: ")
    raise

I get the following error

{
  "errorMessage": "FATAL:  no pg_hba.conf entry for host \"::ffff:10.209.4.117\", user \"IAM:awsuser\", database \"dev\", SSL off\n",
  "errorType": "OperationalError"
}
I have resolved this error in SQL clients by adding the connection properties AuthMech=REQUIRE&ssl=TRUE.

However, I am not sure how to make the Lambda code connect to Redshift over SSL.

asked 4 years ago
1 Answer
Accepted Answer

Check whether the Lambda function is running in the default VPC; it should be running in the same VPC as the Redshift cluster. Please read more here: you will find a tutorial that uses RDS, but the same approach applies to Redshift. Use pg instead of pymysql to connect to Redshift; psycopg2 is also fine.
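To address the SSL error from the question, you can also require SSL directly in the psycopg2 connection. Here is a minimal sketch, assuming the same placeholder names (REDSHIFT_USER, REDSHIFT_DATABASE, REDSHIFT_CLUSTER, REDSHIFT_PORT, REDSHIFT_ENDPOINT) used in the question; sslmode='require' is passed through to libpq and tells the driver to negotiate SSL with the cluster.

import boto3
import psycopg2

# Temporary credentials via the Redshift API, as in the question
client = boto3.client('redshift')
creds = client.get_cluster_credentials(
    DbUser=REDSHIFT_USER,
    DbName=REDSHIFT_DATABASE,
    ClusterIdentifier=REDSHIFT_CLUSTER,
    DurationSeconds=3600)

# sslmode='require' forces an SSL connection, which should resolve the
# "no pg_hba.conf entry ... SSL off" error
conn = psycopg2.connect(
    dbname=REDSHIFT_DATABASE,
    user=creds['DbUser'],
    password=creds['DbPassword'],
    port=REDSHIFT_PORT,
    host=REDSHIFT_ENDPOINT,
    sslmode='require')

This plays the same role as the AuthMech=REQUIRE&ssl=TRUE properties you added in your SQL client, only expressed as a libpq connection parameter.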

Kunal_G
answered 4 years ago
