Generative AI with Amazon Bedrock and Amazon Athena

4 minute read
Content level: Intermediate

This article demonstrates how to leverage Amazon Bedrock to build a simple chatbot that effectively combines both user input and Amazon Athena-processed data to provide insightful and data-based responses in real-time.

Introduction

The chatbot prompts the user for a question. That question is sent to Amazon Bedrock, which generates a SQL query from it; the query is then run against Amazon Athena, and Amazon Bedrock forms a response based on the query result. It will look similar to the below.

[Screenshot: example chatbot question and response]

In this post I will use Amazon Bedrock, Amazon Athena, and LangChain. Athena's query federation lets you analyze records from different sources. Additionally, you can set up a pipeline that extracts data from these sources, stores it in Amazon S3, and uses Athena to query it. The two data sources I have used are VMware Cloud on AWS (a PostgreSQL database running in a virtual machine) and AWS Lake Formation.

[Diagram: solution architecture showing the data sources, Athena, and Bedrock]

Prerequisites

  • AWS Account
  • AWS CLI
  • Amazon Athena connected to a data source, OR a PostgreSQL database if you are not using Amazon Athena
  • Amazon Bedrock model access
  • The right foundation model (FM) for your requirements; I have used AI21 Labs Jurassic-2 Ultra

[Screenshot: Amazon Bedrock model access]

The Code

To get started, we are going to create a simple Python script, and then we will add each required part to the script.

Installs

You will need to install LangChain (plus the langchain_experimental package, which provides SQLDatabaseChain) and SQLAlchemy. If you are using the Athena connection, also install PyAthena, which supplies the awsathena SQLAlchemy dialect.

pip install langchain langchain_experimental
pip install sqlalchemy
pip install "PyAthena[SQLAlchemy]"

Import Modules

from langchain.llms.bedrock import Bedrock
from langchain_experimental.sql import SQLDatabaseChain
from langchain.utilities.sql_database import SQLDatabase
from sqlalchemy import create_engine

Setup Athena Connection

Note: if you plan to use a PostgreSQL DB connection, skip this code.

# DB variables
connathena = "athena.REGION.amazonaws.com"  # replace REGION with your AWS Region
portathena = '443'  # update if your port is different
schemaathena = 'DatabaseName to Query'  # from Amazon Athena
s3stagingathena = 's3://S3PATH'  # from the Amazon Athena settings
wkgrpathena = 'primary'  # update if your workgroup is different
connection_string = f"awsathena+rest://@{connathena}:{portathena}/{schemaathena}?s3_staging_dir={s3stagingathena}/&work_group={wkgrpathena}"
engine_athena = create_engine(connection_string, echo=False)
db = SQLDatabase(engine_athena)
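The connection string follows PyAthena's SQLAlchemy URL format. As a quick sanity check, the pieces can be assembled in a small helper; the function name and the example values here are my own, for illustration only:

```python
# Hypothetical helper that assembles the PyAthena SQLAlchemy URL from its parts.
def build_athena_uri(region, schema, s3_staging, workgroup="primary", port=443):
    host = f"athena.{region}.amazonaws.com"
    return (
        f"awsathena+rest://@{host}:{port}/{schema}"
        f"?s3_staging_dir={s3_staging}/&work_group={workgroup}"
    )

uri = build_athena_uri("eu-west-1", "salesdb", "s3://my-athena-results")
print(uri)
```

Printing the URI before passing it to create_engine makes it easy to spot a missing trailing slash or a wrong workgroup.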

Setup PostgreSQL Connection

Note: If you are using Amazon Athena, skip this section and use only the Amazon Athena connection code above. To keep the code simple, I have put the username and password directly in the URI; replace this if it doesn't meet your requirements (and definitely do not do this in production).

# DB variables
db = SQLDatabase.from_uri(
    "postgresql+psycopg2://username:password@DBIPAddress:5432/DBNAME",
)
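Rather than interpolating credentials into the URI by hand, SQLAlchemy's URL.create can build it for you and takes care of percent-encoding special characters in the password. A minimal sketch, with placeholder host, database, and credentials:

```python
from sqlalchemy.engine import URL

# URL.create escapes special characters (e.g. '@' or '/') in the password,
# which would otherwise break a hand-built URI.
url = URL.create(
    drivername="postgresql+psycopg2",
    username="username",
    password="p@ss/word",  # placeholder credentials
    host="DBIPAddress",
    port=5432,
    database="DBNAME",
)
print(url.render_as_string(hide_password=False))
```

The resulting URL object can be passed to SQLDatabase.from_uri via str(url) or straight to create_engine.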

Setup LLM - Amazon Bedrock AI21 Labs Jurassic-2-Ultra

Note: You can change the below to a different Amazon Bedrock model; it's a good chance to see any differences in responses.

# Setup LLM
# region_name: a commenter notes ai21.j2-ultra-v1 currently appears to be
# available only in us-east-1 and us-east-2
llm = Bedrock(model_id="ai21.j2-ultra-v1", region_name="us-east-1", model_kwargs={"maxTokens": 1024, "temperature": 0.0})
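If you do swap models, note that the keys in model_kwargs are provider-specific: AI21 uses maxTokens, while Anthropic models on Bedrock use max_tokens_to_sample. One way to keep the swap to a single line is a small lookup table; the model IDs below are examples, so check the Bedrock console for the models you actually have access to:

```python
# Provider-specific generation parameters; the key names differ by vendor.
MODEL_KWARGS = {
    "ai21.j2-ultra-v1": {"maxTokens": 1024, "temperature": 0.0},
    "anthropic.claude-v2": {"max_tokens_to_sample": 1024, "temperature": 0.0},
}

model_id = "ai21.j2-ultra-v1"  # change this line to try another model
# llm = Bedrock(model_id=model_id, model_kwargs=MODEL_KWARGS[model_id])
```

Keeping temperature at 0.0 is a deliberate choice here: SQL generation benefits from deterministic output.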

Create your prompt

# Create the prompt
QUERY = """
Create a syntactically correct postgresql query to run based on the question, then look at the results of the query and return the answer like a human
{question}
"""

Setup the Database Chain

db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)

Create the input prompt and response

def get_prompt():
    print("Type 'exit' to quit")

    while True:
        prompt = input("Ask me a Question: ")

        if prompt.lower() == 'exit':
            print('Exiting...')
            break
        else:
            try:
                question = QUERY.format(question=prompt)
                print(db_chain.run(question))
            except Exception as e:
                print(e)

get_prompt()

Once you have created the above, you can run this Python file; when you ask a question, it should look similar to the below.

[Screenshot: example question and response from the running script]

3 Comments

Thank you for this article, it was of great assistance.

A couple of notes: to get up and running I found I needed to pip install the langchain_experimental and PyAthena libraries, as well as specify the region_name argument as either us-east-1 or us-east-2, as the ai21.j2-ultra-v1 model only seems to be available in those two Regions currently.

Lloyd
replied 3 months ago

Can you share the GitHub repo for this code, please? Any idea how to use Guardrails from Bedrock to limit the scope of the FM?

Abilash
replied a month ago

Thanks Lloyd for the notes above.

Abilash, there is a preview of Amazon Bedrock Guardrails at the moment; you can see the details here - https://aws.amazon.com/bedrock/guardrails/. If you wish to have that added to your account, that link gives you details on how to do that.

AWS
EXPERT
replied a month ago