You can accomplish this with the QnAIntent in Amazon Lex. First, request access to the models you will need (try the Cohere models for embeddings): https://catalog.us-east-1.prod.workshops.aws/workshops/b5506ec7-e09f-4aab-b50a-c194cfa5a6d7/en-US/prerequisites/request-model-access
Then check out the Conversational FAQ section of the same workshop; it walks through the setup step by step: https://catalog.us-east-1.prod.workshops.aws/workshops/b5506ec7-e09f-4aab-b50a-c194cfa5a6d7/en-US/create-qna-intent
QnAIntent now also supports Claude3 Sonnet and Haiku in select regions.
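If you prefer to set this up programmatically rather than through the console, the QnAIntent configuration from the workshop can be sketched with boto3's Lex V2 model-building API. This is a minimal sketch, not the workshop's exact steps: the bot ID, knowledge base ARN, and model ARN below are placeholders, and the field names should be verified against the current `lexv2-models` CreateIntent documentation.

```python
def build_qna_intent_request(bot_id, bot_version, locale_id,
                             knowledge_base_arn, model_arn):
    """Build a CreateIntent request payload for a QnAIntent backed
    by a Bedrock knowledge base (all IDs/ARNs are placeholders)."""
    return {
        "botId": bot_id,
        "botVersion": bot_version,
        "localeId": locale_id,
        "intentName": "QnAIntent",
        # Built-in intent signature that enables the QnA behavior.
        "parentIntentSignature": "AMAZON.QnAIntent",
        "qnAIntentConfiguration": {
            # Foundation model used to generate the answer.
            "bedrockModelConfiguration": {"modelArn": model_arn},
            # Data source: an existing Bedrock knowledge base.
            "dataSourceConfiguration": {
                "bedrockKnowledgeStoreConfiguration": {
                    "bedrockKnowledgeBaseArn": knowledge_base_arn
                }
            },
        },
    }
```

The resulting dict can then be passed to `boto3.client("lexv2-models").create_intent(**request)` against the bot's DRAFT version, assuming your credentials have the required Lex and Bedrock permissions.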
This can potentially be achieved using a Bedrock Knowledge Base and Lex Conversational FAQ. Refer to this video for a demo and walkthrough.
But if I build a Bedrock Knowledge Base using Amazon OpenSearch Serverless, will Amazon OpenSearch Serverless charge me every hour, or only when the data is synced?
Thank you for your answer, Thomas. It's a great solution, but I'm concerned about the pricing. I don't know much about Amazon OpenSearch Serverless; will it be running every hour, or only when data is synced?
If you use Amazon OpenSearch Serverless, it will cost you every hour, as it never scales down to zero. It is possible to set this up with another database such as Amazon Aurora, which can provide a lower cost than OpenSearch for the vector store; however, it is a bit more work, as there is no easy create option for that at present.
You can also check out the "Chat with a single document" feature. With that, you don't need a vector database at all; you can specify an S3 URI in a Lambda function. https://aws.amazon.com/blogs/machine-learning/knowledge-bases-in-amazon-bedrock-now-simplifies-asking-questions-on-a-single-document/
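The single-document approach from that blog post can be sketched as a Lambda handler calling the Bedrock agent runtime's RetrieveAndGenerate API with an external S3 source. A minimal sketch, assuming the bucket, document, and model ARN below are placeholders; check the field names against the current `bedrock-agent-runtime` documentation.

```python
def build_single_document_request(s3_uri, model_arn, question):
    """Build a RetrieveAndGenerate request that answers a question
    against a single S3 document, with no vector database."""
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            # EXTERNAL_SOURCES mode reads the document directly,
            # instead of querying a knowledge base.
            "type": "EXTERNAL_SOURCES",
            "externalSourcesConfiguration": {
                "modelArn": model_arn,
                "sources": [
                    {
                        "sourceType": "S3",
                        "s3Location": {"uri": s3_uri},
                    }
                ],
            },
        },
    }

def lambda_handler(event, context):
    # Hypothetical Lambda entry point: the event carries the user's
    # question; the execution role needs Bedrock invoke permissions.
    import boto3
    client = boto3.client("bedrock-agent-runtime")
    request = build_single_document_request(
        s3_uri="s3://my-example-bucket/faq.pdf",  # placeholder
        model_arn="arn:aws:bedrock:us-east-1::foundation-model/"
                  "anthropic.claude-3-sonnet-20240229-v1:0",
        question=event.get("question", ""),
    )
    response = client.retrieve_and_generate(**request)
    return {"answer": response["output"]["text"]}
```

Since nothing is indexed ahead of time, there is no hourly vector-store charge with this pattern; you pay only for the Bedrock model invocations.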