How to link figures in web documents and other sources for an LLM chatbot?


Hello,

Currently I have an LLM chatbot running on the Streamlit sample code from this workshop: https://aws.amazon.com/blogs/machine-learning/quickly-build-high-accuracy-generative-ai-applications-on-enterprise-data-using-amazon-kendra-langchain-and-large-language-models/

and using Bedrock with a RAG approach, following this workshop as a sample: https://github.com/aws-samples/amazon-bedrock-workshop/blob/main/03_QuestionAnswering/01_qa_w_rag_claude.ipynb

I want the chatbot to respond with relevant figures along with the textual response to the question. To do this, one solution could be to use Amazon Textract to get the images out and label them with various text groups relevant to each figure. That way, when an answer contains a text group that is similar enough to the ones associated with a figure, the figure can be selected with a given confidence level as well.
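As a rough sketch of that matching step (assuming figures have already been extracted and labeled; the Titan embedding model ID and the 0.75 threshold are just illustrative choices on my part), I was thinking of something like:

```python
import json
import boto3
import numpy as np

bedrock = boto3.client("bedrock-runtime")

def embed(text: str) -> np.ndarray:
    """Embed text with a Bedrock embedding model (Titan used here as an example)."""
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": text}),
    )
    return np.array(json.loads(response["body"].read())["embedding"])

def pick_figure(answer: str, figures: dict, threshold: float = 0.75):
    """Return (figure_id, score) for the figure whose label text is most
    similar to the answer, or None if nothing clears the threshold.
    `figures` maps figure_id -> label text produced at extraction time."""
    answer_vec = embed(answer)
    best_id, best_score = None, threshold
    for figure_id, label_text in figures.items():
        vec = embed(label_text)
        # Cosine similarity between the answer and the figure's label text.
        score = float(np.dot(answer_vec, vec) /
                      (np.linalg.norm(answer_vec) * np.linalg.norm(vec)))
        if score > best_score:
            best_id, best_score = figure_id, score
    return (best_id, best_score) if best_id else None
```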

However, I don't see any clear pipelines for extracting tables, pictures, and figures from web documents such as those on Confluence. What solutions can I use to extract figures from PDFs and web documents so that they are linked to the answers in QnA bot responses?
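For the Confluence side specifically, the closest thing I've sketched (assuming access to the Confluence REST API; `BASE_URL`, `PAGE_ID`, and the auth tuple are placeholders for my own instance) is to pull the rendered HTML of a page and pair each image with nearby caption-like text:

```python
import requests
from bs4 import BeautifulSoup

BASE_URL = "https://your-domain.atlassian.net/wiki"
PAGE_ID = "123456"  # hypothetical page id

# Fetch the rendered page body via the Confluence REST API.
resp = requests.get(
    f"{BASE_URL}/rest/api/content/{PAGE_ID}",
    params={"expand": "body.view"},
    auth=("user@example.com", "api-token"),
)
resp.raise_for_status()
html = resp.json()["body"]["view"]["value"]

# Collect each <img> together with the nearest preceding heading/paragraph,
# which can serve as the label text for the similarity matching above.
soup = BeautifulSoup(html, "html.parser")
figures = []
for img in soup.find_all("img"):
    context = img.find_previous(["p", "h1", "h2", "h3"])
    figures.append({
        "src": img.get("src"),
        "label": " ".join(filter(None, [
            img.get("alt"),
            context.get_text(strip=True) if context else None,
        ])),
    })
```

But this feels ad hoc, and it doesn't cover tables or figures embedded in attached PDFs, which is why I'm asking whether a more established pipeline exists.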

Asked 7 months ago · 308 views
1 Answer

I would take a look at this solution, which gets closer to what you are asking for: https://aws.amazon.com/blogs/machine-learning/deploy-generative-ai-self-service-question-answering-using-the-qnabot-on-aws-solution-powered-by-amazon-lex-with-amazon-kendra-and-amazon-bedrock/

The full implementation can be seen here: https://aws.amazon.com/solutions/implementations/qnabot-on-aws/

Depending on your data tables, you may need to modify the ingest pipeline so that each data table is tied to the keys used in your embedding table. That way, the relevant tables come back along with the embeddings that are retrieved.
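As an illustrative sketch of that tie-in (assuming a LangChain-style ingest where each chunk carries metadata; the source key and S3 URI below are hypothetical), the figure or table reference simply rides along with each embedded chunk:

```python
from langchain.schema import Document

# The point is that the figure/table reference is stored as metadata on the
# same record as the embedded text, so whatever chunk the retriever returns
# brings its figures back with it.
chunk = Document(
    page_content="Quarterly revenue grew 12%, as shown in Figure 3.",
    metadata={
        "source": "confluence://SPACE/page-123456",                  # hypothetical source key
        "figures": ["s3://my-bucket/figures/page-123456-fig3.png"],  # hypothetical URI
    },
)
# At query time, read metadata["figures"] off the retrieved chunks and render
# those images next to the generated answer in the Streamlit UI.
```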

AWS
Answered 6 months ago
