Need support on Prompt flow build. Getting error "retrieval query exceeds the limit of 1000 characters"


Hi, I am just exploring the Prompt Flow concept. Following the example, I created a simple prompt flow to test my knowledge base (with Pinecone). My prompt flow has very basic nodes: 1. Input, 2. Prompt, 3. Knowledge Base, 4. Output. When I test this flow by asking a simple question like "what is <XXXXX>", I get the error: "Knowledge base retrieval query in knowledge base node <XXXXX> exceeds the limit of 1000 characters. Shorten the query and retry your request." I don't see any information on how or where to fix this. Everyone says "please check your prompt" and sends a link to study, but something actually needs to be fixed or changed. Does anyone know how to fix this?

1 Answer

Hi,

Thank you for reaching out on this matter.

To explain the likely cause of this issue, let me walk through the steps the flow takes one by one:

  1. When you ask the question 'what is pmi', it is captured in the first Grey node, 'Flow Input'.
  2. The input 'what is pmi' is then passed to the Purple node, 'Prompt_1'. This block uses the input to build a new prompt and sends it to the selected model. The model's response is then saved as the modelCompletion for this node.
  3. The response from the model in the Purple node is then passed to the Green Knowledge Base node, which is where the issue seems to be occurring.
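The three steps above can be sketched as a minimal pipeline. This is a hypothetical simulation (the node functions are stand-ins, not the Bedrock API); only the 1000-character check mirrors the real Knowledge Base node behaviour, to show where the error fires:

```python
# Sketch of the flow: Grey (input) -> Purple (prompt + model) -> Green (KB).
# All functions are hypothetical stand-ins; only the length check is real.

KB_QUERY_LIMIT = 1000  # character limit enforced by the Knowledge Base node


def prompt_node(user_input: str) -> str:
    """Purple node: wraps the input in a prompt and returns the model
    completion. The 'model' here is faked as a verbose answer to
    demonstrate the failure mode."""
    prompt = f"Answer the question: {user_input}"
    model_completion = "PMI stands for ... " * 100  # pretend the model is verbose
    return model_completion


def knowledge_base_node(query: str) -> str:
    """Green node: would query Pinecone; only the length validation
    is modelled here."""
    if len(query) > KB_QUERY_LIMIT:
        raise ValueError(
            f"Knowledge base retrieval query exceeds the limit of "
            f"{KB_QUERY_LIMIT} characters ({len(query)} given)."
        )
    return f"retrieved passages for: {query[:50]}"


user_input = "what is pmi"            # Grey node: Flow Input
completion = prompt_node(user_input)  # Purple node output, far over 1000 chars
try:
    knowledge_base_node(completion)   # Green node receives the long completion
except ValueError as e:
    print(e)                          # reproduces the reported error
```

Note that the short original question would have passed the check; it is the intermediate model completion, not your input, that exceeds the limit.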

Based on the above, the probable cause is the model output in Step 2. The generated output from the model is likely more than 1000 characters, which triggers the error you are seeing.

A few options to test that could help resolve the issue:

  • Test modifying the prompt in the Purple node's message field. For example, you could add an instruction to respond in fewer than 1000 characters (note the limit in the error is characters, not words)
  • Test whether other models in the Purple node resolve the issue, since different models can generate responses of different lengths for the same prompt
  • If directly querying the knowledge base fits your use case, you could connect the Grey node straight to the Green node, bypassing the Prompt node
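If you control any code around the flow (for example a pre-processing step or a Lambda node), a simple guard that truncates the query before it reaches the Knowledge Base node is another way to stay under the limit. This helper is illustrative only, not part of any Bedrock SDK:

```python
KB_QUERY_LIMIT = 1000  # character limit quoted in the error message


def truncate_query(query: str, limit: int = KB_QUERY_LIMIT) -> str:
    """Trim a retrieval query to the limit, backing off to the last
    word boundary so the truncated query still reads as natural text."""
    if len(query) <= limit:
        return query
    cut = query[:limit]
    # Back off to the last space so we do not end mid-word.
    if " " in cut:
        cut = cut[: cut.rfind(" ")]
    return cut


# A 2000-character string is cut down to at most 1000 characters.
print(len(truncate_query("word " * 400)))
```

Truncation loses information, of course, so the prompt-level fixes above are preferable; this is a safety net for cases where the model occasionally overshoots.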

I hope the above helps. If you are still facing issues, I would recommend reaching out to AWS Support, where you can be assisted further.

AWS
SUPPORT ENGINEER
Shoan_D
answered 2 months ago
