Questions tagged with Amazon Bedrock
The easiest way to build and scale generative AI applications with foundation models (FMs).
748 results
Why, when I try to remove the service Llama 3.2 1B Instruct, do I see this error?
• Llama 3.2 1B Instruct - This model meta.llama3-2-1b-instruct-v1 does not support deleting an agreement
When I use the invoke_agent API, if there is no knowledge base (KB) associated with the agent it doesn't do any embedding, and if there is a KB then it does embedding of the question, which is understandable since it use...
I am trying to get my AWS Lambda function to call Amazon Nova Reel v1.1 in order to get an AI video made and sent to my novabucket S3 bucket.
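A minimal sketch of what such a Lambda could look like, assuming Nova Reel 1.1 is exposed under the model ID amazon.nova-reel-v1:1 and reusing the novabucket name from the question; the prompt and video settings are placeholders:

```python
import boto3

# Nova Reel generates video asynchronously, so the Lambda only starts the job
# and returns the invocation ARN; the MP4 is written to the S3 bucket later.
bedrock_runtime = boto3.client("bedrock-runtime")

def lambda_handler(event, context):
    response = bedrock_runtime.start_async_invoke(
        modelId="amazon.nova-reel-v1:1",  # assumed ID for Nova Reel 1.1
        modelInput={
            "taskType": "TEXT_VIDEO",
            "textToVideoParams": {
                "text": event.get("prompt", "A drone shot over a coastline at sunset")
            },
            "videoGenerationConfig": {"durationSeconds": 6, "fps": 24, "dimension": "1280x720"},
        },
        outputDataConfig={"s3OutputDataConfig": {"s3Uri": "s3://novabucket/"}},
    )
    # Poll get_async_invoke with this ARN (or watch the bucket) to know when the video is ready.
    return {"invocationArn": response["invocationArn"]}
```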
There are a few questions regarding the token counts in the invoke_agent API:
1) When my agent is connected to a knowledge base, in the boto3 event stream the second trace doesn't contain the input and...
When I call the invoke_agent API and look through the boto3 event stream, I find there are 2 output tokens and 2 input tokens being returned, each completely different, but the input tokens are increasing...
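A small sketch of how per-step usage can be read out of that event stream, assuming placeholder agent and alias IDs; the exact trace layout varies with the agent's configuration, so the lookup below is defensive:

```python
import uuid
import boto3

client = boto3.client("bedrock-agent-runtime")

response = client.invoke_agent(
    agentId="AGENT_ID",
    agentAliasId="AGENT_ALIAS_ID",
    sessionId=str(uuid.uuid4()),
    inputText="What does my knowledge base say about pricing?",
    enableTrace=True,  # needed to see per-step model invocation metadata
)

input_tokens = output_tokens = 0
for event in response["completion"]:
    if "chunk" in event:
        print(event["chunk"]["bytes"].decode("utf-8"))
    elif "trace" in event:
        # Each orchestration step reports its own model-invocation usage,
        # so the totals accumulate across the agent's internal calls.
        trace = event["trace"]["trace"]
        usage = (
            trace.get("orchestrationTrace", {})
            .get("modelInvocationOutput", {})
            .get("metadata", {})
            .get("usage", {})
        )
        input_tokens += usage.get("inputTokens", 0)
        output_tokens += usage.get("outputTokens", 0)

print(f"input tokens: {input_tokens}, output tokens: {output_tokens}")
```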
We want to create an agent in Amazon Bedrock and define an action group that can perform custom retrievals.
Can custom retrievals be performed directly on a Knowledge Base (KB)?
If not, can custom r...
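One way to query a Knowledge Base directly (for example from the action group's Lambda, which could then apply custom post-processing) is the Retrieve API. A minimal sketch, assuming a placeholder knowledge base ID and query:

```python
import boto3

client = boto3.client("bedrock-agent-runtime")

# Query the KB directly and get back scored chunks instead of a generated answer.
response = client.retrieve(
    knowledgeBaseId="KB_ID",
    retrievalQuery={"text": "refund policy for enterprise customers"},
    retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 5}},
)

for result in response["retrievalResults"]:
    print(result["score"], result["content"]["text"])
```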
When I use the invoke_agent API and then ask it a question, in the response (i.e. boto3.completion_stream) there are 915 input tokens and 217 output tokens.
But when I check for the same in the CloudWatc...
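For comparison, the per-model token counts that CloudWatch records can be pulled as sketched below, assuming the AWS/Bedrock namespace's InputTokenCount and OutputTokenCount metrics and a placeholder model ID. Agent orchestration and guardrail calls also go through the model, which may explain part of any discrepancy:

```python
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)

for metric in ("InputTokenCount", "OutputTokenCount"):
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/Bedrock",
        MetricName=metric,
        Dimensions=[{"Name": "ModelId", "Value": "anthropic.claude-3-5-sonnet-20240620-v1:0"}],
        StartTime=start,
        EndTime=end,
        Period=3600,
        Statistics=["Sum"],
    )
    # Sum across the returned datapoints for the last hour.
    print(metric, sum(dp["Sum"] for dp in stats["Datapoints"]))
```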
In a RAG evaluation job in Bedrock there are multiple metrics, and since we are using an LLM as judge I assume there will be prompts for each metric. I want to know, for let's say 3 metrics: Faithfulness, co...
Suppose in a Bedrock agent I specify some instructions and some guardrails, and there is also a prompt I am giving. Are all of these given together in a single prompt, or as a series of prompts in diff...
Are the output and input tokens in the boto3.event_stream only those generated from the Bedrock model's response, or do they also include guardrail calls and additional agent prompts?
When I use Claude Sonnet 3.5 through AWS Bedrock to invoke a multimodal inference (say via boto3 invoke_model), this means I have a text part in the payload, which is the prompt, and a data part which ...
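For reference, a multimodal payload of that shape typically looks like the sketch below, assuming the Anthropic Messages format on Bedrock, a placeholder image file, and the anthropic.claude-3-5-sonnet-20240620-v1:0 model ID:

```python
import base64
import json
import boto3

client = boto3.client("bedrock-runtime")

# Encode the image ("data part") as base64 and pair it with the text prompt
# in a single user message.
with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {"type": "base64", "media_type": "image/jpeg", "data": image_b64},
                },
                {"type": "text", "text": "Describe what is in this image."},
            ],
        }
    ],
}

response = client.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    body=json.dumps(body),
)
print(json.loads(response["body"].read())["content"][0]["text"])
```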
Hi AWS Support,
I'm currently working with the Amazon Bedrock Knowledge Bases integration with SharePoint and encountering a persistent issue during the sync process. When I create a new Knowledge Base and p...
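For narrowing down sync failures like this, a sketch of starting an ingestion job for the data source and reading back its status and statistics, assuming placeholder knowledge base and data source IDs:

```python
import boto3

client = boto3.client("bedrock-agent")

# Kick off a sync for the SharePoint data source.
job = client.start_ingestion_job(
    knowledgeBaseId="KB_ID",
    dataSourceId="DATA_SOURCE_ID",
)["ingestionJob"]

# Read back the job; the statistics usually indicate how many documents
# were scanned, indexed, or failed during the sync.
status = client.get_ingestion_job(
    knowledgeBaseId="KB_ID",
    dataSourceId="DATA_SOURCE_ID",
    ingestionJobId=job["ingestionJobId"],
)["ingestionJob"]
print(status["status"], status.get("statistics"))
```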