Questions tagged with Amazon Bedrock

The easiest way to build and scale generative AI applications with foundation models (FMs).

748 results
Why, when I try to remove the Llama 3.2 1B Instruct model, do I see this error: "Llama 3.2 1B Instruct - This model meta.llama3-2-1b-instruct-v1 does not support deleting an agreement"?
1 answer · 0 votes · 32 views · asked 2 days ago
When I use the invoke_agent API, if there is no knowledge base (KB) associated with the agent it doesn't do any embedding, and if there is a KB it embeds the question, which is understandable since it use...
1 answer · 0 votes · 34 views · asked 4 days ago
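
Several of the questions on this page concern the invoke_agent API in boto3. For reference, a minimal sketch of calling it with tracing enabled, so KB retrieval and other orchestration steps show up in the event stream (the agent ID, alias ID, and region are placeholders):

```python
import uuid
import boto3

# Placeholder IDs and region; replace with your own agent configuration.
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.invoke_agent(
    agentId="AGENT_ID",             # placeholder
    agentAliasId="AGENT_ALIAS_ID",  # placeholder
    sessionId=str(uuid.uuid4()),
    inputText="What does the knowledge base say about the returns policy?",
    enableTrace=True,  # emit trace events alongside the completion chunks
)

# The completion is an event stream: chunks carry the answer text,
# traces carry orchestration details (including KB retrieval when a KB is attached).
for event in response["completion"]:
    if "chunk" in event:
        print(event["chunk"]["bytes"].decode("utf-8"), end="")
    elif "trace" in event:
        print(event["trace"]["trace"])
```
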
I am trying to get my AWS Lambda function to call Amazon Nova Reel v1.1 in order to generate an AI video and send it to my novabucket S3 bucket.
2 answers · 0 votes · 44 views · asked 4 days ago
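
Nova Reel runs as an asynchronous job rather than a synchronous invoke_model call. A rough sketch of a Lambda handler that starts the job and directs the output to an S3 bucket; the model ID, bucket URI, and video parameters are assumptions to verify against the current Nova Reel documentation for your region:

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

def handler(event, context):
    # Kick off an asynchronous text-to-video generation job.
    job = bedrock.start_async_invoke(
        modelId="amazon.nova-reel-v1:1",  # assumed model ID; confirm for your region
        modelInput={
            "taskType": "TEXT_VIDEO",
            "textToVideoParams": {"text": "A drone shot over a coastline at sunset"},
            "videoGenerationConfig": {
                "durationSeconds": 6,
                "fps": 24,
                "dimension": "1280x720",
            },
        },
        outputDataConfig={
            "s3OutputDataConfig": {"s3Uri": "s3://novabucket/"}  # placeholder bucket
        },
    )
    # Generation is asynchronous: poll get_async_invoke with this ARN
    # (or watch the bucket) to see when the video file lands in S3.
    return {
        "statusCode": 200,
        "body": json.dumps({"invocationArn": job["invocationArn"]}),
    }
```

The Lambda's execution role also needs permission to call Bedrock and to let Bedrock write to the target bucket; that part is not shown here.
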
There are a few questions regarding the token counts in the invoke_agent API: 1) When my agent is connected to a knowledge base, in the boto3 event stream the second trace doesn't contain the input and...
1 answer · 0 votes · 31 views · asked 4 days ago
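
The token-count questions here generally come down to reading the usage metadata inside the trace events of the invoke_agent event stream. A rough sketch, assuming the documented orchestrationTrace / modelInvocationOutput nesting (pre-processing, post-processing, and KB steps may report usage under different trace keys):

```python
def sum_agent_token_usage(completion_stream):
    """Sum token usage across the trace events of an invoke_agent response.

    completion_stream is response["completion"] from an invoke_agent call made
    with enableTrace=True (see the earlier sketch). The nesting used here is an
    assumption based on the documented orchestration trace shape.
    """
    input_tokens = output_tokens = 0
    for event in completion_stream:
        usage = (
            event.get("trace", {})
            .get("trace", {})
            .get("orchestrationTrace", {})
            .get("modelInvocationOutput", {})
            .get("metadata", {})
            .get("usage", {})
        )
        input_tokens += usage.get("inputTokens", 0)
        output_tokens += usage.get("outputTokens", 0)
    return input_tokens, output_tokens
```
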
When I call the invoke_agent API and look through the boto3 event stream, I find two output token counts and two input token counts being returned. Each is completely different, but the input tokens are increasing...
1 answer · 0 votes · 20 views · asked 4 days ago
We want to create an agent in Amazon Bedrock and define an action group that can perform custom retrievals. Can custom retrievals be performed directly on a Knowledge Base (KB)? If not, can custom r...
2 answers · 0 votes · 28 views · asked 4 days ago
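
If the agent's built-in KB orchestration is not flexible enough, an action-group Lambda can query the Knowledge Base directly with the Retrieve API and post-process the results itself. A minimal sketch with placeholder IDs:

```python
import boto3

client = boto3.client("bedrock-agent-runtime")

# Query the Knowledge Base directly and handle filtering/re-ranking in your own code.
result = client.retrieve(
    knowledgeBaseId="KB_ID",  # placeholder
    retrievalQuery={"text": "warranty terms for model X"},
    retrievalConfiguration={
        "vectorSearchConfiguration": {"numberOfResults": 5}
    },
)

for item in result["retrievalResults"]:
    print(item["content"]["text"], item.get("score"))
```
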
When I use the invoke_agent API and then ask it a question, the response (i.e. the boto3 completion stream) reports 915 input tokens and 217 output tokens. But when I check for the same in the Cloudwatc...
2 answers · 0 votes · 33 views · asked 4 days ago
In a RAG evaluation job in Bedrock there are multiple metrics, and since we are using an LLM as judge I assume there will be prompts for each metric. I want to know, for let's say 3 metrics: Faithfulness, co...
2 answers · 0 votes · 32 views · asked 5 days ago
Suppose in a Bedrock agent I specify some instructions and some guardrails, and there is also a prompt I am giving: are all of these combined into a single prompt, or sent as a series of prompts in diff...
Accepted Answer · Amazon Bedrock
1 answer · 0 votes · 24 views · asked 5 days ago
Are the output and input tokens in the boto3 event stream only those generated from the Bedrock model's response, or do they also include guardrail calls and additional agent prompts?
1 answer · 0 votes · 22 views · asked 5 days ago
When I use Claude 3.5 Sonnet through AWS Bedrock to invoke multimodal inference (say via boto3 invoke_model), this means I have a text part in the payload, which is the prompt, and a data part which ...
2 answers · 0 votes · 19 views · asked 5 days ago
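
For the multimodal case, the request body follows the Anthropic Messages format, with the image supplied as a base64 content block alongside the text block. A minimal sketch; the model ID and file path are placeholders:

```python
import base64
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Read a local image and base64-encode it for the image content block.
with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/jpeg",
                        "data": image_b64,
                    },
                },
                {"type": "text", "text": "Describe what is in this image."},
            ],
        }
    ],
}

response = client.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # placeholder; use the ID available in your region
    contentType="application/json",
    body=json.dumps(body),
)
print(json.loads(response["body"].read())["content"][0]["text"])
```
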
Hi AWS Support, I'm currently working with the Amazon Bedrock Knowledge Bases integration with SharePoint and encountering a persistent issue during the sync process. When I create a new Knowledge Base and p...
1 answer · 0 votes · 37 views · asked 5 days ago
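
For KB sync problems like this, it can help to trigger the ingestion job from boto3 and read back its status and failure reasons directly. A small sketch with placeholder IDs (the SharePoint-specific configuration itself lives on the data source, not in these calls):

```python
import boto3

client = boto3.client("bedrock-agent")

# Start a sync (ingestion job) for the data source attached to the Knowledge Base.
job = client.start_ingestion_job(
    knowledgeBaseId="KB_ID",       # placeholder
    dataSourceId="DATASOURCE_ID",  # placeholder
)["ingestionJob"]

# Check the job status; failureReasons surfaces crawl/sync errors.
status = client.get_ingestion_job(
    knowledgeBaseId="KB_ID",
    dataSourceId="DATASOURCE_ID",
    ingestionJobId=job["ingestionJobId"],
)["ingestionJob"]

print(status["status"], status.get("failureReasons", []))
```
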