Amazon QnAIntent unexpectedly fails - help

I am interacting with an Amazon Lex bot that uses the AMAZON.QnAIntent with an Amazon Kendra index. I am testing it with a simple query, for example: What is <something here>?

The issue is that it sometimes fails and sometimes works with the same question. Here are the JSON responses (I replaced some information with <> so no private data is shown):

Working JSON response:

{
    "messages": [
     {
      "content": "<This output is working as expected>",
      "contentType": "PlainText"
     }
    ],
    "sessionState": {
     "dialogAction": {
      "type": "ElicitIntent"
     },
     "sessionAttributes": {},
     "originatingRequestId": "<originatingRequestId>"
    },
    "interpretations": [
     {
      "intent": {
       "name": "FallbackIntent",
       "slots": {}
      },
      "interpretationSource": "Lex"
     },
     {
      "nluConfidence": {
       "score": 0.74
      },
      "intent": {
       "name": "Greetings",
       "slots": {}
      },
      "interpretationSource": "Lex"
     },
     {
      "nluConfidence": {
       "score": 0.72
      },
      "intent": {
       "name": "RequestAccessLink",
       "slots": {}
      },
      "interpretationSource": "Lex"
     },
     {
      "nluConfidence": {
       "score": 0.68
      },
      "intent": {
       "name": "AccessToIntranet",
       "slots": {}
      },
      "interpretationSource": "Lex"
     },
     {
      "nluConfidence": {
       "score": 0.64
      },
      "intent": {
       "name": "CreateCriticalTicket",
       "slots": {
        "message_ticket": null
       }
      },
      "interpretationSource": "Lex"
     }
    ],
    "requestAttributes": {
     "x-amz-lex:qnA-search-response": "<working>",
     "x-amz-lex:qnA-search-response-source": "<working>"
    },
    "sessionId": "<sessionID>"
   }

Non-working JSON response:

{
    "messages": [
     {
      "content": "<fallbackintent message>",
      "contentType": "PlainText"
     }
    ],
    "sessionState": {
     "dialogAction": {
      "type": "Close"
     },
     "intent": {
      "name": "FallbackIntent",
      "slots": {},
      "state": "ReadyForFulfillment",
      "confirmationState": "None"
     },
     "sessionAttributes": {},
     "originatingRequestId": "<originatingRequestId>"
    },
    "interpretations": [
     {
      "intent": {
       "name": "FallbackIntent",
       "slots": {},
       "state": "ReadyForFulfillment",
       "confirmationState": "None"
      },
      "interpretationSource": "Lex"
     },
     {
      "nluConfidence": {
       "score": 0.76
      },
      "intent": {
       "name": "Greetings",
       "slots": {}
      },
      "interpretationSource": "Lex"
     },
     {
      "nluConfidence": {
       "score": 0.75
      },
      "intent": {
       "name": "RequestAccessLink",
       "slots": {}
      },
      "interpretationSource": "Lex"
     },
     {
      "nluConfidence": {
       "score": 0.69
      },
      "intent": {
       "name": "AccessToIntranet",
       "slots": {}
      },
      "interpretationSource": "Lex"
     },
     {
      "nluConfidence": {
       "score": 0.64
      },
      "intent": {
       "name": "CreateCriticalTicket",
       "slots": {
        "message_ticket": null
       }
      },
      "interpretationSource": "Lex"
     }
    ],
    "sessionId": "<sessionId>"
   } 

Any ideas?

Asked a month ago · 177 views

1 Answer

The LLM is used to evaluate how confident it is that an answer is good enough to return. Because this evaluation has some inherent variability, the same question can occasionally trigger this fallback behavior.

If you open a support ticket with the account number, region, bot name/alias, and session information, an engineer can look into it.
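In the meantime, you can detect the fallback case on the client side and retry or log the request ID for the support ticket. A minimal sketch, assuming the two response shapes shown in your question (the `x-amz-lex:qnA-search-response` request-attribute key and the FallbackIntent name are taken directly from your working and non-working payloads):

```python
def qna_answered(response: dict) -> bool:
    """Return True when the QnAIntent produced an answer, False on fallback.

    Based on the two payloads above: a successful QnA answer carries
    "x-amz-lex:qnA-search-response" in requestAttributes and no resolved
    intent in sessionState, while the failing case closes the dialog with
    the FallbackIntent. Other response shapes may need additional checks.
    """
    attrs = response.get("requestAttributes") or {}
    if "x-amz-lex:qnA-search-response" in attrs:
        return True  # Kendra/QnA search returned an answer
    intent = response.get("sessionState", {}).get("intent")
    # Treat an explicit FallbackIntent as a failed QnA lookup.
    return not (intent and intent.get("name") == "FallbackIntent")
```

You could call this on the dict returned by the Lex V2 runtime (e.g. boto3's `recognize_text`) and retry once or twice before surfacing the fallback message to the user.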

AWS
Answered a month ago
