Boto3 get or put error code best practice


I am trying to think in defensive-programming mode. Using Boto3, a get_item or put_item call typically works fine and returns an HTTPStatusCode of 200. If it ever happens that a 200 is not returned:

1. What's the best practice for recovering from the error?
2. On the AWS side, does the underlying code retry the operation?

Example:

import boto3

MMediaSeQ = boto3.resource('dynamodb', region_name='us-east-1')
table = MMediaSeQ.Table('D1MMedSeq')

# Read one item, fetching only the LargestSeQNo attribute
responseSeQ = table.get_item(
    Key={'ID': '1'},
    ProjectionExpression='LargestSeQNo',
)
#print("responseSeQ = ", responseSeQ)

# Get the HTTPStatusCode from the response metadata
ResponseCode = responseSeQ.get("ResponseMetadata", {}).get('HTTPStatusCode', None)
#print("HTTPStatusCode = ", ResponseCode)

if ResponseCode == 200:
    print('Get successful')
else:
    print('Get ERROR return:', ResponseCode)
Petrus, asked 6 months ago · 269 views
2 Answers
Accepted Answer

Looks like you are working with Python. Boto3 is the Python SDK for AWS. This link documents the best practices for error handling and retries.
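In short, the pattern those docs describe, as I understand it, is to catch botocore's ClientError and inspect the error code, rather than checking HTTPStatusCode by hand. A rough sketch against the table from the question (the throttling branch is just one example of a retryable error, not the only one):

import boto3
from botocore.exceptions import ClientError

table = boto3.resource('dynamodb', region_name='us-east-1').Table('D1MMedSeq')

try:
    response = table.get_item(
        Key={'ID': '1'},
        ProjectionExpression='LargestSeQNo',
    )
    # A 200 with no matching key simply has no 'Item' in the response
    item = response.get('Item')
    print('Get successful:', item)
except ClientError as err:
    # The SDK raises ClientError only after its own retries are exhausted
    error_code = err.response['Error']['Code']
    if error_code == 'ProvisionedThroughputExceededException':
        print('Throttled even after SDK retries; back off and try again later')
    else:
        print('Get failed:', error_code, err.response['Error']['Message'])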

Hope this helps!

iBehr, AWS EXPERT, answered 6 months ago

Those links from @iBehr are worth a detailed look.

In summary there are three modes: Legacy (the current default), Standard, and Adaptive (experimental). You can configure both the mode and the maximum number of retries (5 by default). Retries use exponential backoff with base 2, which is particularly useful for handling throttling errors.
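If you want to change the mode or the attempt count yourself, it can be passed in through a botocore Config object; a minimal sketch (the table name just reuses the one from the question):

import boto3
from botocore.config import Config

# 'standard' mode with up to 10 total attempts; 'adaptive' adds
# client-side rate limiting on top of the standard behaviour
retry_config = Config(retries={'max_attempts': 10, 'mode': 'standard'})

dynamodb = boto3.resource('dynamodb', region_name='us-east-1', config=retry_config)
table = dynamodb.Table('D1MMedSeq')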

You can monitor retries either in Boto3's logging (if enabled) or in the response to API calls. It's not clear from the documentation, but alongside the documented structure returned by each call there is also ResponseMetadata that includes a RetryAttempts count, e.g.:

'ResponseMetadata': {
   'RequestId': '1234567890ABCDEF',
   'HostId': 'host ID data will appear here as a hash',
   'HTTPStatusCode': 400,
   'HTTPHeaders': {'header metadata key/values will appear here'},
   'RetryAttempts': 4
}
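As a small, self-contained illustration of both approaches (I'm using the broad 'botocore' logger name here; I haven't pinned down the exact sub-logger that records retry decisions):

import logging
import boto3

# Turn on botocore's debug logging so retry activity shows up in the log output
boto3.set_stream_logger('botocore', logging.DEBUG)

table = boto3.resource('dynamodb', region_name='us-east-1').Table('D1MMedSeq')
response = table.get_item(Key={'ID': '1'}, ProjectionExpression='LargestSeQNo')

# The undocumented-but-present retry counter from ResponseMetadata
print('RetryAttempts:', response['ResponseMetadata'].get('RetryAttempts', 0))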

On the subject of AWS API request throttling, the documentation is generally unclear about it (specific services like EC2 do document it), but AWS Support told me (in response to a case I raised a while back) that many services can experience request throttling:

  • It varies from service to service
  • For some services all of their APIs can be throttled, for others only some
  • Not all AWS services document their throttling as they "might be dynamic in nature" (whatever that means).

Support said you can look for "ThrottleCount" and "RequestLimitExceeded" metrics in CloudWatch (there is a small lookup sketch after this list), but

  • Support confirmed that not all services that have API throttling generate these (yet).
  • I think only metrics that have actually generated data (i.e. a throttling event has happened) appear there, from what I've seen, so there's no way to inventory which ones can occur.
  • Until I got the EC2/API metric enabled by requesting Support, EC2 API throttling wasn't visible in CloudWatch Metrics. Support said there's no similar situation for other APIs.
  • Seems that throttling on the EBS Direct APIs, even though these come under "EC2", does not show up in the throttling metrics under EC2/API.
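One way to hunt for those metrics from code, without guessing at a namespace, is to filter list_metrics by metric name; per the point above, only metrics that have actually received data will be returned (a sketch):

import boto3

cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

# Search by metric name across all namespaces that have reported data
for name in ('ThrottleCount', 'RequestLimitExceeded'):
    found = cloudwatch.list_metrics(MetricName=name)
    for metric in found.get('Metrics', []):
        print(metric['Namespace'], metric['MetricName'], metric['Dimensions'])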

EC2 uses a "token bucket" throttling method (see https://docs.aws.amazon.com/AWSEC2/latest/APIReference/throttling.html). Basically you have a bucket of tokens for making requests, which replenishes at a certain rate. If you run out, you get throttled and have to wait. Support said other services "likely" also use this method, but not always documented.
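To make the mechanics concrete, here is a toy token bucket in Python. This is only an illustration of the idea described above, not AWS's implementation, and the capacity/refill numbers are made up:

import time

class TokenBucket:
    """Toy token bucket: refills at `rate` tokens per second up to `capacity`."""

    def __init__(self, capacity=5, rate=1.0):
        self.capacity = capacity           # maximum number of stored tokens
        self.rate = rate                   # refill rate, tokens per second
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def try_consume(self):
        # Refill based on elapsed time, then try to take one token for a request
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                       # bucket empty: this request gets throttled

bucket = TokenBucket()
for i in range(8):
    print(i, 'allowed' if bucket.try_consume() else 'throttled')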

EXPERT, answered 6 months ago
  • I am mostly using API Gateway, Lambda & DynamoDB. May I ask for your answer in the context of these services, please?
