Why is my Amazon DynamoDB provisioned table throttled?

Read or write operations on my Amazon DynamoDB provisioned table are throttled, or I get the following error when I perform read or write operations on the table: "ProvisionedThroughputExceededException."

Short description

The following are common scenarios where you might experience throttling on your DynamoDB provisioned table:

  • Your DynamoDB table has adequate provisioned capacity, but most of the requests are throttled.
  • You turned on AWS Application Auto Scaling for DynamoDB, but your DynamoDB table is throttled.
  • You have a hot partition in your table.
  • Your table's traffic exceeds your account throughput quotas.

Resolution

Note: For information on DynamoDB metrics, such as WriteThrottleEvents and ReadThrottleEvents, that must be monitored during throttling events, see DynamoDB metrics and dimensions.

Based on your use case, complete the following tasks.

Your DynamoDB table has adequate provisioned capacity, but most of the requests are throttled

DynamoDB reports minute-level metrics to Amazon CloudWatch. The metrics are calculated as the sum for a minute and then averaged. However, DynamoDB rate limits are applied per second. For example, if you provision 60 write capacity units for your DynamoDB table, then you can perform 3,600 writes in one minute. However, if you drive all 3,600 requests in one second and send no requests for the rest of that minute, then some of those requests are throttled. The total number of read or write capacity units consumed per minute might be lower than the provisioned throughput for the table. But if the entire workload falls within a couple of seconds, then the requests might still be throttled.
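
The following is a minimal sketch, using the AWS SDK for Python (Boto3), of one way to spread writes evenly across each second instead of sending them in a single burst. The table name, item shape, and per-second budget are assumptions for illustration, and each item is assumed to consume one write capacity unit.

    import time
    import boto3

    TABLE_NAME = "my-table"    # assumed table name
    WRITES_PER_SECOND = 60     # assumed budget: 60 WCU, 1 WCU per item

    table = boto3.resource("dynamodb").Table(TABLE_NAME)

    def paced_write(items):
        """Write items at no more than WRITES_PER_SECOND per second."""
        for start in range(0, len(items), WRITES_PER_SECOND):
            second_started = time.monotonic()
            for item in items[start:start + WRITES_PER_SECOND]:
                table.put_item(Item=item)
            # Wait out the remainder of the second before the next slice.
            elapsed = time.monotonic() - second_started
            if elapsed < 1.0:
                time.sleep(1.0 - elapsed)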

To resolve this issue, make sure that your table has enough capacity to serve your traffic. Then, use exponential backoff to retry throttled requests. If you use the AWS SDK, then this logic is implemented by default. For more information, see Error retries and exponential backoff.
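
If you configure your own client, the following sketch shows one way to set up the AWS SDK for Python (Boto3) to retry throttled requests with backoff. The retry mode, attempt count, table name, and key schema are illustrative assumptions.

    import boto3
    from botocore.config import Config

    # Adaptive retry mode adds client-side rate limiting on top of the
    # standard exponential backoff with jitter. The attempt count is an
    # illustrative value.
    retry_config = Config(retries={"max_attempts": 10, "mode": "adaptive"})

    dynamodb = boto3.client("dynamodb", config=retry_config)

    # Throttled calls (ProvisionedThroughputExceededException) are now
    # retried automatically with backoff before the error is raised.
    response = dynamodb.get_item(
        TableName="my-table",                       # assumed table name
        Key={"pk": {"S": "example-partition-key"}}  # assumed key schema
    )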

Note: DynamoDB doesn't necessarily start throttling the table as soon as the consumed capacity per second exceeds the provisioned capacity. With the burst capacity feature, DynamoDB reserves a portion of the unused capacity for later bursts of throughput to handle usage spikes. For more information, see Provisioned capacity mode and How does Amazon DynamoDB handle spiky loads in short intervals?

You turned on AWS Application Auto Scaling for DynamoDB, but your DynamoDB table is throttled

AWS Application Auto Scaling isn't a suitable solution to address sudden spikes in traffic on DynamoDB tables. Application Auto Scaling initiates a scale-up only when two consecutive data points for consumed capacity units exceed the configured target utilization value within a one-minute span. In other words, Application Auto Scaling automatically scales up the provisioned capacity only when the consumed capacity is higher than the target utilization for two consecutive minutes.

A scale-down event is initiated when 15 consecutive data points in CloudWatch for consumed capacity are lower than the target utilization. After Application Auto Scaling initiates a scaling event, it invokes an UpdateTable API call. The API call might take a few minutes to update the provisioned capacity for your DynamoDB table or index. Because Application Auto Scaling requires consecutive data points where consumed capacity exceeds the target utilization before it scales up the table's provisioned capacity, any requests that exceed the provisioned capacity during this period are throttled. It's not a best practice to use Application Auto Scaling to handle spiky workloads in DynamoDB. Instead, switch to on-demand mode, as shown in the following sketch. For more information, see Managing throughput capacity automatically with DynamoDB auto scaling.
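
The following is a minimal sketch, with the AWS SDK for Python (Boto3), of switching an existing provisioned table to on-demand capacity mode. The table name is an assumption.

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Switch the table from provisioned mode to on-demand
    # (pay-per-request) mode. "my-table" is an assumed table name.
    dynamodb.update_table(
        TableName="my-table",
        BillingMode="PAY_PER_REQUEST",
    )

Note that DynamoDB limits how often a table can switch between capacity modes, so check the current restriction before you automate this change.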

You have a hot partition in your table

In DynamoDB, a partition key that doesn't have high cardinality can result in many requests that target only a few partitions. This results in a hot partition. A hot partition can cause throttling if that partition exceeds the per-second limits of 3,000 RCU and 1,000 WCU (or a combination of both).

To find the most accessed and throttled items in your table, use Amazon CloudWatch Contributor Insights. CloudWatch Contributor Insights is a diagnostic tool that provides a summarized view of your DynamoDB tables' traffic trends. Use this tool to identify the most frequently accessed partition keys and to continuously monitor the graphs for your table's item access patterns.
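
You can turn on Contributor Insights for a table from the DynamoDB console or programmatically. The following is a minimal sketch with the AWS SDK for Python (Boto3); the table name is an assumption.

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Turn on CloudWatch Contributor Insights for the table so that the
    # most accessed and most throttled keys show up in the table's
    # Contributor Insights rules. "my-table" is an assumed table name.
    dynamodb.update_contributor_insights(
        TableName="my-table",
        ContributorInsightsAction="ENABLE",
    )

    # Check the status of the rules that were created.
    status = dynamodb.describe_contributor_insights(TableName="my-table")
    print(status["ContributorInsightsStatus"])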

A hot partition can degrade the overall performance of your table. To avoid this poor performance, distribute the read and write operations as evenly as possible across your table. For more information, see Designing partition keys to distribute your workload and Choosing the right DynamoDB partition key.

Also, you can implement write sharding on the hot key to increase cardinality and allow the hot key to span multiple partitions. For more information, see Using write sharding to distribute workloads evenly. Use exponential backoff to retry throttled requests. If you use the AWS SDK, then this logic is implemented by default. For more information, see Error retries and exponential backoff.
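
As a sketch of write sharding, the following appends a calculated suffix to a hot partition key so that writes for that key spread across several partitions. The table name, attribute names, and shard count are assumptions for illustration.

    import hashlib
    import boto3

    TABLE_NAME = "my-table"   # assumed table name
    SHARD_COUNT = 10          # assumed number of shards for the hot key

    table = boto3.resource("dynamodb").Table(TABLE_NAME)

    def sharded_key(base_key, item_id):
        """Append a deterministic suffix so writes spread across partitions."""
        digest = hashlib.md5(item_id.encode("utf-8")).hexdigest()
        shard = int(digest, 16) % SHARD_COUNT
        return f"{base_key}#{shard}"

    def put_event(base_key, item_id, payload):
        table.put_item(
            Item={
                "pk": sharded_key(base_key, item_id),  # assumed partition key name
                "sk": item_id,                         # assumed sort key name
                **payload,
            }
        )

Reads for the sharded key must then query each shard (for example, "hot-key#0" through "hot-key#9") and merge the results in the application.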

When you expect high traffic, it's a best practice to increase the provisioned capacity to a high value. The increase in provisioned capacity increases the number of partitions in the backend.

Note: If you use the CloudWatch Contributor Insights tool for DynamoDB, you incur additional charges. For more information, see CloudWatch Contributor Insights for DynamoDB billing.

Your table's traffic exceeds your account throughput quotas

The table-level read throughput and table-level write throughput quotas apply at the account level in any AWS Region. These quotas apply to tables in both provisioned capacity mode and on-demand capacity mode. By default, the throughput quota placed on your table is 40,000 read request units and 40,000 write request units. If the traffic to your table exceeds this quota, then the table might be throttled.

To resolve this issue, use the Service Quotas console to increase the table-level read or write throughput quota for your account.
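
The following is a minimal sketch that uses the Service Quotas API through the AWS SDK for Python (Boto3) to find the DynamoDB table-level throughput quotas and request an increase. The quota name filter, quota code, and desired value are placeholders and assumptions; confirm the exact quota names and codes in the Service Quotas console.

    import boto3

    quotas = boto3.client("service-quotas")

    # List the DynamoDB quotas and print the ones related to table-level
    # throughput. The name match is an assumption; verify the exact quota
    # names in the Service Quotas console.
    paginator = quotas.get_paginator("list_service_quotas")
    for page in paginator.paginate(ServiceCode="dynamodb"):
        for quota in page["Quotas"]:
            if "Table-level" in quota["QuotaName"]:
                print(quota["QuotaName"], quota["QuotaCode"], quota["Value"])

    # Request an increase for a quota code found in the listing above.
    # The quota code and desired value are placeholders.
    quotas.request_service_quota_increase(
        ServiceCode="dynamodb",
        QuotaCode="L-XXXXXXXX",
        DesiredValue=80000.0,
    )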

Related information

Best practices for designing and using partition keys effectively
