How to increase the maximum DDB table throughput setting per partition


My team has been facing DynamoDB throttling issues recently. We learned that each partition on a table can serve up to 3,000 read request units, but some of our business use cases need more than 3,000 read request units per partition. How can we increase that maximum to 6,000 or 8,000 read request units per partition?

asked 19 days ago · 544 views
2 Answers

Amazon DynamoDB imposes certain limits on the throughput that can be served per partition. Specifically, each partition can handle up to 3,000 read capacity units (RCUs) and 1,000 write capacity units (WCUs). If your use case requires more than 3,000 RCUs per partition, you cannot directly increase this limit; instead, you need to address the issue through a combination of strategies to distribute the load across more partitions or optimize your access patterns.

Here are some strategies to handle high throughput requirements:

1. Increase the Number of Partitions

DynamoDB automatically partitions your data and workload based on your throughput settings and the size of your data. By increasing the overall provisioned throughput or switching to on-demand mode, you can cause DynamoDB to create more partitions, which spreads the read and write load across them (see the sketch after this list).

  • Provisioned Mode: Increase the provisioned throughput settings for your table. DynamoDB will then automatically redistribute your data across more partitions.
  • On-Demand Mode: DynamoDB automatically adapts to your traffic volume, managing partitioning behind the scenes. If you experience consistent high traffic, switching to on-demand mode might help.
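As a minimal sketch of what this looks like in code, the table's provisioned throughput or billing mode can be changed with the UpdateTable API. The table name and capacity values below are placeholders, and it assumes boto3 with suitable IAM permissions:

import boto3

dynamodb = boto3.client('dynamodb')

# Option A: raise provisioned throughput (values here are placeholders)
dynamodb.update_table(
    TableName='YourTableName',
    ProvisionedThroughput={
        'ReadCapacityUnits': 12000,
        'WriteCapacityUnits': 4000,
    },
)

# Option B: switch the table to on-demand mode instead. A table can only
# process one update at a time, so run one option or the other, not both.
# dynamodb.update_table(
#     TableName='YourTableName',
#     BillingMode='PAY_PER_REQUEST',
# )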

2. Optimize Your Partition Key Design

To ensure that your read and write operations are distributed evenly across partitions, you may need to redesign your partition key to avoid "hot" partitions. Some strategies include:

  • Composite Keys: Use a combination of attributes to create a more evenly distributed partition key.
  • Write Sharding: Append a random suffix or hash to the partition key to distribute writes more evenly.

For example, instead of having a partition key like user_id, you could use user_id#shard_id, where shard_id is a value between 1 and N to spread the load.
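A minimal write-sharding sketch under that scheme (the table, key, and attribute names below are placeholders): each write picks a random shard suffix so that one busy user_id is spread over several partition keys.

import boto3
import random

table = boto3.resource('dynamodb').Table('YourTableName')
NUM_SHARDS = 10

def put_event(user_id, timestamp, payload):
    # Spread writes for a single logical user across NUM_SHARDS partition keys
    shard_id = random.randint(1, NUM_SHARDS)
    table.put_item(Item={
        'PartitionKey': f"{user_id}#{shard_id}",
        'SortKey': timestamp,
        'Payload': payload,
    })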

3. Use DAX (DynamoDB Accelerator)

If your use case involves a high volume of read requests, consider using DynamoDB Accelerator (DAX). DAX is a fully managed, highly available, in-memory cache for DynamoDB that can significantly improve read performance by reducing the load on your DynamoDB table.
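With DAX, reads go through the DAX client instead of the regular DynamoDB endpoint. The sketch below assumes the Python DAX client (the amazon-dax-client package on PyPI) and uses a placeholder cluster endpoint:

from amazondax import AmazonDaxClient

# The endpoint below is a placeholder for your DAX cluster's endpoint
dax = AmazonDaxClient.resource(
    endpoint_url='daxs://your-cluster.xxxxxx.dax-clusters.us-east-1.amazonaws.com')
table = dax.Table('YourTableName')

# Reads are served from the DAX cache when possible; misses fall through to DynamoDB
item = table.get_item(Key={'PartitionKey': 'user1234#3'}).get('Item')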

4. Leverage Global Secondary Indexes (GSIs)

If your access patterns are varied and can benefit from different partition keys, you can create Global Secondary Indexes (GSIs) that let you query the table using different keys. This can help distribute the read load across multiple indexes.
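For instance, reads that can be answered by an index keyed on a different attribute can be sent to the GSI instead of the base table's partition key. The index and attribute names below are placeholders for illustration:

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource('dynamodb').Table('YourTableName')

# Query a GSI keyed on a different attribute (index/attribute names are placeholders)
response = table.query(
    IndexName='status-index',
    KeyConditionExpression=Key('status').eq('ACTIVE'),
)
items = response['Items']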

5. Batch Operations

Using batch operations can help minimize the impact of throttling by reducing the number of individual requests. For example, use BatchGetItem to retrieve multiple items in a single request instead of issuing individual GetItem requests.
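A rough sketch of fetching several items in one BatchGetItem call (table and key names are placeholders); note that any keys DynamoDB could not process come back in UnprocessedKeys and should be retried:

import boto3

dynamodb = boto3.resource('dynamodb')

# One round trip for up to 100 items instead of 100 separate GetItem calls
keys = [{'PartitionKey': f'user1234#{shard}'} for shard in range(1, 11)]
response = dynamodb.batch_get_item(
    RequestItems={'YourTableName': {'Keys': keys}}
)
items = response['Responses']['YourTableName']
unprocessed = response.get('UnprocessedKeys', {})  # retry these if non-empty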

6. Application-Level Caching

Implement application-level caching to reduce the number of read requests hitting your DynamoDB table. This can be done with in-memory caches such as Redis or Memcached.
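A minimal cache-aside sketch with Redis (the endpoint, key scheme, and TTL are placeholders); cached entries absorb repeat reads so they never reach DynamoDB:

import json
import boto3
import redis

table = boto3.resource('dynamodb').Table('YourTableName')
cache = redis.Redis(host='localhost', port=6379)  # placeholder endpoint

def get_user(user_id, ttl_seconds=60):
    # Serve from Redis when possible, fall back to DynamoDB and populate the cache
    cached = cache.get(user_id)
    if cached is not None:
        return json.loads(cached)
    item = table.get_item(Key={'PartitionKey': user_id}).get('Item')
    if item is not None:
        cache.setex(user_id, ttl_seconds, json.dumps(item, default=str))
    return item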

Example: Sharding Partition Keys

Here's an example of how you might implement read sharding to distribute read operations more evenly across partitions:

import boto3
import random

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('YourTableName')

def get_item(user_id):
    # Pick one of 10 shard suffixes at random; this assumes each item (or a
    # copy/aggregate of it) is written under every shard suffix, so any shard
    # can serve the read.
    shard_id = random.randint(1, 10)
    partition_key = f"{user_id}#{shard_id}"
    response = table.get_item(Key={'PartitionKey': partition_key})
    # get_item returns no 'Item' key when nothing matches the key
    return response.get('Item')

# For read operations
item = get_item('user1234')
print(item)

By spreading each user_id across multiple shard suffixes, you distribute the read load across multiple partitions and avoid throttling a single hot partition.

Conclusion

Directly increasing the maximum throughput per partition in DynamoDB is not possible due to inherent service limits. However, by leveraging the strategies above, you can manage and distribute your read and write throughput across more partitions, thereby mitigating throttling issues.

EXPERT
answered 19 days ago
  • This reads like GenAI


I would suggest you open a ticket with AWS Support (https://support.console.aws.amazon.com/) to discuss your specific requirements. They may be able to offer customized solutions or insights based on your use case.

As you know, DynamoDB offers an on-demand capacity mode where you don't need to specify provisioned capacity; it automatically scales based on the workload. This can be beneficial if your workload is unpredictable or if you have occasional spikes in traffic. However, it may be more expensive than provisioned capacity for steady-state workloads.

EXPERT
A_J
answered 19 days ago
