Hi again,
After some investigation and debugging, we identified that the counter increase is related to the TTL. Here’s what is happening:
- The first request is an INCRBY with the counter value (this works as expected).
- The second request is a PEXPIRE with the TTL (sketched below). According to the AWS documentation, this command is not supported; however, after it executes, the key's value becomes the TTL instead of the INCRBY result.
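For reference, the sequence looks roughly like this (a simplified illustration using redis-py; the endpoint, key name, and values are placeholders, not our actual plugin code):

```python
import redis

# Placeholder endpoint; the real plugin connects to the MemoryDB cluster endpoint.
client = redis.Redis(host="clustercfg.example.memorydb.amazonaws.com", port=6379, ssl=True)

# Request 1: increment the rate-limit counter. This works as expected.
count = client.incrby("ratelimit:consumer123", 1)
print(count)  # previous value + 1

# Request 2: set the TTL in milliseconds with PEXPIRE.
# On the Multi-Region cluster, this is where the key's value ends up
# equal to the TTL instead of the INCRBY result.
client.pexpire("ratelimit:consumer123", 60000)
print(client.get("ratelimit:consumer123"))
```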
This behavior raises a couple of questions:
- If PEXPIRE is not supported, why does it modify the counter’s key value?
- Since all unsupported commands appear to be related to TTL, how does MemoryDB manage key expiration internally?
Regards,
The continuous increase in your counter value while no requests are being made suggests there may be an issue with your Multi-Region MemoryDB cluster configuration or replication process.
This behavior could be caused by replication lag between your regions. When using Multi-Region clusters, MemoryDB publishes a metric called MultiRegionClusterReplicationLag which shows the elapsed time between when an update is written to one regional cluster and when it's written to another. During normal operation, this metric should remain fairly constant. An elevated value could indicate that updates aren't propagating properly between regions, potentially causing duplicate writes or unexpected behavior.
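If it helps, the metric can also be pulled programmatically. Here is a minimal sketch using boto3 (the region, cluster name, and dimension name are assumptions to adapt; verify the exact values against the MemoryDB monitoring documentation):

```python
import boto3
from datetime import datetime, timedelta, timezone

# Minimal sketch: fetch MultiRegionClusterReplicationLag for the last hour.
# The namespace, dimension name, and cluster name below are assumptions;
# check the MemoryDB monitoring docs for the exact values for your setup.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/MemoryDB",
    MetricName="MultiRegionClusterReplicationLag",
    Dimensions=[{"Name": "ClusterName", "Value": "my-multi-region-cluster"}],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=300,
    Statistics=["Average", "Maximum"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"], point["Maximum"])
```

A steadily rising value here would indicate that replication between the regional clusters is falling behind.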
Another possibility is memory imbalance or uneven key distribution across your shards. Even though you've configured only one shard per cluster, there might be issues with how the data is being distributed or processed within that shard. This could be due to:
- Uneven key distribution
- "Hot" keys or shards
- Increased client output buffers
- Potential application logic issues with your rate limiting plugin
To troubleshoot this issue:
- Check the MultiRegionClusterReplicationLag metric in CloudWatch to see if there are replication issues between your regions.
- Verify that your rate limiting plugin is correctly configured for a Multi-Region environment.
- Examine if there are any automated processes or services that might be incrementing your counter without your knowledge.
- Consider implementing TTL (Time To Live) settings on your keys to prevent unbounded growth (a rough sketch follows after this list).
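On the TTL point, a common fixed-window pattern is to set the expiration only when the key is first created, so the window is not refreshed on every request. A rough sketch using redis-py (the endpoint, key naming, and window length are illustrative, and you should confirm which expiration commands are actually supported on your Multi-Region cluster before relying on this):

```python
import redis

# Placeholder endpoint for illustration.
client = redis.Redis(host="clustercfg.example.memorydb.amazonaws.com", port=6379, ssl=True)

def hit(consumer_id: str, window_seconds: int = 60) -> int:
    """Increment the per-consumer counter and attach a TTL only when the key is new."""
    key = f"ratelimit:{consumer_id}"
    count = client.incrby(key, 1)
    if count == 1:
        # First hit in this window: set the TTL once so it is not refreshed
        # on every request. Verify EXPIRE support on your cluster type first.
        client.expire(key, window_seconds)
    return count
```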
For your use case of shared rate limiting across regions, a Multi-Region MemoryDB cluster is indeed an appropriate approach as it allows for consistent data access across different geographical locations. However, you may need to adjust your implementation to account for the replication behavior between regions.
Sources
Monitoring MemoryDB Multi-Region - Amazon MemoryDB
Troubleshoot memory imbalance in ElastiCache self-managed cluster | AWS re:Post