ElastiCache Serverless is a distributed in-memory data store that provides high-performance caching. The service monitors data storage usage automatically by sampling it several times per minute and bills on the hourly average, in GB-hrs. Regarding eviction: in Serverless, Redis uses a fixed LRU maxmemory policy. When data exceeds 32 GB per node, keys with a TTL set are evicted; if no TTL-enabled keys exist, writes fail with an OOM error. As for why you would pre-scale: pre-scaling lets you set minimum supported limits for ECPUs per second or data storage, ensuring your cache can handle sudden increases in load without performance degradation. See the documentation link below for details.
https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/Scaling.html
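The hourly GB-hrs averaging described above can be sketched as a small calculation (the per-minute sampling cadence and the function name here are illustrative, not the service's internals):

```python
def gb_hours(samples_gb, hours=1.0):
    """Average per-minute storage samples (in GB) over the period, then
    multiply by elapsed hours to get GB-hrs, mirroring how ElastiCache
    Serverless bills storage on an hourly average."""
    if not samples_gb:
        return 0.0
    return (sum(samples_gb) / len(samples_gb)) * hours

# A cache holding 2 GB for half an hour and 3 GB for the other half:
samples = [2.0] * 30 + [3.0] * 30   # one sample per minute
print(gb_hours(samples))            # 2.5 GB-hrs for that hour
```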
Amazon ElastiCache Redis Serverless does indeed store data in memory on AWS-managed servers. This in-memory storage is what enables the fast data access and processing that is a key feature of Redis.
Regarding scaling in ElastiCache Redis Serverless:
For scaling up, the service continuously monitors your cache's memory, compute, and network utilization and scales instantly. When you add more data, as in your example of growing from 2 GB to 3 GB, ElastiCache Serverless handles the increase automatically. It doesn't necessarily add a new node; rather, it adjusts the resources allocated to your cache to accommodate the additional data. The scaling happens seamlessly, without downtime or performance degradation.
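You can watch this scaling happen yourself through CloudWatch. Here is a minimal sketch that builds a query for the cache's stored bytes; the metric and dimension names (`BytesUsedForCache`, `CacheClusterId`) are assumptions on my part, so check the ElastiCache Serverless metrics documentation for your cache:

```python
import datetime

def storage_metric_query(cache_name, minutes=60):
    """Build a CloudWatch GetMetricStatistics request for the bytes
    stored in the cache over the last `minutes` minutes, one average
    datapoint per minute."""
    end = datetime.datetime.now(datetime.timezone.utc)
    start = end - datetime.timedelta(minutes=minutes)
    return {
        "Namespace": "AWS/ElastiCache",
        "MetricName": "BytesUsedForCache",  # assumed serverless metric name
        "Dimensions": [{"Name": "CacheClusterId", "Value": cache_name}],
        "StartTime": start,
        "EndTime": end,
        "Period": 60,                       # one datapoint per minute
        "Statistics": ["Average"],
    }

# With AWS credentials configured, you could then run:
# import boto3
# cw = boto3.client("cloudwatch")
# resp = cw.get_metric_statistics(**storage_metric_query("my-cache"))
```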
Data persistence in ElastiCache Serverless depends on your configuration. By default, Redis is an in-memory data store, but ElastiCache offers options for data persistence if needed.
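As a sketch of one such persistence option, a serverless cache can be created with automatic daily snapshots. The parameter names below follow boto3's `create_serverless_cache` API as I understand it; the cache name, retention, and snapshot time are made-up values:

```python
def snapshot_settings(cache_name, retention_days=7, daily_time="03:00"):
    """Parameters for creating a serverless Redis cache that keeps
    daily snapshots. SnapshotRetentionLimit and DailySnapshotTime are
    assumed to match the boto3 create_serverless_cache signature."""
    return {
        "ServerlessCacheName": cache_name,
        "Engine": "redis",
        "SnapshotRetentionLimit": retention_days,  # keep this many daily snapshots
        "DailySnapshotTime": daily_time,           # snapshot window start, UTC HH:MM
    }

# With AWS credentials configured:
# import boto3
# ec = boto3.client("elasticache")
# ec.create_serverless_cache(**snapshot_settings("my-cache"))
```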
For scaling down, ElastiCache Serverless also automatically adjusts resources based on your actual usage. If you remove data, reducing from 3 GB to 2.5 GB in your example, the service will optimize the resources allocated to your cache. However, it's important to note that the scaling is not instantaneous and may not be as granular as removing exactly 500 MB worth of resources. The service aims to balance performance and cost-efficiency.
ElastiCache Serverless does scale down, but it's designed to do so gradually to avoid unnecessary fluctuations that could impact performance. The service continuously monitors your usage and adjusts resources accordingly, aiming to provide just the right amount of capacity for your workload.
Pre-scaling in Redis Serverless is beneficial for handling anticipated spikes in traffic or data volume. While the service can scale rapidly, pre-scaling ensures that resources are available immediately when needed, which can be crucial for applications with sudden, predictable surges in demand. Even if the cache scales down later, pre-scaling helps maintain performance during peak times.
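A pre-scaling request of this kind can be sketched with boto3's `modify_serverless_cache` and its `CacheUsageLimits` parameter. The field names below reflect my reading of that API, and the minimum values are purely illustrative; verify them against the boto3 reference before use:

```python
def prescale_limits(min_storage_gb, min_ecpu_per_second):
    """CacheUsageLimits payload that raises the cache's minimum
    provisioned capacity ahead of an expected traffic spike, so
    resources are already in place when the surge arrives."""
    return {
        "DataStorage": {"Minimum": min_storage_gb, "Unit": "GB"},
        "ECPUPerSecond": {"Minimum": min_ecpu_per_second},
    }

# With AWS credentials configured:
# import boto3
# ec = boto3.client("elasticache")
# ec.modify_serverless_cache(
#     ServerlessCacheName="my-cache",
#     CacheUsageLimits=prescale_limits(10, 5000),
# )
```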
ElastiCache Serverless is designed to provide a balance between performance, cost-efficiency, and ease of use, automatically handling scaling operations to match your application's needs without requiring manual intervention.
Sources
Valkey-, Memcached-, and Redis OSS-Compatible Cache – Amazon ElastiCache FAQs - AWS
Unlock on-demand, cost-optimized performance with Amazon ElastiCache Serverless | AWS Database Blog
ElastiCache Serverless Scaling Characteristics | AWS re:Post