1 Answer
Hello!
Thanks for the thorough explanation of your concern. To start, there are a few reasons why you might be exceeding the GET request limit; I will list them below.
- The 5,500 GET/HEAD requests per second per prefix limit is a guideline, not a hard cutoff. S3 can absorb short bursts above it before throttling kicks in, so a large parallel read like yours can exceed 5,500 requests per second temporarily.
- The timeouts you see can be caused by eventual throttling or errors once you go over the limit. Even if the first burst succeeds, a spike of requests like that can have a cascading effect that leads to throttling (503 SlowDown responses) later on as S3 struggles to keep up; see the sketch after this list for making those throttles visible.
- Resource constraints in the Lambda function itself (CPU, memory, network bandwidth) or simply suboptimal code can cause timeouts even when S3 is performing well.
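To tell throttling apart from Lambda-side timeouts, here is a minimal sketch (not from the original answer; the bucket and key arguments are placeholders) that logs S3's SlowDown responses explicitly instead of letting them surface as generic failures:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def get_with_throttle_logging(bucket, key):
    """Read one object and surface S3 throttling explicitly.

    S3 signals request-rate throttling with a 503 "SlowDown" error code;
    logging it separately makes it easy to distinguish throttles from
    Lambda-side timeouts or resource limits.
    """
    try:
        return s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    except ClientError as err:
        if err.response["Error"]["Code"] == "SlowDown":
            print(f"S3 throttled GET for {key}")
        raise
```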
Summary: the published S3 limits are not absolute guarantees, and exceeding them temporarily is possible, but it leads to unreliable performance and throttling over time. I would optimize the parallel execution design to stay well below the documented limits for stable throughput. Retries with exponential backoff in the Lambda code and caching can help absorb spikes when they do happen, but the goal is to stay under the limits, as in the sketch below.
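As one way to apply that, here is a minimal sketch (my own illustration, not the original answer's code; the bucket name, key list, and worker count are assumptions) that caps in-flight GETs with a thread pool and lets botocore's adaptive retry mode apply exponential backoff when throttles do occur:

```python
from concurrent.futures import ThreadPoolExecutor

import boto3
from botocore.config import Config

# Adaptive retry mode adds client-side rate limiting plus exponential
# backoff with jitter on retryable errors such as SlowDown (503).
s3 = boto3.client(
    "s3",
    config=Config(retries={"max_attempts": 10, "mode": "adaptive"}),
)

BUCKET = "my-example-bucket"  # assumption: replace with your bucket
KEYS = [f"data/part-{i:05d}" for i in range(10_000)]  # assumption: your object keys
MAX_WORKERS = 64  # cap on in-flight GETs; tune so the sustained rate stays below the per-prefix limit

def fetch(key):
    # Read the whole object body; return its size so the caller can verify progress.
    body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
    return key, len(body)

with ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
    for key, size in pool.map(fetch, KEYS):
        print(key, size)
```

Bounding the worker count is the main lever here: with a fixed number of threads, the request rate is limited by per-request latency, so the function degrades gracefully instead of bursting far past the limit and then timing out.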
answered 4 months ago