1 Answer
You should use the SQS Lambda trigger to consume the messages from the queue. Concurrency will automatically scale from 0 up to 1,000, based on the number of messages in the queue. If you have other functions in the account and you are concerned that this function's concurrency will prevent them from running, you have two options:
- Request a limit increase for the account's concurrency quota.
- Set Reserved Concurrency on the function to limit its maximum concurrency.
I would recommend the first option, unless you want to limit concurrency because the function calls some downstream service which has limited resources.
Appreciate the response. Regarding "the function calls some downstream service which has limited resources": do you mean this functionality allows us to limit how many instances of a particular Lambda call some downstream service? Can you please give more details about such a "downstream service" and its "limited resources"? Thank you!
An example would be a downstream database, used by the Lambda function, that can accept up to 100 connections. If you let the function scale freely, it may reach 1,000 concurrent invocations, which would open 1,000 connections. To prevent that, you can set Reserved Concurrency on the function to 100, which ensures you will never have more than 100 concurrent invocations. (Specifically in the case of a database, you may use RDS Proxy to overcome this without limiting concurrency.)
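The arithmetic behind this can be sketched with a small helper, assuming the function holds one database connection per concurrent execution (a common pattern, not stated in the thread):

```python
# Back-of-envelope sketch: peak downstream connections opened by a Lambda
# function that holds one connection per concurrent execution.
from typing import Optional

def peak_connections(queue_driven_concurrency: int,
                     reserved_concurrency: Optional[int] = None) -> int:
    """Without a cap, connections can climb to whatever concurrency the
    queue drives; Reserved Concurrency clamps the peak."""
    if reserved_concurrency is None:
        return queue_driven_concurrency
    return min(queue_driven_concurrency, reserved_concurrency)

# Uncapped: a busy queue can drive 1000 concurrent invocations, i.e.
# 1000 connections, far beyond a 100-connection database.
# Capped at 100: never more than 100 connections.
```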
Hi, I am working on a similar requirement where I am expecting 3 million JSON messages per day, each approximately 2 KB. These messages need to be stored on S3 for further processing and then moved to Snowflake for reporting.
Will Lambda with concurrency be enough for the processing? Scheduled or triggered? Does the AWS SDK support reading messages from a standard queue in batches of 10,000?
Any suggestions/recommendations are welcome.
Thanks, Satyen.
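A back-of-envelope calculation for the numbers above, with one loud assumption: the 1-second average invocation time is invented for illustration, so measure your own function. Note the SDK's `ReceiveMessage` call returns at most 10 messages per request; larger batches require the Lambda event source mapping with a batching window:

```python
# Back-of-envelope sizing for the workload described above:
# 3 million ~2 KB JSON messages per day.

MESSAGES_PER_DAY = 3_000_000
SECONDS_PER_DAY = 24 * 60 * 60

avg_rate = MESSAGES_PER_DAY / SECONDS_PER_DAY   # ~34.7 msg/s on average
avg_throughput_kb = avg_rate * 2                # ~69.4 KB/s on average

# With a batch of 10 messages per invocation and an ASSUMED 1 s average
# invocation duration (hypothetical -- measure your own function):
BATCH_SIZE = 10
INVOCATION_SECONDS = 1.0
needed_concurrency = avg_rate / BATCH_SIZE * INVOCATION_SECONDS  # ~3.5

print(f"{avg_rate:.1f} msg/s -> ~{needed_concurrency:.1f} "
      f"concurrent invocations on average")
```

Even allowing for traffic peaks well above the daily average, this workload sits far below Lambda's default 1,000-concurrency scaling limit.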