To manage a large number of simultaneous requests with your ECS service while staying within your IP limits, keep SQS for message queuing and EventBridge to kick off processing, but configure the ECS service to scale its task count dynamically based on queue depth. That way the service absorbs spikes by running more tasks while the backlog grows and scales back in as it drains; one way to set this up is sketched below.
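As a rough sketch (not a drop-in solution), you could do this with Application Auto Scaling: register the service's desired count as a scalable target, set MaxCapacity so the task count, and with it the ENI/IP usage, stays bounded, and attach a target-tracking policy on the queue's ApproximateNumberOfMessagesVisible metric. The cluster, service, and queue names and the target value below are placeholders you would replace with your own.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

SERVICE = "service/my-cluster/my-service"  # placeholder cluster/service names

# Register the ECS service's DesiredCount as a scalable target.
# MaxCapacity bounds how many tasks can run at once, which also bounds
# how many ENIs / IP addresses the service can consume.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=SERVICE,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=0,
    MaxCapacity=400,
)

# Target-tracking policy on SQS queue depth: scale out while the average
# number of visible messages stays above the target, scale in as it drains.
autoscaling.put_scaling_policy(
    PolicyName="scale-on-queue-depth",
    ServiceNamespace="ecs",
    ResourceId=SERVICE,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 10.0,  # illustrative: aim for ~10 visible messages on average
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "my-queue"}],
            "Statistic": "Average",
        },
        "ScaleOutCooldown": 60,
        "ScaleInCooldown": 300,
    },
)
```

A common refinement is to track a "backlog per task" metric (visible messages divided by running tasks) instead of raw queue depth, but the raw metric is enough to show the shape of the setup.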
You should also monitor and manage your IP usage to avoid hitting limits, which in practice means capping how far the service can scale out and optimizing how tasks are placed across your subnets. Keeping the processing event-driven maintains parallel execution, with new tasks picking up work as others complete. Finally, set up monitoring and alerts on task count and resource use (one example alarm is sketched below) so you can adjust the limits as your workload changes.
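For the alerting side, assuming Container Insights is enabled on the cluster (it publishes RunningTaskCount to the ECS/ContainerInsights namespace), a CloudWatch alarm can warn you before the running task count approaches your cap. The alarm name, threshold, and SNS topic ARN below are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when the service's running task count gets close to the MaxCapacity cap.
# Requires Container Insights on the cluster so the metric is published.
cloudwatch.put_metric_alarm(
    AlarmName="my-service-running-tasks-high",           # placeholder name
    Namespace="ECS/ContainerInsights",
    MetricName="RunningTaskCount",
    Dimensions=[
        {"Name": "ClusterName", "Value": "my-cluster"},   # placeholder
        {"Name": "ServiceName", "Value": "my-service"},   # placeholder
    ],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=3,
    Threshold=380,                                        # warn before the cap
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:region:account-id:capacity-alerts"],  # placeholder topic
)
```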
Thanks, Giovanni. To monitor and manage IP usage, I was initially thinking about using Lambda functions. However, my tasks are going to run for hours, so Lambda might not be the right way to go about it. Is there a way to configure the service so that no more than, say, 400 parallel tasks are triggered at a time? Or what would you suggest?