Hi Kerokero,
Answering your question: by default, for tasks running as part of an Amazon ECS service, the task placement strategy is spread, using the attribute ecs.availability-zone. This means that if you have 1 task and would like to run 3 instead, ECS will place the 2 new tasks into the other Availability Zones configured on the service, on a best-effort basis. This follows the Well-Architected Framework recommendation.
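For illustration, here is a minimal sketch of what that default looks like when written out explicitly with boto3 (the cluster, service, and task definition names are hypothetical placeholders):

```python
import boto3

ecs = boto3.client("ecs")

# Hypothetical names; substitute your own cluster/service/task definition.
# This makes the default behaviour explicit: spread tasks across AZs.
ecs.create_service(
    cluster="my-cluster",
    serviceName="my-service",
    taskDefinition="my-task:1",
    desiredCount=3,
    launchType="EC2",
    placementStrategy=[
        {"type": "spread", "field": "attribute:ecs.availability-zone"},
    ],
)
```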
If you are looking to run tasks on the same instance as much as possible, you need to change the placement strategy to binpack instead. With binpack, tasks are placed on container instances so as to leave the least amount of unused CPU or memory, which minimizes the number of container instances in use.
When this strategy is used and a scale-in action is taken, Amazon ECS chooses which task to terminate based on the amount of resources that would be left on the container instance afterwards: the task whose container instance would have the most available resources remaining is the one terminated.
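As a sketch, switching an existing service over to binpack could look like this with boto3; this assumes a recent ECS API, since older versions did not allow updating the placement strategy in place, and the names are again hypothetical:

```python
import boto3

ecs = boto3.client("ecs")

# Hypothetical cluster/service names. Recent ECS API versions allow
# changing the placement strategy on an existing service; on older
# versions you would recreate the service with the new strategy.
ecs.update_service(
    cluster="my-cluster",
    service="my-service",
    # Pack tasks onto as few instances as possible, judged by memory.
    placementStrategy=[{"type": "binpack", "field": "memory"}],
)
```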
Please take a read of my comment about ECS Fargate as well, since it seems you didn't have the right information while considering ECS on EC2.
Hi,
Why don't you use Fargate, the AWS-managed capacity option for ECS? It will manage the required capacity for you: you pay only for what you use, with no excess capacity to pay for.
See https://aws.amazon.com/fargate/
Best,
Didier
Hi Didier, I've decided not to use Fargate due to the following considerations:
- In scenarios involving heavy usage, I anticipate that Fargate costs would exceed those of EC2.
- I prefer not to have to reconnect to the database every time, as I understand that Fargate might establish and terminate connections to the database dynamically.
- Fargate seems to only support the awsvpc network mode. From what I gather, using the awsvpc network mode with Fargate may require additional NAT gateways if connectivity to external services is needed.
These are the points I've considered based on official reference documents and online articles. Please feel free to correct me if I'm mistaken.
Hi Kerokero,
- It really depends: how heavy are you thinking? Also, did you know that Fargate supports Graviton, which reduces cost by up to 40% with better performance? There is also the Spot option if it is not a critical application or it can tolerate downtime (see the sketch after this list).
- Fargate doesn't manage database connections; that is entirely up to the application. You can think of Fargate as a managed EC2 instance for which you don't need to be concerned about patching or AMI upgrades.
- There is no need for an additional NAT gateway. You need a NAT gateway or an internet gateway configured in your subnet, and that is true for both EC2 and Fargate.
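If you do evaluate Fargate, here is a minimal sketch of the Graviton and Spot options mentioned above, using boto3; all names, the image, and the subnet ID are hypothetical placeholders:

```python
import boto3

ecs = boto3.client("ecs")

# Graviton: target ARM64 in the Fargate task definition.
ecs.register_task_definition(
    family="my-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    runtimePlatform={"cpuArchitecture": "ARM64", "operatingSystemFamily": "LINUX"},
    containerDefinitions=[
        {"name": "app", "image": "my-image:latest", "essential": True},
    ],
)

# Spot: weight the service toward FARGATE_SPOT capacity.
ecs.create_service(
    cluster="my-cluster",
    serviceName="my-service",
    taskDefinition="my-app",
    desiredCount=2,
    capacityProviderStrategy=[
        {"capacityProvider": "FARGATE_SPOT", "weight": 1},
    ],
    networkConfiguration={
        "awsvpcConfiguration": {"subnets": ["subnet-0123456789abcdef0"]},
    },
)
```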
Hi Henrique,
Thank you for your response and comments. Regarding the task placement strategy you mentioned, I will test it to see if it aligns with my expectations.
Additionally, in response to your comments, please correct me if there are any mistakes.
Quickly answering your points:
Thank you for your prompt response.
Regarding the second point, should I understand that with Fargate the service keeps running continuously, thereby avoiding disconnection from the database and maintaining a persistent connection, just as when running on EC2?
Exactly, the same as ECS on EC2. The difference is that you don't need to manage the underlying OS. There are some other considerations you should be aware of, but none of them involve disconnecting or terminating the task when there is no ongoing connection.
Hi Henrique,
After several rounds of experimentation, adjusting the task placement strategy to binpack does indeed enable the same task to run on the same EC2 instance. However, I've come across some behavior that leaves me puzzled. Specifically, when I set the task count to 5, it triggers the launch of 3 EC2 instances, even though one EC2 instance has adequate resources to accommodate all 5 tasks. This has raised doubts in my mind about the binpack strategy. Could there be limitations inherent in binpack that lead to such behavior?
Expected outcome: if one instance has sufficient resources, all 5 tasks should be placed on the same EC2 instance: EC2 A -> 5 tasks
Actual outcome: 2 additional EC2 instances are launched and the 5 tasks are distributed across them: EC2 A -> 2 tasks, EC2 B -> 2 tasks, EC2 C -> 1 task
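One way to dig into this is to confirm which strategy the service is actually using and how the tasks actually landed. Here is a minimal sketch with boto3 (cluster and service names are hypothetical); if the output shows a spread rule listed before binpack, or the extra instances are being launched by capacity-provider/Auto Scaling rather than by task placement, that could explain the distribution:

```python
import boto3
from collections import Counter

ecs = boto3.client("ecs")

# Hypothetical names; substitute your cluster and service.
cluster, service = "my-cluster", "my-service"

# 1. Confirm the placement strategy that is actually in effect.
svc = ecs.describe_services(cluster=cluster, services=[service])["services"][0]
print("placementStrategy:", svc["placementStrategy"])

# 2. Count how many tasks landed on each container instance.
task_arns = ecs.list_tasks(cluster=cluster, serviceName=service)["taskArns"]
tasks = ecs.describe_tasks(cluster=cluster, tasks=task_arns)["tasks"]
print(Counter(t["containerInstanceArn"] for t in tasks))
```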