Why does ECS binpack strategy on memory always scale up a new EC2 instance despite available resources?


I'm using ECS with an Auto Scaling group as a capacity provider to deploy tasks on EC2 (t3.micro, 2 vCPU, 1 GB memory). I've set the task placement strategy to binpack based on memory. However, ECS always scales out a new EC2 instance for each new task, even when an existing instance has enough CPU and memory available. As a result, there is only one task per EC2 instance. I would expect all tasks to be placed on a single instance as long as it has sufficient memory.

Here is what I've already ruled out:

  1. Port conflicts: The network mode is set to awsvpc in the ECS task definition, so each task gets its own ENI, which prevents port conflicts.
  2. EC2 storage: Each EC2 instance has 30 GB of storage (EBS gp3). My container is an nginx-based web app with 1 MB of static files, so storage is more than sufficient for running multiple containers.
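
For anyone checking the same thing: a quick way to see what ECS thinks is still available on each instance is describe-container-instances (the cluster name and ARN below are placeholders):

aws ecs list-container-instances --cluster my-cluster

aws ecs describe-container-instances \
  --cluster my-cluster \
  --container-instances <container-instance-arn> \
  --query 'containerInstances[].{instance:ec2InstanceId,remaining:remainingResources}'

The remainingResources field reports the CPU units, memory, and ports ECS still considers free on that instance, which is what placement decisions are evaluated against.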

The following configurations may be relevant:

Capacity provider configurations

capacity provider: autoscaling group
base: 0
weight: 100
target capacity: 100%
managed instance draining: true
managed instance scaling: true
scale in protection: false
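
If it helps to compare, this setup should roughly correspond to the following create-capacity-provider call (the ASG ARN and provider name are placeholders; note that base and weight are not properties of the capacity provider itself but of the capacity provider strategy set on the service):

aws ecs create-capacity-provider \
  --name my-capacity-provider \
  --auto-scaling-group-provider "autoScalingGroupArn=arn:aws:autoscaling:<region>:<account>:autoScalingGroup:...,managedScaling={status=ENABLED,targetCapacity=100},managedTerminationProtection=DISABLED,managedDraining=ENABLED"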

ECS service configurations

desired count: 1
placement strategy:
  type: binpack
  field: memory
scheduling strategy: REPLICA
service connect:
  enabled: true
  namespace: my_namespace
  services:
    - port name: web
      discovery name: hello
      client aliases:
        - port: 80
          dns name: hello
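
As a rough CLI equivalent (cluster, service name, subnets, and security groups are placeholders; the Service Connect block is omitted here since it goes into a longer --service-connect-configuration argument):

aws ecs create-service \
  --cluster my-cluster \
  --service-name hello-web \
  --task-definition web-task \
  --desired-count 1 \
  --capacity-provider-strategy capacityProvider=my-capacity-provider,base=0,weight=100 \
  --placement-strategy type=binpack,field=memory \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789],securityGroups=[sg-0123456789]}"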

ECS task definition

network mode: awsvpc
CPU: 256
Memory: 128
container definitions:
  - name: web
    image: nginx
    port mappings:
      - name: web
        container port: 80
        protocol: tcp
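
This should register roughly as follows (the family name is a placeholder):

aws ecs register-task-definition \
  --family web-task \
  --network-mode awsvpc \
  --requires-compatibilities EC2 \
  --cpu 256 --memory 128 \
  --container-definitions '[{"name": "web", "image": "nginx",
    "portMappings": [{"name": "web", "containerPort": 80, "protocol": "tcp"}]}]'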

Any insights or suggestions?

Additional information
I changed the instance type from t3.micro to t3.small (2 vCPUs, 2 GB memory) and deployed 4 ECS tasks. The ECS cluster scaled out to 2 EC2 instances, placing 2 tasks on each. With tasks that only reserve 128 MB of memory, a 2 GB instance should in principle hold far more than 2 tasks, even after accounting for memory reserved by the OS and the ECS agent.

1 Answer

Hi,

I would assume it is perhaps due to the baseline constraints on T-series instances, which can burst up to 100% but normally operate at their baseline. (I can't corroborate this with the AWS docs, unfortunately.)

You might want to test on a small non-T-series instance.

Update: I tried launching 8 tasks on a c7g.medium instance. It placed 4 tasks on a single node before scaling out another node for the other 4. That's a slight improvement compared to the T-series instances, but definitely not what I'd expect from binpack. There may be more underlying factors determining binpack behaviour.

--Syd

  • I changed the instance type to c7i.large (2 vCPU, 4 GB memory). Unfortunately, the ECS cluster still scaled out 2 instances and placed 2 tasks on each. Thanks for your advice.

  • I also cleared the container instance's memory cache/buffers (as shown by free -hb). It didn't solve the problem.

  • Can you also define the same limits at the container level (in addition to the task definition level)? After setting the container-level limits to 0.125 vCPU / 0.125 GB (memory hard limit), I was able to launch 8 tasks on one c7g.medium instance even though there were two instances running. I also tested with the default network mode. (There is a sketch of this change after these comments.)

  • Thank you for your assistance. I have set the container-level resource limits to 0.125 vCPU and 0.125 GB (hard limit). However, it still placed two tasks on each EC2 instance (c7i.large). I need to enable Service Connect, so the network mode has to stay awsvpc.

  • Can I ask for your configuration details for the capacity provider, Auto Scaling group, ECS service, and ECS task definition?
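
For reference, a minimal sketch of the container-level hard limits suggested in the comments above, assuming the task definition from the question (128 CPU units = 0.125 vCPU, memory is in MiB; the family name is a placeholder):

aws ecs register-task-definition \
  --family web-task \
  --network-mode awsvpc \
  --requires-compatibilities EC2 \
  --cpu 256 --memory 128 \
  --container-definitions '[{"name": "web", "image": "nginx",
    "cpu": 128, "memory": 128,
    "portMappings": [{"name": "web", "containerPort": 80, "protocol": "tcp"}]}]'

In the container definition, memory is a hard limit (the container is killed if it exceeds it), while the task-level values are what ECS reserves on the instance when placing the task.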
