T3a instances show higher CPU usage than m5a instances, despite the same baseline and hardware


We recently switched instance types: we used to run m5a.large instances and now run t3a.large.

What concerns us is that the t3a.large instances show higher CPU usage compared to m5a.large, even though we run the same applications. See:

  • m5a.large: around 15% during daily peak times
  • t3a.large: around 30% during daily peak times (twice as much!)

According to AWS (link), both types use the same hardware:

  • AMD EPYC 7000 series processors (AMD EPYC 7571) with an all core turbo clock speed of 2.5 GHz

And for t3a, the following is mentioned:

  • Burstable CPU, governed by CPU Credits, and consistent baseline performance

So they use the same hardware, and t3a should deliver consistent baseline performance. (Since t3a.large has 2 vCPUs, each with a 30% baseline (60% aggregate), we are not consuming CPU credits. Credits are not relevant here anyway.)
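For reference, the credit arithmetic behind that 60% figure can be sketched as follows. The 30%-per-vCPU baseline and the definition of a CPU credit (one vCPU at 100% for one minute) are taken from AWS's documentation for burstable instances; verify the current values for t3a.large there:

```python
# Sketch of the t3a.large baseline/credit math (figures assumed from
# AWS's burstable-instance documentation; check the current docs).

VCPUS = 2
BASELINE_PER_VCPU = 0.30  # 30% baseline utilization per vCPU

# Aggregate baseline across the whole instance:
aggregate_baseline = VCPUS * BASELINE_PER_VCPU  # 0.60 -> 60%

# One CPU credit = one vCPU at 100% utilization for one minute, so
# credits earned per hour = baseline fraction * vCPUs * 60 minutes.
credits_per_hour = BASELINE_PER_VCPU * VCPUS * 60

print(f"Aggregate baseline: {aggregate_baseline:.0%}")
print(f"Credits earned per hour: {credits_per_hour:.0f}")
```

As long as average utilization stays at or below the aggregate baseline, the instance earns credits at least as fast as it spends them, which is why sustained usage under 60% should not drain the credit balance.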

So the question is: why is t3a.large using twice as much CPU? We were not expecting this, and we would not have bought t3a.large instances had we known.

  • But how does this explain that t3a.large and m5a.large show different CPU usage at baseline (not during a burst)? It is the same hardware, and we run the same applications.
