How do I benchmark network throughput between Amazon EC2 Linux instances in the same VPC?
I want to measure the network bandwidth between Amazon Elastic Compute Cloud (Amazon EC2) Linux instances that are in the same Amazon Virtual Private Cloud (Amazon VPC).
Resolution
Define the instance type, size, and configuration to use to test network throughput
Even when instances are in the same VPC, multiple factors can cause significant differences in network performance. It's a best practice to regularly evaluate and baseline the network performance of your environment to improve application performance. Network performance tests provide valuable insight to help you determine the best EC2 instance types, sizes, and configurations for your needs.
Modify your instances for better network performance
Keep instances close together
Make sure that your instances are physically close to each other. Instances that are closer together provide better network performance, and instances that are farther apart experience higher network latency. Instances in the same Availability Zone or AWS Region have better network throughput than EC2 instances in different Availability Zones or Regions. In the following scenarios, instances are progressively farther away from each other:
- Instances in the same Availability Zone in the same Region
- Instances in different Availability Zones in the same Region
- Instances in different Regions on the same continent
- Instances in different Regions on different continents
Increase your instance's MTU
Increase your maximum transmission unit (MTU). All EC2 instance types support an MTU of 1500 bytes. All current generation instances and the previous generation C3, G2, I2, M3, and R3 instances also support jumbo frames, which allow an MTU of up to 9001 bytes. However, instances that support jumbo frames can be limited to 1500 MTU in certain scenarios.
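For example, you can check the current MTU and, on an instance that supports jumbo frames, raise it with the ip command. The interface name eth0 is an assumption; newer instance types often name the interface ens5, and the setting doesn't persist across reboots:
$ ip link show eth0                      # displays the current MTU for the interface
$ sudo ip link set dev eth0 mtu 9001     # sets the jumbo frame MTU until the next reboot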
Increase the size of your instance
Increase your instance size. Larger instance sizes for an instance type typically provide better network performance than smaller instance sizes of the same type.
Use enhanced networking support for Linux
Use enhanced networking. Enhanced networking provides higher performance and consistently lower latency between instances. All current generation instances use the Elastic Network Adapter (ENA) or ENA Express to activate enhanced networking by default.
Previous generation instance types that support enhanced networking might require additional configuration to use it.
For more information, see How do I turn on and configure enhanced networking on my EC2 instances?
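As a quick check, you can confirm from the instance itself that the ENA driver is in use, or query the EnaSupport attribute with the AWS CLI. The interface name and instance ID are placeholders:
$ ethtool -i eth0       # the driver field reports "ena" when ENA is active
$ aws ec2 describe-instances --instance-ids i-0123456789abcdef0 --query "Reservations[].Instances[].EnaSupport"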
Put your instances in placement groups
Launch interdependent instances into a cluster placement group to meet your workload's needs. A cluster placement group provides full-bisection bandwidth and low latency between instances, and supports network speeds of up to 100 Gbps depending on the instance type.
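For example, the following AWS CLI commands sketch how you might create a cluster placement group and launch instances into it. The group name and AMI ID are placeholders for your environment:
$ aws ec2 create-placement-group --group-name perf-test-group --strategy cluster
$ aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type c5n.18xlarge --count 2 --placement GroupName=perf-test-group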
Use a network I/O credit mechanism to allocate network bandwidth
To check whether your instance type can use network I/O credits to burst beyond its baseline bandwidth, see Network specifications.
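If your instance uses the ENA driver, you can also check whether a test exceeded the instance's bandwidth or packet allowances. This assumes the interface name eth0:
$ ethtool -S eth0 | grep allowance_exceeded    # nonzero counters indicate traffic shaped at the instance level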
Set up your test instances
Complete the following steps:
- Launch two test Linux instances.
- Verify that the instances support enhanced networking for Linux and are in the same VPC.
- (Optional) If you perform network testing between instances that don't support jumbo frames, then set the network MTU on your instance. To verify the MTU of the path between the instances, see the ping check after this list.
- Use SSH to connect to the instances to verify that you can access them.
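To verify the MTU of the network path between the instances, you can send a non-fragmenting ping. A payload of 8973 bytes plus 28 bytes of ICMP and IP headers corresponds to a 9001-byte MTU; the IP address is a placeholder for your second instance:
$ ping -M do -s 8973 172.31.30.41    # succeeds only if the path supports a 9001-byte MTU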
Install the iperf3 network benchmark tool on both instances
In some distributions, such as Amazon Linux, iperf3 is part of the Extra Packages for Enterprise Linux (EPEL) repository. To turn on the EPEL repository, see How do I turn on the EPEL repository for my Amazon EC2 instance that runs CentOS, RHEL, or Amazon Linux?
For more information about the iperf3 tool, see iperf2/iperf3 on the ESnet website.
Use SSH to connect to your Linux instances. Then, run one of the following commands for your operating system (OS) to install iperf3.
Red Hat Enterprise Linux (RHEL) 9:
$ sudo dnf -y install iperf3
Debian or Ubuntu:
$ sudo apt-get install -y iperf3
CentOS 6/7:
$ sudo yum -y install epel-release && sudo yum -y install iperf3
Amazon Linux 2023:
$ sudo yum -y install iperf3
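After installation, confirm that the tool is available on both instances:
$ iperf3 --version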
Test TCP network performance between the instances
By default, iperf3 communicates over port 5201 when it tests TCP performance. However, you can use the -p switch to change the port; the examples in this article use port 5001. Make sure that you configure your security groups to allow communication over the port that iperf3 uses.
To configure the first instance as a server to listen on a specific TCP port, run the following command:
$ sudo iperf3 -s -p 5001
Note: If you choose to change the port, then replace 5001 with your port number.
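If your security group doesn't already allow the test port, you can add an inbound rule with the AWS CLI. The security group ID and CIDR range are placeholders for your environment; for the UDP test later in this article, add a similar rule with --protocol udp:
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 5001 --cidr 172.31.0.0/16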
Configure the second instance as a client, and then run a test against the server with the relevant parameters. For example, the following command initiates a TCP test against a server instance with 40 parallel connections:
$ sudo iperf3 -c 172.31.30.41 -p 5001 --parallel 40 -i 1 -t 2
The output shows the interval, the data that's transferred in each client stream, and the bandwidth that each client stream uses. The following iperf3 output shows test results for two c5n.18xlarge EC2 Linux instances that were launched in a cluster placement group. The total transmitted bandwidth across all connections is 97.6 Gbps.
Example output:
------------------------------------------------------------------------------------
Client connecting to 172.31.30.41, TCP port 5001
TCP window size: 975 KByte (default)
------------------------------------------------------------------------------------
[  8] local 172.31.20.27 port 49498 connected with 172.31.30.41 port 5001
[ 38] local 172.31.20.27 port 49560 connected with 172.31.30.41 port 5001
[ 33] local 172.31.20.27 port 49548 connected with 172.31.30.41 port 5001
[ 40] local 172.31.20.27 port 49558 connected with 172.31.30.41 port 5001
[ 36] local 172.31.20.27 port 49554 connected with 172.31.30.41 port 5001
[ 39] local 172.31.20.27 port 49562 connected with 172.31.30.41 port 5001
...
[SUM]  0.0- 2.0 sec  22.8 GBytes  97.6 Gbits/sec
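Throughput can differ by direction. To measure traffic that flows from the server to the client without swapping roles, you can rerun the client with the iperf3 -R (reverse) flag:
$ sudo iperf3 -c 172.31.30.41 -p 5001 --parallel 40 -i 1 -t 2 -R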
Test UDP network performance between the instances
By default, iperf3 also communicates over port 5201 when it tests UDP performance. However, you can use the -p switch to change the port; the following examples continue to use port 5001. Make sure that you configure your security groups to allow UDP traffic over the port that iperf3 uses.
Note: The default for UDP is 1 Mbps unless you specify a different bandwidth.
To configure the first instance as a server that listens on a specific port, run the following command. In iperf3, the client selects UDP with the -u flag, so the server command doesn't require a protocol option:
$ sudo iperf3 -s -p 5001
Note: If you choose to change the port, then replace 5001 with your port number.
Configure the second instance as a client, and then run a test against the server with the relevant parameters. The following example runs a UDP test against the server instance with the -b parameter set to 5g. The -b parameter changes the target bandwidth from the UDP default of 1 Mbps to 5 Gbps, which is the maximum network performance that a c5n.18xlarge instance can provide for a single traffic flow within a VPC:
$ sudo iperf3 -c 172.31.1.152 -u -p 5001 -b 5g
Note: UDP is connectionless and doesn't have the congestion control algorithms that TCP has. When you test with iperf3, the bandwidth that you get from UDP might be lower than the bandwidth that you get from TCP.
Example output:
$ sudo iperf3 -c 172.31.30.41 -u -b 5g
------------------------------------------------------------------------------------
Client connecting to 172.31.30.41, UDP port 5001
Sending 1470 byte datagrams, IPG target: 2.35 us (kalman adjust)
UDP buffer size: 208 KByte (default)
------------------------------------------------------------------------------------
[  3] local 172.31.20.27 port 39022 connected with 172.31.30.41 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  5.82 GBytes  5.00 Gbits/sec
[  3] Sent 4251700 datagrams
[  3] Server Report:
[  3]  0.0-10.0 sec  5.82 GBytes  5.00 Gbits/sec  0.003 ms 1911/4251700 (0.045%)
[  3] 0.00-10.00 sec  1 datagrams received out-of-order
The preceding example output shows the following values:
- Interval (time)
- Amount of data transferred
- Bandwidth achieved
- Jitter (the deviation in time for the periodic arrival of datagrams)
- The number of lost datagrams and the total number of datagrams sent
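The preceding test used a single UDP flow, which is capped at 5 Gbps within the VPC. To estimate aggregate UDP throughput, you can run parallel streams with the -P flag. The target bitrate applies to each stream, so this sketch requests 40 Gbps in total:
$ sudo iperf3 -c 172.31.1.152 -u -p 5001 -b 5g -P 8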
Related information
Disk testing using iperf3 on the ESnet website
Network tuning on the ESnet website
Throughput tool comparison on the ESnet website
Iperf2 on the SourceForge website
iperf3 FAQ on the ESnet website