
How can I optimize file transfer performance over Direct Connect?


I'm experiencing slow file transfer speeds over my AWS Direct Connect connection.

Resolution

Use the following troubleshooting steps for your use case.

Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.

Use Amazon CloudWatch metrics to check for Direct Connect connection overutilization and errors

You can use CloudWatch metrics to monitor Direct Connect connections and virtual interfaces. For Direct Connect dedicated connections, check the ConnectionBpsEgress and ConnectionBpsIngress metrics for values that exceed network port speeds. Check the ConnectionErrorCount metric for MAC-level errors. For more information on troubleshooting MAC-level errors, see the ConnectionErrorCount section in Direct Connect connection metrics.
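As a sketch, you can pull these metrics with the AWS CLI. The connection ID (dxcon-ffexample), time range, and period below are placeholders, and the command assumes credentials with CloudWatch read access:

```shell
# Sketch: 5-minute maximums of ConnectionBpsEgress for a dedicated connection.
# dxcon-ffexample, the time range, and the period are placeholder values.
aws cloudwatch get-metric-statistics \
  --namespace AWS/DX \
  --metric-name ConnectionBpsEgress \
  --dimensions Name=ConnectionId,Value=dxcon-ffexample \
  --start-time 2024-01-01T00:00:00Z \
  --end-time 2024-01-01T06:00:00Z \
  --period 300 \
  --statistics Maximum
```

Compare the returned maximums against your port speed to see whether the connection is saturated during transfer windows.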

For hosted connections, review the VirtualInterfaceBpsEgress and VirtualInterfaceBpsIngress metrics. You can only create one Direct Connect virtual interface for each hosted connection. These metrics are an estimate of the total bitrate of network traffic for the hosted connection.

For more information, see Viewing Direct Connect CloudWatch metrics.

Optimize performance when uploading large files to Amazon Simple Storage Service (Amazon S3)

To upload large files to Amazon S3, it's a best practice to use multipart uploads. If you use the AWS CLI, all high-level Amazon S3 commands, such as cp and sync, automatically perform multipart uploads for large files.

Use the following AWS CLI Amazon S3 configuration values:

  • max_concurrent_requests - The maximum number of concurrent requests. The default value is 10. Make sure that you have enough resources to support the maximum number of requests.
  • max_queue_size - The maximum number of tasks in the task queue.
  • multipart_threshold - The size threshold the CLI uses for multipart transfers of individual files.
  • multipart_chunksize - The size of each part that the AWS CLI uploads in a multipart transfer of an individual file. This setting lets you break a large file (for example, 300 MB) into smaller parts for faster uploads. The default value is 8 MB, and the minimum value that you can set is 5 MB.

Note: A multipart upload requires that a single file be uploaded in no more than 10,000 parts. Be sure that the chunk size that you set balances the part size and the number of parts.

  • max_bandwidth - The maximum bandwidth that will be consumed for uploading and downloading data to and from Amazon S3.
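These settings live in the AWS CLI configuration file and can also be applied with aws configure set. The values below are illustrative only, not recommendations:

```shell
# Illustrative values only; tune to your host resources and link capacity.
aws configure set default.s3.max_concurrent_requests 20
aws configure set default.s3.max_queue_size 10000
aws configure set default.s3.multipart_threshold 64MB
aws configure set default.s3.multipart_chunksize 16MB
aws configure set default.s3.max_bandwidth 50MB/s
```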

For more information, see Migrate small sets of data from on premises to Amazon S3 using AWS SFTP.
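The 10,000-part limit in the note above implies a minimum chunk size for a given file size. The following sketch uses a hypothetical 300 GiB file to show the ceiling-division calculation:

```shell
# Hypothetical example: find the smallest multipart_chunksize that keeps a
# 300 GiB file within the 10,000-part limit of a multipart upload.
file_size_bytes=$((300 * 1024 * 1024 * 1024))   # 322122547200 bytes
max_parts=10000
# Ceiling division so that a final partial part is still counted.
min_chunk_bytes=$(( (file_size_bytes + max_parts - 1) / max_parts ))
echo "$min_chunk_bytes"
```

The result here is roughly 31 MiB, well above the 5 MB minimum, so any chunk size at or above it keeps this upload under 10,000 parts.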

Performance tuning for Server Message Block (SMB) Windows file servers

To optimize network performance for Windows SMB file servers, the Server Message Block (SMB) 3.0 protocol must be negotiated between each client and file server. SMB 3.0 includes protocol improvements that increase performance for SMB file servers, including the following features:

  • SMB Direct - This feature detects RDMA-capable network interfaces on the file server and automatically uses Remote Direct Memory Access (RDMA). RDMA increases throughput and provides low latency and low CPU utilization.
  • SMB Multichannel - This feature allows file servers to use multiple network connections simultaneously and provides increased throughput.
  • SMB Scale-Out - This feature allows SMB 3.0 in cluster configurations to present a share on all nodes of a cluster in an active/active configuration. As a result, the maximum share bandwidth is the total bandwidth of all file server cluster nodes.

For SMB clients, use the robocopy multithreaded feature to copy files and folders to the file server over multiple parallel connections.
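A minimal sketch of such a copy, assuming hypothetical source and share paths (the /MT switch sets the thread count and defaults to 8 threads when given without a value):

```shell
:: Hypothetical paths; /E copies subdirectories, /MT:32 uses 32 parallel threads.
robocopy C:\data \\fileserver\share\data /E /MT:32
```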

You can also use Explicit Congestion Notification (ECN) and Large Send Offload (LSO) to improve throughput.

Check for packet loss on the Direct Connect connection

Packet loss occurs when transmitted data packets fail to arrive at their destination, resulting in network performance issues. Packet loss is caused by low signal strength at the destination, excessive system utilization, network congestion, and network route misconfigurations.

For more information, see How can I troubleshoot packet loss for my Direct Connect connection?

Isolate and diagnose network and application performance issues

You can use utilities such as iPerf3, tcpdump, and Wireshark to troubleshoot Direct Connect performance issues and analyze network results. Take note of the following settings that affect network throughput on a single TCP stream:

  • Receiver window size (RWS) - This indicates the maximum number of bytes that the receiver can accept without overflowing its buffers.
  • The sender's send buffer - This can limit the maximum number of bytes that are in flight (sent but not yet acknowledged). The sender can't discard unacknowledged bytes until it receives an acknowledgment, and unacknowledged bytes might have to be retransmitted after a timeout period.
  • The sender's maximum segment size (MSS) - The maximum number of bytes that a TCP segment can carry as payload. The smaller the MSS, the lower the network throughput.
  • The round-trip time (RTT) - The higher the RTT between the sender and receiver, the lower the achievable throughput for a single TCP stream.
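To see how the receive window and RTT interact, a rough single-stream throughput ceiling is window size divided by RTT. The values below (a 64 KiB window and a 50 ms RTT) are hypothetical:

```shell
# Rough single-TCP-stream throughput ceiling: window / RTT.
window_bytes=65536      # hypothetical 64 KiB receive window
rtt_ms=50               # hypothetical 50 ms round-trip time
# Convert to bits per second: bytes * 8 bits, RTT from ms to seconds.
throughput_bps=$(( window_bytes * 8 * 1000 / rtt_ms ))
echo "$throughput_bps"
```

That comes to about 10.5 Mbps regardless of port speed, which is why TCP window scaling and parallel streams matter on high-RTT paths.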

Tip: It's a best practice for the sender to initiate several parallel connections to the receiver during file transfers.
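For example, iPerf3 can emulate parallel transfers during testing. The server address below is a placeholder for an iPerf3 server on the far side of the connection:

```shell
# Placeholder server address; -P 8 opens 8 parallel TCP streams, -t 30 runs for 30 s.
iperf3 -c 198.51.100.10 -P 8 -t 30
```

Compare the aggregate throughput of parallel streams against a single stream (-P 1) to confirm whether per-stream limits are the bottleneck.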

For more information, see How can I troubleshoot Direct Connect network performance issues?


Related information

AWS Direct Connect features

Best practices for configuring network interfaces

AWS OFFICIAL
Updated 3 years ago