There could be several reasons why the performance of your EKS cluster might decrease over time. Here are some possibilities:
- Network throttling on t3.xlarge instances: AWS EC2 instances come with a defined network performance capacity, expressed as a maximum bandwidth. For the t3.xlarge instance type, network performance is listed as "Up to 5 Gbps". The "Up to" wording matters: 5 Gbps is a burst figure, and sustained traffic is throttled down to a lower baseline. If your tests generate a large amount of sustained network traffic, hitting this limit could cause a decrease in performance.
- Burstable performance instances (CPU credits): AWS t3 instances are burstable performance instances, meaning they provide a baseline level of CPU performance with the ability to burst above it. If you exhaust your CPU credits, performance can degrade over the course of a test run. You can check your CPU credit balance in the EC2 console.
- Garbage collection or JVM heap size in JMeter: JMeter is a Java application. Java applications allocate memory from a heap, and when the heap fills up, the JVM performs garbage collection, which can slow the application down. Check the JVM heap size settings for your JMeter instances.
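The first two checks above can be scripted with the AWS CLI. A minimal sketch, assuming GNU `date` and an EKS worker node whose instance ID you substitute for the placeholder:

```shell
#!/usr/bin/env bash
# Placeholder instance ID -- replace with one of your EKS node's IDs.
INSTANCE_ID="i-0123456789abcdef0"
START="$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)"
END="$(date -u +%Y-%m-%dT%H:%M:%SZ)"

# CPU credit balance over the last hour (t2/t3 instances only).
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUCreditBalance \
  --dimensions Name=InstanceId,Value="$INSTANCE_ID" \
  --start-time "$START" --end-time "$END" \
  --period 300 --statistics Average

# Bytes sent per 5-minute period. Multiply a Sum by 8 and divide by 300
# to approximate Gbps, then compare against the instance's bandwidth limit.
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name NetworkOut \
  --dimensions Name=InstanceId,Value="$INSTANCE_ID" \
  --start-time "$START" --end-time "$END" \
  --period 300 --statistics Sum
```

On the node itself, the ENA driver also exposes allowance counters such as `bw_out_allowance_exceeded` via `ethtool -S eth0`, which show directly whether traffic shaping kicked in. For the JMeter heap, the `jmeter` startup script honors a `HEAP` environment variable (e.g. `HEAP="-Xms1g -Xmx4g"`).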
To debug this issue, you can start by checking the metrics for your EC2 instances and EKS nodes in CloudWatch to see if there are any obvious resource shortages. You can also look at the logs for your JMeter pods to see if there are any error messages or warnings.
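For the node and pod side, a few quick checks from `kubectl`; this sketch assumes metrics-server is installed and that the JMeter pods run in a namespace called `loadtest` with an `app=jmeter` label (both are placeholders for your own setup):

```shell
kubectl top nodes                                   # CPU/memory pressure per node
kubectl top pods -n loadtest                        # per-pod resource usage
kubectl logs -n loadtest -l app=jmeter --tail=200   # recent JMeter output
kubectl describe node <node-name> | grep -A 6 Conditions   # MemoryPressure / DiskPressure flags
```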
If you suspect network throttling is the issue, you can try a larger EC2 instance type with a higher network performance limit. If you suspect the issue is related to CPU credits, you could switch to unlimited bursting, or to a non-burstable instance type.
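Switching a t3 instance to unlimited CPU-credit mode does not require resizing it. A sketch using the AWS CLI, with a placeholder instance ID (for EKS managed node groups, the equivalent setting would go in the node group's launch template so new nodes inherit it):

```shell
# Enable "unlimited" CPU credits on an existing t3 instance.
aws ec2 modify-instance-credit-specification \
  --instance-credit-specifications \
  "InstanceId=i-0123456789abcdef0,CpuCredits=unlimited"

# Verify the change took effect.
aws ec2 describe-instance-credit-specifications \
  --instance-ids i-0123456789abcdef0
```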
Finally, if none of that works, you can also consider reaching out to AWS Support for assistance in debugging this issue. They might be able to provide more detailed insights based on your specific configuration and workload.
Ivan Casco, thank you for answering my question. But I performed the same benchmark testing in March 2023 and didn't face this issue. It was the same cluster in the same AWS region. All the test parameters are identical; it's just that I didn't hit the bandwidth throttling issue like I do now.
Also, t3.xlarge can burst up to 5 Gbps, so I don't think I hit the bandwidth limit. The pods also didn't run out of memory, and no memory alerts were observed.
Did something change between March and June 2023? Could the AMI version or the Amazon VPC CNI version be causing this issue?