For EKS clusters using Fargate, obtaining CPU and memory utilization metrics for individual pods can be challenging, because the traditional Metrics Server add-on doesn't behave the same way it does on EC2-based EKS clusters. However, there are solutions that address this and enable Horizontal Pod Autoscaler (HPA) functionality on EKS Fargate.
Container Insights, a feature of Amazon CloudWatch, is a viable way to obtain the metrics needed for HPA on EKS Fargate clusters. It provides performance metrics at the cluster, node, pod, and container levels, including CPU and memory utilization. By enabling Container Insights, you can collect and analyze these metrics and use them to inform your HPA decisions.
To implement HPA in EKS Fargate using Container Insights, you would typically follow these steps:
- Enable Container Insights for your EKS Fargate cluster.
- Configure CloudWatch to collect the relevant metrics.
- Set up your HPA to use these CloudWatch metrics for scaling decisions.
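For orientation, here is a rough sketch of what the HPA object itself could look like once a metrics pipeline is in place. The Deployment name (`my-app`), replica bounds, and the 70% CPU target are placeholders, not values from your cluster; note that a resource-type metric like this assumes something is serving the Kubernetes metrics API, and with Container Insights/CloudWatch you would typically need an adapter that exposes those metrics to the custom or external metrics API.

```yaml
# Sketch of a standard autoscaling/v2 HPA targeting CPU utilization.
# "my-app" and the 70% target are hypothetical placeholders.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```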
Note that while Container Insights is a powerful tool, it may introduce additional costs due to the increased CloudWatch usage.
Another approach to consider is using the AWS Distro for OpenTelemetry (ADOT) collector for application monitoring. This can be set up to collect and export metrics from your Fargate pods, which can then be used for HPA.
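As a hedged sketch of that idea, an OpenTelemetry Collector pipeline could scrape Prometheus-style pod metrics and publish them to CloudWatch through the `awsemf` exporter. The scrape job name, CloudWatch namespace, and region below are illustrative assumptions, not ADOT defaults:

```yaml
# Minimal OpenTelemetry Collector pipeline: scrape pod metrics with the
# prometheus receiver and publish them as CloudWatch EMF metrics.
# The scrape job and CloudWatch namespace below are hypothetical examples.
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: fargate-pods          # hypothetical job name
          scrape_interval: 30s
          kubernetes_sd_configs:
            - role: pod
exporters:
  awsemf:
    namespace: EKS/FargatePods            # hypothetical CloudWatch namespace
    region: us-east-1
service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [awsemf]
```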
For scaling in EKS Fargate, you have a few options:
- Use HPA with Container Insights or ADOT as mentioned above.
- Implement the Vertical Pod Autoscaler (VPA) to automatically adjust the CPU and memory requests of your pods. When using VPA with Fargate, set the update mode to either "Auto" or "Recreate" so that resized pods are actually rescheduled (a minimal manifest sketch follows after this list).
- Manually adjust the Fargate profile to include different pod sizes based on your application's needs.
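For the VPA option above, a minimal manifest could look like the following. It assumes the VPA CRDs and controller are already installed in the cluster; `my-app` is a placeholder Deployment name:

```yaml
# Sketch of a VerticalPodAutoscaler with updateMode "Auto".
# Assumes the VPA components are installed; "my-app" is a placeholder.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Auto"
```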
Remember that Fargate provides pod isolation by scheduling each pod on its own dedicated node, and it supports pod sizes from 0.25 vCPU with 0.5 GB of memory up to 16 vCPU with 120 GB of memory. However, only a fixed set of vCPU/memory combinations is available, so you'll need to consider how your workload maps onto the supported Fargate configurations.
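Because Fargate sizes the underlying node from the pod's resource requests, the requests you set effectively choose the Fargate configuration. A hedged example, where the name, image, and values are illustrative placeholders:

```yaml
# Fargate selects the smallest supported vCPU/memory combination that fits
# the sum of the pod's requests (plus a small overhead for Kubernetes
# components), so these requests effectively pick the pod's Fargate size.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: nginx:latest   # placeholder image
          resources:
            requests:
              cpu: "500m"
              memory: "1Gi"
            limits:
              cpu: "1"
              memory: "2Gi"
```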
In summary, while the traditional metric-server approach may not work for EKS Fargate, solutions like Container Insights or ADOT can provide the necessary metrics for implementing HPA. Additionally, considering VPA or manual Fargate profile adjustments can help you effectively manage scaling in your EKS Fargate cluster.
Sources
- Scale pod deployments with Horizontal Pod Autoscaler - Amazon EKS
- Compute and Autoscaling - Amazon EKS
- Get started with AWS Fargate for your cluster - Amazon EKS
1. Monitor Fargate Pods with Prometheus
Set up Prometheus to scrape metrics from your Fargate pods by following the AWS guide on monitoring Amazon EKS on Fargate with Prometheus and Grafana. This setup enables you to collect custom metrics, such as request rate, CPU usage, and memory utilization.
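A hedged sketch of the scrape configuration side: this fragment discovers pods through the Kubernetes API and keeps only those annotated for scraping. The job name and the `prometheus.io/scrape` annotation convention are common defaults, shown here as assumptions rather than values from the AWS guide:

```yaml
# prometheus.yml fragment: discover pods through the Kubernetes API and
# scrape only those annotated with prometheus.io/scrape: "true".
scrape_configs:
  - job_name: kubernetes-pods          # illustrative job name
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod
```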
2. Configure HPA with Custom Metrics
To use these custom metrics with the Horizontal Pod Autoscaler (HPA), install the Prometheus Adapter. The adapter exposes metrics collected by Prometheus through the Kubernetes custom metrics API, making them available to the HPA for scaling decisions. The configuration is walked through in this blog post: https://aws.amazon.com/blogs/containers/autoscaling-eks-on-fargate-with-custom-metrics/
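For orientation, a Prometheus Adapter rule that turns a request counter into a per-pod custom metric might look roughly like this. The `http_requests_total` series and the resulting `http_requests_per_second` metric name are assumptions drawn from common examples, not from your workload:

```yaml
# Prometheus Adapter rules fragment (typically supplied via the Helm chart's
# values): convert the http_requests_total counter into a per-second rate
# that HPA can consume as the custom metric "http_requests_per_second".
rules:
  - seriesQuery: 'http_requests_total{namespace!="",pod!=""}'
    resources:
      overrides:
        namespace: {resource: "namespace"}
        pod: {resource: "pod"}
    name:
      matches: "^(.*)_total$"
      as: "${1}_per_second"
    metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'
```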
3. Configure KEDA for Event-Driven Autoscaling (ALTERNATE METHOD)
Alternatively, use KEDA (Kubernetes Event-Driven Autoscaling) to scale based on Prometheus metrics. KEDA supports autoscaling triggered by many event sources, including Prometheus queries.
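A hedged ScaledObject sketch using KEDA's Prometheus scaler; the Deployment name, Prometheus address, query, and threshold are placeholders you would replace with your own:

```yaml
# KEDA ScaledObject scaling a Deployment on a Prometheus query.
# The server address, query, and threshold below are placeholder values.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-scaledobject
spec:
  scaleTargetRef:
    name: my-app                                                     # placeholder Deployment name
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus-server.monitoring.svc:80   # placeholder address
        query: sum(rate(http_requests_total{app="my-app"}[2m]))     # placeholder query
        threshold: "100"
```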