If a container exceeds its memory limit, it is terminated and, depending on the restart policy, the pod may be restarted. This is what typically produces an "exit 137" status, indicating termination due to an out-of-memory (OOM) condition. Please check the following:
- Run `kubectl describe pod <pod-name>` and inspect the container's last state and exit code.
- Look at the kubelet and containerd logs for more detailed information about how the memory event was handled.
- Ensure your EKS cluster, kubelet, and containerd are up to date with the latest patches.
- Use tools like Prometheus or CloudWatch to monitor memory usage over time.
Given the specific versions you provided (EKS v1.27, kernel version 5.10.199), it may also be worth checking for any known issues or updates related to memory management in those versions. Please refer to the official amazon-eks-ami changelog for known issues in your version: https://github.com/awslabs/amazon-eks-ami/blob/master/CHANGELOG.md
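The first step above can be sketched as a small shell check. The `kubectl describe` excerpt below is a hypothetical sample of what an OOM-killed container looks like; in a real cluster you would capture the output of `kubectl describe pod <pod-name>` instead of the here-string:

```shell
#!/bin/sh
# Hypothetical sample of the "Last State" section from `kubectl describe pod`.
# In practice: describe_output=$(kubectl describe pod <pod-name> -n <namespace>)
describe_output='    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137'

# Pull out the termination reason and exit code from the describe output.
reason=$(printf '%s\n' "$describe_output" | awk '/Reason:/ {print $2}')
exit_code=$(printf '%s\n' "$describe_output" | awk '/Exit Code:/ {print $3}')

# Exit code 137 = 128 + SIGKILL(9); combined with reason OOMKilled it
# confirms the kernel/containerd killed the container for exceeding memory.
if [ "$reason" = "OOMKilled" ] && [ "$exit_code" = "137" ]; then
  echo "container was OOM-killed (exit 137)"
else
  echo "no OOM kill recorded in last state"
fi
```

If the pod never restarted, `Last State` may be empty; in that case the kubelet and containerd logs on the node are the next place to look.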
Hey Jagan. Thanks for the reply.
Unfortunately, I was not able to see any exit 137 statuses or restart history on the pod.
The memory usage of the pod definitely increased at that time, and it stabilized quickly after the OOM event occurred.
I checked the containerd OOM event and the pod memory usage through Datadog, by the way.
Do you have any idea where I should look further? I don't think checking the EKS version and so on will help, though.