EKS Upgrade Failed from 1.13 to 1.14


We had a cluster created on version 1.12. We managed to upgrade it to 1.13 successfully, and we also upgraded the nodes.
It ran for two weeks, and today we decided to upgrade it to 1.14.
The cluster upgrade from 1.13 to 1.14 was triggered from the AWS EKS console. It sat in the 'Updating' state for more than an hour before being marked as failed. We checked the errors section; it showed none.
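In case it helps anyone hitting the same thing, the update status and any error details can also be pulled with the AWS CLI instead of the console; a rough sketch (the cluster name and region below are placeholders):

    # List recent update IDs for the cluster
    aws eks list-updates --name my-cluster --region us-east-1

    # Show the status and any error details for one update
    aws eks describe-update --name my-cluster --update-id <update-id> --region us-east-1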

When I check the actual cluster version using the kubectl version command, it shows v1.14.9-eks-f459c0.
The AWS console still shows 1.13, and when I try the upgrade again it fails. We have coredns, the VPC CNI, and kube-proxy all at the expected versions mentioned in https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html
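For reference, these are the sort of kubectl commands that report the versions in question (nothing cluster-specific beyond the kube-system namespace):

    # Control plane version as reported by the API server
    kubectl version --short

    # Image tags of the add-ons called out in the upgrade guide
    kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.template.spec.containers[0].image}'
    kubectl -n kube-system get daemonset kube-proxy -o jsonpath='{.spec.template.spec.containers[0].image}'
    kubectl -n kube-system get daemonset aws-node -o jsonpath='{.spec.template.spec.containers[0].image}'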

Any pointers would be very much appreciated, as this is a production environment.
Thanks,
Abhishek

asked 4 years ago · 412 views
1 Answer

We contacted AWS support. They investigated and got back to us saying the failure was because the security groups per network interface (ENI) limit on our account was set to 1. They increased it to 5, and after that the upgrade succeeded.
Neither party is sure why the limit was set to 1.
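For anyone who wants to check this limit themselves before opening a support case, it is exposed through Service Quotas for the VPC service. Something along these lines should print the current value (the exact quota name is my assumption; adjust the filter if it is labeled differently in your account):

    # Look up the per-ENI security group quota for VPC
    aws service-quotas list-service-quotas --service-code vpc \
      --query "Quotas[?contains(QuotaName, 'Security groups per network interface')].[QuotaName,Value]" \
      --output table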

answered 4 years ago
