EKS Upgrade from 1.29 to 1.30


Hi all,

Up to now, all my EKS upgrades from previous versions have gone well. However, I'm finding that every cluster I have on 1.29 fails when upgrading to 1.30. Console error: VersionUpdate = Failed. There are no other errors in the logs, console, or CloudWatch. I did, however, find one method that works:

If I disassociate the STS identity provider from the cluster, upgrade (successfully) to 1.30, and then associate the STS identity provider again, it works.

Any ideas?

asked 7 months ago · 3.8K views
2 Answers

Hello,

Disassociate the STS Identity Provider:

  • Temporarily remove the STS identity provider configuration from your EKS cluster.
aws eks disassociate-identity-provider-config --cluster-name <your-cluster-name> --identity-provider-config oidc={identityProviderConfigName=<config-name>}
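
If you're unsure of the config name, you can list the identity provider configs associated with the cluster first:

aws eks list-identity-provider-configs --cluster-name <your-cluster-name>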

Upgrade the EKS Cluster:

  • Proceed with the upgrade to version 1.30.

Using eksctl:

eksctl upgrade cluster --name <your-cluster-name> --version 1.30 --approve
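
If you don't use eksctl, the equivalent AWS CLI call (a sketch; substitute your cluster name) is:

aws eks update-cluster-version --name <your-cluster-name> --kubernetes-version 1.30

The call returns an update ID, and you can poll the upgrade status with:

aws eks describe-update --name <your-cluster-name> --update-id <update-id>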

Reassociate the STS Identity Provider:

  • Once the upgrade is complete, re-associate the STS identity provider with your EKS cluster.
aws eks associate-identity-provider-config --cluster-name <your-cluster-name> --oidc identityProviderConfigName=<config-name>,issuerUrl=<issuer-url>,clientId=<client-id>
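
To confirm the association was restored, you can describe the config and check its status:

aws eks describe-identity-provider-config --cluster-name <your-cluster-name> --identity-provider-config oidc={identityProviderConfigName=<config-name>}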

These steps avoid the upgrade issue by temporarily removing and then restoring the potentially conflicting STS identity provider configuration.

answered 7 months ago
Accepted Answer

Official support answer:

*When an existing EKS v1.29 cluster is upgraded to v1.30, under some conditions, there is a failure resulting in the API server failing to start. When some deployment tools (like Terraform) create an EKS cluster, they configure the OIDC provider URL and the Service Account issuer URL to be the same value. The Kubernetes project added an extra validation as part of Kubernetes v1.30 [1] that prevents the Kubernetes API server from starting when an OIDC identity provider used for authentication to the cluster and the Service Account issuer URL are the same value. EKS clusters at v1.29 with this configuration that initiate an upgrade to v1.30 will encounter an upgrade failure. This also impacts clusters created on v1.30 that issue an update (eks:AssociateIdentityProviderConfig) setting the OIDC provider issuer URL to the cluster's own issuer URL.

The issue can currently be worked around by ensuring the OIDC issuer URL is not the same as the cluster's own identity provider URL before upgrading the cluster to v1.30. If you have already initiated a failed upgrade to v1.30, you can call eks:DisassociateIdentityProviderConfig to remediate the issue and retry the upgrade.*
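
As a quick pre-upgrade check, you can compare the two URLs yourself. This is a sketch assuming a bash shell, with placeholder cluster and config names:

# The cluster's own Service Account issuer URL
aws eks describe-cluster --name <your-cluster-name> --query 'cluster.identity.oidc.issuer' --output text

# The issuer URL of the associated OIDC identity provider config
aws eks describe-identity-provider-config --cluster-name <your-cluster-name> --identity-provider-config oidc={identityProviderConfigName=<config-name>} --query 'identityProviderConfig.oidc.issuerUrl' --output text

If the two values match, disassociate the identity provider config before retrying the upgrade to v1.30.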

answered 7 months ago
