
Private connectivity to Amazon EKS cluster API endpoints using VPC Lattice

13 minute read
Content level: Intermediate

This post shows how to access Amazon EKS cluster API endpoints privately using Amazon VPC Lattice, without VPC peering or Transit Gateway. It’s aimed at platform and network engineers who need scalable, secure Kubernetes control plane access across VPCs or accounts, even with overlapping IPs. The step-by-step guide walks through using DNS-based resource configurations, service networks, and Route 53 for seamless connectivity.

Problem statement

Customers often need to securely access Amazon EKS cluster API servers over private network connectivity from outside the VPCs where the clusters are deployed, without relying on broad layer 3 VPC connectivity options such as peering, AWS Transit Gateway, or AWS Cloud WAN. If you are seeking a simpler, scalable solution to enable secure control plane access across VPCs or accounts, including in environments with overlapping IP address spaces, follow along.

Solution overview

Amazon VPC Lattice allows you to publish and access services across VPCs in a scalable and secure way. With DNS-based resource configurations, you can expose your EKS cluster API endpoint through a VPC Lattice service network. On the client side, VPCs can connect using either:

  • Service Network Association (SN-A): for private access from within a VPC associated with the service network
  • Service Network Endpoint (SN-E): for private access both inside and outside the VPC (including Direct Connect, VPN, or Cloud WAN paths)

Solution overview

Core components

  • Resource Gateway: A VPC construct that provides a point of traffic ingress into the resource owner VPC for accessing resources. You can have one or more resource gateways in a VPC.
  • Resource Configuration: A resource or a group of resources you want to access from other VPCs. You can associate multiple resource configurations with a single resource gateway in a VPC. Once you create a resource configuration, you can associate it with your service network. Resource configurations can be ARN-based (Amazon RDS database clusters), DNS-based or IP-based. DNS-based resource configurations must be configured with a publicly resolvable DNS target. The DNS target must resolve to private IP addresses.
  • Service network: The service network is a logical grouping mechanism that simplifies how you can enable connectivity across VPCs or accounts, and apply common security policies for application communication patterns. You can associate multiple resource configurations with your service network.
  • Service network VPC association (SN-A): Allows clients deployed in a VPC to access the service network. The service network association cannot be accessed from outside of the associated VPC. A VPC can have only one service network association.
  • Service network VPC endpoint (SN-E): Allows clients deployed in a VPC to access the service network. It also allows clients outside of the VPC to access the respective service network endpoint, if they have network connectivity to the VPC. For example, clients can access a service network VPC endpoint from a peered VPC, through AWS Cloud WAN or AWS Transit Gateway, or from on-premises through AWS Direct Connect or Site-to-Site VPN. A service network VPC endpoint uses IP addresses from the VPC CIDR blocks. You can configure connectivity to multiple service networks in a VPC by creating an SN-E for each service network.
  • (Optional) Route 53 Profile: A Route 53 profile allows you to simplify DNS management for many private hosted zones and client VPCs.

Initial setup

  • Cluster VPC: a VPC that hosts an EKS cluster
  • Client VPC: a VPC with a test client that requires access to the EKS cluster API endpoint
  • The VPCs have overlapping CIDR blocks to demonstrate that VPC Lattice abstracts IP addressing and avoids conflicts.

Initial Setup

Configuration steps

Step 1: Ensure EKS cluster API endpoint is configured for private access

Check that:

  • endpointPublicAccess is false
  • endpointPrivateAccess is true

Console

EKS cluster api server private configuration

CLI

$ aws eks describe-cluster --region us-east-1 --name peculiar-indie-dinosaur
{
    "cluster": {
        "name": "peculiar-indie-dinosaur",
        "arn": "arn:aws:eks:us-east-1:[output omitted]:cluster/peculiar-indie-dinosaur",
        "createdAt": "2025-07-22T18:52:45.966000-07:00",
        "version": "1.33",
        "endpoint": "https://B05EF03F37EE208C14E4A9C0A5F999A4.gr7.us-east-1.eks.amazonaws.com",
        "roleArn": "arn:aws:iam::[output omitted]:role/AmazonEKSAutoClusterRole",
        "resourcesVpcConfig": {
            "subnetIds": [
                "subnet-01aeb292b82afac1e",
                "subnet-04167fb31c013cac5"
            ],
            "securityGroupIds": [],
            "clusterSecurityGroupId": "sg-0933c408955778361",
            "vpcId": "vpc-02862dc7ec713c358",
            "endpointPublicAccess": false,
            "endpointPrivateAccess": true,
            "publicAccessCidrs": []
        },
[output omitted]
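If the endpoint is still publicly accessible, you can switch it to private-only access from the CLI. This is a sketch using the example cluster name from the output above; the endpoint update typically takes several minutes to complete:

```shell
# Switch the example cluster's API endpoint to private-only access.
# The update is asynchronous; wait for the cluster to return to ACTIVE.
aws eks update-cluster-config \
  --region us-east-1 \
  --name peculiar-indie-dinosaur \
  --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true
```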

Step 2: Check the cluster API endpoint DNS resolution to private IP addresses

From a test EC2 instance in the client VPC (with no direct connectivity to the cluster VPC), run:

dig <cluster-endpoint-fqdn>

Confirm that the response includes private IP addresses.

$ dig B05EF03F37EE208C14E4A9C0A5F999A4.gr7.us-east-1.eks.amazonaws.com

; <<>> DiG 9.18.33 <<>> B05EF03F37EE208C14E4A9C0A5F999A4.gr7.us-east-1.eks.amazonaws.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2624
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;B05EF03F37EE208C14E4A9C0A5F999A4.gr7.us-east-1.eks.amazonaws.com. IN A

;; ANSWER SECTION:
B05EF03F37EE208C14E4A9C0A5F999A4.gr7.us-east-1.eks.amazonaws.com. 60 IN A 10.1.132.223
B05EF03F37EE208C14E4A9C0A5F999A4.gr7.us-east-1.eks.amazonaws.com. 60 IN A 10.1.154.158

;; Query time: 0 msec
;; SERVER: 10.1.0.2#53(10.1.0.2) (UDP)
;; WHEN: Wed Jul 23 04:03:23 UTC 2025
;; MSG SIZE  rcvd: 125

Step 3: Create a resource gateway in the cluster VPC

  • Choose one subnet per Availability Zone for high availability.
  • Attach a security group that allows egress to the EKS API (ingress rules on the resource gateway security group are not relevant).
  • Ensure the EKS cluster security group allows inbound traffic from the resource gateway.

Resource Gateway
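The resource gateway can also be created with the CLI. This is a sketch: the VPC and subnet IDs come from the describe-cluster output in Step 1, while the gateway name and security group ID are hypothetical placeholders:

```shell
# Create a resource gateway in the cluster VPC (one subnet per AZ).
# sg-0123456789abcdef0 is a placeholder for a security group
# that allows egress to the EKS API endpoint.
aws vpc-lattice create-resource-gateway \
  --name eks-cluster-rgw \
  --vpc-identifier vpc-02862dc7ec713c358 \
  --subnet-ids subnet-01aeb292b82afac1e subnet-04167fb31c013cac5 \
  --security-group-ids sg-0123456789abcdef0
```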

Step 4: Create a DNS-based resource configuration representing the EKS cluster API endpoint

  • Use the fully qualified domain name (FQDN) of your EKS cluster API server.
  • Associate the resource configuration to the resource gateway you created in Step 3
  • Enable access logging for observability and monitoring

Resource Configuration
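A CLI sketch of the DNS-based resource configuration on TCP/443 might look like the following. The resource gateway ID is a hypothetical placeholder; the domain name is the cluster endpoint FQDN from Step 1, and the shorthand syntax shown here is an assumption (JSON input works equally well):

```shell
# Create a single DNS-based resource configuration for the EKS API endpoint.
# rgw-0123456789abcdef0 is a placeholder for the gateway created in Step 3.
aws vpc-lattice create-resource-configuration \
  --name eks-api-endpoint \
  --type SINGLE \
  --resource-gateway-identifier rgw-0123456789abcdef0 \
  --protocol TCP \
  --port-ranges 443 \
  --resource-configuration-definition \
    'dnsResource={domainName=B05EF03F37EE208C14E4A9C0A5F999A4.gr7.us-east-1.eks.amazonaws.com,ipAddressType=IPV4}'
```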

Step 5: Create a VPC Lattice service network

  • Use the default settings.
  • Enable access logging for observability and monitoring.

VPC Lattice service network
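From the CLI, creating the service network with default settings is a one-liner (the name is a hypothetical example):

```shell
# Create the VPC Lattice service network with default settings.
aws vpc-lattice create-service-network --name eks-api-access
```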

Step 6: Associate the resource configuration with the service network

  • This makes the resource configuration (EKS API endpoint) discoverable and accessible via the service network.

Resource configuration associated with the service network
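The association can also be created from the CLI. The IDs below are taken from the access-log example later in this post:

```shell
# Associate the resource configuration (EKS API endpoint)
# with the service network.
aws vpc-lattice create-service-network-resource-association \
  --resource-configuration-identifier rcfg-018bd7dc83ae0fb31 \
  --service-network-identifier sn-01ad68c2d98fe85a1
```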

Step 7 (Option A): Associate Client VPC with the Service Network (SN-A)

  • This creates a private and secure network path to access the cluster API for clients inside the client VPC.
  • The security group associated with the SN-A must allow inbound access from client workloads.
  • Note: like an S3 Gateway endpoint, an SN-A is not accessible from outside the associated VPC.

Client VPC service network association
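A CLI sketch of the SN-A (the client VPC ID is from the access-log example later in this post; the security group ID is a placeholder and must allow inbound traffic from client workloads):

```shell
# Associate the client VPC with the service network (SN-A).
aws vpc-lattice create-service-network-vpc-association \
  --service-network-identifier sn-01ad68c2d98fe85a1 \
  --vpc-identifier vpc-09048c0f6b6f1a1fc \
  --security-group-ids sg-0123456789abcdef0
```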

Step 7 (Option B): Create a Service Network Endpoint (SN-E) in the Client VPC

  • This creates a private and secure network path to access the cluster API for clients inside the client VPC. It also allows access to the service network from peered VPCs, from on-premises networks, or over VPN/Direct Connect.
  • Choose one subnet per AZ.
  • Set up a security group on the SN-E to allow inbound access from relevant sources.

Client VPC service network endpoint
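A service network endpoint is a VPC endpoint of type ServiceNetwork. As a hedged sketch (the subnet and security group IDs are placeholders; the service network ARN matches the access-log example later in the post):

```shell
# Create a ServiceNetwork-type VPC endpoint (SN-E) in the client VPC.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-09048c0f6b6f1a1fc \
  --vpc-endpoint-type ServiceNetwork \
  --service-network-arn arn:aws:vpc-lattice:us-east-1:119944160464:servicenetwork/sn-01ad68c2d98fe85a1 \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0
```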

Step 8: Configure DNS resolution for the client VPC

  • Create a Route 53 Private Hosted Zone with the same domain name as the EKS API endpoint.
  • Associate it with the client VPC

Route 53 private hosted zone
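The private hosted zone can be created and associated with the client VPC in one CLI call (a sketch; the caller reference just needs to be unique):

```shell
# Create a private hosted zone named after the cluster endpoint
# and associate it with the client VPC.
aws route53 create-hosted-zone \
  --name B05EF03F37EE208C14E4A9C0A5F999A4.gr7.us-east-1.eks.amazonaws.com \
  --caller-reference "phz-$(date +%s)" \
  --hosted-zone-config PrivateZone=true \
  --vpc VPCRegion=us-east-1,VPCId=vpc-09048c0f6b6f1a1fc
```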

Step 9: Create an alias record in the Private Hosted Zone

Depending on which option you chose in Step 7, the alias record points either to the VPC endpoint FQDN for the resource association (Option B) or to the resource association FQDN (Option A). For this post, I chose the service network endpoint.

Fetch the SNRA (Service Network Resource Association) FQDN and hosted zone ID from the endpoint associations.

Service network endpoint association FQDN
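If you prefer the CLI over the console, the endpoint associations (including each association's DNS name and hosted zone ID) can be retrieved from the VPC endpoint; the endpoint ID here matches the alias record example in this post:

```shell
# List resource associations for the service network endpoint,
# including each association's DNS entry and hosted zone ID.
aws ec2 describe-vpc-endpoint-associations \
  --vpc-endpoint-ids vpce-047a4a7c881854da2
```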

Create a Route 53 alias record pointing your cluster endpoint to the endpoint association FQDN:

Example record batch file (r53-test.json):

{
    "Comment": "Alias record for VPC Lattice",
    "Changes": [
        {
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "B05EF03F37EE208C14E4A9C0A5F999A4.gr7.us-east-1.eks.amazonaws.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z08285483B41F6ASEM5U1",
                    "DNSName": "vpce-047a4a7c881854da2-snra-05253ef0b41be8382.rcfg-018bd7dc83ae0fb31.4232ccc.vpc-lattice-rsc.us-east-1.on.aws",
                    "EvaluateTargetHealth": false
                }
            }
        }
    ]
}

Apply the change to the hosted zone you created in Step 8:

$ aws route53 change-resource-record-sets --hosted-zone-id Z00944471A1A7R5GDQ43E --change-batch file://r53-test.json

Step 10: Check the Route 53 hosted zone configuration

  • Double-check that the alias record exists and is associated with the right hosted zone.

Route 53 private hosted zone Alias record
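The same check can be done from the CLI against the hosted zone created in Step 8 (the hosted zone ID matches the change-resource-record-sets example above):

```shell
# Confirm the alias A record exists in the private hosted zone.
aws route53 list-resource-record-sets \
  --hosted-zone-id Z00944471A1A7R5GDQ43E \
  --query "ResourceRecordSets[?Type=='A']"
```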

Step 11: Check DNS resolution from the client VPC for the cluster API endpoint

Run on the client test instance:

dig <EKS-CLUSTER-ENDPOINT>

You should see IP addresses different from the original EKS VPC (i.e., from the client VPC range or SN-E CIDR).

$ dig B05EF03F37EE208C14E4A9C0A5F999A4.gr7.us-east-1.eks.amazonaws.com

; <<>> DiG 9.18.33 <<>> B05EF03F37EE208C14E4A9C0A5F999A4.gr7.us-east-1.eks.amazonaws.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58163
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;B05EF03F37EE208C14E4A9C0A5F999A4.gr7.us-east-1.eks.amazonaws.com. IN A

;; ANSWER SECTION:
B05EF03F37EE208C14E4A9C0A5F999A4.gr7.us-east-1.eks.amazonaws.com. 60 IN A 10.1.130.69

;; Query time: 0 msec
;; SERVER: 10.1.0.2#53(10.1.0.2) (UDP)
;; WHEN: Wed Jul 23 04:47:42 UTC 2025
;; MSG SIZE  rcvd: 109

Step 12: Check connectivity to the Cluster API endpoint from the client VPC

Use curl:

curl https://<EKS-CLUSTER-ENDPOINT-FQDN> -vk

Expect a 401 Unauthorized response (this confirms successful API reachability).

$ curl https://b05ef03f37ee208c14e4a9c0a5f999a4.gr7.us-east-1.eks.amazonaws.com -vk
* Host b05ef03f37ee208c14e4a9c0a5f999a4.gr7.us-east-1.eks.amazonaws.com:443 was resolved.
* IPv6: (none)
* IPv4: 10.1.130.69
*   Trying 10.1.130.69:443...
* Connected to b05ef03f37ee208c14e4a9c0a5f999a4.gr7.us-east-1.eks.amazonaws.com (10.1.130.69) port 443
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Request CERT (13):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256 / x25519 / RSASSA-PSS
* ALPN: server accepted h2
* Server certificate:
*  subject: CN=kube-apiserver
*  start date: Jul 23 02:00:02 2025 GMT
*  expire date: Jul 23 02:05:02 2026 GMT
*  issuer: CN=kubernetes
*  SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
*   Certificate level 0: Public key type RSA (2048/112 Bits/secBits), signed using sha256WithRSAEncryption
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://b05ef03f37ee208c14e4a9c0a5f999a4.gr7.us-east-1.eks.amazonaws.com/
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: b05ef03f37ee208c14e4a9c0a5f999a4.gr7.us-east-1.eks.amazonaws.com]
* [HTTP/2] [1] [:path: /]
* [HTTP/2] [1] [user-agent: curl/8.5.0]
* [HTTP/2] [1] [accept: */*]
> GET / HTTP/2
> Host: b05ef03f37ee208c14e4a9c0a5f999a4.gr7.us-east-1.eks.amazonaws.com
> User-Agent: curl/8.5.0
> Accept: */*
> 
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* received GOAWAY, error=0, last_stream=1
< HTTP/2 401 
< audit-id: 02c1f9f9-bf70-4e29-8de7-bc038c179c93
< cache-control: no-cache, private
< content-type: application/json
< content-length: 157
< date: Wed, 23 Jul 2025 04:51:20 GMT
< 
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
* Closing connection
* TLSv1.3 (OUT), TLS alert, close notify (256):

Final state architecture

At this point:

  • You have a scalable and secure way to access EKS cluster APIs across VPCs.
  • You’ve avoided VPC peering, overlapping CIDR issues, and complex routing.
  • This design works well in multi-account or multi-cluster environments.

Final architecture

Scaling the architecture

You can extend this setup to multiple clusters across different VPCs or accounts:

  • One resource configuration per cluster API endpoint
  • Multiple configurations can be exposed via a shared service network
  • Use AWS Resource Access Manager (RAM) to share resource configurations and service networks across accounts

Scaling the architecture
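Cross-account sharing with AWS RAM might be sketched as follows. The share name and principal account ID are hypothetical placeholders; the resource configuration ARN matches the access-log example in this post:

```shell
# Share the resource configuration with another account via AWS RAM.
aws ram create-resource-share \
  --name eks-api-endpoints \
  --resource-arns arn:aws:vpc-lattice:us-east-1:119944160464:resourceconfiguration/rcfg-018bd7dc83ae0fb31 \
  --principals 111122223333
```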

Create resource configurations, one per cluster API endpoint

Resource configurations for multiple EKS cluster API endpoints

Associate all resource configurations with the service network

Service network with all resource configurations

  • Test from the client

Cluster 1

[ec2-user@client-test-instance ~]$ aws eks update-kubeconfig --region us-east-1 --name peculiar-indie-dinosaur
Updated context arn:aws:eks:us-east-1:119944160464:cluster/peculiar-indie-dinosaur in /home/ec2-user/.kube/config
[ec2-user@client-test-instance ~]$ kubectl cluster-info
Kubernetes control plane is running at https://B05EF03F37EE208C14E4A9C0A5F999A4.gr7.us-east-1.eks.amazonaws.com
CoreDNS is running at https://B05EF03F37EE208C14E4A9C0A5F999A4.gr7.us-east-1.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Cluster 2

[ec2-user@client-test-instance ~]$ aws eks update-kubeconfig --region us-east-1 --name fabulous-bluegrass-shark
Updated context arn:aws:eks:us-east-1:119944160464:cluster/fabulous-bluegrass-shark in /home/ec2-user/.kube/config
[ec2-user@client-test-instance ~]$ kubectl cluster-info
Kubernetes control plane is running at https://5B242A746BEC6525D187CDB4256CE0FD.gr7.us-east-1.eks.amazonaws.com
CoreDNS is running at https://5B242A746BEC6525D187CDB4256CE0FD.gr7.us-east-1.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Cluster 3

[ec2-user@client-test-instance ~]$ aws eks update-kubeconfig --region us-east-1 --name ridiculous-lofi-unicorn
Updated context arn:aws:eks:us-east-1:119944160464:cluster/ridiculous-lofi-unicorn in /home/ec2-user/.kube/config
[ec2-user@client-test-instance ~]$ kubectl cluster-info
Kubernetes control plane is running at https://4293C3BF2FA6BB151F7E63E897A28891.gr7.us-east-1.eks.amazonaws.com
CoreDNS is running at https://4293C3BF2FA6BB151F7E63E897A28891.gr7.us-east-1.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Run a test pod on one of the clusters from the client test instance

Check current context

[ec2-user@control-plane-instance ~]$ kubectl config current-context
arn:aws:eks:us-east-1:119944160464:cluster/ridiculous-lofi-unicorn

I used a basic pod configuration: test-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80

Create test pod

[ec2-user@control-plane-instance ~]$ kubectl apply -f test-pod.yaml
pod/test-pod created

Check pod status

[ec2-user@control-plane-instance ~]$ kubectl get pod test-pod
NAME       READY   STATUS    RESTARTS   AGE
test-pod   1/1     Running   0          2m36s

Check logs

[ec2-user@client-test-instance ~]$ kubectl logs -f test-pod -c nginx
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2025/07/23 23:16:29 [notice] 1#1: using the "epoll" event method
2025/07/23 23:16:29 [notice] 1#1: nginx/1.29.0
2025/07/23 23:16:29 [notice] 1#1: built by gcc 12.2.0 (Debian 12.2.0-14+deb12u1) 
2025/07/23 23:16:29 [notice] 1#1: OS: Linux 6.12.35
2025/07/23 23:16:29 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 65536:1048576
2025/07/23 23:16:29 [notice] 1#1: start worker processes
2025/07/23 23:16:29 [notice] 1#1: start worker process 29

Check VPC Lattice access logs at the service network and resources level

Example

{
    "eventTimestamp": "2025-07-23T22:33:31.775Z",
    "serviceNetworkArn": "arn:aws:vpc-lattice:us-east-1:119944160464:servicenetwork/sn-01ad68c2d98fe85a1",
    "serviceNetworkResourceAssociationId": "snra-05253ef0b41be8382",
    "vpcEndpointId": "vpce-047a4a7c881854da2",
    "sourceVpcArn": "arn:aws:ec2:us-east-1:119944160464:vpc/vpc-09048c0f6b6f1a1fc",
    "resourceConfigurationArn": "arn:aws:vpc-lattice:us-east-1:119944160464:resourceconfiguration/rcfg-018bd7dc83ae0fb31",
    "protocol": "tcp",
    "sourceIpPort": "10.1.4.210:40016",
    "destinationIpPort": "10.1.130.69:443",
    "gatewayIpPort": "10.1.143.67:33257",
    "resourceIpPort": "10.1.154.158:443"
}

EKS IPv6 clusters

The cluster API endpoint FQDN has both A (IPv4) and AAAA (IPv6) records. Depending on the access configuration, the IPv4 addresses differ, while the IPv6 addresses remain the same:

  • Private access: IPv4 addresses are private addresses from the VPC subnet CIDRs; IPv6 addresses are from the VPC subnet CIDRs.
  • Public access: IPv4 addresses are public; IPv6 addresses are from the VPC subnet CIDRs.
  • Public and private access: when resolved from within the cluster VPC, the returned IPv4 addresses are private addresses from the VPC subnet CIDRs; when resolved from outside the cluster VPC, they are public. IPv6 addresses are from the VPC subnet CIDRs in both cases.

You can configure your resource configurations to use IPv6 as the protocol, removing any dependency on the access mode set on the cluster.

AWS Expert · published 8 months ago · 3.3K views