Hello,
Greetings for the day!!
From your correspondence, I understand that you are trying to deny all egress traffic in a Kubernetes namespace with the help of network policies, and you need assistance with the same. Please correct me if I have misunderstood anything.
I did some testing on my side and here are the steps:
- First, I created an EKS cluster with version 1.27.
- Next, I created a managed node group with 2 nodes using the Amazon EKS optimized Amazon Linux AMI.
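I used the AWS Management Console for this; a roughly equivalent cluster could also be created with eksctl, where the cluster name and instance type below are placeholders:
$ eksctl create cluster --name np-test --version 1.27 \
    --node-type t3.medium --nodes 2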
- Next, I checked that the Amazon VPC CNI version was the same as yours, which is v1.16.4-eksbuild.2.
- Next, I used the below configuration schema from this documentation[1] to enable network policies:
{
  "enableNetworkPolicy": "true",
  "nodeAgent": {
    "enableCloudWatchLogs": "true",
    "healthProbeBindAddr": "8163",
    "metricsBindAddr": "8162"
  }
}
I used the AWS Management Console to apply the above configuration.
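If you prefer the CLI to the console, the same add-on configuration can be applied with a command along these lines; the cluster name below is a placeholder:
$ aws eks update-addon --cluster-name my-cluster --addon-name vpc-cni \
    --configuration-values '{"enableNetworkPolicy": "true"}'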
- Next, I created 2 namespaces named 'open' and 'close'.
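For reference, the namespaces can be created as follows:
$ kubectl create namespace open
$ kubectl create namespace close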
- Next, I created a pod in each of the above namespaces using the below commands:
$ kubectl run netshoot -n open --image nicolaka/netshoot --command -- sleep 10000
$ kubectl run netshoot -n close --image nicolaka/netshoot --command -- sleep 10000
- Now I have a total of 2 pods named 'netshoot' running, one in each of the 'open' and 'close' namespaces.
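You can confirm that both pods are Running with:
$ kubectl get pod netshoot -n open
$ kubectl get pod netshoot -n close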
- Next, I exec'ed into each of the netshoot pods and tested internet connectivity using the below commands:
First, I exec'ed into the pod in the 'open' namespace and was able to connect to the internet, as shown below:
$ kubectl exec -it netshoot -n open bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
netshoot:~# curl https://google.com:443
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="https://www.google.com/">here</A>.
</BODY></HTML>
netshoot:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=117 time=1.35 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=117 time=1.38 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=117 time=1.38 ms
^C
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.354/1.372/1.382/0.013 ms
- Next, I exec'ed into the pod in the 'close' namespace and was also able to connect to the internet (no policy is applied yet), as shown below:
$ kubectl exec -it netshoot -n close bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
netshoot:~# curl https://google.com:443
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="https://www.google.com/">here</A>.
</BODY></HTML>
netshoot:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=58 time=1.75 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=58 time=1.75 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=58 time=1.73 ms
^C
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.733/1.744/1.750/0.008 ms
- Next, I created the following network policy in the 'close' namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  namespace: close
spec:
  podSelector: {}
  policyTypes:
    - Egress
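The filename below is arbitrary; assuming the manifest is saved as deny-all-egress.yaml, it can be applied with:
$ kubectl apply -f deny-all-egress.yaml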
- Next, I applied the above policy and then exec'ed into the 'netshoot' pod in the 'close' namespace, and this time I was not able to connect to the internet, as shown below:
$ kubectl exec -it netshoot -n close bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
netshoot:~# curl https://google.com:443
curl: (6) Could not resolve host: google.com
netshoot:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
^C
--- 8.8.8.8 ping statistics ---
8 packets transmitted, 0 received, 100% packet loss, time 7170ms
From the above replication, it is clear that network policies are working on EKS. I would request you to compare the above steps with your own setup and look for any inconsistencies.
There could be the following reasons why the network policies are not working for you (these are some of the reasons, not a complete list):
- Network policies work only on Linux nodes.
- Verify that the target pod is in the correct namespace.
- Verify that the network policy is actually applied (see the verification commands after this list).
- Check whether you are using a third-party solution to manage network policies in addition to the Amazon VPC CNI.
- Please verify the same using the test pods that I have shared.
- Ensure that the pod is running on the primary network interface of the worker node instance.
Please refer to this documentation[2] to verify all the considerations.
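As a quick sanity check (referenced in the list above), commands along the following lines can confirm that the policy is present and that the VPC CNI machinery has picked it up. The PolicyEndpoint resources are created by the CNI's network policy controller, assuming a recent add-on version:
$ kubectl get networkpolicy -n close
$ kubectl get pods -n kube-system -l k8s-app=aws-node
$ kubectl get policyendpoints -n close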
If the network policies are still not working on your side, then the issue will need to be troubleshot by manually checking each configuration.
Have a fantastic day ahead!!
References:
[1] https://docs.aws.amazon.com/eks/latest/userguide/cni-network-policy.html
[2] https://docs.aws.amazon.com/eks/latest/userguide/cni-network-policy.html#cni-network-policy-considerations
Hi, I followed your example, and the network policy you attached is working:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  namespace: close
spec:
  podSelector: {}
  policyTypes:
    - Egress
My original one was not working because of the additional rule in the policy that allows all egress traffic:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-google
  namespace: close
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - {}
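For anyone finding this later: the empty egress rule "- {}" matches every destination, so the policy allows all egress instead of restricting it. A sketch of a policy that only allows egress to one destination, with a placeholder name and CIDR, would look like:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-cidr   # placeholder name
  namespace: close
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.0/24   # placeholder; replace with the real destination CIDR
Note that with such a policy, DNS resolution would also be blocked unless egress on port 53 is allowed as well.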