I'm working with an Amazon EKS cluster that uses the AWS VPC CNI plugin with a custom networking configuration. The nodes' primary IP addresses are in a 10.x.x.x/x range, and their secondary IPs are in a 100.x.x.x/x range.
I've noticed that traffic from pods with IP addresses in the 100.x.x.x/x range is not being translated to the node's 10.x.x.x/x address when those pods reach resources outside the EKS cluster. I want to configure the cluster so that pod traffic is source-NATed to the node's IP for external communication, while the pods themselves keep their addresses in the 100.x.x.x/x range.
I've already tried running the following command:
kubectl set env daemonset -n kube-system aws-node AWS_VPC_K8S_CNI_EXTERNALSNAT=true
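In case it helps, this is how I checked that the variable actually landed on the daemonset. The kubectl command (in the comments) is what I run against the cluster; the envs string below is a made-up sample of what it might return, not real output from my cluster, so the filtering step can be shown standalone:

```shell
# Against the cluster I verify with (illustrative, needs kubectl access):
#   kubectl -n kube-system get daemonset aws-node \
#     -o jsonpath='{.spec.template.spec.containers[0].env}'
# Made-up sample of what that jsonpath might return:
envs='[{"name":"AWS_VPC_K8S_CNI_EXTERNALSNAT","value":"true"}]'
# Pull out the variable and its current value:
echo "$envs" | grep -o 'AWS_VPC_K8S_CNI_EXTERNALSNAT","value":"[a-z]*"'
```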
However, this hasn't resolved the issue. When I check the node-level NAT rules with iptables -L -t nat | grep MASQ, I only see MASQUERADE rules for pods that share the node's primary CIDR. This suggests the node is not translating the addresses of pods outside its primary CIDR.
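To make concrete what I'm grepping for, here is a sketch. The rule line below is a made-up placeholder in the shape I understand the VPC CNI writes (the AWS-SNAT-CHAIN-0 chain name and the 10.0.0.0/16 / 100.64.0.0/16 CIDRs are assumptions, not actual output from my nodes):

```shell
# On an actual node (as root) I inspect the NAT table with:
#   iptables -t nat -S | grep -E 'SNAT|MASQUERADE'
# Made-up stand-in for one such rule, so the check itself can be illustrated:
sample='-A AWS-SNAT-CHAIN-0 ! -d 10.0.0.0/16 -j SNAT --to-source 10.0.1.5'
# Does any rule mention the secondary (pod) range?
echo "$sample" | grep -c '100\.64' || true   # prints 0: nothing references the pod range
# Does a SNAT rule exist at all?
echo "$sample" | grep -c 'SNAT'              # prints 1: a SNAT rule is present
```

This mirrors what I see on the real nodes: SNAT/MASQUERADE rules exist, but nothing references the secondary pod range.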
How can I configure my EKS cluster to make this translation happen while keeping pods in the 100.x.x.x/x range?