
My general question is in the title. I feel like I've misunderstood how pods are connected to the VPC. I was assuming the VPC CNI would make pods routable on the VPC, but it seems like this is not the case.

Does traffic still need to flow through the EKS nodes?
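As a sanity check, the pod IPs do come straight from the VPC subnets with the VPC CNI; something like this shows them:

kubectl get pods -A -o wide   # the IP column shows each pod's VPC address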

Here's my current vpc config:

Hub VPC -- peered to -- Prod VPC

My VPN server lives in the Hub VPC and EKS is running in the Prod VPC.
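For what it's worth, the peering routes can be checked with something like this (the VPC ID is a placeholder; each VPC's route tables should have a route to the other VPC's CIDR via the pcx-... peering connection):

# list the routes in the Prod VPC's route tables
aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-0prod0example --query 'RouteTables[].Routes[]'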

I can SSH into the EKS nodes via the VPN, so the routing between the VPCs and over the VPN is fine.

I cannot ping the pod from the VPN server.

I can ping the pod from the EKS node.

I can ping the pod from a new instance booted up in the Prod VPC.
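In case it helps, the ping tests looked roughly like this (pod name is a placeholder):

# grab a pod's VPC IP and ping it
POD_IP=$(kubectl get pod my-app-pod -o jsonpath='{.status.podIP}')
ping "$POD_IP"   # succeeds from an EKS node or a Prod VPC instance, fails from the VPN server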

I have enabled the AWS VPC CNI plugin via the console.
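I believe the CLI equivalent of enabling the managed add-on in the console is something like this (cluster name is a placeholder):

aws eks create-addon --cluster-name my-cluster --addon-name vpc-cni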

I have enabled external SNAT via:

kubectl set env daemonset -n kube-system aws-node AWS_VPC_K8S_CNI_EXTERNALSNAT=true
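To confirm the variable actually landed on the daemonset:

# list the daemonset's env vars and check for the SNAT flag
kubectl set env daemonset/aws-node -n kube-system --list | grep EXTERNALSNAT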

I have not restarted any of the nodes, as the documentation does not seem to require it.
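If a restart did turn out to be necessary, I assume a rolling restart of the CNI daemonset (rather than the nodes themselves) would be enough:

kubectl rollout restart daemonset/aws-node -n kube-system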

I actually think EKS uses the VPC CNI plugin by default.

I'm running Kubernetes 1.21 and VPC CNI 1.9.3.

1 Answer


This actually does work exactly as I expected, and pods are directly accessible over the VPN connection. The problem turned out to be a security group rule blocking the port I was trying to use to connect to the pod.
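For anyone hitting the same thing, the fix was a security group rule; something along these lines would open the port (group ID, port, and CIDR are placeholders for my actual values):

# allow the Hub VPC CIDR to reach the pod's port on the node security group
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8080 --cidr 10.0.0.0/16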
