I am using Cilium as my CNI. I have successfully run the Cilium connectivity test and all tests pass. My node group schedules three t3.small nodes, which allows me to run 11 pods per node without IPv4 prefix delegation enabled. I have now enabled prefix delegation by running:
kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true
and when I run:
kubectl describe daemonset -n kube-system aws-node | grep ENABLE_PREFIX_DELEGATION
I get:
ENABLE_PREFIX_DELEGATION: true
ENABLE_PREFIX_DELEGATION: true
ENABLE_PREFIX_DELEGATION: true
I have also installed Cilium by running the following:
helm install cilium cilium/cilium --version 1.15.0 --namespace kube-system --set eni.enabled=true --set ipam.mode=eni --set eni.awsEnablePrefixDelegation=true --set egressMasqueradeInterfaces=eth0
and patched the daemonset like so:
kubectl -n kube-system patch daemonset aws-node --type='strategic' -p='{"spec":{"template":{"spec":{"nodeSelector":{"io.cilium/aws-node-enabled":"true"}}}}}'
All the pods are running fine, and I can also see that prefixes have been assigned to the ENI:
aws ec2 describe-network-interfaces --region eu-west-2 --network-interface-ids eni-0eb1aff34572830d6 | grep Ipv4Prefix
"Ipv4Prefixes": [
"Ipv4Prefix": "10.0.1.112/28"
"Ipv4Prefix": "10.0.1.224/28"
QUESTION
I know that t3.small instances can support a maximum of 110 pods with prefix delegation; however, I still cannot schedule more than 11 pods per instance. What could be the issue?
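For context, the standard AWS max-pods formula reproduces both numbers, assuming t3.small's documented limits of 3 ENIs and 4 IPv4 addresses per ENI:

```shell
# Max-pods arithmetic for t3.small (3 ENIs, 4 IPv4 addresses per ENI).
enis=3
ips_per_eni=4
# Without prefix delegation: each ENI slot holds one secondary IP.
no_prefix=$(( enis * (ips_per_eni - 1) + 2 ))         # 3*3 + 2 = 11
# With prefix delegation: each slot holds a /28 prefix (16 IPs).
# Instances of this size are then capped at 110 pods.
with_prefix=$(( enis * (ips_per_eni - 1) * 16 + 2 ))  # 3*3*16 + 2 = 146, capped to 110
echo "$no_prefix $with_prefix"
```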
ERROR
0/3 nodes are available: 3 Too many pods. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod.
This should not be the case when IPv4 prefix delegation is enabled, and kubectl top node shows the nodes have sufficient resources available.
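One detail worth checking: the "Too many pods" error is enforced against each node's reported pod capacity, not its CPU/memory headroom. If the kubelet's max-pods setting was not raised when prefix delegation was enabled, the nodes may still advertise a capacity of 11. A way to inspect what the scheduler sees:

```shell
# Show each node's reported pod capacity (the limit the scheduler enforces);
# if this still prints 11 per node, the kubelet max-pods was never raised.
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODS:.status.capacity.pods
```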