
I'm trying to create a node group on an EKS cluster (region = ap-south-1), but the nodes are failing to join the cluster. Health issue: NodeCreationFailure: Instances failed to join the Kubernetes cluster.

I found that this may be because the AWS EKS add-on (coredns) for the cluster is degraded. I tried creating a new cluster, but it shows the same degraded status for the add-on. The health issue shows: InsufficientNumberOfReplicas: The add-on is unhealthy because it doesn't have the desired number of replicas.
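To see more detail than the console shows, you can query the add-on health and the coredns deployment directly. This is a diagnostic sketch; `my-cluster` is a placeholder for your actual cluster name:

```shell
# Show the add-on's reported health issues (region and cluster name are placeholders)
aws eks describe-addon \
  --cluster-name my-cluster \
  --addon-name coredns \
  --region ap-south-1

# Check how many coredns replicas are actually running vs. desired
kubectl get deployment coredns -n kube-system

# Pending coredns pods with no schedulable nodes usually mean the
# node group itself failed, not coredns; check the pod events
kubectl describe pods -n kube-system -l k8s-app=kube-dns
```

If the coredns pods are stuck in Pending because there are no ready nodes, the degraded add-on is a symptom of the node-join failure rather than its cause.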

Other clusters with node groups in the same region are working fine, and all their add-ons are in Active state. I'm creating the cluster from the console.

2 Answers


The issue is resolved: my NAT gateway was the problem. The subnet containing the NAT gateway was not associated with my route table. After I added a route to the NAT gateway in my route table, I created a new node group, and it was able to join the cluster; the coredns pods also got deployed.
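You can confirm the missing route from the CLI before recreating the node group. This is a sketch; the subnet ID is a hypothetical placeholder for one of the node group's private subnets:

```shell
# Look up the route table associated with the node group's subnet
# (subnet-0123456789abcdef0 is a placeholder)
aws ec2 describe-route-tables \
  --region ap-south-1 \
  --filters "Name=association.subnet-id,Values=subnet-0123456789abcdef0" \
  --query "RouteTables[].Routes"
```

For nodes in a private subnet to join the cluster, the output should include a `0.0.0.0/0` route whose target is a NAT gateway (`nat-...`); if that route is missing, the nodes cannot reach the EKS API endpoint or pull images, and they fail to join.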


If the configuration of the cluster and node group seems fine, try changing the instance type (for example, if you selected t3.medium for the node group, try t2.medium or another type), then verify whether the coredns pods are running with: kubectl get pods -n kube-system | grep coredns
In my case that solved it.
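A slightly more robust way to run the check above is to select the coredns pods by label instead of grepping, and to confirm the nodes registered at all. This assumes standard EKS labels:

```shell
# Confirm the new nodes registered with the cluster and are Ready
kubectl get nodes

# coredns pods in EKS carry the k8s-app=kube-dns label
kubectl get pods -n kube-system -l k8s-app=kube-dns
```

Once at least one node is Ready, the coredns pods should move to Running and the add-on status should return to Active.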
