
I have a problem installing Fargate profiles and the CoreDNS addon. I'm using Terraform for some parts and kubectl for others; the Fargate profiles are created via Terraform:

fargate_profiles = {
  kube-system-profile = {
    name = "kube-system-profile"
    selectors = [
      {
        namespace = "kube-system"
        labels = {
          name = "kube-system"
          k8s-app = "kube-dns"
        }
      }
    ]
    tags = {
      Cost = "DaCost"
      Environment = "dev"
      Name = "coredns-fargate-profile"
    }
  },
  swiftalk-dev-profile = {
    name = "dev-profile"
    selectors = [
      {
        namespace = "dev"
        labels = {
          name = "dev"
        }
      }
    ]
    tags = {
      Cost = "DaCost"
      Environment = "dev"
      Name = "dev-profile"
    }
  },
}

I then install the CoreDNS addon, again using Terraform:

resource "aws_eks_addon" "core_dns" {
  addon_name        = "coredns"
  addon_version     = "v1.8.3-eksbuild.1"
  cluster_name      = "${var.eks_cluster_name}-dev"
  resolve_conflicts = "OVERWRITE"
  tags              = { "eks_addon" = "coredns", name = "kube-system" }
  depends_on        = [kubernetes_namespace.dev]
}

I patched the CoreDNS deployment to remove the eks.amazonaws.com/compute-type annotation so the pods can be scheduled on Fargate:

kubectl patch deployment coredns \
  --namespace kube-system \
  --type=json \
  -p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'
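A quick way to confirm the annotation was actually removed from the pod template:

kubectl get deployment coredns -n kube-system \
  -o jsonpath='{.spec.template.metadata.annotations}'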

and then restarted the deployment:

kubectl rollout restart -n kube-system deployment/coredns

However, the CoreDNS pods are still stuck in Pending:

kubectl get pods -n kube-system
NAME                      READY   STATUS    RESTARTS   AGE
coredns-5766d4545-g6nxn   0/1     Pending   0          46m
coredns-5766d4545-xng48   0/1     Pending   0          46m
coredns-b744fccf4-hb726   0/1     Pending   0          77m

And the CloudWatch logs show the scheduler waiting for nodes instead of placing the pods on Fargate:

I0723 10:24:38.059960       1 factory.go:319] "Unable to schedule pod; no nodes are registered to the cluster; waiting" pod="kube-system/coredns-b744fccf4-hb726"
I0723 10:24:38.060078       1 factory.go:319] "Unable to schedule pod; no nodes are registered to the cluster; waiting" pod="kube-system/coredns-5766d4545-xng48"
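
To rule out the profile itself, one can also check whether it reports ACTIVE, e.g. with the AWS CLI (the cluster name below is a placeholder):

aws eks describe-fargate-profile \
  --cluster-name my-cluster-dev \
  --fargate-profile-name kube-system-profile \
  --query 'fargateProfile.status'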
  • We're also facing this issue. We have public and private subnets with an internet gateway and a NAT gateway, and auto-assign public IP enabled on the subnets. We've tried using eksctl: it works when it creates its own VPC, but not when we specify the VPC and subnets. Did you find a solution?
    – blank
    Commented Aug 3, 2021 at 7:12
  • I haven't yet, but that's a good data point; we've raised an issue with AWS support and are waiting on their response. Commented Aug 3, 2021 at 7:14
  • @AnadiMisra I am also hitting this. What did AWS support say?
    – tm77
    Commented Oct 17, 2021 at 20:20
  • @tm77 My problem was due to limiting the CIDR to our VPN only; enabling the private access endpoint fixed it for me. See my answer. Commented Oct 18, 2021 at 1:36
  • Same issue here, and the solution for me was in the doc link that @AnadiMisra provided: your VPC must have enableDnsHostnames and enableDnsSupport set to true. Once I set these to true for my VPC, everything worked as expected, without the need to patch anything (a Terraform sketch follows below).
    – Tim Pesce
    Commented Mar 10, 2022 at 1:45
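
For reference, setting those two VPC attributes in Terraform looks roughly like this (a minimal sketch; the resource name and CIDR block are placeholders, not from the original posts):

resource "aws_vpc" "eks" {
  cidr_block           = "10.0.0.0/16" # placeholder CIDR
  enable_dns_support   = true          # EKS requires VPC DNS resolution
  enable_dns_hostnames = true          # EKS requires DNS hostnames for private endpoint access
}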

1 Answer


In my case, the problem was that after enabling the cluster's public access endpoint I had restricted it to public CIDRs (our VPN's), which meant I had to either add the pod CIDRs to the allowed list or enable the private access endpoint, as described in this article:

https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html

It works now, and there's no need to patch the CoreDNS deployment.
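
If the cluster itself is managed in Terraform, the endpoint settings would look roughly like this (a minimal sketch; the resource name, variables, and VPN CIDR are placeholders):

resource "aws_eks_cluster" "this" {
  name     = "${var.eks_cluster_name}-dev"
  role_arn = var.eks_cluster_role_arn # placeholder IAM role ARN

  vpc_config {
    subnet_ids              = var.private_subnet_ids # placeholder subnet list
    endpoint_public_access  = true
    public_access_cidrs     = ["203.0.113.0/24"] # e.g. your VPN's CIDR
    endpoint_private_access = true               # lets Fargate pods reach the API server from inside the VPC
  }
}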
