How do serverless Kubernetes pods (EKS Fargate) know the IP address of the cluster's DNS server (the CoreDNS service deployment)?
I recently updated a Kubernetes cluster to host the Cluster-Autoscaler (and/or Karpenter) deployment serverlessly. This was to eliminate a deadlock hazard: this controller can be rendered unschedulable if it is evicted whilst rolling the worker node fleet for security updates, which prevents it from instantiating the replacement worker nodes needed for itself and the other cluster workloads. (Note there is no EKS add-on to instead run this critical controller within the managed control plane / master nodes.) To accomplish this I terraformed an aws_eks_fargate_profile resource, with the typical associated IAM pod execution role that permits the AWS Fargate service to pull container images from AWS ECR registries.
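For reference, the profile and its pod execution role look roughly like this (a minimal sketch; the resource names, label selector and variables are illustrative rather than our exact code):

```hcl
# Sketch only: names, labels and variables are placeholders.
resource "aws_iam_role" "fargate_pod_execution" {
  name = "eks-fargate-pod-execution"

  # Trust policy so the EKS Fargate service can assume this role
  # (it uses it to pull images from ECR and to ship logs).
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "eks-fargate-pods.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "fargate_pod_execution" {
  role       = aws_iam_role.fargate_pod_execution.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy"
}

resource "aws_eks_fargate_profile" "cluster_autoscaler" {
  cluster_name           = aws_eks_cluster.this.name
  fargate_profile_name   = "cluster-autoscaler"
  pod_execution_role_arn = aws_iam_role.fargate_pod_execution.arn
  subnet_ids             = var.private_subnet_ids # Fargate pods only run in private subnets

  # Only pods matching this namespace/label selector get scheduled onto Fargate.
  selector {
    namespace = "kube-system"
    labels = {
      "app.kubernetes.io/name" = "cluster-autoscaler"
    }
  }
}
```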
This was initially unsuccessful until we resolved two issues.

Firstly, the control plane needs to be configured to associate the relevant groups (ClusterRoles) of k8s RBAC permissions with that new IAM role. This can be done either through the new EKS "access entries" AWS API (if enabled) or via the old aws-auth ConfigMap; apparently AWS amends the latter automatically, but our Terraform was reverting the change.

Secondly, we needed to update our internal network firewalls (AWS Security Groups) to permit serverless pods to communicate with the CoreDNS service deployment, e.g. have the worker node SG accept connections from the cluster SG, of which the Fargate pod inherits membership. This was required for the process in the pod to be able to resolve cluster-external DNS names (e.g. sts.amazonaws.com, so it can exchange the pod's k8s service account OIDC token for the temporary IAM credentials this controller needs to manage the worker node fleet through cloud APIs). Note that this controller process does not otherwise make direct use of inter-pod networking, which we expected not to function anyway because "serverless" virtual machines lack the k8s daemonsets (such as kube-proxy) that support it; the controller ostensibly only needs to reach the API servers belonging to the cloud and the control plane.

(Rough Terraform sketches of both fixes are included below.)
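For the first issue we ended up with something like this (a sketch assuming access entries are enabled; names are placeholders). From memory, the equivalent legacy fix is an aws-auth mapRoles entry mapping the pod execution role to username system:node:{{SessionName}} with the groups system:bootstrappers, system:nodes and system:node-proxier.

```hcl
# Sketch only: registers the Fargate pod execution role with the cluster so
# the control plane grants it the RBAC groups expected of Fargate "nodes".
resource "aws_eks_access_entry" "fargate_pod_execution" {
  cluster_name  = aws_eks_cluster.this.name
  principal_arn = aws_iam_role.fargate_pod_execution.arn
  type          = "FARGATE_LINUX" # EKS derives the node-level RBAC from this type
}
```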
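For the second issue, the firewall change was essentially DNS (port 53, UDP and TCP) from the EKS-managed cluster security group (which the Fargate pods belong to) into the security group attached to the worker nodes hosting CoreDNS (the SG IDs here are placeholder variables):

```hcl
# Sketch only: allow DNS from Fargate pods (cluster SG) to CoreDNS on the worker nodes.
resource "aws_security_group_rule" "fargate_to_coredns" {
  for_each = toset(["udp", "tcp"])

  type                     = "ingress"
  description              = "DNS from EKS Fargate pods to CoreDNS"
  security_group_id        = var.worker_node_security_group_id # SG of the nodes running CoreDNS
  source_security_group_id = var.cluster_security_group_id     # SG the Fargate pods are members of
  protocol                 = each.value
  from_port                = 53
  to_port                  = 53
}
```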
How does the Fargate pod know to use the IP address of the CoreDNS (kube-dns) ClusterIP Service for resolving DNS queries? (I presume that ClusterIPs are assigned randomly, and that not all clusters use CoreDNS.) Is it baked in at some level, or does it autodetect it somehow? Can it use an external DNS server instead?