I am currently running into a problem connecting to external websites from containers in an rke2 cluster. Whenever I try to connect to a site from inside a container, I receive an unknown host error. For example, if I try pinging google.com from a container I get the error: ping: unknown host google.com. If I then check the CoreDNS logs I see the following:

CoreDNS-1.11.1
linux/amd64, go1.20.14 X:boringcrypto, ae2bbc29
[ERROR] plugin/errors: 2 1281405945466787600.4013347565041043804. HINFO: dial udp [2603:9001:3d00:aec9::1]:53: connect: network is unreachable
[ERROR] plugin/errors: 2 google.com.lan. A: dial udp [2603:9001:3d00:aec9::1]:53: connect: network is unreachable
[ERROR] plugin/errors: 2 google.com.lan. A: dial udp [2603:9001:3d00:aec9::1]:53: connect: network is unreachable
[ERROR] plugin/errors: 2 google.com. A: dial udp [2603:9001:3d00:aec9::1]:53: connect: network is unreachable
[ERROR] plugin/errors: 2 google.com. A: dial udp [2603:9001:3d00:aec9::1]:53: connect: network is unreachable
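
(For reference, I am pulling those logs with something like the command below; the label selector is based on the k8s-app=kube-dns label the chart puts on its resources, so adjust it if your labels differ.)

kubectl logs -n kube-system -l k8s-app=kube-dns --tail=50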

The resolv.conf file of the pods looks like:

search default.svc.cluster.local svc.cluster.local cluster.local lan
nameserver 10.43.0.10
options ndots:5
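
(I am reading that with something along the lines of the command below; <pod-name> is a placeholder for any running pod in the namespace.)

kubectl exec -it <pod-name> -- cat /etc/resolv.conf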

When I run an nslookup on kubernetes.default I get:

Server:         10.43.0.10
Address:        10.43.0.10#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.43.0.1
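
(That lookup was run from inside a pod; one way to reproduce it is with a throwaway debug pod, for example using the dnsutils image from the Kubernetes DNS debugging docs. The image tag and pod name here are just examples.)

kubectl run dnsutils --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 --restart=Never -- sleep infinity
kubectl exec -it dnsutils -- nslookup kubernetes.default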

The CoreDNS ConfigMap looks like this:

Name:         rke2-coredns-rke2-coredns
Namespace:    kube-system
Labels:       app.kubernetes.io/instance=rke2-coredns
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=rke2-coredns
              helm.sh/chart=rke2-coredns-1.29.002
              k8s-app=kube-dns
              kubernetes.io/cluster-service=true
              kubernetes.io/name=CoreDNS
Annotations:  meta.helm.sh/release-name: rke2-coredns
              meta.helm.sh/release-namespace: kube-system

Data
====
Corefile:
----
.:53 {
    errors
    health  {
        lameduck 5s
    }
    ready
    kubernetes   cluster.local  cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
        ttl 30
    }
    prometheus   0.0.0.0:9153
    forward   . /etc/resolv.conf
    cache   30
    loop
    reload
    loadbalance
}
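
(That output is from kubectl describe; you can reproduce it with the command below.)

kubectl describe configmap -n kube-system rke2-coredns-rke2-coredns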

I've tried restarting the CoreDNS deployment, but that has not worked. I am using rke2 version v1.28.10+rke2r1, running on a Rocky Linux 8.10 server.

1 Answer

I found a solution to this problem. If I configure the CoreDNS ConfigMap to forward to Google's name server (8.8.8.8), DNS is able to resolve hostnames. To do this, edit the ConfigMap with:

kubectl edit configmap -n kube-system rke2-coredns-rke2-coredns

and change the data to look something like this:

Corefile:
----
.:53 {
    errors 
    health  {
        lameduck 5s
    }
    ready 
    kubernetes   cluster.local  cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
        ttl 30
    }
    prometheus   0.0.0.0:9153
    forward   . 8.8.8.8
    cache   30
    loop 
    reload 
    loadbalance 
}

The main change is the forward directive: switch it from . /etc/resolv.conf to . 8.8.8.8.
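
(If you want a fallback upstream, the forward plugin also accepts a list of servers; the second address below is Google's secondary resolver, added purely as an illustration.)

forward   . 8.8.8.8 8.8.4.4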

From there, verify that masquerade is enabled in your firewall:

firewall-cmd --list-all
  ... 
  masquerade: yes

If it is not enabled, you can turn it on with:

firewall-cmd --add-masquerade --permanent
firewall-cmd --reload
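
(You can also query the masquerade setting directly instead of scanning the full --list-all output.)

firewall-cmd --query-masquerade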

You can then restart the CoreDNS deployment with:

kubectl rollout restart deployment -n kube-system rke2-coredns-rke2-coredns
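
(To confirm the change took effect, you can wait for the rollout to finish and then test resolution from a throwaway pod; the busybox image tag and pod name are just examples.)

kubectl rollout status deployment -n kube-system rke2-coredns-rke2-coredns
kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- nslookup google.com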
