I'm trying to create a Kubernetes cluster using kubeadm. I spun up an Ubuntu 18.04 server, installed Docker (and made sure docker.service was running), and installed kubeadm, kubelet, and kubectl.
These are the steps I ran:
sudo apt-get update
sudo apt install apt-transport-https ca-certificates curl software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu `lsb_release -cs` test"
sudo apt update
sudo apt install docker-ce
sudo systemctl enable docker
sudo systemctl start docker
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
sudo apt-get install kubeadm kubelet kubectl -y
sudo apt-mark hold kubeadm kubelet kubectl
kubeadm version
swapoff -a
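(Side note: swapoff -a only disables swap until the next reboot. For a persistent setup I believe the usual extra step is to comment out the swap entry in /etc/fstab, roughly:

sudo sed -i '/ swap / s/^#*/#/' /etc/fstab   # illustrative one-liner: prefix any swap line with '#'

but that shouldn't matter for a single kubeadm init attempt.)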
Also, in order to configure the kubelet's cgroup driver (to match Docker's), I edited /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Within the file, I added Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd" and commented out Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml".
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf for reference:
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
#Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
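(For what it's worth, the Kubernetes docs also describe the opposite approach: leaving this drop-in untouched and switching Docker itself to the systemd cgroup driver via /etc/docker/daemon.json, something like:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

followed by sudo systemctl restart docker. I went with the kubelet-side override instead.)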
After editing the file, I ran systemctl daemon-reload and systemctl restart kubelet. kubelet.service was running fine.
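To sanity-check that the two cgroup drivers actually line up, I assume something like this should show systemd on both sides (the grep patterns are just illustrative):

docker info 2>/dev/null | grep -i 'cgroup driver'              # Docker's active cgroup driver
ps aux | grep '[k]ubelet' | tr ' ' '\n' | grep cgroup-driver   # the flag actually passed to kubelet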
Next, I ran sudo kubeadm init --pod-network-cidr=10.244.0.0/16
and got the following error:
root@ip-172-31-1-238:/home/ubuntu# kubeadm init --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ip-172-31-1-238 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.1.238]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ip-172-31-1-238 localhost] and IPs [172.31.1.238 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ip-172-31-1-238 localhost] and IPs [172.31.1.238 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'
After running systemctl status kubelet.service, it seems that kubelet is running fine. However, after running journalctl -xeu kubelet, I got the following logs:
kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
kubelet.go:2422] "Error getting node" err="node \"ip-172-31-1-238\" not found"
kubelet.go:2422] "Error getting node" err="node \"ip-172-31-1-238\" not found"
controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://172.31.1.238:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-1-238?timeout=10s": dial tcp 172.31.1.238:6443: connect: connection refused
kubelet.go:2422] "Error getting node" err="node \"ip-172-31-1-238\" not found"
kubelet.go:2422] "Error getting node" err="node \"ip-172-31-1-238\" not found"
kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-1-238"
kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.1.238:6443/api/v1/nodes\": dial tcp 172.31.1.238:6443: connect: connection refused" node="ip-172-31-1-238"
kubelet.go:2422] "Error getting node" err="node \"ip-172-31-1-238\" not found"
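The way I read this: dial tcp 172.31.1.238:6443: connect: connection refused means nothing is listening on the API server port, i.e. the kube-apiserver static pod never came up, and the repeated "node not found" lines look like a consequence of that rather than the root cause. Following the hint in the kubeadm output, I assume the next step would be to inspect the control-plane containers directly:

docker ps -a | grep kube | grep -v pause   # list Kubernetes containers, including exited ones
docker logs CONTAINERID                    # CONTAINERID taken from the previous command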
Versions:
Docker: Docker version 20.10.12, build e91ed57
Kubeadm: {Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:39:51Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}
I'm not sure whether this is a connectivity issue between the kube-apiserver and the kubelet.
Does anyone know how to fix this?