
I configured EKS, and when I perform a deployment, the pod always stays in the Pending state.

FailedScheduling appears. My node group was created with 2 nodes (t2.micro, 20 GB each).

Even the simplest deployment does not work. What is the reason for the "FailedScheduling" error?

$ kubectl get all
NAME                                READY   STATUS    RESTARTS   AGE
pod/my-flask-app-5c9f644594-6c7v6   0/1     Pending   0          22m
pod/nginx                           0/1     Pending   0          6m44s

NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/kubernetes         ClusterIP   10.100.0.1       <none>        443/TCP    33m
service/my-flask-service   ClusterIP   10.100.173.179   <none>        5000/TCP   22m

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-flask-app   0/1     1            0           22m

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/my-flask-app-5c9f644594   1         1         0       22m
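
For reference, the deployment and service look roughly like this (a minimal sketch matching the kubectl output above - same names, image, labels, and port; any other fields are assumed defaults):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-flask-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-flask-app
  template:
    metadata:
      labels:
        app: my-flask-app
    spec:
      containers:
      - name: my-flask-container
        image: 992382690139.dkr.ecr.eu-west-3.amazonaws.com/my_monitoring_app_image:latest
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: my-flask-service
spec:
  selector:
    app: my-flask-app
  ports:
  - port: 5000
    targetPort: 5000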

I've tried to describe the pod and got this:

$ kubectl describe pod my-flask-app-5c9f644594-6c7v6
Name:             my-flask-app-5c9f644594-6c7v6
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=my-flask-app
                  pod-template-hash=5c9f644594
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/my-flask-app-5c9f644594
Containers:
  my-flask-container:
    Image:        992382690139.dkr.ecr.eu-west-3.amazonaws.com/my_monitoring_app_image:latest
    Port:         5000/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n88f4 (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-n88f4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  12s (x5 over 20m)  default-scheduler  0/2 nodes are available: 2 Too many pods. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
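
In case it is relevant: everything already running on the two nodes (including the kube-system pods, which I assume also count toward whatever limit the scheduler is hitting) can be listed with:

$ kubectl get pods -A -o wide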

The nodes:

$ kubectl get nodes
NAME                                         STATUS   ROLES    AGE   VERSION
ip-172-31-14-86.eu-west-3.compute.internal   Ready    <none>   28m   v1.25.16-eks-ae9a62a
ip-172-31-39-9.eu-west-3.compute.internal    Ready    <none>   28m   v1.25.16-eks-ae9a62a
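
Since the message says "Too many pods", I assume the per-node pod limit matters here. The allocatable pod count per node can be checked like this (field paths taken from the node status; I am not sure this is the right thing to look at):

$ kubectl get nodes -o custom-columns=NAME:.metadata.name,MAXPODS:.status.allocatable.pods
$ kubectl describe node ip-172-31-14-86.eu-west-3.compute.internal | grep -A 7 Allocatable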

Thanks for the help.
