
On a DigitalOcean Kubernetes cluster, I am trying to create a pod from a Docker image pulled from Docker Hub, but it keeps failing with CrashLoopBackOff and restarting. The image is a Rails app and is quite large. I suspect that Kubernetes is restarting the pod before the image can finish loading.

Here are the errors:

kubectl get pods -o wide
NAME                                   READY   STATUS             RESTARTS   AGE   IP            NODE
exactpos-deployment-864c768bff-zr86t   0/1     CrashLoopBackOff   10         27m   10.244.66.4   sleepy-nobel-3v8a
nginx-deployment-7d69d57649-97w92      1/1     Running            0          40m   10.244.66.7   sleepy-nobel-3v8a

Events:
Type     Reason     Age                  From                        Message
----     ------     ----                 ----                        -------
Normal   Scheduled  27m                  default-scheduler           Successfully assigned default/exactpos-deployment-864c768bff-zr86t to sleepy-nobel-3v8a
Normal   Pulled     26m (x4 over 27m)    kubelet, sleepy-nobel-3v8a  Successfully pulled image "index.docker.io/markhorrocks/exactpos_web:prod"
Normal   Created    26m (x4 over 27m)    kubelet, sleepy-nobel-3v8a  Created container
Normal   Started    26m (x4 over 27m)    kubelet, sleepy-nobel-3v8a  Started container
Normal   Pulling    26m (x5 over 27m)    kubelet, sleepy-nobel-3v8a  pulling image "index.docker.io/markhorrocks/exactpos_web:prod"
Warning  BackOff    2m (x117 over 27m)   kubelet, sleepy-nobel-3v8a  Back-off restarting failed container

Here is my YAML file:

apiVersion: apps/v1 
kind: Deployment 
metadata: 
  name: exactpos-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: exactpos-deployment
  template:
    metadata:
      labels:
        app: exactpos-deployment
    spec:
      containers:
        - name: exactpos
          image: index.docker.io/markhorrocks/exactpos_web:prod
          imagePullPolicy: Always
          command: [ "echo", "SUCCESS" ]
      imagePullSecrets:
        - name: dockerhub-cred
  • Have you checked the logs for the related pod: kubectl logs exactpos-deployment-864c768bff-zr86t?
    – Nick_Kh
    Commented Dec 24, 2018 at 10:29

1 Answer


I sat on this kind of problem for about 18 hours or more.

The solution was simple, as usual: check that /etc/resolv.conf on your worker nodes has working DNS resolution, like this:

vi /etc/resolv.conf

nameserver 8.8.8.8
nameserver 8.8.4.4

(Pods using the default dnsPolicy resolve through the cluster DNS service, which typically forwards external lookups to the node's /etc/resolv.conf, so a broken resolver on a node can break name resolution inside every container running on it.)
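As a quick sanity check (a minimal sketch, assuming your nodes can pull the busybox image), you can test in-cluster DNS from a throwaway pod before and after the change:

kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default

If the lookup fails or times out, DNS is the problem.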

Then restart the bad pods by deleting them, so they come back up with working DNS.
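For example, using the app label from the Deployment in the question (assuming that label is still what your pods carry):

kubectl delete pod -l app=exactpos-deployment

The Deployment's ReplicaSet will create replacement pods automatically.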

This is a four-year-old question, but I never saw my answer anywhere on the internet.

