
I'm running EKS v1.20.11-eks-f17b81 and I'm facing an issue with an Alpine-based Java container. My deployment sets ephemeral-storage on both the requests and the limits, like this:

- containerPort: 8080
  protocol: TCP
resources:
  limits:
    cpu: 2048m
    ephemeral-storage: 1300Mi
    memory: 4096M
  requests:
    cpu: 500m
    memory: 1024M
    ephemeral-storage: 1000Mi

After a few hours all of the pods end up in the Evicted state, and I'm not able to understand why. If I look at the /var/lib folder on the nodes there's plenty of space, and if I exec into a pod with kubectl exec -ti POD -- sh and run something like du -sch /, I never find more than 300 MB used. What can be happening?
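For what it's worth, the eviction reason can be pulled from the pod's status (POD and NAMESPACE below are placeholders for the actual pod and namespace):

kubectl describe pod POD -n NAMESPACE
# the status message records why the kubelet evicted the pod
kubectl get pod POD -n NAMESPACE -o jsonpath='{.status.message}'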

1 Answer

TL;DR: Either don't use ephemeral-storage limits at all, or set them on all containers in the pod.


I guess you don't have ephemeral-storage limits set for all containers in that particular pod. The eviction manager sums the ephemeral-storage limits of the containers and enforces that sum as the upper limit at the pod level, so a container without a limit contributes nothing to the sum while its usage still counts toward the pod's total.
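As a rough sketch of the second option (the names, images and sizes below are made up, adjust them to your deployment), setting the limit on every container keeps the pod-level sum in line with what you expect:

containers:
- name: java-app                 # your main container
  image: my-java-app:latest      # placeholder image
  resources:
    requests:
      ephemeral-storage: 1000Mi
    limits:
      ephemeral-storage: 1300Mi
- name: sidecar                  # any other container in the same pod
  image: my-sidecar:latest       # placeholder image
  resources:
    requests:
      ephemeral-storage: 100Mi
    limits:
      ephemeral-storage: 200Mi

With a limit on every container, the sum the eviction manager enforces covers the whole pod; with a limit on only some of them, the pod's total usage (including the unlimited containers, logs and emptyDir volumes) can exceed the partial sum even though each limited container stays under its own limit.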

This is the actual code snippet from the kubelet's eviction manager:

func (m *managerImpl) localStorageEviction(pods []*v1.Pod, statsFunc statsFunc) []*v1.Pod {
    evicted := []*v1.Pod{}
    for _, pod := range pods {
        podStats, ok := statsFunc(pod)
        if !ok {
            continue
        }

        // an emptyDir volume exceeded its sizeLimit
        if m.emptyDirLimitEviction(podStats, pod) {
            evicted = append(evicted, pod)
            continue
        }

        // the pod's total usage exceeded the sum of its containers' limits
        if m.podEphemeralStorageLimitEviction(podStats, pod) {
            evicted = append(evicted, pod)
            continue
        }

        // a single container exceeded its own limit
        if m.containerEphemeralStorageLimitEviction(podStats, pod) {
            evicted = append(evicted, pod)
        }
    }

    return evicted
}

As you can see, both podEphemeralStorageLimitEviction and containerEphemeralStorageLimitEviction are used for eviction. As of writing, I don't understand why both are needed (and not just the container-level one).
