
We recently upgraded Kubernetes from 1.21 to 1.22 on AWS EKS. The upgrade itself was successful; however, the associated Prometheus deployments fail with this error:

$ kubectl -n monitoring logs prometheus-operator-***
W0109 20:31:28.602872       1 client_config.go:608] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
{"level":"info","msg":"patching webhook configurations 'prometheus-operator-kube-p-admission' mutating=true, validating=true, failurePolicy=Fail","source":"k8s/k8s.go:39","time":"2023-01-09T20:31:28Z"}
{"err":"the server could not find the requested resource","level":"fatal","msg":"failed getting validating webhook","source":"k8s/k8s.go:48","time":"2023-01-09T20:31:28Z"}

In the events for the Prometheus node exporter:

 Liveness probe failed: Get "http://*******:9100/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

Helm is running on Helm 3. Any thoughts/directions will be appreciated.

    My initial reaction is that you have some Ingresses that are on older API versions. Did you check for deprecations before upgrading?
    – Ackack
    Commented Jan 10, 2023 at 5:55

1 Answer 1


The older Prometheus chart (v11.13.1, from September 2020) is not compatible with Kubernetes v1.22 because the `rbac.authorization.k8s.io/v1beta1` API it uses for ClusterRoleBinding resources is no longer served. The same release also stopped serving `admissionregistration.k8s.io/v1beta1`, which is why the operator fails while getting the validating webhook configuration.

Try the Helm chart v15.1.3 on the Kubernetes v1.22 cluster. Please go through the link for more information.
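A minimal upgrade sketch, assuming the release was installed from the `prometheus-community` repo and is named `prometheus` in the `monitoring` namespace (the release name, namespace, and chart name here are assumptions; adjust them to your setup):

```shell
# Refresh the chart repo and move the release to a chart version
# built against the v1 APIs that Kubernetes 1.22 still serves.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm -n monitoring upgrade prometheus prometheus-community/prometheus --version 15.1.3
```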

As @tilleyc commented, check the Deprecated API Migration Guide, specifically the section on the API versions that the v1.22 release stopped serving, for more details.
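To confirm which API versions the upgraded cluster actually serves, you can filter the `kubectl api-versions` output for the groups affected by the v1.22 removals. The output lines shown in comments are illustrative of a v1.22 cluster, where only the `v1` versions of these groups remain:

```shell
# List every API version the cluster serves and filter for the
# admission webhook group; v1beta1 should be gone on v1.22.
kubectl api-versions | grep admissionregistration
# admissionregistration.k8s.io/v1

# The same check for the RBAC group used by the old chart.
kubectl api-versions | grep rbac.authorization
# rbac.authorization.k8s.io/v1
```

If `v1beta1` entries still appear, the cluster has not finished upgrading; if only `v1` appears, any chart manifests still requesting `v1beta1` must be upgraded.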

