
I'm seeing the following error when running any kubectl command, and no data is returned. The error occurs when accessing a private AWS EKS cluster over a VPN connection.

$ kubectl get pods -A  -v=9 
...
5800 helpers.go:116] Unable to connect to the server: net/http: TLS handshake timeout

The strange thing is that the first time kubectl is run with no discovery cache, it logs the error but still outputs all the pod data. Every run after that fails and returns no data. If I remove the cache directory (rm -rf ~/.kube/cache), kubectl works once and then starts failing again once ~/.kube/cache has been recreated.
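To make the pattern concrete, this is the cycle I keep repeating (the cache path is the default on my machine; adjust if yours lives elsewhere):

$ rm -rf ~/.kube/cache    # drop the discovery cache
$ kubectl get pods -A     # works once, but logs TLS handshake timeouts while the cache is rebuilt
$ kubectl get pods -A     # fails: Unable to connect to the server: net/http: TLS handshake timeout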

For instance, the first time I run kubectl:

$ kubectl get pods -A  -v=9 
I0718 14:52:58.797861   15292 loader.go:372] Config loaded from file:  U:\.kube\config
I0718 14:52:58.806839   15292 round_trippers.go:435] curl -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubectl.exe/v1.22.0 (windows/amd64) kubernetes/c2b5237" 'https://ABCDEFG12345.AB1.us-east-2.eks.amazonaws.com/api?timeout=32s'
I0718 14:53:13.037830   15292 round_trippers.go:454] GET https://ABCDEFG12345.AB1.us-east-2.eks.amazonaws.com/api?timeout=32s  in 14230 milliseconds
I0718 14:53:13.038981   15292 round_trippers.go:460] Response Headers:
I0718 14:53:13.044027   15292 cached_discovery.go:121] skipped caching discovery info due to Get "https://C21D1C150B2FC9F1252A79875E11C4BC.gr7.us-east-2.eks.amazonaws.com/api?timeout=32s": net/http: TLS handshake timeout
I0718 14:53:13.051169   15292 round_trippers.go:435] curl -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubectl.exe/v1.22.0 (windows/amd64) kubernetes/c2b5237" 'https://ABCDEFG12345.AB1.us-east-2.eks.amazonaws.com/api?timeout=32s'
I0718 14:53:23.063199   15292 round_trippers.go:454] GET https://ABCDEFG12345.AB1.us-east-2.eks.amazonaws.com/api?timeout=32s  in 10010 milliseconds
I0718 14:53:23.065975   15292 round_trippers.go:460] Response Headers:
I0718 14:53:23.065975   15292 cached_discovery.go:121] skipped caching discovery info due to Get "https://C21D1C150B2FC9F1252A79875E11C4BC.gr7.us-east-2.eks.amazonaws.com/api?timeout=32s": net/http: TLS handshake timeout
I0718 14:53:23.114872   15292 shortcut.go:89] Error loading discovery information: Get "https://C21D1C150B2FC9F1252A79875E11C4BC.gr7.us-east-2.eks.amazonaws.com/api?timeout=32s": net/http: TLS handshake timeout
I0718 14:53:23.114872   15292 round_trippers.go:435] curl -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubectl.exe/v1.22.0 (windows/amd64) kubernetes/c2b5237" 'https://ABCDEFG12345.AB1.us-east-2.eks.amazonaws.com/api?timeout=32s'
I0718 14:53:23.266940   15292 round_trippers.go:454] GET https://ABCDEFG12345.AB1.us-east-2.eks.amazonaws.com/api?timeout=32s 200 OK in 152 milliseconds
I0718 14:53:23.267518   15292 round_trippers.go:460] Response Headers:
I0718 14:53:23.268082   15292 round_trippers.go:463]     Content-Type: application/json
I0718 14:53:23.268082   15292 round_trippers.go:463]     Content-Length: 166
I0718 14:53:23.268082   15292 round_trippers.go:463]     Date: Mon, 18 Jul 2022 19:53:23 GMT
I0718 14:53:23.268649   15292 round_trippers.go:463]     Audit-Id: dfc5cfe6-08d5-46a8-a61c-632dc3a21613
I0718 14:53:23.268649   15292 round_trippers.go:463]     Cache-Control: no-cache, private
I0718 14:53:23.307493   15292 request.go:1181] Response Body: {"kind":"APIVersions","versions":["v1"],"serverAddressByClientCIDRs":[{"clientCIDR":"0.0.0.0/0","serverAddress":"ip-10-10-1-1.us-east-2.compute.internal:443"}]}
I0718 14:53:23.336044   15292 round_trippers.go:435] curl -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubectl.exe/v1.22.0 (windows/amd64) kubernetes/c2b5237" 'https://ABCDEFG12345.AB1.us-east-2.eks.amazonaws.com/apis?timeout=32s'
I0718 14:53:23.368489   15292 round_trippers.go:454] GET https://ABCDEFG12345.AB1.us-east-2.eks.amazonaws.com/apis?timeout=32s 200 OK in 32 milliseconds
I0718 14:53:23.369867   15292 round_trippers.go:460] Response Headers:
I0718 14:53:23.369867   15292 round_trippers.go:463]     Cache-Control: no-cache, private
I0718 14:53:23.369867   15292 round_trippers.go:463]     Content-Type: application/json
I0718 14:53:23.369867   15292 round_trippers.go:463]     Date: Mon, 18 Jul 2022 19:53:23 GMT
I0718 14:53:23.369867   15292 round_trippers.go:463]     Audit-Id: ba3c50bf-66a3-411e-8763-ec302cc78d03
...

And the command returns pod data. I notice it takes three curl attempts before one returns a 200 OK, and from that point on the requests all appear to succeed.
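To rule kubectl itself out, I can exercise the TLS handshake against the same endpoint directly over the VPN (endpoint name redacted the same way as in the logs above; these are just the commands I would use to test, not captured output):

$ curl -vk 'https://ABCDEFG12345.AB1.us-east-2.eks.amazonaws.com/api?timeout=32s'
$ openssl s_client -connect ABCDEFG12345.AB1.us-east-2.eks.amazonaws.com:443 </dev/null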

After that first successful run, if I run another kubectl command, I get the following error output and no pod data:

$ kubectl get pods -A  -v=9 --insecure-skip-tls-verify=true
I0718 14:51:33.249188    1640 loader.go:372] Config loaded from file:  U:\.kube\config
I0718 14:51:33.427333    1640 round_trippers.go:435] curl -v -XGET  -H "Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json" -H "User-Agent: kubectl.exe/v1.22.0 (windows/amd64) kubernetes/c2b5237" 'https://C21D1C150B2FC9F1252A79875E11C4BC.gr7.us-east-2.eks.amazonaws.com/api/v1/pods?limit=500'
I0718 14:51:47.439207    1640 round_trippers.go:454] GET https://ABCDEFG12345.AB1.us-east-2.eks.amazonaws.com/api/v1/pods?limit=500  in 14011 milliseconds
I0718 14:51:47.440457    1640 round_trippers.go:460] Response Headers:
I0718 14:51:47.453797    1640 helpers.go:235] Connection error: Get https://ABCDEFG12345.AB1.us-east-2.eks.amazonaws.com/api/v1/pods?limit=500: net/http: TLS handshake timeout
F0718 14:51:47.453797    1640 helpers.go:116] Unable to connect to the server: net/http: TLS handshake timeout
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc0000d4001, 0xc000804000, 0x6f, 0xf9)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1026 +0xbf
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x300ff60, 0xc000000003, 0x0, 0x0, 0xc00012c0e0, 0x2, 0x271bb69, 0xa, 0x74, 0x2bef00)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:975 +0x1fb
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x300ff60, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x2, 0xc000788270, 0x1, 0x1)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:735 +0x190
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1500
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0000af450, 0x41, 0x1)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:94 +0x296
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x218bc20, 0xc000004198, 0x2003930)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:178 +0x8b5
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:116
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func2(0xc000376280, 0xc0000dc880, 0x1, 0x4)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get/get.go:180 +0x15d
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc000376280, 0xc0000dc840, 0x4, 0x4, 0xc000376280, 0xc0000dc840)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:856 +0x2c2
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0003bcc80, 0xc0000e0000, 0xc0000de000, 0x6)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:960 +0x375
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:897
main.main()
        _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:49 +0x234

goroutine 19 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x300ff60)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1169 +0x92
created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:420 +0xe5

goroutine 21 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x2003838, 0x2189500, 0xc000574000, 0x6c612079786f7201, 0xc000082ba0)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x119
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x2003838, 0x12a05f200, 0x0, 0x6c74636562756b01, 0xc000082ba0)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x9f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0x2003838, 0x12a05f200, 0xc000082ba0)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x54
created by k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs.InitLogs
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs/logs.go:51 +0x9e

I tried setting the NO_PROXY env variable, but it did not help.
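For reference, this is roughly how I set it (shown for Git Bash and PowerShell on Windows; the value is a placeholder rather than my real setting):

$ export NO_PROXY=.eks.amazonaws.com          # Git Bash
PS> $env:NO_PROXY = ".eks.amazonaws.com"      # PowerShell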

Any thoughts?
