
I have a Kubernetes deployment where a pod accepts a TCP socket connection from a client, and different events fire on connect and disconnect. In the dev environment the server responds to the connection, data, and end events, but in the production environment it only ever registers connection and data; the end event never fires when the client goes away. What could be causing this? Below is the output of netstat -natp from inside the pod after the connection has been severed. Port 8081 is the socket server. The connection eventually times out on its own. A simplified sketch of the server's handlers follows the netstat output.

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 X.244.0.88:43698        X.106.38.107:5672       ESTABLISHED 1/node
tcp6       0      0 :::8080                 :::*                    LISTEN      1/node
tcp6       0      0 :::8081                 :::*                    LISTEN      1/node
tcp6       0      0 X.244.0.88:8081         X.244.0.1:45016         ESTABLISHED 1/node
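
The relevant handlers boil down to roughly this (a simplified sketch, not the exact production code; the logging is only illustrative):

    const net = require('net');

    const server = net.createServer((socket) => {
      console.log('client connected');                      // seen in both dev and production

      socket.on('data', (chunk) => {
        console.log('received', chunk.length, 'bytes');     // seen in both dev and production
      });

      socket.on('end', () => {
        console.log('client closed the connection');        // fires in dev, never in production
      });
    });

    server.listen(8081);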

1 Answer


In the dev environment I was simulating a severed connection by closing the client application. Closing the client cleanly initiates the FIN packet sequence, and that FIN is what fires the 'end' event on the server. In production the connection was severed without a FIN ever arriving, so the server never saw 'end' and the socket sat in ESTABLISHED until it timed out. The solution was to enable keep-alive with a default delay on the server-side socket.
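
A minimal sketch of that change, assuming the plain net module (the 10-second delay is illustrative, not the value used in the original setup):

    const net = require('net');

    const server = net.createServer((socket) => {
      // Enable TCP keep-alive probes so a silently severed connection is
      // detected and the socket is torn down on the server side.
      socket.setKeepAlive(true, 10000); // start probing after 10 s of idle time

      socket.on('error', (err) => {
        // A failed keep-alive probe surfaces as an error (e.g. ETIMEDOUT),
        // followed by 'close'.
        console.log('connection lost:', err.message);
      });

      socket.on('close', () => {
        console.log('socket closed');
      });
    });

    server.listen(8081);

With keep-alive enabled, a dead peer is reported through 'error'/'close' instead of the connection lingering in ESTABLISHED until the system-level timeout.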
