
I have a WebSocket server-side application fronted by an Nginx reverse proxy, and everything works great. The WS app runs in a container, as does Nginx, and the two work together as a service.

Now I'm considering the scale-up rules for the WS app, which are more or less straightforward. But I'm curious about whether or not I'll need to also scale up the Nginx portion of the service. Connections will be established at a relatively low rate, so the scale-up portion is really about maintaining many already-connected (i.e. long-lived) WS connections. I know I can test some of this myself with load tests, but I figured I'd also ask here: once Nginx reverse-proxies to the WS back-end (via the Upgrade & Connection headers) and the socket is connected between the client and my WS app, does Nginx play a role in that continued communication, or is Nginx now 'out of the loop'? I.e., do future packets sent/received (in either direction) get read or handled in any way by the Nginx processes?

If not, then I can likely scale up the WS containers without needing to scale up the Nginx containers in lock-step.

Thanks for any insight!

  • It heavily depends on the rules you write in the nginx config: redirect or simply rewrite. Unless redirected, HTTP clients never know the actual upstream server and must go through the nginx instance for all requests.
    – Lex Li
    Commented Jan 11, 2023 at 4:52
  • I know the initial request goes through Nginx, but I'm curious about the duplex communication once the socket is established, since there are no more HTTP requests being made.
    – mmuurr
    Commented Jan 12, 2023 at 20:10
  • Like I said, all requests go through the proxy as long as you configure it there. Please study more from places like en.wikipedia.org/wiki/Reverse_proxy
    – Lex Li
    Commented Jan 12, 2023 at 21:06
  • Hey thanks for the friendly feedback :-) Once a WebSocket is established, there are no more HTTP requests. Here's a description of sockets for your future use: en.wikipedia.org/wiki/WebSocket Thanks again!
    – mmuurr
    Commented Jan 13, 2023 at 22:22
  • A note for future readers, to clear all doubt, use a tool like Wireshark to capture HTTP/WS packets yourself and see how everything works under the hood.
    – Lex Li
    Commented Jan 14, 2023 at 5:12

1 Answer


I think the answer fundamentally is: once Nginx proxies the connection (via the protocol Upgrade header) and the socket is established, Nginx is not entirely out of the loop. It keeps an open file descriptor to each end of the connection and continues to copy bytes between them, but it performs no further HTTP processing, so it acts essentially as a transparent passthrough. This allows for quite-impressive scaling with only modest Nginx resources, as demonstrated here. This is particularly true for long-lived connections: CPU load should be negligible (since the rate of newly-created sockets is low), and the memory cost of holding those file descriptors and per-connection buffers open is quite low and scales predictably with the number of sockets.
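For reference, a typical Nginx WebSocket proxy block looks roughly like the sketch below (the upstream name, port, and location path are placeholders, not from the question). The `proxy_read_timeout` is worth noting for long-lived connections: Nginx stays in the data path, so an idle socket with no traffic in either direction will be closed once this timeout elapses unless the application sends pings or keepalive frames.

```nginx
upstream ws_backend {
    # placeholder upstream; point this at your WS app container(s)
    server ws-app:8080;
}

server {
    listen 80;

    location /ws/ {
        proxy_pass http://ws_backend;

        # WebSocket upgrade requires HTTP/1.1 and the hop-by-hop
        # Upgrade/Connection headers to be forwarded explicitly
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # long-lived sockets: raise the idle timeout so Nginx doesn't
        # tear down quiet-but-healthy connections (default is 60s)
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;
    }
}
```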
