I have a server-side WebSocket application fronted by an Nginx reverse proxy, and all is working great. The WS app runs in a container, as does Nginx, and the two work together as a service.
Now I'm considering the scale-up rules for the WS app, which are more or less straightforward. But I'm curious whether I'll also need to scale up the Nginx portion of the service. Connections will be established at a relatively low rate, so the scale-up is really about maintaining many already-connected (i.e. long-lived) WS connections. I know I can test some of this myself with load tests, but I figured I'd also ask here: once Nginx reverse-proxies to the WS back-end (via the Upgrade & Connection headers; a sketch of the kind of proxy block I mean is below) and the socket is connected between the client and my WS app, does Nginx play a role in that continued communication, or is Nginx now 'out of the loop'? I.e. do packets sent or received in either direction after the handshake get read or handled in any way by the Nginx processes?
If not, then I can likely scale up the WS containers without needing to scale up the Nginx containers in 'lock-step'.
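For context, the proxy setup I'm describing is essentially the standard WebSocket pass-through. This is just a sketch; the upstream name, port, and path are placeholders rather than my actual config:

```nginx
# Sketch of the WebSocket proxy block in question; names/ports are placeholders.
upstream ws_backend {
    server ws-app:8080;
}

server {
    listen 80;

    location /ws/ {
        proxy_pass http://ws_backend;

        # Needed for the WebSocket handshake: HTTP/1.1 to the upstream,
        # and the Upgrade/Connection headers passed through.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Long-lived connections: raise the idle timeout so Nginx doesn't
        # close quiet sockets after the default 60s.
        proxy_read_timeout 3600s;
    }
}
```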
Thanks for any insight!