I am building a system with a client-server architecture. Millions of clients will make keepalive TCP connections to the server, say on port 443 (assume the clients sit behind several NATs). On the other side there are API handlers that receive requests; on each request I have to selectively submit a JSON message to one particular client over its existing, kept-alive TCP connection. Clients are identified by a certain ID at connection time. I want to know a really scalable (best-practice) way to:

1. handle millions of live TCP connections and selectively route messages to these clients, and
2. detach the keepalive-connection runtime from the API handlers, separating them via some microservice model (gRPC etc.).
I have a Node.js app that keeps a list of open sockets and does the job for 10 connections within the same runtime context, but this is a single instance (poor performance) and in no way scalable. A minimal sketch of roughly what it does is below.
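This is only an illustration of the current approach, not my exact code; the handshake (the client sends its ID as the first newline-terminated line) and the internal /push endpoint on port 8080 are simplifying assumptions.

```js
// gateway.js — single-process sketch: keep sockets in a Map keyed by client ID
// and push JSON to one of them on demand. Node.js core modules only.
const net = require('net');
const http = require('http');

const socketsById = new Map(); // clientId -> net.Socket

// TCP listener (port 443 as in the question). The client is assumed to send
// its ID as the first newline-terminated line, then the connection stays open.
const tcpServer = net.createServer((socket) => {
  socket.setKeepAlive(true, 30000);
  let clientId = null;
  let buffer = '';

  socket.on('data', (chunk) => {
    if (clientId !== null) return; // after registration, inbound data is ignored in this sketch
    buffer += chunk.toString('utf8');
    const newline = buffer.indexOf('\n');
    if (newline === -1) return;
    clientId = buffer.slice(0, newline).trim();
    socketsById.set(clientId, socket);
  });

  socket.on('close', () => {
    if (clientId !== null && socketsById.get(clientId) === socket) {
      socketsById.delete(clientId);
    }
  });
  socket.on('error', () => socket.destroy());
});
tcpServer.listen(443);

// Internal HTTP endpoint used by API handlers: POST /push/<clientId> with a JSON body.
const apiServer = http.createServer((req, res) => {
  const match = req.url.match(/^\/push\/(.+)$/);
  if (req.method !== 'POST' || !match) {
    res.writeHead(404);
    res.end();
    return;
  }
  const socket = socketsById.get(decodeURIComponent(match[1]));
  if (!socket) {
    res.writeHead(404);
    res.end('client not connected\n');
    return;
  }
  let body = '';
  req.on('data', (chunk) => (body += chunk));
  req.on('end', () => {
    socket.write(body + '\n'); // forward the JSON message to the connected client
    res.writeHead(202);
    res.end();
  });
});
apiServer.listen(8080);
```

Everything lives in one process, so both the connection count and the routing table die with that process, which is exactly the limitation I want to get past.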
WebSockets are an option, but I want to know a better way to handle such TCP connections in a distributed environment for true scalability.
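To make the second requirement concrete, here is a sketch of the kind of split I have in mind, where the API handler holds no sockets at all. The Redis-based connection registry, the conn:<clientId> key scheme, and the plain internal HTTP call (standing in for the gRPC idea to keep the sketch short) are all assumptions for illustration, not an existing design.

```js
// api-handler.js — sketch of the "detached" side: the API handler looks up which
// gateway instance currently owns the client's connection and forwards the JSON
// to that gateway's internal push endpoint from the previous sketch.
const http = require('http');
const Redis = require('ioredis');

const redis = new Redis(); // shared registry: conn:<clientId> -> "host:port" of the owning gateway

async function pushToClient(clientId, jsonMessage) {
  const gatewayAddr = await redis.get(`conn:${clientId}`);
  if (!gatewayAddr) throw new Error(`client ${clientId} is not connected`);
  const [host, port] = gatewayAddr.split(':');

  const body = JSON.stringify(jsonMessage);
  await new Promise((resolve, reject) => {
    const req = http.request(
      {
        host,
        port,
        method: 'POST',
        path: `/push/${encodeURIComponent(clientId)}`,
        headers: { 'Content-Type': 'application/json' },
      },
      (res) => {
        res.resume(); // drain the response body
        if (res.statusCode === 202) resolve();
        else reject(new Error(`gateway responded ${res.statusCode}`));
      }
    );
    req.on('error', reject);
    req.end(body);
  });
}

// Each gateway instance would keep its ownership record fresh while the socket is open, e.g.:
//   await redis.set(`conn:${clientId}`, 'gateway-3.internal:8080', 'EX', 60);
// and refresh it on a timer, so a crashed gateway's entries expire on their own.
```

Is there a standard, proven pattern along these lines (or a better one) for this kind of connection gateway plus routing layer?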