Thursday, January 21, 2021

How to push events from Kafka via WebSockets in a Kubernetes cluster

I am running a serverless application, currently on AWS using Lambda, API Gateway and self-deployed Kafka, that pushes events from Kafka to WebSocket connections (held open by API GW as persistent connections) whose IDs are tracked in MongoDB. So far this works well.

WSClient <- API GW <- Lambda <- Kafka
                         |
                      MongoDB

However, I am interested in migrating from the Lambdas to K8s, and I am wondering how that might work. As I understand it, one issue is making the WebSocket clients sticky through the ingress; I've found plenty of examples for that, and they would solve problem 1 (always hitting the same server). However, there is also problem 2: there needs to be a Kafka consumer service that pushes messages out to the WebSockets (in this case to one specific client), so that Kafka consumer needs to talk to exactly the server instance that holds the connection with that WS client.

WS Client <- Ingress <-+- WS Server pod   ...
                       |  ...                  <-???- Kafka Consumer Service <- Kafka
                       `- WS Server pod   ...
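To make problem 2 concrete, the core of the `???` hop is a lookup: some shared registry (MongoDB in my setup) maps each connection ID to the pod currently holding that socket, and the Kafka consumer forwards each event to that pod. A minimal in-memory sketch of that routing logic, with all names illustrative:

```python
# Hypothetical sketch of the connection-routing step in problem 2.
# ConnectionRegistry stands in for the MongoDB collection of live
# connections; `forward` stands in for an internal call (e.g. HTTP)
# from the Kafka consumer to the WS server pod.

class ConnectionRegistry:
    """In-memory stand-in for the shared connection store."""

    def __init__(self):
        self._conns = {}  # connection_id -> address of the pod holding it

    def register(self, connection_id, pod_addr):
        # Called by a WS server pod when a client connects.
        self._conns[connection_id] = pod_addr

    def unregister(self, connection_id):
        # Called by a WS server pod when the client disconnects.
        self._conns.pop(connection_id, None)

    def lookup(self, connection_id):
        return self._conns.get(connection_id)


def route_event(registry, event, forward):
    """Kafka consumer side: find the pod holding the target connection
    and forward the event to it. Returns False if no pod holds it."""
    pod_addr = registry.lookup(event["connection_id"])
    if pod_addr is None:
        return False  # client gone: drop the event or dead-letter it
    forward(pod_addr, event)
    return True
```

In a real cluster, each WS server pod would register connections under its own pod IP (obtainable via the Downward API), and `forward` would be an HTTP or gRPC call directly to that pod.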

In AWS, problem 2 is solved by persisting the connection ID, looking it up when a message arrives, and then sending the message to the API GW, which keeps the connection open under that persistent ID. So far this works.
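The AWS-side delivery step can be sketched as follows. `api_gw_send` stands in for the API Gateway management API call that pushes data to a connection (boto3's `post_to_connection`); `store` stands in for the MongoDB lookup. The stale-connection handling mirrors API GW returning 410 Gone for a dead connection ID; names and shapes are illustrative.

```python
# Hypothetical sketch of the Lambda delivery path: look up the persistent
# connection ID, push the message through API GW, and prune the ID if the
# connection turns out to be gone.

class StaleConnection(Exception):
    """Stands in for API GW's GoneException (HTTP 410) for dead connections."""


def deliver(store, user_id, message, api_gw_send):
    """Look up the connection ID for `user_id` and push `message` to it.
    Returns True on success, False if no live connection exists."""
    connection_id = store.get(user_id)
    if connection_id is None:
        return False  # user has no known connection
    try:
        api_gw_send(ConnectionId=connection_id, Data=message)
    except StaleConnection:
        del store[user_id]  # prune the dead connection ID from the store
        return False
    return True
```

The same lookup-then-push shape carries over to K8s almost unchanged; only the "push" target changes from API GW to whichever pod holds the socket.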

What would be potential scenarios / building blocks to achieve the same solution in K8s?
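One building block I've come across that seems relevant: a headless Service. Setting `clusterIP: None` makes cluster DNS return the individual pod IPs instead of a single load-balanced virtual IP, so the Kafka consumer can address the exact pod holding a given connection. A hedged sketch, with all names illustrative:

```yaml
# Hypothetical headless Service for the WS server pods. With clusterIP: None,
# DNS resolves ws-server to the individual pod IPs rather than load-balancing,
# so a Kafka consumer can target the specific pod recorded for a connection.
apiVersion: v1
kind: Service
metadata:
  name: ws-server
spec:
  clusterIP: None        # headless: per-pod DNS records, no load balancing
  selector:
    app: ws-server       # assumes the WS server pods carry this label
  ports:
    - port: 8080
```

Each WS server pod could then store its own pod IP (exposed via the Downward API) alongside the connection ID in MongoDB, playing the same role the persistent connection ID plays with API GW.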



