Wednesday, January 23, 2019

Python web server benchmarks lower than expected

I have a CherryPy application server sitting behind nginx as a reverse proxy. CherryPy runs as one daemon process per core (my server has 4 cores), and nginx (also configured with 4 workers) load-balances incoming requests across them.
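
For reference, the nginx side of the setup looks roughly like the sketch below; the backend ports (8081-8084) and the header directives are placeholders, not my exact config:

 worker_processes 4;

 events { worker_connections 1024; }

 http {
     upstream cherrypy_backends {
         # one CherryPy daemon per core (ports are placeholders)
         server 127.0.0.1:8081;
         server 127.0.0.1:8082;
         server 127.0.0.1:8083;
         server 127.0.0.1:8084;
     }

     server {
         listen 80;
         location / {
             proxy_pass http://cherrypy_backends;
             proxy_set_header Host $host;
             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         }
     }
 }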

I'm benchmarking GET requests against the front page of my webapp using hey. With 200 concurrent requests and a total of 10k requests, I get about 400-500 rps, and I need to handle at least 10x that. When I look at the histogram:

 Response time histogram:
  0.014 [1] |
  0.721 [9193]  |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  1.429 [693]   |■■■
  2.137 [13]    |
  2.844 [0]     |
  3.552 [88]    |
  4.259 [0]     |
  4.967 [0]     |
  5.674 [0]     |
  6.382 [0]     |
  7.089 [12]    |

It's very weird that some requests take so long (1.5-7 seconds) to process, considering there are no I/O operations involved in the front-page URL handler; it just renders a static template and sends it over.
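
For reference, the numbers above come from an invocation roughly like this (the URL is a placeholder for my front page):

 hey -n 10000 -c 200 http://127.0.0.1/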

How can I get an idea of where my bottleneck is before I go down some rabbit hole of premature optimisation? Is it CherryPy? Python itself? My hardware?
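
One experiment I can think of is pointing hey at a single CherryPy backend directly (bypassing nginx) with a trivial handler, to separate CherryPy/Python overhead from nginx and from my own application code. A minimal sketch, with the port, thread pool size and handler as assumptions rather than my real app:

 import cherrypy

 class Root:
     @cherrypy.expose
     def index(self):
         # trivial handler: no templating, no I/O
         return "hello"

 if __name__ == "__main__":
     cherrypy.config.update({
         "server.socket_host": "127.0.0.1",
         "server.socket_port": 8081,   # placeholder backend port
         "server.thread_pool": 30,     # worker threads in this process
         "environment": "production",  # disables autoreload and screen logging
     })
     cherrypy.quickstart(Root())

If this bare handler also tops out around 400-500 rps, the ceiling is CherryPy/Python (or the machine); if it is much faster, the time is going into my own handler or the nginx hop.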
