py-frameworks-bench's Introduction

Async Python Web Frameworks comparison

Updated: 2022-03-14


This is a simple benchmark for Python async frameworks. Almost all of the frameworks are ASGI-compatible (aiohttp and tornado are the exceptions at the moment).

The objective of the benchmark is not to test deployment options (uvicorn vs. hypercorn, etc.) or databases (ORMs, drivers), but to test the frameworks themselves. The benchmark exercises request parsing (body, headers, form data, query strings), routing, and responses.

Methodology

The benchmark runs as a GitHub Action. According to the GitHub documentation, the hardware specification for the runners is:

  • 2-core vCPU (Intel® Xeon® Platinum 8272CL (Cascade Lake), Intel® Xeon® 8171M 2.1GHz (Skylake))
  • 7 GB of RAM memory
  • 14 GB of SSD disk space
  • OS Ubuntu 20.04

ASGI apps are run from Docker using gunicorn with the uvicorn worker:

gunicorn -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8080 app:app

Applications' source code can be found here.

Results were collected with the wrk utility using the following parameters (15-second duration, 4 threads, 64 open connections):

wrk -d15s -t4 -c64 [URL]

The benchmark includes three kinds of tests:

  1. "Simple" test: accept a request and return an HTML response with a custom dynamic header. The test simulates a single HTML response.

  2. "API" test: check headers, parse path params, query string and JSON body, and return a JSON response. The test simulates a JSON REST API.

  3. "Upload" test: accept an uploaded file and store it on disk. The test simulates multipart form data processing and working with files.

The Results (2022-03-14)

Accept a request and return HTML response with a custom dynamic header

The test simulates just a single HTML response.

Sorted by max req/s

Framework Requests/sec Latency 50% (ms) Latency 75% (ms) Latency Avg (ms)
blacksheep 1.2.5 18546 2.80 4.53 3.41
muffin 0.87.0 16571 3.09 5.17 3.83
sanic 21.12.1 15558 4.70 5.14 4.08
falcon 3.0.1 15554 3.29 5.49 4.08
baize 0.15.0 13880 3.69 6.21 4.58
starlette 0.17.1 13797 3.70 6.16 4.60
emmett 2.4.5 13380 5.54 6.10 4.75
fastapi 0.75.0 9060 5.46 9.79 7.03
aiohttp 3.8.1 7240 8.74 9.01 8.84
quart 0.16.3 3425 18.99 20.08 18.68
tornado 6.1 3232 19.76 19.94 19.81
django 4.0.3 1002 59.00 66.26 63.72

Parse path params, query string, JSON body and return a JSON response

The test simulates a simple JSON REST API endpoint.

Sorted by max req/s

Framework Requests/sec Latency 50% (ms) Latency 75% (ms) Latency Avg (ms)
sanic 21.12.1 10777 6.97 7.67 5.90
blacksheep 1.2.5 10505 4.70 8.16 6.07
muffin 0.87.0 10319 4.79 8.41 6.17
falcon 3.0.1 10133 4.88 8.61 6.28
starlette 0.17.1 8135 6.03 10.76 7.83
emmett 2.4.5 7091 7.17 11.58 9.12
baize 0.15.0 6581 9.96 10.24 9.71
fastapi 0.75.0 5882 8.36 15.16 10.85
aiohttp 3.8.1 4496 14.15 14.32 14.24
tornado 6.1 2780 22.95 23.17 23.02
quart 0.16.3 2146 29.42 30.05 29.81
django 4.0.3 883 68.00 71.74 72.37

Parse uploaded file, store it on disk and return a text response

The test simulates multipart form data processing and working with files.

Sorted by max req/s

Framework Requests/sec Latency 50% (ms) Latency 75% (ms) Latency Avg (ms)
blacksheep 1.2.5 5604 8.87 15.77 11.40
sanic 21.12.1 5025 10.44 16.83 12.72
muffin 0.87.0 4425 11.14 19.99 14.43
falcon 3.0.1 3433 14.56 25.48 18.73
baize 0.15.0 2834 21.89 24.48 22.57
starlette 0.17.1 2434 20.10 36.39 26.26
aiohttp 3.8.1 2218 28.81 29.09 28.84
fastapi 0.75.0 2099 23.61 41.91 30.44
tornado 6.1 2067 30.89 31.09 30.95
quart 0.16.3 1746 36.68 37.58 36.63
emmett 2.4.5 1414 41.83 50.86 45.21
django 4.0.3 689 86.45 89.44 92.51

Composite stats

Combined benchmark results

Sorted by completed requests

Framework Requests completed Avg Latency 50% (ms) Avg Latency 75% (ms) Avg Latency (ms)
blacksheep 1.2.5 519825 5.46 9.49 6.96
sanic 21.12.1 470400 7.37 9.88 7.57
muffin 0.87.0 469725 6.34 11.19 8.14
falcon 3.0.1 436800 7.58 13.19 9.7
starlette 0.17.1 365490 9.94 17.77 12.9
baize 0.15.0 349425 11.85 13.64 12.29
emmett 2.4.5 328275 18.18 22.85 19.69
fastapi 0.75.0 255615 12.48 22.29 16.11
aiohttp 3.8.1 209310 17.23 17.47 17.31
tornado 6.1 121185 24.53 24.73 24.59
quart 0.16.3 109755 28.36 29.24 28.37
django 4.0.3 38610 71.15 75.81 76.2
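
The "Requests completed" column appears consistent with summing the three per-test req/s figures and multiplying by the 15-second run length (`wrk -d15s`). A quick check for blacksheep:

```python
# Per-test req/s for blacksheep, taken from the three tables above.
simple, api, upload = 18546, 10505, 5604
duration_s = 15  # wrk -d15s

completed = (simple + api + upload) * duration_s
print(completed)  # 519825, matching the composite table
```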

Conclusion

Nothing here, just some measurements for you.

License

Licensed under an MIT license (see the LICENSE file)

py-frameworks-bench's People

Contributors

abersheeran, alkorgun, dependabot[bot], gi0baro, github-actions[bot], hawkowl, iurisilvio, klen, kludex

py-frameworks-bench's Issues

Framework benchmark results were run on an AWS T2 instance

T2 instances inherently rely on excess CPU cycles and therefore do not provide a reliable performance baseline upon which benchmarks can be run accurately.

Consider running the benchmarks on a dedicated private instance.

Complete benchmarks seem misleading

Since you're varying the ORM based on the HTTP framework, the benchmarks seem very misleading. Some were using peewee, some SQLAlchemy, and Django was using the Django ORM. Couldn't you make the benchmarks all use peewee, so you're benchmarking the web-delivery part of each framework in isolation from ORM performance and features?

add a LICENSE?

Hi,

I'd like to fork the codebase to make some improvements and use it to verify some other behavior, but I noticed this project has no license.

Can you add a license? If there's no strong preference, MIT would be simplest, but really any license that lets me use the code in a derivative work is sufficient.

Thanks for your time!

JSON test is unfair

Some frameworks use simplejson by default (flask), some use the built-in json module (tornado).
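
One way to level the field, sketched here as a suggestion rather than anything the benchmark actually does, is to serialize with the stdlib `json` module explicitly in every app instead of relying on each framework's default encoder:

```python
import json

def make_json_response(data: dict) -> bytes:
    # Explicitly use the stdlib encoder so every framework pays the same
    # serialization cost, regardless of its default (simplejson, orjson, ...).
    # Compact separators match the typical wire format frameworks emit.
    return json.dumps(data, separators=(",", ":")).encode("utf-8")

body = make_json_response({"id": 22, "name": "test"})
# body can then be returned via each framework's raw-bytes response type
```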

aiohttp benchmark stated incorrectly

The ORM benchmark seems to state the 50% time for aiohttp incorrectly: the value listed in the table is actually the min (as shown on the graph). The actual 50% value is 902.72. Screenshot below for details:

Missing requests/s metric

The benchmark is missing the number of requests/s. I ran a test similar to the JSON one, and yes, flask is faster than aiohttp with those wrk settings. However, aiohttp handles more than 11 times as many requests as flask: 4098 vs 359 in my case.

CONN_MAX_AGE not set

The Django database connection doesn't have CONN_MAX_AGE set, which means it makes a new database connection on every request (the default behaviour) rather than keeping the TCP connection open, which can be very slow.
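
For reference, Django's `CONN_MAX_AGE` is set per database in `settings.py`; a minimal sketch (the engine and database name here are illustrative, not taken from the benchmark):

```python
# settings.py sketch: keep database connections open between requests.
# The default CONN_MAX_AGE of 0 opens a new connection per request;
# a positive value reuses connections for up to that many seconds.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "bench",
        "CONN_MAX_AGE": 60,
    }
}
```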

make provision failure on git, make

I'm getting a TASK [bench.setup git] which fails because of "Failed to find required executable git".

I did a vagrant ssh and apt-get install git and then it passed, only to fail on make in TASK [bench.setup command].

benchmarks do not reflect recommended usage

Several frameworks' apps are written in a manner that does not reflect the usage suggested in their docs.

For example, the FastAPI documentation suggests using File() to have files extracted from forms and injected: https://fastapi.tiangolo.com/tutorial/request-files/. But in the benchmarks, Request.form is used just like in Starlette. This is obviously going to be faster, but then it's not using any of the features the framework provides on top of Starlette. If that's what users were doing, they'd be using Starlette.

I suggest that the apps get restructured to better reflect the documented usage of each framework. That is much more useful to users (and framework developers) and would help highlight issues like tiangolo/fastapi#4187

Sanic is benchmarked incorrectly

It's easier to show than to explain; see the screenshots.

gunicorn -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8080 app:app is the slowest way to run it. I don't understand what is going on and don't want to dig into it, but the logo is printed on every request, hence the result.

Screenshot from 2022-02-07 02-28-06

gunicorn -k sanic.worker.GunicornWorker -b 0.0.0.0:8080 app:app is a more reasonable option. Again, I didn't investigate, but the access_log is printed, which affects the result, and I couldn't immediately find how to disable it.

Screenshot from 2022-02-07 02-28-57

sanic --host 0.0.0.0 --port=8080 --workers=1 app:app is the default and most correct way to run it, giving results on par with blacksheep.

Screenshot from 2022-02-07 02-30-26

And blacksheep, for comparison:

Screenshot from 2022-02-07 02-32-28

make provision fails

make provision
[make] Run Ansible provision
ansible-playbook /media/cristian/Ddrive/tmp/py-frameworks-bench/deploy/setup.yml -i /media/cristian/Ddrive/tmp/py-frameworks-bench/deploy/inventory.ini -l vagrant -vv
ERROR: problem running /media/cristian/Ddrive/tmp/py-frameworks-bench/deploy/inventory.ini --list ([Errno 8] Exec format error)
make: *** [provision] Error 1

Release a 2018/2019 edition

It would be awesome if the Python Web Framework Benchmarks were updated to reflect the current state of affairs (2018/2019).
