
app-servers's Introduction

Scope

The idea behind this repository is to benchmark HTTP server implementations across different languages.

Hello World

The application I tested is minimal: the HTTP version of the Hello World example.
This approach allows including languages I barely know, since it is pretty easy to find such an implementation online.
If you're looking for more complex examples, you will have better luck with the TechEmpower benchmarks.

Disclaimer

Please do take the following numbers with a grain of salt: it is not my intention to promote one language over another based on micro-benchmarks.
Indeed, you should never pick a language based solely on its presumed performance.

Languages

I have become lazy over the years and only adopt languages I can install via Homebrew (sorry, Oracle/MS). This also allows me to benchmark them in a single session, using an environment that is as neutral as possible. Where possible I relied on the standard library, except where it is not production-ready (e.g. Ruby, Python).

Ruby

Ruby 3.0.0 is used. Ruby is a general-purpose, interpreted, dynamic programming language, focused on simplicity and productivity.

Python

Python 3.9.1 is used. Python is a widely used high-level, general-purpose, interpreted, dynamic programming language.

JavaScript

Node.js version 15.5.0 is used. Node.js is built on Google's V8 JavaScript engine and supports most of the language's new features.

Dart

Dart version 2.10.4 is used. Dart is a VM-based, object-oriented, soundly typed language with a C-style syntax that can optionally be transcompiled into JavaScript.

Elixir

Elixir 1.11.2 is used. Elixir is a functional language that runs on the Erlang VM, with a syntax strongly influenced by Ruby.

Crystal

Crystal 0.35.1 is used. Crystal has a syntax very close to Ruby's, but brings some desirable features such as static typing and ahead-of-time (AOT) compilation.

Nim

Nim 1.4.2 is used. Nim is an AOT-compiled, Python-inspired, statically typed language whose ambitious compiler can emit C, C++, JavaScript, or Objective-C.

Go

Go 1.15.6 is used. Go is an AOT-compiled language that focuses on simplicity and ships a broad standard library with CSP-style concurrency built in.

Tools

Wrk

I used wrk as the load-generation tool.
I measured each application server six times, picking the best run (except for VM-based languages, which demand a longer warm-up).

wrk -t 4 -c 100 -d30s --timeout 2000 http://0.0.0.0:9292

Platform

These benchmarks were recorded on a 2019 MacBook Pro 13″ with these specs:

  • macOS Catalina
  • 1.4 GHz Quad-Core Intel Core i5
  • 8 GB 2133 MHz LPDDR3

RAM and CPU

I measured RAM and CPU consumption using the macOS Activity Monitor dashboard, recording the peak consumption.
For languages relying on pre-forking parallelism, I reported the average consumption by taking a snapshot during the stress period.

Benchmarks

Results

Language     App Server               Requests/sec   RAM (MB)   CPU (%)
Ruby+MJIT    Puma                         36455.88      > 100     > 580
Elixir       Plug with Cowboy             46416.25       50.5     583.8
Ruby         Puma                         47975.36      > 100     > 580
Dart         Dart HttpServer              59335.33      193.2     429.1
JavaScript   Node Cluster                 87208.47      > 200     > 240
Go           Go ServeMux                 103847.10       10.0     429.1
Python       Gunicorn with Meinheld      120105.65       > 40     > 380
Nim          httpbeast                   128257.98       11.4      99.6
Crystal      Crystal HTTP                132699.78        8.5     246.7

Puma

I tested Ruby by using a plain Rack application served by Puma.

Bootstrap

RUBYOPT='--jit' puma -w 8 -t 2 --preload servers/rack_server.ru

Gunicorn with Meinheld

I tested Python by using Gunicorn spawning Meinheld workers, serving a plain WSGI-compliant application.

Bootstrap

cd servers
gunicorn -w 4 -k meinheld.gmeinheld.MeinheldWorker -b :9292 wsgi_server:app

Node Cluster

I used the cluster module included in Node's standard library.

Bootstrap

node servers/node_server.js

Dart HttpServer

I used the async HTTP server embedded in the Dart standard library, compiled with the dart2native AOT compiler.

Bootstrap

dart2native servers/dart_server.dart -k aot
dartaotruntime servers/dart_server.aot

Plug with Cowboy

I tested Elixir by using the Plug library, which provides a Cowboy adapter.

Bootstrap

cd servers/plug_server
MIX_ENV=prod mix compile
MIX_ENV=prod mix run --no-halt

Crystal HTTP

I used Crystal's HTTP server from the standard library, enabling parallelism with the preview_mt flag.

Bootstrap

crystal build -Dpreview_mt --release servers/crystal_server.cr
./crystal_server

httpbeast

To test Nim I opted for the httpbeast library: an asynchronous server relying on the Nim HTTP standard library.

Bootstrap

nim c -d:release --threads:on servers/httpbeast_server.nim
./servers/httpbeast_server

Go ServeMux

I used the HTTP ServeMux from the Go standard library.

Bootstrap

go run servers/servemux_server.go

app-servers's People

Contributors

aurimasniekis, costajob, kblok, nateberkopec, nono, willie, zanderso


app-servers's Issues

ASP.NET Core

Would you accept a PR with an ASP.NET Core server?

Alternative NodeJS HTTP Server

I stumbled upon this which serves as a replacement for the standard library's http module. It claims high performance and efficiency.

Also, could you add a second benchmark for low-end specs (1-core CPU + 512 MB)? I guess that would say a lot, since most individuals/devs (except maybe startups) won't go for a full-blown 16 GB server until they see significant traffic.

Wrk on local host

To keep wrk's load from influencing your servers, you should not run it on the same system.

Vibe.d running debug build + possible Bug in detection threadsPerCpu

Finally some time at the PC. I took a quick look at your configuration and found it odd that the memory usage was so high for D. After checking the command you used to run it, it dawned on me that you're running in debug mode.

Try:

dub build --build=release --compiler=ldc2 

The compiler switch may not be needed if you only have ldc installed. But if you have both installed, it's possible it falls back to the slower (but fast-to-compile) DMD compiler, again in debug mode.


Next issue with the multithreading:

Try:

	auto settings = new HTTPServerSettings;
	settings.port = 9292;
	settings.bindAddresses = ["::1", "0.0.0.0"];

	settings.options |= HTTPServerOption.distribute;
	setupWorkerThreads(4);

	listenHTTP(settings, &hello);
	runApplication();

Try this code. I suspect that your worker thread count is not being set to 4 but only to 1. I think there is a bug with the threads-per-CPU detection on your system, because distribute really needs to use all 4 CPU cores, not just one.

Bootstrap of plug

Can you please clarify how you ran the HTTP server using Elixir?

I started Elixir using the iex interactive console, as described in the Plug README.

Do I understand correctly that you did this?

$ iex -S mix
iex> c "path/to/file.ex"
[MyPlug]
iex> {:ok, _} = Plug.Adapters.Cowboy.http MyPlug, []
{:ok, #PID<...>}

Multi-threaded httpbeast

The Nim compiler does not enable multi-threaded support by default, but that doesn't mean Nim or httpbeast is designed to be single-threaded. It is very easy to enable multi-threading by adding a single option: --threads:on.

nim c -d:release --threads:on servers/httpbeast_server.nim

Just as most of the other languages/frameworks here run concurrently, httpbeast should too.

Things to improve

Hello,

I think there are some things to improve this benchmark:

  • First, the payload for the Elixir version is Hello world! (12 bytes), but the payload for other tests is Hello world (11 bytes). Not a big deal ;-)
  • Then, on the same topic, seeing the Transfer/sec is interesting. It looks like Node and Elixir are sending heavier HTTP headers than Ruby and Go for example.
  • For Elixir, I think it should be more fair to use the Hello World example from Plug instead of a router. Others don't have a router.
  • Last but not least, HTTP pipelining should be disabled for all tests. See wg/wrk#197 for some details.

Node Version

Hello!

I like the idea behind this project - I've seen some people requesting this recently, as the last one I saw Node in was pre-io.js (but, admittedly, I haven't looked at performance reviews a lot).

However, Node v4, the version you're using, is the LTS release. It would be beneficial and fair to also run the latest Node v6 branch; that's what the majority of software is running, aside from a smaller group of enterprise solutions that need the extreme stability of the LTS version.

If you have any questions, I'd be happy to answer or try to pull in appropriate people to answer them!

Test results qualification

Hi, thank you for the detailed description of the benchmark. I've reproduced it step by step, but got a different ratio of results. As I understand, you ran the tests on different days, so maybe you could rerun them all at once?
Besides, I noticed that the response length, headers included, isn't the same across servers. And some of the servers don't even compute the Date header.

Here are my results:

App Server               Throughput (req/s)   Latency in ms (avg/stdev/max)   Response in bytes
Plug with Cowboy                   43564.50            2.24/0.41/ 15.84                     172
Rack with Puma                     32947.19            0.39/0.11/ 17.38                      72
Nim asynchttpserver                60916.85            1.64/0.28/ 26.79                      47
Node Cluster                       44437.09            2.27/0.92/ 58.59                     139
Ring with Jetty                    45946.47            2.39/3.81/129.38                     159
Rust Hyper                         55010.47            1.81/0.30/  4.90                      84
Gunicorn with Meinheld             53362.82            1.87/0.30/  5.36                     154
Servlet3 with Jetty                50938.14            2.37/5.43/147.65                     154
Colossus                           59850.19            1.67/0.28/  7.36                      72
Go ServeMux                        49734.60            1.99/0.42/  9.39                     123
Crystal HTTP                       67656.15            1.48/0.19/  4.56                      95

Dart2Native benchmark

Dart2Native is released. Please update your benchmark and use Dart2Native in place of the Dart VM.

Rust ?

Hi!

It would be nice to maybe include Rust in this little benchmark.

The standard lib does not ship with an HTTP server, but the folks of the Iron framework are doing a very decent job. 😄

Maybe you can also include D + Vibe.d

import vibe.vibe;

void main()
{
	listenHTTP(":8080", &handleRequest);
	runApplication();
}

void handleRequest(HTTPServerRequest req, HTTPServerResponse res)
{
	if (req.path == "/")
		res.writeBody("Hello, World!");
}

The default dmd compiler is the slowest. LDC is fast.

Add launcher scripts and benchmark logging

It would be nice to have launcher scripts for each of the servers (e.g. server.sh and server.bat) that would:

  • start server
  • benchmark with wrk
  • add benchmark results to common log
  • stop server

update dartlang to v2

Thanks for this; I have not looked at everything, but can you update Dart in particular to v2? :)

Clojure analysis

Concurrency and parallelism

Clojure leverages the JVM to deliver parallelism: indeed it parallelizes better than Java, since it uses all of the available cores.

I don't think you can say that at all, actually. It's using more of the CPU, but possibly for additional garbage collection or a property of the differing Jetty configurations. Ring Jetty is built on top of Jetty+Servlets. It's functionally equivalent but adds more work (e.g. allocations) on top of the raw Java/Jetty/Servlet3 test as far as this simple "Hello World" goes.

The configuration of Jetty is also differing between the two tests:

https://github.com/ring-clojure/ring/blob/master/ring-jetty-adapter/src/ring/adapter/jetty.clj#L132-L142

The Ring Jetty adapter ships its own default configuration, whereas your Jetty test does not change Jetty's own defaults. For reference, Jetty's default configuration creates a thread pool with a max of 200 threads (vs. 50 for Ring). At the default thread stack size (1 MB on most platforms), an additional 150 threads (that is, if the thread pool creates that many; it creates them as necessary) is an additional 150 MB of memory, which can account for the extra memory (e.g. if Java/Jetty/Servlets is using 85 threads at steady state and Clojure/Ring is using 50). In general you shouldn't really expect a stable or representative RSS from Java, as it will hold onto memory it claims up to the max heap size. Run your benchmarks for longer and the footprint may well increase unless you cap the heap size.

Fundamentally, with the correct rigor you're just benchmarking the Ring library. But the analysis presented is incorrect.

I got 1285049 requests in 30.05s for Plug server

Running 30s test @ http://127.0.0.1:9292
  4 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.34ms    1.99ms   67.24ms   90.90%
    Req/Sec    10.75k     2.04k    17.82k    69.92%
  1285049 requests in 30.05s, 190.19MB read
Requests/sec:  42761.97
Transfer/sec:  6.33MB

I think there is something wrong with the Plug results.
I'm running this test on a 2011 i7 MacBook Pro.

Test accuracy could be improved

Running the client threads and server threads on the same machine is costing you valuable time during context switches, especially for platforms like OTP.

In other words (as @evadne puts it):

  1. The observer effect.
  2. NUMA.
  3. Context switching.
  4. Lack of core affinity.

... are all affecting your results, probably generating poor performance across all benchmarks, not just OTP!
