softlab-ntua / bencherl

A scalability benchmark suite for Erlang/OTP

Home Page: http://release.softlab.ntua.gr/bencherl/


bencherl's Introduction

bencherl - A scalability benchmark suite for Erlang

How to get ready to build the benchmark suite

Make sure you have the following installed on your machine:

How to build the benchmark suite

$ make all
or just
$ make
to omit the web interface. If you want to add that later, see the next section.

How to build the web interface

The web interface requires OTP R16B03-1 or later.

$ make ui

You have to build the web interface again if you move your bencherl folder:

$ make clean-ui
$ make ui

How to run the benchmark suite

Specify what you want to run and how in conf/run.conf, and then use bencherl to run the benchmark suite.

$ ./bencherl

How to specify a mnemonic name for a run

Use the -m option of the bencherl script.

$ ./bencherl -m everything-but-big

How to specify which benchmarks to run

If you want to specify which benchmarks to run, set the INCLUDE_BENCH variable in conf/run.conf.

INCLUDE_BENCH=bang,big

If you want to specify which benchmarks not to run, set the EXCLUDE_BENCH variable in conf/run.conf.

EXCLUDE_BENCH=dialyzer_bench

The values of both variables are one or more benchmark names separated with commas.

By default, all benchmarks are run.

How to list all benchmarks

Use the -l option of the bencherl script.

$ ./bencherl -l

How to specify the number of schedulers to run benchmarks with

Set the NUMBER_OF_SCHEDULERS variable in conf/run.conf.

The value of this variable can be either one or more integers separated with commas:

NUMBER_OF_SCHEDULERS=1,2,4,8,16,32,64

or a range of integers:

NUMBER_OF_SCHEDULERS=1..16

By default, benchmarks are run with as many schedulers as the number of logical processors.

How to specify the versions/flavors of Erlang/OTP to compile and run benchmarks with

Set the OTPS variable in conf/run.conf.

The value of this variable is one or more alias-path pairs separated with commas.

OTPS="R14B04=/usr/local/otp_src_R14B04,R15B01=/usr/local/otp_src_R15B01"

By default, benchmarks are compiled and run with the erlc and erl programs found in the OS path.

How to specify the erl command-line arguments to run benchmarks with

Set the ERL_ARGS variable in conf/run.conf.

The value of this variable is one or more alias-arguments pairs separated with commas.

ERL_ARGS="SOME_ARGS=+sbt db +swt low,SOME_OTHER_ARGS=+sbt u"

How to specify the number of slave nodes to run the benchmarks with

Set the NUMBER_OF_SLAVE_NODES variable in conf/run.conf.

The value of this variable can be either one or more integers separated with commas:

NUMBER_OF_SLAVE_NODES=1,2,4,6,8

or a range of integers:

NUMBER_OF_SLAVE_NODES=2..4

Benchmarks are executed with at most as many slave nodes as specified in the SLAVE_NODES variable.

By default, benchmarks are run with one master node and no slave nodes.

How to specify the slave nodes to run benchmarks with

Set the SLAVE_NODES variable in conf/run.conf.

The value of this variable is zero or more long or short node names separated with commas.

SLAVE_NODES=somenode@somehost,someothernode@someotherhost

The USE_LONG_NAMES variable determines whether long or short names are expected.

By default, benchmarks are run with no slave nodes.

How to specify the master node to run benchmarks with

Set the MASTER_NODE variable in conf/run.conf.

The value of this variable is the short or the long name of the master node.

MASTER_NODE=somenode@somehost

The USE_LONG_NAMES variable determines whether long or short names are expected.

The default long name of the master node is:

master@`hostname -f`

and its default short name is:

master@`hostname`
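The backticks above are ordinary shell command substitution. As an illustration only (not bencherl code), the default names can be reproduced like this:

```shell
# Default long and short master node names, derived from the local hostname.
echo "master@$(hostname -f)"   # long name, e.g. master@somehost.example.org
echo "master@$(hostname)"      # short name, e.g. master@somehost
```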

How to specify the magic cookie that master and slave nodes share

Set the COOKIE variable in conf/run.conf.

COOKIE="some_cookie"

The default cookie is cookie.

How to specify which version of the benchmarks to run

Set the VERSION variable in conf/run.conf.

The value of this variable can be short, intermediate or long.

VERSION=short

The default version is short.

How to specify whether to produce scalability graphs or not

Set the PLOT variable in conf/run.conf.

The value of this variable can be either 0 (do not produce any scalability graphs) or 1 (produce scalability graphs).

PLOT=1

The default value is 1.

How to specify whether to perform a sanity check or not

Set the CHECK_SANITY variable in conf/run.conf.

The value of this variable can be either 0 (do not perform sanity check) or 1 (perform sanity check).

CHECK_SANITY=1

By default, the sanity of the benchmark execution results is not checked.

How to specify the number of iterations

Set the ITERATIONS variable in conf/run.conf.

The value of this variable is an integer that is greater than or equal to 1.

ITERATIONS=5

The default number of iterations is 1.

What number to use if multiple iterations are run

By default, only the minimum runtime of any of the runs is reported. Set the OUTPUT_FORMAT variable in conf/run.conf to change this behaviour.

The value of this variable can be any of the following five options:

  • min reports only the minimum runtime;
  • max reports only the maximum runtime;
  • avg reports the arithmetic mean of all iterations' runtimes;
  • avg_min_max reports all three of the above, in the order avg, min, max;
  • plain reports the runtime of each iteration individually.

Note that the web interface only shows the first reported value and ignores any further numbers.
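Putting the options above together, a hypothetical conf/run.conf might look like this (every value is illustrative, not a default):

```shell
# Hypothetical conf/run.conf -- all values below are examples, not defaults.
INCLUDE_BENCH=bang,big
NUMBER_OF_SCHEDULERS=1..16
OTPS="R15B01=/usr/local/otp_src_R15B01"
ERL_ARGS="DEFAULT_ARGS=+sbt db"
NUMBER_OF_SLAVE_NODES=0
MASTER_NODE=master@somehost
COOKIE="some_cookie"
VERSION=short
PLOT=1
CHECK_SANITY=0
ITERATIONS=3
OUTPUT_FORMAT=min
```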

What is the result of running the benchmark suite

A new directory is created under the results directory. The name of this directory is the mnemonic name or, if no mnemonic name has been specified, a string that contains the date and time when the run started.

In the results directory, there is one subdirectory for each benchmark that was run, with the same name as the benchmark. Each such directory has three subdirectories:

  • graphs, which contains the scalability graphs;
  • output, which contains the output that the benchmark produced during its execution;
  • measurements, which contains the scalability measurements collected during the execution of the benchmark.

Web interface (UI)

The web interface can be used to view graphs for benchmark results stored in the results folder.

How to start the web server that serves the web interface of the benchmark suite

Use bencherlui with the -u (up) option to start the web server.

$ ./bencherlui -u

By default, the web interface can be found at http://localhost:8001.

You can change the port of the web server in ui/bencherlui/boss.config.

How to stop the web server that serves the web interface of the benchmark suite

Use bencherlui with the -d (down) option to stop the web server.

$ ./bencherlui -d

bencherl's People

Contributors

aggelgian, aronisstav, davidklaftenegger, francesquini, k4t3r1n4, kjellwinblad, kostis, nickie, yiannist


bencherl's Issues

bencherl fails to produce correct data file with mawk

When using mawk 1.3.3, bencherl fails with the following error message after running the ets_bench benchmark:

awk: run time error: not enough arguments passed to printf("(delwait,"mixed=l:0.00%,=u:100.00%=131072"-[ordered_set,14,15,17,1.0,rw,1,{0,0,0}])")
FILENAME="-" FNR=1 NR=1

The problem seems to be the percentage sign; as a result, at least one data file is not produced correctly. It could be fixed by replacing all % signs with %% in the format strings passed to awk's printf function. Everything works correctly when using GNU awk 4.1.0 instead of mawk.
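To illustrate the suggested fix (a standalone sketch, not the actual bencherl code): in an awk printf format string, a literal percent sign must be written as %%, which both mawk and GNU awk accept:

```shell
# A literal '%' in an awk printf FORMAT string must be escaped as '%%';
# an unescaped '%' is what makes mawk raise "not enough arguments".
awk 'BEGIN { printf("mixed=l:%0.2f%%\n", 0.0) }'
```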

issues in ets_test benchmark

See bencherl/bench/ets_test/src/ets_test.erl for the referenced code.

main bug in ets_test

The ets_test microbenchmark has a severe bug that is not caught by bencherl:
When running concurrently, the w-workers insert numbers from N down to 1, while the r-workers look up these same values from N down to 1.
If one r-worker gets ahead, the match in line 75 will fail and the worker process will crash (badmatch). This brings the main process down, as they are linked, and subsequently brings the entire benchmark to a halt.
bencherl does not detect that the benchmark application crashed; changing this would make such errors easier to find in future benchmark applications.
Thus this benchmark currently measures how long it takes for an r-worker to overtake all w-workers.

The easiest way to solve this problem would be to simply remove the match in line 75; reversing lines 65 and 66 would also be an option, but that would change the semantics of the benchmark more than the simpler change.

other problems

Another problem with the ets_test benchmark is that it uses
T = ets:new(x, [public]),
in line 43, which creates an ETS table of type set with read_concurrency and write_concurrency both set to false.
This somewhat counteracts any kind of scaling behaviour expected from this benchmark.

There are further issues with the usefulness of this benchmark, such as all workers having exactly the same access pattern to the table, so the long-term solution may be to replace it with more expressive ETS benchmarks. I will create a pull request for mine once it is stable.

Compile ui error on OTP 17

Running make ui on Erlang/OTP 17 fails with:
Compile Error, "src/lib/rfc4627.erl" -> [{"/home/haimh/git/softlab-ntua/bencherl/ui/bencherlui/src/lib/rfc4627.erl",[{382,erl_parse,["syntax error before: ","unsigned"]}]}]
11:28:46.427 [error] Load Module Error lib_modules : [[{"/home/haimh/git/softlab-ntua/bencherl/ui/bencherlui/src/lib/rfc4627.erl",[{382,erl_parse,["syntax error before: ","unsigned"]}]}]]
11:28:46.723 [error] Compile Error, "src/controller/bencherlui_results_controller.erl" -> [{"/home/haimh/git/softlab-ntua/bencherl/ui/bencherlui/src/controller/bencherlui_results_controller.erl",[{137,erl_parse,["syntax error before: ","length"]}]}]
11:28:46.723 [error] Load Module Error controller_modules : [[{"/home/haimh/git/softlab-ntua/bencherl/ui/bencherlui/src/controller/bencherlui_results_controller.erl",[{137,erl_parse,["syntax error before: ","length"]}]}]]

web-ui incompatible with Erlang/OTP R16B

Kjell's nice UI currently needs to be compiled and started with an older version of Erlang.

For now, a workaround is to change the OTP version used for bencherl after starting the UI.
