aurelia-benchmarks

Performance benchmarks for Aurelia.

To keep up to date on Aurelia, please visit and subscribe to the official blog and our email list. We also invite you to follow us on Twitter. If you have questions, please join our community on Gitter or use Stack Overflow. Documentation can be found in our developer hub. If you would like deeper insight into our development process, please install the ZenHub Chrome or Firefox extension and visit any of our repository boards.

Installing

npm install
jspm install

Important: The jspm install will modify the paths in config.js. Revert these modifications:

(screenshot: the modified paths section of config.js)
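
For reference, a jspm-generated config.js usually contains a paths block along these lines. This is only an illustration; the values committed to this repository's config.js are the ones to restore after running jspm install:

System.config({
  paths: {
    // Illustrative jspm path mappings only; restore whatever this
    // repository's config.js contained before the install.
    "*": "dist/*",
    "github:*": "jspm_packages/github/*",
    "npm:*": "jspm_packages/npm/*"
  }
});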

This issue is being investigated.

Running

Many factors can impact benchmark performance. It's a good idea to close unnecessary browser tabs and applications while running the benchmarks. A reboot prior to running the benchmarks might help if you're seeing inconsistent results.

To start the application:

gulp watch

Enable Chrome's microsecond timer with the --enable-benchmarking argument.

Windows:

start chrome --enable-benchmarking
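
On macOS, the same flag can be passed when launching Chrome (assuming the default application name):

open -a "Google Chrome" --args --enable-benchmarking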

Browse to http://localhost:3000.

Establish a baseline

The benchmark application has two sections: Harness and Tags. Use the Harness section to select benchmarks and add them to the run queue; benchmarks run sequentially. The first time you run the application, establish baseline performance: select all the benchmarks and press Run. The currently running benchmark is highlighted in blue:

(screenshot: the running benchmark highlighted in blue)

When the benchmarks complete, enter a tag name (e.g. "baseline") and press Tag. This will persist the most recent result for each of the selected benchmarks in dist/tags/tags.json.

(screenshot: tagging the results)

[
  {
    "name": "binding-bind",
    "timestamp": "2015-08-31T01:13:07.590Z",
    "userAgent": "Chrome on Windows 8.1",
    "period": 0.005819236144566549,
    "tag": "baseline"
  },
  {
    "name": "binding-interpolation-long",
    "timestamp": "2015-08-31T01:13:13.397Z",
    "userAgent": "Chrome on Windows 8.1",
    "period": 0.004415373401538267,
    "tag": "baseline"
  },
  {
    "name": "binding-interpolation-short",
    "timestamp": "2015-08-31T01:13:19.228Z",
    "userAgent": "Chrome on Windows 8.1",
    "period": 0.006167281012655818,
    "tag": "baseline"
  },
  ...
]

Tag names should identify the version of, or change to, the Aurelia codebase you are performance testing. Test a change across multiple browsers and tag the results with the same tag name to enable cross-browser performance comparisons.

(screenshot: cross-browser comparison)

Test changes

Once you've established a baseline, you're ready to test changes. Use the gulp update-own-deps command to pull in changes from the Aurelia repos. For quick tests, select a subset of the benchmarks using the search feature.

(screenshot: the benchmark search feature)

Click Run to re-run the benchmarks. Results will appear alongside the baseline results. Improved performance is indicated in green. Worse performance is indicated in red.

(screenshot: results compared against the baseline)

Repeat the process of making changes and using gulp update-own-deps until you're satisfied with the results.

Compare tags

The tags section charts the performance of each tagged result. Use the benchmark and user-agent filters to limit the results that appear in the chart.

(screenshot: the tags chart)


Adding Micro Benchmarks

Micro benchmarks are pure JavaScript code. To create a micro benchmark, add a folder under benchmarks/micro containing a file named index.js.

Currently the framework only supports an index.js file that exports a single function. The function receives a deferred object as its parameter and must call deferred.resolve() to complete the test. The framework executes the function repeatedly until it finds a statistically valid result.

Example

export default (deferred) => {
  // Simulate one unit of work, then signal completion to the harness.
  setTimeout(() => deferred.resolve(), 1000);
};
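
A micro benchmark that measures synchronous work follows the same contract. Below is a minimal sketch; the workload is purely illustrative and not part of the existing suite:

export default (deferred) => {
  // Illustrative workload: build and serialize a small object graph.
  const items = [];
  for (let i = 0; i < 1000; i++) {
    items.push({ id: i, label: 'item ' + i });
  }
  JSON.stringify(items);

  // Signal completion so the framework can record the result.
  deferred.resolve();
};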

Adding Macro Benchmarks

Macro benchmarks require a DOM and will be loaded into a hidden iframe for execution.

To add a macro benchmark, create a folder in the benchmarks/macro folder and include an index.html file. This file can load any scripts it needs; for example, index.html can load an Aurelia app.

JavaScript code in a macro benchmark has two requirements:

  1. Invoke postMessage on the parent window and pass "test-start" to announce the start of a test.

  2. Invoke postMessage on the parent window and pass "test-end" to announce the end of a test.

Macro tests will currently only execute once.

Example

The following script is the main.js file for an Aurelia application. It measures the time from the start of configuration until the Aurelia framework fires the aurelia-composed event.

document.addEventListener("aurelia-composed", function (e) {
  // Composition is complete; announce the end of the test to the harness.
  parent.postMessage("test-end", "*");
}, false);

export function configure(aurelia) {
  // Announce the start of the test before configuration begins.
  parent.postMessage("test-start", "*");

  aurelia.use
    .defaultBindingLanguage()
    .defaultResources()
    .eventAggregator();

  aurelia.start().then(a => a.setRoot('app', document.body));
}
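
A macro benchmark doesn't have to use Aurelia; any page that posts the two messages works. The following is a minimal sketch of the inline script such an index.html might contain, with purely illustrative DOM work:

// Inline script in a hypothetical benchmarks/macro/<name>/index.html.
// Announce the start of the timed region to the harness.
parent.postMessage("test-start", "*");

// Illustrative work to measure; replace with the scenario of interest.
var host = document.createElement("div");
document.body.appendChild(host);
for (var i = 0; i < 1000; i++) {
  var row = document.createElement("div");
  row.textContent = "row " + i;
  host.appendChild(row);
}

// Announce the end of the timed region.
parent.postMessage("test-end", "*");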
