Comments (9)

vstinner commented on August 25, 2024

Currently, perf only supports a single benchmark. IMHO that's not enough: most benchmark tools support multiple benchmarks. I wrote perf for the CPython Benchmark Suite http://hg.python.org/benchmarks which contains more than 30 benchmarks, and it may be annoying to have one JSON file per benchmark. At least, it would be nice to be able to store multiple benchmarks in the same JSON file.

I recently changed the internal format of samples in perf from normalized timing per loop iteration to raw timing (the total for all loop iterations). I made this change to more easily detect that a benchmark is unstable, e.g. when a single sample is too short (less than 1 µs):
http://perf.readthedocs.io/en/latest/perf.html#runs-samples-warmups-outter-and-inner-loops

The goal is to detect instability not only while running the benchmark, but also afterwards, when analyzing a stored file.
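
As a rough illustration of the relationship between the two sample formats (the threshold and helper names below are hypothetical, not pyperf's API):

MIN_RAW_SAMPLE = 1e-6  # hypothetical threshold: flag raw samples shorter than 1 µs

def normalized(raw_sample, loops, inner_loops=1):
    # A raw sample is the total time for loops * inner_loops iterations;
    # the normalized value is the timing per single loop iteration.
    return raw_sample / (loops * inner_loops)

def looks_unstable(raw_samples):
    # Because raw samples are stored, a too-short sample can still be
    # detected later, when analyzing a saved file.
    return any(sample < MIN_RAW_SAMPLE for sample in raw_samples)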

I'm not sure that having two formats for samples (raw and normalized) is a good idea.

ionelmc commented on August 25, 2024

I recently changed the internal format of samples in perf from normalized timing per loop iteration to raw timing (the total for all loop iterations).

Not sure what you mean here. What is this normalization? Feel free to point me to the relevant places in the code.

vstinner commented on August 25, 2024

https://github.com/ionelmc/pytest-benchmark/blob/master/tests/test_storage/0001_b87b9aae14ff14a7887a6bbaa9731b9a8760555d_20150814_190343_uncommitted-changes.json

I looked at this example.

The perf JSON format is similar, but the main difference is that perf stores all samples instead of only statistics; statistics are computed on demand. The advantage is that if perf is enhanced tomorrow, you will be able to display your benchmark results differently. For example, the first perf version didn't have histograms and only displayed mean, std dev, min and max. If a new perf version displays IQR tomorrow, you will still be able to re-analyze old benchmark results.
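
For instance, a minimal sketch (standard library only, sample values taken from the example later in this thread) of recomputing statistics on demand from stored samples:

import statistics

samples = [0.271314807, 0.239580888, 0.237846342, 0.23672315]
print("mean:", statistics.mean(samples))
print("std dev:", statistics.stdev(samples))
print("min/max:", min(samples), max(samples))
# A future tool could also compute the IQR or draw a histogram from the
# same stored samples, without re-running the benchmark.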

Another difference is that the perf JSON format only supports one benchmark result per file, while your format supports multiple results in a single file. But this difference is less important; I can enhance perf for that :-)

vstinner commented on August 25, 2024

Another difference is that the perf JSON format only supports one benchmark result per file, while your format supports multiple results in a single file. But this difference is less important; I can enhance perf for that :-)

FYI I introduced BenchmarkSuite in the upcoming perf 0.6. A benchmark suite is made of multiple benchmarks, a benchmark is made of multiple runs, and each run is made of samples (including warmup samples). In a suite, a benchmark is identified by its name, which must be unique.

vstinner commented on August 25, 2024

I just released perf 0.6, which now supports benchmark suites, not only individual benchmarks. A JSON file now stores a benchmark suite; this is format version 2.

I don't want to modify the perf format to store only statistical results; I prefer to store all samples, grouped per run. If you want an interoperable format, I suggest modifying your format to something closer to perf's.

You may reopen the issue if you are interested in modifying your format ;-)

ionelmc commented on August 25, 2024

Do you have an example of the multi-benchmark suite format?

vstinner commented on August 25, 2024

Example of a benchmark JSON file (indented for readability):

  • 3 benchmarks (call_simple, go, telco)
  • each benchmark has 2 runs, each run has 1 warmup sample + 3 raw samples
{
    "benchmarks": {
        "call_simple": {
            "metadata": {
                "description": "Test the performance of simple Python-to-Python function calls",
                "cpu_model_name": "Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz",
                "date": "2016-07-06T18:32:43",
                "python_implementation": "cpython",
                "python_executable": "/usr/bin/python3",
                "platform": "Linux-4.6.3-300.fc24.x86_64-x86_64-with-fedora-24-Twenty_Four",
                "python_version": "3.5.1 (64bit)",
                "perf_version": "0.6",
                "cpu_count": "4",
                "name": "call_simple",
                "aslr": "enabled",
                "timer": "clock_gettime(CLOCK_MONOTONIC), resolution: 1.00 ns",
                "hostname": "selma",
                "duration": "2.4 sec"
            },
            "warmups": 1,
            "runs": [
                [
                    0.271314807,
                    0.239580888,
                    0.237846342,
                    0.23672315
                ],
                [
                    0.271740932,
                    0.243592464,
                    0.237245481,
                    0.236748594
                ]
            ],
            "loops": 1,
            "inner_loops": 20
        },
        "go": {
            "metadata": {
                "description": "Test the performance of the Go benchmark",
                "cpu_model_name": "Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz",
                "date": "2016-07-06T18:32:46",
                "python_implementation": "cpython",
                "python_executable": "/usr/bin/python3",
                "platform": "Linux-4.6.3-300.fc24.x86_64-x86_64-with-fedora-24-Twenty_Four",
                "python_version": "3.5.1 (64bit)",
                "perf_version": "0.6",
                "cpu_count": "4",
                "name": "go",
                "aslr": "enabled",
                "timer": "clock_gettime(CLOCK_MONOTONIC), resolution: 1.00 ns",
                "hostname": "selma",
                "duration": "5.5 sec"
            },
            "warmups": 1,
            "runs": [
                [
                    0.56381269,
                    0.580709772,
                    0.576461901,
                    0.658624167
                ],
                [
                    0.605411489,
                    0.601818249,
                    0.555815024,
                    0.644006216
                ]
            ],
            "loops": 1
        },
        "telco": {
            "metadata": {
                "description": "Test the performance of the Telco decimal benchmark",
                "cpu_model_name": "Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz",
                "date": "2016-07-06T18:32:51",
                "python_implementation": "cpython",
                "python_executable": "/usr/bin/python3",
                "platform": "Linux-4.6.3-300.fc24.x86_64-x86_64-with-fedora-24-Twenty_Four",
                "python_version": "3.5.1 (64bit)",
                "perf_version": "0.6",
                "cpu_count": "4",
                "name": "telco",
                "aslr": "enabled",
                "timer": "clock_gettime(CLOCK_MONOTONIC), resolution: 1.00 ns",
                "hostname": "selma",
                "duration": "1.2 sec"
            },
            "warmups": 1,
            "runs": [
                [
                    0.097667996,
                    0.109760575,
                    0.10790484,
                    0.103240233
                ],
                [
                    0.109492493,
                    0.103709194,
                    0.100290215,
                    0.09819936
                ]
            ],
            "loops": 4
        }
    },
    "version": 2
}
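
A minimal sketch of reading such a suite file (assuming the field names shown above; the file name is hypothetical). Raw samples are divided by loops * inner_loops to get per-iteration timings, and the first "warmups" entries of each run are skipped:

import json

with open("suite.json") as fp:  # hypothetical file name
    suite = json.load(fp)

for name, bench in suite["benchmarks"].items():
    iterations = bench.get("loops", 1) * bench.get("inner_loops", 1)
    warmups = bench.get("warmups", 0)
    for run in bench["runs"]:
        # Each run is a list of raw samples; skip the warmup samples and
        # normalize the rest to a timing per loop iteration.
        values = [raw / iterations for raw in run[warmups:]]
        print(name, values)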

ionelmc commented on August 25, 2024

pytest-benchmark doesn't store all the data by default (it's currently opt-in) because it's designed to save runs quite often. Maybe it could become opt-out, provided there are sensible default settings preventing 50 MB output files.

Here's an example with the full data format:

{
    "benchmarks": [
        {
            "stats": {
                "mean": 2.1958498263759373e-06,
                "iqr_outliers": 3812,
                "stddev_outliers": 163,
                "iqr": 1.00000761449337e-07,
                "ld15iqr": 1.7999991541728377e-06,
                "q1": 1.8999999156221747e-06,
                "outliers": "163;3812",
                "hd15iqr": 2.1999985619913787e-06,
                "stddev": 6.292674923984151e-06,
                "max": 0.0009249150007235585,
                "iterations": 1,
                "q3": 2.0000006770715117e-06,
                "rounds": 58477,
                "data": [
                    4.300000000512227e-06,
                    2.499999027349986e-06,
                    2.099999619531445e-06,
                    1.999998858082108e-06,
                    1.8999999156221747e-06,
                    ....
                    2.70000055024866e-06,
                    2.599999788799323e-06
                ],
                "median": 2.599999788799323e-06,
                "min": 2.4000000848900527e-06
            },
            "fullname": "tests/test_lazy_object_proxy.py::test_perf[objproxies]",
            "group": null,
            "options": {
                "timer": "perf_counter",
                "max_time": 1.0,
                "warmup": false,
                "disable_gc": false,
                "min_time": 5e-06,
                "min_rounds": 5
            },
            "name": "test_perf[objproxies]"
        }
    ],
    "commit_info": {
        "id": "30e14966ebaf8f0fde100e1d43c860c42ac16541",
        "dirty": false
    },
    "version": "3.0.0",
    "machine_info": {
        "processor": "x86_64",
        "system": "Linux",
        "node": "newbox",
        "python_compiler": "GCC 4.8.4",
        "machine": "x86_64",
        "release": "3.16.0-71-generic",
        "python_version": "3.5.1",
        "python_implementation": "CPython"
    },
    "datetime": "2016-07-06T18:50:25.824890"
}
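
For illustration, a small sketch (hypothetical file path) of reading such a result file and recomputing a statistic from the raw "data" array when full data storage is enabled, falling back to the stored statistics otherwise:

import json
import statistics

with open("benchmark_result.json") as fp:  # hypothetical path
    doc = json.load(fp)

for bench in doc["benchmarks"]:
    stats = bench["stats"]
    if "data" in stats:
        # Full data stored: statistics can be recomputed or extended later.
        print(bench["fullname"], "mean:", statistics.mean(stats["data"]))
    else:
        # Stats-only storage: only the pre-computed values are available.
        print(bench["fullname"], "mean:", stats["mean"])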

ionelmc commented on August 25, 2024

Some questions about your format ... in this:

            "runs": [
                [
                    0.271314807,
                    0.239580888,
                    0.237846342,
                    0.23672315
                ],
                [
                    0.271740932,
                    0.243592464,
                    0.237245481,
                    0.236748594
                ]
            ],
            "loops": 1,
            "inner_loops": 20

  • What is inner_loops for? How does it differ from loops?
  • Why do you have a list of lists in runs? What does it mean?
