statcheck's Introduction

This is a Doom source port regression testing system that uses the
thousands of demos in the Compet-N archive to test demo playback.

The demos have been played back in Vanilla Doom running in statdump.exe
to save statistics about the levels that were completed. The output
from statdump.exe (in the form of .txt files) gives a useful set of
expected outputs. A source port able to output the same statistics
data can then be tested by playing back the same demos and comparing
against this expected data set.

This was developed for Chocolate Doom but it may be useful for other
source ports as well.

statcheck's People

Contributors

fragglet


statcheck's Issues

Please provide a summary

I have run ./regression-test twice today to check Crispy Doom for demo compatibility, and each run over the 10902 ZIP files took between one and two hours.

When the tests were finally finished, the small status window simply closed and the following was printed into the terminal:

Traceback (most recent call last):
  File "./regression-test", line 127, in <module>
    pipeline.finish()
  File "/home/greffrath/Debian/statcheck/common.py", line 237, in finish
    self.poll()
  File "/home/greffrath/Debian/statcheck/common.py", line 196, in poll
    callback(p.exit_code, p.stdout, p.stderr)
  File "./regression-test", line 104, in process_complete
    success = check_output(output_filename, relpath, lmpname)
  File "./regression-test", line 55, in check_output
    expected = read_file(expected_file)
  File "./regression-test", line 42, in read_file
    stream = open(filename)
IOError: [Errno 2] No such file or directory: 'output/pwads/mm2/nmare/m215n650.zip/m215n650.txt'

I assume this message indicates a bug, but that is not what I am reporting here, although it is the reason I ran the test a second time.

It would be very nice if the regression-test tool could print some results after finishing the tests. For example:

10902 ZIP files tested
10901 OK (99%)
1 Failed (1%)
This took 1:05:00
[OK]

It would be even better if it could keep track of the successful and failed tests, i.e. store the file names appended with "OK" or "FAIL" in a text file, so I don't have to search the whole terminal buffer for the string "fail".
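The requested behavior could be sketched roughly as follows. This is a hypothetical illustration, not statcheck's actual code: the class name `ResultsLog` and its methods are made up, and the results-file format simply follows the "file name plus OK/FAIL" suggestion above.

```python
import time

class ResultsLog:
    """Hypothetical accumulator for per-ZIP test results (illustrative only)."""

    def __init__(self, path="results.txt"):
        self.path = path
        self.start = time.time()
        self.results = []   # list of (zip_name, passed) tuples

    def record(self, zip_name, passed):
        self.results.append((zip_name, passed))

    def summary(self):
        total = len(self.results)
        ok = sum(1 for _, passed in self.results if passed)
        failed = total - ok
        # Write one "name OK/FAIL" line per tested ZIP, as suggested above.
        with open(self.path, "w") as f:
            for zip_name, passed in self.results:
                f.write("%s %s\n" % (zip_name, "OK" if passed else "FAIL"))
        # Print the requested end-of-run summary to the terminal.
        elapsed = int(time.time() - self.start)
        h, rem = divmod(elapsed, 3600)
        m, s = divmod(rem, 60)
        print("%d ZIP files tested" % total)
        print("%d OK (%d%%)" % (ok, 100 * ok // max(total, 1)))
        print("%d Failed (%d%%)" % (failed, 100 * failed // max(total, 1)))
        print("This took %d:%02d:%02d" % (h, m, s))
```

The `record` calls would go wherever the existing `process_complete` callback decides success or failure, and `summary` at the end of `pipeline.finish()`.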

Please allow running on only a subset of the WADs

As I've written in my previous report, one whole test run can easily take more than an hour, whereas sometimes you may only be interested in the test results for e.g. av.wad -- which comes last. It should thus be possible to run the tests over only a subset of the WADs, e.g. ./regression-test --wads doom2,av.
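A minimal sketch of such a filter, assuming the archive layout visible in the traceback above (paths like pwads/mm2/nmare/m215n650.zip, where the WAD name is the second path component). The option name --wads and the helper functions are assumptions for illustration, not statcheck's actual interface.

```python
import argparse

def parse_wad_filter(argv):
    """Parse a hypothetical --wads doom2,av option into a list of WAD names."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--wads", type=lambda s: s.split(","), default=None,
                        help="comma-separated list of WADs to test, "
                             "e.g. --wads doom2,av")
    return parser.parse_args(argv).wads

def should_test(relpath, wads):
    """Decide whether a demo ZIP should be tested under the given filter.

    relpath is assumed to look like "pwads/av/uvmax/av01.zip";
    with no filter (wads is None), every ZIP is tested.
    """
    if wads is None:
        return True
    return relpath.split("/")[1] in wads
```

The test loop would then simply skip any ZIP for which `should_test` returns False, so a run restricted to av.wad finishes in minutes rather than hours.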

Some expected outputs are zero length

We treat all demos in the Compet-N archive with a broad brush and play them back according to the name of the directory they're located in. But not all demos should be played back with the same game version / IWAD.

This is evident in the output/ directory where there are a bunch of .txt files that are of zero size. Some of these are legitimate: for example, for ExM8 levels, no statistics data gets generated. However, there are some that should not be zero, and that are only zero because they've been generated from the wrong configuration. Examples are:

  • Doom 1 demos that should be played back with the registered version IWAD, or with an .exe version other than Doom 1.9.
  • Final Doom demos recorded with the "alternate" version of the .exe that was included with the Id Anthology. This has different teleport behavior.

We need a separate configuration file with individual "overrides" for these misconfigured demos, so that we can play them back properly and test these different versions.
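Such an overrides file could be as simple as one line per misconfigured demo, mapping its archive path to the settings it actually needs. The format below, along with the keys iwad= and version=, is invented for illustration; statcheck does not define this file yet.

```python
# Hypothetical overrides file format, one demo per line:
#
#   pwads/foo/bar.zip  iwad=doom.wad  version=1.2
#   tnt/somewhere.zip  version=anthology
#
# Blank lines and "#" comments are ignored.

def load_overrides(stream):
    """Parse the hypothetical overrides format into {demo: {key: value}}."""
    overrides = {}
    for line in stream:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        fields = line.split()
        demo, settings = fields[0], fields[1:]
        overrides[demo] = dict(s.split("=", 1) for s in settings)
    return overrides
```

Before playing back a demo, the test runner would look up its path in this dictionary and, if present, replace the defaults inferred from the directory name with the listed settings.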
