ajrouvoet / dummy
Test results management framework
License: MIT License
It could be useful during development to show the test output directly. Looking up the log output with dummy show would then be unnecessary.
Before public release, license dummy as Apache or whatever.
actual behaviour:
$ dummy show
$
yeah great, thanks for that
expected behaviour:
$ dummy show
Let's do this!
$
When running dummy show --plot, it shows the following error/warning, and no plot is shown.
eddie@eddie-ubuntu:~/school_stuff/BAP/turing$ dummy -D show --plot integration/1
> DEBUG Loaded metric `coverage`
> DEBUG Loaded metric `pass/fail`
> DEBUG Loaded statistic `coverage`
> DEBUG Loaded statistic `tests passing`
> DEBUG HEAD
> DEBUG Loaded testresult: `integration/1` result (commit: 0f593e4)
/home/eddie/school_stuff/BAP/dwarv-testing/src/python/local/lib/python2.7/site-packages/matplotlib/axes.py:4601: UserWarning: No labeled objects found. Use label='...' kwarg on individual plots.
warnings.warn("No labeled objects found. "
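This UserWarning comes from matplotlib: legend() was called while none of the plotted lines carries a label. A minimal sketch of the kind of fix, with made-up data and label names (the actual plotting code lives in dummy's show command and is not shown here):

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# hypothetical data: one line per plotted statistic
ax.plot([1, 2, 3], [60, 70, 80], label='coverage')
ax.plot([1, 2, 3], [1, 2, 3], label='tests passing')
ax.legend()  # no UserWarning once every line has a label
plt.show()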
Should be relative to the root.
There are currently a couple of developers assigned to the dummy project who aren't actually contributing shit. Get them out.
Possible exclusees:
-Auke Booij (who's even talked to this guy lately?)
The following error occurs:
{ ce-ws005: ~/src/dwarv_latest } dummy -D run -c a0997 comparison/* Fri 21 11:04
> DEBUG Cleaning `.tmp`
> INFO Loading metrics...
> DEBUG Loading metric `status`
> DEBUG dummy.honeypot.GrepCollector
> DEBUG dummy.honeypot
> DEBUG GrepCollector
> DEBUG Loading metric `log`
> DEBUG dummy.honeypot.LogCollector
> DEBUG dummy.honeypot
> DEBUG LogCollector
> INFO Running tests for target `perf` [1/1]
> INFO ================================================================================
> WARNING Checking out commit `a0997`
> ERROR Could not checkout `a0997`... Sorry!
M dummyconfig.py
M src/eng/cg/cgd/arithmetic.cgd
M tests/common.mak
M tests/comparison/double_imm/main.c
M tests/comparison/double_imm/makefile
M tests/hwconfig/endian_neg/config.v
M tests/hwconfig/endian_neg/makefile
M tests/mem64/read/main.c
Your branch is ahead of 'origin/master' by 8 commits.
> INFO Checked out the original HEAD again!
> CRITICAL Could not checkout commit `a0997`: None
> /home/et3905-1/dummy/utils/git.py(25)checkout()
-> raise GitError( "Could not checkout commit `%s`: %s" % ( committish, e.output ))
(Pdb) w
/home/et3905-1/dummy/__main__.py(226)main()
-> run( args )
/home/et3905-1/dummy/runner/__init__.py(185)run()
-> runner.run( store=(not args.dryrun), target=t, commit=args.commit )
/home/et3905-1/dummy/runner/__init__.py(82)run()
-> git.checkout( commit )
> /home/et3905-1/dummy/utils/git.py(25)checkout()
-> raise GitError( "Could not checkout commit `%s`: %s" % ( committish, e.output ))
(Pdb) l
20 subprocess.check_call(
21 [ 'git', 'checkout', committish, '--' ] + paths,
22 stderr=subprocess.PIPE
23 )
24 except subprocess.CalledProcessError as e:
25 -> raise GitError( "Could not checkout commit `%s`: %s" % ( committish, e.output ))
26
27 def current_branch():
28 """ Get the name of the current branch
29 """
30 try:
(Pdb)
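Note that the CRITICAL line reports the git output as None. Judging from the pdb listing above, the checkout uses subprocess.check_call with stderr=subprocess.PIPE; check_call never captures output, so e.output is always None and the real git complaint (the local modifications listed above) is lost. A sketch of a variant that would surface git's message, reusing the same GitError; this is a suggestion, not the current implementation:

import subprocess

class GitError(Exception):
    """Stand-in for the GitError already defined in dummy/utils/git.py."""

def checkout(committish, paths=()):
    try:
        # check_output captures stdout, and STDOUT folds git's stderr into it
        subprocess.check_output(
            ['git', 'checkout', committish, '--'] + list(paths),
            stderr=subprocess.STDOUT,
        )
    except subprocess.CalledProcessError as e:
        # e.output now contains git's actual complaint instead of None
        raise GitError("Could not checkout commit `%s`: %s" % (committish, e.output))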
When the `all` test suite is configured as follows:
SUITES = {
'all': [
'*'
],
}
It will also include the run.sh
in the tests dir as a test case.
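One possible direction, sketched below; how dummy expands suite globs internally is an assumption here, not taken from the code. The idea is to skip plain files when expanding the patterns, so only directories count as test cases:

import glob, os

def expand_suite(patterns, tests_dir='tests'):
    tests = []
    for pattern in patterns:
        for path in glob.glob(os.path.join(tests_dir, pattern)):
            if os.path.isdir(path):  # run.sh and other plain files are skipped
                tests.append(os.path.relpath(path, tests_dir))
    return tests

# expand_suite(['*']) would list the test directories but not run.sh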
Relative to ROOT, which makes no sense.
Either give it an absolute path, or the test name.
I think the test name makes more sense, with a TEST_PATH env variable.
This is because the runner might NEED the name if not every test has its own directory (which is common).
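A sketch of what that could look like in the runner, assuming it invokes make in the test directory (as the makefiles in the traces above suggest); TEST_PATH and the helper name are proposals, not existing dummy API:

import os, subprocess

def run_test(test_name, tests_dir='tests'):
    env = dict(os.environ)
    # hand the test its absolute directory, derived from the test name
    env['TEST_PATH'] = os.path.abspath(os.path.join(tests_dir, test_name))
    subprocess.check_call(['make', '-C', env['TEST_PATH']], env=env)

That way a test (or a shared makefile) can refer to $(TEST_PATH) instead of guessing a path relative to ROOT.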
there are no easter eggs yet. please add some fun into dummy.
There needs to be a way to add expected results to the result framework. This is needed for negative test cases.
If it is expected that a test outputs FAIL, then dummy should record that as a PASS.
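A sketch of how an expected-result table could invert the recorded outcome for negative test cases; the EXPECTED mapping and the helper below are hypothetical, not existing dummy config:

# hypothetical: mark negative test cases with their expected raw outcome
EXPECTED = {
    'comparison/double_imm': 'FAIL',
}

def record_pass_fail(test_name, outcome):
    expected = EXPECTED.get(test_name, 'PASS')
    return 'PASS' if outcome == expected else 'FAIL'

# record_pass_fail('comparison/double_imm', 'FAIL') -> 'PASS'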
If dummy is run on a folder, such as dummy run mem64/*, then it saves a result.json to 3 folders:
results/.../mem64
.../mem64/read/
.../mem64/write/
The first folder does not contain a test and so should not contain a result.json.
Could be related to Issue #16
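If the extra result.json comes from the parent directory being treated as a test itself (purely an assumption about the cause), one sketch of a fix would be to drop a matched directory whenever other matches live inside it; the helper below is hypothetical, not the actual glob handling:

import os

def leaf_tests(paths):
    paths = [os.path.normpath(p) for p in paths]
    # keep only directories that do not contain another match
    return [p for p in paths
            if not any(q != p and q.startswith(p + os.sep) for q in paths)]

# leaf_tests(['mem64', 'mem64/read', 'mem64/write']) -> ['mem64/read', 'mem64/write']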
Currently the coverage collector's include option has to be of the form:
'coverage' : {
'collector': 'dummy.honeypot.CCoverageCollector',
'kwargs': {
'include': ['*/turingparser.c'],
}
}
The * before the source path is not intuitive. It could be better to automatically prepend SRC_PATH and a wildcard, though I'm not sure whether that would be appropriate in every use case...
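A sketch of what that normalisation could look like; SRC_PATH as a project setting and the helper name are assumptions, not existing dummy API:

import os

SRC_PATH = 'src'  # assumed project setting

def normalize_include(patterns):
    """Prepend a wildcard and SRC_PATH to bare source file patterns."""
    normalized = []
    for pattern in patterns:
        if not pattern.startswith(('*', '/')):
            pattern = os.path.join('*', SRC_PATH, pattern)
        normalized.append(pattern)
    return normalized

# normalize_include(['turingparser.c']) -> ['*/src/turingparser.c']

With that, the config could simply say 'include': ['turingparser.c'].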
To prevent multiple users from interfering with each other.
The only way to run a useful dummy command is by inspecting dummyconfig.py. This should not be necessary.
dummy run
INFO Loading metrics...
INFO Running tests for target `perf` [1/1]
INFO ================================================================================
CRITICAL No tests to run.
should be something along the lines of:
dummy run
INFO Loading metrics...
INFO Running tests for target `perf` [1/1]
INFO ================================================================================
CRITICAL No tests to run.
INFO Configured test suites:
INFO thesis all fixed_pass_fast pass_fast pass fail perf_fast perf
ditto for metrics and targets/contexts
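A sketch of what the runner could print when it finds nothing to run; SUITES comes from dummyconfig.py, and the logger calls are assumptions about dummy's logging setup:

import logging
logger = logging.getLogger('dummy')

def report_no_tests(suites):
    logger.critical("No tests to run.")
    logger.info("Configured test suites:")
    logger.info(" ".join(suites))

# report_no_tests(SUITES.keys()) would produce the listing shown above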
Adding a quickstart functionality could help the user set up Dummy quickly and correctly.
See also 9d2765e76259053553c8e9b82c0a3d45417c2acc
It should be possible to specify a list of sources for coverage analysis.
Terminal output:
{ ce-ws001: ~/src/dwarv_clone } dummy -D run -s comparison/double_imm Mon 17 18:23
> DEBUG Cleaning `.tmp`
> INFO Loading metrics...
> DEBUG Loading metric `pass/fail`
> DEBUG dummy.honeypot.PassFailCollector
> DEBUG dummy.honeypot
> DEBUG PassFailCollector
> INFO Running tests for target `perf` [1/1]
> INFO ================================================================================
> INFO Running pre-test hooks...
> INFO Running test: `comparison/double_imm` [1/1]
> INFO pass/fail: FAIL
> DEBUG Cleaned directory: `results/0060c63/perf/comparison/double_imm`
> DEBUG Stored results in `results/0060c63/perf/comparison/double_imm/result.json`
> DEBUG Removing temporary result directory: `.tmp/results/0060c63/perf/comparison/double_imm`.
> CRITICAL [Errno 2] No such file or directory: '.tmp/results/0060c63/perf/comparison/double_imm'
> /home/et3905-1/.local/lib/python2.7/shutil.py(237)rmtree()
-> names = os.listdir(path)
(Pdb) w
/home/et3905-1/dummy/__main__.py(205)main()
-> run( args )
/home/et3905-1/dummy/runner/__init__.py(185)run()
-> runner.run( store=args.store, target=t, commit=args.commit )
/home/et3905-1/dummy/runner/__init__.py(86)run()
-> self._run_tests( target=target, store=store )
/home/et3905-1/dummy/runner/__init__.py(134)_run_tests()
-> self._store_result( result )
/home/et3905-1/dummy/runner/__init__.py(159)_store_result()
-> shutil.rmtree( temp_result_dir )
> /home/et3905-1/.local/lib/python2.7/shutil.py(239)rmtree()
-> onerror(os.listdir, path, sys.exc_info())
> /home/et3905-1/.local/lib/python2.7/shutil.py(237)rmtree()
-> names = os.listdir(path)
(Pdb) l
234 return
235 names = []
236 try:
237 names = os.listdir(path)
238 except os.error, err:
239 -> onerror(os.listdir, path, sys.exc_info())
240 for name in names:
241 fullname = os.path.join(path, name)
242 try:
243 mode = os.lstat(fullname).st_mode
244 except os.error:
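The temporary result directory has apparently already been moved (or was never created) by the time _store_result tries to clean it up. A minimal sketch of a guard for that cleanup step; the helper name is made up and this is a suggestion, not the actual fix:

import os, shutil

def remove_temp_result_dir(temp_result_dir):
    # the directory may already have been moved into results/; only remove it if it still exists
    if os.path.isdir(temp_result_dir):
        shutil.rmtree(temp_result_dir)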
Older versions of lcov (1.9) do not support --rc and thus cannot enable lcov_branch_coverage=1.
By adding detection for this capability, dummy could also show a clear error message when branch coverage is enabled but the installed lcov does not support it.
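A sketch of such a detection step; the exact format of the lcov --version output is an assumption:

import re, subprocess

def lcov_supports_rc():
    """Return True when the installed lcov is new enough to accept --rc (1.10+)."""
    output = subprocess.check_output(['lcov', '--version'])
    match = re.search(r'version (\d+)\.(\d+)', output.decode('utf-8', 'replace'))
    return bool(match) and (int(match.group(1)), int(match.group(2))) >= (1, 10)

The coverage collector could then either drop the --rc lcov_branch_coverage=1 argument or abort with a clear message when branch coverage is requested.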
When dummy is run on a folder without a test, it still proceeds to run make in that folder.
A JSON formatter could be useful for output to other programs, e.g. for coverage lines.
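A minimal sketch of such a formatter, assuming the collected results are plain dicts like the ones dummy already writes to result.json:

import json

def format_json(results):
    """Serialise collected results (pass/fail, coverage, ...) for consumption by other programs."""
    return json.dumps(results, indent=2, sort_keys=True)

# print(format_json({'comparison/double_imm': {'pass/fail': 'FAIL'}}))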
We all know what the future is going to be like, please rewrite dummy to Haskell.
Right now it:
- checks out
- checks out -- TEST DIRS
- runs the tests
This will fail if files anywhere BETWEEN dummy and the test dir we checked out (test runner/test makefiles/etc) are incompatible with the 'future' tests.
Possible solution is to checkout the whole TESTS_DIR from future and have any test related stuff there.
But if the runner changes in the future this might be incompatible with 'old' project setup.
It's choosing between two evils; let's just choose the lesser one. We have relatively good error catching and fallbacks. The user will have to do the checkouts themselves if their project has a non-trivial setup.
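A sketch of the "checkout the whole TESTS_DIR" option discussed above; the directory name and the helper are assumptions:

import subprocess

def checkout_tests(committish, tests_dir='tests'):
    # check out the complete tests tree (runner glue, makefiles, test cases)
    # from the requested commit, so everything between dummy and the test dirs matches
    subprocess.check_call(['git', 'checkout', committish, '--', tests_dir])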