twosigma / marbles
Read better test failures.
Home Page: https://marbles.readthedocs.io
License: MIT License
Is your feature request related to a problem? Please describe.
This might be particular to our team, but we really like docstrings. When a unit test fails, our team relies on good docstrings to explain what is happening in the test. This is nice because sometimes the code itself is too cryptic to explain why a test is important.
Describe the solution you'd like
I must admit that I'm just starting out with the project and there are parts that I love. I really enjoy the locals view in a failed test. But instead of notes, we might be interested in seeing the docstring attached to the test method. This might be a highly preferential thing on our team, but may just be worth considering.
Describe alternatives you've considered
Part of this problem could be addressed with a note, but a docstring feels like a more natural place to write a long block of text.
Additional context
This definitely falls into the nice-to-have category, something that one would only see if a certain flag is passed from the command line (like --verbose). We're curious what the thoughts are on your side.
ps. Thanks again for speaking at PyData Amsterdam.
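For illustration, unittest already keeps the test method's docstring, so a failure formatter could surface it; a minimal sketch (the class and test names here are only illustrative):

```python
import inspect
import unittest

class MyTestCase(unittest.TestCase):
    def test_thing(self):
        """This docstring explains why the test matters.

        Extra detail a failure message could surface.
        """
        self.assertTrue(True)

case = MyTestCase('test_thing')
# unittest exposes the first docstring line via shortDescription();
# the full text is available through inspect.getdoc().
summary = case.shortDescription()
full_doc = inspect.getdoc(getattr(case, case._testMethodName))
```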
Is your feature request related to a problem? Please describe.
If a local variable has a value whose repr is multiple lines, the first line is indented by a tab plus the variable name, but the rest of the lines are flush with the left margin of the terminal.
This is particularly annoying for tabular data, where the first line, usually column names, is offset from the data rows.
For example:
Source (/home/leif/git/marbles-demo-bikeshare/bikeshare/test_bikeshare.py):
49 long_trips = _data[_data['tripduration'] > pd.Timedelta('24h')]
> 50 self.assertDataFrameEmpty(long_trips)
51
Locals:
long_trips= tripduration starttime stoptime
1495 1 days 13:33:16 2015-01-08 01:06:37.000 2015-01-09 14:39:54.000
4064 1 days 21:25:08 2015-01-17 13:55:59.000 2015-01-19 11:21:08.000
7168 2 days 16:31:59 2015-01-26 17:14:12.000 2015-01-29 09:46:11.000
722 1 days 19:32:50 2015-02-06 15:31:02.000 2015-02-08 11:03:53.000
1388 1 days 02:19:46 2015-02-13 08:04:02.000 2015-02-14 10:23:49.000
5621 6 days 16:32:20 2015-03-17 16:26:54.000 2015-03-24 08:59:15.000
6453 191 days 14:29:48 2015-03-20 02:24:09.000 2015-09-27 16:53:58.000
Describe the solution you'd like
Multiline output reprs should be indented uniformly.
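One way to get uniform indentation, sketched with textwrap (the helper name and the tab-based indent are assumptions, not marbles' actual formatter):

```python
import textwrap

def format_local(name, value, indent='\t'):
    """Render name=value, indenting continuation lines of a multiline repr."""
    rendered = str(value)  # marbles currently uses str(); repr() also works
    first, sep, rest = rendered.partition('\n')
    if not sep:
        return '{}{}={}'.format(indent, name, rendered)
    # Pad continuation lines so they line up under the first line.
    pad = ' ' * len('{}{}='.format(indent, name).expandtabs())
    return '{}{}={}\n{}'.format(indent, name, first, textwrap.indent(rest, pad))
```

With a DataFrame-like multiline value, every data row then lands directly under the header row instead of at the left margin.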
Describe alternatives you've considered
Could also special-case things like pandas.DataFrame, but doing it for anything whose repr contains newlines seems better.
Additional context
Additionally, values are currently rendered with str, not repr, which means strings don't get quotes and so don't look like strings, and there's no space around the equals sign separating the name and value.
I'm not sure whether we want strings to have quotes or not, what do you think @thejunglejane?
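The difference in question:

```python
value = 'hello'
as_str = str(value)    # hello    (no quotes; doesn't look like a string)
as_repr = repr(value)  # 'hello'  (quoted, unambiguously a string)
```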
marbles/marbles/mixins/marbles/mixins/mixins.py
Lines 1192 to 1209 in c1e5373
The real parameter names are level and levels, but the docstring's parameters and raises sections say level1 and level2.
The per-file-ignores plug-in was added to the main flake8 project so we don’t need it anymore. Also, the syntax changed to:
per-file-ignores =
project/__init__.py:F401
setup.py:E121
other_project/*:W9
Is your feature request related to a problem? Please describe.
Currently, marbles doesn't do anything with local variables before putting them in the failure message beyond casting them to strings. When local variables have really long string representations, they can overwhelm the failure message, which is unpleasant to look at but also hinders the failure message's readability.
Describe the solution you'd like
I would like marbles to truncate long local variables, probably using the same length settings as unittest.util.
unittest.util has some repr-truncating functionality that we could use, and/or we could also use reprlib.
Describe alternatives you've considered
A workaround for this is to have the test author make locals with long runtime values internal so they don't show up in the failure message at all. If they want the test consumer to be able to see those local variables, the test author can create their own public local that is the truncated representation.
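A sketch of the reprlib route (the limits chosen here are arbitrary, not marbles' settings):

```python
import reprlib

shortener = reprlib.Repr()
shortener.maxstring = 40  # cap rendered strings at ~40 characters
shortener.maxlist = 4     # show at most 4 list elements

long_value = 'x' * 500
short = shortener.repr(long_value)  # middle of the string elided with '...'
```

unittest.util.safe_repr(obj, short=True) offers similar behavior with unittest's own length cap.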
Because we inspect the stack, marbles is incompatible with 3.4 and below. We can try to fix this at some point, but for now we should add some classifiers to restrict this (and probably add a few others from that list).
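The restriction can be expressed with python_requires plus Trove classifiers; a metadata sketch (the exact version floor and classifier list are assumptions):

```python
# Keyword arguments that would be passed to setuptools.setup(); shown as a
# dict so the sketch is runnable without invoking a build.
SETUP_KWARGS = dict(
    python_requires='>=3.5',
    classifiers=[
        'Programming Language :: Python :: 3 :: Only',
        'Programming Language :: Python :: 3.5',
        'Programming Language :: Python :: 3.6',
    ],
)
```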
Project description is missing here https://pypi.org/project/marbles/
Should:
And document that decision in maintaining.rst
We want to allow users to write notes as triple-quoted multiline strings that are:
class MyTestCase(marbles.core.TestCase):
def test_bar(self):
note = '''Welcome to your new marbles test suite!
We're so glad you're here.
Here are some things you can do:
1. foo
2. bar
'''
self.assertTrue(False, note=note)
marbles.marbles.AnnotatedAssertionError: False is not true
Source:
38
> 39 self.assertTrue(False, note=note)
40
Locals:
Note:
Welcome to your new marbles test suite! We're so glad you're here.
Here are some things you can do:
1. foo
2. bar
----------------------------------------------------------------------
Ran 1 test in 0.002s
FAILED (failures=1)
Currently, in order to achieve 2, users have to align their multiline strings all the way to the left in the source, or they have to do their own dedenting before passing the note to an assertion.
We can do this dedenting for users by calling textwrap.dedent on the note before we do the remaining reformatting.
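Note the usual gotcha: the first line of a triple-quoted string carries no leading whitespace, so textwrap.dedent on the whole note is a no-op; dedenting the remainder separately handles it (the helper name is illustrative):

```python
import textwrap

def dedent_note(note):
    # The first line of a typical triple-quoted string has no indentation,
    # so dedent only the lines after it and rejoin.
    first, sep, rest = note.partition('\n')
    return first + sep + textwrap.dedent(rest)

note = '''Welcome to your new marbles test suite!
    We're so glad you're here.
    1. foo
    '''
cleaned = dedent_note(note)
```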
Logger is not writing anything to a log file.
I have the following in my main method:
if __name__ == '__main__':
log.logger.configure(logfile='./logfile.log',
attrs=['filename', 'date'])
marbles.core.main()
I ran python -m marbles <test.py>
Expected the test run to create a log file in the local directory.
Currently I write my tests in unittest, but use pytest as a test runner because the coloured output makes it easier to see what's going on at a glance (plus it just looks better). This could be added to marbles as well, since nicer output is a major feature. Examples I'm thinking of:
. / red F as tests run:
..........................................F......
ok / red FAIL in verbose mode:
test_not_found (tests.test_users.TestUpdatePassword) ... ok
test_wrong_password (tests.test_users.TestUpdatePassword) ... ok
test_empty_lists (tests.test_utils.TestDropNones) ... ok
test_empty_strings (tests.test_utils.TestDropNones) ... ok
test_no_nones (tests.test_utils.TestDropNones) ... ok
test_nones (tests.test_utils.TestDropNones) ... ok
test_0_to_plus31 (tests.test_utils.TestFormatPhoneNumber) ... FAIL
test_already_formatted (tests.test_utils.TestFormatPhoneNumber) ... ok
test_bad_input (tests.test_utils.TestFormatPhoneNumber) ... ok
test_dashes (tests.test_utils.TestFormatPhoneNumber) ... ok
test_foreign (tests.test_utils.TestFormatPhoneNumber) ... ok
test_plus (tests.test_utils.TestFormatPhoneNumber) ... ok
test_spaces (tests.test_utils.TestFormatPhoneNumber) ... ok
FAIL in the title to make quick scrolling easier (more visible breaks between messages):
======================================================================
FAIL: test_0_to_plus31 (tests.test_utils.TestFormatPhoneNumber)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/johnpaton/some/path/tests/test_utils.py", line 64, in test_0_to_plus31
self.assertEqual(output, 'this will fail')
File "/Users/johnpaton/some/path/venv/subsidy/lib/python3.6/site-packages/marbles/core/marbles.py", line 532, in wrapper
return attr(*args, msg=annotation, **kwargs)
marbles.core.marbles.ContextualAssertionError: '+31612345678' != 'this will fail'
- +31612345678
+ this will fail
Source (/Users/johnpaton/some/path/tests/test_utils.py):
63 output = utils.format_phone_number(input)
> 64 self.assertEqual(output, 'this will fail')
65
Locals:
output=+31612345678
expected=+31612345678
input=0612345678
----------------------------------------------------------------------
This might be a good use case for colorama and/or pygments, at the expense of another dependency.
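A minimal stdlib-only sketch of the idea using raw ANSI escapes (colorama would make the same codes portable to Windows); the token-to-color mapping is an assumption:

```python
GREEN, RED, RESET = '\033[32m', '\033[31m', '\033[0m'

# unittest's short and verbose status tokens, mapped to colors.
STATUS_COLORS = {'.': GREEN, 'ok': GREEN, 'F': RED, 'FAIL': RED, 'E': RED}

def colorize(status):
    """Wrap a unittest status token in a color code, if one is mapped."""
    color = STATUS_COLORS.get(status)
    return '{}{}{}'.format(color, status, RESET) if color else status
```

print(colorize('FAIL')) then renders red in any ANSI-capable terminal.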
python -m unittest -k is new in Python 3.7: https://docs.python.org/3/whatsnew/3.7.html#unittest
We should test that this works with -m marbles.
Several of marbles' documented options cause "unrecognized arguments" errors when I try to use them.
$ python3 -m marbles -h | grep TOP
-t TOP, --top-level-directory TOP
$ python3 -m marbles -t tests
python3 -m marbles: error: unrecognized arguments: -t
$ python3 -m marbles --top-level-directory tests
python3 -m marbles: error: unrecognized arguments: --top-level-directory
$ python3 -m marbles -h | grep START
-s START, --start-directory START
$ python3 -m marbles -s tests
python3 -m marbles: error: unrecognized arguments: -s
$ python3 -m marbles --start-directory tests
python3 -m marbles: error: unrecognized arguments: --start-directory
Expected behavior: the options work as promised.
Is your feature request related to a problem? Please describe.
Many people use pytest instead of unittest for testing their Python code, meaning that many people may ask “how does marbles compare to pytest?”
Describe the solution you'd like
A section in the docs comparing and contrasting marbles and pytest that helps readers make informed decisions about which tool to use. Many of the differences between marbles and pytest are differences between unittest and pytest. It is worthwhile to document our reasons for choosing to expand on unittest instead of pytest.
This might be hard to backfill but we should start one.
This is almost surely something I'm doing wrong or something weird about OSX.
I set up a new development environment using pipenv and ran the tests to make sure that everything was looking good. It mostly is, except the marbles.core.tests.test_main.VersionTestCase.test_stderr test fails:
(marbles-rOKSo-69) jane@18:47:44 marbles (master %) $ tox -e coverage
... # full traceback included below
======================================================================
FAIL: test_stderr (tests.test_main.VersionTestCase)
The error output should be empty.
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/jane/Development/marbles/marbles/core/tests/test_main.py", line 254, in test_stderr
self.assertEqual('', self.stderr)
AssertionError: '' != 'sh: sysctl: command not found\n'
+ sh: sysctl: command not found
----------------------------------------------------------------------
Ran 230 tests in 14.262s
FAILED (failures=1, skipped=4)
Test failed: <unittest.runner.TextTestResult run=230 errors=0 failures=1>
error: Test failed: <unittest.runner.TextTestResult run=230 errors=0 failures=1>
ERROR: InvocationError for command '/Users/jane/Development/marbles/.tox/coverage/bin/pipenv run python -m coverage run marbles/core/setup.py test' (exited with code 1)
____________________________________________________________ summary _____________________________________________________________
ERROR: coverage: commands failed
sysctl does appear to be on my path:
(marbles-rOKSo-69) jane@18:47:44 marbles (master %) $ which sysctl
/usr/sbin/sysctl
(marbles-rOKSo-69) jane@18:46:41 marbles (master) $ tox -e coverage
GLOB sdist-make: /Users/jane/Development/marbles/setup.py
coverage installed: -i https://pypi.org/simple,-e ./marbles/core[dev],-e ./marbles/mixins[dev],coverage==4.5.1,flake8-per-file-ignores==0.6,flake8==3.5.0,mccabe==0.6.1,numpy==1.14.3,pandas==0.20.3,pathmatch==0.2.1,pycodestyle==2.3.1,pyflakes==1.6.0,python-dateutil==2.7.3,pytz==2018.4,six==1.11.0,typing==3.6.4
coverage runtests: PYTHONHASHSEED='589084020'
coverage runtests: commands[0] | pipenv install --dev
Courtesy Notice: Pipenv found itself running within a virtual environment, so it will automatically use that environment, instead of creating its own for any project. You can set PIPENV_IGNORE_VIRTUALENVS=1 to force pipenv to ignore that environment and create its own instead.
Installing dependencies from Pipfile.lock (fa7917)…
🐍 ▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 52/52 — 00:00:06
coverage runtests: commands[1] | pipenv run python -m coverage erase
Courtesy Notice: Pipenv found itself running within a virtual environment, so it will automatically use that environment, instead of creating its own for any project. You can set PIPENV_IGNORE_VIRTUALENVS=1 to force pipenv to ignore that environment and create its own instead.
coverage runtests: commands[2] | pipenv run python -m coverage run marbles/core/setup.py test
Courtesy Notice: Pipenv found itself running within a virtual environment, so it will automatically use that environment, instead of creating its own for any project. You can set PIPENV_IGNORE_VIRTUALENVS=1 to force pipenv to ignore that environment and create its own instead.
running test
running egg_info
writing marbles.core.egg-info/PKG-INFO
writing dependency_links to marbles.core.egg-info/dependency_links.txt
writing namespace_packages to marbles.core.egg-info/namespace_packages.txt
writing requirements to marbles.core.egg-info/requires.txt
writing top-level names to marbles.core.egg-info/top_level.txt
package init file 'marbles/__init__.py' not found (or not a regular file)
package init file 'marbles/core/__init__.py' not found (or not a regular file)
reading manifest file 'marbles.core.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'marbles.core.egg-info/SOURCES.txt'
running build_ext
test_success (tests.test_log.TestAssertionLogging) (use_env=False, use_file=False, use_annotated_test_case=True)
On a successful assertion, do we log information? ... ok
test_capture_test_case_attributes (tests.test_log.TestAssertionLoggingAttributeCapture) (use_env=False, use_file=False, use_annotated_test_case=True)
Can we capture other attributes of a TestCase? ... ok
test_capture_test_case_attributes_on_failure (tests.test_log.TestAssertionLoggingAttributeCapture) (use_env=False, use_file=False, use_annotated_test_case=True)
Can we capture other attributes of a TestCase on failure? ... ok
test_success (tests.test_log.TestAssertionLoggingRespectsEnvOverrides) (use_env=False, use_file=False, use_annotated_test_case=True)
On a successful assertion, do we log information? ... ok
test_verbose_override (tests.test_log.TestAssertionLoggingRespectsEnvOverrides) (use_env=False, use_file=False, use_annotated_test_case=True)
On a successful assertion, do we log information? ... ok
test_capture_test_case_attributes (tests.test_log.TestAssertionLoggingVerboseAttributeCapture) (use_env=False, use_file=False, use_annotated_test_case=True)
Can we capture other attributes of a TestCase? ... ok
test_capture_test_case_attributes_on_failure (tests.test_log.TestAssertionLoggingVerboseAttributeCapture) (use_env=False, use_file=False, use_annotated_test_case=True)
Can we capture other attributes of a TestCase on failure? ... ok
test_failure (tests.test_log.TestAssertionLoggingVerboseFalse) (use_env=False, use_file=False, use_annotated_test_case=True)
On a failed assertion, do we log information? ... ok
test_success (tests.test_log.TestAssertionLoggingVerboseFalse) (use_env=False, use_file=False, use_annotated_test_case=True)
On a successful assertion, do we respect verbose=False? ... ok
test_failure (tests.test_log.TestAssertionLoggingVerboseList) (use_env=False, use_file=False, use_annotated_test_case=True)
On a failed assertion, do we log information? ... ok
test_success (tests.test_log.TestAssertionLoggingVerboseList) (use_env=False, use_file=False, use_annotated_test_case=True)
On a successful assertion, do we respect a verbose list? ... ok
test_failure (tests.test_log.TestAssertionLoggingVerboseTrue) (use_env=False, use_file=False, use_annotated_test_case=True)
On a failed assertion, do we log information? ... ok
test_success (tests.test_log.TestAssertionLoggingVerboseTrue) (use_env=False, use_file=False, use_annotated_test_case=True)
On a successful assertion, do we respect verbose=True? ... ok
test_success (tests.test_log.TestAssertionLogging) (use_env=False, use_file=True, use_annotated_test_case=True)
On a successful assertion, do we log information? ... ok
test_capture_test_case_attributes (tests.test_log.TestAssertionLoggingAttributeCapture) (use_env=False, use_file=True, use_annotated_test_case=True)
Can we capture other attributes of a TestCase? ... ok
test_capture_test_case_attributes_on_failure (tests.test_log.TestAssertionLoggingAttributeCapture) (use_env=False, use_file=True, use_annotated_test_case=True)
Can we capture other attributes of a TestCase on failure? ... ok
test_success (tests.test_log.TestAssertionLoggingRespectsEnvOverrides) (use_env=False, use_file=True, use_annotated_test_case=True)
On a successful assertion, do we log information? ... ok
test_verbose_override (tests.test_log.TestAssertionLoggingRespectsEnvOverrides) (use_env=False, use_file=True, use_annotated_test_case=True)
On a successful assertion, do we log information? ... ok
test_capture_test_case_attributes (tests.test_log.TestAssertionLoggingVerboseAttributeCapture) (use_env=False, use_file=True, use_annotated_test_case=True)
Can we capture other attributes of a TestCase? ... ok
test_capture_test_case_attributes_on_failure (tests.test_log.TestAssertionLoggingVerboseAttributeCapture) (use_env=False, use_file=True, use_annotated_test_case=True)
Can we capture other attributes of a TestCase on failure? ... ok
test_failure (tests.test_log.TestAssertionLoggingVerboseFalse) (use_env=False, use_file=True, use_annotated_test_case=True)
On a failed assertion, do we log information? ... ok
test_success (tests.test_log.TestAssertionLoggingVerboseFalse) (use_env=False, use_file=True, use_annotated_test_case=True)
On a successful assertion, do we respect verbose=False? ... ok
test_failure (tests.test_log.TestAssertionLoggingVerboseList) (use_env=False, use_file=True, use_annotated_test_case=True)
On a failed assertion, do we log information? ... ok
test_success (tests.test_log.TestAssertionLoggingVerboseList) (use_env=False, use_file=True, use_annotated_test_case=True)
On a successful assertion, do we respect a verbose list? ... ok
test_failure (tests.test_log.TestAssertionLoggingVerboseTrue) (use_env=False, use_file=True, use_annotated_test_case=True)
On a failed assertion, do we log information? ... ok
test_success (tests.test_log.TestAssertionLoggingVerboseTrue) (use_env=False, use_file=True, use_annotated_test_case=True)
On a successful assertion, do we respect verbose=True? ... ok
test_success (tests.test_log.TestAssertionLogging) (use_env=True, use_file=True, use_annotated_test_case=True)
On a successful assertion, do we log information? ... ok
test_capture_test_case_attributes (tests.test_log.TestAssertionLoggingAttributeCapture) (use_env=True, use_file=True, use_annotated_test_case=True)
Can we capture other attributes of a TestCase? ... ok
test_capture_test_case_attributes_on_failure (tests.test_log.TestAssertionLoggingAttributeCapture) (use_env=True, use_file=True, use_annotated_test_case=True)
Can we capture other attributes of a TestCase on failure? ... ok
test_success (tests.test_log.TestAssertionLoggingRespectsEnvOverrides) (use_env=True, use_file=True, use_annotated_test_case=True)
On a successful assertion, do we log information? ... skipped 'Only testing when the base class sets up with configure()'
test_verbose_override (tests.test_log.TestAssertionLoggingRespectsEnvOverrides) (use_env=True, use_file=True, use_annotated_test_case=True)
On a successful assertion, do we log information? ... skipped 'Only testing when the base class sets up with configure()'
test_capture_test_case_attributes (tests.test_log.TestAssertionLoggingVerboseAttributeCapture) (use_env=True, use_file=True, use_annotated_test_case=True)
Can we capture other attributes of a TestCase? ... ok
test_capture_test_case_attributes_on_failure (tests.test_log.TestAssertionLoggingVerboseAttributeCapture) (use_env=True, use_file=True, use_annotated_test_case=True)
Can we capture other attributes of a TestCase on failure? ... ok
test_failure (tests.test_log.TestAssertionLoggingVerboseFalse) (use_env=True, use_file=True, use_annotated_test_case=True)
On a failed assertion, do we log information? ... ok
test_success (tests.test_log.TestAssertionLoggingVerboseFalse) (use_env=True, use_file=True, use_annotated_test_case=True)
On a successful assertion, do we respect verbose=False? ... ok
test_failure (tests.test_log.TestAssertionLoggingVerboseList) (use_env=True, use_file=True, use_annotated_test_case=True)
On a failed assertion, do we log information? ... ok
test_success (tests.test_log.TestAssertionLoggingVerboseList) (use_env=True, use_file=True, use_annotated_test_case=True)
On a successful assertion, do we respect a verbose list? ... ok
test_failure (tests.test_log.TestAssertionLoggingVerboseTrue) (use_env=True, use_file=True, use_annotated_test_case=True)
On a failed assertion, do we log information? ... ok
test_success (tests.test_log.TestAssertionLoggingVerboseTrue) (use_env=True, use_file=True, use_annotated_test_case=True)
On a successful assertion, do we respect verbose=True? ... ok
test_success (tests.test_log.TestAssertionLogging) (use_env=False, use_file=False, use_annotated_test_case=False)
On a successful assertion, do we log information? ... ok
test_capture_test_case_attributes (tests.test_log.TestAssertionLoggingAttributeCapture) (use_env=False, use_file=False, use_annotated_test_case=False)
Can we capture other attributes of a TestCase? ... ok
test_capture_test_case_attributes_on_failure (tests.test_log.TestAssertionLoggingAttributeCapture) (use_env=False, use_file=False, use_annotated_test_case=False)
Can we capture other attributes of a TestCase on failure? ... ok
test_success (tests.test_log.TestAssertionLoggingRespectsEnvOverrides) (use_env=False, use_file=False, use_annotated_test_case=False)
On a successful assertion, do we log information? ... ok
test_verbose_override (tests.test_log.TestAssertionLoggingRespectsEnvOverrides) (use_env=False, use_file=False, use_annotated_test_case=False)
On a successful assertion, do we log information? ... ok
test_capture_test_case_attributes (tests.test_log.TestAssertionLoggingVerboseAttributeCapture) (use_env=False, use_file=False, use_annotated_test_case=False)
Can we capture other attributes of a TestCase? ... ok
test_capture_test_case_attributes_on_failure (tests.test_log.TestAssertionLoggingVerboseAttributeCapture) (use_env=False, use_file=False, use_annotated_test_case=False)
Can we capture other attributes of a TestCase on failure? ... ok
test_failure (tests.test_log.TestAssertionLoggingVerboseFalse) (use_env=False, use_file=False, use_annotated_test_case=False)
On a failed assertion, do we log information? ... ok
test_success (tests.test_log.TestAssertionLoggingVerboseFalse) (use_env=False, use_file=False, use_annotated_test_case=False)
On a successful assertion, do we respect verbose=False? ... ok
test_failure (tests.test_log.TestAssertionLoggingVerboseList) (use_env=False, use_file=False, use_annotated_test_case=False)
On a failed assertion, do we log information? ... ok
test_success (tests.test_log.TestAssertionLoggingVerboseList) (use_env=False, use_file=False, use_annotated_test_case=False)
On a successful assertion, do we respect a verbose list? ... ok
test_failure (tests.test_log.TestAssertionLoggingVerboseTrue) (use_env=False, use_file=False, use_annotated_test_case=False)
On a failed assertion, do we log information? ... ok
test_success (tests.test_log.TestAssertionLoggingVerboseTrue) (use_env=False, use_file=False, use_annotated_test_case=False)
On a successful assertion, do we respect verbose=True? ... ok
test_success (tests.test_log.TestAssertionLogging) (use_env=False, use_file=True, use_annotated_test_case=False)
On a successful assertion, do we log information? ... ok
test_capture_test_case_attributes (tests.test_log.TestAssertionLoggingAttributeCapture) (use_env=False, use_file=True, use_annotated_test_case=False)
Can we capture other attributes of a TestCase? ... ok
test_capture_test_case_attributes_on_failure (tests.test_log.TestAssertionLoggingAttributeCapture) (use_env=False, use_file=True, use_annotated_test_case=False)
Can we capture other attributes of a TestCase on failure? ... ok
test_success (tests.test_log.TestAssertionLoggingRespectsEnvOverrides) (use_env=False, use_file=True, use_annotated_test_case=False)
On a successful assertion, do we log information? ... ok
test_verbose_override (tests.test_log.TestAssertionLoggingRespectsEnvOverrides) (use_env=False, use_file=True, use_annotated_test_case=False)
On a successful assertion, do we log information? ... ok
test_capture_test_case_attributes (tests.test_log.TestAssertionLoggingVerboseAttributeCapture) (use_env=False, use_file=True, use_annotated_test_case=False)
Can we capture other attributes of a TestCase? ... ok
test_capture_test_case_attributes_on_failure (tests.test_log.TestAssertionLoggingVerboseAttributeCapture) (use_env=False, use_file=True, use_annotated_test_case=False)
Can we capture other attributes of a TestCase on failure? ... ok
test_failure (tests.test_log.TestAssertionLoggingVerboseFalse) (use_env=False, use_file=True, use_annotated_test_case=False)
On a failed assertion, do we log information? ... ok
test_success (tests.test_log.TestAssertionLoggingVerboseFalse) (use_env=False, use_file=True, use_annotated_test_case=False)
On a successful assertion, do we respect verbose=False? ... ok
test_failure (tests.test_log.TestAssertionLoggingVerboseList) (use_env=False, use_file=True, use_annotated_test_case=False)
On a failed assertion, do we log information? ... ok
test_success (tests.test_log.TestAssertionLoggingVerboseList) (use_env=False, use_file=True, use_annotated_test_case=False)
On a successful assertion, do we respect a verbose list? ... ok
test_failure (tests.test_log.TestAssertionLoggingVerboseTrue) (use_env=False, use_file=True, use_annotated_test_case=False)
On a failed assertion, do we log information? ... ok
test_success (tests.test_log.TestAssertionLoggingVerboseTrue) (use_env=False, use_file=True, use_annotated_test_case=False)
On a successful assertion, do we respect verbose=True? ... ok
test_success (tests.test_log.TestAssertionLogging) (use_env=True, use_file=True, use_annotated_test_case=False)
On a successful assertion, do we log information? ... ok
test_capture_test_case_attributes (tests.test_log.TestAssertionLoggingAttributeCapture) (use_env=True, use_file=True, use_annotated_test_case=False)
Can we capture other attributes of a TestCase? ... ok
test_capture_test_case_attributes_on_failure (tests.test_log.TestAssertionLoggingAttributeCapture) (use_env=True, use_file=True, use_annotated_test_case=False)
Can we capture other attributes of a TestCase on failure? ... ok
test_success (tests.test_log.TestAssertionLoggingRespectsEnvOverrides) (use_env=True, use_file=True, use_annotated_test_case=False)
On a successful assertion, do we log information? ... skipped 'Only testing when the base class sets up with configure()'
test_verbose_override (tests.test_log.TestAssertionLoggingRespectsEnvOverrides) (use_env=True, use_file=True, use_annotated_test_case=False)
On a successful assertion, do we log information? ... skipped 'Only testing when the base class sets up with configure()'
test_capture_test_case_attributes (tests.test_log.TestAssertionLoggingVerboseAttributeCapture) (use_env=True, use_file=True, use_annotated_test_case=False)
Can we capture other attributes of a TestCase? ... ok
test_capture_test_case_attributes_on_failure (tests.test_log.TestAssertionLoggingVerboseAttributeCapture) (use_env=True, use_file=True, use_annotated_test_case=False)
Can we capture other attributes of a TestCase on failure? ... ok
test_failure (tests.test_log.TestAssertionLoggingVerboseFalse) (use_env=True, use_file=True, use_annotated_test_case=False)
On a failed assertion, do we log information? ... ok
test_success (tests.test_log.TestAssertionLoggingVerboseFalse) (use_env=True, use_file=True, use_annotated_test_case=False)
On a successful assertion, do we respect verbose=False? ... ok
test_failure (tests.test_log.TestAssertionLoggingVerboseList) (use_env=True, use_file=True, use_annotated_test_case=False)
On a failed assertion, do we log information? ... ok
test_success (tests.test_log.TestAssertionLoggingVerboseList) (use_env=True, use_file=True, use_annotated_test_case=False)
On a successful assertion, do we respect a verbose list? ... ok
test_failure (tests.test_log.TestAssertionLoggingVerboseTrue) (use_env=True, use_file=True, use_annotated_test_case=False)
On a failed assertion, do we log information? ... ok
test_success (tests.test_log.TestAssertionLoggingVerboseTrue) (use_env=True, use_file=True, use_annotated_test_case=False)
On a successful assertion, do we respect verbose=True? ... ok
test_annotated_assertion_error_not_raised (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=True)
Is no error raised if a test succeeds? ... ok
test_annotated_assertion_error_raised (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=True)
Is an ContextualAssertionError raised if a test fails? ... ok
test_assert_raises_failure (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=True)
Does assertRaises work correctly when the test fails? ... ok
test_assert_raises_missing_note (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=True)
Do we notice missing note for assertRaises? ... ok
test_assert_raises_success (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=True)
Does assertRaises work correctly when the test passes? ... ok
test_fail_handles_note_properly (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=True)
Does TestCase.fail() deal with note the right way? ... ok
test_fail_rejects_extra_args (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=True)
Does TestCase.fail() reject extra arguments? ... ok
test_fail_works_when_invoked_by_builtin_assertions (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=True) ... ok
test_missing_annotation (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=True)
Does marbles check for missing annotations? ... ok
test_missing_msg_ok (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=True)
Is it ok to provide only note? ... ok
test_missing_note_dict (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=True)
When passing a dict as msg, do we still check for note? ... ok
test_odd_argument_order_ok (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=True)
Does marbles handle a msg argument before the last position? ... ok
test_string_equality (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=True)
Can we use assertEqual on strings? ... ok
test_success (tests.test_marbles.TestAssertionLoggingFailure) (use_annotated_test_case=True)
When logging fails, do we allow the test to proceed? ... ok
test_assert_raises_kwargs_msg (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=True)
Do we capture kwargs annotations properly for assertRaises? ... ok
test_assert_raises_without_msg (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=True)
Do we capture annotations properly for assertRaises? ... ok
test_assert_stmt_indicates_line (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=True)
Does e.assert_stmt indicate the line from the source code? ... ok
test_assert_stmt_surrounding_lines (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=True)
Does _find_assert_stmt read surrounding lines from the file? ... ok
test_custom_assertions (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=True)
Does the marbles note work with custom-defined assertions? ... ok
test_custom_assertions_kwargs (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=True)
Does the marbles kwargs note work with custom assertions? ... ok
test_exclude_ignored_locals (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=True)
Are ignored variables excluded from output? ... ok
test_exclude_internal_mangled_locals (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=True)
Are internal/mangled variables excluded from the "Locals"? ... ok
test_get_stack (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=True)
Does _get_stack() find the stack level with the test definition? ... ok
test_kwargs_stick_together (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=True)
Does the kwargs form of an assertion enforce that message and ... ok
test_locals_hidden_when_all_private (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=True)
Does marbles hide the Locals section if all are private? ... ok
test_locals_hidden_when_missing (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=True)
Does marbles hide the Locals section if there are none? ... ok
test_locals_shown_when_present (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=True)
Does marbles show the Locals section if there are some? ... ok
test_missing_msg_dict (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=True)
Is the default msg properly displayed when note is in a dict? ... ok
test_missing_msg_kwargs_note (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=True)
Is the default msg properly displayed? ... ok
test_named_assert_args (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=True)
Is annotation captured correctly if named arguments are provided? ... ok
test_note_rich_format_strings (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=True) ... ok
test_note_wrapping (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=True)
Do we wrap the note properly? ... ok
test_odd_argument_order (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=True)
Does marbles handle a msg argument before the last position? ... ok
test_positional_assert_args (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=True)
Is annotation captured correctly when using positional arguments? ... ok
test_positional_msg_kwargs_note (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=True)
Is annotation captured correctly when using a positional msg? ... ok
test_use_kwargs_form (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=True)
Does the kwargs form of an assertion work? ... ok
test_verify_annotation_dict_missing_keys (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=True)
Is an Exception raised if annotation is missing expected keys? ... ok
test_verify_annotation_locals (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=True)
Are locals in the test definition formatted into annotations? ... ok
test_verify_annotation_none (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=True)
Is an Exception raised if no annotation is provided? ... ok
test_annotated_assertion_error_not_raised (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=False)
Is no error raised if a test succeeds? ... ok
test_annotated_assertion_error_raised (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=False)
Is an ContextualAssertionError raised if a test fails? ... ok
test_assert_raises_failure (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=False)
Does assertRaises work correctly when the test fails? ... ok
test_assert_raises_missing_note (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=False)
Do we notice missing note for assertRaises? ... ok
test_assert_raises_success (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=False)
Does assertRaises work correctly when the test passes? ... ok
test_fail_handles_note_properly (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=False)
Does TestCase.fail() deal with note the right way? ... ok
test_fail_rejects_extra_args (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=False)
Does TestCase.fail() reject extra arguments? ... ok
test_fail_works_when_invoked_by_builtin_assertions (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=False) ... ok
test_missing_annotation (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=False)
Does marbles check for missing annotations? ... ok
test_missing_msg_ok (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=False)
Is it ok to provide only note? ... ok
test_missing_note_dict (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=False)
When passing a dict as msg, do we still check for note? ... ok
test_odd_argument_order_ok (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=False)
Does marbles handle a msg argument before the last position? ... ok
test_string_equality (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=False)
Can we use assertEqual on strings? ... ok
test_success (tests.test_marbles.TestAssertionLoggingFailure) (use_annotated_test_case=False)
When logging fails, do we allow the test to proceed? ... ok
test_assert_raises_kwargs_msg (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=False)
Do we capture kwargs annotations properly for assertRaises? ... ok
test_assert_raises_without_msg (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=False)
Do we capture annotations properly for assertRaises? ... ok
test_assert_stmt_indicates_line (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=False)
Does e.assert_stmt indicate the line from the source code? ... ok
test_assert_stmt_surrounding_lines (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=False)
Does _find_assert_stmt read surrounding lines from the file? ... ok
test_custom_assertions (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=False)
Does the marbles note work with custom-defined assertions? ... ok
test_custom_assertions_kwargs (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=False)
Does the marbles kwargs note work with custom assertions? ... ok
test_exclude_ignored_locals (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=False)
Are ignored variables excluded from output? ... ok
test_exclude_internal_mangled_locals (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=False)
Are internal/mangled variables excluded from the "Locals"? ... ok
test_get_stack (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=False)
Does _get_stack() find the stack level with the test definition? ... ok
test_kwargs_stick_together (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=False)
Does the kwargs form of an assertion enforce that message and ... ok
test_locals_hidden_when_all_private (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=False)
Does marbles hide the Locals section if all are private? ... ok
test_locals_hidden_when_missing (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=False)
Does marbles hide the Locals section if there are none? ... ok
test_locals_shown_when_present (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=False)
Does marbles show the Locals section if there are some? ... ok
test_missing_msg_dict (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=False)
Is the default msg properly displayed when note is in a dict? ... ok
test_missing_msg_kwargs_note (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=False)
Is the default msg properly displayed? ... ok
test_named_assert_args (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=False)
Is annotation captured correctly if named arguments are provided? ... ok
test_note_rich_format_strings (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=False) ... ok
test_note_wrapping (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=False)
Do we wrap the note properly? ... ok
test_odd_argument_order (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=False)
Does marbles handle a msg argument before the last position? ... ok
test_positional_assert_args (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=False)
Is annotation captured correctly when using positional arguments? ... ok
test_positional_msg_kwargs_note (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=False)
Is annotation captured correctly when using a positional msg? ... ok
test_use_kwargs_form (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=False)
Does the kwargs form of an assertion work? ... ok
test_verify_annotation_dict_missing_keys (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=False)
Is an Exception raised if annotation is missing expected keys? ... ok
test_verify_annotation_locals (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=False)
Are locals in the test definition formatted into annotations? ... ok
test_verify_annotation_none (tests.test_marbles.TestContextualAssertionError) (use_annotated_test_case=False)
Is an Exception raised if no annotation is provided? ... ok
test_empty_stdout (tests.test_main.MainWithFailureTestCase) (runner='unittest', verbose=True, test_file='example_unittest.py')
Standard out should be empty in all cases. ... ok
test_show_locals (tests.test_main.MainWithFailureTestCase) (runner='unittest', verbose=True, test_file='example_unittest.py')
Locals should be printed. ... ok
test_show_msg (tests.test_main.MainWithFailureTestCase) (runner='unittest', verbose=True, test_file='example_unittest.py')
The failure message should always appear. ... ok
test_show_source (tests.test_main.MainWithFailureTestCase) (runner='unittest', verbose=True, test_file='example_unittest.py')
The source code should appear. ... ok
test_traceback (tests.test_main.MainWithFailureTestCase) (runner='unittest', verbose=True, test_file='example_unittest.py')
The traceback should only be shown in verbose mode. ... ok
test_empty_stdout (tests.test_main.MainWithFailureTestCase) (runner='unittest', verbose=True, test_file='example_marbles.py')
Standard out should be empty in all cases. ... ok
test_show_locals (tests.test_main.MainWithFailureTestCase) (runner='unittest', verbose=True, test_file='example_marbles.py')
Locals should be printed. ... ok
test_show_msg (tests.test_main.MainWithFailureTestCase) (runner='unittest', verbose=True, test_file='example_marbles.py')
The failure message should always appear. ... ok
test_show_source (tests.test_main.MainWithFailureTestCase) (runner='unittest', verbose=True, test_file='example_marbles.py')
The source code should appear. ... ok
test_traceback (tests.test_main.MainWithFailureTestCase) (runner='unittest', verbose=True, test_file='example_marbles.py')
The traceback should only be shown in verbose mode. ... ok
test_empty_stdout (tests.test_main.MainWithFailureTestCase) (runner='unittest', verbose=False, test_file='example_unittest.py')
Standard out should be empty in all cases. ... ok
test_show_locals (tests.test_main.MainWithFailureTestCase) (runner='unittest', verbose=False, test_file='example_unittest.py')
Locals should be printed. ... ok
test_show_msg (tests.test_main.MainWithFailureTestCase) (runner='unittest', verbose=False, test_file='example_unittest.py')
The failure message should always appear. ... ok
test_show_source (tests.test_main.MainWithFailureTestCase) (runner='unittest', verbose=False, test_file='example_unittest.py')
The source code should appear. ... ok
test_traceback (tests.test_main.MainWithFailureTestCase) (runner='unittest', verbose=False, test_file='example_unittest.py')
The traceback should only be shown in verbose mode. ... ok
test_empty_stdout (tests.test_main.MainWithFailureTestCase) (runner='unittest', verbose=False, test_file='example_marbles.py')
Standard out should be empty in all cases. ... ok
test_show_locals (tests.test_main.MainWithFailureTestCase) (runner='unittest', verbose=False, test_file='example_marbles.py')
Locals should be printed. ... ok
test_show_msg (tests.test_main.MainWithFailureTestCase) (runner='unittest', verbose=False, test_file='example_marbles.py')
The failure message should always appear. ... ok
test_show_source (tests.test_main.MainWithFailureTestCase) (runner='unittest', verbose=False, test_file='example_marbles.py')
The source code should appear. ... ok
test_traceback (tests.test_main.MainWithFailureTestCase) (runner='unittest', verbose=False, test_file='example_marbles.py')
The traceback should only be shown in verbose mode. ... ok
test_empty_stdout (tests.test_main.MainWithFailureTestCase) (runner='marbles', verbose=True, test_file='example_unittest.py')
Standard out should be empty in all cases. ... ok
test_show_locals (tests.test_main.MainWithFailureTestCase) (runner='marbles', verbose=True, test_file='example_unittest.py')
Locals should be printed. ... ok
test_show_msg (tests.test_main.MainWithFailureTestCase) (runner='marbles', verbose=True, test_file='example_unittest.py')
The failure message should always appear. ... ok
test_show_source (tests.test_main.MainWithFailureTestCase) (runner='marbles', verbose=True, test_file='example_unittest.py')
The source code should appear. ... ok
test_traceback (tests.test_main.MainWithFailureTestCase) (runner='marbles', verbose=True, test_file='example_unittest.py')
The traceback should only be shown in verbose mode. ... ok
test_empty_stdout (tests.test_main.MainWithFailureTestCase) (runner='marbles', verbose=True, test_file='example_marbles.py')
Standard out should be empty in all cases. ... ok
test_show_locals (tests.test_main.MainWithFailureTestCase) (runner='marbles', verbose=True, test_file='example_marbles.py')
Locals should be printed. ... ok
test_show_msg (tests.test_main.MainWithFailureTestCase) (runner='marbles', verbose=True, test_file='example_marbles.py')
The failure message should always appear. ... ok
test_show_source (tests.test_main.MainWithFailureTestCase) (runner='marbles', verbose=True, test_file='example_marbles.py')
The source code should appear. ... ok
test_traceback (tests.test_main.MainWithFailureTestCase) (runner='marbles', verbose=True, test_file='example_marbles.py')
The traceback should only be shown in verbose mode. ... ok
test_empty_stdout (tests.test_main.MainWithFailureTestCase) (runner='marbles', verbose=False, test_file='example_unittest.py')
Standard out should be empty in all cases. ... ok
test_show_locals (tests.test_main.MainWithFailureTestCase) (runner='marbles', verbose=False, test_file='example_unittest.py')
Locals should be printed. ... ok
test_show_msg (tests.test_main.MainWithFailureTestCase) (runner='marbles', verbose=False, test_file='example_unittest.py')
The failure message should always appear. ... ok
test_show_source (tests.test_main.MainWithFailureTestCase) (runner='marbles', verbose=False, test_file='example_unittest.py')
The source code should appear. ... ok
test_traceback (tests.test_main.MainWithFailureTestCase) (runner='marbles', verbose=False, test_file='example_unittest.py')
The traceback should only be shown in verbose mode. ... ok
test_empty_stdout (tests.test_main.MainWithFailureTestCase) (runner='marbles', verbose=False, test_file='example_marbles.py')
Standard out should be empty in all cases. ... ok
test_show_locals (tests.test_main.MainWithFailureTestCase) (runner='marbles', verbose=False, test_file='example_marbles.py')
Locals should be printed. ... ok
test_show_msg (tests.test_main.MainWithFailureTestCase) (runner='marbles', verbose=False, test_file='example_marbles.py')
The failure message should always appear. ... ok
test_show_source (tests.test_main.MainWithFailureTestCase) (runner='marbles', verbose=False, test_file='example_marbles.py')
The source code should appear. ... ok
test_traceback (tests.test_main.MainWithFailureTestCase) (runner='marbles', verbose=False, test_file='example_marbles.py')
The traceback should only be shown in verbose mode. ... ok
test_empty_stdout (tests.test_main.MainWithFailureTestCase) (runner='script', verbose=True, test_file='example_unittest.py')
Standard out should be empty in all cases. ... ok
test_show_locals (tests.test_main.MainWithFailureTestCase) (runner='script', verbose=True, test_file='example_unittest.py')
Locals should be printed. ... ok
test_show_msg (tests.test_main.MainWithFailureTestCase) (runner='script', verbose=True, test_file='example_unittest.py')
The failure message should always appear. ... ok
test_show_source (tests.test_main.MainWithFailureTestCase) (runner='script', verbose=True, test_file='example_unittest.py')
The source code should appear. ... ok
test_traceback (tests.test_main.MainWithFailureTestCase) (runner='script', verbose=True, test_file='example_unittest.py')
The traceback should only be shown in verbose mode. ... ok
test_empty_stdout (tests.test_main.MainWithFailureTestCase) (runner='script', verbose=True, test_file='example_marbles.py')
Standard out should be empty in all cases. ... ok
test_show_locals (tests.test_main.MainWithFailureTestCase) (runner='script', verbose=True, test_file='example_marbles.py')
Locals should be printed. ... ok
test_show_msg (tests.test_main.MainWithFailureTestCase) (runner='script', verbose=True, test_file='example_marbles.py')
The failure message should always appear. ... ok
test_show_source (tests.test_main.MainWithFailureTestCase) (runner='script', verbose=True, test_file='example_marbles.py')
The source code should appear. ... ok
test_traceback (tests.test_main.MainWithFailureTestCase) (runner='script', verbose=True, test_file='example_marbles.py')
The traceback should only be shown in verbose mode. ... ok
test_empty_stdout (tests.test_main.MainWithFailureTestCase) (runner='script', verbose=False, test_file='example_unittest.py')
Standard out should be empty in all cases. ... ok
test_show_locals (tests.test_main.MainWithFailureTestCase) (runner='script', verbose=False, test_file='example_unittest.py')
Locals should be printed. ... ok
test_show_msg (tests.test_main.MainWithFailureTestCase) (runner='script', verbose=False, test_file='example_unittest.py')
The failure message should always appear. ... ok
test_show_source (tests.test_main.MainWithFailureTestCase) (runner='script', verbose=False, test_file='example_unittest.py')
The source code should appear. ... ok
test_traceback (tests.test_main.MainWithFailureTestCase) (runner='script', verbose=False, test_file='example_unittest.py')
The traceback should only be shown in verbose mode. ... ok
test_empty_stdout (tests.test_main.MainWithFailureTestCase) (runner='script', verbose=False, test_file='example_marbles.py')
Standard out should be empty in all cases. ... ok
test_show_locals (tests.test_main.MainWithFailureTestCase) (runner='script', verbose=False, test_file='example_marbles.py')
Locals should be printed. ... ok
test_show_msg (tests.test_main.MainWithFailureTestCase) (runner='script', verbose=False, test_file='example_marbles.py')
The failure message should always appear. ... ok
test_show_source (tests.test_main.MainWithFailureTestCase) (runner='script', verbose=False, test_file='example_marbles.py')
The source code should appear. ... ok
test_traceback (tests.test_main.MainWithFailureTestCase) (runner='script', verbose=False, test_file='example_marbles.py')
The traceback should only be shown in verbose mode. ... ok
test_empty_stdout (tests.test_main.MainWithErrorTestCase) (runner='unittest', verbose=True, test_file='example_error.py')
Standard out should be empty in all cases. ... ok
test_traceback (tests.test_main.MainWithErrorTestCase) (runner='unittest', verbose=True, test_file='example_error.py')
The traceback should be shown for all errors. ... ok
test_empty_stdout (tests.test_main.MainWithErrorTestCase) (runner='unittest', verbose=False, test_file='example_error.py')
Standard out should be empty in all cases. ... ok
test_traceback (tests.test_main.MainWithErrorTestCase) (runner='unittest', verbose=False, test_file='example_error.py')
The traceback should be shown for all errors. ... ok
test_empty_stdout (tests.test_main.MainWithErrorTestCase) (runner='marbles', verbose=True, test_file='example_error.py')
Standard out should be empty in all cases. ... ok
test_traceback (tests.test_main.MainWithErrorTestCase) (runner='marbles', verbose=True, test_file='example_error.py')
The traceback should be shown for all errors. ... ok
test_empty_stdout (tests.test_main.MainWithErrorTestCase) (runner='marbles', verbose=False, test_file='example_error.py')
Standard out should be empty in all cases. ... ok
test_traceback (tests.test_main.MainWithErrorTestCase) (runner='marbles', verbose=False, test_file='example_error.py')
The traceback should be shown for all errors. ... ok
test_empty_stdout (tests.test_main.MainWithErrorTestCase) (runner='script', verbose=True, test_file='example_error.py')
Standard out should be empty in all cases. ... ok
test_traceback (tests.test_main.MainWithErrorTestCase) (runner='script', verbose=True, test_file='example_error.py')
The traceback should be shown for all errors. ... ok
test_empty_stdout (tests.test_main.MainWithErrorTestCase) (runner='script', verbose=False, test_file='example_error.py')
Standard out should be empty in all cases. ... ok
test_traceback (tests.test_main.MainWithErrorTestCase) (runner='script', verbose=False, test_file='example_error.py')
The traceback should be shown for all errors. ... ok
test_stderr (tests.test_main.VersionTestCase)
The error output should be empty. ... FAIL
test_stdout (tests.test_main.VersionTestCase)
The version output should contain marbles.core's version. ... ok
======================================================================
FAIL: test_stderr (tests.test_main.VersionTestCase)
The error output should be empty.
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/jane/Development/marbles/marbles/core/tests/test_main.py", line 254, in test_stderr
self.assertEqual('', self.stderr)
AssertionError: '' != 'sh: sysctl: command not found\n'
+ sh: sysctl: command not found
----------------------------------------------------------------------
Ran 230 tests in 14.262s
FAILED (failures=1, skipped=4)
Test failed: <unittest.runner.TextTestResult run=230 errors=0 failures=1>
error: Test failed: <unittest.runner.TextTestResult run=230 errors=0 failures=1>
ERROR: InvocationError for command '/Users/jane/Development/marbles/.tox/coverage/bin/pipenv run python -m coverage run marbles/core/setup.py test' (exited with code 1)
____________________________________________________________ summary _____________________________________________________________
ERROR: coverage: commands failed
I expected all the tests to pass :(
The test_stderr test fails when I run the tests with the commands below as well:
(marbles-rOKSo-69) jane@18:51:00 core (master) $ cd marbles/core/
(marbles-rOKSo-69) jane@18:51:00 core (master) $ coverage run setup.py test
(marbles-rOKSo-69) jane@18:51:00 core (master) $ python -m unittest
Linking through to this page is the fastest way to reach the docstrings, but it's a stark thing to land on. We should put some text here that welcomes the reader and points them to the right next thing to click: https://marbles.readthedocs.io/en/stable/reference.html
Also, maybe add the module docstring summaries, and maybe go one level deeper on the toctree?
Make sure there isn't anything not useful to the public
Should be TSOS and MIT license
python -m marbles --version
python -m marbles --version
python --version
uname -a
Describe what you were trying to get done.
Tell us what happened, what went wrong, and what you expected to happen.
I cloned marbles from GitHub (https://github.com/twosigma/marbles/), created a fresh conda environment, installed marbles using python setup.py develop, and tried to run python -m marbles on the marbles.core tests, which produced two errors. Using python setup.py test, all tests pass.
git clone https://github.com/wyegelwel/marbles.git
conda create -n marbles python=3.5
source activate marbles
python setup.py develop
python -m marbles
E..........................................................................E
======================================================================
ERROR: tests.test_log (unittest.loader._FailedTest)
----------------------------------------------------------------------
ImportError: Failed to import test module: tests.test_log
Traceback (most recent call last):
File "/home/wyegelwe/anaconda3/envs/marbles/lib/python3.5/unittest/loader.py", line 428, in _find_test_path
module = self._get_module_from_name(name)
File "/home/wyegelwe/anaconda3/envs/marbles/lib/python3.5/unittest/loader.py", line 369, in _get_module_from_name
__import__(name)
File "/home/wyegelwe/sandbox/marbles/marbles/core/tests/test_log.py", line 41, in <module>
import tests.test_marbles as marbles_tests
File "/home/wyegelwe/sandbox/marbles/marbles/core/tests/test_marbles.py", line 299, in <module>
class ExampleTestCase(TestCase, ExampleTestCaseMixin):
TypeError: Cannot create a consistent method resolution
order (MRO) for bases TestCase, ExampleTestCaseMixin
======================================================================
ERROR: tests.test_marbles (unittest.loader._FailedTest)
----------------------------------------------------------------------
ImportError: Failed to import test module: tests.test_marbles
Traceback (most recent call last):
File "/home/wyegelwe/anaconda3/envs/marbles/lib/python3.5/unittest/loader.py", line 428, in _find_test_path
module = self._get_module_from_name(name)
File "/home/wyegelwe/anaconda3/envs/marbles/lib/python3.5/unittest/loader.py", line 369, in _get_module_from_name
__import__(name)
File "/home/wyegelwe/sandbox/marbles/marbles/core/tests/test_marbles.py", line 299, in <module>
class ExampleTestCase(TestCase, ExampleTestCaseMixin):
TypeError: Cannot create a consistent method resolution
order (MRO) for bases TestCase, ExampleTestCaseMixin
----------------------------------------------------------------------
Ran 76 tests in 5.460s
FAILED (errors=2)
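The MRO error above can be reproduced without marbles: Python refuses to linearize the bases when a class lists a base ahead of one of that base's own subclasses. A minimal sketch (the class names mirror the traceback but the bodies are illustrative):

```python
class TestCase:
    pass


# A mixin that itself inherits from TestCase, mirroring the situation in
# test_marbles.py where ExampleTestCaseMixin ends up deriving from the
# same TestCase it is later combined with.
class ExampleTestCaseMixin(TestCase):
    pass


# Listing TestCase before its own subclass makes the MRO inconsistent.
try:
    ExampleTestCase = type('ExampleTestCase', (TestCase, ExampleTestCaseMixin), {})
except TypeError as e:
    print(e)  # Cannot create a consistent method resolution order (MRO) ...

# Reversing the bases (subclass first) resolves it.
ExampleTestCase = type('ExampleTestCase', (ExampleTestCaseMixin, TestCase), {})
print([c.__name__ for c in ExampleTestCase.__mro__])
```

So the fix is in the base ordering of ExampleTestCase, not in the test logic itself.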
Expected: all tests succeed when using python -m marbles.
Works with python setup.py test:
python setup.py test -q
running test
running egg_info
writing dependency_links to marbles.core.egg-info/dependency_links.txt
writing namespace_packages to marbles.core.egg-info/namespace_packages.txt
writing marbles.core.egg-info/PKG-INFO
writing requirements to marbles.core.egg-info/requires.txt
writing top-level names to marbles.core.egg-info/top_level.txt
reading manifest file 'marbles.core.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'marbles.core.egg-info/SOURCES.txt'
running build_ext
.............................ss.....................................ss................................................................................................................................................................
----------------------------------------------------------------------
Ran 230 tests in 6.729s
OK (skipped=4)
Is your feature request related to a problem? Please describe.
It would be good to enable people to run python setup.py marbles and get the effect of switching from python -m unittest to python -m marbles, and also to alias test to marbles.
Describe the solution you'd like
Marbles should provide a setuptools.Command subclass and install it as an additional setuptools command. A blog post describes this in more detail here. Marbles should also document how to integrate this with a project, by adding marbles as a setup_requires and tests_require dependency, and adding the following to setup.cfg:
[aliases]
test = marbles
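A rough sketch of what that command class could look like; MarblesCommand and the subprocess invocation are illustrative assumptions, not marbles' actual implementation:

```python
import subprocess
import sys

import setuptools


class MarblesCommand(setuptools.Command):
    """Hypothetical `python setup.py marbles` command (sketch only)."""

    description = 'run unit tests under the marbles runner'
    user_options = []

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        # Equivalent in spirit to `python -m marbles` against the
        # project's tests; a real implementation might call marbles'
        # main() directly instead of shelling out.
        errno = subprocess.call([sys.executable, '-m', 'marbles'])
        if errno:
            raise SystemExit(errno)
```

Marbles could then expose this through a distutils.commands entry point in its own setup.py, which is what would let the [aliases] test = marbles mapping in a consuming project's setup.cfg take effect.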
Should switch it to .rst and rewrite it to remove internal content.
They're not actually needed for setup_requires; they're needed for development. This is captured in the Pipfile, and their presence in setup_requires sometimes causes unnecessary installation.
The logging docs should specify how to execute your tests based on how you configured the logger. If you configure the logger in an if __name__ == '__main__' block (as the docs suggest you do), you need to run your tests with python /path/to/tests.py. Logging won't work if you try to run these same tests with python -m marbles test.py.
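The underlying reason is just the module-name guard: an if __name__ == '__main__' block runs only when the file is executed directly, not when a runner imports the module under another name. A small stdlib demonstration:

```python
import os
import runpy
import tempfile
import textwrap

# A stand-in test module whose logger configuration lives behind the
# __main__ guard, as the marbles logging docs suggest.
src = textwrap.dedent("""
    configured = False
    if __name__ == '__main__':
        configured = True  # stands in for configuring the logger
""")

with tempfile.NamedTemporaryFile('w', suffix='.py', delete=False) as f:
    f.write(src)
    path = f.name

# python /path/to/tests.py: the guard fires, logging gets configured.
as_script = runpy.run_path(path, run_name='__main__')
print(as_script['configured'])  # True

# A runner like python -m marbles imports the module under its own
# name, so the guard is skipped and logging is never configured.
as_module = runpy.run_path(path, run_name='tests')
print(as_module['configured'])  # False

os.unlink(path)
```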
Users
See #100
Probably either http://coveralls.io/ or https://codecov.io/. I believe we need an organization admin to enable it.
The unittest docs specify that:
All the assert methods accept a msg argument that, if specified, is used as the error message on failure (see also longMessage). Note that the msg keyword argument can be passed to assertRaises(), assertRaisesRegex(), assertWarns(), assertWarnsRegex() only when they are used as a context manager.
If these assertion methods are used as functions and not as context managers, marbles will try its best to find the msg argument, but because these methods don't accept a msg argument when they're used as functions, marbles will end up grabbing an argument that's intended for the callable, and then the callable will be sad.
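With stock unittest you can see the documented difference directly: in the function form, every keyword argument, including msg, is forwarded to the callable. A sketch using int as the callable:

```python
import unittest


class Demo(unittest.TestCase):
    def runTest(self):
        pass


case = Demo()

# Context-manager form: msg is consumed by assertRaises itself.
with case.assertRaises(ValueError, msg='int() should reject "x"'):
    int('x')
print('context-manager form: ok')

# Function form: msg is forwarded to the callable, and int() has no
# msg parameter, so the call fails with a TypeError.
try:
    case.assertRaises(ValueError, int, 'x', msg='forwarded to int()')
except TypeError:
    print('function form: TypeError')
```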
I originally found this while trying to run unittest's own tests (specifically, the test_assertions.py tests) under marbles, but you can reproduce this specific bug with the following test case:
import unittest

class MyTestCase(unittest.TestCase):
    def test_foo(self):
        self.assertRaises(self.failureException,
                          self.assertAlmostEqual, 1.0000001, 1.0)

if __name__ == '__main__':
    unittest.main()
running this produces the following:
$ python -m marbles test.py
E
======================================================================
ERROR: test_foo (test.MyTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test.py", line 8, in test_foo
self.assertAlmostEqual, 1.0000001, 1.0)
File ".../lib/python3.6/site-packages/marbles/core/marbles.py", line 538, in wrapper
return attr(*args, msg=annotation, **kwargs)
File ".../lib/python3.6/unittest/case.py", line 733, in assertRaises
return context.handle('assertRaises', args, kwargs)
File ".../lib/python3.6/unittest/case.py", line 178, in handle
callable_obj(*args, **kwargs)
File ".../lib/python3.6/site-packages/marbles/core/marbles.py", line 538, in wrapper
return attr(*args, msg=annotation, **kwargs)
TypeError: assertAlmostEqual() missing 1 required positional argument: 'second'
----------------------------------------------------------------------
Ran 1 test in 0.002s
FAILED (errors=1)
This is related to a more general issue which is that marbles expects every assertion to accept a msg argument. If a non-unittest assertion (e.g., mixin assertion, a user-defined assertion, etc.) doesn't accept a msg argument, marbles won't be able to execute that assertion.
An example of this occurs in unittest's own tests. The TestLongMessage tests define a custom assertion, assertMessages, that doesn't accept a msg argument. When a test method calls this assertion, marbles will dig into its arguments to try to pull out the note and msg arguments, and then try to make the assertion. In this case, one of two things could happen:
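The failure mode for assertions that don't take msg is plain Python: injecting a keyword argument the function doesn't declare raises TypeError before the assertion body ever runs. A sketch (assert_messages below is a stand-in, not unittest's actual helper):

```python
# Stand-in for a custom assertion that, like unittest's assertMessages,
# declares no msg parameter.
def assert_messages(first, second):
    assert first == second, '%r != %r' % (first, second)


# Called normally, the assertion works.
assert_messages(1, 1)

# Called the way marbles calls assertions, with msg= always injected,
# Python rejects the unknown keyword before the comparison happens.
try:
    assert_messages(1, 1, msg='annotation text')
except TypeError as e:
    print(e)  # unexpected keyword argument 'msg'
```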
It doesn't include getsitepackages. distutils.sysconfig has something we can use instead.
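For reference, a sketch of looking up the site-packages path without site.getsitepackages. Note that distutils was removed in Python 3.12, so the stdlib sysconfig module is the durable fallback:

```python
# distutils.sysconfig.get_python_lib() works on older Pythons; the
# stdlib sysconfig module exposes the same path going forward
# (distutils was removed in Python 3.12).
try:
    from distutils.sysconfig import get_python_lib
    site_packages = get_python_lib()
except ImportError:
    import sysconfig
    site_packages = sysconfig.get_paths()['purelib']

print(site_packages)
```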
Remove DataFrame, Categorical, and Panel mixins until we polish them more
Since we use slice indexing, I think assertUnique will fail on collections that are not integer-indexable. Sets have this property, but sets already enforce uniqueness, so that's not a real problem. Multisets may expose a problem here, though.
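An index-free uniqueness check would sidestep the slicing requirement entirely. A minimal sketch (is_unique is a hypothetical name, not the mixin's API):

```python
def is_unique(container):
    # Sketch: iterating instead of slicing works for any iterable,
    # including sets and other non-indexable collections. Using a list
    # for `seen` also tolerates unhashable elements, at the cost of
    # O(n^2) membership checks.
    seen = []
    for item in container:
        if item in seen:
            return False
        seen.append(item)
    return True
```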
If there are no locals (besides self, msg, and note), we should hide the "Locals" section of the output.
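A sketch of what that filtering could look like (not marbles' actual implementation; names are illustrative):

```python
_ALWAYS_PRESENT = {'self', 'msg', 'note'}

def format_locals(frame_locals):
    # Drop the names that appear in every marbles assertion call, then
    # return an empty string (i.e., no "Locals:" section at all) when
    # nothing interesting remains.
    shown = {name: value for name, value in frame_locals.items()
             if name not in _ALWAYS_PRESENT and not name.startswith('_')}
    if not shown:
        return ''
    lines = ['Locals:']
    lines += ['\t{} = {!r}'.format(name, shown[name]) for name in sorted(shown)]
    return '\n'.join(lines)
```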
I love marbles! I also enjoyed Jane's talk at PyData Amsterdam.
My issue is that installing marbles automatically downgrades your pandas version.
If you run pip install pandas, it installs version 0.23.1 at the time of writing. Then running pip install marbles downgrades pandas to 0.21.
This causes the following code to break:
from datetime import datetime

import pandas as pd

data = {
    datetime(2018, 4, 1): [2500, 1500],
    datetime(2018, 5, 1): [2000, 2500],
    datetime(2018, 6, 1): [2000, 1500],
}

# the columns keyword of from_dict was added in pandas 0.23, so this
# breaks after the downgrade to 0.21
df = pd.DataFrame.from_dict(data, orient='index', columns=['Income', 'Expense'])
print(df)
I believe the pin is in marbles/marbles/mixins/setup.py, line 57.
I could try to open a pull request if you want, but I'm not sure which tests I would need to make sure pass.
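One possible fix, sketched as a hypothetical fragment of the setup.py requirements (not the project's actual setup code):

```python
# A lower bound instead of an exact pin avoids forcibly downgrading a
# newer pandas the user already has installed.
install_requires = [
    'pandas>=0.21',  # instead of 'pandas==0.21'
]
```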
Currently we just say you need to sign one, we need to actually provide one to sign.
if len(sys.argv) > 1 and sys.argv[1] == '--version':  # condition on this line is never true
    import marbles.core
    print('marbles.core version: {}'.format(marbles.core.__version__))
    try:
        import marbles.mixins
    except ImportError:
        print('marbles.mixins not installed')
    else:
        print('marbles.mixins version: {}'.format(marbles.mixins.__version__))
    sys.exit(0)
Tried to run a unittest test that uses the deprecated name assertEquals instead of assertEqual.
import marbles.core

class MyTestCase(marbles.core.TestCase):

    def test_assertEqual_success(self):
        x = 1
        y = 1
        self.assertEqual(x, y)

    def test_assertEqual_failure(self):
        x = 1
        y = 2
        self.assertEqual(x, y)

    def test_assertEquals_success(self):
        x = 1
        y = 1
        self.assertEquals(x, y)

    def test_assertEquals_failure(self):
        x = 1
        y = 2
        self.assertEquals(x, y)
The assertEquals tests fail with the following stack:
======================================================================
ERROR: test_deprecated_assertEquals_failure (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=False)
Does the deprecated assertEquals method work on failure?
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/leif/git/marbles/marbles/core/tests/test_marbles.py", line 373, in test_deprecated_assertEquals_failure
self.case.test_deprecated_assertEquals_failure()
File "/home/leif/git/marbles/marbles/core/tests/test_marbles.py", line 93, in test_deprecated_assertEquals_failure
self.assertEquals(x, y)
File "/home/leif/git/marbles/marbles/core/marbles/core/marbles.py", line 538, in wrapper
return attr(*args, msg=annotation, **kwargs)
File "/usr/lib64/python3.6/unittest/case.py", line 1323, in deprecated_func
return original_func(*args, **kwargs)
TypeError: assertEqual() missing 1 required positional argument: 'second'
======================================================================
ERROR: test_deprecated_assertEquals_success (tests.test_marbles.InterfaceTestCase) (use_annotated_test_case=False)
Does the deprecated assertEquals method still work?
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/leif/git/marbles/marbles/core/tests/test_marbles.py", line 364, in test_deprecated_assertEquals_success
self.case.test_deprecated_assertEquals_success()
File "/home/leif/git/marbles/marbles/core/tests/test_marbles.py", line 88, in test_deprecated_assertEquals_success
self.assertEquals(x, y)
File "/home/leif/git/marbles/marbles/core/marbles/core/marbles.py", line 538, in wrapper
return attr(*args, msg=annotation, **kwargs)
File "/usr/lib64/python3.6/unittest/case.py", line 1323, in deprecated_func
return original_func(*args, **kwargs)
TypeError: assertEqual() missing 1 required positional argument: 'second'
----------------------------------------------------------------------
I expected assertEqual and assertEquals to behave the same. Instead, assertEqual does the expected thing, while assertEquals fails with a TypeError.
Tried to install marbles.core from the sdist on pypi.
I expected the install to succeed.
Error message:
Traceback (most recent call last):
File "setup.py", line 34, in <module>
with open(os.path.join(root_dir, 'classifiers.txt'), 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: '../../classifiers.txt'
This is because we made the setup scripts for marbles.core and marbles.mixins assume that they're being run from within the repo, where the top level package exists two directories up.
We should change the way sdists are packaged so that they're self-contained.
Is your feature request related to a problem? Please describe.
We should provide a pull request template like https://github.com/flexyford/pull-request-template/blob/master/PULL_REQUEST_TEMPLATE.md. See https://help.github.com/articles/creating-a-pull-request-template-for-your-repository/ for github's docs on the subject.
Describe the solution you'd like
It should prompt for:
It should remind a contributor to:
Going along with one of the stated goals of marbles ("write richer tests that expose more information on test failure to help you debug failing tests faster"), the audience for reading such test results could be expanded by adding HTML reporting as an option in addition to the current stdout reporting.
What I have in mind is something along the lines of the report formatting provided by the HtmlTestRunner project, which uses Jinja2 and Bootstrap under the hood.
HTML unittest reports could be saved to disk for historical auditing, or written to /tmp for one-time or temporary viewing (the default setting).
marbles itself is fairly comprehensively tested, in terms of both lines and features, but we don't test very well how marbles interacts with all of the features of unittest. For example, some of the assertions we're wrapping in marbles may behave strangely depending on how we've wrapped them (we've seen problems before where things like assertMultiLineEqual don't faithfully pass the msg argument to all assertions they then make, which broke our annotations, but only when that method got called with something that wasn't a str).
We should try to test marbles against all assertions in unittest, possibly by running unittest's own test harnesses. We may also want to test interactions between marbles and other features of unittest, but let's start there.
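As a starting point, a compatibility suite could parametrize over every public assertion on unittest.TestCase, discovered by introspection:

```python
import unittest

# Sketch: enumerate every assert* method on unittest.TestCase so a
# compatibility suite can check that a wrapper like marbles handles
# each one (including deprecated aliases like assertEquals).
assertion_names = sorted(
    name for name in dir(unittest.TestCase)
    if name.startswith('assert'))

print('{} assertion methods, e.g. {}'.format(
    len(assertion_names), assertion_names[:3]))
```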
Is your feature request related to a problem? Please describe.
See #92 (comment), #75, and the discussion on #109.
Broadly, we'd like to give various actors more control over how local variable values get formatted in assertion failure messages.
There are several actors at play here:
Describe the solution you'd like
Some prior art can be seen in jupyter/ipython, which has a protocol where, if an object has a rich display method such as _repr_html_, that method is used to display the object in output cells, falling back to __repr__ if it doesn't exist. Similarly, if a class implements _repr_marbles_, we could call that method instead of repr to create the string to display in failure messages.
We should consider how customization works in multiple directions:
- A class author could define a method (e.g., _repr_marbles_), which marbles could look for and use to render an instance of that class.
- A user could customize how instances of a third-party class (e.g., numpy.array) are displayed by registering a display hook somewhere within marbles, to be used based on an isinstance check.
If you're interested in implementing this, let's discuss in this issue what you think about that approach, how you'd handle edge cases like "what if multiple registered custom display hooks are implemented?", how we should approach testing it, and what user controls we'll need (should the --verbose flag affect this? should we support a flag to disable these hooks?).
We should also consider what kinds of customization we want. Two easy ones that come to mind for me are line-wrapping lists on comma boundaries, and sorting sets so they're more visually comparable. A hard problem in this area that I haven't thought through is nested data structures.
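A sketch of the class-author side of the proposed protocol (_repr_marbles_ is the name proposed in this issue, not an existing marbles API; marbles_repr and SortedSet are illustrative):

```python
def marbles_repr(value):
    # Prefer a _repr_marbles_ method on the value's class, falling back
    # to the ordinary repr(). Looking it up on the type (rather than
    # the instance) mirrors how special methods behave.
    method = getattr(type(value), '_repr_marbles_', None)
    if callable(method):
        return method(value)
    return repr(value)

class SortedSet:
    # Example of a class author opting in: display set contents sorted
    # so two failure messages are easier to compare visually.
    def __init__(self, items):
        self.items = set(items)

    def _repr_marbles_(self):
        return '{' + ', '.join(repr(i) for i in sorted(self.items)) + '}'
```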