nestorsalceda / mamba
The definitive testing tool for Python. Born under the banner of Behavior Driven Development (BDD).
Home Page: http://nestorsalceda.github.io/mamba
License: MIT License
Would you consider adding support for being able to import mamba functions, context managers, etc., such as description, it, context?
Having them "global" is nice but breaks most editors because of missing declarations.
For instance:
class Foobar:
    # ... some classy stuff

with describe("Problem with mocks with specs"):
    mock = MagicMock(spec=Foobar)  # !!! this line causes the traceback

    @before.all
    def setup():
        prepare_mock(mock)

    def should_be_accessible():
        use_in_some_way(mock)
The stack trace looks like this:
Traceback (most recent call last):
File "/usr/local/bin/mamba", line 9, in <module>
load_entry_point('mamba==0.5', 'console_scripts', 'mamba')()
File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/mamba-0.5-py2.7.egg/mamba/cli.py", line 15, in main
runner.run()
File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/mamba-0.5-py2.7.egg/mamba/runners.py", line 26, in run
for module in self.example_collector.modules():
File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/mamba-0.5-py2.7.egg/mamba/example_collector.py", line 15, in modules
with self._load_module_from(path) as module:
File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/mamba-0.5-py2.7.egg/mamba/example_collector.py", line 44, in _load_module_from
yield imp.load_source(name, path)
File "src/unittest/python/watcher/flickr_watcher_spec.py", line 73, in <module>
@pending
File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/mamba-0.5-py2.7.egg/mamba/loader.py", line 96, in __exit__
self._sort_loaded_examples(frame)
File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/mamba-0.5-py2.7.egg/mamba/loader.py", line 138, in _sort_loaded_examples
frame.f_locals['current_example'].examples.sort(key=lambda x: x.source_line)
File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/mamba-0.5-py2.7.egg/mamba/loader.py", line 138, in <lambda>
frame.f_locals['current_example'].examples.sort(key=lambda x: x.source_line)
File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/mamba-0.5-py2.7.egg/mamba/example.py", line 71, in source_line
return inspect.getsourcelines(self.test)[1]
File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/inspect.py", line 690, in getsourcelines
lines, lnum = findsource(object)
File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/inspect.py", line 526, in findsource
file = getfile(object)
File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/inspect.py", line 420, in getfile
'function, traceback, frame, or code object'.format(object))
TypeError: <MagicMock spec='Foobar' id='4390321680'> is not a module, class, method, function, traceback, frame, or code object
I've seen a few examples where folks handle skipping a test by making it pass:
it('should multiply two numbers'):
    pass
which seems wrong, since when I run the suite I get the impression that all the tests passed. Which they did, in a technical sense, because the pass is there. But the test didn't actually skip, in the strategic sense.
I was hoping for something like:
it.skip('should multiply two numbers'):
    pass  # shouldn't matter what is put here as long as it's syntactically correct
or
it('should multiply two numbers'):
    skip
so that I can get green for passing tests, red for failed tests, and ? yellow ? (whatever, ANYTHING) for skipped tests to differentiate them from passing/failing tests.
Comments? thoughts?
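For reference, the requested behaviour can be sketched with a sentinel exception, the way unittest's SkipTest works; everything below (the SkipTest class, run_example) is hypothetical, not mamba's API:

```python
class SkipTest(Exception):
    """Raised inside an example body to mark it as skipped."""

def run_example(name, body):
    """Run one example and classify it as passed, failed, or skipped."""
    try:
        body()
    except SkipTest:
        return (name, 'skipped')   # would render yellow
    except Exception:
        return (name, 'failed')    # red
    return (name, 'passed')        # green

def multiplies():
    assert 2 * 3 == 6

def not_ready():
    raise SkipTest('pending implementation')

results = [run_example('should multiply two numbers', multiplies),
           run_example('should divide two numbers', not_ready)]
assert results == [('should multiply two numbers', 'passed'),
                   ('should divide two numbers', 'skipped')]
```

The key point is that a skip is a third outcome distinct from pass and fail, so a formatter can count and colour it separately.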
p.s. Also +1 on someone's comment from over a year ago that mamba needs more explicit documentation. My first reaction was "yay!" My second reaction was "damn it, do I really have to invest this much in searching because the documentation doesn't exist?" And my third reaction was "would it be faster to move to sure or expects, etc.?" It just seems like it'd be good never to get someone to that third point, especially when this project is so clearly rspec-ish goodness and also recent.
This SO question highlights the problems Pythonistas face when ensuring that their Python unit tests can find the actual source modules located elsewhere.
This is incredibly important when the *_spec.py specification files don't live in the same directory as the source code.
For my particular case, I'd like to use PyBuilder for my project's build. Under its conventions:
src/main/python
src/unittest/python
Mamba should be configurable such that I can execute tests in src/unittest/python and be able to specify that imports are relative to src/main/python. I.e. I can imagine executing tests like so:
$ mamba --sources src/main/python src/unittest/python
The path(s) given to the --sources flag would simply be added to the Python path prior to test execution.
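Mechanically, the flag would amount to a couple of lines; a sketch (the --sources option and this helper are the proposal, not an existing mamba feature):

```python
import os
import sys

def add_source_dirs(dirs):
    """Prepend each source directory to sys.path, first dir winning."""
    for d in reversed(dirs):
        path = os.path.abspath(d)
        if path in sys.path:
            sys.path.remove(path)
        sys.path.insert(0, path)

# What `mamba --sources src/main/python ...` would do before collecting specs:
add_source_dirs(['src/main/python'])
assert sys.path[0] == os.path.abspath('src/main/python')
```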
Perhaps, to save the user the trouble of specifying the source path every time, it could be specified in a mamba configuration file, .mamba:
# .mamba.py
src_dirs=['src/main/python']
test_dirs=['src/unittest/python']
Then the user may just execute mamba, which consults the .mamba configuration file.
Hi there,
If I have a module (themotion-python) installed in a virtualenv, then when I try to run the tests locally, mamba grabs the globally installed version instead of the local one, even if you provide the PYTHONPATH. Example:
(34)vagrant@themotion-box:~/workspace/themotion-python$ PYTHONPATH=. mamba tests_integration/object_repository_spec.py
.......F
1 examples failed of 8 ran in 11.7993 seconds
Failures:
1) Object Repository S3 object repository it list content of directory
Failure/Error: tests_integration/object_repository_spec.py result = self.repository.list(parent)
AttributeError: 'ObjectRepository' object has no attribute 'list'
File "tests_integration/object_repository_spec.py", line 78, in 00000008__it list content of directory
result = self.repository.list(parent)
Edu provides a solution in #47, but I think this looks like a bug. mamba should work more like the python CLI.
My structure is:
(34)vagrant@themotion-box:~/workspace/themotion-python$ tree
.
├── tests_integration
│ └── object_repository_spec.py
└── themotion
└── object_repository.py
And mamba takes the code from:
(34)vagrant@themotion-box:~/workspace/themotion-python$ PYTHONPATH=.: python -v /home/vagrant/venvs/34/bin/mamba tests_integration/object_repository_spec.py 2>&1 | grep object_repository
# /home/vagrant/venvs/34/src/themotion/themotion/__pycache__/object_repository.cpython-34.pyc matches /home/vagrant/venvs/34/src/themotion/themotion/object_repository.py
# code object from '/home/vagrant/venvs/34/src/themotion/themotion/__pycache__/object_repository.cpython-34.pyc'
import 'themotion.object_repository' # <_frozen_importlib.SourceFileLoader object at 0x7ff350a36e10>
Failure/Error: tests_integration/object_repository_spec.py result = self.repository.list(parent)
File "tests_integration/object_repository_spec.py", line 78, in 00000008__it list content of directory
# destroy tests_integration/object_repository_spec
# cleanup[2] removing themotion.object_repository
# destroy themotion.object_repository
Regards,
I stick my tests in a directory called src/unittest/python. Let's say there's a file called helloworld_spec.py inside.
When I update helloworld_spec.py, I expect the tests to automatically run. But they don't.
I'm using Mac OS X, but since Mamba uses watchdog which is a cross-platform file watcher library I don't think this problem is specific to the Mac.
Can we have built-in output to render a JUnit XML-style or HTML report? It would help Jenkins integrate the results.
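For illustration, a minimal JUnit-style formatter needs only the standard library; the (name, error) result shape here is hypothetical, not mamba's internal model:

```python
import xml.etree.ElementTree as ET

def to_junit_xml(suite_name, results):
    """Render [(test_name, error_message_or_None), ...] as JUnit XML."""
    failures = sum(1 for _, err in results if err)
    suite = ET.Element('testsuite', name=suite_name,
                       tests=str(len(results)), failures=str(failures))
    for name, err in results:
        case = ET.SubElement(suite, 'testcase', name=name)
        if err:
            ET.SubElement(case, 'failure', message=err)
    return ET.tostring(suite, encoding='unicode')

xml = to_junit_xml('XmlCompiler', [('when called', None),
                                   ('when broken', 'boom')])
assert 'failures="1"' in xml and 'testcase' in xml
```

Jenkins's JUnit plugin consumes exactly this testsuite/testcase/failure shape, so a formatter along these lines would be enough for CI integration.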
Add an additional section when reporting errors that shows the complete traceback for each error.
Example:
from mamba import describe, context, pending, before
from sure import should, expect

with describe('First describe block') as _:
    @before.each
    def setup():
        _.question = "?"

    def setup_runs_fine_and_intialises_context_object():
        _.question.should.be.equal("?")

with describe('Second describe block'):
    with context('with a context') as _:
        @before.each
        def setup():
            _.answer = 42

        def test_answer():
            _.answer.should.be.equal(42)
Result:
First describe block
✓ setup runs fine and intialises context object
Second describe block
with a context
✗ test answer
'_Context' object has no attribute 'answer'
1 examples failed of 6 ran in 0.2448 seconds
Failures:
1) Second describe block with a context test answer
Failure/Error: '_Context' object has no attribute 'answer'
Traceback:
File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/mamba-0.5-py2.7.egg/mamba/example.py", line 33, in _run_inner_test
self.test()
File "src/testincubator/python/scoped_setup_spec.py", line 22, in test_answer
_.answer.should.be.equal(42)
This is because, as a fairly new Pythonista, I kind of expect with blocks to begin their own scope. They don't, however, and I get really confused as to why the @before.each hook appears not to have been called at all.
This happens to me a lot when I'm working fast and refactoring code into @before.all or @before.each hooks; I have a habit of just calling the function setup().
If there was just an error message or something that tells the user that they must pick distinct names that'd be great.
When running mamba and no spec directory exists, mamba breaks.
Before test execution, I don't really have any idea which files the collector saw or didn't see.
Add a verbose mode with debug information so that I can quickly see why files I want picked up aren't being picked up, etc.
It'd be great to have TeamCity understand mamba test run success and failures!
See https://pypi.python.org/pypi/teamcity-messages
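The TeamCity protocol is just specially formatted stdout lines, so a formatter is mostly string assembly; a hand-rolled sketch (the real teamcity-messages package handles escaping, timings, and flow ids properly):

```python
def tc_escape(value):
    """Escape a value for a TeamCity service message ('|' must go first)."""
    for old, new in [("|", "||"), ("'", "|'"), ("\n", "|n"),
                     ("\r", "|r"), ("[", "|["), ("]", "|]")]:
        value = value.replace(old, new)
    return value

def tc_message(message_name, **attrs):
    """Build one ##teamcity[...] service-message line."""
    parts = ' '.join("%s='%s'" % (k, tc_escape(v)) for k, v in attrs.items())
    return "##teamcity[%s %s]" % (message_name, parts)

line = tc_message('testFailed', name='it should reuse freed instances',
                  message='expected: 12 to equal 1')
print(line)
```

TeamCity watches stdout for these lines, so a mamba formatter that prints testStarted/testFailed/testFinished messages would be picked up automatically.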
Sample failure:
3) pooling it should reuse freed instances, if available
Failure/Error: spec/data/pool_spec.py expect(PoolStub.allocated).to(equal(1))
AssertionError:
expected: 12 to equal 1
File "/path/to/python3.6/site-packages/expects/expectations.py", line 25, in _assert
raise AssertionError(self._failure_message(matcher, reasons))
File "/path/to/python3.6/site-packages/expects/expectations.py", line 19, in to
self._assert(matcher)
Removing two lines fixes the issue:
def _traceback(self, example_):
    tb = example_.error.traceback.tb_next
    # if tb.tb_next is not None:
    #     tb = tb.tb_next
    return tb
Giving the following:
3) pooling it should reuse freed instances, if available
Failure/Error: spec/data/pool_spec.py expect(PoolStub.allocated).to(equal(1))
AssertionError:
expected: 12 to equal 1
File "/path/to/python3.6/site-packages/expects/expectations.py", line 25, in _assert
raise AssertionError(self._failure_message(matcher, reasons))
File "/path/to/python3.6/site-packages/expects/expectations.py", line 19, in to
self._assert(matcher)
File "spec/data/pool_spec.py", line 53, in 00000027__it should reuse freed instances, if available--
expect(PoolStub.allocated).to(equal(1))
When an exception is raised in an example, all after.each hooks for that example are skipped:
with description('ensuring "after" hooks get run'):
    with it('when an exception is raised in the body of the test'):
        raise Exception

    with after.each:
        print('this should be printed, but is not')

with description('ensuring "after" hooks get run'):
    with context('nested context'):
        with it('when an exception is raised in the body of the test'):
            raise Exception

        with after.each:
            print('this should be printed, but is not')

    with after.each:
        print('this should also be printed, but is not')
Is this intentional? Or should the after.each hooks be run?
Hi - I'm looking for some feedback and thoughts on the following:
I've been enjoying learning Mamba, and I'd like to make it more similar to rspec in a specific way.
As you can see with sumeet/expect -- a python implementation of what we see here with rspec where we can:
I have found this set of capabilities to be very useful - in ruby, at least. I'm unsure how to proceed though.
Thanks!
Currently, execution contexts are shared among all examples in a group, even among non-sibling examples, so the following fails twice:
with description('top level'):
    with context('first group'):
        with it('an example'):
            self.value = 10

        with it('a second example'):
            expect(self).not_to(have_property('value'))

    with context('second group'):
        with it('another example'):
            expect(self).not_to(have_property('value'))
I believe each example should be given its own execution context. That way, we can enforce a bit more test independence. Having this would mean running all hooks once per example, and I don't see a straightforward way to do it with the current relationship between Example and ExampleGroup.
Do you think this behaviour is desirable? Or do you think it's fine to share execution contexts?
Hi Néstor,
It would be great to have the bugfixes since 0.8.1 released to PyPI.
Thank you! 😄
Thanks. cc/ @eferro
Sometimes, before running the test suite, you need to bootstrap the project. It would be an interesting feature for the runner.
You can sort it out by importing a module in all your test modules, but it is cleaner to be able to do it with the runner.
Add progress bar formatter like in rspec. https://www.relishapp.com/rspec/rspec-core/v/2-6/docs/command-line/format-option
I've got mamba working and passing its tests with watchdog-0.8.1. Could we at least bump the version on watchdog if not simply unpin it?
Consider adding a pkl cache to mamba's transformations.
My nasty, massive repo has the following speedup when using pkl:
Before: 853 examples ran (156 pending) in 1.5438 seconds
After: 853 examples ran (156 pending) in 1.1454 seconds
(~25% faster)
My diff looks like this:
@pickle_cache.cache_from_file('mamba_spec')
def _parse_and_transform_ast(self, path):
    with open(path) as f:
        # (...snip...)
Where pickle_cache.cache_from_file() is defined here:
https://github.com/PhilHarnish/forge/blob/master/src/data/pickle_cache.py
tl;dr: if path is unmodified then the AST is pulled from pkl.
Mamba's requirements are pretty straightforward (eg, function of only "path" and mamba's version number) and so it ought to be easy to accomplish in fewer lines.
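The idea can be sketched self-contained: key the cached result on the file's path and mtime, and fall back to the real parse when either changes. The linked pickle_cache.cache_from_file is the real thing; this decorator is a hypothetical reconstruction, and a production version would also key on mamba's version number:

```python
import os
import pickle
import tempfile

def cache_from_file(cache_path):
    """Cache a one-argument function of a file path, keyed on (path, mtime)."""
    def decorator(fn):
        def wrapper(path):
            key = (path, os.path.getmtime(path))
            try:
                with open(cache_path, 'rb') as f:
                    cached_key, value = pickle.load(f)
                if cached_key == key:
                    return value              # hit: skip the real work
            except (OSError, EOFError, pickle.PickleError):
                pass                          # no usable cache yet
            value = fn(path)
            with open(cache_path, 'wb') as f:
                pickle.dump((key, value), f)  # refresh the cache
            return value
        return wrapper
    return decorator

calls = []
workdir = tempfile.mkdtemp()

@cache_from_file(os.path.join(workdir, 'cache.pkl'))
def parse(path):
    calls.append(path)                        # count real parses
    with open(path) as f:
        return f.read().upper()

src = os.path.join(workdir, 'spec.py')
with open(src, 'w') as f:
    f.write('pass')

assert parse(src) == 'PASS'
assert parse(src) == 'PASS'   # second call served from the pickle cache
assert len(calls) == 1
```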
I don't know why, but even though the PyPI version says 0.8.5, the changes for that version aren't present when you install mamba.
Ah, forget about it. I somehow thought that #60 belonged in 0.8.5.
I just came across mamba during a search for a good test framework. From what I can tell it looks like an easy way to incorporate different testing methods. I think I have the general idea of how it works but wanted to know if there was more documentation on contexts and descriptions. Thanks.
I want to contribute, so I checked out the project, ran python setup.py install, and then ran the tests with mamba.
I got the following trace:
ExampleCollector
✓ it should order by line number without inner context
✓ it should put examples together and groups at last
when loading from file
✓ it should loads the module
✓ it should unload module when finished
when a pending decorator loaded
✓ it should mark example as pending
✓ it should mark example group as pending
when a pending decorator loaded_as_root
✓ it should mark inner examples as pending
Traceback (most recent call last):
File "/usr/local/bin/mamba", line 9, in <module>
load_entry_point('mamba==0.5', 'console_scripts', 'mamba')()
File "/usr/local/lib/python2.7/site-packages/mamba/cli.py", line 15, in main
runner.run()
File "/usr/local/lib/python2.7/site-packages/mamba/runners.py", line 26, in run
for module in self.example_collector.modules():
File "/usr/local/lib/python2.7/site-packages/mamba/example_collector.py", line 15, in modules
with self._load_module_from(path) as module:
File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/usr/local/lib/python2.7/site-packages/mamba/example_collector.py", line 44, in _load_module_from
yield imp.load_source(name, path)
File "spec/example_group_spec.py", line 3, in <module>
from doublex import *
ImportError: No module named doublex
And later,
ExampleCollector
✓ it should order by line number without inner context
✓ it should put examples together and groups at last
when loading from file
✓ it should loads the module
✓ it should unload module when finished
when a pending decorator loaded
✓ it should mark example as pending
✓ it should mark example group as pending
when a pending decorator loaded_as_root
✓ it should mark inner examples as pending
Traceback (most recent call last):
File "/usr/local/bin/mamba", line 9, in <module>
load_entry_point('mamba==0.5', 'console_scripts', 'mamba')()
File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/mamba-0.5-py2.7.egg/mamba/cli.py", line 15, in main
runner.run()
File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/mamba-0.5-py2.7.egg/mamba/runners.py", line 26, in run
for module in self.example_collector.modules():
File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/mamba-0.5-py2.7.egg/mamba/example_collector.py", line 15, in modules
with self._load_module_from(path) as module:
File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/usr/local/Cellar/python/2.7.5/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/mamba-0.5-py2.7.egg/mamba/example_collector.py", line 44, in _load_module_from
yield imp.load_source(name, path)
File "spec/example_group_spec.py", line 3, in <module>
from doublex import *
File "/usr/local/lib/python2.7/site-packages/doublex/__init__.py", line 3, in <module>
from .doubles import *
File "/usr/local/lib/python2.7/site-packages/doublex/doubles.py", line 24, in <module>
import hamcrest
ImportError: No module named hamcrest
How can I use mamba -- which I find cool enough, coming from Ruby -- with the subj.? I'm particularly interested in remote debugging. Thanks.
EDIT: I had problems getting DEBUG messages to show on the console when running mamba tests. I assumed this was because mamba suppresses logging output on stdout or stderr. This is not true. Mamba does not interfere with stdout or stderr much, and thus I've closed this issue.
For posterity, my particular problem was solved by setting disable_existing_loggers = False as shown here. Check the article to see whether this is relevant to you!
Hi, I'd like to bring up the topic of these two features to attempt an
implementation. I believe there are a few points to discuss:
First of all, notice that whenever new identifiers are used, we run the risk of
driving linters crazy (that is already the case with describe, description,
context, it, the 'pending' variants, before, after and self).
fdescribe, fdescription, fcontext, fit à la Jasmine, or similar.
Here, the inspiration is clearly rspec (https://relishapp.com/rspec/rspec-core/v/3-4/docs/metadata/user-defined-metadata). I would not yet go as far as to create a whole mechanism for user-defined metadata, just one for tags or even just for the 'focus' feature.
Some ideas:
with description('spec description', 'a_tag', 'another_tag'):
    pass
with description('spec description', a_tag, another_tag):
    pass
with description('spec description'), 'a_tag', 'another_tag':
    pass
with description('spec description'), a_tag, another_tag:
    pass
with description('spec description') as a_tag, another_tag:
    pass
with description('spec description') is a_tag, another_tag:
    pass
with description('spec description') in a_tag, another_tag:
    pass
It's probably harder to retrieve these from the AST.
'a_tag', 'another_tag'
with description('spec description'):
    pass

a_tag, another_tag
with description('spec description'):
    pass

with description('spec description'):
    'a_tag', 'another_tag'
    pass

with description('spec description'):
    a_tag, another_tag
    pass
In this case, we would need to move away from an AST towards something like a concrete syntax tree, such as redbaron and similar, or the tokenize module.
with description('spec description'): #a_tag #another_tag
    pass
I'm sure there are quite a few more candidates. Having tags in the same line is
clumsier to type but more readable; having them in the previous or next line is
more comfortable to type but less intuitive, I think.
ping: @deusz @eferro because their work in #78 suggests they're interested in this, @fatuhoku because he originally wrote #20
Here is the output of my jenkins project via mamba --format documentation --no-color:
XmlCompiler
when called
XmlMerger
when multiple fail xmls
when multiple pass xmls
when mixed pass and fail xmls
4 examples ran in 0.6710 seconds
C:\JenkinsWorkspace\XmlUtils_Build\>exit 1
Build step 'Execute Windows batch command' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Sending email to: [email protected]
Finished: FAILURE
It'd be wonderful to be able to define a before.each for the entire suite. This would allow things like ensuring that the database is always cleared between examples. As it is, I have to do the global test setup once per file.
With RSpec, which seems to be what mamba is modeled on, you can put before(:each) and before(:all) in the global rspec configuration.
Nowadays I've built up a slow-running integration test suite (by slow I mean, like 4 seconds or so) written with Mamba. I'm super happy with it.
However when I want to hack on a new integration test, I don't want to be running the entire suite. I want to specifically run the one test that I'm intensively developing and get immediate feedback (I don't want to wait 4 seconds!).
On a different console tab, I can have file-watching triggers launch the long-running unit- and integration-tests still.
This lets me make sure that once the test I'm hacking on passes, I can check the results of the test-suite as a whole and see that no regressions have happened.
For example:
with describe('Widgets'):
    with context('with foos'):
        def should_not_be_run_between_bars_setup_and_teardown_methods():
            print("inside test with FOOS")

    with context('with bars'):
        @before.each
        def setup():
            print("Setting up BARS")

        @after.each
        def teardown():
            print("Tearing down BARS")
The console outputs:
Setting up BARS
inside test with FOOS
Tearing down BARS
What I imagine would be the right behaviour is that:
- with foos detects one test; it is run without any before or after methods, because none were declared within the with context('with foos') block
- with bars detects no tests, but @before.each and @after.each hooks are detected. Since there are no tests, neither of these hooks is triggered.
So we should simply expect to see:
inside test with FOOS
Hi, I am relatively new to Python and I have a difficult time understanding how modules work.
For now I have a folder ~/my_python_modules which is added to PYTHONPATH, and if I want to write anything with tests, I can do that only in that folder. Is there a better way?
Let's say I have the following folder structure, not in ~/my_python_modules:
./Fancy
├── README.md
├── spec
│ └── fancy_module_spec.py
└── src
└── fancy_module.py
So how can I import fancy_module for testing?
# fancy_module_spec.py
from fancy_module import fancy_function

with description('fancy_function'):
    # ...
Hello!
Please, include on the documentation which versions of Python are supported by this tool.
I'm working on a legacy application that uses Python 2.6, and I didn't know if it was going to work or not until I tried it myself.
One example of what I'm asking for is the head of the README.md file of Sure.
Best regards.
When having a second 'describe()' using the same title, mamba will skip the execution of the first example group altogether, without any warning.
with description('some description'):
    with description('a duplicated description'):
        with it('does something'):
            do()  # THIS CODE WILL NOT RUN

    with description('a duplicated description'):
        with it('does something'):
            do()  # THIS WORKS
It should fail or at least warn the user about this condition
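Whether mamba actually stores groups this way is an assumption, but the symptom matches how Python dicts treat duplicate keys: a second insertion under the same title silently replaces the first, with no warning:

```python
# Hypothetical illustration: IF example groups are keyed by their
# title in a dict, the second insertion silently replaces the first --
# Python raises no error for duplicate keys assigned at runtime.
groups = {}
groups['a duplicated description'] = ['does something (first)']
groups['a duplicated description'] = ['does something (second)']

assert len(groups) == 1
assert groups['a duplicated description'] == ['does something (second)']
```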
I am totally new to Python and Mamba (from Ruby). So far I'm actually quite pleased with the rate I have made progress with my very limited Python knowledge.
(Please note this is specifically about manipulating the state of the subject of the example before executing the method under test; as such it isn't conveniently accessible in before/after hooks, and shouldn't be there anyway, per rspec hooks.)
In rspec I'm used to doing something like this ...
describe '#shiny?' do
  let(:cat) { c = Cat.new; c.make_cat_shiny!; c }
  subject(:mut) { cat.shiny? }

  it "is a shiny cat" do
    expect(mut).to be true
  end
end
There isn't (from what I've seen) a direct equivalent in mamba. So far the most similar and yet somewhat idiomatic solution I've come up with in mamba is like so...
with description(Cat):
    # self.subject is an instance of Cat
    with description('#is_shiny'):
        # self.subject is (still?) an instance of Cat
        def test_cat(self):
            t_c = self.subject
            t_c.shine()
            return t_c

        def mut(self):
            return self.test_cat().is_shiny()

        with it("is a shiny cat"):
            expect(self.mut()).to(equal(True))
Is there a better / more idiomatic / canonical approach or convention for accomplishing let?
What's the proper way to set the subject, when the subject is a method?
I saw something in the spec path that looked relevant and started me down the path I've highlighted above.
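As a point of comparison, plain Python can approximate rspec's memoized let with functools.lru_cache; this is a generic sketch (the Cat class is a stand-in), not a mamba feature:

```python
from functools import lru_cache

class Cat:
    def __init__(self):
        self._shiny = False

    def shine(self):
        self._shiny = True

    def is_shiny(self):
        return self._shiny

@lru_cache(maxsize=None)
def cat():
    """Built lazily on first use, then memoized -- like rspec's let."""
    c = Cat()
    c.shine()
    return c

assert cat() is cat()            # one shared, memoized instance
assert cat().is_shiny() is True
```

Inside a spec, such a helper would give every reference to cat() the same prepared instance without repeating the setup.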
Given the following failing fixture:
from mamba import describe, before

with describe('weird mamba behaviour'):
    @before.all
    def setup():
        raise Exception()

    def should_not_pass():
        pass
When any exception is raised in the setup function, the 0.5 version just ignores the should_not_pass test function; but at the end, the process returns 1.
I think this is quite an annoying behaviour: the run looks all fine and green on the command line, but it will make any CI server fail, leaving you stranded.
$ mamba
weird mamba behaviour
0 examples ran in 0.0029 seconds
$ echo $?
1
Previous versions of mamba don't ignore the test, and mark it as a failure, so I think this is a regression. See 0.4:
weird mamba behaviour
✗ should not pass
Failures:
1) weird mamba behaviour should not pass
Failure/Error:
Traceback:
1 specs failed of 1 ran in 0.0000 seconds
And 0.3:
weird mamba behaviour
✗ should not pass
1 specs failed of 1 ran in 0.0000 seconds
After hooks only run if tests pass, which doesn't seem desirable. For system tests, for example, this could mean leaving state behind in actual systems when the suite fails.
I've got mamba working and passing its tests with coverage-3.7.1. Could we at least bump the version on coverage if not simply unpin it?
When I develop against tests, I want to turn on debugging for one particular test.
Ideally I want to do this with a decorator like so:
import logging

with describe("Blah"):
    @log_at(logging.DEBUG)
    def should_be_awesome():
        # complex logic
This is how the decorator is defined: it sandwiches the test function should_be_awesome (argument f here) between setting and restoring the logging level.
def log_at(log_level=logging.DEBUG):
    def wrap_with_logging_level(f):
        logger = logging.getLogger()
        old_level = logger.level
        logger.setLevel(log_level)
        f()
        logger.setLevel(old_level)
    return wrap_with_logging_level
However, when I run this, mamba runs the example but does not report the test correctly:
Blah
I appreciate that there are before and after hooks for a use case like this; a quick decorator to enable debugging for a function is just incredibly convenient.
NOTE: I like @pending a lot. This issue is not asking to replace pending, but asking for an additional decorator.
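Incidentally, wrap_with_logging_level as written calls f() at decoration time and returns None, so after decoration should_be_awesome no longer names a function, which may be part of why the report looks wrong. A variant that returns a real wrapper (a generic Python sketch; whether mamba would then collect it is a separate question):

```python
import functools
import logging

def log_at(log_level=logging.DEBUG):
    def decorator(f):
        @functools.wraps(f)              # keep the original name for collection
        def wrapper(*args, **kwargs):
            logger = logging.getLogger()
            old_level = logger.level
            logger.setLevel(log_level)   # raise verbosity for this test only
            try:
                return f(*args, **kwargs)
            finally:
                logger.setLevel(old_level)  # always restore, even on failure
        return wrapper
    return decorator

@log_at(logging.DEBUG)
def should_be_awesome():
    return logging.getLogger().level     # observe the level inside the test

assert should_be_awesome() == logging.DEBUG
assert should_be_awesome.__name__ == 'should_be_awesome'
```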
PyTest provides @skip, @skipUnless, etc. decorators for conditional tests. Mamba should too!
For some examples, see here.
Given the following object that cannot be constructed:
class Something(object):
    def __init__(self):
        raise Exception('Oops!')

    def do_work(self):
        pass
The following spec,
with description('something'):
    with before.each:
        self.something = Something()

    with description('when we wanna reproduce the bug'):
        with before.each:
            self.something.do_work()

        with it('swallows exceptions'):
            pass
I would expect that the Exception("Oops!") is raised, but an AttributeError() is raised instead, producing the following result:
something
when we wanna reproduce the bug
✗ it swallows exceptions
'ExecutionContext' object has no attribute 'something'
...
The exception in the Something() constructor DOES occur, but for some reason gets swallowed by mamba, no pun intended.
This in turn does not occur if there is at least one example in the outer describe block. In that case, the exception for the inner example is raised as expected.
with description('something'):
    with before.each:
        self.something = Something()

    with description('when we wanna reproduce the bug'):
        with before.each:
            self.something.do_work()

        with it('swallows exceptions'):
            pass

    with it('does not fail if this is here'):
        pass
produces this output:
something
✗ it does not fail if this is here
Oops!
when we wanna reproduce the bug
✗ it swallows exceptions
Oops!
...
before.all seems to be executed for each example group (good) and again for all inner groups (bad?). Perhaps this is even WAI and the "fix" is more clarification in the docs?
This behavior started once I updated:
$ mamba --version
0.8.6
# ^ This version was OK.
$ pip install mamba -U
Collecting mamba
Downloading mamba-0.9.2.tar.gz
# (...snip...)
Here's a sample test:
from spec.mamba import *

times_called = 0

with description('has before all'):
    with before.all:
        global times_called
        print('Running before.all')
        times_called += 1

    with it('should have run before.all once'):
        expect(times_called).to(equal(1))

    with description('inner description'):
        with it('should have run before.all once'):
            expect(times_called).to(equal(1))  # Fails; times_called == 2.

    with context('inner context'):
        with it('should have run before.all once'):
            expect(times_called).to(equal(1))  # Fails; times_called == 3.
Probably this isn't a popular thing, but would y'all be interested in supporting Python 2.6? I'd be willing to do at least some of the work to get it going. Just now, I was setting up my library to test on TravisCI and I'd like to be able to have 2.6 support for my lib... which means the testing framework needs to also support 2.6.
I'm not sure what all would be involved, of course. What do y'all think?
It would be really great if there were a shared example mechanism. Because the examples are collected via AST parsing, just calling a method that defines contexts/examples inside doesn't do anything at all.
I'm new to Python (coming from Ruby), so I mostly expect this is user error. Someone pointed me at an essay by Kenneth Reitz, from which I'm stealing this technique. So I have this directory structure:
▾ cetacean/
__init__.py
requirements.txt
setup.py
▾ spec/
__init__.py
acceptance_spec.py
context.py
But witness:
● cetacean-python master ❱ python --version
Python 2.7.9
● cetacean-python master ❱ cat spec/context.py
#!/usr/bin/env python
# encoding: utf-8
import os
import sys
sys.path.insert(0, os.path.abspath('..'))
import cetacean
● cetacean-python master ❱ mamba
Traceback (most recent call last):
File "/home/ben/.pyenv/versions/2.7.9/bin/mamba", line 9, in <module>
load_entry_point('mamba==0.8.3', 'console_scripts', 'mamba')()
File "/home/ben/.pyenv/versions/2.7.9/lib/python2.7/site-packages/mamba/cli.py", line 19, in main
runner.run()
File "/home/ben/.pyenv/versions/2.7.9/lib/python2.7/site-packages/mamba/runners.py", line 27, in run
for module in self.example_collector.modules():
File "/home/ben/.pyenv/versions/2.7.9/lib/python2.7/site-packages/mamba/example_collector.py", line 20, in modules
with self._load_module_from(path) as module:
File "/home/ben/.pyenv/versions/2.7.9/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/home/ben/.pyenv/versions/2.7.9/lib/python2.7/site-packages/mamba/example_collector.py", line 53, in _load_module_from
yield self._module_from_ast(name, path)
File "/home/ben/.pyenv/versions/2.7.9/lib/python2.7/site-packages/mamba/example_collector.py", line 70, in _module_from_ast
exec(code, module.__dict__)
File "spec/acceptance_spec.py", line 3, in <module>
from .context import cetacean
ValueError: Attempted relative import in non-package
I ended up finding some wisdom on Stack Overflow. Specifically, I think this bit might be relevant:
the python import mechanism works relative to the __name__ of the current file. When you execute a file directly, it doesn't have its usual name, but has "__main__" as its name instead. So relative imports don't work. You can, as Ignacio suggested, execute it using the -m option. If you have a part of your package that is meant to be run as a script, you can also use the __package__ attribute to tell that file what name it's supposed to have in the package hierarchy.
So, somehow Mamba is running the files it finds without them going through the normal package system? Or am I just doing something completely ridiculous?
Thanks, in advance, for your help.