django-behave
A Django TestRunner for the Behave BDD module
I was using django-behave==0.1.5, behave==1.2.5, and Django 1.7 without issue.
After upgrading to Django 1.8, all non-behave tests run from all apps, but behave tests only run for the first listed app.
e.g.
./manage.py test foo bar baz
All Django unit/system tests run for foo, bar, and baz, but behave tests only run for foo, not bar or baz (or any subsequent apps).
happens with using:
TEST_RUNNER = 'django_behave.runner.DjangoBehaveTestSuiteRunner'
and with using:
TEST_RUNNER = 'django_behave.runner.DjangoBehaveOnlyTestSuiteRunner'
I have spent a few hours on this today and can't work out what is going on.
Any help would be greatly appreciated.
Thanks
Hi,
Would you be willing to move django-behave into its own org so multiple people can be maintainer?
I'd be happy to be one of several people with commit privileges. I'm just not sure I'm the one to be sole owner.
David
It appears that a recent commit has broken django-behave: suite is no longer defined in DjangoBehaveTestSuiteRunner.build_suite at https://github.com/rwillmer/django-behave/blob/master/django_behave/runner.py#L189.
suite was defined a month ago in https://github.com/rwillmer/django-behave/blob/e0400adb9da9129394b8a3c8728a4f001482ead8/django_behave/runner.py#L185, but it is no longer there after the next commit 2 days ago: https://github.com/rwillmer/django-behave/blob/e0400adb9da9129394b8a3c8728a4f001482ead8/django_behave/runner.py#L185
I tried simply adding back in the line:
suite = unittest.TestSuite()
at the beginning of the function, but while that allowed the unittests to run, no behave tests ran. This is my first time using django-behave so it is impossible for me to know whether there is a bug or I have improper configuration.
Hi there,
In the README you mention "copy django-behave/features/steps/library.py, if wanted", but that file does not seem to be present?
thanks for this btw :)
When running tests, I often see multiple test databases being created on the DB server. Running SHOW DATABASES during a behave test shows test_MYDB, test_test_MYDB, and sometimes even test_test_test_MYDB.
Here is my environment.py:

from django_behave.runner import DjangoBehaveTestSuiteRunner

def before_all(context):
    context.runner = DjangoBehaveTestSuiteRunner()

def before_scenario(context, scenario):
    # start from a fresh test DB
    context.old_db_config = context.runner.setup_databases()

def after_scenario(context, scenario):
    # tear down the test DB after each feature
    context.runner.teardown_databases(context.old_db_config)
I'm calling context.runner.setup_databases() and context.runner.teardown_databases(...) because the default behaviour wasn't clearing the DB between scenarios. I didn't see anything in the django_behave runner that touches the DB setup, so perhaps it's LiveServerTestCase that I should look into? Any tips or pointers would be much appreciated. I'm using Django 1.8.1 and the latest django_behave.
When running tests, I get the following exception:
$ ./manage.py test
Traceback (most recent call last):
File "./manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/home/minime/.virtualenvs/kohanka/lib/python3.4/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line
utility.execute()
File "/home/minime/.virtualenvs/kohanka/lib/python3.4/site-packages/django/core/management/__init__.py", line 330, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/minime/.virtualenvs/kohanka/lib/python3.4/site-packages/django/core/management/commands/test.py", line 30, in run_from_argv
super(Command, self).run_from_argv(argv)
File "/home/minime/.virtualenvs/kohanka/lib/python3.4/site-packages/django/core/management/base.py", line 378, in run_from_argv
parser = self.create_parser(argv[0], argv[1])
File "/home/minime/.virtualenvs/kohanka/lib/python3.4/site-packages/django/core/management/base.py", line 351, in create_parser
self.add_arguments(parser)
File "/home/minime/.virtualenvs/kohanka/lib/python3.4/site-packages/django/core/management/commands/test.py", line 57, in add_arguments
"The method to extend accepted command-line arguments by the "
RuntimeError: The method to extend accepted command-line arguments by the test management command has changed in Django 1.8. Please create an add_arguments class method to achieve this.
I was a bit surprised to see that neither Scenarios nor Features run in isolation, as far as the database is concerned. How should we implement this? Should we, as in the manual example, empty the DB between scenarios? Or should it be done between features?
Currently I create some fixtures (using the excellent model_mommy) in environment.before_all, adding a User, that sort of tedium. But I can imagine that you describe the fixtures broadly in the Background. So an empty database after each feature would mean my before_all would have to move to before_feature, after which the Background is run and then the Scenarios.
What patterns have arisen with your use? Should this even be automated, or should everyone just use context.config and the before_... and after_... functions to his/her liking?
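For what it's worth, one pattern that fits the hooks-based approach is flushing the database between scenarios. A minimal sketch, assuming the behave hooks run inside Django's test-database context (flush is Django's real management command that truncates all tables):

```python
# environment.py -- reset the test database between scenarios.
# Sketch only: assumes behave runs inside Django's test-database
# context, so 'flush' empties the *test* DB, not a real one.

def reset_database():
    # Deferred import so this module loads even without Django configured.
    from django.core.management import call_command
    call_command("flush", interactive=False, verbosity=0)

def before_scenario(context, scenario):
    # Each scenario starts from an empty database; move this to
    # before_feature instead if data created in a Background should
    # persist across a feature's scenarios.
    reset_database()
```

Whether this belongs in before_scenario or before_feature is exactly the trade-off described above: per-scenario isolation is stricter, per-feature keeps Background-created data alive.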
Traceback (most recent call last):
File "manage.py", line 10, in
execute_from_command_line(sys.argv)
File "/Users/montiniz/.virtualenvs/finn_dev/lib/python2.7/site-packages/django/core/management/__init__.py", line 399, in execute_from_command_line
utility.execute()
File "/Users/montiniz/.virtualenvs/finn_dev/lib/python2.7/site-packages/django/core/management/__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Users/montiniz/.virtualenvs/finn_dev/lib/python2.7/site-packages/django/core/management/commands/test.py", line 50, in run_from_argv
super(Command, self).run_from_argv(argv)
File "/Users/montiniz/.virtualenvs/finn_dev/lib/python2.7/site-packages/django/core/management/base.py", line 238, in run_from_argv
parser = self.create_parser(argv[0], argv[1])
File "/Users/montiniz/.virtualenvs/finn_dev/lib/python2.7/site-packages/django/core/management/commands/test.py", line 53, in create_parser
test_runner_class = get_runner(settings, self.test_runner)
File "/Users/montiniz/.virtualenvs/finn_dev/lib/python2.7/site-packages/django/test/utils.py", line 166, in get_runner
test_module = __import__(test_module_name, {}, {}, force_str(test_path[-1]))
File "/Users/montiniz/.virtualenvs/finn_dev/lib/python2.7/site-packages/django_behave/runner.py", line 174, in
class DjangoBehaveTestSuiteRunner(DjangoTestSuiteRunner):
File "/Users/montiniz/.virtualenvs/finn_dev/lib/python2.7/site-packages/django_behave/runner.py", line 178, in DjangoBehaveTestSuiteRunner
(option_list, option_info) = get_options()
File "/Users/montiniz/.virtualenvs/finn_dev/lib/python2.7/site-packages/django_behave/runner.py", line 65, in get_options
(make_option(name, **keywords),)
File "/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/optparse.py", line 577, in __init__
checker(self)
File "/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/optparse.py", line 660, in _check_type
raise OptionError("invalid option type: %r" % self.type, self)
optparse.OptionError: option --behave_logging-level: invalid option type: <bound method type.parse_type of <class 'behave.configuration.LogLevel'>>
This happens to me with the repo version, running Django 1.6, Python 2.7, behave 1.2.4a1.
For now I just edited runner.py line 57 to:
if long_option and long_option != "--logging-level":
I may be misunderstanding how behave is supposed to work, but I created a /features and a /features/steps directory. I made feature_1.feature and steps/feature_1.py and everything was fine.
Then I made feature_2.feature and steps/feature_2.py and suddenly my tests couldn't find the @when definitions for any of my feature_1 clauses (oddly the @given clauses were all found without issue).
I then tried copying everything from feature_1.py and feature_2.py into steps.py and everything worked again.
Is there a way to execute a step which is defined in a different app without repeating it?
My case: a step is defined in app1 which creates some data in db from step table. I need to use the same step in app2.
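One approach that works with behave's registration model: a step definition registers itself the moment its @given/@when/@then decorator executes, so importing app1's step module from a steps file in app2 is enough. A sketch, where the module path is an assumption about the project layout:

```python
# app2/features/steps/shared.py -- reuse app1's step definitions.
# behave registers each step at import time (when the decorator runs),
# so importing the other app's module makes its steps available here.
import importlib

# Hypothetical module path; adjust to wherever app1 keeps its steps.
SHARED_STEP_MODULES = ["app1.features.steps.common_steps"]

def load_shared_steps():
    for module_path in SHARED_STEP_MODULES:
        importlib.import_module(module_path)
```

Calling load_shared_steps() at the top of the module (or simply writing from app1.features.steps.common_steps import * directly) registers the shared steps for app2's features, as long as app1 is on the Python path.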
My original version of this was for use with the Splinter library; this file has been modified to use a different library. It's now a very confused file.
Hi,
in our Django 1.6 project we store apps in the "apps/" directory and wanted to validate our application with django-behave. Unfortunately, this doesn't work:
$ ./manage.py test myapp
(...)
ImportError: No module named myapp
But running with apps.myauth doesn't detect the features/ (the other tests run, but not the behave tests)
$ ./manage.py test apps.myauth
Ignoring label with dot in: apps.myauth
(...)
Commenting the three lines doing this test gives the following:
$ ./manage.py test apps.myauth
ImportError: No module named myauth
Replacing the test_labels in the DjangoTestSuiteRunner with a new list of the full names:
app = get_app(label)
test_labels_foo.append('.'.join(app.__name__.split('.')[0:-1]))
works, but that's ugly.
Thanks in advance, cheers,
OdyX
Could you please provide a license?
I'd like to speed up my test runs using --keepdb - https://docs.djangoproject.com/en/1.8/ref/django-admin/#django-admin-option---keepdb
manage.py test: error: unrecognized arguments: --keepdb
Could you add this, please?
When using South for schema and data migrations, data from the migrations is missing from the test DB when running the tests.
South seems to contain a monkeypatch around this problem.
Is there a way to only run behave tests? I don't like the idea of running both unit tests and behave tests. I saw that DjangoBehaveTestSuiteRunner adds the behave tests as extra tests. Is there an option right now to make behave tests the only ones to be run?
I mostly do unit testing, but now I want to do some integration testing using django-behave. I can't figure out how to set up my tests so that the browser I launch visits a site backed by the test database created by the Django test runner. What do other people do?
My features are set up as follows:
djangoproject
My accounts/features/environment.py file:

from splinter.browser import Browser

def before_all(context):
    context.browser = Browser()
When I run my tests with the following command:
$ python ./manage.py test accounts --testrunner=django_behave.runner.DjangoBehaveOnlyTestSuiteRunner -v 3
I can see the test database being created and it gets used for my unit tests (I can prove that by asserting I don't have any users in my User model). But since I have to give the browser a url, I am running the tests against my dev site and the second time I run a feature, it fails because the username is already taken - as I can see when I look at the admin interface of my dev site.
Can I spin up a web site against the newly created, clean test database? If so, how? It doesn't exist until the test starts running.
If I can't start up a web site pointing to the test db, then I'll need some sort of database cleanup option so my dev database to destroy the user sign ups etc. Anyone have a full example of that? I see discussion and some code fragments various places but I have not been able to cobble together a working test from the pieces.
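If the runner's test case really is a LiveServerTestCase, a live server backed by the test database is already running while the tests execute; the missing piece is its root URL. Assuming django-behave exposes that URL on the context as base_url (an assumption; check django_behave/runner.py for the actual attribute name), the environment file could build absolute URLs against the test server instead of the dev site:

```python
# accounts/features/environment.py -- point the browser at the live
# test server instead of the dev site. ASSUMPTION: the runner puts the
# LiveServerTestCase root URL on the context as `base_url`.

def before_all(context):
    # Deferred import keeps this module importable without splinter.
    from splinter.browser import Browser
    context.browser = Browser()

def server_url(context, path):
    # Hypothetical helper: join the live-server root and a relative path.
    return context.base_url.rstrip("/") + "/" + path.lstrip("/")

def after_all(context):
    context.browser.quit()
```

A step would then call context.browser.visit(server_url(context, "/accounts/signup/")) rather than hard-coding a dev-site URL, so every run starts against the fresh test database.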
Is there a way to run a single test, or a single file with django-behave?
When trying to run a single test file, no tests are run:
./manage.py test --testrunner=django_behave.runner.DjangoBehaveOnlyTestSuiteRunner bdd_tests/features/intervention.feature
Ignoring label with dot in: bdd_tests/features/intervention.feature
Creating test database for alias 'default'...
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
Once I figure it out, I think this should be added to the django-behave documentation.
I've set up django-behave in our project following the first 3 steps of the How To Use section in the README. I already have feature tests in a /features/ folder of the project root, which execute fine when running Behave directly:
(virtualenv)$ behave
Feature: Dashboard notifications # features/f00_dashboard.feature:1
...
Scenario: Show off BDD testing with behave # features/f00_dashboard.feature:7
...
Feature: Login, logout, retrieve forgotten password # features/f00_login_logout.feature:1
...
Scenario: Homepage presents login form # features/f00_login_logout.feature:6
...
Scenario: Login to FooBarWebsite # features/f00_login_logout.feature:12
...
Scenario: Logout from FooBarWebsite # features/f00_login_logout.feature:18
...
Scenario: Back button does not reveal cached pages # features/f00_login_logout.feature:24
2 features passed, 0 failed, 0 skipped
5 scenarios passed, 0 failed, 0 skipped
18 steps passed, 0 failed, 0 skipped, 0 undefined
Took 0m22.439s
Now, unfortunately, with django-behave set up, manage.py test executes with the specified TEST_RUNNER, but no tests are discovered at all:
(virtualenv)$ python manage.py test
Creating test database for alias 'default'...
No filer contentypes to migrate. This is probably because migrate is running on a new database.
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
Destroying test database for alias 'default'...
(Note: I don't use a test database for now, running my Selenium tests against a started runserver.)
What am I missing in the django-behave configuration to make manage.py test discover the tests?
What irritates me in step 3 of the "How To Use" section is the "apps":
- add features directories to apps
Does this mean in a Django project I can't run feature tests on the project level, outside of any app?
I'm getting the following error when installing from pypi:
jscn@ziz:~/venvs/djello$ pip install django-behave
Downloading/unpacking django-behave
Downloading django-behave-0.0.12.macosx-10.8-x86_64.tar.gz
Running setup.py egg_info for package django-behave
Traceback (most recent call last):
File "<string>", line 16, in <module>
IOError: [Errno 2] No such file or directory: '/home/jscn/venvs/djello/build/django-behave/setup.py'
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 16, in <module>
IOError: [Errno 2] No such file or directory: '/home/jscn/venvs/djello/build/django-behave/setup.py'
It appears as though it's getting the macosx binary, instead of the source. (I'm running Ubuntu, not OSX). Seems to install okay if I specify the version number:
pip install django-behave==0.0.12
Unfortunately, I don't know enough about Python packaging to be able to say what the problem is.
django-behave is not working with Django>=3 because of dependency on django.utils.six
Can you release a new version? The last version is incompatible with Django 1.7.
I prefixed some of my behave features with @Skip, thinking they would be ignored, but they still run.
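As far as I know, behave gives no special meaning to a tag just because it is named skip, unless you exclude it on the command line (e.g. --tags=-skip). Behave 1.2.5 does expose scenario.skip(), so one sketch, assuming that version or later, is to honour the tag from a hook:

```python
# environment.py -- honour a @skip tag on scenarios.
# Sketch assuming behave >= 1.2.5, where Scenario.skip() exists.
SKIP_TAG = "skip"

def has_skip_tag(tags):
    # Case-insensitive check so @skip and @Skip both match.
    return any(tag.lower() == SKIP_TAG for tag in tags)

def before_scenario(context, scenario):
    # effective_tags includes tags inherited from the enclosing feature.
    if has_skip_tag(scenario.effective_tags):
        scenario.skip("Marked with @skip")
```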
Using the latest git master branch of django-behave (b9a7deb) with Django 1.8, I get the following error. It occurs when using both DjangoBehaveTestSuiteRunner and DjangoBehaveOnlyTestSuiteRunner.
./manage.py test bdd_tests --testrunner=django_behave.runner.DjangoBehaveOnlyTestSuiteRunner
Traceback (most recent call last):
File "./manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/home/nnyby/src/worth2/ve/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 338, in execute_from_command_line
utility.execute()
File "/home/nnyby/src/worth2/ve/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 330, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/nnyby/src/worth2/ve/local/lib/python2.7/site-packages/django/core/management/commands/test.py", line 30, in run_from_argv
super(Command, self).run_from_argv(argv)
File "/home/nnyby/src/worth2/ve/local/lib/python2.7/site-packages/django/core/management/base.py", line 390, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/nnyby/src/worth2/ve/local/lib/python2.7/site-packages/django/core/management/commands/test.py", line 74, in execute
super(Command, self).execute(*args, **options)
File "/home/nnyby/src/worth2/ve/local/lib/python2.7/site-packages/django/core/management/base.py", line 441, in execute
output = self.handle(*args, **options)
File "/home/nnyby/src/worth2/ve/local/lib/python2.7/site-packages/django/core/management/commands/test.py", line 90, in handle
failures = test_runner.run_tests(test_labels)
File "/home/nnyby/src/worth2/ve/local/lib/python2.7/site-packages/django/test/runner.py", line 209, in run_tests
suite = self.build_suite(test_labels, extra_tests)
File "/home/nnyby/src/worth2/ve/local/lib/python2.7/site-packages/django_behave/runner.py", line 267, in build_suite
features_dir = get_features(app)
File "/home/nnyby/src/worth2/ve/local/lib/python2.7/site-packages/django_behave/runner.py", line 55, in get_features
app_dir = get_app_dir(app_module)
File "/home/nnyby/src/worth2/ve/local/lib/python2.7/site-packages/django_behave/runner.py", line 48, in get_app_dir
app_dir = dirname(app_module.__file__)
AttributeError: 'NoneType' object has no attribute '__file__'
Makefile:40: recipe for target 'behave' failed
Our webpage says 'You can run all unittest2 tests with the following: python tests.py'
https://github.com/django-behave/django-behave
This fails with
Air:django-behave rachel$ ve/bin/python tests.py
Traceback (most recent call last):
File "tests.py", line 30, in test_runner_with_default_args_expect_bdd_tests_run
self.assertIn('scenario passed', actual.out)
AssertionError: 'scenario passed' not found in ''
Traceback (most recent call last):
File "tests.py", line 40, in test_runner_with_old_tag_specified_expect_only_old_bdd_test_run
self.assertIn('1 scenario passed, 0 failed, 1 skipped', actual.out)
AssertionError: '1 scenario passed, 0 failed, 1 skipped' not found in ''
Traceback (most recent call last):
File "tests.py", line 45, in test_runner_with_undefined_steps_expect_display_undefined_steps
self.assertIn('You can implement step definitions for undefined steps with', actual.err)
AssertionError: 'You can implement step definitions for undefined steps with' not found in 'Traceback (most recent call last):\n File "./manage.py", line 8, in \n from django.core.management import execute_from_command_line\nImportError: No module named django.core.management\n'
Ran 4 tests in 0.201s
FAILED (failures=3)
Does django-behave support Python 3?
If so, it would be nice to designate that on PyPI using classifiers in setup.py (see https://pypi.python.org/pypi?%3Aaction=list_classifiers)
If not, what does the path forward look like? I know both behave and django already support Python 3, so it shouldn't be much work. I would be glad to help.
I am getting the following error on Django 1.8 when I run a custom management command that uses the DjangoBehaveOnlyTestSuiteRunner:
AttributeError: 'DjangoBehaveOnlyTestSuiteRunner' object has no attribute 'option_info'
return call_command('test',
                    *test_apps,
                    testrunner='django_behave.runner.DjangoBehaveOnlyTestSuiteRunner',
                    verbosity=3,
                    interactive=False)
if I use the same call_command but with
testrunner='django_behave.runner.DjangoBehaveTestSuiteRunner',
then it works fine (obviously running all the tests, not just the behave tests).
Interestingly
./manage.py test fooapp --testrunner=django_behave.runner.DjangoBehaveOnlyTestSuiteRunner
works fine so I am not sure what is going on here.
I think it would be very useful to clean the database between features/scenarios. I'm not sure if that is achievable now; I can't find any documentation related to it.
The second time I try to run the tests, I get this error:
======================================================================
ERROR: setUpClass (django_behave.runner.DjangoBehaveTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File ".../lib/python2.7/site-packages/django/test/testcases.py", line 1138, in setUpClass
raise cls.server_thread.error
WSGIServerException: [Errno 48] Address already in use
----------------------------------------------------------------------
The live server does not quit when my tests finish, until I manually close the Firefox instance that gets opened automatically whenever I run tests. Is there a way to avoid this? I'm not using Selenium (yet) in my tests.
I put my model files into a 'models' directory, rather than a 'models.py' file.
This breaks the test discovery.
I couldn't find a way to run all behave tests from all apps with a single command. Is there any command option by which I can run all tests without specifying app names?
EDIT: I saw that the latest code has this feature, but when I install django-behave (via pip), it does not install the latest code. What can I do to correct this (other than using git+https://github.com/django-behave/django-behave)?
I'm new to Django and behave. Is there an option to load fixtures like with Django's testing suite? e.g.
fixtures = ['contrib.json']
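Behave's hooks have no built-in fixtures attribute as far as I know, but Django's real loaddata management command can be called from them. A sketch, reusing the 'contrib.json' name from the question:

```python
# environment.py -- load Django fixtures before each scenario, mimicking
# the `fixtures` attribute of Django's TestCase.
FIXTURES = ["contrib.json"]

def load_fixtures():
    # Deferred import so this module loads without Django configured.
    from django.core.management import call_command
    for fixture in FIXTURES:
        call_command("loaddata", fixture, verbosity=0)

def before_scenario(context, scenario):
    load_fixtures()
```

Moving the call to before_feature or before_all trades isolation for speed, just as with Django's own fixture loading.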