testfm's Introduction

TestFM

Build status: https://api.travis-ci.org/SatelliteQE/testfm.svg?branch=master
Requirements status: https://requires.io/github/SatelliteQE/testfm/requirements.svg?branch=master

TestFM is a test suite, based on pytest-ansible, that exercises the Foreman maintenance tool (foreman-maintain).

Quickstart

The following is only a brief setup guide for TestFM. The section on Running the Tests provides a more comprehensive guide to using TestFM.

TestFM requires SSH access to the server system under test, and this SSH access is implemented by pytest-ansible.

Get the source code and install dependencies:

git clone https://github.com/SatelliteQE/testfm.git
pip3 install -r requirements.txt

Before running any tests, you must create a configuration file:

cp testfm.sample.yaml testfm.local.yaml
OR
cp testfm.sample.yaml testfm.yaml

There are a few other things you need to do before continuing:

  • Make sure your SSH key is copied to the test system.
  • Make sure foreman-maintain (satellite-maintain) is installed on the Foreman/Satellite server.

Running the Tests

Before running any tests, you must add the Foreman or Satellite hostname to the testfm/inventory file (first copy it from testfm/inventory.sample).
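For illustration, a minimal inventory entry; this assumes the sample follows the standard Ansible INI inventory format and uses the server group name implied by --ansible-host-pattern server below (the hostname is a placeholder):

[server]
satellite.example.com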

That done, you can run the tests using pytest:

pytest -v --ansible-host-pattern server --ansible-user=root --ansible-inventory testfm/inventory tests/

It is possible to run a specific subset of tests:

pytest -v --ansible-host-pattern server --ansible-user=root --ansible-inventory testfm/inventory tests/test_case.py

pytest -v --ansible-host-pattern server --ansible-user=root --ansible-inventory testfm/inventory tests/test_case.py::test_case_name

Want to contribute?

Thank you for considering contributing to TestFM! If you have any questions or concerns, feel free to reach out to the team.

Recommended

  • Import modules in alphabetical order.
  • Every method and function should have a properly formatted docstring.

In order to ensure you are able to pass the Travis CI build, it is recommended that you run the following commands in the base of your testfm directory:

pre-commit autoupdate && pre-commit run -a

Pre-commit will ensure that your changes do not violate PEP 8 standards.

If you have something great, please submit a pull request anyway! The full documentation is available on ReadTheDocs.

Licensing

TestFM is licensed under GNU General Public License v3.0.


testfm's Issues

Restructure Testfm to use proper setup/teardown.

Use the following code as a template:

import pytest

@pytest.fixture(scope='function')
def setup_for_testcase(request):
    # Per-test setup goes here.
    print('\nsync_plan_setup()')

    def teardown_for_testcase():
        # Teardown registered as a finalizer runs even if the test fails.
        print('\nsync_plan_teardown()')

    request.addfinalizer(teardown_for_testcase)

def test_testcasename(setup_for_testcase):
    # Requesting the fixture triggers setup before and teardown after the test.
    print("testing")

Add test for foreman-maintain PR "Check server response via non-auth call"

Foreman-maintain PR: theforeman/foreman_maintain#261
description:

This patch reimplements hammer ping to use simple net::http
call instead of hammer. The main advantage is we can easily
skip the hammer auth setup which is not required for the call.
Another advantage is we can extend ping to pure foreman
and proxy.

Internally the ping implementation moved to the instance feature,
related CLI option was kept for backward compatibility. The ping?
now returns true/false and the additional details such as error
message and failing services can be checked using additional
methods. Restrictions to :katello feature were removed as
the ping can query all types of instances now.

Also check backup contents in tests

It would be nice to check that the created backups contain the expected files and directories; this would allow us to find regressions where the backup succeeds but does not create all the necessary files.
This needs some research on the differences between online/offline backup files and also between Satellite and Capsule. As a first step, we need a way to extract the backup dir name from the ansible output.
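For reference, a rough sketch of the direction, assuming the pytest-ansible pattern used elsewhere in the suite; ONLINE_BACKUP_FILES and CONTENT_FILES are the existing expected-file lists (their import path is assumed), and the stdout parsing is only a guess at how the backup dir name could be extracted:

from testfm.constants import CONTENT_FILES, ONLINE_BACKUP_FILES  # assumed import path

def test_positive_online_backup_content(ansible_module):
    contacted = ansible_module.command(
        'foreman-maintain backup online -y /tmp/backup')
    backup_dir = None
    for result in contacted.values():
        assert result['rc'] == 0
        # Guess: the timestamped backup directory appears somewhere in stdout.
        for line in result['stdout_lines']:
            if '/tmp/backup/' in line:
                backup_dir = line.split()[-1]
    # List the backup directory and compare against the expected file sets.
    contacted = ansible_module.command('ls -a {0}'.format(backup_dir))
    for result in contacted.values():
        files_list = result['stdout_lines']
        assert set(files_list).issuperset(ONLINE_BACKUP_FILES + CONTENT_FILES)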

Add checks for exit codes where missing

Backup tests (but also the majority of other tests) rely solely on

assert "FAIL" not in result['stdout']

which can miss problems: for example, if a required command option changes, ansible_module.command produces no stdout, so the assert above passes even though the command actually failed. I suggest also adding assert result['rc'] == 0 wherever possible, as sketched below.
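A minimal sketch of the suggestion, following the result-dict pattern already used in the tests:

contacted = ansible_module.command('foreman-maintain backup online -y /tmp/backup')
for result in contacted.values():
    # Keep the existing content check, but also fail on a non-zero exit code,
    # which catches the empty-stdout case described above.
    assert 'FAIL' not in result['stdout']
    assert result['rc'] == 0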

Test restore fails with "ERROR: too many arguments"

It seems test_positive_restore_online_backup never succeeded, probably because the construct_command method slips an unwanted None into the foreman-maintain command. The output looks as follows:

2018-08-24 02:55:43,677 - testfm.log - INFO - {'_ansible_parsed': True, 'stderr_lines': [u'ERROR: too many arguments', u'', u"See: 'foreman-maintain restore --help'"], u'cmd': [u'foreman-maintain', u'restore', u'None', u'-y', u'/tmp/online_backup_restore/'], u'end': u'2018-08-24 02:55:43.642736', '_ansible_no_log': False, u'stdout': u'', u'changed': True, u'rc': 1, u'start': u'2018-08-24 02:55:42.550026', u'stderr': u"ERROR: too many arguments\n\nSee: 'foreman-maintain restore --help'", u'delta': u'0:00:01.092710', u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': False, u'_raw_params': u'foreman-maintain restore None -y /tmp/online_backup_restore/', u'removes': None, u'creates': None, u'chdir': None}}, 'stdout_lines': [], 'failed': True}

The scary thing is that this did not prevent automation from passing; see the test report for test_restore/test_positive_restore_online_backup in build no. 37, but that was before the exit code check was added in #5.

Will investigate further; an additional method on the base, similar to run_online_backup, would probably be a good solution.
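A hypothetical sketch of such a helper (name, placement, and signature are assumptions); the idea is to build the restore command explicitly so no stray None ends up on the command line:

def run_restore(ansible_module, backup_dir, assumeyes=True):
    """Run foreman-maintain restore on the given backup directory."""
    cmd = ['foreman-maintain', 'restore']
    if assumeyes:
        cmd.append('-y')
    cmd.append(backup_dir)
    return ansible_module.command(' '.join(cmd))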

Automate scenario

theforeman/foreman_maintain#210 (comment)

# ./bin/foreman-maintain service stop --only foreman-tasks
# ./bin/foreman-maintain backup online /tmp
Starting backup: 2018-09-14 05:48:24 -0400
Running preparation steps required to run the next scenarios
================================================================================
Make sure Foreman DB is up: 
/ Checking connection to the Foreman DB                               [OK]      
--------------------------------------------------------------------------------
Make sure Mongo DB is up: 
| Checking connection to the Mongo DB                                 [FAIL]    
undefined method `keys' for [SystemService(mongod [5])]:Array
--------------------------------------------------------------------------------
undefined method `keys' for [SystemService(mongod [5])]:Array
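One possible automation sketch, assuming the pytest-ansible pattern used elsewhere in the suite; it replays the sequence above (stop foreman-tasks, then take an online backup) and asserts the backup no longer hits the Mongo DB failure:

def test_positive_online_backup_with_stopped_service(ansible_module):
    # Stop only the foreman-tasks service, as in the report above.
    contacted = ansible_module.command(
        'foreman-maintain service stop --only foreman-tasks')
    for result in contacted.values():
        assert result['rc'] == 0
    # The online backup should still succeed.
    contacted = ansible_module.command('foreman-maintain backup online -y /tmp')
    for result in contacted.values():
        assert 'FAIL' not in result['stdout']
        assert result['rc'] == 0
    # Bring the service back up afterwards (start --only assumed to mirror stop --only).
    ansible_module.command('foreman-maintain service start --only foreman-tasks')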

Python version should be noted

Requirements installation fails for me in a Python 3 virtual env:

Collecting pytest-ansible==1.3.1 (from -r requirements.txt (line 3))
  Downloading https://files.pythonhosted.org/packages/28/8a/1bc0e1473fb67a5854e5d3e4c625cda221c86805793faefe02d0b3944524/pytest-ansible-1.3.1.tar.gz
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-install-ms6oheq7/pytest-ansible/setup.py", line 67
        print "Removing '%s'" % rm

Python 2 works OK, but I think we should either note in the README that TestFM is not Python 3 compatible, or update the pytest-ansible version if that doesn't break anything else.

Add scenarios to test the version-locking feature of foreman-maintain

upstream PR: theforeman/foreman_maintain#249
upstream issue: https://projects.theforeman.org/issues/26216

Scenarios to test in TestFM (a rough sketch of the first branch follows the list):

  • satellite-installer --lock-package-versions

    • Check whether packages are locked, using:
      • foreman-maintain packages is-locked
      • foreman-maintain packages status
      • Check using the yum command
      • Check that locked packages are listed in the lock file
    • Run satellite-installer --no-lock-package-versions
      • Check whether packages are unlocked, using:
        • foreman-maintain packages is-locked
        • foreman-maintain packages status
        • Check using the yum command
        • Check that packages are no longer listed in the lock file
  • foreman-maintain packages lock

    • Check whether packages are locked, using:
      • foreman-maintain packages is-locked
      • foreman-maintain packages status
      • Check using the yum command
      • Check that locked packages are listed in the lock file
    • Run foreman-maintain packages unlock
      • Check whether packages are unlocked, using:
        • foreman-maintain packages is-locked
        • foreman-maintain packages status
        • Check using the yum command
        • Check that packages are no longer listed in the lock file
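A rough sketch of the first (installer-driven) branch of the matrix above, assuming the pytest-ansible pattern used elsewhere in the suite; the exact wording matched in stdout is an assumption to verify against real output:

def test_positive_lock_package_versions(ansible_module):
    # Lock the packages through the installer.
    contacted = ansible_module.command('satellite-installer --lock-package-versions')
    for result in contacted.values():
        assert result['rc'] == 0
    # foreman-maintain should now report the packages as locked.
    contacted = ansible_module.command('foreman-maintain packages is-locked')
    for result in contacted.values():
        assert result['rc'] == 0
        assert 'Packages are locked' in result['stdout']  # assumed wording
    # Unlock again and re-check.
    contacted = ansible_module.command('satellite-installer --no-lock-package-versions')
    for result in contacted.values():
        assert result['rc'] == 0
    contacted = ansible_module.command('foreman-maintain packages is-locked')
    for result in contacted.values():
        assert 'Packages are not locked' in result['stdout']  # assumed wording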

Remove stubbed decorator from tests in test_health.py

Remove the stubbed decorator from the following tests, because the checks they use are now available in foreman-maintain v0.4.3:

  • test_positive_check_upstream_repository
  • test_positive_puppet_check_no_empty_cert_requests
  • test_positive_puppet_check_empty_cert_requests

Also fix test_positive_foreman_maintain_health_check, which fails because the puppet-check-no-empty-cert-requests check requires user confirmation. Fix the test by using the --assumeyes option, for example:
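A minimal sketch of the fixed health check test, using the pytest-ansible pattern from the existing tests:

def test_positive_foreman_maintain_health_check(ansible_module):
    contacted = ansible_module.command('foreman-maintain health check --assumeyes')
    for result in contacted.values():
        assert 'FAIL' not in result['stdout']
        assert result['rc'] == 0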

Verifying db migrations in testfm?

From the user's perspective, foreman_docker removal is done largely by foreman-maintain: the relevant DB migrations are run as part of the upgrade procedure. I briefly checked the code here, but it seems that TestFM doesn't have any support for DB migration testing yet.

Is my assertion correct?

If it is, what would need to be done to enable DB migration testing here? Is this a relatively small task, or a rather large one?

Automate Check to make sure root(/) partition has enough space

# foreman-maintain health check --label available-space
Running ForemanMaintain::Scenario::FilteredScenario
================================================================================
Check to make sure root(/) partition has enough space:                [OK]
--------------------------------------------------------------------------------
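A minimal sketch of the automated check, assuming the pytest-ansible pattern used elsewhere in the suite; the label comes straight from the command above:

def test_positive_available_space_check(ansible_module):
    contacted = ansible_module.command(
        'foreman-maintain health check --label available-space')
    for result in contacted.values():
        assert 'FAIL' not in result['stdout']
        assert result['rc'] == 0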

Capsule backup has different files

OFFLINE_BACKUP_FILES = [

Backup tests failed on Capsule:

>       assert set(files_list).issuperset(
                   ONLINE_BACKUP_FILES + CONTENT_FILES), assert_msg
E       AssertionError: All required backup files not found
E       assert False
E        +  where False = <built-in method issuperset of set object at 0x7f0705091588>((['candlepin.dump', 'config_files.tar.gz', '.config.snar', 'foreman.dump', 'metadata.yml', 'mongo_dump', ...] + ['pulp_data.tar', '.pulp.snar']))
E        +    where <built-in method issuperset of set object at 0x7f0705091588> = {'.', '..', '.config.snar', '.pulp.snar', 'config_files.tar.gz', 'metadata.yml', ...}.issuperset
E        +      where {'.', '..', '.config.snar', '.pulp.snar', 'config_files.tar.gz', 'metadata.yml', ...} = set(['.', '..', 'config_files.tar.gz', '.config.snar', 'metadata.yml', 'mongo_dump', ...])

Foreman-maintain testing with upstream nightly

Goal: To perform foreman-maintain testing with upstream nightly.

Description: Currently our automation in Jenkins gets triggered automatically according to Satellite builds. Extend the support to upstream nightly as well so that we can catch issues at an early phase.
This will include setting up an upstream nightly instance and running the TestFM tests on top of it.

Limitations:

Until we have upgrade support in foreman-maintain, we can run the tests related to backup, restore, and services functionality on upstream.

Action Item:
See how we can utilize the existing upstream nightly provisioning instance and trigger TestFM automation on it.

Some Travis setup

It would be nice to have at least a flake8 run. I have never tried to set it up before, but will investigate.

Incremental backup: previous backup dir too general

Incremental backup should use the backup dir from the previous backup; the current tests use the parent dir instead, so the backup is essentially full, not incremental. It is strange that foreman-maintain does not complain about it; I will file an issue there as well.
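A hedged sketch of the intended flow; it assumes the previous backup directory is passed to foreman-maintain backup via --incremental (verify against foreman-maintain backup online --help) and that the first run's timestamped directory can be extracted as discussed in the backup-contents issue above:

def run_incremental_online_backup(ansible_module, previous_backup_dir):
    # previous_backup_dir should be the timestamped directory created by the
    # first backup run, not its parent (which is the whole point of this issue).
    contacted = ansible_module.command(
        'foreman-maintain backup online -y --incremental {0} /tmp/backup'
        .format(previous_backup_dir))
    for result in contacted.values():
        assert 'FAIL' not in result['stdout']
        assert result['rc'] == 0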

Automate upstream repositories check

foreman-maintain health check --label check-upstream-repository

# ./bin/foreman-maintain health check --label check-upstream-repository
Running ForemanMaintain::Scenario::FilteredScenario
================================================================================
Check if any upstream repositories are enabled on system: 
- Checking for presence of upstream repositories                      [OK]      
--------------------------------------------------------------------------------
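Same shape as the available-space sketch above, reusing the label shown in the output; whether upstream repos should first be enabled or disabled as a setup step is left open:

def test_positive_check_upstream_repository(ansible_module):
    contacted = ansible_module.command(
        'foreman-maintain health check --label check-upstream-repository')
    for result in contacted.values():
        assert 'FAIL' not in result['stdout']
        assert result['rc'] == 0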
