teemtee / tmt
Test Management Tool
License: MIT License
The tmt tool now supports command line options for setting the test repository. Without this, the test repository has to be set manually in the ci.fmf plan, or an additional workaround script is required. So
tmt plans create ci2 --template full --repository https://github.com/psklenar/tmt
will create:
summary:
    Essential command line features
discover:
    how: fmf
    repository: https://github.com/psklenar/tmt
prepare:
    how: ansible
    playbooks: plans/packages.yml
execute:
    how: beakerlib
According to the specification [1] the tier attribute is a string with a single value. It should be exported as a tag to nitrate (i.e. tier: 1 should become a Tier1 tag in nitrate test case tags) [2]. This implies that the other direction (converting tests from nitrate) should create a tier: X attribute in the fmf file.
The issue is that a nitrate test case can have multiple TierX tags (a common scenario) - how should this scenario be mapped to the tier attribute?
[1] https://tmt.readthedocs.io/en/latest/spec/tests.html#tier
[2] https://tmt.readthedocs.io/en/latest/stories/cli.html#convert
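As a starting point for the discussion, one possible policy (purely an assumption, not part of the specification) is to pick the lowest tier when multiple TierX tags are present:

```python
import re

def tier_from_tags(tags):
    """Map nitrate TierX tags to a single tier value.

    Illustrative sketch only: when multiple TierX tags are present
    this picks the lowest one, which is just one possible policy.
    """
    tiers = sorted(int(m.group(1)) for tag in tags
                   if (m := re.fullmatch(r'Tier(\d+)', tag)))
    return str(tiers[0]) if tiers else None

print(tier_from_tags(['Tier1', 'Tier2', 'Sanity']))  # -> '1'
```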
For some components not all artefacts are really needed; they could break test environment preparation or needlessly prolong it, for example graphical packages for daemons like firewalld.
In (probably) the prepare phase I'd like to set a list of whitelist/blacklist wildcards for artefact names that should or should not be installed on the SUT.
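A minimal sketch of such filtering, assuming plain shell-style wildcards (the function and parameter names are illustrative, not actual tmt code):

```python
from fnmatch import fnmatch

def filter_artifacts(names, include=(), exclude=()):
    """Keep artifacts matching an include wildcard (when any are given)
    and drop those matching any exclude wildcard."""
    kept = []
    for name in names:
        if include and not any(fnmatch(name, pat) for pat in include):
            continue
        if any(fnmatch(name, pat) for pat in exclude):
            continue
        kept.append(name)
    return kept

print(filter_artifacts(
    ['firewalld', 'firewall-applet', 'firewall-config'],
    exclude=['firewall-applet', 'firewall-config']))  # -> ['firewalld']
```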
tmt run -d --all provision --how vagrant
provision:
    how: container
    image: fedora:latest
prepare:
    how: shell
    script:
        - echo test > /test
tries to write to the host filesystem, whereas
    - sh -c "echo test > /test"
works as I expected.
Running tests, for me, results in:
$ make test
python3 -m pytest tests
===================================== test session starts =====================================
platform linux -- Python 3.8.0, pytest-5.2.1, py-1.8.0, pluggy-0.13.0
rootdir: /home/lpcs/lpcsf-new/test/tmt
collected 38 items / 1 errors / 37 selected
=========================================== ERRORS ============================================
_____________________ ERROR collecting tests/unit/steps/test_provision.py _____________________
ImportError while importing test module '/home/lpcs/lpcsf-new/test/tmt/tests/unit/steps/test_provision.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
tests/unit/steps/test_provision.py:6: in <module>
from mock import MagicMock, patch
E ModuleNotFoundError: No module named 'mock'
!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!
====================================== 1 error in 0.10s =======================================
make: *** [Makefile:20: test] Error 2
I've tried pip install '.[tests]' (which needs the quotes, otherwise zsh complains) and also pip install -e ., but I'm still getting the error. Have I missed anything from the readme?
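For the record, since Python 3.3 the mock module has been available in the standard library, so a test module could fall back to it (a possible fix, not necessarily the one the project chose):

```python
# Prefer the stdlib location (Python 3.3+); fall back to the third-party
# 'mock' backport only when it is installed.
try:
    from unittest.mock import MagicMock, patch
except ImportError:
    from mock import MagicMock, patch

print(callable(MagicMock))  # -> True
```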
This can be done nicely using:
https://github.com/packit-service/ogr
During a recent discussion it was raised that the tool should support an easy way to explore L1 and L2 metadata. For example, it should hide unnecessary implementation details, such as the fact that testsets are identified by the execute keyword. Instead of the user learning this and doing:
fmf --key execute
there should be a more straightforward way to do this directly using tmt, for example:
tmt testset --list
Similarly there should be a test subcommand supporting L1 metadata investigation:
tmt test --list
Most of the time vagrant was autodetecting stuff:
[mvadkert@freedom /var/tmp/tmt/run-051/plans/basic/provision/gchBHnRSKzzDtZxm]$ time vagrant --debug provision --provision-with prepare
real 1m47.346s
It would be awesome if we could tell vagrant that this is a RHEL or Fedora machine, so it does not need to do the autodetection by itself.
While trying to fix #26 I found out that I am out of luck. The fix is to include epel-release before installing the artifact, but we have no way to do that now. It is questionable how to do it: we can mark parts of prepare to run before artifact installation, have a separate section, or maybe even put it in the already planned artifact step.
Currently tmt test convert stores yaml keys in random order. It would be nice to store the information in a way which is pleasant for the user to read. For example summary, if present, should come first. Implementation should be relatively easy using an OrderedDict.
The proposed order of the L1 Metadata keys:
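The concrete list is not shown here, but a minimal sketch of the reordering could look as follows, with a purely hypothetical key order (the real preferred order would come from the L1 specification):

```python
from collections import OrderedDict

# Hypothetical preferred order - illustration only.
KEY_ORDER = ['summary', 'description', 'contact', 'component',
             'test', 'path', 'duration', 'environment', 'tag', 'tier']

def sort_keys(data):
    """Return an OrderedDict with known keys first (in KEY_ORDER),
    followed by any remaining keys in their original order."""
    known = [(key, data[key]) for key in KEY_ORDER if key in data]
    unknown = [(key, value) for key, value in data.items()
               if key not in KEY_ORDER]
    return OrderedDict(known + unknown)

print(list(sort_keys({'test': './runtest.sh', 'summary': 'Smoke'})))
# -> ['summary', 'test']
```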
It would be nice to have Test Case Relevancy supported and naturally integrated into the tmt run command so that irrelevant tests can be easily filtered out before the execution. Possibly the tmt test command could support filtering based on relevancy as well.
The question is how the environment dimensions should be provided/detected. There could be an option which would allow the user to manually define one or more dimensions of the environment (similarly to what workflow-tomorrow does); some of the dimensions could/should be auto-detected, for example distro from the provision step.
Let's discuss the implementation details here. This is blocking test open sourcing so we should look into this as soon as possible.
The Requires line in the Makefile contains several packages:
@echo "Requires: a b c d" >> $(METADATA)
tmt tests convert produces main.fmf with
require:
- a b c d
and thus it is loaded as
{'require': ['a b c d']}
instead of
{'require': ['a', 'b', 'c', 'd']}
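A one-line fix sketch: split the Makefile value on whitespace before storing it (assuming package requirements never contain spaces themselves):

```python
def parse_requires(value):
    """Split a Makefile-style 'Requires' value into individual packages."""
    return value.split()

print(parse_requires('a b c d'))  # -> ['a', 'b', 'c', 'd']
```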
In the documentation, there are several steps described. I'll use this issue to describe the confusion I have:
discover: gather info about test cases to be run
I assume this step will inform the user about the tests that can be run, correct? If so, maybe describe it as: lists info about test cases to be run.
provision: what environment is needed for testing, how it should be provisioned
What does this do? Does it provision the environment, or does it tell me what environment I need to prepare myself? Will it start potentially dangerous actions, like installing software to my workstation or talking to some service?
prepare: additional configuration needed for testing (e.g. ansible playbook)
What does this do? Does it run ansible in the previously provisioned thing? Or does it run ansible on my workstation?
execute: test execution itself (e.g. framework and its settings)
Does this execute the tests? Where? How?
report: adjusting notifications about the test progress and results
Adjusting? How?
finish: actions to be performed after the test execution has been completed
Huh? Are the actions performed when I run this command, or?
My goal is to have how: fmf in the parent and just modify the filter for each child.
I have the following plan:
summary: Run test with environment
discover:
    how: fmf
execute:
    how: beakerlib
/first:
    discover:
        filter: "component:python-requests"
/second:
When I run tmt discover I end up with
/var/tmp/tmt/run-071
/r/first
    discover
        how: shell
        tests: 0 tests selected
/r/second
    discover
        how: fmf
        directory: /home/lzachar/_important_/Tests/python-requests
        tests: 12 tests selected
Note that /r/first has how: shell.
One can work around this by printing the environment in each test, but I'd prefer to have the content of environment variables logged by the test executor itself.
Expected usage: review that the case was run under the correct environment variables.
Install the curl hard dependency specified in L1 metadata only when needed. E.g. if the user filters out some tests, they don't need the dependency.
[python-virtualenv (py34-patch %)]$ git remote -v
churchyard ssh://pkgs.fedoraproject.org/forks/churchyard/rpms/python-virtualenv.git (fetch)
churchyard ssh://pkgs.fedoraproject.org/forks/churchyard/rpms/python-virtualenv.git (push)
origin ssh://[email protected]/rpms/python-virtualenv (fetch)
origin ssh://[email protected]/rpms/python-virtualenv (push)
[python-virtualenv (py34-patch %)]$ git branch
16.6.1
master
nopy2
* py34-patch
test38
you_shall_not_pass
[python-virtualenv (py34-patch %)]$ tmt discover
ERROR Unable to find tree root for '/home/churchyard/rpmbuild/fedora-scm/python-virtualenv'.
It seems we'll soon need a way to store user configuration data. For example tmt test export --nitrate might need a url to the test case management system.
Brainstorming first ideas here: what about storing data under $HOME/.config/tmt in the form of an fmf tree? For the above-mentioned example we could have something like this:
/test/convert/nitrate:
    url: https://nitrate.example.com/
This would work naturally / consistently with how the rest of the data is handled in tmt.
If there is an error encountered during execution the last 30 lines (or so) of stderr/stdout should be displayed on the output to make it easier to debug the problem. This should be enabled by default for the debug level. In verbose and info level it would be nice to always give a reasonable message about the failed command. Or, perhaps, the tail output should be enabled always? @thrix, @pvalena, what do you think?
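Showing the tail could be as simple as this sketch (the helper name is illustrative):

```python
def tail(path, lines=30):
    """Return the last few lines of a log file to show when a command fails."""
    with open(path) as log:
        return ''.join(log.readlines()[-lines:])
```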
If I run:
$ tmt --root examples/mini run --debug --all provision --how=local provision
the provision is actually run with the default how, and --how=local is discarded.
I think all steps behave this way, and I find it quite counter-intuitive.
Maybe at least a warning could be shown?
Note: if I run
tmt --root examples/mini run --all provision --how=local execute discover finish
it behaves as expected. Adding provision at the end results in the unexpected behaviour.
I noticed the documentation https://tmt.readthedocs.io/en/latest/spec/tests.html says the enabled attribute can be either "yes" or "no", but tmt test convert actually creates "enabled: true" and the console output says "enabled: True" with a capital "T". It should probably all be the same to avoid confusion.
Hello.
An ordinary use of tmt test convert leads to a very unpleasant traceback when the kerberos ticket expires.
How to reproduce:
$ cd tuned/Regression/Verification-of-sysctll-is-broken-for-tabs
$ tmt test convert
Result:
Checking the '/home/rhack/git/tests/tuned/Regression/Verification-of-sysctll-is-broken-for-tabs' directory.
Makefile found in '/home/rhack/git/tests/tuned/Regression/Verification-of-sysctll-is-broken-for-tabs/Makefile'.
test: /CoreOS/tuned/Regression/Verification-of-sysctll-is-broken-for-tabs
description: Test for BZ#1711230 (Verification of sysctll is broken for tabs)
component: tuned
duration: 20m
Purpose found in '/home/rhack/git/tests/tuned/Regression/Verification-of-sysctll-is-broken-for-tabs/PURPOSE'.
description:
Bug summary: Verification of sysctll is broken for tabs
Bugzilla link: https://bugzilla.redhat.com/show_bug.cgi?id=1711230
Nitrate Traceback (most recent call last):
File "/home/rhack/.virtualenvs/tmt/lib/python3.7/site-packages/nitrate/base.py", line 200, in _server
Config().nitrate.username,
AttributeError: 'Section' object has no attribute 'username'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/rhack/.virtualenvs/tmt/lib/python3.7/site-packages/nitrate/xmlrpc_driver.py", line 105, in single_request_with_cookies
h = self.send_request(host, handler, request_body, verbose)
File "/usr/lib64/python3.7/xmlrpc/client.py", line 1267, in send_request
connection = self.make_connection(host)
File "/home/rhack/.virtualenvs/tmt/lib/python3.7/site-packages/nitrate/xmlrpc_driver.py", line 185, in make_connection
chost, self._extra_headers, x509 = self.get_host_info(host)
File "/home/rhack/.virtualenvs/tmt/lib/python3.7/site-packages/nitrate/xmlrpc_driver.py", line 163, in get_host_info
response = vc.step()
File "</home/rhack/.virtualenvs/tmt/lib/python3.7/site-packages/decorator.py:decorator-gen-15>", line 2, in step
File "/home/rhack/.virtualenvs/tmt/lib/python3.7/site-packages/gssapi/_utils.py", line 167, in check_last_err
return func(self, *args, **kwargs)
File "/home/rhack/.virtualenvs/tmt/lib/python3.7/site-packages/gssapi/sec_contexts.py", line 521, in step
return self._initiator_step(token=token)
File "/home/rhack/.virtualenvs/tmt/lib/python3.7/site-packages/gssapi/sec_contexts.py", line 542, in _initiator_step
token)
File "gssapi/raw/sec_contexts.pyx", line 245, in gssapi.raw.sec_contexts.init_sec_context
gssapi.raw.misc.GSSError: Major (851968): Unspecified GSS failure. Minor code may provide more information, Minor (2529639053): No Kerberos credentials available (default cache: KEYRING:persistent:20787)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/rhack/.virtualenvs/tmt/bin/tmt", line 7, in <module>
exec(compile(f.read(), __file__, 'exec'))
File "/home/rhack/tmt/bin/tmt", line 11, in <module>
tmt.cli.main()
File "/home/rhack/.virtualenvs/tmt/lib/python3.7/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/home/rhack/.virtualenvs/tmt/lib/python3.7/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/home/rhack/.virtualenvs/tmt/lib/python3.7/site-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/rhack/.virtualenvs/tmt/lib/python3.7/site-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/rhack/.virtualenvs/tmt/lib/python3.7/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/rhack/.virtualenvs/tmt/lib/python3.7/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/home/rhack/.virtualenvs/tmt/lib/python3.7/site-packages/click/decorators.py", line 17, in new_func
return f(get_current_context(), *args, **kwargs)
File "/home/rhack/tmt/tmt/cli.py", line 418, in convert
data = tmt.convert.read(path, makefile, nitrate, purpose)
File "/home/rhack/tmt/tmt/convert.py", line 108, in read
testcases = list(TestCase.search(script=test))
File "/home/rhack/.virtualenvs/tmt/lib/python3.7/site-packages/nitrate/mutable.py", line 912, in search
for inject in Nitrate()._server.TestCase.filter(dict(query))]
File "/home/rhack/.virtualenvs/tmt/lib/python3.7/site-packages/nitrate/base.py", line 206, in _server
Config().nitrate.url).server
File "/home/rhack/.virtualenvs/tmt/lib/python3.7/site-packages/nitrate/xmlrpc_driver.py", line 443, in __init__
login_dict = self.do_command("Auth.login_krbv", [])
File "/home/rhack/.virtualenvs/tmt/lib/python3.7/site-packages/nitrate/xmlrpc_driver.py", line 393, in do_command
return getattr(self.server, verb)(*params)
File "/usr/lib64/python3.7/xmlrpc/client.py", line 1112, in __call__
return self.__send(self.__name, args)
File "/usr/lib64/python3.7/xmlrpc/client.py", line 1452, in __request
verbose=self.__verbose
File "/usr/lib64/python3.7/xmlrpc/client.py", line 1154, in request
return self.single_request(host, handler, request_body, verbose)
File "/home/rhack/.virtualenvs/tmt/lib/python3.7/site-packages/nitrate/xmlrpc_driver.py", line 137, in single_request_with_cookies
h.close()
UnboundLocalError: local variable 'h' referenced before assignment
Which basically says: you don't have a kerberos ticket! :)
It should respond with a much shorter message :).
Thank you for your time.
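A sketch of the friendlier behaviour: catch the low-level failure and translate it into a short hint (function name and message are illustrative, not actual tmt code):

```python
def friendly_auth_error(exc):
    """Turn a low-level Kerberos/GSSAPI failure into a short hint."""
    text = str(exc)
    if 'Kerberos' in text or 'credentials' in text:
        return "No Kerberos ticket available. Run 'kinit' and try again."
    return f"Nitrate connection failed: {text}"

print(friendly_auth_error(Exception('No Kerberos credentials available')))
```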
Instead of reading the Makefile directly we could use the testinfo.desc file. It would require an extra system call when converting, which might take more time; however, it would solve the issue with Makefiles that use variables to compose the values we read.
@lukaszachy, you mentioned reading testinfo.desc makes more sense to you; do you have another use case than the one I mentioned?
Requested in:
When I am writing a new test, I want to be able to specify everything using tmt/fmf and let tmt create the test case on the nitrate server.
Otherwise for a new test I would need to manually (or with another tool) create some test case, and either manually create the tcms attribute in the fmf file, or let tmt tests convert import it.
When I tmt run --debug discover provision -h vagrant prepare execute report finish a test, it provisions a Vagrant VM and then executes nohup ... on the host itself and not in the VM.
Relevant log fragment:
execute
workdir: /var/tmp/tmt/run-076/plans/example/execute
how: beakerlib
Copy '/home/asosedki/code/tmt/tmt/steps/execute/run.sh' to '/var/tmp/tmt/run-076/plans/example/execute'.
Run command '('vagrant', 'rsync')'.
out: ==> default: Rsyncing folder: /var/tmp/tmt/run-076/plans/example/
=> /var/tmp/tmt/run-076/plans/example
Run command 'nohup /var/tmp/tmt/run-076/plans/example/execute/run.sh -v /var/tmp/tmt/run-076/plans/example beakerlib /var/tmp/tmt/run-076/plans/example/execute/stdout.log /var/tmp/tmt/run-076/plans/example/execute/stderr.log'.
err: nohup: ignoring input
Run command '('vagrant', 'plugin', 'install', 'vagrant-rsync-back')'.
It seems it would be nice if we first checked whether the metadata are correct before trying to run them, so we could nicely report the errors to the user. We can continue in case of warnings, but for errors we should not try to run.
For example, how to get all *.log and *.yml files from the directory the script was run in. It should not fail if some file doesn't exist.
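A minimal sketch of such collection, assuming shell-style wildcards (function and parameter names are illustrative):

```python
import os
import shutil
from glob import glob

def collect_artifacts(source, target, patterns=('*.log', '*.yml')):
    """Copy files matching the patterns from source to target.
    A pattern matching nothing is silently skipped instead of failing."""
    os.makedirs(target, exist_ok=True)
    for pattern in patterns:
        for path in glob(os.path.join(source, pattern)):
            shutil.copy(path, target)
```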
I am not able to execute a test if it sets the environment.
Steps to reproduce:
Plan in file repro.fmf with content
summary:
    Basic smoke test
discover:
    how: fmf
execute:
    how: beakerlib
Test in main.fmf with content
test: ./runtest.sh
environment: {'BUNDLED_INJECT': 'PYTHONPATH=/usr/lib/python3.6/site-packages/pip/_vendor/',
              'PACKAGES': 'python3-pip',
              'PYTHON': 'python3'}
Execution as tmt run -d ends up with
execute
workdir: /var/tmp/tmt/run-055/repro/execute
how: beakerlib
Copy '/usr/lib/python3.7/site-packages/tmt/steps/execute/run.sh' to '/var/tmp/tmt/run-055/repro/execute'.
Run command 'vagrant rsync'.
out: ==> default: Rsyncing folder: /var/tmp/tmt/run-055/repro/ => /var/tmp/tmt/run-055/repro
Run command 'vagrant ssh -c nohup /var/tmp/tmt/run-055/repro/execute/run.sh -v /var/tmp/tmt/run-055/repro beakerlib /var/tmp/tmt/run-055/repro/execute/stdout.log /var/tmp/tmt/run-055/repro/execute/stderr.log'.
out: nohup: ignoring input and appending output to 'nohup.out'
err: Connection to 192.168.122.226 closed.
Run command 'bash -c "vagrant plugin list | grep '^vagrant-rsync-back '"'.
out: vagrant-rsync-back (0.0.1, system)
Run command 'vagrant rsync-back'.
out: ==> default: Rsyncing folder: /var/tmp/tmt/run-055/repro/ => /var/tmp/tmt/run-055/repro
stdout.log: ED
stderr.log:
/var/tmp/tmt/run-055/repro $ main beakerlib < discover/tests.yaml
> /one/repro:
> environment: BUNDLED_INJECT=PYTHONPATH=/usr/lib/python3.6/site-packages/pip/_vendor/
> PACKAGES=python3-pip PYTHON=python3
Error: Unknown test variable: PACKAGES=python3-pip PYTHON=python3
> path: /one/tests/repro
> test: ./runtest.sh
Error: [/one/repro] Could not find test dir: '/var/tmp/tmt/run-055/repro/discover/one/tests/repro'
overview: E
result: 0 tests passed, 0 tests failed
ERROR 1 errors occured during tests.
tests.yml from the discover step has content
/one/repro:
    environment: BUNDLED_INJECT=PYTHONPATH=/usr/lib/python3.6/site-packages/pip/_vendor/
        PACKAGES=python3-pip PYTHON=python3
    path: /one/tests/repro
    test: ./runtest.sh
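The root cause seems to be that all variables end up on a single whitespace-separated line, so a value that itself contains '=' confuses the parser. One possible fix sketch is to emit one export statement per variable with proper quoting:

```python
import shlex

# Reproducer data from the report above.
environment = {
    'BUNDLED_INJECT': 'PYTHONPATH=/usr/lib/python3.6/site-packages/pip/_vendor/',
    'PACKAGES': 'python3-pip',
    'PYTHON': 'python3',
}

# One export per line keeps values containing '=' or spaces unambiguous.
exports = [f'export {key}={shlex.quote(value)}'
           for key, value in environment.items()]
print('\n'.join(exports))
```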
Now that we have the initial set of commands working, it would be good to have the most common use cases documented. Probably the best place is the commands section, which currently contains only the tmt test convert command:
I like what the tool currently implements. It is basically a listing of test metadata. It is very handy, as it shows the parsed state for all test sets. I think we should keep this functionality under a new subcommand though - ls.
This command would show the listing of the tests.
$ cd examples/systemd
$ tmt ls
Found 2 testsets.
Testset: /ci/pull-request/functional (Tier two functional tests)
Discover:
{'filter': 'tier: 2 & distros: rhel-8',
'how': 'fmf',
'repository': 'https://github.com/systemd-rhel/tests'}
Provision:
{'how': 'vm'}
Prepare:
{'how': 'ansible', 'playbooks': ['ci/rhel-8.yml']}
Execute:
{'how': 'beakerlib'}
Report:
None
Finish:
None
Testset: /ci/pull-request/smoke (Basic set of quick smoke tests for systemd)
Discover:
{'filter': 'tier: 1 & distros: rhel-8',
'how': 'fmf',
'repository': 'https://github.com/systemd-rhel/tests'}
Provision:
{'how': 'vm'}
Prepare:
{'how': 'ansible', 'playbooks': ['ci/rhel-8.yml']}
Execute:
{'how': 'beakerlib'}
Report:
None
Finish:
None
And display only one testset by specifying its name.
$ tmt ls /ci/pull-request/functional
Testset: /ci/pull-request/functional (Tier two functional tests)
Discover:
{'filter': 'tier: 2 & distros: rhel-8',
'how': 'fmf',
'repository': 'https://github.com/systemd-rhel/tests'}
Provision:
{'how': 'container'}
Prepare:
{'how': 'ansible', 'playbooks': ['ci/rhel-8.yml']}
Execute:
{'how': 'beakerlib'}
Report:
None
Finish:
None
The run command would execute the tests sequentially for now, as in our flock prototype. I would like, though, if the runner were container based from the start, so we would have an easier path forward to later change the implementation to something more robust and unified with the testing system itself. This would also keep tmt minimal; the only dependency we would need to add is podman.
$ cd examples/systemd
$ tmt run
Pulling container of worker quay.io/testing-farm/cruncher ...
Running test: /ci/pull-request/functional
[17:25:13] [+] [cruncher] Downloading image 'Fedora-Cloud-Base-30-20190906.0.x86_64.qcow2'
[17:26:13] [+] [cruncher] Booting VM with image 'Fedora-Cloud-Base-30-20190906.0.x86_64.qcow2'
[17:26:34] [+] [cruncher] [/ci/test/build/smoke] [execute] Running shell commands
# dnf -y install httpd curl
....
Also in the future we might want to add additional features, like reproducing a test that was run via Packit, Fedora CI or in RHEL. This will bend this tool to be more a command line tool of the test system itself. Not sure yet about this, but I think it would make sense to have just one tool for all these interactions.
Red is the colour of error. Please don't use it for success / information messages.
E.g. for tmt tests import the following is printed in red:
Checking the '< path >' directory.
Metadata successfully stored into '< path >'.
Neither of those is an error.
Currently pepa and hardware are imported with the extra prefix. Keys tcms and task are also not included in the L1 metadata specification. Should we use the prefix for them as well? @thrix, @lukaszachy, @hegerj, what do you think?
The initial proof of concept of tmt test convert does not check whether an attribute is already defined higher in the fmf tree hierarchy. Checking this would be good to prevent unnecessary data duplication.
main.fmf
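A sketch of the duplication check, comparing a node's data against the data it would inherit from its parent (names are illustrative, not actual tmt code):

```python
def prune_inherited(node_data, parent_data):
    """Drop keys whose value is already identical in the parent node,
    so a converted fmf file stores only what actually differs."""
    return {key: value for key, value in node_data.items()
            if parent_data.get(key) != value}

print(prune_inherited({'component': 'tuned', 'duration': '20m'},
                      {'component': 'tuned'}))  # -> {'duration': '20m'}
```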
Hi,
When you run tmt plan lint on a file with a description like this:
description:
    This special plan selects "tier: 1" tests to be run.
    Next line of description...
You will get a traceback. The problem is the tier: 1 part:
fmf.utils.FileError: Failed to parse 'plans/ci.fmf'
mapping values are not allowed here
Removing the ' : ' from the file fixes the problem.
I think the description part of the file should allow anything to be written there.
If I try to run discover again with a previously run id, it does not work. See below:
$ tmt run discover
/var/tmp/tmt/run-011
/ci/test
discover
how: fmf
repository: git://pkgs.devel.redhat.com/tests/systemd
filter: tags: PoC & distros: rhel-8
tests: 1 test selected
$ tmt run -i /var/tmp/tmt/run-011 discover
/var/tmp/tmt/run-011
/ci/test
10:59 $ cat ci.fmf
summary:
    Essential command line features
discover:
    - name: name_1_here
    how: fmf
    repository: "git://pkgs.devel.redhat.com/cgit/tests/bind"
execute:
    how: beakerlib
✔ ~/WTF/bind [rhel-8.2.0|…2]
10:59 $ tmt plan lint
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/fmf/base.py", line 316, in grow
data = yaml.load(datafile, Loader=YamlLoader)
File "/usr/lib64/python3.7/site-packages/yaml/init.py", line 114, in load
return loader.get_single_data()
File "/usr/lib64/python3.7/site-packages/yaml/constructor.py", line 41, in get_single_data
node = self.get_single_node()
File "/usr/lib64/python3.7/site-packages/yaml/composer.py", line 36, in get_single_node
document = self.compose_document()
File "/usr/lib64/python3.7/site-packages/yaml/composer.py", line 55, in compose_document
node = self.compose_node(None, None)
File "/usr/lib64/python3.7/site-packages/yaml/composer.py", line 84, in compose_node
node = self.compose_mapping_node(anchor)
File "/usr/lib64/python3.7/site-packages/yaml/composer.py", line 133, in compose_mapping_node
item_value = self.compose_node(node, item_key)
File "/usr/lib64/python3.7/site-packages/yaml/composer.py", line 82, in compose_node
node = self.compose_sequence_node(anchor)
File "/usr/lib64/python3.7/site-packages/yaml/composer.py", line 110, in compose_sequence_node
while not self.check_event(SequenceEndEvent):
File "/usr/lib64/python3.7/site-packages/yaml/parser.py", line 98, in check_event
self.current_event = self.state()
File "/usr/lib64/python3.7/site-packages/yaml/parser.py", line 393, in parse_block_sequence_entry
"expected <block end>, but found %r" % token.id, token.start_mark)
yaml.parser.ParserError: while parsing a block collection
in "/home/psklenar/WTF/bind/ci.fmf", line 4, column 5
expected <block end>, but found '?'
in "/home/psklenar/WTF/bind/ci.fmf", line 5, column 5
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/bin/tmt", line 11, in <module>
tmt.cli.main()
File "/usr/lib/python3.7/site-packages/click/core.py", line 763, in __call__
return self.main(*args, **kwargs)
File "/usr/lib/python3.7/site-packages/click/core.py", line 716, in main
rv = self.invoke(ctx)
File "/usr/lib/python3.7/site-packages/click/core.py", line 1136, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/lib/python3.7/site-packages/click/core.py", line 1136, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/lib/python3.7/site-packages/click/core.py", line 955, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/lib/python3.7/site-packages/click/core.py", line 554, in invoke
return callback(*args, **kwargs)
File "/usr/lib/python3.7/site-packages/click/decorators.py", line 17, in new_func
return f(get_current_context(), *args, **kwargs)
File "/usr/lib/python3.7/site-packages/tmt/cli.py", line 541, in lint
for plan in context.obj.tree.plans():
File "/usr/lib/python3.7/site-packages/tmt/base.py", line 498, in plans
return [Plan(plan, run=run) for plan in self.tree.prune(
File "/usr/lib/python3.7/site-packages/tmt/base.py", line 461, in tree
self._tree = fmf.Tree(self._path)
File "/usr/lib/python3.7/site-packages/fmf/base.py", line 85, in __init__
self.grow(data)
File "/usr/lib/python3.7/site-packages/fmf/base.py", line 319, in grow
fullpath, error)))
fmf.utils.FileError: Failed to parse '/home/psklenar/WTF/bind/ci.fmf'
while parsing a block collection
in "/home/psklenar/WTF/bind/ci.fmf", line 4, column 5
expected <block end>, but found '?'
in "/home/psklenar/WTF/bind/ci.fmf", line 5, column 5
We need to find the best way to support a different set of options for each step implementation. We could somehow hack option preparation before click takes care of processing options:
import click
import sys

@click.group()
def main():
    pass

@main.command()
@click.option('-h', '--how', help='Prepare method.')
def prepare(how, **kwargs):
    print(f'playbook: {kwargs.get("playbook")}')
    print(f'script: {kwargs.get("script")}')

if __name__ == '__main__':
    arguments = ' '.join(sys.argv)
    print(arguments)
    if 'prepare --how ansible' in arguments:
        prepare = click.option('-p', '--playbook', help='Playbook.')(prepare)
    if 'prepare --how shell' in arguments:
        prepare = click.option('-s', '--script', help='Shell script.')(prepare)
    main()
Except for the hack (and the need to somehow adjust unit tests to work with this) something like that seems doable. Another option could be to provide a separate subcommand for each implementation:
discover-fmf
discover-shell
provision-local
provision-virtual
provision-container
...
This would, however, result in many subcommands in the usage message and also on the command line you would always have to provide the full subcommand name:
tmt run prepare-ansible --playbook book.yaml
And for selecting individual steps you would probably have to use the full name as well:
tmt run discover-fmf
I don't like this much. Any other ideas how to solve this? Any click expert who could help with implementing this purely in it?
Note the obvious typo (tmtXXXX) in the prepare scriptlet: tmt run --all prepare --debug -s 'dnf install -y tmtXXXX'
The command failed as:
prepare
workdir: /var/tmp/tmt/run-019/ci/prepare
input: dnf install -y tmtXXXX
Looking for prepare script in: /home/lzachar/_important_/distgit_rpms/python-requests/dnf install -y tmtXXXX
Looking for prepare script: /var/tmp/tmt/run-019/ci/discover/one/tests/dnf install -y tmtXXXX
ERROR Failed to run command 'vagrant provision --provision-with prepare'.
Command 'vagrant provision --provision-with prepare' returned non-zero exit status 1.
Only after getting help did I find the relevant log part in /var/tmp/tmt/run-019/ci/provision/JyIgLxPCkbQGqzfm/log.txt.
Please make it more user friendly: e.g. a direct link to the log, failed output in a separate file, etc.
When you run tmt tests convert in a directory which has a Makefile that is not a Beakerlib one, you get a traceback. For example when run from the tmt directory:
$ tmt tests convert
Checking the '/home/lzachar/_important_/Gits/tmt' directory.
Makefile found in '/home/lzachar/_important_/Gits/tmt/Makefile'.
Traceback (most recent call last):
File "/home/lzachar/.virtualenvs/tmt/bin/tmt", line 7, in <module>
exec(compile(f.read(), __file__, 'exec'))
File "/home/lzachar/_important_/Gits/tmt/bin/tmt", line 11, in <module>
tmt.cli.main()
File "/home/lzachar/.virtualenvs/tmt/lib/python3.7/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/home/lzachar/.virtualenvs/tmt/lib/python3.7/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/home/lzachar/.virtualenvs/tmt/lib/python3.7/site-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/lzachar/.virtualenvs/tmt/lib/python3.7/site-packages/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/lzachar/.virtualenvs/tmt/lib/python3.7/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/lzachar/.virtualenvs/tmt/lib/python3.7/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/home/lzachar/.virtualenvs/tmt/lib/python3.7/site-packages/click/decorators.py", line 17, in new_func
return f(get_current_context(), *args, **kwargs)
File "/home/lzachar/_important_/Gits/tmt/tmt/cli.py", line 447, in convert
common, individual = tmt.convert.read(path, makefile, nitrate, purpose)
File "/home/lzachar/_important_/Gits/tmt/tmt/convert.py", line 77, in read
beaker_task = re.search('export TEST=(.*)\n', content).group(1)
AttributeError: 'NoneType' object has no attribute 'group'
I'm using the following command line to run the /plans/smoke plan against a running box:
tmt run --debug --all provision --how connect --user root --guest 1.2.3.4 --key /home/psss/.ssh/private_key plan --name smoke
The execute step ends up with the following error:
Run command 'vagrant rsync'.
Run command 'vagrant ssh -c nohup /var/tmp/tmt/run-568/plans/smoke/execute/run.sh -v /var/tmp/tmt/run-568/plans/smoke shell /var/tmp/tmt/run-568/plans/smoke/execute/stdout.log /var/tmp/tmt/run-568/plans/smoke/execute/stderr.log'.
out: nohup: ignoring input and appending output to 'nohup.out'
err: Connection to 1.2.3.4 closed.
out: nohup: failed to run command '/var/tmp/tmt/run-568/plans/smoke/execute/run.sh': No such file or directory
Run command 'bash -c "vagrant plugin list | grep '^vagrant-rsync-back '"'.
out: vagrant-rsync-back (0.0.1, global)
Run command 'vagrant rsync-back'.
out: ==> default: Rsyncing folder: /var/tmp/tmt/run-568/plans/smoke/ => /var/tmp/tmt/run-568/plans/smoke
err: There was an error when attempting to rsync a synced folder.
err: Please inspect the error message below for more info.
err:
err: Host path: /var/tmp/tmt/run-568/plans/smoke
err: Guest path: /var/tmp/tmt/run-568/plans/smoke/
err: Command: rsync --verbose --archive --delete -z -e ssh -p 22 -o StrictHostKeyChecking=no -i '/home/psss/.ssh/private_key' --exclude .vagrant/ [email protected]:/var/tmp/tmt/run-568/plans/smoke/ /var/tmp/tmt/run-568/plans/smoke
err: Error: rsync: change_dir "/var/tmp/tmt/run-568/plans/smoke" failed: No such file or directory (2)
err: rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1659) [Receiver=3.1.3]
err: rsync: [Receiver] write error: Broken pipe (32)
Works fine for the other two plans. @pvalena, could you please have a look?
When running the basic plan under localhost, installation of beakerlib fails:
$ tmt run -d plan --name basic
...
Run command 'sleep 1; set -x; nohup bash -c 'dnf install -y beakerlib' 1>/root/prepare.log 2>&1 && exit 0; cat prepare.log; exit 1'.
err: sleep: invalid option -- 'x'
err: Try 'sleep --help' for more information.
ERROR Failed to run command 'sleep 1; set -x; nohup bash -c 'dnf install -y beakerlib' 1>/root/prepare.log 2>&1 && exit 0; cat prepare.log; exit 1'.
@pvalena, could you please have a look into this?
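The sleep: invalid option -- 'x' error suggests the compound command is being word-split on the way to the remote shell, so -x ends up as an argument of sleep instead of set. A common way to avoid this class of bug is to pass the whole script as a single safely quoted bash -c argument using shlex.quote. A sketch under that assumption (remote_command is a hypothetical helper, not tmt's actual code):

```python
import shlex

def remote_command(script, host='localhost'):
    """Build an ssh invocation that runs `script` as a single
    bash -c argument, protected from word splitting in transit."""
    return ['ssh', host, 'bash', '-c', shlex.quote(script)]

cmd = remote_command("sleep 1; set -x; dnf install -y beakerlib")
# The whole script travels as one quoted argument:
print(cmd[-1])  # 'sleep 1; set -x; dnf install -y beakerlib'
```

Because ssh concatenates its arguments with spaces before handing them to the remote shell, quoting the script once on the client side keeps the semicolons and options intact.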
The problem with the word "test" (or "plan") is that it is both a noun and a verb. In a command it reads as an imperative: I cannot help it, but I read tmt test as "tmt, please, run the tests".
A simple solution to this is to use the plural ("tests", "plans"). In fact, IMHO it even makes more sense, as tmt tests implies "let's do something with the tests", and it lists all the tests (plural) by default.
Similarly to the environment key in L1 metadata, it would be useful to be able to define environment variables in L2 metadata as well, probably in the prepare or the execute step. This would mean that every test executed has the variables set as needed.
Discussed with @hegerj, @lukaszachy, @martinky82 today.
Currently it's not easy to get virtual provisioning working under Fedora 30. The rsync-back plugin fails with the known error, and uninstalling rubygem-fog-core results in uninstalling vagrant as well. Installing vagrant again and then trying vagrant plugin install vagrant-libvirt fails too:
$ vagrant plugin install vagrant-libvirt
Installing the 'vagrant-libvirt' plugin. This can take a few minutes...
Fetching formatador-0.2.5.gem
Fetching excon-0.72.0.gem
Fetching builder-3.2.4.gem
Fetching fog-core-1.43.0.gem
Fetching mini_portile2-2.4.0.gem
Fetching nokogiri-1.10.7.gem
Building native extensions. This could take a while...
Fetching multi_json-1.14.1.gem
Fetching fog-json-1.2.0.gem
Fetching fog-xml-0.1.3.gem
Fetching ruby-libvirt-0.7.1.gem
Building native extensions. This could take a while...
Vagrant failed to properly resolve required dependencies. These
errors can commonly be caused by misconfigured plugin installations
or transient network issues. The reported error is:
ERROR: Failed to build gem native extension.
current directory: /home/user/.vagrant.d/gems/2.6.5/gems/ruby-libvirt-0.7.1/ext/libvirt
/usr/bin/ruby -I /usr/share/rubygems -r ./siteconf20200210-14111-10qat5h.rb extconf.rb
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of necessary
libraries and/or headers. Check the mkmf.log file for more details. You may
need configuration options.
Provided configuration options:
--with-opt-dir
--without-opt-dir
--with-opt-include
--without-opt-include=${opt-dir}/include
--with-opt-lib
--without-opt-lib=${opt-dir}/lib64
--with-make-prog
--without-make-prog
--srcdir=.
--curdir
--ruby=/usr/bin/$(RUBY_BASE_NAME)
--with-libvirt-include
--without-libvirt-include
--with-libvirt-lib
--without-libvirt-lib
--with-libvirt-config
--without-libvirt-config
--with-pkg-config
--without-pkg-config
extconf.rb:73:in `<main>': libvirt library not found in default locations (RuntimeError)
To see why this extension failed to compile, please check the mkmf.log which can be found here:
/home/user/.vagrant.d/gems/2.6.5/extensions/x86_64-linux/2.6.0/ruby-libvirt-0.7.1/mkmf.log
extconf failed, exit code 1
Gem files will remain installed in /home/user/.vagrant.d/gems/2.6.5/gems/ruby-libvirt-0.7.1 for inspection.
Results logged to /home/user/.vagrant.d/gems/2.6.5/extensions/x86_64-linux/2.6.0/ruby-libvirt-0.7.1/gem_make.out
It seems libvirt-devel is another undocumented dependency. We should make this much, much easier. @pvalena, what would you recommend? Would installing from copr help here?