
labgrid's Introduction

labgrid logo

Welcome to labgrid


Labgrid is an embedded board control Python library with a focus on testing, development and general automation. It includes a remote control layer to control boards connected to other hosts.

Purpose and Features

The idea behind labgrid is to create an abstraction of the hardware control layer needed for testing of embedded systems, automatic software installation and automation during development. Labgrid itself is not a testing framework, but is intended to be combined with pytest (and additional pytest plugins). Please see Design Decisions for more background information.

It currently supports:

  • remote client-exporter-coordinator infrastructure to make boards available from different computers on a network
  • pytest plugin to write automated tests for embedded systems
  • CLI and library usage for development and automation
  • interaction with bootloader and Linux shells on top of serial console or SSH
  • power/reset management via drivers for power switches
  • upload of binaries and device bootstrapping via USB
  • control of digital outputs, SD card and USB multiplexers
  • integration of audio/video/measurement devices for remote development and testing
  • Docker/QEMU integration
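
As a concrete example of the pytest integration, a minimal labgrid-based test might look like the sketch below. The test body is an assumption for illustration; the target fixture is provided by the labgrid pytest plugin and is configured via --lg-env.

from labgrid.protocol import CommandProtocol

def test_kernel_version(target):
    # the plugin builds the target from the environment file passed via --lg-env
    command = target.get_driver(CommandProtocol)
    stdout, stderr, returncode = command.run('cat /proc/version')
    assert returncode == 0
    assert len(stdout) > 0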

While labgrid is currently used for daily development on embedded boards and for automated testing, several planned features are not yet implemented, and the APIs may change as more use-cases appear. We appreciate code contributions and feedback on using labgrid in other environments (see Contributing for details). Please consider contacting us (via a GitHub issue) before starting larger changes, so we can discuss design trade-offs early and avoid redundant work. You can also look at Ideas for enhancements which are not yet implemented.

Documentation

labgrid's documentation is hosted on Read the Docs.

Contributing

See our Development Docs.

Visit us in our IRC channel #labgrid on libera.chat (bridged to the Matrix channel #labgrid:matrix.org).

Background

Work on labgrid started at Pengutronix in late 2016 and is currently in active use and development.

Installation

See the Installation section for more details.

Install Latest Release

Install labgrid from PyPI:

$ virtualenv -p python3 venv
$ source venv/bin/activate
venv $ pip install --upgrade pip
venv $ pip install labgrid

Install Development State

Clone the git repository:

$ git clone https://github.com/labgrid-project/labgrid

Create and activate a virtualenv for labgrid:

$ virtualenv -p python3 venv
$ source venv/bin/activate
venv $ pip install --upgrade pip

Install labgrid into the virtualenv:

venv $ pip install .

Tests can now be run via:

venv $ python -m pytest --lg-env <config>

labgrid's People

Contributors

a3f, atline, bastian-krause, benjamb, dependabot[bot], edersondisouza, ejoerns, ekronborg, emantor, esben, fscherf, jbrun3t, jluebbe, joshuawatt, jpewdev, jremmet, kjeldflarup, krevsbech, liambeguin, mbgg, nefethael, nick-potenski, niklasreisser, nlabriet, obbardc, rohieb, rpoisel, sjg20, smithchart, ynezz


labgrid's Issues

Labgrid for parallel target testing

I've been evaluating labgrid for use in a new parallel hardware testing system. Unfortunately, there are two main features that do not seem to be currently supported but would be required for our project:

  1. Multiple driver/resource instances on a single target
    This is listed as a planned feature in the docs, so I assume it's already being looked into. The use case I've been looking at is using a serial port for communication with the UUT and a second serial port for communication with a separate power control board. It might be possible to work around this by creating new resource/driver classes for the second device, but this does not seem like a good solution.
  2. Shared driver instances between multiple targets
    The power control board I mentioned above has multiple relays to control multiple target boards. I do not think it is currently possible to share a single serial device between boards like this, since bound resources/drivers cannot be controlled by another target.

Are these features planned to be supported?

Quickstart fails on first step

~/new$ virtualenv -p python3 labgrid-venv
Running virtualenv with interpreter /usr/bin/python3
Using base prefix '/usr'
New python executable in labgrid-venv/bin/python3
Also creating executable in labgrid-venv/bin/python
Failed to import the site module
Traceback (most recent call last):
  File "/home/bla/new/labgrid-venv/lib/python3.5/site.py", line 67, in <module>
    import os
  File "/home/bla/new/labgrid-venv/lib/python3.5/os.py", line 708, in <module>
    from _collections_abc import MutableMapping
ImportError: No module named '_collections_abc'
ERROR: The executable labgrid-venv/bin/python3 is not functioning
ERROR: It thinks sys.prefix is '/home/bla/new' (should be '/home/bla/new/labgrid-venv')
ERROR: virtualenv is not compatible with this system or executable
~/new$ python3.5 --version
Python 3.5.3

SSHDriver pytest environment variables.

Hi Community :)

I also post this message on the IRC channel...

I'm really new to labgrid so please be kind :D I have successfully run my first pytest against a board connected via SSH (SSHDriver, hence no serial).
Now I want to test something more complicated: I need to run an ash script which uses environment variables to execute different types of tests. I have really no clue about the best way to pass these environment variables in. Ideally, these environment variables should be reused among the different tests I need to execute. Thanks for the support! :)
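
One possible way to handle this (purely a sketch, not an established labgrid pattern; the fixture name, variables and script path are made up) is to provide the variables from a session-scoped fixture and prefix them to the command passed to the SSHDriver:

import pytest

@pytest.fixture(scope="session")
def test_env():
    # environment variables shared by all tests in the session
    return {"TEST_MODE": "smoke", "TEST_ITERATIONS": "3"}

def test_script(target, test_env):
    ssh = target.get_driver("SSHDriver")
    exports = " ".join("{}={}".format(k, v) for k, v in test_env.items())
    stdout, stderr, returncode = ssh.run("{} /usr/bin/run-tests.sh".format(exports))
    assert returncode == 0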

regular termination of microcom should not lead to a backtrace

When microcom is ended using the keyboard shortcut (microcom exits successfully in this case), no backtrace should be printed, as this is normal and expected usage. Instead, we get this backtrace:

sha-phycore-imx6sha@dude:~ labgrid-client console
connecting to  NetworkSerialPort(target=Target(name='sha-phycore-imx6', env=Environment(config_file='/opt/bin/labgrid.yaml')), state=<BindingState.bound: 1>, avail=True, host='guave', port=37055, speed=115200)
connected to xxx:xxxx:xxxx (port 37055)
Escape character: Ctrl-\
Type the escape character followed by c to get to the menu or q to quit

Enter command. Try 'help' for a list of builtin commands
-> ^Cexiting
Traceback (most recent call last):
  File "/opt/venv/python3.5-x86_64/lib/python3.5/site-packages/labgrid/remote/client.py", line 893, in main
    args.func(session)
  File "/opt/venv/python3.5-x86_64/lib/python3.5/site-packages/labgrid/remote/client.py", line 596, in console
    res = self._console(place)
  File "/opt/venv/python3.5-x86_64/lib/python3.5/site-packages/labgrid/remote/client.py", line 587, in _console
    "{}:{}".format(resource.host, resource.port)
  File "/usr/lib/python3.5/subprocess.py", line 249, in call
    return p.wait(timeout=timeout)
  File "/usr/lib/python3.5/subprocess.py", line 1389, in wait
    (pid, sts) = self._try_wait(0)
  File "/usr/lib/python3.5/subprocess.py", line 1339, in _try_wait
    (pid, sts) = os.waitpid(self.pid, wait_flags)
KeyboardInterrupt

Basic client install fails

These instructions from https://labgrid.readthedocs.io/en/latest/getting_started.html failed:

sudo apt-get install python3 python3-virtualenv python3-pip
sudo apt install virtualenv
virtualenv -p python3 labgrid-venv
source labgrid-venv/bin/activate
pip3 install labgrid
labgrid-client --help
Traceback (most recent call last):
  File "/home/kfa/labgrid-venv/bin/labgrid-client", line 7, in <module>
    from labgrid.remote.client import main
  File "/home/kfa/labgrid-venv/lib/python3.5/site-packages/labgrid/__init__.py", line 1, in <module>
    from .target import Target
  File "/home/kfa/labgrid-venv/lib/python3.5/site-packages/labgrid/target.py", line 6, in <module>
    from .driver import Driver
  File "/home/kfa/labgrid-venv/lib/python3.5/site-packages/labgrid/driver/__init__.py", line 3, in <module>
    from .serialdriver import SerialDriver
  File "/home/kfa/labgrid-venv/lib/python3.5/site-packages/labgrid/driver/serialdriver.py", line 10, in <module>
    from ..resource import SerialPort, NetworkSerialPort
  File "/home/kfa/labgrid-venv/lib/python3.5/site-packages/labgrid/resource/__init__.py", line 7, in <module>
    from .udev import USBSerialPort
  File "/home/kfa/labgrid-venv/lib/python3.5/site-packages/labgrid/resource/udev.py", line 148, in <module>
    class USBSerialPort(SerialPort, USBResource):
  File "/home/kfa/labgrid-venv/lib/python3.5/site-packages/attr/_make.py", line 644, in attrs
    return wrap(maybe_cls)
  File "/home/kfa/labgrid-venv/lib/python3.5/site-packages/attr/_make.py", line 613, in wrap
    builder = _ClassBuilder(cls, these, slots, frozen, auto_attribs)
  File "/home/kfa/labgrid-venv/lib/python3.5/site-packages/attr/_make.py", line 345, in __init__
    attrs, super_attrs = _transform_attrs(cls, these, auto_attribs)
  File "/home/kfa/labgrid-venv/lib/python3.5/site-packages/attr/_make.py", line 311, in _transform_attrs
    .format(a=a)
ValueError: No mandatory attributes allowed after an attribute with a default value or factory.  Attribute in question: Attribute(name='target', default=NOTHING, validator=None, repr=True, cmp=True, hash=None, init=True, convert=None, metadata=mappingproxy({}), type=None)

I have no idea what this error says.

This alternative, though, succeeds, and after running it the error goes away in the virtualenv:

$ git clone https://github.com/labgrid-project/labgrid
$ cd labgrid && python3 setup.py install

Info Driver finds multiple matching drivers

If I provide an info driver in my environment, it raises the error:

labgrid.exceptions.NoDriverFoundError: multiple drivers matching <class 'labgrid.protocol.commandprotocol.CommandProtocol'> found in target Target(name='main', env=Environment(config_file='env.yaml'))

This is due to the fact that I have an SSH driver, a Barebox driver and a serial connection to the target defined in the environment.
I suppose #124 will solve the problem, as we can then select the right driver.

I also saw the TODO in the source of InfoDriver.

# TODO: rework CommandProtocol binding to select correct underlying driver
# (No UBoot/BareboxDriver, SSH > Serial, …)

Do you have a specific plan for that?

ExternalPowerDriver fails to call cmd

Setting cmd_on to a string holding the command and parameters fails, e.g. from examples/shell/remotelab-1004.env:
cmd_on: "ptx_remotelab -1 1004"

>>> subprocess.check_call("ptx_remotelab -1 1004")
FileNotFoundError: [Errno 2] No such file or directory: 'ptx_remotelab -1 1004'
>>> subprocess.check_call("ptx_remotelab")
0

Passing a list, like
cmd_on: ["ptx_remotelab", "-1", "1004"]
fails due to cmd_on = attr.ib(validator=attr.validators.instance_of(str))

What is the preferred way to fix this? shell=True should be avoided, and forcing lists in the config isn't nice. Maybe cmd_on_param?
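
A possible middle ground (an assumption, not a decision by the project) would be to keep cmd_on as a string in the config and split it before calling subprocess, for example with shlex:

import shlex
import subprocess

cmd_on = "ptx_remotelab -1 1004"
# split the configured string into an argument list without resorting to shell=True
subprocess.check_call(shlex.split(cmd_on))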

doc/usage.rst: Strategy Fixtures Example

I have trouble getting the "Strategy Fixtures Example" working. Using the proposed conftest.py, I got:

target = Target(name='main', env=Environment(config_file='local.yaml'))

    @pytest.fixture(scope='function')
    def active_command(target):
>       return target.get_active_driver(CommandProtocol)

conftest.py:31: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = Target(name='main', env=Environment(config_file='local.yaml'))
cls = <class 'labgrid.protocol.commandprotocol.CommandProtocol'>

    def get_active_driver(self, cls):
        """
            Helper function to get the active driver of the target.
            Returns the active driver found, otherwise None.
    
            Arguments:
            cls -- driver-class to return as a resource
            """
        for drv in self.drivers:
            if isinstance(drv, cls):
                if drv.state == BindingState.active:
                    return drv
        raise NoDriverFoundError(
>           "no driver matching {} found in target {}".format(cls, self)
        )
E       labgrid.exceptions.NoDriverFoundError: no driver matching <class 'labgrid.protocol.commandprotocol.CommandProtocol'> found in target Target(name='main', env=Environment(config_file='local.yaml'))

/home/jremmet/labgrid/labgrid-venv/lib/python3.5/site-packages/labgrid-0.1.dev475+n450511c-py3.5.egg/labgrid/target.py:127: NoDriverFoundError

I also noted that in "examples/strategy/test_barebox_strategy.py" CommandProtocol is replaced:

    #command = target.get_driver(CommandProtocol)
    command = target.get_driver(BareboxDriver)

Docker files

Proposal:
I've created a Dockerfile to set up a coordinator in a container and one to run labgrid-client / execute the tests.
Are you interested in these? If yes, should I create a PR adding a docker folder to the labgrid repository, or would you prefer a separate repository inside the project?

Exclusive access to resources

We are currently evaluating using labgrid in a distributed deployment with multiple racks consisting of either one or multiple boards. The use case we are interested in is allowing multiple developers and a CI system to access the racks / boards / resources on boards in a way such that they do not interfere with each other.

I'm currently hitting the following problem: I have a rack with two boards and tests that require either one board or both boards. How am I supposed to make sure that I can lock each board individually (e.g. as an individual place) but also the whole rack of boards at once (as a single place)? Currently, if I create two places that match some common resources, I can lock both places, thereby granting simultaneous access to the same board to multiple developers and the CI system. That's what I need to avoid. Is this a config issue or a labgrid issue? If it's a config issue, what is the best way to guarantee mutual exclusion and still be able to grab either the whole rack (as a place) or individual boards (as individual places) for tests?

SSHDriver cannot execute run_check

This code fails:

@pytest.fixture(scope='session')
## Returns a SSHDriver object to do ssh operations
def sshcommand(strategy):
    strategy.transition("ssh")
    return strategy.ssh

def test_run_check(sshcommand):
    sshcommand.run_check('echo ssh')
E               TypeError: got an unexpected keyword argument 'timeout'
/usr/lib/python3.5/inspect.py:2910: TypeError

The problem seems to be that CommandMixin calls run in this way:

stdout, stderr, exitcode = self.run(cmd, timeout=timeout)

But only ShellDriver implements timeout.
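
A defensive workaround sketch (not the project's actual fix; run_with_optional_timeout is a made-up helper) is to forward timeout only when the bound driver's run() accepts it:

import inspect

def run_with_optional_timeout(driver, cmd, timeout=30):
    # only pass `timeout` if the driver's run() signature takes that argument
    if "timeout" in inspect.signature(driver.run).parameters:
        return driver.run(cmd, timeout=timeout)
    return driver.run(cmd)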

[ssh] <defunct> after powercut

The Raspberry Pi running the exporter had restarted for some reason, which cut the power to the DUT. This left my test in the state below.
This situation seems to go unnoticed for 2590 seconds.

root        68    67  0 08:05 pts/0    00:00:04 /usr/local/bin/python /usr/local/bin/pytest --junitxml=logs/junit.xml --capture=sys --log-format=%(asctime)s %(levelname)s %(message)s --log-date-format=%Y-
root        78     1  0 08:05 ?        00:00:00 [ssh] <defunct>
root        79     1  0 08:05 ?        00:00:00 [ssh] <defunct>
root        94     1  0 08:06 ?        00:00:00 [ssh] <defunct>
root        95     1  0 08:06 ?        00:00:00 [ssh] <defunct>
root       328     0  0 08:51 pts/1    00:00:00 bash
root       338    68  0 08:53 pts/0    00:00:00 ssh -x -o LogLevel=ERROR -i /opt/labgridtools/ssh/labgrid_rsa -o PasswordAuthentication=no -F /dev/null -o ControlPath=/tmp/labgrid-ssh-tmp-0tr9x1fg/control

After a looong time I see this on the console:

Result: ('10pcm52-bsp-tests/test_pcm52-bsp.py', 137, 'test_os'):	passed	8.158427000045776s
FE
Result: ('10pcm52-bsp-tests/test_pcm52-bsp.py', 145, 'test_rt'):	failed	2590.722511768341s
The authenticity of host '172.29.5.34 (172.29.5.34)' can't be established.
ECDSA key fingerprint is SHA256:1yH9KbZJg2W8agWsOcjprrlqM5Kcpq34Lbd0zHWEwtk.

The authenticity of host can't be established.

When running labgrid from a Docker container, the SSHDriver asks for this. If no operator is present, the test fails.

The authenticity of host '172.29.1.25 (172.29.1.25)' can't be established.
ECDSA key fingerprint is 4b:6e:2e:f5:21:46:26:f5:7a:d9:9c:69:01:43:ec:d0.
Are you sure you want to continue connecting (yes/no)?

I patched my way around it by adding -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no in a number of places in sshdriver.py.

SSH-connection to target without IP

Is there an easy way to connect to the target via SSH without knowing its IP address?
Currently we use:

service = target.get_resource(NetworkService)
service.address = ip_address
ssh = target.get_driver(SSHDriver)

with a NetworkService and a SSHDriver defined in the environment.

I would like an approach where we have a new driver which serves as a NetworkService and responds with the IP address gathered from the info driver.

exporter of a resource not started deserves a meaningful error message

Situation: A host exports a resource, a serial connection in this case, but the exporter is not started.
When trying to connect to this resource a meaningful error message should be printed. Instead, this
backtrace is printed:

sha-phycore-imx6sha@dude:~ labgrid-client console
Traceback (most recent call last):
  File "/opt/venv/python3.5-x86_64/lib/python3.5/site-packages/labgrid/remote/client.py", line 893, in main
    args.func(session)
  File "/opt/venv/python3.5-x86_64/lib/python3.5/site-packages/labgrid/remote/client.py", line 596, in console
    res = self._console(place)
  File "/opt/venv/python3.5-x86_64/lib/python3.5/site-packages/labgrid/remote/client.py", line 574, in _console
    resource = target.get_resource(NetworkSerialPort)
  File "/opt/venv/python3.5-x86_64/lib/python3.5/site-packages/labgrid/target.py", line 83, in get_resource
    "no resource matching {} found in target {}".format(cls, self)
labgrid.exceptions.NoResourceFoundError: no resource matching <class 'labgrid.resource.serialport.NetworkSerialPort'> found in target Target(name='sha-phycore-imx6', env=Environment(config_file='/opt/bin/labgrid.yaml'))

labgrid prints debug output in pytest without activation

I am running pytests using a custom smallubootstrategy inside my project.

When I run pytest using
pytest --lg-env local.yaml test_uboot_strategy.py
I get a lot of debug output from the Serial Output:
DEBUG: Read 12 bytes: b'0x80000000\r\n', timeout 11.17, requested size 1
DEBUG: Read 11 bytes: b'Loading: *\x08', timeout 11.17, requested size 1

Since I have neither touched logging.Logger.setLevel() nor used -v on my command line, I would expect just the normal pytest output.

I am currently using the latest master plus my pull requests that have not been merged yet.

Is this intended behavior? Or do you have any idea how to track that bug down?
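
As a possible stop-gap (an assumption, not a documented labgrid switch), the DEBUG messages can be silenced from conftest.py by raising the root logger level before the tests run:

import logging

def pytest_configure(config):
    # hide the serial driver's DEBUG output unless explicitly re-enabled
    logging.getLogger().setLevel(logging.INFO)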

Labgrid Exporter crashes on disconnect

On one of our computers the exporter crashes if it disconnects from the coordinator, logfiles below.

2017-06-20T09:57:05 Task was destroyed but it is pending!
task: <Task pending coro=<onDisconnect() running at /opt/venv/python3.4-x86_64/lib/python3.4/site-packages/labgrid-0.1.1.dev5_ng4b0922bc2f15-py3.4.egg/labgrid/remote/exporter.py:246> cb=[add_callbacks.<locals>.done() at /opt/venv/python3.4-x86_64/lib/python3.4/site-packages/txaio/aio.py:383]>
Exception ignored in: Exception ignored in: Exception ignored in: Exception ignored in: Exception ignored in: Exception ignored in: Exception ignored in:
Starting Exports devices on this host for the labgrid infrastructure...
Started Exports devices on this host for the labgrid infrastructure.
SessionDetails(realm=<realm1>, session=6410032928824258, authid=<exporter/adelgunde>, authrole=<public>, authmethod=ticket, authprovider=dynamic, authextra=None, resumed=None, resumable=None, resume_token=None)

Resource on exporter not available deserves a meaningful error message

Situation: When a resource is exported on an exporter, but is currently not available (for example because the USB cable to the USB serial converter is not connected), a meaningful error message should be printed when trying to connect to this resource. Instead, the following backtrace is printed:

Traceback (most recent call last):
  File "/opt/venv/python3.5-x86_64/lib/python3.5/site-packages/labgrid/remote/client.py", line 893, in main
    args.func(session)
  File "/opt/venv/python3.5-x86_64/lib/python3.5/site-packages/labgrid/remote/client.py", line 596, in console
    res = self._console(place)
  File "/opt/venv/python3.5-x86_64/lib/python3.5/site-packages/labgrid/remote/client.py", line 571, in _console
    target = self._get_target(place)
  File "/opt/venv/python3.5-x86_64/lib/python3.5/site-packages/labgrid/remote/client.py", line 523, in _get_target
    RemotePlace(target, name=place.name)
  File "<attrs generated init a6e6d25803fd8b485c18a2103d7ba0a40859b244>", line 12, in __init__
    self.__attrs_post_init__()
  File "/opt/venv/python3.5-x86_64/lib/python3.5/site-packages/labgrid/resource/remote.py", line 89, in __attrs_post_init__
    super().__attrs_post_init__()
  File "/opt/venv/python3.5-x86_64/lib/python3.5/site-packages/labgrid/resource/common.py", line 89, in __attrs_post_init__
    self.manager._add_resource(self)
  File "/opt/venv/python3.5-x86_64/lib/python3.5/site-packages/labgrid/resource/common.py", line 65, in _add_resource
    self.on_resource_added(resource)
  File "/opt/venv/python3.5-x86_64/lib/python3.5/site-packages/labgrid/resource/remote.py", line 46, in on_resource_added
    remote_place.target, resource_entry.cls, resource_entry.args)
  File "/opt/venv/python3.5-x86_64/lib/python3.5/site-packages/labgrid/factory.py", line 22, in make_resource
    r = self.resources[resource](target, **args)
TypeError: __init__() missing 2 required positional arguments: 'host' and 'port'

--lg-log affects pytest's rootdir

Using the --lg-log option affects pytest's rootdir

(labgrid-venv) kfa@kjeld-Latitude-5580:/userdata/labgridtools/remotetests/remotetest$ pytest  --capture=sys -rs --lg-log /tmp/labgrid-log --lg-env local.yaml 
================================================================================== test session starts ==================================================================================
platform linux -- Python 3.5.2, pytest-3.3.2, py-1.5.2, pluggy-0.6.0
rootdir: /, inifile:

The tests run OK, but the pytest cache does not work:
could not create cache path /.cache/v/cache/lastfailed
It seems like pytest sets rootdir to the common top directory of --lg-log and the current working directory:

(labgrid-venv) kfa@kjeld-Latitude-5580:/userdata/labgridtools/remotetests/remotetest$ pytest  --capture=sys -rs --lg-log /userdata/labgridtools/remotetests/log --lg-env local.yaml 
================================================================================== test session starts ==================================================================================
platform linux -- Python 3.5.2, pytest-3.3.2, py-1.5.2, pluggy-0.6.0
rootdir: /userdata/labgridtools/remotetests, inifile:

Automatic regression test via Docker Hub

A few times we have experienced that labgrid got broken for our use case. Thus we came up with this idea.

Set up a Docker Hub project like this one https://hub.docker.com/r/deifwpt/labgrid-client/~/settings/automated-builds/ which is linked to a repository.

This is our own, but it is only linked to our own repository where we put our Dockerfile. If the Dockerfile is moved to labgrid, then any change in labgrid/master will cause a new Docker build.

This Docker build can then be used to do nightly regression tests in our CI system.

We would not use the image for our "production" tests. We would set up a separate Docker image and manually rebuild it when we need the latest labgrid.

This of course means that the Dockerfiles made by https://github.com/krevsbech would be moved to labgrid.

Stable release

Is there a stable release of labgrid?

I tried to follow https://labgrid.readthedocs.io/en/latest/getting_started.html but ran into a version related error, and once that was fixed, I soon ran into another.
Then I also tried the docker image, but that also suffered from upgrade issues.

I'm trying to start using labgrid, and don't really want to be on the bleeding edge.
Thus I would like to know if there are some version recommendations regarding labgrid, but certainly also regarding the packages that should be installed alongside labgrid on my system.

Some files are not hit by the autodoc importer

The autodoc importer does not hit some files because they are not imported anywhere.
We need to find a workaround for that.
List of files from the previous hack:

  • labgrid/autoinstall/main
  • labgrid/driver/power/gude
  • labgrid/remote/common

Character decoding

SOMEBODY thought that it was a good idea to use a Unicode center dot (0xb7) in a product name.

Now this gives me a failure in the SSHDriver:

>       stdout = stdout.decode("utf-8").split('\n')
E       UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb7 in position 90: invalid start byte

The ShellDriver may have the same problem, as well as the barebox and U-Boot drivers.
Changing to iso-8859-1 solves my problem, for now.

Possible solutions:

  1. Add an encoding parameter to the drivers.
  2. Add an encoding parameter to the resource
  3. Add an encoding parameter to the run command
  4. Change the error handling in the decode command to ignore or replace

The optimal solution to the problem is to be able to set the encoding for the individual target. The easiest place may be the SSHDriver directly. But the information may actually be more accurately set in the resource in the exporter, so that the same test case can connect to two differently configured targets.

A simpler solution would be to write decode("utf-8", 'ignore').
This, though, has the drawback that stray binary output no longer causes the test to fail. This is clearly a tradeoff.
An ignore parameter (or codecignore, to be less confusing) could be added to the run command; that way, only the failing test cases would need to be modified.
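
To illustrate option 4 and the iso-8859-1 workaround mentioned above (the byte string is just an example):

raw = b"Product \xb7 Name\n"

# option 4: replace (or ignore) undecodable bytes instead of raising,
# at the cost of masking stray binary output
text = raw.decode("utf-8", errors="replace")

# the workaround mentioned above: Latin-1 decoding never fails
text = raw.decode("iso-8859-1")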

YKUSHPowerPort via coordinator fails

Hi,

I just tried to run my tests (using the new YKUSHPowerDriver) via the exporter / coordinator over the network (the actual setup uses 3 different Docker containers for coordinator, exporter and client, where only the exporter has USB access) and it does not work - it looks like the client is trying to access USB locally.
(YKUSHPowerDriver complains about not being able to access the USB port on the HW.)
Should that work, or did I miss implementing something?
Here is my configuration:
exporter:

foo:
  location: bar
  NetworkService:
    address: "10.10.10.10"
    username: "root"
  YKUSHPowerPort:
    serial: "serial"
    index: 1

client:

targets:
  main:
    resources:
      RemotePlace:
        name: "dev1"
    drivers:
      YKUSHPowerDriver: {}
      SSHDriver:
        keyfile: "./id_rsa"
      BloksStrategy: {}

Labgrid driver/resource binding does not work with new attrs behavior

Labgrid is not compatible with the most recent version of attrs due to a change to the default hashing behavior (unhashable). See here: python-attrs/attrs#136

The BindingMixin class defines suppliers and clients as sets, which cannot contain unhashable objects. I think the fix would be to either change all drivers/resources to use the @attr.s(cmp=False) option to disable generation of comparison methods and make the objects hashable, or change the way that the BindingMixin registers suppliers and clients.

Entering kernel is not detected by driver

I have a pytest that fails while verifying that the kernel is present. Looking at 'ubootdriver.py', it looks like the verification depends on the Linux version string being printed upon kernel boot.

Is there any way to configure this detection method, as my Linux version 4.9.0-adi does not print this upon entering the kernel?

@Driver.check_active
@step()
def await_boot(self):
    """Wait for the initial Linux version string to verify we succesfully jumped into the kernel."""
    self.console.expect(r"Linux version \d")

Also with multiple kernels on different targets I would like to be able to detect the correct kernels etc. as part of the driver.

Currently this is blocking all my tests :(
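
A sketch of how the detection could be made configurable (the class and the boot_expression parameter are hypothetical, not an existing driver option):

class ConfigurableBootCheck:
    def __init__(self, console, boot_expression=r"Linux version \d"):
        self.console = console
        # allow the per-target environment to override the boot pattern
        self.boot_expression = boot_expression

    def await_boot(self):
        """Wait for the configured string to verify we jumped into the kernel."""
        self.console.expect(self.boot_expression)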

pip install labgrid[coordinator] dependency problem

In order to run crossbar and the coordinator I ran:
pip3 install labgrid[coordinator]
This works; however, it installs idna-2.6, which is not valid for crossbar (crossbar requires >=2.5, <2.6).
Manually uninstalling idna-2.6 and installing idna-2.5 works:

pip3 uninstall idna
pip3 install idna==2.5
I would be glad to help solve the issue upstream; however, that would require a bit of help pointing me at the place where the requirement is described.

Logo for labgrid

I heard labgrid needs a pretty logo :) Here are some rough sketches around one idea. What do you think? Is this a good direction? Which one do you like best? The logos on the left are logos of other Pengutronix projects, so you can see how the new one would fit in.

labgrid-logo

Note that this is not the final font. Can you find out which font was used for the other logos?

SSH-Connection breaks when rebooting

When I reboot the target with the barebox driver during an active SSH session, I get the following error while trying to send another command over SSH:

packet_write_wait: Connection to 10.171.8.15 port 22: Broken pipe
mux_client_request_session: read from master failed: Broken pipe
Host key verification failed.
   INFO: Socket already closed

I guess there is no way to circumvent the broken pipe problem, since the power gets cut off. However I would like labgrid to detect such a failure and try to reconnect. Currently I don't see an easy way to implement the behavior without changing the way the ssh-driver works internally. What would be an appropriate implementation here?

My current workaround is to simply deactivate and reactivate the ssh-driver in the (pytest) fixture which provides it. This works, but since the SSH teardown and setup take around 250 ms, it slows down the tests immensely.

driver/power/gude.py: Usage of power switches with more than 8 ports

I am trying to use a Gude expert Power 8080 with labgrid. This device has 24 ports.

The current gude.py limits the number of ports to 8.
Question is:

  • Should I simply patch the driver to support 24 ports or
  • Should I create a new driver that is specially suitable for the 24 port devices?

Option 1 opens the possibility of trying to access ports that do not exist on a device.
Option 2 leads to copied code with just a changed assertion.

As far as I can see there is no way I can pass the number of existing ports to the actual driver.
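
A third option (just a sketch of the idea, not the actual gude.py code; the helper name and port count are assumptions) would be to validate the index against a configurable port count instead of a hard-coded 1..8 range:

NUM_PORTS = 24  # would ideally come from the resource/driver configuration

def check_port_index(index, num_ports=NUM_PORTS):
    """Validate a port index against the configured number of ports."""
    index = int(index)
    if not 1 <= index <= num_ports:
        raise ValueError("port index {} out of range 1..{}".format(index, num_ports))
    return index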

Single port binding for corporate environments

It seems that labgrid-client requires access to multiple port numbers to run a simple console over serial.

e.g.

  1. In order to connect to the coordinator, we need to pass a WebSocket URL with a (randomized?) port number:
labgrid-client -x ws://10.202.145.10:20408/ws -p home con
  2. Once the console is connected, microcom will attempt to connect to the NetworkSerialPort via a port number that is automatically passed to microcom:
microcom -s 115200 -t sirius02:48847

This poses a problem for corporate environments, which often have very strict regulations on port numbers. Many port numbers are blocked off, and labgrid-client console will not work in such an enterprise environment.

A good solution would probably be to create a single port binding (e.g. 80 or 443) and perform port multiplexing, where each connection can be identified by a UID or a randomized token.

Any plans for this? I could help out with this if I can find the time.

ShellDriver.put_ssh_key not working on read-only filesystem

Doing ShellDriver.put_ssh_key() on a read-only file system only results in a stacktrace for me:

   INFO: Key not on target, mounting...
Traceback (most recent call last):
  File "test-xmodem.py", line 14, in <module>
    sd.put_ssh_key("../projectroot/root/.ssh/development_rsa.pub")
  File "/ptx/work/dude/WORK_B/rhi/realtimeci-DistroKit/labgrid-tests/env/lib/python3.5/site-packages/labgrid-0.1.1.dev1+g0897126-py3.5.egg/labgrid/binding.py", line 87, in wrapper
    return func(self, *_args, **_kwargs)
  File "/ptx/work/dude/WORK_B/rhi/realtimeci-DistroKit/labgrid-tests/env/lib/python3.5/site-packages/labgrid-0.1.1.dev1+g0897126-py3.5.egg/labgrid/driver/shelldriver.py", line 206, in put_ssh_key
    self._put_ssh_key(key)
  File "/ptx/work/dude/WORK_B/rhi/realtimeci-DistroKit/labgrid-tests/env/lib/python3.5/site-packages/labgrid-0.1.1.dev1+g0897126-py3.5.egg/labgrid/step.py", line 192, in wrapper
    _result = func(*_args, **_kwargs)
  File "/ptx/work/dude/WORK_B/rhi/realtimeci-DistroKit/labgrid-tests/env/lib/python3.5/site-packages/labgrid-0.1.1.dev1+g0897126-py3.5.egg/labgrid/driver/shelldriver.py", line 200, in _put_ssh_key
    self._run_check('mount --bind /tmp/keys ~/.ssh/authorized_keys')
  File "/ptx/work/dude/WORK_B/rhi/realtimeci-DistroKit/labgrid-tests/env/lib/python3.5/site-packages/labgrid-0.1.1.dev1+g0897126-py3.5.egg/labgrid/step.py", line 192, in wrapper
    _result = func(*_args, **_kwargs)
  File "/ptx/work/dude/WORK_B/rhi/realtimeci-DistroKit/labgrid-tests/env/lib/python3.5/site-packages/labgrid-0.1.1.dev1+g0897126-py3.5.egg/labgrid/driver/shelldriver.py", line 114, in _run_check
    raise ExecutionError(cmd)
labgrid.driver.exception.ExecutionError: mount --bind /tmp/keys ~/.ssh/authorized_keys

… which, in hindsight, is obvious, because on the target console:

root@DistroKit:~ mount --bind /tmp/keys ~/.ssh/authorized_keys
mount: mount point /root/.ssh/authorized_keys does not exist
root@DistroKit:~ mkdir ~/.ssh
mkdir: can't create directory '/root/.ssh': Read-only file system
root@DistroKit:~ mount -o rw,remount /
root@DistroKit:~ mkdir ~/.ssh
root@DistroKit:~ ls -la ~/.ssh
drwxr-xr-x    2 root     root             6 May 22  2017 .
drwxr-xr-x    3 1059     3000            26 May 22  2017 ..

On most freshly installed targets, /root/.ssh will most probably not exist, so on a read-only system, mkdir ~/.ssh in ShellDriver.py, line 179 will fail. I'm guessing the read-write mount is only done a few lines later with mount --bind, and if this works like I imagine, I think this issue could be solved by doing a mount --bind on the whole /root/ directory instead of only the authorized_keys file.

Feature: Selecting test cases based on available hardware/fixtures

To make the test environment really scalable, it would be great if I could define for each test case which dependencies it has. For example, when having a complete rack with switchable power supplies, IO boards and external peripherals, I could run all tests. When running the tests on the desk without the full setup, I would define in a config which hardware I have and thereby automatically filter the tests that are executed.
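
Until such a feature exists, one possible (purely illustrative) pattern is to skip tests whose hardware is not present in the environment, e.g. via a fixture that probes for an optional resource; the resource name below is only an example:

import pytest
from labgrid.exceptions import NoResourceFoundError

@pytest.fixture
def io_board(target):
    # skip instead of failing when the optional hardware is not configured
    try:
        return target.get_resource("NetworkService")
    except NoResourceFoundError:
        pytest.skip("required hardware not available in this setup")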

USBSerialPort should export IP address

When using USBSerialPort, gethostname is used as the value of "host" in the exported resource.
This assumes that the hostname of the system is defined in DNS,
or that the hostname is set to an IP address.

A way to solve this would be to add a "host" parameter to USBSerialPort

USBSerialPort:
    match:
       'ID_SERIAL_SHORT': 'P-00-00682'
    speed: 115200
    host: 172.16.2.2

However, the arguments for USBSerialPort seem to be divided between two classes:
class USBResource in labgrid/resource/udev.py
class SerialPort in labgrid/resource/base.py

It seems wrong to add a host parameter to both.

Another solution would be to look for LG_HOSTNAME in the environment before calling gethostname()
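
A sketch of that last suggestion (LG_HOSTNAME is the proposed variable name, not something labgrid currently reads):

import os
import socket

def exported_host():
    # prefer an explicitly configured hostname/IP, fall back to gethostname()
    return os.environ.get("LG_HOSTNAME", socket.gethostname())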

Can't import NetworkSerialPort - doesn't exist

Hey,

I have an issue when trying to run or start anything regarding labgrid: I get an ImportError: cannot import name 'NetworkSerialPort'.

Complete error message:

labgrid-venv/lib/python3.5/site-packages/labgrid-0.1.1.dev136+g57bc22d.d20171127-py3.5.egg/labgrid/resource/__init__.py", line 2, in <module>
    from .serialport import RawSerialPort, NetworkSerialPort
ImportError: cannot import name 'NetworkSerialPort'

Do I need to install "NetworkSerialPort" or something? When I go to the specified folder, I don't see any file called "NetworkSerialPort" either - should there be one?

Communication with CI-system

We are evaluating including labgrid in our CI system. The goal is to have a fully automated CI system.
Is there a way for labgrid to receive a notification to start setting up boards and start the test suite for the specific target after the build system (buildbot) has finished the build?

Automatically match for port in USBSerialPort

On a system with multiple USB serial ports, a configuration for a USBSerialPort with an explicit port configured matches the first available port. For example, on a system with /dev/ttyUSB0 and /dev/ttyUSB1, the following config

  USBSerialPort:
    port: /dev/ttyUSB1
    speed: 115200

will falsely match to /dev/ttyUSB0.

The correct result is achieved by explicitly configuring a match

  USBSerialPort:
    port: /dev/ttyUSB1
    speed: 115200
    match:
      'DEVNAME': '/dev/ttyUSB1'

Unfortunately, this is not intuitive for an inexperienced user.

I suggest automatically adding this match in USBSerialPort if a port is configured, something like:

class USBSerialPort(USBResource, SerialPort):
    def __attrs_post_init__(self):
        if self.port is not None:
            self.match['DEVNAME'] = self.port
        self.match['SUBSYSTEM'] = 'tty'
        super().__attrs_post_init__()

Error when starting labgrid-client: ValueError: No mandatory attributes allowed after an attribute....

After installing labgrid via pip, when I execute labgrid-client, I get the following error:

Traceback (most recent call last):
  File "/home/pi/.virtualenvs/labgrid/bin/labgrid-client", line 7, in <module>
    from labgrid.remote.client import main
  File "/home/pi/.virtualenvs/labgrid/lib/python3.5/site-packages/labgrid/__init__.py", line 1, in <module>
    from .target import Target
  File "/home/pi/.virtualenvs/labgrid/lib/python3.5/site-packages/labgrid/target.py", line 6, in <module>
    from .driver import Driver
  File "/home/pi/.virtualenvs/labgrid/lib/python3.5/site-packages/labgrid/driver/__init__.py", line 3, in <module>
    from .serialdriver import SerialDriver
  File "/home/pi/.virtualenvs/labgrid/lib/python3.5/site-packages/labgrid/driver/serialdriver.py", line 10, in <module>
    from ..resource import SerialPort, NetworkSerialPort
  File "/home/pi/.virtualenvs/labgrid/lib/python3.5/site-packages/labgrid/resource/__init__.py", line 7, in <module>
    from .udev import USBSerialPort
  File "/home/pi/.virtualenvs/labgrid/lib/python3.5/site-packages/labgrid/resource/udev.py", line 148, in <module>
    class USBSerialPort(SerialPort, USBResource):
  File "/home/pi/.virtualenvs/labgrid/lib/python3.5/site-packages/attr/_make.py", line 644, in attrs
    return wrap(maybe_cls)
  File "/home/pi/.virtualenvs/labgrid/lib/python3.5/site-packages/attr/_make.py", line 613, in wrap
    builder = _ClassBuilder(cls, these, slots, frozen, auto_attribs)
  File "/home/pi/.virtualenvs/labgrid/lib/python3.5/site-packages/attr/_make.py", line 345, in __init__
    attrs, super_attrs = _transform_attrs(cls, these, auto_attribs)
  File "/home/pi/.virtualenvs/labgrid/lib/python3.5/site-packages/attr/_make.py", line 311, in _transform_attrs
    .format(a=a)
ValueError: No mandatory attributes allowed after an attribute with a default value or factory.  Attribute in question: Attribute(name='target', default=NOTHING, validator=None, repr=True, cmp=True, hash=None, init=True, convert=None, metadata=mappingproxy({}), type=None)

OS: Raspbian, Mac OS
Python: Python 3.5, Python 3.6

BareboxDriver working with the SerialPort Driver but not with the ExternalConsoleDriver

If I start the BareboxDriver using a local SerialPort it works fine, but not when using labgrid-client console via the ExternalConsoleDriver.

   await_resources state='start'
   await_resources state='stop'
   transition state='start' args={'status': 'barebox'}
main: CYCLE the target example and press enter
     _write state='start' args={'data': b'\n'}
     _write result=2 state='stop'
     expect state='start' args={'pattern': 'barebox@[^:]+:[^ ]+ '}
     expect state='stop'
     expect state='start' args={'pattern': '[\\n]barebox 20\\d+'}
     expect result=(0, b"[   37.200223] fec 2188000.ethernet eth0: Link is Up - 1Gbps/Full - flow control rx/tx\r\n[   37.207910] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready\r\n\r\r\n ____                        _                 
  _      \r\n|  _ ` ___ _ __   __ _ _   _| |_ _ __ ___  _ __ (_)_  __\r\n| |_) / _ ` '_ ` / _` | | | | __| '__/ _ `| '_ `| ` `/ /\r\n|  __/  __/ | | | (_| | |_| | |_| | | (_) | | | | |>  < \r\n|_|   `___|_| |_|`__, |`__,_|`__|_|  `___/|_| |_$
_/_/`_`\r\n                 |___/                                  \r\n\r\n ____  _     _             _  ___ _   \r\n|  _ `(_)___| |_ _ __ ___ | |/ (_) |_ \r\n| | | | / __| __| '__/ _ `| ' /| | __|\r\n| |_| | `__ ` |_| | | (_) | . `| | |_ \r$
n|____/|_|___/`__|_|  `___/|_|`_`_|`__|\r\n\r\n\r\nOSELAS(R)-DistroKit-????.??.?-?-g365a82a5a1f4 / v7a-????.??.?-?-g365a82a5a1f4\r\nptxdist-2016.09.0/2016-12-01T10:39:49+0100\r\n\r\nDistroKit login: \r\r\n ____                        _       
            _      \r\n|  _ ` ___ _ __   __ _ _   _| |_ _ __ ___  _ __ (_)_  __\r\n| |_) / _ ` '_ ` / _` | | | | __| '__/ _ `| '_ `| ` `/ /\r\n|  __/  __/ | | | (_| | |_| | |_| | | (_) | | | | |>  < \r\n|_|   `___|_| |_|`__, |`__,_|`__|_|  `$
__/|_| |_|_/_/`_`\r\n                 |___/                                  \r\n\r\n ____  _     _             _  ___ _   \r\n|  _ `(_)___| |_ _ __ ___ | |/ (_) |_ \r\n| | | | / __| __| '__/ _ `| ' /| | __|\r\n| |_| | `__ ` |_| | | (_) | . $
| |\xfc\r\n\r", <_sre.SRE_Match object; span=(1370, 1383), match=b'\nbarebox 2015'>, b'\nbarebox 2015') state='stop'
     expect state='start' args={'pattern': ['barebox@[^:]+:[^ ]+ ', 'stop autoboot']}
     expect result=(1, b'.10.0-00026-g1099b60ef94a-dirty #67 Mon Feb 22 18:12:21 CET 2016\r\n\r\n\r\nBoard: RIoTboard Solo\r\ndetected i.MX6 Solo revision 1.1\r\r\nmdio_bus: miibus0: probed\r\nimx-usb 2184200.usb: USB EHCI 1.00\r\nimx-esdhc $
194000.usdhc: registered as 2194000.usdhc\r\nimx-esdhc 2198000.usdhc: registered as 2198000.usdhc\r\nimx-esdhc 219c000.usdhc: registered as 219c000.usdhc\r\nimx-ipuv3 2400000.ipu: IPUv3H probed\r\ncaam 2100000.caam: Instantiated RNG4 SH0\r\n$
aam 2100000.caam: Instantiated RNG4 SH1\r\ncaam_rng rng.15: registering rng-caam\r\nnetconsole: registered as netconsole-1\r\nWarning: Using dummy clocksource\r\nmalloc space: 0x2ff00000 -> 0x4fdfffff (size 511 MiB)\r\nmmc3: detected MMC car$
 version 4.65\r\nmmc3: registered mmc3.boot0\r\nmmc3: registered mmc3.boot1\r\nmmc3: registered mmc3\r\nbarebox-environment environment.16: setting default environment path to /dev/mmc3.barebox-environment\r\nrunning /env/bin/init...\r\n\r\n$
it m for menu or any other key to ', <_sre.SRE_Match object; span=(928, 941), match=b'stop autoboot'>, b'stop autoboot') state='stop'
     _write state='start' args={'data': b'\n'}
     _write result=2 state='stop'
     expect state='start' args={'pattern': 'barebox@[^:]+:[^ ]+ '}
     expect state='stop'
   transition state='stop'
   run state='start' args={'cmd': 'version'}
   run state='stop'
FAILED

SerialDriver activates the U-Boot prompt

Sometimes the SerialDriver stops U-Boot. It seems to press the "any key" and gets the U-Boot prompt.
Looking at _await_login, the comment "# TODO use step timeouts" indicates that this is a known problem.

The interruption of U-Boot is caused by _await_login calling self.console.sendline("") as the first thing.
It is of course target dependent, but my board always prints login:, so there is no need to send a newline to get it. Should the target already have passed that point, then, say, 500 ms of silence would indicate that I need to send a newline to get the current prompt, whether that is the logged-in prompt, the login prompt or a U-Boot prompt.

Thus _await_login should loop on self.console.expect and only send a newline when receiving a timeout and the serial port has been quiet during that period.
I had this behavior implemented in the Ruby code which I used before starting to use labgrid.

Furthermore, it should be able to detect the U-Boot prompt and recover from that too. But if the interruption can be prevented, this is not necessary.

agetty issue breaks first login

I'm using the SerialDriver backed by the NetworkSerialPort to connect to a target console. The test tries to login as 'root' without a password.

Most of the time the login is successful, but sometimes it runs into a timeout and the test fails. The debug output contains the following messages, indicating that the test tried to log in as 'orot' and then failed to supply a password.

DEBUG: Write bytes (b'root\n')
DEBUG: Reading 1 (min 1) bytes with 5 timeout
DEBUG: Read bytes (b'o') or timeout reached
DEBUG: Reading 16 (min 1) bytes with 4.89918851852417 timeout
DEBUG: Read bytes (b'rot\r\r\nPassword: ') or timeout reached
DEBUG: Reading 1 (min 1) bytes with 4.79821252822876 timeout
DEBUG: Read bytes (b'') or timeout reached

I would not expect the driver to reorder the characters that are sent via the serial console.

Exporting named resources is broken

The goal is to run tests on a distributed setup where an exporter provides multiple named resources of the same type that are bound to drivers on the client.

Example exporter config

board1:
  NetworkService:
    name: network-board1
    address: 1.1.1.1
    username: root
board2:
  NetworkService:
    name: network-board2
    address: 2.2.2.2
    username: root

Example environment config

targets:
  main:
    resources:
      RemotePlace:
        name: remoteboards
    drivers:
      SSHDriver:
        keyfile: 'riot'
        bindings:
          networkservice: network-board1

This setup exposes two potential errors, one at the client and one at the exporter.

  1. Client-side error
self = <labgrid.factory.TargetFactory object at 0x7fbd457450b8>
target = Target(name='main', env=Environment(config_file='remote.yaml')), resource = 'NetworkService', name = 'NetworkService'
args = {'address': '2.2.2.2', 'name': 'network-board2', 'username': 'root'}

    def make_resource(self, target, resource, name, args):
        print('Making resource {} for target {} named {} with args {}'.format(resource,target,name,args))
        assert isinstance(args, dict)
        if not resource in self.resources:
            raise InvalidConfigError("unknown resource class {}".format(resource))
        try:
            cls = self.resources[resource]
            args = filter_dict(args, cls, warn=True)
            if 'name' in args:
               print('name given by args: {} name by function {}'.format(args['name'], name))
>           r = cls(target, name, **args)
E           TypeError: __init__() got multiple values for argument 'name'

When calling make_resource, labgrid/resource/remote.py does not remove the name key from the arguments, as is done in ./labgrid/factory.py.

Here is a fix for the problem

index 6398550..7c5eabe 100644
--- a/labgrid/resource/remote.py
+++ b/labgrid/resource/remote.py
@@ -45,8 +45,10 @@ class RemotePlaceManager(ResourceManager):
         resource_entries = self.session.get_target_resources(place)
         expanded = []
         for resource_name, resource_entry in resource_entries.items():
+            args = resource_entry.args
+            name = args.pop('name', resource_name)
             new = target_factory.make_resource(
-                remote_place.target, resource_entry.cls, resource_name, resource_entry.args)
+                remote_place.target, resource_entry.cls, name, args)
             new.parent = remote_place
             new.avail = resource_entry.avail
             new.extra = resource_entry.extra
  2. Exporter-side error

The following is observed for the setup

root@9f1bafc24e84:/opt/labgrid# labgrid-client -x ws://coordinator:20408/ws resources
remote/board1/NetworkService
remote/board2/NetworkService

root@9f1bafc24e84:/opt/labgrid# labgrid-client -x ws://coordinator:20408/ws -p remoteboards create
root@9f1bafc24e84:/opt/labgrid# labgrid-client -x ws://coordinator:20408/ws -p remoteboards add-match remote/*/NetworkService
root@9f1bafc24e84:/opt/labgrid# labgrid-client -x ws://coordinator:20408/ws -p remoteboards show
Place 'remoteboards':
  aliases: 
  comment: 
  matches:
    remote/*/NetworkService
  acquired: None
  acquired resources:
  allowed: 
  created: 2018-05-25 14:08:12.618946
  changed: 2018-05-25 14:08:17.078097
Matching resource 'NetworkService' (remote/board1/NetworkService/NetworkService):
  {'acquired': None,
   'avail': True,
   'cls': 'NetworkService',
   'params': {'address': '1.1.1.1', 'name': 'network-board1', 'username': 'root'}}
Matching resource 'NetworkService' (remote/board2/NetworkService/NetworkService):
  {'acquired': None,
   'avail': True,
   'cls': 'NetworkService',
   'params': {'address': '2.2.2.2', 'name': 'network-board2', 'username': 'root'}}

We observe that the services are named remote/board1/NetworkService/NetworkService and remote/board2/NetworkService/NetworkService, while we expect remote/board1/NetworkService/network-board1 and remote/board2/NetworkService/network-board2

The client uses the same resource_name NetworkService as a key in dictionaries for both resources, which makes only one of them available when checking the environment:

root@9f1bafc24e84:/opt/labgrid# labgrid-client -x ws://coordinator:20408/ws -p remoteboards env
OrderedDict([('name', 'network-board2'), ('address', '2.2.2.2'), ('username', 'root')])
targets:
  remoteboards:
    resources:
    - NetworkService:
        name: network-board2
        address: 2.2.2.2
        username: root

resource_name is NetworkService for all resources 

and when running a test

    def get_resource(self, cls, *, name=None, await=True):
        """
            Helper function to get a resource of the target.
            Returns the first valid resource found, otherwise None.
    
            Arguments:
            cls -- resource-class to return as a resource
            name -- optional name to use as a filter
            await -- wait for the resource to become available (default True)
            """
        found = []
        other_names = []
        if type(cls) is str:
            cls = self._class_from_string(cls)
    
        for res in self.resources:
            if not isinstance(res, cls):
                continue
            if name and res.name != name:
                other_names.append(res.name)
                continue
            found.append(res)
        if len(found) == 0:
            if other_names:
                raise NoResourceFoundError(
                    "all resources matching {} found in target {} have other names: {}".format(
>                       cls, self, other_names)
                )
E               labgrid.exceptions.NoResourceFoundError: all resources matching <class 'labgrid.resource.networkservice.NetworkService'> found in target Target(name='main', env=Environment(config_file='remote.yaml')) have other names: ['network-board2']

A fix is to check if a name for the resource is provided in the arguments at the exporter

--- a/labgrid/remote/exporter.py
+++ b/labgrid/remote/exporter.py
@@ -354,6 +354,7 @@ class ExporterSession(ApplicationSession):
                     if params is None:
                         continue
                     cls = params.pop('cls', resource_name)
+                    if 'name' in params: resource_name = params['name']
                     yield from self.add_resource(
                         group_name, resource_name, cls, params
                     )

In general, the code is sometimes confusing, as the variable resource_name is often used for the type of the resource, for example NetworkService.

uboot strategy and multiple tests in pytest

Say I want to execute multiple pytest tests within the same test suite, but I have to bring the target into the U-Boot console as the first step in every test - how do I go about this?

I tried this simple example, where I need the target to be powered off/on and interrupted in the boot sequence so that, before the test is executed, I have a U-Boot prompt.

Assuming 'in_bootloader' would handle this, I could just execute the whole suite all at once: pytest --lg-env fixture.yaml test_uboot_all.py.

Currently this will fail, while running the individual tests one by one will pass both tests:
pytest --lg-env fixture.yaml test_uboot_all.py::test_uboot_1 && pytest --lg-env fixture.yaml test_uboot_all.py::test_uboot_2

How can I achieve the strategy putting my environment into a 'default' state for each test, and not just pick up where the other test left off?

test_uboot_all.py :

import pytest
import logging

from labgrid.protocol import CommandProtocol
from labgrid.strategy import UBootStrategy
from labgrid.driver import UBootDriver


@pytest.fixture()
def strategy(target):
    try:
        return target.get_driver(UBootStrategy)
    except:
        pytest.skip("strategy not found")


@pytest.fixture(scope="function")
def in_bootloader(strategy, capsys):
    with capsys.disabled():
        strategy.transition("uboot")

def test_uboot_1(target, in_bootloader):
    command = target.get_driver(UBootDriver)

    stdout, stderr, returncode = command.run('run dfu_sf')
    assert returncode == 0
    assert len(stdout) > 0
    assert len(stderr) == 0
    assert 'n25q512' in '\n'.join(stdout)

def test_uboot_2(target, in_bootloader):
    command = target.get_driver(UBootDriver)

    stdout, stderr, returncode = command.run('run dfu_mmc')
    assert returncode == 0
    assert len(stdout) > 0
    assert len(stderr) == 0
    assert 'found' in '\n'.join(stdout)
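
One possible approach (only a sketch; it assumes the strategy implements an "off" state and that the extra power cycle per test is acceptable) is to force the strategy back to a known state at the start of every test:

import pytest

@pytest.fixture(scope="function")
def in_bootloader(strategy, capsys):
    with capsys.disabled():
        # hypothetical: return to power-off first, so each test starts from
        # a fresh boot instead of reusing the previous test's state
        strategy.transition("off")
        strategy.transition("uboot")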

Missing authorized_keys causes "ShellDriver.run can not be called"

If I remove .ssh/authorized_keys on my target I get an error:

This is the full call path of the error

target = Target(name='main', env=Environment(config_file='labgrid.yaml'))

    @pytest.fixture(scope='function')
    def strategy(target):
>       shell = target.get_driver('ShellDriver')

conftest.py:6: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../labgrid/labgrid/target.py:191: in get_driver
    self.activate(found[0])
../../labgrid/labgrid/target.py:433: in activate
    client.on_activate()
../../labgrid/labgrid/driver/shelldriver.py:69: in on_activate
    self._put_ssh_key(self.keyfile)
../../labgrid/labgrid/step.py:209: in wrapper
    _result = func(*_args, **_kwargs)
../../labgrid/labgrid/driver/shelldriver.py:252: in _put_ssh_key
    self._run_check("chmod 700 ~/.ssh/")
../../labgrid/labgrid/driver/commandmixin.py:37: in _run_check
    stdout, stderr, exitcode = self.run(cmd, timeout=timeout)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = ShellDriver(target=Target(name='main', env=Environment(config_file='labgrid.yaml')), name=None, state=<BindingState.bo...800', keyfile='/userdata/labgridtools/ssh/labgrid_rsa.pub', login_timeout=120, console_ready='', await_login_timeout=2)
_args = ('chmod 700 ~/.ssh/',), _kwargs = {'timeout': 30}

    @wraps(func)
    def wrapper(self, *_args, **_kwargs):
        if self.state is not BindingState.active:
            raise StateError(
                "{} can not be called ({} is in state {})".format(
>                   func.__qualname__, self, self.state.name)
            )
E           labgrid.binding.StateError: ShellDriver.run can not be called (ShellDriver(target=Target(name='main', env=Environment(config_file='labgrid.yaml')), name=None, state=<BindingState.bound: 1>, prompt='~ # ', login_prompt=' login: ', username='root', password='****', keyfile='/userdata/labgridtools/ssh/labgrid_rsa.pub', login_timeout=120, console_ready='', await_login_timeout=2) is in state bound)

The error can be prevented in two ways:

  1. Change run to _run

    stdout, stderr, exitcode = self.run(cmd, timeout=timeout)

  2. Change the order, so state is changed before calling on_activate

    labgrid/labgrid/target.py, lines 432 to 434 in aeb049d:

    # update state
    client.on_activate()
    client.state = BindingState.active
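
For clarity, a sketch of option 2 is the reordered version of the snippet above (whether this has side effects elsewhere in the binding lifecycle is an open question):

    # update state first, so methods guarded by @Driver.check_active
    # can already be called from on_activate()
    client.state = BindingState.active
    client.on_activate()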
